The Pythonic way to grow a list of lists | 39,716,492 | <p>I have a large file (2GB) of categorical data (mostly "Nan"--but populated here and there with actual values) that is too large to read into a single data frame. I had a rather difficult time coming up with an object to store all the unique values for each column (which is my goal--eventually I need to factorize this for modeling).</p>
<p>What I ended up doing was reading the file in chunks into a dataframe and then getting the unique values of each column and storing them in a list of lists. My solution works, but it seemed most un-pythonic--is there a cleaner way to accomplish this in Python (ver. 3.5)? I do know the number of columns (~2100).</p>
<pre><code>import pandas as pd
#large file of csv separated text data
data=pd.read_csv("./myratherlargefile.csv",chunksize=100000, dtype=str)
collist=[]
master=[]
i=0
initialize=0
for chunk in data:
    #so the first time through I have to make the "master" list
    if initialize==0:
        for col in chunk:
            #thinking about this, i should have just dropped this col
            if col=='Id':
                continue
            else:
                #use pd.unique as a built-in solution to get unique values
                collist=chunk[col][chunk[col].notnull()].unique().tolist()
                master.append(collist)
                i=i+1
    #but after first loop just append to the master-list at
    #each master-list element
    if initialize==1:
        for col in chunk:
            if col=='Id':
                continue
            else:
                collist=chunk[col][chunk[col].notnull()].unique().tolist()
                for item in collist:
                    master[i]=master[i]+collist
                i=i+1
    initialize=1
    i=0
</code></pre>
<p>After that, my final task for all the unique values is as follows:</p>
<pre><code>i=0
names=chunk.columns.tolist()
for item in master:
    master[i]=list(set(item))
    master[i]=master[i].append(names[i+1])
    i=i+1
</code></pre>
<p>Thus master[i] gives me the column name and then a list of unique values--crude, but it does work--my main concern is building the list in a "better" way, if possible.</p>
| 8 | 2016-09-27T05:18:07Z | 39,716,772 | <p>I would suggest using a <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict"><code>collections.defaultdict(set)</code></a> instead of a <code>list</code> of <code>list</code>s.</p>
<p>Say you start with</p>
<pre><code>uniques = collections.defaultdict(set)
</code></pre>
<p>Now the loop can become something like this:</p>
<pre><code>for chunk in data:
    for col in chunk:
        uniques[col] = uniques[col].union(chunk[col].unique())
</code></pre>
<p>Note that:</p>
<ol>
<li><p><code>defaultdict</code> always has a <code>set</code> for <code>uniques[col]</code> (that's what it's there for), so you can skip <code>initialized</code> and stuff.</p></li>
<li><p>For a given <code>col</code>, you simply update the entry with the union of the current set (which initially is empty, but it doesn't matter) and the new unique elements.</p></li>
</ol>
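<p>For concreteness, here is a minimal, self-contained sketch of this approach (the two small DataFrames below are made-up stand-ins for the chunks that <code>pd.read_csv(..., chunksize=...)</code> would yield):</p>

```python
import collections

import pandas as pd

# Hypothetical chunks standing in for pd.read_csv(..., chunksize=...).
chunks = [
    pd.DataFrame({"color": ["red", "blue", None], "size": ["S", "M", "S"]}),
    pd.DataFrame({"color": ["blue", "green", "red"], "size": [None, "L", "M"]}),
]

uniques = collections.defaultdict(set)
for chunk in chunks:
    for col in chunk:
        # dropna() mirrors the notnull() filtering in the question.
        uniques[col].update(chunk[col].dropna().unique())

print(dict(uniques))
```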
<p><strong>Edit</strong></p>
<p>As Raymond Hettinger notes (thanks!), it is better to use</p>
<pre><code> uniques[col].update(chunk[col].unique())
</code></pre>
| 8 | 2016-09-27T05:40:05Z | [
"python",
"list",
"nested-lists"
]
|
How to use subprocess.Popen with built-in command on Windows | 39,716,557 | <p>In my old python script, I use the following code to show the result of a Windows cmd command:</p>
<pre><code>print(os.popen("dir c:\\").read())
</code></pre>
<p>As the python 2.7 documentation says, <code>os.popen</code> is obsolete and <code>subprocess</code> is recommended. I followed the documentation as:</p>
<pre><code>result = subprocess.Popen("dir c:\\").stdout
</code></pre>
<p>And I got error message:</p>
<pre><code>WindowsError: [Error 2] The system cannot find the file specified
</code></pre>
<p>Can you tell me the correct way to use the <code>subprocess</code> module?</p>
| 1 | 2016-09-27T05:23:07Z | 39,717,215 | <p>You should call <code>subprocess.Popen</code> with <code>shell=True</code>, as below:</p>
<pre><code>import subprocess
result = subprocess.Popen("dir c:", shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print (output)
</code></pre>
<p><a href="https://pymotw.com/2/subprocess/" rel="nofollow">More info on subprocess module.</a></p>
| 1 | 2016-09-27T06:11:33Z | [
"python",
"windows",
"python-2.7",
"command-line",
"subprocess"
]
|
How to use subprocess.Popen with built-in command on Windows | 39,716,557 | <p>In my old python script, I use the following code to show the result of a Windows cmd command:</p>
<pre><code>print(os.popen("dir c:\\").read())
</code></pre>
<p>As the python 2.7 documentation says, <code>os.popen</code> is obsolete and <code>subprocess</code> is recommended. I followed the documentation as:</p>
<pre><code>result = subprocess.Popen("dir c:\\").stdout
</code></pre>
<p>And I got error message:</p>
<pre><code>WindowsError: [Error 2] The system cannot find the file specified
</code></pre>
<p>Can you tell me the correct way to use the <code>subprocess</code> module?</p>
| 1 | 2016-09-27T05:23:07Z | 39,717,239 | <p>Try:</p>
<pre><code># "dir" is a cmd built-in, not an executable, so run it via cmd /c (or use shell=True)
p = subprocess.Popen(["cmd", "/c", "dir", "c:\\"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outputs = p.communicate()
</code></pre>
<p>Note that <code>communicate()</code> returns a tuple <code>(stdoutdata, stderrdata)</code>, therefore <code>outputs[0]</code> is the <code>stdout</code> messages and <code>outputs[1]</code> is the <code>stderr</code> messages.</p>
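<p>As a portable illustration of the same <code>communicate()</code> pattern (an invented example, not from the original answer: it runs a small Python child process instead of <code>dir</code>, so it works on any OS):</p>

```python
import subprocess
import sys

# Run a trivial child process and capture both output streams.
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
outputs = p.communicate()
print(outputs[0])  # stdout bytes
print(outputs[1])  # stderr bytes
```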
| 0 | 2016-09-27T06:12:49Z | [
"python",
"windows",
"python-2.7",
"command-line",
"subprocess"
]
|
Oops, try again. Your function failed on the message yes. It returned 'yes' when it should have returned 'Shutting down' | 39,716,763 | <p>Here is my python practice program:</p>
<pre><code>def shut_down(s):
    return s
    if yes():
        shut_down("Shutting down")
    elif no():
        shut_down("Shutdown aborted")
    else:
        shut_down("Sorry")
</code></pre>
<p>The questions given to me are:</p>
<ol>
<li>if the shut_down function receives an s equal to "yes", it should return "Shutting down"</li>
<li>elif s is equal to "no", then the function should return "Shutdown aborted".</li>
<li>if shut_down gets anything other than those inputs, the function should return "Sorry"</li>
</ol>
| -10 | 2016-09-27T05:39:39Z | 39,716,950 | <p>As others pointed out in the comments, your return should be at the logical end of your function.</p>
<pre><code>def shut_down(s):
    if s == "yes":
        r = "Shutting down"
    elif s == "no":
        r = "Shutdown aborted"
    else:
        r = "Sorry"
    return r
</code></pre>
| -2 | 2016-09-27T05:53:38Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Oops, try again. Your function failed on the message yes. It returned 'yes' when it should have returned 'Shutting down' | 39,716,763 | <p>Here is my python practice program:</p>
<pre><code>def shut_down(s):
    return s
    if yes():
        shut_down("Shutting down")
    elif no():
        shut_down("Shutdown aborted")
    else:
        shut_down("Sorry")
</code></pre>
<p>The questions given to me are:</p>
<ol>
<li>if the shut_down function receives an s equal to "yes", it should return "Shutting down"</li>
<li>elif s is equal to "no", then the function should return "Shutdown aborted".</li>
<li>if shut_down gets anything other than those inputs, the function should return "Sorry"</li>
</ol>
| -10 | 2016-09-27T05:39:39Z | 39,717,132 | <p>A couple of points:</p>
<ul>
<li>Misplaced return statement. It should be at the end.</li>
<li><code>if yes():</code> is wrong. You want to compare the function input with "yes". It should be <code>if s == 'yes':</code>. The same goes for the rest.</li>
<li>Since you have written the function definition as <code>def shut_down(s):</code>, it is expecting one argument. You should pass one argument while calling this function, as <code>shut_down("yes")</code>.</li>
<li>Once you have called the function, since it has a return statement it will return some value, which you should catch, as in <code>ret = shut_down("yes")</code>.</li>
</ul>
<pre><code>def shut_down(s):
    if s == "yes":
        r = "Shutting down"
    elif s == "no":
        r = "Shutdown aborted"
    else:
        r = "Sorry"
    return r

ret = shut_down("yes")
print (ret)
</code></pre>
| 0 | 2016-09-27T06:06:00Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Using Makefile bash to save the contents of a python file | 39,717,071 | <p>For those who are curious as to why I'm doing this: I need specific files in a tar ball - no more, no less. I have to write unit tests for <code>make check</code>, but since I'm constrained to having "no more" files, I have to write the check within the <code>make check</code>. In this way, I have to write bash (but I don't want to).</p>
<p>I dislike using bash for unit testing (sorry to all those who like bash; I just dislike it so much that I would rather go with an extremely hacky approach than write many lines of bash code), so I wrote a python file. I later learned that I have to use bash because of some unknown strict rule. I figured that there was a way to cache the entire content of the python file into a single string in the bash file, so I could take the string literal in bash, write it to a python file, and then execute it.</p>
<p>I tried the following attempt (in the following script and result, I used another python file that's not unit_test.py, so don't worry if it doesn't actually look like a unit test):</p>
<h2>toStr.py:</h2>
<pre class="lang-py prettyprint-override"><code>import re
with open("unit_test.py", 'r+') as f:
    s = f.read()

s = s.replace("\n", "\\n")
print(s)
</code></pre>
<p>And then I piped the results out using:</p>
<pre><code>python toStr.py > temp.txt
</code></pre>
<p>It looked something like:</p>
<p><code>#!/usr/bin/env python\n\nimport os\nimport sys\n\n#create number of bytes as specified in the args:\nif len(sys.argv) != 3:\n print("We need a correct number of args : 2 [NUM_BYTES][FILE_NAME].")\n exit(1)\nn = -1\ntry:\n n = int(sys.argv[1])\nexcept:\n print("Error casting number : " + sys.argv[1])\n exit(1)\n\nrand_string = os.urandom(n)\n\nwith open(sys.argv[2], 'wb+') as f:\n f.write(rand_string)\n f.flush()\n f.close()\n\n</code></p>
<p>I tried taking this as a string literal, echoing it into a new file, and seeing whether I could run it as a python file, but it failed.</p>
<p><code>echo '{insert that giant string above here}' > new_unit_test.py</code></p>
<p>I wanted to take this statement above and copy it into my "bash unit test" file so I can just execute the python file within the bash script.</p>
<p>The resulting file looked exactly like {insert giant string here}. What am I doing wrong in my attempt? Are there other, much easier ways to hold a python file as a string literal in a bash script?</p>
| 2 | 2016-09-27T06:02:16Z | 39,717,262 | <p>The easiest way is to use only double-quotes in your python code; then, in your bash script, wrap all of your python code in one pair of single-quotes, e.g.,</p>
<pre><code>#!/bin/bash
python -c 'import os
import sys

# create number of bytes as specified in the args:
if len(sys.argv) != 3:
    print("We need a correct number of args : 2 [NUM_BYTES][FILE_NAME].")
    exit(1)

n = -1
try:
    n = int(sys.argv[1])
except:
    print("Error casting number : " + sys.argv[1])
    exit(1)

rand_string = os.urandom(n)

# i changed ""s to ''s below -webb
with open(sys.argv[2], "wb+") as f:
    f.write(rand_string)
    f.flush()
    f.close()'
</code></pre>
| 2 | 2016-09-27T06:14:22Z | [
"python",
"bash",
"shell",
"makefile"
]
|
docx to list in python | 39,717,217 | <p>I am trying to read a docx file and add the text to a list.
Now I need the list to contain the lines from the docx file.</p>
<p>example:</p>
<p>docx file:</p>
<pre><code>"Hello, my name is blabla,
I am 30 years old.
I have two kids."
</code></pre>
<p>result:</p>
<pre><code>['Hello, my name is blabla', 'I am 30 years old', 'I have two kids']
</code></pre>
<p>I can't get it to work.</p>
<p>Using the <code>docx2txt</code> module from here:
<a href="https://github.com/ankushshah89/python-docx2txt" rel="nofollow">github link</a></p>
<p>There is only one command, <code>process</code>, and it returns all the text from the docx file.</p>
<p>Also I would like it to keep the special characters like <code>":\-\.\,"</code></p>
| 0 | 2016-09-27T06:11:38Z | 39,717,637 | <p>The <strong>docx2txt</strong> module reads a docx file and converts it to text format.</p>
<p>You need to split the above output using <code>splitlines()</code> and store it in a list.</p>
<p><strong>Code (Comments inline) :</strong></p>
<pre><code>import docx2txt
text = docx2txt.process("a.docx")

#Prints output after converting
print ("After converting text is ",text)

content = []
for line in text.splitlines():
    #This will ignore empty/blank lines.
    if line != '':
        #Append to list
        content.append(line)

print (content)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py
After converting text is
Hello, my name is blabla.
I am 30 years old.
I have two kids.
List is ['Hello, my name is blabla.', 'I am 30 years old. ', 'I have two kids.']
C:\Users\dinesh_pundkar\Desktop>
</code></pre>
| 2 | 2016-09-27T06:36:21Z | [
"python",
"python-2.7"
]
|
How are ModelFields assigned in Django Models? | 39,717,356 | <p>When we define a model in django we write something like:</p>
<pre><code>class Student(models.Model):
    name = models.CharField(max_length=64)
    age = models.IntegerField()
    ...
</code></pre>
<p>where, <code>name = models.CharField()</code> implies that <code>name</code> would be an <strong>object</strong> of <code>models.CharField</code>. When we have to make an object of student we simple do..</p>
<pre><code>my_name = "John Doe"
my_age = 18
s = Student.objects.create(name=my_name, age=my_age)
</code></pre>
<p>where <code>my_name</code> and <code>my_age</code> are <code>string</code> and <code>integer</code> data types respectively, and not an object of <code>models.CharField</code>/<code>models.IntegerField</code>. Although while assigning the values the respective validations are performed (like checking on the <code>max_length</code> for <code>CharField</code>)</p>
<p>I'm trying to build similar models for an abstraction of Neo4j over Django, but I am not able to get this workflow working. How can I implement this?</p>
<p>Found a similar <a href="http://stackoverflow.com/questions/12006267/how-do-django-models-work">question</a> but didn't find it helpful enough.</p>
| 1 | 2016-09-27T06:20:20Z | 39,718,936 | <h1>How things work</h1>
<p>The first thing we need to understand is that each field on your model has its own validation; <a href="https://github.com/django/django/blob/master/django/db/models/fields/__init__.py#L1040" rel="nofollow">this one</a> refers to the CharField (<code>_check_max_length_attribute</code>), and it also calls <code>super</code> on the <code>check</code> method from the <code>Field</code> class to validate some basic common things.</p>
<p>With that in mind, we now move to the <code>create</code> method, which is a much more complicated and totally different thing. The basic operations for a specific object:</p>
<ol>
<li>Create a python object</li>
<li>Call <code>save()</code></li>
<li>Using a lot of <code>getattr</code>s the save does tons of validation</li>
<li>Commit to the DB, if anything wrong goes from the DB, raise it to the user</li>
</ol>
<p>The third thing you need to understand is that when you query an object, it first gets the data from the db, and then (after a long process) it sets the data on the object.</p>
<h1>Simple Example</h1>
<pre><code>class BasicCharField:
    def __init__(self, max_len):
        self.max_len = max_len

    def validate(self, value):
        if value > self.max_len:
            raise ValueError('the value must be lower than {}'.format(self.max_len))


class BasicModel:
    score = BasicCharField(max_len=4)

    @staticmethod
    def create(**kwargs):
        obj = BasicModel()
        obj.score = kwargs['score']
        obj.save()
        return obj

    def save(self):
        # Lots of validations here
        BasicModel.score.validate(self.score)
        # DB commit here


BasicModel.create(score=5)
</code></pre>
<p>And as we were expecting:</p>
<p><code>>>> ValueError: the value must be lower than 4</code></p>
<p>Obviously I had to simplify things to make them fit into a few lines of code. You can improve this a lot (like iterating over the attributes instead of hardcoding them like <code>obj.score = ...</code>).</p>
| 1 | 2016-09-27T07:44:40Z | [
"python",
"django",
"oop",
"django-models"
]
|
Adding items to empty pandas DataFrame | 39,717,407 | <p>I want to dynamically extend an empty pandas DataFrame in the following way:</p>
<pre><code>df=pd.DataFrame()
indices=['A','B','C']
colums=['C1','C2','C3']
for colum in colums:
    for index in indices:
        #df[index,column] = anyValue
</code></pre>
<p>Where both indices and colums can have arbitrary sizes which are not known in advance, i.e. I cannot create a DataFrame with the correct size in advance.</p>
<p>Which pandas function can I use for </p>
<pre><code>#df[index,column] = anyValue
</code></pre>
<p>?</p>
| 4 | 2016-09-27T06:23:55Z | 39,717,446 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>:</p>
<pre><code>df = pd.DataFrame()
df.loc[0,1] = 10
df.loc[2,8] = 100
print(df)

      1      8
0  10.0    NaN
2   NaN  100.0
</code></pre>
<p>Faster solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_value.html" rel="nofollow"><code>DataFrame.set_value</code></a>:</p>
<pre><code>df = pd.DataFrame()
indices = ['A', 'B', 'C']
columns = ['C1', 'C2', 'C3']
for column in columns:
    for index in indices:
        df.set_value(index, column, 1)

print(df)

    C1   C2   C3
A  1.0  1.0  1.0
B  1.0  1.0  1.0
C  1.0  1.0  1.0
</code></pre>
| 3 | 2016-09-27T06:26:06Z | [
"python",
"pandas"
]
|
Adding items to empty pandas DataFrame | 39,717,407 | <p>I want to dynamically extend an empty pandas DataFrame in the following way:</p>
<pre><code>df=pd.DataFrame()
indices=['A','B','C']
colums=['C1','C2','C3']
for colum in colums:
    for index in indices:
        #df[index,column] = anyValue
</code></pre>
<p>Where both indices and colums can have arbitrary sizes which are not known in advance, i.e. I cannot create a DataFrame with the correct size in advance.</p>
<p>Which pandas function can I use for </p>
<pre><code>#df[index,column] = anyValue
</code></pre>
<p>?</p>
| 4 | 2016-09-27T06:23:55Z | 39,717,498 | <p><code>loc</code> works very well, but...<br>
For single assignments use <code>at</code></p>
<pre><code>df = pd.DataFrame()
indices = ['A', 'B', 'C']
columns = ['C1', 'C2', 'C3']
for column in columns:
    for index in indices:
        df.at[index, column] = 1

df
</code></pre>
<p><a href="http://i.stack.imgur.com/MHVHZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/MHVHZ.png" alt="enter image description here"></a></p>
<hr>
<h1><code>.at</code> vs <code>.loc</code> vs <code>.set_value</code> timing</h1>
<p><a href="http://i.stack.imgur.com/wuE0T.png" rel="nofollow"><img src="http://i.stack.imgur.com/wuE0T.png" alt="enter image description here"></a></p>
| 3 | 2016-09-27T06:28:49Z | [
"python",
"pandas"
]
|
Tableau download/export images using Rest api python | 39,717,464 | <p>I need to download an image from the tableau server using a python script. The Tableau Rest API doesn't provide any option to do so. I'd like to know the proper way of downloading a high-resolution/full-size image from the tableau server using python or any other server scripting language.</p>
| 0 | 2016-09-27T06:27:03Z | 39,732,617 | <p>The simplest approach is to issue an HTTP GET request from Python to your Tableau Server and append a format string to the URL such as ".png" or ".pdf".</p>
<p>There are size options you can experiment with as well -- press the Share button to see the syntax.</p>
<p>You can also pass filter settings in the URL as query parameters</p>
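<p>As a rough sketch of the URL-based approach (the server URL and view path below are hypothetical placeholders, and the actual GET is commented out because it needs a live server and credentials):</p>

```python
# Build the image URL; appending ".png" (or ".pdf") to a view URL
# asks Tableau Server for a rendered export.
server = "https://tableau.example.com"       # hypothetical server URL
view_path = "views/MyWorkbook/MyDashboard"   # hypothetical workbook/view

url = "{}/{}.png".format(server, view_path)
print(url)

# import requests
# response = requests.get(url, params={"Region": "West"})  # filters as query params
# with open("dashboard.png", "wb") as f:
#     f.write(response.content)
```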
| 1 | 2016-09-27T19:15:42Z | [
"python",
"tableau",
"tableau-online"
]
|
redirecting python -m CGIHTTPserver 8080 to /dev/null | 39,717,471 | <p>The standard > /dev/null and >> /dev/null don't work when a computer sends a GET to the task, e.g.:</p>
<p><code>pi@raspberrypi:~/server $ python -m CGIHTTPServer 8080 &</code></p>
<p>results in </p>
<p>192.168.0.109 - - [26/Sep/2016 23:14:48] "GET /cgi-bin/DS1822remote.py HTTP/1.1" 200 -</p>
<p>As I've put the python app into the background with the '&', I'd also like to see the GET requests vanish.</p>
<p>How do I do this or is it even possible?</p>
| 0 | 2016-09-27T06:27:30Z | 39,717,571 | <p>You need to redirect stderr, not stdout; best to redirect both with:</p>
<pre><code> python -m CGIHTTPServer 8080 > /dev/null 2>&1 &
</code></pre>
<p>If you still want to expose stdout (to see the start-up message, for example), use:</p>
<pre><code> python -m CGIHTTPServer 8080 2> /dev/null &
</code></pre>
| 0 | 2016-09-27T06:33:07Z | [
"python",
"cgihttpserver"
]
|
How to add additional fields to embedded documents without changing the original model | 39,717,659 | <p>I want to add extra properties to a document before embedding it into another document, but I don't know how to do that.</p>
<p>Here's my code and what I have tried so far:</p>
<pre class="lang-py prettyprint-override"><code>from mongoengine import *
from datetime import datetime

class User(Document):
    name = StringField(max_length=80, required=True)
    created_at = DateTimeField(default=datetime.now(), required=True)
    updated_at = DateTimeField(default=datetime.now(), required=True)

    meta = {
        'collection': 'users'
    }

    def save(self, *args, **kwargs):
        self.updated_at = datetime.now()
        return super(User, self).save(*args, **kwargs)

class Stream(Document):
    users = EmbeddedDocumentListField(document_type='User')
    created_at = DateTimeField(default=datetime.now(), required=True)
    updated_at = DateTimeField(default=datetime.now(), required=True)

    meta = {
        'collection': 'streams'
    }

    def save(self, *args, **kwargs):
        self.updated_at = datetime.now()
        return super(Stream, self).save(*args, **kwargs)
</code></pre>
<p>When I embed the <code>user document</code> into the stream's users <code>EmbeddedDocumentListField</code>, it will be added and look like this:</p>
<pre class="lang-py prettyprint-override"><code>{
    "_id" : ObjectId("57e6123fe8c39b18b1a13431"),
    "users" : [
        {
            "_id" : ObjectId("57e6123fe8c39b18b1a13432"),
            "name": "Rohit Khatri",
            "created_at" : ISODate("2016-09-24T11:12:23.182Z"),
            "updated_at" : ISODate("2016-09-24T11:12:23.301Z")
        }
    ],
    "created_at" : ISODate("2016-09-24T11:12:23.189Z"),
    "updated_at" : ISODate("2016-09-24T11:12:23.323Z")
}
</code></pre>
<p>Now I want to embed the User document with additional properties, like roles. Here's what I have tried:</p>
<pre class="lang-py prettyprint-override"><code>user = User.objects.create(name='Rohit Khatri')
user.roles = ['admin','writer']
stream = Stream.objects.create()
stream.users.append(user)
stream.save()
</code></pre>
<p>But it doesn't add the roles field. I would be thankful if somebody could help me with this.</p>
<p>Thanks</p>
| 1 | 2016-09-27T06:37:33Z | 39,717,838 | <p>Use the <code>update</code> method to set the new attribute:</p>
<pre><code>user.update(set__roles = ['admin','writer'])
</code></pre>
| 0 | 2016-09-27T06:47:52Z | [
"python",
"django",
"mongoengine",
"embedded-documents"
]
|
Insert list into cells which meet column conditions | 39,717,809 | <p>Consider <code>df</code></p>
<pre><code>   A  B  C
0  3  2  1
1  4  2  3
2  1  4  1
3  2  2  3
</code></pre>
<p>I want to add another column <code>"D"</code> such that D contains different Lists based on conditions on <code>"A"</code>, <code>"B"</code> and <code>"C"</code></p>
<pre><code>   A  B  C      D
0  3  2  1  [1,0]
1  4  2  3  [1,0]
2  1  4  1  [0,2]
3  2  2  3  [2,0]
</code></pre>
<p>My code snippet looks like:</p>
<pre><code>df['D'] = 0
df['D'] = df['D'].astype(object)
df.loc[(df['A'] > 1) & (df['B'] > 1), "D"] = [1,0]
df.loc[(df['A'] == 1) , "D"] = [0,2]
df.loc[(df['A'] == 2) & (df['C'] != 0) , "D"] = [2,0]
</code></pre>
<p>When I try to run this code it throws the following error:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I have converted the column into <code>Object</code> type as suggested <a href="http://stackoverflow.com/questions/26483254/python-pandas-insert-list-into-a-cell">here</a>, but I still get the error.</p>
<p>What I can infer is that pandas is trying to iterate over the elements of the list and assign each of those values to the cells, whereas I am trying to assign the entire list to all the cells meeting the criterion.</p>
<p>Is there any way I can assign lists in the above fashion?</p>
| 4 | 2016-09-27T06:46:12Z | 39,718,151 | <p>Here's a goofy way to do it</p>
<pre><code>cond1 = df.A.gt(1) & df.B.gt(1)
cond2 = df.A.eq(1)
cond3 = df.A.eq(2) & df.C.ne(0)

df['D'] = cond3.map({True: [2, 0]}) \
    .combine_first(cond2.map({True: [0, 2]})) \
    .combine_first(cond1.map({True: [1, 0]}))

df
</code></pre>
<p><a href="http://i.stack.imgur.com/f5SLl.png" rel="nofollow"><img src="http://i.stack.imgur.com/f5SLl.png" alt="enter image description here"></a></p>
| 4 | 2016-09-27T07:05:18Z | [
"python",
"pandas"
]
|
Insert list into cells which meet column conditions | 39,717,809 | <p>Consider <code>df</code></p>
<pre><code>   A  B  C
0  3  2  1
1  4  2  3
2  1  4  1
3  2  2  3
</code></pre>
<p>I want to add another column <code>"D"</code> such that D contains different Lists based on conditions on <code>"A"</code>, <code>"B"</code> and <code>"C"</code></p>
<pre><code>   A  B  C      D
0  3  2  1  [1,0]
1  4  2  3  [1,0]
2  1  4  1  [0,2]
3  2  2  3  [2,0]
</code></pre>
<p>My code snippet looks like:</p>
<pre><code>df['D'] = 0
df['D'] = df['D'].astype(object)
df.loc[(df['A'] > 1) & (df['B'] > 1), "D"] = [1,0]
df.loc[(df['A'] == 1) , "D"] = [0,2]
df.loc[(df['A'] == 2) & (df['C'] != 0) , "D"] = [2,0]
</code></pre>
<p>When I try to run this code it throws the following error:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I have converted the column into <code>Object</code> type as suggested <a href="http://stackoverflow.com/questions/26483254/python-pandas-insert-list-into-a-cell">here</a>, but I still get the error.</p>
<p>What I can infer is that pandas is trying to iterate over the elements of the list and assign each of those values to the cells, whereas I am trying to assign the entire list to all the cells meeting the criterion.</p>
<p>Is there any way I can assign lists in the above fashion?</p>
| 4 | 2016-09-27T06:46:12Z | 39,719,580 | <p>Another solution is to create a <code>Series</code> filled with <code>list</code>s, using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html" rel="nofollow"><code>shape</code></a> to generate the <code>length</code> of <code>df</code>:</p>
<pre><code>df.loc[(df['A'] > 1) & (df['B'] > 1), "D"] = pd.Series([[1,0]]*df.shape[0])
df.loc[(df['A'] == 1) , "D"] = pd.Series([[0,2]]*df.shape[0])
df.loc[(df['A'] == 2) & (df['C'] != 0) , "D"] = pd.Series([[2,0]]*df.shape[0])
print (df)

   A  B  C       D
0  3  2  1  [1, 0]
1  4  2  3  [1, 0]
2  1  4  1  [0, 2]
3  2  2  3  [2, 0]
</code></pre>
| 3 | 2016-09-27T08:18:27Z | [
"python",
"pandas"
]
|
Insert list into cells which meet column conditions | 39,717,809 | <p>Consider <code>df</code></p>
<pre><code>   A  B  C
0  3  2  1
1  4  2  3
2  1  4  1
3  2  2  3
</code></pre>
<p>I want to add another column <code>"D"</code> such that D contains different Lists based on conditions on <code>"A"</code>, <code>"B"</code> and <code>"C"</code></p>
<pre><code>   A  B  C      D
0  3  2  1  [1,0]
1  4  2  3  [1,0]
2  1  4  1  [0,2]
3  2  2  3  [2,0]
</code></pre>
<p>My code snippet looks like:</p>
<pre><code>df['D'] = 0
df['D'] = df['D'].astype(object)
df.loc[(df['A'] > 1) & (df['B'] > 1), "D"] = [1,0]
df.loc[(df['A'] == 1) , "D"] = [0,2]
df.loc[(df['A'] == 2) & (df['C'] != 0) , "D"] = [2,0]
</code></pre>
<p>When I try to run this code it throws the following error:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I have converted the column into <code>Object</code> type as suggested <a href="http://stackoverflow.com/questions/26483254/python-pandas-insert-list-into-a-cell">here</a>, but I still get the error.</p>
<p>What I can infer is that pandas is trying to iterate over the elements of the list and assign each of those values to the cells, whereas I am trying to assign the entire list to all the cells meeting the criterion.</p>
<p>Is there any way I can assign lists in the above fashion?</p>
| 4 | 2016-09-27T06:46:12Z | 39,720,329 | <p><strong>Disclaimer</strong>: This is my own question.</p>
<p>Both the answers provided by <a href="http://stackoverflow.com/users/2901002/jezrael">jezrael</a> and <a href="http://stackoverflow.com/users/2336654/pirsquared">piRSquared</a> work.</p>
<p>I just wanted to add another way of doing it, albeit slightly different from the requirement I posted in the question. Instead of trying to insert a <code>list</code>, you can convert the <code>list</code> into a <code>string</code> and later access it by typecasting. </p>
<pre><code>df.loc[(df['A'] > 1) & (df['B'] > 1), "D"] = '[1,0]'
df.loc[(df['A'] == 1) , "D"] = '[0,2]'
df.loc[(df['A'] == 2) & (df['C'] != 0) , "D"] = '[2,0]'
</code></pre>
<p>This may not be applicable to everyone's use, but I can definitely think of situations where this would suffice.</p>
| 0 | 2016-09-27T08:56:39Z | [
"python",
"pandas"
]
|
Django ModelForm not saving data that is added to request.POST or through form.save(commit=False) | 39,718,015 | <p>I have a django form that is submitted to the view that displays it, and I want to add some data to it before saving, but it doesn't seem to be working.</p>
<p><strong>models.py</strong>:</p>
<pre><code>from django.contrib.auth.models import User
from django.db import models

class Profile(models.Model):
    user = models.OneToOneField(User)
    display_name = models.CharField(max_length=145, blank=True, null=True)
    bio = models.CharField(max_length=1000, blank=True, null=True)

class Module(models.Model):
    name = models.CharField(max_length=45)
    semester = models.CharField(max_length=40)
    prof = models.ForeignKey('Profile', null=True, blank=True)
</code></pre>
<p><strong>forms.py</strong>:</p>
<pre><code>class ModuleForm(ModelForm):
    class Meta:
        model = Module
        fields = ['name', 'semester']
</code></pre>
<p><strong>views.py</strong>:</p>
<p>I have tried adding <code>prof</code> to <code>request.POST</code> before passing it to <code>moduleform</code>, but it saves all the other data submitted from the <em>HTML</em> except <code>prof</code>.</p>
<pre><code>prof = Profile.objects.get(user=request.user)
if request.method == 'POST':
    mtb = request.POST._mutable
    request.POST._mutable = True
    request.POST['prof'] = str(prof.id)
    request.POST._mutable = mtb
    moduleform = ModuleForm(request.POST)
    moduleform.save()
</code></pre>
<p>I have also tried saving with <code>commit=False</code> but <code>prof</code> is still not getting added.</p>
<pre><code>moduleform.save(commit=False)
moduleform.prof = prof
moduleform.save()
</code></pre>
| 1 | 2016-09-27T06:58:02Z | 39,718,288 | <p>See this example from <a href="https://docs.djangoproject.com/es/1.10/topics/forms/modelforms/#the-save-method" rel="nofollow">django's official documentation</a>:</p>
<pre><code>form = PartialAuthorForm(request.POST)
author = form.save(commit=False)
author.title = 'Mr'
author.save()
</code></pre>
<p>In your case (untested):</p>
<pre><code>if request.method == 'POST':
    form = ModuleForm(request.POST)
    object = form.save(commit=False)
    object.prof = Profile.objects.get(user=request.user)
    object.save()
</code></pre>
<p><strong>EDIT:</strong>
To clarify your comment on my answer: </p>
<pre><code>moduleform.save(commit=False)
moduleform.prof = prof
moduleform.save()
</code></pre>
<p>This does not work because the form will never save <code>prof</code> on the instance, since it is not part of the <code>fields</code>. This is why you HAVE TO set <code>prof</code> at the model level.</p>
| 1 | 2016-09-27T07:11:47Z | [
"python",
"django",
"django-forms"
]
|
Using numbers in a list to access items in a dictionary in python | 39,718,095 | <p>I have the items:</p>
<pre><code>my_list = [18, 15, 22, 22, 25, 10, 7, 25, 2, 22, 14, 10, 27]
</code></pre>
<p>in a list, and I would like to use these to access items inside my dictionary by using a 'for loop'; I will then append these items to a new list. My dictionary looks like this:</p>
<pre><code>my_dict = {1:"A", 2:"B", 3:"C" ... 26:"Z", 27:"_"}
</code></pre>
<p>How would I go about doing this? I have tried loops like this:</p>
<pre><code>for i in my_list:
my_newlist = []
letters = number_value[keys.index(i)]
letters.append(my_newlist)
</code></pre>
<p>However I am getting errors in doing so. Cheers.</p>
<p>In another area of my code I have the reverse function, in which I get the key by searching for the item with:</p>
<pre><code>for i in my_list:
third_list = []
num_val = keys[values.index(i)] + key_shift
third_list.append(num_val)
print(num_val)
</code></pre>
<p>It's being used in a substitution cipher program. The sample above works: it looks in the dictionary for the key which corresponds to "A" and stores the key in a list.</p>
<p>Encrypter is now working. You can find a version <a href="https://github.com/Bluestormy/Python/blob/master/Python%20Encrypter.py" rel="nofollow">here</a></p>
| 1 | 2016-09-27T07:02:30Z | 39,718,181 | <p>You can simply use a <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> where you implicitly loop through your list of numbers <code>my_list</code> and retrieve the value from your dict <code>my_dict</code> using the respective number as a key:</p>
<pre><code>my_dict = {i: chr(i+64) for i in range(27)}
my_list = [1, 5, 19]
result = [my_dict[i] for i in my_list]
print(result)
</code></pre>
<p>yields</p>
<pre><code>['A', 'E', 'S']
</code></pre>
<p>Make sure to check for <code>KeyError</code>, because your list <code>my_list</code> may contain numbers that are not present in your dict <code>my_dict</code>. If you want to avoid explicit exception handling (and silently omit missing keys), you can do something like this:</p>
<pre><code>result = [my_dict[i] for i in my_list if i in my_dict]
print(result)
</code></pre>
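<p>If you would rather substitute an explicit placeholder for missing keys instead of silently dropping them, <code>dict.get</code> with a default is a compact alternative (the <code>'?'</code> placeholder below is just an illustrative choice):</p>

```python
my_dict = {i: chr(i + 64) for i in range(1, 27)}  # 1..26 -> 'A'..'Z'
my_dict[27] = '_'
my_list = [1, 5, 19, 99]  # 99 has no entry in my_dict

# dict.get returns the fallback instead of raising KeyError
result = [my_dict.get(i, '?') for i in my_list]
print(result)
# ['A', 'E', 'S', '?']
```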
| 1 | 2016-09-27T07:07:05Z | [
"python",
"list",
"dictionary"
]
|
Using numbers in a list to access items in a dictionary in python | 39,718,095 | <p>I have the items:</p>
<pre><code>my_list = [18, 15, 22, 22, 25, 10, 7, 25, 2, 22, 14, 10, 27]
</code></pre>
<p>in a list, and I would like to use these to access items inside my dictionary by using a 'for loop'; I will then append these items to a new list. My dictionary looks like this:</p>
<pre><code>my_dict = {1:"A", 2:"B", 3:"C" ... 26:"Z", 27:"_"}
</code></pre>
<p>How would I go about doing this? I have tried loops like this:</p>
<pre><code>for i in my_list:
my_newlist = []
letters = number_value[keys.index(i)]
letters.append(my_newlist)
</code></pre>
<p>However I am getting errors in doing so. Cheers.</p>
<p>In another area of my code I have the reverse function, in which I get the key by searching for the item with:</p>
<pre><code>for i in my_list:
third_list = []
num_val = keys[values.index(i)] + key_shift
third_list.append(num_val)
print(num_val)
</code></pre>
<p>It's being used in a substitution cipher program. The sample above works: it looks in the dictionary for the key which corresponds to "A" and stores the key in a list.</p>
<p>Encrypter is now working. You can find a version <a href="https://github.com/Bluestormy/Python/blob/master/Python%20Encrypter.py" rel="nofollow">here</a></p>
| 1 | 2016-09-27T07:02:30Z | 39,718,529 | <p>Your code snippet does not seem to be complete, because you are using the variable <code>number_value</code>, which is not defined in your example.
EDIT: You are still using variables that are not defined in the code snippet you provided, so I can't reproduce your error. In addition, you are overwriting your <code>third_list</code> and <code>my_newlist</code> on every pass through the loop.</p>
<p>I assume that the keys in <code>my_list</code> are also present in <code>my_dict</code> : If not do as @jbndlr told you and check for <code>KeyError</code>.</p>
<pre><code>my_list = [1, 2, 3, 26, 27]
my_dict = {1:"A", 2:"B", 3:"C" , 26:"Z", 27:"_"}
my_newlist = []
for i in my_list:
letters = my_dict[i]
my_newlist.append(letters)
print(my_newlist)
</code></pre>
<p>This yields the items. If you are looking for something else, let me know so I can adjust the answer.</p>
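<p>If some numbers might be missing from <code>my_dict</code>, wrapping the lookup in <code>try</code>/<code>except KeyError</code> (as mentioned above) keeps the loop going; a small sketch:</p>

```python
my_dict = {1: "A", 2: "B", 3: "C", 26: "Z", 27: "_"}
my_list = [1, 2, 99, 27]  # 99 is deliberately not a key

my_newlist = []
for i in my_list:
    try:
        my_newlist.append(my_dict[i])
    except KeyError:
        pass  # skip numbers that have no letter
print(my_newlist)
# ['A', 'B', '_']
```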
| 1 | 2016-09-27T07:23:37Z | [
"python",
"list",
"dictionary"
]
|
Using numbers in a list to access items in a dictionary in python | 39,718,095 | <p>I have the items:</p>
<pre><code>my_list = [18, 15, 22, 22, 25, 10, 7, 25, 2, 22, 14, 10, 27]
</code></pre>
<p>in a list, and I would like to use these to access items inside my dictionary by using a 'for loop'; I will then append these items to a new list. My dictionary looks like this:</p>
<pre><code>my_dict = {1:"A", 2:"B", 3:"C" ... 26:"Z", 27:"_"}
</code></pre>
<p>How would I go about doing this? I have tried loops like this:</p>
<pre><code>for i in my_list:
my_newlist = []
letters = number_value[keys.index(i)]
letters.append(my_newlist)
</code></pre>
<p>However I am getting errors in doing so. Cheers.</p>
<p>In another area of my code I have the reverse function, in which I get the key by searching for the item with:</p>
<pre><code>for i in my_list:
third_list = []
num_val = keys[values.index(i)] + key_shift
third_list.append(num_val)
print(num_val)
</code></pre>
<p>It's being used in a substitution cipher program. The sample above works: it looks in the dictionary for the key which corresponds to "A" and stores the key in a list.</p>
<p>Encrypter is now working. You can find a version <a href="https://github.com/Bluestormy/Python/blob/master/Python%20Encrypter.py" rel="nofollow">here</a></p>
| 1 | 2016-09-27T07:02:30Z | 39,727,537 | <p>Others have pointed out the inconsistencies in your code sample, so blithely ignoring those :)</p>
<pre><code>my_list = [18, 15, 22, 22, 25, 10, 7, 25, 2, 22, 14, 10, 27]
my_dict = {0: '@', 1: 'A', 2: 'B', 3: 'C', 4: 'D', 5: 'E', 6: 'F', 7: 'G', 8: 'H', 9: 'I', 10: 'J', 11: 'K', 12: 'L', 13: 'M', 14: 'N', 15: 'O', 16: 'P', 17: 'Q', 18: 'R', 19: 'S', 20: 'T', 21: 'U', 22: 'V', 23: 'W', 24: 'X', 25: 'Y', 26: 'Z', 27: '_'}
my_newlist = []
for i in my_list:
letters = my_dict[i]
my_newlist.append(letters)
my_newlist
['R', 'O', 'V', 'V', 'Y', 'J', 'G', 'Y', 'B', 'V', 'N', 'J', '_']
#Now reverse the operation to build third_list
third_list = []
for i in my_newlist:
    key = list(my_dict.keys())[list(my_dict.values()).index(i)]
third_list.append(key)
third_list
[18, 15, 22, 22, 25, 10, 7, 25, 2, 22, 14, 10, 27]
</code></pre>
| 0 | 2016-09-27T14:36:36Z | [
"python",
"list",
"dictionary"
]
|
Setting to zero of elements of a numpy array different from a value (python3) : a[a <> 5] = 0 | 39,718,129 | <p>In python 2, the following code works:</p>
<pre><code>a = np.array([[1,5],[2,3]])
print a
print()
a[a<2] = 0
print a
a[a <> 5] = 0
print a
</code></pre>
<p>But in python3, it triggers a syntax error:</p>
<pre><code>a[a <> 5] = 0
File "<ipython-input-14-165e29d9f8e4>", line 1
a[a <> 5] = 0
^
SyntaxError: invalid syntax
</code></pre>
| -1 | 2016-09-27T07:04:09Z | 39,718,162 | <p>The correct syntax for "not equal to" is now <code>a[a != 5] = 0</code></p>
<p>(Yet another instance of a backward compatibility break in Python 3).</p>
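<p>Applied to the array from the question, a quick sanity check of the new spelling looks like this (just a sketch reproducing the original snippet with <code>!=</code>):</p>

```python
import numpy as np

a = np.array([[1, 5], [2, 3]])
a[a < 2] = 0    # zero out everything below 2
a[a != 5] = 0   # Python 3 spelling of the old `a[a <> 5] = 0`
print(a)
# [[0 5]
#  [0 0]]
```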
| 3 | 2016-09-27T07:05:49Z | [
"python",
"arrays",
"python-3.x",
"numpy"
]
|
Setting to zero of elements of a numpy array different from a value (python3) : a[a <> 5] = 0 | 39,718,129 | <p>In python 2, the following code works:</p>
<pre><code>a = np.array([[1,5],[2,3]])
print a
print()
a[a<2] = 0
print a
a[a <> 5] = 0
print a
</code></pre>
<p>But in python3, it triggers a syntax error:</p>
<pre><code>a[a <> 5] = 0
File "<ipython-input-14-165e29d9f8e4>", line 1
a[a <> 5] = 0
^
SyntaxError: invalid syntax
</code></pre>
| -1 | 2016-09-27T07:04:09Z | 39,718,179 | <p>In Python 3, <code><></code> was replaced by <code>!=</code>. It is similar to how <code>print</code> was changed from a statement to a function. See <a href="https://docs.python.org/2/library/stdtypes.html#comparisons" rel="nofollow">Comparisons</a> in the Docs:</p>
<blockquote>
<p><code>!=</code> can also be written <code><></code>, but this is an obsolete usage kept for backwards compatibility only. New code should always use <code>!=</code>.</p>
</blockquote>
<p>P.s: You can be quite sneaky and do:</p>
<pre><code>from __future__ import barry_as_FLUFL
</code></pre>
<p>which allows <code><></code> and makes <code>!=</code> a SyntaxError, but really don't, just use <code>!=</code>. </p>
| 1 | 2016-09-27T07:06:49Z | [
"python",
"arrays",
"python-3.x",
"numpy"
]
|
Panda how to groupby rows into different time buckets? | 39,718,157 | <p>I have a dataframe with a datetime-type column called timestamp. I want to split the dataframe into several dataframes based on the time part of timestamp: each dataframe should contain the rows that share the same value modulo x minutes, where x is a variable.</p>
<p>Notice that <code>e</code> and <code>f</code> are not in original order. With modulo 10 minutes, I want all times that end in <code>3</code> together, all times that end in <code>1</code> together, so on and so forth.</p>
<p>Group when x = 10</p>
<pre><code> timestampe text
0 2016-08-11 12:01:00 a
1 2016-08-13 11:11:00 b
2 2016-08-09 11:13:00 c
3 2016-08-05 11:33:00 d
4 2016-08-19 11:27:00 e
5 2016-08-21 11:43:00 f
</code></pre>
<p>into </p>
<pre><code> timestampe text
0 2016-08-11 12:01:00 a
1 2016-08-13 11:11:00 b
0 2016-08-09 11:13:00 c
1 2016-08-05 11:33:00 d
2 2016-08-21 11:43:00 f
0 2016-08-19 11:27:00 e
</code></pre>
| -1 | 2016-09-27T07:05:36Z | 39,728,612 | <p>Your main tools will be <code>df.timestampe.dt.minute % 10</code> and <code>groupby</code>.<br>
I used an <code>apply(pd.DataFrame.reset_index)</code> just as a convenience to illustrate</p>
<pre><code>df.groupby(df.timestampe.dt.minute % 10).apply(pd.DataFrame.reset_index)
</code></pre>
<p><a href="http://i.stack.imgur.com/3saqd.png" rel="nofollow"><img src="http://i.stack.imgur.com/3saqd.png" alt="enter image description here"></a></p>
<hr>
<p>Just using the <code>groupby</code> could be advantageous as well</p>
<pre><code>for name, group in df.groupby(df.timestampe.dt.minute % 10):
    print()
print(name)
print(group)
1
timestampe text
0 2016-08-11 12:01:00 a
1 2016-08-13 11:11:00 b
3
timestampe text
2 2016-08-09 11:13:00 c
3 2016-08-05 11:33:00 d
5 2016-08-21 11:43:00 f
7
timestampe text
4 2016-08-19 11:27:00 e
</code></pre>
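<p>If the goal is a separate dataframe per bucket (as the question describes), collecting the groups into a dict is a small extension of the same idea; a sketch using the question's data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'timestampe': pd.to_datetime([
        '2016-08-11 12:01:00', '2016-08-13 11:11:00', '2016-08-09 11:13:00',
        '2016-08-05 11:33:00', '2016-08-19 11:27:00', '2016-08-21 11:43:00']),
    'text': list('abcdef'),
})

x = 10  # bucket size in minutes
buckets = {name: group for name, group in df.groupby(df.timestampe.dt.minute % x)}
print(sorted(buckets))           # bucket keys present: [1, 3, 7]
print(buckets[3].text.tolist())  # rows whose minute ends in 3: ['c', 'd', 'f']
```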
| 2 | 2016-09-27T15:27:13Z | [
"python",
"pandas",
"numpy",
"scipy"
]
|
Python: gb2312 codec can't decode bytes | 39,718,209 | <p>I have an encoded-word string from a received mail. When parsing the encoded word in Python 3, I get an exception </p>
<blockquote>
<p>'gb2312' codec can't decode bytes in position 18-19: illegal multibyte
sequence</p>
</blockquote>
<p>raised from <em>make_header</em> method.</p>
<pre><code>from email.header import decode_header, make_header
hdr = decode_header("""=?gb2312?B?QSBWIM34IMXMILP2IMrbICAgqEMgs8kgyMsg?=""")
make_header(hdr)
</code></pre>
<p>Parsing the encoded string in online tools works without problems (<a href="http://dogmamix.com/MimeHeadersDecoder/" rel="nofollow">http://dogmamix.com/MimeHeadersDecoder/</a>).
Any suggestions on what I am doing wrong? Thanks</p>
| 0 | 2016-09-27T07:08:34Z | 39,718,996 | <p>The error message tells you that the bytes <em>in position 18-19</em> are not valid for this encoding.</p>
<p><code>decode_header</code> simply extracts a bunch of bytes and an encoding. <code>make_header</code> actually attempts to interpret those bytes in that encoding, and fails, because these bytes are not valid in that encoding.</p>
<p>Similarly,</p>
<pre><code>bash$ base64 -D <<<'QSBWIM34IMXMILP2IMrbICAgqEMgs8kgyMsg' |
> iconv -f gb2312 -t utf-8
A V ç½ ç åº å®
iconv: (stdin):1:18: cannot convert
</code></pre>
<p>So the error message simply tells you that this data is not valid. We cannot tell without more information what the data should be, and neither can Python or your program do that.</p>
<p>For a rough parable, you can g??ss which b?t?s are m?ss?ng here, but not in ?h?? l?ng?? s???e???.</p>
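<p>If you just need <em>something</em> decoded rather than an exception, one pragmatic workaround is to decode the extracted bytes yourself with <code>errors='replace'</code> (or <code>'ignore'</code>), accepting that the bad bytes become replacement characters. A minimal sketch on the subject line from the question:</p>

```python
import base64

raw = base64.b64decode('QSBWIM34IMXMILP2IMrbICAgqEMgs8kgyMsg')
text = raw.decode('gb2312', errors='replace')
print(text)  # the valid parts decode; the illegal sequence becomes U+FFFD
```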
| 1 | 2016-09-27T07:47:52Z | [
"python",
"encoding",
"gb2312"
]
|
Flask logout if sessions expires if no activity and redirect for login page | 39,718,259 | <p>I'm very new to Flask and am trying to update a website with Flask where users have accounts and are able to log in. I want to make the user session expire and log the user out if there is no activity for more than 10 minutes, and then redirect the user to the login page.</p>
<p>I want to update it in @app.before_request, and below is my code. How do I update it? Please suggest a way to
check the login time and, if there has been no activity, log the user out.</p>
<pre><code>@app.before_request
def look_for_user(user=None):
g.usr = {}
g.api = False
if user:
g.usr = user
if 'user_id' in session:
        g.usr = get_user(session['user_id'])  # from db
if not g.usr:
g.usr = {}
if not g.usr:
if request.url_rule:
if request.url_rule.rule not in app.config['LOGIN_NOT_REQUIRED']:
session['postlogin_landing_page'] = request.path
if g.api:
return jsonify(error=True, error_message='Invalid Login/Token')
else:
return redirect(app.config['LOGIN_URL'])
elif 'login_page' in session and request.url_rule:
if request.url_rule.rule not in app.config:
landing_page = session.pop('login_page')
return redirect(landing_page)
</code></pre>
| 0 | 2016-09-27T07:10:44Z | 39,720,888 | <p>You can use <code>permanent_session_lifetime</code> and the <code>session.modified</code> flag as described in <a href="http://stackoverflow.com/questions/19760486/resetting-the-expiration-time-for-a-cookie-in-flask/19795394">this question</a>.</p>
<p>Note that sessions are not permanent by default, and need to be activated with <code>session.permanent = True</code>, as described in <a href="http://stackoverflow.com/a/11785722/246534">this answer</a>.</p>
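<p>The core of an idle-timeout check is just comparing "now" against a last-activity timestamp kept in the session; a framework-free sketch of that logic (here a plain dict stands in for <code>flask.session</code>):</p>

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=10)

def is_expired(session, now):
    """True if the last recorded activity is older than IDLE_LIMIT."""
    last = session.get('last_activity')
    return last is not None and now - last > IDLE_LIMIT

session = {'user_id': 42, 'last_activity': datetime(2016, 9, 27, 7, 0)}
print(is_expired(session, datetime(2016, 9, 27, 7, 5)))   # False: 5 min idle
print(is_expired(session, datetime(2016, 9, 27, 7, 30)))  # True: 30 min idle
session['last_activity'] = datetime(2016, 9, 27, 7, 30)   # "touch" on each request
```

<p>In a real view you would run this inside <code>@app.before_request</code>: if it reports expiry, clear the session and redirect to the login page; otherwise refresh <code>last_activity</code>.</p>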
| 0 | 2016-09-27T09:22:40Z | [
"python",
"redirect",
"flask",
"logout",
"expired-sessions"
]
|
Django Zinnia blog error for entries | 39,718,261 | <p>I am trying to fix an issue with the Zinnia blog application: it pops up an error related to the author aggregation and entries. Following is the code, in case someone has a suggestion:</p>
<pre><code>def get_authors(context, template='zinnia/tags/authors.html'):
"""
Return the published authors.
"""
return {'template': template,
'authors': Author.published.all().annotate(
count_entries_published=Count('entries')),
'context_author': context.get('author')}
</code></pre>
<p>I don't think the code above is causing any issue, but below is the code that might be causing all the fuss, in the second-to-last line.</p>
<pre><code>def get_queryset(self):
"""
Return a queryset containing published entries.
"""
now = timezone.now()
return super(
EntryRelatedPublishedManager, self).get_queryset().filter(
models.Q(entries__start_publication__lte=now) |
models.Q(entries__start_publication=None),
models.Q(entries__end_publication__gt=now) |
models.Q(entries__end_publication=None),
entries__status=PUBLISHED,
entries__sites=Site.objects.get_current()
).distinct()
</code></pre>
<p>Please suggest a solution, because overall Zinnia is an impressive app, but with this one error it's almost useless. The owner of the repo already has plenty of users with the same complaint. The error is the same with both Python 2 and Python 3.</p>
<p>For generous coders willing to work on this, repo is located at <a href="https://github.com/Fantomas42/django-blog-zinnia/issues" rel="nofollow">https://github.com/Fantomas42/django-blog-zinnia/issues</a>.</p>
<pre><code>Traceback:
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\core\handlers\base.py" in _legacy_get_response
249. response = self._get_response(request)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\core\handlers\base.py" in _get_response
217. response = self.process_exception_by_middleware(e, request)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\core\handlers\base.py" in _get_response
215. response = response.render()
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\response.py" in render
109. self.content = self.rendered_content
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\response.py" in rendered_content
86. content = template.render(context, self._request)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\backends\django.py" in render
66. return self.template.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
208. return self._render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\loader_tags.py" in render
174. return compiled_parent._render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\loader_tags.py" in render
174. return compiled_parent._render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\loader_tags.py" in render
174. return compiled_parent._render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in _render
199. return self.nodelist.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\loader_tags.py" in render
70. result = block.nodelist.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render
994. bit = node.render_annotated(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\base.py" in render_annotated
961. return self.render(context)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\template\library.py" in render
225. _dict = self.func(*resolved_args, **resolved_kwargs)
File "C:\Users\Shazia\zinblog\lib\site-packages\django_blog_zinnia-0.18-py2.7.egg\zinnia\templatetags\zinnia.py" in get_authors
85. 'authors': Author.published.all().annotate(
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\manager.py" in all
160. return self.get_queryset()
File "C:\Users\Shazia\zinblog\lib\site-packages\django_blog_zinnia-0.18-py2.7.egg\zinnia\managers.py" in get_queryset
107. entries__sites=Site.objects.get_current()
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\query.py" in filter
796. return self._filter_or_exclude(False, *args, **kwargs)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\query.py" in _filter_or_exclude
814. clone.query.add_q(Q(*args, **kwargs))
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in add_q
1227. clause, _ = self._add_q(q_object, self.used_aliases)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in _add_q
1247. current_negated, allow_joins, split_subq)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in _add_q
1253. allow_joins=allow_joins, split_subq=split_subq,
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in build_filter
1133. lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in solve_lookup_type
1019. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "C:\Users\Shazia\zinblog\lib\site-packages\django-1.10.1-py2.7.egg\django\db\models\sql\query.py" in names_to_path
1327. "Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /weblog/
Exception Value: Cannot resolve keyword 'entries' into field. Choices are: comment_comments, comment_flags, date_joined, email, first_name, groups, id, is_active, is_staff, is_superuser, last_login, last_name, logentry, password, user_permissions, username
</code></pre>
| 0 | 2016-09-27T07:10:49Z | 39,720,567 | <p>Try registering <code>django_comments</code> after <code>zinnia</code> in the <code>INSTALLED_APPS</code> setting, not before:</p>
<pre><code>INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.admin',
'django.contrib.sites',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.contenttypes',
'mptt',
'tagging',
'zinnia',
'django_comments',
)
</code></pre>
<p>Got this solution from the owner of the repo. Check the issue on GitHub.</p>
| 2 | 2016-09-27T09:07:36Z | [
"python",
"django"
]
|
Python : Basic countdown timer & function() > int() | 39,718,271 | <p>I'm trying to create a timer function that runs in the background of my code and make it so I can use/check the time. What I mean by use/check is that I'm trying to make it so I can call upon that timer function and use it as an integer.</p>
<p>This is the code I currently have:</p>
<pre><code>def timer():
for endtime in range(0, 15):
print(15 - endtime)
time.sleep(1)
def hall():
timer()
while (timer > 0):
do something
</code></pre>
<p>Currently I am only using <code>print(15 - endtime)</code> as confirmation that it is counting down.</p>
<p>But what the code does now is execute the countdown and that's it, it never touches the while loop. And of course the last issue is I can't set a function to an int. So I'm looking for some way where I can check where the timer is at and use it in that while loop.</p>
| 0 | 2016-09-27T07:11:10Z | 39,718,467 | <pre><code>import time

def timer(tim):
    time.sleep(1)
    print(tim)

def hall():
    tim = 15
    while tim > 0:
        print('do something')
        timer(tim)
        tim -= 1
</code></pre>
<p>Not the cleanest solution, but it will do what you need.</p>
| 0 | 2016-09-27T07:20:18Z | [
"python",
"timer"
]
|
Python : Basic countdown timer & function() > int() | 39,718,271 | <p>I'm trying to create a timer function that runs in the background of my code and make it so I can use/check the time. What I mean by use/check is that I'm trying to make it so I can call upon that timer function and use it as an integer.</p>
<p>This is the code I currently have:</p>
<pre><code>def timer():
for endtime in range(0, 15):
print(15 - endtime)
time.sleep(1)
def hall():
timer()
while (timer > 0):
do something
</code></pre>
<p>Currently I am only using <code>print(15 - endtime)</code> as confirmation that it is counting down.</p>
<p>But what the code does now is execute the countdown and that's it, it never touches the while loop. And of course the last issue is I can't set a function to an int. So I'm looking for some way where I can check where the timer is at and use it in that while loop.</p>
| 0 | 2016-09-27T07:11:10Z | 39,718,470 | <p>The way you are doing it, you would have to use multithreading.</p>
<p>Here is another, simpler approach:
at the beginning of your script, set a <code>time_start</code> variable to the number of seconds since the epoch using <code>time.time()</code>.
Then, when you need the number of elapsed seconds, use <code>time.time() - time_start</code>:</p>
<pre><code>import time

t_start = time.time()
# do whatever you'd like
t_current = int(time.time()-t_start) # this way you get the number of seconds elapsed since start.
</code></pre>
<p>You can put that in a function as well, defining t_start as a global variable.</p>
<pre><code>import time
t_start = time.time()
def timer():
global t_start
print(str(int(time.time()-t_start)))
print('start')
time.sleep(2)
timer()
time.sleep(3)
timer()
</code></pre>
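<p>A closure avoids the global entirely and gives you a callable "elapsed seconds" counter you can pass around; a sketch:</p>

```python
import time

def make_timer():
    t_start = time.time()
    def elapsed():
        # whole seconds since make_timer() was called
        return int(time.time() - t_start)
    return elapsed

timer = make_timer()
# ... do work ...
print(timer())  # 0 immediately; grows by 1 each second
```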
| 1 | 2016-09-27T07:20:26Z | [
"python",
"timer"
]
|
Python : Basic countdown timer & function() > int() | 39,718,271 | <p>I'm trying to create a timer function that runs in the background of my code and make it so I can use/check the time. What I mean by use/check is that I'm trying to make it so I can call upon that timer function and use it as an integer.</p>
<p>This is the code I currently have:</p>
<pre><code>def timer():
for endtime in range(0, 15):
print(15 - endtime)
time.sleep(1)
def hall():
timer()
while (timer > 0):
do something
</code></pre>
<p>Currently I am only using <code>print(15 - endtime)</code> as confirmation that it is counting down.</p>
<p>But what the code does now is execute the countdown and that's it, it never touches the while loop. And of course the last issue is I can't set a function to an int. So I'm looking for some way where I can check where the timer is at and use it in that while loop.</p>
| 0 | 2016-09-27T07:11:10Z | 39,718,645 | <p>The problem with your code is that when you run <code>hall()</code>, Python first executes the whole of <code>timer()</code> (i.e. the whole <code>for</code> loop), and then moves on with the rest of the code (it can only do one thing at a time). Thus, by the time it reaches the <code>while</code> loop in <code>hall()</code>, <code>timer</code> is already <code>0</code>.</p>
<p>So, you're going to have to do something about that <code>timer</code> so that it counts down once, and then it moves on to the <code>do something</code> part.</p>
<p>Something that you can do is this:</p>
<pre class="lang-python prettyprint-override"><code>import time

def hall():
for a in range(0, 15):
print(15 - a)
# do something
time.sleep(1)
</code></pre>
<p>This should work just fine (if you're only executing <code>hall</code> 15 times), and condenses your code to just one function.</p>
| 0 | 2016-09-27T07:28:59Z | [
"python",
"timer"
]
|
BadDataError when converting .DBF to .csv | 39,718,413 | <p>I am trying to convert a .DBF file to .csv using Python 3. I am trying to use the dbf library (<a href="https://pypi.python.org/pypi/dbf" rel="nofollow">https://pypi.python.org/pypi/dbf</a>)</p>
<pre><code>import dbf
def dbf_to_csv(dbf_file_name, csv_file_name):
dbf_file = dbf.Table(dbf_file_name, ignore_memos=True)
dbf_file.open()
dbf.export(dbf_file, filename = csv_file_name, format='csv', header=True)
</code></pre>
<p>The DBF file I am using can be opened in Excel and appears to be fine. However, when I run the above method I get an error on the dbf.export line above:</p>
<pre><code>dbf.ver_33.BadDataError: record data is not the correct length (should be 1442, not 1438)
</code></pre>
<p>The DBF file opens fine in Excel; however, I need to automate this conversion. What should I be doing differently to get this method to create a PDF from a .DBF file?</p>
| 0 | 2016-09-27T07:17:32Z | 39,719,803 | <p>If the file is opening correctly in Excel, then I suggest you use Excel to do the conversion to <code>csv</code> format for you, as follows:</p>
<pre><code>import win32com.client as win32
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Open(r"input.dbf")
excel.DisplayAlerts = False
wb.DoNotPromptForConvert = True
wb.CheckCompatibility = False
wb.SaveAs(r"output.csv", FileFormat=6, ConflictResolution=2)
excel.Application.Quit()
</code></pre>
<p>Do not forget to add full paths to the required files. Note, <code>FileFormat=6</code> tells Excel to save the file in <code>CSV</code> format.</p>
<p>To export the workbook as a PDF, you could use:</p>
<pre><code>wb.ExportAsFixedFormat(0, r"output.pdf")
</code></pre>
<p>If you do not already have <code>win32com</code> installed you should be able to use the following:</p>
<pre><code>pip install pypiwin32
</code></pre>
<p>This solution is suitable for Windows installations only.</p>
| 0 | 2016-09-27T08:30:41Z | [
"python",
"python-3.x",
"dbf"
]
|
Python list to django javascript as text | 39,718,574 | <p>Hello, my generated Python output list is:</p>
<pre><code>l = ["one","two","there"]
</code></pre>
<p>I am passing that to a Django HTML template as {{list}}.</p>
<p>In the HTML it is shown as:</p>
<pre><code>[&#39;one&#39;,&#39;two&#39;,&#39;three&#39;]
</code></pre>
<p>which I can't use in my JavaScript. How do I pass this correctly? I even tried json_dumps, like</p>
<pre><code>l = json_dumps (["one","two","there"])
</code></pre>
<p>but it just shows as the following in the HTML:</p>
<pre><code>[&quote;one&quote;,&quote;two&quote;,&quote;three&quote;]
</code></pre>
| 1 | 2016-09-27T07:25:36Z | 39,719,157 | <p>There are two steps here. Firstly, you need to make the view send valid JSON; you've done this with <code>json_dumps</code>.</p>
<p>Secondly, you need to ensure that the template outputs it without escaping. You do that by marking it as safe, with <code>{{ data|safe }}</code> (assuming your data is in a variable called <code>data</code>).</p>
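<p>You can see both steps without running a full Django view, since <code>html.escape</code> approximates what Django's auto-escaping does to the string (a sketch; Django additionally escapes a couple of extra characters such as single quotes):</p>

```python
import json
from html import escape

data = json.dumps(["one", "two", "three"])
print(escape(data))  # roughly what {{ data }} renders: escaped, useless to JS
print(data)          # what {{ data|safe }} renders: ["one", "two", "three"]
```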
| 1 | 2016-09-27T07:56:10Z | [
"javascript",
"python",
"django",
"python-3.x"
]
|
Convert a byte array to single bits in a array [Python 3.5] | 39,718,576 | <p>I am looking for an operation which converts my byte array:</p>
<pre><code>mem = b'\x01\x02\xff'
</code></pre>
<p>in something like this:</p>
<pre><code>[ [0 0 0 0 0 0 0 1]
[0 0 0 0 0 0 1 0]
[1 1 1 1 1 1 1 1] ]
</code></pre>
<p>These are operations that I tried:</p>
<pre><code>import numpy as np
mem = b'\x01\x02\xff' #define my input
mem = np.fromstring(mem, dtype=np.uint8) #first convert to int
#print(mem) give me "[ 1 2 255]" at this piont
mem = np.array(['{0:08b}'.format(mem[b]) for b in mem]) #now convert to bin
data= np.array([list(mem[b]) for b in mem]) #finally convert to single bits
print(data)
</code></pre>
<p>This code crashes at line 4: <code>IndexError: index 255 is out of bounds for axis 0 with size 9</code>.
Otherwise, it crashes at line 5: <code>IndexError: too many indices for array</code>.</p>
<p><strong>These are my Questions:</strong></p>
<p>Why is the number of spaces different after the conversion from hex to int?</p>
<p>Is that the reason that my next conversion from int to bin failed?</p>
<p>Finally, what is wrong with my <code>list</code> operation?</p>
<p>Thank you for your help! :)</p>
| 2 | 2016-09-27T07:25:40Z | 39,718,863 | <p>To solve IndexError you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndindex.html#numpy.ndindex" rel="nofollow"><code>numpy.ndindex</code></a>:</p>
<pre><code>import numpy as np
mem = b'\x01\x02\xff' #define my input
mem = np.fromstring(mem, dtype=np.uint8) #first convert to int
#print(mem) give me "[ 1 2 255]" at this piont
mem = np.array(['{0:08b}'.format(mem[b]) for b in np.ndindex(mem.shape)])  # 08b pads each byte to 8 bits
data = np.array([list(mem[b]) for b in np.ndindex(mem.shape)]) #finally convert to single bits
print(data)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[['0' '0' '0' '0' '0' '0' '0' '1']
 ['0' '0' '0' '0' '0' '0' '1' '0']
 ['1' '1' '1' '1' '1' '1' '1' '1']]
</code></pre>
| 0 | 2016-09-27T07:40:40Z | [
"python",
"arrays",
"hex",
"bit"
]
|
Convert a byte array to single bits in an array [Python 3.5] | 39,718,576 | <p>I am looking for an operation which converts my byte array:</p>
<pre><code>mem = b'\x01\x02\xff'
</code></pre>
<p>in something like this:</p>
<pre><code>[ [0 0 0 0 0 0 0 1]
[0 0 0 0 0 0 1 0]
[1 1 1 1 1 1 1 1] ]
</code></pre>
<p>These are operations that I tried:</p>
<pre><code>import numpy as np
mem = b'\x01\x02\xff' #define my input
mem = np.fromstring(mem, dtype=np.uint8) #first convert to int
#print(mem) give me "[ 1 2 255]" at this piont
mem = np.array(['{0:08b}'.format(mem[b]) for b in mem]) #now convert to bin
data= np.array([list(mem[b]) for b in mem]) #finally convert to single bits
print(data)
</code></pre>
<p>This code will crash at line 4: <code>IndexError: index 255 is out of bounds for axis 0 with size 9</code>.
Otherwise, it crashes at line 5: <code>IndexError: too many indices for array</code>.</p>
<p><strong>These are my questions:</strong></p>
<p>Why is the number of spaces different after the conversion from hex to int?</p>
<p>Is that the reason that my next conversion from int to bin failed?</p>
<p>Finally, what is wrong with my <code>list</code> operation?</p>
<p>Thank you for your help! :)</p>
| 2 | 2016-09-27T07:25:40Z | 39,718,913 | <p>Using unpackbits:</p>
<pre><code>>>> import numpy as np
>>> mem = b'\x01\x02\xff'
>>> x = np.fromstring(mem, dtype=np.uint8)
>>> np.unpackbits(x).reshape(3,8)
array([[0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1]], dtype=uint8)
</code></pre>
<h3>Documentation</h3>
<p>From <code>help(np.unpackbits)</code>:</p>
<blockquote>
<p><strong>unpackbits</strong>(...)<br>
unpackbits(myarray, axis=None)</p>
<p>Unpacks elements of a uint8 array into a binary-valued output array.</p>
<p>Each element of <code>myarray</code> represents a bit-field that should be unpacked
into a binary-valued output array. The shape of the output array is either
1-D (if <code>axis</code> is None) or the same shape as the input array with unpacking
done along the axis specified.</p>
</blockquote>
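<p>As the docstring notes, passing <code>axis</code> lets <code>unpackbits</code> produce the 2-D result directly. A small variation on the snippet above, using <code>np.frombuffer</code> (the non-deprecated spelling of <code>fromstring</code>):</p>

```python
import numpy as np

mem = b'\x01\x02\xff'
x = np.frombuffer(mem, dtype=np.uint8)

# One row per byte, unpacked along axis 1 -> shape (3, 8), no manual reshape.
bits = np.unpackbits(x.reshape(-1, 1), axis=1)
print(bits)
```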
| 2 | 2016-09-27T07:43:23Z | [
"python",
"arrays",
"hex",
"bit"
]
|
Convert a byte array to single bits in an array [Python 3.5] | 39,718,576 | <p>I am looking for an operation which converts my byte array:</p>
<pre><code>mem = b'\x01\x02\xff'
</code></pre>
<p>in something like this:</p>
<pre><code>[ [0 0 0 0 0 0 0 1]
[0 0 0 0 0 0 1 0]
[1 1 1 1 1 1 1 1] ]
</code></pre>
<p>These are operations that I tried:</p>
<pre><code>import numpy as np
mem = b'\x01\x02\xff' #define my input
mem = np.fromstring(mem, dtype=np.uint8) #first convert to int
#print(mem) give me "[ 1 2 255]" at this piont
mem = np.array(['{0:08b}'.format(mem[b]) for b in mem]) #now convert to bin
data= np.array([list(mem[b]) for b in mem]) #finally convert to single bits
print(data)
</code></pre>
<p>This code will crash at line 4: <code>IndexError: index 255 is out of bounds for axis 0 with size 9</code>.
Otherwise, it crashes at line 5: <code>IndexError: too many indices for array</code>.</p>
<p><strong>These are my questions:</strong></p>
<p>Why is the number of spaces different after the conversion from hex to int?</p>
<p>Is that the reason that my next conversion from int to bin failed?</p>
<p>Finally, what is wrong with my <code>list</code> operation?</p>
<p>Thank you for your help! :)</p>
| 2 | 2016-09-27T07:25:40Z | 39,718,934 | <p>I'm fairly certain the problem with your code is that you're assuming the <code>int</code> in each item in the list will become 8 bits (so <code>2</code> will, in your assumption, return <code>00000010</code>). But it doesn't (<code>2</code> = <code>10</code>), and that screws up your code.</p>
<p>For your last two lines, I think this should be fine:</p>
<pre class="lang-python prettyprint-override"><code>data = [list(str(bin(x))[2:]) for x in mem]
for a in range(len(data)):
while len(data[a]) < 8:
data[a] = "0" + data[a]
</code></pre>
<p><code>str(bin(x))[2:]</code> converts to binary (because it returns <code>0b1</code> for <code>1</code>, you need to use <code>[2:]</code> to get <code>1</code>).</p>
<p>The last chunk of code is to "pad" out your numbers with extra <code>0</code>'s.</p>
| 0 | 2016-09-27T07:44:37Z | [
"python",
"arrays",
"hex",
"bit"
]
|
Convert a byte array to single bits in an array [Python 3.5] | 39,718,576 | <p>I am looking for an operation which converts my byte array:</p>
<pre><code>mem = b'\x01\x02\xff'
</code></pre>
<p>in something like this:</p>
<pre><code>[ [0 0 0 0 0 0 0 1]
[0 0 0 0 0 0 1 0]
[1 1 1 1 1 1 1 1] ]
</code></pre>
<p>These are operations that I tried:</p>
<pre><code>import numpy as np
mem = b'\x01\x02\xff' #define my input
mem = np.fromstring(mem, dtype=np.uint8) #first convert to int
#print(mem) give me "[ 1 2 255]" at this piont
mem = np.array(['{0:08b}'.format(mem[b]) for b in mem]) #now convert to bin
data= np.array([list(mem[b]) for b in mem]) #finally convert to single bits
print(data)
</code></pre>
<p>This code will crash at line 4: <code>IndexError: index 255 is out of bounds for axis 0 with size 9</code>.
Otherwise, it crashes at line 5: <code>IndexError: too many indices for array</code>.</p>
<p><strong>These are my questions:</strong></p>
<p>Why is the number of spaces different after the conversion from hex to int?</p>
<p>Is that the reason that my next conversion from int to bin failed?</p>
<p>Finally, what is wrong with my <code>list</code> operation?</p>
<p>Thank you for your help! :)</p>
| 2 | 2016-09-27T07:25:40Z | 39,719,532 | <pre><code>mem = b'\x01\x02\xff'
[[int(digit) for digit in "{0:08b}".format(byte)] for byte in mem]
</code></pre>
<p>outputs:</p>
<pre><code>[[0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 0], [1, 1, 1, 1, 1, 1, 1, 1]]
</code></pre>
| 0 | 2016-09-27T08:15:49Z | [
"python",
"arrays",
"hex",
"bit"
]
|
Passing variables from normal function to init_UI function | 39,718,609 | <p>I have been facing a problem using a variable from a normal class function in the <code>init_UI</code> function.<br>
The code looks like this:</p>
<pre><code>def text_trans(self):
...
number = 2
...
def init_UI(self):
number = text_transfer(self)
print(number)
QtWidgets.QMessageBox.information(self,"generated","generated")
</code></pre>
<p>I wrote the above code but it shows a name error. Is there a way I can use the number variable from the first function in the second one?</p>
| -2 | 2016-09-27T07:26:58Z | 39,719,336 | <blockquote>
<p>I wrote the above code but it shows a name error.</p>
</blockquote>
<p>Probably because, for starters, <code>text_transfer != text_trans</code> and, secondly, in the body of methods you should use <code>self</code> to access other methods. So, <code>init_UI</code> should look like:</p>
<pre><code>def init_UI(self):
number = self.text_trans()
print(number)
</code></pre>
<p>You access the name through <code>self</code>, and <code>self</code> is then implicitly passed to the invoked function, so <code>text_transfer(self)</code> won't work.</p>
<blockquote>
<p>Is there a way I can use the number variable from the first function in the second one?</p>
</blockquote>
<p>In most cases, this is done by simply <em>returning</em> the wanted name from a given function:</p>
<pre><code>def text_trans(self):
...
number = 2
return number
</code></pre>
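<p>Putting both pieces together in a minimal, PyQt-free sketch (the class name is made up; only the two methods from the question matter):</p>

```python
class Window:
    def text_trans(self):
        number = 2
        return number               # hand the value back to the caller

    def init_UI(self):
        number = self.text_trans()  # call through self, not text_trans(self)
        return number

print(Window().init_UI())
```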
| 1 | 2016-09-27T08:05:03Z | [
"python",
"function",
"python-3.x",
"pyqt5"
]
|
Find Regexp and replace expression for CSV file starting with $ | 39,718,822 | <p>I need a Regexp and Replace expression to capture contents from a CSV file. My CSV file starts like this:</p>
<p>Example: the expression should find the key name "FC_host" in my CSV file and replace it with a different value.</p>
<pre><code>$TH_appName=tuipatthcrfh3320
#$TH_host=10.145.129.75
$TH_host=10.145.129.75
$TH_casPort=8500;
$TH_eacPort=8888;
$FC_appName=tuipatfc3320;
#$FC_host=10.145.129.75
$FC_host=10.145.129.75
$FC_casPort=8500;
$FC_eacPort=8888;
</code></pre>
<p>Below is my code. This code works but has a few issues. Kindly help me with this. Moreover, I am using regexp and replace so that I can update them on a remote server.</p>
<pre><code> ---
- hosts: local
vars:
properties:
- { name: "TH_appName", value: "10.0.1" }
tasks:
- name: Find and Replace
replace:
dest: /etc/ansible/kalyan-tui/example.csv
regexp: '(.*){{ item.name }}=(.*);'
replace: '\1{{ item.name }}={{ item.value }};'
# state: present
with_items:
- "{{ properties }}"
</code></pre>
| -1 | 2016-09-27T07:39:04Z | 39,719,332 | <pre><code>with open('items.csv', 'r')as file:
for row in file:
print row.replace('TH_host', 'Something else')
>>$TH_appName=tuipatthcrfh3320
>>#$Something else=10.145.129.75
>>$Something else=10.145.129.75
>>...
</code></pre>
<p>You don't need regex unless you need regular-expression features such as matching a certain number of digits or a certain number of spaces. Regex is comparatively expensive and should therefore be <a href="http://stackoverflow.com/questions/1782586/speed-of-many-regular-expressions-in-python">avoided</a> if possible.</p>
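<p>If the goal is to rewrite the value after the <code>=</code> for one key (rather than rename the key), plain string methods are enough. A sketch; the key and value here are only examples:</p>

```python
def replace_value(line, key, new_value):
    # Split on the first '=' and swap the value when the key name matches.
    name, sep, _ = line.partition('=')
    if sep and name.lstrip('#$') == key:
        return '{0}={1}\n'.format(name, new_value)
    return line

print(replace_value('$FC_host=10.145.129.75\n', 'FC_host', '10.0.0.1'))
```

<p>Note that commented lines like <code>#$FC_host=...</code> match too, because <code>lstrip('#$')</code> drops both markers; tighten the check if that is not wanted.</p>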
| 0 | 2016-09-27T08:04:59Z | [
"python",
"ansible",
"ansible-playbook",
"ansible-2.x"
]
|
Find Regexp and replace expression for CSV file starting with $ | 39,718,822 | <p>I need a Regexp and Replace expression to capture contents from a CSV file. My CSV file starts like this:</p>
<p>Example: the expression should find the key name "FC_host" in my CSV file and replace it with a different value.</p>
<pre><code>$TH_appName=tuipatthcrfh3320
#$TH_host=10.145.129.75
$TH_host=10.145.129.75
$TH_casPort=8500;
$TH_eacPort=8888;
$FC_appName=tuipatfc3320;
#$FC_host=10.145.129.75
$FC_host=10.145.129.75
$FC_casPort=8500;
$FC_eacPort=8888;
</code></pre>
<p>Below is my code. This code works but has a few issues. Kindly help me with this. Moreover, I am using regexp and replace so that I can update them on a remote server.</p>
<pre><code> ---
- hosts: local
vars:
properties:
- { name: "TH_appName", value: "10.0.1" }
tasks:
- name: Find and Replace
replace:
dest: /etc/ansible/kalyan-tui/example.csv
regexp: '(.*){{ item.name }}=(.*);'
replace: '\1{{ item.name }}={{ item.value }};'
# state: present
with_items:
- "{{ properties }}"
</code></pre>
| -1 | 2016-09-27T07:39:04Z | 39,719,859 | <p>The task should look like this:</p>
<pre><code>- name: Find and Replace
replace:
dest: /etc/ansible/kalyan-tui/example.csv
regexp: ^\$FC_host=.*
replace: "$FC_host={{ new_value }}"
</code></pre>
<hr>
<ul>
<li><code>state</code> is not a parameter of <code>replace</code></li>
<li>variable name cannot contain spaces </li>
<li>you don't need to quote the <code>regexp</code> value</li>
<li>you don't need to escape dollar sign in the replacement value</li>
</ul>
| 0 | 2016-09-27T08:33:30Z | [
"python",
"ansible",
"ansible-playbook",
"ansible-2.x"
]
|
Python multiple logger for multiple modules | 39,718,895 | <p>I have two files, namely main.py and my_modules.py. In main.py I have defined two loggers like this:</p>
<pre><code> #main.py
URL_LOGS = "logs/urls.log"
GEN_LOGS = 'logs/scrape.log'
#Create two logger files
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
# first file logger
url_info_logger = logging.getLogger('URL_Fetcher')
hdlr_1 = logging.FileHandler(URL_LOGS)
hdlr_1.setFormatter(formatter)
url_info_logger.setLevel(logging.DEBUG)
url_info_logger.addHandler(hdlr_1)
#second Logger
general_logger = logging.getLogger("GENERAL")
hdlr_2 = logging.FileHandler(GEN_LOGS)
hdlr_2.setFormatter(formatter)
general_logger.setLevel(logging.DEBUG)
general_logger.addHandler(hdlr_2)
module1()
do_something()
</code></pre>
<p>In my second file (my_modules.py) I have to use both loggers. Following is the sample code for my_modules.py:</p>
<pre><code> #my_modules.py
import logging
def module1():
general_logger.info("Logger Module1")
url_info_logger.info("New URL found")
def do_something():
general_logger.info("Logger Module2")
url_info_logger.info("Url parsed")
</code></pre>
<p>How do I implement loggers to be accessed in my_modules.py</p>
| 0 | 2016-09-27T07:42:23Z | 39,719,135 | <p>Quote from <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">logging documentation</a>: <em>Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.</em></p>
<p>So what you want to do in my_modules.py is just to call <code>getLogger()</code> again with the same name.</p>
<pre><code>#my_modules.py
import logging
url_info_logger = logging.getLogger('URL_Fetcher')
general_logger = logging.getLogger("GENERAL")
def module1():
general_logger.info("Logger Module1")
url_info_logger.info("New URL found")
def do_something():
general_logger.info("Logger Module2")
url_info_logger.info("Url parsed")
</code></pre>
<p>It should return the same logging object, since it is already defined before you call it the second time.</p>
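<p>The identity guarantee is easy to check on its own:</p>

```python
import logging

a = logging.getLogger('URL_Fetcher')
b = logging.getLogger('URL_Fetcher')

# Both calls resolve to the very same object in logging's registry, so
# handlers attached in main.py are visible from my_modules.py as well.
print(a is b)
```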
| 1 | 2016-09-27T07:55:08Z | [
"python",
"logging"
]
|
Python multiple logger for multiple modules | 39,718,895 | <p>I have two files, namely main.py and my_modules.py. In main.py I have defined two loggers like this:</p>
<pre><code> #main.py
URL_LOGS = "logs/urls.log"
GEN_LOGS = 'logs/scrape.log'
#Create two logger files
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
# first file logger
url_info_logger = logging.getLogger('URL_Fetcher')
hdlr_1 = logging.FileHandler(URL_LOGS)
hdlr_1.setFormatter(formatter)
url_info_logger.setLevel(logging.DEBUG)
url_info_logger.addHandler(hdlr_1)
#second Logger
general_logger = logging.getLogger("GENERAL")
hdlr_2 = logging.FileHandler(GEN_LOGS)
hdlr_2.setFormatter(formatter)
general_logger.setLevel(logging.DEBUG)
general_logger.addHandler(hdlr_2)
module1()
do_something()
</code></pre>
<p>In my second file (my_modules.py) I have to use both loggers. Following is the sample code for my_modules.py:</p>
<pre><code> #my_modules.py
import logging
def module1():
general_logger.info("Logger Module1")
url_info_logger.info("New URL found")
def do_something():
general_logger.info("Logger Module2")
url_info_logger.info("Url parsed")
</code></pre>
<p>How do I implement loggers to be accessed in my_modules.py</p>
| 0 | 2016-09-27T07:42:23Z | 39,719,829 | <p>if you transform the second module in a class, you can simply:</p>
<ul>
<li>declare the logger in the first modulue</li>
<li>create the new class passing the 2 logger as parameters</li>
<li>use the logger in the new class</li>
</ul>
<p>Example:</p>
<pre><code>import logging
class MyClassName:
def __init__(self, general_logger, url_info_logger):
self.general_logger = general_logger
self.url_info_logger = url_info_logger
def module1(self):
self.general_logger.info("Logger Module1")
self.url_info_logger.info("New URL found")
def do_something(self):
self.general_logger.info("Logger Module2")
self.url_info_logger.info("Url parsed")
</code></pre>
<p>In the main </p>
<pre><code>#main.py
...
myobj = MyClassName(general_logger,url_info_logger)
myobj.do_something()
myobj.module1()
</code></pre>
| 1 | 2016-09-27T08:31:41Z | [
"python",
"logging"
]
|
How to web scrape data using Python from an html table and store it in a csv file. I am able to extract some parts but not the others | 39,719,108 | <p>I am beginner in Web scraping and I have become very much interested in the process. I set for myself a Project that can keep me motivated till I completed the project.</p>
<p><strong>My Project</strong></p>
<p>My Aim is to write a Python Program that goes to my university results page and scrape all the results of a range of students and store each of their marks in each subject in a .csv file or , delimited text file. I have gotten the code working to submit the post request to the .asp page. I would appreciate it if you could guide me on how to store the subject wise details in separate columns like:</p>
<p><strong>Desired Output:</strong></p>
<p>Sl.no,Name,Subject1,Subject2,Subject3,Subject4,Subject5,Subject6,..etc</p>
<p>1,Jason,8,9,8,8,8,9..etc</p>
<p>2,Peter,6,8,9,8,7,7..etc</p>
<p>.</p>
<p>. </p>
<p>.</p>
<p>for a series of exam numbers.</p>
<p><strong>Some Sample Data to try it out</strong></p>
<p><strong>The Results Website</strong>: <a href="http://result.pondiuni.edu.in/candidate.asp" rel="nofollow">http://result.pondiuni.edu.in/candidate.asp</a></p>
<p><strong>Register Number</strong>: 15te1218</p>
<p><strong>Degree</strong>: BTHEE</p>
<p><strong>Exam</strong>: Second</p>
<p>Could anyone give me directions on how to accomplish this task?
Please correct me; it would be awesome if you could guide me to solve the problem.</p>
<p>Can this be done in a much simpler way?</p>
<p>In the code below you can see that I have tried to print out the name of the student, but it returns an empty set (doesn't work), and I don't want it to return the data as a set because there is only one occurrence of that detail.</p>
<p>I do not know how to extract the Subject Names and the corresponding mark of that student from the html table in the results page. Some help with this is needed.</p>
<p><strong>Code:</strong> </p>
<pre><code>import requests
from bs4 import BeautifulSoup
import re
import csv
for x in xrange(44,47):
EXAMNO ='15te12'+str(x)
print EXAMNO
data = {"txtregno": EXAMNO,
"cmbdegree": r"BTHEE~\BTHEE\result.mdb", # use raw strings
"cmbexamno": "B",
"dpath": r"\BTHEE\result.mdb",
"dname": "BTHEE",
"txtexamno": "B"}
results_page = requests.post("http://result.pondiuni.edu.in/ResultDisp.asp", data=data).content
soup = BeautifulSoup(results_page, 'html.parser').prettify()
regpa= "<!--Percentage / S.G.P.A : <b>(.+?) </b>&nbsp;&nbsp;&nbsp; -->"
patterngpa =re.compile(regpa)
gpa=re.findall(patterngpa,soup)
print gpa
rename="<font size=3 color=black>(.+?)</font>"
patternname=re.compile(rename)
name=re.findall(patternname,soup)
print (name)
</code></pre>
<p><strong>OUTPUT:</strong></p>
<pre><code>15te1244
[u'8.67']
15te1245
[u'8.8']
[]
15te1246
[u'7.8']
[]
</code></pre>
<p>Would be helpful if you could show me how to print it in the desired output format.</p>
<p>Thanks.</p>
| 0 | 2016-09-27T07:53:42Z | 39,725,728 | <p>Took a lot of time to find a brute force solution.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import re
import csv
for x in xrange(44,47):
EXAMNO ='15te12'+str(x)
data = {"txtregno": EXAMNO,
"cmbdegree": r"BTHEE~\BTHEE\result.mdb", # use raw strings
"cmbexamno": "B",
"dpath": r"\BTHEE\result.mdb",
"dname": "BTHEE",
"txtexamno": "B"}
results_page = requests.post("http://result.pondiuni.edu.in/ResultDisp.asp", data=data).content
soup = BeautifulSoup(results_page, 'html.parser').prettify()
string=str(BeautifulSoup(results_page, 'html.parser'))
regpa= "<!--Percentage / S.G.P.A : <b>(.+?) </b>&nbsp;&nbsp;&nbsp; -->"
print (re.search(regpa,string,re.M|re.I )).group(1)
regname="<b>Name of the student : <b><font color=\"black\" size=\"3\">(.*)</font></b></b>"
print (re.search(regname,string,re.M|re.I )).group(1)
regsub="66%\"><font color=\"black\" face=\"arial\" size=\"2\">(.*)</font></td>"
matches=(re.findall(regsub,string,re.M|re.I ))
for i in xrange(len(matches)):
regsubm=">"+matches[i]+"</font></td>\n<td align=\"center\" bgcolor=\"white\" width=\"2%\"><font color=\"black\" face=\"arial\" size=\"2\">..</font></td>\n<td align=\"center\" bgcolor=\"white\" width=\"7%\"><font color=\"black\" face=\"arial\" size=\"2\">[\xc2]?[\xa0]?[\xc2]?[\xa0]?-</font></td>\n<td align=\"center\" bgcolor=\"white\" width=\"1%\"><font color=\"black\" face=\"arial\" size=\"2\">-</font></td>\n<td align=\"center\" bgcolor=\"white\" width=\"5%\"><font color=\"black\" face=\"arial\" size=\"2\">-</font></td>\n<td align=\"center\" bgcolor=\"white\" width=\"5%\"><font color=\"black\" face=\"arial\" size=\"2\">(.*)</font>"
matchesm=re.findall(regsubm,string,re.M)
print matches[i],'--->',matchesm[0]
</code></pre>
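<p>As an aside, regular expressions over raw HTML are fragile; the standard library's <code>html.parser</code> can collect the table cells instead. This is a sketch only: the sample row below is made up, and the real results page's layout may differ.</p>

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Accumulate the text content of every <td> cell."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True
            self.cells.append('')

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td:
            self.cells[-1] += data.strip()

p = CellCollector()
p.feed('<tr><td>Subject1</td><td>8</td></tr>')
print(p.cells)
```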
| 0 | 2016-09-27T13:18:09Z | [
"python",
"html",
"csv",
"asp-classic",
"web-scraping"
]
|
Cannot import Owlready library into Python | 39,719,119 | <p>I am a beginner in Python. I am working on an ontology using <a href="http://pythonhosted.org/Owlready/" rel="nofollow">owlready</a>. I've installed <code>owlready</code> library on my PyCharm IDE, but there is an issue with importing <code>owlready</code> in my python code. I've tried <code>from owlready import *</code> just like in the documentation, but it always give me:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/siekoo/OneDrive/Development/python/NER/onto_start.py", line 1, in <module>
from owlready import *
File "C:\winpython\python-2.7.10.amd64\lib\site-packages\owlready\__init__.py", line 85
def __init__(self, *Entities, ontology = None):
^
SyntaxError: invalid syntax
</code></pre>
| 1 | 2016-09-27T07:54:22Z | 39,722,375 | <p>It looks like Owlready is for Python 3 while you're using Python 2. Change your python version for it to work.</p>
<p>The invalid syntax error is because of the new Python 3 argument lists, see: <a href="https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists" rel="nofollow">https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists</a></p>
| 1 | 2016-09-27T10:32:31Z | [
"python",
"ontology"
]
|
Python - Best way to find the 1d center of mass in a binary numpy array | 39,719,140 | <p>Suppose I have the following Numpy array, in which I have one and only one continuous slice of <code>1</code>s:</p>
<pre><code>import numpy as np
x = np.array([0,0,0,0,1,1,1,0,0,0], dtype=1)
</code></pre>
<p>and I want to find the index of the 1D center of mass of the <code>1</code> elements. I could type the following:</p>
<pre><code>idx = np.where( x )[0]
idx_center_of_mass = int(0.5*(idx.max() + idx.min()))
# this would give 5
</code></pre>
<p>(Of course this would lead to a rough approximation when the number of elements of the <code>1</code>s slice is even.)
Is there any better way to do this, like a computationally more efficient one-liner?</p>
| 4 | 2016-09-27T07:55:13Z | 39,719,208 | <p>As one approach we can get the non-zero indices and get the mean of those as the center of mass, like so -</p>
<pre><code>np.flatnonzero(x).mean()
</code></pre>
<p>Here's another approach using shifted array comparison to get the start and stop indices of that slice and getting the mean of those indices for determining the center of mass, like so -</p>
<pre><code>np.flatnonzero(x[:-1] != x[1:]).mean()+0.5
</code></pre>
<p>Runtime test -</p>
<pre><code>In [72]: x = np.zeros(10000,dtype=int)
In [73]: x[100:2000] = 1
In [74]: %timeit np.flatnonzero(x).mean()
10000 loops, best of 3: 115 µs per loop
In [75]: %timeit np.flatnonzero(x[:-1] != x[1:]).mean()+0.5
10000 loops, best of 3: 38.7 µs per loop
</code></pre>
<p>We can improve the performance by some margin here with the use of <code>np.nonzero()[0]</code> to replace <code>np.flatnonzero</code> and <code>np.sum</code> in place of <code>np.mean</code> -</p>
<pre><code>In [107]: %timeit (np.nonzero(x[:-1] != x[1:])[0].sum()+1)/2.0
10000 loops, best of 3: 30.6 µs per loop
</code></pre>
<p>Alternatively, for the second approach, we can store the start and stop indices and then simply add them to get the center of mass for a bit more efficient approach as we would avoid the function call to <code>np.mean</code>, like so -</p>
<pre><code>start,stop = np.flatnonzero(x[:-1] != x[1:])
out = (stop + start + 1)/2.0
</code></pre>
<p>Timings -</p>
<pre><code>In [90]: %timeit start,stop = np.flatnonzero(x[:-1] != x[1:])
10000 loops, best of 3: 21.3 µs per loop
In [91]: %timeit (stop + start + 1)/2.0
100000 loops, best of 3: 4.45 µs per loop
</code></pre>
<p>Again, we can experiment with <code>np.nonzero()[0]</code> here.</p>
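<p>Wrapped up as a small function (this assumes, as the question states, exactly one run of ones, and additionally that the run touches neither end of the array, so there are exactly two transitions):</p>

```python
import numpy as np

def center_of_mass_1d(x):
    # Indices where consecutive values differ bound the run of ones.
    start, stop = np.flatnonzero(x[:-1] != x[1:])
    return (start + stop + 1) / 2.0

x = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 0])
print(center_of_mass_1d(x))   # 5.0
```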
| 2 | 2016-09-27T07:58:24Z | [
"python",
"arrays",
"numpy",
"binary",
"boolean"
]
|
Python - Best way to find the 1d center of mass in a binary numpy array | 39,719,140 | <p>Suppose I have the following Numpy array, in which I have one and only one continuous slice of <code>1</code>s:</p>
<pre><code>import numpy as np
x = np.array([0,0,0,0,1,1,1,0,0,0], dtype=1)
</code></pre>
<p>and I want to find the index of the 1D center of mass of the <code>1</code> elements. I could type the following:</p>
<pre><code>idx = np.where( x )[0]
idx_center_of_mass = int(0.5*(idx.max() + idx.min()))
# this would give 5
</code></pre>
<p>(Of course this would lead to a rough approximation when the number of elements of the <code>1</code>s slice is even.)
Is there any better way to do this, like a computationally more efficient one-liner?</p>
| 4 | 2016-09-27T07:55:13Z | 39,719,579 | <p>Can't you simply do the following?</p>
<pre><code>center_of_mass = (x*np.arange(len(x))).sum()/x.sum() # 5
%timeit center_of_mass = (x*arange(len(x))).sum()/x.sum()
# 100000 loops, best of 3: 10.4 µs per loop
</code></pre>
| 2 | 2016-09-27T08:18:21Z | [
"python",
"arrays",
"numpy",
"binary",
"boolean"
]
|
Python: "while" is too slow, and sleep(0.25) becomes sleep(3) in actual execution | 39,719,177 | <p>I am running a Python Program on a Raspberry Pi 3 which I want to log the temperature from a DS18B20 sensor once every 0.25 seconds.</p>
<p>Earlier, when the program was simple and only displayed the temperature on the shell, it was quite fast and had no issues. Unfortunately, now that the program includes logging to a file, I am getting a log entry only every 2 or 3 seconds.</p>
<p>How do I ensure the 0.25 second logging interval?</p>
<p>I have shared the code below:</p>
<pre><code>#This program logs temperature from DS18B20 and records it
#Plots the temperature-time plot.
import os
import sys
#import matplotlib.pyplot as plt
from re import findall
from time import sleep, strftime, time
from datetime import *
#plt.ion()
#x = []
#y = []
ds18b20 = ''
def setup():
global ds18b20
for i in os.listdir('/sys/bus/w1/devices'):
if i != 'w1_bus_master1':
ds18b20 = i
# Reads temperature data from the Temp sensor
# This needs to be modified for use with max31855 and K-type thermocouples
def read():
# global ds18b20
location = '/sys/bus/w1/devices/' + ds18b20 + '/w1_slave'
tfile = open(location)
text = tfile.read()
tfile.close()
secondline = text.split("\n")[1]
temperaturedata = secondline.split(" ")[9]
temperature = float(temperaturedata[2:])
temperature = temperature / 1000
return temperature
#Loop for logging - sleep, and interrupt to be configured.
def loop():
while True:
if read() != None:
print "Current temperature : %0.3f C" % read()
#sleep(0.25)
func()
def write_temp(temperature,file_name):
with open(file_name, 'a') as log:
log.write("{0},{1}\n".format(datetime.now().strftime("%d-%m-%Y %H:%M:%S"),str(temperature)))
arg = sys.argv[1]
filename1 = str(arg) + "-" + datetime.now().strftime("%d-%m-%Y-%H-%M-%S")+".csv"
def func():
temperature = read()
#sleep(0.25)
write_temp(temperature,filename1)
#graph(temperature)
#For plotting graph using MatPlotLib
#Comment out this function during foundry trials to avoid system slowdown
#Check system resource usage and slowdown using TOP or HTOP
#def graph(temperature):
# y.append(temperature)
# x.append(time())
# plt.clf()
# plt.scatter(x,y)
# plt.plot(x,y)
# plt.draw()
#Interrupt from command-line
def destroy():
pass
if __name__ == '__main__':
try:
setup()
func()
loop()
except KeyboardInterrupt:
destroy()
</code></pre>
<p>I have commented out sections that I thought to be resource heavy, but still I can't manage anything less than 2 seconds. I am getting results as below:</p>
<p><strong>Output:</strong></p>
<pre><code>27-09-2016 12:18:41,23.0
27-09-2016 12:18:43,23.062
27-09-2016 12:18:46,23.125
27-09-2016 12:18:48,23.187
27-09-2016 12:18:50,23.187
27-09-2016 12:18:53,23.562
27-09-2016 12:18:55,25.875
27-09-2016 12:18:58,27.187
27-09-2016 12:19:00,27.5
</code></pre>
| 0 | 2016-09-27T07:57:11Z | 39,719,488 | <ol>
<li>Only open the logfile once (and close it on program exit)</li>
<li>Don't always re-read the temperature from the sensor. You call <code>read()</code> way too often.</li>
<li>Reduce general overhead and simplify your calls.</li>
</ol>
<p>I am not able to completely test this, but something like this sould work:</p>
<pre><code>import os
import sys
import time
from datetime import datetime
def read_temp(dev):
'''Reads temperature from sensor and returns it as float.'''
loc = '/sys/bus/w1/devices/' + dev + '/w1_slave'
with open(loc) as tf:
return float(tf.read().split('\n')[1].split(' ')[9][2:]) / 1000.0
def write_temp(t, logfile):
'''Writes temperature as .3 float to open file handle.'''
logfile.write('{0},{1:.3f}\n'.format(datetime.now().strftime('%d-%m-%Y %H:%M:%S'), t))
def loop(dev, logfile):
'''Starts temperature logging until user interrupts.'''
while True:
t = read_temp(dev)
if t:
write_temp(t, logfile)
print('Current temperature: {0:.3f} °C'.format(t))
sys.stdout.flush() # Flush. Btw, print is time-consuming!
time.sleep(.25)
if __name__ == '__main__':
# Take the first match for a device that is not 'w1_bus_master1'
dev = [d for d in os.listdir('/sys/bus/w1/devices') if d != 'w1_bus_master1'][0]
# Prepare the log filename
fname = str(sys.argv[1]) + "-" + datetime.now().strftime("%d-%m-%Y-%H-%M-%S")+".csv"
# Immediately open the log in append mode and do not close it!
logfile = open(fname, 'a')
try:
# Only pass device and file handle, not the file name.
loop(dev, logfile)
except KeyboardInterrupt:
# Close log file on exit
logfile.close()
</code></pre>
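<p>One further refinement, since the target is a fixed 0.25 s cadence: <code>time.sleep(.25)</code> waits a quarter second <em>on top of</em> the time spent reading the sensor and writing the file, so the interval drifts. Scheduling against a monotonic clock compensates for the work time (a sketch; the interval value is just an example):</p>

```python
import time

def tick(interval):
    """Yield on a fixed cadence, subtracting work time from each sleep."""
    next_t = time.monotonic()
    while True:
        yield
        next_t += interval
        time.sleep(max(0.0, next_t - time.monotonic()))

ticks = tick(0.25)
# for _ in ticks: t = read_temp(dev); write_temp(t, logfile)
```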
| 2 | 2016-09-27T08:13:39Z | [
"python",
"runtime",
"raspberry-pi3"
]
|
How to pass output from a python program into a processing program | 39,719,193 | <p>I am reading the orientation of a gyroscope <a href="https://www.raspberrypi.org/products/sense-hat/" rel="nofollow">(sense-hat)</a> through a Python program that can output the position as strings. I am trying to use this data as an input in a Processing program to make it interactive depending on the orientation of the gyroscope. How would I get Processing to interface with the Python program?</p>
| 0 | 2016-09-27T07:57:46Z | 39,720,811 | <p>I have used the following snippet of code in a bash script to take output from a python program. I hope this helps. </p>
<pre><code>OUTPUT="$(python your_program.py)"
echo "$OUTPUT"
</code></pre>
| 0 | 2016-09-27T09:19:08Z | [
"python",
"raspberry-pi",
"processing",
"gyroscope"
]
|
How to pass output from a python program into a processing program | 39,719,193 | <p>I am reading the orientation of a gyroscope <a href="https://www.raspberrypi.org/products/sense-hat/" rel="nofollow">(sense-hat)</a> through a Python program that can output the position as strings. I am trying to use this data as an input in a Processing program to make it interactive depending on the orientation of the gyroscope. How would I get Processing to interface with the Python program?</p>
| 0 | 2016-09-27T07:57:46Z | 39,723,042 | <p>I've never used the Sense HAT before, but I'm guessing it's using I2C behind the scenes. In theory it should be possible to reimplement the code in Processing using its <a href="https://processing.org/reference/libraries/io/I2C.html" rel="nofollow">I2C io library</a>, but in practice it may take quite a bit of effort, looking at how the <a href="https://github.com/RPi-Distro/python-sense-hat/blob/458b55f5b8f32d855ba0e5850aed517b06e91f84/sense_hat/sense_hat.py#L78" rel="nofollow">sense-hat library uses RTIMU</a> and all the fancy filtering it does on its own.</p>
<p>To get your Python program to talk to Processing you have at least two options:</p>
<ol>
<li><a href="https://docs.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html" rel="nofollow">pipe the output</a> from the python program into Processing's <code>stdin</code> and parse what's coming through</li>
<li>Use sockets.</li>
</ol>
<p>The second option should be simpler, and I can think of multiple options:</p>
<ol>
<li>raw UDP sockets</li>
<li>OSC using <a href="https://pypi.python.org/pypi/pyOSC" rel="nofollow">PyOSC</a> for the Python and <a href="http://www.sojamo.de/libraries/oscP5/" rel="nofollow">oscP5</a> for Processing</li>
<li>Using WebSockets</li>
</ol>
<p>I'd recommend the second option again: UDP is pretty fast and OSC on top of that makes it easy to pass messages with arguments.</p>
<p>The Python script would:</p>
<ul>
<li>poll orientation data</li>
<li>share orientation values via a message like <code>/orientation</code></li>
</ul>
<p>The Processing sketch would:</p>
<ul>
<li>be an OSC server and wait for data</li>
<li>fetch the 3 float arguments from the <code>/orientation</code> message received and draw</li>
</ul>
<p>Here's an <strong>untested</strong> proof of concept sender script in Python:</p>
<pre><code>import time
from sense_hat import SenseHat
from OSC import OSCClient, OSCMessage
#update 60 times a second -> feel free to adjust this to what works best
delay = 1.0/60.0
# sense hat
sense = SenseHat()
# OSC client -> Processing
client = OSCClient()
client.connect( ("localhost", 12000) )
while True:
# read sense hat
orientation = sense.get_orientation_degrees()
print("p: {pitch}, r: {roll}, y: {yaw}".format(**orientation))
# send data to Processing
client.send( OSCMessage("/orientation", [orientation["pitch"],orientation["roll"],orientation["yaw"] ] ) )
# wait
time.sleep(delay)
</code></pre>
<p>and the Processing receiver:</p>
<pre><code>import oscP5.*;
import netP5.*;
OscP5 oscP5;
float pitch,yaw,roll;
void setup() {
size(400,400,P3D);
frameRate(25);
/* start oscP5, listening for incoming messages at port 12000 */
oscP5 = new OscP5(this,12000);
}
void draw() {
background(0);
text("pitch: " + pitch + "\nyaw: " + yaw + "\nroll: " + roll,10,15);
}
/* incoming osc message are forwarded to the oscEvent method. */
void oscEvent(OscMessage message) {
message.print();
if(message.checkAddrPattern("/orientation")==true) {
/* check if the typetag is the right one. -> expecting float (pitch),float (yaw), float (roll)*/
if(message.checkTypetag("fff")) {
pitch = message.get(0).floatValue();
yaw = message.get(1).floatValue();
roll = message.get(2).floatValue();
}
}
}
</code></pre>
<p><strong>Note</strong> that you'll need to install PyOSC and run the Processing sketch beforehand, otherwise you might get a Python error about the OSCClient not being able to connect. The idea is Processing becomes an OSC server and the Python script is an OSC client, and the server needs to be available for the client to connect. (You can make the Python script an OSC server if you want and the Processing sketch a client if that works better for you)</p>
<p>To install PyOSC try:</p>
<pre><code>sudo pip install pyosc
</code></pre>
<p>Otherwise:</p>
<pre><code>cd ~/Downloads
wget https://pypi.python.org/packages/7c/e4/6abb118cf110813a7922119ed0d53e5fe51c570296785ec2a39f37606d85/pyOSC-0.3.5b-5294.tar.gz
tar xvzf pyOSC-0.3.5b-5294.tar.gz
cd pyOSC-0.3.5b-5294
sudo python setup.py install
</code></pre>
<p>Again, the above is untested, but the idea is to:</p>
<ol>
<li>Download the library</li>
<li>Unzip it</li>
<li>Navigate to the unzipped folder</li>
<li>Install it via <code>sudo python setup.py install</code></li>
</ol>
| 1 | 2016-09-27T11:05:24Z | [
"python",
"raspberry-pi",
"processing",
"gyroscope"
]
|
Python 3, yield expression return value influenced by its value just received via send()? | 39,719,530 | <p>After reading documentation, questions, and making my own test code, I believe I have understood how a <code>yield expression</code> works.</p>
<p>Nevertheless, I am surprised of the behavior of the following example code:</p>
<pre><code>def gen(n=0):
while True:
n = (yield n) or n+1
g=gen()
print( next(g) )
print( next(g) )
print( g.send(5) )
print( next(g) )
print( next(g) )
</code></pre>
<p><strong>I would have expected that it returned 0, 1, 2, 5, 6, while instead it produces: 0, 1, 5, 6, 7.</strong> </p>
<p>I.e., I would have expected the <code>yield expression</code> to produce these effects:</p>
<ol>
<li>calculate the value of the <code>yield expression</code> , and return it to the caller</li>
<li>get the value(s) from the caller's <code>send()</code> and use them as the value of the yield expression which the generator function code receives</li>
<li>suspend execution before anything else is executed; it will be resumed at the same point at the same <code>next(g)</code> or <code>g.send()</code> call</li>
</ol>
<p>... and/or that Python would care to avoid any interference between the two
flows of information in (1) and (2), i.e. that they were guaranteed independent such as in a tuple assignment <code>a, b = f(a,b), g(a,b)</code></p>
<p>(I would even wonder if it were better to make the suspension happen in between (1) and (2), but maybe it would be quite complicated because it would imply that only part of the statement is executed and the rest is held for the next resume)</p>
<p>Anyway, the order of the operations is rather (2), then (1), then (3), so that the assignment in (2) occurs before, and can influence the assignment in (1). I.e. the value injected by the <code>g.send()</code> call is used before calculating the yield expression itself, which is directly exposed to the caller as the value of the same <code>g.send()</code> expression.</p>
<p>I am astonished because from the point of view of the code in the generator function, the value received on its <code>lhs</code> can influence the value taken by the <code>rhs</code>!</p>
<p>To me, this is kind of misleading because one expects that in a statement like <code>lhs expr = rhs expr</code>, all calculations in the <code>rhs expr</code> are finished before doing the assignment, and frozen during the assignment. It looks really weird that the <code>lhs</code> of an assignment can influence its own <code>rhs</code>! </p>
<p><strong>The question: which are the reasons why it was made this way? Any clue?</strong></p>
<p>(I know that "We prefer questions that can be answered, not just discussed", but this is something in which I stumbled and made me consume a lot of time. I believe a bit of discussion won't to any bad and maybe will save someone else's time) </p>
<p>PS. of course I understand that I can separate the assignment into two steps, so that any value received from <code>send()</code> will be used only after resuming the operation. Like this:</p>
<pre><code>def gen(n=0):
while True:
received = (yield n)
n = received or (n+1)
</code></pre>
| 0 | 2016-09-27T08:15:46Z | 39,719,665 | <p>Your confusion lies with <code>generator.send()</code>. Sending is <em>just the same thing as using <code>next()</code></em>, with the difference being that the <code>yield</code> expression produces a different value. Put differently, <code>next(g)</code> is the same thing as <code>g.send(None)</code>, both operations resume the generator there and then.</p>
<p>Remember that a generator <em>starts</em> paused, at the top. The first <code>next()</code> call advances to the first <code>yield</code> expression, stops the generator and then pauses. When a <code>yield</code> expression is paused and you call <em>either</em> <code>next(g)</code> or <code>g.send(..)</code>, the generator is resumed where it is right now, and then runs until the next <code>yield</code> expression is reached, at which point it pauses again.</p>
<p>For your code, this happens:</p>
<ul>
<li><code>g</code> is created, nothing happens in <code>gen()</code></li>
<li><code>next(g)</code> actually enters the function body, <code>n = 0</code> is executed, <code>yield n</code> pauses <code>g</code> and yields <code>0</code>. This is printed.</li>
<li><code>next(g)</code> resumes the generator; <code>None</code> is returned for <code>yield n</code> (nothing was sent after all), so <code>None or n + 1</code> is executed and <code>n = 1</code> is set. The loop continues on and <code>yield n</code> is reached again, the generator pauses and <code>1</code> is yielded. This is printed.</li>
<li><code>g.send(5)</code> resumes the generator. <code>5 or n + 1</code> means <code>n = 5</code> is executed. The loop continues until <code>yield n</code> is reached, the generator is paused, <code>5</code> is yielded and you print <code>5</code>.</li>
<li><code>next(g)</code> resumes the generator; <code>None</code> is returned (nothing was sent again), so <code>None or n + 1</code> is executed and <code>n = 6</code> is set. The loop continues on and <code>yield n</code> is reached again, the generator pauses and <code>6</code> is yielded and printed.</li>
<li><code>next(g)</code> resumes the generator; <code>None</code> is returned (nothing was sent again), so <code>None or n + 1</code> is executed and <code>n = 7</code> is set. The loop continues on and <code>yield n</code> is reached again, the generator pauses and <code>7</code> is yielded and printed.</li>
</ul>
<p>Given your steps 1., 2. and 3., the actual order is 3., 2., 1. then, with the addition that <code>next()</code> also goes through step <code>2.</code> producing <code>None</code>, and <code>1.</code> being the <em>next</em> invocation of <code>yield</code> encountered after un-pausing.</p>
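<p>The walk-through above can be checked directly against the question's generator; note in particular that <code>g.send(None)</code> and <code>next(g)</code> are interchangeable:</p>

```python
def gen(n=0):
    while True:
        n = (yield n) or n + 1

g = gen()
print(next(g))       # 0: runs to the first yield; nothing has been sent yet
print(g.send(None))  # 1: identical to next(g) -- yield returned None, so n = None or 0 + 1
print(g.send(5))     # 5: yield returned 5 *first*, so n = 5 before the next yield runs
print(next(g))       # 6: yield returned None again, so n = 5 + 1
```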
| 4 | 2016-09-27T08:23:34Z | [
"python",
"yield"
]
|
Not nesting version of @atomic() in Django? | 39,719,567 | <p>From the <a href="https://docs.djangoproject.com/en/dev/topics/db/transactions/#django.db.transaction.atomic">docs of atomic()</a></p>
<blockquote>
<p>atomic blocks can be nested</p>
</blockquote>
<p>This sounds like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with <code>@atomic()</code> gets left successfully.</p>
<p>Is there a way to ensure durability in django's transaction handling?</p>
<h1>Background</h1>
<p>Transactions are ACID. The "D" stands for durability. That's why I think transactions can't be nested without losing feature "D".</p>
<p>Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable.</p>
<p>I use PostgreSQL, but AFAIK this should not matter much.</p>
| 15 | 2016-09-27T08:17:37Z | 39,721,631 | <p>You can't do that through any API.</p>
<p>Transactions can't be nested while retaining all ACID properties, and not all databases support nested transactions.</p>
<p>Only the outermost atomic block creates a transaction. Inner atomic blocks create a savepoint inside the transaction, and release or roll back the savepoint when exiting the inner block. As such, inner atomic blocks provide atomicity, but as you noted, not e.g. durability.</p>
<p>Since the outermost atomic block creates a transaction, it <em>must</em> provide atomicity, and you can't commit a nested atomic block to the database if the containing transaction is not committed.</p>
<p>The only way to ensure that the inner block is committed is to make sure that the code in the transaction finishes executing without any errors.</p>
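<p>The savepoint behaviour can be sketched outside Django with the standard library's <code>sqlite3</code> (table and savepoint names are illustrative; this only models what the nested blocks compile down to, not Django's actual implementation):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions by hand
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")                          # outer atomic block -> a real transaction
conn.execute("INSERT INTO t (x) VALUES (1)")
conn.execute("SAVEPOINT inner_block")          # inner atomic block -> only a savepoint
conn.execute("INSERT INTO t (x) VALUES (2)")
conn.execute("RELEASE SAVEPOINT inner_block")  # inner block exits "successfully"...
conn.execute("ROLLBACK")                       # ...but the outer rollback discards it all

print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 0 rows survive
```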
| 7 | 2016-09-27T09:56:10Z | [
"python",
"django",
"postgresql",
"transactions",
"acid"
]
|
Not nesting version of @atomic() in Django? | 39,719,567 | <p>From the <a href="https://docs.djangoproject.com/en/dev/topics/db/transactions/#django.db.transaction.atomic">docs of atomic()</a></p>
<blockquote>
<p>atomic blocks can be nested</p>
</blockquote>
<p>This sounds like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with <code>@atomic()</code> gets left successfully.</p>
<p>Is there a way to ensure durability in django's transaction handling?</p>
<h1>Background</h1>
<p>Transactions are ACID. The "D" stands for durability. That's why I think transactions can't be nested without losing feature "D".</p>
<p>Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable.</p>
<p>I use PostgreSQL, but AFAIK this should not matter much.</p>
| 15 | 2016-09-27T08:17:37Z | 39,798,947 | <p>I agree with knbk's answer that it is not possible: durability is only present at the level of a transaction, and atomic provides that. It does not provide it at the level of save points. Depending on the use case, there may be workarounds.</p>
<p>I'm guessing your use case is something like:</p>
<pre><code>@atomic # possibly implicit if ATOMIC_REQUESTS is enabled
def my_view():
run_some_code() # It's fine if this gets rolled back.
charge_a_credit_card() # It's not OK if this gets rolled back.
run_some_more_code() # This shouldn't roll back the credit card.
</code></pre>
<p>I think you'd want something like:</p>
<pre><code>@transaction.non_atomic_requests
def my_view():
with atomic():
run_some_code()
with atomic():
charge_a_credit_card()
with atomic():
run_some_more_code()
</code></pre>
<p>If your use case is for credit cards specifically (as mine was when I had this issue a few years ago), my coworker discovered that <a href="https://support.stripe.com/questions/does-stripe-support-authorize-and-capture">credit card processors actually provide mechanisms for handling this</a>. A similar mechanism might work for your use case, depending on the problem structure:</p>
<pre><code>@atomic
def my_view():
run_some_code()
result = charge_a_credit_card(capture=False)
if result.successful:
transaction.on_commit(lambda: result.capture())
run_some_more_code()
</code></pre>
<p>Another option would be to use a non-transactional persistence mechanism for recording what you're interested in, like a log database, or a redis queue of things to record.</p>
| 6 | 2016-09-30T19:38:21Z | [
"python",
"django",
"postgresql",
"transactions",
"acid"
]
|
Not nesting version of @atomic() in Django? | 39,719,567 | <p>From the <a href="https://docs.djangoproject.com/en/dev/topics/db/transactions/#django.db.transaction.atomic">docs of atomic()</a></p>
<blockquote>
<p>atomic blocks can be nested</p>
</blockquote>
<p>This sounds like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with <code>@atomic()</code> gets left successfully.</p>
<p>Is there a way to ensure durability in django's transaction handling?</p>
<h1>Background</h1>
<p>Transactions are ACID. The "D" stands for durability. That's why I think transactions can't be nested without losing feature "D".</p>
<p>Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable.</p>
<p>I use PostgreSQL, but AFAIK this should not matter much.</p>
| 15 | 2016-09-27T08:17:37Z | 39,808,352 | <p>This type of <em>durability</em> is <strong>impossible due to ACID</strong>, with one connection. (i.e. that a nested block stays committed while the outer block gets rolled back) It is a consequence of ACID, not a problem of Django. Imagine a database where table <code>B</code> has a foreign key to table <code>A</code>.</p>
<pre><code>CREATE TABLE A (id serial primary key);
CREATE TABLE B (id serial primary key, a_id integer references A (id));
-- transaction
INSERT INTO A DEFAULT VALUES RETURNING id AS new_a_id
-- like it would be possible to create an inner transaction
INSERT INTO B (a_id) VALUES (new_a_id)
-- commit
-- rollback (= integrity problem)
</code></pre>
<p>If the inner "transaction" should be durable while the (outer) transaction get rolled back then the integrity would be broken. The rollback operation must be always implemented so that it can never fail, therefore no database would implement a nested independent transaction. It would be against the principle of causality and the integrity can not be guarantied after such selective rollback. It is also against atomicity. </p>
<p>The transaction is related to a database connection. If you create <strong>two connections</strong> then two independent transactions are created. One connection doesn't see uncommitted rows of other transactions (it is possible to set this <em>isolation level</em>, but it depends on the database backend) and no foreign keys to them can be created and the integrity is preserved after rollback by the database backend design.</p>
<p>Django supports multiple databases, therefore multiple connections.</p>
<pre><code># no ATOMIC_REQUESTS should be set for "other_db" in DATABASES
@transaction.atomic # atomic for the database "default"
def my_view():
with atomic(): # or set atomic() here, for the database "default"
some_code()
with atomic("other_db"):
row = OtherModel.objects.using("other_db").create(**kwargs)
raise DatabaseError
</code></pre>
<p>The data in "other_db" stays committed.</p>
<p>It is probably possible in Django to create a trick with two connections to the same database, as if they were two databases, with some database backends, but I'm sure that it is untested, it would be prone to mistakes, with problems with migrations, a bigger load on the database backend that must create real parallel transactions at every request, and it cannot be optimized. It is better to use two real databases or to reorganize the code.</p>
<p>The setting DATABASE_ROUTERS is very useful, but I'm not sure yet if you are interested in multiple connections.</p>
| 6 | 2016-10-01T15:29:15Z | [
"python",
"django",
"postgresql",
"transactions",
"acid"
]
|
Model Choice Field - get the id | 39,719,596 | <p>I am busy trying to get the id only in integer format preferably for the ModelChoiceField. I get the list to display but it gets returned in a string format. Please help me in retrieving the id of the ModelChoiceField. I think I need to do this in the view. </p>
<pre><code>forms.py
class ProjectForm(forms.ModelForm):
items = forms.ModelChoiceField(queryset=Project.objects.all())
class Meta:
model = Project
fields = ['items']
models.py
class Project(models.Model):
items = models.IntegerField(default=0, blank=True, null=True)
views.py
def ProjectView(request):
form = ProjectForm(request.POST)
if request.method == 'POST':
if form.is_valid():
save_it = form.save(commit=False)
save_it.save()
return HttpResponseRedirect('/')
else:
form = ProjectForm()
return render(request, 't.html', {'form': form })
</code></pre>
| 0 | 2016-09-27T08:19:23Z | 39,719,793 | <p>From what I can tell, <code>items</code> should never be an <code>IntegerField</code>. Your usage has it set up to be a <code>ForeignKey</code> to a <code>Project</code> so you should just make that explicit</p>
<pre><code>items = models.ForeignKey('self', null=True, blank=True)
</code></pre>
<p>Possibly with a better descriptive name than items.</p>
<p>Then, you don't need to define anything on the form, it just becomes a standard model form, with a standard model form usage.</p>
| 0 | 2016-09-27T08:30:21Z | [
"python",
"django",
"django-models",
"django-forms"
]
|
Move patch rather than remove it | 39,719,652 | <p>I have a graph that is gradually revealed. Since this should happen with a huge dataset and on several subplots simultaneously, I was planning to <em>move</em> the <code>patch</code> rather than remove it and draw it from scratch in order to accelerate the code.</p>
<p>My problem is similar to <a href="http://stackoverflow.com/questions/16527930/matplotlib-update-position-of-patches-or-set-xy-for-circles">this question</a>. However, I could not resolve it.</p>
<p>Here is a <em>minimal</em> working example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
nmax = 10
xdata = range(nmax)
ydata = np.random.random(nmax)
fig, ax = plt.subplots()
ax.plot(xdata, ydata, 'o-')
ax.xaxis.set_ticks(xdata)
plt.ion()
i = 0
while i <= nmax:
# ------------ Here I would like to move it rather than remove and redraw.
if i > 0:
rect.remove()
rect = plt.Rectangle((i, 0), nmax - i, 1, zorder = 10)
ax.add_patch(rect)
# ------------
plt.pause(0.1)
i += 1
plt.pause(3)
</code></pre>
| 1 | 2016-09-27T08:22:34Z | 39,720,245 | <p>Maybe this works for you. Instead of removing the patch, you just update its position and its width (with <code>rect.set_x(left_x)</code> and <code>rect.set_width(width)</code>), and then redraw the canvas. Try replacing your loop with this (note that the <code>Rectangle</code> is created <em>once</em>, before the loop):</p>
<pre><code>rect = plt.Rectangle((0, 0), nmax, 1, zorder=10)
ax.add_patch(rect)
for i in range(nmax):
rect.set_x(i)
rect.set_width(nmax - i)
fig.canvas.draw()
plt.pause(0.1)
</code></pre>
| 1 | 2016-09-27T08:52:28Z | [
"python",
"matplotlib"
]
|
What is the difference between JSON.load() and JSON.loads() functions in PYTHON? | 39,719,689 | <p>In python, what is the difference between <strong>json.load()</strong> and <strong>json.loads()</strong> ?</p>
<p>I guess that the <em>load()</em> function must be used with a file object (I thus need to use a context manager) while the <em>loads()</em> function takes the path to the file as a string. It is a bit confusing.</p>
<p>Does the letter "<strong>s</strong>" in <strong>json.loads()</strong> stand for <strong>string</strong>?</p>
<p>Thanks a lot for your answers !</p>
| -6 | 2016-09-27T08:24:26Z | 39,719,701 | <p>Yes, it stands for string. The <code>json.loads</code> function does not take the file path, but the file contents as a string. Look at the documentation at <a href="https://docs.python.org/2/library/json.html" rel="nofollow">https://docs.python.org/2/library/json.html</a>!</p>
| 3 | 2016-09-27T08:25:26Z | [
"python",
"json",
"python-2.7"
]
|
What is the difference between JSON.load() and JSON.loads() functions in PYTHON? | 39,719,689 | <p>In python, what is the difference between <strong>json.load()</strong> and <strong>json.loads()</strong> ?</p>
<p>I guess that the <em>load()</em> function must be used with a file object (I thus need to use a context manager) while the <em>loads()</em> function takes the path to the file as a string. It is a bit confusing.</p>
<p>Does the letter "<strong>s</strong>" in <strong>json.loads()</strong> stand for <strong>string</strong>?</p>
<p>Thanks a lot for your answers !</p>
| -6 | 2016-09-27T08:24:26Z | 39,719,723 | <p>Documentation is quite clear: <a href="https://docs.python.org/2/library/json.html" rel="nofollow">https://docs.python.org/2/library/json.html</a></p>
<pre><code>json.load(fp[, encoding[, cls[, object_hook[, parse_float[, parse_int[, parse_constant[, object_pairs_hook[, **kw]]]]]]]])
</code></pre>
<blockquote>
<p>Deserialize fp (a .read()-supporting file-like object containing a
JSON document) to a Python object using this conversion table.</p>
</blockquote>
<pre><code>json.loads(s[, encoding[, cls[, object_hook[, parse_float[, parse_int[, parse_constant[, object_pairs_hook[, **kw]]]]]]]])
</code></pre>
<blockquote>
<p>Deserialize s (a str or unicode instance containing a JSON document)
to a Python object using this conversion table.</p>
</blockquote>
<p>So <code>load</code> is for a file, <code>loads</code> for a <code>string</code>.</p>
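<p>For example (the file name is arbitrary):</p>

```python
import json
import os
import tempfile

document = '{"name": "example", "value": 1}'

# json.loads: takes the JSON document itself, as a string
obj = json.loads(document)
print(obj["value"])  # 1

# json.load: takes an open file object, not a path string
path = os.path.join(tempfile.mkdtemp(), "data.json")
with open(path, "w") as f:
    f.write(document)
with open(path) as f:
    print(json.load(f) == obj)  # True
```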
| 0 | 2016-09-27T08:26:57Z | [
"python",
"json",
"python-2.7"
]
|
How to retrieve a Control Flow Graph for python code? | 39,719,729 | <p>I would like to dump the Control Flow Graph of a given python code,
similar to the gcc compiler option -fdump-tree-cfg for C code.</p>
<p>I succeeded getting the AST (Abstract Syntax Trees) of a python code,
but it seems quite complex and a hassle to get the Control Flow Graph from the AST phase.</p>
<p>Is there an easier way to retrieve directly the Control Flow Graph of a python code? any suggestions?</p>
<p>oh by the way I'm using python3.5</p>
<p>Thank you all!</p>
<p>P.S
I really don't know what kind of interpreter I'm using under the hood,
As far as I know it's CPython (not sure); I don't think it's PyPy (RPython).
Any suggestion how I can verify it?</p>
| 3 | 2016-09-27T08:27:12Z | 39,720,615 | <p>See my <a href="http://stackoverflow.com/a/9989663/120163">SO answer on how to build a control flow graph, using an AST</a>.</p>
<p>The original question asked about CFGs for Java, but the approach is actually pretty generic, and the same approach would work for producing a CFG for Python.</p>
<p>I wouldn't have called this "quite complex"; the basic idea is pretty simple.</p>
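<p>As a starting point for that approach in Python, the standard <code>ast</code> module already exposes the statement and branch nodes a CFG builder would turn into basic blocks and edges (a toy traversal, not a full CFG builder):</p>

```python
import ast

source = """
x = 1
if x:
    y = 2
else:
    y = 3
"""

tree = ast.parse(source)
# If/While/For nodes are the branch points; plain statements fall through
for node in ast.walk(tree):
    if isinstance(node, (ast.If, ast.While, ast.For, ast.Assign)):
        print(type(node).__name__, "at line", node.lineno)
```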
| 1 | 2016-09-27T09:09:59Z | [
"python",
"abstract-syntax-tree",
"control-flow-graph"
]
|
Byte array is a valid UTF8 encoded String in Java but not in Python | 39,719,737 | <p>When I run the following in Python 2.7.6, I get an exception:</p>
<pre><code>import base64
some_bytes = b"\x80\x02\x03"
print ("base 64 of the bytes:")
print (base64.b64encode(some_bytes))
try:
print (some_bytes.decode("utf-8"))
except Exception as e:
print(e)
</code></pre>
<p>The output:</p>
<pre><code>base 64 of the bytes:
gAID
'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
</code></pre>
<p>So in Python 2.7.6 the bytes represented as <code>gAID</code> are not a valid UTF8.</p>
<p>When I try it in Java 8 (HotSpot 1.8.0_74), using this code:</p>
<pre><code>java.util.Base64.Decoder decoder = java.util.Base64.getDecoder();
byte[] bytes = decoder.decode("gAID");
Charset charset = Charset.forName("UTF8");
String s = new String(bytes, charset);
</code></pre>
<p>I don't get any exception.</p>
<p>How so? Why is the same byte array valid in Java and invalid in Python, using UTF8 decoding?</p>
| -1 | 2016-09-27T08:27:29Z | 39,719,936 | <blockquote>
<p>So in Python 2.7.6 the bytes represented as gAID are not a valid UTF8.</p>
</blockquote>
<p>This is wrong as you try to decode the <code>Base64</code> encoded bytes.</p>
<pre><code>import base64
some_bytes = b"\x80\x02\x03"
print ("base 64 of the bytes:")
print (base64.b64encode(some_bytes))
# store the Base64 encoded bytes
some_bytes = base64.b64encode(some_bytes)
decoded_bytes = [hex(ord(c)) for c in some_bytes]
print ("decoded bytes: ")
print (decoded_bytes)
try:
print (some_bytes.decode("utf-8"))
except Exception as e:
print(e)
</code></pre>
<p>output</p>
<pre><code>gAID
['0x67', '0x41', '0x49', '0x44']
gAID
</code></pre>
<p>In Java you try to create a <code>String</code> from the Base64 encoded bytes using the UTF-8 charset, which results (as already answered) in the default replacement character <a href="https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character" rel="nofollow">�</a>.</p>
<p>Running the following snippet</p>
<pre><code>java.util.Base64.Decoder decoder = java.util.Base64.getDecoder();
byte[] bytes = decoder.decode("gAID");
System.out.println("base 64 of the bytes:");
for (byte b : bytes) {
System.out.printf("x%02x ", b);
}
System.out.println();
Charset charset = Charset.forName("UTF8");
String s = new String(bytes, charset);
System.out.println(s);
</code></pre>
<p>produces the following output</p>
<pre><code>base 64 of the bytes:
x80 x02 x03
?
</code></pre>
<p>There you can see the same bytes you are using in the Python snippet. In Python they lead to <code>'utf8' codec can't decode byte 0x80 in position 0: invalid start byte</code>; here they lead to a <code>?</code> (which stands for the default replacement character on a non-Unicode console).</p>
<p>The following snippet uses the bytes of the string <code>"gAID"</code> itself to construct a <code>String</code> with the UTF-8 character set.</p>
<pre><code>byte[] bytes = "gAID".getBytes(StandardCharsets.ISO_8859_1);
for (byte b : bytes) {
System.out.printf("x%02x ", b);
}
System.out.println();
Charset charset = Charset.forName("UTF8");
String s = new String(bytes, charset);
System.out.println(s);
</code></pre>
<p>output</p>
<pre><code>x67 x41 x49 x44
gAID
</code></pre>
| -1 | 2016-09-27T08:37:09Z | [
"java",
"python",
"utf-8",
"character-encoding"
]
|
Byte array is a valid UTF8 encoded String in Java but not in Python | 39,719,737 | <p>When I run the following in Python 2.7.6, I get an exception:</p>
<pre><code>import base64
some_bytes = b"\x80\x02\x03"
print ("base 64 of the bytes:")
print (base64.b64encode(some_bytes))
try:
print (some_bytes.decode("utf-8"))
except Exception as e:
print(e)
</code></pre>
<p>The output:</p>
<pre><code>base 64 of the bytes:
gAID
'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
</code></pre>
<p>So in Python 2.7.6 the bytes represented as <code>gAID</code> are not a valid UTF8.</p>
<p>When I try it in Java 8 (HotSpot 1.8.0_74), using this code:</p>
<pre><code>java.util.Base64.Decoder decoder = java.util.Base64.getDecoder();
byte[] bytes = decoder.decode("gAID");
Charset charset = Charset.forName("UTF8");
String s = new String(bytes, charset);
</code></pre>
<p>I don't get any exception.</p>
<p>How so? Why is the same byte array valid in Java and invalid in Python, using UTF8 decoding?</p>
| -1 | 2016-09-27T08:27:29Z | 39,720,372 | <p>It's not valid UTF8. <a href="https://en.wikipedia.org/wiki/UTF-8" rel="nofollow">https://en.wikipedia.org/wiki/UTF-8</a></p>
<p>Bytes between 0x80 and 0xBF cannot be the first byte of a multi-byte character. They can only be the second byte or later.</p>
<p>Java replaces bytes which cannot be decoded with a <code>?</code> rather than throw an exception.</p>
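<p>Python can reproduce Java's replacement behaviour explicitly by passing <code>errors="replace"</code> to <code>decode()</code>:</p>

```python
some_bytes = b"\x80\x02\x03"

# the default, errors="strict", raises -- exactly what the question observed
try:
    some_bytes.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)

# errors="replace" mimics Java's new String(bytes, UTF_8):
# the invalid 0x80 becomes U+FFFD, the Unicode replacement character
print(repr(some_bytes.decode("utf-8", errors="replace")))
```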
| 0 | 2016-09-27T08:58:46Z | [
"java",
"python",
"utf-8",
"character-encoding"
]
|
Byte array is a valid UTF8 encoded String in Java but not in Python | 39,719,737 | <p>When I run the following in Python 2.7.6, I get an exception:</p>
<pre><code>import base64
some_bytes = b"\x80\x02\x03"
print ("base 64 of the bytes:")
print (base64.b64encode(some_bytes))
try:
print (some_bytes.decode("utf-8"))
except Exception as e:
print(e)
</code></pre>
<p>The output:</p>
<pre><code>base 64 of the bytes:
gAID
'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
</code></pre>
<p>So in Python 2.7.6 the bytes represented as <code>gAID</code> are not a valid UTF8.</p>
<p>When I try it in Java 8 (HotSpot 1.8.0_74), using this code:</p>
<pre><code>java.util.Base64.Decoder decoder = java.util.Base64.getDecoder();
byte[] bytes = decoder.decode("gAID");
Charset charset = Charset.forName("UTF8");
String s = new String(bytes, charset);
</code></pre>
<p>I don't get any exception.</p>
<p>How so? Why is the same byte array valid in Java and invalid in Python, using UTF8 decoding?</p>
| -1 | 2016-09-27T08:27:29Z | 39,720,401 | <p>This is because the String constructor in Java just doesn't throw exceptions in the case of invalid characters. See documentation <a href="http://docs.oracle.com/javase/8/docs/api/java/lang/String.html#String-byte:A-java.nio.charset.Charset-" rel="nofollow">here</a></p>
<blockquote>
<p>public String(byte[] bytes, Charset charset)</p>
<p>... This method <strong>always replaces malformed-input and unmappable-character sequences</strong> with this charset's default replacement string. The CharsetDecoder class should be used when more control over the decoding process is required.</p>
</blockquote>
| 0 | 2016-09-27T09:00:18Z | [
"java",
"python",
"utf-8",
"character-encoding"
]
|
Exception in worker process: No module named wsgi | 39,719,839 | <p>I am trying to deploy my Django web app on heroku.
An application error message appears when I try to open it.</p>
<p>This is my log:</p>
<pre><code>2016-09-27T07:56:16.836350+00:00 heroku[web.1]: State changed from crashed to starting
2016-09-27T07:56:21.160909+00:00 heroku[web.1]: Starting process with command `gunicorn myblog.wsgi --log-file -`
2016-09-27T07:56:24.063399+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [3] [INFO] Listening at: http://0.0.0.0:37485 (3)
2016-09-27T07:56:24.062805+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [3] [INFO] Starting gunicorn 19.4.5
2016-09-27T07:56:24.063556+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [3] [INFO] Using worker: sync
2016-09-27T07:56:24.066328+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [9] [INFO] Booting worker with pid: 9
2016-09-27T07:56:24.069171+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [9] [ERROR] Exception in worker process:
2016-09-27T07:56:24.069172+00:00 app[web.1]: Traceback (most recent call last):
2016-09-27T07:56:24.069175+00:00 app[web.1]: worker.init_process()
2016-09-27T07:56:24.069175+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 122, in init_process
2016-09-27T07:56:24.069176+00:00 app[web.1]: self.load_wsgi()
2016-09-27T07:56:24.069174+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
2016-09-27T07:56:24.069177+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 130, in load_wsgi
2016-09-27T07:56:24.069178+00:00 app[web.1]: self.wsgi = self.app.wsgi()
2016-09-27T07:56:24.069179+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
2016-09-27T07:56:24.069179+00:00 app[web.1]: self.callable = self.load()
2016-09-27T07:56:24.069180+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
2016-09-27T07:56:24.069181+00:00 app[web.1]: return self.load_wsgiapp()
2016-09-27T07:56:24.069181+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
2016-09-27T07:56:24.069182+00:00 app[web.1]: return util.import_app(self.app_uri)
2016-09-27T07:56:24.069183+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/util.py", line 357, in import_app
2016-09-27T07:56:24.069183+00:00 app[web.1]: __import__(module)
2016-09-27T07:56:24.069185+00:00 app[web.1]: Traceback (most recent call last):
2016-09-27T07:56:24.069185+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
2016-09-27T07:56:24.069186+00:00 app[web.1]: worker.init_process()
2016-09-27T07:56:24.069184+00:00 app[web.1]: ImportError: No module named wsgi
2016-09-27T07:56:24.069186+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 122, in init_process
2016-09-27T07:56:24.069187+00:00 app[web.1]: self.load_wsgi()
2016-09-27T07:56:24.069188+00:00 app[web.1]: self.wsgi = self.app.wsgi()
2016-09-27T07:56:24.069188+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 130, in load_wsgi
2016-09-27T07:56:24.069189+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
2016-09-27T07:56:24.069189+00:00 app[web.1]: self.callable = self.load()
2016-09-27T07:56:24.069190+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
2016-09-27T07:56:24.069191+00:00 app[web.1]: return self.load_wsgiapp()
2016-09-27T07:56:24.069191+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
2016-09-27T07:56:24.069192+00:00 app[web.1]: return util.import_app(self.app_uri)
2016-09-27T07:56:24.069193+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/util.py", line 357, in import_app
2016-09-27T07:56:24.069193+00:00 app[web.1]: __import__(module)
2016-09-27T07:56:24.069194+00:00 app[web.1]: ImportError: No module named wsgi
2016-09-27T07:56:24.069351+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-09-27T07:56:24.088872+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [3] [INFO] Shutting down: Master
2016-09-27T07:56:24.089020+00:00 app[web.1]: [2016-09-27 07:56:24 +0000] [3] [INFO] Reason: Worker failed to boot.
2016-09-27T07:56:24.210605+00:00 heroku[web.1]: State changed from starting to crashed
2016-09-27T07:56:24.197593+00:00 heroku[web.1]: Process exited with status 3
</code></pre>
<p>the errors I noticed in the log are:</p>
<ul>
<li>[ERROR] Exception in worker process:</li>
<li>ImportError: No module named wsgi</li>
</ul>
<p>My Procfile (my app is called myblog):</p>
<pre><code>web: gunicorn myblog.wsgi --log-file -
</code></pre>
<p>requirements.txt:</p>
<pre><code>Django==1.9.2
argparse==1.2.1
dj-database-url==0.4.0
dj-static==0.0.6
django-toolbelt==0.0.1
gunicorn==19.4.5
psycopg2==2.6.1
static3==0.7.0
whitenoise==2.0.6
wsgiref==0.1.2
</code></pre>
<p>NOTE: I disabled collectstatic using this command:</p>
<pre><code>heroku config:set DISABLE_COLLECTSTATIC=1
</code></pre>
<p>EDIT::---------------------------------------------------</p>
<p>my wsgi.py file:</p>
<pre><code>import os
from django.core.wsgi import get_wsgi_application
from whitenoise.django import DjangoWhiteNoise
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "blog.settings")
application = get_wsgi_application()
application = DjangoWhiteNoise(application)
</code></pre>
<p>my project structure:</p>
<pre><code>Django-Blog
/blog
settings.py
urls.py
wsgi.py
__init__.py
/myblog
manage.py
requirements.txt
procfile
runtime.txt
/venv
</code></pre>
<p>.gitignore file:</p>
<pre><code># Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
myblog/static/myblog/images/
myblog/migrations/
.idea/workspace.xml
</code></pre>
| 0 | 2016-09-27T08:32:09Z | 39,720,549 | <p>Your <code>wsgi.py</code> lives in the <code>blog</code> package (see your project structure), not in <code>myblog</code>, so gunicorn cannot import <code>myblog.wsgi</code>. Change your Procfile to:</p>
<p><code>web: gunicorn blog.wsgi --log-file -</code></p>
| 2 | 2016-09-27T09:06:57Z | [
"python",
"django",
"heroku",
"wsgi"
]
|
PyQt5 QComboBox in QTableWidget | 39,720,036 | <p>I have really tried everything to solve my problem but it doesn't work.
Here is my simple code to put combo boxes in each row of the table. It works for setItem(), which I use to put strings into each row, but it doesn't work with setCellWidget(), which I have to use to put the combo box into the rows. It is as if setCellWidget() deletes the combo box after putting it into the row, because it finally appears only in the very last row, and I don't understand why.
It would be great if one of you could help me out. Many thanks in advance!</p>
<p>Here is the code:</p>
<pre><code>import sys
from PyQt5 import QtWidgets, QtCore
class Window(QtWidgets.QMainWindow):
def __init__(self):
super(Window, self).__init__()
self.setGeometry(50,50,500,500)
self.setWindowTitle('PyQt Tuts')
self.table()
def table(self):
comboBox = QtWidgets.QComboBox()
self.tableWidget = QtWidgets.QTableWidget()
self.tableWidget.setGeometry(QtCore.QRect(220, 100, 411, 392))
self.tableWidget.setColumnCount(2)
self.tableWidget.setRowCount(5)
self.tableWidget.show()
attr = ['one', 'two', 'three', 'four', 'five']
i = 0
for j in attr:
self.tableWidget.setItem(i, 0, QtWidgets.QTableWidgetItem(j))
self.tableWidget.setCellWidget(i, 1, comboBox)
i += 1
def run():
app = QtWidgets.QApplication(sys.argv)
w = Window()
sys.exit(app.exec_())
run()
</code></pre>
| 0 | 2016-09-27T08:42:10Z | 39,720,197 | <p>You create a single combo box, so when you put it into a cell, it is removed from the previous cell.
You must create a combo box for each cell (in the <code>for</code> loop).</p>
<p>Example:</p>
<pre><code>attr = ['one', 'two', 'three', 'four', 'five']
for i, j in enumerate(attr):
    self.tableWidget.setItem(i, 0, QtWidgets.QTableWidgetItem(j))
    comboBox = QtWidgets.QComboBox()  # a new combo box for every row
    self.tableWidget.setCellWidget(i, 1, comboBox)
</code></pre>
| 1 | 2016-09-27T08:50:14Z | [
"python",
"qt",
"pyqt5",
"qtablewidget",
"qcombobox"
]
|
numpy.cross and similar functions: Do they allocate a new array on every call? | 39,720,177 | <p>When I use <code>numpy.cross</code>, it will <em>return</em> an array with the results. There's no way to compute <em>into</em> an existing array. The same holds for other functions.</p>
<ol>
<li>Isn't it extremely inefficient to allocate a new array upon each call?</li>
<li>If so, is there a way to speed it up?</li>
</ol>
| 1 | 2016-09-27T08:49:04Z | 39,721,269 | <p>There is an overhead with the function <code>np.cross</code> as it creates a new NumPy array. You can do <code>x = np.cross(x, y)</code> but it will not suppress the overhead.</p>
<p>If you have a program where this is actually a problem (as diagnosed by profiling the program, for instance), you are better off turning to a specific optimization strategy. <a href="http://cython.org/" rel="nofollow">Cython</a> and <a href="http://numba.pydata.org/" rel="nofollow">Numba</a> come to mind.</p>
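<p>Before reaching for Cython or Numba, it is worth checking that you are batching your calls: <code>np.cross</code> accepts whole arrays, so one vectorized call replaces a Python-level loop of per-vector calls (and the per-call allocations that come with it). A quick sketch:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((1000, 3))
B = rng.random((1000, 3))

# one call: a single output allocation for all 1000 cross products
batched = np.cross(A, B)

# many calls: one small result array allocated per iteration
looped = np.array([np.cross(a, b) for a, b in zip(A, B)])
```

Both give the same numbers; the batched form is where the real speedup lives.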
| 2 | 2016-09-27T09:40:30Z | [
"python",
"numpy"
]
|
numpy.cross and similar functions: Do they allocate a new array on every call? | 39,720,177 | <p>When I use <code>numpy.cross</code>, it will <em>return</em> an array with the results. There's no way to compute <em>into</em> an existing array. The same holds for other functions.</p>
<ol>
<li>Isn't it extremely inefficient to allocate a new array upon each call?</li>
<li>If so, is there a way to speed it up?</li>
</ol>
| 1 | 2016-09-27T08:49:04Z | 39,729,272 | <p>The current version of <code>np.cross</code> (as of 1.9) takes great effort to avoid temporary arrays. Where possible is uses views of the inputs, even when it has to roll the axes. It creates the output array</p>
<pre><code>cp = empty(shape, dtype)
</code></pre>
<p>and then takes care to perform calculation in-place, using the <code>out</code> of <code>multiply</code> and <code>-=</code> kinds of assignment.</p>
<pre><code>multiply(a0, b1, out=cp)
cp -= a1 * b0
</code></pre>
<p>However most of these operations are still buffered. That is <code>a1*b0</code> writes to a temporary buffer array, which is then subtracted from <code>cp</code>. </p>
<p>Usually we don't worry about those temporary arrays. We let the developers worry about efficiency and reliability. Handling temporary buffers is compiled code's responsibility, not ours.</p>
<p>The unbuffered <code>add.at</code> documentation gives some insight into the use of buffering or not. This unbuffered <code>.at</code> method is used for certain serial operations that the regular buffered versions can't handle. But it is not meant as a way of speeding up code.</p>
<p>It sounds like you want <code>np.cross</code> to take an <code>out</code> parameter, thinking that if you can use</p>
<pre><code>cp = np.empty(rightsize)
for a, b in zip(A, B):
    np.cross(a, b, out=cp)
<use cp>
</code></pre>
<p>it will be a lot faster than </p>
<pre><code>for a, b in zip(A, B):
    cp = np.cross(a, b)
<use cp>
</code></pre>
<p>I doubt if that would help. In the big picture the <code>cp=np.empty(...)</code> is a minor time consumer. </p>
<p>But lets do a time test with <code>np.multiply</code> which does take an <code>out</code>:</p>
<pre><code>In [18]: x = np.ones((1000,1000))
In [19]: %%timeit
...: y = np.multiply(x,x)
...:
100 loops, best of 3: 12.4 ms per loop
In [20]: %%timeit y = np.empty(x.shape)
...: np.multiply(x,x, out=y)
...:
100 loops, best of 3: 6.48 ms per loop
</code></pre>
<p>OK, taking the allocation out of the timing loop does cut the time in half.</p>
<p>But if you are repeatedly calling <code>np.cross</code> (or some other function like it), I think you should worry more about the number of repeats than details like array reuse.</p>
<pre><code>np.cross(np.ones((N,3)), np.ones((N,3)))
</code></pre>
<p>is considerably faster than</p>
<pre><code>for i in range(N):
np.cross(np.ones(3), np.ones(3))
</code></pre>
<p>But it would be easy to make a copy of <code>np.cross</code> (it's pure Python), and modify it to take an <code>out</code>. Try it and see if it makes a difference. As long as you use a correctly sized <code>cp</code> it should work. You'd have to decide whether to bypass the <code>shape</code> and <code>dtype</code> checks that precede the <code>cp=empty...</code> line.</p>
| 1 | 2016-09-27T15:58:31Z | [
"python",
"numpy"
]
|
Meaning of the "@package" decorator in Python | 39,720,191 | <p>I'm trying to understand a class method definition which reads similar to the following:</p>
<pre><code>@package(type='accounts', source='system')
def get(self, [other arguments]):
[function body]
</code></pre>
<p>What is the meaning of the <code>@package</code> decorator? I was unable to find documentation on this.</p>
| -1 | 2016-09-27T08:49:55Z | 39,720,308 | <p>There is no default <code>package</code> decorator in the Python standard library.</p>
<p>A decorator is <em>just a simple expression</em>; there will be a <code>package()</code> callable in the same module (either defined there as a function or class, or imported from another module).</p>
<p>The line <code>@package(type='accounts', source='system')</code> executes the expression <code>package(type='accounts', source='system')</code>, and the return value of that is used to decorate the <code>get()</code> function. You could read it as:</p>
<pre><code>def get(self, [other arguments]):
[function body]
get = package(type='accounts', source='system')(get)
</code></pre>
<p>except the name <code>get</code> is set just once.</p>
<p>For example, <code>package</code> could be defined as:</p>
<pre><code>def package(type='foo', source='bar'):
def decorator(func):
def wrapper(*args, **kwargs):
# do something with type and source
return func(*args, **kwargs)
return wrapper
return decorator
</code></pre>
<p>so <code>package()</code> returns <code>decorator()</code>, which in turn returns <code>wrapper()</code>; <code>package()</code> is a <em>decorator factory</em>, producing the actual decorator.</p>
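<p>To make the factory behaviour concrete, here is a runnable sketch — the body is invented for illustration, and the real <code>package</code> in your codebase almost certainly does something else with <code>type</code> and <code>source</code>:</p>

```python
def package(type='foo', source='bar'):
    def decorator(func):
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # the factory arguments stay captured in the closure; here they are
        # simply attached to the wrapped function so we can inspect them
        wrapper.meta = (type, source)
        return wrapper
    return decorator

@package(type='accounts', source='system')
def get(x):
    return x * 2

result = get(21)   # the wrapper delegates to the original function
meta = get.meta    # the arguments the factory was called with
```
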
| 3 | 2016-09-27T08:55:32Z | [
"python"
]
|
Can this chained comparison really be simplified like PyCharm claims? | 39,720,220 | <p>I have a class with two integer attributes, <code>_xp</code> and <code>level</code>. I have a <code>while</code> loop which compares these two to make sure they're both positive:</p>
<pre><code>while self.level > 0 and self._xp < 0:
self.level -= 1
self._xp += self.get_xp_quota()
</code></pre>
<p>My PyCharm claims this can be simplified:</p>
<p><a href="http://i.stack.imgur.com/iLOGd.png" rel="nofollow"><img src="http://i.stack.imgur.com/iLOGd.png" alt="Simplify chained comparison"></a></p>
<p>Can it really? I want to make sure before reporting a bug to PyCharm.</p>
<p>I also found <a href="http://stackoverflow.com/questions/26502775/pycharm-simplify-chained-comparison">a similar question</a> but in that case the two variables were the same, mine has two different attributes.</p>
| 0 | 2016-09-27T08:51:28Z | 39,720,359 | <p>IIRC, you could rewrite this as:</p>
<pre><code>while self._xp < 0 < self.level:
self.level -= 1
self._xp += self.get_xp_quota()
</code></pre>
<p>as per your reference above. It doesn't really matter that there are two different attributes rather than the same variable; ultimately you are simply comparing the values of each.</p>
<p>Let me know if that works.</p>
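<p>A quick way to convince yourself: Python evaluates <code>a < b < c</code> as <code>a < b and b < c</code> (with <code>b</code> evaluated once), so the chained form and the original condition agree for every sign combination:</p>

```python
# exhaustively compare the chained form against the original expression
matches = []
for level in (-1, 0, 1):
    for xp in (-1, 0, 1):
        chained = xp < 0 < level           # PyCharm's suggested form
        original = level > 0 and xp < 0    # the condition as originally written
        matches.append(chained == original)
```
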
| 3 | 2016-09-27T08:58:12Z | [
"python",
"python-3.x",
"comparison",
"pycharm"
]
|
Python--Sending an email with an attachment | 39,720,248 | <p>I am working on a big project that includes a database for remembering users. I'll skip the details, but my client wants me to include a function by which he can back up all the user data and other files.</p>
<p>I was thinking of an email (since the project is an Android app), and I was trying to figure out how you could send an attachment (i.e. a .db sqlite3 file) in an email. I know there are a lot of similar questions around, e.g. <a href="http://stackoverflow.com/questions/3362600/how-to-send-email-attachments-with-pytho">here</a>, but all of the answers to <a href="http://stackoverflow.com/questions/3362600/how-to-send-email-attachments-with-pytho">this</a> question give me an error. Here is the closest that I got:</p>
<p>This program sends a email without a attachment:</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
boodskap = MIMEText("Toekomsweb Epos toets", 'plain')
van_adres = "from adres"
na_adres = "to adres"
epos_liggaam = MIMEMultipart('alternatief')
epos_liggaam['Subject'] = "Toets"
epos_liggaam['From'] = van_adres
epos_liggaam['To'] = na_adres
epos_liggaam.attach(boodskap)
mail = smtplib.SMTP('smtp.gmail.com',587)
mail.ehlo()
mail.starttls()
mail.login(van_adres,'PASSWORD')
mail.sendmail(van_adres,na_adres,epos_liggaam.as_string())
mail.close()
print("succes!")
</code></pre>
<p>please excuse my poor variable naming, its not in english.</p>
<p>Any help on sending an attachment?</p>
<p>Thanks!</p>
| 0 | 2016-09-27T08:52:37Z | 39,721,062 | <p>Hi, this is the code I used... it turns out that Ubuntu uses a different way of sending an email than Windows.</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
import os
boodskap = MIMEText("Toekomsweb Epos toets", 'plain')
van_adres = 'From adres'
na_adres = 'To adres'
epos_liggaam = MIMEMultipart('alternatief')
epos_liggaam['Subject'] = "Toets"
epos_liggaam['From'] = van_adres
epos_liggaam['To'] = na_adres
epos_liggaam.attach(boodskap)
f = "toets.db"
part = MIMEBase('application', "octet-stream")
part.set_payload( open(f,"rb").read() )
encoders.encode_base64(part)
part.add_header('Content-Disposition', 'attachment; filename="{0}"'.format(os.path.basename(f)))
epos_liggaam.attach(part)
mail = smtplib.SMTP('smtp.gmail.com',587)
mail.ehlo()
mail.starttls()
mail.login(van_adres,'PASSWORD')
mail.sendmail(van_adres,na_adres,epos_liggaam.as_string())
mail.close()
print("succes!")
</code></pre>
<p>this answer was adapted from <a href="http://stackoverflow.com/questions/3362600/how-to-send-email-attachments-with-python">here</a> (second answer)</p>
<p>hope that this answer will answer other people's question aswell</p>
| 0 | 2016-09-27T09:30:49Z | [
"python",
"python-2.7",
"email",
"smtplib"
]
|
python read multiple serial ports | 39,720,315 | <p>I am trying to read from multiple serial ports in python. But contrary to <a href="http://stackoverflow.com/questions/27484250/python-pyserial-read-data-form-multiple-serial-ports-at-same-time">this</a> thread I want to be able to change the number of ports dynamically (reading it via command line option).</p>
<p>My idea was to put the ports into a file "ports", read this file and put the opened serial ports into a list, according to the number of lines in "ports". My minimal example:</p>
<pre><code>import numpy as np
import serial
p = np.genfromtxt('ports',delimiter=',',dtype=None)
nser = p.size
ser = [serial.Serial(port=p[i][0], baudrate=p[i][1]) for i in xrange(nser)]
</code></pre>
<p>"ports" looks the following (at the moment):</p>
<pre><code>'/dev/ttyUSB0',4800
</code></pre>
<p>The error: </p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: 0-d arrays can't be indexed
</code></pre>
<p>Apparently the file is not correctly read to an array, and I already tried various different methods and ways (using pythons own methods or np.loadtxt).</p>
<p>Does anybody have an idea how to a) read the file correctly and b) solve the multiple port issue in a useful way? Thanks in advance.</p>
| 0 | 2016-09-27T08:55:47Z | 39,723,725 | <p>I didn't quite understand what you are trying to do, but if
I had a file like:</p>
<pre><code>'/dev/ttyUSB0',4800
'/dev/ttyUSB1',4801,'/dev/ttyUSB3',4803
</code></pre>
<p>and want to read it and store as a list, a way to go would be:</p>
<pre><code>with open('ports.txt') as f:
lines = f.read().replace('\n', ',')
print lines
</code></pre>
<p>which will give you:</p>
<pre><code>>>> lines
'/dev/ttyUSB0',4800,'/dev/ttyUSB1',4801,'/dev/ttyUSB3',4803
</code></pre>
<p>and if you want to split the integers, you could do:</p>
<pre><code>>>> l1 = [lines.pop(i) for i,j in enumerate(lines) if type(j)==int ]
>>> l1
[4800, 4801, 4803]
>>> lines
['/dev/ttyUSB0', '/dev/ttyUSB1', '/dev/ttyUSB3']
</code></pre>
<p>Now because you said that 'np.loadtxt' didn't work, a way to convert a python list to a numpy-array is:</p>
<pre><code>>>> lines = ['/dev/ttyUSB0',4800,'/dev/ttyUSB1',4801,'/dev/ttyUSB3',4803]
>>>
>>> import numpy as np
>>> np.asarray(lines)
array(['/dev/ttyUSB0', '4800', '/dev/ttyUSB1', '4801', '/dev/ttyUSB3',
'4803'],
dtype='|S12')
</code></pre>
<p>But again I am not sure If that is what you are looking for. </p>
| 1 | 2016-09-27T11:39:50Z | [
"python",
"serial-port",
"readfile"
]
|
python read multiple serial ports | 39,720,315 | <p>I am trying to read from multiple serial ports in python. But contrary to <a href="http://stackoverflow.com/questions/27484250/python-pyserial-read-data-form-multiple-serial-ports-at-same-time">this</a> thread I want to be able to change the number of ports dynamically (reading it via command line option).</p>
<p>My idea was to put the ports into a file "ports", read this file and put the opened serial ports into a list, according to the number of lines in "ports". My minimal example:</p>
<pre><code>import numpy as np
import serial
p = np.genfromtxt('ports',delimiter=',',dtype=None)
nser = p.size
ser = [serial.Serial(port=p[i][0], baudrate=p[i][1]) for i in xrange(nser)]
</code></pre>
<p>"ports" looks the following (at the moment):</p>
<pre><code>'/dev/ttyUSB0',4800
</code></pre>
<p>The error: </p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: 0-d arrays can't be indexed
</code></pre>
<p>Apparently the file is not correctly read to an array, and I already tried various different methods and ways (using pythons own methods or np.loadtxt).</p>
<p>Does anybody have an idea how to a) read the file correctly and b) solve the multiple port issue in a useful way? Thanks in advance.</p>
| 0 | 2016-09-27T08:55:47Z | 39,724,823 | <p>Your config file format is very simple and can easily be parsed without numpy. You can use simple string splitting to load each port definition.</p>
<pre><code>serial_ports = []
with open('ports') as f:
for line in f:
port, baud = line.split(',')
serial_ports.append(serial.Serial(port, int(baud)))
</code></pre>
<p>Or you could use the <a href="https://docs.python.org/3/library/csv.html#module-csv" rel="nofollow"><code>csv</code></a> module:</p>
<pre><code>import csv
with open('ports') as f:
serial_ports = [serial.Serial(port, int(baud)) for port, baud in csv.reader(f)]
</code></pre>
<hr>
<p>The second part of your question is more difficult because you haven't provided many details about how the serial port readers will process the data received over the ports.</p>
<p>If the application is I/O bound, which is most likely the case, you can asynchronously check when a serial port has some data to read, then read it as required. That can be done with the <a href="https://docs.python.org/3/library/select.html#module-select" rel="nofollow"><code>select()</code></a> module, or if you're using Python >= 3.4, the <a href="https://docs.python.org/3/library/selectors.html#module-selectors" rel="nofollow"><code>selectors</code></a> module. You do not require multiple processes to do this.</p>
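<p>As a sketch of that single-process approach — using <code>os.pipe</code> descriptors as stand-ins so it runs anywhere; on POSIX, pyserial's <code>Serial</code> objects expose <code>fileno()</code> and can be registered with a selector the same way:</p>

```python
import os
import selectors

sel = selectors.DefaultSelector()
r1, w1 = os.pipe()   # stand-in for serial port 1
r2, w2 = os.pipe()   # stand-in for serial port 2
sel.register(r1, selectors.EVENT_READ, data="port-1")
sel.register(r2, selectors.EVENT_READ, data="port-2")

os.write(w2, b"hello")   # only "port-2" now has data pending

# select() hands back just the descriptors that are ready to read,
# so one loop can service any number of ports without blocking on each
received = {key.data: os.read(key.fileobj, 64)
            for key, _ in sel.select(timeout=1.0)}
```
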
<p>If the application is CPU bound then you could use <code>multiprocessing.Process()</code> or <code>subprocess.Popen()</code>. Instead of opening the serial ports in the parent, pass the serial port parameters to the child as arguments/command line arguments to the child function/process and let the child open the port, process the data, and close the port.</p>
<p><strong>N.B. Untested - don't know if this will work with a serial port</strong>. If you must open the ports in the parent, hook the stdin of the subprocess up to the serial port. You'll need to be careful with this as it's easy to deadlock processes where the parent and child are mutually blocked on each other.</p>
<pre><code>from subprocess import Popen, PIPE
s = serial.Serial(port, baud)
p = Popen(['python', 'port_reader.py'], stdin=s, stdout=PIPE, stderr=PIPE)
p.communicate()
</code></pre>
<p>If using <code>multiprocessing</code> you can pass the open serial port to the child as an argument. This might work... ?</p>
<pre><code>from multiprocessing import Process
def child(port):
while True:
line = port.readline()
if not line:
break
print('child(): read line: {!r}'.format(line))
port = serial.Serial(port, baud)
p = Process(target=child, args=(port,))
p.start()
p.join()
</code></pre>
| 2 | 2016-09-27T12:34:36Z | [
"python",
"serial-port",
"readfile"
]
|
Pandas `read_json` function converts strings to DateTime objects even when the `convert_dates=False` attribute is specified | 39,720,332 | <p>I have the following JSON:</p>
<pre><code>[{
"2016-08": 1355,
"2016-09": 2799,
"2016-10": 2432,
"2016-11": 0
}, {
"2016-08": 1475,
"2016-09": 1968,
"2016-10": 1375,
"2016-11": 0
}, {
"2016-08": 3097,
"2016-09": 1244,
"2016-10": 2339,
"2016-11": 0
}, {
"2016-08": 1305,
"2016-09": 1625,
"2016-10": 3038,
"2016-11": 0
}, {
"2016-08": 1530,
"2016-09": 4385,
"2016-10": 2369,
"2016-11": 0
}, {
"2016-08": 3515,
"2016-09": 4532,
"2016-10": 2497,
"2016-11": 0
}, {
"2016-08": 1539,
"2016-09": 1276,
"2016-10": 4378,
"2016-11": 0
}, {
"2016-08": 4989,
"2016-09": 3143,
"2016-10": 2075,
"2016-11": 0
}, {
"2016-08": 3357,
"2016-09": 2745,
"2016-10": 1592,
"2016-11": 0
}, {
"2016-08": 3224,
"2016-09": 2694,
"2016-10": 3958,
"2016-11": 0
}]
</code></pre>
<p>When I call <code>pandas.read_json(JSON, convert_dates=False)</code> I get the following result:</p>
<p><a href="http://i.stack.imgur.com/zj5eo.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/zj5eo.jpg" alt="enter image description here"></a></p>
<p>As you can see, all columns have been converted automatically. What am I doing wrong?</p>
<p>I've been using python3.5 and pandas 0.18.1</p>
| 1 | 2016-09-27T08:56:53Z | 39,720,414 | <p>You need parameter <code>convert_axes=False</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html" rel="nofollow"><code>read_json</code></a>:</p>
<pre><code>df = pd.read_json('file.json', convert_axes=False)
print (df)
2016-08 2016-09 2016-10 2016-11
0 1355 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
<p><code>convert_dates=False</code> works if value is not converted to <code>index</code> or <code>columns</code>:</p>
<pre><code>[{
"2016-08": "2016-08",
"2016-09": 2799,
"2016-10": 2432,
"2016-11": 0
}, {
"2016-08": 1475,
"2016-09": 1968,
"2016-10": 1375,
"2016-11": 0
},
...
...
#1355 changed to '2016-08'
df = pd.read_json('file.json', convert_dates=False)
print (df)
2016-08-01 2016-09-01 2016-10-01 2016-11-01
0 2016-08 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
<p>If use both parameters:</p>
<pre><code>df = pd.read_json('file.json', convert_dates=False, convert_axes=False)
print (df)
2016-08 2016-09 2016-10 2016-11
0 2016-08 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
| 2 | 2016-09-27T09:00:58Z | [
"python",
"json",
"python-3.x",
"pandas",
"jupyter-notebook"
]
|
How to make screenshot while showing video from cam? | 39,720,344 | <pre><code>#Importing necessary libraries, mainly the OpenCV, and PyQt libraries
import cv2
import numpy as np
import sys
from PyQt5 import QtCore
from PyQt5 import QtWidgets
from PyQt5 import QtGui
from PyQt5.QtCore import pyqtSignal
class ShowVideo(QtCore.QObject):
#initiating the built in camera
camera_port = -1
camera = cv2.VideoCapture(camera_port)
VideoSignal = QtCore.pyqtSignal(QtGui.QImage)
def __init__(self, parent = None):
super(ShowVideo, self).__init__(parent)
@QtCore.pyqtSlot()
def startVideo(self):
run_video = True
while run_video:
ret, image = self.camera.read()
color_swapped_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
height, width, _ = color_swapped_image.shape
qt_image = QtGui.QImage(color_swapped_image.data,
width,
height,
color_swapped_image.strides[0],
QtGui.QImage.Format_RGB888)
pixmap = QtGui.QPixmap(qt_image)
qt_image = pixmap.scaled(640, 480, QtCore.Qt.KeepAspectRatio)
qt_image = QtGui.QImage(qt_image)
self.VideoSignal.emit(qt_image)
@QtCore.pyqtSlot()
def makeScreenshot(self):
#cv2.imwrite("test.jpg", self.image)
print("Screenshot saved")
#self.qt_image.save('test.jpg')
class ImageViewer(QtWidgets.QWidget):
def __init__(self, parent = None):
super(ImageViewer, self).__init__(parent)
self.image = QtGui.QImage()
self.setAttribute(QtCore.Qt.WA_OpaquePaintEvent)
def paintEvent(self, event):
painter = QtGui.QPainter(self)
painter.drawImage(0,0, self.image)
self.image = QtGui.QImage()
def initUI(self):
self.setWindowTitle('Test')
@QtCore.pyqtSlot(QtGui.QImage)
def setImage(self, image):
if image.isNull():
print("viewer dropped frame!")
self.image = image
if image.size() != self.size():
self.setFixedSize(image.size())
self.update()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
thread = QtCore.QThread()
thread.start()
vid = ShowVideo()
vid.moveToThread(thread)
image_viewer = ImageViewer()
#image_viewer.resize(200,400)
vid.VideoSignal.connect(image_viewer.setImage)
#Button to start the videocapture:
push_button = QtWidgets.QPushButton('Start')
push_button.clicked.connect(vid.startVideo)
push_button2 = QtWidgets.QPushButton('Screenshot')
push_button2.clicked.connect(vid.makeScreenshot)
vertical_layout = QtWidgets.QVBoxLayout()
vertical_layout.addWidget(image_viewer)
vertical_layout.addWidget(push_button)
vertical_layout.addWidget(push_button2)
layout_widget = QtWidgets.QWidget()
layout_widget.setLayout(vertical_layout)
main_window = QtWidgets.QMainWindow()
main_window.setCentralWidget(layout_widget)
main_window.resize(640,480)
main_window.show()
sys.exit(app.exec_())
</code></pre>
<p>This code shows video from the camera in an endless loop using OpenCV and PyQt5. But how can I take a screenshot without stopping the video? I think the loop needs to be paused briefly, the screenshot taken, and then the loop resumed.</p>
| 0 | 2016-09-27T08:57:22Z | 39,737,798 | <p>You can use cv2.waitKey() for the same, as shown below:</p>
<pre><code>while run_video:
ret, image = self.camera.read()
if(cv2.waitKey(10) & 0xFF == ord('s')):
cv2.imwrite("screenshot.jpg",image)
</code></pre>
<p>(I'm guessing that by the term "screenshot", you mean the camera frame, and not the image of the entire screen.)
When you press 's' on the keyboard, it'll perform imwrite.
Note that if you wish to save multiple images, you'd have to vary the filename. The above code will overwrite screenshot.jpg to save only the latest frame. </p>
| 1 | 2016-09-28T03:58:00Z | [
"python",
"opencv",
"pyqt5"
]
|
How to disable syntax warning for the latest sublime text 3? | 39,720,413 | <p>I have updated the latest Sublime Text 3 build (3124), but when I save the changes of python file, the warning will shown as below:</p>
<p><a href="http://i.stack.imgur.com/zsi92.png" rel="nofollow"><img src="http://i.stack.imgur.com/zsi92.png" alt="enter image description here"></a></p>
<p>I installed the <a href="https://packagecontrol.io/packages/Python%20Flake8%20Lint" rel="nofollow">Python Flake8 Lint</a> package, but the warning is never shown for old sublime version.</p>
<p>How to disable it?</p>
| 0 | 2016-09-27T09:00:52Z | 39,734,142 | <p>Disable the SublimeLinter <code>show_errors_on_save</code> setting.</p>
<p><code>Menu > Preferences > Package Settings > SublimeLinter > Setting - User</code></p>
<blockquote>
<p>Packages/User/SublimeLinter.sublime-settings</p>
</blockquote>
<pre><code>{
// This setting determines if a Quick Panel with all errors is
// displayed when a file is saved. The default value is false.
"show_errors_on_save": false
}
</code></pre>
<p>Or use the Command Palette to disable it.</p>
<ol>
<li>Open the Command Palette: <kbd>Ctrl+Shift+P</kbd></li>
<li>Select: <code>SublimeLinter: Don't Show Errors on Save</code></li>
<li>Done!</li>
</ol>
| 1 | 2016-09-27T20:53:59Z | [
"python",
"sublimetext3"
]
|
Pandas DatetimeIndex strange behaviour | 39,720,641 | <p>I handle a DataFrame, which index is string, year-month, for example:</p>
<pre><code>index = ['2007-01', '2007-03', ...]
</code></pre>
<p>however, the index is not full. e.g. <code>2007-02</code> is missing.
What I want is to reindex the DataFrame with full index.</p>
<p>What I have tried:</p>
<pre><code>In [60]: pd.DatetimeIndex(start='2007-01', end='2007-12', freq='M')
Out[60]:
DatetimeIndex(['2007-01-31', '2007-02-28', '2007-03-31', '2007-04-30',
'2007-05-31', '2007-06-30', '2007-07-31', '2007-08-31',
'2007-09-30', '2007-10-31', '2007-11-30'],
dtype='datetime64[ns]', freq='M')
</code></pre>
<p>The index is every month's ends.</p>
<pre><code>In [64]: pd.DatetimeIndex(['2007-01', '2007-03', '2007-04', '2007-05'])
Out[64]: DatetimeIndex(['2007-01-01', '2007-03-01', '2007-04-01', '2007-05-01'], dtype='datetime64[ns]', freq=None)
</code></pre>
<p>The index is every month's start.</p>
<p>How to handle this problem?</p>
| 1 | 2016-09-27T09:11:16Z | 39,720,677 | <p>I think you need to add the parameter <code>freq='MS'</code> if you need a frequency of the first day of each month:</p>
<pre><code>print (pd.DatetimeIndex(start='2007-01', end='2007-12', freq='MS'))
DatetimeIndex(['2007-01-01', '2007-02-01', '2007-03-01', '2007-04-01',
'2007-05-01', '2007-06-01', '2007-07-01', '2007-08-01',
'2007-09-01', '2007-10-01', '2007-11-01', '2007-12-01'],
dtype='datetime64[ns]', freq='MS')
</code></pre>
<p>Link to <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases" rel="nofollow">Offset Aliases in pandas documentation</a>, thank you <a href="http://stackoverflow.com/questions/39720641/pandas-datetimeindex-strange-behaviour/39720677#comment66738837_39720641">EdChum</a>.</p>
<p>Another solution is to use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#periodindex-and-period-range" rel="nofollow"><code>PeriodIndex</code></a> to generate month periods:</p>
<pre><code>print (pd.PeriodIndex(start='2007-01', end='2007-12', freq='M'))
PeriodIndex(['2007-01', '2007-02', '2007-03', '2007-04', '2007-05', '2007-06',
'2007-07', '2007-08', '2007-09', '2007-10', '2007-11', '2007-12'],
dtype='int64', freq='M')
</code></pre>
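<p>For the original goal of aligning the sparse frame against the full index, the generated index can be passed to <code>reindex</code>. A minimal sketch (the column name <code>val</code> is assumed; <code>pd.date_range</code> produces the same month-start index as the <code>DatetimeIndex</code> call above):</p>

```python
import pandas as pd

# sparse monthly data with '2007-02' missing, as in the question
df = pd.DataFrame({'val': [1, 3]}, index=['2007-01', '2007-03'])

# build the complete month-start index and align the frame to it;
# the missing month appears as a NaN row
full = pd.date_range(start='2007-01-01', end='2007-03-01', freq='MS')
df.index = pd.to_datetime(df.index)
df = df.reindex(full)
print(df)
```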
| 2 | 2016-09-27T09:12:49Z | [
"python",
"pandas",
"date-range",
"period",
"datetimeindex"
]
|
Laplace transform initial conditions | 39,720,658 | <p>How to define initial conditions for Laplace transform in Sympy?
For example:</p>
<pre><code>t,s = symbols('t s')
x = Function('x')(t)
laplace_transform(diff(x, t), t, s, cond=(x(0) = 1))
</code></pre>
<p>So the output would be:</p>
<pre><code>s*L(x) - 1
</code></pre>
| 1 | 2016-09-27T09:11:58Z | 39,753,631 | <p>Laplace transforms of undefined functions are not yet implemented. There is an <a href="https://github.com/sympy/sympy/issues/7219" rel="nofollow">issue</a> tracking this. Perhaps the simple <a href="https://github.com/sympy/sympy/issues/7219#issuecomment-154768904" rel="nofollow">workaround implementation</a> in that issue will work for you for now. </p>
| 0 | 2016-09-28T17:03:48Z | [
"python",
"sympy"
]
|
How to color a 3d grayscale image in python | 39,720,735 | <p>I want to color a pixel in 3d</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
im = np.random.randint(0, 255, (16, 16))
I = np.dstack([im, im, im])
x = 5
y = 5
I[x, y, :] = [1, 0, 0]
plt.imshow(I, interpolation='nearest' )
plt.imshow(im, interpolation='nearest', cmap='Greys')
</code></pre>
<p>This code is for 2d but instead of the coordiantes i want to give the value of the grayscale pixel in 3d that i want to change.</p>
| 1 | 2016-09-27T09:15:31Z | 39,722,684 | <pre><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(4)
im = np.random.randint(0, 255, (16, 16))
I = np.dstack([im, im, im])
I[np.all(I == 15, axis=2)] = [0, 1, 0]  # recolor every pixel whose grayscale value is 15
plt.figure()
plt.imshow(I, interpolation='nearest' )
plt.figure()
plt.imshow(im, interpolation='nearest', cmap='Greys')
plt.show()
</code></pre>
| 0 | 2016-09-27T10:48:54Z | [
"python",
"image",
"image-processing",
"grayscale"
]
|
Why is my VotingClassifier accuracy less than my individual classifier? | 39,720,836 | <p>I am trying to create an ensemble of three classifiers (Random Forest, Support Vector Machine and XGBoost) using the VotingClassifier() in scikit-learn. However, I find that the accuracy of the ensemble actually decreases instead of increasing. I can't figure out why. </p>
<p>Here is the code:</p>
<pre><code>from sklearn.ensemble import VotingClassifier
eclf = VotingClassifier(estimators=[('rf', rf_optimized), ('svc', svc_optimized), ('xgb', xgb_optimized)],
voting='soft', weights=[1,1,2])
for clf, label in zip([rf, svc_optimized, xgb_optimized, eclf], ['Random Forest', 'Support Vector Machine', 'XGBoost', 'Ensemble']):
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
print("Accuracy: %0.3f (+/- %0.3f) [%s]" % (scores.mean(), scores.std(), label))
</code></pre>
<p>The XGBoost has the highest accuracy so I even tried giving it more weightage to no avail. </p>
<p>What could I be doing wrong?</p>
| 3 | 2016-09-27T09:20:19Z | 39,733,613 | <p>VotingClassifiers are not always guaranteed to have better performance, especially when using soft voting if you have poorly calibrated base models. </p>
<p>For a contrived example, say all of the models are really wrong when they are wrong (say give a probability of .99 for the incorrect class) but are only slightly right when they are right (say give a probability of .51 for the correct class). Furthermore, say 'rf' and 'svc' are always right when 'xgb' is wrong and vice versa and each classifier has an accuracy of 50% on its own. </p>
<p>The voting classifier that you implement would have an accuracy of 0% since you are using soft voting. Here is why:</p>
<ol>
<li>Case 1: 'xgb' right. Then it gives a probability of .51 to the correct class and gets a weight of 2, for a score of 1.02. However, the other models each give a probability of .99 for the incorrect class for a score of 1.98. That class gets chosen by your voting classifier. </li>
<li>Case 2: 'xgb' is wrong. Then it gives a probability of .99 to the incorrect class with a weight of 2 for a score of 1.98. The other two models give a combined score of 1.02 for the correct class. Again, the wrong class is chosen by your classifier. </li>
</ol>
| 4 | 2016-09-27T20:19:52Z | [
"python",
"machine-learning",
"scikit-learn",
"xgboost",
"ensemble-learning"
]
|
Load csv file based on selection | 39,720,841 | <p>Before I put my question here I've looked everywhere and tried everything to find a solution but couldn't find one.</p>
<p>I've build a simple GUI in Tkinter. The goal is to first make a selection based on choises in a dropdown box.
In my example the subject is "Tennis".</p>
<p>My goal is that the user can first select the "surface" on which the tennis match is played on. We have 3 choises. Hard Court, Clay Court or Grass Court.</p>
<p>Based on the made selection I want a different csv file to be loaded.</p>
<ul>
<li>The Hard Court file is called "match_stats_atp_.csv" </li>
<li>The Clay Court file is called "match_stats_atp_1.csv" </li>
<li>The Grass Court file is called "match_stats_atp_10.csv"</li>
</ul>
<p>Each csv file holds data based on the specifiek tennis surface.</p>
<p>My question now is, how do I load the csv file based on the made selection in the fist dropdown box?
The csv files are in the same directory as my script.</p>
<pre><code>from Tkinter import *
import ttk
import csv
master = Tk()
master.option_add("*Font", "{Bodoni MT} 8")
content = ttk.Frame(master, padding=(12, 12, 12, 12))
frame = ttk.Frame(content, borderwidth=5, relief="groove", width=300, height=100)
content.grid(column=0, row=0, sticky=(N, S, E, W))
frame.grid(column=0, row=0, columnspan=4, rowspan=2, sticky=(N, S, E, W))
text = Text(content, height=8, width=13)
text.grid(row=0, column=4, columnspan=2, rowspan=2, sticky=(N, S, E, W))
v1player1 = StringVar()
v2player1 = StringVar()
c_player1 = Label(frame, text="Service Points Win %:").grid(row=6, column=1, sticky='w')
cc_player1 = Entry(frame, text="value", textvariable=v1player1, justify='center', width=10).grid(row=6, column=2)
d_player1 = Label(frame, text="Return Points Win %: ").grid(row=7, column=1, sticky='w')
dd_player1 = Entry(frame, text="value", textvariable=v2player1, justify='center', width=10).grid(row=7, column=2)
def new_selection_surface(event):
return surface()
surface_types = ['ATP Hard Court', 'ATP Clay Court', 'ATP Grass Court']
box_value_surface = StringVar()
box = ttk.Combobox(frame, textvariable=box_value_surface, justify='center')
box.bind("<<ComboboxSelected>>", new_selection_surface)
box['values'] = surface_types
box.current()
box.grid(column=2, row=1, pady=10, padx=15)
def surface():
if 'ATP Hard Court' in box_value_surface.get():
return 'match_stats_atp_.csv'
else:
if 'ATP Clay Court' in box_value_surface.get():
return 'match_stats_atp_1.csv'
else:
if 'ATP Grass Court' in box_value_surface.get():
return 'match_stats_atp_10.csv'
f = open('CSV FILE BASED ON SELECTION 1st DROPDOWN BOX')
csv_f = csv.reader(f)
players_names = []
for row in csv_f:
players_names.append(row[2])
def new_selection_p1(event):
return player1()
box_value_p1 = StringVar()
box = ttk.Combobox(frame, textvariable=box_value_p1, justify='center')
box.bind("<<ComboboxSelected>>", new_selection_p1)
box['values'] = players_names
box.current()
box.grid(column=2, row=2, pady=10, padx=15)
def player1():
with open('CSV FILE BASED ON SELECTION 1st DROPDOWN BOX') as csvfile:
read_csv = csv.reader(csvfile, delimiter=',')
service_points_wins = []
return_points_wins = []
names = []
for row in read_csv:
name = row[2]
services_point = row[10]
returns_point = row[4]
service_points_wins.append(services_point)
return_points_wins.append(returns_point)
names.append(name)
what_name = (box_value_p1.get())
name_dex = names.index(what_name)
service_points_wins = service_points_wins[name_dex]
points_services_point = return_points_wins[name_dex]
v1player1.set(service_points_wins.replace("%", ''))
v2player1.set(points_services_point.replace("%", ''))
run = ttk.Button(content, text='Run')
run.grid(column=4, row=3)
cancel = ttk.Button(content, text="Cancel", command=master.destroy)
cancel.grid(column=5, row=3)
master.columnconfigure(0, weight=1)
master.rowconfigure(0, weight=1)
content.columnconfigure(0, weight=3)
content.columnconfigure(1, weight=3)
content.columnconfigure(2, weight=3)
content.columnconfigure(3, weight=1)
content.columnconfigure(4, weight=1)
content.rowconfigure(1, weight=1)
master.geometry("1000x500+100+100")
master.mainloop()
</code></pre>
| 0 | 2016-09-27T09:20:36Z | 39,724,414 | <p>It is important to understand what your code does. You were trying to open a file at program start, while the filename can only be known later, once the user has selected an option.
Therefore you need to make sure that the <code>open()</code> call is only executed when the filename is known. That means it should happen only inside the function <code>func</code> that you bind to the combobox via <code>box1.bind("<<ComboboxSelected>>", func)</code></p>
<p>Inside this function you can simply get the current selection of the dropdown and use it for determining the filename. This can be efficiently done with a dictionary.</p>
<p>This is the code that might do the trick for you:</p>
<pre><code>from Tkinter import *
import ttk
import csv
players_names = []
surface_types = ['ATP Hard Court', 'ATP Clay Court', 'ATP Grass Court']
choices = {'ATP Hard Court' : 'match_stats_atp_.csv' ,
'ATP Clay Court' : 'match_stats_atp_1.csv' ,
'ATP Grass Court' : 'match_stats_atp_10.csv'}
master = Tk()
content = ttk.Frame(master, padding=(12, 12, 12, 12))
frame = ttk.Frame(content, borderwidth=5, relief="groove", width=300, height=100)
content.grid(column=0, row=0, sticky=(N, S, E, W))
frame.grid(column=0, row=0, columnspan=4, rowspan=2, sticky=(N, S, E, W))
text = Text(content, height=8, width=13)
text.grid(row=0, column=4, columnspan=2, rowspan=2, sticky=(N, S, E, W))
v1player1 = StringVar()
v2player1 = StringVar()
c_player1 = Label(frame, text="Service Points Win %:").grid(row=6, column=1, sticky='w')
cc_player1 = Entry(frame, text="value", textvariable=v1player1, justify='center', width=10).grid(row=6, column=2)
d_player1 = Label(frame, text="Return Points Win %: ").grid(row=7, column=1, sticky='w')
dd_player1 = Entry(frame, text="value", textvariable=v2player1, justify='center', width=10).grid(row=7, column=2)
def surface(event):
players_names = []
surface_selection = box_value_surface.get()
try:
f = open(choices[surface_selection])
csv_f = csv.reader(f)
for row in csv_f:
players_names.append(row[2])
f.close()
except:
#this except is only to make it work without having the csv files at hand
for name in ["Henry", "Donovan", "John"]:
players_names.append(name)
box2['values'] = players_names
def player1(event):
surface_selection = box_value_surface.get()
with open(choices[surface_selection]) as csvfile:
read_csv = csv.reader(csvfile, delimiter=',')
service_points_wins = []
return_points_wins = []
names = []
for row in read_csv:
name = row[2]
services_point = row[10]
returns_point = row[4]
service_points_wins.append(services_point)
return_points_wins.append(returns_point)
names.append(name)
what_name = (box_value_p1.get())
name_dex = names.index(what_name)
service_points_wins = service_points_wins[name_dex]
points_services_point = return_points_wins[name_dex]
v1player1.set(service_points_wins.replace("%", ''))
v2player1.set(points_services_point.replace("%", ''))
box_value_surface = StringVar()
box1 = ttk.Combobox(frame, textvariable=box_value_surface, justify='center')
box1['values'] = surface_types
box1.bind("<<ComboboxSelected>>", surface)
box1.grid(column=2, row=1, pady=10, padx=15)
box_value_p1 = StringVar()
box2 = ttk.Combobox(frame, textvariable=box_value_p1, justify='center')
box2.bind("<<ComboboxSelected>>", player1)
box2['values'] = players_names
box2.grid(column=2, row=2, pady=10, padx=15)
run = ttk.Button(content, text='Run')
run.grid(column=4, row=3)
cancel = ttk.Button(content, text="Cancel", command=master.destroy)
cancel.grid(column=5, row=3)
master.mainloop()
</code></pre>
| 0 | 2016-09-27T12:15:53Z | [
"python",
"csv"
]
|
Django filtering on foreign key properties | 39,720,884 | <p>I'm trying to filter a table in Django based on the value of a particular field of a foreign key.</p>
<p>my models are:</p>
<pre><code>class Retailer(SCOPEModel):
""" A business or person that exchanges goods for vouchers with recipients
"""
office = models.ForeignKey(Office, related_name='retailers')
uuid = UUIDField(auto=True, version=4, null=True, help_text=_('unique id'))
name = models.CharField(_('Name'), max_length=50, validators=[validate_sluggable],
help_text=_('Name of the retail shop'), blank=False)
location = models.ForeignKey(Location, verbose_name=_('location'), blank=True, null=True,
help_text=_('Location of the retail shop'), related_name='retailers')
class PointOfSaleTerminalAssignment(SCOPEModel):
"""Point Of Sale (POS) is the location where a transaction occurs for
exchange of goods and services
and a POS terminal is the hardware used to perform the transactions.
These terminals are registered in the system.
POS' are managed at office level
"""
office = models.ForeignKey(Office, related_name='pos_terminals')
terminal_type = models.ForeignKey(
TerminalType,
verbose_name=_('Terminal'),
help_text=_("Device | Make (model)"),
)
wfp_number = models.CharField(
_('WFP Number'),
max_length=50,
unique=True,
validators=[validate_sluggable],
help_text=_("WFP unique generated number e.g. Inventory number")
)
serial_number = models.CharField(
_('Serial Number'),
max_length=50,
unique=True,
help_text=_('Hardware device serial number')
)
slug = models.SlugField(
editable=False,
unique=True,
help_text=_('Unique ID generated from the WFP number')
)
assigned_retailer = models.ForeignKey(
Retailer,
related_name='pos_terminals',
null=True,
blank=True,
help_text=_('Retailer this POS terminal is assigned to')
)
</code></pre>
<p>i want to get details of retailers and their assigned pos serial numbers
Currently I am performing two queries:</p>
<pre><code>from maidea.apps.office.models import Office
from maidea.apps.redemption.models import Retailer, PointOfSaleTerminalAssignment
office = Office.objects.get(slug='so-co')
pos = PointOfSaleTerminalAssignment.objects.filter(office=office)
for p in pos:
retailer = p.assigned_retailer
print retailer.name
</code></pre>
<p>but am getting this error, kindly what am i doing wrong?
<a href="http://i.stack.imgur.com/srrkd.png" rel="nofollow"><img src="http://i.stack.imgur.com/srrkd.png" alt="enter image description here"></a></p>
| 0 | 2016-09-27T09:22:31Z | 39,720,980 | <p>Apparently, not all your <code>PointOfSaleTerminalAssignment</code> instances has an <code>assigned_retailer</code>, as the <em>FK</em> field can take <em>NULL</em> values. </p>
<p>You can however <em>safely navigate</em> the attributes of each <code>retailer</code> only if the retailer is not <code>None</code> by testing with an <code>if</code>:</p>
<pre><code>for p in pos:
retailer = p.assigned_retailer
if retailer:
print retailer.name
</code></pre>
| 1 | 2016-09-27T09:26:42Z | [
"python",
"django"
]
|
Django filtering on foreign key properties | 39,720,884 | <p>I'm trying to filter a table in Django based on the value of a particular field of a foreign key.</p>
<p>my models are:</p>
<pre><code>class Retailer(SCOPEModel):
""" A business or person that exchanges goods for vouchers with recipients
"""
office = models.ForeignKey(Office, related_name='retailers')
uuid = UUIDField(auto=True, version=4, null=True, help_text=_('unique id'))
name = models.CharField(_('Name'), max_length=50, validators=[validate_sluggable],
help_text=_('Name of the retail shop'), blank=False)
location = models.ForeignKey(Location, verbose_name=_('location'), blank=True, null=True,
help_text=_('Location of the retail shop'), related_name='retailers')
class PointOfSaleTerminalAssignment(SCOPEModel):
"""Point Of Sale (POS) is the location where a transaction occurs for
exchange of goods and services
and a POS terminal is the hardware used to perform the transactions.
These terminals are registered in the system.
POS' are managed at office level
"""
office = models.ForeignKey(Office, related_name='pos_terminals')
terminal_type = models.ForeignKey(
TerminalType,
verbose_name=_('Terminal'),
help_text=_("Device | Make (model)"),
)
wfp_number = models.CharField(
_('WFP Number'),
max_length=50,
unique=True,
validators=[validate_sluggable],
help_text=_("WFP unique generated number e.g. Inventory number")
)
serial_number = models.CharField(
_('Serial Number'),
max_length=50,
unique=True,
help_text=_('Hardware device serial number')
)
slug = models.SlugField(
editable=False,
unique=True,
help_text=_('Unique ID generated from the WFP number')
)
assigned_retailer = models.ForeignKey(
Retailer,
related_name='pos_terminals',
null=True,
blank=True,
help_text=_('Retailer this POS terminal is assigned to')
)
</code></pre>
<p>i want to get details of retailers and their assigned pos serial numbers
Currently I am performing two queries:</p>
<pre><code>from maidea.apps.office.models import Office
from maidea.apps.redemption.models import Retailer, PointOfSaleTerminalAssignment
office = Office.objects.get(slug='so-co')
pos = PointOfSaleTerminalAssignment.objects.filter(office=office)
for p in pos:
retailer = p.assigned_retailer
print retailer.name
</code></pre>
<p>but am getting this error, kindly what am i doing wrong?
<a href="http://i.stack.imgur.com/srrkd.png" rel="nofollow"><img src="http://i.stack.imgur.com/srrkd.png" alt="enter image description here"></a></p>
| 0 | 2016-09-27T09:22:31Z | 39,721,010 | <p>Your <code>assigned_retailer</code> can be blank and null.
So, first of all, check whether there is an assignment. </p>
<pre><code>from maidea.apps.office.models import Office
from maidea.apps.redemption.models import Retailer, PointOfSaleTerminalAssignment
pos = PointOfSaleTerminalAssignment.objects.filter(office__slug='so-co')
for p in pos:
if p.assigned_retailer:
print p.assigned_retailer.name
</code></pre>
| 1 | 2016-09-27T09:28:11Z | [
"python",
"django"
]
|
Integration Github with web2py web application | 39,720,949 | <p>How we can integrate GitHub with "web2py" project.I want to connect may application to github.</p>
| -1 | 2016-09-27T09:25:21Z | 39,878,821 | <p>You can get the JSON request as a Storage object using:</p>
<pre><code>request.post_vars
</code></pre>
| 0 | 2016-10-05T16:02:22Z | [
"python",
"python-2.7",
"github",
"web2py",
"web2py-modules"
]
|
Python - Put a list in a threading module | 39,720,995 | <p>I want to put a list into my threading script, but I am facing a problem.</p>
<p>Contents of list file (example):</p>
<pre><code>http://google.com
http://yahoo.com
http://bing.com
http://python.org
</code></pre>
<p>My script:</p>
<pre><code>import codecs
import threading
import sys
import requests
from time import time as timer
from timeout import timeout
import time
try:
with codecs.open(sys.argv[1], mode='r', encoding='ascii', errors='ignore') as iiz:
iiz=iiz.read().splitlines()
except IOError:
pass
oz = list(iiz)
def nnn(url):
hzz = {'param1': sys.argv[2], 'param2': sys.argv[3]}
po = requests.post(url,data=hzz)
if po:
print("ok \n")
if __name__ == '__main__':
threads = []
for i in range(1):
t = threading.Thread(target=nnn, args=(oz,))
threads.append(t)
t.start()
</code></pre>
| 0 | 2016-09-27T09:27:29Z | 39,723,226 | <p>Can you please clarify what elaborate on exactly what you're trying to achieve.</p>
<p>I'm guessing that you're trying to request urls to load into a web browser or the terminal...</p>
<p>Also you shouldn't need to put the urls into a list because when you opened up the file containing the urls, it automatically sorted it into a list. So in other words, the contents in iiz are already in the list format.</p>
<p>Personally, I haven't worked much with the modules you're using (apart from time), but I'll try my best to help you and hopefully other users will try and help you too.</p>
| 0 | 2016-09-27T11:14:58Z | [
"python",
"multithreading",
"list"
]
|
Pandas Join Two Tables Based On Time Condition | 39,721,008 | <p>I'm trying to join two tables in Pandas based on timestamp. Basically the structure looks something like this:</p>
<p>Table 2</p>
<pre><code>Timestamp Truck MineX MineY
2016-08-27 01:10 CT77 -11346.36655 -650404.405
2016-08-27 01:12 CT45 -11596.88137 -648294.056
2016-08-27 01:13 CT67 -11953.16118 -648325.114
2016-08-27 01:13 CT75 -11326.54075 -650447.462
2016-08-27 01:14 CT79 -11380.27834 -650425.968
2016-08-27 01:15 CT26 -9493.153286 -652313.633
2016-08-27 01:16 CT73 -11527.47602 -650210.723
2016-08-27 01:16 CT40 -11596.90867 -648260.214
2016-08-27 01:17 CT26 -9493.153286 -652313.633
2016-08-27 01:17 CT80 -11363.34558 -650385.959
2016-08-27 01:17 CT72 -11527.47355 -650213.8
</code></pre>
<p>Table 1</p>
<pre><code>Truck LoadLocation Tonnes ArriveTimestamp
CT70 338-001 261 2016-02-21 00:23
CT66 338-001 271 2016-02-21 00:31
CT62 338-001 264 2016-02-21 00:45
CT73 338-001 254 2016-02-21 00:54
CT71 338-001 250 2016-02-21 01:04
CT39 338-001 182.172 2016-02-21 01:11
CT62 338-001 285 2016-02-21 01:19
CT70 338-001 282 2016-02-21 01:25
CT73 338-001 250 2016-02-21 01:30
CT73 338-001 275 2016-02-21 01:35
CT64 338-001 253 2016-02-21 01:42
</code></pre>
<p>Table 1 and Table 2 need to be joined, where Timestamp and ArriveTimeStamp are within one minute of each other, and the Truck ID is the same. A left join is preferred where records from Table 2 are thrown out if there is no match</p>
| 1 | 2016-09-27T09:28:00Z | 39,721,075 | <pre><code>import pandas as pd
# column names taken from the question's tables
df = pd.merge(left=Table1, right=Table2, left_on=['ArriveTimestamp'], right_on=['Timestamp'])
</code></pre>
| 0 | 2016-09-27T09:31:37Z | [
"python",
"pandas"
]
|
Pandas Join Two Tables Based On Time Condition | 39,721,008 | <p>I'm trying to join two tables in Pandas based on timestamp. Basically the structure looks something like this:</p>
<p>Table 2</p>
<pre><code>Timestamp Truck MineX MineY
2016-08-27 01:10 CT77 -11346.36655 -650404.405
2016-08-27 01:12 CT45 -11596.88137 -648294.056
2016-08-27 01:13 CT67 -11953.16118 -648325.114
2016-08-27 01:13 CT75 -11326.54075 -650447.462
2016-08-27 01:14 CT79 -11380.27834 -650425.968
2016-08-27 01:15 CT26 -9493.153286 -652313.633
2016-08-27 01:16 CT73 -11527.47602 -650210.723
2016-08-27 01:16 CT40 -11596.90867 -648260.214
2016-08-27 01:17 CT26 -9493.153286 -652313.633
2016-08-27 01:17 CT80 -11363.34558 -650385.959
2016-08-27 01:17 CT72 -11527.47355 -650213.8
</code></pre>
<p>Table 1</p>
<pre><code>Truck LoadLocation Tonnes ArriveTimestamp
CT70 338-001 261 2016-02-21 00:23
CT66 338-001 271 2016-02-21 00:31
CT62 338-001 264 2016-02-21 00:45
CT73 338-001 254 2016-02-21 00:54
CT71 338-001 250 2016-02-21 01:04
CT39 338-001 182.172 2016-02-21 01:11
CT62 338-001 285 2016-02-21 01:19
CT70 338-001 282 2016-02-21 01:25
CT73 338-001 250 2016-02-21 01:30
CT73 338-001 275 2016-02-21 01:35
CT64 338-001 253 2016-02-21 01:42
</code></pre>
<p>Table 1 and Table 2 need to be joined, where Timestamp and ArriveTimeStamp are within one minute of each other, and the Truck ID is the same. A left join is preferred where records from Table 2 are thrown out if there is no match</p>
| 1 | 2016-09-27T09:28:00Z | 39,723,087 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>df = pd.merge(df1, df2, on='Truck', how='left')
print (df)
Truck LoadLocation Tonnes ArriveTimestamp Timestamp \
0 CT70 338-001 261.000 2016-02-21 00:23:00 NaT
1 CT66 338-001 271.000 2016-02-21 00:31:00 NaT
2 CT62 338-001 264.000 2016-02-21 00:45:00 NaT
3 CT73 338-001 254.000 2016-02-21 00:54:00 2016-08-27 01:16:00
4 CT71 338-001 250.000 2016-02-21 01:04:00 NaT
5 CT39 338-001 182.172 2016-02-21 01:11:00 NaT
6 CT62 338-001 285.000 2016-02-21 01:19:00 NaT
7 CT70 338-001 282.000 2016-02-21 01:25:00 NaT
8 CT73 338-001 250.000 2016-02-21 01:30:00 2016-08-27 01:16:00
9 CT73 338-001 275.000 2016-02-21 01:35:00 2016-08-27 01:16:00
10 CT64 338-001 253.000 2016-02-21 01:42:00 NaT
MineX MineY
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 -11527.47602 -650210.723
4 NaN NaN
5 NaN NaN
6 NaN NaN
7 NaN NaN
8 -11527.47602 -650210.723
9 -11527.47602 -650210.723
10 NaN NaN
</code></pre>
<p>Then use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> to filter on the absolute difference between the <code>datetimes</code>; for this sample data it returns an empty <code>DataFrame</code>: </p>
<pre><code>print ((df.Timestamp - df.ArriveTimestamp).astype('timedelta64[s]'))
0 NaN
1 NaN
2 NaN
3 16244520.0
4 NaN
5 NaN
6 NaN
7 NaN
8 16242360.0
9 16242060.0
10 NaN
dtype: float64
print ((df.Timestamp - df.ArriveTimestamp).astype('timedelta64[s]').abs() < 60)
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
dtype: bool
Empty DataFrame
print (df[(df.Timestamp - df.ArriveTimestamp).astype('timedelta64[s]').abs() < 60])
Empty DataFrame
Columns: [Truck, LoadLocation, Tonnes, ArriveTimestamp, Timestamp, MineX, MineY]
Index: []
</code></pre>
| 1 | 2016-09-27T11:07:19Z | [
"python",
"pandas"
]
|
How to set a cron job on Skygear to run every 12 hours? | 39,721,031 | <p>Trying to set a cron job on my Skygear python cloud code, but not sure what I should enter in the decorator. I only know that it will work for units in second, but how to schedule a job to run every 12 hours? It is hard to calculate the seconds every time.</p>
<p>My code is like this, the function is to call a POST request:</p>
<pre><code>@skygear.every('@every 43200s')
def post_req():
print ('scheduled to run every 12 hours')
url = myurl
ref = something
r = requests.post(myurl, data = {'token':some_token, 'ref':something})
</code></pre>
<p>It actually works but is there some ways to write in a better format?</p>
| 1 | 2016-09-27T09:29:00Z | 39,721,174 | <p>It seems like <code>skygear.every</code> also <a href="https://docs.skygear.io/cloud-code/guide/scheduled" rel="nofollow">accepts crontab notation</a>⦠so <a href="http://crontab.guru/#0_*/12_*_*_*" rel="nofollow"><code>0 */12 * * *</code></a> could also do the trick.</p>
<p>Edit: Reading the <a href="https://github.com/robfig/cron/blob/master/doc.go" rel="nofollow">robfig/cron</a> docs, the best solution would actually be just <code>@every 12h</code></p>
| 2 | 2016-09-27T09:35:51Z | [
"python",
"cron",
"crontab",
"cloud-code",
"skygear"
]
|
How to overwrite ORM method unlink from a custom module? | 39,721,250 | <p>I want to overwrite the <code>unlink</code> method of <code>stock.move</code> model. The reason is that I want to remove an OSV exception which warns about a forbidden action, and replace it with other message and other condition.</p>
<p>This is the original code:</p>
<pre><code>def unlink(self, cr, uid, ids, context=None):
context = context or {}
for move in self.browse(cr, uid, ids, context=context):
if move.state not in ('draft', 'cancel'):
raise osv.except_osv(_('User Error!'), _('You can only delete draft moves.'))
return super(stock_move, self).unlink(cr, uid, ids, context=context)
</code></pre>
<p>I have just realized that removing that message is complexer than I thought. This is my current code, which is checking my condition, but then checks the original one I want to avoid:</p>
<pre><code>class StockMove(models.Model):
_inherit = 'stock.move'
@api.multi
def unlink(self):
for move in self:
if move.lot_id and move.lot_id.any_unit_sold is True:
raise Warning(_('You can only delete unsold moves.'))
return super(StockMove, self).unlink()
</code></pre>
<p>If I turn the last line (<code>super</code>) into <code>self.unlink()</code>, I get a <em>maximum recursion depth exceeded</em> error.</p>
<p>How can I manage my purpose from my custom module?</p>
| 1 | 2016-09-27T09:39:40Z | 39,729,472 | <p>Not using a <code>super()</code> call can have unexpected behaviour. You could call <code>models.Model.unlink()</code>, but that will skip all <code>unlink()</code> extensions for <code>stock.move</code> by other modules (even Odoo S.A. apps/modules). In your case it would be:</p>
<pre><code>class StockMove(models.Model):
_inherit = 'stock.move'
@api.multi
def unlink(self):
for move in self:
if move.lot_id and move.lot_id.any_unit_sold is True:
raise Warning(_('You can only delete unsold moves.'))
return models.Model.unlink(self)
</code></pre>
<p>Another possibility would be a monkey patch on the original code.</p>
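<p>A monkey patch would follow the usual pattern of saving a reference to the original method, installing a replacement, and delegating to the saved original. A generic sketch of the pattern (not Odoo-specific code; class and return values are placeholders):</p>

```python
# generic monkey-patching pattern
class Model(object):
    def unlink(self):
        return "original unlink"

_original_unlink = Model.unlink           # save the original method

def patched_unlink(self):
    # custom checks would go here before delegating to the original
    return "patched + " + _original_unlink(self)

Model.unlink = patched_unlink             # install the patch
print(Model().unlink())
```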
| 0 | 2016-09-27T16:07:42Z | [
"python",
"python-2.7",
"openerp",
"odoo-8"
]
|
Display select Mysql query in python with output nice and readable | 39,721,290 | <p>I need python script for display sql query with nice output and readable this not readable for heavy tables...</p>
<pre><code>cnx = mysql.connector.connect(user='root', password='*****',
host='127.0.0.1',
database='dietetique')
c = cnx.cursor()
sys.stdout = open('mysql_data.log', 'w')
c.execute("SELECT * FROM administrations;")
for row in c:
print row
</code></pre>
| 0 | 2016-09-27T09:41:21Z | 39,721,591 | <pre><code>import pypyodbc
ID=2
ConnectionDtl='Driver={SQL Server};Server=WIN7-297;Database=AdventureWorks2014;trusted_connection=yes'
connection = pypyodbc.connect(ConnectionDtl)
print("Retrieve row based on [FirstName]='Mani'")
cursor = connection.cursor()
SQLCommand = ("SELECT [FirstName],[LastName] "
"FROM Person.SampleData "
"WHERE FirstName =?")
Values = ['Mani']
print(SQLCommand)
cursor.execute(SQLCommand,Values)
i=1
for row in cursor:  # iterate the cursor directly; mixing iteration with fetchone() skips rows
    print str(i) + ". FirstName: " + row[0] + " LastName: " + row[1]
    i = i + 1
connection.close()
</code></pre>
| 0 | 2016-09-27T09:54:24Z | [
"python",
"mysql",
"sql",
"python-2.7",
"web"
]
|
Display select Mysql query in python with output nice and readable | 39,721,290 | <p>I need a Python script to display the results of a SQL query with nice, readable output; the current output is not readable for heavy tables...</p>
<pre><code>cnx = mysql.connector.connect(user='root', password='*****',
host='127.0.0.1',
database='dietetique')
c = cnx.cursor()
sys.stdout = open('mysql_data.log', 'w')
c.execute("SELECT * FROM administrations;")
for row in c:
print row
</code></pre>
| 0 | 2016-09-27T09:41:21Z | 39,721,593 | <p>You can execute the same code just by adding limits to the SQL query.</p>
<pre><code>cnx = mysql.connector.connect(user='root', password='*****',
host='127.0.0.1',
database='dietetique')
c = cnx.cursor()
sys.stdout = open('mysql_data.log', 'w')
limitvalue = 1000
# maximum_rows_you_want: the total number of rows to page through
for offsetvalue in range(0, maximum_rows_you_want, limitvalue):
    c.execute("SELECT * FROM administrations LIMIT " + str(limitvalue) +
              " OFFSET " + str(offsetvalue) + ";")
    for row in c:
        print row
</code></pre>
| 0 | 2016-09-27T09:54:27Z | [
"python",
"mysql",
"sql",
"python-2.7",
"web"
]
|
Tornado gen.sleep add delay | 39,721,305 | <p>I'm trying to add a delay between requests in an asynchronous way.
When I use Tornado gen.sleep(x) my function (launch) doesn't get executed.
If I remove <strong>yield</strong> from <strong>yield gen.sleep(1.0)</strong>, function is called, but no delay is added.
How can I add a delay between requests in my for loop? I need to control requests per second to an external API.
If I use <strong>time.sleep</strong>, the response is delayed until after all requests are completed.
I tried adding the @gen.engine decorator to the launch function, with no results.</p>
<p>Code:</p>
<pre><code>import collections
import tornado.httpclient
class BacklogClient(object):
MAX_CONCURRENT_REQUESTS = 20
def __init__(self, ioloop):
self.ioloop = ioloop
self.client = tornado.httpclient.AsyncHTTPClient(max_clients=self.MAX_CONCURRENT_REQUESTS)
self.client.configure(None, defaults=dict(connect_timeout=20, request_timeout=30))
self.backlog = collections.deque()
self.concurrent_requests = 0
def __get_callback(self, function):
def wrapped(*args, **kwargs):
self.concurrent_requests -= 1
self.try_run_request()
return function(*args, **kwargs)
return wrapped
def try_run_request(self):
while self.backlog and self.concurrent_requests < self.MAX_CONCURRENT_REQUESTS:
request, callback = self.backlog.popleft()
self.client.fetch(request, callback=callback)
self.concurrent_requests += 1
def fetch(self, request, callback=None):
wrapped = self.__get_callback(callback)
self.backlog.append((request, wrapped))
self.try_run_request()
import time
from tornado import ioloop, httpclient, gen
class TornadoBacklog:
def __init__(self):
self.queue = 0
self.debug = 1
self.toProcess = [
'http://google.com',
'http://yahoo.com',
'http://nytimes.com',
'http://msn.com',
'http://cnn.com',
'http://twitter.com',
'http://facebook.com',
]
def handle_request(self, response):
print response.code
if not self.backlog.backlog and self.backlog.concurrent_requests == 0:
ioloop.IOLoop.instance().stop()
def launch(self):
self.ioloop = ioloop.IOLoop.current()
self.backlog = BacklogClient(self.ioloop)
for item in self.toProcess:
yield gen.sleep(1.0)
print item
self.backlog.fetch(
httpclient.HTTPRequest(
item,
method='GET',
headers=None,
),
self.handle_request
)
self.ioloop.start()
def main():
start_time = time.time()
scraper = TornadoBacklog()
scraper.launch()
elapsed_time = time.time() - start_time
print('Process took %f seconds processed %d items.' % (elapsed_time, len(scraper.toProcess)))
if __name__ == "__main__":
main()
</code></pre>
<p>Reference: <a href="https://github.com/tornadoweb/tornado/issues/1400" rel="nofollow">https://github.com/tornadoweb/tornado/issues/1400</a></p>
| 0 | 2016-09-27T09:42:00Z | 39,725,846 | <p>Tornado coroutines have two components:</p>
<ol>
<li>They contain "yield" statements</li>
<li>They are decorated with "gen.coroutine"</li>
</ol>
<p>Use the "coroutine" decorator on your "launch" function:</p>
<pre><code>@gen.coroutine
def launch(self):
</code></pre>
<p>Run a Tornado coroutine from start to finish like this:</p>
<pre><code>tornado.ioloop.IOLoop.current().run_sync(launch)
</code></pre>
<p>Remove the call to "ioloop.start" from your "launch" function: the loop runs the "launch" function, not vice-versa.</p>
| 1 | 2016-09-27T13:23:55Z | [
"python",
"asynchronous",
"tornado"
]
|
.apply(pd.to_numeric) returns error message | 39,721,559 | <p>I have a dataframe with two columns I want to convert to numeric type. I use the following code:</p>
<pre><code>df[["GP","G"]]=df[["GP","G"]].apply(pd.to_numeric)
</code></pre>
<p>Python returns the following error message:</p>
<pre><code>File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4157, in _apply_standard
results[i] = func(v)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\tools\util.py", line 115, in to_numeric
coerce_numeric=coerce_numeric)
File "pandas\src\inference.pyx", line 612, in pandas.lib.maybe_convert_numeric (pandas\lib.c:53558)
File "pandas\src\inference.pyx", line 598, in pandas.lib.maybe_convert_numeric (pandas\lib.c:53344)
ValueError: ('Unable to parse string', 'occurred at index GP')
</code></pre>
<p>How can I fix this problem? How can I convert multiple column types at once with a command? Thank you!</p>
| 0 | 2016-09-27T09:52:36Z | 39,721,977 | <p>Your code will only work if all the data can be parsed to numeric. </p>
<p>If not, there is at least one value in dataframe which is not convertible to numeric. You can use <code>errors</code> parameter according to your choice in such case. Here is an example.</p>
<pre><code>>>> df = pd.DataFrame({'A' : list('aabbcd'), 'B' : list('ffghhe')})
>>> df
A B
0 a f
1 a f
2 b g
3 b h
4 c h
5 d e
>>> df.apply(pd.to_numeric, errors='ignore')
A B
0 a f
1 a f
2 b g
3 b h
4 c h
5 d e
>>> df.apply(pd.to_numeric, errors='coerce')
A B
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
5 NaN NaN
>>> df.apply(pd.to_numeric, errors='raise')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 4042, in apply
return self._apply_standard(f, axis, reduce=reduce)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 4138, in _apply_standard
results[i] = func(v)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 4020, in f
return func(x, *args, **kwds)
File "/usr/local/lib/python2.7/dist-packages/pandas/tools/util.py", line 98, in to_numeric
coerce_numeric=coerce_numeric)
File "pandas/src/inference.pyx", line 612, in pandas.lib.maybe_convert_numeric (pandas/lib.c:53932)
File "pandas/src/inference.pyx", line 598, in pandas.lib.maybe_convert_numeric (pandas/lib.c:53719)
ValueError: ('Unable to parse string', u'occurred at index A')
>>>
</code></pre>
<p>Here is the documentation for <code>errors</code></p>
<blockquote>
<p>errors : {'ignore', 'raise', 'coerce'}, default 'raise'</p>
<p>If 'raise', then invalid parsing will raise an exception</p>
<p>If 'coerce', then invalid parsing will be set as NaN</p>
<p>If 'ignore', then invalid parsing will return the input</p>
</blockquote>
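<p>Applied to the question's two-column setup, a small sketch (the column names GP and G are taken from the question; the sample values, including the unparseable 'x', are made up for illustration):</p>

```python
import pandas as pd

# Hypothetical sample data; 'x' stands in for a value that cannot be parsed.
df = pd.DataFrame({'GP': ['10', 'x', '30'], 'G': ['1', '2', '3']})

out = df[['GP', 'G']].apply(pd.to_numeric, errors='coerce')
print(out)         # the unparseable 'x' becomes NaN, everything else is numeric
print(out.dtypes)
```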
| 1 | 2016-09-27T10:12:52Z | [
"python"
]
|
Capturing text with Python regular expressions | 39,721,578 | <p>I've been having a bit of trouble with capturing strings between html tags using Python regular expressions. I've been trying to capture the string "example link 2" from the string below:</p>
<pre><code><link>example link 1</link>
<item>
<link>example link 2</link>
</item>
</code></pre>
<p>I've got this so far:</p>
<pre><code>(?<=<link>)(.*)(?=</link>)
</code></pre>
<p>However the regular expression above returns "example link 1" and "example link 2". Could anyone please help with selecting only "example link 2"? </p>
<p>EDIT: Unfortunately I'm required to use regular expressions for this question so i can't use a parser etc. Thanks for the recommendation though.</p>
| -1 | 2016-09-27T09:53:32Z | 39,721,742 | <p>You need to add 'g' modifier at the end. For example the regex should look like: </p>
<pre><code>/(?<=\<link>)(.*)(?=<\/link>)/g
</code></pre>
<p>The 'g' modifier tells the engine not to stop after the first match has been found, but rather to continue until no more matches can be found.<br>
Demo <a href="https://regex101.com/r/oF9eM7/2" rel="nofollow">here</a></p>
| 0 | 2016-09-27T10:01:29Z | [
"python",
"regex",
"python-2.7"
]
|
Robocopy based on extensions | 39,721,614 | <p>Currently I am using robocopy in python to copy out files based on extensions.</p>
<p>The code is below:</p>
<pre><code>call(["robocopy", "C:\\", dest, "*.7z", "/S", "/COPYALL"])
</code></pre>
<p>But in a scenario where there are no 7z files, it still creates the dest directory.</p>
<p>Is there a way to only create the directory and copy the file only if it exists?</p>
<p>Thanks in Advance</p>
| 0 | 2016-09-27T09:55:34Z | 39,721,701 | <p>Why not check if there are 7z files before calling the copy utility?</p>
<pre><code>import glob
if glob.glob(r"C:\*.7z"):  # any 7z files in the source directory?
    call(["robocopy", "C:\\", dest, "*.7z", "/S", "/COPYALL"])
</code></pre>
| 1 | 2016-09-27T09:59:46Z | [
"python",
"robocopy"
]
|
Type not found: '(schema, http://www.w3.org/2001/XMLSchema, ) | 39,721,639 | <p>I am using suds-client for soap wsdl services. When i try to setup the suds client for soap service, I get the Type not found error. I search everywhere. There are many unanswered questions with the same error. I am adding the links as <a href="http://stackoverflow.com/questions/33611461/typenotfound-type-not-found-schema-http-www-w3-org-2001-xmlschema">Question1</a>, <a href="http://stackoverflow.com/questions/25619104/suds-typenotfound-type-not-found-http-www-w3-org-2001-xmlschema">Question2</a>, <a href="http://stackoverflow.com/questions/12877583/suds-raise-typenotfoundquery-ref-suds-typenotfound-type-not-found-array-h">Question3</a></p>
<p>Here is my code </p>
<pre><code>from suds.client import Client
TCS_TRACK_URI = 'http://track.tcs.com.pk/trackingaccount/track.asmx?WSDL'
track_client = Client(TCS_TRACK_URI)
</code></pre>
<p>On the last line, I got this error </p>
<pre><code>Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/adil/Code/mezino/RoyalTag/royal_tag_services/sms_service/tcs_api.py", line 24, in <module>
track_client = Client(TCS_TRACK_URI)
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/client.py", line 112, in __init__
self.wsdl = reader.open(url)
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/reader.py", line 152, in open
d = self.fn(url, self.options)
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/wsdl.py", line 159, in __init__
self.build_schema()
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/wsdl.py", line 220, in build_schema
self.schema = container.load(self.options)
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/xsd/schema.py", line 95, in load
child.dereference()
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/xsd/schema.py", line 323, in dereference
midx, deps = x.dependencies()
File "/home/adil/Code/mezino/RoyalTag/royalenv/local/lib/python2.7/site-packages/suds/xsd/sxbasic.py", line 422, in dependencies
raise TypeNotFound(self.ref)
TypeNotFound: Type not found: '(schema, http://www.w3.org/2001/XMLSchema, )'
</code></pre>
<p>Please help me find a solution ?</p>
| 3 | 2016-09-27T09:56:27Z | 39,735,604 | <p>I have been searching for the answer and finally I found a solution that solved my problem. </p>
<p>We Just need to add the missing schema to the imports of suds. Below is the code</p>
<pre><code>from suds.xsd.doctor import Import, ImportDoctor
imp=Import('http://www.w3.org/2001/XMLSchema',location='http://www.w3.org/2001/XMLSchema.xsd')
imp.filter.add('http://tempuri.org/')
track_client = Client(TCS_TRACK_URI, doctor=ImportDoctor(imp))
</code></pre>
| 3 | 2016-09-27T22:59:47Z | [
"python",
"python-2.7",
"soap",
"wsdl",
"suds"
]
|
Convert Pandas dataframe to Dask dataframe | 39,721,800 | <p>Suppose I have pandas dataframe as:</p>
<pre><code>df=pd.DataFrame({'a':[1,2,3],'b':[4,5,6]})
</code></pre>
<p>When I convert it into dask dataframe what should <code>name</code> and <code>divisions</code> parameter consist of:</p>
<pre><code>from dask import dataframe as dd
sd=dd.DataFrame(df.to_dict(),divisions=1,meta=pd.DataFrame(columns=df.columns,index=df.index))
</code></pre>
<blockquote>
<p>TypeError: <code>__init__()</code> missing 1 required positional argument: 'name'</p>
</blockquote>
<p><strong>Edit</strong> :
Suppose I create a pandas dataframe like:</p>
<pre><code>pd.DataFrame({'a':[1,2,3],'b':[4,5,6]})
</code></pre>
<p>Similarly how to create dask dataframe as it needs three additional arguments as <code>name,divisions</code> and <code>meta</code>.</p>
<pre><code>sd=dd.Dataframe({'a':[1,2,3],'b':[4,5,6]},name=,meta=,divisions=)
</code></pre>
<p>Thank you for your reply.</p>
| 2 | 2016-09-27T10:04:08Z | 39,722,445 | <p>I think you can use <a href="http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.from_pandas" rel="nofollow"><code>dask.dataframe.from_pandas</code></a>:</p>
<pre><code>from dask import dataframe as dd
sd = dd.from_pandas(df, npartitions=3)
print (sd)
dd.DataFrame<from_pa..., npartitions=2, divisions=(0, 1, 2)>
</code></pre>
<p>EDIT:</p>
<p>I find <a href="https://github.com/dask/dask/blob/57285aa1cd721f0845b4f2b3756cb82d5f4597cf/dask/dataframe/tests/test_categorical.py#L14" rel="nofollow">solution</a>:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
from dask.dataframe.utils import make_meta
df=pd.DataFrame({'a':[1,2,3],'b':[4,5,6]})
dsk = {('x', 0): df}
meta = make_meta({'a': 'i8', 'b': 'i8'}, index=pd.Index([], 'i8'))
d = dd.DataFrame(dsk, name='x', meta=meta, divisions=[0, 1, 2])
print (d)
dd.DataFrame<x, npartitions=2, divisions=(0, 1, 2)>
</code></pre>
| 2 | 2016-09-27T10:36:21Z | [
"python",
"pandas",
"dataframe",
"data-conversion",
"dask"
]
|
Use dictwriter and write unstructured data into csv file using python | 39,721,841 | <p><a href="http://i.stack.imgur.com/Hb2z5.jpg" rel="nofollow">Output CSv file</a>I am currently trying to put the data from a file into csv file using python.
My data looks as follows:</p>
<pre><code>data_list = [{'row': '0', 't0': '8.69E-005', 'elems': ' 4 96 187 ', 'Tspan': '5E-006', 'NP': '625', 'wave0': '123.65 333.56 3333.78 567.89 345678.77 34E-08'}]
</code></pre>
<p>My output should look like this :</p>
<pre><code> row t0 elems Tspan NP wave0
0 8.69E-005 4 5E-006 625 123.65
96 333.56
187 3333.78
567.89
345678.77
34E-08
</code></pre>
<p>First comes header and then values underneath them. I was succesfull in getting the header part and the row parts for all except for elems and wave0.</p>
<p>python code: </p>
<pre><code> with open('cc.csv','w', newline='') as out_file:
writer = csv.DictWriter(out_file, fieldnames=data_list[0].keys())
writer.writeheader()
for data in data_list:
writer.writerow(data)
</code></pre>
| 0 | 2016-09-27T10:05:46Z | 39,723,052 | <p>It looks like you are after the dictionary inside your <code>data_list</code> variable. In order to get your desired output, you are going to need to do a little bit of work:</p>
<p>Your list of one dictionary:</p>
<pre><code>data_list = [{'row': '0', 't0': '8.69E-005', 'elems': ' 4 96 187 ', 'Tspan': '5E-006', 'NP': '625','wave0': '123.65 333.56 3333.78 567.89 345678.77 34E-08'}]
</code></pre>
<p>First, you need to define your output columns:</p>
<pre><code>output_cols = ['row', 't0', 'elems', 'Tspan', 'NP', 'wave0']
</code></pre>
<p>Then you need to create lists from the <code>values</code> of the dictionary and subsequently <code>*zip*</code> all those lists together:</p>
<pre><code># Python2
from itertools import izip_longest
import csv
data = [data_list[0].get(x).strip().split(" ") for x in output_cols]
data = izip_longest(*data)
with open('somefile.csv','w') as outfile:
file_writer = csv.writer(outfile, delimiter="\t", lineterminator="\n")
file_writer.writerow(output_cols)
file_writer.writerows(data)
</code></pre>
<p>If you are using python3, then <code>from itertools import izip_longest</code> should be replaced with <code>from itertools import zip_longest</code>.</p>
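<p>For illustration, here is a self-contained Python 3 sketch of the same approach that writes to an in-memory buffer so it runs standalone (the sample dictionary is copied from the question):</p>

```python
import csv
import io
from itertools import zip_longest

data_list = [{'row': '0', 't0': '8.69E-005', 'elems': ' 4 96 187 ',
              'Tspan': '5E-006', 'NP': '625',
              'wave0': '123.65 333.56 3333.78 567.89 345678.77 34E-08'}]
output_cols = ['row', 't0', 'elems', 'Tspan', 'NP', 'wave0']

# One list of values per column; str.split() handles the stray whitespace.
columns = [data_list[0][col].split() for col in output_cols]
rows = list(zip_longest(*columns, fillvalue=''))

buf = io.StringIO()
writer = csv.writer(buf, delimiter='\t', lineterminator='\n')
writer.writerow(output_cols)
writer.writerows(rows)
print(buf.getvalue())
```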
<p>Please note the answer assumes that all your data is that first dictionary (<code>data_list[0]</code>). If your actual list is longer than that, please share more data.</p>
<p>Content of <code>somefile.csv</code>:</p>
<pre><code>row t0 elems Tspan NP wave0
0 8.69E-005 4 5E-006 625 123.65
96 333.56
187 3333.78
567.89
345678.77
34E-08
</code></pre>
<p>I hope this helps.</p>
| 0 | 2016-09-27T11:05:42Z | [
"python",
"csv"
]
|
ConnectionURI MSSQL Python | 39,721,845 | <p>I'm coding a little something in Python. I need to get some data from a MicrosoftSQL database and convert it to a JSONObject. And I think I have some problems with the ConnectionForURI. I'm using the simplejson and sqlobject libraries.</p>
<p>I'm not sure exactly how that string is supposed to look.</p>
<p>I've tried this:</p>
<pre><code>"mssql://COMPUTERNAME\SQLEXPRESS/DATABASENAME"
</code></pre>
<p>But getting the following error:</p>
<pre><code>ImportError: Cannot find an MSSQL driver, tried adodb,pymssql
</code></pre>
<p>Is that because my connectionURI is wrong? I've tried a lot of different variants with usernames and such, but I didn't need that when using pypyodbc.</p>
<p>Any help would be much appreciated.</p>
| 0 | 2016-09-27T10:06:02Z | 39,721,883 | <pre><code>##1st option
cnxn = pypyodbc.connect("dsn=mydsn;Trusted_Connection=Yes")
##mydsn=DSN Name
##2nd option
ConnectionDtl='Driver={SQL Server};Server=WIN7-297;Database=AdventureWorks2014;trusted_connection=yes'
connection = pypyodbc.connect(ConnectionDtl)
</code></pre>
| -1 | 2016-09-27T10:08:16Z | [
"python",
"sql-server",
"database-connection",
"simplejson",
"sqlobject"
]
|
How to save state of execution in python | 39,721,867 | <p>I have an assignment in Python for which some complicated idea is needed.
Please help me.
Suppose initially I have a file to load into an array 'arr01'; then I have two Python source files, say 'a.py' and 'b.py'.</p>
<p>a.py takes some elements from the loaded array 'arr01', modifies 'arr01' and sends an argument to b.py.</p>
<p>b.py will generate a string 'str', and whenever further data is needed it will call a.py to send more elements.</p>
<p>The problem here is that the array arr01 should stay loaded and the execution state of a.py and b.py must be saved, because b.py will call a.py whenever an element is needed and a.py must keep track of the elements it has sent to b.py from arr01.</p>
<p>How can I use both a.py and b.py simultaneously?</p>
| -1 | 2016-09-27T10:07:11Z | 39,722,875 | <p>The following approach should get you there:</p>
<ul>
<li>define in <code>a.py</code> a class with a method that provides data from arr01 and tracks what has been sent</li>
<li><code>import a</code> in <code>b.py</code></li>
<li>call the class method from <code>a</code> in your code in <code>b</code></li>
</ul>
| 0 | 2016-09-27T10:57:28Z | [
"python",
"python-2.7",
"python-3.x"
]
|
How to save state of execution in python | 39,721,867 | <p>I have an assignment in Python for which some complicated idea is needed.
Please help me.
Suppose initially I have a file to load into an array 'arr01'; then I have two Python source files, say 'a.py' and 'b.py'.</p>
<p>a.py takes some elements from the loaded array 'arr01', modifies 'arr01' and sends an argument to b.py.</p>
<p>b.py will generate a string 'str', and whenever further data is needed it will call a.py to send more elements.</p>
<p>The problem here is that the array arr01 should stay loaded and the execution state of a.py and b.py must be saved, because b.py will call a.py whenever an element is needed and a.py must keep track of the elements it has sent to b.py from arr01.</p>
<p>How can I use both a.py and b.py simultaneously?</p>
| -1 | 2016-09-27T10:07:11Z | 39,722,917 | <p><code>from a import function_name</code></p>
<p><code>from b import other_function_name</code></p>
<p>From here, you can use a.py and b.py functions in this python file.</p>
<p>E.g.</p>
<h1>a.py</h1>
<pre><code>def say_hi():
    print 'hello'
</code></pre>
<h1>b.py</h1>
<pre><code>from a import say_hi
say_hi()</code></pre>
| 0 | 2016-09-27T10:59:30Z | [
"python",
"python-2.7",
"python-3.x"
]
|
How to run multiple commands synchronously from one subprocess.Popen command? | 39,721,924 | <p>Is it possible to execute an arbitrary number of commands in sequence using the same subprocess command?</p>
<p>I need each command to wait for the previous one to complete before executing and I need them all to be executed in the same session/shell. I also need this to work in Python 2.6, Python 3.5. I also need the subprocess command to work in Linux, Windows and macOS (which is why I'm just using <code>echo</code> commands as examples here).</p>
<p>See <strong>non-working</strong> code below:</p>
<pre><code>import sys
import subprocess
cmds = ['echo start', 'echo mid', 'echo end']
p = subprocess.Popen(cmd=tuple([item for item in cmds]),
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
sys.stdout.flush()
print(">>> " + line.rstrip())
</code></pre>
<p>If this is not possible, which approach should I take in order to execute my commands in synchronous sequence within the same session/shell?</p>
| 0 | 2016-09-27T10:10:09Z | 39,722,419 | <p>One possible solution; it looks like it runs in the same shell:</p>
<pre><code>subprocess.Popen('echo start;echo mid;echo end', shell=True)
</code></pre>
<p>Note - If you pass your command as a string then <code>shell</code> has to be True.
Note - This works on Linux only; you may have to find a similar approach on Windows.</p>
<p>Hope it will help.</p>
<p>From python doc - </p>
<blockquote>
<p>On Unix with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
This means that the string must be formatted exactly as it would be
when typed at the shell prompt.</p>
</blockquote>
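<p>As a capture-friendly sketch of the same idea (the <code>;</code> separator assumes a POSIX shell such as /bin/sh, per the quote above):</p>

```python
import subprocess

# shell=True runs the whole string through one shell, so the commands
# execute one after another in that single shell session.
p = subprocess.Popen('echo start; echo mid; echo end',
                     shell=True, stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.decode())
```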
| 0 | 2016-09-27T10:35:16Z | [
"python",
"subprocess",
"python-3.5",
"python-2.6"
]
|
How to run multiple commands synchronously from one subprocess.Popen command? | 39,721,924 | <p>Is it possible to execute an arbitrary number of commands in sequence using the same subprocess command?</p>
<p>I need each command to wait for the previous one to complete before executing and I need them all to be executed in the same session/shell. I also need this to work in Python 2.6, Python 3.5. I also need the subprocess command to work in Linux, Windows and macOS (which is why I'm just using <code>echo</code> commands as examples here).</p>
<p>See <strong>non-working</strong> code below:</p>
<pre><code>import sys
import subprocess
cmds = ['echo start', 'echo mid', 'echo end']
p = subprocess.Popen(cmd=tuple([item for item in cmds]),
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
sys.stdout.flush()
print(">>> " + line.rstrip())
</code></pre>
<p>If this is not possible, which approach should I take in order to execute my commands in synchronous sequence within the same session/shell?</p>
| 0 | 2016-09-27T10:10:09Z | 39,722,695 | <p>This one works in Python 2.7 and should also work on Windows. Some small refinement is probably needed for Python 3.</p>
<p>The produced output is (using date and sleep, it is easy to see that the commands are executed in sequence):</p>
<pre><code>>>>Die Sep 27 12:47:52 CEST 2016
>>>
>>>Die Sep 27 12:47:54 CEST 2016
</code></pre>
<p>As you see the commands are executed in a row. </p>
<pre><code> import sys
import subprocess
import shlex
cmds = ['date', 'sleep 2', 'date']
cmds = [shlex.split(x) for x in cmds]
outputs =[]
for cmd in cmds:
outputs.append(subprocess.Popen(cmd,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate())
for line in outputs:
print ">>>" + line[0].strip()
</code></pre>
<h3>This is what I obtain merging with @Marichyasana answer:</h3>
<pre><code>import sys
import os
def run_win_cmds(cmds):
@Marichyasana code (+/-)
def run_unix_cmds(cmds):
import subprocess
import shlex
cmds = [shlex.split(x) for x in cmds]
outputs =[]
for cmd in cmds:
outputs.append(subprocess.Popen(cmd,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate())
rc = ''
for line in outputs:
rc += line[0].strip()+'\n'
return rc
cmds = ['date', 'sleep 2', 'date']
if os.name == 'nt':
run_win_cmds(cmds)
elif os.name == 'posix':
run_unix_cmds(cmds)
</code></pre>
<p>Ask if this one does not fit your needs! ;)</p>
| 0 | 2016-09-27T10:49:43Z | [
"python",
"subprocess",
"python-3.5",
"python-2.6"
]
|
How to run multiple commands synchronously from one subprocess.Popen command? | 39,721,924 | <p>Is it possible to execute an arbitrary number of commands in sequence using the same subprocess command?</p>
<p>I need each command to wait for the previous one to complete before executing and I need them all to be executed in the same session/shell. I also need this to work in Python 2.6, Python 3.5. I also need the subprocess command to work in Linux, Windows and macOS (which is why I'm just using <code>echo</code> commands as examples here).</p>
<p>See <strong>non-working</strong> code below:</p>
<pre><code>import sys
import subprocess
cmds = ['echo start', 'echo mid', 'echo end']
p = subprocess.Popen(cmd=tuple([item for item in cmds]),
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
sys.stdout.flush()
print(">>> " + line.rstrip())
</code></pre>
<p>If this is not possible, which approach should I take in order to execute my commands in synchronous sequence within the same session/shell?</p>
| 0 | 2016-09-27T10:10:09Z | 39,725,789 | <p>Here is a function (and main to run it) that I use. I would say that you can use it for your problem. And it is flexible.</p>
<pre><code># processJobsInAList.py
# 2016-09-27 7:00:00 AM Central Daylight Time
import win32process, win32event
def CreateMyProcess2(cmd):
''' create process width no window that runs a command with arguments
and returns the process handle'''
si = win32process.STARTUPINFO()
info = win32process.CreateProcess(
None, # AppName
cmd, # Command line
None, # Process Security
None, # Thread Security
0, # inherit Handles?
win32process.NORMAL_PRIORITY_CLASS,
None, # New environment
None, # Current directory
si) # startup info
# info is tuple (hProcess, hThread, processId, threadId)
return info[0]
if __name__ == '__main__' :
''' create/run a process for each list element in "cmds"
output may be out of order because processes run concurrently '''
cmds=["echo my","echo heart","echo belongs","echo to","echo daddy"]
handles = []
for i in range(len(cmds)):
cmd = 'cmd /c ' + cmds[i]
handle = CreateMyProcess2(cmd)
handles.append(handle)
rc = win32event.WaitForMultipleObjects( handles, 1, -1) # 1 wait for all, -1 wait infinite
print 'return code ',rc
</code></pre>
<p>output:<br>
heart<br>
my<br>
belongs<br>
to<br>
daddy<br>
return code 0 </p>
<p>UPDATE: If you want to run the same process, which will serialize things for you:<br>
1) Remove line: handles.append(handle)<br>
2) Substitute the variable "handle" in place of the list "handles" on the "WaitFor" line<br>
3) Substitute WaitForSingleObject in place of WaitForMultipleObjects</p>
| 1 | 2016-09-27T13:21:28Z | [
"python",
"subprocess",
"python-3.5",
"python-2.6"
]
|
How to run multiple commands synchronously from one subprocess.Popen command? | 39,721,924 | <p>Is it possible to execute an arbitrary number of commands in sequence using the same subprocess command?</p>
<p>I need each command to wait for the previous one to complete before executing and I need them all to be executed in the same session/shell. I also need this to work in Python 2.6, Python 3.5. I also need the subprocess command to work in Linux, Windows and macOS (which is why I'm just using <code>echo</code> commands as examples here).</p>
<p>See <strong>non-working</strong> code below:</p>
<pre><code>import sys
import subprocess
cmds = ['echo start', 'echo mid', 'echo end']
p = subprocess.Popen(cmd=tuple([item for item in cmds]),
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
sys.stdout.flush()
print(">>> " + line.rstrip())
</code></pre>
<p>If this is not possible, which approach should I take in order to execute my commands in synchronous sequence within the same session/shell?</p>
| 0 | 2016-09-27T10:10:09Z | 39,727,561 | <p>If you want to execute many commands one after the other in the same <em>session/shell</em>, you must start a shell and feed it with all the commands, one at a time followed by a new line, and close the pipe at the end. It makes sense if some commands are not true processes but shell commands that could for example change the shell environment.</p>
<p>Example using Python 2.7 under Windows:</p>
<pre><code>encoding = 'latin1'
p = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
for cmd in cmds:
p.stdin.write(cmd + "\n")
p.stdin.close()
print p.stdout.read()
</code></pre>
<p>To have this code run under Linux, you would have to replace <code>cmd.exe</code> with <code>/bin/bash</code> and probably change the encoding to utf8. </p>
<p>For Python 3, you would have to encode the commands and probably decode their output, and to use parentheses with print.</p>
<p>Beware: this can only work for little output. If there was enough output to fill the pipe buffer before closing the stdin pipe, this code would deadlock. A more robust way would be to have a second thread to read the output of the commands to avoid that problem.</p>
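<p>A Python 3 sketch of the same technique, assuming a POSIX system with /bin/sh (on Windows you would feed cmd.exe instead, as above); <code>communicate()</code> writes stdin and drains stdout for you, which also sidesteps the deadlock caveat above:</p>

```python
import subprocess

cmds = ['echo start', 'echo mid', 'echo end']

# One shell process; every command runs in that same session, in order.
p = subprocess.Popen('/bin/sh', stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
script = ('\n'.join(cmds) + '\n').encode()
out, _ = p.communicate(script)  # feeds stdin and reads stdout safely
print(out.decode())
```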
| 2 | 2016-09-27T14:37:26Z | [
"python",
"subprocess",
"python-3.5",
"python-2.6"
]
|