title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How to automatically fill a one2many field values to another one2many field in odoo
| 39,524,632 |
<p>I have two one2many fields referring to a single model, defined in different models.</p>
<p>i.e.,</p>
<pre><code>class modelA(models.Model):
    _name = 'modela'
    fila = fields.One2many('main', 'refa')

class modelB(models.Model):
    _name = 'modelb'
    filb = fields.One2many('main', 'refb')

class main(models.Model):
    _name = 'main'
    name = fields.Char('Name')
    date = fields.Date('Date')
    refa = fields.Many2one('modela')
    refb = fields.Many2one('modelb')
</code></pre>
<p>I will create records in <strong>modela</strong>. This model has a button; on clicking it, I need to copy all values of the <code>fila</code> field to the <code>filb</code> field of <strong>modelb</strong>. How can I do that?</p>
| 0 |
2016-09-16T05:59:43Z
| 39,533,425 |
<p>You need to use <a href="http://odoo-development.readthedocs.io/en/latest/dev/py/x2many.html#x2many-values-filling" rel="nofollow">One2many values filling</a></p>
<p><strong>XML code</strong> </p>
<pre><code><button name="copy2b" type="object" string="COPY"/>
</code></pre>
<p><strong>Python code</strong>: </p>
<pre><code>@api.multi
def copy2b(self):
    for record in self:
        filb_values = [(0, 0, {'name': line.name, 'date': line.date}) for line in record.fila]
        vals = {'filb': filb_values}
        # Just pass these values to `create` or `write` method to save them on `modelb`
</code></pre>
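<p>For completeness, a minimal sketch of that last step, assuming the button should write the lines onto the first existing <code>modelb</code> record (the empty search domain is a placeholder; adapt it to however you select the target record):</p>
<pre><code>@api.multi
def copy2b(self):
    for record in self:
        filb_values = [(0, 0, {'name': line.name, 'date': line.date}) for line in record.fila]
        # Assumption: write onto the first existing modelb record,
        # or create a new one if none exists yet.
        target = self.env['modelb'].search([], limit=1)
        if target:
            target.write({'filb': filb_values})
        else:
            self.env['modelb'].create({'filb': filb_values})
</code></pre>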
| 1 |
2016-09-16T14:01:27Z
|
[
"python",
"odoo-8"
] |
Linear Search using python
| 39,524,718 |
<p>Hi, I am new to Python and I am trying to learn it by writing a linear search. Here is the code I have written so far.</p>
<pre><code>a = [100]
n = int(raw_input("Enter the no. of elements: "))
print "Enter the elements: "
for i in range(0, n):
    print i
item = int(raw_input("Enter the item to be searched: "))
for i in range(0, 10):
    if a[i] == item:
        print "Item ", item, "found at location", i+1
        break;
    if i == n:
        print "Item", item, "not found"
</code></pre>
<p>I get an error saying list index out of range; where am I going wrong? </p>
| -1 |
2016-09-16T06:05:47Z
| 39,524,899 |
<p>You have one element in your list. The second loop tries to access elements 1 through 9, which are out of bounds. </p>
<p>I'd recommend using <code>enumerate</code>, which returns pairs of (index, element) up to the length of the iterable </p>
<pre><code>for i, x in enumerate(a):
    if x == item:
        # found item at position i
</code></pre>
<p>If you want to input a number to generate a list of elements, then </p>
<pre><code>n = int(raw_input("Enter the no. of elements: "))
# build your list of n elements here
for i in range(0, n):
    print i
</code></pre>
<p>For example, <code>a = range(0, n)</code> or import the <code>random</code> module if you don't want a linear range of numbers </p>
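<p>Putting the pieces together, a minimal sketch of the full search (Python 2, to match the question's <code>raw_input</code>; the prompts and the <code>a = range(0, n)</code> list are assumptions carried over from above):</p>
<pre><code>n = int(raw_input("Enter the no. of elements: "))
a = range(0, n)  # build a list of n elements
item = int(raw_input("Enter the item to be searched: "))
for i, x in enumerate(a):
    if x == item:
        print "Item", item, "found at location", i + 1
        break
else:
    # the for/else clause runs only if the loop finished without break
    print "Item", item, "not found"
</code></pre>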
| 1 |
2016-09-16T06:19:32Z
|
[
"python"
] |
Linear Search using python
| 39,524,718 |
<p>Hi, I am new to Python and I am trying to learn it by writing a linear search. Here is the code I have written so far.</p>
<pre><code>a = [100]
n = int(raw_input("Enter the no. of elements: "))
print "Enter the elements: "
for i in range(0, n):
    print i
item = int(raw_input("Enter the item to be searched: "))
for i in range(0, 10):
    if a[i] == item:
        print "Item ", item, "found at location", i+1
        break;
    if i == n:
        print "Item", item, "not found"
</code></pre>
<p>I get an error saying list index out of range; where am I going wrong? </p>
| -1 |
2016-09-16T06:05:47Z
| 39,525,107 |
<p>There are some errors in your code; I have rearranged it:</p>
<pre><code>a = []
n = int(raw_input("Enter the no. of elements: "))
print "Enter the elements: "
for i in range(0, n):
    a.append(i)
    print (i)
item = int(raw_input("Enter the item to be searched: "))
found = False
for i in range(0, len(a)):
    if a[i] == item:
        found = True
        print "Item ", item, "found at location", i+1
        break
if (not found):
    print "Item", item, "not found"
</code></pre>
<p>Please note the following points:</p>
<ol>
<li>Use lists and not arrays: <code>a = []</code></li>
<li>Use the list operations: e.g. <code>a.append(i)</code> to append an element at the end of a list</li>
<li>Use boolean flags (like <code>found</code>) to track whether the item was found</li>
<li>Discover all the python functionalities ;)</li>
</ol>
| 0 |
2016-09-16T06:33:48Z
|
[
"python"
] |
Linear Search using python
| 39,524,718 |
<p>Hi, I am new to Python and I am trying to learn it by writing a linear search. Here is the code I have written so far.</p>
<pre><code>a = [100]
n = int(raw_input("Enter the no. of elements: "))
print "Enter the elements: "
for i in range(0, n):
    print i
item = int(raw_input("Enter the item to be searched: "))
for i in range(0, 10):
    if a[i] == item:
        print "Item ", item, "found at location", i+1
        break;
    if i == n:
        print "Item", item, "not found"
</code></pre>
<p>I get an error saying list index out of range; where am I going wrong? </p>
| -1 |
2016-09-16T06:05:47Z
| 39,529,873 |
<p>Slightly different (note the <code>else</code> clause on the <code>for</code> loop):</p>
<pre><code>a = []
n = int(input("enter number of elements: "))
print("appending elements: ")
for i in range(0, n):
    a.append(i)
    print(i)
item = int(input("enter the item to be searched: "))
# for python2.x please use raw_input
for i in range(len(a)):
    print("searching...")
    if a[i] == item:
        print("found! at: %s" % i)
        break
else:
    print("pretty bad day, nothing found!")
</code></pre>
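<p>The <code>else</code> branch here belongs to the <code>for</code> loop, not the <code>if</code>: it runs only when the loop completes without hitting <code>break</code>, which replaces the manual <code>found</code> flag used in the other answer.</p>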
| 0 |
2016-09-16T11:01:34Z
|
[
"python"
] |
C++ and Python 3 memory leak using PyArg_ParseTuple
| 39,524,724 |
<p>I'm not a C++ developer, so I don't really know what I'm doing. Unfortunately I have to debug the following code, but I'm not making any progress.</p>
<pre><code>static PyObject* native_deserialize(PyObject *self, PyObject *args){
    PyObject * pycontent;
    int len;
    PyObject * props = NULL;
    PyArg_ParseTuple(args, "|SiO", &pycontent, &len, &props);
    RecordParser reader("onet_ser_v0");
    TrackerListener* listener;
    listener = new TrackerListener(props);
#if PY_MAJOR_VERSION >= 3
    reader.parse((unsigned char*)PyBytes_AsString(pycontent), len, *listener);
#else
    reader.parse((unsigned char*)PyString_AsString(pycontent), len, *listener);
#endif
    return listener->obj;
}
</code></pre>
<p>Here is the Python that calls that code:</p>
<pre><code>clsname, data = pyorient_native.deserialize(content,
                                            content.__sizeof__(), self.props)
</code></pre>
<p>This code creates a nasty memory leak. In fact, when I run this code, it kills my memory within 20 minutes.</p>
<p>I looked at the code but can't find the problem in the C++. </p>
<p>How can I prevent rogue C++ code from killing my Python code? Is there a way to flag C++ code from within python to be recycled regardless whether the C++ created a memory leak?</p>
<p>Is there a way I can force the memory to be garbage collected in C++. How can I find the exact leak in C++ by running python?</p>
<p>My biggest issue is understanding Py_XDECREF and Py_XINCREF and the rest of the reference counting macros. I'm reading the docs but obviously I'm missing some context because I can't figure out where and when to use these. I have loads of respect for C++ developers. Their jobs seem unnecessarily difficult :(</p>
| 1 |
2016-09-16T06:05:55Z
| 39,615,048 |
<p>It turns out the solution was to Py_XDECREF all the created objects. I still don't know exactly how, why and where, as much of this still doesn't make sense to me.</p>
<p>I found this page that points out some of the pitfalls of these macros.</p>
<p><a href="https://wingware.com/psupport/python-manual/2.3/ext/node22.html" rel="nofollow">https://wingware.com/psupport/python-manual/2.3/ext/node22.html</a></p>
<p>There is the documentation but that wasn't very helpful.</p>
<p><a href="https://docs.python.org/3/c-api/refcounting.html" rel="nofollow">https://docs.python.org/3/c-api/refcounting.html</a></p>
<p>Maybe someone can share something else that is easier to consume for us non C++ peoplez?</p>
| 0 |
2016-09-21T11:16:54Z
|
[
"python",
"c++"
] |
Reading CSV with multiprocessing pool is taking longer than CSV reader
| 39,524,744 |
<p>For one of our clients' requirements, I have to develop an application which should be able to process huge CSV files. File sizes could be in the range of 10 MB - 2 GB.</p>
<p>Depending on the size, the module decides whether to read the file using a <code>multiprocessing pool</code> or a normal <code>CSV reader</code>.
But from observation, <code>multiprocessing</code> takes longer than normal <code>CSV reading</code> when both modes are tested on a 100 MB file. </p>
<p>Is this correct behaviour? Or am I doing something wrong?</p>
<p>Here is my code:</p>
<pre><code>def set_file_processing_mode(self, fpath):
    """ """
    fsize = self.get_file_size(fpath)
    if fsize > FILE_SIZE_200MB:
        self.read_in_async_mode = True
    else:
        self.read_in_async_mode = False

def read_line_by_line(self, filepath):
    """Reads CSV line by line"""
    with open(filepath, 'rb') as csvin:
        csvin = csv.reader(csvin, delimiter=',')
        for row in iter(csvin):
            yield row

def read_huge_file(self, filepath):
    """Read file in chunks"""
    pool = mp.Pool(1)
    for chunk_number in range(self.chunks):  # self.chunks = 20
        proc = pool.apply_async(read_chunk_by_chunk,
                                args=[filepath, self.chunks, chunk_number])
        reader = proc.get()
        yield reader
    pool.close()
    pool.join()

def iterate_chunks(self, filepath):
    """Read huge file rows"""
    for chunklist in self.read_huge_file(filepath):
        for row in chunklist:
            yield row

@timeit  # -- custom decorator
def read_csv_rows(self, filepath):
    """Read CSV rows and pass it to processing"""
    if self.read_in_async_mode:
        print("Reading in async mode")
        for row in self.iterate_chunks(filepath):
            self.process(row)
    else:
        print("Reading in sync mode")
        for row in self.read_line_by_line(filepath):
            self.process(row)

def process(self, formatted_row):
    """Just prints the line"""
    self.log(formatted_row)

def read_chunk_by_chunk(filename, number_of_blocks, block):
    '''
    A generator that splits a file into blocks and iterates
    over the lines of one of the blocks.
    '''
    results = []
    assert 0 <= block and block < number_of_blocks
    assert 0 < number_of_blocks
    with open(filename) as fp:
        fp.seek(0, 2)
        file_size = fp.tell()
        ini = file_size * block / number_of_blocks
        end = file_size * (1 + block) / number_of_blocks
        if ini <= 0:
            fp.seek(0)
        else:
            fp.seek(ini - 1)
            fp.readline()
        while fp.tell() < end:
            results.append(fp.readline())
    return results

if __name__ == '__main__':
    classobj.read_csv_rows(sys.argv[1])
</code></pre>
<p>Here is a test: </p>
<pre><code>$ python csv_utils.py "input.csv"
Reading in async mode
FINISHED IN 3.75 sec
$ python csv_utils.py "input.csv"
Reading in sync mode
FINISHED IN 0.96 sec
</code></pre>
<p>The question is: </p>
<p>Why is async mode taking longer?</p>
<p><strong>NOTE:</strong> Removed unnecessary functions/lines to avoid complexity in the code </p>
| 1 |
2016-09-16T06:07:42Z
| 39,525,703 |
<blockquote>
<p>Is this correct behaviour? </p>
</blockquote>
<p>Yes - it may not be what you expect, but it is consistent with the way you implemented it and how <code>multiprocessing</code> works. </p>
<blockquote>
<p>Why is async mode taking longer?</p>
</blockquote>
<p>The way your example works is perhaps best illustrated by a parable - bear with me please: </p>
<p>Let's say you ask your friend to engage in an experiment. You want him to go through a book and mark each page with a pen, as fast as he can. There are two rounds with a distinct setup, and you are going to time each round and then compare which one was faster:</p>
<ol>
<li><p>open the book on the first page, mark it, then flip the page and mark the following pages as they come up. Pure sequential processing.</p></li>
<li><p>process the book in chunks. For this he should run through the book's pages chunk by chunk. That is he should first make a list of page numbers
as starting points, say 1, 10, 20, 30, 40, etc. Then for each chunk, he should close the book, open it on the page for the starting point, process all pages before the next starting point comes up, close the book, then start all over again for the next chunk.</p></li>
</ol>
<p>Which of these approaches will be faster? </p>
<blockquote>
<p>Am I doing something wrong?</p>
</blockquote>
<p>You decide both approaches take too long. What you really want to do is ask <em>multiple</em> people (processes) to do the marking <em>in parallel</em>. Now with a book (as with a file) that's difficult because, well, only one person (process) can access the book (file) at any one point. Still it can be done if the order of processing doesn't matter and it is the marking itself - not the accessing - that should run in parallel. So the new approach is like this:</p>
<ol>
<li>cut the pages out of the book and sort them into say 10 stacks</li>
<li>ask ten people to mark one stack each</li>
</ol>
<p>This approach will most certainly speed up the whole process. Perhaps surprisingly though the speed up will be less than a factor of 10 because step 1 takes some time, and only one person can do it. That's called <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law" rel="nofollow">Amdahl's law</a> [wikipedia]:</p>
<p><a href="http://i.stack.imgur.com/800RH.gif" rel="nofollow"><img src="http://i.stack.imgur.com/800RH.gif" alt="$$ S_\text{latency}(s) = \frac{1}{(1 - p) + \frac{p}{s}}"></a></p>
<p>Essentially what it means is that the (theoretical) speed-up of the whole process is limited by the fraction <em>p</em> that can be parallelised, and by the speed-up <em>s</em> achieved on that parallel fraction (the <em>p/s</em> term). </p>
<p>Intuitively, the speed-up can only come from the part of the task that is processed in parallel, all the sequential parts are not affected and take the same amount of time, whether <em>p</em> is processed in parallel or not. </p>
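<p>To make that concrete: if <em>p</em> = 0.9 of the work can be parallelised and it runs on <em>s</em> = 10 workers, the overall speed-up is 1 / ((1 - 0.9) + 0.9/10) = 1 / 0.19 ≈ 5.3, well short of 10, because the sequential 10% dominates.</p>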
<p>That said, in our example, obviously the speed-up can only come from step 2 (marking pages in parallel by multiple people), as step 1 (tearing up the book) is clearly sequential.</p>
<blockquote>
<p>develop an application which should be able to process huge CSV files</p>
</blockquote>
<p>Here's how to approach this:</p>
<ol>
<li>determine what part of the <em>processing</em> can be done in parallel, i.e. process each chunk separately and out of sequence </li>
<li>read the file sequentially, splitting it up into chunks as you go</li>
<li>use multiprocessing to run <em>multiple</em> processing steps <em>in parallel</em></li>
</ol>
<p>Something like this:</p>
<pre><code>def process(rows):
    # do all the processing
    ...
    return result

if __name__ == '__main__':
    pool = mp.Pool(N)  # N > 1
    chunks = get_chunks(...)
    # schedule the chunks, then collect the results once the pool is done
    async_results = [pool.apply_async(process, (rows,)) for rows in chunks]
    pool.close()
    pool.join()
    results = [r.get() for r in async_results]
</code></pre>
<p>I'm not defining <code>get_chunks</code> here because there are several documented approaches to doing this e.g. <a href="https://gist.github.com/miku/820490" rel="nofollow">here</a> or <a href="http://stackoverflow.com/a/32743290/890242">here</a>.</p>
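<p>For illustration only, one possible <code>get_chunks</code> sketch (an assumption, not one of the linked approaches): read the CSV sequentially and yield lists of up to <code>chunk_size</code> rows using <code>itertools.islice</code>:</p>
<pre><code>import csv
import itertools

def get_chunks(filepath, chunk_size=10000):
    """Yield successive lists of up to chunk_size rows, read sequentially."""
    with open(filepath, 'r') as f:
        reader = csv.reader(f)
        while True:
            chunk = list(itertools.islice(reader, chunk_size))
            if not chunk:
                break
            yield chunk
</code></pre>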
<p><strong>Conclusion</strong></p>
<p>Depending on the kind of processing required for each file, it may well be that the sequential approach to processing any one file is the fastest possible approach, simply because the processing parts don't gain much from being done in parallel. You may still end up processing it chunk by chunk due to e.g. memory constraints. If that is the case, you probably don't need multiprocessing.</p>
<p>If you have <em>multiple files</em> that can be processed in parallel,
multiprocessing is a very good approach. It works the same way as shown above, where the chunks are not rows but filenames.</p>
| 2 |
2016-09-16T07:12:19Z
|
[
"python",
"csv",
"python-multiprocessing"
] |
How to turn decimals into hex without prefix `0x`
| 39,524,943 |
<pre><code>def tohex(r, g, b):
    #your code here :)
    def hex1(decimal):
        if decimal < 0:
            return '00'
        elif decimal > 255:
            return 'FF'
        elif decimal < 17:
            return '0' + hex(decimal)[2:]
        else:
            return inthex(decimal)[2:]
    return (hex1(r) + hex1(g) + hex1(b)).upper()

print rgb(16, 159, -137)
</code></pre>
<p>I defined a helper function to get my hex numbers. But when it comes to (16, 159, -137), I get <code>0109F00</code> instead of <code>019F00</code>. Why is there an extra 0?</p>
| 1 |
2016-09-16T06:22:32Z
| 39,525,045 |
<p>You have an extra zero because the line should be <code>elif decimal < 16</code> not <code>17</code>. </p>
<p>Use format strings<sup>1</sup>:</p>
<pre><code>def rgb(r, g, b):
    def hex1(d):
        return '{:02X}'.format(0 if d < 0 else 255 if d > 255 else d)
    return hex1(r) + hex1(g) + hex1(b)

print rgb(16, 159, -137)
</code></pre>
<p>Output:</p>
<pre><code>109F00
</code></pre>
<p><sup>1</sup><a href="https://docs.python.org/2.7/library/string.html#format-specification-mini-language" rel="nofollow">https://docs.python.org/2.7/library/string.html#format-specification-mini-language</a></p>
| 4 |
2016-09-16T06:29:41Z
|
[
"python",
"python-2.7",
"hex"
] |
How to use/import PyCrypto in Django?
| 39,524,978 |
<p>I am trying to use PyCrypto with Django. I import it like this:</p>
<pre><code>from Crypto.Cipher import AES
</code></pre>
<p>But it says:</p>
<pre><code>ImportError: No module named 'Crypto'
</code></pre>
<p>But when I try it using the Command Prompt, it is working.</p>
<p>Other details (if it can help):</p>
<p>I am using Eclipse Luna with PyDev installed. My OS is Windows 7, 32-bit.</p>
| 0 |
2016-09-16T06:24:44Z
| 39,530,336 |
<p>Using pip, you can install pycrypto in the virtualenv of your Django project.</p>
<pre><code>pip install pycrypto
</code></pre>
<p>Then use <code>from Crypto.Cipher import AES</code> in whichever views.py file needs it. This works with <code>Django==1.9</code> and <code>python<=3.4</code>.</p>
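<p>To sanity-check the install, a minimal PyCrypto usage sketch (the key and IV below are placeholder values, and CBC mode is just one example mode):</p>
<pre><code>from Crypto.Cipher import AES

key = b'0123456789abcdef'   # 16-byte key (placeholder)
iv = b'fedcba9876543210'    # 16-byte IV (placeholder)
cipher = AES.new(key, AES.MODE_CBC, iv)
# plaintext length must be a multiple of the 16-byte block size
ciphertext = cipher.encrypt(b'16-byte aligned.')
print(repr(ciphertext))
</code></pre>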
| 0 |
2016-09-16T11:26:13Z
|
[
"python",
"django"
] |
Configure Spark for a given cluster
| 39,525,214 |
<p>I have to submit some applications in Python to an Apache Spark cluster. I am given a cluster manager and some worker nodes, with the addresses to send the applications to.</p>
<p>My question is: how do I set up and configure Spark on my local computer to send those applications, along with the data to be processed, to the cluster?</p>
<p>I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I found is about building the cluster itself, or old advice that is out of date.</p>
| 0 |
2016-09-16T06:41:49Z
| 39,530,209 |
<p>Your question is unclear. If the data are on your local machine, you should first copy them to the cluster's HDFS filesystem. Spark can work in 3 modes with YARN (are you using YARN or Mesos?): cluster, client and standalone. What you are looking for is client mode or cluster mode. But if you want to start the application from your local machine, use client mode. If you have SSH access, you are free to use both.</p>
<p>The simplest way is to copy your code directly onto the cluster, if it is properly configured, then start the application with the <code>./spark-submit</code> script, providing the class to use as an argument. It works with Python scripts and Java/Scala classes (I only use Python, so I can't say much about the latter).</p>
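<p>For illustration, a typical client-mode invocation from the machine holding the code (the master URL, port and file name are placeholders for your own cluster manager's address):</p>
<pre><code>./bin/spark-submit \
  --master spark://cluster-manager-host:7077 \
  --deploy-mode client \
  my_app.py
</code></pre>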
| 0 |
2016-09-16T11:19:52Z
|
[
"java",
"python",
"scala",
"apache-spark",
"pyspark"
] |
Configure Spark for a given cluster
| 39,525,214 |
<p>I have to submit some applications in Python to an Apache Spark cluster. I am given a cluster manager and some worker nodes, with the addresses to send the applications to.</p>
<p>My question is: how do I set up and configure Spark on my local computer to send those applications, along with the data to be processed, to the cluster?</p>
<p>I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I found is about building the cluster itself, or old advice that is out of date.</p>
| 0 |
2016-09-16T06:41:49Z
| 39,535,098 |
<p>I assume your remote cluster is running and you are able to submit jobs on it from the remote server itself. What you need then is SSH tunneling. Keep in mind that it does not work with AWS.</p>
<pre><code>ssh -f user@personal-server.com -L 2000:personal-server.com:7077 -N
</code></pre>
<p>read more here: <a href="http://www.revsys.com/writings/quicktips/ssh-tunnel.html" rel="nofollow">http://www.revsys.com/writings/quicktips/ssh-tunnel.html</a></p>
| 0 |
2016-09-16T15:25:17Z
|
[
"java",
"python",
"scala",
"apache-spark",
"pyspark"
] |
Searching and reading from file with python
| 39,525,334 |
<p>I am trying to search for specific words in a file and print them.</p>
<p>Here is my code:</p>
<pre><code>import os  # os directory library

# Searching for a keyword Name and returning the name if found
def scanName(file):
    name = 'Fake'
    with open('file.txt', 'r') as file1:
        for line in file1:
            for word in line.split():
                temp = word
                if temp.lower() == 'name'.lower():
                    name = word[word.index("name") + 1]
    return name

# To find all files ending with txt in a folder
for file in os.listdir("C:\Users\Vadim\Desktop\Python"):
    if file.endswith(".txt"):
        print scanName( file )
</code></pre>
<p>Now the function returns the name as 'Fake', although I do have names in my txt files.</p>
<p>There are two txt files, each containing the string "name: some name".</p>
<p>How do I fix it?</p>
<p>Thanks!</p>
| 0 |
2016-09-16T06:49:23Z
| 39,525,389 |
<p>Replace <code>'name'.lower()</code> with <code>name.lower()</code>, because you are now checking with the <em>string</em> <code>name</code>, instead of the <em>variable</em> <code>name</code>.</p>
| 1 |
2016-09-16T06:53:29Z
|
[
"python",
"python-2.7"
] |
Searching and reading from file with python
| 39,525,334 |
<p>I am trying to search for specific words in a file and print them.</p>
<p>Here is my code:</p>
<pre><code>import os  # os directory library

# Searching for a keyword Name and returning the name if found
def scanName(file):
    name = 'Fake'
    with open('file.txt', 'r') as file1:
        for line in file1:
            for word in line.split():
                temp = word
                if temp.lower() == 'name'.lower():
                    name = word[word.index("name") + 1]
    return name

# To find all files ending with txt in a folder
for file in os.listdir("C:\Users\Vadim\Desktop\Python"):
    if file.endswith(".txt"):
        print scanName( file )
</code></pre>
<p>Now the function returns the name as 'Fake', although I do have names in my txt files.</p>
<p>There are two txt files, each containing the string "name: some name".</p>
<p>How do I fix it?</p>
<p>Thanks!</p>
| 0 |
2016-09-16T06:49:23Z
| 39,527,072 |
<p>It might be easier not to check line by line and word by word; instead, simply check for the word with <code>if (word) in</code>:</p>
<pre><code>import os  # os directory library

# Searching for a keyword Name and returning the name if found
def scanName(file):
    name = 'larceny'
    with open(file, 'r') as file1:
        lines = file1.read()
    if name in lines.lower():
        return name

# To find all files ending with txt in a folder
for file in os.listdir("C:\Users\Vadim\Desktop\Python"):
    if file.endswith(".txt"):
        if scanName(file):
            print( file )
</code></pre>
<p>Here the entire contents of the file are read in as the variable <code>lines</code>; we check for the searched-for word and return the name if it is found, although we could just return <code>True</code>.<br>
If the function returns a result, we print the name of the file.</p>
| 0 |
2016-09-16T08:37:10Z
|
[
"python",
"python-2.7"
] |
Neural network accuracy optimization
| 39,525,358 |
<p>I have constructed an ANN in Keras which has 1 input layer (3 inputs), one output layer (1 output) and two hidden layers with 12 and 3 nodes respectively.</p>
<p><a href="http://i.stack.imgur.com/PkYfu.png" rel="nofollow"><img src="http://i.stack.imgur.com/PkYfu.png" alt="enter image description here"></a></p>
<p>The way I construct and train my network is:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:3]
Y = dataset[:,3]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=3, init='uniform', activation='relu'))
model.add(Dense(3, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=150, batch_size=10)
</code></pre>
<p>Sorted output csv file looks like:</p>
<p><a href="http://i.stack.imgur.com/CIKn3.png" rel="nofollow"><img src="http://i.stack.imgur.com/CIKn3.png" alt="enter image description here"></a></p>
<p>So after 150 epochs I get: <strong>loss: 0.6932 - acc: 0.5000 - val_loss: 0.6970 - val_acc: 0.1429</strong></p>
<p>My question is: how could I modify my NN in order to achieve higher accuracy?</p>
| 3 |
2016-09-16T06:51:05Z
| 39,525,717 |
<p>You could try the following things. I have written this roughly in the order of importance - i.e. the order I would try things to fix the accuracy problem you are seeing:</p>
<ol>
<li><p>Normalise your input data. Usually you would take the mean and standard deviation of the training data, and use them to offset+scale all further inputs. There is a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html" rel="nofollow">standard normalising function in sklearn</a> for this. Remember to treat your test data in the same way (using the mean and std from the training data, not recalculating it); see the sketch after this list.</p></li>
<li><p>Train for more epochs. For problems with small numbers of features and limited training set sizes, you often have to run for thousands of epochs before the network will converge. You should plot the training and validation loss values to see whether the network is still learning, or has converged as best as it can.</p></li>
<li><p>For your simple data, I would avoid relu activations. You may have heard they are somehow "best", but like most NN options, they have types of problems where they work well, and others where they are not best choice. I think you would be better off with tanh or sigmoid activations in hidden layers for your problem. Save relu for very deep networks and/or convolutional problems on images/audio.</p></li>
<li><p>Use more training data. Not clear how much you are feeding it, but NNs work best with large amounts of training data.</p></li>
<li><p>Provided you already have lots of training data - increase size of hidden layers. More complex relationships require more hidden neurons (and sometimes more layers) for the NN to be able to express the "shape" of the decision surface. <a href="http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html" rel="nofollow">Here is a handy browser-based network allowing you to play with that idea and get a feel for it</a>. </p></li>
<li><p>Add one or more <a href="https://keras.io/layers/core/#dropout" rel="nofollow">dropout layers</a> after the hidden layers or add some other regularisation. The network could be over-fitting (although with a training accuracy of 0.5 I suspect it isn't). Unlike relu, using dropout is pretty close to a panacea for tougher NN problems - it improves generalisation in many cases. A small amount of dropout (~0.2) might help with your problem, but like most hyper-parameters, you will need to search for the best values.</p></li>
</ol>
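<p>As an illustration of points 1, 3 and 6 combined, a sketch of how the model definition above might change (reusing <code>X_train</code>, <code>X_test</code>, <code>y_train</code>, <code>y_test</code> from the question's code; the tanh activations, 0.2 dropout rate and 2000-epoch count are assumptions to tune, not prescriptions):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.preprocessing import StandardScaler

# point 1: fit the scaler on training data only, reuse it for the test data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# points 3 and 6: tanh activations in the hidden layers, plus a little dropout
model = Sequential()
model.add(Dense(12, input_dim=3, init='uniform', activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(3, init='uniform', activation='tanh'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# point 2: more epochs, since the data set and feature count are small
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          nb_epoch=2000, batch_size=10)
</code></pre>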
<p>Finally, it is always possible that the relationship you want to find that allows you to predict Y from X is not really there. In which case it would be a correct result from the NN to be no better than guessing at Y.</p>
| 4 |
2016-09-16T07:13:20Z
|
[
"python",
"neural-network",
"theano",
"keras"
] |
Neural network accuracy optimization
| 39,525,358 |
<p>I have constructed an ANN in Keras which has 1 input layer (3 inputs), one output layer (1 output) and two hidden layers with 12 and 3 nodes respectively.</p>
<p><a href="http://i.stack.imgur.com/PkYfu.png" rel="nofollow"><img src="http://i.stack.imgur.com/PkYfu.png" alt="enter image description here"></a></p>
<p>The way I construct and train my network is:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:3]
Y = dataset[:,3]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=3, init='uniform', activation='relu'))
model.add(Dense(3, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=150, batch_size=10)
</code></pre>
<p>Sorted output csv file looks like:</p>
<p><a href="http://i.stack.imgur.com/CIKn3.png" rel="nofollow"><img src="http://i.stack.imgur.com/CIKn3.png" alt="enter image description here"></a></p>
<p>So after 150 epochs I get: <strong>loss: 0.6932 - acc: 0.5000 - val_loss: 0.6970 - val_acc: 0.1429</strong></p>
<p>My question is: how could I modify my NN in order to achieve higher accuracy?</p>
| 3 |
2016-09-16T06:51:05Z
| 39,526,036 |
<p>Neil Slater already provided a long list of helpful general advice.</p>
<p>In your specific example, normalization is the important thing. If you add the following lines to your code</p>
<pre><code>...
X = dataset[:,0:3]
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(X)
</code></pre>
<p>you will get 100% accuracy on your toy data, even with much simpler network structures. Without normalization, the optimizer won't work.</p>
| 3 |
2016-09-16T07:30:20Z
|
[
"python",
"neural-network",
"theano",
"keras"
] |
Is there a way of getting an item from a list by searching its position in python
| 39,525,375 |
<p>If I have a list of items, e.g. <code>words = ['Hi', "I'm", 'a', 'computer']</code>, is there any way I can get an item by its index, so it would look something like this...</p>
<p>Input:</p>
<blockquote>
<p>print(words.index(2))</p>
</blockquote>
<p>Output:</p>
<blockquote>
<p>I'm</p>
</blockquote>
<p>So in conclusion, I am looking for a way to reverse the index() function and use the index to find an object, rather than an object to find the index.</p>
| -3 |
2016-09-16T06:52:18Z
| 39,525,416 |
<p>Lists in python are like arrays in Java, C or other languages. Just index them.</p>
<pre><code>words[1]
</code></pre>
| 1 |
2016-09-16T06:55:34Z
|
[
"python",
"indexing"
] |
Is there a way of getting an item from a list by searching its position in python
| 39,525,375 |
<p>If I have a list of items, e.g. <code>words = ['Hi', "I'm", 'a', 'computer']</code>, is there any way I can get an item by its index, so it would look something like this...</p>
<p>Input:</p>
<blockquote>
<p>print(words.index(2))</p>
</blockquote>
<p>Output:</p>
<blockquote>
<p>I'm</p>
</blockquote>
<p>So in conclusion, I am looking for a way to reverse the index() function and use the index to find an object, rather than an object to find the index.</p>
| -3 |
2016-09-16T06:52:18Z
| 39,525,421 |
<p>You could just use the following syntax to access a certain element of your list:</p>
<pre><code>words[1]
</code></pre>
<p>Output:</p>
<pre><code>I'm
</code></pre>
<p>Note that the index starts at 0.</p>
| 3 |
2016-09-16T06:55:40Z
|
[
"python",
"indexing"
] |
How to print csv rows in ascending order Python
| 39,525,469 |
<p>I am trying to read a CSV file, parse the data, and return a row only if its (start_date) value is before September 6, 2010, then print the corresponding values from the (words) column in ascending order. I can accomplish the first half using the following: </p>
<pre><code>import csv

with open('sample_data.csv', 'rb') as f:
    read = csv.reader(f, delimiter=',')
    for row in read:
        if row[13] <= '1283774400':
            print(row[13] + "\t \t" + row[16])
</code></pre>
<p>It returns the correct start_date range and the corresponding word column values, but they do not come back in ascending order as they should.</p>
<p>I have tried the sort() and sorted() functions, after creating an empty list to populate and appending the rows to it, but I am just not sure where or how to incorporate that into the existing code, and have been terribly unsuccessful. Any help would be greatly appreciated. </p>
| 0 |
2016-09-16T06:58:03Z
| 39,525,636 |
<p>Just read the rows, filter them according to the <code>< date</code> criterion, and sort them by column 13 <em>as integers</em>.</p>
<p>Note that a common mistake would be to compare as ASCII strings (which may appear to work), but integer conversion is really required to avoid sort problems.</p>
<pre><code>import csv

with open('sample_data.csv', 'r') as f:
    read = csv.reader(f, delimiter=',')
    # csv has a title, we have to skip it (comment out if no title)
    title_row = next(read)
    # read csv and filter to keep only the earlier rows
    lines = filter(lambda row: int(row[13]) < 1283774400, read)
    # sort the filtered list according to column 13, as numerical
    slist = sorted(lines, key=lambda row: int(row[13]))
    # print the result, including the title line
    for row in [title_row] + slist:
        #print(row[13] + "\t \t" + row[16])
        print(row)
</code></pre>
| 0 |
2016-09-16T07:07:41Z
|
[
"python",
"sorting",
"csv"
] |
UnicodeDecodeError while reading a custom created corpus in NLTK
| 39,525,684 |
<p>I made custom corpus for detecting polarity of sentences using nltk module. Here is the hierarchy of the corpus:</p>
<p>polar<br>
--polar<br>
----polar_tweets.txt<br>
--nonpolar<br>
----nonpolar_tweets.txt<br></p>
<p>And here is how I am importing that corpus in my source code:</p>
<pre><code>polarity = LazyCorpusLoader('polar', CategorizedPlaintextCorpusReader, r'(?!\.).*\.txt', cat_pattern=r'(polar|nonpolar)/.*', encoding='utf-8')
corpus = polarity
print(corpus.words(fileids=['nonpolar/non-polar.txt']))
</code></pre>
<p>but it raises the following error:</p>
<pre><code>Traceback (most recent call last):
  File "E:/Analytics Practice/Social Media Analytics/analyticsPlatform/DataAnalysis/SentimentAnalysis/data/training_testing_data.py", line 9, in <module>
    print(corpus.words(fileids=['nonpolar/nonpolar_tweets.txt']))
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\util.py", line 765, in __repr__
    for elt in self:
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\corpus\reader\util.py", line 291, in iterate_from
    tokens = self.read_block(self._stream)
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\corpus\reader\plaintext.py", line 122, in _read_word_block
    words.extend(self._word_tokenizer.tokenize(stream.readline()))
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\data.py", line 1135, in readline
    new_chars = self._read(readsize)
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\data.py", line 1367, in _read
    chars, bytes_decoded = self._incr_decode(bytes)
  File "E:\Analytics Practice\Social Media Analytics\analyticsPlatform\lib\site-packages\nltk\data.py", line 1398, in _incr_decode
    return self.decode(bytes, 'strict')
  File "C:\Users\prabhjot.rai\AppData\Local\Continuum\Anaconda3\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 269: invalid continuation byte
</code></pre>
<p>While creating the files <code>polar_tweets.txt</code> and <code>nonpolar_tweets.txt</code>, I am decoding the files <code>uncleaned_polar_tweets.txt</code> to <code>utf-8</code> and then writing it to the file <code>polar_tweets.txt</code>. Here is the code for that:</p>
<pre><code>with open(path_to_file, "rb") as file:
output_corpus = clean_text(file.read().decode('utf-8'))['cleaned_corpus']
output_file = open(output_path, "w")
output_file.write(output_corpus)
output_file.close()
</code></pre>
<p>Here <code>output_file</code> is <code>polar_tweets.txt</code> or <code>nonpolar_tweets.txt</code>.
Where is the error? I am encoding in <code>utf-8</code> initially, and then also reading in <code>utf-8</code> via the line</p>
<pre><code>polarity = LazyCorpusLoader('polar', CategorizedPlaintextCorpusReader, r'(?!\.).*\.txt', cat_pattern=r'(polar|nonpolar)/.*', encoding='utf-8')
</code></pre>
<p>If I replace the <code>encoding='utf-8'</code> by <code>encoding='latin-1'</code>, the code works perfect. Where is the issue? Do I need to decode in <code>utf-8</code> also while creating the corpus?</p>
| 0 |
2016-09-16T07:11:02Z
| 39,538,326 |
<p>You need to understand that in Python's model, <code>unicode</code> is a kind of data but <code>utf-8</code> is an <em>encoding.</em> They're not the same thing at all. You're reading your raw text, which is apparently in <code>utf-8</code>; cleaning it, then writing it out to your new corpus <em>without specifying an encoding.</em> So you're writing it out in... who knows what encoding. Don't find out, just clean and generate your corpus again specifying the <code>utf-8</code> encoding. </p>
<p>I hope you're doing all this in Python 3; if not, stop right here and switch to Python 3. Then write out the corpus like this:</p>
<pre><code>output_file = open(output_path, "w", encoding="utf-8")
output_file.write(output_corpus)
output_file.close()
</code></pre>
| 1 |
2016-09-16T18:50:07Z
|
[
"python",
"character-encoding",
"nltk"
] |
Unable to install pyOpenSSL
| 39,525,704 |
<p>I dual booted a Windows 10 partition on a Macbook Pro:
Mac: El Capitan (64-bit)
Windows: Windows 10 64-bit, formatted with the fat file system</p>
<p>On Windows, I installed cygwin and Python 2.7. Through cygwin, I installed pip and gcc.</p>
<p>I also tried installing cffi from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#cffi" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#cffi</a>, but when I ran <code>pip install cffi-1.8.2-cp27-cp27m-win_amd64.whl</code>, I got the error: <code>cffi....whl is not supported on this platform</code>. Then I tried many things I can't remember; I think I even tried downloading a tar file. Now, when I run pip install cffi, I'm told it's already installed.</p>
<p>I then ran <code>pip install pyOpenSSL</code>; however, I got this error:</p>
<pre><code> gcc -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/include/python2.7 -c build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c -o build/temp.cygwin-2.6.0-x86_64-2.7/build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.o
build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c:433:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-9Emw8c/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-wy64bj-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-9Emw8c/cryptography/
</code></pre>
<p>I also got this error when I did <code>pip install cryptography</code>.</p>
<p>I read somewhere that I should try: <code>pip install --global-option build_ext --global-option --compiler=mingw64 pyopenssl</code>, and I got:</p>
<pre><code>Skipping bdist_wheel for pyopenssl, due to binaries being disabled for it.
Skipping bdist_wheel for cryptography, due to binaries being disabled for it.
Installing collected packages: cryptography, pyopenssl
Running setup.py install for cryptography ... error
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-fIb1xc/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" build_ext --compiler=mingw64 install --record /tmp/pip-7flSlN-record/install-record.txt --single-version-externally-managed --compile:
running build_ext
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_padding.c'
creating build
creating build/temp.cygwin-2.6.0-x86_64-2.7
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_constant_time.c'
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c'
error: don't know how to compile C/C++ code on platform 'posix' with 'mingw64' compiler
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-fIb1xc/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" build_ext --compiler=mingw64 install --record /tmp/pip-7flSlN-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-fIb1xc/cryptography/
</code></pre>
<p>Can someone please advise what's wrong??</p>
<p>Update: After installing <code>mingw64x86_64-openssl: OpenSSL encryption library for Win64 toolchain</code> through cygwin, and running <code>pip install pyopenssl</code>, I got:</p>
<pre><code> running build_ext
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_padding.c'
creating build/temp.cygwin-2.6.0-x86_64-2.7
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_constant_time.c'
generating cffi module 'build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c'
building '_openssl' extension
creating build/temp.cygwin-2.6.0-x86_64-2.7/build
creating build/temp.cygwin-2.6.0-x86_64-2.7/build/temp.cygwin-2.6.0-x86_64-2.7
/usr/bin/clang -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/include/python2.7 -c build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c -o build/temp.cygwin-2.6.0-x86_64-2.7/build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.o
build/temp.cygwin-2.6.0-x86_64-2.7/_openssl.c:433:10: fatal error: 'openssl/opensslv.h' file not found
#include <openssl/opensslv.h>
^
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ZsWN3h/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-D2XH0P-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ZsWN3h/cryptography/
</code></pre>
| 0 |
2016-09-16T07:12:26Z
| 40,136,918 |
<p>I had a similar issue and installed the libssl header files so I could compile the python module:</p>
<pre><code>sudo apt-get install libssl-dev
</code></pre>
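<p>Since the asker is building under Cygwin rather than Debian/Ubuntu, the equivalent step there (an assumption based on the missing <code>openssl/opensslv.h</code> header) would be installing the <code>openssl-devel</code> package through the Cygwin setup program, then re-running <code>pip install pyopenssl</code>.</p>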
| 0 |
2016-10-19T16:28:25Z
|
[
"python",
"windows",
"python-2.7",
"windows-10",
"pyopenssl"
] |
DecisionTreeRegressor 'tree_' attribute documentation (Tree class)
| 39,525,716 |
<p>I am trying to find the documentation for the 'Tree' class, which is the class of the 'tree_' attribute of a 'DecisionTreeRegressor', but with no success.</p>
<p>Would anybody know if this class is documented anywhere please ?</p>
<p>Thanks,
Yoann</p>
| 0 |
2016-09-16T07:13:18Z
| 39,525,895 |
<p>I ended up looking in the following file but I am still interested to know if the Tree class is part of the documentation.</p>
<p><a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/tree.py" rel="nofollow">https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/tree.py</a></p>
| 0 |
2016-09-16T07:23:39Z
|
[
"python",
"scikit-learn"
] |
change specific field value in serializer class
| 39,525,754 |
<p>I have a model like this</p>
<pre><code>class ProjectTemplate(models.Model):
    project_template_id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=64)
    description = models.TextField(max_length=255)
    created_date = models.DateTimeField(auto_now_add=True)
    modified_date = models.DateTimeField(auto_now=True)
    created_by = models.IntegerField(blank=True, null=True)
    modified_by = models.IntegerField(blank=True, null=True)
    activated = models.BooleanField(default=False)
    org_id = models.IntegerField(blank=False, null=False)

    class Meta:
        db_table = 'project_template'

    def __str__(self):
        return self.name
</code></pre>
<p>and a serializer class like this</p>
<pre><code>class ProjectTemplateSerializer(serializers.ModelSerializer):
    class Meta:
        model = ProjectTemplate
</code></pre>
<p>In the created_by field I am storing the user id in the db. When listing the project templates, I basically get user ids for the created_by field.</p>
<p>How do I change my serializer class to get a dict like this </p>
<pre><code>{"user_id":id,"user_name":name}
</code></pre>
<p>for the created_by field?</p>
| 0 |
2016-09-16T07:15:30Z
| 39,529,285 |
<p>Just like Sayse said, you'd better make a foreign key to a user like this:</p>
<pre><code>create_by = models.ForeignKey("User", related_name="project_user")
</code></pre>
<p>Then you can access the info from user like user_name or something by:</p>
<pre><code>class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ("id", "user_name")

class ProjectTemplateSerializer(serializers.ModelSerializer):
    user = UserSerializer(source="user", read_only=True)

    class Meta:
        model = ProjectTemplate
        fields = (
            "id", "user",
        )
</code></pre>
| 0 |
2016-09-16T10:31:18Z
|
[
"python",
"django",
"django-rest-framework"
] |
Extract data by pages one by one using single scrapy spider
| 39,525,828 |
<p>I am trying to extract data from <a href="https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release?page=1" rel="nofollow">goodreads.</a></p>
<p>I want to crawl pages one by one using some time delay.</p>
<p>My spider looks like:</p>
<pre><code>import scrapy
import unidecode
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html

class ElementSpider(scrapy.Spider):
    name = 'books'
    download_delay = 3
    allowed_domains = ["https://www.goodreads.com"]
    start_urls = ["https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release?page=1",
                  ]
    rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="next_page"]',)), callback="parse", follow=True),)

    def parse(self, response):
        for href in response.xpath('//div[@id="all_votes"]/table[@class="tableList js-dataTooltip"]/tr/td[2]/div[@class="js-tooltipTrigger tooltipTrigger"]/a/@href'):
            full_url = response.urljoin(href.extract())
            print full_url
            yield scrapy.Request(full_url, callback=self.parse_books)
        next_page = response.xpath('.//a[@class="button next"]/@href').extract()
        if next_page:
            next_href = next_page[0]
            print next_href
            next_page_url = 'https://www.goodreads.com' + next_href
            request = scrapy.Request(url=next_page_url)
            yield request

    def parse_books(self, response):
        yield {
            'url': response.url,
            'title': response.xpath('//div[@id="metacol"]/h1[@class="bookTitle"]/text()').extract(),
        }
</code></pre>
<p>Please suggest what I should do so I can extract all pages' data by running the spider once.</p>
| 0 |
2016-09-16T07:19:53Z
| 39,529,450 |
<p>I made changes in my code and it works. The change is:</p>
<pre><code>request = scrapy.Request(url=next_page_url)
</code></pre>
<p>should be</p>
<pre><code>request = scrapy.Request(next_page_url, self.parse)
</code></pre>
<p>When I comment out <code>allowed_domains = ["https://www.goodreads.com"]</code> it works well; otherwise no data is saved in the json file.
Can anyone explain why?? </p>
| 0 |
2016-09-16T10:39:40Z
|
[
"python",
"web",
"scrapy",
"scrapy-spider"
] |
Extract data by pages one by one using single scrapy spider
| 39,525,828 |
<p>I am trying to extract data from <a href="https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release?page=1" rel="nofollow">goodreads.</a></p>
<p>I want to crawl pages one by one using some time delay.</p>
<p>My spider looks like:</p>
<pre><code>import scrapy
import unidecode
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html

class ElementSpider(scrapy.Spider):
    name = 'books'
    download_delay = 3
    allowed_domains = ["https://www.goodreads.com"]
    start_urls = ["https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release?page=1",
                  ]
    rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="next_page"]',)), callback="parse", follow=True),)

    def parse(self, response):
        for href in response.xpath('//div[@id="all_votes"]/table[@class="tableList js-dataTooltip"]/tr/td[2]/div[@class="js-tooltipTrigger tooltipTrigger"]/a/@href'):
            full_url = response.urljoin(href.extract())
            print full_url
            yield scrapy.Request(full_url, callback=self.parse_books)
        next_page = response.xpath('.//a[@class="button next"]/@href').extract()
        if next_page:
            next_href = next_page[0]
            print next_href
            next_page_url = 'https://www.goodreads.com' + next_href
            request = scrapy.Request(url=next_page_url)
            yield request

    def parse_books(self, response):
        yield {
            'url': response.url,
            'title': response.xpath('//div[@id="metacol"]/h1[@class="bookTitle"]/text()').extract(),
        }
</code></pre>
<p>Please suggest what I should do so I can extract all pages' data by running the spider once.</p>
| 0 |
2016-09-16T07:19:53Z
| 39,532,937 |
<p>Looks like <code>allowed_domains</code> needs a better explanation in <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.allowed_domains" rel="nofollow">the documentation</a>, but if you check the examples, the structure of the domains there should be like <code>domain.com</code>, avoiding the scheme and unnecessary subdomains (<code>www</code> is a subdomain)</p>
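<p>In this case that would be:</p>
<pre><code>allowed_domains = ["goodreads.com"]
</code></pre>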
| 0 |
2016-09-16T13:38:05Z
|
[
"python",
"web",
"scrapy",
"scrapy-spider"
] |
trying all combinations of operations on list of variables
| 39,525,993 |
<p>I have a list of values like:</p>
<pre><code>values = [1, 2, 3, 4]
</code></pre>
<p>and I want to try all combinations on this list like:</p>
<pre><code>1 + 2
1 + 3
1 + 4
1 * 2
1 * 3
1 * 4
1 + 2 * 3
1 + 2 * 4
1 + 3 * 4
</code></pre>
<p>etc.</p>
<p>What would be the most straightforward way to get all these possible combinations of operations in the most succinct way possible?</p>
<p>I would imagine having two lists, [1,2,3,4] and [+, *, -, /] and then taking all combinations of the numbers of all lengths, and then filling in the blanks with all combinations.</p>
<p>So selecting [1, 2, 3] and then selecting all permutations of the operations and combining them together. This seems messy and I'm hoping there's a clearer way to code this?</p>
| 4 |
2016-09-16T07:28:36Z
| 39,526,322 |
<p>Here's a recursive solution that builds the expression from numbers & operators and then uses <a href="https://docs.python.org/3.4/library/functions.html#eval"><code>eval</code></a> to calculate it:</p>
<pre><code>vals = [1, 2, 3]
operators = ['+', '*', '-', '/']

def expressions(values):
    # Base case, only one value left
    if len(values) == 1:
        yield values
    # Iterate over the indexes
    for i in range(len(values)):
        # Pop value from given index and store the remaining values
        # to be used with next recursion
        forward = values[:]
        val = forward.pop(i)
        # Yield all value, operator, subexpression combinations
        for op in operators:
            for rest in expressions(forward):
                yield [val, op] + rest

for expr in expressions(vals):
    expr = ' '.join(str(x) for x in expr)
    print('{} = {}'.format(expr, eval(expr)))
</code></pre>
<p>Output (partial):</p>
<pre><code>1 + 2 + 3 = 6
1 + 2 * 3 = 7
1 + 2 - 3 = 0
1 + 2 / 3 = 1.6666666666666665
1 + 3 + 2 = 6
1 + 3 * 2 = 7
1 + 3 - 2 = 2
1 + 3 / 2 = 2.5
1 * 2 + 3 = 5
1 * 2 * 3 = 6
1 * 2 - 3 = -1
1 * 2 / 3 = 0.6666666666666666
1 * 3 + 2 = 5
1 * 3 * 2 = 6
1 * 3 - 2 = 1
1 * 3 / 2 = 1.5
1 - 2 + 3 = 2
1 - 2 * 3 = -5
1 - 2 - 3 = -4
</code></pre>
| 5 |
2016-09-16T07:50:10Z
|
[
"python",
"combinations"
] |
trying all combinations of operations on list of variables
| 39,525,993 |
<p>I have a list of values like:</p>
<pre><code>values = [1, 2, 3, 4]
</code></pre>
<p>and I want to try all combinations on this list like:</p>
<pre><code>1 + 2
1 + 3
1 + 4
1 * 2
1 * 3
1 * 4
1 + 2 * 3
1 + 2 * 4
1 + 3 * 4
</code></pre>
<p>etc.</p>
<p>What would be the most straightforward way to get all these possible combinations of operations in the most succinct way possible?</p>
<p>I would imagine having two lists, [1,2,3,4] and [+, *, -, /] and then taking all combinations of the numbers of all lengths, and then filling in the blanks with all combinations.</p>
<p>So selecting [1, 2, 3] and then selecting all permutations of the operations and combining them together. This seems messy and I'm hoping there's a clearer way to code this?</p>
| 4 |
2016-09-16T07:28:36Z
| 39,526,691 |
<p>How about this one? (Since the order of the operands and operators matters, we must use permutations.)</p>
<pre><code>from itertools import chain, permutations

def powerset(iterable):
    xs = list(iterable)
    return chain.from_iterable(permutations(xs, n) for n in range(len(xs)+1))

lst_expr = []
for operands in map(list, powerset(['1','2','3','4'])):
    n = len(operands)
    #print operands
    if n > 1:
        all_operators = map(list, permutations(['+','-','*','/'], n-1))
        #print all_operators, operands
        for operators in all_operators:
            exp = operands[0]
            i = 1
            for operator in operators:
                exp += operator + operands[i]
                i += 1
            lst_expr += [exp]
print ', '.join(lst_expr)
</code></pre>
<p>with the following output:</p>
<pre><code>1+2, 1-2, 1*2, 1/2, 1+3, 1-3, 1*3, 1/3, 1+4, 1-4, 1*4, 1/4, 2+1, 2-1, 2*1, 2/1, 2+3, 2-3, 2*3, 2/3, 2+4, 2-4, 2*4, 2/4, 3+1, 3-1, 3*1, 3/1, 3+2, 3-2, 3*2, 3/2, 3+4, 3-4, 3*4, 3/4, 4+1, 4-1, 4*1, 4/1, 4+2, 4-2, 4*2, 4/2, 4+3, 4-3, 4*3, 4/3, 1+2-3, 1+2*3, 1+2/3, 1-2+3, 1-2*3, 1-2/3, 1*2+3, 1*2-3, 1*2/3, 1/2+3, 1/2-3, 1/2*3, 1+2-4, 1+2*4, 1+2/4, 1-2+4, 1-2*4, 1-2/4, 1*2+4, 1*2-4, 1*2/4, 1/2+4, 1/2-4, 1/2*4, 1+3-2, 1+3*2, 1+3/2, 1-3+2, 1-3*2, 1-3/2, 1*3+2, 1*3-2, 1*3/2, 1/3+2, 1/3-2, 1/3*2, 1+3-4, 1+3*4, 1+3/4, 1-3+4, 1-3*4, 1-3/4, 1*3+4, 1*3-4, 1*3/4, 1/3+4, 1/3-4, 1/3*4, 1+4-2, 1+4*2, 1+4/2, 1-4+2, 1-4*2, 1-4/2, 1*4+2, 1*4-2, 1*4/2, 1/4+2, 1/4-2, 1/4*2, 1+4-3, 1+4*3, 1+4/3, 1-4+3, 1-4*3, 1-4/3, 1*4+3, 1*4-3, 1*4/3, 1/4+3, 1/4-3, 1/4*3, 2+1-3, 2+1*3, 2+1/3, 2-1+3, 2-1*3, 2-1/3, 2*1+3, 2*1-3, 2*1/3, 2/1+3, 2/1-3, 2/1*3, 2+1-4, 2+1*4, 2+1/4, 2-1+4, 2-1*4, 2-1/4, 2*1+4, 2*1-4, 2*1/4, 2/1+4, 2/1-4, 2/1*4, 2+3-1, 2+3*1, 2+3/1, 2-3+1, 2-3*1, 2-3/1, 2*3+1, 2*3-1, 2*3/1, 2/3+1, 2/3-1, 2/3*1, 2+3-4, 2+3*4, 2+3/4, 2-3+4, 2-3*4, 2-3/4, 2*3+4, 2*3-4, 2*3/4, 2/3+4, 2/3-4, 2/3*4, 2+4-1, 2+4*1, 2+4/1, 2-4+1, 2-4*1, 2-4/1, 2*4+1, 2*4-1, 2*4/1, 2/4+1, 2/4-1, 2/4*1, 2+4-3, 2+4*3, 2+4/3, 2-4+3, 2-4*3, 2-4/3, 2*4+3, 2*4-3, 2*4/3, 2/4+3, 2/4-3, 2/4*3, 3+1-2, 3+1*2, 3+1/2, 3-1+2, 3-1*2, 3-1/2, 3*1+2, 3*1-2, 3*1/2, 3/1+2, 3/1-2, 3/1*2, 3+1-4, 3+1*4, 3+1/4, 3-1+4, 3-1*4, 3-1/4, 3*1+4, 3*1-4, 3*1/4, 3/1+4, 3/1-4, 3/1*4, 3+2-1, 3+2*1, 3+2/1, 3-2+1, 3-2*1, 3-2/1, 3*2+1, 3*2-1, 3*2/1, 3/2+1, 3/2-1, 3/2*1, 3+2-4, 3+2*4, 3+2/4, 3-2+4, 3-2*4, 3-2/4, 3*2+4, 3*2-4, 3*2/4, 3/2+4, 3/2-4, 3/2*4, 3+4-1, 3+4*1, 3+4/1, 3-4+1, 3-4*1, 3-4/1, 3*4+1, 3*4-1, 3*4/1, 3/4+1, 3/4-1, 3/4*1, 3+4-2, 3+4*2, 3+4/2, 3-4+2, 3-4*2, 3-4/2, 3*4+2, 3*4-2, 3*4/2, 3/4+2, 3/4-2, 3/4*2, 4+1-2, 4+1*2, 4+1/2, 4-1+2, 4-1*2, 4-1/2, 4*1+2, 4*1-2, 4*1/2, 4/1+2, 4/1-2, 4/1*2, 4+1-3, 4+1*3, 4+1/3, 4-1+3, 4-1*3, 4-1/3, 4*1+3, 4*1-3, 4*1/3, 4/1+3, 4/1-3, 4/1*3, 4+2-1, 4+2*1, 4+2/1, 4-2+1, 4-2*1, 4-2/1, 4*2+1, 4*2-1, 4*2/1, 4/2+1, 4/2-1, 4/2*1, 4+2-3, 4+2*3, 4+2/3, 4-2+3, 4-2*3, 4-2/3, 4*2+3, 4*2-3, 4*2/3, 4/2+3, 4/2-3, 4/2*3, 4+3-1, 4+3*1, 4+3/1, 4-3+1, 4-3*1, 4-3/1, 4*3+1, 4*3-1, 4*3/1, 4/3+1, 4/3-1, 4/3*1, 4+3-2, 4+3*2, 4+3/2, 4-3+2, 4-3*2, 4-3/2, 4*3+2, 4*3-2, 4*3/2, 4/3+2, 4/3-2, 4/3*2, 1+2-3*4, 1+2-3/4, 1+2*3-4, 1+2*3/4, 1+2/3-4, 1+2/3*4, 1-2+3*4, 1-2+3/4, 1-2*3+4, 1-2*3/4, 1-2/3+4, 1-2/3*4, 1*2+3-4, 1*2+3/4, 1*2-3+4, 1*2-3/4, 1*2/3+4, 1*2/3-4, 1/2+3-4, 1/2+3*4, 1/2-3+4, 1/2-3*4, 1/2*3+4, 1/2*3-4, 1+2-4*3, 1+2-4/3, 1+2*4-3, 1+2*4/3, 1+2/4-3, 1+2/4*3, 1-2+4*3, 1-2+4/3, 1-2*4+3, 1-2*4/3, 1-2/4+3, 1-2/4*3, 1*2+4-3, 1*2+4/3, 1*2-4+3, 1*2-4/3, 1*2/4+3, 1*2/4-3, 1/2+4-3, 1/2+4*3, 1/2-4+3, 1/2-4*3, 1/2*4+3, 1/2*4-3, 1+3-2*4, 1+3-2/4, 1+3*2-4, 1+3*2/4, 1+3/2-4, 1+3/2*4, 1-3+2*4, 1-3+2/4, 1-3*2+4, 1-3*2/4, 1-3/2+4, 1-3/2*4, 1*3+2-4, 1*3+2/4, 1*3-2+4, 1*3-2/4, 1*3/2+4, 1*3/2-4, 1/3+2-4, 1/3+2*4, 1/3-2+4, 1/3-2*4, 1/3*2+4, 1/3*2-4, 1+3-4*2, 1+3-4/2, 1+3*4-2, 1+3*4/2, 1+3/4-2, 1+3/4*2, 1-3+4*2, 1-3+4/2, 1-3*4+2, 1-3*4/2, 1-3/4+2, 1-3/4*2, 1*3+4-2, 1*3+4/2, 1*3-4+2, 1*3-4/2, 1*3/4+2, 1*3/4-2, 1/3+4-2, 1/3+4*2, 1/3-4+2, 1/3-4*2, 1/3*4+2, 1/3*4-2, 1+4-2*3, 1+4-2/3, 1+4*2-3, 1+4*2/3, 1+4/2-3, 1+4/2*3, 1-4+2*3, 1-4+2/3, 1-4*2+3, 1-4*2/3, 1-4/2+3, 1-4/2*3, 1*4+2-3, 1*4+2/3, 1*4-2+3, 1*4-2/3, 1*4/2+3, 1*4/2-3, 1/4+2-3, 1/4+2*3, 1/4-2+3, 1/4-2*3, 1/4*2+3, 1/4*2-3, 1+4-3*2, 1+4-3/2, 1+4*3-2, 1+4*3/2, 1+4/3-2, 1+4/3*2, 1-4+3*2, 1-4+3/2, 1-4*3+2, 1-4*3/2, 1-4/3+2, 1-4/3*2, 1*4+3-2, 1*4+3/2, 1*4-3+2, 1*4-3/2, 1*4/3+2, 1*4/3-2, 1/4+3-2, 1/4+3*2, 1/4-3+2, 1/4-3*2, 1/4*3+2, 
1/4*3-2, 2+1-3*4, 2+1-3/4, 2+1*3-4, 2+1*3/4, 2+1/3-4, 2+1/3*4, 2-1+3*4, 2-1+3/4, 2-1*3+4, 2-1*3/4, 2-1/3+4, 2-1/3*4, 2*1+3-4, 2*1+3/4, 2*1-3+4, 2*1-3/4, 2*1/3+4, 2*1/3-4, 2/1+3-4, 2/1+3*4, 2/1-3+4, 2/1-3*4, 2/1*3+4, 2/1*3-4, 2+1-4*3, 2+1-4/3, 2+1*4-3, 2+1*4/3, 2+1/4-3, 2+1/4*3, 2-1+4*3, 2-1+4/3, 2-1*4+3, 2-1*4/3, 2-1/4+3, 2-1/4*3, 2*1+4-3, 2*1+4/3, 2*1-4+3, 2*1-4/3, 2*1/4+3, 2*1/4-3, 2/1+4-3, 2/1+4*3, 2/1-4+3, 2/1-4*3, 2/1*4+3, 2/1*4-3, 2+3-1*4, 2+3-1/4, 2+3*1-4, 2+3*1/4, 2+3/1-4, 2+3/1*4, 2-3+1*4, 2-3+1/4, 2-3*1+4, 2-3*1/4, 2-3/1+4, 2-3/1*4, 2*3+1-4, 2*3+1/4, 2*3-1+4, 2*3-1/4, 2*3/1+4, 2*3/1-4, 2/3+1-4, 2/3+1*4, 2/3-1+4, 2/3-1*4, 2/3*1+4, 2/3*1-4, 2+3-4*1, 2+3-4/1, 2+3*4-1, 2+3*4/1, 2+3/4-1, 2+3/4*1, 2-3+4*1, 2-3+4/1, 2-3*4+1, 2-3*4/1, 2-3/4+1, 2-3/4*1, 2*3+4-1, 2*3+4/1, 2*3-4+1, 2*3-4/1, 2*3/4+1, 2*3/4-1, 2/3+4-1, 2/3+4*1, 2/3-4+1, 2/3-4*1, 2/3*4+1, 2/3*4-1, 2+4-1*3, 2+4-1/3, 2+4*1-3, 2+4*1/3, 2+4/1-3, 2+4/1*3, 2-4+1*3, 2-4+1/3, 2-4*1+3, 2-4*1/3, 2-4/1+3, 2-4/1*3, 2*4+1-3, 2*4+1/3, 2*4-1+3, 2*4-1/3, 2*4/1+3, 2*4/1-3, 2/4+1-3, 2/4+1*3, 2/4-1+3, 2/4-1*3, 2/4*1+3, 2/4*1-3, 2+4-3*1, 2+4-3/1, 2+4*3-1, 2+4*3/1, 2+4/3-1, 2+4/3*1, 2-4+3*1, 2-4+3/1, 2-4*3+1, 2-4*3/1, 2-4/3+1, 2-4/3*1, 2*4+3-1, 2*4+3/1, 2*4-3+1, 2*4-3/1, 2*4/3+1, 2*4/3-1, 2/4+3-1, 2/4+3*1, 2/4-3+1, 2/4-3*1, 2/4*3+1, 2/4*3-1, 3+1-2*4, 3+1-2/4, 3+1*2-4, 3+1*2/4, 3+1/2-4, 3+1/2*4, 3-1+2*4, 3-1+2/4, 3-1*2+4, 3-1*2/4, 3-1/2+4, 3-1/2*4, 3*1+2-4, 3*1+2/4, 3*1-2+4, 3*1-2/4, 3*1/2+4, 3*1/2-4, 3/1+2-4, 3/1+2*4, 3/1-2+4, 3/1-2*4, 3/1*2+4, 3/1*2-4, 3+1-4*2, 3+1-4/2, 3+1*4-2, 3+1*4/2, 3+1/4-2, 3+1/4*2, 3-1+4*2, 3-1+4/2, 3-1*4+2, 3-1*4/2, 3-1/4+2, 3-1/4*2, 3*1+4-2, 3*1+4/2, 3*1-4+2, 3*1-4/2, 3*1/4+2, 3*1/4-2, 3/1+4-2, 3/1+4*2, 3/1-4+2, 3/1-4*2, 3/1*4+2, 3/1*4-2, 3+2-1*4, 3+2-1/4, 3+2*1-4, 3+2*1/4, 3+2/1-4, 3+2/1*4, 3-2+1*4, 3-2+1/4, 3-2*1+4, 3-2*1/4, 3-2/1+4, 3-2/1*4, 3*2+1-4, 3*2+1/4, 3*2-1+4, 3*2-1/4, 3*2/1+4, 3*2/1-4, 3/2+1-4, 3/2+1*4, 3/2-1+4, 3/2-1*4, 3/2*1+4, 3/2*1-4, 3+2-4*1, 3+2-4/1, 3+2*4-1, 3+2*4/1, 3+2/4-1, 3+2/4*1, 3-2+4*1, 3-2+4/1, 3-2*4+1, 3-2*4/1, 3-2/4+1, 3-2/4*1, 3*2+4-1, 3*2+4/1, 3*2-4+1, 3*2-4/1, 3*2/4+1, 3*2/4-1, 3/2+4-1, 3/2+4*1, 3/2-4+1, 3/2-4*1, 3/2*4+1, 3/2*4-1, 3+4-1*2, 3+4-1/2, 3+4*1-2, 3+4*1/2, 3+4/1-2, 3+4/1*2, 3-4+1*2, 3-4+1/2, 3-4*1+2, 3-4*1/2, 3-4/1+2, 3-4/1*2, 3*4+1-2, 3*4+1/2, 3*4-1+2, 3*4-1/2, 3*4/1+2, 3*4/1-2, 3/4+1-2, 3/4+1*2, 3/4-1+2, 3/4-1*2, 3/4*1+2, 3/4*1-2, 3+4-2*1, 3+4-2/1, 3+4*2-1, 3+4*2/1, 3+4/2-1, 3+4/2*1, 3-4+2*1, 3-4+2/1, 3-4*2+1, 3-4*2/1, 3-4/2+1, 3-4/2*1, 3*4+2-1, 3*4+2/1, 3*4-2+1, 3*4-2/1, 3*4/2+1, 3*4/2-1, 3/4+2-1, 3/4+2*1, 3/4-2+1, 3/4-2*1, 3/4*2+1, 3/4*2-1, 4+1-2*3, 4+1-2/3, 4+1*2-3, 4+1*2/3, 4+1/2-3, 4+1/2*3, 4-1+2*3, 4-1+2/3, 4-1*2+3, 4-1*2/3, 4-1/2+3, 4-1/2*3, 4*1+2-3, 4*1+2/3, 4*1-2+3, 4*1-2/3, 4*1/2+3, 4*1/2-3, 4/1+2-3, 4/1+2*3, 4/1-2+3, 4/1-2*3, 4/1*2+3, 4/1*2-3, 4+1-3*2, 4+1-3/2, 4+1*3-2, 4+1*3/2, 4+1/3-2, 4+1/3*2, 4-1+3*2, 4-1+3/2, 4-1*3+2, 4-1*3/2, 4-1/3+2, 4-1/3*2, 4*1+3-2, 4*1+3/2, 4*1-3+2, 4*1-3/2, 4*1/3+2, 4*1/3-2, 4/1+3-2, 4/1+3*2, 4/1-3+2, 4/1-3*2, 4/1*3+2, 4/1*3-2, 4+2-1*3, 4+2-1/3, 4+2*1-3, 4+2*1/3, 4+2/1-3, 4+2/1*3, 4-2+1*3, 4-2+1/3, 4-2*1+3, 4-2*1/3, 4-2/1+3, 4-2/1*3, 4*2+1-3, 4*2+1/3, 4*2-1+3, 4*2-1/3, 4*2/1+3, 4*2/1-3, 4/2+1-3, 4/2+1*3, 4/2-1+3, 4/2-1*3, 4/2*1+3, 4/2*1-3, 4+2-3*1, 4+2-3/1, 4+2*3-1, 4+2*3/1, 4+2/3-1, 4+2/3*1, 4-2+3*1, 4-2+3/1, 4-2*3+1, 4-2*3/1, 4-2/3+1, 4-2/3*1, 4*2+3-1, 4*2+3/1, 4*2-3+1, 4*2-3/1, 4*2/3+1, 4*2/3-1, 4/2+3-1, 4/2+3*1, 4/2-3+1, 4/2-3*1, 4/2*3+1, 4/2*3-1, 4+3-1*2, 4+3-1/2, 4+3*1-2, 4+3*1/2, 4+3/1-2, 4+3/1*2, 4-3+1*2, 4-3+1/2, 4-3*1+2, 4-3*1/2, 
4-3/1+2, 4-3/1*2, 4*3+1-2, 4*3+1/2, 4*3-1+2, 4*3-1/2, 4*3/1+2, 4*3/1-2, 4/3+1-2, 4/3+1*2, 4/3-1+2, 4/3-1*2, 4/3*1+2, 4/3*1-2, 4+3-2*1, 4+3-2/1, 4+3*2-1, 4+3*2/1, 4+3/2-1, 4+3/2*1, 4-3+2*1, 4-3+2/1, 4-3*2+1, 4-3*2/1, 4-3/2+1, 4-3/2*1, 4*3+2-1, 4*3+2/1, 4*3-2+1, 4*3-2/1, 4*3/2+1, 4*3/2-1, 4/3+2-1, 4/3+2*1, 4/3-2+1, 4/3-2*1, 4/3*2+1, 4/3*2-1
</code></pre>
| 2 |
2016-09-16T08:13:51Z
|
[
"python",
"combinations"
] |
trying all combinations of operations on list of variables
| 39,525,993 |
<p>I have a list of values like:</p>
<pre><code>values = [1, 2, 3, 4]
</code></pre>
<p>and I want to try all combinations on this list like:</p>
<pre><code>1 + 2
1 + 3
1 + 4
1 * 2
1 * 3
1 * 4
1 + 2 * 3
1 + 2 * 4
1 + 3 * 4
</code></pre>
<p>etc.</p>
<p>What would be the most straightforward way to get all these possible combinations of operations in the most succinct way possible?</p>
<p>I would imagine having two lists, [1,2,3,4] and [+, *, -, /] and then taking all combinations of the numbers of all lengths, and then filling in the blanks with all combinations.</p>
<p>So selecting [1, 2, 3] and then selecting all permutations of the operations and combining them together. This seems messy and I'm hoping there's a clearer way to code this?</p>
| 4 |
2016-09-16T07:28:36Z
| 39,526,839 |
<p>Below is simpler and cleaner code to achieve this using <a href="https://docs.python.org/2/library/operator.html" rel="nofollow"><code>operator</code></a> and <a href="https://docs.python.org/2/library/itertools.html" rel="nofollow"><code>itertools</code></a>.</p>
<p>Also see this discussion of why it's best to avoid <code>eval</code>: <a href="http://stackoverflow.com/a/1832957/2063361">Is using eval in Python a bad practice?</a></p>
<pre><code>from itertools import product, combinations
from operator import add, sub, mul, div, mod, floordiv
my_list = [1, 2, 3, 4, 0]
my_operations = {'+': add, '-': sub, '/': div, '*': mul, '%': mod, '//': floordiv}
for nums, action in product(combinations(my_list, 2), my_operations):
try:
result = my_operations[action](nums[0], nums[1])
except ZeroDivisionError:
result = 'infinite'
finally:
print '{} {} {} = {}'.format(nums[0], action, nums[1], result)
</code></pre>
<p>Below is the output of above code:</p>
<pre><code>1 // 2 = 0
1 % 2 = 1
1 + 2 = 3
1 * 2 = 2
1 - 2 = -1
1 / 2 = 0
1 // 3 = 0
1 % 3 = 1
1 + 3 = 4
1 * 3 = 3
1 - 3 = -2
1 / 3 = 0
1 // 4 = 0
1 % 4 = 1
1 + 4 = 5
1 * 4 = 4
1 - 4 = -3
1 / 4 = 0
1 // 0 = infinite
1 % 0 = infinite
1 + 0 = 1
1 * 0 = 0
1 - 0 = 1
1 / 0 = infinite
2 // 3 = 0
2 % 3 = 2
2 + 3 = 5
2 * 3 = 6
2 - 3 = -1
2 / 3 = 0
2 // 4 = 0
2 % 4 = 2
2 + 4 = 6
2 * 4 = 8
2 - 4 = -2
2 / 4 = 0
2 // 0 = infinite
2 % 0 = infinite
2 + 0 = 2
2 * 0 = 0
2 - 0 = 2
2 / 0 = infinite
3 // 4 = 0
3 % 4 = 3
3 + 4 = 7
3 * 4 = 12
3 - 4 = -1
3 / 4 = 0
3 // 0 = infinite
3 % 0 = infinite
3 + 0 = 3
3 * 0 = 0
3 - 0 = 3
3 / 0 = infinite
4 // 0 = infinite
4 % 0 = infinite
4 + 0 = 4
4 * 0 = 0
4 - 0 = 4
4 / 0 = infinite
</code></pre>
| 0 |
2016-09-16T08:22:57Z
|
[
"python",
"combinations"
] |
trying all combinations of operations on list of variables
| 39,525,993 |
<p>I have a list of values like:</p>
<pre><code>values = [1, 2, 3, 4]
</code></pre>
<p>and I want to try all combinations on this list like:</p>
<pre><code>1 + 2
1 + 3
1 + 4
1 * 2
1 * 3
1 * 4
1 + 2 * 3
1 + 2 * 4
1 + 3 * 4
</code></pre>
<p>etc.</p>
<p>What would be the most straightforward way to get all these possible combinations of operations in the most succinct way possible?</p>
<p>I would imagine having two lists, [1,2,3,4] and [+, *, -, /] and then taking all combinations of the numbers of all lengths, and then filling in the blanks with all combinations.</p>
<p>So selecting [1, 2, 3] and then selecting all permutations of the operations and combining them together. This seems messy and I'm hoping there's a clearer way to code this?</p>
| 4 |
2016-09-16T07:28:36Z
| 39,528,917 |
<p>My solution consumes a list of values and applies the operations in the order in which the arguments are given, as an alternative to normal arithmetic evaluation order. For example, <code>1 + 3 + 5 + 7</code> would be evaluated as <code>(((1 + 3) + 5) + 7)</code>. However, it takes all possible permutations of the values, so all possibilities are listed anyway.
It also allows one to give any operations as parameters, even lambda expressions.</p>
<p>I use itertools.</p>
<pre><code>from itertools import combinations_with_replacement
from itertools import permutations
from itertools import product
from itertools import chain
</code></pre>
<p>To display all expressions we call:</p>
<pre><code>def list_all(operations, values):
if len(values) == 1:
return values
permutes = []
ops = []
expressions = []
# for 4 values we want combinations with 2, 3 and 4 values.
for subset in range(2, len(values)+1):
        # One could use combinations instead of permutations if all ops
        # were known to be commutative.
permutes.append(list(permutations(values, subset)))
ops.append(list(combinations_with_replacement(operations, subset - 1)))
for o, v in zip(ops, permutes):
        expressions.append(list(product(o, v)))
return expressions
</code></pre>
<p>And to evaluate them, <code>execute</code> takes the output of <code>list_all</code>:</p>
<pre><code>def execute(expressions):
results = []
# Flatten all expressions into a single iterator
for ops, arguments in chain.from_iterable(expressions):
oplist = list(ops)
arglist = list(arguments)
res = oplist.pop(0)(arglist.pop(0), arglist.pop(0))
while len(arglist) > 0:
res = oplist.pop(0)(res, arglist.pop(0))
results.append(res)
return results
</code></pre>
<p>Example usage:</p>
<pre><code>from operator import add
from operator import sub
from math import sqrt
import pprint
expressions = list_all([add, sub, lambda x,y : sqrt(x*x + y*y)], [1, 2, 3, 4])
results = execute(expressions)
# Display list with operators, arguments and results.
pprint.pprint(zip(chain.from_iterable(expressions), results))
</code></pre>
| 1 |
2016-09-16T10:11:54Z
|
[
"python",
"combinations"
] |
How to groupby time series data
| 39,526,134 |
<p>I have the dataframe below; column B's dtype is datetime64.</p>
<pre><code> A B
0 a 2016-09-13
1 b 2016-09-14
2 b 2016-09-15
3 a 2016-10-13
4 a 2016-10-14
</code></pre>
<p>I would like to group by month (or, in general, by year, day, etc.).</p>
<p>so I would like to get count result below, key = column B. </p>
<pre><code> a b
2016-09 1 2
2016-10 2 0
</code></pre>
<p>I tried groupby, but I couldn't figure out how to handle dtypes like datetime64.
How can I group on a datetime64 column?</p>
| 1 |
2016-09-16T07:37:07Z
| 39,526,402 |
<p>Say you start with</p>
<pre><code>In [247]: df = pd.DataFrame({'A': ['a', 'b', 'b', 'a', 'a'], 'B': ['2016-09-13', '2016-09-14', '2016-09-15', '2016-10-13', '2016-10-14']})
In [248]: df.B = pd.to_datetime(df.B)
</code></pre>
<p>Then you can <code>groupby</code>-<code>size</code>, then <code>unstack</code>:</p>
<pre><code>In [249]: df = df.groupby([df.B.dt.year.astype(str) + '-' + df.B.dt.month.astype(str), df.A]).size().unstack().fillna(0).astype(int)
</code></pre>
<p>Finally, you just need to make <code>B</code> a date again:</p>
<pre><code>In [250]: df.index = pd.to_datetime(df.index)
In [251]: df
Out[251]:
A a b
B
2016-10-01 2 0
2016-09-01 1 2
</code></pre>
<p>Note that the final conversion to a date-time sets a uniform day (you can't have a "dayless" object of this type).</p>
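<p>If the placeholder day is undesirable, a monthly <code>Period</code> index sidesteps it entirely. A minimal sketch, starting from the frame as built in <code>In [248]</code> and assuming pandas >= 0.18 for <code>fill_value</code>:</p>
<pre><code># same grouping, but on a true monthly PeriodIndex (no placeholder day)
df2 = df.groupby([df.B.dt.to_period('M'), df.A]).size().unstack(fill_value=0)
</code></pre>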
| 3 |
2016-09-16T07:55:36Z
|
[
"python",
"pandas"
] |
How to groupby time series data
| 39,526,134 |
<p>I have the dataframe below; column B's dtype is datetime64.</p>
<pre><code> A B
0 a 2016-09-13
1 b 2016-09-14
2 b 2016-09-15
3 a 2016-10-13
4 a 2016-10-14
</code></pre>
<p>I would like to group by month (or, in general, by year, day, etc.).</p>
<p>so I would like to get count result below, key = column B. </p>
<pre><code> a b
2016-09 1 2
2016-10 2 0
</code></pre>
<p>I tried groupby, but I couldn't figure out how to handle dtypes like datetime64.
How can I group on a datetime64 column?</p>
| 1 |
2016-09-16T07:37:07Z
| 39,527,634 |
<p>If you set the index to the datetime you can use pd.TimeGrouper to sort by various time ranges. Example code:</p>
<pre><code># recreate dataframe
df = pd.DataFrame({'A': ['a', 'b', 'b', 'a', 'a'], 'B': ['2016-09-13', '2016-09-14', '2016-09-15',
'2016-10-13', '2016-10-14']})
df['B'] = pd.to_datetime(df['B'])
# set column B as index for use of TimeGrouper
df.set_index('B', inplace=True)
# Now do the magic of Ami Tavory's answer combined with timeGrouper:
df = df.groupby([pd.TimeGrouper('M'), 'A']).size().unstack().fillna(0)
</code></pre>
<p>This returns:</p>
<pre><code>A a b
B
2016-09-30 1.0 2.0
2016-10-31 2.0 0.0
</code></pre>
<p>or alternatively (credits to ayhan) skip the setting to index step and use the following one-liner straight after creating the dataframe:</p>
<pre><code># recreate dataframe
df = pd.DataFrame({'A': ['a', 'b', 'b', 'a', 'a'], 'B': ['2016-09-13', '2016-09-14', '2016-09-15',
'2016-10-13', '2016-10-14']})
df['B'] = pd.to_datetime(df['B'])
df = df.groupby([pd.Grouper(key='B', freq='M'), 'A']).size().unstack().fillna(0)
</code></pre>
<p>which returns the same answer</p>
| 3 |
2016-09-16T09:08:41Z
|
[
"python",
"pandas"
] |
Converting hexadecimal to IEEE 754
| 39,526,356 |
<p>If I use a website like <a href="http://www.h-schmidt.net/FloatConverter/IEEE754.html" rel="nofollow">http://www.h-schmidt.net/FloatConverter/IEEE754.html</a> to convert the hex string <code>'424E4B31'</code> into float32, I get 51.57343.</p>
<p>I need to use Python to convert the string, however, using solutions on StackExchange like:</p>
<pre><code>import struct, binascii
hexbytes = b"\x42\x4E\x4B\x31"
struct.unpack('<f',hexbytes)
</code></pre>
<p>or </p>
<pre><code>struct.unpack('f', binascii.unhexlify('424E4B31'))
</code></pre>
<p>I get 2.9584e-09... why is it different?</p>
| 1 |
2016-09-16T07:52:35Z
| 39,526,566 |
<p>Because endianness is a thing.</p>
<pre><code>>>> struct.unpack('>f',hexbytes)
(51.573429107666016,)
</code></pre>
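<p>To illustrate, a small sketch contrasting the two byte orders (a bare <code>'f'</code> uses the platform's native order, so the little-endian result below assumes an x86-type machine):</p>
<pre><code>import struct

hexbytes = b"\x42\x4E\x4B\x31"
print(struct.unpack('>f', hexbytes)[0])  # big-endian:    51.573429...
print(struct.unpack('<f', hexbytes)[0])  # little-endian: ~2.9584e-09
</code></pre>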
| 2 |
2016-09-16T08:05:55Z
|
[
"python",
"ieee-754"
] |
requests module : post data to remote host, wrong value is sent
| 39,526,416 |
<p>I have a string like:</p>
<pre><code>{
'singleQuesResponses': {
'28': '',
'14': '',
'27': '',
'26': ''
},
'org_apache_struts_taglib_html_TOKEN': 'a1553f25303435076412b5ca7299b936',
'quesResponses': {
'some': 'data'
}
}
</code></pre>
<p>When I post it with Python requests like:</p>
<pre><code>data = json.loads(data)
page = requests.post('http://localhost/chilivery/index.php', data=data, cookies=bcookies)
</code></pre>
<p>then value that post is something like this:</p>
<pre><code>array(3) { ["quesResponses"]=> string(4) "some" ["singleQuesResponses"]=> string(2) "14" ["org_apache_struts_taglib_html_TOKEN"]=> string(32) "a1553f25303435076412b5ca7299b936" }
</code></pre>
<p>but i expect:</p>
<pre><code>array(3) { ["quesResponses"]=> array["some"=>'data'] ["singleQuesResponses"]=> string(2) "14" ["org_apache_struts_taglib_html_TOKEN"]=> string(32) "a1553f25303435076412b5ca7299b936" }
</code></pre>
<p>Why is the dict not sent as an array? Why is only its first key ('some') sent, as a string?</p>
| 0 |
2016-09-16T07:56:42Z
| 39,527,426 |
<p>That's how it's supposed to work. Setting the data param as a dict makes the requests module attempt to send it with:</p>
<pre><code>Content-Type: application/x-www-form-urlencoded
</code></pre>
<p>It will only take the first level of the dictionary and encode it in the body as x=data&y=other_data.</p>
<p>You should either encode your object with json.dumps(data) or assign it to the json argument directly.</p>
<pre><code>requests.post(url, json=data)
</code></pre>
<p>This sets the Content-Type header to application/json</p>
<p>OR</p>
<pre><code>requests.post(url, data=json.dumps(data))
</code></pre>
<p>This won't set the Content-Type header.</p>
| 0 |
2016-09-16T08:56:24Z
|
[
"python"
] |
requests module : post data to remote host, wrong value is sent
| 39,526,416 |
<p>I have a string like:</p>
<pre><code>{
'singleQuesResponses': {
'28': '',
'14': '',
'27': '',
'26': ''
},
'org_apache_struts_taglib_html_TOKEN': 'a1553f25303435076412b5ca7299b936',
'quesResponses': {
'some': 'data'
}
}
</code></pre>
<p>When I post it with Python requests like:</p>
<pre><code>data = json.loads(data)
page = requests.post('http://localhost/chilivery/index.php', data=data, cookies=bcookies)
</code></pre>
<p>then value that post is something like this:</p>
<pre><code>array(3) { ["quesResponses"]=> string(4) "some" ["singleQuesResponses"]=> string(2) "14" ["org_apache_struts_taglib_html_TOKEN"]=> string(32) "a1553f25303435076412b5ca7299b936" }
</code></pre>
<p>but i expect:</p>
<pre><code>array(3) { ["quesResponses"]=> array["some"=>'data'] ["singleQuesResponses"]=> string(2) "14" ["org_apache_struts_taglib_html_TOKEN"]=> string(32) "a1553f25303435076412b5ca7299b936" }
</code></pre>
<p>Why is the dict not sent as an array? Why is only its first key ('some') sent, as a string?</p>
| 0 |
2016-09-16T07:56:42Z
| 39,591,764 |
<p>In this case, the only way is to send your data with GET instead of POSTing it as form data; you can use the <code>params</code> argument in the requests module.</p>
| 0 |
2016-09-20T10:28:19Z
|
[
"python"
] |
Python tkinter button returns to black while pressed
| 39,526,427 |
<p>I'm using Tkinter and I've noticed that my buttons become black while the mouse button is held down, irrespective of what color I specified when declaring the button.</p>
<p>Does anyone know if this can be controlled?</p>
<p>Kind regards,</p>
<p>Daniel</p>
| 0 |
2016-09-16T07:57:14Z
| 39,526,759 |
<p>You're looking for the <code>activebackground</code> and <code>activeforeground</code> properties.<br>
Read more about them <a href="http://effbot.org/tkinterbook/button.htm" rel="nofollow">here</a>.</p>
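<p>A minimal sketch (Python 2's <code>Tkinter</code>; the module is <code>tkinter</code> on Python 3, and the colors are placeholders):</p>
<pre><code>import Tkinter as tk  # use "import tkinter as tk" on Python 3

root = tk.Tk()
btn = tk.Button(root, text='Press me',
                bg='steelblue', fg='white',
                activebackground='steelblue',  # background while the button is pressed
                activeforeground='white')      # text color while pressed
btn.pack(padx=20, pady=20)
root.mainloop()
</code></pre>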
| 4 |
2016-09-16T08:17:53Z
|
[
"python",
"button",
"tkinter",
"colors"
] |
Django form: restore predefined field value in clean method
| 39,526,462 |
<p>In my Django form I need to perform several field value comparisons (plus some additional checks). If all of these conditions are met I want to raise an error and inform the user that he can't perform this operation. Right now I'm doing it in the <code>clean</code> method and the code looks like this: </p>
<pre><code> if self.instance.foo != cleaned_data['foo'] and \
Bar.objects.filter(baz=self.instance.id).count():
cleaned_data['foo'] = self.instance.foo
raise forms.ValidationError("You can't change it")
</code></pre>
<p>That's working fine, but I'd also like to reject the change and restore previous value (before the update). The assignment <code>cleaned_data['foo'] = self.instance.foo</code> obviously is not working because cleaned_data is not being returned due to raising ValidationError. How can I achieve this?</p>
| 0 |
2016-09-16T07:59:29Z
| 39,528,069 |
<p>It turned out that you can access fields' values via <code>self.data</code>. So I ended up doing <code>self.data['foo'] = self.instance.foo</code> and it's working now.</p>
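<p>For reference, a sketch of how that can look inside <code>clean</code> (the form class name is hypothetical; note that if the form is bound to <code>request.POST</code>, <code>self.data</code> is an immutable QueryDict and must be copied first):</p>
<pre><code>def clean(self):
    cleaned_data = super(MyForm, self).clean()
    if self.instance.foo != cleaned_data['foo'] and \
            Bar.objects.filter(baz=self.instance.id).count():
        self.data = self.data.copy()          # QueryDicts from request.POST are immutable
        self.data['foo'] = self.instance.foo  # restore the pre-edit value in the form
        raise forms.ValidationError("You can't change it")
    return cleaned_data
</code></pre>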
| 0 |
2016-09-16T09:30:10Z
|
[
"python",
"django",
"django-forms"
] |
Python (Kivy) - How to resize tabs of a TabbedPanel
| 39,526,499 |
<p>I'm using the <code>Kivy</code> library for the first time and I'm trying to develop a simple UI with a <code>TabbedPanel</code>. I would like to set the width (<code>x</code>) of each tab (<code>TabbedPanelItem</code> in the code) so that together they fill the entire <code>TabbedPanel</code>'s width, but if I use <code>height</code> or <code>size_hint</code> in the <code>.kv</code> file, they don't seem to work.</p>
<p>Here my kv code:</p>
<pre><code>#:import sm kivy.uix.screenmanager
ScreenManagement:
transition: sm.FadeTransition()
SecondScreen:
<SecondScreen>:
tabba: tabba
name: 'second'
FloatLayout:
background_color: (255, 255, 255, 1.0)
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.10
pos_hint: {'top': 1.0}
canvas:
Color:
rgba: (0.98, 0.4, 0, 1.0)
Rectangle:
pos: self.pos
size: self.size
Label:
text: 'MyApp'
font_size: 30
size: self.texture_size
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.90
Tabba:
id: tabba
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.10
pos_hint: {'bottom': 1.0}
Button:
background_color: (80, 1, 0, 1.0)
text: 'Do nop'
font_size: 25
<Tabba>:
do_default_tab: False
background_color: (255, 255, 255, 1.0)
# I would these three tabs' width filling the entire TabbedPanel's width
TabbedPanelItem:
text: 'First_Tab'
Tabs:
TabbedPanelItem:
text: 'Second_Tab'
Tabs:
TabbedPanelItem:
text: 'Third_Tab'
Tabs:
<Tabs>:
grid: grid
ScrollView:
do_scroll_y: True
do_scroll_x: False
size_hint: (1, None)
height: root.height
GridLayout:
id: grid
cols: 1
spacing: 10
padding: 10
size_hint_y: None
height: 2500
</code></pre>
<p>Here my Python code:</p>
<pre><code># coding=utf-8
__author__ = 'drakenden'
__version__ = '0.1'
import kivy
kivy.require('1.9.0') # replace with your current kivy version !
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
from kivy.properties import StringProperty, ObjectProperty,NumericProperty
from kivy.uix.tabbedpanel import TabbedPanel
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.utils import platform
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.scrollview import ScrollView
class Tabs(ScrollView):
def __init__(self, **kwargs):
super(Tabs, self).__init__(**kwargs)
class Tabba(TabbedPanel):
pass
class SecondScreen(Screen):
pass
class ScreenManagement(ScreenManager):
pass
presentation = Builder.load_file("layout2.kv")
class MyApp(App):
def build(self):
return presentation
MyApp().run()
</code></pre>
<p>I've read about using a <code>StripLayout</code> inside the <code>TabbedPanel</code> but I don't know if it is a good solution or how to apply it correctly. Any suggestions?</p>
| 0 |
2016-09-16T08:01:52Z
| 39,527,010 |
<p>I have experimented a bit with your code and after reading the TabbedPanel Docs I found out that <code>tab_width</code> specifies the width of the tab header (as <code>tab_height</code> does for the height). To use it in your kivy file you have to add the following line:</p>
<pre><code><Tabba>:
do_default_tab: False
tab_width: self.parent.width / 3
background_color: (255, 0, 255, 1.0)
# I would these three tabs' width filling the entire TabbedPanel's width
TabbedPanelItem:
.
.
.
the rest of your kivy file
</code></pre>
<p>What we added with this line is that each tab will be 1/3 of its parent's width.</p>
<p>This even works for fewer or more tabs than 3. If you add more tabs, a horizontal scroll bar will be added to scroll through the extra tabs (see the sketch below for a dynamic variant).</p>
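<p>If the number of tabs can change at runtime, the same idea can be made dynamic by binding to <code>tab_list</code> (a <code>TabbedPanel</code> property). An untested sketch:</p>
<pre><code><Tabba>:
    do_default_tab: False
    tab_width: self.width / max(1, len(self.tab_list))
</code></pre>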
<p>I hope I was helpful.</p>
| 1 |
2016-09-16T08:33:23Z
|
[
"python",
"user-interface",
"layout",
"kivy",
"kivy-language"
] |
Circular imports and annotations
| 39,526,742 |
<p>What if I want to use annotations in both classes, in different modules that reference each other?</p>
<pre><code>from BModule import B
class A:
def method(self, b: B):
pass
</code></pre>
<p>~</p>
<pre><code>from AModule import A
class B:
def method(self, a: A):
pass
</code></pre>
<p>I get an <code>ImportError: cannot import name 'B'</code>. But if I need to annotate this, what should I do?</p>
<p>Also, if I just import AModule/BModule and use the class as a module attribute (<code>AModule.A</code>), I get <code>AttributeError: module 'BModule' has no attribute 'B'</code></p>
| 1 |
2016-09-16T08:16:56Z
| 39,526,853 |
<p>What is forcing the dependency? It seems to me that in this case, any method of <code>A</code> that takes a <code>B</code> could be implemented as a method on <code>B</code> that takes an <code>A</code>, so make one of the classes the "main" class and use that class to operate on objects of the other class, if that makes sense?</p>
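<p>If merging them really isn't an option, a common workaround (just a sketch, not part of the suggestion above) is a string forward reference for the annotation plus a deferred import for any runtime use:</p>
<pre><code># BModule.py
class B:
    def method(self, a: 'AModule.A'):  # a plain string: nothing is imported at class-definition time
        from AModule import A          # deferred: runs at call time, after both modules have loaded
        ...
</code></pre>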
| 0 |
2016-09-16T08:23:33Z
|
[
"python",
"import",
"annotations",
"python-import",
"circular-dependency"
] |
Gaining information from the form data in form_valid
| 39,526,785 |
<p>I have to get some information from the form that has just been submitted. The form is a to put an event on a calendar.</p>
<p>I need to get the "Category" field, this is basically a name of the person the event includes.</p>
<p>From there I need to split the name into last and first and get their email from the user table. From there I will send them a email telling them they have been added to the calendar.</p>
<p>This is my <code>form_valid</code> function now: </p>
<pre><code>def form_valid(self, form):
Event = form.save(commit=False)
Event.created_by = self.request.user
Event.save()
send_mail(
'test',
'Is this really working?',
'from@example.com',
['blah@gmail.com'],
fail_silently=False,
)
return HttpResponseRedirect('/calendar/')
</code></pre>
<p>How do I get information that has just been submitted in the form? Is that even possible?</p>
| -2 |
2016-09-16T08:19:12Z
| 39,527,006 |
<p>Well, you've just created a new Event instance with that data. So the data is in that Event, naturally.</p>
<p>You could also get it from the <code>form.cleaned_data</code> dict.</p>
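<p>Putting that together for this case, a sketch only: the field name <code>'category'</code> and the "First Last" name format are assumptions, not something the question confirms:</p>
<pre><code>from django.contrib.auth.models import User
from django.core.mail import send_mail
from django.http import HttpResponseRedirect

def form_valid(self, form):
    event = form.save(commit=False)
    event.created_by = self.request.user
    event.save()
    first, last = form.cleaned_data['category'].split(' ', 1)  # hypothetical field/format
    person = User.objects.get(first_name=first, last_name=last)
    send_mail(
        'Added to calendar',
        'You have been added to a calendar event.',
        'from@example.com',
        [person.email],
        fail_silently=False,
    )
    return HttpResponseRedirect('/calendar/')
</code></pre>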
| 0 |
2016-09-16T08:32:58Z
|
[
"python",
"django",
"django-forms"
] |
Multiprocessing python function for numerical calculations
| 39,526,831 |
<p>I'm hoping to get some help here with parallelising my Python code. I've been struggling with it for a while and have run into several errors whichever way I try; currently, running the code takes about 2-3 hours to complete. The code is given below:</p>
<pre><code>import numpy as np
from scipy.constants import Boltzmann, elementary_charge as kb, e
import multiprocessing
from functools import partial
Tc = 9.2
x = []
g= []
def Delta(T):
'''
Delta(T) takes a temperature as an input and calculates a
temperature dependent variable based on Tc which is defined as a
global parameter
'''
    d0 = (np.pi/1.78)*kb*Tc
D0 = d0*(np.sqrt(1-(T**2/Tc**2)))
return D0
def element_in_sum(T, n, phi):
D = Delta(T)
matsubara_frequency = (np.pi * kb * T) * (2*n + 1)
    factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d)
return element
def sum_elements(T, M, phi):
'''
sum_elements(T,M,phi) is the most computationally heavy part
of the calculations, the larger the M value the more accurate the
results are.
T: temperature
M: number of steps for matrix calculation the larger the more accurate the calculation
phi: The phase of the system can be between 0- pi
'''
X = list(np.arange(0,M,1))
Y = [element_in_sum(T, n, phi) for n in X]
return sum(Y)
def KO_1(M, T, phi):
Iko1Rn = (2 * np.pi * kb * T /e) * sum_elements(T, M, phi)
return Iko1Rn
def main():
for j in range(1, 92):
T = 0.1*j
for i in range(1, 314):
phi = 0.01*i
pool = multiprocessing.Pool()
result = pool.apply_async(KO_1,args=(26000, T, phi,))
g.append(result)
pool.close()
pool.join()
A = max(g);
x.append(A)
del g[:]
</code></pre>
<p>My approach was to try to send the KO_1 function into a multiprocessing pool, but I either get a <code>Pickling</code> error or a <code>too many files open</code> error. Any help is greatly appreciated, and if multiprocessing is the wrong approach I would love any guidance.</p>
| 0 |
2016-09-16T08:22:11Z
| 39,527,968 |
<p>This is not an answer to the question, but if I may, I would propose how to speed up the code using simple numpy array operations. Have a look at the following code:</p>
<pre><code>import numpy as np
from scipy.constants import Boltzmann, elementary_charge as kb, e
import time
Tc = 9.2
RAM = 4*1024**2 # memory budget in bytes (note: this is 4 MB; use 4*1024**3 for 4 GB)
def Delta(T):
'''
Delta(T) takes a temperature as an input and calculates a
temperature dependent variable based on Tc which is defined as a
global parameter
'''
d0 = (np.pi/1.78)*kb*Tc
D0 = d0*(np.sqrt(1-(T**2/Tc**2)))
return D0
def element_in_sum(T, n, phi):
D = Delta(T)
matsubara_frequency = (np.pi * kb * T) * (2*n + 1)
factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d)
return element
def KO_1(M, T, phi):
X = np.arange(M)[:,np.newaxis,np.newaxis]
sizeX = int((float(RAM) / sum(T.shape))/sum(phi.shape)/8) #8byte
i0 = 0
Iko1Rn = 0. * T * phi
while (i0+sizeX) <= M:
print "X = %i"%i0
indices = slice(i0, i0+sizeX)
Iko1Rn += (2 * np.pi * kb * T /e) * element_in_sum(T, X[indices], phi).sum(0)
i0 += sizeX
return Iko1Rn
def main():
T = np.arange(0.1,9.2,0.1)[:,np.newaxis]
phi = np.linspace(0,np.pi, 361)
M = 26000
result = KO_1(M, T, phi)
return result, result.max()
T0 = time.time()
r, rmax = main()
print time.time() - T0
</code></pre>
<p>It runs in a bit more than 20 sec on my PC. One has to be careful not to use too much memory, which is why there is still a loop with a somewhat complicated construction that uses only pieces of X at a time. If enough memory is present, it is not necessary.</p>
<p>One should also note that this is just the first step of speeding up. Much further improvement could be reached using e.g. just-in-time compilation or Cython.</p>
| 1 |
2016-09-16T09:25:44Z
|
[
"python",
"multithreading",
"numpy",
"multiprocessing"
] |
Multiprocessing python function for numerical calculations
| 39,526,831 |
<p>I'm hoping to get some help here with parallelising my Python code. I've been struggling with it for a while and have run into several errors whichever way I try; currently, running the code takes about 2-3 hours to complete. The code is given below:</p>
<pre><code>import numpy as np
from scipy.constants import Boltzmann, elementary_charge as kb, e
import multiprocessing
from functools import partial
Tc = 9.2
x = []
g= []
def Delta(T):
'''
Delta(T) takes a temperature as an input and calculates a
temperature dependent variable based on Tc which is defined as a
global parameter
'''
    d0 = (np.pi/1.78)*kb*Tc
D0 = d0*(np.sqrt(1-(T**2/Tc**2)))
return D0
def element_in_sum(T, n, phi):
D = Delta(T)
matsubara_frequency = (np.pi * kb * T) * (2*n + 1)
    factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d)
return element
def sum_elements(T, M, phi):
'''
sum_elements(T,M,phi) is the most computationally heavy part
of the calculations, the larger the M value the more accurate the
results are.
T: temperature
M: number of steps for matrix calculation the larger the more accurate the calculation
phi: The phase of the system can be between 0- pi
'''
X = list(np.arange(0,M,1))
Y = [element_in_sum(T, n, phi) for n in X]
return sum(Y)
def KO_1(M, T, phi):
Iko1Rn = (2 * np.pi * kb * T /e) * sum_elements(T, M, phi)
return Iko1Rn
def main():
for j in range(1, 92):
T = 0.1*j
for i in range(1, 314):
phi = 0.01*i
pool = multiprocessing.Pool()
result = pool.apply_async(KO_1,args=(26000, T, phi,))
g.append(result)
pool.close()
pool.join()
A = max(g);
x.append(A)
del g[:]
</code></pre>
<p>My approach was to try to send the KO_1 function into a multiprocessing pool, but I either get a <code>Pickling</code> error or a <code>too many files open</code> error. Any help is greatly appreciated, and if multiprocessing is the wrong approach I would love any guidance.</p>
| 0 |
2016-09-16T08:22:11Z
| 39,528,015 |
<p>I haven't tested your code, but you can do several things to improve it.</p>
<p>First of all, don't create arrays unnecessarily. <code>sum_elements</code> creates three array-like objects when it can use just one generator. First, <code>np.arange</code> creates a numpy array, then the <code>list</code> function creates a list object, and then the list comprehension creates another list. The function does 4 times the work it should.</p>
<p>The correct way to implement it (in python3) would be:</p>
<pre><code>def sum_elements(T, M, phi):
return sum(element_in_sum(T, n, phi) for n in range(0, M, 1))
</code></pre>
<p>If you use python2, replace <code>range</code> with <code>xrange</code>.
This tip will probably help you in any python script you'll write.</p>
<p>Also, try to utilize multiprocessing better. It seems what you need to do is to create a <code>multiprocessing.Pool</code> object <strong>once</strong>, and use the <code>pool.map</code> function.</p>
<p>The main function should look like this:</p>
<pre><code>def job(args):
i, j = args
T = 0.1*j
phi = 0.01*i
    return KO_1(26000, T, phi)
def main():
pool = multiprocessing.Pool(processes=4) # You can change this number
    x = [max(pool.imap(job, ((i, j) for i in range(1, 314)))) for j in range(1, 92)]
</code></pre>
<p>Notice that I used a tuple in order to pass multiple arguments to job.</p>
| 1 |
2016-09-16T09:27:49Z
|
[
"python",
"multithreading",
"numpy",
"multiprocessing"
] |
Copying (efficiently) dataframe content inside of another dataframe using Python
| 39,526,886 |
<p>I have one dataframe (<code>df1</code>):</p>
<p><a href="http://i.stack.imgur.com/eXEeG.png" rel="nofollow"><img src="http://i.stack.imgur.com/eXEeG.png" alt="enter image description here"></a></p>
<p>I then create a new dataframe (<code>df2</code>) which has twice as many rows as <code>df1</code>. My goal is to copy some elements of the first dataframe inside the second one in a smart way so that the result looks like this:</p>
<p><a href="http://i.stack.imgur.com/3Z1jK.png" rel="nofollow"><img src="http://i.stack.imgur.com/3Z1jK.png" alt="enter image description here"></a></p>
<p>So far I was able to reach this goal by using the following commands:</p>
<pre><code>raw_data = {'A': ['pinco', 'pallo', 'pollo'],
'B': ['lollo', 'fallo', 'gollo'],
'C': ['pizzo', 'pazzo', 'razzo']}
df1 = pd.DataFrame(raw_data, columns = ['A', 'B', 'C'])
columns = ['XXX','YYY', 'ZZZ']
N = 3
df2 = pd.DataFrame(columns=columns,index=range(N*2))
idx = 0
for i in range(N):
df2['XXX'].loc[idx] = df1['A'].loc[i]
df2['XXX'].loc[idx+1] = df1['A'].loc[i]
df2['YYY'].loc[idx] = df1['B'].loc[i]
df2['YYY'].loc[idx+1] = df1['C'].loc[i]
idx += 2
</code></pre>
<p>However I am looking for a more efficient (more compact and elegant) way to obtain this result. I tried to use the following combination inside of the for loop without success:</p>
<pre><code>df2[['XXX','YYY']].loc[idx] = df1[['A', 'B']].loc[i]
df2[['XXX','YYY']].loc[idx+1] = df1[['A', 'C']].loc[i]
</code></pre>
| 2 |
2016-09-16T08:26:44Z
| 39,527,551 |
<p>You could do it this way:</p>
<pre><code>df2['XXX'] = np.repeat(df1['A'].values, 2) # Repeat elements in A twice
df2.loc[::2, 'YYY'] = df1['B'].values # Fill even rows with B values
df2.loc[1::2, 'YYY'] = df1['C'].values # Fill odd rows with C values
XXX YYY ZZZ
0 pinco lollo NaN
1 pinco pizzo NaN
2 pallo fallo NaN
3 pallo pazzo NaN
4 pollo gollo NaN
5 pollo razzo NaN
</code></pre>
| 4 |
2016-09-16T09:04:01Z
|
[
"python",
"pandas",
"dataframe",
"copy"
] |
Copying (efficiently) dataframe content inside of another dataframe using Python
| 39,526,886 |
<p>I have one dataframe (<code>df1</code>):</p>
<p><a href="http://i.stack.imgur.com/eXEeG.png" rel="nofollow"><img src="http://i.stack.imgur.com/eXEeG.png" alt="enter image description here"></a></p>
<p>I then create a new dataframe (<code>df2</code>) which has twice as many rows as <code>df1</code>. My goal is to copy some elements of the first dataframe inside the second one in a smart way so that the result looks like this:</p>
<p><a href="http://i.stack.imgur.com/3Z1jK.png" rel="nofollow"><img src="http://i.stack.imgur.com/3Z1jK.png" alt="enter image description here"></a></p>
<p>So far I was able to reach this goal by using the following commands:</p>
<pre><code>raw_data = {'A': ['pinco', 'pallo', 'pollo'],
'B': ['lollo', 'fallo', 'gollo'],
'C': ['pizzo', 'pazzo', 'razzo']}
df1 = pd.DataFrame(raw_data, columns = ['A', 'B', 'C'])
columns = ['XXX','YYY', 'ZZZ']
N = 3
df2 = pd.DataFrame(columns=columns,index=range(N*2))
idx = 0
for i in range(N):
df2['XXX'].loc[idx] = df1['A'].loc[i]
df2['XXX'].loc[idx+1] = df1['A'].loc[i]
df2['YYY'].loc[idx] = df1['B'].loc[i]
df2['YYY'].loc[idx+1] = df1['C'].loc[i]
idx += 2
</code></pre>
<p>However I am looking for a more efficient (more compact and elegant) way to obtain this result. I tried to use the following combination inside of the for loop without success:</p>
<pre><code>df2[['XXX','YYY']].loc[idx] = df1[['A', 'B']].loc[i]
df2[['XXX','YYY']].loc[idx+1] = df1[['A', 'C']].loc[i]
</code></pre>
| 2 |
2016-09-16T08:26:44Z
| 39,534,662 |
<p>Working from Nickil Maveli's answer, there's a faster (if somewhat more arcane) solution if you interleave B and C into a single array first. <a href="http://stackoverflow.com/questions/5347065/interweaving-two-numpy-arrays">(c. f. this question).</a></p>
<pre><code># Repeat elements in A twice
df2['XXX'] = np.repeat(df1['A'].values, 2)
# make a single interleaved array from the values of B and C and copy to YYYY
df2['YYY'] = np.dstack((df1['B'].values, df1['C'].values)).ravel()
</code></pre>
<p>On my machine there was about a 3x speedup</p>
<pre><code>In [110]: %timeit df2.loc[::2, 'YYY'] = df1['B'].values; df2.loc[::2, 'YYY'] = df1['C'].values
1000 loops, best of 3: 274 µs per loop
In [111]: %timeit df2['YYY'] = np.dstack((df1['B'].values, df1['C'].values)).ravel()
10000 loops, best of 3: 87.5 µs per loop
</code></pre>
| 2 |
2016-09-16T15:02:31Z
|
[
"python",
"pandas",
"dataframe",
"copy"
] |
Url list problems
| 39,526,953 |
<p>I have a problem with URL patterns; you can see them below.</p>
<p>I can connect to only one category, called "Python" (<code>slug = 'python'</code>). Links to others like "Django", "Other categories", "Myown" are not working; they show me 404 errors like the one below.</p>
<pre><code>Page not found (404)
Request Method: GET
Request URL: http://127.0.0.1:8000/rango/category/myown
Using the URLconf defined in tang_w_djang.urls, Django tried these URL
patterns, in this order:
^admin/
^$ [name='index']
^rango/ ^$ [name='index']
^rango/ ^about/$ [name='about']
^rango/ ^add_category/$ [name='add_category']
^rango/ ^category/(?P<category_name_slug>[\w\-]+)/$ [name='show_category']
^rango/ ^category/(?P<category_name_slug>[\w\-]+)/add_page/$ [name='add_page']
The current URL, rango/category/myown, didn't match any of these.
</code></pre>
| 0 |
2016-09-16T08:30:18Z
| 39,528,399 |
<p>Your URL pattern <code>^category/(?P<category_name_slug>[\w\-]+)/$</code> has a trailing slash. </p>
<p>Therefore, you should use the URL <code>http://127.0.0.1:8000/rango/category/myown/</code> instead of <code>http://127.0.0.1:8000/rango/category/myown</code> to view the category.</p>
<p>If you have the <a href="https://docs.djangoproject.com/en/1.10/ref/middleware/#module-django.middleware.common" rel="nofollow">common middleware</a> enabled and <code>APPEND_SLASH=True</code> in your settings, then Django should redirect from <code>/rango/category/myown</code> to <code>/rango/category/myown/</code>. See the docs for more info.</p>
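<p>For reference, the relevant settings look like this (both are Django defaults; on Django < 1.10 the list is called <code>MIDDLEWARE_CLASSES</code>):</p>
<pre><code># settings.py
MIDDLEWARE = [
    'django.middleware.common.CommonMiddleware',
    # ...
]

APPEND_SLASH = True  # redirects /rango/category/myown to /rango/category/myown/
</code></pre>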
| 0 |
2016-09-16T09:46:09Z
|
[
"python",
"django"
] |
Secure deployment of client secrets in python
| 39,527,151 |
<p>I'm planning to write a <code>Kodi (former XBMC)</code> plugin for <code>Spotify</code> using <code>Python</code>. Some time ago, Spotify deprecated their old library <code>libspotify</code> and introduced a new <code>ReST</code> based <code>WebAPI</code>. I would like to use this api to request data like the playlists, followed albums and other user specific stuff from Spotify. This <code>WebAPI</code> uses the <code>OAUTH</code> mechanism to authorize an application to use user-specific data.
Thus, I require a <code>Client ID</code> and a <code>Client Secret</code>. Since the <code>Client ID</code> is public, I have no problem storing it in the source code. But what about the <code>Client Secret</code>? This secret is required by the application to authenticate itself to Spotify. Thus, it needs to be deployed as well.</p>
<p><strong>How do I securly deploy this secret, such that a user of the plugin is not able to read out the secret?</strong></p>
<p>I can't use obfuscation techniques because Python is interpreted and a user can simply start an interpreter, import my modules and read out the reconstructed secret. The same holds for encrypting the secret: the application needs to be able to decrypt it, and because of this I would need to deploy the encryption key as well. This is a chicken-and-egg problem.</p>
<p>Any suggestions about this? How does other software solve this problem?</p>
<p>EDIT: I just found this <a href="https://tools.ietf.org/html/rfc6819#page-16" rel="nofollow">RFC6819</a>. Seems like this is a general problem in <code>oauth</code>. </p>
| 2 |
2016-09-16T08:41:22Z
| 39,527,202 |
<p>In this case, you can use the <a href="https://developer.spotify.com/web-api/authorization-guide/#implicit_grant_flow" rel="nofollow">Implicit Grant Flow</a>, which is designed for client-side applications where storing the secret is impractical for security reasons.</p>
| 2 |
2016-09-16T08:44:18Z
|
[
"python",
"rest",
"security",
"oauth",
"spotify"
] |
Iterating over only a part of list in Python
| 39,527,176 |
<p>I have a list in Python that consists of both alphabetic and numeric elements, say something like <code>list = ["a", 1, 2, 3, "b", 4, 5, 6]</code>, and I want to slice it into 2 lists containing the numbers that follow the alphabetic characters, so <code>list1 = [1, 2, 3]</code> and <code>list2 = [4, 5, 6]</code>. The <code>a</code> and <code>b</code> elements could be in reversed order, but generally I want to store the numeric elements that follow the <code>a</code> and <code>b</code> elements in separate lists. The easiest solution that I came up with was creating a loop with a condition:</p>
<pre><code> #Generating a list for numeric elements following "a":
for e in list[list.index("a")+1:]:
if not str.isdigit(e):
break
else:
list1.append(e)
</code></pre>
<p>I'd do it similarly for <code>list2</code> and numeric elements after <code>"b"</code>.
But could there be more elegant solutions? I'm new to Python, but I've seen beautiful one-liner constructions, could there be something like that in my case? Thanks in advance. </p>
| 0 |
2016-09-16T08:42:41Z
| 39,527,499 |
<p>Here you have a functional approach:</p>
<pre><code>>>> l = ["a", 1, 2, 3, "b", 4, 5, 6]
>>> dig = [x for (x, y) in enumerate(l) if type(y) is str] + [len(l)]
>>> dig
[0, 4, 8]
>>> slices = zip(map(lambda x:x+1, dig), dig[1:])
>>> slices
[(1, 4), (5, 8)]
>>> lists = map(lambda (i, e): l[i:e], slices)
>>> lists
[[1, 2, 3], [4, 5, 6]]
</code></pre>
<p>First we get the indices of the letters; note that we also append the size of the list so we know where the last group ends:</p>
<pre><code>[x for (x, y) in enumerate(l) if type(y) is str] + [len(l)]
</code></pre>
<p>Then we get the pair of slices where the lists are:</p>
<pre><code>zip(map(lambda x:x+1, dig), dig[1:])
</code></pre>
<p>Finally, we get each slice from the original list:</p>
<pre><code>map(lambda (i, e): l[i:e], slices)
</code></pre>
| 3 |
2016-09-16T09:00:31Z
|
[
"python",
"iteration"
] |
Iterating over only a part of list in Python
| 39,527,176 |
<p>I have a list in Python that consists of both alphabetic and numeric elements, say something like <code>list = ["a", 1, 2, 3, "b", 4, 5, 6]</code>, and I want to slice it into 2 lists containing the numbers that follow the alphabetic characters, so <code>list1 = [1, 2, 3]</code> and <code>list2 = [4, 5, 6]</code>. The <code>a</code> and <code>b</code> elements could be in reversed order, but generally I want to store the numeric elements that follow the <code>a</code> and <code>b</code> elements in separate lists. The easiest solution that I came up with was creating a loop with a condition:</p>
<pre><code> #Generating a list for numeric elements following "a":
for e in list[list.index("a")+1:]:
if not str.isdigit(e):
break
else:
list1.append(e)
</code></pre>
<p>I'd do it similarly for <code>list2</code> and numeric elements after <code>"b"</code>.
But could there be more elegant solutions? I'm new to Python, but I've seen beautiful one-liner constructions, could there be something like that in my case? Thanks in advance. </p>
| 0 |
2016-09-16T08:42:41Z
| 39,527,521 |
<p>Something like this, maybe?</p>
<pre><code>>>> import itertools
>>> import numbers
>>> lst = ["a", 1, 2, 3, "b", 4, 5, 6]
>>> groups = itertools.groupby(lst, key=lambda x: isinstance(x, numbers.Number))
>>> result = [[x for x in group_iter] for is_number, group_iter in groups if is_number]
>>> result
[[1, 2, 3], [4, 5, 6]]
</code></pre>
<p>And here is a less "sexy" version that outputs a list of tuple pairs <code>(group_key, group_numbers)</code>:</p>
<pre><code>>>> import itertools
>>> import numbers
>>> lst = ["a", 1, 2, 3, "b", 4, 5, 6]
>>> groups = itertools.groupby(lst, key=lambda x: isinstance(x, numbers.Number))
>>> group_key = None
>>> result = []
>>> for is_number, group_iter in groups:
... if not is_number:
... for x in group_iter:
... group_key = x
... else:
... result.append((group_key, [x for x in group_iter]))
>>> result
[('a', [1, 2, 3]), ('b', [4, 5, 6])]
</code></pre>
<p>Note that it is a quick and dirty version which expects the input data to be well-formed.</p>
| 4 |
2016-09-16T09:01:55Z
|
[
"python",
"iteration"
] |
Iterating over only a part of list in Python
| 39,527,176 |
<p>I have a list in Python that consists of both alphabetic and numeric elements, say something like <code>list = ["a", 1, 2, 3, "b", 4, 5, 6]</code>, and I want to slice it into 2 lists containing the numbers that follow the alphabetic characters, so <code>list1 = [1, 2, 3]</code> and <code>list2 = [4, 5, 6]</code>. The <code>a</code> and <code>b</code> elements could be in reversed order, but generally I want to store the numeric elements that follow the <code>a</code> and <code>b</code> elements in separate lists. The easiest solution that I came up with was creating a loop with a condition:</p>
<pre><code> #Generating a list for numeric elements following "a":
for e in list[list.index("a")+1:]:
if not str.isdigit(e):
break
else:
list1.append(e)
</code></pre>
<p>I'd do it similarly for <code>list2</code> and numeric elements after <code>"b"</code>.
But could there be more elegant solutions? I'm new to Python, but I've seen beautiful one-liner constructions, could there be something like that in my case? Thanks in advance. </p>
| 0 |
2016-09-16T08:42:41Z
| 39,527,537 |
<p>You can use slices:</p>
<pre><code>list = ["a", 1, 2, 3, "b", 4, 5, 6]
lista = list[list.index('a')+1:list.index('b')]
listb = list[list.index('b')+1:]
</code></pre>
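<p>If the letters aren't known in advance, the same idea generalizes. A small sketch (variable names are mine):</p>
<pre><code>lst = ["a", 1, 2, 3, "b", 4, 5, 6]
# positions of the letter markers, plus a sentinel for the end of the list
marks = [i for i, x in enumerate(lst) if isinstance(x, str)] + [len(lst)]
groups = {lst[m]: lst[m + 1:n] for m, n in zip(marks, marks[1:])}
# groups == {'a': [1, 2, 3], 'b': [4, 5, 6]}
</code></pre>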
| 0 |
2016-09-16T09:03:09Z
|
[
"python",
"iteration"
] |
Iterating over only a part of list in Python
| 39,527,176 |
<p>I have a list in Python that consists of both alphabetic and numeric elements, say something like <code>list = ["a", 1, 2, 3, "b", 4, 5, 6]</code>, and I want to slice it into 2 lists containing the numbers that follow the alphabetic characters, so <code>list1 = [1, 2, 3]</code> and <code>list2 = [4, 5, 6]</code>. The <code>a</code> and <code>b</code> elements could be in reversed order, but generally I want to store the numeric elements that follow the <code>a</code> and <code>b</code> elements in separate lists. The easiest solution that I came up with was creating a loop with a condition:</p>
<pre><code> #Generating a list for numeric elements following "a":
for e in list[list.index("a")+1:]:
if not str.isdigit(e):
break
else:
list1.append(e)
</code></pre>
<p>I'd do it similarly for <code>list2</code> and numeric elements after <code>"b"</code>.
But could there be more elegant solutions? I'm new to Python, but I've seen beautiful one-liner constructions, could there be something like that in my case? Thanks in advance. </p>
| 0 |
2016-09-16T08:42:41Z
| 39,527,735 |
<p>Another approach (Python 3 only):</p>
<pre><code>def chunks(values, idx=0):
''' Yield successive integer values delimited by a character. '''
tmp = []
for idx, val in enumerate(values[1:], idx):
if not isinstance(val, int):
yield from chunks(values[idx + 1:], idx)
break
tmp.append(val)
yield tmp
>>> values = ['a', 1, 2, 3, 'b', 4, 5, 6]
>>> list(chunks(values))
[[4, 5, 6], [1, 2, 3]]
</code></pre>
| 0 |
2016-09-16T09:14:08Z
|
[
"python",
"iteration"
] |
How to load remote database in cache, in python?
| 39,527,206 |
<p>I want to load a database from a remote server to my memory/cache, so that I don't have to make network calls every time I want to use the database. </p>
<p>I am doing this in Python and the database is <code>cassandra</code>. How should I do it? I have heard about <code>memcached</code> and <code>beaker</code>. Which library is best for this purpose?</p>
| 0 |
2016-09-16T08:44:39Z
| 39,527,394 |
<p>If you are trying to get some data from a database, use the <a href="https://mkleehammer.github.io/pyodbc/" rel="nofollow">pyodbc</a> module. This module can be used to download data from a given table in a database. Answers can also be found <a href="http://stackoverflow.com/questions/11451101/retrieving-data-from-sql-using-pyodbc">here</a>.</p>
<p>An example of how to connect:</p>
<pre><code>import pyodbc
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;'
                      'DATABASE=DATABASE;UID=USER;PWD=PASSWORD')
cursor = cnxn.cursor()
cursor.execute("SQL_QUERY")
for row in cursor.fetchall():
print row
</code></pre>
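<p>To keep the result in memory afterwards, load the rows into a plain dict once and read from it instead of re-querying. A sketch with placeholder table and column names:</p>
<pre><code>cache = {}
cursor.execute("SELECT id, payload FROM my_table")  # placeholder query
for row in cursor.fetchall():
    cache[row.id] = row.payload  # pyodbc rows allow attribute access by column name
print cache.get(42)  # later lookups hit memory, not the network
</code></pre>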
| 0 |
2016-09-16T08:54:28Z
|
[
"python",
"database",
"caching",
"cassandra",
"memcached"
] |
How to get the json from a text file in python
| 39,527,267 |
<pre><code>def calculateStats():
pattern = "Online_R*.txt"
for root, dirs, files in os.walk("."):
for name in files:
if fnmatch.fnmatch(name, pattern):
if(fName in root):
fileName = os.path.join(root, name)
intializeCSV(fileName)
data = []
f = open(fileName, "r")
lines = f.readlines()
first_instance = True
for i in range(len(lines)):
isSubstring = "ONLINE_DATA_RECEIVED_FROM_INFROMATION_RETRIVAL_SYSTEM" in lines[i].rstrip('\n')
isRating = "RATING_ON_KEYWORDS" in lines[i].rstrip('\n')
if(first_instance and isSubstring):
first_instance = False
continue
elif isSubstring:
                            data = lines[i].rstrip('\n')
data = json.loads(data)
print data
print data["inputs"]
</code></pre>
<p>Data:</p>
<pre><code>2016-09-12 16:31:50.864000 ONLINE_DATA_RECEIVED_FROM_INFROMATION_RETRIVAL_SYSTEM
{u'debug': [u'time to fit model 0.06 s', u'time to generate suggestions 0.11 s', u'time to search documents 4.93 s', u'time to misc operations 0.02 s'], u'articles': [{u'is-saved': False, u'title': u'Computer Vision and Computer Graphics Analysis of Paintings and Drawings: An Introduction to the Literature', u'abstract': u'In the past few years, a number of scholars trained in computer vision; pattern recognition, image processing, computer graphics; and art history have developed rigorous computer methods for addressing an increasing number of problems in the history of art. In some cases; these computer methods aremore accurate than even highly trained connoisseurs, art historians and artists. Computer graphics models of artists\' studios and subjects allow scholars to explore "what if" scenarios and determine artists\' studio praxis. Rigorous computer ray-tracing software sheds light; on claims that; some artists employed optical tools. Computer methods win not replace tradition arthistorical methods of connoisseurship but enhance and extend them. As such, for these computer methods to be useful to the art community, they must continue to be refilled through application to a variety of significant art historical problems.', u'date': u'2009-01-01T00:00:00', u'publication-forum': u'COMPUTER ANALYSIS OF IMAGES AND PATTERNS, PROCEEDINGS', u'publication-forum-type': u'article', u'authors': u'D G Stork', u'keywords': u'pattern recognition, computer image analysis, brush stroke analysis, painting analysis, image forensics, compositing, computer graphics reconstructions, image processing, computer graphics, recognition, computer vision, graph, vision', u'id': u'575b005e12a085663bfef04f'}, {u'is-saved': False, u'title': u'Chomsky and Egan on computational theories of vision', u'abstract': u"Noam Chomsky and Frances Egan argue that David Marr's computational theoryof vision is not intentional, claiming that the formal scientific theory does not include description of visual content. They also argue that the theory is internalist in the sense of not describing things physically external to the perceiver. They argue that these claims hold for computational theories of vision in general. Beyond theories of vision, they argue that representational content does not figure as a topic within formal computationaltheories in cognitive science. I demonstrate that Chomsky's and Egan's claims about Marr's theory are false. Marr's computational theory contains a mathematical theory of visual content, based on empirical psychophysical evidence. It also contains mathematical descriptions of distal physical surfaces and objects, and of their optic projection to the perceiver. Much computational research on vision contains these types of intentional and externalist components within the formal, mathematical, theories. Chomsky's and Egan's claims demonstrate inadequate study and understanding of Marr's work and other research in this area. 
Computational theories of vision, by containing empirically based mathematical theories of visual content, to this extent present naturalizations of semantics.", u'date': u'2006-01-01T00:00:00', u'publication-forum': u'MINDS AND MACHINES', u'publication-forum-type': u'article', u'authors': u'A Silverberg', u'keywords': u'chomsky, computational theory, egan, marr, physical assumptions, visual content, 2.5-d sketch, 3-d representation, research, vision', u'id': u'575aff6012a085663bfef01a'}, {u'is-saved': False, u'title': u'Inspection and grading of agricultural and food products by computer vision systems - a review', u'abstract': u'Computer vision is a rapid, economic, consistent and objective inspection technique, which has expanded into many diverse industries. Its speed and accuracy satisfy ever-increasing production and quality requirements, hence aiding in the development of totally automated processes. This non-destructive method of inspection has found applications in the agricultural and food industry, including the inspection and grading of fruit and vegetable. Ithas also been used successfully in the analysis of grain characteristics and in the evaluation of foods such as meats, cheese and pizza. This paper reviews the progress of computer vision in the agricultural and food industry, then identifies areas for further research and wider application the technique. (C) 2002 Elsevier Science B.V. All rights reserved.', u'date': u'2002-01-01T00:00:00', u'publication-forum': u'COMPUTERS AND ELECTRONICS IN AGRICULTURE', u'publication-forum-type': u'article', u'authors': u'T Brosnan, D W Sun', u'keywords': u'computer vision, food, fruit, grain, image analysis and processing, vegetables, automation, characters, research, computer vision system, meats, vision', u'id': u'577b28bd12a0856ea8376b9e'}, {u'is-saved': False, u'title': u'Computer Vision Support for the Orthodontic Diagnosis', u'abstract': u"The following paper presents the achievement reached by our joined teams: Computer Vision System Group (ZKSW) in the Institute of Theoretical and Applied Informatics, Polish Academy of Sciences and Department of Orthodontics, Silesian Medical University. The cooperation began from the inspiration of late Prof. A. Mrozek. Computer Vision in supporting orthodontic diagnosismeans all the problems connected with proper acquisition, calibration and analysis of the diagnostic images of orthodontic patients. The aim of traditional cephalometric analysis is the quantitative confirmation of skeletal and/or soft tissue abnormalities on single images, assessment of the treatment plan, long term follow up of growth and treatment results. Beginning with the computerization of the methods used in traditional manual diagnosis in the simplest X-ray films of the patient's head we have developed our research towards engaging different methods of morphometrics, deformation analysis and using different imaging modalities: pairs of cephalograms (lateralan frontal), CT-scans, laser scans of dental models, laser scans of soft tissues, finally merging all the image information into patient's specific geometric and deformable model of the head. The model can be further exploited in the supporting of the surgical correction of jaw discrepancies. 
Our laboratory equipment allows us to design virtual operations, educational programs in a virtual reality with a CyberGlove device, and finally to verify the plan of intervention on stereo lithographic solid models received from a 3D printer.", u'date': u'2009-01-01T00:00:00', u'publication-forum': u'MAN-MACHINE INTERACTIONS', u'publication-forum-type': u'article', u'authors': u'A Tomaka, A Pisulska-Otremba', u'keywords': u'computer vision, orthodontic diagnosis, image acquisition, calibration, merging information, virtual reality, research, computer vision system, education, vision', u'id': u'577b28bd12a0856ea8376ba3'}, {u'is-saved': False, u'title': u'Computer vision for a robot sculptor', u'abstract': u"Before make computers can be active collaborators in design work, they must be equipped with some human-like visual and design skills. Towards this end, we report some advances in integrating computer vision and automated design in a computational model of ''artistic vision'' - the ability to see something striking in a subject and express it in a creative design. The Artificial Artist studies images of animals, then designs sculpture that conveys something of the strength, tension, and expression in the animals' bodies. It performs an anatomical analysis using conventional computer vision techniques constrained by high-level causal inference to find significant areas of the body, e.g., joints under stress. The sculptural form - kinetic mobiles - presents a number of mechanical and aesthetic design challenges, which the system solves in imagery using field-based computing methods. Coupled potential fields simultaneously enforce soft and hard constraints - e.g., the mobile should resemble the original animal and every subassembly of the mobile must be precisely balanced. The system uses iconic representations in all stages, obviating the need to translate between spatial and predicate representations and allowing a rich flow of information between vision and design.", u'date': u'1997-01-01T00:00:00', u'publication-forum': u'HUMAN VISION AND ELECTRONIC IMAGING II', u'publication-forum-type': u'proceedings paper', u'authors': u'M Brand', u'keywords': u'vision, causal analysis, potential fields, automated design, computer vision, robotics', u'id': u'575aff6012a085663bfef00b'}, {u'is-saved': False, u'title': u'Computer vision syndrome: A review', u'abstract': u'As computers become part Of Our everyday life, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and treatment modalities that are available at this time. Computer vision syndrome symptoms may be the cause of ocular (ocular-surface abnormalities or accommodative spasms) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer Vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of computer monitor and regular work breaks may help improve visual comfort. Lubricatingeye drops and special computer glasses help relieve ocular surface-relatedsymptoms. 
More work needs to be,done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes. &COPY; 2005 Elsevier Inc. All rights reserved.', u'date': u'2005-01-01T00:00:00', u'publication-forum': u'SURVEY OF OPHTHALMOLOGY', u'publication-forum-type': u'review', u'authors': u'C Blehm, S Vishnu, A Khattak, S Mitra, R W Yee', u'keywords': u'asthenopia, computer vision syndrome, dry eye, ergonomics, eyestrain, glare, video display terminals, computer vision, vision', u'id': u'575aff6012a085663bfeeff7'}, {u'is-saved': False, u'title': u'Social impact of computer vision', u'abstract': u"From the viewpoint of the economic growth theorist, the broad social impact of improving computer vision should be to improve people's material well-being. Developing computer vision entails building knowledge of perception and interpretation into new devices which enhance the scope and depth of human capability. Some worry that saving lives and replacing tedious jobs through computer vision will burden society with increasing population and unemployment; such worries are unjustified because humans are ''the ultimate resource.'' Because development of computer vision has costs as well as benefits, developers who wish to have a positive social impact should pursue projects that promise to pay off in the open market, and should seek private instead of government funding as much as possible.", u'date': u'1997-01-01T00:00:00', u'publication-forum': u'EMERGING APPLICATIONS OF COMPUTER VISION - 25TH AIPR WORKSHOP', u'publication-forum-type': u'proceedings paper', u'authors': u'H Baetjer', u'keywords': u'computer vision, economic growth, capital, population, employment, funding, profit, perception, vision', u'id': u'575aff6012a085663bfef000'}, {u'is-saved': False, u'title': u'Nondestructive testing of specularly reflective objects using reflection three-dimensional computer vision technique', u'abstract': u'We review an optical method referred to as 3-D computer vision technique for nondestructive inspection of three-dimensional objects whose surfaces are specularly reflective. In the setup, a computer-generated cosinusoidal fringe pattern in the form of linear, parallel fringe lines of equal spacing is displayed on a TV monitor. The monitor is placed in front of the test object, whose specularly reflective surface behaves as a mirror. A virtual image (or mirror image) of the fringe lines is thus formed. For a planar surface, the fringe pattern of the image is undistorted. The fringe lines, however, are distorted according to the slope distribution if the surface is not flat. By digitizing the distorted fringe lines, employing a phase-shift technique, the fringe phase distribution is determined, hence enabling subsequent determination of the surface slope distribution. When applied to nondestructive flaw detection, two separate recordings of the virtual image of the fringe lines are made, one before and another after an incremental loadis applied on the test object. The difference of the two phase-fringe distributions, or the phase change, represents the change in surface slope of the object due to the deformation. As a subsurface flaw also affects surfacedeformation, both surface and subsurface flaws are thus revealed from anomalies in the surface slope change. The method is simple, robust, and applicable in industrial environments. 
(C) 2003 Society of Photo-Optical Instrumentation Engineers.', u'date': u'2003-01-01T00:00:00', u'publication-forum': u'OPTICAL ENGINEERING', u'publication-forum-type': u'article', u'authors': u'M YY Hung, H M Shang', u'keywords': u'machine vision, computer vision, optical measurement, nondestructive testing, surface quality evaluation, vision', u'id': u'57ac8b0712a0856bc72d8cca'}, {u'is-saved': False, u'title': u'Hybrid optoelectronic processing and computer vision techniques for intelligent debris analysis', u'abstract': u'Intelligent Debris Analysis (IDA) requires significant time and resources due to the large number of images to be processed. To address this problem,we propose a hybrid optoelectronic and computer vision approach. Two majorsteps are involved for IDA: patch-level analysis and particle level analysis. An optoelectronic detection system using two ferroelectric liquid crystal spatial light modulators is designed and constructed to perform patch-level analysis, and advanced computer vision techniques are developed to carry out more intelligent particle-level analysis. Since typically only a small portion of the debris filters require more sophisticated particle-level analysis, the proposed approach enables high-speed automated analysis of debris fitters due to the inherent parallelism provided by the optoelectronic system.', u'date': u'1998-01-01T00:00:00', u'publication-forum': u'ALGORITHMS, DEVICES, AND SYSTEMS FOR OPTICAL INFORMATION PROCESSING', u'publication-forum-type': u'article', u'authors': u'Q MJ Wu, C P Grover, A Dumitras, D Liew, A Jerbi', u'keywords': u'optical information processing, computer vision, image analysis, intelligent debris analysis, automation, vision', u'id': u'57bed02e12a0850d372e5f17'}, {u'is-saved': False, u'title': u'Vlfeat an open and portable library of computer vision algorithms', u'url': u'http://portal.acm.org/citation.cfm?id=1874249', u'abstract': u'VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. 
The library integrates directly with MATLAB, a popular language for computer vision research.', u'date': u'2010-01-01T00:00:00', u'publication-forum': u'International Multimedia Conference', u'publication-forum-type': u'conference', u'authors': u'Andrea Vedaldi, Brian Fulkerson', u'keywords': u'computer vision, image classification, object recognition, visual features, research, matching, vision', u'id': u'575aff6012a085663bfef012'}], u'keywords_local': {u'object recognition': {u'distance': 0.7300072813671763, u'angle': 96.66553497533552}, u'computer graphics': {u'distance': 0.7450305181430191, u'angle': 175.1162951377983}, u'graph': {u'distance': 0.6625181921678064, u'angle': 117.37932095235796}, u'reconfigurability': {u'distance': 0.5679946595851635, u'angle': 0.0}, u'course design': {u'distance': 0.8031378823919815, u'angle': 98.29399495312194}, u'research': {u'distance': 0.6153281573320046, u'angle': 137.52338924477087}, u'computer vision': {u'distance': 1.0, u'angle': 112.02639294117806}, u'image analysis': {u'distance': 0.5832147382377356, u'angle': 180.0}, u'education': {u'distance': 0.6887723921268714, u'angle': 111.53630557659233}, u'vision': {u'distance': 0.7595244667669305, u'angle': 136.46185691516604}}, u'keywords_semi_local': {u'glare': {u'distance': 0.15840304799865776, u'angle': 78.75687844118187}, u'neural networks': {u'distance': 0.2544935361506226, u'angle': 96.66553497533552}, u'robotics': {u'distance': 0.4166449886657276, u'angle': 157.7235521114761}, u'vision engineering': {u'distance': 0.23569778705554037, u'angle': 171.53672248243535}, u'ergonomics': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}, u'image classification': {u'distance': 0.49063995949774913, u'angle': 174.2087227649092}, u'obstacle detection': {u'distance': 0.3377380417460496, u'angle': 131.16295137798312}, u'employment': {u'distance': 0.384487541472409, u'angle': 130.97859424470386}, u'biological vision processes': {u'distance': 0.23569778705554037, u'angle': 171.53672248243535}, u'representation hierarchy': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'chirplet transform': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'sensor placement graph': {u'distance': 0.582323859465567, u'angle': 130.94290603000658}, u'image processing': {u'distance': 1.0, u'angle': 137.52338924477087}, u'gpu': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'high school teachers': {u'distance': 0.11613705056778983, u'angle': 0.0}, u'mediated reality': {u'distance': 0.4708379531993934, u'angle': 180.0}, u'visual odometry': {u'distance': 0.3377380417460496, u'angle': 131.16295137798312}, u'distributed vision': {u'distance': 0.3496513364802925, u'angle': 95.29639511600689}, u'three dimensional representations': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'computer science education': {u'distance': 0.11613705056778983, u'angle': 0.0}, u'reconfigurable computing': {u'distance': 0.2859569313350985, u'angle': 102.37579920408332}, u'teaching': {u'distance': 0.628788707830077, u'angle': 152.95762642133994}, u'population': {u'distance': 0.5759898343270131, u'angle': 156.61518332125544}, u'tracking': {u'distance': 0.3789181276549178, u'angle': 149.6144887690062}, u'object modelling': {u'distance': 0.3988520421657333, u'angle': 147.61573760202845}, u'potential fields': {u'distance': 0.24188910369708108, u'angle': 173.2111784523935}, u'asthenopia': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}, u'physical assumptions': {u'distance': 0.0, u'angle': 
71.55289280630292}, u'perception': {u'distance': 0.43562497018771207, u'angle': 130.60310326514812}, u'eyestrain': {u'distance': 0.15840304799865776, u'angle': 78.75687844118185}}, u'inputs': [[u'hci', 1.0, 0.6142454219528725, 0.07306666297061, 0.0800478407947], [u'design', 0.0, 0.5468406837422238, 0.08760202801780537, 0.01], [u'usefulness', 1.0, 0.4562214561022063, 0.04479820099963043, 0.0656453052827], [u'graph', 1.0, 0.6448427829817995, 0.054873346672524956, 0.0374344673723], [u'reconfigurability', 0.0, 0.2808456351828042, 0.11391946280676753, 0.0436526373409], [u'computer vision', 1.0, 1.0, 0.9907708479613715, 1.0], [u'course design', 1.0, 0.6722828427604761, 0.087227437324513, 0.155838952273], [u'ergonomics', 0.0, 0.3744481774120078, 0.13317889968218008, 0.0638618603466], [u'reconfigurable computing', 0.0, 0.13771106030222424, 0.11923776509308054, 0.0473135364178], [u'mediated reality', 0.0, 0.4562214561022063, 0.16472183049428685, 0.104860423781], [u'education', 0.0, 0.3808226715583354, 0.08382258150001105, 0.01], [u'image processing', 0.0, 0.48646855984497794, 0.11701636909528229, 0.0635854457852], [u'fingerprint matching', 1.0, 0.3497471833044457, 0.032857452254179007, 0.0294450507842], [u'vision', 0.0, 0.8841315712906007, 0.04888374610072927, 0.01]]}
</code></pre>
<p>Error: </p>
<pre><code> File "online-analysis-script.py", line 67, in calculateStats
data = json.loads(data)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 367, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 5 - line 1 column 50 (char 4 - 49)
</code></pre>
<p>Problem:
The problem here is that I am reading a text file, and when I try to read a line that is JSON and parse it so I can get the input fields, it throws an error about decoding/encoding. How can I achieve this?</p>
| -2 |
2016-09-16T08:48:23Z
| 39,527,451 |
<p>It seems that your line is not valid JSON. It looks like the line was created by a <code>print (container)</code> statement in a previous Python script whose output was redirected to create this file; hence the telltale <code>u</code> prefixes in front of the strings, which are Python's <code>repr</code> of unicode strings, not JSON.</p>
<p>The solution is easy: go back to your previous Python script and do this instead:</p>
<pre><code>print (json.dumps(container))
</code></pre>
<p>And then run the script and redirect the output to a file.</p>
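<p>A minimal sketch of the round trip (the <code>container</code> name and its contents here are stand-ins for the real data):</p>
<pre><code>import json

container = {'inputs': [['hci', 1.0], ['design', 0.0]]}  # stand-in for the real data

line = json.dumps(container)   # what the producing script should print
data = json.loads(line)        # what the reading script can then parse
print(data['inputs'])
</code></pre>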
| 0 |
2016-09-16T08:57:57Z
|
[
"python",
"json",
"file"
] |
'module' object has no attribute 'py' when running from cmd
| 39,527,284 |
<p>I'm currently learning unittesting, and I have stumbled upon a strange error:</p>
<p>If I run my script from inside PyCharm, everything works perfectly. If I run it from my <code>cmd.exe</code> (as administrator), I get the following error:</p>
<p><a href="http://i.stack.imgur.com/Q2oDm.png" rel="nofollow"><img src="http://i.stack.imgur.com/Q2oDm.png" alt="enter image description here"></a></p>
<p>This is my code:</p>
<pre><code>import unittest
class TutorialUnittest(unittest.TestCase):
def test_add(self):
self.assertEqual(23,23)
self.assertNotEqual(11,12)
# function for raising errors.
def test_raise(self):
with self.assertRaises(Exception):
            raise Exception
</code></pre>
| 0 |
2016-09-16T08:49:24Z
| 39,528,330 |
<p>Just remove the <code>.py</code> extension.</p>
<p>You are running your tests using the <code>-m</code> command-line flag. The Python documentation will tell you more about it, just check out this <a href="http://docs.python.org/2/using/cmdline.html#cmdoption-m" rel="nofollow">link</a>.</p>
<p>In a word, the <code>-m</code> option lets you run a module, in your case the <code>unittest</code> module. This module expects to receive a module path or a class path following the Python dotted format for module paths. For example, if you want to run the FirstTest class in the mytests module in a mypackage folder you would use the following command line:</p>
<pre><code>python -m unittest mypackage.mytests.FirstTest
</code></pre>
<p>This assumes that you are running the previous command line from the parent folder of mypackage. It allows you to select precisely the tests you want to run (even inside a module).</p>
<p>When you add the <code>.py</code> extension, <code>unittest</code> looks for an attribute named <code>py</code> (such as a module or a class) inside the last element of the module path you gave, but no such object exists. This is exactly what the error in your terminal says:</p>
<pre><code>AttributeError: 'module' object has no attribute 'py'
</code></pre>
| 1 |
2016-09-16T09:42:50Z
|
[
"python",
"unit-testing",
"cmd"
] |
Associating users with models django
| 39,527,289 |
<p>I have a lot of models in model.py -</p>
<pre><code>class Portfolio(models.Model):
company = models.TextField(null=True)
volume = models.IntegerField(blank=True)
date = models.DateField(null=True)
isin = models.TextField(null=True)
class Endday(models.Model):
company = models.TextField(null=True)
isin = models.TextField(null=True)
eop = models.TextField(max_length=100000)
class Results(models.Model):
companies = models.TextField(default=0)
dates = models.DateField(auto_now_add=False)
eodp = models.FloatField(null=True)
volume = models.IntegerField(null=True)
class Sectors(models.Model):
sector_mc = models.TextField(null=True)
class Insector(models.Model):
foundation = models.ForeignKey(Sectors, null=True)
name = models.TextField(null=True)
value = models.FloatField(default=0)
class AreaLineChart(models.Model):
foundation = models.ForeignKey(CompanyForLineCharts, null=True)
date = models.DateField(auto_now_add=False)
price = models.FloatField(null=True)
class Meta:
ordering = ['date']
</code></pre>
<p>I have more such models but as you can see from this snippet, they are not in any way related to any user. </p>
<p>Now I want to relate them to a particular user. In the views too, I was not classifying data per user in any way.</p>
<p>I create users from the Django admin with a username and password and also generate a token for those users from the admin. I can authenticate via username and password, but from there I know I'd need to use permissions, though how is something I do not know. Also, I have serializers associated with these models; I know I'd have to use permissions there too, but again, I don't know how. As far as I understand, it has to be something like this-</p>
<pre><code>@api_view(['GET'])
def searched_company_ohlc(request):
if request.method == 'GET':
// User.objects.get('username'=some_user)
//I guess.
qs = SearchedCompanyOHLC.objects.all()
serializer = SearchedCompanyOHLCSerializer(qs, many=True)
return Response(serializer.data)
</code></pre>
<p>Also, I'm using AngularJS at the front end, which POSTs the <code>username</code> and <code>password</code> to a view with a <code>POST</code> decorator to verify the credentials. Where do I go from here?</p>
| -1 |
2016-09-16T08:49:39Z
| 39,527,550 |
<p>This has nothing to do with permissions.</p>
<p>If you want to associate your model with a user, use a ForeignKey to the user model.</p>
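<p>For example, a sketch using one of the models from the question (the <code>related_name</code> here is an arbitrary choice):</p>
<pre><code>from django.conf import settings
from django.db import models

class Portfolio(models.Model):
    # each portfolio now belongs to exactly one user
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='portfolios')
    company = models.TextField(null=True)
    volume = models.IntegerField(blank=True)
</code></pre>
<p>In a view you can then filter per user, e.g. <code>Portfolio.objects.filter(owner=request.user)</code>.</p>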
| 0 |
2016-09-16T09:03:59Z
|
[
"python",
"django"
] |
Associating users with models django
| 39,527,289 |
<p>I have a lot of models in model.py -</p>
<pre><code>class Portfolio(models.Model):
company = models.TextField(null=True)
volume = models.IntegerField(blank=True)
date = models.DateField(null=True)
isin = models.TextField(null=True)
class Endday(models.Model):
company = models.TextField(null=True)
isin = models.TextField(null=True)
eop = models.TextField(max_length=100000)
class Results(models.Model):
companies = models.TextField(default=0)
dates = models.DateField(auto_now_add=False)
eodp = models.FloatField(null=True)
volume = models.IntegerField(null=True)
class Sectors(models.Model):
sector_mc = models.TextField(null=True)
class Insector(models.Model):
foundation = models.ForeignKey(Sectors, null=True)
name = models.TextField(null=True)
value = models.FloatField(default=0)
class AreaLineChart(models.Model):
foundation = models.ForeignKey(CompanyForLineCharts, null=True)
date = models.DateField(auto_now_add=False)
price = models.FloatField(null=True)
class Meta:
ordering = ['date']
</code></pre>
<p>I have more such models but as you can see from this snippet, they are not in any way related to any user. </p>
<p>Now I want to relate them to a particular user. In the views too, I was not classifying data per user in any way.</p>
<p>I create users from the Django admin with a username and password and also generate a token for those users from the admin. I can authenticate via username and password, but from there I know I'd need to use permissions, though how is something I do not know. Also, I have serializers associated with these models; I know I'd have to use permissions there too, but again, I don't know how. As far as I understand, it has to be something like this-</p>
<pre><code>@api_view(['GET'])
def searched_company_ohlc(request):
if request.method == 'GET':
// User.objects.get('username'=some_user)
//I guess.
qs = SearchedCompanyOHLC.objects.all()
serializer = SearchedCompanyOHLCSerializer(qs, many=True)
return Response(serializer.data)
</code></pre>
<p>Also, I'm using AngularJS at the front end, which POSTs the <code>username</code> and <code>password</code> to a view with a <code>POST</code> decorator to verify the credentials. Where do I go from here?</p>
| -1 |
2016-09-16T08:49:39Z
| 39,527,788 |
<p>In your models.py you can relate a model to a user like this, for example:</p>
<pre><code>from django.contrib.auth.models import User

class Portfolio(models.Model):
    owner = models.ForeignKey(User, verbose_name='User', related_name='portfolios')
company = models.TextField(null=True)
volume = models.IntegerField(blank=True)
date = models.DateField(null=True)
isin = models.TextField(null=True)
</code></pre>
| 0 |
2016-09-16T09:16:30Z
|
[
"python",
"django"
] |
How to let Django fill an html template, then let the user download it without showing it in browser?
| 39,527,294 |
<p>I have an HTML template, and I want Django to fill it with some data, but rather than redirect the user to a view that uses this template, I want Django to send the filled-in template to the user as a downloadable file.<br><br>
Any suggestions?</p>
| -3 |
2016-09-16T08:49:58Z
| 39,527,580 |
<p>You create a regular view, but before returning the response, you set the HTTP Header <code>Content-Disposition</code>.</p>
<pre><code>def download_form(request):
form = ...
response = render(request, 'form_template.html', {'form': form})
response['Content-Disposition'] = 'attachment; filename="form.html"'
return response
</code></pre>
| 1 |
2016-09-16T09:06:08Z
|
[
"python",
"django",
"python-2.7",
"django-templates"
] |
How to fix a loop that runs once if value is correct 1st time. But runs for ever if the 1st value is wrong and 2nd value is correct.
| 39,527,317 |
<p>I'm doing this controlled assessment. I am just a beginner so I don't know too much about python.</p>
<p>I have this code:</p>
<pre><code># defining qualification
def qualification():
print("\nQualification Level") # informs user what is AP + FQ
    print('\n"AP" = Apprentice', '\n"FQ" = Fully-Qualified')
user_qual = input("Enter your Qualification Level")
# error message if any other data is entered
while user_qual not in ("AP", "FQ"):
print("You have entered one or more data wrong!")
print("Please re-enter Qualification Level!")
qualification()
</code></pre>
<p>Every time this code runs, it runs fine until the while loop. If I enter a correct value (i.e. AP or FQ) the first time I run the code, the while loop doesn't run, as it should. But if I enter a wrong value the first time (any value that is not FQ or AP), the while loop runs as it should; after that, though, even if I enter the correct value, the loop doesn't stop. An infinite loop is created.</p>
<p>Please provide an answer, remember I'm just a beginner at programming with python, so please don't let the solution be too complicated. </p>
| 0 |
2016-09-16T08:50:56Z
| 39,527,437 |
<p>You tried to use recursion at the wrong place. </p>
<p>If the user input is wrong the first time, you recurse into a deeper call that will (maybe) read the right input (or recurse deeper still).</p>
<p>But then that call ends and control returns to the previous level of recursion, where the <code>user_qual</code> variable still holds the old wrong value, which results in an infinite loop.</p>
<p><strong>Note: variables are not shared between recursion levels; each call gets its own local scope.</strong> You might want to do a little reading about scopes before you continue with your program.</p>
<hr>
<p>So, instead of calling <code>qualification()</code> on the last line, just ask for input again:</p>
<pre><code>while user_qual not in ("AP", "FQ"):
print("You have entered one or more data wrong!")
user_qual = input("Please re-enter Qualification Level!")
</code></pre>
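<p>Putting it together, the whole function could look like this (a sketch; returning the value so the caller can use it is an addition):</p>
<pre><code>def qualification():
    print("\nQualification Level")
    print('\n"AP" = Apprentice', '\n"FQ" = Fully-Qualified')
    user_qual = input("Enter your Qualification Level")
    # keep asking until the input is one of the two valid codes
    while user_qual not in ("AP", "FQ"):
        print("You have entered one or more data wrong!")
        user_qual = input("Please re-enter Qualification Level!")
    return user_qual
</code></pre>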
<hr>
<p>Another solution will be to use <code>global user_qual</code> in the beginning of the function and in the beginning of the loop. Read about global variables in python if you plan to act so.</p>
| 0 |
2016-09-16T08:56:55Z
|
[
"python",
"python-3.x"
] |
How to use Dask to process a file (or files) in multiple stages
| 39,527,511 |
<p>I'm processing a large text file in memory in 3 stages (currently not using <code>pandas</code>/<code>dataframes</code>)</p>
<p>This takes one raw data text file and processes it in three stages.</p>
<ul>
<li><p>Stage 1 processes <code>raw.txt</code> and kicks out <code>stage1.txt</code></p></li>
<li><p>Stage 2 processes <code>stage1.txt</code> and kicks out <code>stage2.txt</code></p></li>
<li><p>Stage 3 processes <code>stage2.txt</code> and kicks out <code>results.txt</code></p></li>
</ul>
<p>How should I set up a Dask script to work on this locally?
Beyond this, how can you set it up to work with multiple <code>raw.txt</code> files
(i.e. raw1, raw2, raw3)?</p>
<p>At the moment each stage method does not return anything but writes the next file to a specific file location which the next method knows about.</p>
<pre><code>def stage_1():
outputFile=r"C:\Data\Processed\stage_1.txt"
inputFile=r"C:\Data\RawData\rawData.txt"
f1 = open(outputFile,"w+")
f2 = open(inputFile,'r')
#Process input file f2
#Write results to f1
f2.close()
f1.close()
if __name__ == "__main__":
stage_1()
stage_2()
stage_3()
</code></pre>
| 0 |
2016-09-16T09:01:31Z
| 39,530,058 |
<p>I suspect you'll run into a few issues.</p>
<h3>Function Purity</h3>
<p>Dask generally assumes that functions are <a href="http://toolz.readthedocs.io/en/latest/purity.html" rel="nofollow">pure</a> rather than relying on side effects. If you want to use Dask, then I recommend that you change your functions so that they return data rather than produce files.</p>
<p>As a hacky workaround you could pass filenames between functions.</p>
<h3>No Parallelism</h3>
<p>The workflow you've described has no intrinsic parallelism. You can have dask run your functions but it will just run them one after the other. You would need to think about how to break open your computation a bit so that there are several function calls that could run in parallel. Dask will not do this thinking for you.</p>
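<p>A minimal sketch of both points, assuming the stage functions are rewritten to take and return data instead of reading and writing fixed file paths:</p>
<pre><code>import dask

@dask.delayed
def stage_1(text):
    return text  # stand-in for the real stage-1 processing

@dask.delayed
def stage_2(text):
    return text  # stand-in for the real stage-2 processing

@dask.delayed
def stage_3(text):
    return text  # stand-in for the real stage-3 processing

raw_files = ["raw1.txt", "raw2.txt", "raw3.txt"]
pipelines = []
for name in raw_files:
    with open(name) as f:
        pipelines.append(stage_3(stage_2(stage_1(f.read()))))

# the per-file pipelines are independent, so dask can run them in parallel
results = dask.compute(*pipelines)
</code></pre>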
| 1 |
2016-09-16T11:10:13Z
|
[
"python",
"dask"
] |
Best way to define Django global variable on apache starup
| 39,527,565 |
<p>I have some configuration in a JSON file and in the database, and I want to load that configuration on Django startup (Apache server startup). I will be using those global variables throughout the application.
For example: an external server connection API or the number of instances. </p>
<p>What is the best way to define the global variables? I want to load the JSON file when the server starts and use the variable values until the server stops.</p>
| -2 |
2016-09-16T09:04:51Z
| 39,528,006 |
<p>It sounds like the thing you're probably looking for is <code>environment variables</code> - you can always use a small script to set the environment variables from the JSON that you have at present. </p>
<p>Setting these in your .bashrc file or, preferably, a virtualenv will let you:</p>
<ol>
<li>Take sensitive settings, like <code>SECRET_KEY</code> <a href="https://docs.djangoproject.com/en/1.10/topics/settings/" rel="nofollow">out of version control.</a> </li>
<li>Offer database settings, either by supplying them as a DB URL or as seperate environment variables.</li>
<li>Set both Django settings and other useful variables outside of the immediate Django project. </li>
</ol>
<p>The <a href="https://django-environ.readthedocs.io/en/latest/" rel="nofollow">django-environ docs have a useful tutorial</a> on how to set it up. <a href="https://github.com/pydanny/cookiecutter-django" rel="nofollow">The Django Cookie-Cutter project</a> makes extensive use of Environment Variables (including DB and mailserver settings), and is a great place to pick up hints and approaches.</p>
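<p>Even without an extra package, a minimal sketch of reading such values in settings.py looks like this (the variable names here are made up):</p>
<pre><code>import os

SECRET_KEY = os.environ['SECRET_KEY']  # fails loudly if the variable is missing
EXTERNAL_API_URL = os.environ.get('EXTERNAL_API_URL', 'https://api.example.com')
NUM_INSTANCES = int(os.environ.get('NUM_INSTANCES', '1'))
</code></pre>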
| 0 |
2016-09-16T09:27:17Z
|
[
"python",
"django"
] |
Regex: Exclude a mix of fixed and unknown length pattern
| 39,527,616 |
<p>I have a string like this:</p>
<pre><code>'Global Software Version (DID 0xFD15): 4.5.3'
</code></pre>
<p>And I want to find:</p>
<pre><code>4.5.3
</code></pre>
<p>The string always starts with <code>Global Software Version</code>, but <code>(DID 0xFD15)</code> is a variable part; it is different each time.</p>
<p><strong>What I did:</strong></p>
<pre><code>>>> x = 'Global Software Version (DID 0xFD15): 4.5.3'
>>> re.search('(?<=Global Software Version ).*', x).group().split(':')[1].strip()
'4.5.3'
</code></pre>
<p>Anybody with a better idea? Only with regex? </p>
| 0 |
2016-09-16T09:07:30Z
| 39,527,774 |
<p>So you can do multiple things here. The easiest solution would be:</p>
<pre><code>>>> x = 'Global Software Version (DID 0xFD15): 4.5.3'
>>> version = re.search(r'\d+[.]\d+[.]\d+', x).group()
>>> version
'4.5.3'
</code></pre>
<p>But this would also work:</p>
<pre><code>>>> version = 'Global Software Version (DID 0xFD15): 4.5.3'.split(":")[1].strip()
>>> version
'4.5.3'
</code></pre>
<p>Hope this helps!</p>
| 1 |
2016-09-16T09:16:00Z
|
[
"python",
"regex"
] |
Extract IP addresses from system report
| 39,527,727 |
<p>I traverse the address entries, using a Python regex to extract addresses to add to my list. Here is a sample input string and desired output. How do I do this?</p>
<pre><code>var = """
sw1:FID256:root> ipaddrshow
CHASSIS
Ethernet IP Address: 10.17.11.10
Ethernet Subnetmask: 255.255.210.0
CP0
Ethernet IP Address: 10.17.11.11
Ethernet Subnetmask: 255.255.210.0
Host Name: cp0
Gateway IP Address: 10.17.48.1
CP1
Ethernet IP Address: 10.17.11.12
Ethernet Subnetmask: 255.255.210.0
Host Name: cp1
Gateway IP Address: 10.17.18.1
sw1:FID256:root>
"""
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>List Index 0 âchassis,ip 10.17.11.10 chassis,mask 255.255.210.0â
List Index 1 âcp0,ip 10.17.11.11 cp0,mask 255.255.210.0 cp0,gw 10.17.18.1â
List Index 2 âcp1,ip 10.17.11.12 cp1,mask 255.255.240.0 cp1,gw 10.17.18.1â
</code></pre>
| 0 |
2016-09-16T09:13:55Z
| 39,529,637 |
<p>See <a href="https://regex101.com/r/eC0nV7/1" rel="nofollow">this regex</a> that extracts all the data:</p>
<pre><code>(?m)^([a-zA-Z0-9]+)(?:\r?\n|\r)Ethernet IP Address: ([\d.]+)(?:\r?\n|\r)Ethernet Subnetmask: ([\d.]+)(?:(?:\r?\n|\r)Host Name: ([a-z\d]+)(?:\r?\n|\r)Gateway IP Address: ([\d.]+))?
</code></pre>
<p><em>Details</em>:</p>
<ul>
<li><code>(?m)</code> - multiline mode to let <code>^</code> match a line start</li>
<li><code>^</code> - line start</li>
<li><code>([a-zA-Z0-9]+)</code> - Group 1, one or more alphanumerics</li>
<li><code>(?:\r?\n|\r)</code> - a linebreak</li>
<li><code>Ethernet IP Address:</code> - literal string</li>
<li><code>([\d.]+)</code> - Group 2, 1+ digits and dots</li>
<li><code>(?:\r?\n|\r)Ethernet Subnetmask: ([\d.]+)</code> - similar pattern to above with Group 3 containing subnetmask</li>
<li><code>(?:(?:\r?\n|\r)Host Name: ([a-z\d]+)</code> - similar pattern to above with Group 4 containing host name</li>
<li><code>(?:\r?\n|\r)Gateway IP Address: ([\d.]+))?</code> - similar pattern to above with Group 5 containing gateway.</li>
</ul>
<p>Now, all you need is to use <code>re.finditer</code> and build the resulting string:</p>
<pre><code>import re
p = re.compile(r'^([a-zA-Z0-9]+)(?:\r?\n|\r)Ethernet IP Address: ([\d.]+)(?:\r?\n|\r)Ethernet Subnetmask: ([\d.]+)(?:(?:\r?\n|\r)Host Name: ([a-z\d]+)(?:\r?\n|\r)Gateway IP Address: ([\d.]+))?', re.MULTILINE)
s = "sw1:FID256:root> ipaddrshow \n\nCHASSIS\nEthernet IP Address: 10.17.11.10\nEthernet Subnetmask: 255.255.210.0\n\nCP0\nEthernet IP Address: 10.17.11.11\nEthernet Subnetmask: 255.255.210.0\nHost Name: cp0\nGateway IP Address: 10.17.48.1\n\nCP1\nEthernet IP Address: 10.17.11.12\nEthernet Subnetmask: 255.255.210.0\nHost Name: cp1\nGateway IP Address: 10.17.18.1\n\nsw1:FID256:root>"
result = ["{0},ip {1} {0},mask {2} {3},gw {4}".format(z.group(1).lower(),z.group(2),z.group(3).lower(),z.group(4),z.group(5)) if z.group(4) else "{0},ip {1} {0},mask {2}".format(z.group(1).lower(),z.group(2),z.group(3)) for z in p.finditer(s)]
print(result)
</code></pre>
<p>See <a href="https://ideone.com/JebjLp" rel="nofollow">Python demo</a>.</p>
| 1 |
2016-09-16T10:48:18Z
|
[
"python",
"regex"
] |
Extract IP addresses from system report
| 39,527,727 |
<p>I traverse the address entries, using a Python regex to extract addresses to add to my list. Here is a sample input string and desired output. How do I do this?</p>
<pre><code>var = """
sw1:FID256:root> ipaddrshow
CHASSIS
Ethernet IP Address: 10.17.11.10
Ethernet Subnetmask: 255.255.210.0
CP0
Ethernet IP Address: 10.17.11.11
Ethernet Subnetmask: 255.255.210.0
Host Name: cp0
Gateway IP Address: 10.17.48.1
CP1
Ethernet IP Address: 10.17.11.12
Ethernet Subnetmask: 255.255.210.0
Host Name: cp1
Gateway IP Address: 10.17.18.1
sw1:FID256:root>
"""
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>List Index 0 âchassis,ip 10.17.11.10 chassis,mask 255.255.210.0â
List Index 1 âcp0,ip 10.17.11.11 cp0,mask 255.255.210.0 cp0,gw 10.17.18.1â
List Index 2 âcp1,ip 10.17.11.12 cp1,mask 255.255.240.0 cp1,gw 10.17.18.1â
</code></pre>
| 0 |
2016-09-16T09:13:55Z
| 39,538,095 |
<p>I had something similar lying around; it covers arbitrary attributes using a dictionary:</p>
<pre><code>(?P<name>^[A-Z0-9]+)\n|(?P<attr>^[\w]+[^:]+):\s(?P<val>[\d\w\.]+)\n
</code></pre>
<p>In Python it is not possible to recover intermediate captures from groups that match more than once per regex match (as far as I know), but some additional Python code can do the work.</p>
<p>I also made an arbitrary attribute-name dictionary, because it is not entirely clear to me what output you are asking for.</p>
<pre><code>import re
var = """
sw1:FID256:root> ipaddrshow
CHASSIS
Ethernet IP Address: 10.17.11.10
Ethernet Subnetmask: 255.255.210.0
CP0
Ethernet IP Address: 10.17.11.11
Ethernet Subnetmask: 255.255.210.0
Host Name: cp0
Gateway IP Address: 10.17.48.1
CP1
Ethernet IP Address: 10.17.11.12
Ethernet Subnetmask: 255.255.210.0
Host Name: cp1
Gateway IP Address: 10.17.18.1
sw1:FID256:root>
"""
rgx = re.compile(r'(?P<name>^[A-Z0-9]+)\n|(?P<attr>^[\w]+[^:]+):\s(?P<val>[\d\w\.]+)\n', re.MULTILINE)
dict = {
"Ethernet IP Address": "ip",
"Ethernet Subnetmask": "mask",
"Gateway IP Address": "gw",
"Host Name": ""
}
def translate(attr):
return dict[attr]
def build_list(r):
entry = ""
name = ""
for l in rgx.finditer(var):
if(l.group("name")):
if(entry):
r.append(entry[:-1])
entry = ""
name = l.group("name").lower()
elif(l.group("attr")):
varname = translate(l.group("attr"))
value = l.group("val")
if(varname != ""):
entry += "{0},{1} {2} ".format(name, varname, value)
# add last entry
r.append(entry[:-1])
entry = ""
def build_dict(d):
entry = ""
name = ""
for l in rgx.finditer(var):
if(l.group("name")):
name = l.group("name").lower()
d[name] = {}
elif(l.group("attr")):
varname = translate(l.group("attr"))
value = l.group("val")
if(varname != ""):
d[name][varname] = value
r = []
build_list(r)
print r
d = {}
build_dict(d)
print d
</code></pre>
<p><a href="https://ideone.com/DBkR2E" rel="nofollow">Demo</a></p>
| 0 |
2016-09-16T18:36:19Z
|
[
"python",
"regex"
] |
Replace the delimiter of first line from ',' to ';' in csv file by Python
| 39,527,811 |
<p>I got a weird CSV file whose header delimiter is <strong>','</strong> while the data rows' delimiter is <strong>';'</strong>, which causes trouble when I try to read and process it as dictionaries in Python:</p>
<pre><code>players.first_name,players.last_name,players.vis_name,players.player_id
Duje;Cop;Cop;8465
Dusan;Svento;Svento;8658
Markus;Henriksen;Henriksen;7687
</code></pre>
<p>I wonder if I can replace only the header's delimiter with <strong>';'</strong>, or if there is a way to read such a CSV file into dictionaries without changing the header's delimiter?</p>
<p><em>BTW:</em> I am using Python 2.7.12 with Anaconda 4.0.0 via the IDE PyCharm</p>
<p><em>Any help will be appreciated, thank you</em></p>
| 1 |
2016-09-16T09:17:36Z
| 39,527,955 |
<p>You can read the first line with a classical csv reader, just to get field names, then continue reading with the dictionary reader, changing the separator to <code>;</code> at this point.</p>
<pre><code>import csv
with open("input.csv") as f:
cr = csv.reader(f, delimiter=",")
fieldnames = next(cr)
cr = csv.DictReader(f,fieldnames=fieldnames,delimiter=";")
for d in cr:
print(d)
</code></pre>
<p>result:</p>
<pre><code>{'players.player_id': '8465', 'players.vis_name': 'Cop', 'players.first_name': 'Duje', 'players.last_name': 'Cop'}
{'players.player_id': '8658', 'players.vis_name': 'Svento', 'players.first_name': 'Dusan', 'players.last_name': 'Svento'}
{'players.player_id': '7687', 'players.vis_name': 'Henriksen', 'players.first_name': 'Markus', 'players.last_name': 'Henriksen'}
</code></pre>
<p>PS: my previous solution involved reading/writing to a <code>StringIO</code>, but the current one is much more elegant.</p>
| 2 |
2016-09-16T09:25:16Z
|
[
"python",
"csv",
"delimiter"
] |
How to optimize code that iterates on a big dataframe in Python
| 39,527,826 |
<p>I have a big pandas dataframe. It has thousands of columns and over a million rows. I want to calculate the difference between the max value and the min value row-wise. Keep in mind that there are many NaN values and some rows are all NaN values (but I still want to keep them!).</p>
<p>I wrote the following code. It works but it's time consuming:</p>
<pre><code>totTime = []
for index, row in date.iterrows():
myRow = row.dropna()
if len(myRow):
tt = max(myRow) - min(myRow)
else:
tt = None
totTime.append(tt)
</code></pre>
<p>Is there any way to optimize it? I tried with the following code but I get an error when it encounters all NaN rows:</p>
<pre><code>tt = lambda x: max(x.dropna()) - min(x.dropna())
totTime = date.apply(tt, axis=1)
</code></pre>
<p>Any suggestions will be appreciated!</p>
| 0 |
2016-09-16T09:18:22Z
| 39,528,041 |
<p>I have the same problem with iterating. Two points:</p>
<ol>
<li>Why don't you replace the NaN values with 0? For actual <code>NaN</code> values you can do that with <code>df.fillna(0)</code>; note that <code>df.replace(['inf','nan'],[0,0])</code> would only replace the <em>strings</em> 'inf' and 'nan', not the real values.</li>
<li>Take a look at <a href="http://stackoverflow.com/questions/7837722/what-is-the-most-efficient-way-to-loop-through-dataframes-with-pandas/7837947#7837947">this question</a>. I had a similar question about how to optimize a loop that calculates the difference between the current row and the previous one. </li>
</ol>
| 0 |
2016-09-16T09:28:45Z
|
[
"python",
"pandas",
"optimization",
"dataframe"
] |
How to optimize code that iterates on a big dataframe in Python
| 39,527,826 |
<p>I have a big pandas dataframe. It has thousands of columns and over a million rows. I want to calculate the difference between the max value and the min value row-wise. Keep in mind that there are many NaN values and some rows are all NaN values (but I still want to keep them!).</p>
<p>I wrote the following code. It works but it's time consuming:</p>
<pre><code>totTime = []
for index, row in date.iterrows():
myRow = row.dropna()
if len(myRow):
tt = max(myRow) - min(myRow)
else:
tt = None
totTime.append(tt)
</code></pre>
<p>Is there any way to optimize it? I tried with the following code but I get an error when it encounters all NaN rows:</p>
<pre><code>tt = lambda x: max(x.dropna()) - min(x.dropna())
totTime = date.apply(tt, axis=1)
</code></pre>
<p>Any suggestions will be appreciated!</p>
| 0 |
2016-09-16T09:18:22Z
| 39,528,057 |
<p>It is usually a bad idea to use a <code>python</code> <code>for</code> loop to iterate over a large <code>pandas.DataFrame</code> or a <code>numpy.ndarray</code>. You should rather use the available built-in functions on them, as they are optimized and in many cases not actually written in Python but in a compiled language. In your case you should use the methods <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html" rel="nofollow">pandas.DataFrame.max</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.min.html" rel="nofollow">pandas.DataFrame.min</a>, which both give you the option <code>skipna</code> to skip <code>nan</code> values in your <code>DataFrame</code> without the need to drop them manually. Furthermore, you can choose an <code>axis</code> to minimize along, so you can specify <code>axis=1</code> to get the minimum along columns.</p>
<p>This adds up to something similar to what @EdChum mentioned in the comments:</p>
<pre><code>data.max(axis=1, skipna=True) - data.min(axis=1, skipna=True)
</code></pre>
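<p>For rows that are entirely NaN, both <code>max</code> and <code>min</code> return NaN, so the difference is NaN as well; that keeps those rows in the result, as required. A quick check:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([[1.0, 5.0, np.nan],
                   [np.nan, np.nan, np.nan]])
print(df.max(axis=1, skipna=True) - df.min(axis=1, skipna=True))
# 0    4.0
# 1    NaN
# dtype: float64
</code></pre>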
| 2 |
2016-09-16T09:29:41Z
|
[
"python",
"pandas",
"optimization",
"dataframe"
] |
How can I disable Web Driver Exceptions when using the Mozilla Marionette web driver with Selenium
| 39,527,858 |
<p>I am remote controlling a Firefox browser using Python and Selenium. I have switched to using Marionette, as directed on the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver" rel="nofollow">mozilla developer site</a>. That all works fine. </p>
<p>There is one page where, when I want to select an element, I get an exception. I think it is a JavaScript warning that is causing the driver to bork. Does anyone know how I can make the driver less picky about JavaScript errors? Additionally, does anyone know where there is comprehensive documentation of the Python Marionette client?</p>
<p>Sorry I can't make the code completely reproducible because it is a client's private site that I am attempting to select an element from.</p>
<pre><code>from selenium import webdriver
# see https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.FIREFOX
# Tell the Python bindings to use Marionette.
# This will not be necessary in the future,
# when Selenium will auto-detect what remote end
# it is talking to.
caps["marionette"] = True
caps["binary"] = "/Applications/Firefox.app/Contents/MacOS/firefox-bin"
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
browser = webdriver.Firefox(capabilities=caps)
webdriver.Firefox.get_capabilities()
browser.implicitly_wait(3)
browser.get("https://www.example.com/examplepage")
saved_exports_field = browser.find_element_by_id('exportlist')
saved_exports_field_select = Select(saved_exports_field)
</code></pre>
<p>That is where it goes wrong. The trace is as follows</p>
<pre><code>---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-35-6e712759af43> in <module>()
1 saved_exports_field = browser.find_element_by_id('exportlist')
----> 2 saved_exports_field_select = Select(saved_exports_field)
3 #saved_exports_field_select.select_by_visible_text('test score export dan')
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/support/select.py in __init__(self, webelement)
39 webelement.tag_name)
40 self._el = webelement
---> 41 multi = self._el.get_attribute("multiple")
42 self.is_multiple = multi and multi != "false"
43
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webelement.py in get_attribute(self, name)
134 attributeValue = self.parent.execute_script(
135 "return (%s).apply(null, arguments);" % raw,
--> 136 self, name)
137 else:
138 resp = self._execute(Command.GET_ELEMENT_ATTRIBUTE, {'name': name})
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py in execute_script(self, script, *args)
463 return self.execute(Command.EXECUTE_SCRIPT, {
464 'script': script,
--> 465 'args': converted_args})['value']
466
467 def execute_async_script(self, script, *args):
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
234 response = self.command_executor.execute(driver_command, params)
235 if response:
--> 236 self.error_handler.check_response(response)
237 response['value'] = self._unwrap_value(
238 response.get('value', None))
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
190 elif exception_class == UnexpectedAlertPresentException and 'alert' in value:
191 raise exception_class(message, screen, stacktrace, value['alert'].get('text'))
--> 192 raise exception_class(message, screen, stacktrace)
193
194 def _value_or_default(self, obj, key, default):
WebDriverException: Message: SyntaxError: missing ) in parenthetical
</code></pre>
<p>Thanks</p>
| 2 |
2016-09-16T09:20:14Z
| 39,532,113 |
<p>To answer your second question first, <a href="https://marionette-client.readthedocs.io/en/latest/index.html" rel="nofollow">this documentation</a> seems fairly comprehensive; does this meet your needs?</p>
<p>As for the question of how to disable <code>WebDriverException</code>, the only thing I know of would be to use <code>try:</code> <code>except:</code> blocks, but I don't think this would be a good idea. <code>WebDriverException</code> is the base exception that the webdriver uses, and it would catch all errors including <code>NoSuchElementException</code>, which you are using.</p>
<p>I don't know of any way to specifically catch JavaScript errors, since these appear to bubble up as <code>WebDriverException</code>s. I assume that because you are asking this question, fixing the JavaScript itself is not an option?</p>
<p>One thing you might try is using the webdriver's <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.get_log" rel="nofollow"><code>get_log()</code> method</a>. From what I have read, JS errors should be visible in the results returned by this method. You could try calling <code>browser.get_log(log_type)</code> (where <code>log_type</code> is one of <code>'browser'</code>, <code>'client'</code>, <code>'driver'</code>, or <code>'server'</code>, depending on where the error originates) before your <code>Select()</code> call, parsing that data, and then acting accordingly.</p>
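<p>A sketch of what that could look like (the exact log contents vary between drivers, so treat this as an assumption to verify):</p>
<pre><code>for entry in browser.get_log('browser'):
    if entry.get('level') == 'SEVERE':
        print(entry.get('message'))  # decide here how to react to the JS error
</code></pre>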
| 0 |
2016-09-16T12:57:27Z
|
[
"python",
"selenium",
"firefox",
"selenium-webdriver",
"firefox-marionette"
] |
How can I disable Web Driver Exceptions when using the Mozilla Marionette web driver with Selenium
| 39,527,858 |
<p>I am remote controlling a Firefox browser using Python and Selenium. I have switched to using Marionette, as directed on the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver" rel="nofollow">mozilla developer site</a>. That all works fine. </p>
<p>There is one page where, when I want to select an element, I get an exception. I think it is a JavaScript warning that is causing the driver to bork. Does anyone know how I can make the driver less picky about JavaScript errors? Additionally, does anyone know where there is comprehensive documentation of the Python Marionette client?</p>
<p>Sorry I can't make the code completely reproducible because it is a client's private site that I am attempting to select an element from.</p>
<pre><code>from selenium import webdriver
# see https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities.FIREFOX
# Tell the Python bindings to use Marionette.
# This will not be necessary in the future,
# when Selenium will auto-detect what remote end
# it is talking to.
caps["marionette"] = True
caps["binary"] = "/Applications/Firefox.app/Contents/MacOS/firefox-bin"
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
browser = webdriver.Firefox(capabilities=caps)
webdriver.Firefox.get_capabilities()
browser.implicitly_wait(3)
browser.get("https://www.example.com/examplepage")
saved_exports_field = browser.find_element_by_id('exportlist')
saved_exports_field_select = Select(saved_exports_field)
</code></pre>
<p>That is where it goes wrong. The trace is as follows</p>
<pre><code>---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
<ipython-input-35-6e712759af43> in <module>()
1 saved_exports_field = browser.find_element_by_id('exportlist')
----> 2 saved_exports_field_select = Select(saved_exports_field)
3 #saved_exports_field_select.select_by_visible_text('test score export dan')
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/support/select.py in __init__(self, webelement)
39 webelement.tag_name)
40 self._el = webelement
---> 41 multi = self._el.get_attribute("multiple")
42 self.is_multiple = multi and multi != "false"
43
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webelement.py in get_attribute(self, name)
134 attributeValue = self.parent.execute_script(
135 "return (%s).apply(null, arguments);" % raw,
--> 136 self, name)
137 else:
138 resp = self._execute(Command.GET_ELEMENT_ATTRIBUTE, {'name': name})
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py in execute_script(self, script, *args)
463 return self.execute(Command.EXECUTE_SCRIPT, {
464 'script': script,
--> 465 'args': converted_args})['value']
466
467 def execute_async_script(self, script, *args):
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
234 response = self.command_executor.execute(driver_command, params)
235 if response:
--> 236 self.error_handler.check_response(response)
237 response['value'] = self._unwrap_value(
238 response.get('value', None))
/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
190 elif exception_class == UnexpectedAlertPresentException and 'alert' in value:
191 raise exception_class(message, screen, stacktrace, value['alert'].get('text'))
--> 192 raise exception_class(message, screen, stacktrace)
193
194 def _value_or_default(self, obj, key, default):
WebDriverException: Message: SyntaxError: missing ) in parenthetical
</code></pre>
<p>Thanks</p>
| 2 |
2016-09-16T09:20:14Z
| 39,537,510 |
<p>There's a bug in release 3.0.0-beta-3 which prevents the use of <code>get_attribute</code>. So you can either revert to 3.0.0-beta-2 or you can fix the bug by editing the file yourself:</p>
<p>In file
<code>/Users/dan/anaconda/envs/lt/lib/python3.5/site-packages/selenium/webdriver/remote/webelement.py</code>, replace line 133:</p>
<pre><code>raw = pkgutil.get_data(__package__, 'getAttribute.js')
</code></pre>
<p>by:</p>
<pre><code>raw = pkgutil.get_data(__package__, 'getAttribute.js').decode('utf8')
</code></pre>
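<p>If you would rather revert than patch the installed package, something like this should do it (assuming the previous beta is still available on PyPI):</p>
<pre><code>pip install selenium==3.0.0b2
</code></pre>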
| 0 |
2016-09-16T17:56:15Z
|
[
"python",
"selenium",
"firefox",
"selenium-webdriver",
"firefox-marionette"
] |
How to access Google Cloud Datastore entity with dash in its name?
| 39,527,979 |
<p>I am working on a Google App Engine project and I need to access a Datastore entity with a name that contains a dash, e.g. <code>random-entity</code>. I want to do that in Python. Since <code>random-entity</code> is invalid syntax for a class name, I cannot create a model and access it like that.</p>
<p>So how can I access this entity? Is it possible to do that without creating a model and just retrieve it in JSON format?</p>
<p>Keep in mind that renaming the entity is not an option for the project I am working on.</p>
| 0 |
2016-09-16T09:26:18Z
| 39,528,191 |
<p>If you are using the <a href="https://cloud.google.com/appengine/docs/python/ndb/" rel="nofollow"><strong>NDB</strong> library</a>, you need to override the <a href="https://github.com/GoogleCloudPlatform/datastore-ndb-python/blob/master/ndb/model.py#L3044" rel="nofollow">class method <code>_get_kind(cls)</code></a> of your model, like this:</p>
<pre><code>class RandomEntity(ndb.Model):
@classmethod
    def _get_kind(cls):
return 'random-entity'
# You can override property name as well
random_name = ndb.StringProperty('random-name')
</code></pre>
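<p>Queries then run against the dashed kind name as usual, for example:</p>
<pre><code># fetches entities of kind 'random-entity'
entities = RandomEntity.query().fetch(10)
for entity in entities:
    print entity.random_name
</code></pre>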
| 2 |
2016-09-16T09:36:19Z
|
[
"python",
"google-app-engine",
"google-cloud-platform",
"google-cloud-datastore"
] |
Comparing two list and getting a new list in python
| 39,528,202 |
<p>I have a list - a and a list of columns - b.</p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
</code></pre>
<p>I want to take the columns from list "b" at the positions where list "a" has the value 1.</p>
<p>I want the output to be:</p>
<pre><code>c = ["C", "D", "F", "G", "J"]
</code></pre>
<p>How can I do it?</p>
| 1 |
2016-09-16T09:36:34Z
| 39,528,256 |
<p>Easy task for comprehension + zip:</p>
<pre><code>>>> c = [y for (x, y) in zip(a, b) if x == 1]
>>> c
['C', 'D', 'F', 'G', 'J']
</code></pre>
| 8 |
2016-09-16T09:39:07Z
|
[
"python",
"list"
] |
Comparing two list and getting a new list in python
| 39,528,202 |
<p>I have a list - a and a list of columns - b.</p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
</code></pre>
<p>I want to take the columns from list "b" at the positions where list "a" has the value 1.</p>
<p>I want the output to be:</p>
<pre><code>c = ["C", "D", "F", "G", "J"]
</code></pre>
<p>How can I do it?</p>
| 1 |
2016-09-16T09:36:34Z
| 39,528,290 |
<p>I'd do it with zip and list comprehension.</p>
<pre><code>>>> a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
>>> b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
>>> c = [x[0] for x in zip(b, a) if x[1] == 1]
>>> c
['C', 'D', 'F', 'G', 'J']
>>>
</code></pre>
| 2 |
2016-09-16T09:40:32Z
|
[
"python",
"list"
] |
Comparing two list and getting a new list in python
| 39,528,202 |
<p>I have a list - a and a list of columns - b.</p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
</code></pre>
<p>I want to take the columns from list "b" at the positions where list "a" has the value 1.</p>
<p>I want the output to be:</p>
<pre><code>c = ["C", "D", "F", "G", "J"]
</code></pre>
<p>How can I do it?</p>
| 1 |
2016-09-16T09:36:34Z
| 39,528,836 |
<p>A classic approach:</p>
<pre><code>>>> c = [b[i] for i in range(len(b)) if i<len(a) and a[i] == 1]
>>> c
['C', 'D', 'F', 'G', 'J']
</code></pre>
| 3 |
2016-09-16T10:07:26Z
|
[
"python",
"list"
] |
Comparing two list and getting a new list in python
| 39,528,202 |
<p>I have a list - a and a list of columns - b.</p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
</code></pre>
<p>I want to take the columns from list "b" at the positions where list "a" has the value 1.</p>
<p>I want the output to be:</p>
<pre><code>c = ["C", "D", "F", "G", "J"]
</code></pre>
<p>How can I do it?</p>
| 1 |
2016-09-16T09:36:34Z
| 39,529,271 |
<p>This can be done in many ways:</p>
<p><strong>List Comprehension</strong></p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
print [b[index] for index, item in enumerate(a) if item == 1]
</code></pre>
<p><strong>Filter with Lambda</strong></p>
<pre><code>a = [2, 4, 1, 1, 6, 1, 1, 3, 5, 1]
b = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
print map(lambda pair: pair[1], filter(lambda pair: pair[0] < len(a) and a[pair[0]] == 1, enumerate(b)))
</code></pre>
<p>Note that the list comprehension will be faster because it only iterates over <code>a</code> rather than <code>b</code>, which matters in case <code>b</code> is bigger.</p>
| 0 |
2016-09-16T10:30:39Z
|
[
"python",
"list"
] |
How to call value from excel on python
| 39,528,448 |
<p>I'm currently trying to refer to a value from an Excel spreadsheet that is full of passenger data from the Titanic disaster. Here is the part I'm stuck on.</p>
<blockquote>
<p>Examining the survival statistics, a large majority of males did not
survive the ship sinking. However, a majority of females did survive
the ship sinking. Let's build on our previous prediction: If a
passenger was female, then we will predict that they survived.
Otherwise, we will predict the passenger did not survive. Fill in the
missing code below so that the function will make this prediction.</p>
<p>Hint: You can access the values of each feature for a passenger like a
dictionary. For example, passenger['Sex'] is the sex of the passenger.</p>
</blockquote>
<pre><code>def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'female':
survived == 1
else survived == 0
# Return our predictions
return pd.Series(predictions)
â
# Make the predictions
predictions = predictions_1(data)
File "<ipython-input-75-6b2fca23446d>", line 12
else survived == 0
^
SyntaxError: invalid syntax
</code></pre>
<p>I wrote the if/else statement and I'm positive there are many errors in my attempt. I'd appreciate some clarity on how to fix this; the data from the Excel sheet is the survived and sex data. Here is the GitHub link to the project I'm working on.
<a href="https://github.com/udacity/machine-learning/tree/master/projects/titanic_survival_exploration" rel="nofollow">https://github.com/udacity/machine-learning/tree/master/projects/titanic_survival_exploration</a></p>
| 1 |
2016-09-16T09:48:02Z
| 39,528,534 |
<p>Your syntax is not correct with that <code>else</code> missing a <code>:</code>, and you're mixing the <a href="https://docs.python.org/2/library/stdtypes.html#comparisons" rel="nofollow"><em>equality operator</em> <code>==</code></a> with the <a href="https://docs.python.org/2/reference/simple_stmts.html#assignment-statements" rel="nofollow"><em>assignment operator</em> <code>=</code></a>:</p>
<pre><code> if passenger['Sex'] == 'female':
survived = 1 # bind int value 1 to the name 'survived'
else:
survived = 0
</code></pre>
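<p>Note that, for the function to return anything useful, each result presumably also has to be appended to the <code>predictions</code> list inside the loop:</p>
<pre><code>        predictions.append(survived)
</code></pre>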
| 3 |
2016-09-16T09:52:47Z
|
[
"python",
"excel",
"pandas"
] |
Redirect to login when session cookie expires in Django
| 39,528,487 |
<p>I'm trying to redirect to the login page when the cookie expires but it's not working.</p>
<p>It's supposed to be as simple as adding these lines to settings.py:</p>
<pre><code>LOGIN_URL = '/login/'
LOGIN_REDIRECT_URL='/login/'
</code></pre>
<p>I'm using the decorator <em>@login_required</em> in my functions and I have tried <em>@login_required(login_url='/login/')</em> too.</p>
<p>Urls are correctly set and when manually going to /login it works, so it's not an error in the path.</p>
<p>When the session cookie expires and you try to access the app it gives you the error 'ViewDoesNotExist' (Could not import django.views.generic.simple.redirect_to. Parent module django.views.generic.simple does not exist.).</p>
| 2 |
2016-09-16T09:50:00Z
| 39,774,677 |
<p>Something in your code is trying to import <code>redirect_to</code>, which was removed in Django 1.5. You need to find this code and update it.</p>
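<p>If the stale import comes from an old-style <code>urls.py</code> entry, the Django >= 1.5 replacement is <code>RedirectView</code>. A sketch (the URL pattern shown is hypothetical):</p>
<pre><code>from django.conf.urls import url
from django.views.generic import RedirectView

urlpatterns = [
    # old style, removed in Django 1.5:
    # url(r'^accounts/$', 'django.views.generic.simple.redirect_to', {'url': '/login/'}),
    url(r'^accounts/$', RedirectView.as_view(url='/login/', permanent=False)),
]
</code></pre>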
| 0 |
2016-09-29T15:43:27Z
|
[
"python",
"django",
"session",
"cookies"
] |
Why does SetCellEditor cause error when reapplied after self.m_grid_3.ClearGrid()
| 39,528,535 |
<p>The following code populates a grid each time the combobox is used to select a new value.
The first time the code runs it works fine and creates a populated grid with a drop-down in each cell in col 4. However, when I select a second new value and the function executes self.m_grid_3.ClearGrid() and repopulates, I get the following error.</p>
<pre><code> self.m_grid3.SetCellEditor(row, col, self.choice_editor)
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\grid.py", line 2000, in SetCellEditor
return _grid.Grid_SetCellEditor(*args, **kwargs)
TypeError: in method 'Grid_SetCellEditor', expected argument 4 of type 'wxGridCellEditor *'
</code></pre>
<p>Selecting a dropdown in col4 then crashes python.</p>
<p>Any ideas how I can fix this?</p>
<p>Here is the code in question.</p>
<pre><code>class Inspection(BulkUpdate):
def __init__(self, parent):
BulkUpdate.__init__(self, parent)
list = EmployeeList()
list_climbers = list.get_climbers()
for name in list_climbers:
self.edit_kit_comboBox.Append(str(name.employee))
choices = ["Yes", "No", "Not Checked"]
self.choice_editor = wx.grid.GridCellChoiceEditor(choices, True)
def on_engineer_select( self, event ):
self.m_grid3.ClearGrid()
person = self.edit_kit_comboBox.GetValue()
list = KitList()
equipment = list.list_of_equipment(person, 1)
rows = len(equipment)
for row in range(0, rows):
for col in range(0, 5):
print "row = %s col = %s" % (row, col)
if col == 4:
self.m_grid3.SetCellValue(row, col+2, str(equipment[row][col]))
self.m_grid3.SetCellValue(row, col, "Pass")
self.m_grid3.SetCellEditor(row, col, self.choice_editor)
else:
self.m_grid3.SetCellValue(row, col, str(equipment[row][col]))
</code></pre>
<p>The code stops on the second loop while populating the grid the second time.
I have been trying to work this out for days.</p>
| 0 |
2016-09-16T09:52:48Z
| 39,535,216 |
<p>Try adding this to <code>__init__</code>:</p>
<pre><code>self.choice_editor.IncRef()
</code></pre>
<p>My guess is that the C++ portion of the editor object is being deleted when you call <code>ClearGrid</code>. Giving it that extra reference tells the Grid that you want to hold on to it.</p>
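<p>For instance, right after the editor is created in the question's <code>__init__</code> (a sketch based on the question's code):</p>
<pre><code>choices = ["Yes", "No", "Not Checked"]
self.choice_editor = wx.grid.GridCellChoiceEditor(choices, True)
self.choice_editor.IncRef()  # extra reference so ClearGrid() cannot destroy the underlying C++ editor
</code></pre>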
| 0 |
2016-09-16T15:31:51Z
|
[
"python",
"wxpython"
] |
Why does SetCellEditor cause error when reapplied after self.m_grid_3.ClearGrid()
| 39,528,535 |
<p>The following code populates a grid each time the combobox is used to select a new value.
The first time the code runs it works fine and creates a populated grid with a drop-down in each cell in col 4. However, when I select a second new value and the function executes self.m_grid_3.ClearGrid() and repopulates, I get the following error.</p>
<pre><code> self.m_grid3.SetCellEditor(row, col, self.choice_editor)
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\grid.py", line 2000, in SetCellEditor
return _grid.Grid_SetCellEditor(*args, **kwargs)
TypeError: in method 'Grid_SetCellEditor', expected argument 4 of type 'wxGridCellEditor *'
</code></pre>
<p>Selecting a dropdown in col4 then crashes python.</p>
<p>Any ideas how I can fix this?</p>
<p>Here is the code in question.</p>
<pre><code>class Inspection(BulkUpdate):
def __init__(self, parent):
BulkUpdate.__init__(self, parent)
list = EmployeeList()
list_climbers = list.get_climbers()
for name in list_climbers:
self.edit_kit_comboBox.Append(str(name.employee))
choices = ["Yes", "No", "Not Checked"]
self.choice_editor = wx.grid.GridCellChoiceEditor(choices, True)
def on_engineer_select( self, event ):
self.m_grid3.ClearGrid()
person = self.edit_kit_comboBox.GetValue()
list = KitList()
equipment = list.list_of_equipment(person, 1)
rows = len(equipment)
for row in range(0, rows):
for col in range(0, 5):
print "row = %s col = %s" % (row, col)
if col == 4:
self.m_grid3.SetCellValue(row, col+2, str(equipment[row][col]))
self.m_grid3.SetCellValue(row, col, "Pass")
self.m_grid3.SetCellEditor(row, col, self.choice_editor)
else:
self.m_grid3.SetCellValue(row, col, str(equipment[row][col]))
</code></pre>
<p>The code stops on the second loop while populating the grid the second time.
I have been trying to work this out for days.</p>
| 0 |
2016-09-16T09:52:48Z
| 39,700,462 |
<p>Taking the answer of adding </p>
<pre><code>self.choice_editor.IncRef()
</code></pre>
<p>I moved the choices list definition into the function along with </p>
<pre><code>self.choice_editor.IncRef()
</code></pre>
<p>So now it looks like this</p>
<pre><code>def on_engineer_select( self, event ):
self.m_grid3.ClearGrid()
choices = ["Pass", "Fail", "Not Checked"]
self.choice_editor = wx.grid.GridCellChoiceEditor(choices, False)
person = self.edit_kit_comboBox.GetValue()
list = KitList()
equipment = list.list_of_equipment(person, 1)
print "Length of equipment = %s" % len(equipment)
rows = len(equipment)
for row in range(0, rows):
for col in range(0, 5):
print "row = %s col = %s" % (row, col)
if col == 4:
                self.choice_editor.IncRef()  # extra reference each pass so the editor stays alive (see accepted answer)
self.m_grid3.SetCellValue(row, col+2, str(equipment[row][col]))
self.m_grid3.SetCellValue(row, col+1, str(date.today()))
self.m_grid3.SetCellEditor(row, col, self.choice_editor)
self.m_grid3.SetCellValue(row, col, "Pass")
else:
self.m_grid3.SetCellValue(row, col, str(equipment[row][col]))
</code></pre>
<p>Now the code works as desired.</p>
| 0 |
2016-09-26T10:30:11Z
|
[
"python",
"wxpython"
] |
Get an ordered dictionary class attributes inside __init__
| 39,528,653 |
<p>I have the following class:</p>
<pre><code>class NewName:
def __init__(self):
self.Name = None
self.DecomposedAlias = OrderedDict([("Prefix", None),
("Measurement", None),
("Direction", None),
("Item", None),
("Location", None),
("Descriptor", None),
("Frame", None),
("RTorigin", None)])
self.Meaning = ""
self.SIUnit = OrderedDict([("ScaleFactor", None),
("Offset", None),
("A", None),
("cd", None),
("K", None),
("kg", None),
("m", None),
("mol", None),
("rad", None),
("s", None)])
self.NormalDisplayUnit = OrderedDict([("ScaleFactor", None),
("Offset", None),
("A", None),
("cd", None),
("K", None),
("kg", None),
("m", None),
("mol", None),
("rad", None),
("s", None)])
self.OrientationConvention = ""
self.ContactPerson = ""
self.Note = ""
self.SubType = None
self.RefersTo = []
</code></pre>
<p>If I instantiate a new object of this class I can obtain a dictionary like this:</p>
<pre><code>mynewname = NewName()
mynewdict = mynewname.__dict__
</code></pre>
<p>What if I want <code>mynewdict</code> to be ordered in the same way the attributes of <code>NewName</code> were instantiated in its <code>__init__</code>?</p>
<p>Doing some research I found <a href="http://stackoverflow.com/questions/4459531/how-to-read-class-attributes-in-the-same-order-as-declared">this</a>, but in my case I would just obtain <code>['__init__']</code>. Is there a way to point to the attributes inside the <code>__init__</code>?</p>
<p>For completeness sake I should mention that I am using Python 3.4.</p>
| 2 |
2016-09-16T09:58:27Z
| 39,531,388 |
<p>You can't do that, because <code>__init__</code> is called after the instance has been created (by <code>__new__()</code>). So even if you override <code>__new__()</code> and use a <code>__prepare__</code> method <a href="https://docs.python.org/3/reference/datamodel.html#creating-the-class-object" rel="nofollow">via a metaclass</a>, you would only get an ordered sequence (dict, etc.) of the methods and attributes defined in the class body, not those set within the <code>__init__</code> method.</p>
<p>Also based on <a href="https://mail.python.org/pipermail/python-list/2012-January/618121.html" rel="nofollow">this mail</a>:</p>
<blockquote>
<p>It's just not possible to have something different than a dict as a type's <code>__dict__</code>. It's a deliberate limitation and required optimization.</p>
</blockquote>
<p>But this doesn't mean that you can't get a list of the ordered attributes of your class. Since every attribute is set via the <code>__setattr__</code> method, you can simply preserve your attributes in an ordered dict by overriding <code>__setattr__</code>:</p>
<pre><code>from collections import OrderedDict
class NewName:
ordered_attrs = OrderedDict()
def __setattr__(self, name, val):
object.__setattr__(self, name, val)
        # note: calling plain setattr(self, name, val) here would recurse back into __setattr__
NewName.ordered_attrs[name] = val
def __init__(self):
# your __init__ body
mynewname = NewName()
print(list(NewName.ordered_attrs))
</code></pre>
<p>Output:</p>
<pre><code>['Name', 'DecomposedAlias', 'Meaning', 'SIUnit', 'NormalDisplayUnit', 'OrientationConvention', 'ContactPerson', 'Note', 'SubType', 'RefersTo']
# Output of mynewname.__dict__
{'Note': '', 'Meaning': '', 'RefersTo': [], 'SIUnit': OrderedDict([('ScaleFactor', None), ('Offset', None), ('A', None), ('cd', None), ('K', None), ('kg', None), ('m', None), ('mol', None), ('rad', None), ('s', None)]), 'DecomposedAlias': OrderedDict([('Prefix', None), ('Measurement', None), ('Direction', None), ('Item', None), ('Location', None), ('Descriptor', None), ('Frame', None), ('RTorigin', None)]), 'SubType': None, 'Name': None, 'ContactPerson': '', 'NormalDisplayUnit': OrderedDict([('ScaleFactor', None), ('Offset', None), ('A', None), ('cd', None), ('K', None), ('kg', None), ('m', None), ('mol', None), ('rad', None), ('s', None)]), 'OrientationConvention': ''}
</code></pre>
<p>Also, regarding setting the attributes, based on the <a href="https://docs.python.org/3/reference/datamodel.html#object.__setattr__" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>If <code>__setattr__()</code> wants to assign to an instance attribute, it should call the base class method with the same name, for example, <code>object.__setattr__(self, name, value)</code>.</p>
</blockquote>
| 1 |
2016-09-16T12:20:44Z
|
[
"python",
"class",
"object",
"dictionary",
"order"
] |
Creating dictionary from dataframe in python
| 39,528,707 |
<p>I want to create a dictionary out of my data frame. Data frame look like this:</p>
<pre><code>Vehical | Brand | Model Name
car | Suzuki | abc
car | Honda | def
bike | Suzuki | xyz
bike | Honda | asd
</code></pre>
<p>I want my dictionary like:</p>
<pre><code>{car : {Suzuki : abc, Honda : def}, bike : {Suzuki : xyz, Honda : asd}}
</code></pre>
| 1 |
2016-09-16T10:00:57Z
| 39,528,873 |
<p>you can do it this way:</p>
<pre><code>In [29]: df.pivot(index='Vehical', columns='Brand', values='Model Name').to_dict('i')
Out[29]:
{'bike': {'Honda': 'asd', 'Suzuki': 'xyz'},
'car': {'Honda': 'def', 'Suzuki': 'abc'}}
</code></pre>
<p>result of pivoting:</p>
<pre><code>In [28]: df.pivot(index='Vehical', columns='Brand', values='Model Name')
Out[28]:
Brand Honda Suzuki
Vehical
bike asd xyz
car def abc
</code></pre>
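<p>Note that <code>'i'</code> is shorthand for the <code>'index'</code> orientation, and that <code>pivot</code> raises an error if a (Vehical, Brand) pair occurs more than once. A groupby-based sketch that tolerates such duplicates (later rows win) would be:</p>
<pre><code>result = {vehical: grp.set_index('Brand')['Model Name'].to_dict()
          for vehical, grp in df.groupby('Vehical')}
</code></pre>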
| 2 |
2016-09-16T10:09:43Z
|
[
"python",
"pandas",
"dictionary",
"dataframe"
] |
Flask-socketio misses events while copying file in background thread
| 39,528,730 |
<p>(Complete test app on github: <a href="https://github.com/olingerc/socketio-copy-large-file" rel="nofollow">https://github.com/olingerc/socketio-copy-large-file</a>)</p>
<p>I am using Flask together with the Flask-SocketIO plugin. My clients can ask the server to copy files via websocket but while the files are copying, I want the clients to be able to communicate with the server to ask it to do other things. My solution is to run the copy process (shutil) in a background thread. This is the function:</p>
<pre><code>def copy_large_file():
source = "/home/christophe/Desktop/largefile"
destination = "/home/christophe/Desktop/largefile2"
try:
os.remove(destination)
except:
pass
print("Before copy")
socketio.emit('my_response',
{'data': 'Thread says: before'}, namespace='/test')
shutil.copy(source, destination)
print("After copy")
socketio.emit('my_response',
{'data': 'Thread says: after'}, namespace='/test')
</code></pre>
<p>I observe the following behavior:
When starting the function using the native socketio method:</p>
<pre><code>socketio.start_background_task(target=copy_large_file)
</code></pre>
<p>all incoming events while a large file is being copied are delayed until the file is finished and the next one is started. I guess shutil is not releasing the GIL or something like that, so I tested with threading:</p>
<pre><code>thread = threading.Thread(target=copy_large_file)
thread.start()
</code></pre>
<p>same behaviour. Maybe multiprocessing?</p>
<pre><code>thread = multiprocessing.Process(target=copy_large_file)
thread.start()
</code></pre>
<p>Ah! That works and signals emitted via socketio within the copy_large_file function are correctly received.
BUT:
If a user starts to copy a very large file, closes their browser and comes back 2 minutes later, the socket no longer connects to the same socketio "session?" and thus no longer receives messages emitted from the background process.</p>
<p>I guess the main question is: How can I copy large files in the background without blocking flask-socketio but still being able to emit signals to the client from within the background process.</p>
<p>The test app can be used to reproduce the behaviour:</p>
<ul>
<li>clone <a href="https://github.com/olingerc/socketio-copy-large-file" rel="nofollow">https://github.com/olingerc/socketio-copy-large-file</a></li>
<li>Install requirements</li>
<li>Choose method in the copy_file function (line 42)</li>
<li>Start with ./app.py</li>
</ul>
<p>In the browser:</p>
<ul>
<li>go to localhost:5000</li>
<li>Click on Copy file</li>
<li>Click on Ping to send a message while the file is being copied</li>
<li>Also watch for other signals from background thread</li>
</ul>
| 1 |
2016-09-16T10:01:50Z
| 39,542,933 |
<p>You are asking two separate questions.</p>
<p>First, let's discuss the actual copying of the file.</p>
<p>It looks like you are using eventlet for your server. While this framework provides asynchronous replacements for network I/O functions, disk I/O is much more complicated to do in a non-blocking fashion, in particular on Linux (some info on the problem <a href="http://davmac.org/davpage/linux/async-io.html" rel="nofollow">here</a>). So doing I/O on files even with the standard library monkey patched causes blocking, as you have noticed. This is the same with gevent, by the way.</p>
<p>A typical solution to perform non-blocking I/O on files is to use a thread pool. With eventlet, the <a href="http://eventlet.net/doc/threading.html#eventlet.tpool.execute" rel="nofollow">eventlet.tpool.execute</a> function can do this. So basically, instead of calling <code>copy_large_file()</code> directly, you will call <code>tpool.execute(copy_large_file)</code>. This will enable other green threads in your application to run while the copy takes place in another system thread. Your solution of using another process is also valid, by the way, but it may be overkill depending on how many times and how frequently you need to do one of these copies.</p>
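<p>A minimal sketch of that change, reusing the paths and emits from the question's <code>copy_large_file</code> (it assumes the module-level <code>socketio</code> object from the question):</p>
<pre><code>from eventlet import tpool
import shutil

def copy_large_file():
    source = "/home/christophe/Desktop/largefile"
    destination = "/home/christophe/Desktop/largefile2"
    socketio.emit('my_response', {'data': 'Thread says: before'}, namespace='/test')
    # run the blocking disk copy in eventlet's OS thread pool; other green
    # threads (and incoming socket events) keep being serviced meanwhile
    tpool.execute(shutil.copy, source, destination)
    socketio.emit('my_response', {'data': 'Thread says: after'}, namespace='/test')
</code></pre>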
<p>Your second question is related to "remembering" a client that starts a long file copy, even if the browser is closed and reopened.</p>
<p>This is really something your application needs to handle by storing the state that is necessary to restore a returning client. Presumably your clients have a way to identify with your application, either with a token or some other identification. When the server starts one of these file copies, it can assign an id to the operation, and store that id in a database, associated with the client that requested it. If the client goes away and then returns, you can find if there are any ongoing file copies for it, and that way sync the client back to the way it was before it closed the browser.</p>
<p>Hope this helps!</p>
| 1 |
2016-09-17T04:44:45Z
|
[
"python",
"multithreading",
"multiprocessing",
"eventlet",
"flask-socketio"
] |
How do you organise a python project that contains multiple packages so that each file in a package can still be run individually?
| 39,528,736 |
<p><strong>TL;DR</strong></p>
<p>Here's an example repository that is set up as described in the first diagram (below): <a href="https://github.com/Poddster/package_problems" rel="nofollow">https://github.com/Poddster/package_problems</a></p>
<p>If you could please make it look like the second diagram in terms of project organisation and can still run the following commands, then you've answered the question:</p>
<pre><code>$ git clone https://github.com/Poddster/package_problems.git
$ cd package_problems
<do your magic here>
$ nosetests
$ ./my_tool/my_tool.py
$ ./my_tool/t.py
$ ./my_tool/d.py
(or for the above commands, $ cd ./my_tool/ && ./my_tool.py is also acceptable)
</code></pre>
<p>Alternatively: Give me a different project structure that allows me to group together related files ('package'), run all of the files individually, import the files into other files in the same package, and import the packages/files into other package's files.</p>
<hr>
<h1>Current situation</h1>
<p>I have a bunch of python files. Most of them are useful when callable from the command line i.e. they all use argparse and <code>if __name__ == "__main__"</code> to do useful things.</p>
<p>Currently I have this directory structure, and everything is working fine:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool.py
├── a.py
├── b.py
├── c.py
├── d.py
├── e.py
├── README.md
├── tests
│   ├── __init__.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   └── e.py
└── resources
    └── ...
</code></pre>
<p>Some of the scripts <code>import</code> things from other scripts to do their work. But no script is merely a library; they are all invokable. e.g. I could invoke <code>./my_tool.py</code>, <code>./a.py</code>, <code>./b.py</code>, <code>./c.py</code> etc. and they would do useful things for the user. </p>
<p>"my_tool.py" is the main script that leverages all of the other scripts.</p>
<h1>What I want to happen</h1>
<p>However I want to change the way the project is organised. The project itself represents an entire program useable by the user, and will be distributed as such, but I know that parts of it will be useful in different projects later so I want to try and encapsulate the current files into a package. In the immediate future I will also add other packages to this same project. </p>
<p>To facilitate this I've decided to re-organise the project to something like the following:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool
│   ├── __init__.py
│   ├── my_tool.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   ├── e.py
│   └── tests
│       ├── __init__.py
│       ├── a.py
│       ├── b.py
│       ├── c.py
│       ├── d.py
│       └── e.py
├── package2
│   ├── __init__.py
│   ├── my_second_package.py
│   └── ...
├── README.md
└── resources
    └── ...
</code></pre>
<p>However, I can't figure out a project organisation that satisfies the following criteria:</p>
<ol>
<li>All of the scripts are invokable on the command line (either as <code>my_tool\a.py</code> or <code>cd my_tool && a.py</code>)</li>
<li>The tests actually run :) </li>
<li>Files in package2 can do <code>import my_tool</code></li>
</ol>
<p>The main problem is with the import statements used by the packages and the tests.</p>
<p>Currently, all of the packages, including the tests, simply do <code>import <module></code> and it's resolved correctly. But when jiggering things around it doesn't work.</p>
<p>Note that supporting py2.7 is a requirement so all of the files have <code>from __future__ import absolute_import, ...</code> at the top.</p>
<h1>What I've tried, and the disastrous results</h1>
<h2>1</h2>
<p>If I move the files around as shown above, but leave all of the import statements as they currently are:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests fail to import the package's scripts.</li>
<li>pycharm highlights import statements in red when editing those files :(</li>
</ol>
<h2>2</h2>
<p>If I then change the test scripts to do:</p>
<pre><code>from my_tool import x
</code></pre>
<ol>
<li><code>$ ./my_tool/*.py</code> still works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests can import the correct scripts, but the imports in the scripts themselves fail when the test scripts import them. </li>
<li>pycharm highlights import statements in red in the main scripts still :(</li>
</ol>
<h2>3</h2>
<p>If I keep the same structure and change <em>everything</em> to be <code>from my_tool import</code> then:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> results in <code>ImportError</code>s </li>
<li><code>$ nosetests</code> runs everything ok. </li>
<li>pycharm doesn't complain about anything </li>
</ol>
<p>e.g. of 1.:</p>
<pre><code>Traceback (most recent call last):
File "./my_tool/a.py", line 34, in <module>
from my_tool import b
ImportError: cannot import name b
</code></pre>
<h2>4</h2>
<p>I also tried <code>from . import x</code> but that just ends up with <code>ValueError: Attempted relative import in non-package</code> for the direct running of scripts.</p>
<h1>Looking at some other SO answers:</h1>
<p>I can't just use <code>python -m pkg.tests.core_test</code> as</p>
<p>a) I don't have <code>__main__.py</code>. I guess I could have one?<br>
b) I want to be able to run all of the scripts, not just main?</p>
<p>I've tried:</p>
<pre><code>if __name__ == '__main__' and __package__ is None:
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
</code></pre>
<p>but it didn't help. </p>
<p>I also tried:</p>
<pre><code>__package__ = "my_tool"
from . import b
</code></pre>
<p>But received:</p>
<pre><code>SystemError: Parent module 'loading_tool' not loaded, cannot perform relative import
</code></pre>
<p>adding <code>import my_tool</code> before <code>from . import b</code> just ends up back with <code>ImportError: cannot import name b</code></p>
<h2>Fix?</h2>
<p>What's the correct set of magical incantations and directory layout to make all of this work?</p>
| 19 |
2016-09-16T10:02:06Z
| 39,746,026 |
<p>Once you move to your desired configuration, the absolute imports you are using to load the modules that are specific to <code>my_tool</code> no longer work.</p>
<p>You need three modifications after you create the <code>my_tool</code> subdirectory and move the files into it:</p>
<ol>
<li><p>Create <code>my_tool/__init__.py</code>. (You seem to already do this but I wanted to mention it for completeness.)</p></li>
<li><p>In the files directly under <code>my_tool</code>: change the <code>import</code> statements to load the modules from the current package. So in <code>my_tool.py</code> change:</p>
<pre><code>import c
import d
import k
import s
</code></pre>
<p>to: </p>
<pre><code>from . import c
from . import d
from . import k
from . import s
</code></pre>
<p>You need to make a similar change to all your other files. (You mention having tried setting <code>__package__</code> and then doing a relative import but setting <code>__package__</code> is not needed.)</p></li>
<li><p>In the files located in <code>my_tool/tests</code>: change the <code>import</code> statements that import the code you want to test to relative imports that load from one package up in the hierarchy. So in <code>test_my_tool.py</code> change:</p>
<pre><code>import my_tool
</code></pre>
<p>to:</p>
<pre><code>from .. import my_tool
</code></pre>
<p>Similarly for all the other test files.</p></li>
</ol>
<p>With the modifications above, I can run modules directly:</p>
<pre><code>$ python -m my_tool.my_tool
C!
D!
F!
V!
K!
T!
S!
my_tool!
my_tool main!
|main tool!||detected||tar edit!||installed||keys||LOL||ssl connect||parse ASN.1||config|
$ python -m my_tool.k
F!
V!
K!
K main!
|keys||LOL||ssl connect||parse ASN.1|
</code></pre>
<p>and I can run tests:</p>
<pre><code>$ nosetests
........
----------------------------------------------------------------------
Ran 8 tests in 0.006s
OK
</code></pre>
<p>Note that I can run the above both with Python 2.7 and Python 3.</p>
<hr>
<p>Rather than make the various modules under <code>my_tool</code> be directly executable, I suggest using a proper <code>setup.py</code> file to declare entry points and let <code>setup.py</code> create these entry points when the package is installed. Since you intend to distribute this code, you should use a <code>setup.py</code> to formally package it anyway.</p>
<ol>
<li><p>Modify the modules that can be invoked from the command line so that, taking <code>my_tool/my_tool.py</code> as example, instead of this:</p>
<pre><code>if __name__ == "__main__":
print("my_tool main!")
print(do_something())
</code></pre>
<p>You have:</p>
<pre><code>def main():
print("my_tool main!")
print(do_something())
if __name__ == "__main__":
main()
</code></pre></li>
<li><p>Create a <code>setup.py</code> file that contains the proper <code>entry_points</code>. For instance:</p>
<pre><code>from setuptools import setup, find_packages
setup(
name="my_tool",
version="0.1.0",
packages=find_packages(),
entry_points={
'console_scripts': [
'my_tool = my_tool.my_tool:main'
],
},
author="",
author_email="",
description="Does stuff.",
license="MIT",
keywords=[],
url="",
classifiers=[
],
)
</code></pre>
<p>The file above instructs <code>setup.py</code> to create a script named <code>my_tool</code> that will invoke the <code>main</code> method in the module <code>my_tool.my_tool</code>. On my system, once the package is installed, there is a script located at <code>/usr/local/bin/my_tool</code> that invokes the <code>main</code> method in <code>my_tool.my_tool</code>. It produces the same output as running <code>python -m my_tool.my_tool</code>, which I've shown above.</p></li>
</ol>
| 6 |
2016-09-28T11:22:37Z
|
[
"python",
"python-2.7",
"import",
"pycharm",
"packages"
] |
How do you organise a python project that contains multiple packages so that each file in a package can still be run individually?
| 39,528,736 |
<p><strong>TL;DR</strong></p>
<p>Here's an example repository that is set up as described in the first diagram (below): <a href="https://github.com/Poddster/package_problems" rel="nofollow">https://github.com/Poddster/package_problems</a></p>
<p>If you could please make it look like the second diagram in terms of project organisation and can still run the following commands, then you've answered the question:</p>
<pre><code>$ git clone https://github.com/Poddster/package_problems.git
$ cd package_problems
<do your magic here>
$ nosetests
$ ./my_tool/my_tool.py
$ ./my_tool/t.py
$ ./my_tool/d.py
(or for the above commands, $ cd ./my_tool/ && ./my_tool.py is also acceptable)
</code></pre>
<p>Alternatively: Give me a different project structure that allows me to group together related files ('package'), run all of the files individually, import the files into other files in the same package, and import the packages/files into other package's files.</p>
<hr>
<h1>Current situation</h1>
<p>I have a bunch of python files. Most of them are useful when callable from the command line i.e. they all use argparse and <code>if __name__ == "__main__"</code> to do useful things.</p>
<p>Currently I have this directory structure, and everything is working fine:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool.py
├── a.py
├── b.py
├── c.py
├── d.py
├── e.py
├── README.md
├── tests
│   ├── __init__.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   └── e.py
└── resources
    └── ...
</code></pre>
<p>Some of the scripts <code>import</code> things from other scripts to do their work. But no script is merely a library; they are all invokable. e.g. I could invoke <code>./my_tool.py</code>, <code>./a.py</code>, <code>./b.py</code>, <code>./c.py</code> etc. and they would do useful things for the user. </p>
<p>"my_tool.py" is the main script that leverages all of the other scripts.</p>
<h1>What I want to happen</h1>
<p>However I want to change the way the project is organised. The project itself represents an entire program useable by the user, and will be distributed as such, but I know that parts of it will be useful in different projects later so I want to try and encapsulate the current files into a package. In the immediate future I will also add other packages to this same project. </p>
<p>To facilitate this I've decided to re-organise the project to something like the following:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool
│   ├── __init__.py
│   ├── my_tool.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   ├── e.py
│   └── tests
│       ├── __init__.py
│       ├── a.py
│       ├── b.py
│       ├── c.py
│       ├── d.py
│       └── e.py
├── package2
│   ├── __init__.py
│   ├── my_second_package.py
│   └── ...
├── README.md
└── resources
    └── ...
</code></pre>
<p>However, I can't figure out a project organisation that satisfies the following criteria:</p>
<ol>
<li>All of the scripts are invokable on the command line (either as <code>my_tool\a.py</code> or <code>cd my_tool && a.py</code>)</li>
<li>The tests actually run :) </li>
<li>Files in package2 can do <code>import my_tool</code></li>
</ol>
<p>The main problem is with the import statements used by the packages and the tests.</p>
<p>Currently, all of the packages, including the tests, simply do <code>import <module></code> and it's resolved correctly. But when jiggering things around it doesn't work.</p>
<p>Note that supporting py2.7 is a requirement so all of the files have <code>from __future__ import absolute_import, ...</code> at the top.</p>
<h1>What I've tried, and the disastrous results</h1>
<h2>1</h2>
<p>If I move the files around as shown above, but leave all of the import statements as they currently are:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests fail to import the package's scripts.</li>
<li>pycharm highlights import statements in red when editing those files :(</li>
</ol>
<h2>2</h2>
<p>If I then change the test scripts to do:</p>
<pre><code>from my_tool import x
</code></pre>
<ol>
<li><code>$ ./my_tool/*.py</code> still works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests can import the correct scripts, but the imports in the scripts themselves fail when the test scripts import them. </li>
<li>pycharm highlights import statements in red in the main scripts still :(</li>
</ol>
<h2>3</h2>
<p>If I keep the same structure and change <em>everything</em> to be <code>from my_tool import</code> then:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> results in <code>ImportError</code>s </li>
<li><code>$ nosetests</code> runs everything ok. </li>
<li>pycharm doesn't complain about anything </li>
</ol>
<p>e.g. of 1.:</p>
<pre><code>Traceback (most recent call last):
File "./my_tool/a.py", line 34, in <module>
from my_tool import b
ImportError: cannot import name b
</code></pre>
<h2>4</h2>
<p>I also tried <code>from . import x</code> but that just ends up with <code>ValueError: Attempted relative import in non-package</code> for the direct running of scripts.</p>
<h1>Looking at some other SO answers:</h1>
<p>I can't just use <code>python -m pkg.tests.core_test</code> as</p>
<p>a) I don't have <code>__main__.py</code>. I guess I could have one?<br>
b) I want to be able to run all of the scripts, not just main?</p>
<p>I've tried:</p>
<pre><code>if __name__ == '__main__' and __package__ is None:
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
</code></pre>
<p>but it didn't help. </p>
<p>I also tried:</p>
<pre><code>__package__ = "my_tool"
from . import b
</code></pre>
<p>But received:</p>
<pre><code>SystemError: Parent module 'loading_tool' not loaded, cannot perform relative import
</code></pre>
<p>adding <code>import my_tool</code> before <code>from . import b</code> just ends up back with <code>ImportError: cannot import name b</code></p>
<h2>Fix?</h2>
<p>What's the correct set of magical incantations and directory layout to make all of this work?</p>
| 19 |
2016-09-16T10:02:06Z
| 39,758,989 |
<h1>Point 1</h1>
<p>I believe it's working, so I don't comment on it.</p>
<h1>Point 2</h1>
<p>I always put tests at the same level as my_tool, not below it, but they should work if you do this at the top of each test file (before importing my_tool or any other py file in the same directory)</p>
<pre><code>import os
import sys
sys.path.insert(0, os.path.abspath(__file__).rsplit(os.sep, 2)[0])
</code></pre>
<h1>Point 3</h1>
<p>In my_second_package.py do this at the top (before importing my_tool)</p>
<pre><code>import os
import sys
sys.path.insert(0,
os.path.abspath(__file__).rsplit(os.sep, 2)[0] + os.sep
+ 'my_tool')
</code></pre>
<p>Best regards,</p>
<p>JM</p>
| 1 |
2016-09-28T23:09:09Z
|
[
"python",
"python-2.7",
"import",
"pycharm",
"packages"
] |
How do you organise a python project that contains multiple packages so that each file in a package can still be run individually?
| 39,528,736 |
<p><strong>TL;DR</strong></p>
<p>Here's an example repository that is set up as described in the first diagram (below): <a href="https://github.com/Poddster/package_problems" rel="nofollow">https://github.com/Poddster/package_problems</a></p>
<p>If you could please make it look like the second diagram in terms of project organisation and can still run the following commands, then you've answered the question:</p>
<pre><code>$ git clone https://github.com/Poddster/package_problems.git
$ cd package_problems
<do your magic here>
$ nosetests
$ ./my_tool/my_tool.py
$ ./my_tool/t.py
$ ./my_tool/d.py
(or for the above commands, $ cd ./my_tool/ && ./my_tool.py is also acceptable)
</code></pre>
<p>Alternatively: Give me a different project structure that allows me to group together related files ('package'), run all of the files individually, import the files into other files in the same package, and import the packages/files into other package's files.</p>
<hr>
<h1>Current situation</h1>
<p>I have a bunch of python files. Most of them are useful when callable from the command line i.e. they all use argparse and <code>if __name__ == "__main__"</code> to do useful things.</p>
<p>Currently I have this directory structure, and everything is working fine:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool.py
├── a.py
├── b.py
├── c.py
├── d.py
├── e.py
├── README.md
├── tests
│   ├── __init__.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   └── e.py
└── resources
    └── ...
</code></pre>
<p>Some of the scripts <code>import</code> things from other scripts to do their work. But no script is merely a library; they are all invokable. e.g. I could invoke <code>./my_tool.py</code>, <code>./a.py</code>, <code>./b.py</code>, <code>./c.py</code> etc. and they would do useful things for the user. </p>
<p>"my_tool.py" is the main script that leverages all of the other scripts.</p>
<h1>What I want to happen</h1>
<p>However I want to change the way the project is organised. The project itself represents an entire program useable by the user, and will be distributed as such, but I know that parts of it will be useful in different projects later so I want to try and encapsulate the current files into a package. In the immediate future I will also add other packages to this same project. </p>
<p>To facilitate this I've decided to re-organise the project to something like the following:</p>
<pre><code>.
├── config.txt
├── docs/
│   └── ...
├── my_tool
│   ├── __init__.py
│   ├── my_tool.py
│   ├── a.py
│   ├── b.py
│   ├── c.py
│   ├── d.py
│   ├── e.py
│   └── tests
│       ├── __init__.py
│       ├── a.py
│       ├── b.py
│       ├── c.py
│       ├── d.py
│       └── e.py
├── package2
│   ├── __init__.py
│   ├── my_second_package.py
│   └── ...
├── README.md
└── resources
    └── ...
</code></pre>
<p>However, I can't figure out a project organisation that satisfies the following criteria:</p>
<ol>
<li>All of the scripts are invokable on the command line (either as <code>my_tool\a.py</code> or <code>cd my_tool && a.py</code>)</li>
<li>The tests actually run :) </li>
<li>Files in package2 can do <code>import my_tool</code></li>
</ol>
<p>The main problem is with the import statements used by the packages and the tests.</p>
<p>Currently, all of the packages, including the tests, simply do <code>import <module></code> and it's resolved correctly. But when jiggering things around it doesn't work.</p>
<p>Note that supporting py2.7 is a requirement so all of the files have <code>from __future__ import absolute_import, ...</code> at the top.</p>
<h1>What I've tried, and the disastrous results</h1>
<h2>1</h2>
<p>If I move the files around as shown above, but leave all of the import statements as they currently are:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests fail to import the package's scripts.</li>
<li>pycharm highlights import statements in red when editing those files :(</li>
</ol>
<h2>2</h2>
<p>If I then change the test scripts to do:</p>
<pre><code>from my_tool import x
</code></pre>
<ol>
<li><code>$ ./my_tool/*.py</code> still works and they all run properly</li>
<li><code>$ nosetests</code> run from the top directory doesn't work. The tests can import the correct scripts, but the imports in the scripts themselves fail when the test scripts import them. </li>
<li>pycharm highlights import statements in red in the main scripts still :(</li>
</ol>
<h2>3</h2>
<p>If I keep the same structure and change <em>everything</em> to be <code>from my_tool import</code> then:</p>
<ol>
<li><code>$ ./my_tool/*.py</code> results in <code>ImportError</code>s </li>
<li><code>$ nosetests</code> runs everything ok. </li>
<li>pycharm doesn't complain about anything </li>
</ol>
<p>e.g. of 1.:</p>
<pre><code>Traceback (most recent call last):
File "./my_tool/a.py", line 34, in <module>
from my_tool import b
ImportError: cannot import name b
</code></pre>
<h2>4</h2>
<p>I also tried <code>from . import x</code> but that just ends up with <code>ValueError: Attempted relative import in non-package</code> for the direct running of scripts.</p>
<h1>Looking at some other SO answers:</h1>
<p>I can't just use <code>python -m pkg.tests.core_test</code> as</p>
<p>a) I don't have <code>__main__.py</code>. I guess I could have one?<br>
b) I want to be able to run all of the scripts, not just main?</p>
<p>I've tried:</p>
<pre><code>if __name__ == '__main__' and __package__ is None:
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
</code></pre>
<p>but it didn't help. </p>
<p>I also tried:</p>
<pre><code>__package__ = "my_tool"
from . import b
</code></pre>
<p>But received:</p>
<pre><code>SystemError: Parent module 'loading_tool' not loaded, cannot perform relative import
</code></pre>
<p>adding <code>import my_tool</code> before <code>from . import b</code> just ends up back with <code>ImportError: cannot import name b</code></p>
<h2>Fix?</h2>
<p>What's the correct set of magical incantations and directory layout to make all of this work?</p>
| 19 |
2016-09-16T10:02:06Z
| 39,778,464 |
<p>To run the files from the command line and also have them act as a library, while still allowing nosetests to operate in a standard manner, I believe you will have to take a doubled-up approach to the imports. </p>
<p>For example, the Python files will require:</p>
<pre><code>try:
import f
except ImportError:
import tools.f as f
</code></pre>
<p>I went through and made a PR off the github you linked with all test cases working. </p>
<p><a href="https://github.com/Poddster/package_problems/pull/1" rel="nofollow">https://github.com/Poddster/package_problems/pull/1</a></p>
<p>Edit: I had forgotten the imports in <code>__init__.py</code> needed for the package to be properly usable from other packages; they have been added. Now you should be able to do: </p>
<pre><code>import tools
tools.c.do_something()
</code></pre>
| 0 |
2016-09-29T19:26:38Z
|
[
"python",
"python-2.7",
"import",
"pycharm",
"packages"
] |
ImportError: No module named pymongo
| 39,528,898 |
<p>I'm trying to install the <a href="http://api.mongodb.com/python/current/" rel="nofollow">PyMongo</a> Python package with pip. It is required by the <a href="https://docs.ansible.com/ansible/mongodb_user_module.html" rel="nofollow">Ansible mongodb_user module</a>.</p>
<p>I'm installing pip and pymongo with the following Ansible script:</p>
<pre><code>- hosts: tag_Name_Development
become: true
remote_user: user
tasks:
- name: install python tools
yum: name={{ item }} state=latest
with_items:
- gcc
- python-devel
- python-setuptools
- python-pip
- name: install pymongo
pip: name=pymongo state=latest
- name: add admin user to mongo
mongodb_user:
login_port: 27017
database: admin
name: admin
password: "{{ mongodb.admin_pass }}"
roles: userAdminAnyDatabase
state: present
</code></pre>
<p>After successful installation of the tools I get the following Ansible error.</p>
<p><code>FAILED! => {"changed": false, "failed": true, "msg": "the python pymongo module is required"}</code></p>
<p>On the server where <code>pymongo</code> is installed I get</p>
<pre><code>$ python -c "import pymongo"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named pymongo
</code></pre>
<p>Other relevant info <code>pip freeze</code> and <code>pip list</code></p>
<pre><code>$ python-pip freeze
backports.ssl-match-hostname==3.4.0.2
ordereddict==1.2
pymongo==3.3.0
$ python-pip list
backports.ssl-match-hostname (3.4.0.2)
ordereddict (1.2)
pip (6.1.1)
pymongo (3.3.0)
setuptools (12.2)
</code></pre>
<p>And loaded paths</p>
<pre><code>$ python
Python 2.7.10 (default, Jul 20 2016, 20:53:27)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/local/lib64/python2.7/site-packages', '/usr/local/lib/python2.7/site-packages', '/usr/lib64/python2.7/site-packages', '/usr/lib/python2.7/site-packages', '/usr/lib64/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
</code></pre>
<p>There are many other questions related to this problem, but none of those have helped. I do not have <code>bson</code> installed and I'm not using any virtual envs.</p>
| 0 |
2016-09-16T10:10:48Z
| 39,529,692 |
<p>It turned out that I have two versions of python installed (2.6 and 2.7). Even though the default is 2.7, my Ansible script installed <code>pymongo</code> under the 2.6 folder <code>/usr/local/lib64/python2.6/site-packages</code></p>
<p>I found that out by running <code>$ python-pip show pymongo</code>; <code>$ python2.6 -c "import pymongo"</code> works as expected.</p>
<p>I changed my Ansible script to install <code>python27-pip</code> instead of <code>python-pip</code> and it started to work nicely.</p>
| 0 |
2016-09-16T10:50:48Z
|
[
"python",
"ansible",
"pymongo"
] |
Pyspark - Get all parameters of models created with ParamGridBuilder
| 39,529,012 |
<p>I'm using pySpark 2.0 for a Kaggle competition. I'd like to know the behavior of a model (RandomForest) depending on different parameters. ParamGridBuilder() allows specifying different values for a single parameter, and then performs (I guess) a Cartesian product of the entire set of parameters. Assuming my dataframe is already defined:</p>
<pre><code>rdc = RandomForestClassifier()
pipeline = Pipeline(stages=STAGES + [rdc])
paramGrid = ParamGridBuilder().addGrid(rdc.maxDepth, [3, 10, 20])
.addGrid(rdc.minInfoGain, [0.01, 0.001])
.addGrid(rdc.numTrees, [5, 10, 20, 30])
.build()
evaluator = MulticlassClassificationEvaluator()
valid = TrainValidationSplit(estimator=pipeline,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
trainRatio=0.50)
model = valid.fit(df)
result = model.bestModel.transform(df)
</code></pre>
<p>OK, so now I'm able to retrieve simple information with a handmade function:</p>
<pre><code>def evaluate(result):
predictionAndLabels = result.select("prediction", "label")
metrics = ["f1","weightedPrecision","weightedRecall","accuracy"]
for m in metrics:
evaluator = MulticlassClassificationEvaluator(metricName=m)
print(str(m) + ": " + str(evaluator.evaluate(predictionAndLabels)))
</code></pre>
<p>Now I want several things:</p>
<p>*What are the parameters of the best model? This post partially answers the question: <a href="http://stackoverflow.com/questions/36697304/how-to-extract-model-hyper-parameters-from-spark-ml-in-pyspark">How to extract model hyper-parameters from spark.ml in PySpark?</a></p>
<p>*What are the parameters of all models?
*What are the results (aka recall, accuracy, etc...) of each model? I only found <code>print(model.validationMetrics)</code> that displays (it seems) a list containing the accuracy of each model, but I can't tell which model each entry refers to.</p>
<p>If I can retrieve all that information, I should be able to display graphs, bar charts, and work as I do with pandas and sklearn.</p>
| 0 |
2016-09-16T10:17:59Z
| 39,531,097 |
<p>Long story short you simply cannot get parameters for all models because, <a href="http://stackoverflow.com/a/38874828/1560062">similarly to <code>CrossValidator</code></a>, <code>TrainValidationSplitModel</code> retains only the best model. These classes are designed for semi-automated model selection not exploration or experiments.</p>
<blockquote>
<p>What are the parameters of all models?</p>
</blockquote>
<p>While you cannot retrieve the actual models, <code>validationMetrics</code> correspond to the input <code>Params</code>, so you should be able to simply <code>zip</code> both:</p>
<pre><code>from typing import Dict, Tuple, List, Any
from pyspark.ml.param import Param
from pyspark.ml.tuning import TrainValidationSplitModel
EvalParam = List[Tuple[float, Dict[Param, Any]]]
def get_metrics_and_params(model: TrainValidationSplitModel) -> EvalParam:
    return list(zip(model.validationMetrics, model.getEstimatorParamMaps()))
</code></pre>
<p>to get some insight into the relationship between metrics and parameters.</p>
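<p>For example (a sketch; <code>Param.name</code> gives the short parameter name):</p>
<pre><code>for metric, params in get_metrics_and_params(model):
    print(metric, {param.name: value for param, value in params.items()})
</code></pre>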
<p>If you need more information you should use <a href="http://stackoverflow.com/q/35253990/1560062">Pipeline <code>Params</code></a>. It will preserve all model which can be used for further processing:</p>
<pre><code>models = pipeline.fit(df, params=paramGrid)
</code></pre>
<p>It will generate a list of the <code>PipelineModels</code> corresponding to the <code>params</code> argument:</p>
<pre><code>zip(models, paramGrid)
</code></pre>
| 1 |
2016-09-16T12:06:26Z
|
[
"python",
"machine-learning",
"pyspark",
"hyperparameters"
] |
Upload multiple files using simple-salesforce python
| 39,529,028 |
<p>I started learning SalesForce and developing apps using django.</p>
<p>I need assistance with uploading a file to Salesforce. For that I read <a href="https://github.com/simple-salesforce/simple-salesforce" rel="nofollow">simple-salesforce</a> and <a href="https://gist.github.com/wadewegner/df609a495df2e4bd7a07" rel="nofollow">this</a>, which help with uploading files via REST and SOAP APIs.</p>
<p>My question is how do I upload one or more files using simple-salesforce?</p>
| 1 |
2016-09-16T10:18:41Z
| 39,579,090 |
<p>Here is the code block I use for uploading files.</p>
<pre><code>import base64
import json

import requests

def load_attachments(sf, new_attachments):
'''
Method to attach the Template from the Parent Case to each of the children.
@param: new_attachments the dictionary of child cases to the file name of the template
'''
url = "https://" + sf.get_forced_url() + ".my.salesforce.com/services/data/v29.0/sobjects/Attachment/"
bearer = "Bearer " + sf.get_session_id()
header = {'Content-Type': 'application/json', 'Authorization': bearer}
for each in new_attachments:
body = ""
long_name = str(new_attachments[each]).split(sep="\\")
short_name = long_name[len(long_name) - 1]
with open(new_attachments[each], "rb") as upload:
            body = base64.b64encode(upload.read()).decode('ascii')  # decode: json.dumps needs str, not bytes, on Python 3
data = json.dumps({
'ParentId': each,
'Name': short_name,
'body': body
})
response = requests.post(url, headers=header, data=data)
print(response.text)
</code></pre>
<p>Basically, to send the file, you need to use the requests module and submit the file via a post transaction. The post transaction requires the URL to which the request is sent, the header information, and the data.</p>
<p>Here, sf is the instance returned by the simple-salesforce initialization. Since my instance uses custom domains, I had to create my own function in simple-salesforce to handle that; I call it get_forced_url(). Note: the URL may be different for you depending on which version you are using [the v29.0 portion may change].</p>
<p>Then I set up my bearer and header.</p>
<p>The next thing is a loop that submits a new attachment for each entry in a map from Parent ID to the file I wish to upload. It is important to note that attachments must have a parent object, so you need to know the ParentId. For each attachment, I blank out the body and create a long and short name for the attachment. Then the important part: on attachments, the actual data of the file is stored base-64 encoded. So the file must be opened as binary, hence the "rb", and then encoded to base-64 (and decoded back to text so it can be embedded in JSON).</p>
<p>Once the file has been parsed to base-64 binary, I build my json string where ParentId is the object ID of the parent object, the Name is the short name, and the body is the base-64 encoded string of data.</p>
<p>Then the file is submitted to the URL with the headers and data. Then I print the response so I could watch it happening.</p>
| 0 |
2016-09-19T17:42:24Z
|
[
"python",
"django",
"salesforce"
] |
Selenium and Chrome Driver issues
| 39,529,341 |
<p>I am carrying out some automated tasks and am required to run the script as root (for writing dirs to shares etc). The problem I'm faced with is that Chrome can't be run as root (for obvious reasons), so I have attempted various workarounds. The latest was an attempt to launch Chrome using a normal user's profile, which by the looks of it doesn't actually launch the application as that user. </p>
<p>Is there a way to either launch the script as root and within the script launch chrome as a normal user or;</p>
<p>Launch the script as a normal user and in the script execute the relevant commands as root? Specifically I need to execute <code>os.makedirs</code>, <code>chmod</code> (this I've accomplished using subprocess) and finally I need to write files to the dirs using <code>with open...</code> (this is where the problem lies in this scenario).</p>
<p>Launching the script as root and attempting to execute chrome as a normal user was carried out as per below:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('PATH/TO/NORMAL/USER')
browser = webdriver.Chrome(chrome_options=options)
</code></pre>
<p>As suggested, this doesn't seem to launch the application as the normal user but just uses the profile of the user. </p>
| 0 |
2016-09-16T10:34:00Z
| 39,530,372 |
<p>How about using <code>os.setuid</code> to change user id?</p>
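<p>A sketch of that idea — dropping root privileges in the child process just before the browser starts (<code>username</code> and <code>cmd</code> are placeholders):</p>
<pre><code>import os
import pwd
import subprocess

def run_as(username, cmd):
    pw = pwd.getpwnam(username)
    env = dict(os.environ, HOME=pw.pw_dir, USER=username)
    def demote():
        # runs in the child just before exec: give up root
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)
    return subprocess.Popen(cmd, preexec_fn=demote, env=env)
</code></pre>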
<p>Also, I haven't looked at it in detail, but the way this works might be of interest : <a href="https://github.com/ionelmc/python-su" rel="nofollow">https://github.com/ionelmc/python-su</a></p>
| 0 |
2016-09-16T11:28:12Z
|
[
"python",
"google-chrome",
"selenium"
] |
Selenium and Chrome Driver issues
| 39,529,341 |
<p>I am carrying out some automated tasks and am required to run the script as root (for writing dirs to shares etc). The problem I'm faced with is that Chrome can't be run as root (for obvious reasons), so I have attempted various workarounds. The latest was an attempt to launch Chrome using a normal user's profile, which by the looks of it doesn't actually launch the application as that user. </p>
<p>Is there a way to either launch the script as root and within the script launch chrome as a normal user or;</p>
<p>Launch the script as a normal user and in the script execute the relevant commands as root? Specifically I need to execute <code>os.makedirs</code>, <code>chmod</code> (this I've accomplished using subprocess) and finally I need to write files to the dirs using <code>with open...</code> (this is where the problem lies in this scenario).</p>
<p>Launching the script as root and attempting to execute chrome as a normal user was carried out as per below:</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('PATH/TO/NORMAL/USER')
browser = webdriver.Chrome(chrome_options=options)
</code></pre>
<p>As suggested, this doesn't seem to launch the application as the normal user but just uses the profile of the user. </p>
| 0 |
2016-09-16T10:34:00Z
| 39,531,557 |
<p>There are two things you could try.
First, run Chrome with <code>--no-sandbox</code>.
Second, change the ownership or permissions of the <code>.pki</code> folder in your home directory; by default it is owned by root:
<code>sudo chown -R saurabh:saurabh ~/.pki/</code></p>
| 0 |
2016-09-16T12:29:02Z
|
[
"python",
"google-chrome",
"selenium"
] |
Scrapy settings work using custom_settings but don't work in settings.py
| 39,529,474 |
<p>I have been trying to edit some settings in my Spider but they only seem to work when I override the custom_settings dictionary in my custom Spider.</p>
<pre><code>custom_settings = {
'DOWNLOAD_DELAY': 1,
'FEED_URI': 'generalspider.json',
'FEED_FORMAT': 'json'
}
</code></pre>
<p>When I put them in settings.py they don't seem to work. settings.py was supposed to work for all spiders. Am I missing something?</p>
| 0 |
2016-09-16T10:40:56Z
| 39,533,014 |
<p><code>custom_settings</code> has priority over <code>settings.py</code>. So you'll have to remove the variables in <code>custom_settings</code> for the variables in <code>settings.py</code> to work.</p>
<p>Also please check if the class of your spider is derived from other classes (maybe spiders) and those base classes have their own <code>custom_settings</code>.</p>
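<p>So, as a minimal sketch, the values from the question would live in <code>settings.py</code> only (and not in the spider's <code>custom_settings</code>):</p>
<pre><code># settings.py
DOWNLOAD_DELAY = 1
FEED_URI = 'generalspider.json'
FEED_FORMAT = 'json'
</code></pre>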
| 2 |
2016-09-16T13:42:40Z
|
[
"python",
"scrapy",
"settings"
] |
How to create a Google calendar event with Python and Google calendar API
| 39,529,481 |
<p>I started with the quickstart (<a href="https://developers.google.com/google-apps/calendar/quickstart/python" rel="nofollow">https://developers.google.com/google-apps/calendar/quickstart/python</a>) and it worked well. Then I tried to insert an event following this guide (<a href="https://developers.google.com/google-apps/calendar/create-events" rel="nofollow">https://developers.google.com/google-apps/calendar/create-events</a>). I added that code to the code from the quickstart and got an error.
How should my code look in order to insert an event into my Google calendar?</p>
<p>So this is my code:</p>
<pre><code>from __future__ import print_function
import httplib2
import os
from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools
import datetime
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/calendar-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/calendar'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Google Calendar API Python Quickstart'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'calendar-python-quickstart.json')
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
"""Shows basic usage of the Google Calendar API.
Creates a Google Calendar API service object and outputs a list of the next
10 events on the user's calendar.
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('calendar', 'v3', http=http)
# Refer to the Python quickstart on how to setup the environment:
# https://developers.google.com/google-apps/calendar/quickstart/python
# Change the scope to 'https://www.googleapis.com/auth/calendar' and delete any
# stored credentials.
event = {
'summary': 'Google I/O 2015',
'location': '800 Howard St., San Francisco, CA 94103',
'description': 'A chance to hear more about Google\'s developer products.',
'start': {
'dateTime': '2016-09-28T09:00:00-07:00',
'timeZone': 'America/Los_Angeles',
},
'end': {
'dateTime': '2016-09-28T17:00:00-07:00',
'timeZone': 'America/Los_Angeles',
},
'recurrence': [
'RRULE:FREQ=DAILY;COUNT=2'
],
'attendees': [
{'email': 'lpage@example.com'},
{'email': 'sbrin@example.com'},
],
'reminders': {
'useDefault': False,
'overrides': [
{'method': 'email', 'minutes': 24 * 60},
{'method': 'popup', 'minutes': 10},
],
},
}
event = service.events().insert(calendarId='primary', body=event).execute()
print ('Event created: %s' % (event.get('htmlLink')))
if __name__ == '__main__':
main()
</code></pre>
<p>This is the error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/demin.va/Documents/Dropbox/Programming/Google calendar API/google calendar api.py", line 96, in <module>
main()
File "C:/Users/demin.va/Documents/Dropbox/Programming/Google calendar API/google calendar api.py", line 92, in main
event = service.events().insert(calendarId='primary', body=event).execute()
File "C:\Users\demin.va\AppData\Local\Programs\Python\Python35-32\lib\site-packages\oauth2client\util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\Users\demin.va\AppData\Local\Programs\Python\Python35-32\lib\site-packages\googleapiclient\http.py", line 838, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/calendar/v3/calendars/primary/events?alt=json returned "Insufficient Permission">
</code></pre>
| 0 |
2016-09-16T10:41:22Z
| 39,530,400 |
<p>The problem was in the ".credentials" folder. I didn't delete it after a previous run of the quickstart example with </p>
<pre><code>SCOPES = 'https://www.googleapis.com/auth/calendar.readonly'
</code></pre>
<p>I just deleted this folder and the program works, because now </p>
<pre><code>SCOPES = 'https://www.googleapis.com/auth/calendar'
</code></pre>
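<p>A minimal sketch of removing the stale credential so the next run triggers a fresh OAuth flow with the broader scope (the path matches the one built in <code>get_credentials</code>):</p>
<pre><code>import os

cred_path = os.path.expanduser('~/.credentials/calendar-python-quickstart.json')
if os.path.exists(cred_path):
    os.remove(cred_path)  # the next run will prompt for consent again
</code></pre>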
| 0 |
2016-09-16T11:29:32Z
|
[
"python",
"calendar",
"google-api",
"google-calendar"
] |
mod_wsgi: Unable to stat Python home and ImportError: No module named 'encodings'
| 39,529,574 |
<p> I am trying to construct a web site on Apache 2.4 using Django and mod_wsgi on Python 3.5.2. But when apache daemon httpd is started, following error message is output on /var/log/httpd/error_log.</p>
<blockquote>
<p>[Fri Sep 16 17:44:57.145900 2016] [wsgi:warn] [pid 20593] (13)Permission denied: mod_wsgi (pid=20593): Unable to stat Python home /home/ec2-user/.pyenv/versions/3.5.2. Python interpreter may not be able to be initialized correctly. Verify the supplied path and access permissions for whole of the path.
<br />Fatal Python error: Py_Initialize: Unable to get the locale encoding
<br />ImportError: No module named 'encodings'</p>
</blockquote>
<p> So, I explored some articles about similar problems, e.g.</p>
<ul>
<li><p><a href="http://stackoverflow.com/questions/9386611/django-apache-mod-wsgi-permission-denied">Django + Apache + mod_wsgi permission denied</a></p></li>
<li><p><a href="http://stackoverflow.com/questions/24495348/mod-wsgi-importerror-no-module-named-encodings">mod_wsgi: ImportError: No module named 'encodings'</a></p></li>
</ul>
<p>but the cause of the error message above is not yet resolved.
<br /> Please indicate some points to check, documents to read, and so on.</p>
<p>My development environments are as follows.</p>
<p><strong>(1) Host and OS</strong><br /></p>
<ul>
<li>Hosting Service: Amazon AWS EC2 </li>
<li>OS: Amazon Linux AMI release 2016.03</li>
</ul>
<p><strong>(2) Django Project</strong> <br /></p>
<p> I made Django project named <strong>testprj</strong> in the directory <strong>/home/ec2-user/django-sites</strong> with user account of <strong>ec2-user</strong>.</p>
<pre><code>[ec2-user@MyEC2 django-sites]$ pwd
/home/ec2-user/django-sites
[ec2-user@MyEC2 django-sites]$ ls -l
total 4
drwxrwxr-x 3 ec2-user ec2-user 4096 Sep 16 14:50 testprj
</code></pre>
<p>The database which testprj uses is already set up, so the development server provided by Django starts successfully with no errors, as below.</p>
<pre><code>[ec2-user@MyEC2 testprj]$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
September 16, 2016 - 14:23:47
Django version 1.10.1, using settings 'testprj.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
</code></pre>
<p><strong>(3) Environments of Python</strong></p>
<p> I installed Python 3.5.2 with <code>pyenv install 3.5.2</code>, then created a virtualenv with
<code>pyenv virtualenv 3.5.2 django-sites</code>,
and set the local version of /home/ec2-user/django-sites to the env <strong>django-sites</strong> as below.</p>
<pre><code>[ec2-user@MyEC2 django-sites]$ pwd
/home/ec2-user/django-sites
[ec2-user@MyEC2 django-sites]$ pyenv versions
system
3.5.2
3.5.2/envs/django-sites
* django-sites (set by /home/ec2-user/django-sites/.python-version)
[ec2-user@MyEC2 django-sites]$ python -V
Python 3.5.2
</code></pre>
<p> And I installed following modules through pip command.</p>
<pre><code>[ec2-user@MyEC2 django-sites]$ pwd
/home/ec2-user/django-sites
[ec2-user@MyEC2 django-sites]$ pip list
configparser (3.5.0)
Django (1.10.1)
mod-wsgi (4.5.7)
pip (8.1.2)
PyMySQL (0.7.9)
setuptools (27.2.0)
</code></pre>
<p><strong>(4) Web server</strong></p>
<p> The web server I use is Apache 2.4, as below.</p>
<pre><code>[ec2-user@MyEC2 ~]$ httpd -v
Server version: Apache/2.4.23 (Amazon)
Server built: Jul 29 2016 21:42:17
</code></pre>
<p> The User and Group under which Apache runs are both <strong>apache</strong>, per the following lines in /etc/httpd/conf/httpd.conf.</p>
<pre><code>User apache
Group apache
</code></pre>
<p><strong>(5) Additional conf file for the Django project testprj</strong></p>
<p> I intend to configure such that the URL</p>
<pre><code>http://[MyEC2 domain]/testprj/
</code></pre>
<p>can access a top page of Django project located at /home/ec2-user/django-sites/testprj.<br />
[MyEC2 domain] above is like </p>
<pre><code>ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
</code></pre>
<p>So, I wrote <strong>testprj.conf</strong> in <strong>/etc/httpd/conf.d</strong> as below.</p>
<pre><code>[ec2-user@MyEC2 conf.d]$ pwd
/etc/httpd/conf.d
[ec2-user@MyEC2 conf.d]$ cat -n testprj.conf
</code></pre>
<pre class="lang-bsh prettyprint-override"><code> 1 LoadModule wsgi_module /home/ec2-user/.pyenv/versions/django-sites/lib/python3.5/site-packages/mod_wsgi/server/mod_wsgi-py35.cpython-35m-x86_64-linux-gnu.so
2
3 WSGIPythonHome /home/ec2-user/.pyenv/versions/3.5.2
4 WSGIScriptReloading On
5 WSGIScriptAlias /testprj/ /home/ec2-user/django-sites/testprj/testprj/wsgi.py
6 WSGIPythonPath /home/ec2-user/django-sites/testprj:/home/ec2-user/.pyenv/versions/django-sites/lib/python3.5/site-packages
7
8 <VirtualHost *:80>
9
10 ServerName http://ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
11 SuexecUserGroup apache apache
12
13 <Directory /home/ec2-user/django-sites/testprj/testprj>
14 <Files wsgi.py>
15 Require all granted
16 </Files>
17 </Directory>
18
19 Alias /static/ "/home/ec2-user/django-sites/testprj/static/"
20
21 <Directory /home/ec2-user/django-sites/testprj/static>
22 <Files *>
23 Require all granted
24 </Files>
25 </Directory>
26
27 </VirtualHost>
28
</code></pre>
<pre class="lang-none prettyprint-override"><code>[ec2-user@MyEC2 conf.d]$
</code></pre>
<p>Best regards.</p>
| 1 |
2016-09-16T10:45:40Z
| 39,530,708 |
<p>A couple of things to check.</p>
<ul>
<li><p>The pyenv tool doesn't install Python with a shared library by default. That could result in problems as mod_wsgi wants a shared library. You need to explicitly tell pyenv to build Python with a shared library.</p></li>
<li><p>A home directory on many Linux systems is not readable to other users. When mod_wsgi is being initialised, it is running as the Apache user and will not be able to see inside of the home directory. There isn't an issue when the mod_wsgi module is loaded by Apache as Apache is running as root at that point.</p></li>
</ul>
<p>There are other places where your configuration doesn't follow best practices, but the above, especially the second item, is likely the cause of your problem.</p>
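<p>A sketch of both fixes (assumed commands; adjust versions and paths to your setup): rebuild the interpreter with a shared library via <code>PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.5.2</code>, and make the home directory traversable by the Apache user with something like <code>chmod o+x /home/ec2-user</code>.</p>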
| 1 |
2016-09-16T11:44:57Z
|
[
"python",
"django",
"apache",
"mod-wsgi"
] |
Python/Tkinter: Remove titlebar without overrideredirect()
| 39,529,600 |
<p>I'm currently working with Tkinter and Python 2.7 on Linux and I was wondering if there was a way to remove the <code>TK()</code> window border frame and title bar without using <code>overrideredirect(1)</code>.</p>
<p>I have my own close button and <code>overrideredirect(1)</code> presents me with a few issues that I can't accept:</p>
<ul>
<li>GUI always on top</li>
<li>can't iconify then deiconify properly</li>
<li>no keyboard input so can't type into fields (see <a href="http://stackoverflow.com/questions/5886280/python-tkinter-overrideredirect-cannot-receive-keystrokes-linux">python tkinter overrideredirect; cannot receive keystrokes (Linux)</a>)</li>
</ul>
<p>I can't use <code>attributes("-fullscreen", True)</code> as the titlebar and borders remain.</p>
| 3 |
2016-09-16T10:46:51Z
| 39,530,810 |
<p>The window decoration is all handled by the window manager so what you are trying to do is find a way to tell the window manager to decorate your window differently from a standard application window. Tk provides <code>overrideredirect</code> to have the window manager completely ignore this window but we can also use <a href="https://standards.freedesktop.org/wm-spec/wm-spec-latest.html#idm140200472629520" rel="nofollow">Extended Window Manager Hints</a> to declare the intended use of this toplevel window to the window manager. This is done for instance for tooltip and splashscreen windows to allow the manager to provide minimal decoration and possibly special animations.</p>
<p>In your case, adding a 'splash' hint should do what you want:</p>
<pre><code>import Tkinter as tk  # Python 2; on Python 3 use "import tkinter as tk"

root = tk.Tk()
root.wm_attributes('-type', 'splash')
</code></pre>
<p>You will need Tk 8.5 or above for this.</p>
| 2 |
2016-09-16T11:51:32Z
|
[
"python",
"python-2.7",
"tkinter"
] |
How to install matplotlib 2.0 beta on Windows?
| 39,529,655 |
<p>I follow the standard installation procedure for matplotlib on Windows. I type the following commands in my terminal:</p>
<pre><code> > python -m pip install -U pip setuptools
> python -m pip install matplotlib
</code></pre>
<p>Then I check my matplotlib version:</p>
<pre><code> > python
>>> import matplotlib
>>> matplotlib.__version__
'1.5.1'
</code></pre>
<p>Apparently this installation procedure doesn't give me the latest matplotlib version. The latest stable release is '1.5.3'. I would expect to get at least that version.</p>
<p>Even better, I would like to test out the latest beta release: '2.0.0 b4'. What instructions should I type in my terminal to get that version?</p>
<p>Note:</p>
<p>I'm using the following python version:</p>
<pre><code>>>> import sys
>>> sys.version
'3.5.2 |Anaconda 2.5.0 (64-bit)| (default, Jul 5 2016, 11:41:13)
[MSC v.1900 64 bit (AMD64)]'
</code></pre>
| 1 |
2016-09-16T10:49:02Z
| 39,529,750 |
<p>For matplotlib 1.5.3 (your Python comes from Anaconda, so conda can update it):</p>
<pre><code>conda update matplotlib
</code></pre>
<p>For matplotlib 2.0.0b4, use the wheels provided <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#matplotlib" rel="nofollow">here</a> and
install the downloaded .whl as follows:</p>
<pre><code>pip install some-package.whl
</code></pre>
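<p>The wheel's filename encodes the Python version and platform; for 64-bit CPython 3.5 on Windows it will look something like <code>matplotlib-2.0.0b4-cp35-cp35m-win_amd64.whl</code> (a hypothetical filename; use whatever the site actually provides). If the beta is published on PyPI, <code>pip install --pre matplotlib</code> is an alternative, since <code>--pre</code> lets pip consider pre-release versions.</p>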
| 1 |
2016-09-16T10:54:21Z
|
[
"python",
"python-3.x",
"matplotlib"
] |
Python - Automatically adjust width of an excel file's columns
| 39,529,662 |
<p>Newbie - I have a Python script that adjusts the width of different columns of an excel file, according to the values specified:</p>
<pre><code>import openpyxl
from string import ascii_uppercase
newFile = "D:\Excel Files\abc.xlsx"
wb = openpyxl.load_workbook(filename = newFile)
worksheet = wb.active
for column in ascii_uppercase:
if (column=='A'):
worksheet.column_dimensions[column].width = 30
elif (column=='B'):
worksheet.column_dimensions[column].width = 40
elif (column=='G'):
worksheet.column_dimensions[column].width = 45
else:
worksheet.column_dimensions[column].width = 15
wb.save(newFile)
</code></pre>
<p>Is there any way to adjust the width of every column to its optimum value, without explicitly specifying it for different columns (that is, without using this "<strong>if-elif-elif-......-elif-else</strong>" structure)?
Thanks!</p>
| 0 |
2016-09-16T10:49:19Z
| 39,530,261 |
<p>If possible you should determine the length of the longest entry in each column and use that to set the width.</p>
<p>I'm assuming you can make use of <code>for column in ascii_uppercase</code>.</p>
<p>I'm on mobile at the moment so I can't give a concrete code example, but what I said above should get you closer to what you want to achieve.</p>
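<p>A minimal sketch of that idea (assuming the <code>worksheet</code> from the question and a recent openpyxl, where <code>get_column_letter</code> maps a 1-based index to a letter):</p>
<pre><code>from openpyxl.utils import get_column_letter

for idx, col in enumerate(worksheet.columns, start=1):
    # length of the longest non-empty entry in this column
    lengths = [len(str(cell.value)) for cell in col if cell.value is not None]
    if lengths:
        worksheet.column_dimensions[get_column_letter(idx)].width = max(lengths) + 2
</code></pre>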
| -1 |
2016-09-16T11:22:06Z
|
[
"python",
"excel",
"openpyxl"
] |
Python - Automatically adjust width of an excel file's columns
| 39,529,662 |
<p>Newbie - I have a Python script that adjusts the width of different columns of an excel file, according to the values specified:</p>
<pre><code>import openpyxl
from string import ascii_uppercase
newFile = "D:\Excel Files\abc.xlsx"
wb = openpyxl.load_workbook(filename = newFile)
worksheet = wb.active
for column in ascii_uppercase:
if (column=='A'):
worksheet.column_dimensions[column].width = 30
elif (column=='B'):
worksheet.column_dimensions[column].width = 40
elif (column=='G'):
worksheet.column_dimensions[column].width = 45
else:
worksheet.column_dimensions[column].width = 15
wb.save(newFile)
</code></pre>
<p>Is there any way to adjust the width of every column to its optimum value, without explicitly specifying it for different columns (that is, without using this "<strong>if-elif-elif-......-elif-else</strong>" structure)?
Thanks!</p>
| 0 |
2016-09-16T10:49:19Z
| 39,530,676 |
<pre><code>for col in worksheet.columns:
max_length = 0
column = col[0].column # Get the column name
for cell in col:
try: # Necessary to avoid error on empty cells
if len(str(cell.value)) > max_length:
                max_length = len(str(cell.value))  # str() so numeric cells are measured too
except:
pass
adjusted_width = (max_length + 2) * 1.2
worksheet.column_dimensions[column].width = adjusted_width
</code></pre>
<p>This could probably be made neater but it does the job. You will want to play around with the adjusted_width value according to what looks good for the font you are using when viewing it. If you use a monospaced font you can get it exact, but it's not a one-to-one correlation, so you will still need to adjust it a bit.</p>
<p>If you want to get fancy and exact without monotype you could sort letters by width and assign each width a float value which you then add up. This would require a third loop parsing each character in the cell value and summing up the result for each column and probably a dictionary sorting characters by width, perhaps overkill but cool if you do it. </p>
<p>Edit: Actually there seems to be a better way of measuring visual size of text: <a href="http://stackoverflow.com/a/32565855/6839055">link</a> personally I would prefer the matplotlib technique. </p>
<p>Hope I could be of help, my very first stackoverflow answer =)</p>
| 4 |
2016-09-16T11:43:22Z
|
[
"python",
"excel",
"openpyxl"
] |
Initialization of very big vector in C++
| 39,529,799 |
<p>I created a very big, O(10M)-element floating point list in Python. I would like to use this lookup table in my C++ project. What is the easiest and most efficient way to transfer this array from Python to C++?</p>
<p>My first idea was to generate a C++ function responsible for initializing such a long vector, and then compile it.
The Python code looks like this: </p>
<pre><code>def generate_initLookupTable_function():
numbers_per_row = 100
function_body = """
#include "PatBBDTSeedClassifier.h"
std::vector<double> PatBBDTSeedClassifier::initLookupTable()
{
std::vector<double> indicesVector ={
"""
row_nb = 1
for bin_value in classifier._lookup_table[:,0]:
function_body += "\t" + str(bin_value) +" , "
if (row_nb % numbers_per_row) == 0:
function_body += "\n"
row_nb += 1
function_body += """\n };
return indicesVector;
}
"""
return function_body
</code></pre>
<p>The output file is 500 MB in size, and it is not possible to compile it (compilation terminated due to a gcc crash):</p>
<pre><code>../src/PatBBDTSeedClassifier_lookupTable.cpp
lcg-g++-4.9.3: internal compiler error: Killed (program cc1plus)
0x409edc execute
../../gcc-4.9.3/gcc/gcc.c:2854
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
</code></pre>
<p>Another idea is to store the Python array in a binary file and then read it in C++. But this is tricky: I cannot read it back properly.
I generate the table using this simple command: </p>
<pre><code>file = open("models/BBDT_lookuptable.dat", 'wb')
table = numpy.array(classifier._lookup_table[:,0])
table.tofile(file)
file.close()
</code></pre>
<p>Can you tell me how I can do it? I googled SO and couldn't find a sufficient answer. </p>
<p>Do you have any idea how I can deal with such big arrays? </p>
<p>I should give you a more detailed description of the problem.
I use Python to train an ML (sklearn) classifier and then I would like to deploy it in C++. Due to timing issues (execution speed is a crucial part of my study) I use the idea of <a href="http://arxiv.org/abs/1210.6861" rel="nofollow">bonsai boosted decision trees</a>. In this approach you transfer the BDT into a lookup table. </p>
| 1 |
2016-09-16T10:57:33Z
| 39,530,210 |
<p>As you have noticed, the compiler crashes on such big data arrays.</p>
<p>What you can do besides reading a binary file (since you don't want to do that) is to link with an assembly file. It still makes the executable self-sufficient, and GAS is much more tolerant of big files. Here's an example of an asm file I generated using Python, which assembles fine with classic <code>gcc</code>:</p>
<pre><code>.section .rodata
.globl FT
.globl FT_end
FT:
.byte 0x46,0x54,0x5f,0x43,0x46,0x0,0x0,0x0,0x0,0x0,0x0,0x3,0x43,0x4f,0x4d,0x50
.byte 0x32,0x30,0x31,0x0,0x3,0x88,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0
.byte 0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0
.byte 0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x28,0xe6,0x47,0x6,0x7,0x8,0x28,0x28
.byte 0x26,0x6,0x2a,0x6,0x6,0x40,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0
FT_end:
</code></pre>
<p>That technique allowed me to embed an 80-megabyte binary file (1 million lines of code) in an executable, because I had no filesystem to read data files from in that environment (QEMU).</p>
<p>A real test with the Python code I finally managed to dig out:</p>
<p>Python code:</p>
<pre><code>floats = [0.12,0.45,0.34,4.567,22.7]
import struct
contents = struct.pack('f'*len(floats), *floats)
outbase = "_extref"
output_file = "data.s"
fw = open(output_file,"w")
fw.write(""".section .rodata
.globl {0}
.globl {0}_end
{0}:
""".format(outbase,outbase))
eof = False
current_offset = 0
while not eof:
to_write = []
if current_offset==len(contents):
break
if current_offset<len(contents):
fw.write(".byte ")
for i in range(0,16):
if current_offset<len(contents):
to_write.append(hex(ord(contents[current_offset])))
current_offset+=1
else:
eof = True
break
if len(to_write)>0:
fw.write(",".join(to_write)+"\n")
fw.write(outbase+"_end:\n")
fw.close()
</code></pre>
<p><code>test.cpp</code>: C++ code (C++11; I struggled with the pointer references to the asm part):</p>
<pre><code>#include <iostream>
#include <vector>
#include <strings.h>
extern const float extref;
extern const float extref_end;
int main()
{
int size = (&extref_end - &extref);
std::cout << "nb_elements: " << size << std::endl;
std::vector<float> v(size);
memcpy(&v[0],&extref,sizeof(float)*size);
for (auto it : v)
{
std::cout << it << std::endl;
}
return 0;
}
</code></pre>
<p>Python code generates a <code>data.s</code> file. Create an executable with:</p>
<pre><code>g++ -std=c++11 test.cpp data.s
</code></pre>
<p>run:</p>
<pre><code>nb_elements: 5
0.12
0.45
0.34
4.567
22.7
</code></pre>
<p>The main advantage of this method is that you can define as many symbols as you want, with the format that you want.</p>
| 0 |
2016-09-16T11:19:53Z
|
[
"python",
"c++",
"arrays"
] |
Initialization of very big vector in C++
| 39,529,799 |
<p>I created a very big, O(10M)-element floating point list in Python. I would like to use this lookup table in my C++ project. What is the easiest and most efficient way to transfer this array from Python to C++?</p>
<p>My first idea was to generate a C++ function responsible for initializing such a long vector, and then compile it.
The Python code looks like this: </p>
<pre><code>def generate_initLookupTable_function():
numbers_per_row = 100
function_body = """
#include "PatBBDTSeedClassifier.h"
std::vector<double> PatBBDTSeedClassifier::initLookupTable()
{
std::vector<double> indicesVector ={
"""
row_nb = 1
for bin_value in classifier._lookup_table[:,0]:
function_body += "\t" + str(bin_value) +" , "
if (row_nb % numbers_per_row) == 0:
function_body += "\n"
row_nb += 1
function_body += """\n };
return indicesVector;
}
"""
return function_body
</code></pre>
<p>The output file is 500 MB in size, and it is not possible to compile it (compilation terminated due to a gcc crash):</p>
<pre><code>../src/PatBBDTSeedClassifier_lookupTable.cpp
lcg-g++-4.9.3: internal compiler error: Killed (program cc1plus)
0x409edc execute
../../gcc-4.9.3/gcc/gcc.c:2854
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
</code></pre>
<p>Another idea is to store the Python array in a binary file and then read it in C++. But this is tricky: I cannot read it back properly.
I generate the table using this simple command: </p>
<pre><code>file = open("models/BBDT_lookuptable.dat", 'wb')
table = numpy.array(classifier._lookup_table[:,0])
table.tofile(file)
file.close()
</code></pre>
<p>Can you tell me how I can do it? I googled SO and couldn't find a sufficient answer. </p>
<p>Do you have any idea how I can deal with such big arrays? </p>
<p>I should give you a more detailed description of the problem.
I use Python to train an ML (sklearn) classifier and then I would like to deploy it in C++. Due to timing issues (execution speed is a crucial part of my study) I use the idea of <a href="http://arxiv.org/abs/1210.6861" rel="nofollow">bonsai boosted decision trees</a>. In this approach you transfer the BDT into a lookup table. </p>
| 1 |
2016-09-16T10:57:33Z
| 39,530,626 |
<p>Here's a simple example of how to write Python float data to a binary file, and how to read that data in C. To encode the data, we use the <code>struct</code> module.</p>
<h2>savefloat.py</h2>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
from struct import pack
# The float data to save
table = [i / 16.0 for i in range(32)]
# Dump the table to stdout
for i, v in enumerate(table):
print('%d: %f' % (i, v))
# Save the data to a binary file
fname = 'test.data'
with open(fname, 'wb') as f:
for u in table:
# Pack doubles as little-endian
f.write(pack(b'<d', u))
</code></pre>
<p><strong>output</strong></p>
<pre><code>0: 0.000000
1: 0.062500
2: 0.125000
3: 0.187500
4: 0.250000
5: 0.312500
6: 0.375000
7: 0.437500
8: 0.500000
9: 0.562500
10: 0.625000
11: 0.687500
12: 0.750000
13: 0.812500
14: 0.875000
15: 0.937500
16: 1.000000
17: 1.062500
18: 1.125000
19: 1.187500
20: 1.250000
21: 1.312500
22: 1.375000
23: 1.437500
24: 1.500000
25: 1.562500
26: 1.625000
27: 1.687500
28: 1.750000
29: 1.812500
30: 1.875000
31: 1.937500
</code></pre>
<h2>loadfloat.c</h2>
<pre class="lang-c prettyprint-override"><code>/* Read floats from a binary file & dump to stdout */
#include <stdlib.h>
#include <stdio.h>
#define FILENAME "test.data"
#define DATALEN 32
int main(void)
{
FILE *infile;
double data[DATALEN];
int i, n;
if(!(infile = fopen(FILENAME, "rb")))
exit(EXIT_FAILURE);
n = fread(data, sizeof(double), DATALEN, infile);
fclose(infile);
for(i=0; i<n; i++)
printf("%d: %f\n", i, data[i]);
return 0;
}
</code></pre>
<p>The above C code produces identical output to that shown for <code>savefloat.py</code>. </p>
| 2 |
2016-09-16T11:40:34Z
|
[
"python",
"c++",
"arrays"
] |
Initialization of very big vector in C++
| 39,529,799 |
<p>I created a very big, O(10M)-element floating point list in Python. I would like to use this lookup table in my C++ project. What is the easiest and most efficient way to transfer this array from Python to C++?</p>
<p>My first idea was to generate a C++ function responsible for initializing such a long vector, and then compile it.
The Python code looks like this: </p>
<pre><code>def generate_initLookupTable_function():
numbers_per_row = 100
function_body = """
#include "PatBBDTSeedClassifier.h"
std::vector<double> PatBBDTSeedClassifier::initLookupTable()
{
std::vector<double> indicesVector ={
"""
row_nb = 1
for bin_value in classifier._lookup_table[:,0]:
function_body += "\t" + str(bin_value) +" , "
if (row_nb % numbers_per_row) == 0:
function_body += "\n"
row_nb += 1
function_body += """\n };
return indicesVector;
}
"""
return function_body
</code></pre>
<p>The output file is 500 MB in size, and it is not possible to compile it (compilation terminated due to a gcc crash):</p>
<pre><code>../src/PatBBDTSeedClassifier_lookupTable.cpp
lcg-g++-4.9.3: internal compiler error: Killed (program cc1plus)
0x409edc execute
../../gcc-4.9.3/gcc/gcc.c:2854
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
</code></pre>
<p>Another idea is to store the Python array in a binary file and then read it in C++. But this is tricky: I cannot read it back properly.
I generate the table using this simple command: </p>
<pre><code>file = open("models/BBDT_lookuptable.dat", 'wb')
table = numpy.array(classifier._lookup_table[:,0])
table.tofile(file)
file.close()
</code></pre>
<p>Can you tell me how I can do it? I googled SO and couldn't find a sufficient answer. </p>
<p>Do you have any idea how I can deal with such big arrays? </p>
<p>I should give you a more detailed description of the problem.
I use Python to train an ML (sklearn) classifier and then I would like to deploy it in C++. Due to timing issues (execution speed is a crucial part of my study) I use the idea of <a href="http://arxiv.org/abs/1210.6861" rel="nofollow">bonsai boosted decision trees</a>. In this approach you transfer the BDT into a lookup table. </p>
| 1 |
2016-09-16T10:57:33Z
| 39,531,749 |
<p>If you're using GNU tools it is rather easy to use <code>objcopy</code> directly to achieve what Jean-Francois suggested; combining it with PM2Ring's Python script, which writes a binary array, you can execute:</p>
<pre><code>objcopy -I binary test.data -B i386:x86-64 -O elf64-x86-64 testdata.o
</code></pre>
<p>(depending on your actual processor architecture, you might need to adjust). The command will create a new object named <code>testdata.o</code> with the following symbols:</p>
<pre><code>0000000000000100 D _binary_test_data_end
0000000000000100 A _binary_test_data_size
0000000000000000 D _binary_test_data_start
</code></pre>
<p>All these symbols will be visible as symbols with C linkage in the linked program. The <code>size</code> is not usable as such (it will be converted to an address as well), but the <code>*start</code> and <code>*end</code> can be used. Here is a minimal C++ program:</p>
<pre><code>#include <iostream>
extern "C" double _binary_test_data_start[];
extern "C" double _binary_test_data_end[0];
int main(void) {
double *d = _binary_test_data_start;
const double *end = _binary_test_data_end;
std::cout << (end - d) << " doubles in total" << std::endl;
while (d < end) {
std::cout << *d++ << std::endl;
}
}
</code></pre>
<p>The <code>_binary_test_data_end</code> will actually be just past the last element in the array <code>_binary_test_data_start</code>.</p>
<p>Compile + link this program with <code>g++ test.cc testdata.o</code> (using the testdata.o from objcopy above), then run the resulting <code>a.out</code>.</p>
<p>Output (<code>cout</code> by default seems to truncate the decimals awkwardly): </p>
<pre><code>% ./a.out
32 doubles in total
0
0.0625
0.125
0.1875
0.25
0.3125
0.375
0.4375
0.5
0.5625
0.625
0.6875
0.75
0.8125
0.875
0.9375
1
1.0625
1.125
1.1875
1.25
1.3125
1.375
1.4375
1.5
1.5625
1.625
1.6875
1.75
1.8125
1.875
1.9375
</code></pre>
<hr>
<p>You can also assign these values into a vector very easily; <code>std::vector<double></code> accepts 2 iterators, where first points to the first element, and second to just one after; you can use the arrays here as they decay into pointers, and pointers can be used as iterators:</p>
<pre><code>std::vector<double> vec(_binary_test_data_start, _binary_test_data_end);
</code></pre>
<p>However, for big arrays, this is just needless copying. Also, using just the C array has the added benefit that it is <em>lazily loaded</em>; ELF executables are not read into memory, but they're paged in as needed; the binary array is loaded from file into RAM only as it is accessed.</p>
| 4 |
2016-09-16T12:38:02Z
|
[
"python",
"c++",
"arrays"
] |
Select k random rows from postgres django ORM
| 39,529,824 |
<p>We have a requirement to select k random rows from a database.
Our initial thought was this:</p>
<pre><code>table.objects.filter(..).order_by('?')[:k]
</code></pre>
<p>but then we read on the internet that this is a highly inefficient solution, so we came up with this (not so innovative):</p>
<pre><code>random.sample(table.objects.filter(..), k)
</code></pre>
<p>But this seems to be more slower than previous.</p>
<p>So, we want to know the correct approach for selecting exactly <em>k</em> rows from the database, which in our case is Postgres.</p>
| 0 |
2016-09-16T10:58:48Z
| 39,530,347 |
<p>As Daniel Roseman mentioned in a comment, the reason why</p>
<pre><code>random.sample(table.objects.filter(..), k)
</code></pre>
<p>is slow, is because you'd have to fetch <em>all</em> objects, then find <code>k</code> out of that query set.</p>
<p>I have encountered exactly the same type of problem and the way we solved this was to</p>
<ul>
<li>Find <code>max_id</code> in the table.</li>
<li>Pick <code>k</code> numbers from the set of <code>1, ..., max_id</code></li>
<li>Do <code>table.objects.filter(id__in=set_of_ks)</code></li>
</ul>
<p>This of course assumes that there are no "holes" in the set of table IDs.</p>
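<p>A sketch of those steps with the Django ORM (assuming a non-empty table with an integer primary key named <code>id</code>):</p>
<pre><code>import random
from django.db.models import Max

max_id = table.objects.aggregate(m=Max('id'))['m']
set_of_ks = random.sample(range(1, max_id + 1), k)
rows = table.objects.filter(id__in=set_of_ks)
</code></pre>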
| 0 |
2016-09-16T11:26:43Z
|
[
"python",
"django",
"postgresql",
"orm"
] |
Select k random rows from postgres django ORM
| 39,529,824 |
<p>We have a requirement to select k random rows from a database.
Our initial thought was this:</p>
<pre><code>table.objects.filter(..).order_by('?')[:k]
</code></pre>
<p>but then we read on the internet that this is a highly inefficient solution, so we came up with this (not so innovative):</p>
<pre><code>random.sample(table.objects.filter(..), k)
</code></pre>
<p>But this seems to be more slower than previous.</p>
<p>So, we want to know the correct approach for selecting exactly <em>k</em> rows from the database, which in our case is Postgres.</p>
| 0 |
2016-09-16T10:58:48Z
| 39,532,581 |
<p>Well, if you know roughly the size of your table you can use the new <a href="https://www.postgresql.org/docs/current/static/sql-select.html#SQL-FROM" rel="nofollow">TABLESAMPLE</a> clause to select a percentage of the rows at random. Then, you can always LIMIT it afterwards.</p>
<p>A short blog article covering it <a href="http://blog.2ndquadrant.com/tablesample-in-postgresql-9-5-2/" rel="nofollow">here</a>.</p>
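<p>A sketch of using it from Django via a raw query (table name and sample percentage are placeholders):</p>
<pre><code>rows = table.objects.raw(
    'SELECT * FROM myapp_table TABLESAMPLE SYSTEM (1) LIMIT %s', [k]
)
</code></pre>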
| 0 |
2016-09-16T13:21:53Z
|
[
"python",
"django",
"postgresql",
"orm"
] |
Can we check multiple variables against same expression in python
| 39,529,890 |
<p>I know we can check a variable against multiple conditions as </p>
<pre><code>if all(x >= 2 for x in (A, B, C, D)):
print A, B, C, D
</code></pre>
<p>My question is, can we do the reverse?
Can we check one or two variables against the same conditions (one or two)?</p>
<pre><code>null_check = (None,'','None')
if variable1 not in null_check and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>can we rewrite the above code as</p>
<pre><code>if variable1 and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>If yes, which one is a better practice ? </p>
<p>Thanks in advance :)</p>
| 0 |
2016-09-16T11:02:18Z
| 39,529,962 |
<p>You can put the variables in a <code>list</code> or <code>tuple</code>, then use the same idea using <code>all</code> to check that none of them are in your <code>tuple</code>.</p>
<pre><code>if all(var not in null_check for var in (variable1, variable2)):
print (variable1, variable2)
</code></pre>
| 3 |
2016-09-16T11:05:44Z
|
[
"python"
] |
Can we check multiple variables against same expression in python
| 39,529,890 |
<p>I know we can check a variable against multiple conditions as </p>
<pre><code>if all(x >= 2 for x in (A, B, C, D)):
print A, B, C, D
</code></pre>
<p>My question is, can we do the reverse?
Can we check one or two variables against the same conditions (one or two)?</p>
<pre><code>null_check = (None,'','None')
if variable1 not in null_check and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>can we rewrite the above code as</p>
<pre><code>if variable1 and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>If yes, which one is a better practice ? </p>
<p>Thanks in advance :)</p>
| 0 |
2016-09-16T11:02:18Z
| 39,529,968 |
<p>You can do this very similarly to your first code block:</p>
<pre><code>null_check = (None,'','None')
if all(variable not in null_check for variable in (variable1, variable2)):
print (variable1, variable2)
</code></pre>
<p>Or:</p>
<pre><code>null_check = (None,'','None')
variables = variable1, variable2 # defined elsewhere
if all(variable not in null_check for variable in variables):
print (*variables)
</code></pre>
| 1 |
2016-09-16T11:05:59Z
|
[
"python"
] |
Can we check multiple variables against same expression in python
| 39,529,890 |
<p>I know we can check a variable against multiple conditions as </p>
<pre><code>if all(x >= 2 for x in (A, B, C, D)):
print A, B, C, D
</code></pre>
<p>My question is, can we do the reverse?
Can we check one or two variables against the same conditions (one or two)?</p>
<pre><code>null_check = (None,'','None')
if variable1 not in null_check and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>can we rewrite the above code as</p>
<pre><code>if variable1 and variable2 not in null_check:
print (variable1, variable2)
</code></pre>
<p>If yes, which one is a better practice ? </p>
<p>Thanks in advance :)</p>
| 0 |
2016-09-16T11:02:18Z
| 39,530,014 |
<p>No, you can't do that, but as a Pythonic approach you can put your <code>null_check</code> items in a <code>set</code> and check the intersection:</p>
<pre><code>null_check = {None,'','None'}
if null_check.intersection({var1, var2}): # instead of `or` or `any()` function
# pass
if len(null_check.intersection({var1, var2})) == 2: # instead of `and` or `all()` function
# pass
</code></pre>
| 1 |
2016-09-16T11:08:15Z
|
[
"python"
] |
Plotting histograms in Python using pandas
| 39,529,941 |
<p>I'm trying to create a histogram with two data sets overlaying each other; however, whenever I plot it using pandas.DataFrame.hist(), it creates two graphs:</p>
<p><img src="http://i.stack.imgur.com/ldlcs.png" alt=""></p>
<p>The code is simply:</p>
<pre><code>ratios.hist(bins = 100)
plt.show()
</code></pre>
<p>where ratios is just a DataFrame, 2 columns by about 7000 rows. Any idea on how to put the two graphs on the same axis?</p>
| 0 |
2016-09-16T11:04:57Z
| 39,530,135 |
<p>Try <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#histograms" rel="nofollow">plot.hist</a> instead. First, the behaviour you are seeing with <code>hist</code>:</p>
<pre><code>import numpy as np
import pandas as pd
ratios = pd.DataFrame(np.random.normal((1, 2), size=(100, 2)))
ratios.hist(bins=10)
</code></pre>
<p>This generates:</p>
<p><a href="http://i.stack.imgur.com/fyop0.png" rel="nofollow"><img src="http://i.stack.imgur.com/fyop0.png" alt="enter image description here"></a></p>
<pre><code>ratios.plot.hist(alpha=0.5, bins=10)
</code></pre>
<p>This, on the other hand, puts them on the same graph:</p>
<p><a href="http://i.stack.imgur.com/e6ElX.png" rel="nofollow"><img src="http://i.stack.imgur.com/e6ElX.png" alt="enter image description here"></a></p>
| 1 |
2016-09-16T11:14:49Z
|
[
"python",
"pandas",
"histogram"
] |