title (stringlengths 10 to 172) | question_id (int64 469 to 40.1M) | question_body (stringlengths 22 to 48.2k) | question_score (int64 -44 to 5.52k) | question_date (stringlengths 20) | answer_id (int64 497 to 40.1M) | answer_body (stringlengths 18 to 33.9k) | answer_score (int64 -38 to 8.38k) | answer_date (stringlengths 20) | tags (list)
---|---|---|---|---|---|---|---|---|---|
How to identify a string as being a byte literal? | 39,778,978 | <p>In Python 3, if I have a string such that:</p>
<pre><code>print(some_str)
</code></pre>
<p>yields something like this:</p>
<pre><code>b'This is the content of my string.\r\n'
</code></pre>
<p>I know it's a byte literal. </p>
<p>Is there a function that can be used to determine if that string is in byte literal format (versus having, say, the Unicode <code>'u'</code> prefix) without first interpreting? Or is there another best practice for handling this? I have a situation wherein getting a byte literal string needs to be dealt with differently than if it's in Unicode. In theory, something like this:</p>
<pre><code>if is_byte_literal(some_str):
    // handle byte literal case
else:
    // handle unicode case
</code></pre>
| 7 | 2016-09-29T19:59:29Z | 39,779,592 | <p>Just to complement the other answer, the built-in <a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow"><code>type</code></a> also gives you this information. You can use it with <code>is</code> and the corresponding type to check accordingly.</p>
<p>For example, in Python 3:</p>
<pre><code>a = 'foo'
print(type(a) is str) # prints `True`
a = b'foo'
print(type(a) is bytes) # prints `True` as well
</code></pre>
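<p>For comparison, a minimal sketch of the same check written with <code>isinstance</code>, which also accepts subclasses of <code>bytes</code>:</p>
<pre><code>def looks_like_bytes(value):
    # True for bytes; add bytearray to the tuple if that should count too
    return isinstance(value, bytes)

print(looks_like_bytes(b'foo'))  # True
print(looks_like_bytes('foo'))   # False
</code></pre>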
| 4 | 2016-09-29T20:39:52Z | [
"python",
"string",
"python-3.x"
]
|
Pandas scatter_matrix analog function to pairs(lower.panel, upper.panel) | 39,778,987 | <p>I need to create a scatter matrix in Python. I tried using scatter_matrix for this but I would like to leave only the scatter plots above the diagonal line.</p>
<p>I'm just at the beginning (I have not gotten far) and I have trouble when the columns have names (not the default numbers).</p>
<p>Here is my code:</p>
<pre><code>import itertools
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data=pd.DataFrame(np.random.randint(0,100,size=(10, 5)), columns=list('ABCDE')) #THE PROBLEM IS HERE - I WILL HAVE COLUMNS WITH NAMES
d = data.shape[1]
fig, axes = plt.subplots(nrows=d, ncols=d, sharex=True, sharey=True)
for i in range(d):
    for j in range(d):
        ax = axes[i,j]
        if i == j:
            ax.text(0.5, 0.5, "Diagonal", transform=ax.transAxes,
                    horizontalalignment='center', verticalalignment='center',
                    fontsize=16)
        else:
            ax.scatter(data[j], data[i], s=10)
</code></pre>
| 1 | 2016-09-29T20:00:22Z | 39,779,214 | <p>You have an issue when selecting a column from a data frame. You can use <code>iloc</code> to select columns based on integer location. Change your last line to:</p>
<pre><code>ax.scatter(data.iloc[:,j], data.iloc[:,i], s=10)
</code></pre>
<p>Gives:</p>
<p><a href="http://i.stack.imgur.com/gxNuV.png" rel="nofollow"><img src="http://i.stack.imgur.com/gxNuV.png" alt="enter image description here"></a></p>
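<p>A minimal sketch of why the original line fails once the columns are named: <code>data[j]</code> does label-based lookup, so an integer <code>j</code> raises a <code>KeyError</code>, while <code>iloc</code> is purely positional:</p>
<pre><code>import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.randint(0, 100, size=(10, 5)), columns=list('ABCDE'))
# data[0] raises KeyError because the columns are labelled 'A'..'E', not 0..4
first_by_position = data.iloc[:, 0]   # positional selection
first_by_label = data['A']            # label-based selection
print(first_by_position.equals(first_by_label))  # True
</code></pre>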
| 1 | 2016-09-29T20:14:23Z | [
"python",
"pandas",
"scatter-plot",
"subplot"
]
|
sklearn fit_predict not accepting a 2 dimensional numpy array | 39,779,113 | <p>I am trying to perform some clustering analysis using three different clustering algorithms. I am loading in data from stdin as follows:</p>
<pre><code>import sklearn.cluster as cluster
X = []
for line in sys.stdin:
    x1, x2 = line.strip().split()
    X.append([float(x1), float(x2)])
X = numpy.array(X)
</code></pre>
<p>and then storing my clustering parameters and types in an array as such</p>
<pre><code>clustering_configs = [
### K-Means
['KMeans', {'n_clusters' : 5}],
### Ward
['AgglomerativeClustering', {
'n_clusters' : 5,
'linkage' : 'ward'
}],
### DBSCAN
['DBSCAN', {'eps' : 0.15}]
]
</code></pre>
<p>And I am trying to call them in a for loop</p>
<pre><code>for alg_name, alg_params in clustering_configs:
    class_ = getattr(cluster, alg_name)
    instance_ = class_(alg_params)
    instance_.fit_predict(X)
</code></pre>
<p>Everything is working correctly except for the <code>instance_.fit_predict(X)</code> call. I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "meta_cluster.py", line 47, in <module>
instance_.fit_predict(X)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/cluster/k_means_.py", line 830, in fit_predict
return self.fit(X).labels_
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/cluster/k_means_.py", line 812, in fit
X = self._check_fit_data(X)
File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.17.1-py2.7-linux-x86_64.egg/sklearn/cluster/k_means_.py", line 789, in _check_fit_data
X.shape[0], self.n_clusters))
TypeError: %d format: a number is required, not dict
</code></pre>
<p>Anyone have a clue where I could be going wrong? I read the sklearn docs <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.fit_predict" rel="nofollow">here</a> and it claims you just need an <code>array-like or sparse matrix, shape=(n_samples, n_features)</code> which I believe I have.</p>
<p>Any suggestions? Thanks!</p>
| 2 | 2016-09-29T20:08:36Z | 39,779,313 | <pre><code> class sklearn.cluster.KMeans(n_clusters=8, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='auto', verbose=0, random_state=None, copy_x=True, n_jobs=1, algorithm='auto')[source]
</code></pre>
<p>The way you'd call the KMeans class is:</p>
<pre><code>KMeans(n_clusters=5)
</code></pre>
<p>With your current code you are calling</p>
<pre><code>KMeans({'n_clusters': 5})
</code></pre>
<p>which is causing alg_params to be passed as a Dict instead of a class parameter. Same goes for the other algorithms.</p>
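<p>A minimal sketch of the corresponding fix, using <code>**</code> to unpack the parameter dict into keyword arguments (variable names taken from the question):</p>
<pre><code>for alg_name, alg_params in clustering_configs:
    class_ = getattr(cluster, alg_name)
    instance_ = class_(**alg_params)   # e.g. KMeans(n_clusters=5), DBSCAN(eps=0.15)
    labels = instance_.fit_predict(X)
</code></pre>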
| 2 | 2016-09-29T20:21:47Z | [
"python",
"numpy",
"scikit-learn"
]
|
Python : split text to list of lines | 39,779,316 | <p>I am new to Python, but I have a text file like:</p>
<pre><code>12345 | 6789 | abcd | efgh
</code></pre>
<p>I want my output to be like:</p>
<pre><code>12345
6789
abcd
efgh
</code></pre>
<p>=====================</p>
<p>I really don't know how to write this script.
I have made a lot of attempts with functions like split(), strip(), and so on,</p>
<p>but I failed to do it,
so I am asking for help if someone can.</p>
<p>I will appreciate any help.</p>
<pre><code>with open('contacts_index1.txt') as f:
    lines = f.read().splitlines("|")
</code></pre>
| 2 | 2016-09-29T20:22:01Z | 39,779,485 | <p>Some problems with the code you posted:</p>
<ul>
<li><code>f.read</code> doesn't read the whole line. It should be <code>f.readline()</code>. </li>
<li>What is the function <code>splitlines</code>?</li>
</ul>
<p>Your question is pretty unclear in differnt aspects. Maybe this snippet could be of some help:</p>
<pre><code>for line in open('contacts_index1.txt'):
    elements = line.split('|')
    for element in elements:
        print element.strip()
</code></pre>
<p>Edited: I didn't know the function <code>splitlines</code>. Just looked it up. The way you used it in your code doesn't seem to be correct anyway.</p>
| 0 | 2016-09-29T20:32:24Z | [
"python",
"text",
"split",
"lines"
]
|
Python : split text to list of lines | 39,779,316 | <p>I am new to Python, but I have a text file like:</p>
<pre><code>12345 | 6789 | abcd | efgh
</code></pre>
<p>I want my output to be like:</p>
<pre><code>12345
6789
abcd
efgh
</code></pre>
<p>=====================</p>
<p>I really don't know how to write this script.
I have made a lot of attempts with functions like split(), strip(), and so on,</p>
<p>but I failed to do it,
so I am asking for help if someone can.</p>
<p>I will appreciate any help.</p>
<pre><code>with open('contacts_index1.txt') as f:
    lines = f.read().splitlines("|")
</code></pre>
| 2 | 2016-09-29T20:22:01Z | 39,779,512 | <p>I strongly suggest using csv module for this kind of task, as it seems like a csv-type file, using '|' as delimiter:</p>
<pre><code>import csv
with open('contacts_index1.txt','r') as f:
    reader = csv.reader(f, delimiter='|')
    for row in reader:
        # do things with each line
        print "\n".join(row)
</code></pre>
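<p>One extra detail worth illustrating, since the sample line has spaces around the <code>|</code> delimiter: each parsed field keeps that whitespace unless it is stripped, for example:</p>
<pre><code>import csv

with open('contacts_index1.txt', 'r') as f:
    reader = csv.reader(f, delimiter='|')
    for row in reader:
        # strip the surrounding spaces from every field before printing
        print "\n".join(field.strip() for field in row)
</code></pre>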
| 0 | 2016-09-29T20:34:05Z | [
"python",
"text",
"split",
"lines"
]
|
Python : split text to list of lines | 39,779,316 | <p>I am new to Python, but I have a text file like:</p>
<pre><code>12345 | 6789 | abcd | efgh
</code></pre>
<p>I want my output to be like:</p>
<pre><code>12345
6789
abcd
efgh
</code></pre>
<p>=====================</p>
<p>I really don't know how to write this script.
I have made a lot of attempts with functions like split(), strip(), and so on,</p>
<p>but I failed to do it,
so I am asking for help if someone can.</p>
<p>I will appreciate any help.</p>
<pre><code>with open('contacts_index1.txt') as f:
    lines = f.read().splitlines("|")
</code></pre>
| 2 | 2016-09-29T20:22:01Z | 39,783,096 | <p>From all of your comments, it looks like the issue has to do with the actual text in the file, and not the ability to parse it. It looks like everyone's solution in here is on the right track, you just need to force the encoding.</p>
<p>The error you are describing is described <a href="http://stackoverflow.com/questions/9233027/unicodedecodeerror-charmap-codec-cant-decode-byte-x-in-position-y-character">in this other StackOverflow post</a>.</p>
<pre><code>with open('contacts_index1.txt', 'r') as f:
    lines = f.read().encode("utf-8").replace("|", "\n")
</code></pre>
<p>EDIT: The issue appears to be a nasty character that wasn't properly decoding. With <code>open</code> you can tell it to ignore characters it can't decode.</p>
<pre><code>import io
with io.open("contacts_index1.txt", errors="ignore") as f:
    lines = f.read().replace("|", "\n")
</code></pre>
| 0 | 2016-09-30T03:16:06Z | [
"python",
"text",
"split",
"lines"
]
|
Python : split text to list of lines | 39,779,316 | <p>I am new to Python, but I have a text file like:</p>
<pre><code>12345 | 6789 | abcd | efgh
</code></pre>
<p>I want my output to be like:</p>
<pre><code>12345
6789
abcd
efgh
</code></pre>
<p>=====================</p>
<p>I really don't know how to write this script.
I have made a lot of attempts with functions like split(), strip(), and so on,</p>
<p>but I failed to do it,
so I am asking for help if someone can.</p>
<p>I will appreciate any help.</p>
<pre><code>with open('contacts_index1.txt') as f:
    lines = f.read().splitlines("|")
</code></pre>
| 2 | 2016-09-29T20:22:01Z | 39,783,129 | <p>You will have to use decode. The following code will work:</p>
<pre><code>def dataFunction(filename):
    with open(filename, encoding="utf8") as f:
        return f.read()
</code></pre>
<p>Call this function with filename as parameter:</p>
<pre><code>Contents = dataFunction(filename)
elements = Contents.split("|")
for element in elements:
    print(element)
</code></pre>
| 1 | 2016-09-30T03:19:11Z | [
"python",
"text",
"split",
"lines"
]
|
Python : split text to list of lines | 39,779,316 | <p>I am new to Python, but I have a text file like:</p>
<pre><code>12345 | 6789 | abcd | efgh
</code></pre>
<p>I want my output to be like:</p>
<pre><code>12345
6789
abcd
efgh
</code></pre>
<p>=====================</p>
<p>I really don't know how to write this script.
I have made a lot of attempts with functions like split(), strip(), and so on,</p>
<p>but I failed to do it,
so I am asking for help if someone can.</p>
<p>I will appreciate any help.</p>
<pre><code>with open('contacts_index1.txt') as f:
    lines = f.read().splitlines("|")
</code></pre>
| 2 | 2016-09-29T20:22:01Z | 39,783,414 | <p>Please do this line by line. There is no need to read the entire file at once.</p>
<p>Something like:</p>
<pre><code>with open(file_name) as f_in:
    for line in f_in:
        for word in line.split('|'):
            print word.strip()
</code></pre>
<hr>
<p>If it is a unicode issue, most of the time it is automatic:</p>
<pre><code>$ cat /tmp/so.txt
12345 | 6789 | abcd | éfgh
</code></pre>
<p>(note the <code>é</code> in the file)</p>
<p>The program above works. If it does NOT work, use a codec:</p>
<pre><code>with open(fn) as f_in:
    for line in f_in:
        line = line.decode('utf-8')  # or whatever codec is used for that file...
        for word in line.split('|'):
            print word.strip()
</code></pre>
<hr>
<p>With Python3, just set the encoding when you open the file:</p>
<pre><code>with open(fn, encoding='utf-8') as f_in: # <= replace with the encoding of the file...
    for line in f_in:
        for word in line.split('|'):
            print(word.strip())
</code></pre>
| 0 | 2016-09-30T03:53:23Z | [
"python",
"text",
"split",
"lines"
]
|
how to export data to unix system location using python | 39,779,412 | <p>I am trying to write the file to my company's project folder, which is on a Unix system, and the location is /department/projects/data/. So I used the following code:</p>
<p>df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0)</p>
<p>The error message shows it cannot find the location. How do I specify the file location on Unix using Python?</p>
| 0 | 2016-09-29T20:27:59Z | 39,780,239 | <pre><code>df.to_csv("C:\Users\abc\Desktop\Test.txt", sep='\t', header = 0)
</code></pre>
<p>Either you really mean that the backslashes are doubled as in</p>
<pre><code>df.to_csv("C:\\Users\\abc....
</code></pre>
<p>or your strings are just wrong. I believe that Python will support Unix style paths on both Windows and Unix - use that style.</p>
<p>Replace your \ characters with / and dump the "C:".</p>
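<p>For illustration, a small sketch of the same idea (using the <code>df</code> from the question): forward slashes, or a raw string if a Windows path is unavoidable, sidestep the backslash-escape problem:</p>
<pre><code># Forward slashes work for pandas on both Windows and Unix
df.to_csv("/department/projects/data/Test.txt", sep='\t', header=0)

# If a Windows path is really needed, a raw string keeps the backslashes literal
df.to_csv(r"C:\Users\abc\Desktop\Test.txt", sep='\t', header=0)
</code></pre>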
| 0 | 2016-09-29T21:22:53Z | [
"python",
"unix"
]
|
how to export data to unix system location using python | 39,779,412 | <p>I am trying to write the file to my company's project folder, which is on a Unix system, and the location is /department/projects/data/. So I used the following code:</p>
<p>df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0)</p>
<p>The error message shows it cannot find the location. How do I specify the file location on Unix using Python?</p>
| 0 | 2016-10-02T04:13:43Z | 39,813,792 | <p>I have found the solution. It might be because I am using Spyder from Anaconda. As long as I use "/" instead of "\", Python can recognize the location.</p>
| 0 | 2016-10-02T04:13:43Z | [
"python",
"unix"
]
|
What is a pythonic way to select data from one of two locations? | 39,779,435 | <p>I am making a backwards incompatible change to an api endpoint in a different app that I call from a client app (where this code lives). I need to for a time support it handling both the previous case (where the data lived at the "ledger" level") and the new case (where the data lived at the "profile" ledger).</p>
<p>The code below works to grab it from either place but I have a nagging feeling there must be a more pythonic way to do this. Any ideas?</p>
<pre><code>class Profile(object):
@property
def account_owner(self):
owner_data_from_ledger = self.account.ledger.data.get('owner', None)
owner_data_from_profile = self.data.get('owner', None)
owner_data = owner_data_from_ledger if owner_data_from_ledger else owner_data_from_profile
if owner_data:
return Human(owner_data)
return None
</code></pre>
| 1 | 2016-09-29T20:29:24Z | 39,779,689 | <p>Instead of</p>
<pre><code>owner_data = owner_data_from_ledger if owner_data_from_ledger else owner_data_from_profile
</code></pre>
<p>you can write this which is equivalent:</p>
<pre><code>owner_data = owner_data_from_ledger or owner_data_from_profile
</code></pre>
<p>Alternatively shorten the whole thing:</p>
<pre><code>owner_data_ = self.account.ledger.data.get('owner',
self.data.get('owner', None))
</code></pre>
<p>You can also leave out the <code>None</code> at the end above since that's the default value of that argument.</p>
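<p>One caveat worth illustrating: <code>or</code> falls through on <em>any</em> falsy value, not just <code>None</code>, so an empty dict from the ledger would also be skipped:</p>
<pre><code>ledger_owner = {}                 # present but empty -> falsy
profile_owner = {'name': 'Ada'}

print(ledger_owner or profile_owner)  # {'name': 'Ada'} -- the empty ledger value is skipped

# If only None should trigger the fallback, be explicit:
print(ledger_owner if ledger_owner is not None else profile_owner)  # {}
</code></pre>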
| 0 | 2016-09-29T20:45:37Z | [
"python",
"styles",
"readability"
]
|
Transform dna alignment into numpy array using biopython | 39,779,488 | <p>I have several DNA sequences that have been aligned and I would like to keep only the bases that are variable at a specific position. </p>
<p>This maybe could be done if we first transform the alignment into an array. I tried using the code in the Biopython tutorial but it gives an error.</p>
<pre><code>import numpy as np
from Bio import AlignIO
alignment = AlignIO.parse("ma-all-mito.fa", "fasta")
align_array = np.array([list(rec) for rec in alignment], np.character)
print("Array shape %i by %i" % align_array.shape)
</code></pre>
<p>The error I get:</p>
<pre><code>Traceback (most recent call last):
File "C:/select-snps.py", line 8, in <module>
print("Array shape %i by %i" % align_array.shape)
TypeError: not all arguments converted during string formatting
</code></pre>
| 3 | 2016-09-29T20:32:35Z | 39,779,933 | <p><code>AlignIO</code> doesn't seem to be the tool you want for this job. You have a file presumably with many sequences, not with many multiple sequence alignments, so you probably want to use <code>SeqIO</code>, not <code>AlignIO</code> (<a href="http://biopython.org/wiki/AlignIO" rel="nofollow">source</a>). This is why the shape of your array is (1, 99, 16926), because you have 1 alignment of 99 sequences of length 16926.</p>
<p>If you just want an array of the sequences (which it appears you do from the <code>np.character</code> dtype supplied to <code>np.array</code>), then do the following:</p>
<pre><code>import numpy as np
from Bio import SeqIO
records = SeqIO.parse("ma-all-mito.fa", "fasta")
align_array = np.array([record.seq for record in records], np.character)
print("Array shape %i by %i" % align_array.shape)
# expect to be (99, 16926)
</code></pre>
<p>Note above that technically each element of <code>records</code> is also a BioPython <code>SeqRecord</code> which includes the sequence in addition to metadata. <code>list(record)</code> is a shortcut for getting the sequence, the other way being <code>record.seq</code>. Either should work, but I chose using the attribute way since it is more explicit.</p>
| 1 | 2016-09-29T20:59:17Z | [
"python",
"numpy",
"biopython"
]
|
Transform dna alignment into numpy array using biopython | 39,779,488 | <p>I have several DNA sequences that have been aligned and I would like to keep only the bases that are variable at a specific position. </p>
<p>This maybe could be done if we first transform the alignment into an array. I tried using the code in the Biopython tutorial but it gives an error.</p>
<pre><code>import numpy as np
from Bio import AlignIO
alignment = AlignIO.parse("ma-all-mito.fa", "fasta")
align_array = np.array([list(rec) for rec in alignment], np.character)
print("Array shape %i by %i" % align_array.shape)
</code></pre>
<p>The error I get:</p>
<pre><code>Traceback (most recent call last):
File "C:/select-snps.py", line 8, in <module>
print("Array shape %i by %i" % align_array.shape)
TypeError: not all arguments converted during string formatting
</code></pre>
| 3 | 2016-09-29T20:32:35Z | 39,784,698 | <p>I'm answering to your problem instead of fixing your code. If you want to keep only certain positions, you want to use <code>AlignIO</code>:</p>
<p>FASTA sample <code>al.fas</code>:</p>
<pre><code>>seq1
CATCGATCAGCATCGACATGCGGCA-ACG
>seq2
CATCGATCAG---CGACATGCGGCATACG
>seq3
CATC-ATCAGCATCGACATGCGGCATACG
>seq4
CATCGATCAGCATCGACAAACGGCATACG
</code></pre>
<p>Now suppose you want to keep only certain positions. <a href="http://biopython.org/DIST/docs/api/Bio.Align.MultipleSeqAlignment-class.html" rel="nofollow" title="MultipleSeqAlignment">MultipleSeqAlignment</a> allows you to <em>query</em> the alignment like a numpy array:</p>
<pre><code>from Bio import AlignIO
al = AlignIO.read("al.fas", "fasta")
# Print the 11th column
print(al[:, 10])
# Print the 12-15 columns
print(al[:, 11:14])
</code></pre>
<p>If you want to know the shape of the alignment, use <code>len</code> and <code>get_alignment_length</code>:</p>
<pre><code>>>> print(len(al), al.get_alignment_length())
4 29
</code></pre>
<hr>
<p>When you use <code>AlignIO.parse()</code> to load an alignment, it assumes the file to be parsed could contain more than one alignment (PHYLIP does this). Thus the parser returns an iterator over each alignment and not over records as your code implies. But your FASTA file only contains one alignment per file and <code>parse()</code> yields only one <code>MultipleSeqAlignment</code>. So the fix to your code is:</p>
<pre><code>alignment = AlignIO.read("ma-all-mito.fa", "fasta")
align_array = np.array(alignment, np.character)
print("Array shape %i by %i" % align_array.shape)
</code></pre>
| 1 | 2016-09-30T06:01:42Z | [
"python",
"numpy",
"biopython"
]
|
Programmatic distinction between ternary then function call versus function call dependent on condition | 39,779,501 | <p>Recently I have been conflicted regarding the following question. It may just be a stylistic choice, but I was wondering if there is a programmatic difference between the following... (in python, but applicable for most languages)</p>
<p>Case #1:</p>
<pre><code>arg = A if condition else B
result = func(arg)
</code></pre>
<p>Case #2:</p>
<pre><code>if condition:
result = func(A)
else:
result = func(B)
</code></pre>
<p>Is there an industry standard for choosing between these two? Is there a programmatic difference?</p>
| 0 | 2016-09-29T20:33:17Z | 39,779,808 | <p>My opinion is in Case #1 you'd better be sure that you're checking for some kind of binary condition, e.g. an integer is either odd or even. In thoses cases I'd prefer Case #1 for the sake of simplicity.</p>
<p>If the expression in the condition is not binary, such as checking the remainder of an integer module n, with n > 2, you make need to use a nested ternary expression, and the code gets hard to read quickly. In theses cases, Case #2 would be better.</p>
| 0 | 2016-09-29T20:52:05Z | [
"python",
"function",
"ternary-operator"
]
|
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | <p>I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:</p>
<pre><code>class SomethingRecord(Record):
    description = 'This records something'
    author = 'john smith'
</code></pre>
<p>I use <code>ast</code> to locate the <code>description</code> line number, and I use some code to change the original file with a new description string based on the line number. So far so good.</p>
<p>Now the only issue is <code>description</code> occasionally is a multi-line string, e.g.</p>
<pre><code> description = ('line 1'
'line 2'
'line 3')
</code></pre>
<p>or</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>and I only have the line number of the first line, not the following lines. So my one-line replacer would do</p>
<pre><code> description = 'new value'
'line 2' \
'line 3'
</code></pre>
<p>and the code is broken. I figured that if I know both the lineno of start and end/number of lines of <code>description</code> assignment I could repair my code to handle such situation. How do I get such information with Python standard library?</p>
| 35 | 2016-09-29T20:35:58Z | 39,811,752 | <p><a href="https://docs.python.org/3.5/library/ast.html" rel="nofollow">ast</a> only stores the beginning line number for each node.</p>
<p>If your scripts always have another assignment (to <code>author</code>, from your example) after the <code>description</code> assignment, then you could do something like this ...</p>
<ul>
<li>Build a list of line numbers with assignments</li>
<li>When you find a <code>description</code> assignment node, look up the next assignment line</li>
<li>Use that to calculate then end of the <code>description</code> assignment</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import ast
class Assignment(ast.NodeVisitor):
    lines = []

    def visit_Assign(self, node):
        for target in node.targets:
            self.lines.append(target.lineno)
        self.generic_visit(node)

class DescriptionUpdater(ast.NodeVisitor):
    def visit_Assign(self, node):
        if node.targets[0].id == 'description':
            current = Assignment.lines.index(node.lineno)
            next_line = Assignment.lines[current + 1]
            num_lines = next_line - node.lineno
            # update_description(node, num_lines)
        self.generic_visit(node)

with open('script.py') as script:
    tree = ast.parse(script.read())

Assignment().visit(tree)
DescriptionUpdater().visit(tree)
</code></pre>
| -3 | 2016-10-01T21:45:42Z | [
"python",
"abstract-syntax-tree"
]
|
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | <p>I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:</p>
<pre><code>class SomethingRecord(Record):
    description = 'This records something'
    author = 'john smith'
</code></pre>
<p>I use <code>ast</code> to locate the <code>description</code> line number, and I use some code to change the original file with a new description string based on the line number. So far so good.</p>
<p>Now the only issue is <code>description</code> occasionally is a multi-line string, e.g.</p>
<pre><code> description = ('line 1'
'line 2'
'line 3')
</code></pre>
<p>or</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>and I only have the line number of the first line, not the following lines. So my one-line replacer would do</p>
<pre><code> description = 'new value'
'line 2' \
'line 3'
</code></pre>
<p>and the code is broken. I figured that if I know both the lineno of start and end/number of lines of <code>description</code> assignment I could repair my code to handle such situation. How do I get such information with Python standard library?</p>
| 35 | 2016-09-29T20:35:58Z | 39,879,026 | <p>Indeed, the information you need is not stored in the <code>ast</code>. I don't know the details of what you need, but it looks like you could use the <code>tokenize</code> module from the standard library. The idea is that every logical Python statement is ended by a <code>NEWLINE</code> token (also it could be a semicolon, but as I understand it is not your case). I tested this approach with such file:</p>
<pre><code># first comment
class SomethingRecord:
    description = ('line 1'
                   'line 2'
                   'line 3')

class SomethingRecord2:
    description = ('line 1',
                   'line 2',
                   # comment in the middle
                   'line 3')

class SomethingRecord3:
    description = 'line 1' \
                  'line 2' \
                  'line 3'
    whatever = 'line'

class SomethingRecord3:
    description = 'line 1', \
                  'line 2', \
                  'line 3'
# last comment
</code></pre>
<p>And here is what I propose to do:</p>
<pre><code>import tokenize
from io import BytesIO
from collections import defaultdict
with tokenize.open('testmod.py') as f:
    code = f.read()
    enc = f.encoding

rl = BytesIO(code.encode(enc)).readline
tokens = list(tokenize.tokenize(rl))

token_table = defaultdict(list)  # mapping line numbers to token numbers
for i, tok in enumerate(tokens):
    token_table[tok.start[0]].append(i)

def find_end(start):
    i = token_table[start][-1]  # last token number on the start line
    while tokens[i].exact_type != tokenize.NEWLINE:
        i += 1
    return tokens[i].start[0]
print(find_end(3))
print(find_end(8))
print(find_end(15))
print(find_end(21))
</code></pre>
<p>This prints out:</p>
<pre><code>5
12
17
23
</code></pre>
<p>This seems to be correct, you could tune this approach depending on what exactly you need. <code>tokenize</code> is more verbose than <code>ast</code> but also more flexible. Of course the best approach is to use them both for different parts of your task.</p>
<hr>
<p><strong>EDIT:</strong> I tried this in Python 3.4, but I think it should also work in other versions.</p>
| 2 | 2016-10-05T16:12:40Z | [
"python",
"abstract-syntax-tree"
]
|
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | <p>I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:</p>
<pre><code>class SomethingRecord(Record):
    description = 'This records something'
    author = 'john smith'
</code></pre>
<p>I use <code>ast</code> to locate the <code>description</code> line number, and I use some code to change the original file with a new description string based on the line number. So far so good.</p>
<p>Now the only issue is <code>description</code> occasionally is a multi-line string, e.g.</p>
<pre><code> description = ('line 1'
'line 2'
'line 3')
</code></pre>
<p>or</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>and I only have the line number of the first line, not the following lines. So my one-line replacer would do</p>
<pre><code> description = 'new value'
'line 2' \
'line 3'
</code></pre>
<p>and the code is broken. I figured that if I know both the lineno of start and end/number of lines of <code>description</code> assignment I could repair my code to handle such situation. How do I get such information with Python standard library?</p>
| 35 | 2016-09-29T20:35:58Z | 39,879,196 | <p>As a workaround you can change:</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>to:</p>
<pre><code> description = 'new value'; tmp = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>etc. </p>
<p>It is a simple change, but it does produce ugly code.</p>
| 6 | 2016-10-05T16:22:55Z | [
"python",
"abstract-syntax-tree"
]
|
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | <p>I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:</p>
<pre><code>class SomethingRecord(Record):
    description = 'This records something'
    author = 'john smith'
</code></pre>
<p>I use <code>ast</code> to locate the <code>description</code> line number, and I use some code to change the original file with a new description string based on the line number. So far so good.</p>
<p>Now the only issue is <code>description</code> occasionally is a multi-line string, e.g.</p>
<pre><code> description = ('line 1'
'line 2'
'line 3')
</code></pre>
<p>or</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>and I only have the line number of the first line, not the following lines. So my one-line replacer would do</p>
<pre><code> description = 'new value'
'line 2' \
'line 3'
</code></pre>
<p>and the code is broken. I figured that if I know both the lineno of start and end/number of lines of <code>description</code> assignment I could repair my code to handle such situation. How do I get such information with Python standard library?</p>
| 35 | 2016-09-29T20:35:58Z | 39,897,622 | <p>My solution takes a different path: when I had to change code in another file, I opened the file, found the line, collected all of the following lines that had a deeper indent than the first, and returned the line number of the first line that isn't deeper.
I return None, None if I couldn't find the text I was looking for.
This is of course incomplete, but I think it's enough to get you through :)</p>
<pre><code>def get_all_indented(text_lines, text_in_first_line):
    first_line = None
    indent = None
    for line_num in range(len(text_lines)):
        if indent is not None and first_line is not None:
            if not text_lines[line_num].startswith(indent):
                return first_line, line_num  # First and last lines
        if text_in_first_line in text_lines[line_num]:
            first_line = line_num
            indent = text_lines[line_num][:text_lines[line_num].index(text_in_first_line)] + ' '  # At least 1 more space.
    return None, None
</code></pre>
| 1 | 2016-10-06T13:40:01Z | [
"python",
"abstract-syntax-tree"
]
|
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | <p>I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:</p>
<pre><code>class SomethingRecord(Record):
    description = 'This records something'
    author = 'john smith'
</code></pre>
<p>I use <code>ast</code> to locate the <code>description</code> line number, and I use some code to change the original file with a new description string based on the line number. So far so good.</p>
<p>Now the only issue is <code>description</code> occasionally is a multi-line string, e.g.</p>
<pre><code> description = ('line 1'
'line 2'
'line 3')
</code></pre>
<p>or</p>
<pre><code> description = 'line 1' \
'line 2' \
'line 3'
</code></pre>
<p>and I only have the line number of the first line, not the following lines. So my one-line replacer would do</p>
<pre><code> description = 'new value'
'line 2' \
'line 3'
</code></pre>
<p>and the code is broken. I figured that if I know both the lineno of start and end/number of lines of <code>description</code> assignment I could repair my code to handle such situation. How do I get such information with Python standard library?</p>
| 35 | 2016-09-29T20:35:58Z | 39,945,762 | <p>I looked at the other answers; it appears people are doing backflips to get around the problems of computing line numbers, when your real problem is one of modifying the code. That suggests the baseline machinery is not helping you the way you really need.</p>
<p>If you use a <a href="https://en.wikipedia.org/wiki/Program_transformation">program transformation system (PTS)</a>, you could avoid a lot of this nonsense.</p>
<p>A good PTS will parse your source code to an AST, and then let you apply source-level rewrite rules to modify the AST, and will finally convert the modified AST back into source text. Generically PTSes accept transformation rules of essentially this form:</p>
<pre><code> if you see *this*, replace it by *that*
</code></pre>
<p>[A parser that builds an AST is NOT a PTS. They don't allow rules like this; you can write ad hoc code to hack at the tree, but that's usually pretty awkward. Nor do they do the AST-to-source-text regeneration.]</p>
<p>My PTS (see bio), called DMS, could accomplish this. OP's specific example would be accomplished easily by using the following rewrite rule:</p>
<pre><code> source domain Python; -- tell DMS the syntax of pattern left hand sides
target domain Python; -- tell DMS the syntax of pattern right hand sides
rule replace_description(e: expression): statement -> statement =
" description = \e "
->
" description = ('line 1'
'line 2'
'line 3')";
</code></pre>
<p>The one transformation rule is given a name <em>replace_description</em> to distinguish it from all the other rules we might define. The rule parameters (e: expression) indicate the pattern will allow an arbitrary expression as defined by the source language. <em>statement->statement</em> means the rule maps a statement in the source language to a statement in the target language; we could use any other syntax category from the Python grammar provided to DMS. The <strong>"</strong> used here is a <em>metaquote</em>, used to distinguish the syntax of the rule language from the syntax of the subject language. The second <strong>-></strong> separates the source pattern <em>this</em> from the target pattern <em>that</em>.</p>
<p>You'll notice that there is no need to mention line numbers. The PTS converts the rule surface syntax into corresponding ASTs by actually parsing the patterns with the same parser used to parse the source file. The ASTs produced for the patterns are used to effect the pattern match/replacement. Because this is driven from ASTs, the actual layout of the original code (spacing, linebreaks, comments) doesn't affect DMS's ability to match or replace. Comments aren't a problem for matching because they are attached to tree nodes rather than being tree nodes; they are preserved in the transformed program. DMS does capture line and precise column information for all tree elements; it is just not needed to implement transformations. Code layout is also preserved in the output by DMS, using that line/column information.</p>
<p>Other PTSes offer generally similar capabilities.</p>
| 7 | 2016-10-09T16:17:34Z | [
"python",
"abstract-syntax-tree"
]
|
Reverse model admin custom URLs | 39,779,545 | <p>Inside my <code>admin.py</code> file I have:</p>
<pre><code>def get_urls(self):
    urls = super(TextAdmin, self).get_urls()
    my_urls = patterns('',
        url(
            r'customfunc1',
            customfunc2,
            name='customfunc23',
        ),
    )
    return my_urls + urls
</code></pre>
<p>Which will enable the following URL:</p>
<pre><code>http://localhost:8000/admin/text/customfunc1
</code></pre>
<p>Which will execute function <code>customfunc2</code>. My question is now how would I reference this URL through doing <code>reverse</code>?</p>
<p>I tried:</p>
<pre><code>reverse("admin:text_customfunc1")
reverse("admin:text_customfunc2")
reverse("admin:text_customfunc3")
reverse("text:customfunc1")
</code></pre>
<p>But none of those work.</p>
| 1 | 2016-09-29T20:36:19Z | 39,779,708 | <p>You have <code>name='customfunc23'</code>, and it is in the <code>admin</code> app, so you should use:</p>
<pre><code>reverse('admin:customfunc23')
</code></pre>
| 0 | 2016-09-29T20:46:30Z | [
"python",
"django",
"python-2.7",
"django-admin"
]
|
Cannot make boolean global in Python | 39,779,551 | <p>I am very new to programming and am creating a function that takes a letter or letters and checks to see if it is within a word. I need to control the number of guesses so I would like to set a boolean to test whether it was correct. To be clear, I have looked through many other answers on this type of question but cannot figure out what is going on in this case.</p>
<p>I cannot seem to make the bool 'correct' return True even if the if statement is true. After looking this up it seems that making it a global variable should fix this, but it is not working for me. It's still returning False.
How do I fix this? I am using Python 2.5.</p>
<p>Any help would be greatly appreciated!
Thanks! </p>
<pre><code>randomWord = choose_word(wordlist)
guessedLetters = []
correct = False
def userGuesses(letters):
    newLetters = list(letters)
    global correct  # change correct to global
    for i in range(len(letters)):
        curLetter = letters[i]
        for j in range(len(randomWord)):
            if curLetter == randomWord[j]:
                guessedLetters[j] = randomWord[j]
                correct = True
            else:
                correct = False
    return guessedLetters
</code></pre>
| 1 | 2016-09-29T20:36:50Z | 39,779,691 | <p>Some issues in the code:</p>
<ul>
<li><code>randomWord</code> seems to be undefined.</li>
<li><code>guessedLetters</code> seems to be undefined as well.</li>
<li>There are 2 returns in the function</li>
</ul>
<p>If your code did run then maybe it never gets into the inner loop. Add some <code>print</code> statements at different places to see if the code at that point gets run.</p>
<p>Also note that you don't need to do <code>for j in range(len(letters))</code> just to access <code>letters[i]</code> later on. Just do:</p>
<pre><code>for letter in letters:
    # your code
</code></pre>
<p>It's shorter and has the same effect. The same for the inner loop.</p>
<p>Good luck!</p>
<p>Edited: try to print out <code>randomWord</code> before entering the function to see what the value of that variable is.</p>
| 0 | 2016-09-29T20:45:47Z | [
"python",
"global-variables",
"local"
]
|
Retrieve a list of files located at a URL with filenames matching a known pattern | 39,779,570 | <p>There is a URL where a colleague has set up a large number of files for me to download, </p>
<pre><code>url = "http://www.some.url.edu/some/dirname/"
</code></pre>
<p>Inside this directory, there are a large number of files with different filename patterns that are known to me in advance, e.g., "subvol1_file1.tar.gz", "subvol1_file2.tar.gz", etc. I am going to selectively download these files based on their filename patterns using fnmatch. </p>
<p>What I need is a simple list or generator of <em>all</em> filenames located in <em>dirname</em>. Is there a simple way to use, for example, BeautifulSoup or urllib2 to retrieve such a list? </p>
<p>Once I have the list/iterable, let's call it <strong>filename_sequence</strong>, I plan to download the files with a pattern <em>filepat</em> with the following pseudocode:</p>
<pre><code>filename_sequence = code_needed
filepat = "*my.pattern*"
import os, fnmatch
for basename in fnmatch.filter(filename_sequence, filepat):
    os.system("wget "+os.path.join(url, basename))
</code></pre>
| 1 | 2016-09-29T20:38:27Z | 39,779,827 | <p>Not sure this is applicable in your case, but you can apply a regular expression pattern on the <code>href</code> attribute values:</p>
<pre><code>import re
pattern = re.compile(r"subvol1_file\d+\.tar\.gz")
links = [a["href"] for a in soup.find_all("a", href=pattern)]
</code></pre>
| 0 | 2016-09-29T20:53:18Z | [
"python",
"beautifulsoup",
"urllib2"
]
|
Calculating the entropy of an attribute in the ID3 algorithm when a split is perfectly classified | 39,779,587 | <p>I have been reading about the ID3 algorithm recently and it says that the best attribute to be selected for splitting should result in the maximum information gain which can be computed with the help of the entropy. </p>
<p>I have written a simple python program to compute the entropy. It is shown below:</p>
<pre><code>def _E(p, n):
    x = (p/(p+n))
    y = (n/(p+n))
    return(-1* (x*math.log2(x)) -1* (y*math.log2(y)))
</code></pre>
<p>However suppose we have a table consisting of 10 elements as follows:</p>
<p>x = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]</p>
<p>y = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]</p>
<p>Where x is the attribute and y is the class. Here P(0) = 0.8 and P(1) = 0.2. The entropy will be as follows:</p>
<p>Entropy(x) = 0.8*_E(5, 3) + 0.2*_E(2, 0)</p>
<p>However the second split P(1) is perfectly classified and this results in a math error since log2(0) is negative infinity. How is the entropy calculated in such cases?</p>
| 3 | 2016-09-29T20:39:31Z | 39,786,695 | <p><strong>Entropy is a measure of impurity.</strong> So if a node is pure it means entropy is zero. </p>
<p>Have a look at <a href="https://github.com/HashCode55/ML_from_scratch/blob/master/decision_tree.py" rel="nofollow">this</a> - </p>
<pre><code>def information_gain(data, column, cut_point):
    """
    For calculating the goodness of a split. The difference of the entropy of parent and
    the weighted entropy of children.
    :params: attribute_index, labels of the node t as `labels` and cut point as `cut_point`
    :returns: The net entropy of partition
    """
    subset1, subset2 = divide_data(data, column, cut_point)
    lensub1, lensub2 = len(subset1), len(subset2)
    # if the node is pure return 0 entropy
    if len(subset1) == 0 or len(subset2) == 0:
        return (0, subset1, subset2)
    weighted_ent = (len(subset1)*entropy(subset1) + len(subset2)*entropy(subset2)) / len(data)
    return ((entropy(data) - weighted_ent), subset1, subset2)
</code></pre>
| 1 | 2016-09-30T08:11:26Z | [
"python",
"machine-learning",
"decision-tree"
]
|
Calculating the entropy of an attribute in the ID3 algorithm when a split is perfectly classified | 39,779,587 | <p>I have been reading about the ID3 algorithm recently and it says that the best attribute to be selected for splitting should result in the maximum information gain which can be computed with the help of the entropy. </p>
<p>I have written a simple python program to compute the entropy. It is shown below:</p>
<pre><code>def _E(p, n):
    x = (p/(p+n))
    y = (n/(p+n))
    return(-1* (x*math.log2(x)) -1* (y*math.log2(y)))
</code></pre>
<p>However suppose we have a table consisting of 10 elements as follows:</p>
<p>x = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]</p>
<p>y = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]</p>
<p>Where x is the attribute and y is the class. Here P(0) = 0.8 and P(1) = 0.2. The entropy will be as follows:</p>
<p>Entropy(x) = 0.8*_E(5, 3) + 0.2*_E(2, 0)</p>
<p>However the second split P(1) is perfectly classified and this results in a math error since log2(0) is negative infinity. How is the entropy calculated in such cases?</p>
| 3 | 2016-09-29T20:39:31Z | 39,803,320 | <p>The entropy of a split measures the uncertainty associated with the class labels in that split. In a binary classification problem (classes = {0,1}), the probability of class 1 (in your text, x) can range from 0 to 1. The entropy is maximum (with a value of 1) when x=0.5. Here both classes are equally probable. The entropy is minimum when one of the classes is absent, i.e. either x=0 or x=1. Here, there is no uncertainty regarding the class, hence the entropy is 0.</p>
<p><br/><br/>
Graph of entropy (y-axis) vs x (x-axis):</p>
<p><a href="http://i.stack.imgur.com/GXnDC.png" rel="nofollow"><img src="http://i.stack.imgur.com/GXnDC.png" alt="Graph of entropy (y-axis) vs x (x-axis)"></a></p>
<p><br/><br/>
The following calculation shows how to deal with the entropy calculation mathematically, when x=0 (the case when x=1 is analogous):</p>
<p><a href="http://i.stack.imgur.com/rCrwe.png" rel="nofollow"><img src="http://i.stack.imgur.com/rCrwe.png" alt="enter image description here"></a></p>
<p><br/>
In your program, you could treat x=0 and x=1 as special cases, and return 0. For other values of x, the above equation can be used directly.</p>
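<p>A minimal sketch of that special-case handling, based on the <code>_E</code> function from the question:</p>
<pre><code>import math

def _E(p, n):
    x = p / (p + n)
    y = n / (p + n)
    entropy = 0.0
    # skip a term entirely when its probability is zero (the limit of x*log2(x) is 0)
    if x > 0:
        entropy -= x * math.log2(x)
    if y > 0:
        entropy -= y * math.log2(y)
    return entropy

print(_E(2, 0))  # 0.0 -- a pure split, no math domain error
print(_E(5, 3))  # ~0.954
</code></pre>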
| 0 | 2016-10-01T05:23:53Z | [
"python",
"machine-learning",
"decision-tree"
]
|
Django - Changing order_by on click | 39,779,617 | <p>Using Django and I would like to change the order of the display of the objects on click without refreshing the page.</p>
<p>my model</p>
<pre><code>class IndexView(generic.ListView):
    template_name = 'movies/index.html'
    page_template = 'movies/all_movies.html'
    context_object_name = 'all_movies'
    model = Movie

    def get_context_data(self, **kwargs):
        context = super(IndexView, self).get_context_data(**kwargs)
        context.update({
            'all_genres': Genre.objects.all(),
            'our_pick': Movie.objects.get(pk=259)
        })
        return context

    def get_queryset(self):
        return Movie.objects.all()
</code></pre>
<p>And this is my <strong>index.html</strong></p>
<pre><code><menu class="menu">
<ul>
<li><a href="#">Newest</a></li>
<li><a href="#">Most Popular</a></li>
</ul>
</menu>
</code></pre>
<p>On clicking Newest, the query will become:</p>
<pre><code>Movie.objects.all().order_by('release_date')
</code></pre>
<p>and on clicking Most Popular, the query will become:</p>
<pre><code>Movie.objects.all().order_by('popularity')
</code></pre>
<p>How can I achieve that without refreshing the page? any help would be appreciated!</p>
| 1 | 2016-09-29T20:41:29Z | 39,780,037 | <p>Your question seems to illustrate a misunderstanding of how front-end and back-end languages work: a click event occurs on the front-end, <em>after</em> the data has been sent from the server. The <code>order_by</code> function is run and completed <em>before</em> the request is completed. </p>
<p>To load new data from the server without reloading, you will have to send a new request in the background using AJAX. </p>
<p>This is probably <strong>not a good idea</strong>, though, since you are sending the same query set ordered in a different way. I recommend using JS or jQuery to order the list on a click event based on a <code>data</code> attribute that you can set in the list itself. For more information about how to do this, see <a href="http://stackoverflow.com/questions/21600802/jquery-sort-list-based-on-data-attribute-value">jquery sort list based on data attribute value</a>.</p>
| 1 | 2016-09-29T21:07:11Z | [
"python",
"django"
]
|
Setting up a LearningRateScheduler in Keras | 39,779,710 | <p>I'm setting up a Learning Rate Scheduler in Keras, using history loss as an updater to self.model.optimizer.lr, but the value of self.model.optimizer.lr does not get inserted in the SGD optimizer and the optimizer is using the default learning rate. The code is:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import StandardScaler
import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.model.optimizer.lr = 3

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        self.model.optimizer.lr = lr - 10000*self.losses[-1]

def base_model():
    model = Sequential()
    model.add(Dense(4, input_dim=2, init='uniform'))
    model.add(Dense(1, init='uniform'))
    sgd = SGD(decay=2e-5, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mean_absolute_error'])
    return model
history=LossHistory()
estimator = KerasRegressor(build_fn=base_model,nb_epoch=10,batch_size=16,verbose=2,callbacks=[history])
estimator.fit(X_train,y_train,callbacks=[history])
res = estimator.predict(X_test)
</code></pre>
<p>Everything works fine using Keras as a regressor for continuous variables, But I want to reach a smaller derivative by updating the optimizer learning rate.</p>
| 2 | 2016-09-29T20:46:41Z | 39,781,985 | <p>The learning rate is a variable on the computing device, e.g. a GPU if you are using GPU computation. That means that you have to use <code>K.set_value</code>, with <code>K</code> being <code>keras.backend</code>. For example:</p>
<pre><code>import keras.backend as K
K.set_value(opt.lr, 0.01)
</code></pre>
<p>or in your example</p>
<pre><code>K.set_value(self.model.optimizer.lr, lr-10000*self.losses[-1])
</code></pre>
| 0 | 2016-09-30T00:36:26Z | [
"python",
"optimization",
"keras"
]
|
Setting up a LearningRateScheduler in Keras | 39,779,710 | <p>I'm setting up a Learning Rate Scheduler in Keras, using history loss as an updater to self.model.optimizer.lr, but the value of self.model.optimizer.lr does not get inserted in the SGD optimizer and the optimizer is using the default learning rate. The code is:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import StandardScaler
import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.model.optimizer.lr = 3

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        self.model.optimizer.lr = lr - 10000*self.losses[-1]

def base_model():
    model = Sequential()
    model.add(Dense(4, input_dim=2, init='uniform'))
    model.add(Dense(1, init='uniform'))
    sgd = SGD(decay=2e-5, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mean_absolute_error'])
    return model
history=LossHistory()
estimator = KerasRegressor(build_fn=base_model,nb_epoch=10,batch_size=16,verbose=2,callbacks=[history])
estimator.fit(X_train,y_train,callbacks=[history])
res = estimator.predict(X_test)
</code></pre>
<p>Everything works fine using Keras as a regressor for continuous variables, But I want to reach a smaller derivative by updating the optimizer learning rate.</p>
| 2 | 2016-09-29T20:46:41Z | 39,807,000 | <p>Thanks, I found an alternative solution, as I'm not using GPU:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler
import keras
import numpy as np

sd = []

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = [1, 1]

    def on_epoch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        sd.append(step_decay(len(self.losses)))
        print('lr:', step_decay(len(self.losses)))

epochs = 50
learning_rate = 0.1
decay_rate = 5e-6
momentum = 0.9

model = Sequential()
model.add(Dense(4, input_dim=2, init='uniform'))
model.add(Dense(1, init='uniform'))
sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mean_absolute_error'])

def step_decay(losses):
    if float(2*np.sqrt(np.array(history.losses[-1]))) < 0.3:
        lrate = 0.01*1/(1+0.1*len(history.losses))
        momentum = 0.8
        decay_rate = 2e-6
        return lrate
    else:
        lrate = 0.1
        return lrate
history=LossHistory()
lrate=LearningRateScheduler(step_decay)
model.fit(X_train,y_train,nb_epoch=epochs,callbacks=[history,lrate],verbose=2)
model.predict(X_test)
</code></pre>
<p>The output is (lr is learning rate):</p>
<pre><code>Epoch 41/50
lr: 0.0018867924528301887
0s - loss: 0.0126 - mean_absolute_error: 0.0785
Epoch 42/50
lr: 0.0018518518518518517
0s - loss: 0.0125 - mean_absolute_error: 0.0780
Epoch 43/50
lr: 0.0018181818181818182
0s - loss: 0.0125 - mean_absolute_error: 0.0775
Epoch 44/50
lr: 0.0017857142857142857
0s - loss: 0.0126 - mean_absolute_error: 0.0785
Epoch 45/50
lr: 0.0017543859649122807
0s - loss: 0.0126 - mean_absolute_error: 0.0773
</code></pre>
<p>And this is what happens to Learning Rate over the epochs:
<a href="http://i.stack.imgur.com/DKRK7.png" rel="nofollow"><img src="http://i.stack.imgur.com/DKRK7.png" alt="Learning Rate Scheduler"></a></p>
| 1 | 2016-10-01T13:09:46Z | [
"python",
"optimization",
"keras"
]
|
Python DataFrame error for having a header with parenthesis | 39,779,726 | <p>I am very new to python and programming in general. I have a CSV file that has the first column as string headers.</p>
<p>ie</p>
<pre><code>time, speed(m/s), distance(m)
2,6,20
3,2,10
etc..
</code></pre>
<p>What I am trying to do is a program that will spit out a bunch of graphs based on my selection. for example time vs speed. or time vs speed and distance.</p>
<p>My first issue is whenever I try to graph something that has a parenthesis in its name for example:</p>
<pre><code>df.Accel_Y(g).plot(color='r',lw=1.3)
</code></pre>
<p>I receive an error of:</p>
<blockquote>
<p>AttributeError: 'DataFrame' object has no attribute 'Accel_Y'</p>
</blockquote>
<p>It works fine if I try with a different column that has no parenthesis.</p>
<p>I tried to assign accel_y(g) to a letter for example z. Then doing this:</p>
<pre><code>f.z.plot(color='r',lw=1.3)
</code></pre>
<p>that also didn't work.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
from pandas import DataFrame, read_csv
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame.from_csv('Flight102_Complete.csv')
df.KestrelTime.plot(color='g',lw=1.3)
df.Accel_Y(g).plot(color='r',lw=1.3)
print (df)
</code></pre>
| 1 | 2016-09-29T20:47:40Z | 39,780,184 | <p>You should consider that accessing your columns via the <code>.</code> (dot) operator a privileged convenience. It can break under many circumstances and sometimes silently so. If you want to access the column do so with <code>[]</code> or <code>loc[]</code> or <code>iloc[]</code>.</p>
<p>Parenthesis are one of the conditions that will break the <code>.</code> accessor paradigm.</p>
<p>In your case<br>
Don't do</p>
<pre><code>df.Accel_Y(g).plot(color='r', lw=1.3)
</code></pre>
<p>Instead do</p>
<pre><code>df['Accel_Y(g)'].plot(color='r', lw=1.3)
</code></pre>
| 1 | 2016-09-29T21:18:35Z | [
"python",
"pandas",
"matplotlib",
"dataframe"
]
|
After installing anaconda - command not found: jupyter | 39,779,744 | <p>I have installed anaconda on my MAC laptop, and tried to run jupyter notebook to install it, but I get error jupyter command not found.</p>
| 2 | 2016-09-29T20:48:22Z | 39,779,791 | <p>You need to activate your conda environment and then do</p>
<pre><code>$ pip install jupyter
$ jupyter notebook # to actually run the notebook server
</code></pre>
| 3 | 2016-09-29T20:51:19Z | [
"python",
"anaconda",
"jupyter-notebook"
]
|
Python list comprehension vs. nested loop, conciseness/efficiency | 39,779,757 | <p>The following 2 methods do the same thing. Which one is more efficient in terms of time/space complexity?</p>
<pre><code>** Method A**
for student in group.students:
    for grade in student.grades:
        some_operation(grade)
** Method B**
for grade in [grade for student in group.students for grade in student.grades]:
    some_operation(grade)
</code></pre>
| 0 | 2016-09-29T20:49:20Z | 39,779,801 | <p>These have the same time complexity, <code>O(nm)</code>, since it's a loop over another loop. So, the <code>n</code> is <code>group.students</code>, and <code>m</code> is <code>students.grades</code>. Functionally, these should be the same time complexity too since it's iterating over both lists either way.</p>
| 0 | 2016-09-29T20:51:45Z | [
"python",
"code-complexity"
]
|
Python list comprehension vs. nested loop, conciseness/efficiency | 39,779,757 | <p>The following 2 methods do the same thing. Which one is more efficient in terms of time/space complexity?</p>
<pre><code>** Method A**
for student in group.students:
for grade in student.grades:
some_operation(grade)
** Method B**
for grade in [grade for student in group.students for grade in student.grades]
some_operation(grade)
</code></pre>
| 0 | 2016-09-29T20:49:20Z | 39,779,816 | <p>Method B looks weird and redundant. You could shorten it to:</p>
<pre><code>[some_operation(grade) for student in group.students for grade in student.grades]
</code></pre>
<p>But method A is better either way because it doesn't create a list. Making a list simply to throw it away is confusing to the reader and wastes memory.</p>
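<p>If a single flat loop is wanted without building an intermediate list, a generator-based variant (using the names from the question) is possible:</p>
<pre><code>from itertools import chain

# chain.from_iterable lazily walks each student's grades without materialising a list
for grade in chain.from_iterable(student.grades for student in group.students):
    some_operation(grade)
</code></pre>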
| 1 | 2016-09-29T20:52:26Z | [
"python",
"code-complexity"
]
|
Why can't add a row to pandas dataframe? | 39,779,832 | <p>I have a 370864*493 dataset and I want to add a new row at the tail of the data set. I have tried Dataframe.loc[370864]=.... and Dataframe.append(). Neither method worked as I expected, but the same code works on a smaller data set of just 20 thousand rows. I would like to know the reason. The size of the dataset is 1.6 GB. I use pandas and my IDE is Spyder. The image shows more detail. The data source is the UCSC cancer browser, LUAD methylation data. <a href="http://i.stack.imgur.com/1apUH.png" rel="nofollow">The tail of the dataframe</a></p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import MinMaxScaler
"""
get clinical information and count number of M0 and M1
"""
def get_Metastasis(sampleID_list,df_clinical):
num_M0=0
num_M1=0
list_Metastasis=[]
list_Metastasis.append('Metastasis')
for ID in sampleID_list:
row_number=df_clinical.loc[df_clinical.sampleID==ID].index[0]
list_Metastasis.append(df_clinical.loc[row_number,'pathologic_M'])
for i in range(1,len(list_Metastasis)):
if list_Metastasis[i]!="M0" and isinstance(list_Metastasis[i],str):
list_Metastasis[i]="M1"
num_M1+=1
list_Metastasis[i]=1
elif list_Metastasis[i]=="M0":
num_M0+=1
list_Metastasis[i]=0
# else:
# list_Metastasis[i]=None
return list_Metastasis, num_M1, num_M0
"""
read Data
"""
path_for_clinical_data="clinical_data"
path_for_genomicMatrix="genomicMatrix"
df_clinical = pd.read_table(path_for_clinical_data)
df_genomicMatrix = pd.read_table(path_for_genomicMatrix)
df_genomicMatrix=df_genomicMatrix.dropna(axis=0) ##get rid of row include nan
"""
Add metastasis information
"""
sampleID_list=list(df_genomicMatrix.columns.values)
sampleID_list=sampleID_list[1:]
list_M=[]
list_M,num_M1,num_M0=get_Metastasis(sampleID_list,df_clinical)
df_genomicMatrix.loc[len(df_genomicMatrix)]=list_M ## Here is the problem.
</code></pre>
<p>Here is the result:</p>
<pre><code> sample TCGA-44-4112-01 TCGA-NJ-A4YP-01 TCGA-86-8278-01 \
485566 cg15678817 0.02110 0.0961 -0.1652
485567 cg14483317 -0.41520 -0.4051 -0.4117
485573 cg10230711 -0.42750 -0.3067 -0.4182
485574 cg16651827 0.22345 0.2358 0.2007
485576 cg07883722 0.36660 0.3932 0.4155
</code></pre>
| 0 | 2016-09-29T20:53:29Z | 39,780,542 | <p>Try changing the last line that you gave us from </p>
<pre><code>df_genomicMatrix.loc[370864]=list_M
</code></pre>
<p>to</p>
<pre><code>df_genomicMatrix.loc[len(df_genomicMatrix)]=list_M
</code></pre>
<p><code>len(df_genomicMatrix)</code> returns the length of the dataframe. Using this instead of a static number should make the code more generic. You included a line in your code to remove rows with NA values, which could have changed the size of the dataframe, making <code>df_genomicMatrix.loc[370864]=list_M</code> inaccurate.</p>
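<p>One caveat: after <code>dropna</code> the integer index keeps its original labels, so <code>len(df_genomicMatrix)</code> may coincide with a label that still exists and silently overwrite that row instead of appending. Resetting the index first avoids this, for example:</p>
<pre><code>df_genomicMatrix = df_genomicMatrix.reset_index(drop=True)   # relabel rows 0..N-1
df_genomicMatrix.loc[len(df_genomicMatrix)] = list_M         # now guaranteed to append
</code></pre>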
<p>This should really be a comment, but I don't have enough rep to comment on other people's posts (welp).</p>
| 0 | 2016-09-29T21:46:26Z | [
"python",
"pandas"
]
|
pip says it is a package, but python says it is not | 39,779,961 | <p>I am using Windows and have a directory like this:</p>
<pre><code>troopcalc/
setup.py
troopcalc/
__init__.py
troopcalc.py
troopcfg.py
tests/
__init__.py
test_troopcalc.py
test_troopcfg.py
</code></pre>
<p>I used pip to install the package... and pip list shows the package installed and pointing to the top of the dir structure.</p>
<hr>
<p><code>troopcfg.py</code>:</p>
<pre><code>class TroopCfg(object):
...
</code></pre>
<hr>
<p><code>troopcalc.py</code>:</p>
<pre><code>from troopcalc.troopcfg import TroopCfg
</code></pre>
<hr>
<p><code>setup.py</code>:</p>
<pre><code>from setuptools import setup
setup(name='troopcalc',
version='0.1',
description='Calculates troop distribution',
url='...',
author='Randell ...',
author_email='...',
packages=['troopcalc', 'troopcalc.tests', 'troopcalc.data'],
package_dir={'troopcalc': 'troopcalc', 'troopcalc.tests': 'troopcalc/tests', 'troopcalc.data': 'troopcalc/data'},
package_data={'troopcalc.data': ['*.json']},
)
</code></pre>
<hr>
<p>If I run <code>python .\troopcalc.py</code> I get the following:</p>
<pre><code>Traceback (most recent call last):
File ".\troopcalc.py", line 2, in <module>
from troopcalc.troopcfg import TroopCfg
File "...\python\troopcalc\troopcalc\troopcalc.py", line 2, in <module>
from troopcalc.troopcfg import TroopCfg
ImportError: No module named 'troopcalc.troopcfg'; 'troopcalc' is not a package
</code></pre>
<p>pip says it is a package, but python says it is not. The goal is not to use absolute paths of course. What am I missing?</p>
<p>Thanks.</p>
| 0 | 2016-09-29T21:01:22Z | 39,797,884 | <blockquote>
<p>If I run <code>python .\troopcalc.py</code> I get the following:</p>
</blockquote>
<p>Well, yes. When you run the script directly, the directory that contains <code>troopcalc.py</code> is put at the front of <code>sys.path</code>, so the name <code>troopcalc</code> resolves to that module rather than to the installed package. Run it as a module instead:</p>
<pre><code>python -m troopcalc.troopcalc
</code></pre>
| 1 | 2016-09-30T18:29:17Z | [
"python"
]
|
Django Aggregate ManyToMany fields | 39,780,067 | <p>Say I have Django models like so:</p>
<pre><code>class Author(models.Model):
name = models.CharField(max_length=100)
class Book(models.Model):
authors = models.ManyToManyField(Author)
</code></pre>
<p>What's the best way to get the number of books each author created (preferably ordered) within a query or two?</p>
<p>The Author set might get pretty big, so I'm looking for a way to avoid iterating over all of it.</p>
| 0 | 2016-09-29T21:08:55Z | 39,780,133 | <p>In your <code>Book</code> model add...</p>
<pre><code>...
author = models.ForeignKey(Author)
...
</code></pre>
<p>and then to get them you use a QuerySet like</p>
<pre><code>...
Author.objects.get(id=some_id).book_set.all() # returns all books of that author; append '.count()' to get the number
...
</code></pre>
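<p>If you only need the per-author counts in a single query while keeping the original <code>ManyToManyField</code>, an aggregation-based sketch (assuming Django's default reverse name <code>book</code>) is:</p>
<pre><code>from django.db.models import Count

authors = Author.objects.annotate(num_books=Count('book')).order_by('-num_books')
for author in authors:
    print(author.name, author.num_books)
</code></pre>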
| -2 | 2016-09-29T21:14:50Z | [
"python",
"django",
"many-to-many"
]
|
Python 2.7 connection to Oracle: losing (Polish) characters | 39,780,090 | <p>I connect from Python 2.7 to an Oracle database.
When I use:</p>
<pre><code>cursor.execute("SELECT column1 FROM table").fetchall()
</code></pre>
<p>I get almost the proper values for column1, because all Polish characters ("ęóąśłżćń") are converted to their ASCII counterparts ("eoaslzcn"). Using another tool such as SQLDeveloper with the same select statement, I get the proper values.</p>
| 0 | 2016-09-29T21:10:45Z | 39,783,932 | <p>Try setting the environment variable <code>NLS_LANG</code> to your database language string, something like</p>
<pre><code>os.environ['NLS_LANG'] = 'POLISH_POLAND.EE8MSWIN1250'
</code></pre>
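<p><code>NLS_LANG</code> is read by the Oracle client when the connection is created, so the variable has to be set before you connect. A minimal sketch, assuming cx_Oracle and a placeholder connect string:</p>
<pre><code>import os
os.environ['NLS_LANG'] = 'POLISH_POLAND.EE8MSWIN1250'  # set before the first connection

import cx_Oracle
connection = cx_Oracle.connect('user/password@host/service')  # placeholder credentials
cursor = connection.cursor()
</code></pre>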
| 0 | 2016-09-30T04:55:04Z | [
"python",
"oracle",
"python-2.7",
"character-encoding",
"polish"
]
|
Python 2.7 connection to Oracle: losing (Polish) characters | 39,780,090 | <p>I connect from Python 2.7 to an Oracle database.
When I use:</p>
<pre><code>cursor.execute("SELECT column1 FROM table").fetchall()
</code></pre>
<p>I get almost the proper values for column1, because all Polish characters ("ęóąśłżćń") are converted to their ASCII counterparts ("eoaslzcn"). Using another tool such as SQLDeveloper with the same select statement, I get the proper values.</p>
| 0 | 2016-09-29T21:10:45Z | 39,872,368 | <p>@Mark Harrison - thank you very much! It works! Here is exactly what I did:</p>
<p>I checked in Oracle what is the language:</p>
<pre><code>SELECT USERENV ('language') FROM DUAL
</code></pre>
<p>The response was:</p>
<pre><code>POLISH_POLAND.UTF8
</code></pre>
<p>Then I changed the NLS_LANG value in Python:</p>
<pre><code>os.environ['NLS_LANG'] = 'POLISH_POLAND.UTF8'
</code></pre>
<p>Using</p>
<pre><code>print "%s, %s" % (name, surname)
</code></pre>
 file.txt" in command line">
<p>and "> file.txt" on the command line, I obtained a (proper) file in UTF-8.</p>
| 0 | 2016-10-05T11:09:40Z | [
"python",
"oracle",
"python-2.7",
"character-encoding",
"polish"
]
|
stackplot legend issue when there is a secondary subplot for Matplotlib in python | 39,780,101 | <p>Here is my code and below it is the error produced. </p>
<p>My legend appears, but only one of the lines is in it.<br>
The stacked subplot labels do not appear. </p>
<pre><code>import xlrd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import datetime
file_location = "/Users/adampatel/Desktop/psw02.xls"
workbook = xlrd.open_workbook(file_location)
worksheet = workbook.sheet_by_name('Data 1')
x = [worksheet.cell_value(i+1425, 0) for i in range(worksheet.nrows-1425)]
y1 = [worksheet.cell_value(i+1425, 1) for i in range(worksheet.nrows-1425)]
y2 = [worksheet.cell_value(i+1425, 25) for i in range(worksheet.nrows-1425)]
y3 = [worksheet.cell_value(i+1425, 35) for i in range(worksheet.nrows-1425)]
y4 = [worksheet.cell_value(i+1425, 41) for i in range(worksheet.nrows-1425)]
y5 = [worksheet.cell_value(i+1425, 50) for i in range(worksheet.nrows-1425)]
fig = plt.figure()
ax = fig.add_subplot()
start_date = datetime.date(1899, 12, 30)
dates=[start_date + datetime.timedelta(xval) for xval in x]
ax.xaxis.set_major_locator(mdates.MonthLocator((), bymonthday=1, interval=6))
ax.xaxis.set_minor_locator(mdates.MonthLocator((), bymonthday=1, interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b'%y"))
ly1 = ax.plot(dates, y1, '-k', label = 'Oil Intake (LHS)')
ax2 = ax.twinx()
ly2 = ax2.stackplot(dates, y2, y3, y4, y5, colors=['0.2','0.4','0.6','0.8'], label=['gasoline', 'kerosene', 'distillates', 'residuals'])
ly1y2 = ly1+ly2
labs = [l.get_label() for l in ly1y2]
ax.legend(ly1y2, labs, fontsize = 10, loc = 2)
ax.set_ylim(11500,17500)
ax2.set_ylim(8000, 28000)
plt.show()
=================== RESTART: /Users/adampatel/Desktop/1.py ===================
Warning (from warnings module):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/matplotlib/legend.py", line 613
(str(orig_handle),))
UserWarning: Legend does not support <matplotlib.collections.PolyCollection object at 0x10b292590>
Use proxy artist instead.
</code></pre>
<p><a href="http://matplotlib.sourceforge.net/users/legend_guide.html#using-proxy-artist" rel="nofollow">http://matplotlib.sourceforge.net/users/legend_guide.html#using-proxy-artist</a></p>
| 0 | 2016-09-29T21:11:57Z | 39,939,998 | <p>So I thought about it and googled a few solutions. As the warning says, <code>stackplot</code> returns <code>PolyCollection</code> objects that the legend cannot handle directly, so I built the legend entries from <code>mpatches.Patch</code> proxy artists instead. Here are the lines of code that I used.</p>
<pre><code>import xlrd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.patches as mpatches
import datetime
file_location = "/Users/adampatel/Desktop/psw02.xls"
workbook = xlrd.open_workbook(file_location, on_demand = False)
worksheet = workbook.sheet_by_name('Data 1')
x = [worksheet.cell_value(i+1425, 0) for i in range(worksheet.nrows-1425)]
y1 = [worksheet.cell_value(i+1425, 1) for i in range(worksheet.nrows-1425)]
y2 = [worksheet.cell_value(i+1425, 25) for i in range(worksheet.nrows-1425)]
y3 = [worksheet.cell_value(i+1425, 35) for i in range(worksheet.nrows-1425)]
y4 = [worksheet.cell_value(i+1425, 41) for i in range(worksheet.nrows-1425)]
y5 = [worksheet.cell_value(i+1425, 50) for i in range(worksheet.nrows-1425)]
fig = plt.figure(figsize = (10, 7))
ax = fig.add_subplot(111)
start_date = datetime.date(1899, 12, 30)
dates=[start_date + datetime.timedelta(xval) for xval in x]
ax.xaxis.set_major_locator(mdates.MonthLocator((), bymonthday=1, interval=6))
ax.xaxis.set_minor_locator(mdates.MonthLocator((), bymonthday=1, interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b'%y"))
ly1, = ax.plot(dates, y1, '-k', linewidth=2.0, label = 'Oil Intake (LHS)')
ax2 = ax.twinx()
ly2 = ax2.stackplot(dates, y2, y3, y4, y5, colors=['0.2','0.4','0.6','0.8'], label=['gasoline', 'kerosene', 'distillates', 'residuals'])
ax.grid()
ax.set_ylabel("Thousands of Barrels per Day")
ax2.set_ylabel("Thousands of Barrels per Day")
ax.set_ylim(11500,17500)
ax2.set_ylim(8000, 28000)
plt.legend([ly1, mpatches.Patch(color='.2'), mpatches.Patch(color='.4'), mpatches.Patch(color='.6'), mpatches.Patch(color='.8')], ['Oil Intake (LHS)', 'Gasoline','Kerosene','Distillates', 'Residuals'], loc = 2)
plt.title('Refinar-Blendar Intake')
plt.savefig('Refinar Blendar2.png', bbox_inches='tight', dpi=90)
plt.show()
</code></pre>
| 0 | 2016-10-09T04:21:00Z | [
"python",
"matplotlib",
"subplot"
]
|
Python code to accept pre-defined grammar | 39,780,166 | <p>I'm trying to make a code to recognize a chain of characters following the grammar rules:</p>
<ul>
<li>S-> abK</li>
<li>K-> x|xK|H</li>
<li>H-> c|d</li>
</ul>
<p>So that words like <em>abx</em>, <em>abxxx</em>, <em>abc</em>, <em>abxd</em>, <em>abxc</em>, etc... are all accepted and words like <em>ab</em>, <em>abb</em>, <em>xxx</em>, etc... aren't accepted.</p>
<p>I wrote a code to do that and in my analysis it should do the trick, but there is something wrong in it, i.e, it returns False for abxx, a sentence that should be accepted by the grammar and I think it has to do with nested return values from functions, which I don't understand very well. </p>
<p>The code will be pasted below, if you guys can figure out or point me what I'm doing wrong I will be thankful.</p>
<pre><code>def S(word):
if word[0] == 'a':
atual = 1
else:
return False
if word[1] == 'b':
atual = 2
else:
return False
accepted = K(atual, word)
if accepted == True:
return True
else:
return False
def K(atual, word):
if word[atual] == 'x':
atual += 1
if len(word) <= atual: # checks to see if the word ended and accept by the first rule of the set K.
return True
else:
K(atual, word) # keeps increasing the value of atual, satisfying the rule xK
else:
value = H(atual, word) # if no more 'x' are found, try the rule H
return value
def H(atual, word):
if word[atual] == 'c' or word[atual] == 'd':
return True
else:
return False
print(S(['a','b','x','x']))
</code></pre>
| 0 | 2016-09-29T21:17:14Z | 39,780,372 | <p>The immediate problem in your version is that the recursive call <code>K(atual, word)</code> never returns its result, so that branch of <code>K</code> falls through and returns <code>None</code>. Beyond that, the implementation is unnecessarily verbose and repetitive: there is no need to pass around the index when you can just pass the relevant part of the word to the next function. Here is a quick implementation I threw together that should resolve it:</p>
<pre><code>def S(chars):
word = ''.join(chars)
try:
return word[:2] == 'ab' and K(word[2:])
except IndexError:
return False
def K(word):
return word == 'x' or (word[0] == 'x' and K(word[1:])) or H(word)
def H(word):
return word in ['c', 'd']
</code></pre>
<p>Using this function, I get:</p>
<pre><code>>>> list(map(S, ['abx', 'abxxx', 'abc', 'abxd', 'abxc']))
[True, True, True, True, True]
</code></pre>
| 1 | 2016-09-29T21:32:49Z | [
"python",
"grammar"
]
|
Python Multiprocessing - execution time increased, what am I doing wrong? | 39,780,179 | <p>I'm doing a simple multiprocessing test and something seems off. I'm running this on an i5-6200U at 2.3 GHz with Turbo Boost.</p>
<pre><code>from multiprocessing import Process, Queue
import time
def multiply(a,b,que): #add a argument to function for assigning a queue
que.put(a*b) #we're putting return value into queue
if __name__ == '__main__':
queue1 = Queue() #create a queue object
jobs = []
start_time = time.time()
#####PARALLEL####################################
for i in range(0,400):
p = p = Process(target= multiply, args= (5,i,queue1))
jobs.append(p)
p.start()
for j in jobs:
j.join()
print("PARALLEL %s seconds ---" % (time.time() - start_time))
#####SERIAL################################
start_time = time.time()
for i in range(0,400):
multiply(5,i,queue1)
print("SERIAL %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>Output:</p>
<pre><code>PARALLEL 22.12951421737671 seconds ---
SERIAL 0.004009723663330078 seconds ---
</code></pre>
<p>Help is much appreciated.</p>
| -1 | 2016-09-29T21:18:12Z | 39,782,468 | <p>Here's a brief example of (silly) code that gets a nice speedup. As already covered in comments, it doesn't create an absurd number of processes, and the work done per remote function invocation is high compared to interprocess communication overheads.</p>
<pre><code>import multiprocessing as mp
import time
def factor(n):
for i in range(n):
pass
return n
if __name__ == "__main__":
ns = range(100000, 110000)
s = time.time()
p = mp.Pool(4)
got = p.map(factor, ns)
print(time.time() - s)
assert got == list(ns)
s = time.time()
got = [factor(n) for n in ns]
print(time.time() - s)
assert got == list(ns)
</code></pre>
| 0 | 2016-09-30T01:46:27Z | [
"python",
"multiprocessing",
"python-multiprocessing"
]
|
python plotting with twiny() doesn't position ticks and destroys aspect-ratio | 39,780,201 | <p>I have created a function to plot many similar contour plots, and I am showing the minimal form of what I am doing. In this minimal example I would like to put ticks only at (0,0) A, (3,0) B, (3,3) C, (0,3) D, where A,B,C,D are the labels of the ticks. I can get it to work up to putting A and B. When I introduce <code>twiny()</code> to create the C and D ticks, nothing seems to work any more: ticks are not placed where expected, and the aspect ratio is destroyed. I tried using <code>host_subplot</code> from <code>mpl_toolkits.axes_grid1</code>, but it didn't fix this.</p>
<pre><code>import matplotlib
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
def plot2DFS(fign,x,y,z):
fig = plt.figure(fign)
ax = fig.add_subplot(111, adjustable='box-forced', aspect='equal')
ax.set_title('Iso-surf contour')
ax.contour(x, y, z)
ax.spines['left'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('center')
ax.spines['top'].set_color('none')
ax.tick_params(axis='y',which='both', right='off', left='off', labelleft='off')
ax.tick_params(axis='x',which='both', bottom='off', top='off')
ax.xaxis.set_ticks([0.0,3.0])
ax.xaxis.set_ticklabels(['A','B'])
bx = ax.twiny()
bx.tick_params(axis='x',which='both', bottom='off', top='off')
bx.xaxis.set_ticks([0.0,3.0])
bx.xaxis.set_ticklabels(['D','C'])
return
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
plot2DFS(1,X,Y,Z)
plt.show()
</code></pre>
| 1 | 2016-09-29T21:20:07Z | 39,780,937 | <p>The aspect ratio is just fine, you see a perfect circle, don't you?</p>
<p>Concerning the labels and their position you need to tell matplotlib that you want equal x limits. Put <code>bx.set_xlim(ax.get_xlim())</code> at the end of your function.</p>
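<p>That is, the end of <code>plot2DFS</code> would look something like this:</p>
<pre><code>    bx.xaxis.set_ticks([0.0,3.0])
    bx.xaxis.set_ticklabels(['D','C'])
    bx.set_xlim(ax.get_xlim())
    return
</code></pre>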
| 0 | 2016-09-29T22:23:25Z | [
"python",
"matplotlib",
"aspect-ratio"
]
|
CSV Filtering for numpy | 39,780,244 | <p>I've got this rather large CSV file, and I only care about the rows that contain the word "Slave"...Some rows that contain Slave are not exactly the same as others, but they all contain the word "Slave".</p>
<p>I want to throw away all the other rows and then work on the data that's left over in the other column.</p>
<p>Here's the catch: The other column isn't clean either...It always looks like this, though:</p>
<p><code>digit (text)</code>
so, for instance:</p>
<p><code>7 (medium)</code></p>
<p><code>12 (strong)</code></p>
<p>I want to grab the first 1 or 2 (depending on if there is 1 or 2 digits, of course) and then plot them in a histogram with numpy and matplotlib/pyplot.</p>
<p>I've got two problems:<br>
This code:</p>
<pre><code>import csv
x=csv.reader(open('sample.csv', 'rt'), delimiter=',')
x=list(x)
</code></pre>
<p>Does OK, but now I have to address things like <code>x[1][1]</code>...This will show</p>
<pre><code>Slave (0x00-02-5b-00-a5-a5) (#1)
</code></pre>
<p>But, something like <code>x[:][1]</code> shows </p>
<pre><code>['6 (medium)', 'Slave (0x00-02-5b-00-a5-a5) (#1)']
</code></pre>
<p>Which, is not what I expect...I would expect it to just print the second column.</p>
<p>Anyway, if I can get past that, the next issue will be that the surviving column will have some character filtering to do (I guess) to keep just the digits and remove the alpha characters... However, I'm dreading how to do that and also how to stuff it into some numpy-friendly data structure.</p>
<p>Any thoughts on how to proceed? Here's a sample of the data I'm working with:</p>
<pre><code>6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
4 (weak),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
4 (weak),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
13 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
10 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
11 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
8 (medium),Master (0x00-25-52-f5-a6-f1) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
13 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
12 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
10 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
11 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
4 (weak),Slave (0x00-02-5b-00-a5-a5) (#1)
13 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
5 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
11 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
6 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
11 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
13 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
7 (medium),Slave (0x00-02-5b-00-a5-a5) (#1)
10 (strong),Master (0x00-25-52-f5-a6-f1) (#1)
</code></pre>
<p>Thanks</p>
| 1 | 2016-09-29T21:23:31Z | 39,780,388 | <p>As a csv, each line contains 2 columns, one before and one after the comma. <code>x[1][1]</code> is the second column from the second list, since python uses 0-based indices. And <code>x[:][1]</code> is equivalent to <code>x[1]</code>, so no surprise there.</p>
<p>I suggest filtering and keeping the very first column that contains your numbers:</p>
<pre><code>firstcol_filt = [int(str.split(k[0])[0]) for k in x if 'Slave' in k[1]]
</code></pre>
<p>then you can transform this list of lists into a numpy array if you wish,</p>
<pre><code>firstcol_arr = np.array(firstcol_filt)
</code></pre>
<p>Due to its shape, this will be a 1d array which you can use in a histogram.</p>
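<p>For the plotting step itself, a minimal histogram sketch (the bin choice is arbitrary):</p>
<pre><code>import matplotlib.pyplot as plt

plt.hist(firstcol_arr, bins=10)
plt.xlabel('signal strength')
plt.ylabel('count')
plt.show()
</code></pre>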
<hr>
<p>Just to elaborate: <code>x</code> from your CSV is a list of lists. The list comprehension loops over <code>x</code>, <code>k</code> is each row of the CSV, so <code>k</code> is a two-element list. If <code>'Slave'</code> is in the second element, we split the first element at whitespace and transform its first part into an integer.</p>
<p>The list comprehension can be expanded into an equivalent loop like so:</p>
<pre><code>firstcol_filt = []
for k in x:
if 'Slave' in k[1]:
firstcol_filt.append(int(str.split(k[0])[0]))
</code></pre>
<p>Since you asked in a comment, here's the filtering step and the splitting step separately, for clarity:</p>
<pre><code>filtered_rows = [k for k in x if 'Slave' in k[1]]
firstcol_filt = [int(str.split(k[0])[0]) for k in filtered_rows]
</code></pre>
| 1 | 2016-09-29T21:33:57Z | [
"python",
"list",
"csv",
"numpy",
"filtering"
]
|
Django FileField not including uploaded file in form output request | 39,780,247 | <p>I am having trouble getting access to a simple uploaded file which I need to parse without saving it. Since the file does not need to be saved I have not made a model for it. All other threads on this state the html form encoding type, name tag are the primary reasons why request.FILES is not appearing-- I have addressed these and still there is no request.FILES being captured.</p>
<p><strong>forms.py</strong></p>
<pre><code>class DataExportForm(forms.Form):
docfile = forms.FileField(label='Select a template',help_text='Extracted info will be returned')
</code></pre>
<p><strong>HTML</strong></p>
<pre><code><html>
<body>
<form action="." method="post" enctype="multipart/form-data">{% csrf_token %}
<tr><th><label for="id_docfile">Select a template:</label></th><td><input id="id
_docfile" name="docfile" type="file" />
<button type="submit" name="action">Upload</button>
<br /><span class="helptext">Zip file wil
l be returned with data</span></td></tr>
</form>
</body>
</html>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>if request.method=='POST':
form=DataExportForm(request.POST, request.FILES)
if form.is_valid():
#Code runs OK till here but request.FILES does not exist.
groupList=extractTemplateNames(request.FILES['file'].read())
</code></pre>
<p>I guess if I get it working I may find the file not in request.FILES['file'] but in request.FILES['docfile'] but at this point request.FILES does not exist. Any tips to solve this would be appreciated.</p>
| 0 | 2016-09-29T21:23:35Z | 39,781,528 | <p>Most likely, the problem is just that you are trying to access the file using the wrong name. The field on the form is named <code>docfile</code>, and it will have the same name in the <code>request.FILES</code> dictionary.</p>
<p>Possibly, you simply misunderstood an error message saying that there is no <code>file</code> key in <code>FILES</code>.</p>
<p>And <code>form.is_valid</code> accesses the FILES data correctly, which is why the form is considered valid.</p>
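<p>In other words, using the field name from your own form, something along these lines should work:</p>
<pre><code>if form.is_valid():
    uploaded = request.FILES['docfile']   # or form.cleaned_data['docfile']
    groupList = extractTemplateNames(uploaded.read())
</code></pre>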
| 0 | 2016-09-29T23:28:02Z | [
"python",
"html",
"django",
"filefield"
]
|
Django FileField not including uploaded file in form output request | 39,780,247 | <p>I am having trouble getting access to a simple uploaded file which I need to parse without saving it. Since the file does not need to be saved I have not made a model for it. All other threads on this state the html form encoding type, name tag are the primary reasons why request.FILES is not appearing-- I have addressed these and still there is no request.FILES being captured.</p>
<p><strong>forms.py</strong></p>
<pre><code>class DataExportForm(forms.Form):
docfile = forms.FileField(label='Select a template',help_text='Extracted info will be returned')
</code></pre>
<p><strong>HTML</strong></p>
<pre><code><html>
<body>
<form action="." method="post" enctype="multipart/form-data">{% csrf_token %}
<tr><th><label for="id_docfile">Select a template:</label></th><td><input id="id
_docfile" name="docfile" type="file" />
<button type="submit" name="action">Upload</button>
<br /><span class="helptext">Zip file wil
l be returned with data</span></td></tr>
</form>
</body>
</html>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>if request.method=='POST':
form=DataExportForm(request.POST, request.FILES)
if form.is_valid():
#Code runs OK till here but request.FILES does not exist.
groupList=extractTemplateNames(request.FILES['file'].read())
</code></pre>
<p>I guess if I get it working I may find the file not in request.FILES['file'] but in request.FILES['docfile'] but at this point request.FILES does not exist. Any tips to solve this would be appreciated.</p>
| 0 | 2016-09-29T21:23:35Z | 39,801,190 | <p>So apparently the only thing missing was looking for request.FILES['docfile']</p>
<p>instead of request.FILES['file']. It works! Hope this is helpful to someone.</p>
| 0 | 2016-09-30T22:55:39Z | [
"python",
"html",
"django",
"filefield"
]
|
Rock, Paper, Scissors doesn't let user win | 39,780,248 | <pre><code>#RoShamBo
import random
count=0
while count<2 and count> -2:
compnum=random.randint(0,2)
usernum=int(input("Scissor(0), Rock(1), Paper(2)"))
if compnum==0:
if usernum==0:
print("Draw")
elif usernum==1:
print("Win")
count=count+1
elif usernum==2:
print("Lose")
count=count-1
elif compnum==1:
if usernum==0:
print("Lose")
count=count-1
elif usernum==1:
print("Draw")
elif usernum==2:
print("Win")
count=count+1
elif compnum==2:
if usernum==0:
print("Win")
count=count+1
elif usernum==1:
print("Lose")
count=count-1
elif usernum==2:
print("Draw")
if count>2:
print("You won more than 2 times")
else:
print("The computer won more than 2 times")
</code></pre>
<p>The output is messed up -- for one thing, it won't let the user win. Also, it's not calculating the numbers properly. This was a lab assignment for a class in introduction to Python, but the professor I believe wrote the code incorrectly. Here's a sample broken output: </p>
<pre><code>============== RESTART: C:/Users/FieryAssElsa/Desktop/Broken.py ==============
Scissor(0), Rock(1), Paper(2)2
Draw
Scissor(0), Rock(1), Paper(2)2
Win
Scissor(0), Rock(1), Paper(2)2
Draw
Scissor(0), Rock(1), Paper(2)2
Lose
Scissor(0), Rock(1), Paper(2)2
Win
Scissor(0), Rock(1), Paper(2)2
Win
The computer won more than 2 times
</code></pre>
| 1 | 2016-09-29T21:23:36Z | 39,780,332 | <p>The <code>while</code> loop exits as soon as <code>count</code> reaches 2 or -2, so <code>count</code> can never be greater than 2 and the final check never triggers. You can try it with <code>if count==2:</code> (or <code>count>=2</code>) instead.</p>
| 2 | 2016-09-29T21:29:22Z | [
"python",
"while-loop"
]
|
Numpy Savetxt Overwriting, Cannot Figure Out Where to Place Loop | 39,780,364 | <p>I am creating a program that calculates correlations between my customer's data. I want to print the correlation values to a CSV so I can further analyze the data.</p>
<p>I have successfully gotten my program to loop through all the customers (12 months of data each) while calculating their individual correlations for multiple arrangements. I can see this if I print to the dialog.</p>
<p>However, when I try to save using Savetxt, I am only getting the final values I calculate.</p>
<p>I think I have placed my for loop in the wrong place, where should it go? I have tried checking out other questions, but it didn't shed too much light onto it.</p>
<p>EDIT: I have attempted aligning the writing with both the outer for loop and the inner for loop as suggested, both yielded the same results.</p>
<pre><code>for x_customer in range(0,len(overalldata),12):
for x in range(0,13,1):
cust_months = overalldata[0:x,1]
cust_balancenormal = overalldata[0:x,16]
cust_demo_one = overalldata[0:x,2]
cust_demo_two = overalldata[0:x,3]
num_acct_A = overalldata[0:x,4]
num_acct_B = overalldata[0:x,5]
#Correlation Calculations
demo_one_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_one)[1,0]
demo_two_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_two)[1,0]
demo_one_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_one)[1,0]
demo_one_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_one)[1,0]
demo_two_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_two)[1,0]
demo_two_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_two)[1,0]
result_correlation = [(demo_one_corr_balance),(demo_two_corr_balance),(demo_one_corr_acct_a),(demo_one_corr_acct_b),(demo_two_corr_acct_a),(demo_two_corr_acct_b)]
result_correlation_combined = emptylist.append([result_correlation])
cust_delete_list = [0,(x_customer),1]
overalldata = numpy.delete(overalldata, (cust_delete_list), axis=0)
numpy.savetxt('correlationoutput.csv', numpy.column_stack(result_correlation), delimiter=',')
print result_correlation
</code></pre>
| 0 | 2016-09-29T21:32:10Z | 39,781,761 | <p>This portion of the code is just sloppy:</p>
<pre><code> result_correlation = [(demo_one_corr_balance),...]
result_correlation_combined = emptylist.append([result_correlation])
cust_delete_list = [0,(x_customer),1]
overalldata = numpy.delete(overalldata, (cust_delete_list), axis=0)
numpy.savetxt('correlationoutput.csv', numpy.column_stack(result_correlation), delimiter=',')
print result_correlation
</code></pre>
<p>You set <code>result_correlation</code> in the inner most loop, and then you use it in the final save and print. Obviously it will print the result of the last loop.</p>
<p>Meanwhile you append it to <code>result_correlation_combined</code>, outside of the <code>x</code> loop, near the end of the <code>x_customer</code> loop. But you don't do anything with that list.</p>
<p>And finally in the <code>x_customer</code> loop you play with <code>overalldata</code>, but I don't see any further use.</p>
<p>Forget about the <code>savetxt</code> for now, and get the data collection straight.</p>
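<p>For example, a minimal accumulate-then-write pattern (the loop body here is only a placeholder for your real correlation calculations):</p>
<pre><code>import numpy as np

rows = []                          # one row of correlations per customer
for customer in range(3):          # stand-in for your x_customer loop
    result_correlation = [customer, customer * 2.0, customer * 3.0]
    rows.append(result_correlation)

np.savetxt('correlationoutput.csv', np.array(rows), delimiter=',')  # write once, after the loop
</code></pre>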
| 1 | 2016-09-30T00:00:15Z | [
"python",
"csv",
"numpy",
"for-loop",
"overwrite"
]
|
Numpy Savetxt Overwriting, Cannot Figure Out Where to Place Loop | 39,780,364 | <p>I am creating a program that calculates correlations between my customer's data. I want to print the correlation values to a CSV so I can further analyze the data.</p>
<p>I have successfully gotten my program to loop through all the customers (12 months of data each) while calculating their individual correlations for multiple arrangements. I can see this if I print to the dialog.</p>
<p>However, when I try to save using Savetxt, I am only getting the final values I calculate.</p>
<p>I think I have placed my for loop in the wrong place, where should it go? I have tried checking out other questions, but it didn't shed too much light onto it.</p>
<p>EDIT: I have attempted aligning the writing with both the outer for loop and the inner for loop as suggested, both yielded the same results.</p>
<pre><code>for x_customer in range(0,len(overalldata),12):
for x in range(0,13,1):
cust_months = overalldata[0:x,1]
cust_balancenormal = overalldata[0:x,16]
cust_demo_one = overalldata[0:x,2]
cust_demo_two = overalldata[0:x,3]
num_acct_A = overalldata[0:x,4]
num_acct_B = overalldata[0:x,5]
#Correlation Calculations
demo_one_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_one)[1,0]
demo_two_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_two)[1,0]
demo_one_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_one)[1,0]
demo_one_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_one)[1,0]
demo_two_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_two)[1,0]
demo_two_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_two)[1,0]
result_correlation = [(demo_one_corr_balance),(demo_two_corr_balance),(demo_one_corr_acct_a),(demo_one_corr_acct_b),(demo_two_corr_acct_a),(demo_two_corr_acct_b)]
result_correlation_combined = emptylist.append([result_correlation])
cust_delete_list = [0,(x_customer),1]
overalldata = numpy.delete(overalldata, (cust_delete_list), axis=0)
numpy.savetxt('correlationoutput.csv', numpy.column_stack(result_correlation), delimiter=',')
print result_correlation
</code></pre>
| 0 | 2016-09-29T21:32:10Z | 39,782,057 | <p>I took the advice of the above poster and corrected my code. I am now able to write to a file. However, I am having trouble with the number of iterations completed; I will post that in a different question as it is unrelated. Here is the solution that I used.</p>
<pre><code>for x_customer in range(0,len(overalldata),12):
for x in range(0,13,1):
cust_months = overalldata[0:x,1]
cust_balancenormal = overalldata[0:x,16]
cust_demo_one = overalldata[0:x,2]
cust_demo_two = overalldata[0:x,3]
num_acct_A = overalldata[0:x,4]
num_acct_B = overalldata[0:x,5]
out_mark_channel_one = overalldata[0:x,25]
out_service_channel_two = overalldata[0:x,26]
out_mark_channel_three = overalldata[0:x,27]
out_mark_channel_four = overalldata[0:x,28]
#Correlation Calculations
#Demographic to Balance Correlations
demo_one_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_one)[1,0]
demo_two_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_two)[1,0]
#Demographic to Account Number Correlations
demo_one_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_one)[1,0]
demo_one_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_one)[1,0]
demo_two_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_two)[1,0]
demo_two_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_two)[1,0]
#Marketing Response Channel One
mark_one_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_one)[1, 0]
mark_one_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_one)[1, 0]
mark_one_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_one)[1, 0]
#Marketing Response Channel Two
mark_two_corr_acct_a = numpy.corrcoef(num_acct_A, out_service_channel_two)[1, 0]
mark_two_corr_acct_b = numpy.corrcoef(num_acct_B, out_service_channel_two)[1, 0]
mark_two_corr_balance = numpy.corrcoef(cust_balancenormal, out_service_channel_two)[1, 0]
#Marketing Response Channel Three
mark_three_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_three)[1, 0]
mark_three_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_three)[1, 0]
mark_three_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_three)[1, 0]
#Marketing Response Channel Four
mark_four_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_four)[1, 0]
mark_four_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_four)[1, 0]
mark_four_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_four)[1, 0]
#Result Correlations For Exporting to CSV of all Correlations
result_correlation = [(demo_one_corr_balance),(demo_two_corr_balance),(demo_one_corr_acct_a),(demo_one_corr_acct_b),(demo_two_corr_acct_a),(demo_two_corr_acct_b),(mark_one_corr_acct_a),(mark_one_corr_acct_b),(mark_one_corr_balance),
(mark_two_corr_acct_a),(mark_two_corr_acct_b),(mark_two_corr_balance),(mark_three_corr_acct_a),(mark_three_corr_acct_b),(mark_three_corr_balance),(mark_four_corr_acct_a),(mark_four_corr_acct_b),
(mark_four_corr_balance)]
result_correlation_nan_nuetralized = numpy.nan_to_num(result_correlation)
c.writerow(result_correlation)
result_correlation_combined = emptylist.append([result_correlation])
cust_delete_list = [0,x_customer,1]
overalldata = numpy.delete(overalldata, (cust_delete_list), axis=0)
</code></pre>
| 0 | 2016-09-30T00:46:36Z | [
"python",
"csv",
"numpy",
"for-loop",
"overwrite"
]
|
Group by Column in Dataframe and create separate csv for each group | 39,780,399 | <p>I have a huge csv file of 1 GB which contains records for each day. Example like below:</p>
<pre><code>Date orderquantity
2015-06-19 23
2015-06-19 30
2015-06-20 33
2015-06-20 40
</code></pre>
<p>So a record is present for each and every day. Is there an efficient way in a Python Pandas data frame to group the data according to date and then store it as a separate csv for each date?</p>
<p>My output result for above example would be </p>
<pre><code> CSV 1
Date orderquantity
2015-06-19 23
2015-06-19 30
CSV 2
Date orderquantity
2015-06-20 33
2015-06-20 40
</code></pre>
<p>Will I have to like sort/group by date in the data frame and then have a for loop and iterate through the entire data frame ? </p>
| 1 | 2016-09-29T21:35:23Z | 39,780,621 | <p>Try this:</p>
<pre><code>for name, group in df.groupby('Date'):
group.to_csv('{}.csv'.format(name), index=False)
</code></pre>
| 1 | 2016-09-29T21:53:29Z | [
"python",
"csv",
"pandas",
"dataframe"
]
|
python3: Read json file from url | 39,780,403 | <p>In python3, I want to load <a href="https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json" rel="nofollow">this_file</a>, which is a json format.</p>
<p>Basically, I want to do something like [pseudocode]:</p>
<pre><code>>>> read_from_url = urllib.some_method_open(this_file)
>>> my_dict = json.load(read_from_url)
>>> print(my_dict['some_key'])
some value
</code></pre>
| 1 | 2016-09-29T21:35:31Z | 39,780,472 | <p>So you want to be able to reference specific values by their keys? If I understand what you want to do, this should help you get started. You will need the libraries urllib2, json, and bs4; just pip install them, it's easy.</p>
<pre><code>import urllib2
import json
from bs4 import BeautifulSoup
url = urllib2.urlopen("https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json")
content = url.read()
soup = BeautifulSoup(content, "html.parser")
newDictionary=json.loads(str(soup))
</code></pre>
<p>I used a commonly used url to practice with.</p>
| 1 | 2016-09-29T21:40:57Z | [
"python",
"json",
"url"
]
|
python3: Read json file from url | 39,780,403 | <p>In python3, I want to load <a href="https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json" rel="nofollow">this_file</a>, which is a json format.</p>
<p>Basically, I want to do something like [pseudocode]:</p>
<pre><code>>>> read_from_url = urllib.some_method_open(this_file)
>>> my_dict = json.load(read_from_url)
>>> print(my_dict['some_key'])
some value
</code></pre>
| 1 | 2016-09-29T21:35:31Z | 39,780,510 | <p>You were close:</p>
<pre><code>import requests
import json
response = json.loads(requests.get("your_url").text)
</code></pre>
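<p>requests can also decode the JSON for you, so an equivalent sketch is:</p>
<pre><code>import requests

my_dict = requests.get("your_url").json()
print(my_dict['some_key'])
</code></pre>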
| 3 | 2016-09-29T21:43:42Z | [
"python",
"json",
"url"
]
|
python3: Read json file from url | 39,780,403 | <p>In python3, I want to load <a href="https://www.govtrack.us/data/congress/113/votes/2013/s11/data.json" rel="nofollow">this_file</a>, which is a json format.</p>
<p>Basically, I want to do something like [pseudocode]:</p>
<pre><code>>>> read_from_url = urllib.some_method_open(this_file)
>>> my_dict = json.load(read_from_url)
>>> print(my_dict['some_key'])
some value
</code></pre>
| 1 | 2016-09-29T21:35:31Z | 39,780,515 | <p>Just use json and requests modules:</p>
<pre><code>import requests, json
content = requests.get("http://example.com")
json = json.loads(content.content)
</code></pre>
| 3 | 2016-09-29T21:43:54Z | [
"python",
"json",
"url"
]
|
scikit-learn - how to force selection of at least a single label in LinearSVC | 39,780,473 | <p>I'm doing a <a href="http://scikit-learn.org/stable/modules/multiclass.html" rel="nofollow">multi-label classification</a>. I've trained on a dataset and am getting back suggested labels. However, not all have at least a single label. I'm running into this exact issue that was <a href="http://scikit-learn-general.narkive.com/lm8imv9z/linearsvc-somtimes-returns-no-label" rel="nofollow">discussed on the mailing list</a>. It looks like there was discussion around potentially adding a parameter to force selection of a minimum number of labels, however, in looking at the documentation I don't see that it was ever implemented. I don't quite understand the suggested hack. Is there no way to do this after all the learning has completed?</p>
<p>The learning portion of my code:</p>
<pre><code>lb = preprocessing.MultiLabelBinarizer()
Y = lb.fit_transform(y_train_text)
classifier = Pipeline([
('vectorizer', CountVectorizer(stop_words="english")),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = lb.inverse_transform(predicted)
</code></pre>
| 0 | 2016-09-29T21:40:58Z | 39,792,433 | <p>Thanks to "lejlot", this was extremely close to what I wanted. I didn't want to override the cases where I had one or more predictions though. This is what I came up with that seems to be working:</p>
<pre><code>lb = preprocessing.MultiLabelBinarizer()
Y = lb.fit_transform(y_train_text)
classifier = Pipeline([
('vectorizer', CountVectorizer(stop_words="english")),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
x = classifier.decision_function(X_test)
predicted_all = sp.sign(x - x.max(1).reshape(x.shape[0], 1) + 1e-20)
predicted_all = (predicted_all + 1)/2
for i in range(0, len(predicted)):
#if we never came up with a prediction, use our "forced" single prediction
if (all(v == 0 for v in predicted[i])):
predicted[i] = predicted_all[i]
all_labels = lb.inverse_transform(predicted)
</code></pre>
| 0 | 2016-09-30T13:14:17Z | [
"python",
"machine-learning",
"scipy",
"scikit-learn"
]
|
Python - Split crontab entry to 6 fields | 39,780,573 | <p>I would like to know how to split a crontab line into 6 variables like the following. Maybe with split() or other string functions.</p>
<pre><code>Input:
0 5 * * * Command1 arg1 arg2;Command2 arg1 arg2;...
Output:
Var1 = 0
Var2 = 5
Var3 = *
Var4 = *
Var5 = Command1 arg1 arg2;Command2 arg1 arg2;...
</code></pre>
<p>Thanks</p>
| 0 | 2016-09-29T21:49:28Z | 39,780,603 | <p><a href="https://docs.python.org/3.6/library/stdtypes.html#str.split" rel="nofollow"><code>str.split()</code></a> has an optional 2nd argument to limit the number of returned values:</p>
<pre><code>In [11]: s='0 5 * * * Command1 arg1 arg2;Command2 arg1 arg2;...'
In [12]: s.split(None, 5)
Out[12]: ['0', '5', '*', '*', '*', 'Command1 arg1 arg2;Command2 arg1 arg2;...']
</code></pre>
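<p>Unpacking the result into six named variables is then straightforward (assuming the line is well-formed):</p>
<pre><code>minute, hour, dom, month, dow, command = s.split(None, 5)
</code></pre>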
<p>Complete sample program:</p>
<pre><code>import fileinput
for line in fileinput.input():
parts = line.split(None, 5)
print('Input:')
print(line)
print('Output:')
print('\n'.join('Var{} = {}'.format(i, v) for i,v in enumerate(parts)))
</code></pre>
| 1 | 2016-09-29T21:51:40Z | [
"python",
"split",
"crontab"
]
|
UDP- Trouble reading bytes from socket in python? | 39,780,683 | <p>I'm trying to send 14bit sensor data from a microcontroller to a PC using UDP Protocol. When I send the data and receive it on the <code>package sender</code> application I am getting data in hex as expected.</p>
<p><a href="http://i.stack.imgur.com/xyUBI.jpg" rel="nofollow">Energia Code: </a></p>
<p><a href="http://i.stack.imgur.com/p9xeA.png" rel="nofollow">Python, <code>Package Sender</code> app screenshots</a></p>
<p>Here, I am receiving it as char.</p>
<p>Sensor value in decimal: <code>(855) --- hex(357)</code> higher byte <strong>03</strong>, lower byte <strong>57</strong>. <code>57h</code> is char <code>W</code> in Ascii table</p>
<p>So when received through the socket, python outputs this as <strong>03W</strong></p>
<p>How to receive in hex and convert it to decimal?
Thank you in advance!!</p>
| 0 | 2016-09-29T21:59:40Z | 39,782,080 | <p>You can use the <code>struct</code> module to unpack it like this:</p>
<pre><code>import struct
data = ['\x03W']                       # the two raw bytes received from the socket
val = struct.unpack('>H', data[0])[0]  # '>H' = big-endian unsigned short -> 855
</code></pre>
| 0 | 2016-09-30T00:50:16Z | [
"python",
"arduino",
"network-programming",
"udp",
"serversocket"
]
|
Python index out of range when missing user user input | 39,780,700 | <p>I know this is a simple fix--but I can't for the life of me figure out how to fix this IndexError. </p>
<pre><code>def show_status():
print("\nThis is the " + rooms[current_room]["name"])
rooms = {
1 : { "name" : "Highway" ,
"west" : 2 ,
"east" : 2 ,
"north": 2 ,
"south": 2} ,
2 : { "name" : "Forest" ,
"west" : 1 ,
"east" : 1 ,
"north": 1 ,
"south": 1} ,
}
current_room = 1
while True:
show_status()
move = input(">> ").lower().split()
if move[0] == "go":
if move[1] in rooms[current_room]:
current_room = rooms[current_room][move[1]]
else:
print("you can't go that way!")
else:
print("You didn't type anything!")
</code></pre>
<p>If the user pushes "return" without putting a value in for move, the game crashes with a "List Index out of range". I don't understand why the "else" isn't catching that in the while loop. </p>
| 1 | 2016-09-29T22:01:37Z | 39,780,749 | <p><code>move[0]</code> accesses the first member of the list and will throw an <code>IndexError</code> if <code>move</code> is empty, as when the user simply presses enter. You can check that <code>move</code> is non-empty first: if it isn't, the <code>and</code> operator will short-circuit and skip the next check.</p>
<p>It seems that you are expecting a user input with one space, causing two members. You should check that <code>len(move) == 2</code> to ensure this.</p>
<p>Amend as follows:</p>
<pre><code># ...
move = input(">> ").lower().split()
if len(move) == 2 and move[0] == "go":
# the rest
</code></pre>
| 1 | 2016-09-29T22:05:45Z | [
"python",
"indexoutofrangeexception"
]
|
Hiding rows in QTableWidget if 1 of the column does not have any values | 39,780,704 | <p>I had wanted some opinions about a portion of code that I have written. My UI consists of a QTableWidget in which it has 2 columns, where one of the 2 columns are populated with QComboBox.</p>
<p>For the first column, it will fill in the cells with the list of character rigs (full path) it finds in the scene, while the second column will creates a combobox per cell and it populates in the color options as the option comes from a json file.</p>
<p>Right now I am trying to create some radio buttons that gives user the option to show all the results, or it will hides those rows if there are no color options within the combobox for that particular row.</p>
<p>As you can see in my code, I am populating the data per column, and so, when I tried to put in <code>if not len(new_sub_name) == 0:</code> while it does not put in any combobox with zero options, but how do I go about hiding such rows where there are no options in the combobox? </p>
<pre><code>def populate_table_data(self):
self.sub_names, self.fullpaths = get_chars_info()
# Output Results
# self.sub_names : ['/character/nicholas/generic', '/character/mary/default']
# self.fullpaths : ['|Group|character|nicholas_generic_001', '|Group|character|mary_default_001']
# Insert fullpath into column 1
for fullpath_index, fullpath_item in enumerate(self.fullpaths):
new_path = QtGui.QTableWidgetItem(fullpath_item)
self.character_table.setItem(fullpath_index, 0, new_path)
self.character_table.resizeColumnsToContents()
# Insert colors using itempath into column 2
for sub_index, sub_name in enumerate(self.sub_names):
new_sub_name = read_json(sub_name)
if not len(new_sub_name) == 0:
self.costume_color = QtGui.QComboBox()
self.costume_color.addItems(list(sorted(new_sub_name)))
self.character_table.setCellWidget(sub_index, 1, self.costume_color)
</code></pre>
| 0 | 2016-09-29T22:01:49Z | 39,781,300 | <p>You can hide rows using <a href="http://doc.qt.io/qt-4.8/qtableview.html#setRowHidden" rel="nofollow">setRowHidden</a>. As for the rest of the code, I don't see much wrong with what you currently have, but FWIW I would write it something like this (completely untested, of course):</p>
<pre><code>def populate_table_data(self):
self.sub_names, self.fullpaths = get_chars_info()
items = zip(self.sub_names, self.fullpaths)
for index, (sub_name, fullpath) in enumerate(items):
new_path = QtGui.QTableWidgetItem(fullpath)
self.character_table.setItem(index, 0, new_path)
new_sub_name = read_json(sub_name)
if len(new_sub_name):
combo = QtGui.QComboBox()
combo.addItems(sorted(new_sub_name))
self.character_table.setCellWidget(index, 1, combo)
else:
self.character_table.setRowHidden(index, True)
self.character_table.resizeColumnsToContents()
</code></pre>
| 1 | 2016-09-29T23:00:23Z | [
"python",
"combobox",
"pyqt",
"maya"
]
|
how to generate a responsive PDF with Django? | 39,780,715 | <p>How can I generate a responsive PDF with Django?</p>
<p>I want to generate a PDF with Django, but I need it to be responsive, that is to say, the text of the PDF has to adapt so that no empty space is left.</p>
<p>For example, for an agreement the text changes, so I need it to adapt to the space of the sheet of paper.</p>
| 1 | 2016-09-29T22:02:24Z | 39,792,862 | <p>PDF is not built to be responsive, it is built to display the same no matter where it is viewed.</p>
<p>As @alxs pointed out in a comment, there are a few features that PDF viewing applications have added to simulate PDFs being responsive. Acrobat's <em>Reflow</em> feature is the best example of this that I am aware of and even it struggles with most PDFs that users come across in the wild.</p>
<p>One of the components (if not the only one) that matters for a PDF to be useful in Acrobat's <em>Reflow</em> mode is making sure that the PDFs you are creating contain structure information; this is known as Tagged PDF. Tagged PDF contains content that has been marked, similar to HTML tags, so that the text that makes up a paragraph is tagged in the PDF as being a paragraph. A number of PDF tools (creation or viewing) do not interpret the structure of a PDF, though.</p>
| 1 | 2016-09-30T13:36:09Z | [
"python",
"html",
"django",
"pdf-generation",
"weasyprint"
]
|
How to remove spacing inside a gridLayout (QT)? | 39,780,766 | <p>I want to create a child container layout which will contains 2 widgets. Those 2 widgets should be placed right next to each other but my current setup still has some spacing in between.</p>
<p>I have already set the spacing to 0 with <code>setSpacing(0)</code>, and <code>setContentsMargins(0,0,0,0)</code> didn't help either.</p>
<p>I am using PyQt5 but it shouldn't be a problem converting c++ code.</p>
<p>As you can see in the picture there is still a small gap:</p>
<p><a href="http://i.stack.imgur.com/PiuHk.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/PiuHk.jpg" alt=""></a>
(Left: LineEdit - Right: PushButton)</p>
<pre><code>import PyQt5.QtCore as qc
import PyQt5.QtGui as qg
import PyQt5.QtWidgets as qw
import sys
class Window(qw.QWidget):
def __init__(self):
qw.QWidget.__init__(self)
self.initUI()
def initUI(self):
gridLayout = qw.QGridLayout()
height = 20
self.label1 = qw.QLabel("Input:")
self.label1.setFixedHeight(height)
gridLayout.addWidget(self.label1, 0, 0)
# Child Container
childGridLayout = qw.QGridLayout()
childGridLayout.setContentsMargins(0,0,0,0)
childGridLayout.setHorizontalSpacing(0)
self.lineEdit1 = qw.QLineEdit()
self.lineEdit1.setFixedSize(25, height)
childGridLayout.addWidget(self.lineEdit1, 0, 0)
self.pushButton1 = qw.QPushButton("T")
self.pushButton1.setFixedSize(20, height)
childGridLayout.addWidget(self.pushButton1, 0, 1)
# -----------------
gridLayout.addItem(childGridLayout, 0,1)
self.setLayout(gridLayout)
if __name__ == '__main__':
app = qw.QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec_())
</code></pre>
| -1 | 2016-09-29T22:07:32Z | 39,781,072 | <p>The QT documentation says:
<strong>By default, QLayout uses the values provided by the style. On most platforms, the margin is 11 pixels in all directions.</strong></p>
<p>Ref:<a href="http://doc.qt.io/qt-4.8/qlayout.html#setContentsMargins" rel="nofollow">http://doc.qt.io/qt-4.8/qlayout.html#setContentsMargins</a></p>
<p>So you may need to use "setHorizontalSpacing(int spacing)" for horizontal space and "setVerticalSpacing(int spacing)" for vertical.</p>
<p>Based on the documentation, this may remove the space in your case.
Ref:<a href="http://doc.qt.io/qt-4.8/qgridlayout.html#horizontalSpacing-prop" rel="nofollow">http://doc.qt.io/qt-4.8/qgridlayout.html#horizontalSpacing-prop</a></p>
<p>If that does not resolve it, there is also the option of overriding the spacing the layout gets from the style, though I think this is tedious.</p>
<p>If you want to provide custom layout spacings in a QStyle subclass, implement a slot called layoutSpacingImplementation() in your subclass.</p>
<p>More details:
<a href="http://doc.qt.io/qt-4.8/qstyle.html#layoutSpacingImplementation" rel="nofollow">http://doc.qt.io/qt-4.8/qstyle.html#layoutSpacingImplementation</a></p>
| 0 | 2016-09-29T22:38:24Z | [
"python",
"c++",
"qt",
"pyqt",
"qgridlayout"
]
|
How to build a sparkSession in Spark 2.0 using pyspark? | 39,780,792 | <p>I just got access to spark 2.0; I have been using spark 1.6.1 up until this point. Can someone please help me set up a sparkSession using pyspark (python)? I know that the scala examples available online are similar (<a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6122906529858466/431554386690884/4814681571895601/latest.html" rel="nofollow">here</a>), but I was hoping for a direct walkthrough in python language. </p>
<p>My specific case: I am loading in avro files from S3 in a zeppelin spark notebook. Then building df's and running various pyspark & sql queries off of them. All of my old queries use sqlContext. I know this is poor practice, but I started my notebook with </p>
<p><code>sqlContext = SparkSession.builder.enableHiveSupport().getOrCreate()</code>. </p>
<p>I can read in the avros with </p>
<p><code>mydata = sqlContext.read.format("com.databricks.spark.avro").load("s3:...</code> </p>
<p>and build dataframes with no issues. But once I start querying the dataframes/temp tables, I keep getting the "java.lang.NullPointerException" error. I think that is indicative of a translational error (e.g. old queries worked in 1.6.1 but need to be tweaked for 2.0). The error occurs regardless of query type. So I am assuming </p>
<p>1.) the sqlContext alias is a bad idea </p>
<p>and </p>
<p>2.) I need to properly set up a sparkSession. </p>
<p>So if someone could show me how this is done, or perhaps explain the discrepancies they know of between the different versions of spark, I would greatly appreciate it. Please let me know if I need to elaborate on this question. I apologize if it is convoluted. </p>
| 1 | 2016-09-29T22:09:33Z | 39,781,384 | <p>As you can see in the Scala example, SparkSession is part of the SQL module, and the same is true in Python; see the <a href="http://spark.apache.org/docs/latest/api/python/pyspark.sql.html" rel="nofollow">pyspark sql module documentation</a>:</p>
<blockquote>
<p>class pyspark.sql.SparkSession(sparkContext, jsparkSession=None) The
entry point to programming Spark with the Dataset and DataFrame API. A
SparkSession can be used create DataFrame, register DataFrame as
tables, execute SQL over tables, cache tables, and read parquet files.
To create a SparkSession, use the following builder pattern:</p>
</blockquote>
<pre><code>>>> spark = SparkSession.builder \
... .master("local") \
... .appName("Word Count") \
... .config("spark.some.config.option", "some-value") \
... .getOrCreate()
</code></pre>
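<p>Once the session exists, the avro read from the question can go through it directly. A hedged sketch follows; the bucket/path is a placeholder and the avro package still has to be available to Spark:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName("avro-example") \
    .enableHiveSupport() \
    .getOrCreate()
# placeholder path: substitute your real S3 location
mydata = spark.read.format("com.databricks.spark.avro") \
    .load("s3://your-bucket/path")
mydata.createOrReplaceTempView("mydata")
spark.sql("SELECT COUNT(*) FROM mydata").show()
</code></pre>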
| 0 | 2016-09-29T23:11:01Z | [
"python",
"sql",
"apache-spark",
"pyspark"
]
|
Why is python adding "ï £" to my filenames when decrypting AES? | 39,780,940 | <p>I'm not running into any error just the output for my program is doing something strange. I've also noticed this thread here: <a href="http://stackoverflow.com/questions/30070309/encrypt-file-with-aes-256-and-decrypt-file-to-its-original-format">Encrypt file with AES-256 and Decrypt file to its original format</a>. This is my own approach to this issue, so I hope this isn't considered a duplicate. I'll post my code below, and explain how it functions. (Not including the encryption code)</p>
<h1>For Encryption</h1>
<pre><code>path = 'files/*'
files = glob.glob(path)
with open('extensions.txt', 'w') as extension:
for listing in files:
endfile = os.path.splitext(listing)[1]
extension.write(endfile + "\n")
extension.close()
for in_filename in files:
out_filename1 = os.path.splitext(in_filename)[0]
out_filename = out_filename1 + '.pycrypt'
with open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
encrypt(in_file, out_file, password)
in_file.close()
out_file.close()
os.remove(in_filename)
print 'Files Encrypted'
</code></pre>
<h1>For Decryption</h1>
<pre><code>password = raw_input('Password-> ')
path = 'files/*'
files = glob.glob(path)
for in_filename in files:
f=open('extensions.txt')
lines=f.readlines()
counter+=1
out_filename1 = os.path.splitext(in_filename)[0]
out_filename = out_filename1 + lines[counter]
with open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
decrypt(in_file, out_file, password)
in_file.close()
out_file.close()
os.remove(in_filename)
print 'Files Decrypted'
</code></pre>
<p>The code takes all the files in a folder and encrypts them using AES. Then it changes all the files' extensions to .pycrypt, saving the old extension(s) into a file called "extensions.txt". After decryption it gives the files their extensions back by reading the text file line by line.</p>
<p>Here's the issue, after decryption every file goes from this:</p>
<p>15.png, sam.csv</p>
<p>To this</p>
<pre><code>15.pngï £, sam.csvï £
</code></pre>
<p>I've also noticed that if I re-encrypted the files with the ï £ symbol, the "extensions.txt" go from this:</p>
<pre><code>15.png
sam.csv
bill.jpeg
</code></pre>
<p>To this (notice the spaces):</p>
<pre><code>15.png
sam.csv
bill.jpeg
</code></pre>
<p>Any ideas what is causing this?</p>
| 0 | 2016-09-29T22:23:56Z | 39,781,113 | <p>Let's read the <a href="https://docs.python.org/2/library/stdtypes.html#bltin-file-objects" rel="nofollow">documentation</a> (emphasis mine):</p>
<blockquote>
<pre><code>file.readline([size])
</code></pre>
<p>Read one entire line from the file. <strong><em>A trailing newline character is kept in the string (but may be absent when a file ends with an
incomplete line).</em></strong> [6] If the size argument is present and
non-negative, it is a maximum byte count (including the trailing
newline) and an incomplete line may be returned. When size is not 0,
an empty string is returned only when EOF is encountered immediately.</p>
<pre><code>file.readlines([sizehint])
</code></pre>
<p>Read until EOF using <strong><em><code>readline()</code></em></strong> and return a list containing the lines thus read. If the optional sizehint argument is present, instead
of reading up to EOF, whole lines totalling approximately sizehint
bytes (possibly after rounding up to an internal buffer size) are
read. Objects implementing a file-like interface may choose to ignore
sizehint if it cannot be implemented, or cannot be implemented efficiently.</p>
</blockquote>
<p>This means that <code>lines[counter]</code> doesn't only contain the file extension, but also the newline character after that. You can remove all whitespace at the beginning and end with: <code>lines[counter].strip()</code>.</p>
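<p>Applied to the decryption loop from the question, the fix is a one-liner (sketch reusing the question's variable names):</p>
<pre><code>ext = lines[counter].strip()   # drops the trailing '\n' and any stray whitespace
out_filename = os.path.splitext(in_filename)[0] + ext
</code></pre>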
<p>A better way to do this is to encrypt a file "a.jpg" as "a.jpg.enc", so you don't need to store the extension in a separate file.</p>
| 2 | 2016-09-29T22:41:30Z | [
"python",
"linux",
"encryption",
"cryptography",
"filenames"
]
|
Getting Error with Django Phone Number Form Field | 39,780,945 | <p>I'm building a Django form that includes a phone number field. I've been referring to these two SO questions to understand how to do it: <a href="http://stackoverflow.com/questions/19130942/whats-the-best-way-to-store-phone-number-in-django-models">1</a>, <a href="http://stackoverflow.com/questions/6478875/regular-expression-matching-e-164-formatted-phone-numbers">2</a>. I've created this form field:</p>
<pre><code>class ContactForm(forms.Form):
phone = forms.RegexField(
regex = r'^\+?[1-9]\d{1,14}$',
#regex = r'\+?\d{10,14}$',
error_messages = {'required', 'Phone number required'},
widget = forms.TextInput(attrs={'class': 'form-control'})
)
</code></pre>
<p>I display the field in my template:</p>
<pre><code><div>
<label for="id_phone">Your Phone Number</label>
{{ form.phone.errors }}
{{ form.phone }}
</div>
</code></pre>
<p>I understand what the regexes are doing and they look correct to me. However, I'm getting this error if I use either one of them:</p>
<pre><code>ValueError at /business/contact/
dictionary update sequence element #0 has length 8; 2 is required
...
Exception Location: /srv/http/swingerpixels.com/venvs/dev/local/lib/python2.7/site-packages/django/forms/fields.py in __init__, line 125
(stacktrace...)
widget = forms.TextInput(attrs={'class': 'form-control'})
super(RegexField, self).__init__(max_length, min_length, *args, **kwargs)
super(CharField, self).__init__(*args, **kwargs)
messages.update(error_messages or {})
(end of stacktrace)
</code></pre>
<p>Can anyone see what's causing this error? It seems to be caused by the regexes.</p>
| 0 | 2016-09-29T22:24:11Z | 39,781,139 | <p>I just discovered the error. It's in this line:</p>
<pre><code>error_messages = {'required', 'Phone number required'},
</code></pre>
<p>With the comma, <code>{'required', 'Phone number required'}</code> is a <em>set</em> of two strings rather than a dict, so Django's <code>messages.update(error_messages)</code> tries to unpack each string as a (key, value) pair; that is where "element #0 has length 8; 2 is required" comes from ('required' is 8 characters long). I needed to replace the "," with a ":":</p>
<pre><code>error_messages = {'required': 'Phone number required'},
</code></pre>
| 0 | 2016-09-29T22:43:31Z | [
"python",
"django",
"django-forms"
]
|
Using while loops and variables | 39,780,993 | <p>I'm trying to write a collatz program from the 'Automate the Boring Stuff with Python book', but have ran into some problems. I'm using python 3.5.2. Here's the project outline:</p>
<blockquote>
<p>Write a function named collatz() that has one parameter named number. If number is even, then collatz() should print number // 2 and return this value. If number is odd, then collatz() should print and return 3 * number + 1. Then write a program that lets the user type in an integer and that keeps calling collatz() on that number until the function returns the value 1.</p>
</blockquote>
<p>My code:</p>
<pre><code>def collatz(number):
if number % 2 == 0: #its even
print(number // 2)
return number // 2
elif number % 2 == 1: #its odd
print(3*number+1)
return 3*number+1
print('Type an integer: ')
num=int(input())
while(True):
if collatz(num) == 1:
break
# Or even simpler:
# while(collatz(num) != 1):
# pass
</code></pre>
<p>The output gives me an infinite loop:</p>
<pre><code>Type an integer:
10
5
5
5
5
5
5
5
5
...
</code></pre>
<p>But when I break it down and use a variable to store the return value, it works: </p>
<pre><code> while(True):
num=collatz(num)
if num == 1:
break
</code></pre>
<p>Output:</p>
<pre><code>Type an integer:
5
16
8
4
2
1
</code></pre>
<p>Why is it? I don't understand why the first program doesn't work. Both are similar but I just chose to test the return value directly in my original program instead of using variables.
I'd appreciate any help, Thanks.</p>
| 0 | 2016-09-29T22:29:57Z | 39,781,021 | <p>Your code:</p>
<pre><code>while(True):
if collatz(num) == 1:
break
</code></pre>
<p>didn't work because every time <code>collatz</code> gets called, it gets called with the same value of <code>num</code> and as a result returns the same number again and again. That number is not 1, so you have an infinite loop.</p>
<p>When you do <code>num = collatz(num)</code>, the value of <code>num</code> is changed the first time the function is called. The new value is then passed in the second time the function is called, and so on. So eventually the value of <code>num</code> becomes 1 and you exit the loop.</p>
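<p>A slightly more compact way to write the working version, feeding each return value back into the next call (minimal sketch):</p>
<pre><code>num = int(input('Type an integer: '))
while num != 1:          # keep feeding the return value back in
    num = collatz(num)
</code></pre>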
| 2 | 2016-09-29T22:33:10Z | [
"python",
"variables",
"while-loop"
]
|
traversing folders, several subfolders for the files in python | 39,781,110 | <p>I've a folder structure similar to what's outlined below.</p>
<pre><code>Path
|
|
+----SubDir1
| |
| +---SubDir1A
| | |
| | |----- FileA.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileA.1001.ext
| | |----- FileB.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileB.1001.ext
| +---SubDir1B
|
| | |----- FileA.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileA.1001.ext
| | |----- FileB.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileB.1001.ext
+----SubDir2
| |
| |----- FileA.0001.ext
| |----- ...
| |----- ...
| |----- FileA.1001.ext
| |----- FileB.0001.ext
| |----- ...
| |----- ...
| |----- FileB.1001.ext
</code></pre>
<p>I want to be able to list the first FileA and first FileB for each SubDir1 and SubDir2</p>
<p>I've looked online and seen os.walk in a for loop, similar to:</p>
<pre><code>import os
rootDir = '.'
for dirName, subdirList, fileList in os.walk(rootDir):
print('Found directory: %s' % dirName)
for fname in fileList:
print('\t%s' % fname)
# Remove the first entry in the list of sub-directories
# if there are any sub-directories present
if len(subdirList) > 0:
        del subdirList[0]
</code></pre>
<p>But that seems to only work if there's a file directly inside a subdirectory. My problem is that sometimes there's an additional subdirectory inside the subdirectory(!!)</p>
<p>Does anyone have any ideas how to solve this?</p>
| 1 | 2016-09-29T22:41:09Z | 39,829,545 | <p>Your issue is actually those two lines; remove them and you should be fine:</p>
<pre><code>if len(subdirList) > 0:
del subdirList[0]
</code></pre>
<p><strong>Explanation</strong> : </p>
<p>What they do is <strong>they make the first subdirectory inside each directory disappear before <code>os.walk</code> had time to walk it</strong>. So it is not surprising that you get weird behaviour regarding subdirectories.</p>
<p>Here's an illustration of that behaviour using the following tree :</p>
<pre><code>test0/
├── test10
│   ├── test20
│   │   └── testA
│   ├── test21
│   │   └── testA
│   └── testA
├── test11
│   ├── test22
│   │   └── testA
│   ├── test23
│   │   └── testA
│   └── testA
└── testA
</code></pre>
<p><strong>Without</strong> the problematic lines:</p>
<pre><code>Found directory: ./test/test0
testA
Found directory: ./test/test0/test10
testA
Found directory: ./test/test0/test10/test21
testA
Found directory: ./test/test0/test10/test20
testA
Found directory: ./test/test0/test11
testA
Found directory: ./test/test0/test11/test22
testA
Found directory: ./test/test0/test11/test23
testA
</code></pre>
<p><strong>With</strong> the problematic lines:</p>
<pre><code>Found directory: ./test/test0
testA
Found directory: ./test/test0/test11
testA
Found directory: ./test/test0/test11/test23
testA
</code></pre>
<p>So we clearly see that the two subfolders <code>test10</code> and <code>test22</code> that were first in line have been ignored altogether because of the "bad lines".</p>
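<p>For completeness, here is the question's loop with those two lines simply removed (minimal sketch):</p>
<pre><code>import os
rootDir = '.'
for dirName, subdirList, fileList in os.walk(rootDir):
    print('Found directory: %s' % dirName)
    for fname in fileList:
        print('\t%s' % fname)
</code></pre>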
| 0 | 2016-10-03T10:32:12Z | [
"python",
"glob",
"folders",
"subdirectories",
"os.walk"
]
|
Python: Looping through a list of dictionaries and extracting values to a new dictionary | 39,781,154 | <p>I need to loop on a list of dictionaries and check if a value exist. If it exists then I take another value from this same dictionary and store it on a new dictionary inside another list. What I have is this</p>
<pre><code>class_copy=[]
for root, dirs, files in os.walk(files_path+"/TD"):
for file in files:
file_name=os.path.splitext(file)[0]
for d in data_list:
if d['id'] == file_name:
cc['class']=d['fic']
class_copy.append(cc)
break
</code></pre>
<p>So I loop through some files. data_list is a list of dictionaries. These dictionaries each have an 'id' which matches a file name so when the dictionary 'd' with the value 'id' is found I take the value of 'fic' on dictionary 'd' and make a new dictionary with the key 'class' to store the value of 'fic'. Then I store this dictionary on a new list of dictionaries called class_copy. </p>
<p>The problem is that after looping through all, all the dictionaries in class_copy are the same. I guess that by looping and changing the instance of d the values on class_copy are also changing but how could I retain the values? Is there a better way to do this?</p>
| -2 | 2016-09-29T22:44:46Z | 39,781,192 | <p>You are not actually creating a <em>new</em> dictionary, you are updating an existing one:</p>
<pre><code>cc['class']=d['fic']
</code></pre>
<p>simply updates the value associated with the key <code>'class'</code>. Change your code to this:</p>
<pre><code>cc = {'class': d['fic']}
</code></pre>
<p>which will create a new instance of a dictionary.</p>
<p>The reason that all entries in your list end up being the same is because each item in the list is the <em>same</em> dictionary. By creating a new dictionary as shown you will have independent instances in the list as you were expecting.</p>
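<p>Put back into the question's inner loop, the change looks like this (sketch reusing the question's names):</p>
<pre><code>for d in data_list:
    if d['id'] == file_name:
        # a fresh dict is built on every match, so entries already
        # stored in class_copy are no longer overwritten
        class_copy.append({'class': d['fic']})
        break
</code></pre>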
| 3 | 2016-09-29T22:49:34Z | [
"python",
"loops",
"dictionary"
]
|
Python: Looping through a list of dictionaries and extracting values to a new dictionary | 39,781,154 | <p>I need to loop on a list of dictionaries and check if a value exist. If it exists then I take another value from this same dictionary and store it on a new dictionary inside another list. What I have is this</p>
<pre><code>class_copy=[]
for root, dirs, files in os.walk(files_path+"/TD"):
for file in files:
file_name=os.path.splitext(file)[0]
for d in data_list:
if d['id'] == file_name:
cc['class']=d['fic']
class_copy.append(cc)
break
</code></pre>
<p>So I loop through some files. data_list is a list of dictionaries. These dictionaries each have an 'id' which matches a file name so when the dictionary 'd' with the value 'id' is found I take the value of 'fic' on dictionary 'd' and make a new dictionary with the key 'class' to store the value of 'fic'. Then I store this dictionary on a new list of dictionaries called class_copy. </p>
<p>The problem is that after looping through all, all the dictionaries in class_copy are the same. I guess that by looping and changing the instance of d the values on class_copy are also changing but how could I retain the values? Is there a better way to do this?</p>
| -2 | 2016-09-29T22:44:46Z | 39,781,197 | <pre><code>class_copy.append({'class': d['fic']})
</code></pre>
| 0 | 2016-09-29T22:49:59Z | [
"python",
"loops",
"dictionary"
]
|
multiple for loop to select special rows from a dataframe in python | 39,781,176 | <p>I have a large data frame in python and I want to select specific rows based on multiple for loops. Some columns contain lists in them. My final goal is to generate some optimization constraints and pass them through another software:</p>
<pre><code> T S W Arrived Departed
[1,2] [4,2] 1 8 10
[3,4,5] [3] 1 12 18
[6,7] [1,2] 2 10 11
. . . . .
. . . . .
def Cons(row):
if row['W'] == w and sum(pd.Series(row['T']).isin([t])) != 0 and sum(pd.Series(row['S']).isin([s])) != 0:
return 1
for w in range(50):
for s in range(30):
for t in range(12):
df.Situation = df.apply(Cons, axis = 1)
A = df[ (df.Situation == 1) ]
A1 = pd.Series(A.Arrived).tolist()
D1 = pd.Series(A.Departed).tolist()
Time = tuplelist(zip(A1,D1))
</code></pre>
<p>How can I efficiently do this because going through multiple for loops takes a long time to run?</p>
| 1 | 2016-09-29T22:46:44Z | 39,783,264 | <p>Currently, you are re-running the whole pipeline inside each nested loop, so <code>A</code> is overwritten on every iteration and ends up holding only the result of the very last combination rather than a growing result.</p>
<p>But consider creating a cross join of all ranges and then checking the equality logic:</p>
<pre><code>import pandas as pd
from functools import reduce   # needed on Python 3; reduce is a builtin on Python 2
wdf = pd.DataFrame({'w': range(50), 'key': 1})
sdf = pd.DataFrame({'s': range(30), 'key': 1})
tdf = pd.DataFrame({'t': range(12), 'key': 1})
dfs = [wdf, sdf, tdf]
# DATA FRAME OF CROSS PRODUCT w X s X T (N = 18,000)
rangedf = reduce(lambda left,right: pd.merge(left, right, on=['key']), dfs)[['w','s','t']]
# w s t
# 0 0 0 0
# 1 0 0 1
# 2 0 0 2
# 3 0 0 3
# 4 0 0 4
# ...
def Cons(row):
if any((rangedf['w'].isin([row['W']])) & (rangedf['t'].isin([row['T']])) & \
(rangedf['s'].isin([row['S']]))) == True:
return 1
df.Situation = df.apply(Cons, axis = 1)
A = df[ (df.Situation == 1) ].reset_index(drop=True)
</code></pre>
| 0 | 2016-09-30T03:35:51Z | [
"python",
"for-loop",
"dataframe"
]
|
install YCM error: python site module not loaded | 39,781,219 | <p>So I really wanted to try YCM, which has been said to be a great plugin for Vim. I have been spending several hours on installation and cannot succeed due to the error of <code>E887: Sorry, this command is disabled, the Python's site module could not be loaded.</code> </p>
<p>I installed MacVim, Vim, and Python using Homebrew. I reinstalled them so many times and still cannot get it done -- as many people suggested in the issues on YCM GitHub page. </p>
<p>My OS version is MacOS Sierra (10.12). </p>
<p><code>which python</code> returns <code>/usr/local/bin/python</code> and <code>python --version</code> gives <code>Python 2.7.12</code>. Typing <code>:echo has('python')</code> returns 1.</p>
<p>Any suggestions? Thanks!</p>
| 1 | 2016-09-29T22:51:56Z | 39,781,841 | <p>This issue usually happens when recompiling python after vim, try to just reinstall vim & macvim, the issue might get resolved.</p>
<pre><code>$ brew reinstall vim macvim
</code></pre>
<p>hope this helps</p>
| 0 | 2016-09-30T00:15:14Z | [
"python",
"vim",
"homebrew",
"macvim",
"youcompleteme"
]
|
install YCM error: python site module not loaded | 39,781,219 | <p>So I really wanted to try YCM, which has been said to be a great plugin for Vim. I have been spending several hours on installation and cannot succeed due to the error of <code>E887: Sorry, this command is disabled, the Python's site module could not be loaded.</code> </p>
<p>I installed MacVim, Vim, and Python using Homebrew. I reinstalled them so many times and still cannot get it done -- as many people suggested in the issues on YCM GitHub page. </p>
<p>My OS version is MacOS Sierra (10.12). </p>
<p><code>which python</code> returns <code>/usr/local/bin/python</code> and <code>python --version</code> gives <code>Python 2.7.12</code>. Typing <code>:echo has('python')</code> returns 1.</p>
<p>Any suggestions? Thanks!</p>
| 1 | 2016-09-29T22:51:56Z | 39,796,681 | <p>So I had this same problem on Sierra, home-brew seems to be placing the latest python here:</p>
<pre><code>/usr/local/Cellar/python/2.7.12_1/Frameworks
</code></pre>
<p>But <code>brew install vim</code> ends up trying to link to python from the wrong directory. Looking at <code>vim --version | grep python</code> I saw:</p>
<pre><code>-lc -F/usr/local/Cellar/python/2.7.12/Frameworks -framework Python
</code></pre>
<p>(see <code>vim --version | grep python</code>)</p>
<p>To fix this, I did the following;</p>
<pre><code>brew uninstall python vim
brew install python
brew install vim --build-from-source
</code></pre>
<p>Now, <code>vim --version | grep python</code> shows vim is correctly linked to the correct python Framework dir.</p>
| 2 | 2016-09-30T17:11:30Z | [
"python",
"vim",
"homebrew",
"macvim",
"youcompleteme"
]
|
install YCM error: python site module not loaded | 39,781,219 | <p>So I really wanted to try YCM, which has been said to be a great plugin for Vim. I have been spending several hours on installation and cannot succeed due to the error of <code>E887: Sorry, this command is disabled, the Python's site module could not be loaded.</code> </p>
<p>I installed MacVim, Vim, and Python using Homebrew. I reinstalled them so many times and still cannot get it done -- as many people suggested in the issues on YCM GitHub page. </p>
<p>My OS version is MacOS Sierra (10.12). </p>
<p><code>which python</code> returns <code>/usr/local/bin/python</code> and <code>python --version</code> gives <code>Python 2.7.12</code>. Typing <code>:echo has('python')</code> returns 1.</p>
<p>Any suggestions? Thanks!</p>
| 1 | 2016-09-29T22:51:56Z | 39,799,937 | <p>While @Matthew Hutchinson's answer helped me get vim and python connected, I found in this <a href="https://github.com/Valloric/YouCompleteMe/issues/620" rel="nofollow">YCM issue</a> that running <code>export DYLD_FORCE_FLAT_NAMESPACE=1</code> stops Python from crashing, thanks to <a href="https://github.com/koepsell" rel="nofollow">Koepsell</a>.</p>
| 0 | 2016-09-30T20:53:45Z | [
"python",
"vim",
"homebrew",
"macvim",
"youcompleteme"
]
|
Why is my python unittest script calling my script when I am importing it? | 39,781,233 | <p>I am trying to test a python script and when I import the script into my testing suite, it calls the script. In my example below I import list3rdparty and once I run the test it immediate calls list3rdparty. I do not want this to happen. I would like the test to only call the functions within every test case.</p>
list3rdpartytest.py
<pre><code>import unittest
from list3rdparty import * ## this is where the script is being imported
class TestOutputMethods(unittest.TestCase):
def setUp(self):
pass
def test_no_args_returns_help(self):
args = []
self.assertEqual(get_third_party(args), help())
##get_third_party is a function in list3rdparty##
if __name__ == '__main__':
unittest.main(warnings = False)
</code></pre>
list3rdparty.py
<pre><code>def get_third_party(args_array):
##does a bunch of stuff
def get_args():
get_third_party(sys.argv)
get_args()
</code></pre>
| 1 | 2016-09-29T22:53:20Z | 39,781,284 | <p>You probably have code at the module level which will be executed on import. For example, if you had a file with the following, it will print the string the first time it's imported.</p>
<pre><code>import something
from whatever import another
print 'ding'
</code></pre>
<p>To avoid this, put the code inside a block like this:</p>
<pre><code>if __name__ == '__main__':
# your module-level code here
get_args()
</code></pre>
<p>This will only run the code if it's being called directly from the command line.</p>
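<p>Applied to the question, list3rdparty.py would look roughly like this (sketch):</p>
<pre><code>import sys
def get_third_party(args_array):
    pass  # does a bunch of stuff
def get_args():
    get_third_party(sys.argv)
if __name__ == '__main__':
    # runs only when the file is executed directly, not on import
    get_args()
</code></pre>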
| 2 | 2016-09-29T22:59:09Z | [
"python",
"unit-testing",
"python-unittest"
]
|
File does not complete write until script completes | 39,781,257 | <p>I'm getting stuck on something I think should be simple enough.
I'm creating a file containing a JSON string to import into a postgres database. However, the file does not import even though an internal test by the python script says it is present.</p>
<p>However, if I execute the postgres import after the script has completed it will copy fine, and if I wrap the two steps in separate scripts and call them from a single one it will also work, but never if both requests are in the same script. I've tried close(), fsync and flush, but with no luck.</p>
<p>Can anyone help?</p>
<p>The relevant code is below. </p>
<pre><code>command=str("PGPASSWORD=password psql -d database -U postgres -V -c \"copy import.table from Data.txt'\"")
print command
dataFile=open('Data.txt','w')
for x in xx:
xString=json.loads(data)
xString[i]['source']=x
xString[i]['created_at'].replace('"','')
xStringJson=json.dumps(xString)
dataFile.write(xStringJson)
dataFile.close
dataFile.flush()
os.fsync(dataFile)
print os.path.isfile('Data.txt')
pg_command(command)
i=i+1
</code></pre>
| 0 | 2016-09-29T22:56:28Z | 39,781,314 | <p>You are not closing the file.</p>
<p>This does nothing, because it is missing the parentheses:</p>
<pre><code>dataFile.close
</code></pre>
<p>But even if it did close the file, it would do so in the first iteration through xx.</p>
<p>Do it this way:</p>
<pre><code>with open('Data.txt','w') as dataFile:
for x in xx:
# write to the file
# when you are back here, the file is flushed, closed, and ready to be read.
</code></pre>
| 4 | 2016-09-29T23:01:43Z | [
"python",
"json",
"io"
]
|
Find the location that occurs most in every cluster in DBSCAN | 39,781,262 | <p>Playing around with this DBSCAN example:
<a href="http://geoffboeing.com/2014/08/clustering-to-reduce-spatial-data-set-size/" rel="nofollow">http://geoffboeing.com/2014/08/clustering-to-reduce-spatial-data-set-size/</a></p>
<p>The author uses center-most point for each cluster. However, I would like to use the co-ordinates of the location that appears the most number of times in each cluster to represent that cluster. In my dataset, DBSCAN works quite well, but I would like to cluster these clusters together, probably using Hierarchical Clustering on the resulting smaller dataset. Any guidance on how to find the location that occurs most number of time would be great. Any other suggestions to improve clustering are welcome! Thanks!</p>
<p>Data == dataframe similar to locations history in the reference blog</p>
<pre><code>eps_rad = 32 / 6371.0088 #convert to radians
coords = data.as_matrix(columns=['LocLatDegrees', 'LocLongDegrees'])
db = DBSCAN(eps=eps_rad, min_samples=50, algorithm='ball_tree', metric='haversine').fit(np.radians(coords))
cluster_labels = db.labels_
num_clusters = len(set(cluster_labels))
n_clusters = len(set(cluster_labels)) - (1 if -1 in cluster_labels else 0)
print('Number of clusters: {:,}'.format(n_clusters))
#remove the noise i.e. cluster label -1
data =data[cluster_labels!=-1]
coords =coords[cluster_labels!=-1]
cluster_labels =cluster_labels[cluster_labels!=-1]
clusters = pd.Series([coords[cluster_labels==n] for n in range(n_clusters)])
</code></pre>
| 0 | 2016-09-29T22:57:05Z | 39,785,224 | <p>That blog post is not particularly good.</p>
<p>By setting min_samples=1 he is not really using DBSCAN (because this disables <em>density</em>). Instead, he obtained a <em>single-linkage hierarchical clustering</em> result (with the dendrogram 'cut' at height epsilon).</p>
<p>Because DBSCAN allows arbitrary shaped clusters, the center, and centermost point, may actually be a <em>bad</em> choice. And his code assumes earth is flat when determining the center... So also this part of that blog post is not very sound...</p>
<p>If you consider this <a href="https://en.m.wikipedia.org/wiki/DBSCAN#/media/File%3ADBSCAN-density-data.svg" rel="nofollow">image from Wikipedia</a></p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/05/DBSCAN-density-data.svg/1024px-DBSCAN-density-data.svg.png" alt="enter link description here"></p>
<p>then you can see that the most central point of the red cluster probably is not a good choice.</p>
<p>If you simply want to reduce your data set size, I suggest you use the very simple <strong>Leader</strong> clustering approach.</p>
<blockquote>
<p>J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, New York, 1975</p>
</blockquote>
<p>This is much closer to the objective of reducing the data set size: essentially you define a threshold d, and you skip points if you already have an object that is closer than d, and keep it otherwise. In contrast to DBSCAN, this will not produce banana-like clusters.</p>
<p>But if you intend to do hierarchical clustering afterwards, then why use this approximation first?</p>
<p>As pointed out in another stackoverflow question, beware that <code>scipy.cluster.hierarchy.leaders</code> is NOT the leader-algorithm. There is an R package named <code>leaderCluster</code>, and the ELKI project that I follow for clustering recently <a href="https://github.com/elki-project/elki/commit/dd10823f6c24954c2e5a2810bffe5735257244f4" rel="nofollow">added Leader to Github</a>, too. As the ELKI version can use an index, I expect it to be much faster, but I haven't tried (their DBSCAN and OPTICS are really fast, so I usually use ELKI for large data sets; I like the cover tree index, which doesn't need more parameters than the distance function and just works well - found it to be faster and easier to use than the R*-tree; but these are my personal preferences - I wish jupyter would add some Java support).</p>
| 1 | 2016-09-30T06:41:49Z | [
"python",
"pandas",
"scikit-learn",
"cluster-analysis",
"dbscan"
]
|
Reference imported class from within imported class | 39,781,301 | <p>For a chess board, imagine there's a <code>Board</code> class, with a <code>squares</code> attribute, which is an array of <code>Square</code> instances. The file structure is that there is <code>main.py</code>, <code>board.py</code>, <code>square.py</code>, and an empty <code>__init__.py</code> (I have to say I don't fully understand the purpose of the latter... but apparently that's the way to do things). These are all in the same directory. (I've not done anything in Python involving multiple files before.)</p>
<p>In <code>main.py</code> I want to instantiate a <code>Board</code> object. Here's the contents of <code>main.py</code>:</p>
<pre><code>from board import Board
from square import Square
board = Board()
</code></pre>
<p>here's <code>square.py</code>:</p>
<pre><code>class Square:
def __init__(self):
pass
#this class doesn't do anything yet
</code></pre>
<p>and here's <code>board.py</code>:</p>
<pre><code>class Board:
row_count = 8
column_count = 8
def __init__(self):
self.squares = self.generate_squares()
def generate_squares(self):
squares = {}
for i in range(0, self.row_count * self.column_count):
squares[i] = Square()
return squares
</code></pre>
<p>However, when I run <code>main.py</code> I get told that there's an error on the <code>squares[i] = Square()</code> line; namely that <code>NameError: global name 'Square' is not defined</code>.</p>
<p>I've tried changing it to <code>squares[i] = square.Square()</code> but that yields the same error.</p>
<p>If I remove the <code>import</code> statements and just copy the class definitions into <code>main.py</code> then the instantiation works fine, so that pinpoints the issue down to being related to the <code>import</code> statements themselves.</p>
| -1 | 2016-09-29T23:00:23Z | 39,781,336 | <p>In main.py you import both classes; however, imports are per-module, so the board module itself has no way to see Square just because main.py imported it. Add <code>from square import Square</code> at the top of board.py, and you can remove the Square import from main.py if nothing there uses it directly.</p>
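<p>A minimal sketch of the top of board.py with that change:</p>
<pre><code># board.py
from square import Square
class Board:
    row_count = 8
    column_count = 8
    # ... rest of the class unchanged; Square() now resolves inside generate_squares()
</code></pre>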
| 0 | 2016-09-29T23:04:43Z | [
"python",
"class",
"namespaces",
"python-import"
]
|
UndefinedError: 'None' has no attribute 'key' | 39,781,327 | <p>How would I access <code>{{c.key().id()}}</code> in my python file? It correctly prints out the correct id in my html. When I try to get the id from the request I get an error <code>UndefinedError: 'None' has no attribute 'key'</code> but if I set the <code>id</code> to something such as
<code>id=5222955109842944</code> it works.</p>
<p>When I used <code>id= Comment.key().id()</code> I get </p>
<pre><code>id = Comment.key.id
AttributeError: 'function' object has no attribute 'id'or id= Comment.key.id
</code></pre>
<p>When I tried <code>id = Comment.key()</code> I got</p>
<pre><code>id = Comment.key()
TypeError: unbound method key() must be called with Comment instance as first argument (got nothing instead)
</code></pre>
<p>I have this in my python file</p>
<pre><code> #id=5222955109842944
id = self.request.get('id')
name = 'testname1'
key = db.Key.from_path('Comment', id)
comments = db.get(key)
</code></pre>
<p>When I try to get the id from the request I get an error <code>UndefinedError: 'None' has no attribute 'key'</code></p>
<p>This is in my html file:</p>
<pre><code> comment id{{c.key().id()}}
<a href="/blog/editcomment?id={{c.key().id()}}">Edit Comment</a><br><br>
</code></pre>
| 0 | 2016-09-29T23:03:16Z | 39,781,341 | <p><code>self.request.get('id')</code> is a string. You need to turn it into an <code>int</code>:</p>
<p><code>int(self.request.get('id'))</code></p>
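<p>Applied to the handler code from the question (sketch):</p>
<pre><code>id = int(self.request.get('id'))        # the request value arrives as a string
key = db.Key.from_path('Comment', id)
comments = db.get(key)
</code></pre>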
| 1 | 2016-09-29T23:05:29Z | [
"python",
"google-app-engine",
"google-cloud-datastore"
]
|
Django. Access the foreign key fields in the template from a form object | 39,781,330 | <p>I use Django 1.8.14. I have two models:</p>
<pre><code>class Event(models.Model):
title = models.CharField(max_length=255,)
date = models.DateTimeField()
...
def __unicode__(self):
return self.title
class Card(models.Model):
user = models.ForeignKey(MyUser, related_name="user")
event = models.ForeignKey(Event,)
...
</code></pre>
<p>Card model form:</p>
<pre><code>class CardForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(CardForm, self).__init__(*args, **kwargs)
self.fields['event'].empty_label = None
class Meta:
model = Card
fields = ('event', )
widgets = {'event': forms.RadioSelect, }
</code></pre>
<p>I render form like this:</p>
<pre><code><form method="POST">
{% csrf_token %}
{% for choice in cardform.event %}
{{ choice.tag }}
{{ choice.choice_label }}
{% endfor %}
</form >
</code></pre>
<p>In the label of each radio button I need to display both fields value "title" and "date" of Event model which is ForeignKey of Card. Now label includes only "title" value. What is the best way to do it?
I tried <strong>{{ cardform.instance.event.date }}</strong> but it doesn't work.</p>
| 0 | 2016-09-29T23:03:40Z | 39,795,625 | <p>I've found a solution.</p>
<pre><code>class CardForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(CardForm, self).__init__(*args, **kwargs)
self.fields['event'].label_from_instance = lambda obj: "%s %s" % (obj.title, obj.date)
</code></pre>
| 0 | 2016-09-30T16:00:26Z | [
"python",
"django",
"django-forms"
]
|
Communication to Different Networks | 39,781,344 | <p>I would like to make a game with python and PyGame where two players play via wi-fi on different networks. I currently have this code (which I got from a video). </p>
<pre><code># SERVER
import socket
def Main():
host = '127.0.0.1'
port = 5000
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host,port))
print("Server Started.")
while True:
data, addr = s.recvfrom(1024)
data = data.decode('utf-8')
print("message From: " + str(addr))
print("from connected user: " + data)
data = data.upper()
print("sending: " + data)
s.sendto(data.encode('utf-8'), addr)
c.close()
if __name__ == '__main__':
Main()
# CLIENT
import socket
def Main():
host = '127.0.0.1'
port = 5001
server = ('127.0.0.1',5000)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))
message = raw_input("-> ")
while message != 'q':
s.sendto(message.encode('utf-8'), server)
data, addr = s.recvfrom(1024)
data = data.decode('utf-8')
print('Received from server: ' + data)
message = raw_input("-> ")
s.close()
if __name__ == '__main__':
Main()
</code></pre>
<p>This works fine on the same machine. How could I make this work on two different computers (on two different LANs)?</p>
| 0 | 2016-09-29T23:05:45Z | 39,781,459 | <p>change to <code>host= '0.0.0.0'</code> (for the server)</p>
<p>this makes it publish to any available interface ... if you have a router you (probably) will also need to use the port-forward settings of your router to direct traffic to the correct computer</p>
<p>As an aside: in the client, hard-coding <code>server = ('127.0.0.1',5000)</code> will only ever talk to the local machine, so that tuple also has to change.</p>
<p>For the client, just set the public-facing IP address of the server's network, with the appropriate port (if you are using port forwarding) ... you can find this IP address at <a href="http://whatismyip.com" rel="nofollow">http://whatismyip.com</a></p>
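<p>A minimal sketch of the two changes; the public IP below is a placeholder you must replace with your own, and the router has to forward the chosen port to the server machine:</p>
<pre><code># server side: listen on every interface
s.bind(('0.0.0.0', 5000))
# client side: point at the server's public IP (placeholder value below)
server = ('203.0.113.10', 5000)
s.sendto(message.encode('utf-8'), server)
</code></pre>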
| 0 | 2016-09-29T23:19:32Z | [
"python",
"networking",
"udp"
]
|
python function parameters that include symbols | 39,781,444 | <p>I wrote a python script as below</p>
<pre><code>def single_value(account,key):
file = open('%s.txt'%account)
file.write('Hello')
file.close()
file2 = open('%s.txt'%key)
file2.write('hoiiii')
file2.close()
single_value(accountname, 2345kwjhf53825==)
</code></pre>
<p>When I execute the script I get an invalid syntax error. I think it is because of the '==' in the key. Is there a way to define this key?
Please help </p>
| 0 | 2016-09-29T23:16:47Z | 39,781,502 | <p>The invalid syntax error is because strings must be in quotes. Thus, replace:</p>
<pre><code>single_value(accountname, 2345kwjhf53825==)
</code></pre>
<p>With:</p>
<pre><code>single_value('accountname', '2345kwjhf53825==')
</code></pre>
<p>The next error is that the files are opened read-only and you want to write to them. All together:</p>
<pre><code>def single_value(account,key):
with open('%s.txt'%account, 'w') as file:
file.write('Hello')
with open('%s.txt'%key, 'w') as file2:
file2.write('hoiiii')
single_value('accountname', '2345kwjhf53825==')
</code></pre>
| 2 | 2016-09-29T23:24:25Z | [
"python",
"python-3.x"
]
|
Trying to average a list, but I don't know what is meant by the error: unsupported operand type(s) for +: 'int' and 'tuple' | 39,781,621 | <pre><code>First_Name = input("What is your first name: ")
Last_Name = input("what is your Last Name: ")
print ("Hello, let's see what your grades are like", First_Name, Last_Name, ",you degenerate!")
grade_one = int(input("Enter your first grade: "))
grade_two = int(input("Enter your second grade: "))
grade_three = int(input("Enter your third grade: "))
grade_four = int(input("Enter your fourth grade: "))
grade_five = int(input("Enter your fith grade: "))
grades = grade_one,grade_two,grade_three,grade_four,grade_five
Grade_list.append(grades)
print (Grade_list)
def average(numbers):
total = sum(numbers)
total = float(total)
results = total/len(numbers)
return results
print (average(Grade_list))
</code></pre>
<p>Basically what I'm trying to accomplish here is getting the average of a list of grades input by the user, which I then converted to a list. But I can't seem to average the list no matter how many different techniques I've used (granted, I'm very new to this, so I probably just haven't employed the proper technique). I keep coming across the error:</p>
<pre><code>Traceback (most recent call last):
File "python", line 23, in <module>
File "python", line 19, in average
TypeError: unsupported operand type(s) for +: 'int' and 'tuple'
</code></pre>
<p>I'm not sure what this error means, I have an idea that my list is printing as a tuple when it should be printing as a consecutive list of integers. I'm not sure how to go about fixing that though (or if that's even the issue). Thanks in advance! I realize my code probably isn't the most efficient piece of code out there, i'm certainly open to suggestions! :). This is a school assignment so it's not exactly rocket science i'm aware, but I can't seem to wrap my head around this.</p>
| 1 | 2016-09-29T23:40:31Z | 39,781,660 | <h1>Explanation</h1>
<p>That is because you are creating a tuple at this line: </p>
<pre><code>grades = grade_one,grade_two,grade_three,grade_four,grade_five
</code></pre>
<p>If you set a <code>print(grades)</code> right after that line, you will see your output is, for example: </p>
<pre><code>(56, 56, 56, 56, 56)
</code></pre>
<p>So, when you call this: </p>
<pre><code>Grade_list.append(grades)
</code></pre>
<p>You are now creating a list with a single tuple inside it: </p>
<pre><code>[(56, 56, 56, 56, 56)]
</code></pre>
<p>So, when you call your method, you are trying to perform your calculation against the tuple, which is exactly where your error message is coming from.</p>
<h1>Solution</h1>
<p>To strictly focus on your code, what you should be doing instead is appending each answer to your <code>Grade_list</code> right after each entry you ask for. </p>
<p>For example, to show a snippet of your code of what you should do: </p>
<pre><code>grade_one = int(input("Enter your first grade: "))
Grade_list.append(grade_one)
grade_two = int(input("Enter your second grade: "))
Grade_list.append(grade_two)
</code></pre>
<p>and so on...</p>
<p>Now, for the suggestion of how to improve what you are doing. What you should do instead, is loop over your question until you have exhausted how many times you want to ask the user for their grades and just append to the list, so you don't have to create several variables to do all this.</p>
<p>So, your entire chunk of code where you ask the user for their grades can be replaced with: </p>
<pre><code>Grade_list = []
for grade_number in range(1, 6):
grade = int(input("Enter grade {}: ".format(grade_number)))
Grade_list.append(grade)
</code></pre>
<p>When running the updated code, with the changes, we get:</p>
<pre><code>What is your first name: bob
what is your Last Name: hope
Hello, let's see what your grades are like bob hope ,you degenerate!
Enter grade 1: 44
Enter grade 2: 44
Enter grade 3: 44
Enter grade 4: 44
Enter grade 5: 44
44.0
</code></pre>
| 1 | 2016-09-29T23:45:00Z | [
"python",
"list",
"compiler-errors"
]
|
NumPy Array to List Conversion | 39,781,628 | <p>In OpenCV 3 the function <code>goodFeaturesToTrack</code> returns an array of the following form</p>
<pre><code>[[[1, 2]]
[[3, 4]]
[[5, 6]]
[[7, 8]]]
</code></pre>
<p>After converting that array into a Python list I get </p>
<pre><code>[[[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]]]
</code></pre>
<p>Although this is a list, as you can see it has one more pair of brackets than it should, and when I try to access an element via A[0][1] I get an error. Why do the array and the list have that form? How should I fix it?</p>
| 0 | 2016-09-29T23:41:31Z | 39,781,884 | <p>Because you have a 3d array with one element in second axis:</p>
<pre><code>In [26]: A = [[[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]]]
In [27]: A[0]
Out[27]: [[1, 2]]
</code></pre>
<p>And when you want to access the second item by <code>A[0][1]</code> it raises an IndexError:</p>
<pre><code>In [28]: A[0][1]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-28-99c47bb3f368> in <module>()
----> 1 A[0][1]
IndexError: list index out of range
</code></pre>
<p>You can use <code>np.squeeze()</code> in order to reduce the dimension, and convert the array to a 2D array: </p>
<pre><code>In [21]: import numpy as np
In [22]: A = np.array([[[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]]])
In [33]: A = np.squeeze(A)
In [34]: A
Out[34]:
array([[1, 2],
[3, 4],
[5, 6],
[7, 8]])
</code></pre>
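<p>If you need a plain Python list afterwards, as in the question, you can chain <code>.tolist()</code> onto the squeezed array:</p>
<pre><code>In [35]: A_list = np.squeeze(A).tolist()
In [36]: A_list[0][1]
Out[36]: 2
</code></pre>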
| 1 | 2016-09-30T00:21:42Z | [
"python",
"python-3.x",
"opencv"
]
|
TypeError: 'int' object does not support item assignment, In threads | 39,781,629 | <p>I have 2 modules:</p>
<p>First Keyboard.py</p>
<pre><code>import USB,evdev,threading,sys
global codigo
codigo = [1]
class Teclado:
def __init__(self,port):
self.puerto = USB.usb(port)
def iniciar_teclado(self):
p = threading.Thread(target=self.puerto.obtener_evento,args=(codigo))
p.start()
while 1:
if codigo[0] == evdev.ecodes.KEY_A:
print('A')
elif codigo[0] == evdev.ecodes.KEY_B:
print('B')
</code></pre>
<p>and USB.py:</p>
<pre><code>import evdev,os,signal,sys
class usb:
def __init__(self,dev):
self.device = evdev.InputDevice(dev)
print(self.device)
def obtener_evento(self,c):
for event in self.device.read_loop():
if event.type == evdev.ecodes.EV_KEY and event.value == 1:
c[0] = event.code
</code></pre>
<p>So to pass a variable by reference into a thread, I use a list with a single element. As help, the following code has been taken as reference:</p>
<pre><code>>>> c = [1]
>>> def f(list):
>>> list[0] = 'a'
>>> f(c)
>>> c[0]
'a'
</code></pre>
<p>but in my code, in the line</p>
<pre><code>c[0] = event.code
</code></pre>
<p>Python tells me </p>
<pre><code>TypeError: 'int' object does not support item assignment
</code></pre>
<p>Some help?</p>
| 0 | 2016-09-29T23:41:35Z | 39,781,850 | <p>Try adding a trailing comma so that <code>args</code> really is a one-element tuple. Without it, <code>(codigo)</code> is just <code>codigo</code> itself, so the thread unpacks the list and calls the target with the integer <code>1</code> as <code>c</code>, and <code>c[0] = event.code</code> then fails because an int does not support item assignment:</p>
<pre><code>p = threading.Thread(target=self.puerto.obtener_evento,args=(codigo,))
</code></pre>
| 0 | 2016-09-30T00:16:21Z | [
"python",
"pass-by-reference",
"python-multithreading",
"input-devices"
]
|
FlaskWTFDeprecationWarning with Flask_Security | 39,781,635 | <p>I am received a warning every time I use Flask Security.</p>
<pre><code>FlaskWTFDeprecationWarning: "flask_wtf.Form" has been renamed to "FlaskForm"
and will be removed in 1.0.
</code></pre>
<p>Is this an issue with Flask Security or something I could address myself? I am using Flask-Security==1.7.5</p>
<pre><code>from flask_security import current_user, login_required, RoleMixin, Security, \
SQLAlchemyUserDatastore, UserMixin, utils
</code></pre>
<p>I don't seem to import Flask_WTF directly.</p>
| 1 | 2016-09-29T23:42:04Z | 39,782,045 | <p>It looks like 1.7.5 is the latest release of Flask-Security. And the latest version of Flask-WTF is 0.13 (make sure you have that installed by checking a <code>pip freeze</code>).</p>
<p>Since you don't use Flask-WTF directly, the issue isn't your code. The issue is coming from Flask-Security's code itself, <a href="https://github.com/mattupstate/flask-security/blob/1.7.5/requirements.txt" rel="nofollow">which has Flask-WTF as a dependency</a>.</p>
<p>The way that Flask-Security imports the Form class from Flask-WTF is deprecated, so you're seeing the error when this line runs:</p>
<pre><code>from flask_wtf import Form as BaseForm
</code></pre>
<p><a href="https://github.com/mattupstate/flask-security/blob/e01cd63a214969cf8e4ee800d398e1c43b460c7f/flask_security/forms.py#L15" rel="nofollow">https://github.com/mattupstate/flask-security/blob/e01cd63a214969cf8e4ee800d398e1c43b460c7f/flask_security/forms.py#L15</a></p>
<p>You can either open an issue on Flask-Security (feel free to link to this question) or submit a pull request yourself to the author updating this line to the non-deprecated import</p>
<pre><code>from flask_wtf import FlaskForm as BaseForm
</code></pre>
<p>Make sure to run tests before / after too before submitting.</p>
<p>For a little more context, you can use git blame to see the commit that last changed the deprecated import line in Flask-Security (<a href="https://github.com/mattupstate/flask-security/commit/6f68f1d540502a1747cae87f0ffa2332cb4e2c94" rel="nofollow">6f68f1d</a>) on August 15, 2013.</p>
<p>Doing the same on Flask-WTF, you can see that the deprecation was introduced in <a href="https://github.com/lepture/flask-wtf/commit/42cc47562abf817c751ff22debbd9032a3c3f45d" rel="nofollow">42cc475</a> on June 30, 2016.</p>
| 3 | 2016-09-30T00:45:13Z | [
"python",
"flask-security"
]
|
Django - get_or_create() with auto_now=True | 39,781,666 | <p>I'm using Django and I'm having a problem with a Python script that uses Django models.
The script that I'm using takes data from an api and loads it into my database.</p>
<p>my model:</p>
<pre><code>class Movie(models.Model):
title = models.CharField(max_length=511)
tmdb_id = models.IntegerField(null=True, blank=True)
release = models.DateField(null=True, blank=True)
poster = models.TextField(max_length=500, null=True)
runtime = models.IntegerField(null=True, blank=True)
description = models.TextField(null=True, blank=True)
edit = models.DateTimeField(auto_now=True, null=True, blank=True)
backdrop = models.TextField(max_length=500, null=True, blank=True)
popularity = models.TextField(null=True, blank=True)
</code></pre>
<p>the script:</p>
<pre><code>movies = tmdb.Movies().upcoming()
results = movies['results']
ids = []
for movie in results:
data, created = Movie.objects.get_or_create(title=movie['title'],
tmdb_id=movie['id'],
release=movie['release_date'],
description=movie['overview'],
backdrop=movie['backdrop_path'],
poster=movie['poster_path'],
popularity=movie['popularity'])
</code></pre>
<p>The problem I'm having is that whenever I run the script, the entries are duplicated because the edit field changes, but the purpose of the edit field is to know when exactly a movie got edited, i.e. when some other field got changed.</p>
<p>How can I avoid the duplicates, but also keep the edit field in case some real change happened?</p>
| 1 | 2016-09-29T23:45:27Z | 39,785,754 | <blockquote>
<p>but the purpose I put the edit field is to know when exactly a movie
got edited, ie: some other field got changed.</p>
</blockquote>
<p>That probably means you are using the wrong function. You should be using <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#update-or-create" rel="nofollow">update_or_create</a> instead.</p>
<blockquote>
<p>A convenience method for updating an object with the given kwargs,
creating a new one if necessary. The defaults is a dictionary of
(field, value) pairs used to update the object.</p>
</blockquote>
<p>This is different from get_or_create, which creates an object if it does not exist, or simply fetches it when it does. update_or_create is the one that actually does the updating. </p>
<p>However, changing to this method doesn't solve this:</p>
<blockquote>
<p>How can I avoid the duplicates, but also keep the edit field in case
some real change happened?</p>
</blockquote>
<p>Duplicates are created because you do not have a unique index on any of your fields. Both <code>get_or_create</code> and <code>update_or_create</code> require that you have a unique field. It seems that the following change is in order:</p>
<pre><code>class Movie(models.Model):
title = models.CharField(max_length=511)
tmdb_id = models.IntegerField(unique=True)
</code></pre>
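<p>With that unique <code>tmdb_id</code> in place, the loop from the question can then be rewritten roughly like this (sketch): look the movie up by its stable key and put everything that may change into <code>defaults</code>:</p>
<pre><code>data, created = Movie.objects.update_or_create(
    tmdb_id=movie['id'],
    defaults={
        'title': movie['title'],
        'release': movie['release_date'],
        'description': movie['overview'],
        'backdrop': movie['backdrop_path'],
        'poster': movie['poster_path'],
        'popularity': movie['popularity'],
    },
)
</code></pre>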
| 1 | 2016-09-30T07:13:30Z | [
"python",
"django"
]
|
Factory class with abstractmethod | 39,781,670 | <p>I've created a factory class called <code>FitFunction</code> that adds a whole bunch of stuff beyond what I've shown. The label method <code>pretty_string</code> is supposed to just return the string as written. When I run this file, it prints a string that is as useful as the <code>repr</code>. Does someone know how I would go about implementing this?</p>
<pre><code>#!/usr/bin/env python
from __future__ import print_function, absolute_import
import abc
import types
import numpy as np
class FitFunction(object):
def __init__(self, python_function):
assert isinstance(python_function, types.FunctionType)
self._py_function = python_function
@abc.abstractmethod
def pretty_string():
r"""
Return some pretty string.
"""
class Gaussian(FitFunction):
def __init__(self):
def gaussian(x, mu, sigma, A):
coeff = (_np.sqrt(2.0 * _np.pi) * sigma)**(-1.0)
arg = -.5 * (((x - mu) / sigma)**2.0)
return A * coeff * _np.exp(arg)
FitFunction.__init__(self, gaussian)
@staticmethod
def pretty_string():
return "1D Gaussian"
if __name__ == "__main__":
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string() )
</code></pre>
<p>I subclass <code>FitFunction</code> to create <code>Gaussian</code> because I apply <code>Gaussian</code> to many different data sets with the same parameters so that I can compare the output.</p>
<p>For reference, this is what happens when I execute the file:</p>
<pre><code>me$ ./FitFunction_SO_test.py
Gaussian.pretty_string: <bound method Gaussian.pretty_string of <__main__.Gaussian object at 0x1005e2f90>>
</code></pre>
<p>I'm looking for the following result:</p>
<pre><code>me$ ./FitFunction_SO_test.py
Traceback (most recent call last):
File "./FitFunction_SO_test.py", line 43, in <module>
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string())
TypeError: pretty_string() takes no arguments (1 given)
</code></pre>
| 1 | 2016-09-29T23:45:53Z | 39,781,711 | <p>Use:</p>
<pre><code>print("Gaussian.pretty_string: %s" % Gaussian.pretty_string())
</code></pre>
<p>Or else you are printing the <code>repr</code> of the <em>method</em>, not the <em>result of calling the method</em>, which is the string you are looking for.</p>
| 0 | 2016-09-29T23:53:57Z | [
"python",
"abstract-factory",
"abc"
]
|
Factory class with abstractmethod | 39,781,670 | <p>I've created a factory class called <code>FitFunction</code> that adds a whole bunch of stuff beyond what I've shown. The label method <code>pretty_string</code> is supposed to just return the string as written. When I run this file, it prints a string that is as useful as the <code>repr</code>. Does someone know how I would go about implementing this?</p>
<pre><code>#!/usr/bin/env python
from __future__ import print_function, absolute_import
import abc
import types
import numpy as np
class FitFunction(object):
def __init__(self, python_function):
assert isinstance(python_function, types.FunctionType)
self._py_function = python_function
@abc.abstractmethod
def pretty_string():
r"""
Return some pretty string.
"""
class Gaussian(FitFunction):
def __init__(self):
def gaussian(x, mu, sigma, A):
coeff = (_np.sqrt(2.0 * _np.pi) * sigma)**(-1.0)
arg = -.5 * (((x - mu) / sigma)**2.0)
return A * coeff * _np.exp(arg)
FitFunction.__init__(self, gaussian)
@staticmethod
def pretty_string():
return "1D Gaussian"
if __name__ == "__main__":
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string() )
</code></pre>
<p>I subclass <code>FitFunction</code> to create <code>Gaussian</code> because I apply <code>Gaussian</code> to many different data sets with the same parameters so that I can compare the output.</p>
<p>For reference, this is what happens when I execute the file:</p>
<pre><code>me$ ./FitFunction_SO_test.py
Gaussian.pretty_string: <bound method Gaussian.pretty_string of <__main__.Gaussian object at 0x1005e2f90>>
</code></pre>
<p>I'm looking for the following result:</p>
<pre><code>me$ ./FitFunction_SO_test.py
Traceback (most recent call last):
File "./FitFunction_SO_test.py", line 43, in <module>
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string())
TypeError: pretty_string() takes no arguments (1 given)
</code></pre>
| 1 | 2016-09-29T23:45:53Z | 39,782,921 | <p>I am unsure if this will fix your problem since I cannot check it myself right now but you should probably be using <code>@abc.abstractstaticmethod</code> (and get rid of the <code>self</code> argument obviously) to decorate the base class method. If that doesn't fix it I'll delete this answer later. If it does fix it I'll edit this into a better answer. </p>
| 0 | 2016-09-30T02:52:44Z | [
"python",
"abstract-factory",
"abc"
]
|
SSHed into my Vagrant virtual machine to run a python script...but the python script doesn't work unless I'm in the VM itself | 39,781,746 | <p>I have a Vagrant virtual machine that I use for running automated tests. When I vagrant up and open up the console in my virtual machine, I'm able to start my tests with a simple command on the command line. After SSHing into that virtual machine and running the same exact script from the same exact directory, I'm getting errors with regards to modules not existing and certain files not existing. What might be the case?</p>
<p>I used nano to make a random txt file and surely enough I saw that the txt file appeared in my SSH terminal when I looked in the directory where I placed the txt. What could be different about the environment from SSH's perspective? Why would executing the same python script from the VM's terminal and from the SSH terminal have drastically different results?</p>
<p>I'm using the robots framework and selenium for my testing. The python script I'm executing from the command line kicks off those tests.</p>
| 1 | 2016-09-29T23:58:26Z | 39,796,430 | <p>You aren't using the same Python executable in the two environments. For some reason, your vagrant console is using a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtual environment</a>.</p>
<p>When you SSH into your VM, run this command before executing your test script:</p>
<pre><code>source /home/vagrant/regression_venv/bin/activate
</code></pre>
| 2 | 2016-09-30T16:53:32Z | [
"python",
"selenium",
"ssh",
"vagrant",
"robot"
]
|
how to convert data into list in python | 39,781,774 | <p>I have sample dataset A. It looks like:</p>
<pre><code>1:CH,AG,ME,GS;AP,CH;HE,AC;AC,AG
2:CA;HE,AT;AT,AC;AT,OG
3:NE,AG,AC;CS,OD
</code></pre>
<p>The expected result should be:</p>
<pre><code>['CH','AG','ME','GS','AP','CH','HE','AC','AC','AG','CA','HE','AT','AT','AC','AT','OG','NE','AG','AC','CS','OD']
</code></pre>
<p>I am not sure how to write the code in Python to a list.</p>
| -1 | 2016-09-30T00:02:43Z | 39,781,792 | <p>One option would be to locate all 2 consecutive upper-case letter cases with a regular expression:</p>
<pre><code>In [1]: import re
In [2]: data = """
...: 1:CH,AG,ME,GS;AP,CH;HE,AC;AC,AG
...: 2:CA;HE,AT;AT,AC;AT,OG
...: 3:NE,AG,AC;CS,OD"""
In [3]: re.findall(r"[A-Z]{2}", data, re.MULTILINE)
Out[3]:
['CH',
'AG',
'ME',
'GS',
'AP',
'CH',
'HE',
'AC',
'AC',
'AG',
'CA',
'HE',
'AT',
'AT',
'AC',
'AT',
'OG',
'NE',
'AG',
'AC',
'CS',
'OD']
</code></pre>
| 4 | 2016-09-30T00:05:24Z | [
"python",
"list"
]
|
excel file merge with sheets in python | 39,781,784 | <p>I am trying to append many excel files with sheet1 and sheet2.
I have written the following code</p>
<pre><code>import os
import pandas as pd
files = os.listdir("C:/Python27/files")
files
df = pd.DataFrame()
for f in files:
data = pd.read_excel(f, 'Sheet1', 'Sheet2')
df = df.append(data)
</code></pre>
<p>example names of the files :Total Apr 2014,Total Aug 2014</p>
<p>The following is the error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Python27/filemerge2.py", line 10, in <module>
data = pd.read_excel(f, 'Sheet1', 'Sheet2')
File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 170, in read_excel
io = ExcelFile(io, engine=engine)
File "C:\Python27\lib\site-packages\pandas\io\excel.py", line 227, in __init__
self.book = xlrd.open_workbook(io)
File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 395, in open_workbook
with open(filename, "rb") as f:
IOError: [Errno 2] No such file or directory: 'Total Apr 2014.xls'
</code></pre>
<p>It would be great if someone can help me out with this error</p>
| 0 | 2016-09-30T00:04:27Z | 39,782,199 | <p>Python <code>os.listdir</code> will return a list containing the <strong>relative names</strong> of files inside the given directory. If you are running your script from a folder other than <code>"C:/Python27/files"</code> (or whatever folder your xls files are located in), then you need to pass the full path of the required file to the <code>read_excel()</code> function (or whatever file-handling function you call).</p>
<p>To make your script work, just prepend the base folder to the file name:</p>
<pre><code>import os
import pandas as pd
folder = "C:/Python27/files"
files = os.listdir(folder)
files
df = pd.DataFrame()
for f in files:
data = pd.read_excel(folder + '/' + f, 'Sheet1', 'Sheet2')
df = df.append(data)
</code></pre>
<p>Notice that <code>os.listdir</code> will return all files and directories inside <code>folder</code>, regardless of their types. Make sure to perform some file type checking before trying to open it with <code>read_excel()</code> (or just use a <code>try-except</code> block around the contents of your for loop that depend on this call).</p>
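<p>For example, a quick way (an untested sketch, building on the code above) to skip anything that is not an Excel file:</p>
<pre><code>for f in files:
    if not f.lower().endswith(('.xls', '.xlsx')):  # ignore non-Excel entries
        continue
    data = pd.read_excel(folder + '/' + f, 'Sheet1', 'Sheet2')
    df = df.append(data)
</code></pre>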
| 0 | 2016-09-30T01:09:53Z | [
"python",
"excel"
]
|
How can I tell if Gensim Word2Vec is using the C compiler? | 39,781,812 | <p>I am trying to use Gensim's Word2Vec implementation. Gensim warns that if you don't have a C compiler, the training will be 70% slower. Is there a way to verify that Gensim is correctly using the C compiler I have installed?</p>
<p>I am using Anaconda Python 3.5 on Windows 10.</p>
| 1 | 2016-09-30T00:09:37Z | 39,940,007 | <p>Apparently gensim offers a variable to detect this:</p>
<pre><code>assert gensim.models.doc2vec.FAST_VERSION > -1
</code></pre>
<p>I found this line in this tutorial:
<a href="https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-IMDB.ipynb" rel="nofollow">https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-IMDB.ipynb</a></p>
| 1 | 2016-10-09T04:24:25Z | [
"python",
"compilation",
"installation",
"gensim",
"word2vec"
]
|
How can I tell if Gensim Word2Vec is using the C compiler? | 39,781,812 | <p>I am trying to use Gensim's Word2Vec implementation. Gensim warns that if you don't have a C compiler, the training will be 70% slower. Is there a way to verify that Gensim is correctly using the C compiler I have installed?</p>
<p>I am using Anaconda Python 3.5 on Windows 10.</p>
| 1 | 2016-09-30T00:09:37Z | 39,940,533 | <p>Gensim provides both wheels and an installer for Windows.</p>
<pre><code>pip install gensim
</code></pre>
<p>should get you gensim with Cython optimization without the work of getting Cython up and running (not that it's not great to have Cython, but sometimes it's nice to just have stuff run).</p>
| 1 | 2016-10-09T06:02:17Z | [
"python",
"compilation",
"installation",
"gensim",
"word2vec"
]
|
How can I simplify this condition in Python? | 39,781,887 | <p>Do you know a simpler way to achieve the same result as this?
I have this code:</p>
<pre><code>color1 = input("Color 1: ")
color2 = input("Color 2: ")
if ((color1=="blue" and color2=="yellow") or (color1=="yellow" and color2=="blue")):
print("{0} + {1} = Green".format(color1, color2))
</code></pre>
<p>I also tried with this:</p>
<pre><code>if (color1 + color2 =="blueyellow" or color1 + color2 =="yellowblue")
</code></pre>
| 9 | 2016-09-30T00:22:10Z | 39,781,928 | <p>Don't miss the <em>bigger picture</em>. Here is a better way to approach the problem in general.</p>
<p>What if you defined a "mixes" dictionary whose keys are the color combinations and whose values are the resulting colors?</p>
<p>One idea for implementation is to use immutable by nature <a href="https://docs.python.org/3/library/stdtypes.html#frozenset" rel="nofollow"><code>frozenset</code></a>s as mapping keys:</p>
<pre><code>mixes = {
frozenset(['blue', 'yellow']): 'green'
}
color1 = input("Color 1: ")
color2 = input("Color 2: ")
mix = frozenset([color1, color2])
if mix in mixes:
print("{0} + {1} = {2}".format(color1, color2, mixes[mix]))
</code></pre>
<p>This way you may easily <em>scale</em> the solution up and add different mixes without having multiple nested if/else conditions.</p>
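<p>For example, adding more combinations is just a matter of extending the dictionary (the extra colour mixes below are purely illustrative):</p>
<pre><code>mixes = {
    frozenset(['blue', 'yellow']): 'green',
    frozenset(['red', 'yellow']): 'orange',
    frozenset(['red', 'blue']): 'purple',
}
</code></pre>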
| 7 | 2016-09-30T00:27:48Z | [
"python",
"python-3.x",
"if-statement",
"condition",
"simplify"
]
|
How can I simplify this condition in Python? | 39,781,887 | <p>Do you know a simpler way to achieve the same result as this?
I have this code:</p>
<pre><code>color1 = input("Color 1: ")
color2 = input("Color 2: ")
if ((color1=="blue" and color2=="yellow") or (color1=="yellow" and color2=="blue")):
print("{0} + {1} = Green".format(color1, color2))
</code></pre>
<p>I also tried with this:</p>
<pre><code>if (color1 + color2 =="blueyellow" or color1 + color2 =="yellowblue")
</code></pre>
| 9 | 2016-09-30T00:22:10Z | 39,781,933 | <p>You can use <code>set</code>s for comparison.</p>
<blockquote>
<p>Two sets are equal if and only if every element of each set is contained in the other</p>
</blockquote>
<pre><code>In [35]: color1 = "blue"
In [36]: color2 = "yellow"
In [37]: {color1, color2} == {"blue", "yellow"}
Out[37]: True
In [38]: {color2, color1} == {"blue", "yellow"}
Out[38]: True
</code></pre>
| 19 | 2016-09-30T00:28:37Z | [
"python",
"python-3.x",
"if-statement",
"condition",
"simplify"
]
|
Split words on boundary | 39,781,936 | <p>I have some tweets which I wish to split into words. Most of it works fine except when people combine words like: <code>trumpisamoron</code> or <code>makeamericagreatagain</code>. But then there are also things like <code>password</code> which shouldn't be split up into <code>pass</code> and <code>word</code>.</p>
<p>I know that the nltk package has a <code>punkt tokenizer</code> module which splits sentences up in a smart way. Is there something similar for words? Even if it isn't in the nltk package?</p>
<p>Note: The example of <code>password -> pass + word</code> is much less of a problem than the splitting word problem.</p>
| 1 | 2016-09-30T00:29:07Z | 39,800,133 | <p>Ref : My Answer on another Question - <a href="http://stackoverflow.com/questions/38621703/need-to-split-tags-to-text/38624458#38624458">Need to split #tags to text</a>.</p>
<p>The changes I made in this answer are: (1) a different corpus to get <code>WORDS</code> and (2) an added <code>def memo(f)</code> to speed up the process. You may need to add or swap corpora depending upon the domain you are working in.</p>
<p>Check - <a href="http://nbviewer.jupyter.org/url/norvig.com/ipython/How%20to%20Do%20Things%20with%20Words.ipynb" rel="nofollow">Word Segmentation Task</a> from <a href="http://norvig.com/" rel="nofollow">Norvig</a>'s work.</p>
<pre><code>from __future__ import division
from collections import Counter
import re, nltk
from datetime import datetime
WORDS = nltk.corpus.reuters.words() + nltk.corpus.words.words()
COUNTS = Counter(WORDS)
def memo(f):
"Memoize function f, whose args must all be hashable."
cache = {}
def fmemo(*args):
if args not in cache:
cache[args] = f(*args)
return cache[args]
fmemo.cache = cache
return fmemo
def pdist(counter):
"Make a probability distribution, given evidence from a Counter."
N = sum(counter.values())
return lambda x: counter[x]/N
P = pdist(COUNTS)
def Pwords(words):
"Probability of words, assuming each word is independent of others."
return product(P(w) for w in words)
def product(nums):
"Multiply the numbers together. (Like `sum`, but with multiplication.)"
result = 1
for x in nums:
result *= x
return result
def splits(text, start=0, L=20):
"Return a list of all (first, rest) pairs; start <= len(first) <= L."
return [(text[:i], text[i:])
for i in range(start, min(len(text), L)+1)]
@memo
def segment(text):
"Return a list of words that is the most probable segmentation of text."
if not text:
return []
else:
candidates = ([first] + segment(rest)
for (first, rest) in splits(text, 1))
return max(candidates, key=Pwords)
print segment('password') # ['password']
print segment('makeamericagreatagain') # ['make', 'america', 'great', 'again']
print segment('trumpisamoron') # ['trump', 'is', 'a', 'moron']
print segment('narcisticidiots') # ['narcistic', 'idiot', 's']
</code></pre>
<p>Sometimes, when a word gets split into smaller tokens, the likely reason is that the word itself is not present in our <code>WORDS</code> dictionary.</p>
<p>Here, in the last segment, it broke <code>narcisticidiots</code> into 3 tokens because the token <code>idiots</code> was not in our <code>WORDS</code>.</p>
<pre><code># Check for sample word 'idiots'
if 'idiots' in WORDS:
print("YES")
else:
print("NO")
</code></pre>
<p>You can add new user defined words to <code>WORDS</code>.</p>
<pre><code>.
.
user_words = []
user_words.append('idiots')
WORDS+=user_words
COUNTS = Counter(WORDS)
.
.
.
print segment('narcisticidiots') # ['narcistic', 'idiots']
</code></pre>
<p>For a better solution than this, you can use bigram/trigram language models.</p>
<p>More examples at : <a href="http://nbviewer.jupyter.org/url/norvig.com/ipython/How%20to%20Do%20Things%20with%20Words.ipynb" rel="nofollow">Word Segmentation Task</a></p>
| 1 | 2016-09-30T21:08:36Z | [
"python",
"nlp",
"nltk"
]
|
Python Django, access to function value through urls? | 39,781,967 | <p>How can I get the value of the "otro" function? This code works, but it only shows me the value of the get function; how can I get the value of otro? I do not understand how to do it in the urls.</p>
<pre><code>views:
from django.views.generic import ListView, View
from . models import Autor
from django.shortcuts import render, redirect
from django.http import HttpResponse, HttpResponseRedirect
def inicio(request):
return HttpResponse('HOLA')
# Create your views here.
class MiVista(View):
def get(self, request):
# <la logica de la vista>
return HttpResponse('resultado')
def otro(self, request):
# <la logica de la vista>
return HttpResponse('otro')
urls:
from django.conf.urls import url, include
from django.contrib import admin
from .import views
from .views import MiVista
urlpatterns = [
url(r'^hola$', views.inicio),
url(r'^indice/', MiVista.as_view()),
]
</code></pre>
<p>Thank you !</p>
| 0 | 2016-09-30T00:33:34Z | 39,782,026 | <p>Django has <a href="https://docs.djangoproject.com/en/1.10/topics/class-based-views/intro/" rel="nofollow">class based generic views</a>, a system that allows you to use basic functionality without writing repetitive code. If you would like to return an <code>HttpResponse</code> with custom output, like "otro" or a JSON response, you shouldn't add another method inside a generic view, because the URL dispatcher cannot call it (see the link above for what can and cannot be done with generic views). Instead, do what you have done with the <code>inicio</code> function.</p>
<p>Try something like this in views.py, and then add a URL for the new function in the urls.py module (as shown after the code):</p>
<pre><code>def inicio(request):
return HttpResponse('HOLA')
# Create your views here.
class MiVista(View):
def get(self, request):
# <la logica de la vista>
return HttpResponse('resultado')
def otro(request):
# <la logica de la vista>
return HttpResponse('otro')
</code></pre>
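<p>Then hook the plain function view up in <code>urls.py</code> next to your existing patterns (the URL regex below is just an example):</p>
<pre><code>urlpatterns = [
    url(r'^hola$', views.inicio),
    url(r'^indice/', MiVista.as_view()),
    url(r'^otro/$', views.otro),  # otro is now a normal function view
]
</code></pre>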
| 0 | 2016-09-30T00:42:33Z | [
"python",
"django"
]
|
Python 2.7 on Mac OSX. Corrupt module of framework | 39,782,024 | <p>I'm learning python and made a program on Mac OSX El Capitan. The code was working fine, but it randomly started giving me errors without me changing anything in the code. I keep getting this message:</p>
<pre>
Traceback (most recent call last):
File "time.py", line 2, in <module>
from lxml import html
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml-3.6.4-py2.7-macosx-10.6-intel.egg/lxml/html/__init__.py", line 54, in <module>
from .. import etree
File "src/lxml/serializer.pxi", line 4, in init lxml.etree (src/lxml/lxml.etree.c:218282)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/gzip.py", line 8, in <module>
import struct, sys, time, os
File "/Users/user/Desktop/time.py", line 2, in <module>
from lxml import html
ImportError: cannot import name html
</pre>
<p>I have everything installed including:</p>
<pre><code>from lxml import html
from tabulate import tabulate
import requests
import datetime
</code></pre>
<p>I'm not sure what happened. I even used Homebrew to uninstall and reinstall python2.7 and still getting the same error.</p>
<p>What is going on?</p>
| 0 | 2016-09-30T00:42:14Z | 39,785,814 | <p>Thanks for the help. I changed the name of the file to t.py like ShreevatsaR said. I also put the file on the Desktop, and I have everything installed through pip. For some odd reason, once I installed requests and tabulate manually by downloading them and running setup.py, voilà! It worked!</p>
<p>user3543300, maybe pip was installing the packages into the Homebrew Python, because it acted like they weren't there. When I ran which python it gave me this:
/usr/local/bin/python</p>
| 0 | 2016-09-30T07:17:18Z | [
"python",
"osx"
]
|
Move to next column after specific number of lines in python | 39,782,060 | <p>If I have a data set that runs:<br>
1<br>
2<br>
3<br>
4<br>
5<br>
6<br>
from a python output, and i want:<br>
1 4<br>
2 5<br>
3 6<br>
basically after a specific number of lines, I would like to move the output into the next column, can this be done in Python? </p>
<p>This is what I currently have: </p>
<pre><code>aa=[]
for index, line in enumerate(open("random.txt")):
    if index <= 0: continue
    else:
        holder = line.split("\t")
        dataEle = [ val[6] , val[7] , val[9] , val[10] ]
        dataL.append( dataEle )

for ee in dataL:
    for line in ee:
        print line
</code></pre>
| 0 | 2016-09-30T00:46:54Z | 39,782,805 | <pre><code>lines = ['1','2','3','4','5','6']
print(lines[:round(len(lines)/2)])
for i in range(round(len(lines)/2)):
print(lines[i] + lines[i + 3], sep=' ')
## 1 4
## 2 5
## 3 6
</code></pre>
<p>Anyway, you are going to keep your file data in a list, and I can see that you are able to do that on your own. This is the first idea that came to my mind, and I think there are ways to optimize this code, but I hope you caught the main idea. Good luck!</p>
| 0 | 2016-09-30T02:35:52Z | [
"python",
"file",
"row",
"multiple-columns",
"lines"
]
|
Python Iterating Through Large Data Set and Deleting Assessed Data | 39,782,084 | <p>I am working with a data set with 10,000 customers data from months 1-12. I am generating correlations for different values over the 12 month period for each customer.</p>
<p>Currently my output correlation file has more rows than my original file. I realize this is an iteration error from when I am trying to delete the already assessed rows from the original data set.</p>
<p>The result I expect is a data set of 10,000 entries of various correlations corresponding to each customers yearly assessment.</p>
<p>I have bolded (starred) where I believe the error is.</p>
<p>Here is my current code:</p>
<pre><code>for x_customer in range(0,len(overalldata),12):
for x in range(0,13,1):
cust_months = overalldata[0:x,1]
cust_balancenormal = overalldata[0:x,16]
cust_demo_one = overalldata[0:x,2]
cust_demo_two = overalldata[0:x,3]
num_acct_A = overalldata[0:x,4]
num_acct_B = overalldata[0:x,5]
out_mark_channel_one = overalldata[0:x,25]
out_service_channel_two = overalldata[0:x,26]
out_mark_channel_three = overalldata[0:x,27]
out_mark_channel_four = overalldata[0:x,28]
#Correlation Calculations
#Demographic to Balance Correlations
demo_one_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_one)[1,0]
demo_two_corr_balance = numpy.corrcoef(cust_balancenormal, cust_demo_two)[1,0]
#Demographic to Account Number Correlations
demo_one_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_one)[1,0]
demo_one_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_one)[1,0]
demo_two_corr_acct_a = numpy.corrcoef(num_acct_A, cust_demo_two)[1,0]
demo_two_corr_acct_b = numpy.corrcoef(num_acct_B, cust_demo_two)[1,0]
#Marketing Response Channel One
mark_one_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_one)[1, 0]
mark_one_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_one)[1, 0]
mark_one_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_one)[1, 0]
#Marketing Response Channel Two
mark_two_corr_acct_a = numpy.corrcoef(num_acct_A, out_service_channel_two)[1, 0]
mark_two_corr_acct_b = numpy.corrcoef(num_acct_B, out_service_channel_two)[1, 0]
mark_two_corr_balance = numpy.corrcoef(cust_balancenormal, out_service_channel_two)[1, 0]
#Marketing Response Channel Three
mark_three_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_three)[1, 0]
mark_three_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_three)[1, 0]
mark_three_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_three)[1, 0]
#Marketing Response Channel Four
mark_four_corr_acct_a = numpy.corrcoef(num_acct_A, out_mark_channel_four)[1, 0]
mark_four_corr_acct_b = numpy.corrcoef(num_acct_B, out_mark_channel_four)[1, 0]
mark_four_corr_balance = numpy.corrcoef(cust_balancenormal, out_mark_channel_four)[1, 0]
#Result Correlations For Exporting to CSV of all Correlations
result_correlation = [(demo_one_corr_balance),(demo_two_corr_balance),(demo_one_corr_acct_a),(demo_one_corr_acct_b),(demo_two_corr_acct_a),(demo_two_corr_acct_b),(mark_one_corr_acct_a),(mark_one_corr_acct_b),(mark_one_corr_balance),
(mark_two_corr_acct_a),(mark_two_corr_acct_b),(mark_two_corr_balance),(mark_three_corr_acct_a),(mark_three_corr_acct_b),(mark_three_corr_balance),(mark_four_corr_acct_a),(mark_four_corr_acct_b),
(mark_four_corr_balance)]
result_correlation_nan_nuetralized = numpy.nan_to_num(result_correlation)
c.writerow(result_correlation)
**result_correlation_combined = emptylist.append([result_correlation])
cust_delete_list = [0,x_customer,1]
overalldata = numpy.delete(overalldata, (cust_delete_list), axis=0)**
</code></pre>
| -1 | 2016-09-30T00:50:57Z | 39,791,710 | <p>This may not completely solve your problem, but I think it's relevant.</p>
<p>When you run <code>.append</code> on a list object (empty or otherwise), the value returned by that method is <code>None</code>. So, with the line <code>result_correlation_combined = emptylist.append([result_correlation])</code>, regardless of whether <code>empty_list</code> is an empty or non-empty list, the value of <code>result_correlation_combined</code> will be <code>None</code>.</p>
<p>Here's a simple example of what I'm talking about - I'll just make up some numbers since no data were provided.</p>
<pre><code>>>> empty_list = []
>>> result_correlation = []
>>> for j in range(10):
result_correlation.append(j)
>>> result_correlation
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> result_correlation_combined = empty_list.append(result_correlation)
>>> print(result_correlation_combined)
None
</code></pre>
<p>So, make <code>result_correlation_combined</code> an actual list first, and then run <code>result_correlation_combined.append(result_correlation)</code>, or <code>result_correlation_combined += result_correlation</code>, or even <code>result_correlation_combined.extend(result_correlation)</code>... (note that <code>append</code> adds the whole list as a single element, while <code>+=</code> and <code>extend</code> add its individual items). See if that gives you the answer you're looking for. If not, come back.</p>
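<p>A tiny self-contained version of the pattern, with made-up numbers standing in for your per-customer correlation rows:</p>
<pre><code>combined = []                                        # build the list once, up front
for result_correlation in ([1.0, 0.5], [0.2, 0.9]):  # stand-in for your loop results
    combined.append(result_correlation)              # append returns None, so never reassign it

print(combined)                                      # [[1.0, 0.5], [0.2, 0.9]]
</code></pre>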
| 0 | 2016-09-30T12:37:19Z | [
"python",
"list",
"csv",
"for-loop",
"iteration"
]
|
Computing aggregate by creating nested dictionary on the fly | 39,782,108 | <p>I'm new to python and I could really use your help and guidance at the moment. I am trying to read a csv file with three cols and do some computation based on the first and second column i.e.</p>
<pre><code>A spent 100 A spent 2040
A earned 60
B earned 48
B earned 180
A spent 40
.
.
.
</code></pre>
<p>Where A spent 2040 would be the addition of all 'A' and 'spent' amounts. This does not give me an error but it's not logically correct:</p>
<pre><code>for row in rows:
cols = row.split(",")
truck = cols[0]
if (truck != 'A' and truck != 'B'):
continue
record = cols[1]
if(record != "earned" and record != "spent"):
continue
amount = int(cols[2])
#print(truck+" "+record+" "+str(amount))
if truck in entries:
#entriesA[truck].update(record)
if record in records:
records[record].append(amount)
else:
records[record] = [amount]
else:
entries[truck] = records
if record in records:
records[record].append(amount)
else:
entries[truck][record] = [amount]
print(entries)
</code></pre>
<p>I am aware that this part is incorrect because I would be adding the same inner dictionary list to the outer dictionary but I'm not sure how to go from there:</p>
<pre><code>entries[truck] = records
if record in records:
records[record].append(amount)
</code></pre>
<p>However, I'm not sure of the syntax to create a new dictionary on the fly that would not be 'records'.</p>
<p>I am getting:</p>
<pre><code>{'B': {'earned': [60, 48], 'spent': [100]}, 'A': {'earned': [60, 48], 'spent': [100]}}
</code></pre>
<p>But hoping to get:</p>
<pre><code>{'B': {'earned': [48]}, 'A': {'earned': [60], 'spent': [100]}}
</code></pre>
<p>Thanks.</p>
| 4 | 2016-09-30T00:55:38Z | 39,782,312 | <pre><code>if record in entries[truck]:
entries[truck][record].append(amount)
else:
entries[truck][record] = [amount]
</code></pre>
<p>I believe this is what you want: now we are directly accessing the truck's own records, instead of checking the shared local dictionary called <code>records</code>, just like you already do when the truck has no entry yet.</p>
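<p>As a side note, the whole update (including creating the inner dictionary) can be collapsed into one line with <code>setdefault</code>:</p>
<pre><code>entries.setdefault(truck, {}).setdefault(record, []).append(amount)
</code></pre>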
| 0 | 2016-09-30T01:23:40Z | [
"python",
"pandas",
"dictionary",
"group-by",
"aggregate"
]
|
Computing aggregate by creating nested dictionary on the fly | 39,782,108 | <p>I'm new to python and I could really use your help and guidance at the moment. I am trying to read a csv file with three cols and do some computation based on the first and second column i.e.</p>
<pre><code>A spent 100 A spent 2040
A earned 60
B earned 48
B earned 180
A spent 40
.
.
.
</code></pre>
<p>Where A spent 2040 would be the addition of all 'A' and 'spent' amounts. This does not give me an error but it's not logically correct:</p>
<pre><code>for row in rows:
cols = row.split(",")
truck = cols[0]
if (truck != 'A' and truck != 'B'):
continue
record = cols[1]
if(record != "earned" and record != "spent"):
continue
amount = int(cols[2])
#print(truck+" "+record+" "+str(amount))
if truck in entries:
#entriesA[truck].update(record)
if record in records:
records[record].append(amount)
else:
records[record] = [amount]
else:
entries[truck] = records
if record in records:
records[record].append(amount)
else:
entries[truck][record] = [amount]
print(entries)
</code></pre>
<p>I am aware that this part is incorrect because I would be adding the same inner dictionary list to the outer dictionary but I'm not sure how to go from there:</p>
<pre><code>entries[truck] = records
if record in records:
records[record].append(amount)
</code></pre>
<p>However, I'm not sure of the syntax to create a new dictionary on the fly that would not be 'records'.</p>
<p>I am getting:</p>
<pre><code>{'B': {'earned': [60, 48], 'spent': [100]}, 'A': {'earned': [60, 48], 'spent': [100]}}
</code></pre>
<p>But hoping to get:</p>
<pre><code>{'B': {'earned': [48]}, 'A': {'earned': [60], 'spent': [100]}}
</code></pre>
<p>Thanks.</p>
| 4 | 2016-09-30T00:55:38Z | 39,794,082 | <p>For the kind of calculation you are doing here, I highly recommend <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>.</p>
<p>Assuming <code>in.csv</code> looks like this:</p>
<pre><code>truck,type,amount
A,spent,100
A,earned,60
B,earned,48
B,earned,180
A,spent,40
</code></pre>
<p>You can do the totalling with three lines of code:</p>
<pre><code>import pandas
df = pandas.read_csv('in.csv')
totals = df.groupby(['truck', 'type']).sum()
</code></pre>
<p><code>totals</code> now looks like this:</p>
<pre><code> amount
truck type
A earned 60
spent 140
B earned 228
</code></pre>
<p>You will find that Pandas allows you to think on a much higher level and avoid fiddling with lower level data structures in cases like this.</p>
| 2 | 2016-09-30T14:38:57Z | [
"python",
"pandas",
"dictionary",
"group-by",
"aggregate"
]
|
List previous prime numbers of n | 39,782,111 | <p>I am trying to create a program in python that takes a number and determines whether or not that number is prime, and if it is prime I need it to list all of the prime numbers before it. What is wrong with my code?</p>
<pre><code>import math
def factor(n):
num=[]
for x in range(2,(n+1)):
i=2
while i<=n-1:
if n % i == 0:
break
i = i + 1
if i > abs(n-1):
num.append(n)
print(n,"is prime",num)
else:
print(i,"times",n//i,"equals",n)
return
</code></pre>
| -1 | 2016-09-30T00:56:21Z | 39,782,296 | <p>Your method only checks whether 'n' itself is prime or not.
For that, you don't need a nested loop; something like this is enough:</p>
<pre><code>def factor(n):
num=[]
for x in range(2,(n+1)):
if n % x == 0:
break
if x > abs(n-1):
num.append(n)
print(n,"is prime",num)
else:
print(x,"times",n//x,"equals",n)
return
</code></pre>
<p>And then, if you want all the other primes less than n, you can use a prime number sieve algorithm.</p>
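<p>For reference, a minimal Sieve of Eratosthenes (one common version of it) looks like this:</p>
<pre><code>def primes_up_to(n):
    """Return all primes <= n using the Sieve of Eratosthenes."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(20))   # [2, 3, 5, 7, 11, 13, 17, 19]
</code></pre>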
<p>--------- Update --------------</p>
<p>Here is a modification of your code which can find the other primes (but a prime sieve still performs better than this):</p>
<pre><code>def factor(n):
num=[]
for x in range(2,(n+1)):
i=2
while i<=x-1:
if x % i == 0:
break
i = i + 1
if i > abs(x-1):
num.append(x)
if n in num:
print num
else:
print str(n) + ' is not num'
return
</code></pre>
| 0 | 2016-09-30T01:22:12Z | [
"python",
"prime-factoring"
]
|
Python: Check if a numpy array contains an object with specific attribute | 39,782,113 | <p>I want to check if there is any object with a specific attribute in my numpy array:</p>
<pre><code>class Test:
def __init__(self, name):
self.name = name
l = numpy.empty( (2,2), dtype=object)
l[0][0] = Test("A")
l[0][1] = Test("B")
l[1][0] = Test("C")
l[1][1] = Test("D")
</code></pre>
<p>I know that the following line of code works for a list, but what is the alternative for a numpy array?</p>
<pre><code>print numpy.any(l[:,0].name == "A")
</code></pre>
| 1 | 2016-09-30T00:56:32Z | 39,782,268 | <p>One simple way would be creating your array object by inheriting from Numpy's ndarray object. Then use a custom function for checking the existence of your object based on the name attribute:</p>
<pre><code>In [71]: class Myarray(np.ndarray):
....: def __new__(cls, inputarr):
....: obj = np.asarray(inputarr).view(cls)
....: return obj
....: def custom_contain(self, name):
....: return any(obj.name == name for obj in self.flat)
</code></pre>
<p>Demo:</p>
<pre><code>In [4]: A = np.empty((2,2),dtype=object)
In [8]: A.flat[:] = [Test("A"), Test("B"), Test("C"), Test("D")]
In [9]: A
Out[9]:
array([[<__main__.Test instance at 0x7fae0a14ddd0>,
<__main__.Test instance at 0x7fae0a14de18>],
[<__main__.Test instance at 0x7fae0a14de60>,
<__main__.Test instance at 0x7fae0a14dea8>]], dtype=object)
In [11]: A = Myarray(A)
In [12]: A.custom_contain('C')
Out[12]: True
In [13]: A.custom_contain('K')
Out[13]: False
</code></pre>
| 1 | 2016-09-30T01:17:30Z | [
"python",
"python-2.7",
"numpy"
]
|
Python: Check if a numpy array contains an object with specific attribute | 39,782,113 | <p>I want to check if there is any object with a specific attribute in my numpy array:</p>
<pre><code>class Test:
def __init__(self, name):
self.name = name
l = numpy.empty( (2,2), dtype=object)
l[0][0] = Test("A")
l[0][1] = Test("B")
l[1][0] = Test("C")
l[1][1] = Test("D")
</code></pre>
<p>I know that the following line of code works for a list, but what is the alternative for a numpy array?</p>
<pre><code>print numpy.any(l[:,0].name == "A")
</code></pre>
| 1 | 2016-09-30T00:56:32Z | 39,782,433 | <p>I'm clearly not proficient in numpy but couldn't you just do something like:</p>
<pre><code>numpy.any([ x.name=='A' for x in l[:,0] ])
</code></pre>
<p>edit:
(Google tells me that) it's possible to iterate over arrays with <code>nditer</code>; is this what you want? (It seems object arrays need the <code>refs_ok</code> flag, and each iteration item has to be unwrapped with <code>.item()</code>.)</p>
<pre><code>numpy.any([ x.item().name == 'A' for x in numpy.nditer(l, flags=['refs_ok']) ])
</code></pre>
| 0 | 2016-09-30T01:40:44Z | [
"python",
"python-2.7",
"numpy"
]
|