title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Next Biggest Number Same Digits | 39,962,076 | <p>I don't see anything wrong with my code but I can't seem to return -1 when the input cannot produce a next bigger number, i.e. input of <code>531</code> which is descending. </p>
<pre><code>import itertools as it
def next_bigger(n):
if sorted("531", reverse = True) == list("531"):
return -1
s = tuple(str(n))
for x in it.dropwhile(lambda x: x <= s, it.permutations(sorted(s))):
return int(''.join(x))
return s
</code></pre>
<p>Can someone please help?</p>
| -6 | 2016-10-10T15:52:07Z | 39,962,254 | <p>You can simply use an <code>if</code> statement at the beginning of your function to test whether the number is already in reverse sorted order. If it is sorted <code>return -1</code> straight away:</p>
<pre><code>>>> sorted("531", reverse = True) == list("531")
True
</code></pre>
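<p>Putting that together, a minimal sketch of the function with the check applied to <code>str(n)</code> rather than the hard-coded <code>"531"</code> (same brute-force permutation idea, just illustrative):</p>
<pre><code>import itertools as it

def next_bigger(n):
    s = str(n)
    # digits already in descending order -> no bigger permutation exists
    if sorted(s, reverse=True) == list(s):
        return -1
    digits = tuple(s)
    for x in it.dropwhile(lambda p: p <= digits, it.permutations(sorted(digits))):
        return int(''.join(x))
    return -1
</code></pre>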
| 2 | 2016-10-10T16:02:01Z | [
"python",
"integer"
] |
Grouping values in Pandas value_counts() | 39,962,217 | <p>I want to create histogram from my pandas dataframe. I have 1 column, where I save percentage values. I used value_counts() but I have too much percentage values.
Example:</p>
<pre><code>0.752 1
0.769 2
0.800 1
0.823 1
...
80.365 1
84.000 1
84.615 1
85.000 10
85.714 1
</code></pre>
<p>I need to group this values by same rate. For example 5 %. (0 - 4,999 , 5,000 - 9,999, ...) I want this result:</p>
<p>(Example)</p>
<pre><code>0 - 4,999 24
5 - 9,999 12
10 - 14,999 30
...
</code></pre>
| 1 | 2016-10-10T15:59:59Z | 39,962,397 | <p>you can group your data by the result of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow">pd.cut()</a> method:</p>
<pre><code>In [38]: df
Out[38]:
value count
0 0.752 1
1 11.769 3
2 22.800 4
3 33.823 5
4 55.365 1
5 84.000 1
6 84.615 1
7 85.000 10
8 99.714 1
In [39]: df.groupby(pd.cut(df.value, bins=np.linspace(0, 100, 21)))['count'].sum().fillna(0)
Out[39]:
value
(0, 5] 1.0
(5, 10] 0.0
(10, 15] 3.0
(15, 20] 0.0
(20, 25] 4.0
(25, 30] 0.0
(30, 35] 5.0
(35, 40] 0.0
(40, 45] 0.0
(45, 50] 0.0
(50, 55] 0.0
(55, 60] 1.0
(60, 65] 0.0
(65, 70] 0.0
(70, 75] 0.0
(75, 80] 0.0
(80, 85] 12.0
(85, 90] 0.0
(90, 95] 0.0
(95, 100] 1.0
Name: count, dtype: float64
</code></pre>
<p>alternatively you can drop NaN's:</p>
<pre><code>In [40]: df.groupby(pd.cut(df.value, bins=np.linspace(0, 100, 21)))['count'].sum().dropna()
Out[40]:
value
(0, 5] 1.0
(10, 15] 3.0
(20, 25] 4.0
(30, 35] 5.0
(55, 60] 1.0
(80, 85] 12.0
(95, 100] 1.0
Name: count, dtype: float64
</code></pre>
<p>Explanation:</p>
<pre><code>In [41]: pd.cut(df.value, bins=np.linspace(0, 100, 21))
Out[41]:
0 (0, 5]
1 (10, 15]
2 (20, 25]
3 (30, 35]
4 (55, 60]
5 (80, 85]
6 (80, 85]
7 (80, 85]
8 (95, 100]
Name: value, dtype: category
Categories (20, object): [(0, 5] < (5, 10] < (10, 15] < (15, 20] ... (80, 85] < (85, 90] < (90, 95] < (95, 100]]
</code></pre>
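<p>If the goal is just a histogram of the percentages in 5% buckets, a shorter sketch (assuming you still have the original percentage column, here called <code>value</code>, rather than the <code>value_counts()</code> output) could be:</p>
<pre><code>import numpy as np
import pandas as pd

bins = np.arange(0, 105, 5)                      # 0, 5, 10, ..., 100
counts = pd.cut(df['value'], bins=bins).value_counts(sort=False)
counts.plot.bar()                                # or df['value'].plot.hist(bins=bins)
</code></pre>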
| 1 | 2016-10-10T16:09:44Z | [
"python",
"pandas",
"count",
"value"
] |
Fully understand for loop python | 39,962,277 | <p>I start out with a small code example right away:</p>
<pre><code>def foo():
return 0
a = [1, 2, 3]
for el in a:
el = foo()
print(a) # [1, 2, 3]
</code></pre>
<p>I would like to know what <strong><em>el</em></strong> is in this case. As <strong><em>a</em></strong> remains the same, I intuite that <strong><em>el</em></strong> is a reference to an int. But after reassigning it, el points to a new int object that has nothing to do with the list <strong><em>a</em></strong> anymore.</p>
<p>Please tell me, if I understand it correctly. Furthermore, how do you get around this pythonic-ly? is <em>enumerate()</em> the right call as</p>
<pre><code>for i, el in enumerate(a):
a[i] = foo()
</code></pre>
<p>works fine.</p>
| -1 | 2016-10-10T16:03:25Z | 39,962,348 | <p>Yes, you understood this correctly. <code>for</code> sets the target variable <code>el</code> to point to each of the elements in <code>a</code>. <code>el = foo()</code> indeed then updates that name to point to a different, unrelated integer.</p>
<p>Using <code>enumerate()</code> is a good way to replace the references in <code>a</code> instead.</p>
<p>In this context, you may find the <a href="http://nedbatchelder.com/text/names.html" rel="nofollow"><em>Facts and myths about Python names and values</em> article</a> by Ned Batchelder helpful.</p>
<p>Another way would be to create a new list object altogether and re-bind <code>a</code> to point to that list. You could build such a list with a list comprehension:</p>
<pre><code>a = [foo() for el in a]
</code></pre>
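<p>A quick way to see the difference for yourself (just an illustrative snippet):</p>
<pre><code>a = [1, 2, 3]
for el in a:
    el = 0            # rebinds only the local name 'el'
print(a)              # [1, 2, 3] -- the list is unchanged

for i, el in enumerate(a):
    a[i] = 0          # assigns through the index, so the list itself changes
print(a)              # [0, 0, 0]
</code></pre>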
| 1 | 2016-10-10T16:06:53Z | [
"python",
"python-3.x",
"for-loop",
"reference"
] |
Assign a class to itself in python | 39,962,292 | <p>I just want to ask if I can assign an instance of a class to itself in a method.</p>
<p>For example, is the following valid python code?</p>
<pre><code>class O(object):
def __init__(self,value):
self.value = value
def do_something(self):
self = O(1)
</code></pre>
<p>Does this lead to any unexpected behaviour?</p>
<p>Obviously, the code can be run. But when I do</p>
<pre><code>A = O(2)
A.do_something()
A.value
</code></pre>
<p>the output is <code>2</code>, when I expect it to be <code>1</code>.</p>
| 0 | 2016-10-10T16:03:48Z | 39,962,320 | <p><code>self</code> is a local variable. So, it will make your code confusing in that method, and will prevent modifications on (or calls to) the original self object. </p>
| 0 | 2016-10-10T16:05:25Z | [
"python",
"class",
"oop"
] |
Assign a class to itself in python | 39,962,292 | <p>I just want to ask if I can assign an instance of a class to itself in a method.</p>
<p>For example, is the following valid python code?</p>
<pre><code>class O(object):
def __init__(self,value):
self.value = value
def do_something(self):
self = O(1)
</code></pre>
<p>Does this lead to any unexpected behaviour?</p>
<p>Obviously, the code can be run. But when I do</p>
<pre><code>A = O(2)
A.do_something()
A.value
</code></pre>
<p>the output is <code>2</code>, when I expect it to be <code>1</code>.</p>
| 0 | 2016-10-10T16:03:48Z | 39,962,451 | <p><code>self</code> is <em>just another local variable</em>. You can assign something else to it just like you can assign something to any variable. To begin with, <code>self</code> points to the same object <code>A</code> points to in your example.</p>
<p>When you then execute <code>self = O(1)</code>, that statement <em>rebinds</em> the name <code>self</code>. It was previously referencing an instance of <code>O()</code>, and now you point it to <em>another, different</em> instance of <code>O()</code>. Those two objects are otherwise independent, doing this doesn't make anything else happen. <code>A</code> still references the first instance, <code>self</code> now references a different one.</p>
<p>So assigning <code>O(2)</code> to <code>self</code> doesn't do anything to the previous object that <code>self</code> pointed to. It certainly won't alter the value of the <code>value</code> attribute of that previous object. <code>A</code> still points to the first instance, where the <code>value</code> attribute is pointing to the integer <code>2</code>.</p>
<p>You may want to read this article on Python names, by Ned Batchelder: <a href="http://nedbatchelder.com/text/names.html" rel="nofollow"><em>Facts and myths about Python names and values</em></a>, which lays out what happens when you assign something to a Python variable. <code>self</code> is nothing special here.</p>
| 3 | 2016-10-10T16:12:32Z | [
"python",
"class",
"oop"
] |
Assign a class to itself in python | 39,962,292 | <p>I just want to ask if I can assign an instance of a class to itself in a method.</p>
<p>For example, is the following valid python code?</p>
<pre><code>class O(object):
def __init__(self,value):
self.value = value
def do_something(self):
self = O(1)
</code></pre>
<p>Does this lead to any unexpected behaviour?</p>
<p>Obviously, the code can be run. But when I do</p>
<pre><code>A = O(2)
A.do_something()
A.value
</code></pre>
<p>the output is <code>2</code>, when I expect it to be <code>1</code>.</p>
| 0 | 2016-10-10T16:03:48Z | 39,962,492 | <p>As Marcin answered, it is feasible, but it is not an advisable pattern.</p>
<p>Why would you like to do something like that? If you'd like instances of your class to have a settable property after instantiation, you shouldn't do it with a constructor. A simple solution for your proposal:</p>
<pre><code>class O(object):
def __init__(self,value):
self.value = value
def do_something(self):
self.value = 1
</code></pre>
<p>There would be alternative options depending on your intentions. Perhaps a default value?</p>
<pre><code>class O(object):
def __init__(self,value):
self.value = value
def do_something(self, value=1):
self.value = value
o = O(2)
# o.value == 2
o.do_something()
# o.value == 1
</code></pre>
<p>I think the key here is: Try to be <strong>explicit</strong> instead of <strong>smart</strong>. It may look to you that the solution you're looking for is clever, but nothing beats <strong>clarity</strong> when you're coding.</p>
| 0 | 2016-10-10T16:15:25Z | [
"python",
"class",
"oop"
] |
i want to create a matrix of multiplication tables | 39,962,323 | <p>i want to create a matrix of first 12 multiplication tables .</p>
<p>my code so far is:</p>
<pre><code>x = range(1,13,1)
n = range(1,13,1)
list_to_append = []
list_for_matrix = []
for i in x:
for j in n:
list_to_append.append(i*j)
list_for_matrix.append(list_to_append[0:12])
list_for_matrix.append(list_to_append[12:24])
list_for_matrix.append(list_to_append[24:36])
print (list_to_append)
print (list_for_matrix)
</code></pre>
<p>the output i got is:</p>
<pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 11, 22, 33, 44, 55, 66, 77, 88, 99, 110, 121, 132, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 144]
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24], [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]]
</code></pre>
<p>In my for loop, when <code>i=1</code> and <code>j = range(1,12,1)</code> , i want the output <code>(i*j)</code> as a list like <code>[1,2,3,4,5,6,7,8,9,10,11,12]</code> and this should happen for every iteration. finally , i want to append the above list to an empty list like
<code>[[1,2,3,4,5,6,7,8,9,,10,11,12]]</code>.so ,in my code , i can't do slicing for 12 multiplication tables. is there any better way to do it?</p>
| 0 | 2016-10-10T16:05:42Z | 39,962,394 | <p>You can use the following <em>list comprehension</em>:</p>
<pre><code>x = range(1,13) # default step value is 1, no need to specify
n = range(1,13)
mult_table = [[i*j for j in x] for i in n]
</code></pre>
<hr>
<p>Output:</p>
<pre><code>print(mult_table)
# [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
# [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24],
# [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36],
# [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48],
# [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60],
# [6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72],
# [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84],
# [8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96],
# [9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108],
# [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],
# [11, 22, 33, 44, 55, 66, 77, 88, 99, 110, 121, 132],
# [12, 24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 144]]
</code></pre>
<p>Notice the <em>nested comprehension</em> from which the values of the other dimension are generated and are multiplied with those of the first.</p>
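<p>If you also want to see the result laid out as a grid, a small sketch for printing it:</p>
<pre><code>for row in mult_table:
    print(' '.join('{:4d}'.format(v) for v in row))
</code></pre>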
| 4 | 2016-10-10T16:09:35Z | [
"python",
"python-2.7",
"python-3.x",
"ipython"
] |
i want to create a matrix of multiplication tables | 39,962,323 | <p>i want to create a matrix of first 12 multiplication tables .</p>
<p>my code so far is:</p>
<pre><code>x = range(1,13,1)
n = range(1,13,1)
list_to_append = []
list_for_matrix = []
for i in x:
for j in n:
list_to_append.append(i*j)
list_for_matrix.append(list_to_append[0:12])
list_for_matrix.append(list_to_append[12:24])
list_for_matrix.append(list_to_append[24:36])
print (list_to_append)
print (list_for_matrix)
</code></pre>
<p>the output i got is:</p>
<pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 11, 22, 33, 44, 55, 66, 77, 88, 99, 110, 121, 132, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 144]
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24], [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]]
</code></pre>
<p>In my for loop, when <code>i=1</code> and <code>j = range(1,12,1)</code> , i want the output <code>(i*j)</code> as a list like <code>[1,2,3,4,5,6,7,8,9,10,11,12]</code> and this should happen for every iteration. finally , i want to append the above list to an empty list like
<code>[[1,2,3,4,5,6,7,8,9,,10,11,12]]</code>.so ,in my code , i can't do slicing for 12 multiplication tables. is there any better way to do it?</p>
| 0 | 2016-10-10T16:05:42Z | 39,962,625 | <p>You can create an empty list and append lists to it.
Like:</p>
<pre><code>#!/usr/bin/python
table = []
for y in range(1, 13):
    # Create the inner list with a temporary variable.
    # You must do this every time before the inner loop is entered,
    # otherwise every pass of the outer loop would keep appending
    # to the same list object.
row = []
# Fill the inner list.
for x in range(1, 13):
row.append(x*y)
# Append the inner list to the outer list.
table.append(row)
# A much more convenient way would be:
table = [[x*y for x in range(1, 13)] for y in range(1, 13)]
# [f(x) for x in v] is a list of the value of the "f" function for each
# value in the list "v".
</code></pre>
| 1 | 2016-10-10T16:22:32Z | [
"python",
"python-2.7",
"python-3.x",
"ipython"
] |
How are the values from c variable in scatter (from matplotlib) converted? | 39,962,372 | <p>I am creating a PCA plot using matplotlib Python library, I choose the color of each point according to a class value (which is 0, 1 or 2).
To do so I am using the parameter called c:</p>
<pre><code>plt.scatter(pca_data[:, 0], pca_data[:, 1], c=[0,1,1,0,2])
</code></pre>
<p>What I would like to do is add a legend linking the colour from the plot with a label, however, I cannot find how the values I give (0, 1 or 2) are converted to actual colors.</p>
<p>I thought about giving colors directly but I would like the process to be automated so it would work no matter the actual number of classes.</p>
<p>I tried using <code>to_rgb (from matplotlib.colors)</code> but since the values are not between 0 and 1 it does not work and if I scale them I end up with an odd color vector (black, white, black). </p>
<p>Any idea?</p>
<p>Thanks.</p>
| 1 | 2016-10-10T16:07:54Z | 39,962,691 | <p>Here is an example where there are 3 labels (but the method is scalable to any number):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
pca_data = np.random.randn(100,2) #some fake data
labels = np.random.randint(low=0,high=3,size=100) #some fake class labels
cNorm = colors.Normalize(vmin=0,vmax=2) #normalise the colormap
scalarMap = cm.ScalarMappable(norm=cNorm,cmap='hot') #map numbers to colors
fig,ax = plt.subplots()
for i in np.unique(labels):
ax.scatter(pca_data[labels==i,0],pca_data[labels==i,1],\
c=scalarMap.to_rgba(i),label="class {}".format(i),s=100)
ax.legend(scatterpoints=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/3pACk.png" rel="nofollow"><img src="http://i.stack.imgur.com/3pACk.png" alt="enter image description here"></a></p>
| 0 | 2016-10-10T16:26:02Z | [
"python",
"matplotlib"
] |
Why is BeautifulSoup not extracting all of HTML from a webpage? | 39,962,420 | <p>I am trying to extract text from this website: <a href="https://www.searchgurbani.com/guru_granth_sahib/ang_by_ang" rel="nofollow">searchgurbani</a>. This website has some old scripture translated in English and Punjabi (an Indian Language) line-by-line. It makes a very good parallel corpus. I have successfully extracted all the English translations in a separate text file. But when I go for Punjabi, It returns nothing.</p>
<p>This is the Inspect element screenshot: (Highlighted text is the translated Punjabi language)</p>
<p><a href="http://i.stack.imgur.com/Jzai7.png" rel="nofollow">Screenshot 1</a></p>
<p>In Screenshot 1, highlighted text which belongs to <em>class=lang_16</em> is not listed in the soup object <em>beautiful</em> which should contain all of the HTML. Here is the Python code:</p>
<pre><code>outputFilePunjabi = open("1.txt","w",newline="",encoding="utf-16")
r=urlopen("")
beautiful = BeautifulSoup(r.read().decode('utf-8'),"html5lib")
#beautiful = BeautifulSoup(r.read().decode('utf-8'),"lxml")
punjabi_text = beautiful.find_all(class_="lang_16")
for i in punjabi_text:
outputFilePunjabi.write(i.get_text())
outputFilePunjabi.write('\n')
</code></pre>
<p>If I run the same code with <em>class_=lang_4</em> it does the work.</p>
<p>Please do the following to see lang_16 in inspect element:</p>
<p><em>Please do the following on that web page: Go to preferences --> Tick "translation of Sri Guru Granth Sahib ji (by S. Manmohan Singh) - Punjabi" under Additional Translations available on Guru Granth Shahib: --> scroll down - submit changes -> reopen page</em></p>
<p>Please guide me where I am going wrong.</p>
<p>(python version = 3.5)</p>
<p><em>PS: I have very less experience in web scrapping.</em></p>
| 2 | 2016-10-10T16:10:50Z | 39,962,713 | <p>Remember you've suggested to do the following:</p>
<blockquote>
<p>Please do the following on that web page: Go to preferences -> Tick
"ranslation of Sri Guru Granth Sahib ji (by S. Manmohan Singh) -
Punjabi" under Additional Translations available on Guru Granth
Shahib: -> scroll down - submit changes</p>
</blockquote>
<p>Now, this is also required when you download the page in Python. In other words, use <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>requests</code></a> and <strong>set the <code>lang_16="yes"</code> cookie</strong> to enable the Punjabi translation: </p>
<pre><code>import requests
from bs4 import BeautifulSoup
with requests.Session() as session:
response = session.get("https://www.searchgurbani.com/guru_granth_sahib/ang_by_ang", cookies={
"lang_16": "yes"
})
soup = BeautifulSoup(response.content, "html5lib")
for item in soup.select(".lang_16"):
print(item.get_text())
</code></pre>
<p>Prints:</p>
<pre><code>ਵਾਹਿਗੁਰੂ ...  (Punjabi Gurmukhi translation lines from the .lang_16 elements)
| 1 | 2016-10-10T16:27:28Z | [
"python",
"html",
"python-3.x",
"web-scraping",
"beautifulsoup"
] |
pandas python using head() generic function | 39,962,476 | <p>I am a beginner to python,in our college they assigned a project i.e. to display timetables of our department using pandas python.Iam presenting my program as menu driven program.But the problem is Iam not able to display my timetables in a grid pattern using head() in if else loop..can u let me know where I am i wrong..??
Sample code:</p>
<pre><code>import pandas as pd
import numpy as np
loop=1
while loop==1:
print('WELCOME TO CBIT')
print('1.IT 2')
print('2.EXIT')
print()
choice=input('Enter ur choice')
choice=int(choice)
if choice==1:
df=input('tt is')
df=pd.read_excel('Book1.xlsx')
df.head()
elif choice==2:
loop=0
</code></pre>
| -1 | 2016-10-10T16:14:24Z | 39,962,578 | <h1>use this code. it will work. before using this code go to command prompt and write execute this command <code>pip install xlrd</code> I am sure it will work.</h1>
<pre><code>import pandas as pd
import numpy as np
loop=1
while loop==1:
print('WELCOME TO CBIT')
print('1.IT 2')
print('2.EXIT')
choice=int(raw_input('Input:'))
if choice==1:
#df=input('tt is')
df=pd.read_excel('filename.xlsx') #enter file name here
print(df)
#df.head()
elif choice==2:
loop=0
</code></pre>
| 0 | 2016-10-10T16:20:20Z | [
"python",
"pandas",
"head"
] |
pandas python using head() generic function | 39,962,476 | <p>I am a beginner to python,in our college they assigned a project i.e. to display timetables of our department using pandas python.Iam presenting my program as menu driven program.But the problem is Iam not able to display my timetables in a grid pattern using head() in if else loop..can u let me know where I am i wrong..??
Sample code:</p>
<pre><code>import pandas as pd
import numpy as np
loop=1
while loop==1:
print('WELCOME TO CBIT')
print('1.IT 2')
print('2.EXIT')
print()
choice=input('Enter ur choice')
choice=int(choice)
if choice==1:
df=input('tt is')
df=pd.read_excel('Book1.xlsx')
df.head()
elif choice==2:
loop=0
</code></pre>
| -1 | 2016-10-10T16:14:24Z | 39,963,537 | <p>Add a print statement: <code>print(df.head())</code> . If you are looking for a nicer display you can have a look at <a href="https://docs.python.org/2/library/pprint.html" rel="nofollow"><code>pprint</code></a>.</p>
| 0 | 2016-10-10T17:20:40Z | [
"python",
"pandas",
"head"
] |
How to understand/use the Python difflib output? | 39,962,499 | <p>I am trying to make comprehensive diff that compares command line output of two programs. I used <code>difflib</code> and came up with this code:</p>
<pre><code>from difflib import Differ
from pprint import pprint
import sys
def readable_whitespace(line):
return line.replace("\n", "\\n")
# Two strings are expected as input
def print_diff(text1, text2):
d = Differ()
text1 = text1.splitlines(True)
text2 = text2.splitlines(True)
text1 = [readable_whitespace(line) for line in text1]
text1 = [readable_whitespace(line) for line in text2]
result = list(d.compare(text1, text2))
sys.stdout.writelines(result)
sys.stdout.write("\n")
</code></pre>
<p>Some requirements I have:</p>
<ul>
<li>(obvious) It should be clear what is from which output when there is a difference</li>
<li>New lines are replaced with <code>\n</code> because they matter in my case and must be clearly visible when causing conflict</li>
</ul>
<p>I made a simple test for my diff function:</p>
<pre><code>A = "AAABAAA\n"
A += "BBB\n"
B = "AAAAAAA\n"
B += "\n"
B += "BBB"
print_diff(A,B)
</code></pre>
<p>For your convenience, here is test merged with the function so that you can execute it as file: <a href="http://pastebin.com/BvQw9naa" rel="nofollow">http://pastebin.com/BvQw9naa</a></p>
<p>I have no idea what is this output trying to say to me:</p>
<pre><code>- AAAAAAA\n? ^^
+ AAAAAAA
? ^
- \n+
BBB
</code></pre>
<p>Notice those two <code>^</code> symbols on first line? What are they pointing to...? Also, I intentionally put trailing new line into one test string. I don't think the diff noticed that.</p>
<p>How to make the output comprehensive <strong>or</strong> learn to understand it?</p>
| 3 | 2016-10-10T16:15:53Z | 39,963,449 | <p>The main problem with your example is how you are handling endline characters. If you completely replace them in the input, the output will no longer line up correctly, and so won't make any sense. To fix that, the <code>readable_whitespace</code> function should look something like this:</p>
<pre><code>def readable_whitespace(line):
end = len(line.rstrip('\r\n'))
return line[:end] + repr(line[end:])[1:-1] + '\n'
</code></pre>
<p>This will handle all types of endline sequence, and ensures that the lines are displayed correctly when printed.</p>
<p>The other minor problem is due to a typo:</p>
<pre><code>text1 = [readable_whitespace(line) for line in text1]
text1 = [readable_whitespace(line) for line in text2]
# --^ oops!
</code></pre>
<p>Once these fixes are made, the output will look like this:</p>
<pre><code>- AAABAAA\n
? ^
+ AAAAAAA\n
? ^
+ \n
- BBB\n
? --
+ BBB
</code></pre>
<p>which should hopefully now make sense to you.</p>
| 2 | 2016-10-10T17:14:38Z | [
"python",
"python-2.7",
"difflib"
] |
Passing variables between functions in Python | 39,962,564 | <p>Ok so I am having a tough time getting my head around passing variables between functions:</p>
<p>I can't seem to find a clear example. </p>
<p>I do not want to run funa() in funb(). </p>
<pre><code>def funa():
name=input("what is your name?")
age=input("how old are you?")
return name, age
funa()
def funb(name, age):
print(name)
print(age)
funb()
</code></pre>
| -3 | 2016-10-10T16:19:35Z | 39,962,605 | <p>Since <code>funa</code> is <em>returning</em> the values for name and age, you need to assign those to local variables and then pass them into funb:</p>
<pre><code>name, age = funa()
funb(name, age)
</code></pre>
<p>Note that the names within the function and outside are not linked; this would work just as well:</p>
<pre><code>foo, bar = funa()
funb(foo, bar)
</code></pre>
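<p>Putting it together, the whole script could look like this (a sketch):</p>
<pre><code>def funa():
    name = input("what is your name?")
    age = input("how old are you?")
    return name, age

def funb(name, age):
    print(name)
    print(age)

name, age = funa()    # unpack the returned tuple
funb(name, age)       # pass both values on
</code></pre>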
| 5 | 2016-10-10T16:21:48Z | [
"python"
] |
Passing variables between functions in Python | 39,962,564 | <p>Ok so I am having a tough time getting my head around passing variables between functions:</p>
<p>I can't seem to find a clear example. </p>
<p>I do not want to run funa() in funb(). </p>
<pre><code>def funa():
name=input("what is your name?")
age=input("how old are you?")
return name, age
funa()
def funb(name, age):
print(name)
print(age)
funb()
</code></pre>
| -3 | 2016-10-10T16:19:35Z | 39,962,692 | <p>Think of it as passing objects around by using variables and function parameters as references to those objects. When I updated your example, I also changed the names of the variables so that it is clear that the objects live in different variables in different namespaces.</p>
<pre><code>def funa():
name=input("what is your name?")
age=input("how old are you?")
return name, age # return objects in name, age
my_name, my_age = funa() # store returned name, age objects
# in global variables
def funb(some_name, some_age): # a function that takes name and
# age objects
print(some_name)
print(some_age)
funb(my_name, my_age) # use the name, age objects in the
# global variables to call the function
</code></pre>
| 0 | 2016-10-10T16:26:03Z | [
"python"
] |
Passing variables between functions in Python | 39,962,564 | <p>Ok so I am having a tough time getting my head around passing variables between functions:</p>
<p>I can't seem to find a clear example. </p>
<p>I do not want to run funa() in funb(). </p>
<pre><code>def funa():
name=input("what is your name?")
age=input("how old are you?")
return name, age
funa()
def funb(name, age):
print(name)
print(age)
funb()
</code></pre>
| -3 | 2016-10-10T16:19:35Z | 39,962,738 | <p>As it is returning a tuple, you can simply unpack it with <code>*</code>:</p>
<pre><code>funb(*funa())
</code></pre>
<p>It should look something like this:</p>
<pre><code>def funa():
# funa stuff
return name, age
def funb(name, age):
# funb stuff
print ()
funb(*funa())
</code></pre>
| 0 | 2016-10-10T16:29:42Z | [
"python"
] |
ValueError when running keras in python | 39,962,723 | <p>When I train the CNN:</p>
<pre><code>model = Sequential()
model.add(Convolution2D(4, 5, 5, border_mode='valid', input_shape=(1,28,28)))
model.add(Activation('tanh'))
model.add(Convolution2D(8, 3, 3, border_mode='valid'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(16, 3, 3, border_mode='valid'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, init='normal'))
model.add(Activation('tanh'))
model.add(Dense(10, init='normal'))
model.add(Activation('softmax'))
sgd = SGD(l2=0.0,lr=0.05, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd,class_mode="categorical")
</code></pre>
<p>and the Traceback (most recent call last):</p>
<pre><code>Traceback (most recent call last):
File "F:\eclipse\dasd\aaa\test1.py", line 89, in <module>
model.add(Dense(128, init='normal'))
File "D:\Anaconda2\lib\site-packages\keras\models.py", line 308, in add
output_tensor = layer(self.outputs[0])
File "D:\Anaconda2\lib\site-packages\keras\engine\topology.py", line 487, in __call__
self.build(input_shapes[0])
File "D:\Anaconda2\lib\site-packages\keras\layers\core.py", line 695, in build
name='{}_W'.format(self.name))
File "D:\Anaconda2\lib\site-packages\keras\initializations.py", line 36, in normal
return K.random_normal_variable(shape, 0.0, scale, name=name)
File "D:\Anaconda2\lib\site-packages\keras\backend\theano_backend.py", line 145, in random_normal_variable
return variable(np.random.normal(loc=0.0, scale=scale, size=shape),
File "mtrand.pyx", line 1903, in mtrand.RandomState.normal (numpy\random\mtrand\mtrand.c:18479)
File "mtrand.pyx", line 234, in mtrand.cont2_array_sc (numpy\random\mtrand\mtrand.c:3092)
ValueError: negative dimensions are not allowed
</code></pre>
<p>Could you please tell me where problem is?</p>
| 0 | 2016-10-10T16:28:41Z | 40,013,512 | <p>You need to add <code>dim_ordering='th'</code> in <code>Convolution2D</code> and <code>MaxPooling2D</code> sine your <code>input_shape</code> is <code>(1, 28, 28)</code>. Or else if you don't want to add an <code>dim_ordering</code> , then you can change input shape to <code>(28, 28, 1)</code>.</p>
<pre><code>#!/usr/bin/env python
# coding=utf-8
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import SGD
model = Sequential()
model.add(Convolution2D(4, 5, 5, border_mode='valid', input_shape=(1,28,28), dim_ordering='th'))
model.add(Activation('tanh'))
model.add(Convolution2D(8, 3, 3, border_mode='valid', dim_ordering='th'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering='th'))
model.add(Convolution2D(16, 3, 3, border_mode='valid',dim_ordering='th'))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering='th'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('tanh'))
model.add(Dense(10))
model.add(Activation('softmax'))
sgd = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd,class_mode="categorical")
print model.summary()
</code></pre>
<p><strong>UPDATE</strong>: You can also add <code>"image_dim_ordering": "th"</code> in <code>~/.keras/keras.json</code>. But, I'm not sure where this file is created on a Windows machine.</p>
| 0 | 2016-10-13T06:09:21Z | [
"python",
"numpy",
"keras"
] |
Pandas- how to do merge on multiple columns including index | 39,962,731 | <p>I want to perform a merge in pandas on more than one column, where one of the columns is an index column.</p>
<p>Here are example dataframes:</p>
<pre><code> df1 = pd.DataFrame(np.random.rand(4,4), columns=list('ABCD'))
df2 = pd.DataFrame(np.random.rand(4,4), columns=list('EFGH'), index= [5,2,4,1])
df1['E'] = ['hello','hello','hello','world']
df2['E'] = ['world','world','hello','hello']
</code></pre>
<p>I want to perform an inner merge on the index and column E, so that it will return only one row:(index,E) = (1,'hello').</p>
| 1 | 2016-10-10T16:29:16Z | 39,962,925 | <p>what about this?</p>
<pre><code>In [82]: pd.merge(df1.reset_index(), df2.reset_index(), on=['index','E']).set_index('index')
Out[82]:
A B C D E F G H
index
1 0.516878 0.56163 0.082839 0.420587 hello 0.62601 0.787371 0.121979
</code></pre>
| 1 | 2016-10-10T16:40:56Z | [
"python",
"pandas",
"merge"
] |
Can't reference Lib/site-packages virtualenv flask | 39,962,794 | <p>I am trying to tinker with Flask and Python using virtualenv. I have made my current working directory C:/Users/dylan/Desktop/TestPython/FlaskTest and activated a virtualenv here. Now when I'm in here and activated, I ran the command
pip install flask and the package was copied to Lib/site-packages. I've look at other SO posts, they didn't really explain how to import modules from the site-packages directory. </p>
<p>My folder structure is as follows (directories with M were created manually by me not the activate script)</p>
<pre><code>/TestPython
/Lib (contains /site-packages/flask)
/Include
/resources (M)
/Scripts
/static(M)
/templates (M)
routes.py
</code></pre>
<p>Now from my routes.py file, I get an ImportError for trying to import flask. How do I accomplish importing flask in my routes.py file.</p>
| 0 | 2016-10-10T16:33:21Z | 39,963,134 | <p>So I figured out that Sublime Text does not use the same command line instance that you have open. It doesn't actually build using the virtual environment. I think I need to modify my build system. If I run python routes.py while activated it does work as expected. I was trying to run it from sublime text and that was my mistake. Does anyone know how to modify your build system in sublime text 3 to use virtual environment?</p>
| 0 | 2016-10-10T16:53:14Z | [
"python",
"python-3.x",
"pip",
"virtualenv"
] |
error copying folder with python | 39,962,808 | <p>i made a program that copy automatically a usb device.
when it copy the usb it create one folder in correct destination, and one folder in the same path of python program. i want that itcreate only one folder in correct destination! thanks</p>
<p>this is the code:</p>
<pre><code>import shutil
from array import *
import math
import time
import os
import sys
import random
import datetime
def data():
now = datetime.datetime.now()
format = "%d %b %H.%M"
global now_date
now_date = now.strftime(format)
format = "%M"
global minuti
minuti = now.strftime(format)
data()
old_date = now_date
alfabeto = ['A:','B:','F:','G:','H:','I:','L:','M:','N:','O:',] #mancano e,c,d
a = (r'')
b=random.choice('abcdefghilmnopqrstuvz1234567890èòà ù')
new_dir = '{}'.format(now_date)
inc = (r'C:\documenti\program\file\collegamenti\'')
incollaa = "".join([inc, new_dir,' ',b])
i=0
while True:
try:
if i==10: i=0
time.sleep(1)
copiaa = "".join([a, alfabeto[i]])
i=i+1
shutil.copytree(copiaa,incollaa)
if not os.path.exists(new_dir):
os.makedirs(new_dir)
break
except FileNotFoundError:
pass
</code></pre>
| 0 | 2016-10-10T16:34:10Z | 39,963,042 | <p>Your problem is the following lines:</p>
<pre><code>if not os.path.exists(new_dir):
os.makedirs(new_dir)
</code></pre>
<p>Since <code>new_dir</code> is a relative path (a date string), it will be created in the working folder of your script.</p>
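<p>One way to keep the new folder under the destination instead (a sketch reusing the names from the question; if the extra folder isn't needed at all, those two lines can simply be removed):</p>
<pre><code>import os

dest_root = r'C:\documenti\program\file\collegamenti'
target = os.path.join(dest_root, new_dir)   # build an absolute path under the destination
if not os.path.exists(target):
    os.makedirs(target)
</code></pre>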
| 0 | 2016-10-10T16:47:58Z | [
"python",
"python-3.x",
"subprocess",
"folder",
"copy-paste"
] |
How to use a train.csv , test.csv and ground_truth.csv in a machine learning model? (cross validation/ python) | 39,962,836 | <p>Up to now I had only one dataset (df.csv). So far I used a validation size of 20% and <code>.train_test_split</code> for a normal regression model. </p>
<pre><code>array = df.values
X = array[:,0:26]
Y = array[:,26]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation =
cross_validation.train_test_split(X, Y,
test_size=validation_size, random_state=seed)
num_folds = 10
num_instances = len(X_train)
seed = 7
scoring = 'mean_squared_error'
</code></pre>
<p>When I have three seperate datasets (train.csv/test.csv/ground_truth.csv), how can I handle it? Of course, at first I use the train.csv, then the test.csv and finally the ground_truth. But how should I implement these different datasets in my model?</p>
| 0 | 2016-10-10T16:35:46Z | 39,973,288 | <p>When you perform cross-validation, train and test data are essentially the same dataset which is split in different ways in order to prevent overfitting. The number of folds indicates the different ways the set is split. </p>
<p>For example, 5-fold cross validation splits the training set in 5 pieces and each time 4 of them are used for training and 1 for testing. So in your case, you have the following options: </p>
<p>Either perform cross-validation just on the training set, then check with the test set and the ground truth (fitting is done just on the training set so if done correctly accuracy on test and ground truth ought to be similar) or combine training and test for a larger and possibly more representative dataset and then check on ground truth.</p>
| 1 | 2016-10-11T08:32:07Z | [
"python",
"numpy",
"machine-learning",
"scipy",
"cross-validation"
] |
Set default logging level in python | 39,962,926 | <p>I am having trouble with a module (specifically pymel.core in maya) which seems to change the default logging level. When I import pymel, all the different loggers in the modules I'm using suddenly get set to debug and start spewing out loads of things that I don't want to see. It looks to me like pymel is changing the default logging level in the logging module, but I'm not quite sure where. I've looked at the logging docs and I'm not sure how to set it back to what it was before, I can only see how to set the level on an individual logger.</p>
<p>Can anyone suggest how I can switch the default logging level?</p>
<pre><code>>>> import logging
>>> logging.getLogger().getEffectiveLevel()
30
>>> import pymel.core
>>> logging.getLogger().getEffectiveLevel()
0
</code></pre>
<p>I'd like to be able to set that default level back to 30 somehow, so all my loggers are back to how they were before when they inherit there level from the logging module's default. Apologies if I'm misunderstanding how the logging module works, I'm quite new to it.</p>
| 0 | 2016-10-10T16:40:59Z | 39,963,223 | <p>I don't know <code>pymel</code> specifically but it doesn't look like it's well-behaved with respect to logging. You could try:</p>
<pre><code>if __name__ == '__main__':
import logging, pymel.core
logging.getLogger().setLevel(logging.WARNING) # or whatever
logging.getLogger('pymel').setLevel(logging.WARNING)
</code></pre>
<p>However, this may not prevent all undesired messages from being output. You can tweak the above to adjust to what you find in practice.</p>
| 0 | 2016-10-10T16:59:10Z | [
"python",
"python-2.7",
"logging",
"maya",
"pymel"
] |
Set default logging level in python | 39,962,926 | <p>I am having trouble with a module (specifically pymel.core in maya) which seems to change the default logging level. When I import pymel, all the different loggers in the modules I'm using suddenly get set to debug and start spewing out loads of things that I don't want to see. It looks to me like pymel is changing the default logging level in the logging module, but I'm not quite sure where. I've looked at the logging docs and I'm not sure how to set it back to what it was before, I can only see how to set the level on an individual logger.</p>
<p>Can anyone suggest how I can switch the default logging level?</p>
<pre><code>>>> import logging
>>> logging.getLogger().getEffectiveLevel()
30
>>> import pymel.core
>>> logging.getLogger().getEffectiveLevel()
0
</code></pre>
<p>I'd like to be able to set that default level back to 30 somehow, so all my loggers are back to how they were before when they inherit there level from the logging module's default. Apologies if I'm misunderstanding how the logging module works, I'm quite new to it.</p>
| 0 | 2016-10-10T16:40:59Z | 39,968,463 | <p>There's a file called <code>pymel.conf</code> in the Maya install in the pymel subfolder of the <code>site-packages</code> directory. It controls the log settings for different parts of pymel using the same config file system in the default logging module.</p>
| 0 | 2016-10-10T23:52:33Z | [
"python",
"python-2.7",
"logging",
"maya",
"pymel"
] |
Reading multiple CSV files with headers in python with numpy? | 39,962,949 | <p>Currently I am using this code to read one complete csv file to my code:</p>
<pre><code>data = np.loadtxt('csv_Complete.csv', delimiter=',', skiprows=1)
</code></pre>
<p>However, Now I have multiple csv files that are in this format.</p>
<p>Log1.csv</p>
<pre><code>x1,x2,x3,x4....
1.5,3,5,7,8
2,5,1.2,5,2
1,3,3,5.5,6
</code></pre>
<p>log2.csv</p>
<pre><code> x1,x2,x3,x4....
1,3.3,5,7,8
2,5.1,1,5.5,2
1,3,3,5,6
</code></pre>
<p>This is the method I am thinking of doing but it is not working. Getting a
ValueError: could not convert string to float: </p>
<pre><code>log1 = np.loadtxt('log1.csv', delimiter=',', skiprows=1)
log2 = np.loadtxt('log2.csv', delimiter=',', skiprows=1)
log3 = np.loadtxt('log3.csv', delimiter=',', skiprows=1)
data = np.append([log1, log2, log3])
</code></pre>
<p>The error I am getting is:</p>
<pre><code> File "<ipython-input-6-6155c8de61ad>", line 1, in <module>
runfile('C:/Users/Mmyname/.spyder2-py3/setdataexp.py', wdir='C:/Users/myname/.spyder2-py3')
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/myname/.spyder2-py3/setdataexp.py", line 5, in <module>
log1 = np.loadtxt('log40a.csv', delimiter=',', skiprows=1)
File "C:\Anaconda3\lib\site-packages\numpy\lib\npyio.py", line 930, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "C:\Anaconda3\lib\site-packages\numpy\lib\npyio.py", line 930, in <listcomp>
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "C:\Anaconda3\lib\site-packages\numpy\lib\npyio.py", line 659, in floatconv
return float(x)
ValueError: could not convert string to float:
</code></pre>
| 1 | 2016-10-10T16:42:25Z | 39,963,524 | <p>It must be a <code>missing value</code> in file <code>log40a.csv</code>.</p>
<p>I have the same error for file like:</p>
<pre><code>x1,x2,x3,x4....
1,3.3,5,7,8
2,5.1,,5.5,2
1,3,3,5,6
</code></pre>
<p>Based on the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow">documentation</a>, if you have missing values you should use the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html#numpy.genfromtxt" rel="nofollow">genfromtxt function</a>.</p>
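<p>For example (a sketch; <code>vstack</code> also stacks the per-file arrays row-wise, and missing fields come back as <code>nan</code>):</p>
<pre><code>import numpy as np

log1 = np.genfromtxt('log1.csv', delimiter=',', skip_header=1)
log2 = np.genfromtxt('log2.csv', delimiter=',', skip_header=1)
log3 = np.genfromtxt('log3.csv', delimiter=',', skip_header=1)
data = np.vstack([log1, log2, log3])
</code></pre>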
| 1 | 2016-10-10T17:19:47Z | [
"python"
] |
Converting Nodejs signature hashing function to Python | 39,962,967 | <p>I'm trying to connect to an API that only has Nodejs doc, but I need to use Python.</p>
<p>The official doc states the hhtp request need to be signed like this and only gives this code:</p>
<pre><code>var pk = ".... your private key ....";
var data = JSON.strigify( {some JSON object} );
var signature = crypto.createSign('RSA-SHA256').update(data).sign(pk, 'base64');
</code></pre>
<p>So far, I am blocked there:</p>
<pre><code>from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from Crypto.Hash import SHA256
import base64
private_key_loc='D:\\api_key'
BODY={ some JSON }
data=json.dumps(BODY)
def sign_data(private_key_loc, data):
key = open(private_key_loc, "r").read()
rsakey = RSA.importKey(key,passphrase='c9dweQ393Ftg')
signer = PKCS1_v1_5.new(rsakey)
digest = SHA256.new()
digest.update(data.encode())
sign = signer.sign(digest)
return base64.b64encode(sign)
headers={}
headers['X-Signature']=sign_data(private_key_loc, data)
response = requests.post(url, headers=headers, data=BODY)
</code></pre>
<p>Esentially, I cannot make their code run on Nodejs; I don't know it and I get error because of the private key format. So I can't really compare what I do. Yet, I'm getting an forbidden message with the python.</p>
<p>Any idea?</p>
<p><strong>----------------------------------------------------------------------</strong></p>
<p><strong>EDIT 1</strong></p>
<p>Ok after two days, here is were I am:
1/ I managed to produce a valid signature with Nodejs using:</p>
<pre><code>const crypto = require('crypto');
const fs = require('fs');
var pk = fs.readFileSync('./id_rsa4.txt').toString();
let sampleRequest = {accessKey: 'TESTKEY',reportDate: '2016-09-27T14:25:54.386Z'};
var data = JSON.stringify(sampleRequest);
var signature = crypto.createSign('RSA-SHA256').update(data).sign(pk, 'base64');
</code></pre>
<p>2/ Impossible to reproduce in Python.... Even worse, the whatever i try the hash is always half the size of the one in Nodejs:</p>
<pre><code>import hmac
import hashlib
import base64
import json
private_key="""PRIVATE KEY SAME FORMAT BUT WITH LINE BREAKS LIKE \n"""
data=json.dumps({"accessKey": "TESTKEY","reportDate": "2016-09-27T14:25:54.386Z"})
dig = hmac.new(private_key, msg=data, digestmod=hashlib.sha256).digest()
print base64.b64encode(dig) #not valid
dig = hmac.new(private_key, msg=data, digestmod=hashlib.sha256).hexdigest()
print base64.b64encode(dig) #not valid either
</code></pre>
<p>This is SO frustrating, any more idea?</p>
| 1 | 2016-10-10T16:43:27Z | 39,963,293 | <p>You should be able to use the python standard library to generate the hashed signature. This will encode the data with the signed key. Depending on the server you may have to manually set header values in your request as well.</p>
<pre><code>import hmac
import hashlib
import base64
import json
private_key = '12345'
data = json.dumps({'foo': 'bar'})
dig = hmac.new(private_key, msg=data, digestmod=hashlib.sha256).digest()
b64data = base64.b64encode(dig)
request.post(url, data=b64data)
</code></pre>
| 2 | 2016-10-10T17:04:20Z | [
"python",
"node.js",
"hash",
"rsa",
"http-signature"
] |
How to import CSV file to django models | 39,962,977 | <p>I have Django models like this</p>
<pre><code>class Revo(models.Model):
SuiteName = models.CharField(max_length=255)
Test_Case = models.CharField(max_length=255)
FileName = models.CharField(max_length=255)
Total_Action = models.CharField(max_length=255)
Pass = models.CharField(max_length=255)
Fail = models.CharField(max_length=255)
Exe_Time = models.CharField(max_length=255)
Result = models.CharField(max_length=255)
create_date = models.DateTimeField(default=datetime.datetime.now)
class Meta:
verbose_name_plural = "Revo"
</code></pre>
<p>I have CSV file like this</p>
<pre><code>SuiteName,Test Case,FileName,Total Action,Pass,Fail,Exe Time,Result
DEMO_TEST_SUITE,Testcase 1,file1,82,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 2,file2,86,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 3,file3,820,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 4,file4,182,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 5,file5,102,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 6,file6,111,0,108,0:27:52,FAIL
</code></pre>
<p>How do I import this csv data into my django models? Also, is there any way to plot graphs out this data directly from database?</p>
| 1 | 2016-10-10T16:43:54Z | 39,967,865 | <p>You can use the built in <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">csv module</a> to turn your csv file into a <a href="https://docs.python.org/3/library/csv.html#csv.DictReader" rel="nofollow">dict like object</a>:</p>
<pre><code>import csv
with open('import.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
# The header row values become your keys
suite_name = row['SuiteName']
test_case = row['Test Case']
# etc....
        new_revo = Revo(SuiteName=suite_name, Test_Case=test_case,...)
new_revo.save()
</code></pre>
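<p>If the file is large, collecting the rows and inserting them in one query is noticeably faster (a sketch reusing the field names from the model):</p>
<pre><code>import csv

rows = []
with open('import.csv') as csvfile:
    for row in csv.DictReader(csvfile):
        rows.append(Revo(SuiteName=row['SuiteName'], Test_Case=row['Test Case'],
                         FileName=row['FileName'], Total_Action=row['Total Action'],
                         Pass=row['Pass'], Fail=row['Fail'],
                         Exe_Time=row['Exe Time'], Result=row['Result']))
Revo.objects.bulk_create(rows)
</code></pre>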
| 0 | 2016-10-10T22:44:59Z | [
"python",
"django",
"csv",
"graph"
] |
How to import CSV file to django models | 39,962,977 | <p>I have Django models like this</p>
<pre><code>class Revo(models.Model):
SuiteName = models.CharField(max_length=255)
Test_Case = models.CharField(max_length=255)
FileName = models.CharField(max_length=255)
Total_Action = models.CharField(max_length=255)
Pass = models.CharField(max_length=255)
Fail = models.CharField(max_length=255)
Exe_Time = models.CharField(max_length=255)
Result = models.CharField(max_length=255)
create_date = models.DateTimeField(default=datetime.datetime.now)
class Meta:
verbose_name_plural = "Revo"
</code></pre>
<p>I have CSV file like this</p>
<pre><code>SuiteName,Test Case,FileName,Total Action,Pass,Fail,Exe Time,Result
DEMO_TEST_SUITE,Testcase 1,file1,82,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 2,file2,86,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 3,file3,820,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 4,file4,182,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 5,file5,102,0,108,0:27:52,FAIL
DEMO_TEST_SUITE,Testcase 6,file6,111,0,108,0:27:52,FAIL
</code></pre>
<p>How do I import this csv data into my django models? Also, is there any way to plot graphs out this data directly from database?</p>
| 1 | 2016-10-10T16:43:54Z | 39,988,049 | <p><strong>Before you start</strong></p>
<p>The first thing to do is to normalize your database. The data in the CSV file isn't normalized - it need not be. However there is no reason for your table to reflect the structure of your CSV file.</p>
<p>For exaple <code>DEMO_TEST_SUITE</code> is repeated and there's no reason to store 'Testcase 1', 'Testcase 2' etc. It's far more efficient to store just the 1,2,.. in that column.</p>
<pre><code>class Revo(models.Model):
SuiteName = models.ForeignKey(Suite)
Test_Case = models.IntegerField()
FileName = models.CharField(max_length=255)
Total_Action = models.IntegerField()
Pass = models.CharField(max_length=255)
Fail = models.CharField(max_length=255)
Exe_Time = models.CharField(max_length=255)
Result = models.IntegerField()
create_date = models.DateTimeField(default=datetime.datetime.now)
class Meta:
verbose_name_plural = "Revo"
</code></pre>
<p>Some of the other fields should really be integers as well instead of varchar(255). But before you make the changes, create a table with the same structure as your current one; it's going to be useful too!</p>
<p><strong>Doing the import.</strong></p>
<p>Creating a CSV reader, reading each line and creating records one by one is probably the last thing you want to do. This will be slow. Very slow if you have even a hundred thousand records.</p>
<p>What you really ought to do is to use <a class='doc-link' href="http://stackoverflow.com/documentation/mysql/2356/load-data-infile#t=201610112255043619058">LOAD DATA INFILE</a>; if you are using PostgreSQL, you ought to be using <a href="https://www.postgresql.org/docs/9.5/static/sql-copy.html" rel="nofollow">COPY</a>, and in the case of SQLite, the <a href="https://www.sqlite.org/cvstrac/wiki?p=ImportingFiles" rel="nofollow">.import</a> command. These are built-in mechanisms for importing CSV data that operate very fast and can handle duplicates as well.</p>
<p>Import into the 'old' table, create the foreign keys, and then SELECT INTO the proper table represented by your normalized model.</p>
| 0 | 2016-10-11T22:59:18Z | [
"python",
"django",
"csv",
"graph"
] |
django-orm : How can I insert element to a table of a particular database | 39,963,068 | <p>I have 2 different database in setting.py.
To do select operation I use following statement and is working fine:</p>
<pre><code>all_data = Bugs.objects.using('database_one').filter(reporter=user_id, bug_status='resolved', resolution__in=all_resolutions)[:2]
</code></pre>
<p>But how can I pass the database value to insert an entry in the table of same database.</p>
<p>I tried this but this doesn't seems to be working:</p>
<pre><code>row_to_be_added = TableName(pr=pr, case=case, comments=comments).using('bugzilla').save()
</code></pre>
<p>Can anyone please help me out here.</p>
| 0 | 2016-10-10T16:49:10Z | 39,971,303 | <p>From <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#selecting-a-database-for-save" rel="nofollow">docs</a>:</p>
<pre><code>row_to_be_added = TableName(pr=pr, case=case, comments=comments).save(using='bugzilla')
</code></pre>
| 0 | 2016-10-11T06:07:26Z | [
"python",
"mysql",
"django",
"django-orm"
] |
Why jpeg image becomes 2D array after being loaded | 39,963,151 | <p>I have a jpeg image as follows:</p>
<p><a href="http://i.stack.imgur.com/b0Dma.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/b0Dma.jpg" alt="enter image description here"></a>
<br>Now I want to load this image to do image processing. I use the following code:</p>
<pre><code>from scipy import misc
import numpy as np
im = misc.imread('logo.jpg')
</code></pre>
<p>Because the image is a coloured one, I would expect <code>im</code> is a 3D matrix. However, <code>im.shape</code> gives me a 2D matrix:
<code>(150, 150)</code></p>
<p>I tried another way of loading image as follows:</p>
<pre><code>from PIL import Image
jpgfile = Image.open("logo.jpg")
</code></pre>
<p>But <code>jpgfile</code> also has the size of <code>150x150</code>.
My question is: What's wrong with my code, or my understanding about RGB image is wrong?
Thank you very much.</p>
| 1 | 2016-10-10T16:54:25Z | 39,963,287 | <p>From the docs here: <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imread.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imread.html</a>, specify <code>mode='RGB'</code> to get the red, green, blue values. The output appears to default to conversion to a grayscale number.</p>
| -1 | 2016-10-10T17:03:47Z | [
"python",
"image",
"image-processing"
] |
Approximating a sine function with tflearn | 39,963,178 | <p>I am attempting a ridiculously simplistic approximation of a sine function using tflearn, inspired by <a href="http://mourafiq.com/2016/05/15/predicting-sequences-using-rnn-in-tensorflow.html" rel="nofollow">this</a> paper. </p>
<pre><code>import tflearn
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# generate cosine function
x = np.linspace(-np.pi,np.pi,10000)
y = np.sin(x)
# Network building
net = tflearn.input_data(shape=[10,10000])
net = tflearn.fully_connected(net, 1000)
net = tflearn.layers.core.activation (net, activation='relu')
net = tflearn.regression(net)
# Define model
model = tflearn.DNN(net)
# Start training (apply gradient descent algorithm)
model.fit(x, y,batch_size=10)
</code></pre>
<p>But I keep running into a </p>
<blockquote>
<p>ValueError: Cannot feed value of shape (10,) for Tensor u'InputData/X:0', which has shape '(?, 10, 10000)'</p>
</blockquote>
<p>error. </p>
<p>Any ideas on where I am going wrong?</p>
<p>Thank you!</p>
| 0 | 2016-10-10T16:56:40Z | 39,977,259 | <p><strong>UPDATE</strong>: I was not assigning a shape to the <code>x = np.linspace(-np.pi,np.pi,10000)</code> tensor:</p>
<p>Solved (@lejlot) by changing the line to <code>np.linspace(-np.pi,np.pi,10000).reshape(-1, 1)</code></p>
<p>In the line <code>input_data(shape=[10,10000])</code> the shape of each input tensor is actually [None,1] and so changing this line to net = tflearn.input_data(shape=[None,1]) solved the issue in the end.</p>
| 0 | 2016-10-11T12:28:27Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] |
h2o.exceptions.H2OResponseError: Server error water.exceptions.H2OKeyNotFoundArgumentException | 39,963,181 | <p>I get this error when I run the code below.</p>
<pre><code>import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator as GBM
from sklearn import datasets
import numpy as np
import pandas as pd
h2o.init(ip='192.168.0.4',port=54321)
# writing data to CSV so that h2o can read it
digits = datasets.load_digits()
predictors = digits.data[:-1]
targets = digits.target[:-1]
record_count = targets.shape[0]
targets = targets.reshape([record_count,1])
data = predictors
data = np.concatenate((data, targets), axis=1)
write_df = pd.DataFrame(data).to_csv(path_or_buf='data.csv',index=False)
model = GBM(ntrees=3,distribution='multinomial',max_depth=3)
everything = h2o.import_file(path='data.csv')
everything[64] = everything[64].asfactor()
model.start(training_frame=everything,x=list(range(64)),y=64,validation_frame=everything)
# model seems to be None for some reason
predictions = model.predict(everything)
</code></pre>
<p>The specific error is:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ryanzotti/anaconda/lib/python3.4/site-packages/h2o/model/model_base.py", line 148, in predict
j = H2OJob(h2o.api("POST /4/Predictions/models/%s/frames/%s" % (self.model_id, test_data.frame_id)),
File "/Users/ryanzotti/anaconda/lib/python3.4/site-packages/h2o/h2o.py", line 83, in api
return h2oconn.request(endpoint, data=data, json=json, filename=filename, save_to=save_to)
File "/Users/ryanzotti/anaconda/lib/python3.4/site-packages/h2o/backend/connection.py", line 259, in request
return self._process_response(resp, save_to)
File "/Users/ryanzotti/anaconda/lib/python3.4/site-packages/h2o/backend/connection.py", line 586, in _process_response
raise H2OResponseError(data)
h2o.exceptions.H2OResponseError: Server error water.exceptions.H2OKeyNotFoundArgumentException:
Error: Object 'None' not found in function: predict for argument: model
Request: POST /4/Predictions/models/None/frames/py_1_sid_a5e2
</code></pre>
<p>There are no other errors prior to this one.</p>
<p><strong>H2O Version:</strong> 3.11.0.3645</p>
<p><strong>Python Version:</strong> 3.4.4</p>
 | 1 | 2016-10-10T16:56:45Z | 39,967,391 | <p>Change <code>model.start</code> to <code>model.train</code> (3rd line from the bottom), and it should work.</p>
<p>The documentation for the <code>model.start()</code> method says "Train the model asynchronously". This means the model is trained in the background and is not available right away for the prediction call.</p>
<p>The <code>model.train()</code> method on the other hand waits until the training is completed before continuing. </p>
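<p>In the script from the question that means replacing the <code>model.start(...)</code> call with the same arguments passed to <code>train</code>:</p>
<pre><code># train() blocks until the model is built, so predict() sees a fitted model
model.train(x=list(range(64)), y=64,
            training_frame=everything, validation_frame=everything)

predictions = model.predict(everything)
</code></pre>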
| 1 | 2016-10-10T21:59:06Z | [
"python",
"h2o"
] |
LFSR code is giving wrong result | 39,963,222 | <p>I have code for an LFSR but I am getting wrong results: the first 8 bits should be 01110010, but I'm getting 0101111001.</p>
<p>I'm talking about a Galois LFSR: en.wikipedia.org/wiki/Linear-feedback_shift_register </p>
<p>Can anyone see what the problem is with this code?</p>
<pre><code>def lfsr(seed, taps):
for i in range(10):
nxt = sum([ seed[x] for x in taps]) % 2
yield nxt
seed = ([nxt] + seed)[:max(taps)+1]
for x in lfsr([0,0,1,1,1,0,0,1],[6,5,1]) :
print x
</code></pre>
| -1 | 2016-10-10T16:59:06Z | 39,965,500 | <p>My answer to the question posted, "Can anyone see what the problem is with this code?", is no. The code is operational, implementing an LFSR (of the type frequently used to do pseudorandom signals in hardware, and the basis for popular CRC functions). I'm left to guess at why you think it isn't. </p>
<p>An LFSR of this type can be visualised as a shift register with taps:</p>
<pre><code>pos 0 1 2 3 4 5 6 7
reg 0 0 1 1 1 0 0 1
^- + + +
</code></pre>
<p>Each iteration, one value is calculated from the taps and inserted on one end, shifting the other values. In this case, the new bit becomes LSB. So let's run this LFSR a few cycles:</p>
<pre><code>taps + + +
pos 0 1 2 3 4 5 6 7
reg 0 0 1 1 1 0 0 1
c1 0 0 0 1 1 1 0 0
c2 1 0 0 0 1 1 1 0
c3 0 1 0 0 0 1 1 1
c4 1 0 1 0 0 0 1 1
c5 1 1 0 1 0 0 0 1
c6 1 1 1 0 1 0 0 0
c7 1 1 1 1 0 1 0 0
c8 0 1 1 1 1 0 1 0
</code></pre>
<p>Note that we read the output bits yielded in column 0, from c1 down. Incidentally, position 7 doesn't need to exist, because there are no taps that far back; the slice in the code removes such columns. </p>
<p>I've managed to reproduce the value you say you're getting by reversing the inputs and output of eight cycles. Can you explain how you arrive at the value you say it should be? </p>
<p>One way I can imagine arriving at a similar value is by shifting the other way and observing the shift register's state after one cycle. This requires maintaining its width past active taps (not unusual in CRC use).</p>
<pre><code>taps + + + -v
pos 0 1 2 3 4 5 6 7
reg 0 0 1 1 1 0 0 1
c1 0 1 1 1 0 0 1 0
c2 1 1 1 0 0 1 0 0
c3 1 1 0 0 1 0 0 0
c4 1 0 0 1 0 0 0 1
</code></pre>
<p>But even so the output is 0001010111 (this time read in column 7). </p>
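<p>As a quick check, collecting ten outputs from the generator in the question gives exactly the bits read down column 0 of the first table, i.e. the value you reported:</p>
<pre><code>bits = ''.join(str(b) for b in lfsr([0, 0, 1, 1, 1, 0, 0, 1], [6, 5, 1]))
print bits   # prints 0101111001
</code></pre>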
| 1 | 2016-10-10T19:35:55Z | [
"python",
"lfsr"
] |
Discord API, get_member(user_id) errors | 39,963,246 | <p>I have this code and i am really curious about why it is not working. The problem is in <code>k = discord.Server.get_member(j)</code> it says</p>
<blockquote>
<p>"TypeError: get_member() missing 1 required positional argument: 'user_id'".</p>
</blockquote>
<p>This code uses <a href="https://pypi.python.org/pypi/discord.py" rel="nofollow">discord.py</a>:</p>
<pre><code>@client.event
async def on_message(message):
if message.content.startswith('/sendpm'):
print(message.author)
j = message.content.replace('/sendpm ', '')
print(j)
j = j.replace('@', '')
j = j.replace('<', '')
j = j.replace('>', '')
j = j.replace('!', '')
print(l)
k = discord.Server.get_member(j) #problem is here
await client.send_message(await client.start_private_message(k), spammsg)
await client.send_message(message.channel, 'sent' + message.author.mention)
</code></pre>
| 1 | 2016-10-10T17:01:18Z | 39,964,884 | <p>This code is accessing a method of <code>discord.Server</code> as if it was a static method:</p>
<pre><code>k = discord.Server.get_member(j)
</code></pre>
<p>Function <a href="https://github.com/Rapptz/discord.py/blob/v0.13.0/discord/server.py#L132" rel="nofollow"><code>get_member</code> is defined as</a>:</p>
<pre><code>def get_member(self, user_id):
</code></pre>
<p>It accepts <code>self</code> as the first argument because it is meant to be called on an instance, e.g.:</p>
<pre><code>server = discord.Server()
server.get_member(user_id)
</code></pre>
<p>Whether that is the correct way to get a <code>Server</code> instance I do not know. <a href="https://github.com/Rapptz/discord.py/blob/master/examples/new_member.py" rel="nofollow">This example</a> seems to have a different way to get to a server instance:</p>
<blockquote>
<pre><code>@client.event
async def on_member_join(member):
server = member.server
fmt = 'Welcome {0.mention} to {1.name}!'
await client.send_message(server, fmt.format(member, server))
</code></pre>
</blockquote>
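<p>In the <code>on_message</code> handler from the question, a <code>Server</code> instance is already reachable through the message, so something along these lines (an untested sketch) should give you the member:</p>
<pre><code>k = message.server.get_member(j)  # message.server is the Server the message was sent in
</code></pre>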
| 0 | 2016-10-10T18:50:36Z | [
"python",
"python-3.x"
] |
Change _FillValue in netCDF file | 39,963,309 | <p>Is there a python netCDF4 command/example to change the global metadata _FillValue in a netCDF file? I have tried replacing all negative values in the netCDF file, but until the _FillValue attribute is set accordingly, that does not work.</p>
| 0 | 2016-10-10T17:05:10Z | 39,965,512 | <p>I don't believe python netCDF4 has a specific function for this, but <a href="http://nco.sourceforge.net/nco.html#ncatted-netCDF-Attribute-Editor" rel="nofollow">NCO's ncatted</a> is an ideal tool for this task. </p>
<p>From the docs:</p>
<p>To change the missing value from the IEEE NaN value to a normal IEEE number, like 1.0e36:</p>
<pre><code>ncatted -a _FillValue,,m,f,1.0e36 in.nc
</code></pre>
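<p>If you want to drive that from a Python script, a minimal sketch (assuming <code>ncatted</code> is on your PATH and the file is called <code>in.nc</code>):</p>
<pre><code>import subprocess

# rewrite the _FillValue attribute of every variable in place
subprocess.check_call(["ncatted", "-a", "_FillValue,,m,f,1.0e36", "in.nc"])
</code></pre>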
| 1 | 2016-10-10T19:36:56Z | [
"python",
"netcdf",
"netcdf4"
] |
Python to iterate through sheets and drop columns | 39,963,349 | <p>I need to read one excel file and perform some computations on each sheet. Basically, it needs to drop rows if the column date is not "today".</p>
<p>Here is the code I have so far:</p>
<pre><code>import datetime
import pandas as pd

'''
Parsing main excel sheet to save transactions != today's date
'''
mainSource = pd.ExcelFile('path/to/file.xlsx')
dfs = {sheet_name: mainSource.parse(sheet_name)
for sheet_name in mainSource.sheet_names }
for i in dfs:
now = datetime.date.today();
dfs = dfs.drop(dfs.columns[6].dt.year != now, axis = 1); # It is the 6th column
if datetime.time()<datetime.time(11,0,0,0):
dfs.to_excel(r'path\to\outpt\test\'+str(i)+now+'H12.xlsx', index=False); #Save as sheetname+timestamp+textstring
else:
dfs.to_excel(r'path\to\output\'+str(i)+now+'H16.xlsx', index=False)
</code></pre>
<p>When running the script, I am getting the following error:</p>
<pre><code>dfs = dfs.drop(...):
AttributeError: 'dict' object has no attribute 'drop'
</code></pre>
<p>Any suggestions?</p>
<p>Thanks!</p>
 | 1 | 2016-10-10T17:07:51Z | 39,963,401 | <p>I think you need to replace <code>i</code> with <code>dfs[i]</code>, because <code>dfs</code> is a dict of <code>DataFrames</code>: </p>
<pre><code>df1 = pd.DataFrame({'A':[1,2,3],
'B':[4,5,6],
'C':['10-05-2011','10-05-2012','10-10-2016']})
df1.C = pd.to_datetime(df1.C)
print (df1)
A B C
0 1 4 2011-10-05
1 2 5 2012-10-05
2 3 6 2016-10-10
df2 = pd.DataFrame({'A':[3,5,7],
'B':[9,3,4],
'C':['08-05-2013','08-05-2012','10-10-2016']})
df2.C = pd.to_datetime(df2.C)
print (df2)
A B C
0 3 9 2013-08-05
1 5 3 2012-08-05
2 7 4 2016-10-10
names = ['a','b']
dfs = {names[i]:x for i, x in enumerate([df1,df2])}
print (dfs)
{'a': A B C
0 1 4 2011-10-05
1 2 5 2012-10-05
2 3 6 2016-10-10, 'b': A B C
0 3 9 2013-08-05
1 5 3 2012-08-05
2 7 4 2016-10-10}
</code></pre>
<p>Remove all rows by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>for i in dfs:
now = pd.datetime.today().date();
print (now)
#select 3.column, in real data replace to 5
mask = dfs[i].iloc[:,2].dt.date == now
print (mask)
df = dfs[i][mask]
print (df)
2016-10-10
0 False
1 False
2 True
Name: C, dtype: bool
A B C
2 3 6 2016-10-10
2016-10-10
0 False
1 False
2 True
Name: C, dtype: bool
A B C
2 7 4 2016-10-10
    # compare the current time of day, and build the file name from strings
    if datetime.datetime.now().time() < datetime.time(11, 0):
       df.to_excel('path/to/output/test/' + str(i) + str(now) + 'H12.xlsx', index=False)
    else:
       df.to_excel('path/to/output/' + str(i) + str(now) + 'H16.xlsx', index=False)
</code></pre>
| 1 | 2016-10-10T17:11:24Z | [
"python",
"excel",
"pandas",
"text-parsing"
] |
Speed up creation of numpy array from list | 39,963,357 | <p>I have a 33620x160 <code>pandas</code> <code>DataFrame</code> which has one column that contains lists of numbers. Each list entry in the <code>DataFrame</code> contains 30 elements.</p>
<pre><code>df['dlrs_col']
0 [0.048142470608688, 0.047021138711858, 0.04573...
1 [0.048142470608688, 0.047021138711858, 0.04573...
2 [0.048142470608688, 0.047021138711858, 0.04573...
3 [0.048142470608688, 0.047021138711858, 0.04573...
4 [0.048142470608688, 0.047021138711858, 0.04573...
5 [0.048142470608688, 0.047021138711858, 0.04573...
6 [0.048142470608688, 0.047021138711858, 0.04573...
7 [0.048142470608688, 0.047021138711858, 0.04573...
8 [0.048142470608688, 0.047021138711858, 0.04573...
9 [0.048142470608688, 0.047021138711858, 0.04573...
10 [0.048142470608688, 0.047021138711858, 0.04573...
</code></pre>
<p>I'm creating a 33620x30 array whose entries are the unlisted values from that single <code>DataFrame</code> column. I'm currently doing this as:</p>
<pre><code>np.array(df['dlrs_col'].tolist(), dtype = 'float64')
</code></pre>
<p>This works just fine, but it takes a significant amount of time, especially when considering I do a similar calculation for 6 additional columns of lists. Any ideas on how I can speed this up?</p>
| 2 | 2016-10-10T17:08:32Z | 39,964,483 | <p>you can do it this way:</p>
<pre><code>In [140]: df
Out[140]:
dlrs_col
0 [0.048142470608688, 0.047021138711858, 0.04573]
1 [0.048142470608688, 0.047021138711858, 0.04573]
2 [0.048142470608688, 0.047021138711858, 0.04573]
3 [0.048142470608688, 0.047021138711858, 0.04573]
4 [0.048142470608688, 0.047021138711858, 0.04573]
5 [0.048142470608688, 0.047021138711858, 0.04573]
6 [0.048142470608688, 0.047021138711858, 0.04573]
7 [0.048142470608688, 0.047021138711858, 0.04573]
8 [0.048142470608688, 0.047021138711858, 0.04573]
9 [0.048142470608688, 0.047021138711858, 0.04573]
In [141]: df.dlrs_col.apply(pd.Series)
Out[141]:
0 1 2
0 0.048142 0.047021 0.04573
1 0.048142 0.047021 0.04573
2 0.048142 0.047021 0.04573
3 0.048142 0.047021 0.04573
4 0.048142 0.047021 0.04573
5 0.048142 0.047021 0.04573
6 0.048142 0.047021 0.04573
7 0.048142 0.047021 0.04573
8 0.048142 0.047021 0.04573
9 0.048142 0.047021 0.04573
In [142]: df.dlrs_col.apply(pd.Series).values
Out[142]:
array([[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ],
[ 0.04814247, 0.04702114, 0.04573 ]])
</code></pre>
| 1 | 2016-10-10T18:25:01Z | [
"python",
"arrays",
"performance",
"pandas",
"numpy"
] |
Speed up creation of numpy array from list | 39,963,357 | <p>I have a 33620x160 <code>pandas</code> <code>DataFrame</code> which has one column that contains lists of numbers. Each list entry in the <code>DataFrame</code> contains 30 elements.</p>
<pre><code>df['dlrs_col']
0 [0.048142470608688, 0.047021138711858, 0.04573...
1 [0.048142470608688, 0.047021138711858, 0.04573...
2 [0.048142470608688, 0.047021138711858, 0.04573...
3 [0.048142470608688, 0.047021138711858, 0.04573...
4 [0.048142470608688, 0.047021138711858, 0.04573...
5 [0.048142470608688, 0.047021138711858, 0.04573...
6 [0.048142470608688, 0.047021138711858, 0.04573...
7 [0.048142470608688, 0.047021138711858, 0.04573...
8 [0.048142470608688, 0.047021138711858, 0.04573...
9 [0.048142470608688, 0.047021138711858, 0.04573...
10 [0.048142470608688, 0.047021138711858, 0.04573...
</code></pre>
<p>I'm creating a 33620x30 array whose entries are the unlisted values from that single <code>DataFrame</code> column. I'm currently doing this as:</p>
<pre><code>np.array(df['dlrs_col'].tolist(), dtype = 'float64')
</code></pre>
<p>This works just fine, but it takes a significant amount of time, especially when considering I do a similar calculation for 6 additional columns of lists. Any ideas on how I can speed this up?</p>
 | 2 | 2016-10-10T17:08:32Z | 39,965,177 | <p>You can first pull the column out as a <code>numpy</code> array of lists via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow"><code>values</code></a>, and then call <code>tolist()</code> on that:</p>
<pre><code>df = pd.DataFrame({'dlrs_col':[
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573],
[0.048142470608688, 0.047021138711858, 0.04573]]})
print (df)
dlrs_col
0 [0.048142470608688, 0.047021138711858, 0.04573]
1 [0.048142470608688, 0.047021138711858, 0.04573]
2 [0.048142470608688, 0.047021138711858, 0.04573]
3 [0.048142470608688, 0.047021138711858, 0.04573]
4 [0.048142470608688, 0.047021138711858, 0.04573]
5 [0.048142470608688, 0.047021138711858, 0.04573]
6 [0.048142470608688, 0.047021138711858, 0.04573]
7 [0.048142470608688, 0.047021138711858, 0.04573]
print (np.array(df['dlrs_col'].values.tolist(), dtype = 'float64'))
[[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]
[ 0.04814247 0.04702114 0.04573 ]]
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [56]: %timeit (np.array(df['dlrs_col'].values.tolist(), dtype = 'float64'))
The slowest run took 9.76 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 14.1 µs per loop
In [57]: %timeit (np.array(df['dlrs_col'].tolist(), dtype = 'float64'))
The slowest run took 9.33 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 28.4 µs per loop
</code></pre>
| 0 | 2016-10-10T19:12:00Z | [
"python",
"arrays",
"performance",
"pandas",
"numpy"
] |
need user to be able to input up to three letters at a time for python turtle to draw | 39,963,364 | <p>I have the code all worked out to be able to input one letter at a time, but for some reason I can't figure out how to make it so the user can input up to three letters for turtle to draw. This is my code so far. Any help would be appreciated, thank you in advance.</p>
<pre><code>import turtle
velcro = turtle.Turtle()
wn = turtle.Screen()
wn.bgcolor('pink')
velcro.color("purple", "blue")
velcro.color()
('purple', 'blue')
velcro.pensize("12")
def color_purple():
velcro.color ('purple')
def color_blue():
velcro.color('blue')
def color_black():
velcro.color ('black')
velcroColor = turtle.textinput("pick a color", "please chose from the colors purple, blue or black, to draw in")
if (velcroColor == 'purple'):
color_purple()
elif (velcroColor == 'blue'):
color_blue()
elif (velcroColor == 'black'):
color_black()
def letter_A():
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(180)
velcro.forward(75)
velcro.left(90)
velcro.forward(150)
def letter_B():
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(80)
velcro.right(180)
velcro.forward(95)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(93)
def letter_C():
velcro.right(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.right(90)
velcro.forward(200)
velcro.right(90)
velcro.forward(150)
def letter_D():
velcro.left(90)
velcro.forward(200)
velcro.right(98)
velcro.forward(180)
velcro.right(85)
velcro.forward(200)
velcro.right(96)
velcro.forward(172)
def letter_E():
velcro.right(90)
velcro.forward(200)
velcro.left(90)
velcro.forward(155)
velcro.left(180)
velcro.forward(155)
velcro.right(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(155)
velcro.left(180)
velcro.forward(155)
velcro.left(90)
velcro.forward(125)
velcro.left(90)
velcro.forward(150)
def letter_F():
velcro.right(90)
velcro.forward(200)
velcro.right(180)
velcro.forward(250)
velcro.right(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.left(90)
velcro.forward(95)
velcro.left(90)
velcro.forward(125)
def letter_G():
velcro.left(90)
velcro.forward(200)
velcro.right(90)
velcro.forward(155)
velcro.right(180)
velcro.forward(155)
velcro.left(90)
velcro.forward(225)
velcro.left(90)
velcro.forward(175)
velcro.left(90)
velcro.forward(80)
velcro.left(90)
velcro.forward(80)
def letter_H():
velcro.forward(150)
velcro.right(90)
velcro.forward(175)
velcro.right(180)
velcro.forward(300)
velcro.left(180)
velcro.forward(125)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(125)
velcro.right(180)
velcro.forward(300)
def letter_I():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(100)
velcro.right(180)
velcro.forward(200)
velcro.right(180)
velcro.forward(100)
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(180)
velcro.forward(200)
def letter_J():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(100)
velcro.left(180)
velcro.forward(200)
velcro.left(180)
velcro.forward(100)
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(95)
def letter_K():
velcro.left(90)
velcro.forward(200)
velcro.left(180)
velcro.forward(100)
velcro.left(145)
velcro.forward(100)
velcro.left(180)
velcro.forward(100)
velcro.left(85)
velcro.forward(150)
def letter_L():
velcro.left(90)
velcro.forward(250)
velcro.left(180)
velcro.forward(300)
velcro.left(90)
velcro.forward(200)
def letter_M():
velcro.left(90)
velcro.forward(250)
velcro.right(140)
velcro.forward(200)
velcro.left(100)
velcro.forward(200)
velcro.right(140)
velcro.forward(250)
def letter_N():
velcro.left(90)
velcro.forward(200)
velcro.right(140)
velcro.forward(250)
velcro.left(140)
velcro.forward(200)
def letter_O():
velcro.left(180)
velcro.forward(100)
velcro.right(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(250)
def letter_P():
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(95)
velcro.right(90)
velcro.forward(95)
velcro.right(90)
velcro.forward(95)
def letter_Q():
velcro.left(90)
velcro.forward(250)
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(250)
velcro.left(90)
velcro.forward(150)
velcro.left(140)
velcro.forward(50)
velcro.left(180)
velcro.forward(100)
def letter_R():
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(115)
velcro.right(90)
velcro.forward(100)
velcro.left(135)
velcro.forward(180)
def letter_S():
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
def letter_T():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(50)
velcro.right(180)
velcro.forward(100)
def letter_U():
velcro.left(90)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(200)
def letter_V():
velcro.left(115)
velcro.forward(250)
velcro.left(180)
velcro.forward(250)
velcro.left(120)
velcro.forward(250)
def letter_W():
velcro.left(270)
velcro.forward(275)
velcro.right(140)
velcro.forward(200)
velcro.left(100)
velcro.forward(200)
velcro.right(140)
velcro.forward(275)
def letter_X():
velcro.left(140)
velcro.forward(300)
velcro.left(180)
velcro.forward(150)
velcro.right(270)
velcro.forward(150)
velcro.right(180)
velcro.forward(300)
def letter_Y():
velcro.left(140)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.right(275)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.left(45)
velcro.forward(250)
def letter_Z():
velcro.forward(225)
velcro.left(140)
velcro.forward(300)
velcro.right(140)
velcro.forward(225)
velcroLetter = turtle.textinput("Turtle alphabet", "please enter a capital letter from A-Z to draw:")
if (velcroLetter == 'a' or velcroLetter == 'A'):
letter_A()
elif (velcroLetter == 'b' or velcroLetter == 'B'):
letter_B()
elif (velcroLetter == 'c' or velcroLetter == 'C'):
letter_C()
elif (velcroLetter == 'd' or velcroLetter == 'D'):
letter_D()
elif (velcroLetter == 'e' or velcroLetter == 'E'):
letter_E()
elif (velcroLetter == 'f' or velcroLetter == 'F'):
letter_F()
elif (velcroLetter == 'g' or velcroLetter == 'G'):
letter_G()
elif (velcroLetter == 'h' or velcroLetter == 'H'):
letter_H()
elif (velcroLetter == 'i' or velcroLetter == 'I'):
letter_I()
elif (velcroLetter == 'j' or velcroLetter == 'J'):
letter_J()
elif (velcroLetter == 'k' or velcroLetter == 'K'):
letter_K()
elif (velcroLetter == 'l' or velcroLetter == 'L'):
letter_L()
elif (velcroLetter == 'm' or velcroLetter == 'M'):
letter_M()
elif (velcroLetter == 'n' or velcroLetter == 'N'):
letter_N()
elif (velcroLetter == 'o' or velcroLetter == 'O'):
letter_O()
elif (velcroLetter == 'p' or velcroLetter == 'P'):
letter_P()
elif (velcroLetter == 'q' or velcroLetter == 'Q'):
letter_Q()
elif (velcroLetter == 'r' or velcroLetter == 'R'):
letter_R()
elif (velcroLetter == 's' or velcroLetter == 'S'):
letter_S()
elif (velcroLetter == 't' or velcroLetter == 'T'):
letter_T()
elif (velcroLetter == 'u' or velcroLetter == 'U'):
letter_U()
elif (velcroLetter == 'v' or velcroLetter == 'V'):
letter_V()
elif (velcroLetter == 'w' or velcroLetter == 'W'):
letter_W()
elif (velcroLetter == 'x' or velcroLetter == 'X'):
letter_X()
elif (velcroLetter == 'y' or velcroLetter == 'Y'):
letter_Y()
elif (velcroLetter == 'z' or velcroLetter == 'Z'):
letter_Z()
turtle.mainloop()
</code></pre>
 | 3 | 2016-10-10T17:09:00Z | 39,964,056 | <p>Maybe it's because you're typing multiple letters at once. Your program then checks whether e.g. "ABC" matches "A", which it never does. Try entering single letters.</p>
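<p>If you do want to accept several letters in one go, you would instead loop over the characters of the input and dispatch them one at a time, roughly like this (a sketch reusing the functions from the question):</p>
<pre><code>velcroLetters = turtle.textinput("Turtle alphabet", "please enter up to three letters to draw:")
for velcroLetter in velcroLetters[:3]:
    if (velcroLetter == 'a' or velcroLetter == 'A'):
        letter_A()
    elif (velcroLetter == 'b' or velcroLetter == 'B'):
        letter_B()
    # ... and so on for the remaining letters, exactly as in your existing if/elif chain
</code></pre>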
| -1 | 2016-10-10T17:55:39Z | [
"python",
"turtle-graphics"
] |
need user to be able to input up to three letters at a time for python turtle to draw | 39,963,364 | <p>I have the code all worked out to be able to input one letter at a time, but for some reason I can't figure out how to make it so the user can input up to three letters for turtle to draw. This is my code so far. Any help would be appreciated, thank you in advance.</p>
<pre><code>import turtle
velcro = turtle.Turtle()
wn = turtle.Screen()
wn.bgcolor('pink')
velcro.color("purple", "blue")
velcro.color()
('purple', 'blue')
velcro.pensize("12")
def color_purple():
velcro.color ('purple')
def color_blue():
velcro.color('blue')
def color_black():
velcro.color ('black')
velcroColor = turtle.textinput("pick a color", "please chose from the colors purple, blue or black, to draw in")
if (velcroColor == 'purple'):
color_purple()
elif (velcroColor == 'blue'):
color_blue()
elif (velcroColor == 'black'):
color_black()
def letter_A():
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(180)
velcro.forward(75)
velcro.left(90)
velcro.forward(150)
def letter_B():
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(80)
velcro.right(180)
velcro.forward(95)
velcro.right(90)
velcro.forward(80)
velcro.right(90)
velcro.forward(93)
def letter_C():
velcro.right(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.right(90)
velcro.forward(200)
velcro.right(90)
velcro.forward(150)
def letter_D():
velcro.left(90)
velcro.forward(200)
velcro.right(98)
velcro.forward(180)
velcro.right(85)
velcro.forward(200)
velcro.right(96)
velcro.forward(172)
def letter_E():
velcro.right(90)
velcro.forward(200)
velcro.left(90)
velcro.forward(155)
velcro.left(180)
velcro.forward(155)
velcro.right(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(155)
velcro.left(180)
velcro.forward(155)
velcro.left(90)
velcro.forward(125)
velcro.left(90)
velcro.forward(150)
def letter_F():
velcro.right(90)
velcro.forward(200)
velcro.right(180)
velcro.forward(250)
velcro.right(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.left(90)
velcro.forward(95)
velcro.left(90)
velcro.forward(125)
def letter_G():
velcro.left(90)
velcro.forward(200)
velcro.right(90)
velcro.forward(155)
velcro.right(180)
velcro.forward(155)
velcro.left(90)
velcro.forward(225)
velcro.left(90)
velcro.forward(175)
velcro.left(90)
velcro.forward(80)
velcro.left(90)
velcro.forward(80)
def letter_H():
velcro.forward(150)
velcro.right(90)
velcro.forward(175)
velcro.right(180)
velcro.forward(300)
velcro.left(180)
velcro.forward(125)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(125)
velcro.right(180)
velcro.forward(300)
def letter_I():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(100)
velcro.right(180)
velcro.forward(200)
velcro.right(180)
velcro.forward(100)
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(180)
velcro.forward(200)
def letter_J():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(100)
velcro.left(180)
velcro.forward(200)
velcro.left(180)
velcro.forward(100)
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(95)
def letter_K():
velcro.left(90)
velcro.forward(200)
velcro.left(180)
velcro.forward(100)
velcro.left(145)
velcro.forward(100)
velcro.left(180)
velcro.forward(100)
velcro.left(85)
velcro.forward(150)
def letter_L():
velcro.left(90)
velcro.forward(250)
velcro.left(180)
velcro.forward(300)
velcro.left(90)
velcro.forward(200)
def letter_M():
velcro.left(90)
velcro.forward(250)
velcro.right(140)
velcro.forward(200)
velcro.left(100)
velcro.forward(200)
velcro.right(140)
velcro.forward(250)
def letter_N():
velcro.left(90)
velcro.forward(200)
velcro.right(140)
velcro.forward(250)
velcro.left(140)
velcro.forward(200)
def letter_O():
velcro.left(180)
velcro.forward(100)
velcro.right(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(250)
def letter_P():
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(95)
velcro.right(90)
velcro.forward(95)
velcro.right(90)
velcro.forward(95)
def letter_Q():
velcro.left(90)
velcro.forward(250)
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(250)
velcro.left(90)
velcro.forward(150)
velcro.left(140)
velcro.forward(50)
velcro.left(180)
velcro.forward(100)
def letter_R():
velcro.left(90)
velcro.forward(250)
velcro.right(90)
velcro.forward(100)
velcro.right(90)
velcro.forward(115)
velcro.right(90)
velcro.forward(100)
velcro.left(135)
velcro.forward(180)
def letter_S():
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.left(180)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
def letter_T():
velcro.left(90)
velcro.forward(150)
velcro.left(90)
velcro.forward(50)
velcro.right(180)
velcro.forward(100)
def letter_U():
velcro.left(90)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(200)
def letter_V():
velcro.left(115)
velcro.forward(250)
velcro.left(180)
velcro.forward(250)
velcro.left(120)
velcro.forward(250)
def letter_W():
velcro.left(270)
velcro.forward(275)
velcro.right(140)
velcro.forward(200)
velcro.left(100)
velcro.forward(200)
velcro.right(140)
velcro.forward(275)
def letter_X():
velcro.left(140)
velcro.forward(300)
velcro.left(180)
velcro.forward(150)
velcro.right(270)
velcro.forward(150)
velcro.right(180)
velcro.forward(300)
def letter_Y():
velcro.left(140)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.right(275)
velcro.forward(200)
velcro.left(180)
velcro.forward(200)
velcro.left(45)
velcro.forward(250)
def letter_Z():
velcro.forward(225)
velcro.left(140)
velcro.forward(300)
velcro.right(140)
velcro.forward(225)
velcroLetter = turtle.textinput("Turtle alphabet", "please enter a capital letter from A-Z to draw:")
if (velcroLetter == 'a' or velcroLetter == 'A'):
letter_A()
elif (velcroLetter == 'b' or velcroLetter == 'B'):
letter_B()
elif (velcroLetter == 'c' or velcroLetter == 'C'):
letter_C()
elif (velcroLetter == 'd' or velcroLetter == 'D'):
letter_D()
elif (velcroLetter == 'e' or velcroLetter == 'E'):
letter_E()
elif (velcroLetter == 'f' or velcroLetter == 'F'):
letter_F()
elif (velcroLetter == 'g' or velcroLetter == 'G'):
letter_G()
elif (velcroLetter == 'h' or velcroLetter == 'H'):
letter_H()
elif (velcroLetter == 'i' or velcroLetter == 'I'):
letter_I()
elif (velcroLetter == 'j' or velcroLetter == 'J'):
letter_J()
elif (velcroLetter == 'k' or velcroLetter == 'K'):
letter_K()
elif (velcroLetter == 'l' or velcroLetter == 'L'):
letter_L()
elif (velcroLetter == 'm' or velcroLetter == 'M'):
letter_M()
elif (velcroLetter == 'n' or velcroLetter == 'N'):
letter_N()
elif (velcroLetter == 'o' or velcroLetter == 'O'):
letter_O()
elif (velcroLetter == 'p' or velcroLetter == 'P'):
letter_P()
elif (velcroLetter == 'q' or velcroLetter == 'Q'):
letter_Q()
elif (velcroLetter == 'r' or velcroLetter == 'R'):
letter_R()
elif (velcroLetter == 's' or velcroLetter == 'S'):
letter_S()
elif (velcroLetter == 't' or velcroLetter == 'T'):
letter_T()
elif (velcroLetter == 'u' or velcroLetter == 'U'):
letter_U()
elif (velcroLetter == 'v' or velcroLetter == 'V'):
letter_V()
elif (velcroLetter == 'w' or velcroLetter == 'W'):
letter_W()
elif (velcroLetter == 'x' or velcroLetter == 'X'):
letter_X()
elif (velcroLetter == 'y' or velcroLetter == 'Y'):
letter_Y()
elif (velcroLetter == 'z' or velcroLetter == 'Z'):
letter_Z()
turtle.mainloop()
</code></pre>
| 3 | 2016-10-10T17:09:00Z | 40,050,122 | <p>Since your letters print relative to the origin (0, 0), the trick is to divide the screen into portions (say thirds) and for each letter, move to the center of the appropriate portion before drawing the letter.</p>
<p>I've reworked your code below to do the above (for up to three letters) and simplify some of your logic. The letter functions themselves are unchanged so I left out the bulk of them below, just splice in your original code there:</p>
<pre><code>from turtle import Turtle, Screen
SCREEN_WIDTH, SCREEN_HEIGHT = 1200, 600
SCREEN_MARGIN = 0.1 * SCREEN_WIDTH
MAXIMUM_CHARACTERS = 3
CHARACTER_WIDTH = (SCREEN_WIDTH - (2 * SCREEN_MARGIN)) / MAXIMUM_CHARACTERS
COLORS = ['purple', 'blue', 'black']
velcro = Turtle()
screen = Screen()
def letter_A():
velcro.left(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(90)
velcro.forward(150)
velcro.right(180)
velcro.forward(75)
velcro.left(90)
velcro.forward(150)
# ...
def letter_Z():
velcro.forward(225)
velcro.left(140)
velcro.forward(300)
velcro.right(140)
velcro.forward(225)
letter_lookup = {
'A': letter_A,
'B': letter_B,
'C': letter_C,
'D': letter_D,
'E': letter_E,
'F': letter_F,
'G': letter_G,
'H': letter_H,
'I': letter_I,
'J': letter_J,
'K': letter_K,
'L': letter_L,
'M': letter_M,
'N': letter_N,
'O': letter_O,
'P': letter_P,
'Q': letter_Q,
'R': letter_R,
'S': letter_S,
'T': letter_T,
'U': letter_U,
'V': letter_V,
'W': letter_W,
'X': letter_X,
'Y': letter_Y,
'Z': letter_Z,
}
screen.setup(width=SCREEN_WIDTH, height=SCREEN_HEIGHT)
screen.bgcolor('pink')
velcro.color("purple", "blue")
velcro.pensize("12")
color_string = ", ".join(COLORS[:-1]) + " or " + COLORS[-1]
velcroColor = screen.textinput("Pick a Color", "Please chose from the colors " + color_string + ", to draw with")
if (velcroColor in COLORS):
velcro.color(velcroColor)
velcroLetters = screen.textinput("Turtle alphabet", "Please enter up to 3 letters from A-Z to draw:")[:MAXIMUM_CHARACTERS]
center_x = CHARACTER_WIDTH * (len(velcroLetters) - 1) / -2
for letter in velcroLetters:
velcro.penup()
velcro.home()
velcro.goto(center_x, 0)
velcro.pendown()
letter_lookup[letter.upper()]()
center_x += CHARACTER_WIDTH
screen.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/HEnvs.png" rel="nofollow"><img src="https://i.stack.imgur.com/HEnvs.png" alt="enter image description here"></a></p>
| 0 | 2016-10-14T18:47:31Z | [
"python",
"turtle-graphics"
] |
list of strings - remove commonalities of common strings | 39,963,368 | <p>I'm struggling to come up with a way to solve this problem (and how to come up with a name for this question on stackoverflow).</p>
<p>I need to somehow remove commonalities of strings preserving the remainder. </p>
<p>Given a list such as the following:</p>
<pre><code>l = ('first',
'first.second',
'first.second.third',
'a.b.c',
'x.y.z',
'x.y')
</code></pre>
<p>I'm hoping to have a list output as:</p>
<pre><code>('first',
'second',
'third',
'a.b.c',
'x.y',
'z' )
</code></pre>
<p>as you can see, when <code>first.second</code> subtracts <code>first</code> and remains <code>second</code>. <code>first.second</code> is subtracted from <code>first.second.third</code> we get <code>third</code>. <code>a.b.c</code> doesn't have anything to subtract from itself, so it remains. <code>x.y</code> is subtracted from <code>x.y.z</code> and <code>z</code> remains.</p>
<p>I'm thinking maybe a <code>sort(key=len)</code> will be part of the solution, as well as some sort of recursion to end up with the string remainder. I'm hoping for a clean way to remove commonalities of each string in the list. </p>
| 1 | 2016-10-10T17:09:22Z | 39,963,718 | <p>I believe you need to define your problem a little more exactly before writing a solution. Here's what I infer from your test case:</p>
<ol>
<li>"Members" are delimited by periods: the same "member" can't appear in two tuple items.</li>
<li>Each member should only appear once.</li>
</ol>
<p>The problem, though, is that the <em>precedence</em> is ambiguous. For example, in the following sequence:</p>
<pre><code>lst = ('a.b.c',
'a.b.d')
</code></pre>
<p>Where do 'a' and 'b' belong? Your test case implies that a common member should go to the one with the least common members (so "z" doesn't stay with "x.y.z"), but there are plenty of edge cases that need to be considered. You will need to put your requirements in a more exact format.</p>
<p>Adopting the much simpler rule that a "member" should stay in the first place that it appears, the following function does the trick:</p>
<pre><code>def remove_common(lst):
seen = set()
res = []
for i in lst:
members = i.split('.')
res.append('.'.join(w for w in members if w not in seen))
seen |= set(members)
return tuple(res)
</code></pre>
<p>This gives a result very close to yours:</p>
<pre><code>>>> remove_common(l)
('first', 'second', 'third', 'a.b.c', 'x.y.z', '')
</code></pre>
| 2 | 2016-10-10T17:32:20Z | [
"python",
"algorithm"
] |
list of strings - remove commonalities of common strings | 39,963,368 | <p>I'm struggling to come up with a way to solve this problem (and how to come up with a name for this question on stackoverflow).</p>
<p>I need to somehow remove commonalities of strings preserving the remainder. </p>
<p>Given a list such as the following:</p>
<pre><code>l = ('first',
'first.second',
'first.second.third',
'a.b.c',
'x.y.z',
'x.y')
</code></pre>
<p>I'm hoping to have a list output as:</p>
<pre><code>('first',
'second',
'third',
'a.b.c',
'x.y',
'z' )
</code></pre>
<p>as you can see, when <code>first.second</code> subtracts <code>first</code> and remains <code>second</code>. <code>first.second</code> is subtracted from <code>first.second.third</code> we get <code>third</code>. <code>a.b.c</code> doesn't have anything to subtract from itself, so it remains. <code>x.y</code> is subtracted from <code>x.y.z</code> and <code>z</code> remains.</p>
<p>I'm thinking maybe a <code>sort(key=len)</code> will be part of the solution, as well as some sort of recursion to end up with the string remainder. I'm hoping for a clean way to remove commonalities of each string in the list. </p>
| 1 | 2016-10-10T17:09:22Z | 39,963,803 | <p>If the output order is not important, this solution will give you the expected values.</p>
<p>This is by implementation almost the same as @brianpck's answer. But I used sorting to deal with the <code>"x.y.z"</code> problem. And some extra explanations.</p>
<pre><code>l = ('first',
'first.second',
'first.second.third',
'a.b.c',
'x.y.z',
'x.y')
def clarify(lst):
# Sort the list to deal with the order problem.
# for ex. "x.y.z" deletes "x.y" if not sorted
lst = sorted(lst)
# Words encountered.
words = set()
new_list = []
for elem in lst:
# Divide elements using dots.
divided = elem.split(".")
# New element to be added to the result.
new_elem = []
# For each word in a divided element.
for word in divided:
# If a word in the element is not encountered before.
# Add it to new element
# Add it to words
if word not in words:
new_elem.append(word)
words.add(word)
# Join new element
new_list.append(".".join(new_elem))
return new_list
print clarify(l)
# Gives ['a.b.c', 'first', 'second', 'third', 'x.y', 'z']
# You can make this a tuple in the solution as in the input if you want.
</code></pre>
| 2 | 2016-10-10T17:38:34Z | [
"python",
"algorithm"
] |
list of strings - remove commonalities of common strings | 39,963,368 | <p>I'm struggling to come up with a way to solve this problem (and how to come up with a name for this question on stackoverflow).</p>
<p>I need to somehow remove commonalities of strings preserving the remainder. </p>
<p>Given a list such as the following:</p>
<pre><code>l = ('first',
'first.second',
'first.second.third',
'a.b.c',
'x.y.z',
'x.y')
</code></pre>
<p>I'm hoping to have a list output as:</p>
<pre><code>('first',
'second',
'third',
'a.b.c',
'x.y',
'z' )
</code></pre>
<p>as you can see, when <code>first.second</code> subtracts <code>first</code> and remains <code>second</code>. <code>first.second</code> is subtracted from <code>first.second.third</code> we get <code>third</code>. <code>a.b.c</code> doesn't have anything to subtract from itself, so it remains. <code>x.y</code> is subtracted from <code>x.y.z</code> and <code>z</code> remains.</p>
<p>I'm thinking maybe a <code>sort(key=len)</code> will be part of the solution, as well as some sort of recursion to end up with the string remainder. I'm hoping for a clean way to remove commonalities of each string in the list. </p>
| 1 | 2016-10-10T17:09:22Z | 39,969,725 | <p>I think maybe i was trying too get too fancy with list and dict comprehensions. </p>
<p>I believe i was able to solve the problem with the following:</p>
<ol>
<li>sort in-bound list</li>
<li>use a variable for "previous list element" </li>
<li>loop, and output the the current element replacing previous element (if it is found) </li>
</ol>
<p>Here's what i have so far:</p>
<pre><code>li = [
'first',
'first.second',
'first.second.third',
'a',
'a.b',
'a.b.c',
'x.y.z',
'x.y']
li.sort()
prev_l = ''
output = {}
for l in li:
if l.find(prev_l) ==0:
output[l] = l.replace(prev_l,'')
else:
output[l] = l
prev_l = l
</code></pre>
<p>output:</p>
<pre><code>{
'a' : 'a',
'a.b' : '.b',
'a.b.c' : '.c',
'first' : 'first',
'first.second' : '.second',
'first.second.third' : '.third',
'x.y' : 'x.y',
'x.y.z' : '.z'}
</code></pre>
| 0 | 2016-10-11T02:41:44Z | [
"python",
"algorithm"
] |
list of strings - remove commonalities of common strings | 39,963,368 | <p>I'm struggling to come up with a way to solve this problem (and how to come up with a name for this question on stackoverflow).</p>
<p>I need to somehow remove commonalities of strings preserving the remainder. </p>
<p>Given a list such as the following:</p>
<pre><code>l = ('first',
'first.second',
'first.second.third',
'a.b.c',
'x.y.z',
'x.y')
</code></pre>
<p>I'm hoping to have a list output as:</p>
<pre><code>('first',
'second',
'third',
'a.b.c',
'x.y',
'z' )
</code></pre>
<p>as you can see, when <code>first.second</code> subtracts <code>first</code> and remains <code>second</code>. <code>first.second</code> is subtracted from <code>first.second.third</code> we get <code>third</code>. <code>a.b.c</code> doesn't have anything to subtract from itself, so it remains. <code>x.y</code> is subtracted from <code>x.y.z</code> and <code>z</code> remains.</p>
<p>I'm thinking maybe a <code>sort(key=len)</code> will be part of the solution, as well as some sort of recursion to end up with the string remainder. I'm hoping for a clean way to remove commonalities of each string in the list. </p>
| 1 | 2016-10-10T17:09:22Z | 39,988,612 | <p>I've thought about this interesting problem some more and came up with a solution.</p>
<p>The problem is fundamentally tree structured, regardless of which tree-like technique you end up using: </p>
<ul>
<li>an actual tree datatype (which is how I initially solved it, but it was much more verbose)</li>
<li>recursion</li>
<li>simulating recursion using stacks (which is what I ultimately ended doing).</li>
</ul>
<p>I'll use the following list of expanded list of words since it points out some edge cases that make other solutions fail:</p>
<pre><code>li = ['First',
'FirstSecond',
'FirstSecondFirst',
'FirstSecondFirstly',
'FirstSecondThird',
'FirstFoo',
'FirstBarracudaSphyraena',
'xyz',
'xy',
'12345',
'123',
'1',
'FireTruckGarage',
'FireTruck',
'Fire']
</code></pre>
<p>The trick is in noticing that, every time there's a potential lengthening of the prefix, we must save the previous prefix on the prefix stack (called <code>prefixes</code> here), which contains all prefixes seen so far that haven't yet been exhausted. After we've done with some of the "deeper" wordsâin the sense of being nodes deeper down the treeâwe may need to backtrack and use an old prefix for some shorter word yet again.</p>
<p>After encountering a word that is not prefixed by the current prefix, we must pop the stack of prefixes until we reach one that <em>does</em> prefix the word and continue from there.</p>
<pre><code>def ambiguate(words):
output = {}
prefixes = ['']
prefix = prefixes[0]
for item in sorted(set(words)):
backtracked = False
while not item.startswith(prefix):
prefix = prefixes.pop()
backtracked = True
# If we have backtracked, we put back the current prefix onto the
# prefix stack since we may have to use it later on.
if backtracked:
prefixes.append(prefix)
# Construct new output and a new prefix and append it to the
# prefix stack
output[item] = item.replace(prefix, '', 1)
prefix = item
prefixes.append(prefix)
return output
</code></pre>
<p>Running:</p>
<pre><code>print(ambiguate(li))
</code></pre>
<p>Yields:</p>
<pre><code>{'1': '1',
'123': '23',
'12345': '45',
'Fire': 'Fire',
'FireTruck': 'Truck',
'FireTruckGarage': 'Garage',
'First': 'First',
'FirstBarracudaSphyraena': 'BarracudaSphyraena',
'FirstFoo': 'Foo',
'FirstSecond': 'FirstSecond',
'FirstSecondFirst': 'First',
'FirstSecondFirstly': 'ly',
'FirstSecondFourth': 'Fourth',
'FirstSecondThird': 'FirstSecondThird',
'a': 'a',
'abc': 'bc',
'xy': 'xy',
'xyz': 'z'}
</code></pre>
| 1 | 2016-10-12T00:09:07Z | [
"python",
"algorithm"
] |
Python: open two different tabs via selenium | 39,963,422 | <p>I am trying to open two different tabs in the browser through Selenium.
But when I finish the query in the first tab and switch to the second tab, my next query is performed in the first tab again. What do I have to change so that the two queries run in different tabs (not in one tab like now)?</p>
<pre><code> <!-- language: python3 -->
import time,os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
chromedriver = "/home/andrew/ÐагÑÑзки/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
for elem in range(0,3):
driver.find_element_by_tag_name("body").send_keys(
Keys.CONTROL + "t")
driver.get("http://google.com")
time.sleep(3)
# first tab
search = driver.find_element_by_name('q')
search.send_keys('andrew sotnikov site:progreso.com.ua')
search.send_keys(
Keys.RETURN) # hit return after you enter search text
time.sleep(5)
time.sleep(3)
# second tab
driver.find_element_by_tag_name("body").send_keys(
Keys.CONTROL + "t")
driver.get("http://google.com")
time.sleep(2)
search = driver.find_element_by_name('q')
search.send_keys('andrew sotnikov site:progreso.com.ua')
search.send_keys(
Keys.RETURN) # hit return after you enter search text
time.sleep(5)
driver.find_element_by_tag_name('body').send_keys(
Keys.CONTROL + 'w')
</code></pre>
 | -2 | 2016-10-10T17:12:59Z | 39,965,509 | <p>The best solution for me is using window_handles, as <strong>Saurabh Gaur</strong> advised me.
But before switching between tabs I have to open all of them; after that I can iterate over the window handles and switch between the tabs like any other iterable.
My solution is below:</p>
<pre><code>for elem in range(0,3):
driver.find_element_by_tag_name("body").send_keys(
Keys.CONTROL + "t")
for handle in driver.window_handles:
driver.get("http://google.com")
driver.switch_to_window(driver.handle)
time.sleep(2)
search = driver.find_element_by_name('q')
search.send_keys('andrew sotnikov site:progreso.com.ua')
search.send_keys(
Keys.RETURN) # hit return after you enter search text
time.sleep(5)
</code></pre>
| 0 | 2016-10-10T19:36:41Z | [
"python",
"selenium"
] |
Non hexadecimal digit found | 39,963,516 | <p>I received the following hex string which on conversion to binary throws error </p>
<p>hex string : <code>value='(\xd2M\x00\x18\x00\x18\x80\x00\x80\x00\x00\x00\x00\x00\x00\xe0\xd2\xe0\xd2.\xd2\x00\x00\x00\x00\x00\x00\n\x00\x18\x00&\x00\x00\x00\x00\x00\x00\x00\x0f0\xfe/\x010\xff/\x000\xff/\x000\xff/\xff/\xff/\xff/\xff/\x000\xff/\xff/\xff/\x000\x000\xff/\x000\x000\x000\xff/\xff/\x000\x000\xff/\x000\xad\xff\x0c\x00\xdd\xff\xc2\xff\xd3\xff\xde\xff\xe9\xff\xca\xff\xd8\xff\xe6\xff\xb5\xff\xb2\xff\xe6\xff\x92\xff\xd0\xff\xa0\xff\xbd\xff\xb4\xff\x82\xff\x90\xfff\xff\xe1\xff\x9f\xff\x94\xff\xd4\xff\xa4\xff\xbb\xff\xe8\xff\x00\x00\x02\x00\xff\x7f\xff\x7f\x97\xff\xd0\xff\xb7\xff~\xffG\xff\xa1\xff\xa1\xff\xcd\xab\x00\x00A\n\x00\x00'</code></p>
<pre><code>binascii.unhexlify(value)
</code></pre>
<p>TypeError: Non-hexadecimal digit found</p>
| -3 | 2016-10-10T17:19:05Z | 39,963,544 | <p>That's not a hex string. You are confusing the Python <code>repr()</code> output for a bytestring, which aims to make debugging easier, with the contents.</p>
<p>Each <code>\xhh</code> is a standard Python string literal escape sequence, and displaying the string like this makes it trivial to copy and paste into another Python session to reproduce the exact same value.</p>
<p>You don't need to hex decode this at all.</p>
<p>An actual hex string consists <em>only</em> of the digits <code>0</code> through to <code>9</code>, and the letters <code>a</code> through to <code>f</code> (upper or lowercase). Your value, converted to hex, looks like this:</p>
<pre><code>>>> value='(\xd2M\x00\x18\x00\x18\x80\x00\x80\x00\x00\x00\x00\x00\x00\xe0\xd2\xe0\xd2.\xd2\x00\x00\x00\x00\x00\x00\n\x00\x18\x00&\x00\x00\x00\x00\x00\x00\x00\x0f0\xfe/\x010\xff/\x000\xff/\x000\xff/\xff/\xff/\xff/\xff/\x000\xff/\xff/\xff/\x000\x000\xff/\x000\x000\x000\xff/\xff/\x000\x000\xff/\x000\xad\xff\x0c\x00\xdd\xff\xc2\xff\xd3\xff\xde\xff\xe9\xff\xca\xff\xd8\xff\xe6\xff\xb5\xff\xb2\xff\xe6\xff\x92\xff\xd0\xff\xa0\xff\xbd\xff\xb4\xff\x82\xff\x90\xfff\xff\xe1\xff\x9f\xff\x94\xff\xd4\xff\xa4\xff\xbb\xff\xe8\xff\x00\x00\x02\x00\xff\x7f\xff\x7f\x97\xff\xd0\xff\xb7\xff~\xffG\xff\xa1\xff\xa1\xff\xcd\xab\x00\x00A\n\x00\x00'
>>> import binascii
>>> binascii.hexlify(value)
'28d24d00180018800080000000000000e0d2e0d22ed20000000000000a00180026000000000000000f30fe2f0130ff2f0030ff2f0030ff2fff2fff2fff2fff2f0030ff2fff2fff2f00300030ff2f003000300030ff2fff2f00300030ff2f0030adff0c00ddffc2ffd3ffdeffe9ffcaffd8ffe6ffb5ffb2ffe6ff92ffd0ffa0ffbdffb4ff82ff90ff66ffe1ff9fff94ffd4ffa4ffbbffe8ff00000200ff7fff7f97ffd0ffb7ff7eff47ffa1ffa1ffcdab0000410a0000'
</code></pre>
| 1 | 2016-10-10T17:20:57Z | [
"python"
] |
How can I get a python code to run over and over again? | 39,963,586 | <p>I have a scraper that scrapes data from a website, then saves the data in .csv files. What I am looking for is a way to run this code every 10 minutes, without using a loop. I have very little knowledge on how to do this. What approach would you use?</p>
 | -3 | 2016-10-10T17:23:22Z | 39,963,771 | <p>It's impossible to repeat code without some kind of loop running somewhere.</p>
<p>I can only think that you want something like Task Scheduling on Windows (<a href="https://msdn.microsoft.com/pt-br/library/windows/desktop/aa383614.aspx" rel="nofollow">https://msdn.microsoft.com/pt-br/library/windows/desktop/aa383614.aspx</a>) or crontab/timers on Linux.</p>
| 0 | 2016-10-10T17:36:05Z | [
"python",
"web-scraping"
] |
How can I get a python code to run over and over again? | 39,963,586 | <p>I have a scraper that scrapes data from a website, then saves the data in .csv files. What I am looking for is a way to run this code every 10 minutes, without using a loop. I have very little knowledge on how to do this. What approach would you use?</p>
| -3 | 2016-10-10T17:23:22Z | 39,963,778 | <p>Are you using windows or a unix based system?</p>
<p>If you're using UNIX, you can schedule jobs to take place at regular intervals with CRON.</p>
<p>Execute the following in a terminal window:</p>
<pre><code>#edit crontab
crontab -e
</code></pre>
<p>Then add a line for the script you want to execute, prefixed with the schedule. To run it every 10 minutes:</p>
<pre><code>*/10 * * * * /path/to/desired/Python_version /path/to/executable/file
</code></pre>
| 1 | 2016-10-10T17:36:30Z | [
"python",
"web-scraping"
] |
Downloading a file using TCP(Client/Server)? | 39,963,631 | <p>I am trying to make a TCP (Client and Server) in python to download a file that is available on the Server. I am a total beginner in networking in Python, and following a tutorial for this purpose. The problem I am getting is that whenever I try to download a file from the server I get this error:</p>
<pre class="lang-none prettyprint-override"><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "fileServer.py", line 8, in RetrFile
sock.send("EXISTS " + str(os.path.getsize(filename)));
TypeError: 'str' does not support the buffer interface
</code></pre>
<p>FileServer.py</p>
<pre><code>import socket;
import threading;
import os;
def RetrFile(name,sock):
filename = sock.recv(1024).decode();
if os.path.isfile(filename):
sock.send("EXISTS " + str(os.path.getsize(filename)));
userResponse = sock.recv(1024).decode();
if (userResponse[:2] == 'OK'):
with open(filename,'rb') as f:
bytesToSend = f.read(1024);
sock.send(bytesToSend);
while (bytesToSend != ""):
byteToSend = f.read(1024);
sock.send(bytesToSend);
else:
sock.send("ERR");
sock.close();
def Main():
host = "127.0.0.1";
port = 5003;
s = socket.socket();
s.bind((host,port));
s.listen(5);
print("Server Started.")
while True:
c , addr = s.accept();
print("Client connected ip : " + str(addr));
t = threading.Thread(target = RetrFile,args=("retrThread",c))
t.start();
s.close();
if __name__ == '__main__':
Main();
</code></pre>
<p>FileClient.py</p>
<pre><code>import socket
def Main():
host = "127.0.0.1";
port = 5003;
s = socket.socket();
s.connect((host,port));
filename = input("Filename? -> ");
if (filename != "q"):
s.send(filename.encode())
data = s.recv(1024)
if (data[:6] == "EXISTS"):
filesize = long(data[6:])
message = input("File Exists, " + str(fielsize) + "Bytes, download? (Y/N)? -> ");
if (message == "Y"):
s.send("OK")
f = open('new_'+filename,'wb')
data = s.recv(1024)
totalRecv = len(data)
f.write(data)
while(totalRecv < filesize):
data = s.recv(1024);
totalRecv += len(data)
f.write(data)
print("{0:.2f}".format((totalRecv/float(filesize))*100 + "%Done"));
print("Download Complete!");
else:
print("File does not exist!");
s.close();
if __name__ == '__main__':
Main();
</code></pre>
 | 0 | 2016-10-10T17:26:52Z | 39,963,856 | <p>You need to be sending <code>bytes</code> to the socket, not a <code>string</code>. You can convert a string to bytes with <code>.encode()</code>. Try:</p>
<pre><code>message = "EXISTS " + str(os.path.getsize(filename)))
sock.send(message.encode())
</code></pre>
<p>As a side note, you don't need semicolons when using Python, so I would recommend removing them from your code.</p>
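<p>The same treatment is needed everywhere else the code mixes text and bytes; for example (a sketch, not the whole fix), the server's <code>sock.send("ERR")</code> becomes <code>sock.send("ERR".encode())</code>, and on the client side the received bytes should be decoded before comparing them to strings:</p>
<pre><code>data = s.recv(1024).decode()
if data[:6] == "EXISTS":
    filesize = int(data[6:])   # Python 3 has no long(); int handles big values
    s.send("OK".encode())
</code></pre>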
| 0 | 2016-10-10T17:42:27Z | [
"python",
"networking",
"tcp",
"fileserver"
] |
Python - Matplotlib in Tkinter: The toolbar pan and zoom doesnt change the cursor | 39,963,669 | <p>I've been developing a tkinter app which has a matplotlib graph embedded in it. The problem I am having right now is:</p>
<p>Allthough the functionality of the toolbar is working, it doesnt change the cursor which makes a worse user experience since you cant tell if you are zooming or not.
From what i have seen from various documentation and online tutorials this shouldnt be an issue.</p>
<p>From what I read, the toolbar should actually make changing the cursor harder, since it overrides the cursor all the time when pan or zoom is selected. I am posting the related parts of my code; please let me know if you see something that doesn't add up, or anything that I might be missing. </p>
<p>Any suggestion is much appreciated!</p>
<pre><code>from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
class Controller(tk.Tk):
self.canvass = FigureCanvasTkAgg(self.plots, self)
self.canvass.show()
self.canvass.get_tk_widget().grid(row=1, column=0, sticky="news")
self.toolbar_frame = tk.Frame(self, width=410, height=30)
self.toolbar_frame.config(relief="sunken", borderwidth=1)
self.toolbar_frame.pack_propagate(flag=False)
self.toolbar_frame.grid(column=0, row=3, sticky="w")
self.toolbar = NavigationToolbar2TkAgg(self.canvass, self.toolbar_frame)
self.toolbar.pack()
self.toolbar.update()
</code></pre>
<p><strong>It turned out that putting the toolbar into a frame causes that weird behavior. I just took a simple 'matplotlib in tkinter embedding' example which was working and modified it to use a frame as I did in my code, and the same problem occurred.</strong></p>
<p>I need the frame because normally the toolbar uses pack, and I don't want to change the whole layout just because of this, since I have used grid all along.</p>
 | 1 | 2016-10-10T17:29:07Z | 40,060,136 | <p>Well, I have encountered the same problem too, and oddly enough it is caused by the canvas and the toolbar not having the same master. Just making a new frame and putting both the canvas and the toolbar in it solves the problem. You can still place that frame with grid in your layout.</p>
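<p>A minimal sketch of that arrangement (<code>plot_frame</code> is just a new frame; <code>self.plots</code> is assumed to be your <code>Figure</code>, as in the question):</p>
<pre><code>plot_frame = tk.Frame(self)
plot_frame.grid(row=1, column=0, sticky="news")

# canvas and toolbar now share the same master
self.canvass = FigureCanvasTkAgg(self.plots, master=plot_frame)
self.canvass.show()
self.canvass.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=True)

self.toolbar = NavigationToolbar2TkAgg(self.canvass, plot_frame)
self.toolbar.update()
</code></pre>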
| 1 | 2016-10-15T14:18:32Z | [
"python",
"python-3.x",
"matplotlib",
"tkinter"
] |
List of tab delimited arrays to numpy array? | 39,963,693 | <p>Win 7, x64, Python 2.7.12</p>
<p>I have data in the form</p>
<pre><code>myData = [[a1, b1, c1, d1, e1, f1, g1, h1], [a2, b2, c2, .... ], ..... ]
</code></pre>
<p>where <code>myData</code> is a <code>np.ndarray</code> of floats. I saved this by using the following...</p>
<pre><code>with open('myData.txt', 'w') as f:
for s in myData:
f.write(str(s) + '\n')
</code></pre>
<p>Which on inspection was actually saved like...</p>
<pre><code>[a1 b1 c1 d1 e1 f1 g1 h1]
[a2 b2 c2 d2 e2 f2 g2 h1]
.....
</code></pre>
<p>i.e. tab delimited.</p>
<p>So I tried to read it back in using...</p>
<pre><code>import numpy as np
from ast import literal_eval
with open('myData.txt', 'r') as f:
fromFile = [np.ndarray(literal_eval(line)) for line in f]
f.close()
</code></pre>
<p>But this throws an error...</p>
<pre><code> File "<unknown>", line 1
[ 1. 1.198 2.063 1.833 1.458 1.885 1.969 0.343]
^
SyntaxError: invalid syntax
</code></pre>
<p>So given that I can't regenerate the file <code>myData.txt</code>, how do I restore it to its initial data type?</p>
<p>Also is there a way of stopping the data being written out like that in the first place?</p>
<p>EDIT: A solution to the above...</p>
<pre><code>import numpy as np
from ast import literal_eval
branches = ['[ 1. 1.198 2.063 1.833 1.458 1.885 1.969 0.343]\n',
'[ 2. 1.26 2. 1.26 1.26 2. 1.26 0. ]\n',
'[ 3. 1.688 2. 1.781 1.573 2.021 1.979 0.23 ]\n',
'[ 4. 1.604 2.729 1.792 1.667 2.49 1.948 0.293]\n']
branches = [line.rstrip(']\n') for line in branches]
branches = [line.lstrip('[ ') for line in branches]
print branches[0]
branches = [line.split(' ') for line in branches]
newBranches = []
for branch in branches:
branch = filter(None, branch)
branch = [float(item) for item in branch]
newBranches.append(branch)
print newBranches[0]
branches = np.array(newBranches)
</code></pre>
<p>Unless there is a faster way of doing this, that's how I'll be doing it. I will also be taking Nils Werner's advice below in the answers.</p>
| 0 | 2016-10-10T17:30:33Z | 39,963,815 | <p>You should use</p>
<pre><code>numpy.save('myData.npy', myData)
</code></pre>
<p>which you can then read like</p>
<pre><code>myData = numpy.load('myData.npy')
</code></pre>
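<p>If the existing text file still needs to be read back rather than re-saved, <code>numpy.loadtxt</code> can usually parse it once the brackets are stripped — a minimal sketch, assuming the file looks like the sample in the question:</p>
<pre><code>import numpy as np

# strip the surrounding brackets from each line, then let loadtxt split on whitespace
with open('myData.txt') as f:
    fromFile = np.loadtxt(line.strip('[] \n') for line in f)
</code></pre>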
| 1 | 2016-10-10T17:39:06Z | [
"python",
"arrays",
"numpy",
"multidimensional-array"
] |
Download files from public S3 bucket with boto3 | 39,963,745 | <p>I cannot download a file or even get a listing of the <strong>public</strong> S3 bucket with <code>boto3</code>.</p>
<p>The code below works with my own bucket, but not with public one:</p>
<pre><code>def s3_list(bucket, s3path_or_prefix):
bsession = boto3.Session(aws_access_key_id=settings.AWS['ACCESS_KEY'],
aws_secret_access_key=settings.AWS['SECRET_ACCESS_KEY'],
region_name=settings.AWS['REGION_NAME'])
s3 = bsession.resource('s3')
my_bucket = s3.Bucket(bucket)
items = my_bucket.objects.filter(Prefix=s3path_or_prefix)
return [ii.key for ii in items]
</code></pre>
<p>I get an <code>AccessDenied</code> error on this code. The bucket is not in my own and I cannot set permissions there, but I am sure it is open to public read.</p>
 | 4 | 2016-10-10T17:34:09Z | 39,963,893 | <p>I had a similar issue in the past. I found the key to this problem in <a href="https://github.com/boto/boto3/issues/134" rel="nofollow">https://github.com/boto/boto3/issues/134</a>.</p>
<p>You can use an undocumented trick:</p>
<pre><code>import botocore
def s3_list(bucket, s3path_or_prefix, public=False):
bsession = boto3.Session(aws_access_key_id=settings.AWS['ACCESS_KEY'],
aws_secret_access_key=settings.AWS['SECRET_ACCESS_KEY'],
region_name=settings.AWS['REGION_NAME'])
client = bsession.client('s3')
if public:
client.meta.events.register('choose-signer.s3.*', botocore.handlers.disable_signing)
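    # with Delimiter='/', the call below returns the top-level "folder" prefixes
    # (CommonPrefixes), not the individual object keys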
result = client.list_objects(Bucket=bucket, Delimiter='/', Prefix=s3path_or_prefix)
return [obj['Prefix'] for obj in result.get('CommonPrefixes')]
</code></pre>
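<p>An alternative that avoids the event-handler trick is to build the client with an unsigned signature version — a short sketch that lists object keys instead of prefixes, assuming a reasonably recent botocore:</p>
<pre><code>import boto3
from botocore import UNSIGNED
from botocore.client import Config

s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
result = s3.list_objects(Bucket=bucket, Prefix=s3path_or_prefix)
keys = [obj['Key'] for obj in result.get('Contents', [])]
</code></pre>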
| 0 | 2016-10-10T17:44:27Z | [
"python",
"boto3"
] |
Easiest way to "fill" payload? | 39,963,798 | <p>I wan't to send a UDP packet with a arbitrary packet size depending on my input argument, so whenever my "data" is not enough to fill up the wanted packet payload I want to just "fill" the rest with empty data.</p>
<p>So if I send <code>123</code> but I want the packet to be of size 100 bytes, the method will pad the packet with the access data, I don't want to iterate and add spaces to fill it out manually. </p>
<p>Any tips?</p>
| -1 | 2016-10-10T17:38:23Z | 39,963,995 | <p>You could create a padding <code>bytearray()</code>, and just prepend (or append) it to your payload:</p>
<pre><code>payload = b'123'
padding_length = 100 - len(payload)
padding_byte = b' '
return bytearray(padding_byte * padding_length) + payload
</code></pre>
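<p>Equivalently, <code>bytes.ljust()</code> does the right-padding in one call — a small sketch:</p>
<pre><code>padded = b'123'.ljust(100, b' ')   # pad on the right with spaces up to 100 bytes
len(padded)                        # 100
</code></pre>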
| 0 | 2016-10-10T17:51:49Z | [
"python",
"sockets",
"udp",
"scapy"
] |
Easiest way to "fill" payload? | 39,963,798 | <p>I wan't to send a UDP packet with a arbitrary packet size depending on my input argument, so whenever my "data" is not enough to fill up the wanted packet payload I want to just "fill" the rest with empty data.</p>
<p>So if I send <code>123</code> but I want the packet to be of size 100 bytes, the method will pad the packet with the access data, I don't want to iterate and add spaces to fill it out manually. </p>
<p>Any tips?</p>
| -1 | 2016-10-10T17:38:23Z | 39,964,704 | <p>I'm not a networking expert, but you might want to take a look at <a href="https://docs.python.org/3.5/library/struct.html" rel="nofollow">struct.pack()</a></p>
<p>This will zero pad a length of characters and should be blazing fast:</p>
<pre><code>from struct import pack
result = pack('!100s', b'input_value')
</code></pre>
<p>This also allows you to mind the endianness of your data if that is important in your domain. In this case the data is left-aligned and null-padded on the right.</p>
<p>Hope that helps!</p>
| 0 | 2016-10-10T18:38:57Z | [
"python",
"sockets",
"udp",
"scapy"
] |
Easiest way to "fill" payload? | 39,963,798 | <p>I wan't to send a UDP packet with a arbitrary packet size depending on my input argument, so whenever my "data" is not enough to fill up the wanted packet payload I want to just "fill" the rest with empty data.</p>
<p>So if I send <code>123</code> but I want the packet to be of size 100 bytes, the method will pad the packet with the access data, I don't want to iterate and add spaces to fill it out manually. </p>
<p>Any tips?</p>
| -1 | 2016-10-10T17:38:23Z | 39,969,577 | <p>Try this way:</p>
<pre><code>>>> from scapy.all import *
WARNING: No route found for IPv6 destination :: (no default route?)
>>> p = IP(dst="62.21.20.21")/UDP()
>>> p = p/Raw('a'*(100-len(p)))
>>> send(p)
.
Sent 1 packets.
>>>
# tcpdump -ni cplane0 udp -e -v -X
tcpdump: listening on cplane0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:15:31.904204 54:ab:3a:56:59:1e > fa:16:3e:e1:9e:14, ethertype IPv4 (0x0800), length 114: (tos 0x0, ttl 62, id 1, offset 0, flags [none], proto UDP (17), length 100)
44.60.11.3.53 > 62.21.20.21.53: 24929 updateM+ [b2&3=0x6161] [24929a] [24929q] [24929n] [24929au][|domain]
0x0000: 4500 0064 0001 0000 3e11 f31f 2c3c 0b03 E..d....>...,<..
0x0010: 3e15 1415 0035 0035 0050 c3c9 6161 6161 >....5.5.P..aaaa
0x0020: 6161 6161 6161 6161 6161 6161 6161 6161 aaaaaaaaaaaaaaaa
0x0030: 6161 6161 6161 6161 6161 6161 6161 6161 aaaaaaaaaaaaaaaa
0x0040: 6161 6161 6161 6161 6161 6161 6161 6161 aaaaaaaaaaaaaaaa
0x0050: 6161 6161 6161 6161 6161 6161 6161 6161 aaaaaaaaaaaaaaaa
0x0060: 6161 6161 aaaa
</code></pre>
<p>Please note "proto UDP (17), length 100" in tcpdump output.</p>
| 0 | 2016-10-11T02:22:46Z | [
"python",
"sockets",
"udp",
"scapy"
] |
 Searching for a named tuple in a list of named tuples | 39,963,822 | <p>I need to find out if a given named tuple exists in a list of named tuples (the named tuples are points, e.g. A(2,3), in a 'Polygon' class). If the given tuple doesn't exist in the list, we append the tuple to the list. If it does exist, a user defined exception is raised. The function works when the given point doesn't exist in the list. But there's no exception raised if the point does exist, and it simply gets added again to the end of the list. Here's what I have so far:</p>
<pre><code>class ExistingPointError(Exception):
    def __init__(self, value):
        self.value = 0
</code></pre>
<pre><code>class Polygon(object):
counter = 0
def __init__(self):
Polygon.counter+=1
self.points = []
# and here's the function that I'm working with
def setter(self,pt):
def isThere(pt):
if pt in self.points: raise ExistingPointError()
print("Setting Point")
try:
isThere(pt)
self.points.append(pt)
except ExistingPointError as E:
print("Point exists! value: ", E)
print(self.points)
P = Polygon()
point=collections.namedtuple('PointName','Name x y')
A = point(Name = 'A', x = 5, y = 0)
B = point(Name = 'B',x = 10,y = 5)
C = point(Name = 'C',x=5,y=10)
D = point(Name = 'D', x=-2,y=8)
lst = [A,B,C,D]
P.createPolygon(lst)
P.setter(D)
</code></pre>
 | -1 | 2016-10-10T17:39:40Z | 39,963,915 | <p>How about this?</p>
<pre><code>def setter(self,pt):
def isThere(pt):
if pt in self.points:
raise ExistingPointError()
print("Setting Point")
try:
isThere(pt)
self.points.append(pt)
except ExistingPointError as E:
print("Point exists! value: ", E)
print(self.points)
</code></pre>
<p>I'm not convinced, however, that exceptions are the way to go here. Maybe try this:</p>
<pre><code>def setter(self,pt):
if pt in self.points:
print("Point exists!")
else:
self.points.append(pt)
print(self.points)
</code></pre>
| 0 | 2016-10-10T17:46:08Z | [
"python",
"list",
"class",
"python-3.x",
"tuples"
] |
 Searching for a named tuple in a list of named tuples | 39,963,822 | <p>I need to find out if a given named tuple exists in a list of named tuples (the named tuples are points, e.g. A(2,3), in a 'Polygon' class). If the given tuple doesn't exist in the list, we append the tuple to the list. If it does exist, a user defined exception is raised. The function works when the given point doesn't exist in the list. But there's no exception raised if the point does exist, and it simply gets added again to the end of the list. Here's what I have so far:</p>
<pre><code>class ExistingPointError(Exception):
    def __init__(self, value):
        self.value = 0
</code></pre>
<pre><code>class Polygon(object):
counter = 0
def __init__(self):
Polygon.counter+=1
self.points = []
# and here's the function that I'm working with
def setter(self,pt):
def isThere(pt):
if pt in self.points: raise ExistingPointError()
print("Setting Point")
try:
isThere(pt)
self.points.append(pt)
except ExistingPointError as E:
print("Point exists! value: ", E)
print(self.points)
P = Polygon()
point=collections.namedtuple('PointName','Name x y')
A = point(Name = 'A', x = 5, y = 0)
B = point(Name = 'B',x = 10,y = 5)
C = point(Name = 'C',x=5,y=10)
D = point(Name = 'D', x=-2,y=8)
lst = [A,B,C,D]
P.createPolygon(lst)
P.setter(D)
</code></pre>
 | -1 | 2016-10-10T17:39:40Z | 39,964,102 | <p>You want to raise a user defined error, ExistingPointError(), yet you haven't really defined what that is. When I run your code and insert a duplicate tuple into a Polygon object, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "python", line 27, in <module>
File "python", line 20, in setter
NameError: name 'ExistingPointError' is not defined
</code></pre>
<p>You may not need to raise an exception for this as @Gjhuizing mentioned. A simple message telling the user that the object already exists should be enough for your case.</p>
| 0 | 2016-10-10T17:59:07Z | [
"python",
"list",
"class",
"python-3.x",
"tuples"
] |
Why is AttributeError raised twice? | 39,963,873 | <pre><code>class ValidatingDB(object):
def __init__(self):
self.exists = 5
def __getattribute__(self, name):
print('Called __getattribute__(%s)' % name)
try:
return super(ValidatingDB, self).__getattribute__(name)
except AttributeError:
value = 'Value for %s' % name
setattr(self, name, value)
return value
data = ValidatingDB()
data.exists
data.foo
data.foo
</code></pre>
<p>I would expect that the first call to data.foo would "set" the attribute and the second call to data.foo would no longer produce the AttributeError. Why does it raise the AttributeError for each call of data.foo?</p>
| -3 | 2016-10-10T17:43:21Z | 39,964,057 | <p>Your code doesn't allow you to see the difference between an <code>AttributeError</code> and an attribute existing. In both cases you get <em>exactly the same result</em>.</p>
<p>So the first <code>data.foo</code> access raises an <code>AttributeError</code>, so a new value is created, set <em>and returned</em>.</p>
<p>The second <code>data.foo</code> access returns that value you already set. This is indistinguishable from the previous expression because <em>in both cases you get a value back</em>.</p>
<p>You'd have to change your code to do something <em>else</em> when an <code>AttributeError</code> has been raised. You could add an extra <code>print</code> statement, for example:</p>
<pre><code>def __getattribute__(self, name):
print('Called __getattribute__(%s)' % name)
try:
return super(ValidatingDB, self).__getattribute__(name)
except AttributeError:
print('No such attribute, generating a new value')
value = 'Value for %s' % name
setattr(self, name, value)
return value
</code></pre>
<p>Now you'll see a difference:</p>
<pre><code>>>> data = ValidatingDB()
>>> data.exists
Called __getattribute__(exists)
5
>>> data.foo
Called __getattribute__(foo)
No such attribute, generating a new value
'Value for foo'
>>> data.foo
Called __getattribute__(foo)
'Value for foo'
</code></pre>
| 0 | 2016-10-10T17:55:45Z | [
"python",
"python-2.7"
] |
BeautifulSoup to return div class content | 39,963,921 | <pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://weather.com/weather/today/l/90006:4:US'
r = requests.get(url)
html_content = r.text
soup = BeautifulSoup(html_content, 'lxml')
weather_row= soup.find('div', {"class" : "today_nowcard-hili"})
print weather_row
</code></pre>
<p>I found the class <code>today_nowcard-hili</code>, which should return the weather highs and lows, i.e. <code>H 84° / L 61°</code>. But the above code keeps giving me <code>None</code>.</p>
| 1 | 2016-10-10T17:46:26Z | 39,963,968 | <p>It is <em>hilo</em> not <em>hili</em>:</p>
<pre><code>soup.find('div', {"class" : "today_nowcard-hilo"})
</code></pre>
<p>But you can see from what it returns that the data is inserted using Js:</p>
<pre><code><div class="today_nowcard-hilo">
<span class="btn-text" data-ng-bind="::'H' | pfTranslate: {context: 'today_nowcard'}"></span>
<span class="deg-hilo-nowcard" data-gm-wx-temperature="::todayWxcardVm.forecast.items[0].day.daytemp" data-text-to-replace="{{ todayWxcardVm.getProperValue('[[ forecast.day.temperature[0] ]]', '--') }}">[[ forecast.day.temperature[0] || '--' ]]</span>
<span data-ng-if="::todayWxcardVm.forecast.items[0].day.daytemp"> /</span>
<span class="btn-text" data-ng-bind="::'L' | pfTranslate: {context: 'today_nowcard'}"></span>
<span class="deg-hilo-nowcard" data-gm-wx-temperature="::todayWxcardVm.forecast.items[0].day.nighttemp" data-text-to-replace="{{ todayWxcardVm.getProperValue('[[ forecast.night.temperature[0] ]]', '--') }}">[[ forecast.night.temperature[0] || '--' ]]</span>
<div>
<span class="btn-text" data-ng-bind="::('UV Index' | pfTranslate: { context: 'weather_terms'})"></span>
<span data-gm-wx-uv-index="::todayWxcardVm.obs.uvIndex"></span>
</div>
</div>
</code></pre>
<p>The weather info is retrieved through an ajax request:</p>
<pre><code>r = requests.get("https://api.weather.com/v2/turbo/vt1precipitation;vt1currentdatetime;vt1pollenforecast;vt1dailyForecast;vt1observation?units=e&language=en-US&geocode=34.05,-118.29&format=json&apiKey=c1ea9f47f6a88b9acb43aba7faf389d4")
print(r.json())
</code></pre>
<p>Which would give you:</p>
<pre><code>{u'vt1currentdatetime': {u'tmZnAbbr': u'PDT', u'datetime': u'2016-10-10T10:54:44.496-07:00'}, u'vt1dailyForecast': {u'dayOfWeek': [u'Monday', u'Tuesday', u'Wednesday', u'Thursday', u'Friday', u'Saturday', u'Sunday', u'Monday', u'Tuesday', u'Wednesday', u'Thursday', u'Friday', u'Saturday', u'Sunday', u'Monday'], u'moonrise': [u'2016-10-10T14:56:44-0700', u'2016-10-11T15:39:14-0700', u'2016-10-12T16:20:52-0700', u'2016-10-13T17:01:06-0700', u'2016-10-14T17:41:27-0700', u'2016-10-15T18:23:24-0700', u'2016-10-16T19:06:47-0700', u'2016-10-17T19:53:38-0700', u'2016-10-18T20:44:27-0700', u'2016-10-19T21:38:21-0700', u'2016-10-20T22:36:08-0700', u'2016-10-21T23:34:45-0700', None, u'2016-10-23T00:34:37-0700', u'2016-10-24T01:32:49-0700'], u'moonset': [u'2016-10-10T00:57:11-0700', u'2016-10-11T01:55:40-0700', u'2016-10-12T02:57:31-0700', u'2016-10-13T04:01:56-0700', u'2016-10-14T05:08:27-0700', u'2016-10-15T06:17:17-0700', u'2016-10-16T07:26:40-0700', u'2016-10-17T08:37:23-0700', u'2016-10-18T09:46:17-0700', u'2016-10-19T10:53:04-0700', u'2016-10-20T11:54:49-0700', u'2016-10-21T12:51:05-0700', u'2016-10-22T13:40:38-0700', u'2016-10-23T14:25:15-0700', u'2016-10-24T15:04:29-0700'], u'validDate': [u'2016-10-10T07:00:00-0700', u'2016-10-11T07:00:00-0700', u'2016-10-12T07:00:00-0700', u'2016-10-13T07:00:00-0700', u'2016-10-14T07:00:00-0700', u'2016-10-15T07:00:00-0700', u'2016-10-16T07:00:00-0700', u'2016-10-17T07:00:00-0700', u'2016-10-18T07:00:00-0700', u'2016-10-19T07:00:00-0700', u'2016-10-20T07:00:00-0700', u'2016-10-21T07:00:00-0700', u'2016-10-22T07:00:00-0700', u'2016-10-23T07:00:00-0700', u'2016-10-24T07:00:00-0700'], u'moonPhrase': [u'Waxing Gibbous', u'Waxing Gibbous', u'Waxing Gibbous', u'Waxing Gibbous', u'Waxing Gibbous', u'Full Moon', u'Full Moon', u'Waning Gibbous', u'Waning Gibbous', u'Waning Gibbous', u'Waning Gibbous', u'Waning Gibbous', u'Last Quarter', u'Waning Crescent', u'Waning Crescent'], u'sunset': [u'2016-10-10T18:24:49-0700', u'2016-10-11T18:23:32-0700', u'2016-10-12T18:22:15-0700', u'2016-10-13T18:20:59-0700', u'2016-10-14T18:19:43-0700', u'2016-10-15T18:18:29-0700', u'2016-10-16T18:17:15-0700', u'2016-10-17T18:16:02-0700', u'2016-10-18T18:14:49-0700', u'2016-10-19T18:13:38-0700', u'2016-10-20T18:12:27-0700', u'2016-10-21T18:11:18-0700', u'2016-10-22T18:10:09-0700', u'2016-10-23T18:09:02-0700', u'2016-10-24T18:07:55-0700'], u'night': {u'iconExtended': [3100, 3100, 3100, 2900, 2900, 2700, 2700, 3100, 3100, 3100, 3100, 3100, 3100, 3100, 3100], u'windDirCompass': [u'SSW', u'SSW', u'SE', u'SSE', u'S', u'S', u'SSW', u'NNW', u'N', u'NNE', u'NE', u'NNE', u'NE', u'SE', u'ESE'], u'humidityPct': [75, 87, 92, 97, 98, 99, 100, 56, 33, 35, 36, 39, 55, 65, 67], u'temperature': [61, 58, 56, 59, 62, 62, 62, 60, 60, 62, 62, 61, 59, 58, 56], u'precipType': [u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain'], u'uvDescription': [u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low', u'Low'], u'cloudPct': [1, 8, 5, 38, 53, 67, 66, 5, 4, 8, 7, 10, 4, 4, 6], u'narrative': [u'A clear sky. Low 61F. Winds light and variable.', u'Clear skies. Low 58F. Winds light and variable.', u'Clear. Low 56F. Winds light and variable.', u'Clear during the evening followed by cloudy skies overnight. Low 59F. Winds light and variable.', u'Clear skies in the evening then becoming cloudy overnight. Low 62F. 
Winds light and variable.', u'Partly cloudy skies early will become overcast later during the night. Low 62F. Winds light and variable.', u'Partly cloudy skies early will become overcast later during the night. Low 62F. Winds light and variable.', u'Clear skies. Low around 60F. Winds light and variable.', u'Clear. Low around 60F. Winds light and variable.', u'Clear skies. Low 62F. Winds NNE at 5 to 10 mph.', u'Clear skies. Low 62F. Winds NE at 10 to 15 mph.', u'Clear. Low 61F. Winds NNE at 5 to 10 mph.', u'Clear. Low 59F. Winds light and variable.', u'Clear skies. Low 58F. Winds light and variable.', u'Clear skies. Low 56F. Winds light and variable.'], u'thunderEnum': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], u'qualifier': [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None], u'windSpeed': [3, 2, 2, 2, 3, 3, 4, 5, 5, 9, 11, 7, 2, 2, 1], u'thunderEnumPhrase': [u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder'], u'windDirDegrees': [193, 207, 144, 152, 171, 174, 197, 332, 351, 13, 36, 33, 51, 132, 116], u'snowRange': [u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u''], u'precipPct': [10, 10, 10, 10, 10, 10, 20, 0, 0, 0, 0, 0, 0, 0, 0], u'precipAmt': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], u'phrase': [u'Clear', u'Clear', u'Clear', u'Partly Cloudy', u'Partly Cloudy', u'Mostly Cloudy', u'Mostly Cloudy', u'Clear', u'Clear', u'Clear', u'Clear', u'Clear', u'Clear', u'Clear', u'Clear'], u'dayPartName': [u'Tonight', u'Tomorrow night', u'Wednesday night', u'Thursday night', u'Friday night', u'Saturday night', u'Sunday night', u'Monday night', u'Tuesday night', u'Wednesday night', u'Thursday night', u'Friday night', u'Saturday night', u'Sunday night', u'Monday night'], u'uvIndex': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], u'icon': [31, 31, 31, 29, 29, 27, 27, 31, 31, 31, 31, 31, 31, 31, 31]}, u'moonIcon': [u'WXG', u'WXG', u'WXG', u'WXG', u'WXG', u'F', u'F', u'WNG', u'WNG', u'WNG', u'WNG', u'WNG', u'LQ', u'WNC', u'WNC'], u'day': {u'iconExtended': [3200, 3200, 3200, 3200, 3000, 9003, 9003, 9003, 3200, 3200, 3200, 3200, 3200, 3200, 3200], u'windDirCompass': [u'SW', u'SSW', u'SSW', u'SSW', u'SSW', u'SW', u'SW', u'W', u'NW', u'N', u'NE', u'NNE', u'SSE', u'S', u'S'], u'humidityPct': [40, 66, 73, 71, 75, 84, 87, 68, 29, 21, 25, 25, 32, 40, 51], u'temperature': [82, 75, 71, 74, 75, 73, 73, 75, 80, 84, 84, 84, 82, 79, 76], u'precipType': [u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain', u'rain'], u'uvDescription': [u'High', u'High', u'High', u'High', u'High', u'High', u'High', u'High', u'High', u'High', u'High', u'Moderate', u'Moderate', u'Moderate', u'Moderate'], u'cloudPct': [3, 6, 4, 7, 41, 46, 60, 45, 5, 5, 6, 9, 8, 4, 8], u'narrative': [u'Sunny. High 82F. Winds SW at 5 to 10 mph.', u'Sunny skies. High near 75F. Winds light and variable.', u'Sunny skies. High 71F. Winds light and variable.', u'Sunny. High 74F. Winds light and variable.', u'Sunshine and clouds mixed. High around 75F. Winds SSW at 5 to 10 mph.', u'Cloudy early, becoming mostly sunny in the afternoon. High 73F. Winds light and variable.', u'Cloudy skies early, then partly cloudy in the afternoon. High 73F. Winds SW at 5 to 10 mph.', u'Cloudy early, becoming mostly sunny in the afternoon. 
High around 75F. Winds W at 5 to 10 mph.', u'Mainly sunny. High around 80F. Winds NW at 5 to 10 mph.', u'Mainly sunny. High 84F. Winds light and variable.', u'Sunny skies. High 84F. Winds NE at 10 to 20 mph.', u'Sunny skies. High 84F. Winds NNE at 5 to 10 mph.', u'A mainly sunny sky. High 82F. Winds SSE at 5 to 10 mph.', u'Mainly sunny. High 79F. Winds light and variable.', u'Sunny skies. High 76F. Winds light and variable.'], u'thunderEnum': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], u'qualifier': [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None], u'windSpeed': [6, 5, 4, 4, 6, 5, 6, 6, 9, 5, 15, 9, 6, 4, 4], u'thunderEnumPhrase': [u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder', u'No thunder'], u'windDirDegrees': [234, 213, 205, 203, 204, 217, 220, 281, 321, 8, 35, 25, 153, 180, 189], u'snowRange': [u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u'', u''], u'precipPct': [0, 10, 10, 10, 10, 10, 10, 10, 0, 0, 0, 0, 0, 0, 0], u'precipAmt': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], u'phrase': [u'Sunny', u'Sunny', u'Sunny', u'Sunny', u'Partly Cloudy', u'AM Clouds/PM Sun', u'AM Clouds/PM Sun', u'AM Clouds/PM Sun', u'Sunny', u'Sunny', u'Sunny', u'Sunny', u'Sunny', u'Sunny', u'Sunny'], u'dayPartName': [u'Today', u'Tomorrow', u'Wednesday', u'Thursday', u'Friday', u'Saturday', u'Sunday', u'Monday', u'Tuesday', u'Wednesday', u'Thursday', u'Friday', u'Saturday', u'Sunday', u'Monday'], u'uvIndex': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5], u'icon': [32, 32, 32, 32, 30, 30, 30, 30, 32, 32, 32, 32, 32, 32, 32]}, u'sunrise': [u'2016-10-10T06:54:58-0700', u'2016-10-11T06:55:43-0700', u'2016-10-12T06:56:30-0700', u'2016-10-13T06:57:16-0700', u'2016-10-14T06:58:03-0700', u'2016-10-15T06:58:50-0700', u'2016-10-16T06:59:37-0700', u'2016-10-17T07:00:25-0700', u'2016-10-18T07:01:13-0700', u'2016-10-19T07:02:01-0700', u'2016-10-20T07:02:50-0700', u'2016-10-21T07:03:40-0700', u'2016-10-22T07:04:29-0700', u'2016-10-23T07:05:20-0700', u'2016-10-24T07:06:10-0700']}, u'vt1precipitation': {u'severity': [1], u'characteristic': [0], u'forecastedRainAmount': [0.0], u'intensity': [0], u'imminence': [0], u'startTime': [u'2016-10-10T11:00:00-0700'], u'eventType': [0], u'endTime': [u'2016-10-10T18:00:00-0700'], u'forecastedSnowAmount': [0.0]}, u'vt1observation': {u'icon': 32, u'phrase': u'Sunny', u'barometerCode': 1, u'precip24Hour': 0.0, u'temperatureMaxSince7am': 79, u'dewPoint': 49, u'barometerTrend': u'Rising', u'snowDepth': 0.0, u'windDirCompass': u'WSW', u'visibility': 10.0, u'feelsLike': 79, u'altimeter': 30.0, u'temperature': 79, u'uvDescription': u'Moderate', u'gust': None, u'humidity': 35, u'windSpeed': 4, u'windDirDegrees': 250, u'obsQualifierCode': None, u'observationTime': u'2016-10-10T10:25:00-0700', u'obsQualifierSeverity': None, u'uvIndex': 4, u'barometerChange': 0.03}, u'vt1pollenforecast': {u'reportDate': [u'2016-10-10T07:00:00.000-07:00', u'2016-10-11T07:00:00.000-07:00', u'2016-10-12T07:00:00.000-07:00', u'2016-10-13T07:00:00.000-07:00'], u'grass': [0, 0, 0, 0], u'tree': [0, 0, 0, 0], u'weed': [0, 0, 0, 0]}, u'id': u'34.05,-118.29'}
</code></pre>
<p>You can see all the parameters passed including the <code>geocode=34.05,-118.29</code> so you would need to get the coordinates for each location, that is available in the source in one of the scripts as json under <code>"lat":34.05,"long":-118.29</code>:</p>
<pre><code> window.explicit_location = "90006:4:US";
window.explicit_location_obj = {"zipCd":"90006","cntryCd":"US","procTm":"20160905130833","locId":"90006","cityNm":"LOS ANGELES","stCd":"CA","prsntNm":"Los Angeles, CA (90006)","coopId":"72295023","lat":34.05,"long":-118.29,"obsStn":"KCQT","secObsStn":"KHHR","tertObsStn":"KSMO","gmtDiff":-8.0,"regSat":"sw","cntyId":"CAC037","cntyNm":"LOS ANGELES","zoneId":"CAZ041","zoneNm":"Los Angeles County Coast including Downtown Los Angeles","cntyFips":"06037","active":1,"dySTInd":"Y","dmaCd":803,"elev":197,"cliStn":"045115","tmZnNm":"Pacific Daylight Time","tmZnAbbr":"PDT","dySTAct":"Y","clsRad":"LAX","ultRad":"LAX","ssRad":"sw","lsRad":"we","siteId":"US","idxId":"KCQT","primTecci":"T72295023","arptId":"LAX","mrnZoneId":"PZZ655","pllnId":"SAN","skiId":"267","tideId":"W9410777","epaId":"ca131","_arptNear":["BUR","LAX","LGB"],"_arptNearDist":[{"key":"BUR:9:US","dist":9},{"key":"LAX:9:US","dist":9},{"key":"LGB:9:US","dist":15}],"_skiNear":[{"key":"261:11:US","tLifts":14,"dist":37},{"key":"267:11:US","tLifts":4,"dist":41},{"key":"402:11:US","tLifts":12,"dist":75},{"key":"43:11:US","tLifts":12,"dist":83},{"key":"400:11:US","tLifts":14,"dist":84},{"key":"361:11:US","tLifts":11,"dist":106},{"key":"227:11:US","tLifts":28,"dist":111},{"key":"194:11:US","tLifts":7,"dist":117},{"key":"29:11:US","tLifts":5,"dist":133},{"key":"134:11:US","tLifts":12,"dist":154}],"_gprId":"NAM","_dstDates":{"startDate":"2016-03-13T10:00:00.000Z","endDate":"2016-11-06T09:00:00.000Z"},"wmId":"SMO","PollenIds":{"tree":"KSNA","grass":"KSNA","ragweed":"KSNA"},"isBoatBeach":true,"locType":4,"stNm":"California","_country":"United States Of America"};
</code></pre>
<p>But you would be using the site's API key, which I would not recommend. There are various APIs that offer free plans you can use to get the weather details, like <a href="https://openweathermap.org/appid#get" rel="nofollow">https://openweathermap.org/appid#get</a> or <a href="https://www.wunderground.com/weather/api/d/docs" rel="nofollow">https://www.wunderground.com/weather/api/d/docs</a>.</p>
| 1 | 2016-10-10T17:49:51Z | [
"python",
"beautifulsoup"
] |
 Python: How to simulate a click using BeautifulSoup | 39,963,972 | <p><strong>I don't want to use selenium since I don't want to open any browsers.</strong></p>
<p>The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.</p>
<h2>Example (not what the button actually does):</h2>
<p>I enter a name such as "John", press the button and it changes "John" to "nhoJ".
So far I have managed to change the value of the input to "John", but I have no clue how I could simulate a button click so I can get the output.</p>
<p>Thanks.</p>
| -2 | 2016-10-10T17:50:08Z | 39,964,037 | <p>You can't do what you want. Beautiful soup is a text processor which has no way to run JavaScript. </p>
| 2 | 2016-10-10T17:54:17Z | [
"python"
] |
 Python: How to simulate a click using BeautifulSoup | 39,963,972 | <p><strong>I don't want to use selenium since I don't want to open any browsers.</strong></p>
<p>The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.</p>
<h2>Example (not what the button actually does):</h2>
<p>I enter a name such as "John", press the button and it changes "John" to "nhoJ".
So far I have managed to change the value of the input to "John", but I have no clue how I could simulate a button click so I can get the output.</p>
<p>Thanks.</p>
 | -2 | 2016-10-10T17:50:08Z | 39,964,061 | <p>BeautifulSoup is an HTML parser, so you can't do such a thing with it. But if that button calls an API, you could make a <code>request</code> to that API, and that would effectively simulate clicking the button.</p>
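<p>A minimal sketch of that idea — the endpoint and field name below are placeholders; find the real ones in the browser's network tab when the button is clicked:</p>
<pre><code>import requests

resp = requests.post("https://example.com/api/reverse", data={"name": "John"})
print(resp.text)  # e.g. "nhoJ"
</code></pre>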
| 0 | 2016-10-10T17:55:47Z | [
"python"
] |
Python NameError, global name not defined | 39,964,028 | <p>I am following <a href="https://code.tutsplus.com/tutorials/creating-a-web-app-from-scratch-using-python-flask-and-mysql--cms-22972" rel="nofollow">this</a> tutorial to create a Python 2.7 and Flask web app with a MySQL database.</p>
<p>Why am I getting this error on <code>_password</code>?</p>
<p><code>app.py</code></p>
<pre><code>from flask import Flask, render_template, json, request
from flask.ext.mysql import MySQL
from werkzeug import generate_password_hash, check_password_hash
# create the app
app = Flask(__name__)
@app.route("/")
def main():
return render_template("index.html")
@app.route("/showSignUp")
def showSignUp():
return render_template("signup.html")
@app.route("/signUp", methods=["POST"])
def signUp():
# read the values input by the user
global _name, _email, _password
_name = request.form["inputName"]
_email = request.form["inputEmail"]
_password = request.form["inputPassword"]
# validate inputs
if _name and _email and _password:
return json.dumps({"html":"<span>fields are good</span>"})
else:
return json.dumps({"html":"<span>enter required fields</span>"})
# MySQL configurations
app.config["MYSQL_DATABASE_USER"] = "root"
app.config["MYSQL_DATABASE_PASSWORD"] = ""
app.config["MYSQL_DATABASE_DB"] = "NotesList"
app.config["MYSQL_DATABASE_HOST"] = "localhost"
MySQL().init_app(app)
def connectDB():
conn = mysql.connect()
cursor = conn.cursor()
_hashed_password = generate_password_hash(_password)
cursor.callproc("sp_createUser", (_name, _email, _hashed_password))
def check():
data = cursor.fetchall()
if len(data) is 0:
conn.commit()
return json.dumps({"message":"user created"})
else:
return json.dumps({"error":str(data[0])})
app.debug = True
if __name__ == "__main__":
app.run()
</code></pre>
| -1 | 2016-10-10T17:53:56Z | 39,964,464 | <p>The reason is that the app can't be instantiated because you're calling upon names that haven't been created yet. It's almost NEVER a good idea to use <code>global</code> to move variables into the global scope. Don't do that.</p>
<p>Even if you decided you don't care about best-practices, the variables won't get into the global scope until the function is actually called, which never happens before those variables are being used in your code.</p>
<p>You should make your db connection before the request. You'll probably want to utilize the <a href="http://flask.pocoo.org/docs/0.11/api/#flask.Flask.before_request" rel="nofollow">before_request</a> decorator to do this, storing the connection in <code>g</code> -- see the <a href="http://flask.pocoo.org/docs/0.11/tutorial/dbcon/" rel="nofollow">database connection</a> portion of the Flaskr tutorial for more info.</p>
<p>Then put the parts of the code that <strong>work with the database INSIDE of the route function</strong>. The way your code is written right now, you're executing those lines EVERY time your app gets instantiated, not just when someone uses the view <code>/signup</code>.</p>
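<p>A rough sketch of that shape — it assumes the <code>MySQL()</code> instance is kept in a module-level <code>mysql</code> variable, so adapt it to your setup:</p>
<pre><code>from flask import g

@app.before_request
def connect_db():
    g.db = mysql.connect()   # assumes mysql = MySQL() at module level

@app.route("/signUp", methods=["POST"])
def signUp():
    name = request.form["inputName"]
    email = request.form["inputEmail"]
    password = request.form["inputPassword"]
    if not (name and email and password):
        return json.dumps({"html": "<span>enter required fields</span>"})

    cursor = g.db.cursor()
    cursor.callproc("sp_createUser", (name, email, generate_password_hash(password)))
    data = cursor.fetchall()
    if len(data) == 0:
        g.db.commit()
        return json.dumps({"message": "user created"})
    return json.dumps({"error": str(data[0])})
</code></pre>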
| 1 | 2016-10-10T18:23:14Z | [
"python",
"mysql",
"flask"
] |
Appending links to new rows in pandas df after using beautifulsoup | 39,964,034 | <p>I'm attempting to extract some links from a chunk of beautiful soup html and append them to rows of a new pandas dataframe. </p>
<p>So far, I have this code:</p>
<pre><code>url = "http://www.reed.co.uk/jobs
datecreatedoffset=Today&isnewjobssearch=True&pagesize=100"
r = ur.urlopen(url).read()
soup = BShtml(r, "html.parser")
adcount = soup.find_all("div", class_="pages")
print(adcount)
</code></pre>
<p>From my output I then want to take every link, identified by href="" and store each one in a new row of a pandas dataframe.</p>
<p>Using the above snippet I would end up with 6 rows in my new dataset.</p>
<p>Any help would be appreciated!</p>
 | 0 | 2016-10-10T17:54:12Z | 39,966,523 | <p>Your link gives a 404, but the logic should be the same as below. You just need to extract the anchor tags with the <em>page</em> class and join them to the base url:</p>
<pre><code>import pandas as pd
from urlparse import urljoin
from bs4 import BeautifulSoup
import requests
base = "http://www.reed.co.uk/jobs"
url = "http://www.reed.co.uk/jobs?keywords=&location=&jobtitleonly=false"
r = requests.get(url).content
soup = BeautifulSoup(r, "html.parser")
df = pd.DataFrame(columns=["links"], data=[urljoin(base, a["href"]) for a in soup.select("div.pages a.page")])
print(df)
</code></pre>
<p>Which gives you:</p>
<pre><code> links
0 http://www.reed.co.uk/jobs?cached=True&pageno=2
1 http://www.reed.co.uk/jobs?cached=True&pageno=3
2 http://www.reed.co.uk/jobs?cached=True&pageno=4
3 http://www.reed.co.uk/jobs?cached=True&pageno=5
4 http://www.reed.co.uk/jobs?cached=True&pageno=...
5 http://www.reed.co.uk/jobs?cached=True&pageno=2
</code></pre>
| 0 | 2016-10-10T20:49:12Z | [
"python",
"pandas",
"beautifulsoup",
"append"
] |
How to rotate tick labels in floating cylindrical axes? | 39,964,068 | <p><a href="http://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html" rel="nofollow">http://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html</a></p>
<p>Check out the VERY bottom of this link. I'm interested in that axes in the middle, where the axis objects are curved into the shape of a quarter-washer. If you check the sourcecode, this axes object is made by setup_axes2:</p>
<pre><code>def setup_axes2(fig, rect):
"""
With custom locator and formatter.
Note that the extreme values are swapped.
"""
tr = PolarAxes.PolarTransform()
pi = np.pi
angle_ticks = [(0, r"$0$"),
(.25*pi, r"$\frac{1}{4}\pi$"),
(.5*pi, r"$\frac{1}{2}\pi$")]
grid_locator1 = FixedLocator([v for v, s in angle_ticks])
tick_formatter1 = DictFormatter(dict(angle_ticks))
grid_locator2 = MaxNLocator(2)
grid_helper = floating_axes.GridHelperCurveLinear(
tr, extremes=(.5*pi, 0, 2, 1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder = 0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
</code></pre>
<p>When I label the ticks in the theta axis, the labels are always upside down. I don't know how to flip them. I also don't know how to flip the axis labels upside down. Does anyone know about these confusing floating axes?</p>
| 1 | 2016-10-10T17:56:13Z | 39,968,624 | <p>The hint was in <code>setup_axes3()</code> from the example you linked. The individual axes in the <code>FloatingSubplot</code> are referred to like <code>ax.axis[side]</code> where <code>side</code> is one of <code>["top","bottom","left","right"]</code>. From there you get the usual.</p>
<pre><code>ax = ax2.axis["bottom"]
ax.major_ticklabels.set_rotation(180)
ax.set_label("foo")
ax.label.set_rotation(180)
ax.LABELPAD += 10
</code></pre>
<p>Just do <code>dir(ax)</code> to see what you have access to.</p>
<p><a href="http://i.stack.imgur.com/S7dvL.png" rel="nofollow"><img src="http://i.stack.imgur.com/S7dvL.png" alt="enter image description here"></a></p>
| 1 | 2016-10-11T00:13:32Z | [
"python",
"matplotlib",
"plot"
] |
 No metadata download | 39,964,096 | <p>I'm using libtorrent 1.0.9 and custom bindings (reproducible with python). Sometimes I cannot download magnets because they're stuck without metadata (while there are >200 DHT nodes available). I'm able to reproduce the issue with this magnet:</p>
<pre><code>magnet:?xt=urn:btih:565DB305A27FFB321FCC7B064AFD7BD73AEDDA2B&dn=bbb_sunflower_1080p_60fps_normal.mp4&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.publicbt.com%3a80%2fannounce&ws=http%3a%2f%2fdistribution.bbb3d.renderfarming.net%2fvideo%2fmp4%2fbbb_sunflower_1080p_60fps_normal.mp4
</code></pre>
<p>Meanwhile in other torrent clients (qBittorrent, Vuze) it gets the metadata very quickly. It's reproducible with following code:</p>
<pre><code>import libtorrent as lt
import time
session = lt.session()
session.listen_on(6881, 6891)
session.add_extension('ut_metadata')
session.add_extension('ut_pex')
session.add_extension('metadata_transfer')
session.add_dht_router("router.utorrent.com", 6881)
session.add_dht_router("router.bittorrent.com", 6881)
session.add_dht_router("dht.transmissionbt.com", 6881)
session.add_dht_router("dht.aelitis.com", 6881)
session.start_dht()
session.start_lsd()
session.start_upnp()
session.start_natpmp()
params = { 'save_path': '/tmp/'}
link ="magnet:?xt=urn:btih:565DB305A27FFB321FCC7B064AFD7BD73AEDDA2B&dn=bbb_sunflower_1080p_60fps_normal.mp4&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.publicbt.com%3a80%2fannounce&ws=http%3a%2f%2fdistribution.bbb3d.renderfarming.net%2fvideo%2fmp4%2fbbb_sunflower_1080p_60fps_normal.mp4"
handle = lt.add_magnet_uri(session, link, params)
print('downloading metadata...')
while (not handle.has_metadata()):
status=session.status()
print('dht nodes: ', status.dht_nodes)
time.sleep(1)
print ('got metadata, starting torrent download...')
while (handle.status().state != lt.torrent_status.seeding):
print('%d %% done' % (handle.status().progress*100))
time.sleep(1)
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-10-10T17:58:42Z | 39,990,864 | <p>This is most likely caused by a problem in the 1.0.x series, where some of the first responses from the DHT will make the node change its node ID (to match its external IP address, see <a href="http://libtorrent.org/dht_sec.html" rel="nofollow">this post</a>).</p>
<p>It does this by restarting the DHT node. Any in-flight torrent announces at that time will be lost. Waiting 15 minutes for the next announce should make the announce go through. Another option is to wait for the <a href="http://libtorrent.org/reference-Alerts.html#dht_bootstrap_alert" rel="nofollow">dht_bootstrap_alert</a> before adding the first torrent to the session.</p>
<p>This issue was fixed in the 1.1.x releases.</p>
| 1 | 2016-10-12T05:05:11Z | [
"python",
"bittorrent",
"dht",
"libtorrent",
"libtorrent-rasterbar"
] |
 How to cast an int from a string that contains not only numbers | 39,964,234 | <p>Casting a string is easy:</p>
<pre><code>string1 = "12"
int1 = int(string1)
</code></pre>
<p>But what if I want to extract the int from</p>
<pre><code>string1 = "12 pieces"
</code></pre>
<p>The cast should return 12. Is there a pythonic way to do that and ignore any chars that are not numbers?</p>
| 0 | 2016-10-10T18:08:25Z | 39,964,280 | <p>How about this?</p>
<pre><code>>>> string1 = "12 pieces"
>>> y = int(''.join([x for x in string1 if x in '1234567890']))
>>> print(y)
12
</code></pre>
<p>or better yet:</p>
<pre><code>>>> string1 = "12 pieces"
>>> y = int(''.join([x for x in string1 if x.isdigit() ]))
>>> print(y)
12
</code></pre>
| 1 | 2016-10-10T18:11:59Z | [
"python"
] |
 How to cast an int from a string that contains not only numbers | 39,964,234 | <p>Casting a string is easy:</p>
<pre><code>string1 = "12"
int1 = int(string1)
</code></pre>
<p>But what if I want to extract the int from</p>
<pre><code>string1 = "12 pieces"
</code></pre>
<p>The cast should return 12. Is there a pythonic way to do that and ignore any chars that are not numbers?</p>
| 0 | 2016-10-10T18:08:25Z | 39,964,296 | <blockquote>
<p>Is there a way to do that and ignore any chars that are not numbers?</p>
</blockquote>
<p>Ignoring anything but a digit is, probably, not a good idea. What if a string contains <code>"12 pieces 42"</code>, should it return <code>12</code>, <code>1242</code>, <code>[12, 42]</code>, or raise an exception?</p>
<p>You may, instead, split the string and convert the first element to an <code>int</code>:</p>
<pre><code>In [1]: int('12 pieces'.split()[0])
Out[1]: 12
</code></pre>
| 1 | 2016-10-10T18:12:42Z | [
"python"
] |
 How to cast an int from a string that contains not only numbers | 39,964,234 | <p>Casting a string is easy:</p>
<pre><code>string1 = "12"
int1 = int(string1)
</code></pre>
<p>But what if I want to extract the int from</p>
<pre><code>string1 = "12 pieces"
</code></pre>
<p>The cast should return 12. Is there a pythonic way to do that and ignore any chars that are not numbers?</p>
| 0 | 2016-10-10T18:08:25Z | 39,964,417 | <p>Assuming that the first part of the string is a number, one way to do this is to use a regex match:</p>
<pre><code>import re
def strint(str):
m=re.match("^[0-9]+", str)
if m:
return int(m.group(0))
else:
raise Exception("String does not start with a number")
</code></pre>
<p>The advantage of this approach is that even strings with just numbers can be cast easily, so you can use this function as a drop-in replacement for <code>int()</code>.</p>
| 1 | 2016-10-10T18:20:26Z | [
"python"
] |
 Python Updating Global variables | 39,964,254 | <p>Could anyone tell me what I am doing wrong in my code? Why can't I update my global variable? To my understanding, if it is a global variable I can modify it anywhere.</p>
<p>If numpy is creating a new array (when I use np.delete), what would be the best way to delete an element in a numpy array?</p>
<pre><code>import numpy as np
global a
a = np.array(['a','b','c','D'])
def hello():
a = np.delete(a, 1)
print a
hello()
</code></pre>
| 2 | 2016-10-10T18:10:13Z | 39,964,274 | <p>If you want to use a global variable in a function, you have to say it's global IN THAT FUNCTION:</p>
<pre><code>import numpy as np
a = np.array(['a','b','c','D'])
def hello():
global a
a = np.delete(a, 1)
print a
hello()
</code></pre>
<p>If you didn't use the line <code>global a</code> in your function, a new, local variable <code>a</code> would be created. So the keyword <code>global</code> isn't used to create a global variable, but to avoid creating a local one that 'hides' an already existing global variable.</p>
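<p>Since <code>np.delete</code> returns a new array anyway, an alternative that avoids <code>global</code> entirely is to pass the array in and rebind the result — a small sketch:</p>
<pre><code>import numpy as np

a = np.array(['a', 'b', 'c', 'D'])

def hello(arr):
    return np.delete(arr, 1)   # np.delete always returns a new array

a = hello(a)
print a                        # ['a' 'c' 'D']
</code></pre>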
| 5 | 2016-10-10T18:11:30Z | [
"python",
"numpy"
] |
Clean data in pandas | 39,964,282 | <p>I have the following dataframe:</p>
<pre><code> Datum Unternehmen Event
0 9 Termine vom 01.01.2016 bis zum 31.12.2017 9 Termine vom 01.01.2016 bis zum 31.12.2017 NaN
1 9 Termine vom 01.01.2016 bis zum 31.12.2017 NaN NaN
2 Datum Unternehmen Event
3 12.05.2017 ADIDAS AG Dividenden
4 09.11.2017 ADIDAS AG Ergebnisberichte
5 03.08.2017 ADIDAS AG Ergebnisberichte
6 11.05.2017 ADIDAS AG Hauptversammlung
7 04.05.2016 ADIDAS AG Ergebnisberichte
8 03.03.2016 ADIDAS AG Ergebnisberichte
9 04.08.2016 ADIDAS AG Ergebnisberichte
10 03.11.2016 ADIDAS AG Ergebnisberichte
11 12.05.2016 ADIDAS AG Hauptversammlung
</code></pre>
<p>And I want to keep the rows(!) with an apparent date only. </p>
<p>At the moment, I am iterating with <code>df.iterrows()</code> and check the value with a regular expression (<code>r'^[\d.]+$'</code>) but I wonder if there's a more "pythonic way" as <code>iterrows()</code> is very slow when applied to a couple of hundred dataframes.</p>
 | 0 | 2016-10-10T18:12:03Z | 39,964,448 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> with the parameter <code>errors='coerce'</code> and check which values are not <code>NaN</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>print (pd.to_datetime(df.Datum, errors='coerce'))
0 NaT
1 NaT
2 NaT
3 2017-12-05
4 2017-09-11
5 2017-03-08
6 2017-11-05
7 2016-04-05
8 2016-03-03
9 2016-04-08
10 2016-03-11
11 2016-12-05
Name: Datum, dtype: datetime64[ns]
print (pd.to_datetime(df.Datum, errors='coerce').notnull())
0 False
1 False
2 False
3 True
4 True
5 True
6 True
7 True
8 True
9 True
10 True
11 True
Name: Datum, dtype: bool
</code></pre>
<pre><code>print (df[pd.to_datetime(df.Datum, errors='coerce').notnull()])
Datum Unternehmen Event
3 12.05.2017 ADIDAS AG Dividenden
4 09.11.2017 ADIDAS AG Ergebnisberichte
5 03.08.2017 ADIDAS AG Ergebnisberichte
6 11.05.2017 ADIDAS AG Hauptversammlung
7 04.05.2016 ADIDAS AG Ergebnisberichte
8 03.03.2016 ADIDAS AG Ergebnisberichte
9 04.08.2016 ADIDAS AG Ergebnisberichte
10 03.11.2016 ADIDAS AG Ergebnisberichte
11 12.05.2016 ADIDAS AG Hauptversammlung
</code></pre>
<p>If you need to convert the column <code>Datum</code> to <code>datetime</code>:</p>
<pre><code>df.Datum = pd.to_datetime(df.Datum, errors='coerce')
print (df.Datum.notnull())
0 False
1 False
2 False
3 True
4 True
5 True
6 True
7 True
8 True
9 True
10 True
11 True
Name: Datum, dtype: bool
print (df[df.Datum.notnull()])
Datum Unternehmen Event
3 2017-12-05 ADIDAS AG Dividenden
4 2017-09-11 ADIDAS AG Ergebnisberichte
5 2017-03-08 ADIDAS AG Ergebnisberichte
6 2017-11-05 ADIDAS AG Hauptversammlung
7 2016-04-05 ADIDAS AG Ergebnisberichte
8 2016-03-03 ADIDAS AG Ergebnisberichte
9 2016-04-08 ADIDAS AG Ergebnisberichte
10 2016-03-11 ADIDAS AG Ergebnisberichte
11 2016-12-05  ADIDAS AG  Hauptversammlung
</code></pre>
| 1 | 2016-10-10T18:22:32Z | [
"python",
"pandas"
] |
Clean data in pandas | 39,964,282 | <p>I have the following dataframe:</p>
<pre><code> Datum Unternehmen Event
0 9 Termine vom 01.01.2016 bis zum 31.12.2017 9 Termine vom 01.01.2016 bis zum 31.12.2017 NaN
1 9 Termine vom 01.01.2016 bis zum 31.12.2017 NaN NaN
2 Datum Unternehmen Event
3 12.05.2017 ADIDAS AG Dividenden
4 09.11.2017 ADIDAS AG Ergebnisberichte
5 03.08.2017 ADIDAS AG Ergebnisberichte
6 11.05.2017 ADIDAS AG Hauptversammlung
7 04.05.2016 ADIDAS AG Ergebnisberichte
8 03.03.2016 ADIDAS AG Ergebnisberichte
9 04.08.2016 ADIDAS AG Ergebnisberichte
10 03.11.2016 ADIDAS AG Ergebnisberichte
11 12.05.2016 ADIDAS AG Hauptversammlung
</code></pre>
<p>And I want to keep the rows(!) with an apparent date only. </p>
<p>At the moment, I am iterating with <code>df.iterrows()</code> and check the value with a regular expression (<code>r'^[\d.]+$'</code>) but I wonder if there's a more "pythonic way" as <code>iterrows()</code> is very slow when applied to a couple of hundred dataframes.</p>
| 0 | 2016-10-10T18:12:03Z | 39,965,120 | <p>I was looking for:</p>
<pre><code>df[df['Datum'].str.contains("^[\d.]+$")]
</code></pre>
<p>Which selects the row according to the expression in the <code>.contains()</code> function.</p>
| 0 | 2016-10-10T19:07:28Z | [
"python",
"pandas"
] |
Instance variable name a reserved word in Python | 39,964,360 | <p>I checked the Python style guide, and I found no specific references to having instance variable names with reserved words e.g. <code>self.type</code>, <code>self.class</code>, etc.</p>
<p>What's the best practice for this?</p>
| 0 | 2016-10-10T18:16:56Z | 39,964,406 | <p>Avoid it if possible.</p>
<p>You can get and set such attributes via <code>getattr</code> and <code>setattr</code>, but they can't be accessed with ordinary dot syntax (something like <code>obj.class</code> is a syntax error), so they're a pain to use.</p>
<p>As Aurora0001 mentioned in a comment, a convention if you "need" to use them is to append an underscore. The most common reason to "need" such attributes is that they're generated programmatically from an external data source.</p>
<p>(Note that <code>type</code> is not a keyword, so you can do <code>self.type</code> just fine.)</p>
| 1 | 2016-10-10T18:19:58Z | [
"python"
] |
NameError: global name 'isdigit' is not defined. Cannot Understand what this python error means, this is my work using Kivy and python | 39,964,384 | <pre><code>class ScatterText(ScrollView):
def val_change(self):
label = ['bar','1','2','3','4','5','6','7','8','9','10','11','12','13']
label_b = self.ids['bar']
label[1] = self.ids['1']
label[2] = self.ids['2']
label[3] = self.ids['3']
label[4] = self.ids['4']
label[5] = self.ids['5']
label[6] = self.ids['6']
label[7] = self.ids['7']
label[8] = self.ids['8']
label[9] = self.ids['9']
label[10] = self.ids['10']
label[11] = self.ids['11']
label[12] = self.ids['12']
label[13] = self.ids['13']
set_val = 0
if not(label[1].text == ''):
set_val+=100
if not(label[2].text == ''):
set_val+=100
if not(label[3].text == ''):
set_val+=100
if not(label[4].text == ''):
set_val+=100
if not(label[5].text == ''):
set_val+=100
if not(label[6].text == ''):
set_val+=100
if not(label[7].text == ''):
set_val+=100
if not(label[8].text == ''):
set_val+=100
if not(label[9].text == ''):
set_val+=100
if not(label[10].text == ''):
set_val+=100
if not(label[11].text == ''):
set_val+=100
if not(label[12].text == ''):
set_val+=100
if not(label[13].text == ''):
set_val+=100
label_b.value = set_val
def pass_valid(self):
pass_w = self.ids['2']
uCase = 0
lCase = 0
num = 0
splChar = 0
lent = 0
VAL = pass_w.text
_suggest = ""
_valid = self.ids['MSG']
if len(VAL)<8 or len(VAL)>16:
lent = 0
else:
lent = 1
for i in range (0,len(VAL)):
if isupper(VAL[i]):
uCase = 1
if islower(VAL[i]):
lCase = 1
if isdigit(VAL[i]):
num = 1
if (not(islower(VAL[i])) and not(isdigit(VAL[i]))):
splChar = 1
if not(uCase):
_suggest = _suggest + "\n Password Must Have One Upper Case Char."
else :
_suggest = _suggest + " "
if not(lCase):
_suggest = _suggest + "\n Password Must Have One Lower Case Char."
if not(num):
_suggest = _suggest + "\n Password Must Have One Digit."
if not(splChar):
_suggest = _suggest + "\n Password Must Have One Special Char."
if not(lent):
_suggest = _suggest + "\n Password Must Be Between 8 and 16 characters long\n"
_valid.text = _suggest
def pass_work(self):
self.val_change()
self.pass_valid()
class MyApp(App):
def build(self):
return (ScatterText())
#textinput = TextInput()
#input = TextInput(focus=True)
if __name__ == "__main__":
MyApp().run()
</code></pre>
| -8 | 2016-10-10T18:18:43Z | 39,964,521 | <p><code>isdigit()</code> is a method on string objects, not a standalone method.</p>
<p>In other words, you call it like this:</p>
<pre><code>s = '1'
if s.isdigit():
print 's is a digit!'
</code></pre>
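<p>Applied to the loop in the question, that means calling the methods on each character — a small sketch (using <code>isalnum()</code> for the special-character check, which is a slight adjustment of the original condition):</p>
<pre><code>for ch in VAL:
    if ch.isupper():
        uCase = 1
    if ch.islower():
        lCase = 1
    if ch.isdigit():
        num = 1
    if not ch.isalnum():
        splChar = 1
</code></pre>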
| 0 | 2016-10-10T18:27:54Z | [
"python"
] |
How to generate random non-recurring numbers in a loop and within a range in python? | 39,964,444 | <blockquote>
<p>Hi, I'm still a beginner and a bit lost. I'm working on a project for school that requires me to write different small programs that will 'guess' the given password. This is a bruteforce program, and I need it to guess every possible combination of 4 number passwords like those on the old iPhones. My problem is that when I use random.sample it generates the same random numbers multiple times. What function can I use, or what should I change so that the random numbers within the given range don't repeat themselves? I tried doing rand.int but it gave me "TypeError: 'int' object is not iterable"</p>
<p>Additional questions:
- How do I get my loop to stop once n == Password4 ? It simply continues, even after the correct password is found.
- Is there a way I can count the number of fails(n != Password4) before my success (n == Password4)?</p>
</blockquote>
<p>This is my code:</p>
<pre><code> import random
Password4 = 1234
def crack_password():
while True:
for n in (random.sample(range(1112, 10000), 1)):
while n == Password4:
print(n, "is the password")
break
if n != Password4:
print('fail')
break
crack_password()
</code></pre>
<p>Update: Using a code now that does not generate random non-recurring numbers but works for the purposes I intended. Please still feel free to answer the original questions, and thank you all so much for your kindness and prompt responses.</p>
<p>New Code (credit goes to @roganjosh):</p>
<pre><code> import datetime as dt
Password4 = 9999
def crack_password():
start = dt.datetime.now()
for n in range(10000):
password_guess = '{0:04d}'.format(n)
if password_guess == str(Password4):
end = dt.datetime.now()
print("Password found: {} in {}".format(password_guess, end - start))
break
guesses = crack_password()
</code></pre>
 | 0 | 2016-10-10T18:21:52Z | 39,964,533 | <p>You probably want to check out all possible values for a password under certain rules, e.g. "4 digits" or "8 lowercase characters". Consider these answers as starting points; a small sketch follows the list:</p>
<ul>
<li><a href="http://stackoverflow.com/a/7074066/37020">Generating 3-letter strings with itertools.product</a> for using an alphabet and generating n-length strings</li>
<li><a href="https://docs.python.org/2/library/itertools.html#itertools.permutations" rel="nofollow">itertools.permutations</a> if you know a set of characters that don't repeat</li>
<li>If you just want to format integers with zero padding on the left (0000, 0001, ... 9999) see how to <a href="http://stackoverflow.com/a/339013/37020">accomplish that with format strings</a>.</li>
</ul>
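<p>For instance, the <code>itertools.product</code> approach from the first link looks roughly like this — a small sketch that also covers the "random but non-repeating" part of the question:</p>
<pre><code>from itertools import product
import random

# all 10,000 four-digit codes, each exactly once
codes = [''.join(p) for p in product('0123456789', repeat=4)]
random.shuffle(codes)              # optional: random order, still no repeats

for attempts, guess in enumerate(codes, start=1):
    if guess == '1234':            # compare as strings so '0042' keeps its leading zeros
        print(guess, 'found after', attempts, 'attempts')
        break
</code></pre>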
| 0 | 2016-10-10T18:28:22Z | [
"python",
"for-loop",
"random",
"while-loop",
"range"
] |
How to generate random non-recurring numbers in a loop and within a range in python? | 39,964,444 | <blockquote>
<p>Hi, I'm still a beginner and a bit lost. I'm working on a project for school that requires me to write different small programs that will 'guess' the given password. This is a bruteforce program, and I need it to guess every possible combination of 4 number passwords like those on the old iPhones. My problem is that when I use random.sample it generates the same random numbers multiple times. What function can I use, or what should I change so that the random numbers within the given range don't repeat themselves? I tried doing rand.int but it gave me "TypeError: 'int' object is not iterable"</p>
<p>Additional questions:
- How do I get my loop to stop once n == Password4 ? It simply continues, even after the correct password is found.
- Is there a way I can count the number of fails(n != Password4) before my success (n == Password4)?</p>
</blockquote>
<p>This is my code:</p>
<pre><code> import random
Password4 = 1234
def crack_password():
while True:
for n in (random.sample(range(1112, 10000), 1)):
while n == Password4:
print(n, "is the password")
break
if n != Password4:
print('fail')
break
crack_password()
</code></pre>
<p>Update: Using a code now that does not generate random non-recurring numbers but works for the purposes I intended. Please still feel free to answer the original questions, and thank you all so much for your kindness and prompt responses.</p>
<p>New Code (credit goes to @roganjosh):</p>
<pre><code> import datetime as dt
Password4 = 9999
def crack_password():
start = dt.datetime.now()
for n in range(10000):
password_guess = '{0:04d}'.format(n)
if password_guess == str(Password4):
end = dt.datetime.now()
print("Password found: {} in {}".format(password_guess, end - start))
break
guesses = crack_password()
</code></pre>
 | 0 | 2016-10-10T18:21:52Z | 39,964,644 | <p>If you really want to try all passwords in random order, this is more easily accomplished by:</p>
<pre><code>import random
digits = [str(i) for i in range(10)]
s = [''.join([a,b,c,d]) for a in digits for b in digits for c in digits for d in digits]
random.shuffle(s)
real_password = '1234'
i = 0
for code in s:
if code == real_password:
print()
print('The password is: ', code)
break
else:
i += 1
print(i, ' failures', end='\r')
</code></pre>
| 1 | 2016-10-10T18:35:40Z | [
"python",
"for-loop",
"random",
"while-loop",
"range"
] |
How to generate random non-recurring numbers in a loop and within a range in python? | 39,964,444 | <blockquote>
<p>Hi, I'm still a beginner and a bit lost. I'm working on a project for school that requires me to write different small programs that will 'guess' the given password. This is a bruteforce program, and I need it to guess every possible combination of 4 number passwords like those on the old iPhones. My problem is that when I use random.sample it generates the same random numbers multiple times. What function can I use, or what should I change so that the random numbers within the given range don't repeat themselves? I tried doing rand.int but it gave me "TypeError: 'int' object is not iterable"</p>
<p>Additional questions:
- How do I get my loop to stop once n == Password4 ? It simply continues, even after the correct password is found.
- Is there a way I can count the number of fails(n != Password4) before my success (n == Password4)?</p>
</blockquote>
<p>This is my code:</p>
<pre><code> import random
Password4 = 1234
def crack_password():
while True:
for n in (random.sample(range(1112, 10000), 1)):
while n == Password4:
print(n, "is the password")
break
if n != Password4:
print('fail')
break
crack_password()
</code></pre>
<p>Update: Using a code now that does not generate random non-recurring numbers but works for the purposes I intended. Please still feel free to answer the original questions, and thank you all so much for your kindness and prompt responses.</p>
<p>New Code (credit goes to @roganjosh):</p>
<pre><code> import datetime as dt
Password4 = 9999
def crack_password():
start = dt.datetime.now()
for n in range(10000):
password_guess = '{0:04d}'.format(n)
if password_guess == str(Password4):
end = dt.datetime.now()
print("Password found: {} in {}".format(password_guess, end - start))
break
guesses = crack_password()
</code></pre>
| 0 | 2016-10-10T18:21:52Z | 39,964,679 | <p>You have two <code>while</code> loops, so even though you attempt to <code>break</code> when you find the password, the outer (first) <code>while</code> loop just starts it off all over again.</p>
<p>If you want unique guesses then you would have to look into permutations. However, since it's reasonable to assume that the password itself would be random, then random guesses of that password would be no more efficient at cracking the password than simply going through the whole list of potential passwords sequentially.</p>
<p>Try something like this:</p>
<pre><code>import datetime as dt
Password4 = 5437
def crack_password():
start = dt.datetime.now()
    for n in range(10000):   # 10000 covers 0000 through 9999
password_guess = '{0:04d}'.format(n)
if password_guess == str(Password4):
end = dt.datetime.now()
print "Password found: {} in {}".format(password_guess, end - start)
break
guesses = crack_password()
</code></pre>
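<p>As for the question's side point about counting fails: in a sequential loop like this, <code>n</code> is exactly the number of guesses that failed before the successful one, so you can simply report or return it. A small self-contained sketch (the helper name is just illustrative):</p>
<pre><code>def crack_password_counting(secret=9999):
    for n in range(10000):
        if '{0:04d}'.format(n) == '{0:04d}'.format(secret):
            return n              # n guesses failed before this successful one

print(crack_password_counting())  # 9999 failures before hitting '9999'
</code></pre>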
| 0 | 2016-10-10T18:37:08Z | [
"python",
"for-loop",
"random",
"while-loop",
"range"
] |
Python convert string to categorical - numpy | 39,964,451 | <p>I'm desperately trying to change my string variables <code>day</code> and <code>car2</code> in the following dataset. </p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 23653 entries, 0 to 23652
Data columns (total 7 columns):
day 23653 non-null object
clustDep 23653 non-null int64
clustArr 23653 non-null int64
car2 23653 non-null object
clustRoute 23653 non-null int64
scheduled_seg 23653 non-null int64
delayed 23653 non-null int64
dtypes: int64(5), object(2)
memory usage: 1.4+ MB
None
</code></pre>
<p>I have tried everything that is on <strong>SO</strong>, as you can see in the code sample below. I'm running <code>Python 2.7, numpy 1.11.1</code>. I tried <code>scikits.tools.categorical</code> but to no avail; it won't even load the namespace. This is my code: </p>
<pre><code>import numpy as np
#from scikits.statsmodels import sm
trainId = np.random.choice(range(df.shape[0]), size=int(df.shape[0]*0.8), replace=False)
train = df[['day', 'clustDep', 'clustArr', 'car2', 'clustRoute', 'scheduled_seg', 'delayed']]
#for col in ['day', 'car2', 'scheduled_seg']:
# train[col] = train.loc[:, col].astype('category')
train['day'] = train['day'].astype('category')
#train['day'] = sm.tools.categorical(train, cols='day', drop=True)
#train['car2C'] = train['car2'].astype('category')
#train['scheduled_segC'] = train['scheduled_seg'].astype('category')
train = df.loc[trainId, train.columns]
testId = np.in1d(df.index.values, trainId, invert=True)
test = df.loc[testId, train.columns]
#from sklearn import tree
#clf = tree.DecisionTreeClassifier()
#clf = clf.fit(train.drop(['delayed'], axis=1), train['delayed'])
</code></pre>
<p>this yields the following error:</p>
<pre><code>/Users/air/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
</code></pre>
<p>Any help would be greatly appreciated.
Thanks a lot!</p>
<p>--- UPDATE ---
sample data:</p>
<pre><code> day clustDep clustArr car2 clustRoute scheduled_seg delayed
0 Saturday 12 15 AA 1 5 1
1 Tuesday 12 15 AA 1 1 1
2 Tuesday 12 15 AA 1 5 1
3 Saturday 12 13 AA 4 3 1
4 Saturday 2 13 AB 6 3 1
5 Wednesday 2 13 IB 6 3 1
6 Monday 2 13 EY 6 3 0
7 Friday 2 13 EY 6 3 1
8 Saturday 11 13 AC 6 5 1
9 Friday 11 13 DL 6 5 1
</code></pre>
| 2 | 2016-10-10T18:22:40Z | 39,964,661 | <p>It works just fine for me (Pandas 0.19.0):</p>
<pre><code>In [155]: train
Out[155]:
day clustDep clustArr car2 clustRoute scheduled_seg delayed
0 Saturday 12 15 AA 1 5 1
1 Tuesday 12 15 AA 1 1 1
2 Tuesday 12 15 AA 1 5 1
3 Saturday 12 13 AA 4 3 1
4 Saturday 2 13 AB 6 3 1
5 Wednesday 2 13 IB 6 3 1
6 Monday 2 13 EY 6 3 0
7 Friday 2 13 EY 6 3 1
8 Saturday 11 13 AC 6 5 1
9 Friday 11 13 DL 6 5 1
In [156]: train.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 7 columns):
day 10 non-null object
clustDep 10 non-null int64
clustArr 10 non-null int64
car2 10 non-null object
clustRoute 10 non-null int64
scheduled_seg 10 non-null int64
delayed 10 non-null int64
dtypes: int64(5), object(2)
memory usage: 640.0+ bytes
In [157]: train.day = train.day.astype('category')
In [158]: train.car2 = train.car2.astype('category')
In [159]: train.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 7 columns):
day 10 non-null category
clustDep 10 non-null int64
clustArr 10 non-null int64
car2 10 non-null category
clustRoute 10 non-null int64
scheduled_seg 10 non-null int64
delayed 10 non-null int64
dtypes: category(2), int64(5)
memory usage: 588.0 bytes
</code></pre>
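<p>As for the <code>SettingWithCopyWarning</code> in the question: it appears because <code>train</code> is a slice of <code>df</code>. Taking an explicit copy of the slice before assigning to its columns should silence it. A minimal sketch on a toy frame (the column values here are made up):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'day': ['Saturday', 'Tuesday'],
                   'car2': ['AA', 'AB'],
                   'delayed': [1, 0]})

train = df[['day', 'car2', 'delayed']].copy()    # explicit copy of the slice
for col in ['day', 'car2']:
    train[col] = train[col].astype('category')   # no SettingWithCopyWarning here
</code></pre>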
| 1 | 2016-10-10T18:36:23Z | [
"python",
"numpy",
"dataframe",
"categorical-data"
] |
Read outlook mail in html format | 39,964,549 | <p>I receive a mail in Microsoft Outlook that contains a html table. I would like to parse this in to a pandas dataframe. </p>
<p>I have already written a script that uses beautiful soup to parse the html text in to the dataframe. But I am struggling with reading the email in html in the first place. </p>
<p>Having found the message I am using the below code to read it in to a text file. But it is writing the text as a /n separated string rather than something like data as I was expecting. Which means that I then can't use beautiful soup to get this in to a dataframe.</p>
<p>I have found lots of examples of how to write and send a html mail but not how to read one in html format. Any ideas?</p>
<pre><code>contents = msg.Body.encode('ascii', 'ignore').decode('ascii')
contents_file = open("U:\body.txt", "w")
contents_file.write(contents)
contents_file.close()
</code></pre>
| 0 | 2016-10-10T18:29:30Z | 39,984,914 | <p>Found the answer myself. I should use msg.HTMLBody rather than msg.Body</p>
| 0 | 2016-10-11T19:10:29Z | [
"python",
"html",
"email",
"outlook"
] |
numpy: efficiently add rows of a matrix | 39,964,555 | <p>I have a matrix. </p>
<pre><code>mat = array([
[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]
])
</code></pre>
<p>I'd like to get the sum of the rows at certain indices: eg.</p>
<pre><code>ixs = np.array([0,2,0,0,0,1,1])
</code></pre>
<p>I know I can compute the answer as:</p>
<pre><code>mat[ixs].sum(axis=0)
> array([16, 23, 30, 37])
</code></pre>
<p>The problem is ixs may be very long, and I don't want to use all the memory to create the intermediate product mat[ixs], only to reduce it again with the sum.</p>
<p>I also know I could simply count up the indices and use multiplication instead. </p>
<pre><code>np.bincount(ixs, minlength=mat.shape[0]).dot(mat)
> array([16, 23, 30, 37])
</code></pre>
<p>But that will be expensive if my ixs are sparse. </p>
<p>I know about scipy's sparse matrices, and I suppose I could use them, but I'd prefer a pure numpy solution as sparse matrices are limited in various ways (such as only being 2-d)</p>
<p>So, is there a pure numpy way to merge the indexing and sum-reduction in this case?</p>
<h1>Conclusions:</h1>
<p>Thank you Divakar and hpaulj for your very thorough responses. By "sparse" I meant that most of the values in <code>range(w.shape[0])</code> are not in ixs. Using that new definition (and with a more realistic data size), I re-ran Divakar's tests, with some new functions added:</p>
<pre><code>rng = np.random.RandomState(1234)
mat = rng.randn(1000, 500)
ixs = rng.choice(rng.randint(mat.shape[0], size=mat.shape[0]/10), size=1000)
# Divakar's solutions
In[42]: %timeit org_indexing_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[43]: %timeit org_bincount_app(mat, ixs)
The slowest run took 4.07 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 177 µs per loop
In[44]: %timeit indexing_modified_app(mat, ixs)
1000 loops, best of 3: 1.81 ms per loop
In[45]: %timeit bincount_modified_app(mat, ixs)
1000 loops, best of 3: 258 µs per loop
In[46]: %timeit simply_indexing_app(mat, ixs)
1000 loops, best of 3: 1.86 ms per loop
In[47]: %timeit take_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[48]: %timeit unq_mask_einsum_app(mat, ixs)
10 loops, best of 3: 58.2 ms per loop
# hpaulj's solutions
In[53]: %timeit hpauljs_sparse_solution(mat, ixs)
The slowest run took 9.34 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 524 µs per loop
%timeit hpauljs_second_sparse_solution(mat, ixs)
100 loops, best of 3: 9.91 ms per loop
# Sparse version of original bincount solution (see below):
In[60]: %timeit sparse_bincount(mat, ixs)
10000 loops, best of 3: 71.7 µs per loop
</code></pre>
<p><strong>The winner</strong> in this case is the sparse version of the bincount solution.</p>
<pre><code>def sparse_bincount(mat, ixs):
x = np.bincount(ixs)
nonzeros, = np.nonzero(x)
    return x[nonzeros].dot(mat[nonzeros])
</code></pre>
| 3 | 2016-10-10T18:30:02Z | 39,964,818 | <p>Since we are assuming that <code>ixs</code> could be <em>sparsey</em>, we could modify the strategy to get the summations of rows from the <code>zero-th</code> row and rest of the rows separately based on the given row indices. So, we could use the <code>bincount</code> method for the <code>non-zero-th</code> indexed rows summation and add it with the <code>(zero-th row x no. of zeros</code> in <code>ixs</code>).</p>
<p>Thus, the second approach could be modified, like so -</p>
<pre><code>nzmask = ixs!=0
nzsum = np.bincount(ixs[nzmask]-1, minlength=mat.shape[0]-1).dot(mat[1:])
row0_sum = mat[0]*(len(ixs) - np.count_nonzero(nzmask))
out = nzsum + row0_sum
</code></pre>
<p>We could extend this strategy to the first approach as well, like so -</p>
<pre><code>out = mat[0]*(len(ixs) - len(nzidx)) + mat[ixs[nzidx]].sum(axis=0)
</code></pre>
<p>If we are working with lots of non-zero indices that are repeated, we could alternatively make use of <code>np.take</code> with focus on performance. Thus, <code>mat[ixs[nzidx]]</code> could be replaced by <code>np.take(mat,ixs[nzidx],axis=0)</code> and similarly <code>mat[ixs]</code> by <code>np.take(mat,ixs,axis=0)</code>. With such repeated indices based indexing <code>np.take</code> brings out some noticeable speedup as compared to simply indexing.</p>
<p>Finally, we could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> to perform these row ID based selection and summing, like so -</p>
<pre><code>nzmask = ixs!=0
unq,tags = np.unique(ixs[nzmask],return_inverse=1)
nzsum = np.einsum('ji,jk->k',np.arange(len(unq))[:,None] == tags,mat[unq])
out = mat[0]*(len(ixs) - np.count_nonzero(nzmask)) + nzsum
</code></pre>
<h2>Benchmarking</h2>
<p>Let's list out all the five approaches posted thus far in this post and also include the two approaches posted in the question for some runtime testing as functions -</p>
<pre><code>def org_indexing_app(mat,ixs):
return mat[ixs].sum(axis=0)
def org_bincount_app(mat,ixs):
return np.bincount(ixs, minlength=mat.shape[0]).dot(mat)
def indexing_modified_app(mat,ixs):
return np.take(mat,ixs,axis=0).sum(axis=0)
def bincount_modified_app(mat,ixs):
nzmask = ixs!=0
nzsum = np.bincount(ixs[nzmask]-1, minlength=mat.shape[0]-1).dot(mat[1:])
row0_sum = mat[0]*(len(ixs) - np.count_nonzero(nzmask))
return nzsum + row0_sum
def simply_indexing_app(mat,ixs):
nzmask = ixs!=0
nzsum = mat[ixs[nzmask]].sum(axis=0)
return mat[0]*(len(ixs) - np.count_nonzero(nzmask)) + nzsum
def take_app(mat,ixs):
nzmask = ixs!=0
nzsum = np.take(mat,ixs[nzmask],axis=0).sum(axis=0)
return mat[0]*(len(ixs) - np.count_nonzero(nzmask)) + nzsum
def unq_mask_einsum_app(mat,ixs):
nzmask = ixs!=0
unq,tags = np.unique(ixs[nzmask],return_inverse=1)
nzsum = np.einsum('ji,jk->k',np.arange(len(unq))[:,None] == tags,mat[unq])
return mat[0]*(len(ixs) - np.count_nonzero(nzmask)) + nzsum
</code></pre>
<p><strong>Timings</strong></p>
<p>Case #1 (<code>ixs</code> is 95% sparsey) :</p>
<pre><code>In [301]: # Setup input
...: mat = np.random.rand(20,4)
...: ixs = np.random.randint(0,10,(100000))
...: ixs[np.random.rand(ixs.size)<0.95] = 0 # Make it approx 95% sparsey
...:
In [302]: # Timings
...: %timeit org_indexing_app(mat,ixs)
...: %timeit org_bincount_app(mat,ixs)
...: %timeit indexing_modified_app(mat,ixs)
...: %timeit bincount_modified_app(mat,ixs)
...: %timeit simply_indexing_app(mat,ixs)
...: %timeit take_app(mat,ixs)
...: %timeit unq_mask_einsum_app(mat,ixs)
...:
100 loops, best of 3: 4.89 ms per loop
1000 loops, best of 3: 428 µs per loop
100 loops, best of 3: 3.29 ms per loop
1000 loops, best of 3: 329 µs per loop
1000 loops, best of 3: 537 µs per loop
1000 loops, best of 3: 462 µs per loop
1000 loops, best of 3: 1.07 ms per loop
</code></pre>
<p>Case #2 (<code>ixs</code> is 98% sparsey) :</p>
<pre><code>In [303]: # Setup input
...: mat = np.random.rand(20,4)
...: ixs = np.random.randint(0,10,(100000))
...: ixs[np.random.rand(ixs.size)<0.98] = 0 # Make it approx 98% sparsey
...:
In [304]: # Timings
...: %timeit org_indexing_app(mat,ixs)
...: %timeit org_bincount_app(mat,ixs)
...: %timeit indexing_modified_app(mat,ixs)
...: %timeit bincount_modified_app(mat,ixs)
...: %timeit simply_indexing_app(mat,ixs)
...: %timeit take_app(mat,ixs)
...: %timeit unq_mask_einsum_app(mat,ixs)
...:
100 loops, best of 3: 4.86 ms per loop
1000 loops, best of 3: 438 µs per loop
100 loops, best of 3: 3.5 ms per loop
1000 loops, best of 3: 260 µs per loop
1000 loops, best of 3: 318 µs per loop
1000 loops, best of 3: 288 µs per loop
1000 loops, best of 3: 694 µs per loop
</code></pre>
| 2 | 2016-10-10T18:46:59Z | [
"python",
"numpy",
"indexing"
] |
numpy: efficiently add rows of a matrix | 39,964,555 | <p>I have a matrix. </p>
<pre><code>mat = array([
[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]
])
</code></pre>
<p>I'd like to get the sum of the rows at certain indices: eg.</p>
<pre><code>ixs = np.array([0,2,0,0,0,1,1])
</code></pre>
<p>I know I can compute the answer as:</p>
<pre><code>mat[ixs].sum(axis=0)
> array([16, 23, 30, 37])
</code></pre>
<p>The problem is ixs may be very long, and I don't want to use all the memory to create the intermediate product mat[ixs], only to reduce it again with the sum.</p>
<p>I also know I could simply count up the indices and use multiplication instead. </p>
<pre><code>np.bincount(ixs, minlength=mat.shape[0]).dot(mat)
> array([16, 23, 30, 37])
</code></pre>
<p>But that will be expensive if my ixs are sparse. </p>
<p>I know about scipy's sparse matrices, and I suppose I could use them, but I'd prefer a pure numpy solution as sparse matrices are limited in various ways (such as only being 2-d)</p>
<p>So, is there a pure numpy way to merge the indexing and sum-reduction in this case?</p>
<h1>Conclusions:</h1>
<p>Thank you Divakar and hpaulj for your very thorough responses. By "sparse" I meant that most of the values in <code>range(w.shape[0])</code> are not in ixs. Using that new definition (and with a more realistic data size), I re-ran Divakar's tests, with some new functions added:</p>
<pre><code>rng = np.random.RandomState(1234)
mat = rng.randn(1000, 500)
ixs = rng.choice(rng.randint(mat.shape[0], size=mat.shape[0]/10), size=1000)
# Divakar's solutions
In[42]: %timeit org_indexing_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[43]: %timeit org_bincount_app(mat, ixs)
The slowest run took 4.07 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 177 µs per loop
In[44]: %timeit indexing_modified_app(mat, ixs)
1000 loops, best of 3: 1.81 ms per loop
In[45]: %timeit bincount_modified_app(mat, ixs)
1000 loops, best of 3: 258 µs per loop
In[46]: %timeit simply_indexing_app(mat, ixs)
1000 loops, best of 3: 1.86 ms per loop
In[47]: %timeit take_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[48]: %timeit unq_mask_einsum_app(mat, ixs)
10 loops, best of 3: 58.2 ms per loop
# hpaulj's solutions
In[53]: %timeit hpauljs_sparse_solution(mat, ixs)
The slowest run took 9.34 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 524 µs per loop
%timeit hpauljs_second_sparse_solution(mat, ixs)
100 loops, best of 3: 9.91 ms per loop
# Sparse version of original bincount solution (see below):
In[60]: %timeit sparse_bincount(mat, ixs)
10000 loops, best of 3: 71.7 µs per loop
</code></pre>
<p><strong>The winner</strong> in this case is the sparse version of the bincount solution.</p>
<pre><code>def sparse_bincount(mat, ixs):
x = np.bincount(ixs)
nonzeros, = np.nonzero(x)
    return x[nonzeros].dot(mat[nonzeros])
</code></pre>
| 3 | 2016-10-10T18:30:02Z | 39,965,947 | <p>An alternative to <code>bincount</code> is <code>add.at</code>:</p>
<pre><code>In [193]: mat
Out[193]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
In [194]: ixs
Out[194]: array([0, 2, 0, 0, 0, 1, 1])
In [195]: J = np.zeros(mat.shape[0],int)
In [196]: np.add.at(J, ixs, 1)
In [197]: J
Out[197]: array([4, 2, 1])
In [198]: np.dot(J, mat)
Out[198]: array([16, 23, 30, 37])
</code></pre>
<p>By the sparsity, you mean, I assume, that <code>ixs</code> might not include all the rows, for example, <code>ixs</code> without the 0s:</p>
<pre><code>In [199]: ixs = np.array([2,1,1])
In [200]: J=np.zeros(mat.shape[0],int)
In [201]: np.add.at(J, ixs, 1)
In [202]: J
Out[202]: array([0, 2, 1])
In [203]: np.dot(J, mat)
Out[203]: array([16, 19, 22, 25])
</code></pre>
<p><code>J</code> still has the <code>mat.shape[0]</code> shape. But the <code>add.at</code> should scale as the length of <code>ixs</code>. </p>
<p>A sparse solution would look something like:</p>
<p>Make a sparse matrix from <code>ixs</code> that looks like:</p>
<pre><code>In [204]: I
Out[204]:
array([[1, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 1],
[0, 1, 0, 0, 0, 0, 0]])
</code></pre>
<p>sum the rows; sparse does this with matrix multiplication like:</p>
<pre><code>In [205]: np.dot(I, np.ones((7,),int))
Out[205]: array([4, 2, 1])
</code></pre>
<p>then do our dot:</p>
<pre><code>In [206]: np.dot(np.dot(I, np.ones((7,),int)), mat)
Out[206]: array([16, 23, 30, 37])
</code></pre>
<p>Or in sparse code:</p>
<pre><code>In [225]: J = sparse.coo_matrix((np.ones_like(ixs,int),(np.arange(ixs.shape[0]), ixs)))
In [226]: J.A
Out[226]:
array([[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[1, 0, 0],
[1, 0, 0],
[0, 1, 0],
[0, 1, 0]])
In [227]: J.sum(axis=0)*mat
Out[227]: matrix([[16, 23, 30, 37]])
</code></pre>
<p><code>sparse</code>, when converting from <code>coo</code> to <code>csr</code> sums duplicates. I can take advantage that with</p>
<pre><code>In [229]: J = sparse.coo_matrix((np.ones_like(ixs,int), (np.zeros_like(ixs,int), ixs)))
In [230]: J
Out[230]:
<1x3 sparse matrix of type '<class 'numpy.int32'>'
with 7 stored elements in COOrdinate format>
In [231]: J.A
Out[231]: array([[4, 2, 1]])
In [232]: J*mat
Out[232]: array([[16, 23, 30, 37]], dtype=int32)
</code></pre>
| 2 | 2016-10-10T20:05:47Z | [
"python",
"numpy",
"indexing"
] |
numpy: efficiently add rows of a matrix | 39,964,555 | <p>I have a matrix. </p>
<pre><code>mat = array([
[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]
])
</code></pre>
<p>I'd like to get the sum of the rows at certain indices: eg.</p>
<pre><code>ixs = np.array([0,2,0,0,0,1,1])
</code></pre>
<p>I know I can compute the answer as:</p>
<pre><code>mat[ixs].sum(axis=0)
> array([16, 23, 30, 37])
</code></pre>
<p>The problem is ixs may be very long, and I don't want to use all the memory to create the intermediate product mat[ixs], only to reduce it again with the sum.</p>
<p>I also know I could simply count up the indices and use multiplication instead. </p>
<pre><code>np.bincount(ixs, minlength=mat.shape[0]).dot(mat)
> array([16, 23, 30, 37])
</code></pre>
<p>But that will be expensive if my ixs are sparse. </p>
<p>I know about scipy's sparse matrices, and I suppose I could use them, but I'd prefer a pure numpy solution as sparse matrices are limited in various ways (such as only being 2-d)</p>
<p>So, is there a pure numpy way to merge the indexing and sum-reduction in this case?</p>
<h1>Conclusions:</h1>
<p>Thank you Divakar and hpaulj for your very thorough responses. By "sparse" I meant that most of the values in <code>range(w.shape[0])</code> are not in ixs. Using that new definition (and with a more realistic data size), I re-ran Divakar's tests, with some new functions added:</p>
<pre><code>rng = np.random.RandomState(1234)
mat = rng.randn(1000, 500)
ixs = rng.choice(rng.randint(mat.shape[0], size=mat.shape[0]/10), size=1000)
# Divakar's solutions
In[42]: %timeit org_indexing_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[43]: %timeit org_bincount_app(mat, ixs)
The slowest run took 4.07 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 177 µs per loop
In[44]: %timeit indexing_modified_app(mat, ixs)
1000 loops, best of 3: 1.81 ms per loop
In[45]: %timeit bincount_modified_app(mat, ixs)
1000 loops, best of 3: 258 µs per loop
In[46]: %timeit simply_indexing_app(mat, ixs)
1000 loops, best of 3: 1.86 ms per loop
In[47]: %timeit take_app(mat, ixs)
1000 loops, best of 3: 1.82 ms per loop
In[48]: %timeit unq_mask_einsum_app(mat, ixs)
10 loops, best of 3: 58.2 ms per loop
# hpaulj's solutions
In[53]: %timeit hpauljs_sparse_solution(mat, ixs)
The slowest run took 9.34 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 524 µs per loop
%timeit hpauljs_second_sparse_solution(mat, ixs)
100 loops, best of 3: 9.91 ms per loop
# Sparse version of original bincount solution (see below):
In[60]: %timeit sparse_bincount(mat, ixs)
10000 loops, best of 3: 71.7 µs per loop
</code></pre>
<p><strong>The winner</strong> in this case is the sparse version of the bincount solution.</p>
<pre><code>def sparse_bincount(mat, ixs):
x = np.bincount(ixs)
nonzeros, = np.nonzero(x)
    return x[nonzeros].dot(mat[nonzeros])
</code></pre>
| 3 | 2016-10-10T18:30:02Z | 39,975,586 | <p>After much number crunching (see Conclusions of original Question), the best-performing answer, when the inputs are defined as follows:</p>
<pre><code>rng = np.random.RandomState(1234)
mat = rng.randn(1000, 500)
ixs = rng.choice(rng.randint(mat.shape[0], size=mat.shape[0]/10), size=1000)
</code></pre>
<p>Seems to be:</p>
<pre><code>def sparse_bincount(mat, ixs):
x = np.bincount(ixs)
nonzeros, = np.nonzero(x)
    return x[nonzeros].dot(mat[nonzeros])
</code></pre>
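<p>A quick sanity check of the same computation on the toy data from the top of the question (a sketch):</p>
<pre><code>import numpy as np

mat = np.array([[ 0, 1, 2, 3],
                [ 4, 5, 6, 7],
                [ 8, 9, 10, 11]])
ixs = np.array([0, 2, 0, 0, 0, 1, 1])

counts = np.bincount(ixs)                       # [4, 2, 1]
nonzeros, = np.nonzero(counts)                  # rows that actually occur in ixs
result = counts[nonzeros].dot(mat[nonzeros])    # [16, 23, 30, 37]

print(np.array_equal(result, mat[ixs].sum(axis=0)))    # True
</code></pre>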
| 0 | 2016-10-11T10:48:04Z | [
"python",
"numpy",
"indexing"
] |
Pandas max value index | 39,964,558 | <p>I have a Pandas DataFrame with a mix of screen names, tweets, fav's etc. I want to find the max value of 'favcount' (which I have already done) and also return the screen name of that 'tweet'.</p>
<pre><code>df = pd.DataFrame()
df['timestamp'] = timestamp
df['sn'] = sn
df['text'] = text
df['favcount'] = fav_count
print df
print '------'
print df['favcount'].max()
</code></pre>
<p>I cant seem to find anything on this, can anyone help guide me in the right direction?</p>
| 1 | 2016-10-10T18:30:08Z | 39,964,614 | <p>use <code>.argmax()</code> to get the index of the max value. then you can use <code>loc</code></p>
<pre><code>df.loc[df['favcount'].argmax(), 'sn']
</code></pre>
| 1 | 2016-10-10T18:34:23Z | [
"python",
"pandas",
"twitter",
"indexing",
"max"
] |
Pandas max value index | 39,964,558 | <p>I have a Pandas DataFrame with a mix of screen names, tweets, fav's etc. I want to find the max value of 'favcount' (which I have already done) and also return the screen name of that 'tweet'.</p>
<pre><code>df = pd.DataFrame()
df['timestamp'] = timestamp
df['sn'] = sn
df['text'] = text
df['favcount'] = fav_count
print df
print '------'
print df['favcount'].max()
</code></pre>
<p>I cant seem to find anything on this, can anyone help guide me in the right direction?</p>
| 1 | 2016-10-10T18:30:08Z | 39,964,631 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmax.html" rel="nofollow"><code>idxmax</code></a> - get index of max value of <code>favcount</code> and then select value in column <code>sn</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>df = pd.DataFrame({'favcount':[1,2,3], 'sn':['a','b','c']})
print (df)
favcount sn
0 1 a
1 2 b
2 3 c
print (df.favcount.idxmax())
2
print (df.ix[df.favcount.idxmax()])
favcount 3
sn c
Name: 2, dtype: object
print (df.ix[df.favcount.idxmax(), 'sn'])
c
</code></pre>
| 1 | 2016-10-10T18:35:19Z | [
"python",
"pandas",
"twitter",
"indexing",
"max"
] |
How do I iterate through a list of strings and print each item? | 39,964,625 | <p>I have a list of strings and want to print each of the strings in the list, meaning not <code>['word1','word2','word3']</code> but instead: <code>word1</code>, <code>word2</code>, <code>word3</code>.</p>
<p>I tried doing this:</p>
<pre><code>for i in list:
print list[i]
</code></pre>
<p>but I get the message </p>
<blockquote>
<p>"list indices must be integers, not str"</p>
</blockquote>
<p>I am really confused on how I should actually do this?</p>
| -1 | 2016-10-10T18:35:05Z | 39,964,700 | <pre><code>for i in list:
print i
</code></pre>
<p><code>i</code> is the list element: in other words, it takes on the values of the member strings, in order.</p>
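<p>If you also want the position of each string alongside its value, <code>enumerate</code> gives you both (a small sketch):</p>
<pre><code>words = ['word1', 'word2', 'word3']
for index, word in enumerate(words):
    print index, word    # 0 word1, then 1 word2, then 2 word3
</code></pre>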
| 3 | 2016-10-10T18:38:50Z | [
"python",
"string",
"list"
] |
How do I iterate through a list of strings and print each item? | 39,964,625 | <p>I have a list of strings and want to print each of the strings in the list, meaning not <code>['word1','word2','word3']</code> but instead: <code>word1</code>, <code>word2</code>, <code>word3</code>.</p>
<p>I tried doing this:</p>
<pre><code>for i in list:
print list[i]
</code></pre>
<p>but I get the message </p>
<blockquote>
<p>"list indices must be integers, not str"</p>
</blockquote>
<p>I am really confused on how I should actually do this?</p>
| -1 | 2016-10-10T18:35:05Z | 39,964,820 | <p>Firstly, don't name your variable <code>list</code>, since that is a python built-in reserved class. You'll save yourself confusion later. Let's call it <code>lst</code>, here. </p>
<p>Now, to your error. </p>
<blockquote>
<p>"list indices must be integers, not str"</p>
</blockquote>
<p><code>lst[i]</code> is accessing an <em>index</em>, but it "must be an integer". However, <code>i</code> is a <code>str</code> (A Python string class). What is <code>i</code>, though? Well, it is the element <code>in lst</code> for the current iteration. </p>
<hr>
<p>You could "debug" your script by just printing <code>i</code>, see what it is. </p>
<p>If you were still confused (for example you see it printing <code>1</code>), then you should print <code>type(i)</code>; it will say <code><type 'str'></code>. Printing <code>repr(i)</code> would show <code>'1'</code>, so you would use <code>int(i)</code> to cast the string <code>'1'</code> to the int <code>1</code>.</p>
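<p>A tiny sketch of that debugging approach:</p>
<pre><code>lst = ['1', '2', '3']
for i in lst:
    print type(i)    # <type 'str'> each time
    print repr(i)    # '1', '2', '3' -- the quotes show these are strings
    print int(i)     # 1, 2, 3 -- cast to integers
</code></pre>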
| 1 | 2016-10-10T18:47:05Z | [
"python",
"string",
"list"
] |
Error with math equation | 39,964,627 | <p>My problem is that the 'support' variable (see the variable list) doesn't get applied to the current consumption. For example: if I press 'S' and Enter to start the game, then press 'M' to display missions, then press 'S' to choose the Survivors mission and receive 2 survivors, that count won't add to the support for some reason, so it displays "You are consuming 0.5 blah blah blah" instead of "You are consuming 0.7 blah blah blah", even though it should be adding 0.1 per human. Sorry if this is hard to understand, I'm only 11 trying to program!</p>
<pre><code>import random
from PIL import Image

print('\x1b[6;30;42m' + 'Zombie Survival Simulator' + '\x1b[0m')
print "Press [S] to start!"
resp = raw_input()
if 's' in resp or 'S' in resp:
foodmission = ['Convience Store','Grocery Store','Restraunt','Food Storage Area']
watermission = ['Convience Store', 'Old Gas Station', 'Water Tower','Toppled Coca-Cola truck.']
survivormission = ['Abandoned Refugee Camp','Bus','Army Camp','Train Station']
"FOOD"
#Pick Area
def pickfoodMission():
foodmis = random.choice(foodmission)
return foodmis
#Chance to get food
def chanceFood():
foodcha = random.randint(1,20)
return foodcha
#How much food you gain a mission
def foodPickup():
foodpick = random.randint(1,2)
return foodpick
"WATER"
#Pick the area
def pickwaterMission():
watermis = random.choice(watermission)
return watermis
#Chance for getting water
def chancewater():
watercha = random.randint(1,20)
return watercha
#Number of water you gain a mission
def waterPickup():
waterpick = random.randint(1,2)
return waterpick
"SURVIVORS"
#Pick the area
def picksurvivorMission():
survivormis = random.choice(survivormission)
return survivormis
#Chance for getting water
def chancehuman():
humancha = random.randint(1,20)
return humancha
#Number of water you gain a mission
def humanPickup():
humanpick = random.randint(1,2)
return humanpick
food = 3
water = 3
human = 5
healthy = 0
con = 0.1
level = 1
game = 1
new = 1
foodcon = 0
watercon = 0
support = 0.1 * human
newhuman = (human + (1 + (human / 5)) + healthy)
newwater = (water + (1 + (human / 5)) + healthy)
newfood = (water + (1 + (human / 5)) + healthy)
while game == 1:
if food <= 0 or water <= 0:
print('\x1b[7;30;41m' + 'You and your friends are dead.' + '\33[3m')
break
if food >= 3 or water >= 3:
healthy = healthy + 1
if food <= 2 or water <= 2:
healthy = healthy - 1
print "Current Resources: Food: " +str(food) + " Day(s) Water: " + str(water) + " Day(s)"
print "Current Survivors " + str(human)
if healthy <= -3 and healthy >= -1:
print "Current Survivors are " + ('\x1b[7;30;41m' + 'Nearly Dead' + '\33[3m')
if healthy == 0:
print "Current survivors " + ('\x1b[7;30;41m' + 'Are not healthy' + '\33[3m')
if healthy >= 1 and healthy <= 3:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Ok' + '\x1b[0m')
if healthy >= 3 and healthy <= 5:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Great' + '\x1b[0m')
if healthy >= 5 and healthy <= 7:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Excellent' + '\x1b[0m')
foodcon = support
watercon = support
food = food - support
water = water - support
print human
print support
print "You are consuming " + str(support) + " food and " + str(support) + " water per day"
if food - support <= 0 or water - support <= 0:
print('\x1b[7;30;41m' + 'You will not survive the next day.' + '\33[3m')
print "[M]issions [B]uilding [H]oard [E]nd Day"
resp = raw_input()
if 'M' in resp or 'm' in resp:
print "[F]ood [W]ater [S]urvivor"
resp = raw_input()
if 'F' in resp or 'f' in resp:
foodmis = pickfoodMission()
print "You go to a " + foodmis
foodcha = chanceFood()
if foodcha >= 14:
foodpick = foodPickup()
food = newfood
img = Image.open('food.png')
img.show()
print('\x1b[7;32;43m' + 'You are now at ' + str(newfood) + ' day(s) of food' + '\x1b[0m')
elif foodcha < 14:
print('\x1b[7;30;41m' + 'You come back empty handed.' + '\x1b[0m')
elif 'w' in resp or 'W' in resp:
watermis = pickwaterMission()
print "You go to a " + watermis
watercha = chancewater()
if watercha >= 14:
waterpick = waterPickup()
water = newwater
img = Image.open('water.png')
img.show()
print('\x1b[7;32;43m' + 'You are now at ' + str(newwater) + ' day(s) of water' + '\x1b[0m')
elif watercha <= 14:
print('\x1b[7;30;41m' + 'You come back empty handed.' + '\x1b[0m')
elif 's' in resp or 'S' in resp:
humanmis = picksurvivorMission()
print "You go to a " + humanmis
humancha = chancehuman()
if humancha >= 14:
humanpick = humanPickup()
human = newhuman
print('\x1b[7;32;43m' + 'You are now at ' + str(human) + ' survivor(s)' + '\x1b[0m')
img = Image.open('cats.jpg')
img.show()
elif humancha <= 14:
print('\x1b[7;30;41m' + 'You come back with no one else new.' + '\x1b[0m')
if 'B' in resp or 'b' in resp:
print "[F]ood"
</code></pre>
| 0 | 2016-10-10T18:35:06Z | 39,964,870 | <p>You're never updating your <code>support</code> variable after you set it the first time so each time you print it out it's the same. Since <code>support</code> is dependent on <code>human</code>, you should either recalculate <code>support</code> every time <code>human</code> is updated or have a function like <code>calculate_support()</code> which calculates it when you need it.</p>
| 1 | 2016-10-10T18:49:59Z | [
"python"
] |
Error with math equation | 39,964,627 | <p>My problem is that the 'support' variable (see the variable list) doesn't get applied to the current consumption. For example: if I press 'S' and Enter to start the game, then press 'M' to display missions, then press 'S' to choose the Survivors mission and receive 2 survivors, that count won't add to the support for some reason, so it displays "You are consuming 0.5 blah blah blah" instead of "You are consuming 0.7 blah blah blah", even though it should be adding 0.1 per human. Sorry if this is hard to understand, I'm only 11 trying to program!</p>
<pre><code>import random
from PIL import Image

print('\x1b[6;30;42m' + 'Zombie Survival Simulator' + '\x1b[0m')
print "Press [S] to start!"
resp = raw_input()
if 's' in resp or 'S' in resp:
foodmission = ['Convience Store','Grocery Store','Restraunt','Food Storage Area']
watermission = ['Convience Store', 'Old Gas Station', 'Water Tower','Toppled Coca-Cola truck.']
survivormission = ['Abandoned Refugee Camp','Bus','Army Camp','Train Station']
"FOOD"
#Pick Area
def pickfoodMission():
foodmis = random.choice(foodmission)
return foodmis
#Chance to get food
def chanceFood():
foodcha = random.randint(1,20)
return foodcha
#How much food you gain a mission
def foodPickup():
foodpick = random.randint(1,2)
return foodpick
"WATER"
#Pick the area
def pickwaterMission():
watermis = random.choice(watermission)
return watermis
#Chance for getting water
def chancewater():
watercha = random.randint(1,20)
return watercha
#Number of water you gain a mission
def waterPickup():
waterpick = random.randint(1,2)
return waterpick
"SURVIVORS"
#Pick the area
def picksurvivorMission():
survivormis = random.choice(survivormission)
return survivormis
#Chance for getting water
def chancehuman():
humancha = random.randint(1,20)
return humancha
#Number of water you gain a mission
def humanPickup():
humanpick = random.randint(1,2)
return humanpick
food = 3
water = 3
human = 5
healthy = 0
con = 0.1
level = 1
game = 1
new = 1
foodcon = 0
watercon = 0
support = 0.1 * human
newhuman = (human + (1 + (human / 5)) + healthy)
newwater = (water + (1 + (human / 5)) + healthy)
newfood = (water + (1 + (human / 5)) + healthy)
while game == 1:
if food <= 0 or water <= 0:
print('\x1b[7;30;41m' + 'You and your friends are dead.' + '\33[3m')
break
if food >= 3 or water >= 3:
healthy = healthy + 1
if food <= 2 or water <= 2:
healthy = healthy - 1
print "Current Resources: Food: " +str(food) + " Day(s) Water: " + str(water) + " Day(s)"
print "Current Survivors " + str(human)
if healthy <= -3 and healthy >= -1:
print "Current Survivors are " + ('\x1b[7;30;41m' + 'Nearly Dead' + '\33[3m')
if healthy == 0:
print "Current survivors " + ('\x1b[7;30;41m' + 'Are not healthy' + '\33[3m')
if healthy >= 1 and healthy <= 3:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Ok' + '\x1b[0m')
if healthy >= 3 and healthy <= 5:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Great' + '\x1b[0m')
if healthy >= 5 and healthy <= 7:
print "Current Survivors are " + ('\x1b[7;32;43m' + 'Excellent' + '\x1b[0m')
foodcon = support
watercon = support
food = food - support
water = water - support
print human
print support
print "You are consuming " + str(support) + " food and " + str(support) + " water per day"
if food - support <= 0 or water - support <= 0:
print('\x1b[7;30;41m' + 'You will not survive the next day.' + '\33[3m')
print "[M]issions [B]uilding [H]oard [E]nd Day"
resp = raw_input()
if 'M' in resp or 'm' in resp:
print "[F]ood [W]ater [S]urvivor"
resp = raw_input()
if 'F' in resp or 'f' in resp:
foodmis = pickfoodMission()
print "You go to a " + foodmis
foodcha = chanceFood()
if foodcha >= 14:
foodpick = foodPickup()
food = newfood
img = Image.open('food.png')
img.show()
print('\x1b[7;32;43m' + 'You are now at ' + str(newfood) + ' day(s) of food' + '\x1b[0m')
elif foodcha < 14:
print('\x1b[7;30;41m' + 'You come back empty handed.' + '\x1b[0m')
elif 'w' in resp or 'W' in resp:
watermis = pickwaterMission()
print "You go to a " + watermis
watercha = chancewater()
if watercha >= 14:
waterpick = waterPickup()
water = newwater
img = Image.open('water.png')
img.show()
print('\x1b[7;32;43m' + 'You are now at ' + str(newwater) + ' day(s) of water' + '\x1b[0m')
elif watercha <= 14:
print('\x1b[7;30;41m' + 'You come back empty handed.' + '\x1b[0m')
elif 's' in resp or 'S' in resp:
humanmis = picksurvivorMission()
print "You go to a " + humanmis
humancha = chancehuman()
if humancha >= 14:
humanpick = humanPickup()
human = newhuman
print('\x1b[7;32;43m' + 'You are now at ' + str(human) + ' survivor(s)' + '\x1b[0m')
img = Image.open('cats.jpg')
img.show()
elif humancha <= 14:
print('\x1b[7;30;41m' + 'You come back with no one else new.' + '\x1b[0m')
if 'B' in resp or 'b' in resp:
print "[F]ood"
</code></pre>
| 0 | 2016-10-10T18:35:06Z | 39,964,905 | <p>As far as I can see at a quick glance you are only assigning value to the 'support' variable once in the code:</p>
<pre><code>support = 0.1 * human
</code></pre>
<p>and I don't think this code gets to run again. Once support gets a value through this assignment statement it is not going to update even if you update the value of the 'human' variable. </p>
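<p>A tiny sketch of what that means in practice: the assignment stores a plain number once, so it has to be run again after <code>human</code> changes.</p>
<pre><code>human = 5
support = 0.1 * human     # evaluated once: 0.5

human = human + 2         # survivors rescued later
print support             # still 0.5 -- the old value was kept

support = 0.1 * human     # re-evaluate after human changes
print support             # now roughly 0.7
</code></pre>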
| 0 | 2016-10-10T18:51:54Z | [
"python"
] |
Error "virtualenv : command not found" but install location is in PYTHONPATH | 39,964,635 | <p>This has been driving me crazy for the past 2 days.
I installed virtualenv on my Macbook using <code>pip install virtualenv</code>.
But when I try to create a new virtualenv using <code>virtualenv venv</code>, I get the error saying "virtualenv : command not found".</p>
<p>I used <code>pip show virtualenv</code> and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.</p>
<p>Any ideas what might be going wrong here?</p>
| 0 | 2016-10-10T18:35:23Z | 39,972,160 | <p>The only workable approach I could figure out (with help from @Gator_Python was to do <code>python -m virtualenv venv</code>. This creates the virtual environment and works as expected.</p>
<p>I have custom python installed and maybe that's why the default approach doesn't work for me.</p>
| 0 | 2016-10-11T07:17:01Z | [
"python",
"python-2.7",
"pip"
] |
Error "virtualenv : command not found" but install location is in PYTHONPATH | 39,964,635 | <p>This has been driving me crazy for the past 2 days.
I installed virtualenv on my Macbook using <code>pip install virtualenv</code>.
But when I try to create a new virtualenv using <code>virtualenv venv</code>, I get the error saying "virtualenv : command not found".</p>
<p>I used <code>pip show virtualenv</code> and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.</p>
<p>Any ideas what might be going wrong here?</p>
| 0 | 2016-10-10T18:35:23Z | 39,977,369 | <p>As mentioned in the comments, you've got the virtualenv module installed properly in the expected environment since <code>python -m venv</code> allows you to create virtualenv's. </p>
<p>The fact that <code>virtualenv</code> is not a recognized command is a result of the <code>virtualenv.py</code> not being in your system PATH and/or not being executable. The root cause could be outdated distutils or setuptools.</p>
<p>You should attempt to locate the <code>virtualenv.py</code> file, ensure it is executable (<code>chmod +x</code>) and that its location is in your system PATH. On my system, <code>virtualenv.py</code> is in the <code>../Pythonx.x/Scripts</code> folder, but this may be different for you.</p>
| 0 | 2016-10-11T12:34:54Z | [
"python",
"python-2.7",
"pip"
] |
How to use the Xpath and CSS selector in for function | 39,964,639 | <p>I am a rookie, and want to use the scrapy framework to grab something, but I have trouble:</p>
<p>Html A:</p>
<pre><code><ul class="tip" id="tip1">
<li id="tip1_0">
<a href="http://***" title="***" target="_self">***
</a>
</li>
<li id="tip1_1">
<a href="http://***" title="***" target="_self">***
</a>
</li>
<li id="tip1_2">
<a href="http://***" title="***" target="_self">***
</a>
</li>
</ul>
</code></pre>
<p>I use:</p>
<pre><code>f = response.xpath("//*[@id='tip1']//li/a/@href | //*[@id='tip1']//li/a/@title").extract()
</code></pre>
<p>What I get in f is a list, and I then change the list f into a dict (name0=f[0], value0=f[1], name1=f[2], value1=f[3], and so on). Is there an easier way to do this? </p>
<p>Html B:</p>
<pre><code><div class="info">
<a target="_blank" href="***" title="***">
</a>
</div>
<div class="info">
<a target="_blank" href="***" title="***">
</a>
</div>
<div class="info">
<a target="_blank" href="***" title="***">
</a>
</div>
</code></pre>
<p>In this case:</p>
<pre><code>file = response.xpath('//div[@class="info"]')
for line in file:
f = line.xpath('/a/@href').extract()
d = line.xpath('/a/@title').extract()
</code></pre>
<p>But it does not work; it just returns 'f = []' and 'd = []'. I am confused: how can I solve this problem? Thanks a lot.</p>
| 0 | 2016-10-10T18:35:29Z | 39,964,681 | <p>You could have made your inner expressions context-specific by prepending dots:</p>
<pre><code>f = line.xpath('./a/@href').extract()
d = line.xpath('./a/@title').extract()
</code></pre>
<p>Or, point your outer expression to <code>a</code> and get the <code>@href</code> and <code>@title</code>:</p>
<pre><code>file = response.xpath('//div[@class="info"]/a')
for line in file:
f = line.xpath('@href').extract_first()
d = line.xpath('@title').extract_first()
</code></pre>
<p>Also note the use of <code>extract_first()</code> method.</p>
| 1 | 2016-10-10T18:37:13Z | [
"python",
"html",
"css",
"xpath"
] |
Splitting one column into many, counting frequency: 'int' object is not iterable | 39,964,728 | <p>This is my first question on stack overflow and it may be a bit clunky as I learn the ropes - tips or pointers in question formatting welcome!</p>
<p>I'm very new to python and have an issue nearly identical to the one below:</p>
<p><a href="http://stackoverflow.com/questions/33596216/how-to-split-one-column-into-many-columns-and-count-the-frequency">how to split one column into many columns and count the frequency</a></p>
<p>For my data, I have two columns, "logger" and "page", where logger is a column of IP addresses in non-null object string format, and page is a randomized 1-10 non-null int number that represents a webpage the logger visited. An example of this is below:</p>
<pre><code> logger page
0 10.1.60.203 3
1 3.75.190.181 5
2 10.1.60.203 4
3 10.1.60.203 6
4 10.1.60.253 1
</code></pre>
<p>What I'd like to do is to is to have one row for each unique IP in the logger column, and to have a series of columns from 1-10 representing the total count of page views for each page for each IP address, which are then counted for each column, like below:</p>
<pre><code> logger page1 page2 page3 page4 page5 ...
0 10.1.60.203 5 7 14 7 2
1 3.75.190.181 10 3 20 8 6
2 10.1.60.253 22 9 2 12 18
</code></pre>
<p>I've tried a lot of different options to work through this - pivot tables, groupby, but I can't seem to wrap my head around how to get the counts into their respective unique columns per IP address. When I came upon the other forum I felt that that answer should work quite well, but unfortunately I'm coming across the error that 'int' object is not iterable. Here is the code from that user that I'm currently working with:</p>
<pre><code>df2 = pd.DataFrame([x for x in df['page'].apply(
... lambda item: dict(map(
... lambda x: (x,1),
... item))
... ).values]).fillna(0)
>>> df2.join(df)
</code></pre>
<p>I can somewhat grasp what the aforementioned error means but am not confident in ability to work out the answer from there. Any help with this error or particular, or with a more broad solution to my problem, would be greatly, greatly appreciated.</p>
<p>Thank you!</p>
| 1 | 2016-10-10T18:40:30Z | 39,964,826 | <p>Is that what you want?</p>
<pre><code>In [8]: df
Out[8]:
logger page
0 10.1.60.203 3
1 3.75.190.181 5
2 10.1.60.203 4
3 10.1.60.203 6
4 10.1.60.253 1
In [9]: df.pivot_table(index='logger', columns='page', aggfunc='size', fill_value=0)
Out[9]:
page 1 3 4 5 6
logger
10.1.60.203 0 1 1 0 1
10.1.60.253 1 0 0 0 0
3.75.190.181 0 0 0 1 0
</code></pre>
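<p>If you would rather have the columns labelled <code>page1</code>, <code>page2</code>, ... than bare page numbers, you could rename them afterwards, e.g. (a sketch):</p>
<pre><code>out = df.pivot_table(index='logger', columns='page', aggfunc='size', fill_value=0)
out = out.add_prefix('page')     # column 3 becomes page3, and so on
out = out.reset_index()          # turn the logger index back into a column
</code></pre>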
| 0 | 2016-10-10T18:47:24Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
"iterable"
] |
How to make a list of lists in Python when it has multiple separators? | 39,964,756 | <p>The sample file looks like this (all on one line, wrapped for legibility):</p>
<pre><code> ['>1\n', 'TCCGGGGGTATC\n', '>2\n', 'TCCGTGGGTATC\n',
'>3\n', 'TCCGTGGGTATC\n', '>4\n', 'TCCGGGGGTATC\n',
'>5\n', 'TCCGTGGGTATC\n', '>6\n', 'TCCGTGGGTATC\n',
'>7\n', 'TCCGTGGGTATC\n', '>8\n', 'TCCGGGGGTATC\n','\n',
'$$$\n', '\n',
'>B1\n', 'ATCGGGGGTATT\n', '>B2\n', 'TT-GTGGGAATC\n',
'>3\n', 'TTCGTGGGAATC\n', '>B4\n', 'TT-GTGGGTATC\n',
'>B5\n', 'TTCGTGGGTATT\n', '>B6\n','TTCGGGGGTATC\n',
'>B7\n', 'TT-GTGGGTATC\n', '>B8\n', 'TTCGGGGGAATC\n',
'>B9\n', 'TTCGGGGGTATC\n','>B10\n', 'TTCGGGGGTATC\n',
'>B42\n', 'TT-GTGGGTATC\n']
</code></pre>
<p>The <code>$$$</code> separates the two sets. I need to use the <code>.strip</code> function and remove the <code>\n</code> and all the "headers". </p>
<p>I need to make a list of lists (as below) and replace "-" with Z (again, all on one line; wrapped here for legibility):</p>
<pre><code> [['TCCGGGGGTATC','TCCGTGGGTATC','TCCGTGGGTATC', 'TCCGGGGGTATC',
 'TCCGTGGGTATC','TCCGTGGGTATC','TCCGTGGGTATC', 'TCCGGGGGTATC'],
['ATCGGGGGTATT', 'TT-GTGGGAATC','TTCGTGGGAATC', 'TT-GTGGGTATC',
'TTCGTGGGTATT', 'TTCGGGGGTATC','TT-GTGGGTATC', 'TTCGGGGGAATC',
 'TTCGGGGGTATC', 'TTCGGGGGTATC','TT-GTGGGTATC']]
</code></pre>
| 0 | 2016-10-10T18:42:47Z | 39,964,871 | <p>You can exploit the smaller length of the headers (and other unwanted items) as the criterion to filter them out. You start by creating a list containing one list and <em>appending</em> the items that pass the length test to the inner list. </p>
<p>A new sublist is added to the resulting list when the <em>separator</em> <code>'$$$'</code> is reached, and the length test is again used to add the remaining items to this new sublist:</p>
<pre><code>lst = ['>1\n', 'TCCGGGGGTATC\n', '>2\n', 'TCCGTGGGTATC\n', '>3\n', 'TCCGTGGGTATC\n', '>4\n', 'TCCGGGGGTATC\n', '>5\n', 'TCCGTGGGTATC\n', '>6\n', 'TCCGTGGGTATC\n', '>7\n', 'TCCGTGGGTATC\n', '>8\n', 'TCCGGGGGTATC\n','\n', '$$$\n', '\n', '>B1\n', 'ATCGGGGGTATT\n', '>B2\n', 'TT-GTGGGAATC\n', '>3\n', 'TTCGTGGGAATC\n', '>B4\n', 'TT-GTGGGTATC\n', '>B5\n', 'TTCGTGGGTATT\n', '>B6\n','TTCGGGGGTATC\n', '>B7\n', 'TT-GTGGGTATC\n', '>B8\n', 'TTCGGGGGAATC\n', '>B9\n', 'TTCGGGGGTATC\n','>B10\n', 'TTCGGGGGTATC\n','>B42\n', 'TT-GTGGGTATC\n']
result = [[]]
for x in lst:
if len(x) > 6:
result[-1].append(x.strip())
if x.startswith('$$$'):
result.append([])
print(result)
# [['TCCGGGGGTATC', 'TCCGTGGGTATC', 'TCCGTGGGTATC', 'TCCGGGGGTATC', 'TCCGTGGGTATC', 'TCCGTGGGTATC', 'TCCGTGGGTATC', 'TCCGGGGGTATC'], ['ATCGGGGGTATT', 'TT-GTGGGAATC', 'TTCGTGGGAATC', 'TT-GTGGGTATC', 'TTCGTGGGTATT', 'TTCGGGGGTATC', 'TT-GTGGGTATC', 'TTCGGGGGAATC', 'TTCGGGGGTATC', 'TTCGGGGGTATC', 'TT-GTGGGTATC']]
</code></pre>
| 2 | 2016-10-10T18:50:02Z | [
"python",
"list",
"readfile",
"strip"
] |
How to make a list of lists in Python when it has multiple separators? | 39,964,756 | <p>The sample file looks like this (all on one line, wrapped for legibility):</p>
<pre><code> ['>1\n', 'TCCGGGGGTATC\n', '>2\n', 'TCCGTGGGTATC\n',
'>3\n', 'TCCGTGGGTATC\n', '>4\n', 'TCCGGGGGTATC\n',
'>5\n', 'TCCGTGGGTATC\n', '>6\n', 'TCCGTGGGTATC\n',
'>7\n', 'TCCGTGGGTATC\n', '>8\n', 'TCCGGGGGTATC\n','\n',
'$$$\n', '\n',
'>B1\n', 'ATCGGGGGTATT\n', '>B2\n', 'TT-GTGGGAATC\n',
'>3\n', 'TTCGTGGGAATC\n', '>B4\n', 'TT-GTGGGTATC\n',
'>B5\n', 'TTCGTGGGTATT\n', '>B6\n','TTCGGGGGTATC\n',
'>B7\n', 'TT-GTGGGTATC\n', '>B8\n', 'TTCGGGGGAATC\n',
'>B9\n', 'TTCGGGGGTATC\n','>B10\n', 'TTCGGGGGTATC\n',
'>B42\n', 'TT-GTGGGTATC\n']
</code></pre>
<p>The <code>$$$</code> separates the two sets. I need to use the <code>.strip</code> function and remove the <code>\n</code> and all the "headers". </p>
<p>I need to make a list of lists (as below) and replace "-" with Z (again, all on one line; wrapped here for legibility):</p>
<pre><code> [['TCCGGGGGTATC','TCCGTGGGTATC','TCCGTGGGTATC', 'TCCGGGGGTATC',
 'TCCGTGGGTATC','TCCGTGGGTATC','TCCGTGGGTATC', 'TCCGGGGGTATC'],
['ATCGGGGGTATT', 'TT-GTGGGAATC','TTCGTGGGAATC', 'TT-GTGGGTATC',
'TTCGTGGGTATT', 'TTCGGGGGTATC','TT-GTGGGTATC', 'TTCGGGGGAATC',
 'TTCGGGGGTATC', 'TTCGGGGGTATC','TT-GTGGGTATC']]
</code></pre>
| 0 | 2016-10-10T18:42:47Z | 39,965,048 | <p>Here is a variation of Moses Koledoye's answer which examines the first character for <code>></code> and discards any matches as well as any empty elements. I also included replacing "-" with "Z".</p>
<pre><code>lst = ['>1\n', 'TCCGGGGGTATC\n', '>2\n', 'TCCGTGGGTATC\n',
'>3\n', 'TCCGTGGGTATC\n', '>4\n', 'TCCGGGGGTATC\n',
'>5\n', 'TCCGTGGGTATC\n', '>6\n', 'TCCGTGGGTATC\n',
'>7\n', 'TCCGTGGGTATC\n', '>8\n', 'TCCGGGGGTATC\n','\n',
'$$$\n', '\n',
'>B1\n', 'ATCGGGGGTATT\n', '>B2\n', 'TT-GTGGGAATC\n',
'>3\n', 'TTCGTGGGAATC\n', '>B4\n', 'TT-GTGGGTATC\n',
'>B5\n', 'TTCGTGGGTATT\n', '>B6\n','TTCGGGGGTATC\n',
'>B7\n', 'TT-GTGGGTATC\n', '>B8\n', 'TTCGGGGGAATC\n',
'>B9\n', 'TTCGGGGGTATC\n','>B10\n', 'TTCGGGGGTATC\n',
'>B42\n', 'TT-GTGGGTATC\n']
result = [[]]
for x in lst:
if x.startswith('>'):
continue
if x.startswith('$$$'):
result.append([])
continue
x = x.strip()
if x:
result[-1].append(x.replace("-", "Z"))
print(result)
</code></pre>
<p>This avoids assigning any particular significance to the length of any element.</p>
| 1 | 2016-10-10T19:02:46Z | [
"python",
"list",
"readfile",
"strip"
] |
Python IndexError: too many indices for array when trying to append two csv files | 39,964,775 | <p>I keep getting this error whenever I try to append two csv files together.</p>
<pre><code>log1 = np.genfromtxt('log40a.csv', dtype = float, delimiter=',', skip_header =1)
log2 = np.genfromtxt('log40b.csv', dtype = float, delimiter=',', skip_header= 1)
data = np.append(log1, log2)
</code></pre>
<p>This is the line the error I get on.</p>
<pre><code>mSec = data[:,0]
</code></pre>
<p>It works fine if I don't append both CSVs and just plot the log1 file, but when I try to append them, for some reason I get the error:</p>
<pre><code> File "<ipython-input-5-6155c8de61ad>", line 1, in <module>
runfile('C:/Users/myname/.spyder2-py3/setdataexp.py', wdir='C:/Users/Myname/.spyder2-py3')
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Myname/.spyder2-py3/setdataexp.py", line 11, in <module>
mSec = data[:,0]
IndexError: too many indices for array
</code></pre>
<p>EDIT: I forgot to add a sample of the csv file</p>
<pre><code>mSec,speed,heading,gspeed,....
20,3.5,3,4.5
21,3,4,5
22,3.5,6,5.5
</code></pre>
| 1 | 2016-10-10T18:43:45Z | 39,965,088 | <p>Just remove <code>,0</code> and everything will work fine.</p>
<pre><code>mSec = data[:]
</code></pre>
<p>This traceback</p>
<pre><code> File "C:/Users/Myname/.spyder2-py3/setdataexp.py", line 11, in <module>
mSec = data[:,0]
</code></pre>
<p>said that it is a problem in your code.</p>
<p><strong>UPD:</strong></p>
<p><code>np.append</code> without an <code>axis</code> argument produces a single-dimensional (flattened) array, so the two-index slicing doesn't work on it; you have to modify the append operation like this:</p>
<pre><code>data = np.append(log1, log2, axis=0)
</code></pre>
| 1 | 2016-10-10T19:05:14Z | [
"python"
] |
Python IndexError: too many indices for array when trying to append two csv files | 39,964,775 | <p>I keep getting this error whenever I try to append two csv files together.</p>
<pre><code>log1 = np.genfromtxt('log40a.csv', dtype = float, delimiter=',', skip_header =1)
log2 = np.genfromtxt('log40b.csv', dtype = float, delimiter=',', skip_header= 1)
data = np.append(log1, log2)
</code></pre>
<p>This is the line the error I get on.</p>
<pre><code>mSec = data[:,0]
</code></pre>
<p>It works fine if I don't append both CSVs and just plot the log1 file, but when I try to append them, for some reason I get the error:</p>
<pre><code> File "<ipython-input-5-6155c8de61ad>", line 1, in <module>
runfile('C:/Users/myname/.spyder2-py3/setdataexp.py', wdir='C:/Users/Myname/.spyder2-py3')
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Myname/.spyder2-py3/setdataexp.py", line 11, in <module>
mSec = data[:,0]
IndexError: too many indices for array
</code></pre>
<p>EDIT: I forgot to add a sample of the csv file</p>
<pre><code>mSec,speed,heading,gspeed,....
20,3.5,3,4.5
21,3,4,5
22,3.5,6,5.5
</code></pre>
 | 1 | 2016-10-10T18:43:45Z | 39,965,118 | <p>Going by the Python documentation, and since you are using numpy, you should look at the <code>numpy.concatenate</code> method for joining existing arrays.
Example taken from the docs:</p>
<pre><code>>>> a = np.array([[1, 2], [3, 4]])
>>> b = np.array([[5, 6]])
>>> np.concatenate((a, b), axis=0)
array([[1, 2],
[3, 4],
[5, 6]])
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow">Documetation of numpay</a></p>
<p>And there are more methods you can use to combine your arrays:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html" rel="nofollow">np.hstack</a></li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">np.vstack</a></li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html" rel="nofollow">np.column_stack</a></li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.row_stack.html" rel="nofollow">np.row_stack</a></li>
</ul>
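<p>Applied to the arrays in the question, the <code>np.concatenate</code> route would look roughly like this (a sketch; it assumes both CSV files have the same columns, so <code>genfromtxt</code> returns two 2-D arrays):</p>
<pre><code>import numpy as np

log1 = np.genfromtxt('log40a.csv', dtype=float, delimiter=',', skip_header=1)
log2 = np.genfromtxt('log40b.csv', dtype=float, delimiter=',', skip_header=1)

data = np.concatenate((log1, log2), axis=0)   # stack rows instead of flattening

mSec = data[:, 0]                             # the column slice now works
</code></pre>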
| 1 | 2016-10-10T19:07:20Z | [
"python"
] |
Unable to update dictionary | 39,964,791 | <p>I am facing this problem while trying to code a Sudoku solver (with some parts referencing to <a href="http://norvig.com/sudoku.html" rel="nofollow">http://norvig.com/sudoku.html</a>)</p>
<p>Here's the code I have made so far with reference to the above URL.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>puzzle = '003198070890370600007004893030087009079000380508039040726940508905800000380756900'
cell = '123456789'
cell_break = ['123','456','789']
def generate_keys(A, B):
"Cross product of elements in A and elements in B."
return [a+b for a in A for b in B]
#print generate_keys(cell,cell)
def dict_puzzle(puzzle,cell):
'Making a dictionary to store the key and values of the puzzle'
trans_puzzle = {}
key_list = generate_keys(cell,cell)
i=0
for x in puzzle:
trans_puzzle[str(key_list[i])] = x
i = i + 1
return trans_puzzle
dict_puzzle(puzzle,cell)['11'] = 'die'
print dict_puzzle(puzzle,cell)['11']</code></pre>
<p>For the last 2 lines of the code, I tried to mutate the dictionary, but to no avail. It just returns 0, which is the original value (i.e. the mutation wasn't successful).</p>
<p>I am not sure why this is happening :(</p>
| 1 | 2016-10-10T18:45:11Z | 39,964,872 | <p>You're calling the function again, so it returns a new dictionary. You need to assign the result of the first call to a variable and mutate that.</p>
<pre><code>result = dict_puzzle(puzzle,cell)
result['11'] = 'die'
print(result)
</code></pre>
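<p>A quick way to see what is going on (purely illustrative): every call builds a brand-new dictionary, so the copy you mutated is simply thrown away.</p>
<pre><code>d1 = dict_puzzle(puzzle, cell)
d2 = dict_puzzle(puzzle, cell)
print(d1 is d2)    # False - two separate dictionaries
d1['11'] = 'die'
print(d2['11'])    # still '0'; d2 was never touched
</code></pre>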
| 1 | 2016-10-10T18:50:09Z | [
"python",
"dictionary"
] |
Getting 'large' errors that seem too big to be from rounding alone | 39,964,995 | <p>When I use Python, I'm getting some significant rounding errors that seem too large for simply floating point problems. I have the following variables:</p>
<pre><code>p = 2.2*10**9
m = 0.510999*10**6
</code></pre>
<p>I then send it through the following:</p>
<pre><code>b = 1/np.sqrt((m/p)**2 + 1) = 0.99999997302479693
</code></pre>
<p>I then use this value through another equation that should return p:</p>
<pre><code>p = (1/np.sqrt(1-b**2)) * m * b = 2200000008.1937...
</code></pre>
<p>The above gives a difference in p of 8.19... (an error in the 9th decimal place if scientific notation is used), which seems to be way too big to be simply a matter of rounding.</p>
<p>I've tried using <code>Decimal().sqrt()</code> with arbitrarily high precision for all the calculations and I get a difference in p of 1.8935..., which is only marginally better.</p>
<p>Is there a better way of getting higher precision?</p>
| 2 | 2016-10-10T18:58:06Z | 39,966,598 | <p>It is the operation</p>
<pre><code>sqrt(1+x)
</code></pre>
<p>that loses you that much precision, or really the <code>1+x</code> part of it. Your <code>x=(m/p)**2</code> has magnitude about <code>5.4e-8</code>, so when it is added to <code>1</code> it is only retained to the absolute precision a double has near 1 (about <code>1e-16</code>): roughly 7-8 of the 15-16 valid decimal digits of <code>x</code> are lost, and only about 8-9 valid digits survive the round trip. In the reconstruction you see exactly that: only the leading 9 digits of <code>p</code> come back correct.</p>
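<p>If you need more digits to survive the round trip, one option (a sketch, assuming the <code>mpmath</code> package is installed) is to carry <code>b</code> at a higher working precision than a 64-bit float:</p>
<pre><code>from mpmath import mp, mpf, sqrt

mp.dps = 30                      # work with 30 significant decimal digits

p = mpf('2.2e9')
m = mpf('0.510999e6')

b = 1 / sqrt((m / p)**2 + 1)     # keep b at the full working precision
p_back = m * b / sqrt(1 - b**2)  # reconstruct p
print(p_back)                    # agrees with 2.2e9 to far more digits than float64
</code></pre>
<p>The key point is that <code>b</code> must never pass through a 64-bit float in between; once those digits have been rounded away, no amount of later high-precision arithmetic brings them back.</p>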
| 4 | 2016-10-10T20:54:34Z | [
"python",
"python-3.x",
"numpy",
"math",
"rounding"
] |
Discontinuity in graphs but no discontinuity in data in plotting dataframes using matplotlib | 39,965,003 | <p>I'm trying to plot predictions using scikit-learn, pandas, and matplotlib.
I'm able to predict the data and able to save them in dataframes. But now when I plot them there are two cases.</p>
<h3>1st case</h3>
<p>I created a new column for the forecast predictions and plotted it together with the original values.
<code>Adj. Close</code> is my feature and <code>Forecast</code> holds my predictions. As you can see the dates match up perfectly, so there shouldn't be any gaps.</p>
<p>My data:</p>
<pre><code> HL_Precentage Adj. High Adj. Low Adj. Close Adj. Volume \
Date
2016-08-19 0.545879 801.23 796.88 799.65 1120763.0
2016-08-22 0.625685 799.30 794.33 796.95 853365.0
2016-08-23 0.629405 801.00 795.99 796.59 917513.0
2016-08-24 0.973747 798.46 790.76 793.60 1284437.0
2016-08-25 NaN NaN NaN NaN NaN
2016-08-26 NaN NaN NaN NaN NaN
2016-08-27 NaN NaN NaN NaN NaN
2016-08-28 NaN NaN NaN NaN NaN
2016-08-29 NaN NaN NaN NaN NaN
2016-08-30 NaN NaN NaN NaN NaN
label Forecast
Date
2016-08-19 802.79 NaN
2016-08-22 801.23 NaN
2016-08-23 803.08 NaN
2016-08-24 800.71 NaN
2016-08-25 NaN 797.835059
2016-08-26 NaN 799.896814
2016-08-27 NaN 802.552861
2016-08-28 NaN 798.483859
2016-08-29 NaN 795.999011
2016-08-30 NaN 797.866796
</code></pre>
<p><img src="http://i.stack.imgur.com/tNHEq.png" alt="Image Link: red is Adj. Close and blue is forecast"></p>
<p>(red is <code>Adj. Close</code> and blue is forecast)</p>
<h3>2nd case</h3>
<p>To get rid of the gap I plugged the values of predictions in the <code>Adj. Close</code> column only and to my amusement the graph was a continuous one.</p>
<p>Data:</p>
<pre><code> HL_Precentage Adj. High Adj. Low Adj. Close Adj. Volume \
Date
2016-08-19 0.545879 801.23 796.88 799.650000 1120763.0
2016-08-22 0.625685 799.30 794.33 796.950000 853365.0
2016-08-23 0.629405 801.00 795.99 796.590000 917513.0
2016-08-24 0.973747 798.46 790.76 793.600000 1284437.0
2016-08-25 NaN NaN NaN 796.877634 NaN
2016-08-26 NaN NaN NaN 799.448407 NaN
2016-08-27 NaN NaN NaN 801.340352 NaN
2016-08-28 NaN NaN NaN 798.130538 NaN
2016-08-29 NaN NaN NaN 794.900353 NaN
2016-08-30 NaN NaN NaN 796.483742 NaN
label Forecast
Date
2016-08-19 802.79 NaN
2016-08-22 801.23 NaN
2016-08-23 803.08 NaN
2016-08-24 800.71 NaN
2016-08-25 NaN NaN
2016-08-26 NaN NaN
2016-08-27 NaN NaN
2016-08-28 NaN NaN
2016-08-29 NaN NaN
2016-08-30 NaN NaN
</code></pre>
<p><img src="http://i.stack.imgur.com/6MxqG.png" alt="Image Link: red line is for Adj. close"></p>
<p>My question is: how do I make my first graph continuous, so that my predictions aren't plotted with a gap?</p>
| 0 | 2016-10-10T18:59:14Z | 39,965,296 | <p>If you want the line to be continuous, the data either needs to overlap or has to come from one continuous series in a single column. An easy way to do that is to fill the NaNs in <code>Adj. Close</code> with the <code>Forecast</code> values before plotting, like this:</p>
<pre><code>df['Adj. Close'].fillna(df.Forecast).plot()
</code></pre>
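<p>If you would rather not overwrite the original column, a small variation (just a sketch on the same assumed dataframe) is to build a combined series first and plot that:</p>
<pre><code># combine the two columns without modifying either of them
combined = df['Adj. Close'].fillna(df['Forecast'])
combined.plot()
</code></pre>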
| 0 | 2016-10-10T19:21:42Z | [
"python",
"pandas",
"matplotlib"
] |
Discontinuity in graphs but no discontinuity in data in plotting dataframes using matplotlib | 39,965,003 | <p>I'm trying to plot predictions using scikit-learn, pandas, and matplotlib.
I'm able to predict the data and able to save them in dataframes. But now when I plot them there are two cases.</p>
<h3>1st case</h3>
<p>I created a new column for the forecast predictions and plotted it together with the original values.
<code>Adj. Close</code> is my feature and <code>Forecast</code> holds my predictions. As you can see the dates match up perfectly, so there shouldn't be any gaps.</p>
<p>My data:</p>
<pre><code> HL_Precentage Adj. High Adj. Low Adj. Close Adj. Volume \
Date
2016-08-19 0.545879 801.23 796.88 799.65 1120763.0
2016-08-22 0.625685 799.30 794.33 796.95 853365.0
2016-08-23 0.629405 801.00 795.99 796.59 917513.0
2016-08-24 0.973747 798.46 790.76 793.60 1284437.0
2016-08-25 NaN NaN NaN NaN NaN
2016-08-26 NaN NaN NaN NaN NaN
2016-08-27 NaN NaN NaN NaN NaN
2016-08-28 NaN NaN NaN NaN NaN
2016-08-29 NaN NaN NaN NaN NaN
2016-08-30 NaN NaN NaN NaN NaN
label Forecast
Date
2016-08-19 802.79 NaN
2016-08-22 801.23 NaN
2016-08-23 803.08 NaN
2016-08-24 800.71 NaN
2016-08-25 NaN 797.835059
2016-08-26 NaN 799.896814
2016-08-27 NaN 802.552861
2016-08-28 NaN 798.483859
2016-08-29 NaN 795.999011
2016-08-30 NaN 797.866796
</code></pre>
<p><img src="http://i.stack.imgur.com/tNHEq.png" alt="Image Link: red is Adj. Close and blue is forecast"></p>
<p>(red is <code>Adj. Close</code> and blue is forecast)</p>
<h3>2nd case</h3>
<p>To get rid of the gap I plugged the values of predictions in the <code>Adj. Close</code> column only and to my amusement the graph was a continuous one.</p>
<p>Data:</p>
<pre><code> HL_Precentage Adj. High Adj. Low Adj. Close Adj. Volume \
Date
2016-08-19 0.545879 801.23 796.88 799.650000 1120763.0
2016-08-22 0.625685 799.30 794.33 796.950000 853365.0
2016-08-23 0.629405 801.00 795.99 796.590000 917513.0
2016-08-24 0.973747 798.46 790.76 793.600000 1284437.0
2016-08-25 NaN NaN NaN 796.877634 NaN
2016-08-26 NaN NaN NaN 799.448407 NaN
2016-08-27 NaN NaN NaN 801.340352 NaN
2016-08-28 NaN NaN NaN 798.130538 NaN
2016-08-29 NaN NaN NaN 794.900353 NaN
2016-08-30 NaN NaN NaN 796.483742 NaN
label Forecast
Date
2016-08-19 802.79 NaN
2016-08-22 801.23 NaN
2016-08-23 803.08 NaN
2016-08-24 800.71 NaN
2016-08-25 NaN NaN
2016-08-26 NaN NaN
2016-08-27 NaN NaN
2016-08-28 NaN NaN
2016-08-29 NaN NaN
2016-08-30 NaN NaN
</code></pre>
<p><img src="http://i.stack.imgur.com/6MxqG.png" alt="Image Link: red line is for Adj. close"></p>
<p>My question is how to make my first graph continuous so as to make my predictions not discontinuous?</p>
| 0 | 2016-10-10T18:59:14Z | 39,968,124 | <p>There <em>is</em> a discontinuity in your data - the last <code>Adj. close</code> value was recorded on 08/24 and the first <code>Forecast</code> value is for 08/25.</p>
<p>In order to have no breaks in the line you would need the ends of your two series to overlap by at least one timepoint. You could, for example, compute a <code>Forecast</code> for 08/24 where you also have an <code>Adj. close</code> value.</p>
| 0 | 2016-10-10T23:12:19Z | [
"python",
"pandas",
"matplotlib"
] |
Optimizing Python 3n + 1 Programming Challenge | 39,965,107 | <p>I am trying to find an efficient solution for the 3n + 1 problem on <a href="https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=3&page=show_problem&problem=36" rel="nofollow">uvaonlinejudge</a>. My code uses memoization with a dictionary. Can anyone suggest improvements that will help with the execution time of this code? At the moment I am getting a 'Time limit Exceeded' error when I submit the code. If anyone has a working solution to the problem please share it with me. PLEASE DON'T mark this post as DUPLICATE. I have already seen <a href="http://stackoverflow.com/questions/39401993/uva-online-judge-python-time-limit-exceeded">this</a> post and others on stackoverflow but they don't answer the question posted here. My code is as below:</p>
<pre><code>import sys
def recCycleLength(n,cycLenDict):
if n==1:
return 1
if n not in cycLenDict:
if n%2==0:
cycLen = recCycleLength(n//2, cycLenDict)
cycLenDict[n] = cycLen + 1
return cycLen+1
else:
cycLen = recCycleLength(3*n+1, cycLenDict)
cycLenDict[n] = cycLen + 1
return cycLen+1
else:
return cycLenDict[n]
def maxCycle(a, b):
i = a
mydict = {}
maxLength = 1
while i <= b:
m = recCycleLength(i, mydict)
if m > maxLength:
maxLength = m
i = i + 1
return maxLength
for line in sys.stdin:
curr_line=line.split()
num1 = int(curr_line[0])
num2 = int(curr_line[1])
if num1>num2:
num1, num2 = num2, num1
m = maxCycle(num1, num2)
print("{} {} {}".format(num1, num2, m))
</code></pre>
| -2 | 2016-10-10T19:06:18Z | 39,965,831 | <p>I found the problem in your code: you are not keeping the <code>cycLenDict</code> built for the previous interval around for the next one. That is why your code is so "slow": it regenerates all possible chain endings over and over again. Just move the dictionary into global scope, or do something like this:</p>
<pre><code>import sys
def rec(n, cache):
if n in cache:
return cache[n]
if n % 2 == 0:
cycle = rec(n//2, cache)
else:
cycle = rec(3*n+1, cache)
cache[n] = cycle + 1
return cache[n]
def cycle(a, b, cache):
return max(rec(i, cache) for i in range(a, b+1))
if __name__ == '__main__':
cache = {1: 1}
for line in sys.stdin:
a, b = map(int, line.split())
a, b = min(a, b), max(a, b)
m = cycle(a, b, cache)
print("{} {} {}".format(a, b, m))
</code></pre>
| 0 | 2016-10-10T19:57:28Z | [
"python",
"optimization"
] |
Optimizing Python 3n + 1 Programming Challenge | 39,965,107 | <p>I am trying to find an efficient solution for the 3n + 1 problem on <a href="https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=3&page=show_problem&problem=36" rel="nofollow">uvaonlinejudge</a>. My code uses memoization with a dictionary. Can anyone suggest improvements that will help with the execution time of this code? At the moment I am getting a 'Time limit Exceeded' error when I submit the code. If anyone has a working solution to the problem please share it with me. PLEASE DON'T mark this post as DUPLICATE. I have already seen <a href="http://stackoverflow.com/questions/39401993/uva-online-judge-python-time-limit-exceeded">this</a> post and others on stackoverflow but they don't answer the question posted here. My code is as below:</p>
<pre><code>import sys
def recCycleLength(n,cycLenDict):
if n==1:
return 1
if n not in cycLenDict:
if n%2==0:
cycLen = recCycleLength(n//2, cycLenDict)
cycLenDict[n] = cycLen + 1
return cycLen+1
else:
cycLen = recCycleLength(3*n+1, cycLenDict)
cycLenDict[n] = cycLen + 1
return cycLen+1
else:
return cycLenDict[n]
def maxCycle(a, b):
i = a
mydict = {}
maxLength = 1
while i <= b:
m = recCycleLength(i, mydict)
if m > maxLength:
maxLength = m
i = i + 1
return maxLength
for line in sys.stdin:
curr_line=line.split()
num1 = int(curr_line[0])
num2 = int(curr_line[1])
if num1>num2:
num1, num2 = num2, num1
m = maxCycle(num1, num2)
print("{} {} {}".format(num1, num2, m))
</code></pre>
| -2 | 2016-10-10T19:06:18Z | 39,965,896 | <p>Within a single call to <code>maxCycle</code>, the code already does the right thing by caching all calculated results in <code>mydict</code>.</p>
<p>However, the input to the application consists of many pairs of values, and for each pair <code>maxCycle</code> resets <code>mydict = {}</code> and calculates everything from scratch.</p>
<p>I suggest remembering the results globally instead. A simple modification of the original code would be:</p>
<pre><code>cycLenDict = {} # global dictionary
def recCycleLength(n): # no cycLenDict argument
if n==1:
return 1
if n not in cycLenDict:
# ...
def maxCycle(a, b):
# ...
while i <= b:
m = recCycleLength(i) # no myDict argument
</code></pre>
<p>To make everything a little bit nicer looking (without any difference performance-wise, compared to the solution above), make a decorator which remembers the results, so that the remainder of the code does not have to take care of that:</p>
<pre><code>def memoize(func):
"""decorate any function which takes positional arguments to cache its results"""
    func.results = {} # results are stored globally as the function's attribute
def memoized(*a): # memoized version of func
if a not in func.results: # if not cached
func.results[a] = func(*a) # save to cache
return func.results[a] # return from cache
return memoized
@memoize # with this, recCycleLength is called only once for every n value
def recCycleLength(n):
if n==1:
return 1
elif n%2==0:
return recCycleLength(n//2) + 1
else:
return recCycleLength(3*n+1) + 1
def maxCycle(a, b):
return max(recCycleLength(i) for i in range(a, b+1))
</code></pre>
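<p>For completeness (just a sketch, not part of the original answer), the standard library's <code>functools.lru_cache</code> gives the same memoization without writing the decorator yourself:</p>
<pre><code>from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, shared across all input lines
def recCycleLength(n):
    if n == 1:
        return 1
    return recCycleLength(n//2 if n % 2 == 0 else 3*n + 1) + 1

def maxCycle(a, b):
    return max(recCycleLength(i) for i in range(a, b+1))
</code></pre>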
| 0 | 2016-10-10T20:01:28Z | [
"python",
"optimization"
] |