title (string, 10 to 172 chars) | question_id (int64, 469 to 40.1M) | question_body (string, 22 to 48.2k chars) | question_score (int64, -44 to 5.52k) | question_date (string, 20 chars) | answer_id (int64, 497 to 40.1M) | answer_body (string, 18 to 33.9k chars) | answer_score (int64, -38 to 8.38k) | answer_date (string, 20 chars) | tags (list)
---|---|---|---|---|---|---|---|---|---|
Find next lower value in list of (float) numbers?
| 39,662,977 |
<p>How should I write the <code>find_nearest_lower</code> function?</p>
<pre><code>>>> values = [10.1, 10.11, 10.20]
>>> my_value = 10.12
>>> nearest_lower = find_nearest_lower(values, my_value)
>>> nearest_lower
10.11
</code></pre>
<p>This needs to work in Python 2.6 without access to numpy.</p>
| -2 |
2016-09-23T14:14:14Z
| 39,663,064 |
<p>You can use <a href="https://docs.python.org/2/library/itertools.html#itertools.dropwhile" rel="nofollow"><code>itertools.dropwhile</code></a>:</p>
<pre><code>>>> from itertools import dropwhile
>>> values = [10.1, 10.11, 10.20]
>>> my_value = 10.12
>>> next(dropwhile(lambda x: x > my_value, sorted(values, reverse=True)))
10.11
</code></pre>
<p>You can also pass a <code>default</code> argument to <a href="https://docs.python.org/2/library/functions.html#next" rel="nofollow"><code>next()</code></a> which will be returned if the iterator is exhausted instead of raising <code>StopIteration</code>.</p>
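For illustration, the same lookup can also be done with the standard-library <code>bisect</code> module (available in Python 2.6); the function name <code>find_nearest_lower</code> comes from the question, the rest is a sketch:

```python
import bisect

def find_nearest_lower(values, target):
    # bisect requires sorted input; sort a copy so the caller's list is untouched
    ordered = sorted(values)
    idx = bisect.bisect_left(ordered, target)
    # idx == 0 means every value is >= target, i.e. there is no lower value
    return ordered[idx - 1] if idx else None

values = [10.1, 10.11, 10.20]
print(find_nearest_lower(values, 10.12))  # 10.11
print(find_nearest_lower(values, 10.0))   # None
```

Sorting once up front makes repeated lookups O(log n) each, at the cost of an initial O(n log n) sort.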
| 1 |
2016-09-23T14:17:56Z
|
[
"python",
"python-2.6"
] |
Restructuring Array of Tuples
| 39,663,071 |
<p>I have an array of tuples of tuples where the second level should not be a tuple and I want to convert it all to something like a 2-d array.
Is there a quick way to restructure from this messy 1-d to a nice clean 2-d or structured array?</p>
<p>Note: These tuples <strong>do</strong> contain various types. I would like to be able to transpose and take 2-d slices, etc., of this data.</p>
<p>ie...</p>
<pre><code>[((1,-4,7.0),)
((2,-5,8.0),)
((3,-6,9.0),)]
</code></pre>
<p><em>Edited to try and accommodate issues people pointed out with the original question</em> </p>
| 0 |
2016-09-23T14:18:14Z
| 39,663,599 |
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html" rel="nofollow">np.squeeze</a></p>
<p><code>np.squeeze(<your array>)</code></p>
| 0 |
2016-09-23T14:44:28Z
|
[
"python",
"arrays",
"numpy",
"dimensions"
] |
Restructuring Array of Tuples
| 39,663,071 |
<p>I have an array of tuples of tuples where the second level should not be a tuple and I want to convert it all to something like a 2-d array.
Is there a quick way to restructure from this messy 1-d to a nice clean 2-d or structured array?</p>
<p>Note: These tuples <strong>do</strong> contain various types. I would like to be able to transpose and take 2-d slices, etc., of this data.</p>
<p>ie...</p>
<pre><code>[((1,-4,7.0),)
((2,-5,8.0),)
((3,-6,9.0),)]
</code></pre>
<p><em>Edited to try and accommodate issues people pointed out with the original question</em> </p>
| 0 |
2016-09-23T14:18:14Z
| 39,664,883 |
<p>The <code>dtype</code> is important here. The closest I can come to your display is with a nested dtype</p>
<pre><code>In [182]: dt1=np.dtype('i,i,f')
In [183]: dt=np.dtype([('a',dt1,),('b',dt1,),('c',dt1,)])
In [184]: x=np.ones(1,dtype=dt)
In [185]: print(x)
[((1, 1, 1.0), (1, 1, 1.0), (1, 1, 1.0))]
</code></pre>
<p>(no final <code>,</code>)</p>
<p>If I use the <code>repr</code> rather than print's default <code>str</code>, I see the dtype as well:</p>
<pre><code>In [186]: print(repr(x))
array([((1, 1, 1.0), (1, 1, 1.0), (1, 1, 1.0))],
dtype=[('a', [('f0', '<i4'), ('f1', '<i4'), ('f2', '<f4')]), ('b', [('f0', '<i4'), ('f1', '<i4'), ('f2', '<f4')]), ('c', [('f0', '<i4'), ('f1', '<i4'), ('f2', '<f4')])])
</code></pre>
<p>Reshape or squeeze does not work here because it is already 1d. <code>view</code> or <code>astype</code> can work. Do you want to just flatten the dtype, or make it all float? What kind of shape do you expect? Currently each record consists of 9 numbers.</p>
<p>With a compatible dtype I can view this array as a record of 9 values:</p>
<pre><code>In [195]: dt2=np.dtype('i,i,f,i,i,f,i,i,f')
In [196]: x.view(dt2)
Out[196]:
array([(1, 1, 1.0, 1, 1, 1.0, 1, 1, 1.0)],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<f4'), ('f3', '<i4'), ('f4', '<i4'), ('f5', '<f4'), ('f6', '<i4'), ('f7', '<i4'), ('f8', '<f4')])
</code></pre>
<p>The simplest way to turn this <code>x</code> into an array of floats is with <code>tolist</code> (it's not fastest):</p>
<pre><code>In [256]: x['c']=(20,21,22)
In [257]: x['b']=(10,11,12)
In [258]: x['a']=(1,2,3)
In [263]: print(x)
[((1, 2, 3.0), (10, 11, 12.0), (20, 21, 22.0))]
In [264]: np.array(x.tolist())
Out[264]:
array([[[ 1., 2., 3.],
[ 10., 11., 12.],
[ 20., 21., 22.]]])
</code></pre>
| 0 |
2016-09-23T15:50:53Z
|
[
"python",
"arrays",
"numpy",
"dimensions"
] |
How can I add a python script to the windows system path?
| 39,663,091 |
<p>I'm using Windows cmd to run my Python script. I want to run my Python script without giving the cd command and the directory path.
I would like to type only the name of the Python script and run it.</p>
<p>I'm using python 2.7</p>
| 0 |
2016-09-23T14:18:59Z
| 39,663,773 |
<p>1. Go to <strong>Environment Variables</strong> >
<strong>System variables</strong> > <strong>Path</strong> > <strong>Edit</strong>.</p>
<p>2. It looks like this:</p>
<p><strong><em>Path C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\;</em></strong></p>
<p>3. Add a semicolon (;) at the end and append <strong>C:\Python27</strong>.</p>
<p>4. After adding it, it looks like this:</p>
<p><strong>C:\Program Files\Java\jdk1.8.0\bin;%SystemRoot%\system32;C:\Program Files\nodejs\;C:\Python27;</strong></p>
| -1 |
2016-09-23T14:53:18Z
|
[
"python",
"python-2.7",
"cmd"
] |
pandas elementwise difference between two DatetimeIndex
| 39,663,117 |
<p>Is there a pandas idiomatic way to find the difference in days between two pandas DatetimeIndex?</p>
<pre><code>>>> d1 = pd.to_datetime(['2000-01-01', '2000-01-02'])
>>> d2 = pd.to_datetime(['2001-01-01', '2001-01-02'])
</code></pre>
<p>The <code>-</code> operator is set difference, i.e. dates in d1 but not in d2.</p>
<pre><code>>>> d1-d2
DatetimeIndex(['2000-01-01', '2000-01-02'], dtype='datetime64[ns]', freq=None)
</code></pre>
<p>IMO, this is not consistent with numpy and pure Python behaviour, nor even with pandas itself:</p>
<pre><code>>>> d2[0]-d1[0]
Timedelta('366 days 00:00:00')
</code></pre>
<p>This is what I want, but ugly.</p>
<pre><code>>>> [d.days for d in d2.to_pydatetime() - d1.to_pydatetime()]
[366, 366]
</code></pre>
| 1 |
2016-09-23T14:20:21Z
| 39,663,228 |
<p>You can use <code>np.subtract</code> directly:</p>
<pre><code>np.subtract(d2, d1)
</code></pre>
<p>Which'll give you a TimedeltaIndex as a result:</p>
<pre><code>TimedeltaIndex(['366 days', '366 days'], dtype='timedelta64[ns]', freq=None)
</code></pre>
<p>Then, if wanted, use <code>.days</code> on that.</p>
<p>Another possible way:</p>
<pre><code>pd.to_timedelta(d2.values - d1.values).days
</code></pre>
<p>Which'll leave you with:</p>
<pre><code>array([366, 366])
</code></pre>
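The element-wise semantics being asked for can be checked without pandas at all, using only the standard-library <code>datetime</code> module (a minimal sketch of the expected behaviour):

```python
from datetime import date

d1 = [date(2000, 1, 1), date(2000, 1, 2)]
d2 = [date(2001, 1, 1), date(2001, 1, 2)]

# subtracting date objects pairwise yields timedelta objects; .days gives ints
diffs = [(b - a).days for a, b in zip(d1, d2)]
print(diffs)  # [366, 366] (2000 is a leap year)
```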
| 1 |
2016-09-23T14:25:53Z
|
[
"python",
"pandas"
] |
pandas elementwise difference between two DatetimeIndex
| 39,663,117 |
<p>Is there a pandas idiomatic way to find the difference in days between two pandas DatetimeIndex?</p>
<pre><code>>>> d1 = pd.to_datetime(['2000-01-01', '2000-01-02'])
>>> d2 = pd.to_datetime(['2001-01-01', '2001-01-02'])
</code></pre>
<p>The <code>-</code> operator is set difference, i.e. dates in d1 but not in d2.</p>
<pre><code>>>> d1-d2
DatetimeIndex(['2000-01-01', '2000-01-02'], dtype='datetime64[ns]', freq=None)
</code></pre>
<p>IMO, this is not consistent with numpy and pure Python behaviour, nor even with pandas itself:</p>
<pre><code>>>> d2[0]-d1[0]
Timedelta('366 days 00:00:00')
</code></pre>
<p>This is what I want, but ugly.</p>
<pre><code>>>> [d.days for d in d2.to_pydatetime() - d1.to_pydatetime()]
[366, 366]
</code></pre>
| 1 |
2016-09-23T14:20:21Z
| 39,663,248 |
<p>This is because the type is <code>DatetimeIndex</code>, so arithmetic operations are set-wise by design; if you construct a <code>Series</code> from them then you can perform the element-wise subtraction as desired:</p>
<pre><code>In [349]:
d1 = pd.to_datetime(['2000-01-01', '2000-01-02'])
d2 = pd.to_datetime(['2001-01-01', '2001-01-02'])
s1 = pd.Series(d1)
s2 = pd.Series(d2)
(s1-s2).abs()
Out[349]:
0 366 days
1 366 days
dtype: timedelta64[ns]
</code></pre>
| 1 |
2016-09-23T14:26:43Z
|
[
"python",
"pandas"
] |
Python share global variable only for functions inside of function
| 39,663,207 |
<p>I have a function which will recursively execute another function inside it, and I want to share a variable across all executions of that function.</p>
<p>Something like that:</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        global x
        x += 2

    incx()
    return x

testglobal()  # should return 2
</code></pre>
<p>However, I'm getting error <code>NameError: name 'x' is not defined</code></p>
<p>There is a hacky solution: make a list and use the first value of that list as <code>x</code>. But this is so ugly.</p>
<p>So how can I share <code>x</code> with the <code>incx</code> function? Or should I use a completely different approach?</p>
| 1 |
2016-09-23T14:24:43Z
| 39,663,266 |
<p>You want to use the <code>nonlocal</code> statement to access <code>x</code>, which is not global but local to <code>testglobal</code>.</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        nonlocal x
        x += 2

    incx()
    return x

assert 2 == testglobal()
</code></pre>
<p>The closest you can come to doing this in Python 2 is to replace <code>x</code> with a mutable value, similar to the argument hack you mentioned in your question.</p>
<pre><code>def testglobal():
    x = [0]

    def incx():
        x[0] += 2

    incx()
    return x[0]

assert 2 == testglobal()
</code></pre>
<p>Here's an example using a function attribute instead of a list, an alternative that you might find more attractive.</p>
<pre><code>def testglobal():
    def incx():
        incx.x += 2

    incx.x = 0
    incx()
    return incx.x

assert 2 == testglobal()
</code></pre>
| 1 |
2016-09-23T14:27:26Z
|
[
"python",
"scope",
"global-variables",
"python-3.5"
] |
Python share global variable only for functions inside of function
| 39,663,207 |
<p>I have a function which will recursively execute another function inside it, and I want to share a variable across all executions of that function.</p>
<p>Something like that:</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        global x
        x += 2

    incx()
    return x

testglobal()  # should return 2
</code></pre>
<p>However, I'm getting error <code>NameError: name 'x' is not defined</code></p>
<p>There is a hacky solution: make a list and use the first value of that list as <code>x</code>. But this is so ugly.</p>
<p>So how can I share <code>x</code> with the <code>incx</code> function? Or should I use a completely different approach?</p>
| 1 |
2016-09-23T14:24:43Z
| 39,663,269 |
<p>This will work unless you are still using Python 2.x:</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        nonlocal x
        x += 2

    incx()
    return x

testglobal()  # should return 2
</code></pre>
<p>Possibly a cleaner solution, though, would be to define a class to store your state between method calls.</p>
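A minimal sketch of that class-based alternative (the names <code>Counter</code> and <code>incx</code> here are illustrative, not from the question):

```python
class Counter(object):
    """Holds the state that the nested functions shared via nonlocal."""
    def __init__(self):
        self.x = 0

    def incx(self):
        self.x += 2

c = Counter()
c.incx()
print(c.x)  # 2
```

This also works unchanged on Python 2.x, which has no <code>nonlocal</code>.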
| 3 |
2016-09-23T14:27:34Z
|
[
"python",
"scope",
"global-variables",
"python-3.5"
] |
Python share global variable only for functions inside of function
| 39,663,207 |
<p>I have a function which will recursively execute another function inside it, and I want to share a variable across all executions of that function.</p>
<p>Something like that:</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        global x
        x += 2

    incx()
    return x

testglobal()  # should return 2
</code></pre>
<p>However, I'm getting error <code>NameError: name 'x' is not defined</code></p>
<p>There is a hacky solution: make a list and use the first value of that list as <code>x</code>. But this is so ugly.</p>
<p>So how can I share <code>x</code> with the <code>incx</code> function? Or should I use a completely different approach?</p>
| 1 |
2016-09-23T14:24:43Z
| 39,663,287 |
<p>Use the <a href="https://docs.python.org/3/reference/simple_stmts.html#nonlocal" rel="nofollow"><code>nonlocal</code></a> statement, so <code>incx</code> will use the <code>x</code> variable from <code>testglobal</code>:</p>
<pre><code>def testglobal():
    x = 0

    def incx():
        nonlocal x
        x += 2

    incx()
    return x

testglobal()
</code></pre>
| 2 |
2016-09-23T14:28:28Z
|
[
"python",
"scope",
"global-variables",
"python-3.5"
] |
Sorting and arranging a list using pandas
| 39,663,214 |
<p>I have an input file as shown below which needs to be arranged so that the keys are in ascending order, while keys which are not present are printed last.
I am getting the data arranged in the required format, but the order is missing.</p>
<p>I have tried using the sort() method but it shows "list has no attribute sort".
Please suggest a solution, and also any modifications required.</p>
<p>Input file:</p>
<pre><code>3=1388|4=1388|5=IBM|8=157.75|9=88929|1021=1500|854=n|388=157.75|394=157.75|474=157.75|1584=88929|444=20160713|459=93000546718000|461=7|55=93000552181000|22=89020|400=157.75|361=0.73|981=0|16=1468416600.6006|18=1468416600.6006|362=0.46
3=1388|4=1388|5=IBM|8=157.73|9=100|1021=0|854=p|394=157.73|474=157.749977558|1584=89029|444=20160713|459=93001362639104|461=26142|55=93001362849000|22=89120|361=0.71|981=0|16=1468416601.372|18=1468416601.372|362=0.45
3=1388|4=1388|5=IBM|8=157.69|9=100|1021=600|854=p|394=157.69|474=157.749910415|1584=89129|444=20160713|459=93004178882560|461=27052|55=93004179085000|22=89328|361=0.67|981=1|16=1468416604.1916|18=1468416604.1916|362=0.43
</code></pre>
<p>Code I tried:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('inputfile', index_col=None, names=['text'])
s = df.text.str.split('|')
ds = [dict(w.split('=', 1) for w in x) for x in s]
p = pd.DataFrame.from_records(ds)
p1 = p.replace(np.nan,'n/a', regex=True)
st = p1.stack(level=0,dropna=False)
dfs = [g for i,g in st.groupby(level=0)]
#print st
i = 0
while i < len(dfs):
    # index of each column
    print ('\nindex[%d]'%i)
    for (_,k),v in dfs[i].iteritems():
        print k,'\t',v
    i = i + 1
</code></pre>
<p>output getting:</p>
<pre><code>index[0]
1021 1500
1584 88929
16 1468416600.6006
18 1468416600.6006
22 89020
3 1388
361 0.73
362 0.46
388 157.75
394 157.75
4 1388
400 157.75
444 20160713
459 93000546718000
461 7
474 157.75
5 IBM
55 93000552181000
8 157.75
854 n
9 88929
981 0
index[1]
1021 0
1584 89029
16 1468416601.372
18 1468416601.372
22 89120
3 1388
361 0.71
362 0.45
388 n/a
394 157.73
4 1388
400 n/a
444 20160713
459 93001362639104
461 26142
474 157.749977558
5 IBM
55 93001362849000
8 157.73
854 p
9 100
981 0
</code></pre>
<p>Expected output:</p>
<pre><code>index[0]
3 1388
4 1388
5 IBM
8 157.75
9 88929
16 1468416600.6006
18 1468416600.6006
22 89020
55 93000552181000
361 0.73
362 0.46
388 157.75
394 157.75
400 157.75
444 20160713
459 93000546718000
461 7
474 157.75
854 n
981 0
1021 1500
1584 88929
index[1]
3 1388
4 1388
5 IBM
8 157.75
9 88929
16 1468416600.6006
18 1468416600.6006
22 89020
55 93000552181000
361 0.73
362 0.46
394 157.75
444 20160713
459 93000546718000
461 7
474 157.75
854 n
981 0
1021 1500
1584 88929
388 n/a
400 n/a
</code></pre>
| 0 |
2016-09-23T14:25:06Z
| 39,663,524 |
<p>Here:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('inputfile', index_col=None, names=['text'])
s = df.text.str.split('|')
ds = [dict(w.split('=', 1) for w in x) for x in s]
p1 = pd.DataFrame.from_records(ds).fillna('n/a')
st = p1.stack(level=0,dropna=False)
for k, v in st.groupby(level=0):
    print(k, v.sort_index())
</code></pre>
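The numeric ordering the question asks for (integer keys ascending, missing values last) can also be sketched in plain Python, assuming the keys are strings and missing entries carry the 'n/a' placeholder:

```python
row = {'3': '1388', '16': '1468416600.6006', '1021': '1500', '388': 'n/a'}

# present values sort before 'n/a' (False < True), then by numeric key value
ordered = sorted(row.items(), key=lambda kv: (kv[1] == 'n/a', int(kv[0])))
print([k for k, v in ordered])  # ['3', '16', '1021', '388']
```

Without the <code>int()</code> conversion, string keys sort lexicographically ('1021' before '16'), which is exactly the problem in the question's output.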
| 0 |
2016-09-23T14:40:43Z
|
[
"python",
"sorting",
"pandas",
"split"
] |
Sorting and arranging a list using pandas
| 39,663,214 |
<p>I have an input file as shown below which needs to be arranged so that the keys are in ascending order, while keys which are not present are printed last.
I am getting the data arranged in the required format, but the order is missing.</p>
<p>I have tried using the sort() method but it shows "list has no attribute sort".
Please suggest a solution, and also any modifications required.</p>
<p>Input file:</p>
<pre><code>3=1388|4=1388|5=IBM|8=157.75|9=88929|1021=1500|854=n|388=157.75|394=157.75|474=157.75|1584=88929|444=20160713|459=93000546718000|461=7|55=93000552181000|22=89020|400=157.75|361=0.73|981=0|16=1468416600.6006|18=1468416600.6006|362=0.46
3=1388|4=1388|5=IBM|8=157.73|9=100|1021=0|854=p|394=157.73|474=157.749977558|1584=89029|444=20160713|459=93001362639104|461=26142|55=93001362849000|22=89120|361=0.71|981=0|16=1468416601.372|18=1468416601.372|362=0.45
3=1388|4=1388|5=IBM|8=157.69|9=100|1021=600|854=p|394=157.69|474=157.749910415|1584=89129|444=20160713|459=93004178882560|461=27052|55=93004179085000|22=89328|361=0.67|981=1|16=1468416604.1916|18=1468416604.1916|362=0.43
</code></pre>
<p>Code I tried:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('inputfile', index_col=None, names=['text'])
s = df.text.str.split('|')
ds = [dict(w.split('=', 1) for w in x) for x in s]
p = pd.DataFrame.from_records(ds)
p1 = p.replace(np.nan,'n/a', regex=True)
st = p1.stack(level=0,dropna=False)
dfs = [g for i,g in st.groupby(level=0)]
#print st
i = 0
while i < len(dfs):
    # index of each column
    print ('\nindex[%d]'%i)
    for (_,k),v in dfs[i].iteritems():
        print k,'\t',v
    i = i + 1
</code></pre>
<p>output getting:</p>
<pre><code>index[0]
1021 1500
1584 88929
16 1468416600.6006
18 1468416600.6006
22 89020
3 1388
361 0.73
362 0.46
388 157.75
394 157.75
4 1388
400 157.75
444 20160713
459 93000546718000
461 7
474 157.75
5 IBM
55 93000552181000
8 157.75
854 n
9 88929
981 0
index[1]
1021 0
1584 89029
16 1468416601.372
18 1468416601.372
22 89120
3 1388
361 0.71
362 0.45
388 n/a
394 157.73
4 1388
400 n/a
444 20160713
459 93001362639104
461 26142
474 157.749977558
5 IBM
55 93001362849000
8 157.73
854 p
9 100
981 0
</code></pre>
<p>Expected output:</p>
<pre><code>index[0]
3 1388
4 1388
5 IBM
8 157.75
9 88929
16 1468416600.6006
18 1468416600.6006
22 89020
55 93000552181000
361 0.73
362 0.46
388 157.75
394 157.75
400 157.75
444 20160713
459 93000546718000
461 7
474 157.75
854 n
981 0
1021 1500
1584 88929
index[1]
3 1388
4 1388
5 IBM
8 157.75
9 88929
16 1468416600.6006
18 1468416600.6006
22 89020
55 93000552181000
361 0.73
362 0.46
394 157.75
444 20160713
459 93000546718000
461 7
474 157.75
854 n
981 0
1021 1500
1584 88929
388 n/a
400 n/a
</code></pre>
| 0 |
2016-09-23T14:25:06Z
| 39,663,724 |
<p>Replace your <code>ds</code> line with:</p>
<pre><code>ds = [{int(pair[0]): pair[1] for pair in [w.split('=', 1) for w in x]} for x in s]
</code></pre>
<p>This converts the keys to integers so they will be sorted numerically.</p>
<p>To output the n/a values at the end, you could use the pandas selection to output the nonnull values first, then the null values, e.g:</p>
<pre><code>for (ix, series) in p.iterrows():
    print('\nindex[%d]' % ix)
    output_series(ix, series[pd.notnull])
    output_series(ix, series[pd.isnull].fillna('n/a'))
</code></pre>
<p>btw, you can also simplify your stack, groupby, print to:</p>
<pre><code>for (ix, series) in p1.iterrows():
    print('\nindex[%d]' % ix)
    for tag, value in series.iteritems():
        print(tag, '\t', value)
</code></pre>
<p>So the whole script becomes:</p>
<pre><code>def output_series(ix, series):
    for tag, value in series.iteritems():
        print(tag, '\t', value)

df = pd.read_csv('inputfile', index_col=None, names=['text'])
s = df.text.str.split('|')
ds = [{int(pair[0]): pair[1] for pair in [w.split('=', 1) for w in x]} for x in s]
p = pd.DataFrame.from_records(ds)

for (ix, series) in p.iterrows():
    print('\nindex[%d]' % ix)
    output_series(ix, series[pd.notnull])
    output_series(ix, series[pd.isnull].fillna('n/a'))
</code></pre>
| 0 |
2016-09-23T14:50:16Z
|
[
"python",
"sorting",
"pandas",
"split"
] |
Manipulating list of lists and dictionary
| 39,663,232 |
<p>I have a list of lists representing the keys in a dictionary. I wish to pick the key with the smaller value for each list in the list of lists. For instance:</p>
<pre><code>L1 = [['1_A','2_A'],['1_B','2_B']]
D1 = {'1_A': 0.22876, '2_A': 0.22382, '1_B': 0.2584, '2_B': 0.25373}
for li in L1:
    for ll in li:
        if ll in D1.keys():
            print "Value for %s is %s" %(ll,D1[ll])
        else:
            print "Values not found"
</code></pre>
<p>When I print it, I get:</p>
<pre><code>Value for 1_A is 0.22876
Value for 2_A is 0.22382
Value for 1_B is 0.2584
Value for 2_B is 0.25373
</code></pre>
<p>The output I expect is <code>2_A</code>, <code>2_B</code> since both of them have smaller values compared to <code>1_A</code> and <code>1_B</code> respectively. Can anyone suggest how to do this?</p>
| 1 |
2016-09-23T14:25:58Z
| 39,663,364 |
<p>You're not comparing the values anywhere.</p>
<pre><code>L1 = [['1_A','2_A'],['1_B','2_B']]
D1 = {'1_A': 0.22876, '2_A': 0.22382, '1_B': 0.2584, '2_B': 0.25373}
template = "Value for {} is {}"
for i,j in L1:
    if D1[i] < D1[j]:
        print template.format(i,D1[i])
    else:
        print template.format(j,D1[j])
</code></pre>
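The same comparison can be written more compactly with <code>min</code> and a key function; a sketch using the question's data:

```python
L1 = [['1_A', '2_A'], ['1_B', '2_B']]
D1 = {'1_A': 0.22876, '2_A': 0.22382, '1_B': 0.2584, '2_B': 0.25373}

# min with key=D1.get returns the key whose dictionary value is smallest
smallest = [min(pair, key=D1.get) for pair in L1]
print(smallest)  # ['2_A', '2_B']
```

This is O(n) per pair, whereas sorting each pair would be O(n log n); for two-element pairs the difference is negligible, but the intent reads more clearly.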
| 2 |
2016-09-23T14:32:13Z
|
[
"python",
"python-2.7",
"dictionary"
] |
Manipulating list of lists and dictionary
| 39,663,232 |
<p>I have a list of lists representing the keys in a dictionary. I wish to pick the key with the smaller value for each list in the list of lists. For instance:</p>
<pre><code>L1 = [['1_A','2_A'],['1_B','2_B']]
D1 = {'1_A': 0.22876, '2_A': 0.22382, '1_B': 0.2584, '2_B': 0.25373}
for li in L1:
    for ll in li:
        if ll in D1.keys():
            print "Value for %s is %s" %(ll,D1[ll])
        else:
            print "Values not found"
</code></pre>
<p>When I print it, I get:</p>
<pre><code>Value for 1_A is 0.22876
Value for 2_A is 0.22382
Value for 1_B is 0.2584
Value for 2_B is 0.25373
</code></pre>
<p>The output I expect is <code>2_A</code>, <code>2_B</code> since both of them have smaller values compared to <code>1_A</code> and <code>1_B</code> respectively. Can anyone suggest how to do this?</p>
| 1 |
2016-09-23T14:25:58Z
| 39,746,525 |
<p>I found another simple answer!</p>
<pre><code>L1 = [['1_A','2_A'],['1_B','2_B']]
D1 = {'1_A': 0.22876, '2_A': 0.22382, '1_B': 0.2584, '2_B': 0.25373}
newlist = []
for i in L1:
    val = sorted(i, key=D1.get)[0]
    newlist.append(val)
# newlist == ['2_A', '2_B']
</code></pre>
<p>A list comprehension version:</p>
<pre><code>newlist = [sorted(i,key=D1.get)[0] for i in L1]
</code></pre>
<p>Thanks !</p>
| 0 |
2016-09-28T11:44:59Z
|
[
"python",
"python-2.7",
"dictionary"
] |
Writing a table with values for several years as a dictionary?
| 39,663,310 |
<p>I have a table I want to write as Python code, but I'm stuck and can't figure out how to do it.</p>
<p>I have a table with data for two years (1991 and 1992), and different values for each year (men: 35 (1991) and 42 (1992); women: 38 (1991) and 39 (1992); children: 15 (1991) and 10 (1992)).</p>
<p>What I want is to be able to write one variable (dictionary) in Python that lets me search for a specific value in a specific year (ex: men(1992) = 42).</p>
<p>My best suggestion so far is to make a dictionary including tuples in something like this:</p>
<pre><code>people = {
'year': (1991, 1992),
'men': (35, 42),
'women': (38, 39),
'children': (15, 10)
}
</code></pre>
<p>But this obviously does not make it possible to search for a specific value in a specific year.</p>
| -1 |
2016-09-23T14:29:38Z
| 39,663,527 |
<p>You want a nested dict:</p>
<pre><code>people = {
"men": {
1991: 35,
1992: 42
},
"women": {
1991: 38,
1992: 39
},
"children": {
1991: 15,
1992: 10
}
}
</code></pre>
<p>Now you can do <code>people['men'][1991]</code> to get the result 35.</p>
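As a side note, <code>dict.get</code> makes such lookups tolerant of missing groups or years; a sketch using the answer's structure:

```python
people = {
    "men": {1991: 35, 1992: 42},
    "women": {1991: 38, 1992: 39},
    "children": {1991: 15, 1992: 10},
}

print(people["men"][1992])  # 42
# .get avoids a KeyError when a group or year is absent
print(people.get("men", {}).get(1993, "n/a"))  # n/a
```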
| 1 |
2016-09-23T14:40:57Z
|
[
"python",
"dictionary",
"tuples"
] |
Writing a table with values for several years as a dictionary?
| 39,663,310 |
<p>I have a table I want to write as Python code, but I'm stuck and can't figure out how to do it.</p>
<p>I have a table with data for two years (1991 and 1992), and different values for each year (men: 35 (1991) and 42 (1992); women: 38 (1991) and 39 (1992); children: 15 (1991) and 10 (1992)).</p>
<p>What I want is to be able to write one variable (dictionary) in Python that lets me search for a specific value in a specific year (ex: men(1992) = 42).</p>
<p>My best suggestion so far is to make a dictionary including tuples in something like this:</p>
<pre><code>people = {
'year': (1991, 1992),
'men': (35, 42),
'women': (38, 39),
'children': (15, 10)
}
</code></pre>
<p>But this obviously does not make it possible to search for a specific value in a specific year.</p>
| -1 |
2016-09-23T14:29:38Z
| 39,663,546 |
<p>I would suggest something like the following:</p>
<pre><code>people = {'1991':{'men':35, 'women':38, 'children':15},
'1992':{'men':42, 'women':39, 'children':10}}
</code></pre>
<p>Then you can access specific example data using:</p>
<pre><code>print(people['1991']['men'])
</code></pre>
<p><strong>EDIT</strong></p>
<p>If you really need to use tuples and also need the identifiers/keys you will have to use <strong>lists of tuples</strong> like so:</p>
<pre><code>people = {'1991':[('men', 35), ('women', 38), ('children', 15)],
'1992':[('men', 42), ('women', 39), ('children', 10)]}
</code></pre>
<p>With this variation you can access the same data like:</p>
<pre><code>print(people['1991'][0][1])
</code></pre>
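If positional indexing like <code>[0][1]</code> feels fragile, each year's pair list can be converted back to a dict for named lookup (a sketch using the answer's data):

```python
people = {'1991': [('men', 35), ('women', 38), ('children', 15)],
          '1992': [('men', 42), ('women', 39), ('children', 10)]}

# dict() on a list of (key, value) tuples restores lookup by name
by_name = {year: dict(pairs) for year, pairs in people.items()}
print(by_name['1991']['men'])  # 35
```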
| 3 |
2016-09-23T14:41:52Z
|
[
"python",
"dictionary",
"tuples"
] |
Writing a table with values for several years as a dictionary?
| 39,663,310 |
<p>I have a table I want to write as Python code, but I'm stuck and can't figure out how to do it.</p>
<p>I have a table with data for two years (1991 and 1992), and different values for each year (men: 35 (1991) and 42 (1992); women: 38 (1991) and 39 (1992); children: 15 (1991) and 10 (1992)).</p>
<p>What I want is to be able to write one variable (dictionary) in Python that lets me search for a specific value in a specific year (ex: men(1992) = 42).</p>
<p>My best suggestion so far is to make a dictionary including tuples in something like this:</p>
<pre><code>people = {
'year': (1991, 1992),
'men': (35, 42),
'women': (38, 39),
'children': (15, 10)
}
</code></pre>
<p>But this obviously does not make it possible to search for a specific value in a specific year.</p>
| -1 |
2016-09-23T14:29:38Z
| 39,664,253 |
<p>I suggest you put the dictionary in a custom class. This will give you a lot of flexibility regardless of how the data is laid out in the dictionary itself, because you can create methods to add, change, and delete entries in the table that will hide the details of the data structure.</p>
<p>For example, let's say you've decided to organized the data in the dictionary like this:</p>
<pre><code>{1991: {'men': 35, 'women': 38, 'children': 15},
1992: {'men': 42, 'women': 39, 'children': 10}}
</code></pre>
<p>Then you could wrap that in a class like this:</p>
<pre><code>class People(object):
    def __init__(self):
        self._data = {}

    def add(self, year, men, women, children):
        self._data[year] = dict(men=men, women=women, children=children)

    def lookup(self, key, year):
        return self._data[year][key]

    def delete(self, year):
        del self._data[year]

people = People()
people.add(1991, 35, 38, 15)
people.add(1992, 42, 39, 10)

print(people.lookup('women', 1991))  # --> 38
print(people.lookup('men', 1992))    # --> 42
</code></pre>
| 0 |
2016-09-23T15:17:50Z
|
[
"python",
"dictionary",
"tuples"
] |
Plotting line with marker as head
| 39,663,311 |
<p>I have the following code that produces an animation of drawing a circle.</p>
<pre><code>from math import cos, sin
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def update_plot(num, x, y, line):
    line.set_data(x[:num], y[:num])
    line.axes.axis([-1.5, 1.5, -1.5, 1.5])
    return line

def plot_circle():
    x = []
    y = []
    for i in range(100):
        x.append(cos(i/10.0))
        y.append(sin(i/10.0))

    fig, ax = plt.subplots()
    line, = ax.plot(x, y, color = "k")
    ani = animation.FuncAnimation(fig, update_plot, len(x), fargs=[x, y, line], interval = 1, blit = False)
    plt.show()

plot_circle()
</code></pre>
<p>The line is longer than a full lap, and so to be able to still see the drawing when the line overlaps, I would like a marker that shows what is being drawn. I tried to add a scatter plot into the update call, like</p>
<pre><code>scat = plt.scatter(0, 0)
ani = animation.FuncAnimation(fig, update_plot, len(x), fargs=[x, y, line, scat], interval = 1, blit = False)
</code></pre>
<p>and try to update the position of the scatter-plot point using <code>x[num]</code> and <code>y[num]</code> in <code>update_plot</code> without success. How can I achieve this effect?</p>
| 0 |
2016-09-23T14:29:42Z
| 39,664,135 |
<p>You need to return <code>scat</code> in <code>update_plot()</code>. </p>
<p>Here is another method, draw the line with <code>markevery</code> argument:</p>
<pre><code>line, = ax.plot(x, y, "-o", color="k", markevery=100000)
</code></pre>
<p>reverse the points order:</p>
<pre><code>line.set_data(x[:num][::-1], y[:num][::-1])
</code></pre>
<p>for example:</p>
<pre><code>import numpy as np
import pylab as pl
t = np.linspace(0, 2, 100)
x = np.cos(t)
y = np.sin(t)
pl.plot(x[::-1], y[::-1], "-o", markevery=10000)
</code></pre>
<p>outputs:</p>
<p><a href="http://i.stack.imgur.com/oPAh6.png" rel="nofollow"><img src="http://i.stack.imgur.com/oPAh6.png" alt="enter image description here"></a></p>
| 1 |
2016-09-23T15:12:08Z
|
[
"python",
"matplotlib",
"plot"
] |
Plotting line with marker as head
| 39,663,311 |
<p>I have the following code that produces an animation of drawing a circle.</p>
<pre><code>from math import cos, sin
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def update_plot(num, x, y, line):
    line.set_data(x[:num], y[:num])
    line.axes.axis([-1.5, 1.5, -1.5, 1.5])
    return line

def plot_circle():
    x = []
    y = []
    for i in range(100):
        x.append(cos(i/10.0))
        y.append(sin(i/10.0))

    fig, ax = plt.subplots()
    line, = ax.plot(x, y, color = "k")
    ani = animation.FuncAnimation(fig, update_plot, len(x), fargs=[x, y, line], interval = 1, blit = False)
    plt.show()

plot_circle()
</code></pre>
<p>The line is longer than a full lap, and so to be able to still see the drawing when the line overlaps, I would like a marker that shows what is being drawn. I tried to add a scatter plot into the update call, like</p>
<pre><code>scat = plt.scatter(0, 0)
ani = animation.FuncAnimation(fig, update_plot, len(x), fargs=[x, y, line, scat], interval = 1, blit = False)
</code></pre>
<p>and try to update the position of the scatter-plot point using <code>x[num]</code> and <code>y[num]</code> in <code>update_plot</code> without success. How can I achieve this effect?</p>
| 0 |
2016-09-23T14:29:42Z
| 39,696,945 |
<p>I ended up finding a way to add the scatter plot to the same animation. The key was to use <code>scat.set_offsets</code> to set the data. The changes that are needed is as follows:</p>
<pre><code>def update_plot(num, x, y, line, scat):
    # ...
    scat.set_offsets([x[num - 1], y[num - 1]])
    return line, scat

def plot_circle():
    # ...
    scat = ax.scatter([0], [0], color = 'k')  # Set the dot at some arbitrary position initially
    ani = animation.FuncAnimation(fig, update_plot, len(x), fargs=[x, y, line, scat], interval = 1, blit = False)
    plt.show()
</code></pre>
| 0 |
2016-09-26T07:24:21Z
|
[
"python",
"matplotlib",
"plot"
] |
ckan datapusher /api/3/action/resource_show (Caused by <class 'socket.error'>: [Errno 111] Connection refused) error
| 39,663,415 |
<p>I'm trying to install ckan 2.2.1 + pgsql 9.1 + solr 3.6 on rhel 6.6.</p>
<p>I set up the file store and the datastore plugin. I tried to use the 'upload to datastore' menu in the ckan web UI, then I got this error:</p>
<pre><code>2016-09-23 23:16:54,655 INFO [ckan.lib.base] /dataset/datastore/resource_data/7a82b5c2-d68c-4bed-b5c6-fcc460011455 render time 0.363 seconds
Job "push_to_datastore (trigger: RunTriggerNow, run = True, next run at: None)" raised an exception
Traceback (most recent call last):
File "/usr/lib/ckan/default/lib/python2.7/site-packages/apscheduler/scheduler.py", line 512, in _run_job
retval = job.func(*job.args, **job.kwargs)
File "/usr/lib/ckan/default/src/ckan/datapusher/datapusher/jobs.py", line 300, in push_to_datastore
resource = get_resource(resource_id, ckan_url, api_key)
File "/usr/lib/ckan/default/src/ckan/datapusher/datapusher/jobs.py", line 250, in get_resource
'Authorization': api_key}
File "/usr/lib/ckan/default/lib/python2.7/site-packages/requests/api.py", line 87, in post
return request('post', url, data=data, **kwargs)
File "/usr/lib/ckan/default/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/ckan/default/lib/python2.7/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/usr/lib/ckan/default/lib/python2.7/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/ckan/default/lib/python2.7/site-packages/requests/adapters.py", line 209, in send
raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='default.ckan.com', port=80): Max retries exceeded with url: /api/3/action/resource_show (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
</code></pre>
<p>CKAN and Solr are running well. The datapusher is running on port 8800.</p>
<pre><code>$ curl localhost:8800
{
"help": "\n Get help at:\n http://ckan-service-provider.readthedocs.org/."
}
</code></pre>
<p>Am I missing something for my datapusher?
Thanks.</p>
<p>I added my ini:</p>
<pre><code>cache_dir = /tmp/%(ckan.site_id)s/
beaker.session.key = ckan
beaker.session.secret = CkL+a+Nc6grW1jBM/Ts69mRsE
app_instance_uuid = {f41a65ac-4a33-44fe-bb03-af15b456978e}
who.config_file = %(here)s/who.ini
who.log_level = warning
who.log_file = %(cache_dir)s/who_log.ini
sqlalchemy.url = postgresql://ckan_default:PASS@localhost/ckan_default
ckan.datastore.write_url = postgresql://ckan_default:PASS@localhost/datastore_default
ckan.datastore.read_url = postgresql://datastore_default:PASS@localhost/datastore_default
ckan.datastore.default_fts_lang = english
ckan.datastore.default_fts_index_method = gist
ckan.site_url = http://ckan.daniel.com
ckan.auth.anon_create_dataset = false
ckan.auth.create_unowned_dataset = false
ckan.auth.create_dataset_if_not_in_organization = false
ckan.auth.user_create_groups = false
ckan.auth.user_create_organizations = false
ckan.auth.user_delete_groups = true
ckan.auth.user_delete_organizations = true
ckan.auth.create_user_via_api = false
ckan.auth.create_user_via_web = true
ckan.auth.roles_that_cascade_to_sub_groups = admin
ckan.site_id = default
solr_url = http://127.0.0.1:8983/solr/ckan
ckan.plugins = stats text_view image_view recline_view datastore datapusher
ckan.views.default_views = image_view text_view recline_view
ckan.site_title = CKAN
ckan.site_logo = /base/images/ckan-logo.png
ckan.site_description =
ckan.favicon = /images/icons/ckan.ico
ckan.gravatar_default = identicon
ckan.preview.direct = png jpg gif csv
ckan.preview.loadable = html htm rdf+xml owl+xml xml n3 n-triples turtle plain atom csv tsv rss txt json
ckan.locale_default = en
ckan.locale_order = en pt_BR ja it cs_CZ ca es fr el sv sr sr@latin no sk fi ru de pl nl bg ko_KR hu sa sl lv
ckan.locales_offered =
ckan.locales_filtered_out = en_GB
ckan.feeds.authority_name =
ckan.feeds.date =
ckan.feeds.author_name =
ckan.feeds.author_link =
ckan.storage_path = /usr/lib/ckan/korea/src/ckan/filestore
ckan.max_resource_size = 10
ckan.max_image_size = 5
ckan.datapusher.formats = csv xls xlsx tsv application/csv application/vnd.ms-excel application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
ckan.datapusher.url = http://ckan.daniel.com:8800/
ckan.hide_activity_from_users = %(ckan.site_id)s
</code></pre>
| 2 |
2016-09-23T14:34:45Z
| 39,742,223 |
<p>As you noticed in your comment, the problem is that <strong>the datapusher is trying to connect to the wrong port</strong>:</p>
<blockquote>
<p>ConnectionError: HTTPConnectionPool(host='default.ckan.com', <strong>port=80</strong>): Max retries exceeded with url: /api/3/action/resource_show (Caused by &lt;class 'socket.error'&gt;: [Errno 111] Connection refused)</p>
</blockquote>
<p>You already found a possible workaround: changing CKAN's port to 80.</p>
<p>As an alternative workaround, I found that adding the port (the same one configured with <code>port</code> in the <code>[server:main]</code> section) to <code>ckan.site_url</code> makes the datapusher use it instead of the default HTTP port (80). So, in your case it could be:</p>
<pre><code>ckan.site_url = http://default.ckan.com:5000
</code></pre>
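<p>For reference, the port being appended here is the one configured under <code>[server:main]</code> in the same ini file; the value below is CKAN's usual development default, so adjust it to whatever your deployment actually uses:</p>

```ini
[server:main]
use = egg:Paste#http
host = 0.0.0.0
port = 5000
```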
<p>Of course, this is just another possible workaround; the <em>real</em> solution would be to fix the datapusher so that it correctly reads the port from the configuration.</p>
| 1 |
2016-09-28T08:41:24Z
|
[
"python",
"datastore",
"ckan"
] |
HTTP500 result from SOAP request in python
| 39,663,479 |
<p>This question might be fairly specific to one service, but I don't understand why I am getting an HTTP 500 response from the SOAP service. I can see the service I want to access and I see which parameters are required. Still I am getting HTTP 500. Is there something wrong with the service or my code?</p>
<pre><code>#!/usr/bin/env python
# Import WSDL package
from SOAPpy import WSDL
# Create service interface
wsdlUrl = 'http://bioinf.cs.ucl.ac.uk/psipred_api/wsdl'
# Download the WSDL file
server = WSDL.Proxy(wsdlUrl)
# Get the information about which services are provided by this host
print server.methods.keys()
# After selecting the service of interest let's find out which arguments are necessary
callInfo = server.methods['PsipredSubmit']
for para in callInfo.inparams:
print para.name, para.type
# Now let's discover what we will get back
for para in callInfo.outparams:
print para.name, para.type
sequence = "MLELLPTAVEGVSQAQITGRPEWIWLALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGYGLTMVPFGGEQNPIYWARYADWLFTTPLLLLDLALLVDADQGTILALVGADGIMIGTGLVGALTKVYSYRFVWWAISTAAMLYILYVLFFGFTSKAESMRPEVASTFKVLRNVTVVLWSAYPVVWLIGSEGAGIVPLNIETLLFMVLDVSAKVGFGLILLRSRAIFGEAEAPEPSAGDGAAATSD"
email = "psipred@cs.ucl.ac.uk"
subject = "test"
result = server.PsipredSubmit(sequence, email, subject, "True", "False", "False", "all")
print result
</code></pre>
| 0 |
2016-09-23T14:37:33Z
| 39,664,288 |
<p>Generally a 500 result means the server encountered an unexpected error while processing your request.</p>
<p>This could be a temporary situation which will get resolved in a day or two: perhaps the server has a bad RAM chip, or its disk is full.</p>
<p>Or it could be completely intentional: perhaps one of the values you submitted was somehow incorrect, and the server is basically saying "You screwed up; go away." (In this case one hopes that the server would respond with a more helpful diagnostic message, but it ain't always so.)</p>
<p>If you have an official connection with the hosting organization, or if they're nice people willing to help, you might be able to send them a message asking what went wrong.</p>
| 0 |
2016-09-23T15:19:18Z
|
[
"python",
"soap",
"bioinformatics"
] |
HTTP500 result from SOAP request in python
| 39,663,479 |
<p>This question might be fairly specific to one service, but I don't understand why I am getting an HTTP 500 response from the SOAP service. I can see the service I want to access and I see which parameters are required. Still I am getting HTTP 500. Is there something wrong with the service or my code?</p>
<pre><code>#!/usr/bin/env python
# Import WSDL package
from SOAPpy import WSDL
# Create service interface
wsdlUrl = 'http://bioinf.cs.ucl.ac.uk/psipred_api/wsdl'
# Download the WSDL file
server = WSDL.Proxy(wsdlUrl)
# Get the information about which services are provided by this host
print server.methods.keys()
# After selecting the service of interest let's find out which arguments are necessary
callInfo = server.methods['PsipredSubmit']
for para in callInfo.inparams:
print para.name, para.type
# Now let's discover what we will get back
for para in callInfo.outparams:
print para.name, para.type
sequence = "MLELLPTAVEGVSQAQITGRPEWIWLALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGYGLTMVPFGGEQNPIYWARYADWLFTTPLLLLDLALLVDADQGTILALVGADGIMIGTGLVGALTKVYSYRFVWWAISTAAMLYILYVLFFGFTSKAESMRPEVASTFKVLRNVTVVLWSAYPVVWLIGSEGAGIVPLNIETLLFMVLDVSAKVGFGLILLRSRAIFGEAEAPEPSAGDGAAATSD"
email = "psipred@cs.ucl.ac.uk"
subject = "test"
result = server.PsipredSubmit(sequence, email, subject, "True", "False", "False", "all")
print result
</code></pre>
| 0 |
2016-09-23T14:37:33Z
| 39,666,255 |
<p>Your code looks fine and I just tried to access the server via <code>suds</code> and it works.</p>
<pre><code>from suds.client import Client
client = Client('http://bioinf.cs.ucl.ac.uk/psipred_api/wsdl')
print('PsipredSubmit' in client.wsdl.services[0].ports[0].methods)
>>> True
</code></pre>
<p>Are you usually using a proxy?</p>
<p>Perhaps the server was temporarily down?</p>
<hr>
<pre><code>sequence = "MLELLPTAVEGVSQAQITGRPEWIWLALGTALMGLGTLYFLVKGMGVSDPDAKKFYAITTLVPAIAFTMYLSMLLGYGLTMVPFGGEQNPIYWARYADWLFTTPLLLLDLALLVDADQGTILALVGADGIMIGTGLVGALTKVYSYRFVWWAISTAAMLYILYVLFFGFTSKAESMRPEVASTFKVLRNVTVVLWSAYPVVWLIGSEGAGIVPLNIETLLFMVLDVSAKVGFGLILLRSRAIFGEAEAPEPSAGDGAAATSD"
email = "psipred@cs.ucl.ac.uk"
subject = "test"
client.service.PsipredSubmit(sequence, email, subject, "True", "False", "False", "all")
>>> (reply){
>>> message = "job submission succesful"
>>> job_id = "2e9f0864-826a-11e6-9da3-00163e110593"
>>> state = 1
>>> }
</code></pre>
<p>Submitting a job with <code>suds</code> works, perhaps you just caught the server at a bad time or there is something wrong with your <code>SOAP</code> library?</p>
| 1 |
2016-09-23T17:15:59Z
|
[
"python",
"soap",
"bioinformatics"
] |
reindex some DataFrame columns to multi index
| 39,663,486 |
<p>At some point in my workflow I end up with a regular pandas DataFrame with some columns and some rows. I want to export this DataFrame into a LaTeX table, using <code>df.to_latex()</code>. This worked great; however, I now want to use multicolumn, where some columns are grouped under a shared header. For instance, in a DataFrame with columns a, b, c, d, e I would want to leave column a as it is, but group b and c, as well as d and e.</p>
<pre><code>import numpy as np
import pandas as pd
# where I am
data = np.arange(15).reshape(3, 5)
df = pd.DataFrame(data=data, columns=['a', 'b', 'c', 'd', 'e'])
</code></pre>
<p>It looks like this:</p>
<pre><code>In [161]: df
Out[161]:
a b c d e
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
</code></pre>
<p>I would like to group columns b and c, as well as d and e, but leave a alone. So my desired output should look like this.</p>
<pre><code># where I want to be: leave column 'a' alone, group b&c as well as d&e
multi_index = pd.MultiIndex.from_tuples([
('a', ''),
('bc', 'b'),
('bc', 'c'),
('de', 'd'),
('de', 'e'),
])
desired = pd.DataFrame(data, columns=multi_index)
</code></pre>
<p>It looks like this:</p>
<pre><code>In [162]: desired
Out[162]:
a bc de
b c d e
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
</code></pre>
<p>In order to get there, I tried a simple reindex. This gives me the desired shape, but all columns contain only NaN values.</p>
<pre><code># how can use df and my multiindexreindex to multi column DataFrame
result = df.reindex(columns=multi_index)
</code></pre>
<p>The result has the correct indices as described, but all values are NaN:</p>
<pre><code>In [166]: result
Out[166]:
a bc de
b c d e
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
</code></pre>
<p>How can I get my desired result?</p>
| 1 |
2016-09-23T14:38:20Z
| 39,664,088 |
<p>You can assign the MultiIndex directly to the columns attribute of the data frame (your reindex call returns all NaN because the new tuple labels don't match any of the existing flat column labels, so nothing aligns):</p>
<pre><code>df.columns = multi_index
df
</code></pre>
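<p>A minimal runnable sketch, reusing the <code>df</code> and <code>multi_index</code> from the question:</p>

```python
import numpy as np
import pandas as pd

data = np.arange(15).reshape(3, 5)
df = pd.DataFrame(data=data, columns=['a', 'b', 'c', 'd', 'e'])
multi_index = pd.MultiIndex.from_tuples([
    ('a', ''), ('bc', 'b'), ('bc', 'c'), ('de', 'd'), ('de', 'e'),
])

df.columns = multi_index   # relabel in place; no data alignment happens
print(df)
```

<p>Because this only relabels, the values survive, and <code>df.to_latex()</code> should then emit the grouped headers.</p>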
<p><a href="http://i.stack.imgur.com/WpwiH.png" rel="nofollow"><img src="http://i.stack.imgur.com/WpwiH.png" alt="enter image description here"></a></p>
| 1 |
2016-09-23T15:09:51Z
|
[
"python",
"pandas",
"dataframe",
"multi-index",
"reindex"
] |
reindex some DataFrame columns to multi index
| 39,663,486 |
<p>At some point in my workflow I end up with a regular pandas DataFrame with some columns and some rows. I want to export this DataFrame into a LaTeX table, using <code>df.to_latex()</code>. This worked great; however, I now want to use multicolumn, where some columns are grouped under a shared header. For instance, in a DataFrame with columns a, b, c, d, e I would want to leave column a as it is, but group b and c, as well as d and e.</p>
<pre><code>import numpy as np
import pandas as pd
# where I am
data = np.arange(15).reshape(3, 5)
df = pd.DataFrame(data=data, columns=['a', 'b', 'c', 'd', 'e'])
</code></pre>
<p>It looks like this:</p>
<pre><code>In [161]: df
Out[161]:
a b c d e
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
</code></pre>
<p>I would like to group columns b and c, as well as d and e, but leave a alone. So my desired output should look like this.</p>
<pre><code># where I want to be: leave column 'a' alone, group b&c as well as d&e
multi_index = pd.MultiIndex.from_tuples([
('a', ''),
('bc', 'b'),
('bc', 'c'),
('de', 'd'),
('de', 'e'),
])
desired = pd.DataFrame(data, columns=multi_index)
</code></pre>
<p>It looks like this:</p>
<pre><code>In [162]: desired
Out[162]:
a bc de
b c d e
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
</code></pre>
<p>In order to get there, I tried a simple reindex. This gives me the desired shape, but all columns contain only NaN values.</p>
<pre><code># how can use df and my multiindexreindex to multi column DataFrame
result = df.reindex(columns=multi_index)
</code></pre>
<p>The result has the correct indices as described, but all values are NaN:</p>
<pre><code>In [166]: result
Out[166]:
a bc de
b c d e
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
</code></pre>
<p>How can I get my desired result?</p>
| 1 |
2016-09-23T14:38:20Z
| 39,664,125 |
<pre><code>pd.concat([df.set_index('a')[['b', 'c']],
df.set_index('a')[['d', 'e']]],
axis=1, keys=['bc', 'de']).reset_index(col_level=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/oOFSv.png" rel="nofollow"><img src="http://i.stack.imgur.com/oOFSv.png" alt="enter image description here"></a></p>
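<p>Run on the question's data this looks like the sketch below; one caveat I noticed when trying it: <code>reset_index(col_level=1)</code> re-inserts <code>a</code> on the <em>second</em> level, i.e. as <code>('', 'a')</code> rather than the desired <code>('a', '')</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(15).reshape(3, 5),
                  columns=['a', 'b', 'c', 'd', 'e'])

# Group b/c and d/e under the keys 'bc' and 'de', keeping 'a' as the index,
# then move 'a' back out as a column on the second header level
result = pd.concat([df.set_index('a')[['b', 'c']],
                    df.set_index('a')[['d', 'e']]],
                   axis=1, keys=['bc', 'de']).reset_index(col_level=1)
print(result.columns.tolist())
```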
| 1 |
2016-09-23T15:11:43Z
|
[
"python",
"pandas",
"dataframe",
"multi-index",
"reindex"
] |
What should be my python packages path?
| 39,663,498 |
<p>I'm on Mac OS X and I've heard that to avoid global installation of packages (using sudo) that might cause problems with the python files that OS X uses, the path to install python packages must be different than that of OS X. </p>
<p>Currently python executables are installed in : </p>
<pre><code>/usr/local/bin/
</code></pre>
<p>Pip installs modules over here : </p>
<pre><code>/usr/local/lib/python2.7/site-packages
</code></pre>
<p>Python is used from here : </p>
<pre><code>/usr/local/bin/python
</code></pre>
<p>Are these paths safe?</p>
| 1 |
2016-09-23T14:39:09Z
| 39,663,639 |
<p>You shouldn't be mucking about with Python package paths, and you shouldn't be installing Python packages globally at all. Use a virtualenv for each project, and let pip install the libraries locally inside the virtualenv.</p>
| 0 |
2016-09-23T14:46:10Z
|
[
"python",
"osx",
"pip"
] |
What should be my python packages path?
| 39,663,498 |
<p>I'm on Mac OS X and I've heard that to avoid global installation of packages (using sudo) that might cause problems with the python files that OS X uses, the path to install python packages must be different than that of OS X. </p>
<p>Currently python executables are installed in : </p>
<pre><code>/usr/local/bin/
</code></pre>
<p>Pip installs modules over here : </p>
<pre><code>/usr/local/lib/python2.7/site-packages
</code></pre>
<p>Python is used from here : </p>
<pre><code>/usr/local/bin/python
</code></pre>
<p>Are these paths safe?</p>
| 1 |
2016-09-23T14:39:09Z
| 39,663,690 |
<p>I suggest you to make use of virtual environments, you can learn more about them here: <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a></p>
<p>To sum up:</p>
<ol>
<li>Create a virtual environment (venv): <code>$ virtualenv venv</code></li>
</ol>
<blockquote>
<p>This creates a copy of Python in whichever directory you ran the command in placing it in a folder named venv.</p>
</blockquote>
<ol start="2">
<li>Activate the virtual environment: <code>$ source venv/bin/activate</code></li>
<li>Install packages with pip: <code>$ pip install requests</code></li>
<li>When you are done deactivate the virtual environment: <code>$ deactivate</code></li>
</ol>
| 0 |
2016-09-23T14:48:06Z
|
[
"python",
"osx",
"pip"
] |
What should be my python packages path?
| 39,663,498 |
<p>I'm on Mac OS X and I've heard that to avoid global installation of packages (using sudo) that might cause problems with the python files that OS X uses, the path to install python packages must be different than that of OS X. </p>
<p>Currently python executables are installed in : </p>
<pre><code>/usr/local/bin/
</code></pre>
<p>Pip installs modules over here : </p>
<pre><code>/usr/local/lib/python2.7/site-packages
</code></pre>
<p>Python is used from here : </p>
<pre><code>/usr/local/bin/python
</code></pre>
<p>Are these paths safe?</p>
| 1 |
2016-09-23T14:39:09Z
| 39,664,129 |
<p>There are a number of options you can take; the easiest (as others have suggested) is <code>virtualenv</code>. Hopefully that's already installed; if not, this is one of the few modules you should install globally. If you have Python 3.4+, you "should" have the <code>venv</code> module (which is similar to virtualenv, but it's maintained by the Python team).</p>
<p><em>python2</em></p>
<pre><code>virtualenv ~/.py-venvs/python2
</code></pre>
<p><em>python3</em></p>
<pre><code>python3 -m venv ~/.py-venvs/python3
</code></pre>
<p>You could install modules for the local user using <code>pip</code>, but I'm not sure how well this is supported these days:</p>
<pre><code>pip install --user requests
</code></pre>
<p>You could also append directories to the <code>$PYTHONPATH</code> environment variable, but this should only be done as a last resort and under parental supervision :D. Try the other methods before this one.</p>
| 0 |
2016-09-23T15:11:57Z
|
[
"python",
"osx",
"pip"
] |
What should be my python packages path?
| 39,663,498 |
<p>I'm on Mac OS X and I've heard that to avoid global installation of packages (using sudo) that might cause problems with the python files that OS X uses, the path to install python packages must be different than that of OS X. </p>
<p>Currently python executables are installed in : </p>
<pre><code>/usr/local/bin/
</code></pre>
<p>Pip installs modules over here : </p>
<pre><code>/usr/local/lib/python2.7/site-packages
</code></pre>
<p>Python is used from here : </p>
<pre><code>/usr/local/bin/python
</code></pre>
<p>Are these paths safe?</p>
| 1 |
2016-09-23T14:39:09Z
| 39,665,806 |
<p>If you are on OS X, you should also have Python in <code>/usr/bin</code>:</p>
<pre><code>$ which -a python
/usr/local/bin/python
/usr/bin/python
</code></pre>
<p>If you are using <code>brew</code>, the first <code>python</code> should be a symlink:</p>
<pre><code>$ ls -hl $(which python)
lrwxr-xr-x 1 user admin 34B Jun 23 16:53 /usr/local/bin/python -> ../Cellar/python/2.7.11/bin/python
</code></pre>
<p>If you are not using <code>brew</code>, you will have to explain to us how you installed a second version of <code>python</code>.</p>
<p>You should also have at least two <code>site-packages</code>:</p>
<pre><code>$ find /usr -name 'site-packages'
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
/usr/local/lib/python2.7/site-packages
</code></pre>
<p>If you installed <code>python</code> using <code>brew</code>, you should also have <code>pip</code>:</p>
<pre><code>$ which pip
/usr/local/bin/pip
</code></pre>
<p>You should probably upgrade that to the latest <code>pip</code>:</p>
<pre><code>$ pip install --upgrade pip
</code></pre>
<p>It should be safe to install <code>python</code> packages using <code>/usr/local/bin/pip</code> because they will be installed in <code>/usr/local/lib/python2.7/site-packages</code>. The <code>/usr/local</code> path is specifically for <a href="http://www.pathname.com/fhs/pub/fhs-2.3.html#USRLOCALLOCALHIERARCHY" rel="nofollow">local software</a>. Also, <code>brew</code> installs its files to <code>/usr/local</code>, so if you are using <code>brew</code>, you are already installing files there.</p>
<p>I am not sure why some folks say not to install any packages globally. I have never seen a reference that explained why this was a bad idea. If multiple users need the same package, it makes more sense to install it globally.</p>
<p>When I first started using <code>virtualenv</code>, it did not always work the way I expected it to. I had a machine with multiple users that needed <code>requests</code>, and because of problems with <code>virtualenv</code>, I wound up installing it globally using <code>pip</code>.</p>
<p>Both <code>virtualenv</code> and <code>pip</code> have improved a lot since I first started using them and I can see how using them can prevent some problems. If you are developing new software that needs the latest version of a package, <code>virtualenv</code> allows you to install the package without affecting the rest of the system. However, I still do not see why it is a bad idea to install packages globally.</p>
| 0 |
2016-09-23T16:47:24Z
|
[
"python",
"osx",
"pip"
] |
How can I match the current date in a <td></td> based on datetime.date.today()?
| 39,663,526 |
<p>Hi everyone! I'm having some trouble matching two strings.</p>
<p>So, I've got this HTML code from a page I'm testing:</p>
<pre><code><td class="fixedColumn ng-binding"
ng-style="{'padding':'5px','line-height':'10px'}" style="padding: 5px; line-height: 10px;">
(today's date: 2016-09-23)
</td>
</code></pre>
<p>The actual string that's displayed in the page is <strong>(today's date: 2016-09-23)</strong>.</p>
<p>What I tried to do using Python is this:</p>
<pre><code>#check if today's date = current date
currentDate = datetime.date.today().strftime("(today's date: %Y-%m-%d)")
todayDate = driver.find_element_by_xpath("//td[contains(text(), 'currentDate']")
allOk = 'all good!'
notOk = 'still not OK...'
if todayDate == currentDate:
print(allOk)
home = driver.find_element_by_xpath("//a[@title='Home']").click()
else:
print(notOk)
driver.close()
</code></pre>
<p>What happens when I run the script is, obviously, that the browser closes, according to <code>driver.close()</code>, but I need 'all good!' printed in the shell and the browser to go to "Home" by "clicking".</p>
<p>I'm really new to Python, but as far as I'm concerned, I did everything possible to make this work. Could anyone give me some hints and point out to what I'm missing? Thank you :)</p>
| 1 |
2016-09-23T14:40:49Z
| 39,663,824 |
<pre><code>todayDate = driver.find_element_by_xpath("//td[contains(text(), 'currentDate']")
</code></pre>
<p>The quotes around currentDate make the XPath look for an element that contains the literal text 'currentDate', not the text that the currentDate variable refers to. You need to change it to something like this (note the extra single quotes, which keep the date a string literal inside the XPath, and the closing parenthesis):</p>
<pre><code>todayDate = driver.find_element_by_xpath("//td[contains(text(), " + currentDate + "]")
</code></pre>
<p>you may also need to wrap currentDate in str(currentDate) to make sure it casts as a string, that was a problem I once faced.</p>
<p>'+' in python concatenates strings together, so this should make it look for the text that variable refers to. Hope this fixes it for you!</p>
<p>Another method that does not use a python variable as suggested by Jon Clements:</p>
<p>You can make this clearer by using string formatting instead, eg:</p>
<pre><code>"//td[contains(text(), {})]".format(date.today())
</code></pre>
<p>just make sure there's a from datetime import date beforehand...</p>
| 1 |
2016-09-23T14:56:04Z
|
[
"python",
"string",
"selenium",
"selenium-webdriver"
] |
How can I match the current date in a <td></td> based on datetime.date.today()?
| 39,663,526 |
<p>everyone! I'm having some trouble in matching two strings.</p>
<p>So, I've got this HTML code from a page I'm testing:</p>
<pre><code><td class="fixedColumn ng-binding"
ng-style="{'padding':'5px','line-height':'10px'}" style="padding: 5px; line-height: 10px;">
(today's date: 2016-09-23)
</td>
</code></pre>
<p>The actual string that's displayed in the page is <strong>(today's date: 2016-09-23)</strong>.</p>
<p>What I tried to do using Python is this:</p>
<pre><code>#check if today's date = current date
currentDate = datetime.date.today().strftime("(today's date: %Y-%m-%d)")
todayDate = driver.find_element_by_xpath("//td[contains(text(), 'currentDate']")
allOk = 'all good!'
notOk = 'still not OK...'
if todayDate == currentDate:
print(allOk)
home = driver.find_element_by_xpath("//a[@title='Home']").click()
else:
print(notOk)
driver.close()
</code></pre>
<p>What happens when I run the script is, obviously, that the browser closes, according to <code>driver.close()</code> , but I need the shell to print 'all good!' in the shell and for the browser to go to "Home" by "clicking".</p>
<p>I'm really new to Python, but as far as I'm concerned, I did everything possible to make this work. Could anyone give me some hints and point out to what I'm missing? Thank you :)</p>
| 1 |
2016-09-23T14:40:49Z
| 39,696,512 |
<p>Ok, so it's been about 10 minutes since I've added a comment, but I've made it work.
Code before:</p>
<pre><code>todayDate = driver.find_element_by_xpath("//td[contains(text(),
" + str(currentDate) + ")]")
</code></pre>
<p>Code after:</p>
<pre><code>todayDate = driver.find_element_by_xpath("//td[contains(text(),
'" + str(currentDate) + "')]")
</code></pre>
<p>I put <code>" + str(currentDate) + "</code> in <code>''</code>. Also, I changed</p>
<pre><code>`if todayDate == currentDate:` to `if todayDate:`
</code></pre>
<p>It works as intended now; I only need to figure out why :) Anyway, thank you AntlerFox and Jon Clements for your trouble.</p>
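<p>A likely explanation for the "why" (an assumption, based on how selenium's API works): <code>find_element_by_xpath</code> returns a <code>WebElement</code> object, not a string, so <code>todayDate == currentDate</code> compared an element with a string and was always <code>False</code>, while a bare <code>if todayDate:</code> just checks that an element was found. A tiny hypothetical stand-in class illustrates the difference:</p>

```python
class FakeWebElement:
    """Hypothetical stand-in for selenium's WebElement."""
    def __init__(self, text):
        self.text = text

currentDate = "(today's date: 2016-09-23)"
todayDate = FakeWebElement(currentDate)   # what find_element_by_xpath returns

print(todayDate == currentDate)        # False: element object vs. string
print(todayDate.text == currentDate)   # True: compare the element's text
```

<p>So the original comparison can be kept by comparing <code>todayDate.text</code> (or checking <code>currentDate in todayDate.text</code>) instead of the element itself.</p>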
| 0 |
2016-09-26T06:58:30Z
|
[
"python",
"string",
"selenium",
"selenium-webdriver"
] |
spark Dataframe/RDD equivalent to pandas command given in description?
| 39,663,596 |
<p>How to perform same functionality as this pandas command via Pyspark dataframe or RDD ?</p>
<pre><code>df.drop(df.std()[(df.std() == 0)].index, axis=1)
</code></pre>
<p>For details on what this command does, refer:
<a href="http://stackoverflow.com/questions/39658574/how-to-drop-columns-which-have-same-values-in-all-rows-via-pandas-or-spark-dataf">How to drop columns which have same values in all rows via pandas or spark dataframe?</a></p>
<p>NOTE: file is too big to use df.toPandas().</p>
| 0 |
2016-09-23T14:44:20Z
| 39,664,296 |
<p>In general you can use <code>countDistinct</code>:</p>
<pre><code>from pyspark.sql.functions import countDistinct
cnts = (df
.select([countDistinct(c).alias(c) for c in df.columns])
.first()
.asDict())
df.select(*[k for (k, v) in cnts.items() if v > 1])
## +---+-----+-----+-----+
## | id|index| name|data1|
## +---+-----+-----+-----+
## |345| 0|name1| 3|
## | 12| 1|name2| 2|
## | 2| 5|name6| 7|
## +---+-----+-----+-----+
</code></pre>
<p>This can be expensive for columns with high cardinality, but it can handle non-numeric columns.</p>
<p>You can use the same approach to filter with standard deviations:</p>
<pre><code>from pyspark.sql.functions import stddev
stddevs = df.select(*[stddev(c).alias(c) for c in df.columns]).first().asDict()
df.select(*[k for (k, v) in stddevs.items() if v is None or v != 0.0])
## +---+-----+-----+-----+
## | id|index| name|data1|
## +---+-----+-----+-----+
## |345| 0|name1| 3|
## | 12| 1|name2| 2|
## | 2| 5|name6| 7|
## +---+-----+-----+-----+
</code></pre>
| 0 |
2016-09-23T15:19:45Z
|
[
"python",
"pandas",
"pyspark",
"rdd",
"spark-dataframe"
] |
Generating pandas column values based upon filename
| 39,663,649 |
<p>I wish to populate three columns (day, week, year) in a pandas data frame.</p>
<p>The data is to be extracted from a filename which are in this format:</p>
<pre><code>AAAA_DDWWYYYY.txt
</code></pre>
<p>where:</p>
<ul>
<li>AAAA = alpha characters</li>
<li>DD = Day</li>
<li>WW = Month</li>
<li>YYYY = Year</li>
</ul>
<p>For example:</p>
<pre><code>NDVI_01012016.txt
</code></pre>
<p>is:</p>
<ul>
<li>Day = 01</li>
<li>Week = 01</li>
<li>Year = 2016</li>
</ul>
| -1 |
2016-09-23T14:46:30Z
| 39,664,077 |
<p>Assuming <code>df</code> is your dataframe:</p>
<pre><code>df.ix[df.filename == 'NDVI_01012016.txt', 'Day'] = 1
df.ix[df.filename == 'NDVI_01012016.txt', 'Week'] = '01'
df.ix[df.filename == 'NDVI_01012016.txt', 'Year'] = '2016'
</code></pre>
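<p>A more general alternative, assuming the frame has a column of such filenames (the column name <code>filename</code> is hypothetical, since the question doesn't show the frame), is to slice the fixed-width positions after the underscore for all rows at once:</p>

```python
import pandas as pd

df = pd.DataFrame({'filename': ['NDVI_01012016.txt', 'NDVI_02032015.txt']})

stem = df['filename'].str.split('_').str[1]   # the 'DDWWYYYY.txt' part
df['Day'] = stem.str[0:2]
df['Week'] = stem.str[2:4]
df['Year'] = stem.str[4:8]
print(df)
```

<p>This fills all three columns for every row in one pass, instead of matching each filename individually.</p>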
| 0 |
2016-09-23T15:09:12Z
|
[
"python",
"pandas"
] |
Problems when using py2app to freeze a python script which uses pdfkit
| 39,663,767 |
<p>I recently created an application using pdfkit that takes a keyword entered by the user and makes various internet searches using that keyword. It then uses pdfkit to generate pdf files of each of the separate searches and saves them to a user-defined directory.</p>
<p>Everything works dandy when I run the code from the terminal; however, when I attempt to freeze the script using py2app, everything works fine up until it comes to actually saving the PDFs, at which point the application does absolutely nothing.</p>
<p>I have tried to include both pdfkit and wkhtmltopdf in the setup.py file which py2app uses to create the application, but with no luck. I have tried listing them under the includes section like this:</p>
<pre><code>'includes':['requests','pdfkit']
</code></pre>
<p>In the packages section:</p>
<pre><code>'packages':['requests','pdfkit']
</code></pre>
<p>and even in the setup_requires section below:</p>
<pre><code>setup_requires=['py2app', 'wkhtmltopdf']
</code></pre>
<p>But still the application does nothing. I presume it's something to do with the fact that a dependency doesn't get carried over to the frozen application. However, I am starting to rethink this, as even when I create the app in alias mode (which claims to keep all dependencies), the same problem occurs.</p>
<p>Is this a known issue? Or has anyone found a solution to this?</p>
<p>Many thanks. My full setup.py file is below:</p>
<pre><code> from setuptools import setup
APP = ['pdtest.py']
DATA_FILES = []
OPTIONS = {'argv_emulation': False, 'includes':['requests','pdfkit'],'packages':['requests','pdfkit'], 'iconfile':'icon.icns'}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app', 'wkhtmltopdf'],
)
</code></pre>
| 0 |
2016-09-23T14:53:05Z
| 39,666,097 |
<p>OK, I found an answer, so anyone who is having the same problem can use it.</p>
<p>Basically the problem lies in the fact that pdfkit is just a wrapper around the wkhtmltopdf command-line tool.</p>
<p>So all that happens when you call on pdfkit is that it asks wkhtmltopdf to do the work for it. However, on my system, for some reason, the script was able to find the wkhtmltopdf binary when it was run from the terminal, but not when it was converted into an applet using py2app.</p>
<p>What I needed to do was tell the script exactly where wkhtmltopdf was located. First you need to know that information yourself; just type into the terminal:</p>
<pre><code>which wkhtmltopdf
</code></pre>
<p>and you should get a path returned. If you don't then maybe you should think about installing it first, that usually helps.</p>
<p>Then you need to set the configuration to look in that place for the wkhtmltopdf binary, so just add this to your pdfkit script:</p>
<pre><code>config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')
</code></pre>
<p>Except replace the path after the '=' with whatever path the earlier which command returned. Then, every time you call on pdfkit, you need to add this configuration, for example:</p>
<pre><code>normalpdf = pdfkit.from_url('URL', 'test.pdf', configuration=config)
</code></pre>
<p>And that should solve the issue. Hope it helps!</p>
| 0 |
2016-09-23T17:06:05Z
|
[
"python",
"py2app",
"python-pdfkit"
] |
Manipulating dictionaries within lists
| 39,663,866 |
<p>I have a list with dictionaries in it as below:</p>
<pre><code>wordsList = [
{'Definition': 'Allows you to store data with ease' , 'Word': 'Database'},
{'Definition': 'This can either be static or dynamic' , 'Word': 'IP'},
]
</code></pre>
<p>Essentially, what I want to do is:</p>
<ul>
<li>Be able to print each separate definition</li>
<li>Be able to print each separate word</li>
</ul>
<p>And so my question is: How do I do this? I only know how to do this with regular lists/dictionaries, not what I have here.</p>
| -1 |
2016-09-23T14:58:16Z
| 39,663,887 |
<pre><code>for word_def in wordsList:
    print word_def.get("Word")
    print word_def.get("Definition")
    print
</code></pre>
<p>output</p>
<pre><code>Database
Allows you to store data with ease
IP
This can either be static or dynamic
</code></pre>
| 1 |
2016-09-23T14:59:40Z
|
[
"python",
"list",
"dictionary"
] |
Manipulating dictionaries within lists
| 39,663,866 |
<p>I have a list with dictionaries in it as below:</p>
<pre><code>wordsList = [
{'Definition': 'Allows you to store data with ease' , 'Word': 'Database'},
{'Definition': 'This can either be static or dynamic' , 'Word': 'IP'},
]
</code></pre>
<p>Essentially, what I want to do is:</p>
<ul>
<li>Be able to print each separate definition</li>
<li>Be able to print each separate word</li>
</ul>
<p>And so my question is: How do I do this? I only know how to do this with regular lists/dictionaries, not what I have here.</p>
| -1 |
2016-09-23T14:58:16Z
| 39,663,972 |
<p>Essentially, these are "regular" lists/dictionaries.</p>
<p>You must understand, that a list in Python can contain any object, also dicts. Thus, neither the list nor the contained dicts become anyhow "irregular".</p>
<p>You can access anything inside your list/dict like that:</p>
<pre><code>word_def[index][name]
</code></pre>
<p>With appropriate values for index/name.</p>
<p>You can also iterate over the list (as shown by SSNR) and thus grab any of the dictionaries contained and deal with them like ordinary dicts.</p>
<p>You also can get hold of one of the dicts this way:</p>
<pre><code>one_dict = word_def[index]
</code></pre>
<p>Then just access the contents:</p>
<pre><code>value = one_dict[name]
</code></pre>
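<p>For instance, with the <code>wordsList</code> from the question, both access styles look like this:</p>

```python
wordsList = [
    {'Definition': 'Allows you to store data with ease', 'Word': 'Database'},
    {'Definition': 'This can either be static or dynamic', 'Word': 'IP'},
]

print(wordsList[0]['Word'])        # Database
one_dict = wordsList[1]            # grab the second dictionary
print(one_dict['Definition'])      # This can either be static or dynamic
```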
| 1 |
2016-09-23T15:03:34Z
|
[
"python",
"list",
"dictionary"
] |
Write to file bytes and strings
| 39,663,909 |
<p>I have to create files which have some chars and hex value in little-endian encoding. To do encoding, I use:</p>
<pre><code>pack("I", 0x01ddf23a)
</code></pre>
<p>and this give me:</p>
<pre><code>b':\xf2\xdd\x01'
</code></pre>
<p>First problem is that, this give me bytes string which I cannot write to file. Second one is that \x3a is turn to ':'. What I expect, is write to file \x3a\xf2\xdd\x01 as bytes not as chars.</p>
<p>What I tried:</p>
<pre><code>>>> a=0x01ddf23a
>>> str(pack("I", a))
"b':\\xf2\\xdd\\x01'" <= wrong
>>> pack("I", a).hex()
'3af2dd01 <= I need '\x' before each byte
>>> pack("I", a).decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf2 in position 1: invalid continuation byte
</code></pre>
<p>Changing open() from "w" to "wb" force me to write only bytes, but I want to writes lots of strings and few bytes, eg.:</p>
<pre><code>Hello world
^I^M^T^B
End file
</code></pre>
<p>I know I can simple do this:</p>
<pre><code>fs = open("file", "w")
fs.write("Hello world")
fs.write("\x3a\xf2\xdd\x01")
fs.write("End file")
fs.close()
</code></pre>
<p>But this is make my byte value 0x01ddf23a hard to read and there is easy to make some mistake when changing this value in that form.</p>
| 0 |
2016-09-23T15:00:31Z
| 39,664,488 |
<p>You are producing bytes, which can be written to files opened in <em>binary mode</em> without issue. Add <code>b</code> to the file mode when opening and either use <code>bytes</code> string literals or encode your strings to bytes if you need to write other data too:</p>
<pre><code>from struct import pack

with open("file", "wb") as fs:
    fs.write(b"Hello world")  # note, a byte literal!
    fs.write(pack("I", 0x01ddf23a))
    fs.write("End file".encode('ASCII'))  # encoded string to bytes
</code></pre>
<p>The alternative would be to decode your binary packed data to a text string first, but since packed data does not, in fact, contain decodable text, that approach would require contortions to <em>force</em> the binary data to be decodable and encodable again, which only works if your file encoding was set to Latin-1 and severely limits what actual text you could add.</p>
<p>A <code>bytes</code> representation will always try to show <em>printable characters</em> where possible. The byte <code>\x3a</code> is also the correct ASCII value for the <code>':'</code> character, so in a <code>bytes</code> representation the latter is preferred over using the <code>\x3a</code> escape sequence. The <em>correct value</em> is present in the <code>bytes</code> value and would be written to the file entirely correctly:</p>
<pre><code>>>> b'\x3a'
b':'
>>> b'\x3a' == b':'
True
>>> b':'[0]
58
>>> b'\x3a'[0]
58
>>> hex(58)
'0x3a'
</code></pre>
| 1 |
2016-09-23T15:28:49Z
|
[
"python",
"python-3.x",
"file-io",
"byte"
] |
xlwings: How to acquire the value of a cell and show a message box
| 39,663,923 |
<p>I'm using xlwings on Windows.
I want to acquire the value of a cell and display it in a message box.</p>
<pre><code>import xlwings as xw
import win32ui

def msg_box():
    wb = xw.Book.caller()
    win32ui.MessageBox(xw.sheets[0].range(4, 1).value, "MessageBox")
</code></pre>
<p>However, Python stops responding. Could you tell me how to make this work? Thank you.</p>
| 0 |
2016-09-23T15:01:26Z
| 39,670,133 |
<p>On Windows, something like this should work as a workaround:</p>
<pre><code>import xlwings as xw
import win32api

def msg_box():
    wb = xw.Book.caller()
    win32api.MessageBox(wb.app.hwnd, "YourMessage")
</code></pre>
| 0 |
2016-09-23T21:58:42Z
|
[
"python",
"winapi",
"messagebox",
"xlwings"
] |
Splitting up a list with all values sitting in the same index in Python
| 39,664,024 |
<p>I'm pretty new to Python.
I have a list which looks like the following:</p>
<pre><code>list = [('foo,bar,bash',)]
</code></pre>
<p>I grabbed it from an SQL table (someone created the most rubbish SQL table!), and I can't adjust it. This is literally the only format I can pull it in. I need to chop it up. I can't split it by index:</p>
<pre><code>print list[0]
</code></pre>
<p>because that just literally gives me: </p>
<pre><code>[('foo,bar,bash',)]
</code></pre>
<p>How can I split this up? I want to split it up and write it into another list. </p>
<p>Thank you. </p>
| 0 |
2016-09-23T15:06:26Z
| 39,664,051 |
<p><code>list = [('foo,bar,bash',)]</code> is a list which contains a tuple with 1 element. You should also use a different variable name instead of <code>list</code> because <code>list</code> is a python built in.</p>
<p>You can split that one element using <code>split</code>:</p>
<pre><code>lst = [('foo,bar,bash',)]
print lst[0][0].split(',')
</code></pre>
<p>Output:</p>
<p><code>['foo', 'bar', 'bash']</code></p>
<hr>
<p>If the tuple contains more than one element, you can loop through it:</p>
<pre><code>lst = [('foo,bar,bash','1,2,3')]

for i in lst[0]:
    print i.split(',')
</code></pre>
| 1 |
2016-09-23T15:07:53Z
|
[
"python",
"sql",
"list"
] |
selenium Action error
| 39,664,031 |
<p>I am trying to automate a mouse movement to an element, and I see that the method for doing so is something like: </p>
<pre><code>Actions action = new Actions(driver)
action.moveToElement(hoverElement)
</code></pre>
<p>However when I run this code I get a syntax error, and Pycharm is telling me Actions is an unreselved reference. I have also tried:</p>
<pre><code>import org.openqa.selenium.interactions.Actions
</code></pre>
<p>but I still get an error "no module named org." I am stuck, this code seems to work for everyone else, what's going wrong? Thanks.</p>
| 0 |
2016-09-23T15:06:48Z
| 39,664,399 |
<p>In Python, it is not <code>Actions</code>, it is <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains" rel="nofollow"><code>ActionChains</code></a> - imported this way:</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
</code></pre>
<p>Sample usage:</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
actions = ActionChains(driver)
actions.move_to_element(elm).perform()
</code></pre>
| 2 |
2016-09-23T15:24:50Z
|
[
"python",
"selenium"
] |
Is there any equivalent to the Perl regexes' \K backslash sequence in Python?
| 39,664,058 |
<p>Perl's regular expressions have the <a href="http://perldoc.perl.org/perlrebackslash.html#Misc" rel="nofollow"><code>\K</code></a> backslash sequence:</p>
<blockquote>
<p><strong>\K</strong><br>
This appeared in perl 5.10.0. Anything matched left of <code>\K</code> is not
included in <code>$&</code>, and will not be replaced if the pattern is used in a
substitution. This lets you write <code>s/PAT1 \K PAT2/REPL/x</code> instead of
<code>s/(PAT1) PAT2/${1}REPL/x</code> or <code>s/(?<=PAT1) PAT2/REPL/x</code>.</p>
<p>Mnemonic: <em>Keep</em>.</p>
</blockquote>
<p>Is there anything equivalent in Python?</p>
| 4 |
2016-09-23T15:08:19Z
| 39,664,218 |
<p>The proposed replacement to the Python <code>re</code> module, <a href="https://pypi.python.org/pypi/regex" rel="nofollow">available from <code>pypi</code> under the name <code>regex</code></a>, has this feature. Its canonical source repository and bug tracker are <a href="https://bitbucket.org/mrabarnett/mrab-regex" rel="nofollow">in bitbucket</a>.</p>
<p>This was added in late 2015, in <a href="https://bitbucket.org/mrabarnett/mrab-regex/issues/151/request-k" rel="nofollow">ticket 151</a>; taking an example of its use from that ticket:</p>
<blockquote>
<pre><code>>>> import regex as mrab
>>> bsk = mrab.compile(r'start=>\K.*')
>>> print(bsk.search('boring stuff start=>interesting stuff'))
<regex.Match object; span=(20, 37), match='interesting stuff'>
</code></pre>
</blockquote>
| 4 |
2016-09-23T15:16:24Z
|
[
"python",
"regex",
"perl"
] |
cython: run prange sequentially (profiling/debugging)
| 39,664,080 |
<p>I have a meanwhile pretty large code base written mostly in Cython. Meanwhile, I've started parallelizing it by replacing "range"s by "prange"s. (So far more or less at random, as I still have to develop a gut feeling as to where I really profit from this and where not so much.)</p>
<p>One big question to which I haven't found an answer arose when profiling/debugging:</p>
<p><strong>Is there a way to turn off parallelization (i.e. run "prange"s sequentially) at either a global or local level?</strong></p>
<p>A global switch would be most convenient, but even if there's a local solution that should at least allow me to implement a global switch myself.</p>
<p>Just in case it's relevant: I'm using Python 3 (currently 3.4.5).</p>
| 0 |
2016-09-23T15:09:36Z
| 39,673,882 |
<p>There's at least three very easy ways:</p>
<ol>
<li><p><code>prange</code> takes a <code>num_threads</code> argument. Set that equal to 1. This gives you local control.</p></li>
<li><p>If you compile without OpenMP it'll still run, but not in parallel. Remove <code>extra_compile_args=['-fopenmp']</code> and <code>extra_link_args=['-fopenmp']</code> (or equivalent) from setup.py. (OpenMP is implemented in terms of <code>#pragma</code>s, so it just gets ignored without compiler support.) This gives you an easy global switch.</p></li>
<li><p>OpenMP <a href="https://software.intel.com/en-us/node/528373" rel="nofollow">defines an environmental variable for the maximum number of threads</a> <code>OMP_NUM_THREADS</code>. Set that to 1 in your operating system and then run your cython program. This gives a global switch (without recompiling)</p></li>
</ol>
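<p>A minimal Cython sketch of the first option (function and variable names are illustrative):</p>

```cython
from cython.parallel import prange

def total(double[:] x, int threads):
    cdef double s = 0
    cdef Py_ssize_t i
    # num_threads=1 forces this loop to run sequentially;
    # larger values run it in parallel (when compiled with OpenMP)
    for i in prange(x.shape[0], nogil=True, num_threads=threads):
        s += x[i]
    return s
```

Calling <code>total(arr, 1)</code> during profiling and <code>total(arr, 4)</code> in production then needs no recompilation.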
| 2 |
2016-09-24T07:45:02Z
|
[
"python",
"python-3.x",
"debugging",
"parallel-processing",
"cython"
] |
getting weighted average then grouping in pandas
| 39,664,195 |
<p>I have the following dataframe. </p>
<pre><code> weight x value
0 5 -8.7 2
1 9 -8.7 3
2 12 -21.4 10
3 32 -21.4 15
</code></pre>
<p>I need to get weighted average of the value and grouped on x. Result will be:</p>
<p>-8.7: (5/(5+9) * 2) + ((9/14) * 3) = 2.64</p>
<p>-21.4: ((12/44) * 10) + ((32/44) * 15) = 13.63</p>
<pre><code> x weighted_value
0 -8.7 2.64
1 -21.4 13.63
</code></pre>
| 0 |
2016-09-23T15:15:19Z
| 39,664,322 |
<p><code>numpy.average</code> admits a <code>weights</code> argument:</p>
<pre><code>import io
import numpy as np
import pandas as pd
data = io.StringIO('''\
weight x value
0 5 -8.7 2
1 9 -8.7 3
2 12 -21.4 10
3 32 -21.4 15
''')
df = pd.read_csv(data, delim_whitespace=True)
df.groupby('x').apply(lambda g: np.average(g['value'], weights=g['weight']))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>x
-21.4 13.636364
-8.7 2.642857
dtype: float64
</code></pre>
| 1 |
2016-09-23T15:21:09Z
|
[
"python",
"pandas",
"numpy"
] |
getting weighted average then grouping in pandas
| 39,664,195 |
<p>I have the following dataframe. </p>
<pre><code> weight x value
0 5 -8.7 2
1 9 -8.7 3
2 12 -21.4 10
3 32 -21.4 15
</code></pre>
<p>I need to get weighted average of the value and grouped on x. Result will be:</p>
<p>-8.7: (5/(5+9) * 2) + ((9/14) * 3) = 2.64</p>
<p>-21.4: ((12/44) * 10) + ((32/44) * 15) = 13.63</p>
<pre><code> x weighted_value
0 -8.7 2.64
1 -21.4 13.63
</code></pre>
| 0 |
2016-09-23T15:15:19Z
| 39,664,553 |
<p>Here's a vectorized approach using NumPy tools -</p>
<pre><code># Get weighted averages and corresponding unique x's
unq,ids = np.unique(df.x,return_inverse=True)
weight_avg = np.bincount(ids,df.weight*df.value)/np.bincount(ids,df.weight)
# Store into a dataframe
df_out = pd.DataFrame(np.column_stack((unq,weight_avg)),columns=['x','wghts'])
</code></pre>
<p>Sample run -</p>
<pre><code>In [97]: df
Out[97]:
weight x value
0 5 -8.7 2
1 9 -8.7 3
2 12 -21.4 10
3 32 -21.4 15
In [98]: df_out
Out[98]:
x wghts
0 -21.4 13.636364
1 -8.7 2.642857
</code></pre>
| 0 |
2016-09-23T15:31:59Z
|
[
"python",
"pandas",
"numpy"
] |
Regex Search in Python: Exclude port 22 lines with ' line 22 '
| 39,664,230 |
<p>My current regex search in Python looks for lines with <code>' 22 '</code>, but I would like to exclude lines that have <code>' line 22 '</code>. How could I express this in <code>Regex</code>? Would it be <code>'.*(^line) 22 .*$'</code>?</p>
<pre><code>import re
sshRegexString='.* 22 .*$'
sshRegexExpression=re.compile(sshRegexString)
</code></pre>
| 1 |
2016-09-23T15:16:53Z
| 39,670,306 |
<p>Your current requirement, to find a line that <em>contains</em> <code> 22 </code> but does not contain <code>line 22 </code>, can be implemented without the help of a regex.</p>
<p>Just check whether these texts are <code>in</code> or <code>not in</code> the string inside a list comprehension. Here is a <a href="http://ideone.com/o2jFU7" rel="nofollow">Python demo</a> (I assume you have the lines in a list, but it can be adjusted to handle lines read from a file one by one):</p>
<pre><code>lines = ['Some text 1 line 22 here', 'Some text 2 Text2 22 here', 'Some text 3 Text3 22 here']
good = [s for s in lines if ' 22 ' in s and 'line 22 ' not in s]
print(good) # the first lines[0] is not printed!
</code></pre>
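<p>If you do still want a single regex for it, a negative lookbehind achieves the same filtering (a sketch, using the same sample data):</p>

```python
import re

lines = ['Some text 1 line 22 here', 'Some text 2 Text2 22 here', 'Some text 3 Text3 22 here']
# match ' 22 ' only when it is not immediately preceded by 'line'
ssh_regex = re.compile(r'(?<!line) 22 ')
good = [s for s in lines if ssh_regex.search(s)]
print(good)  # the 'line 22' entry is filtered out
```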
| 0 |
2016-09-23T22:16:48Z
|
[
"python",
"regex"
] |
run a while loop until a CSS selector is present on a web page
| 39,664,323 |
<p>I need to run this loop until ".loadMore" css selector is present on webpage:</p>
<pre><code>while ec.presence_of_element_located('.loadMore'):
    element_number = 25 * i
    wait = WebDriverWait(driver, time1)
    sub_button = (by, hook + str(element_number))
    wait.until(ec.presence_of_element_located(sub_button))
    driver.find_element_by_css_selector(button).click()
    time.sleep(5)  # Makes the page wait for the element to change
    i += 1
</code></pre>
| 0 |
2016-09-23T15:21:15Z
| 39,665,004 |
<p>Write a helper function to check for element presence (note that it must be called on the driver instance, not on the <code>webdriver</code> module):</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException

def check_element_presence(driver, selector):
    try:
        driver.find_element_by_css_selector(selector)
    except NoSuchElementException:
        return False
    return True
</code></pre>
<p>Now run it like:</p>
<pre><code>while check_element_presence(driver, '.loadMore'):
    ...
</code></pre>
| -1 |
2016-09-23T15:57:33Z
|
[
"python",
"python-2.7",
"selenium",
"chrome-web-driver"
] |
Tensorflow CIFAR10 code analysis
| 39,664,345 |
<p>I'm trying to figure out, a Tensorflow CIFAR10 tutorial and currently I can't understand <a href="https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/models/image/cifar10/cifar10.py#L245" rel="nofollow">line #245</a>, namely why is the shape for weight [dim, 384]? Is 384 a hyperparameter or is it somehow calculated?</p>
| -1 |
2016-09-23T15:21:58Z
| 39,669,418 |
<p>Basically it was an arbitrary choice that worked out with their batch size and knowledge about the dataset.</p>
<p>So CIFAR images are 32 * 32 * 3, and after the convolution they have 32 * 32 * 64 features (64 filters); the max-pool then halves the spatial size, giving 16 * 16 * 64. That tensor is reshaped to [batch_size, dim], with a batch size of 128. Then weights of shape [dim, 384] project it up to 384 features.</p>
<p>Feel free to use another number but make sure you change the next layers as well. It's just an example CNN.</p>
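<p>To see that 384 is a free choice while the input dimension is fixed by the earlier layers, here is a shape sketch in NumPy (the feature-map size used here is illustrative, not the tutorial's exact value):</p>

```python
import numpy as np

batch_size = 128
pool2 = np.zeros((batch_size, 6, 6, 64))  # output of the conv/pool stack (illustrative size)
reshape = pool2.reshape(batch_size, -1)   # flatten each image: dim = 6*6*64 = 2304
dim = reshape.shape[1]

weights = np.zeros((dim, 384))            # 384 is simply the chosen width of this layer
local3 = reshape.dot(weights)             # shape (128, 384)
assert local3.shape == (batch_size, 384)
```

Whatever the conv/pool stack produces determines <code>dim</code>; only the 384 is up to you.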
| 1 |
2016-09-23T20:55:30Z
|
[
"python",
"image-processing",
"tensorflow",
"conv-neural-network"
] |
Indexing an array with a logical array of the same shape in Python
| 39,664,432 |
<p>I would like to be able to do:</p>
<pre><code>a = [1,2,3]
a[[True, True, False]]
>> array([1,2])
</code></pre>
<p>I just can't find the simple way how... Thanks!</p>
| 1 |
2016-09-23T15:26:19Z
| 39,664,474 |
<p>There's <a href="https://docs.python.org/3.1/library/itertools.html#itertools.compress" rel="nofollow"><code>itertools.compress</code></a>:</p>
<pre><code>>>> from itertools import compress
>>> a = [1,2,3]
>>> mask = [True, True, False]
>>> list(compress(a, mask))
[1, 2]
</code></pre>
<hr>
<p>If you're using <em>numpy</em>, you can slice directly with the mask:</p>
<pre><code>>>> np.array(a)[np.array(mask)]
>>> array([1, 2])
</code></pre>
| 2 |
2016-09-23T15:28:08Z
|
[
"python"
] |
Indexing an array with a logical array of the same shape in Python
| 39,664,432 |
<p>I would like to be able to do:</p>
<pre><code>a = [1,2,3]
a[[True, True, False]]
>> array([1,2])
</code></pre>
<p>I just can't find the simple way how... Thanks!</p>
| 1 |
2016-09-23T15:26:19Z
| 39,664,548 |
<p>You can use <code>zip</code> and list comprehension.</p>
<pre><code>a = [1,2,3]
b = [True, True, False]
c = [i for i,j in zip(a,b) if j]
</code></pre>
| 0 |
2016-09-23T15:31:41Z
|
[
"python"
] |
Indexing an array with a logical array of the same shape in Python
| 39,664,432 |
<p>I would like to be able to do:</p>
<pre><code>a = [1,2,3]
a[[True, True, False]]
>> array([1,2])
</code></pre>
<p>I just can't find the simple way how... Thanks!</p>
| 1 |
2016-09-23T15:26:19Z
| 39,664,560 |
<p>You can play with built-in <code>filter</code> and <code>map</code></p>
<pre><code>k = [True, True, False, True]
l = [1,2,3,4]
print map(lambda j: j[1], filter(lambda i: i[0], zip(k,l)))
</code></pre>
| 0 |
2016-09-23T15:32:24Z
|
[
"python"
] |
Indexing an array with a logical array of the same shape in Python
| 39,664,432 |
<p>I would like to be able to do:</p>
<pre><code>a = [1,2,3]
a[[True, True, False]]
>> array([1,2])
</code></pre>
<p>I just can't find the simple way how... Thanks!</p>
| 1 |
2016-09-23T15:26:19Z
| 39,664,612 |
<p>If <code>a</code> and the mask are truly Python arrays, then you can do a simple list comprehension:</p>
<pre><code>arr = [1,2,3]
mask = [True, True, False]
[a for a, m in zip(arr, mask) if m]
</code></pre>
<p>If you are OK with additional imports, you can use @moses-koledoye's suggestion of using <code>itertools.compress</code>.</p>
<p>If on the other hand you are using <code>numpy</code>, as the final output of <code>array([1,2])</code> suggests, you can just do the indexing directly:</p>
<pre><code>import numpy as np
arr = np.array([1, 2, 3])
mask = np.array([True, True, False])
arr[mask]
</code></pre>
<p>Note that mask has to be an actual <code>np.boolean</code> array. You can not just use a Python list like <code>[True, True, False]</code>. This is because <code>np.array.__getitem__</code> checks if the input is exactly another array. If not, the input is converted to integers, so you end up effectively indexing with <code>[1, 1, 0]</code> instead of a mask. You can get a lot more details on this particular tangential issue here: <a href="http://stackoverflow.com/a/39168021/2988730">http://stackoverflow.com/a/39168021/2988730</a></p>
| 2 |
2016-09-23T15:34:49Z
|
[
"python"
] |
using django celery beat locally I get error 'PeriodicTask' object has no attribute '_default_manager'
| 39,664,493 |
<p>Using django celery beat locally I get the error 'PeriodicTask' object has no attribute '_default_manager'. I am using Django 1.10. When I schedule a task it works. But then a few moments later a red error traceback like the following occurs:</p>
<pre><code>[2016-09-23 11:08:34,962: INFO/Beat] Writing entries...
[2016-09-23 11:08:34,965: INFO/Beat] Writing entries...
[2016-09-23 11:08:34,965: INFO/Beat] Writing entries...
[2016-09-23 11:08:34,966: ERROR/Beat] Process Beat
Traceback (most recent call last):
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/beat.py", line 553, in run
self.service.start(embedded_process=True)
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/beat.py", line 486, in start
self.scheduler._do_sync()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/beat.py", line 276, in _do_sync
self.sync()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/djcelery/schedulers.py", line 209, in sync
self.schedule[name].save()
File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/djcelery/schedulers.py", line 98, in save
obj = self.model._default_manager.get(pk=self.model.pk)
AttributeError: 'PeriodicTask' object has no attribute '_default_manager'
</code></pre>
<p>After this happens the next schedule won't run unless I Ctrl+C out of the terminal and start it again. I saw on GitHub that this may be because I am using Django 1.10. I have already pushed this to my Heroku server. How can I fix this issue? The GitHub post said he fixed it by doing this:</p>
<pre><code>Model = type(self.model)
obj = Model._default_manager.get(pk=self.model.pk)
</code></pre>
<p>I was willing to try this, but I don't know where to put it, and I don't want to cause a bigger unforeseen issue. What are my options? Am I supposed to manually go inside my remote app and reset it every time it runs? That's unfeasible and defeats the purpose of task automation.</p>
| 0 |
2016-09-23T15:29:04Z
| 39,665,447 |
<p>I figured it out. At line 98 in schedulers.py it was</p>
<pre><code>obj = self.model._default_manager.get(pk=self.model.pk)
</code></pre>
<p>so a line above it I added</p>
<pre><code>Model = type(self.model)
</code></pre>
<p>and changed </p>
<pre><code>obj = self.model._default_manager.get(pk=self.model.pk)
</code></pre>
<p>to </p>
<pre><code>obj = Model._default_manager.get(pk=self.model.pk)
</code></pre>
<p>so completed it looks like this</p>
<pre><code>98 Model = type(self.model)
99 obj = Model._default_manager.get(pk=self.model.pk)
</code></pre>
| 0 |
2016-09-23T16:24:17Z
|
[
"python",
"django",
"heroku",
"redis",
"scheduled-tasks"
] |
How can I add to the initial definition of a python class inheriting from another class?
| 39,664,509 |
<p>I'm trying to define <code>self.data</code> inside a class inheriting from a class</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"

class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>But I ran into an issue.</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"
</code></pre>
<p>So I have the beginning class here, which is imported from elsewhere, and let's say that the class is a universal one so I can't modify the original at all.</p>
<p>In the original, the instance is referred to as "<code>self</code>" inside the class, and it is defined as self inside the definition <code>__init__</code>.</p>
<pre><code>class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>So if I wanted to inherit from the class <code>Object</code>, but define self.data inside <code>New_Object</code>, I thought I would have to define <code>__init__</code> in <code>New_Object</code>, but this overrides the <code>__init__</code> from <code>Object</code>.</p>
<p>Is there any way I could do this without copypasting the <code>__init__</code> from <code>Object</code>?</p>
| 2 |
2016-09-23T15:29:43Z
| 39,664,564 |
<p>You use <code>super</code> to call the original implementation.</p>
<pre><code>class New_Object(Object):
    def __init__(self):
        super(New_Object, self).__init__()
        self.info = 'whatever'
</code></pre>
| 3 |
2016-09-23T15:32:33Z
|
[
"python",
"class",
"inheritance",
"instance"
] |
How can I add to the initial definition of a python class inheriting from another class?
| 39,664,509 |
<p>I'm trying to define <code>self.data</code> inside a class inheriting from a class</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"

class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>But I ran into an issue.</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"
</code></pre>
<p>So I have the beginning class here, which is imported from elsewhere, and let's say that the class is a universal one so I can't modify the original at all.</p>
<p>In the original, the instance is referred to as "<code>self</code>" inside the class, and it is defined as self inside the definition <code>__init__</code>.</p>
<pre><code>class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>So if I wanted to inherit from the class <code>Object</code>, but define self.data inside <code>New_Object</code>, I thought I would have to define <code>__init__</code> in <code>New_Object</code>, but this overrides the <code>__init__</code> from <code>Object</code>.</p>
<p>Is there any way I could do this without copypasting the <code>__init__</code> from <code>Object</code>?</p>
| 2 |
2016-09-23T15:29:43Z
| 39,664,566 |
<p>That's what <a href="https://docs.python.org/2/library/functions.html#super" rel="nofollow"><code>super</code></a> is for:</p>
<pre><code>class NewObject(Object):
    def __init__(self):
        super(NewObject, self).__init__()
        # self.data exists now, and you can modify it if necessary
</code></pre>
| 2 |
2016-09-23T15:32:36Z
|
[
"python",
"class",
"inheritance",
"instance"
] |
How can I add to the initial definition of a python class inheriting from another class?
| 39,664,509 |
<p>I'm trying to define <code>self.data</code> inside a class inheriting from a class</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"

class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>But I ran into an issue.</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"
</code></pre>
<p>So I have the beginning class here, which is imported from elsewhere, and let's say that the class is a universal one so I can't modify the original at all.</p>
<p>In the original, the instance is referred to as "<code>self</code>" inside the class, and it is defined as self inside the definition <code>__init__</code>.</p>
<pre><code>class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>So if I wanted to inherit from the class <code>Object</code>, but define self.data inside <code>New_Object</code>, I thought I would have to define <code>__init__</code> in <code>New_Object</code>, but this overrides the <code>__init__</code> from <code>Object</code>.</p>
<p>Is there any way I could do this without copypasting the <code>__init__</code> from <code>Object</code>?</p>
| 2 |
2016-09-23T15:29:43Z
| 39,664,645 |
<p>You can use <code>super().__init__()</code> to call <code>Object.__init__()</code> from <code>New_Object.__init__()</code>.</p>
<p>What you would do:</p>
<pre><code>class Object:
    def __init__(self):
        print("Object init")
        self.data = "1234"

class New_Object(Object):
    def __init__(self):
        print("calling super")
        super().__init__()
        print("data is now", self.data)
        self.data = self.data.split("3")

o = New_Object()
# calling super
# Object init
# data is now 1234
</code></pre>
<p>Note that you do not have to give any arguments to <code>super()</code>, as long as you are using Python 3.</p>
| 1 |
2016-09-23T15:36:40Z
|
[
"python",
"class",
"inheritance",
"instance"
] |
How can I add to the initial definition of a python class inheriting from another class?
| 39,664,509 |
<p>I'm trying to define <code>self.data</code> inside a class inheriting from a class</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"

class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>But I ran into an issue.</p>
<pre><code>class Object():
    def __init__(self):
        self.data = "1234"
</code></pre>
<p>So I have the beginning class here, which is imported from elsewhere, and let's say that the class is a universal one so I can't modify the original at all.</p>
<p>In the original, the instance is referred to as "<code>self</code>" inside the class, and it is defined as self inside the definition <code>__init__</code>.</p>
<pre><code>class New_Object(Object):
    # Code changing self.data here
</code></pre>
<p>So if I wanted to inherit from the class <code>Object</code>, but define self.data inside <code>New_Object</code>, I thought I would have to define <code>__init__</code> in <code>New_Object</code>, but this overrides the <code>__init__</code> from <code>Object</code>.</p>
<p>Is there any way I could do this without copypasting the <code>__init__</code> from <code>Object</code>?</p>
| 2 |
2016-09-23T15:29:43Z
| 39,664,748 |
<p>The answer is that you call the superclass's <code>__init__</code> explicitly during the subclass's <code>__init__</code>. This can be done either of two ways:</p>
<pre><code>Object.__init__(self) # requires you to name the superclass explicitly
</code></pre>
<p>or</p>
<pre><code>super(NewObject, self).__init__() # requires you to name the subclass explicitly
</code></pre>
<p>The latter also requires you to ensure that you're using "new-style" classes: in Python 3 that's always the case, but in Python 2 you must be sure to inherit from the builtin <code>object</code> class. In Python 3 it can actually be expressed even more simply:</p>
<pre><code>super().__init__()
</code></pre>
<p>Personally, in most of my code the "disadvantage" of having to name the superclass explicitly is no disadvantage at all, and <code>Object.__init__()</code> lends transparency since it makes it absolutely clear what is being called. This is because most of my code is single-inheritance only. The <code>super</code> route comes into its own when you have multiple inheritance. See <a href="http://stackoverflow.com/questions/222877/how-to-use-super-in-python">How to use 'super' in Python?</a></p>
<p>Python 2 example:</p>
<pre><code>class Object(object):
    def __init__(self):
        self.data = "1234"

class NewObject(Object):
    def __init__(self):
        # subclass-specific stuff
        super(NewObject, self).__init__()
        # more subclass-specific stuff
</code></pre>
| 1 |
2016-09-23T15:42:25Z
|
[
"python",
"class",
"inheritance",
"instance"
] |
Group DataFrame by period of time with aggregation
| 39,664,657 |
<p>I am using Pandas to structure and process Data. This is my DataFrame:</p>
<p><a href="http://i.stack.imgur.com/delaC.png" rel="nofollow"><img src="http://i.stack.imgur.com/delaC.png" alt="enter image description here"></a></p>
<p>I grouped many datetimes by minute and I did an aggregation in order to have the sum of 'bitrate' scores by minute.
This was my code to have this Dataframe:</p>
<pre><code>def aggregate_data(data):
def delete_seconds(time):
return (datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S')).replace(second=0)
data['new_time'] = data['beginning_time'].apply(delete_seconds)
df = (data[['new_time', 'bitrate']].groupby(['new_time'])).aggregate(np.sum)
return df
</code></pre>
<p>Now I want to do a similar thing with 5-minute buckets: group my datetimes by 5 minutes and take a mean.
Something like this (this doesn't work, of course!):</p>
<pre><code>df.groupby([df.index.map(lambda t: t.5minute)]).aggregate(np.mean)
</code></pre>
<p>Ideas ? Thx !</p>
| 0 |
2016-09-23T15:37:06Z
| 39,664,924 |
<p>use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow">resample</a>. </p>
<p><code>
df.resample('5Min').sum()
</code></p>
<p>This assumes your index is properly set as a <code>DatetimeIndex</code>. </p>
<p>You can also use the <code>TimeGrouper</code>, as resampling is really just a groupby operation on time buckets. </p>
<p><code>
df.groupby(pd.TimeGrouper('5Min')).sum()
</code></p>
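<p>A self-contained sketch with synthetic data (note: in current pandas, <code>pd.TimeGrouper</code> has been replaced by <code>pd.Grouper</code>; both formulations below assume the index is a <code>DatetimeIndex</code>):</p>

```python
import pandas as pd

# synthetic per-minute bitrate sums standing in for the question's DataFrame
idx = pd.date_range('2016-09-23 10:00', periods=10, freq='min')
df = pd.DataFrame({'bitrate': range(10)}, index=idx)

by_resample = df.resample('5min').mean()                  # 5-minute buckets, averaged
by_grouper = df.groupby(pd.Grouper(freq='5min')).mean()   # equivalent groupby form

print(by_resample)
```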
| 1 |
2016-09-23T15:53:18Z
|
[
"python",
"pandas",
"group-by",
"aggregate",
"aggregation"
] |
pandas pivot table aggfunc troubleshooting
| 39,664,739 |
<p>This <code>DataFrame</code> has two columns, both are object type.</p>
<pre><code> Dependents Married
0 0 No
1 1 Yes
2 0 Yes
3 0 Yes
4 0 No
</code></pre>
<p>I want to aggregate 'Dependents' based on 'Married'.</p>
<pre><code>table = df.pivot_table(
values='Dependents',
index='Married',
aggfunc = lambda x: x.map({'0':0,'1':1,'2':2,'3':3}).mean())
</code></pre>
<p>This works, however, surprisingly, the following doesn't:</p>
<pre><code>table = df.pivot_table(values = 'Dependents',
index = 'Married',
aggfunc = lambda x: x.map(int).mean())
</code></pre>
<p>It will produce a <code>None</code> instead.</p>
<p>Can anyone help explain?</p>
| 0 |
2016-09-23T15:41:53Z
| 39,670,029 |
<p>Both examples of code provided in your question work. However, they are not the idiomatic way to achieve what you want to do -- particularly the first one.</p>
<p>I think this is the proper way to obtain the expected behavior.</p>
<pre><code># Test data
df = DataFrame({'Dependents': ['0', '1', '0', '0', '0'],
'Married': ['No', 'Yes', 'Yes', 'Yes', 'No']})
# Converting object to int
df['Dependents'] = df['Dependents'].astype(int)
# Computing the mean by group
df.groupby('Married').mean()
Dependents
Married
No 0.00
Yes 0.33
</code></pre>
<p>However, the following code works.</p>
<pre><code>df.pivot_table(values = 'Dependents', index = 'Married',
aggfunc = lambda x: x.map(int).mean())
</code></pre>
<p>It is equivalent (and more readable) of converting to <code>int</code> with <code>map</code> before pivoting data.</p>
<pre><code>df['Dependents'] = df['Dependents'].map(int)
df.pivot_table(values = 'Dependents', index = 'Married')
</code></pre>
<h1>Edit</h1>
<p>If you have a messy <code>DataFrame</code>, you can use <code>to_numeric</code> with the <code>errors</code> parameter set to <code>coerce</code>.</p>
<blockquote>
<p>If <code>coerce</code>, then invalid parsing will be set as <code>NaN</code></p>
</blockquote>
<p>Here is the <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/2959/data-types/10052/changing-dtypes#t=201609241949239596277">link</a> to the Stack Overflow documentation concerning the change of types.</p>
<pre><code># Test data
df = DataFrame({'Dependents': ['0', '1', '2', '3+', 'NaN'],
'Married': ['No', 'Yes', 'Yes', 'Yes', 'No']})
df['Dependents'] = pd.to_numeric(df['Dependents'], errors='coerce')
print(df)
Dependents Married
0 0.0 No
1 1.0 Yes
2 2.0 Yes
3 NaN Yes
4 NaN No
print(df.groupby('Married').mean())
Dependents
Married
No 0.0
Yes 1.5
</code></pre>
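<p>Put together, the coerce-then-aggregate approach runs end to end as (same toy data):</p>

```python
import pandas as pd

df = pd.DataFrame({'Dependents': ['0', '1', '2', '3+', 'NaN'],
                   'Married': ['No', 'Yes', 'Yes', 'Yes', 'No']})
# '3+' and 'NaN' fail to parse, become NaN, and are then skipped by mean()
df['Dependents'] = pd.to_numeric(df['Dependents'], errors='coerce')
means = df.groupby('Married')['Dependents'].mean()
print(means)
```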
| 0 |
2016-09-23T21:48:00Z
|
[
"python",
"pandas",
"pivot-table"
] |
scatter update with animation
| 39,664,741 |
<p>I am trying to do a real time scatter-kind plot using matplotlib's animation module but I'm quite a newbie with it. My objective is to update the plot whenever I receive the data I want to plot, so that any time data is received, previous points disappear and the new ones are plotted. </p>
<p>My program can be written like this if I substitute the data receiving with a endless loop and a random generation of data:</p>
<pre><code>fig = plt.figure()
skyplot = fig.add_subplot(111, projection='polar')
skyplot.set_ylim(90) # sets radius of the circle to maximum elevation
skyplot.set_theta_zero_location("N") # sets 0(deg) to North
skyplot.set_theta_direction(-1) # sets plot clockwise
skyplot.set_yticks(range(0, 90, 30)) # sets 3 concentric circles
skyplot.set_yticklabels(map(str, range(90, 0, -30))) # reverse labels
plt.ion()
while(1):
azimuths = random.sample(range(360), 8)
elevations = random.sample(range(90), 8)
colors = numpy.random.rand(3,1)
sat_plot = satellite()
ani= animation.FuncAnimation(fig, sat_plot.update, azimuths, elevations, colors)
class satellite:
def __init__(self):
self.azimuths = []
self.elevations = []
self.colors = []
self.scatter = plt.scatter(self.azimuths, self.elevations, self.colors)
def update(self, azimuth, elevation, colors):
self.azimuths = azimuth
self.elevations = elevation
return self.scatter
</code></pre>
<p>Right now, I'm getting the following error:</p>
<pre><code>> Traceback (most recent call last):
File "./skyplot.py", line 138, in <module>
ani= animation.FuncAnimation(fig, sat_plot.update, azimuths, elevations, colors)
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 442, in __init__
TimedAnimation.__init__(self, fig, **kwargs)
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 304, in __init__
Animation.__init__(self, fig, event_source=event_source, *args, **kwargs)
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 53, in __init__
self._init_draw()
File "/usr/lib/pymodules/python2.7/matplotlib/animation.py", line 469, in _init_draw
self._drawn_artists = self._init_func()
TypeError: 'list' object is not callable
</code></pre>
<p>Can anyone tell me what I'm doing wrong and how could I do this?</p>
<p>Thanks in advance</p>
| 0 |
2016-09-23T15:42:00Z
| 39,669,252 |
<p>I think you do not need an animation. You need a simple endless loop (<code>while</code> for example) with plot update in a thread. I can propose something like this:</p>
<pre><code>import threading,time
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
data = np.random.uniform(0, 1, (5, 3))
plt.scatter(data[:, 0], data[:,1],data[:, 2]*50)
def getDataAndUpdate():
while True:
"""update data and redraw function"""
new_data = np.random.uniform(0, 1, (5, 3))
time.sleep(1)
plt.clf()
plt.scatter(new_data[:, 0], new_data[:, 1], new_data[:, 2] * 50)
plt.draw()
t = threading.Thread(target=getDataAndUpdate)
t.start()
plt.show()
</code></pre>
<p>The result is an animated-like figure with scatterplot.</p>
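<p>If you would rather stay with matplotlib's animation API instead of a thread, the second argument to <code>FuncAnimation</code> must be a callable that receives the frame value; extra data goes through <code>fargs</code> or closures. A minimal sketch (the <code>Agg</code> backend is chosen here only so it runs headless; this is an illustration, not the question's exact code):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
scat = ax.scatter([], [])

def update(frame):
    # regenerate and replace all points on each frame
    azimuths = np.random.uniform(0, 2 * np.pi, 8)
    elevations = np.random.uniform(0, 90, 8)
    scat.set_offsets(np.column_stack([azimuths, elevations]))
    return scat,

ani = animation.FuncAnimation(fig, update, frames=5, interval=1000, blit=False)
```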
| 0 |
2016-09-23T20:44:16Z
|
[
"python",
"animation",
"matplotlib",
"scatter-plot"
] |
How to show the actual value of a signature using the RSA library in Python?
| 39,665,045 |
<p>I'm using the <a href="https://stuvel.eu/python-rsa-doc/index.html" rel="nofollow">RSA library</a> to check a digital signature using the Public Key as follows:</p>
<pre><code>rsa.verify(message, sig, key)
</code></pre>
<p>The function works as expected however, for incorrect cases the library prints out </p>
<pre><code>rsa.pkcs1.VerificationError: Verification failed
</code></pre>
<p>I want to see the actual calculated value, so that I can compare it to the expected value. Is there a way to print that without tweaking the internals of the library?</p>
| 0 |
2016-09-23T15:59:59Z
| 39,667,265 |
<p>Using the <code>verify()</code> method as a template: </p>
<pre><code>from rsa import common, core, transform
keylength = common.byte_size(base)
decrypted = core.decrypt_int(sig, exp, base)
clearsig = transform.int2bytes(decrypted, keylength)
</code></pre>
<p>This is with the assumption that your signature is given as <code>sig</code>, the modulus and exp of the public key are <code>base</code> and <code>exp</code> respectively. </p>
<p>Last thing to note is that your hash might include padding in the beginning. As I used SHA-256, I had to look at the last 32 bytes of <code>clearsig</code>.</p>
| 1 |
2016-09-23T18:22:11Z
|
[
"python",
"rsa",
"digital-signature"
] |
String index out of range while sending email
| 39,665,065 |
<p>I'm running into this error: <a href="http://pastebin.com/hH1ZbeWZ" rel="nofollow">link</a>, while trying to send mail with Django EmailMultiAlternatives. I tried searching for this error but no luck, also I tried removing or changing every variable for email, but with no luck.</p>
<p>This is the code:</p>
<pre><code>def spremembapodatkovproc(request):
if request.method == 'POST':
req_id = request.POST.get('req_num', 'Neznan ID zahtevka')
old_email = request.user.email
old_name = request.user.get_full_name
new_email = request.POST.get('email_new', 'Nov e-mail ni znan')
new_fname = request.POST.get('fname_new', 'Novo ime ni znano')
dokument = request.FILES.get('doc_file')
komentar = request.POST.get('comment', 'Ni komentarja')
# try:
plaintext = get_template('email/usr-data-change.txt')
htmly = get_template('email/usr-data-change.html')
d = Context(
{
'old_email': old_email,
'old_fname': old_name,
'new_email': new_email,
'new_fname': new_fname,
'req_id': req_id,
'komentar': komentar,
'user_ip': request.META.get('REMOTE_ADDR', 'IP Naslova ni mogoÄe pridobiti.')
}
)
subject, from_email, to = 'eBlagajna Sprememba podatkov', 'eblagajna@ksoft.si', ["info@korenc.eu"]
text_content = plaintext.render(d)
html_content = htmly.render(d)
print(text_content)
msg = EmailMultiAlternatives(subject, text_content, from_email, [to])
msg.attach_alternative(html_content, "text/html")
msg.mixed_subtype = 'related'
for f in ["templates\\email\\img1.png"]:
fp = open(os.path.join(BASE_DIR, f), 'rb')
msg_img = MIMEImage(fp.read())
fp.close()
msg_img.add_header('Content-ID', '<{}>'.format(f))
msg.attach(msg_img)
msg.send()
</code></pre>
<p>Thank you for your help.</p>
| 0 |
2016-09-23T16:01:13Z
| 39,665,394 |
<p>The problem was redundantly wrapping the list of emails in another list.</p>
<p>So basically the variable was <code>to = ["info@korenc.eu"]</code>, and then when this line ran</p>
<pre><code>msg = EmailMultiAlternatives(subject, text_content, from_email, [to])
</code></pre>
<p>it wrapped <code>to</code> one more time with <code>[ ]</code> brackets, so <code>[to] == [["info@korenc.eu"]]</code>, but it is supposed to be a flat list. By changing the problem line to</p>
<pre><code>msg = EmailMultiAlternatives(subject, text_content, from_email, to)
</code></pre>
<p>everything worked.</p>
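<p>The nesting mistake is easy to reproduce in isolation:</p>

```python
to = ["info@korenc.eu"]   # already a list of recipient addresses
wrapped = [to]            # the accidental extra brackets from the question
print(wrapped)            # a nested list, not a flat list of addresses
```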
| 0 |
2016-09-23T16:21:29Z
|
[
"python",
"django",
"email"
] |
pandas - add a column of the mean of the last 3 elements in groupby
| 39,665,074 |
<p>I have a dataframe of several columns, which I sorted, grouped by index and calculated the difference between each row and the next one in the group. Next I want to add a column of the means of the last 3 differences. For example:</p>
<pre><code>index A B A_diff B_diff A_diff_last3mean B_diff_last3mean
1111 1 2 0 0 NaN NaN
1111 1 2 0 0 NaN NaN
1111 2 4 1 2 0.33 0.67
1111 4 6 2 2 1 1.33
2222 5 7 NaN NaN NaN NaN #index changed
2222 2 8 -3 1 NaN NaN
</code></pre>
<p>I managed to create such columns using </p>
<pre><code>df=df.join(df.groupby(['index'],sort=False,as_index=False).diff(),rsuffix='_diff')
y=df.groupby(['index'],sort=False,as_index=False).nth([-1,-2,-3])
z=y.groupby(['index'],sort=False,as_index=False).mean()
</code></pre>
<p>but that creates an aggregated dataframe, and I need the values to be merged in the original one. I tried with the .transform() function and did not succeed much. Would really appreciate your help.</p>
| 0 |
2016-09-23T16:01:41Z
| 39,665,907 |
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
group A B
1111 1 2
1111 1 2
1111 2 4
1111 4 6
2222 5 7
2222 2 8
''')
df = pd.read_csv(data, delim_whitespace=True)
diff = (df.groupby('group')
.diff()
.fillna(0)
.add_suffix('_diff'))
df = df.join(diff)
last3mean = (df.groupby('group')[diff.columns]
.rolling(3).mean()
.reset_index(drop=True)
.add_suffix('_last3mean'))
df = df.join(last3mean)
print(df)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> group A B A_diff B_diff A_diff_last3mean B_diff_last3mean
0 1111 1 2 0.0 0.0 NaN NaN
1 1111 1 2 0.0 0.0 NaN NaN
2 1111 2 4 1.0 2.0 0.333333 0.666667
3 1111 4 6 2.0 2.0 1.000000 1.333333
4 2222 5 7 0.0 0.0 NaN NaN
5 2222 2 8 -3.0 1.0 NaN NaN
</code></pre>
<p>Notes:</p>
<ul>
<li><p>Although <code>index</code> is a perfectly valid column name, pandas DataFrames have indices too. To avoid confusion, I have renamed that column to <code>group</code>.</p></li>
<li><p>In your desired output, you seem to have filled the <code>NaN</code>s in columns <code>A_diff</code> and <code>B_diff</code> for the group <code>1111</code> but not for the group <code>2222</code>. The first line in your code snippet does not perform such filling. I have filled them all — <code>.fillna(0)</code> in the definition of <code>diff</code>, but you can drop that if you want.</p></li>
</ul>
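<p>Since the question mentions <code>.transform()</code>: the same last-3 mean can also be attached per group with a single <code>transform</code> call (a sketch on the toy data above, not the only way to do it):</p>

```python
import pandas as pd

df = pd.DataFrame({'group': [1111, 1111, 1111, 1111, 2222, 2222],
                   'A': [1, 1, 2, 4, 5, 2],
                   'B': [2, 2, 4, 6, 7, 8]})
for col in ['A', 'B']:
    diff = df.groupby('group')[col].diff().fillna(0)
    df[col + '_diff'] = diff
    # rolling mean of the last 3 diffs, computed within each group
    df[col + '_diff_last3mean'] = (diff.groupby(df['group'])
                                       .transform(lambda s: s.rolling(3).mean()))
print(df)
```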
| 1 |
2016-09-23T16:53:02Z
|
[
"python",
"pandas"
] |
TemplateDoesNotExist but it exists
| 39,665,105 |
<p>Python 2.7 & Django 1.10
my template exists but I am doing something wrong! </p>
<pre><code>TemplateDoesNotExist at /basicview/2/
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Request Method: GET
Request URL: http://127.0.0.1:8000/basicview/2/
Django Version: 1.10.1
Exception Type: TemplateDoesNotExist
Exception Value:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Exception Location: /home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg/django/template/loader.py in get_template, line 25
Python Executable: /home/i/djangoenv/bin/python
Python Version: 2.7.11
Python Path:
['/home/i/djangoenv/bin/firstapp',
'/home/i/djangoenv/lib/python2.7',
'/home/i/djangoenv/lib/python2.7/plat-i386-linux-gnu',
'/home/i/djangoenv/lib/python2.7/lib-tk',
'/home/i/djangoenv/lib/python2.7/lib-old',
'/home/i/djangoenv/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-i386-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/home/i/djangoenv/local/lib/python2.7/site-packages',
'/home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg',
'/home/i/djangoenv/lib/python2.7/site-packages',
'/home/i/djangoenv/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg']
Server time: Пт, 23 Сен 2016 15:43:30 +0000
</code></pre>
<p><strong>settings.py</strong> (os.path.join(BASE_DIR), 'templates', or /home/mainapp/templates) not working..</p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
</code></pre>
<p><strong>article/views.py</strong> my def looks like:</p>
<pre><code>def template_two(request):
view = "template_two"
t = get_template('myview.html')
html = t.render(Context({'name': view}))
return render(request, html, {})
</code></pre>
<p>My file:</p>
<pre><code>mainapp/mainapp/settings.py
mainapp/mainapp/article/views.py
mainapp/templates/myview.html
</code></pre>
| 0 |
2016-09-23T16:03:14Z
| 39,665,150 |
<p>In your <em>settings.py</em> you have <code>'DIRS': ['templates'],</code></p>
<p>And path to your template is <code>mainapp/templetes/myview.html</code></p>
<p>You have typo <code>templetes != templates</code>. Rename folder with templates to <code>templates</code>.</p>
| 1 |
2016-09-23T16:05:51Z
|
[
"python",
"django",
"python-2.7",
"django-templates"
] |
TemplateDoesNotExist but it exists
| 39,665,105 |
<p>Python 2.7 & Django 1.10
my template exists but I am doing something wrong! </p>
<pre><code>TemplateDoesNotExist at /basicview/2/
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Request Method: GET
Request URL: http://127.0.0.1:8000/basicview/2/
Django Version: 1.10.1
Exception Type: TemplateDoesNotExist
Exception Value:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Exception Location: /home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg/django/template/loader.py in get_template, line 25
Python Executable: /home/i/djangoenv/bin/python
Python Version: 2.7.11
Python Path:
['/home/i/djangoenv/bin/firstapp',
'/home/i/djangoenv/lib/python2.7',
'/home/i/djangoenv/lib/python2.7/plat-i386-linux-gnu',
'/home/i/djangoenv/lib/python2.7/lib-tk',
'/home/i/djangoenv/lib/python2.7/lib-old',
'/home/i/djangoenv/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-i386-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/home/i/djangoenv/local/lib/python2.7/site-packages',
'/home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg',
'/home/i/djangoenv/lib/python2.7/site-packages',
'/home/i/djangoenv/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg']
Server time: Пт, 23 Сен 2016 15:43:30 +0000
</code></pre>
<p><strong>settings.py</strong> (os.path.join(BASE_DIR), 'templates', or /home/mainapp/templates) not working..</p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
</code></pre>
<p><strong>article/views.py</strong> my def looks like:</p>
<pre><code>def template_two(request):
view = "template_two"
t = get_template('myview.html')
html = t.render(Context({'name': view}))
return render(request, html, {})
</code></pre>
<p>My file:</p>
<pre><code>mainapp/mainapp/settings.py
mainapp/mainapp/article/views.py
mainapp/templates/myview.html
</code></pre>
| 0 |
2016-09-23T16:03:14Z
| 39,665,895 |
<p>I would suggest that you put your temlates in your app.</p>
<p>Your file will then be here: </p>
<pre><code>mainapp/mainapp/templates/myview.html
</code></pre>
<p>Please make sure you add <code>mainapp</code> to your <code>INSTALLED_APPS</code> like this:</p>
<pre><code>INSTALLED_APPS = [
...
'mainapp',
]
</code></pre>
| 0 |
2016-09-23T16:52:24Z
|
[
"python",
"django",
"python-2.7",
"django-templates"
] |
TemplateDoesNotExist but it exists
| 39,665,105 |
<p>Python 2.7 & Django 1.10
my template exists but I am doing something wrong! </p>
<pre><code>TemplateDoesNotExist at /basicview/2/
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Request Method: GET
Request URL: http://127.0.0.1:8000/basicview/2/
Django Version: 1.10.1
Exception Type: TemplateDoesNotExist
Exception Value:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>TEST</title>
</head>
<body>
This is template_two view!
</body>
</html>
Exception Location: /home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg/django/template/loader.py in get_template, line 25
Python Executable: /home/i/djangoenv/bin/python
Python Version: 2.7.11
Python Path:
['/home/i/djangoenv/bin/firstapp',
'/home/i/djangoenv/lib/python2.7',
'/home/i/djangoenv/lib/python2.7/plat-i386-linux-gnu',
'/home/i/djangoenv/lib/python2.7/lib-tk',
'/home/i/djangoenv/lib/python2.7/lib-old',
'/home/i/djangoenv/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-i386-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/home/i/djangoenv/local/lib/python2.7/site-packages',
'/home/i/djangoenv/local/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg',
'/home/i/djangoenv/lib/python2.7/site-packages',
'/home/i/djangoenv/lib/python2.7/site-packages/Django-1.10.1-py2.7.egg']
Server time: Пт, 23 Сен 2016 15:43:30 +0000
</code></pre>
<p><strong>settings.py</strong> (os.path.join(BASE_DIR), 'templates', or /home/mainapp/templates) not working..</p>
<pre><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
</code></pre>
<p><strong>article/views.py</strong> my def looks like:</p>
<pre><code>def template_two(request):
view = "template_two"
t = get_template('myview.html')
html = t.render(Context({'name': view}))
return render(request, html, {})
</code></pre>
<p>My file:</p>
<pre><code>mainapp/mainapp/settings.py
mainapp/mainapp/article/views.py
mainapp/templates/myview.html
</code></pre>
| 0 |
2016-09-23T16:03:14Z
| 39,666,505 |
<p>The problem is that you are manually rendering your template and using the <code>render</code> shortcut at the same time. Your <code>get_template</code> is working, but when you call <code>render(request, html, {})</code>, Django is treating <code>html</code> as the filename, and looking for a template file named <code><!DOCTYPE html>\n<html>...</code>.</p>
<p>You should either render the template manually:</p>
<pre><code>def template_two(request):
view = "template_two"
t = get_template('myview.html')
html = t.render({'name': view}) # Note you should use a plain dictionary, not `Context` on Django 1.8+
return HttpResponse(html)
</code></pre>
<p>Alternatively, it's simpler to use the <code>render</code> shortcut.</p>
<pre><code>def template_two(request):
view = "template_two"
return render(request, "myview.html", {'name': view})
</code></pre>
<p>You should also change your <code>DIRS</code> setting back to use <code>os.path.join(BASE_DIR, 'templates')</code>. Using the string <code>'templates'</code> is not going to work.</p>
| 0 |
2016-09-23T17:32:37Z
|
[
"python",
"django",
"python-2.7",
"django-templates"
] |
Unwanted Rainbow icon on pygame
| 39,665,152 |
<p>I'm currently working on a pygame script, which is basically displaying an user interface over a webcam stream. This app is running on raspberry pi, on dual screen via fbcp.</p>
<p>I noticed that a strange rainbow square icon recently appeared in the upper right corner of the screen. </p>
<p>Looking like this, but smaller :
<a href="http://i.stack.imgur.com/OlteU.png" rel="nofollow"><img src="http://i.stack.imgur.com/OlteU.png" alt="enter image description here"></a></p>
<p>What is it ? How can i remove the display of this icon ?</p>
<p>Thank you !</p>
| -1 |
2016-09-23T16:05:58Z
| 39,667,920 |
<p>I found the answer by myself :</p>
<p>This icon is shown by the Raspberry Pi itself to warn about an under-voltage issue.
To prevent it from showing, fix the power supply issue, or disable the warnings (not a safe approach) by adding to /boot/config.txt:</p>
<pre><code>avoid_warnings=2
</code></pre>
| 1 |
2016-09-23T19:06:55Z
|
[
"python",
"raspberry-pi",
"pygame",
"webcam",
"raspberry-pi2"
] |
youtube-dl python script postprocessing error: FFMPEG codecs aren't being recognized
| 39,665,160 |
<p>My python script is trying to download youtube videos with youtube-dl.py. Works fine unless postprocessing is required. The code:</p>
<pre><code>import youtube_dl
options = {
'format':'bestaudio/best',
'extractaudio':True,
'audioformat':'mp3',
'outtmpl':'%(id)s', #name the file the ID of the video
'noplaylist':True,
'nocheckcertificate':True,
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
}]
}
with youtube_dl.YoutubeDL(options) as ydl:
ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
</code></pre>
<p>Below is the output I receive:<a href="http://i.stack.imgur.com/g3mBo.png" rel="nofollow"><img src="http://i.stack.imgur.com/g3mBo.png" alt="enter image description here"></a></p>
<p>I get a similar error if I try setting 'preferredcodec' to 'opus' or 'best'.
I'm not sure if this is relevant, but I can run the command line counterpart fine:</p>
<pre><code>youtube-dl -o 'test2.%(ext)s' --extract-audio --audio-format mp3 --no-check-certificate https://www.youtube.com/watch?v=BaW_jenozKc
</code></pre>
<p>I've gotten a few clues from the internet and other questions, and from what I understand this is most likely an issue with my ffmpeg, which isn't a Python module, right? Here is my ffmpeg version and configuration:
<a href="http://i.stack.imgur.com/0Uprn.png" rel="nofollow"><img src="http://i.stack.imgur.com/0Uprn.png" alt="enter image description here"></a></p>
<p>If the answer to my problem is to add some configuration setting to my ffmpeg, please explain how I go about doing that.</p>
| 0 |
2016-09-23T16:06:26Z
| 39,669,975 |
<p>This is a bug in the interplay between youtube-dl and ffmpeg, caused by the lack of extension in the filename. youtube-dl calls ffmpeg. Since the filename does not contain any extension, youtube-dl asks ffmpeg to generate a temporary file <code>mp3</code>. However, ffmpeg detects the output container type automatically by the extension and fails because <code>mp3</code> has no extension.</p>
<p>As a workaround, simply add <code>%(ext)s</code> in your filename template:</p>
<pre><code>'outtmpl': u'%(id)s.%(ext)s',
</code></pre>
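<p>Applied to the options dict from the question, only <code>outtmpl</code> changes (sketch):</p>

```python
options = {
    'format': 'bestaudio/best',
    # '%(ext)s' gives the downloaded file an extension, so ffmpeg can
    # infer the temporary container type during post-processing
    'outtmpl': '%(id)s.%(ext)s',
    'noplaylist': True,
    'nocheckcertificate': True,
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}
print(options['outtmpl'])
```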
| 1 |
2016-09-23T21:43:47Z
|
[
"python",
"youtube",
"ffmpeg",
"youtube-dl"
] |
Define a variable in sympy to be a CONSTANT
| 39,665,207 |
<pre><code>from sympy import *
from sympy.stats import *
mu, Y = symbols('mu Y', real = True, constant = True)
sigma = symbols('sigma', real = True, positive=True)
X = Normal('X', mu, sigma)
</code></pre>
<p>When asking for:</p>
<pre><code>E(X, evaluate=False)
</code></pre>
<p>I get:</p>
<pre><code>  ∞
  ⌠
  ⎮  X⋅exp(-(X - μ)²/(2⋅σ²))
  ⎮  ─────────────────────── dX
  ⌡         √(2⋅π)⋅σ
 -∞
</code></pre>
<p>Which is what I expect. When asking for:</p>
<pre><code>E(X, X>0, evaluate=False)
E(X, X>pi, evaluate=False)
E(X, X >-3, evaluate=False)
</code></pre>
<p>Using any constant, the result is as expected from the Normal Definition of conditional expectation. However, when trying to solve for:</p>
<pre><code>E(X, X>Y)
</code></pre>
<p>I'm getting an error that has to do with roots. Is there a way to define a Y, such that sympy acknowledges that it is a constant, just like a 0 or a -3 or even pi, and shows the integration as expected? I'm assuming the problem with the request I have from sympy is that somehow the Y isn't acknowledges as a constant and therefore, when trying to solve this request, sympy is faced with a roots problem.</p>
| 3 |
2016-09-23T16:09:18Z
| 39,668,837 |
<p>Your problem appears to be a limitation in the current inequality solver: the algorithm that transforms a system of inequalities to a union of sets apparently needs to sort the boundary points determined by those inequalities (even if there's only one such point). Reduction of inequalities with symbolic limits has not been implemented yet.</p>
<p>I suggest a <strong>dirty trick</strong> to get around this limitation. Define:</p>
<pre><code>class SymbolTrick(NumberSymbol):
def __new__(self, name):
obj = NumberSymbol.__new__(self)
obj._name = name
return obj
_as_mpf_val = pi._as_mpf_val
approximation_interval = pi.approximation_interval
__str__ = lambda self: str(self._name)
</code></pre>
<p>This defines a subclass of <em>NumberSymbol</em> having the same numeric value as <em>pi</em> (it is necessary to specify one, as the inequality reduction algorithm needs to sort the boundaries; otherwise it will fail).</p>
<p>At this point:</p>
<pre><code>In [7]: Y = SymbolTrick("Y")
In [8]: E(X, X > Y, evaluate=False)
Out[8]:
  ∞
  ⌠
  ⎮       X⋅exp(-(X - μ)²/(2⋅σ²)) / (√(2⋅π)⋅σ)
  ⎮  ────────────────────────────────────────────── dX
  ⎮    ∞
  ⎮    ⌠
  ⎮    ⎮  exp(-(X - μ)²/(2⋅σ²)) / (√(2⋅π)⋅σ) dX
  ⎮    ⌡
  ⌡    Y
  Y
</code></pre>
| 3 |
2016-09-23T20:12:26Z
|
[
"python",
"symbols",
"sympy"
] |
Is there a difference between str function and percent operator in Python
| 39,665,286 |
<p>When converting an object to a string in python, I saw two different idioms:</p>
<p>A: <code>mystring = str(obj)</code></p>
<p>B: <code>mystring = "%s" % obj</code></p>
<p>Is there a difference between those two? (Reading the Python docs, I would suspect no, because the latter case would implicitly call <code>str(obj)</code> to convert <code>obj</code> to a string.</p>
<p>If yes, when should I use which? </p>
<p>If no, which one should I prefer in "good" python code? (From the python philosophy "explicit over implicit", A would be considered the better one?)</p>
| 9 |
2016-09-23T16:14:31Z
| 39,665,338 |
<p>The second version does more work.</p>
<p>The <code>%s</code> operator calls <code>str()</code> on the value it interpolates, but it also has to parse the template string first to find the placeholder in the first place.</p>
<p>Unless your template string contains <em>more text</em>, there is no point in asking Python to spend more cycles on the <code>"%s" % obj</code> expression.</p>
<p>However, paradoxically, the <code>str()</code> conversion is, in practice, slower as looking up the name <code>str()</code> and pushing the stack to call the function takes more time than the string parsing:</p>
<pre><code>>>> from timeit import timeit
>>> timeit('str(obj)', 'obj = 4.524')
0.32349491119384766
>>> timeit('"%s" % obj', 'obj = 4.524')
0.27424097061157227
</code></pre>
<p>You can recover most of that difference by binding <code>str</code> to a local name first:</p>
<pre><code>>>> timeit('_str(obj)', 'obj = 4.524; _str = str')
0.28351712226867676
</code></pre>
<p>To most Python developers, using the string templating option is going to be confusing as <code>str()</code> is far more straightforward. Stick to the function unless you have a critical section that does a lot of string conversions.</p>
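<p>One behavioral difference worth remembering: the <code>%</code> operator treats a tuple on its right-hand side as the argument list, so for tuple values the two idioms are not interchangeable without wrapping:</p>

```python
class Point:
    def __str__(self):
        return "Point(1, 2)"

# for ordinary objects the two idioms give identical strings
assert str(Point()) == "%s" % Point() == "Point(1, 2)"

t = (1, 2)
assert str(t) == "(1, 2)"
try:
    "%s" % t          # TypeError: the tuple is unpacked as format arguments
except TypeError:
    pass
print("%s" % (t,))    # wrap in a 1-tuple to interpolate the tuple itself
```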
| 16 |
2016-09-23T16:17:33Z
|
[
"python",
"python-2.7"
] |
How to make a group id using pandas
| 39,665,374 |
<p>R's <code>data.table</code> package has a really convenient <code>.GRP</code> method for generating group index values.</p>
<pre><code>library(data.table)
dt <- data.table(
Grp=c("a", "z", "a", "f", "f"),
Val=c(3, 2, 1, 2, 2)
)
dt[, GrpIdx := .GRP, by=Grp]
Grp Val GrpIdx
1: a 3 1
2: z 2 2
3: a 1 1
4: f 2 3
5: f 2 3
</code></pre>
<p>What's the best way to accomplish the same thing using <code>pandas</code>?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Grp':["a", "z", "a", "f", "f"], 'Val':[3, 2, 1, 2, 2]})
</code></pre>
| 1 |
2016-09-23T16:20:28Z
| 39,666,130 |
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.rank.html" rel="nofollow"><code>rank</code></a> to identify unique groups with the <code>method</code> arg set to <code>dense</code> which accepts <code>string</code> values:</p>
<pre><code>df['GrpIdx'] = df['Grp'].rank(method='dense').astype(int)
</code></pre>
<p><a href="http://i.stack.imgur.com/9H47r.png" rel="nofollow"><img src="http://i.stack.imgur.com/9H47r.png" alt="Image"></a></p>
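<p>One caveat: <code>rank(method='dense')</code> numbers groups alphabetically (<code>[1, 3, 1, 2, 2]</code> for this data), whereas data.table's <code>.GRP</code> numbers them by first appearance. If order of appearance matters, <code>pd.factorize</code> reproduces it:</p>

```python
import pandas as pd

df = pd.DataFrame({'Grp': ["a", "z", "a", "f", "f"],
                   'Val': [3, 2, 1, 2, 2]})

# alphabetical group ids, as produced by rank(method='dense')
df['GrpIdx_rank'] = df['Grp'].rank(method='dense').astype(int)

# first-appearance group ids, matching data.table's .GRP
df['GrpIdx'] = pd.factorize(df['Grp'])[0] + 1
print(df)
```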
| 2 |
2016-09-23T17:08:32Z
|
[
"python",
"pandas",
"dataframe"
] |
I want to make it either print true or false from using variables and lists
| 39,665,375 |
<pre><code> x = 1
y = ['1','2']
if x/y[1] == 2:
print ('true')
else:
print ('false')
</code></pre>
<p>But the variable cannot be divided by a list element (which is a string), and it gives:</p>
<pre><code> TypeError: unsupported operand type(s) for /: 'int' and 'str'
</code></pre>
<p>Please help. </p>
| -3 |
2016-09-23T16:20:31Z
| 39,665,419 |
<p>That's because you're trying to divide by a string. Changing your code to convert <code>y[1]</code> into an <code>int</code> will fix your problem.</p>
<pre><code>x = 1
y = ['1','2']
if x/int(y[1]) == 2:
print ('true')
else:
print ('false')
</code></pre>
| 1 |
2016-09-23T16:22:47Z
|
[
"python"
] |
Failed to extends User form in django
| 39,665,404 |
<p>I am trying to extend the User model with a custom form. The "user_id" field in my custom Profile model, which links to the "auth_user" table, is not being saved. I need the two tables related so that I can use my custom form's attributes together with Django's User.</p>
<p>my models.py</p>
<pre><code>from __future__ import unicode_literals
from django.contrib.auth.models import User
from django.db import models
# Create your models here.
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
Matricula = models.CharField(max_length=25)
</code></pre>
<p>forms.py</p>
<pre><code> class SignupForm(forms.ModelForm):
class Meta:
model = Profile
fields = ('first_name', 'last_name', 'Matricula')
#Saving user data
def signup(self, request, user):
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.Matricula = self.cleaned_data['Matricula']
user.save()
##Save profile
profile = Profile()
Profile.user = user
profile.Matricula = self.cleaned_data['Matricula']
profile.save()
</code></pre>
<p>I tried:</p>
<pre><code>user = models.OneToOneField(User, on_delete=models.CASCADE)
</code></pre>
<p>but I get an error:
<a href="http://i.stack.imgur.com/AevyD.png" rel="nofollow">Error</a></p>
<p>Do you think ForeignKey should be used here, or is OneToOneField correct?</p>
| 0 |
2016-09-23T16:22:01Z
| 39,665,460 |
<p>You are not setting <code>user</code> to that instance of <code>Profile</code>:</p>
<pre><code>profile = Profile()
profile.user = user # Notice the case of Profile
</code></pre>
| 1 |
2016-09-23T16:24:51Z
|
[
"python",
"django",
"user"
] |
Failed to extends User form in django
| 39,665,404 |
<p>I am trying to extend the User model with a custom form. The "user_id" field in my custom Profile model, which links to the "auth_user" table, is not being saved. I need the two tables related so that I can use my custom form's attributes together with Django's User.</p>
<p>my models.py</p>
<pre><code>from __future__ import unicode_literals
from django.contrib.auth.models import User
from django.db import models
# Create your models here.
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
Matricula = models.CharField(max_length=25)
</code></pre>
<p>forms.py</p>
<pre><code> class SignupForm(forms.ModelForm):
class Meta:
model = Profile
fields = ('first_name', 'last_name', 'Matricula')
#Saving user data
def signup(self, request, user):
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.Matricula = self.cleaned_data['Matricula']
user.save()
##Save profile
profile = Profile()
Profile.user = user
profile.Matricula = self.cleaned_data['Matricula']
profile.save()
</code></pre>
<p>I tried:</p>
<pre><code>user = models.OneToOneField(User, on_delete=models.CASCADE)
</code></pre>
<p>but I get an error:
<a href="http://i.stack.imgur.com/AevyD.png" rel="nofollow">Error</a></p>
<p>Do you think ForeignKey should be used here, or is OneToOneField correct?</p>
| 0 |
2016-09-23T16:22:01Z
| 39,665,498 |
<p>You should be careful with your capitalisation. You've assigned the user value to the class, not the instance. It should be:</p>
<pre><code>profile = Profile()
profile.user = user
</code></pre>
<p>Or better:</p>
<pre><code>profile = Profile(user=user)
</code></pre>
| 1 |
2016-09-23T16:27:43Z
|
[
"python",
"django",
"user"
] |
Python BeautifulSoup does not print()
| 39,665,413 |
<p>The following BeautifulSoup script shows no output. Did I miss anything?
It was intended to hit at least one of the prints.</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
def getTitle(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
return bsObj
title = getTitle(url1)
if title == None:
print("None at URL: " + url1)
else:
print(title)
</code></pre>
| -2 |
2016-09-23T16:22:30Z
| 39,665,869 |
<p>For BeautifulSoup4, I would recommend using the requests module (obtained via pip) for getting the website data.</p>
<p>To get the html of the desired site, use</p>
<pre><code>content = requests.get(url).content
</code></pre>
<p>That will save the entire html doc to the variable "content".</p>
<p>From that, you can use the following script to print out any data you need.</p>
<p>Note: lxml (the html parser that is good for bs4) has problems when installing in python 3, so 2.7 is the best version for this.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
def getTitle(url):
content = requests.get(url).content
page = bs(content, "lxml")
title = page.title.string
return title
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
t = getTitle(url1)
if t == None:
print "None at url " + url1
else:
print t
</code></pre>
<p>I tested this on my local machine (Win 10, Python 2.7.12, requests, beautifulsoup4, and lxml installed via pip) and it worked perfectly.</p>
<p>If you want more information on requests, you can look <a href="http://docs.python-requests.org/en/master/" rel="nofollow">here</a>, and more info for BeautifulSoup can be found <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">here</a>.</p>
<p>Hope that this has helped you.</p>
| 1 |
2016-09-23T16:51:00Z
|
[
"python",
"printing",
"error-handling",
"beautifulsoup"
] |
Python BeautifulSoup does not print()
| 39,665,413 |
<p>The following BeautifulSoup script shows no output. Did I miss anything?
It was intended to hit at least one of the prints.</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
def getTitle(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
return bsObj
title = getTitle(url1)
if title == None:
print("None at URL: " + url1)
else:
print(title)
</code></pre>
| -2 |
2016-09-23T16:22:30Z
| 39,666,174 |
<h3>EDIT:</h3>
<p>Your problem is, in the end, indentation.</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
def getTitle(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
return bsObj
title = getTitle(url1)
if title == None:
print("None at URL: " + url1)
else:
print(title)
</code></pre>
<h3>Old answer</h3>
<p>Your problem is that <code>return bsObj</code> prevents the function from executing the <code>print</code>s. The only thing your function can print is an <code>HTTPError</code>.</p>
<p>If you want to return <code>bsObj</code>, you need to return it at the end of the function, because <code>return</code> exits the function.</p>
<p>Oh, and you unconditionally recurse into the function, so it will eventually overflow the stack.</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
def getTitle(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
title = getTitle(url1) # Infinite recursion
if title == None:
print("None at URL: " + url1)
else:
print(title)
return bsObj # Moved to the end
</code></pre>
| 0 |
2016-09-23T17:11:11Z
|
[
"python",
"printing",
"error-handling",
"beautifulsoup"
] |
Python BeautifulSoup does not print()
| 39,665,413 |
<p>The following BeautifulSoup script shows no output. Did I miss anything?
It was intended to hit at least one of the prints.</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
url1 = "https://www.youtube.com/watch?v=APmUWC8S1_M"
def getTitle(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
return bsObj
title = getTitle(url1)
if title == None:
print("None at URL: " + url1)
else:
print(title)
</code></pre>
| -2 |
2016-09-23T16:22:30Z
| 39,667,169 |
<p>This worked for me:</p>
<pre><code>from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import sys
def getContent(url):
try:
html = urlopen(url)
except HTTPError as e:
print(e)
return None
try:
bsObj = BeautifulSoup(html.read())
except AttributeError as e:
return None
return bsObj
url1 = "https://www.youtube.com/watch?v=v5NeyI4-fdI"
content = getContent(url1)
if content == None:
print("Conent could not be found at URL: " + url1)
else:
print(content)
</code></pre>
| 0 |
2016-09-23T18:16:02Z
|
[
"python",
"printing",
"error-handling",
"beautifulsoup"
] |
Is there a better way to handle writing csv in text mode and reading in binary mode?
| 39,665,461 |
<p>I have code that looks something like this:</p>
<pre><code>import csv
import os
import tempfile
from azure.storage import CloudStorageAccount
account = CloudStorageAccount(
account_name=os.environ['AZURE_ACCOUNT'],
account_key=os.environ['AZURE_ACCOUNT_KEY'],
)
service = account.create_block_blob_service()
with tempfile.NamedTemporaryFile(mode='w') as f:
writer = csv.DictWriter(f, fieldnames=['foo', 'bar'])
writer.writerow({'foo': 'just an example', 'bar': 'of what I do'})
with open(f.name, 'rb') as stream:
service.create_blob_from_stream(
container_name='test',
blob_name='nothing_secret.txt',
stream=stream,
)
</code></pre>
<p>Now, this is ugly. I don't like having to open the file twice. I know that the Azure API provides a way to upload text and binary, but my file has the potential to be several hundred MB large so I'm not too interested in sticking the whole thing in memory at a time (not that it would be the end of the world, but still).</p>
<p>Azure doesn't support uploading a file in text mode (that I can see), and csv doesn't seem to support writing to a binary file (at least not text data).</p>
<p>Is there a way that I can have two handles to the same file, one in binary and one in text mode? Of course I <em>could</em> write my own file wrapper, but I'd prefer to use something <em>I</em> don't have to maintain. Is there a better way to do this than what I've got?</p>
| 1 |
2016-09-23T16:24:51Z
| 39,665,852 |
<p>Files opened in text mode have a <a href="https://docs.python.org/3/library/io.html#io.TextIOBase.buffer" rel="nofollow"><code>buffer</code></a> attribute. This object is the same one you would get by opening the file in binary mode, the text mode is just a wrapper on top of it.</p>
<p>Open your file in text mode, write to it, then seek back to the start and use the buffer for uploading. Make sure you use <code>+</code> mode so the same handle supports both reading and writing.</p>
<pre><code>with tempfile.NamedTemporaryFile(mode='w+') as f:
...
f.seek(0)
service.create_blob_from_stream(
...
stream=f.buffer,
)
</code></pre>
<p>You can go the other way too, by opening in binary mode and then wrapping with <code>io.TextIOWrapper(f)</code>.</p>
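<p>A minimal self-contained sketch of this pattern (standard library only; the field names are just placeholders):</p>

```python
import csv
import tempfile

with tempfile.NamedTemporaryFile(mode='w+', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['foo', 'bar'])
    writer.writeheader()
    writer.writerow({'foo': 'just an example', 'bar': 'of what I do'})

    f.flush()               # push buffered text down to the byte layer
    f.seek(0)               # rewind the handle
    data = f.buffer.read()  # read the same file back as bytes

print(type(data))  # <class 'bytes'>
```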
| 2 |
2016-09-23T16:50:09Z
|
[
"python",
"python-3.x",
"csv",
"azure",
"windows-azure-storage"
] |
grouping list elements in python
| 39,665,472 |
<pre><code>list = [('a5', 1), 1, ('a1', 1), 0, 0]
</code></pre>
<p>I want to group the elements of the list into threes; if the second or third element of a group is missing in the list, 'None' has to be appended in the corresponding location.</p>
<pre><code>expected_output = [[('a5', 1), 1, None], [('a1', 1), 0, 0]]
</code></pre>
<p>Is there a pythonic way for this? New to this, any suggestions would be helpful.</p>
| 1 |
2016-09-23T16:25:49Z
| 39,665,682 |
<p>As far as I am aware, the only way to get the result you want is to loop through your list and detect when you encounter tuples. </p>
<p>Example which should work:</p>
<pre><code>temp = []
result = []
for item in this_list:
    if type(item) == tuple:
        if temp:
            while len(temp) < 3:
                temp.append(None)
            result.append(temp)
        temp = []
    temp.append(item)
# don't forget to flush the last group, padded to length 3
if temp:
    while len(temp) < 3:
        temp.append(None)
    result.append(temp)
</code></pre>
<p>Edit: As someone correctly commented, don't name a variable <code>list</code>, since you'd be shadowing the built-in <code>list</code> type. Changed the name in the example.</p>
| 0 |
2016-09-23T16:39:15Z
|
[
"python",
"python-2.7"
] |
grouping list elements in python
| 39,665,472 |
<pre><code>list = [('a5', 1), 1, ('a1', 1), 0, 0]
</code></pre>
<p>I want to group the elements of the list into threes; if the second or third element of a group is missing in the list, 'None' has to be appended in the corresponding location.</p>
<pre><code>expected_output = [[('a5', 1), 1, None], [('a1', 1), 0, 0]]
</code></pre>
<p>Is there a pythonic way for this? New to this, any suggestions would be helpful.</p>
| 1 |
2016-09-23T16:25:49Z
| 39,666,046 |
<p>Here's a slightly different approach from the other answers, doing a comparison on the type of each element and then breaking the original list into chunks.</p>
<pre><code>li = [('a5', 1), 1, ('a1', 1), 0, 0]
for i in range(0, len(li), 3):
if type(li[i]) is not tuple:
li.insert(i, None)
if type(li[i+1]) is not int:
li.insert(i+1, None)
if type(li[i+2]) is not int:
li.insert(i+2, None)
print [li[i:i + 3] for i in range(0, len(li), 3)]
</code></pre>
| 1 |
2016-09-23T17:02:29Z
|
[
"python",
"python-2.7"
] |
Is it possible to create python virtual environment with ansible playbook?
| 39,665,553 |
<p>I have tried to create a virtual environment in a Vagrant VM using ansible-local, but failed.</p>
<p>This is my Vagrant file:</p>
<pre><code>Vagrant.configure(2) do |config|
config.vm.provider "virtualbox" do |v|
v.memory = 4096
v.cpus = 2
end
config.vm.define "shortener" do |shortener|
shortener.vm.box = "ubuntu/trusty64"
# database
shortener.vm.network :forwarded_port, host: 3307, guest: 3306
# browser
shortener.vm.network :forwarded_port, host: 4568, guest: 4568
shortener.vm.provision :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
end
end
config.ssh.forward_agent = true
end
</code></pre>
<p>This is "playbook.yml":</p>
<pre><code>- name: Deploy shortener
hosts: all
become: true
become_method: sudo
tasks:
- name: Install packages
apt: update_cache=yes name={{ item }} state=present
with_items:
- git
- python-pip
- nginx-full
- vim
- python-virtualenv
- virtualenvwrapper
- python3.4
- python3.4-doc
- python3.4-dev
- software-properties-common
- python-software-properties
- postgresql
- postgresql-client
- name: Load virtualenvwrapper
shell: source /etc/bash_completion.d/virtualenvwrapper
- name: Create virtual environment
shell: mkvirtualenv shortener --python=/usr/bin/python3
- name: Install requirements
pip: requirements='/vagrant/configs/requirements.txt'
</code></pre>
<p>And this is the output of 'vagrant provision':</p>
<pre><code>hedin@home:~/url_shortener$ vagrant provision
==> shortener: Running provisioner: ansible_local...
shortener: Running ansible-playbook...
PLAY [Deploy shortener]
**************************
TASK [setup]
**************************
ok: [shortener]
**************************
TASK [Install packages]
ok: [shortener] => (item=[u'git', u'python-pip', u'nginx-full', u'vim', u'python-virtualenv', u'virtualenvwrapper', u'python3.4', u'python3.4-doc', u'python3.4-dev', u'software-properties-common', u'python-software-properties', u'postgresql', u'postgresql-client'])
TASK [Load virtualenvwrapper]
**************************
fatal: [shortener]: FAILED! => {"changed": true, "cmd": "source /etc/bash_completion.d/virtualenvwrapper", "delta": "0:00:00.003591", "end": "2016-09-23 16:06:43.169513", "failed": true, "rc": 127, "start": "2016-09-23 16:06:43.165922", "stderr": "/bin/sh: 1: source: not found", "stdout": "", "stdout_lines": [], "warnings": []}
NO MORE HOSTS LEFT
**************************
[WARNING]: Could not create retry file 'playbook.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP
**************************
shortener : ok=2 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
</code></pre>
<p>Also I tried to use '<code>command</code>' instead of '<code>shell</code>' with the same result.</p>
<p>I think I could use a shell script that creates the virtual environment, but is it possible to fix that error by Ansible means?</p>
| 1 |
2016-09-23T16:31:01Z
| 39,667,204 |
<p>I have found a solution. This is my "playbook.yml" file:</p>
<pre><code>- name: Deploy shortener
hosts: all
remote_user: vagrant
tasks:
- name: Install packages
become: true
become_method: sudo
apt: update_cache=yes name={{ item }} state=present
with_items:
- git
- python-pip
- nginx-full
- vim
- python-virtualenv
- virtualenvwrapper
- python3.4
- python3.4-doc
- python3.4-dev
- software-properties-common
- python-software-properties
- postgresql
- postgresql-client
- name: Install requirements
become: true
become_method: sudo
pip:
requirements: /vagrant/configs/requirements.txt
virtualenv: /home/vagrant/.virtualenvs/shortener
virtualenv_python: python3.4
</code></pre>
<p>I have used standard pip module for that. Thanks for helpful comments!</p>
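<p>For reference, the original <code>source</code> failure happens because Ansible's <code>shell</code> module runs <code>/bin/sh</code> by default, and <code>source</code> is a bash builtin that <code>/bin/sh</code> does not have. If you prefer the virtualenvwrapper route, you can point the task at bash instead (untested sketch; paths as in the question):</p>

```yaml
- name: Create virtual environment via virtualenvwrapper
  shell: source /etc/bash_completion.d/virtualenvwrapper && mkvirtualenv shortener --python=/usr/bin/python3
  args:
    executable: /bin/bash
```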
| 0 |
2016-09-23T18:18:09Z
|
[
"python",
"vagrant",
"ansible",
"virtualenv",
"ansible-local"
] |
Calculate run time of a given function python
| 39,665,630 |
<p>I have created a function that takes in another function as a parameter and calculates the run time of that particular function, but when I run it, I cannot seem to understand why this is not working. Does anyone know why?</p>
<pre><code>import time
import random
import timeit
import functools
def ListGenerator(rangeStart,rangeEnd,lenth):
sampleList = random.sample(range(rangeStart,rangeEnd),lenth)
return sampleList
def timeit(func):
@functools.wraps(func)
def newfunc(*args):
startTime = time.time()
func(*args)
elapsedTime = time.time() - startTime
print('function [{}] finished in {} ms'.format(
func.__name__, int(elapsedTime * 1000)))
return newfunc
@timeit
def bubbleSort(NumList):
compCount,copyCount= 0,0
for currentRange in range(len(NumList)-1,0,-1):
for i in range(currentRange):
compCount += 1
if NumList[i] > NumList[i+1]:
temp = NumList[i]
NumList[i] = NumList[i+1]
NumList[i+1] = temp
# print("Number of comparisons:",compCount)
NumList = ListGenerator(1,200,10)
print("Before running through soriting algorithm\n",NumList)
print("\nAfter running through soriting algorithm")
bubbleSort(NumList)
print(NumList,"\n")
for i in range (0, 10, ++1):
print("\n>Test run:",i+1)
bubbleSort(NumList)
compCount = ((len(NumList))*((len(NumList))-1))/2
print("Number of comparisons:",compCount)
</code></pre>
<p>run time screen shot
<a href="http://i.stack.imgur.com/i5M2f.png" rel="nofollow"><img src="http://i.stack.imgur.com/i5M2f.png" alt="enter image description here"></a></p>
| -2 |
2016-09-23T16:35:45Z
| 39,666,397 |
<p>It looks like the code just executes incredibly fast. In <code>bubbleSort</code>, I added an additional <code>for</code> loop to execute the comparisons another <code>10000</code> times:</p>
<pre><code>@timeit
def bubbleSort(NumList):
compCount,copyCount= 0,0
for i in range(10000):
for currentRange in range(len(NumList)-1,0,-1):
for i in range(currentRange):
compCount += 1
if NumList[i] > NumList[i+1]:
temp = NumList[i]
NumList[i] = NumList[i+1]
NumList[i+1] = temp
</code></pre>
<p>Now the result is:</p>
<pre><code> ('Before running through soriting algorithm\n', [30, 18, 144, 28, 155, 183, 50, 101, 156, 26])
After running through soriting algorithm
function [bubbleSort] finished in 12 ms
([18, 26, 28, 30, 50, 101, 144, 155, 156, 183], '\n')
('\n>Test run:', 1)
function [bubbleSort] finished in 12 ms
('Number of comparisons:', 45)
('\n>Test run:', 2)
function [bubbleSort] finished in 8 ms
('Number of comparisons:', 45)
('\n>Test run:', 3)
</code></pre>
<p>etc... @vishes_shell points this out in the comments as well.</p>
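<p>A small sketch of how the decorator itself can be made more informative for fast functions (standard library only): report fractional milliseconds instead of truncating with <code>int()</code>, and return the wrapped function's result, which the original decorator silently drops.</p>

```python
import functools
import time

def timeit(func):
    @functools.wraps(func)
    def newfunc(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_ms = (time.time() - start) * 1000.0
        print('function [{}] finished in {:.3f} ms'.format(func.__name__, elapsed_ms))
        return result
    return newfunc

@timeit
def small_sum():
    return sum(range(1000))

print(small_sum())  # prints a timing line, then 499500
```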
| 0 |
2016-09-23T17:25:35Z
|
[
"python",
"timeit"
] |
REGEX extracting specific part non greedy
| 39,665,633 |
<p>I'm new to Python 2.7. Using regular expressions, I'm trying to extract from a text file just the emails from input lines. I am using the non-greedy method as the emails are repeated 2 times in the same line. Here is my code:</p>
<pre><code>import re
f_hand = open('mail.txt')
for line in f_hand:
line.rstrip()
if re.findall('\S+@\S+?',line): print re.findall('\S+@\S+?',line)
</code></pre>
<p>however this is what i"m getting instead of just the email address:</p>
<pre><code>['href="mailto:secretary@abc-mediaent.com">sercetary@a']
</code></pre>
<p>What shall I use in <code>re.findall</code> to get just the email out? </p>
| 0 |
2016-09-23T16:35:50Z
| 39,665,834 |
<p>Try this:
<code>re.findall(r'mailto:(\S+@\S+?\.\S+)"', line)</code></p>
<p>It should give you something like
<code>['secretary@abc-mediaent.com']</code></p>
| 1 |
2016-09-23T16:48:48Z
|
[
"python",
"regex",
"python-2.7",
"non-greedy"
] |
REGEX extracting specific part non greedy
| 39,665,633 |
<p>I'm new to Python 2.7. Using regular expressions, I'm trying to extract from a text file just the emails from input lines. I am using the non-greedy method as the emails are repeated 2 times in the same line. Here is my code:</p>
<pre><code>import re
f_hand = open('mail.txt')
for line in f_hand:
line.rstrip()
if re.findall('\S+@\S+?',line): print re.findall('\S+@\S+?',line)
</code></pre>
<p>however this is what i"m getting instead of just the email address:</p>
<pre><code>['href="mailto:secretary@abc-mediaent.com">sercetary@a']
</code></pre>
<p>What shall I use in <code>re.findall</code> to get just the email out? </p>
| 0 |
2016-09-23T16:35:50Z
| 39,665,875 |
<p><code>\S</code> means not a space. <code>"</code> and <code>></code> are not spaces.</p>
<p>You should use <code>mailto:([^@]+@[^"]+)</code> as the regex (quoted form: <code>'mailto:([^@]+@[^"]+)'</code>). This will put the email address in the first capture group.</p>
| 1 |
2016-09-23T16:51:12Z
|
[
"python",
"regex",
"python-2.7",
"non-greedy"
] |
REGEX extracting specific part non greedy
| 39,665,633 |
<p>I'm new to Python 2.7. Using regular expressions, I'm trying to extract from a text file just the emails from input lines. I am using the non-greedy method as the emails are repeated 2 times in the same line. Here is my code:</p>
<pre><code>import re
f_hand = open('mail.txt')
for line in f_hand:
line.rstrip()
if re.findall('\S+@\S+?',line): print re.findall('\S+@\S+?',line)
</code></pre>
<p>however this is what i"m getting instead of just the email address:</p>
<pre><code>['href="mailto:secretary@abc-mediaent.com">sercetary@a']
</code></pre>
<p>What shall I use in <code>re.findall</code> to get just the email out? </p>
| 0 |
2016-09-23T16:35:50Z
| 39,665,913 |
<p>If you parse a simple file with anchors for email addresses and always the same syntax (like double quotes to enclose attributes), you can use:</p>
<pre><code>for line in f_hand:
print re.findall(r'href="mailto:([^"@]+@[^"]+)">\1</a>', line)
</code></pre>
<p><em>(<code>re.findall</code> returns only the capture group. <code>\1</code> stands for the content of the first capture group.)</em></p>
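<p>For instance, a self-contained sketch (the sample line is made up, in the same shape as the question's input):</p>

```python
import re

line = '<a href="mailto:secretary@abc-mediaent.com">secretary@abc-mediaent.com</a>'
# The capture group grabs the address; \1 requires the link text to repeat it.
matches = re.findall(r'href="mailto:([^"@]+@[^"]+)">\1</a>', line)
print(matches)  # ['secretary@abc-mediaent.com']
```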
<p>If the file is a more complicated HTML file, use a parser, extract the links and filter them.<br>Or alternatively use XPath, something like: <br><code>substring-after(//a/@href[starts-with(., "mailto:")], "mailto:")</code></p>
| 1 |
2016-09-23T16:53:28Z
|
[
"python",
"regex",
"python-2.7",
"non-greedy"
] |
REGEX extracting specific part non greedy
| 39,665,633 |
<p>I'm new to Python 2.7. Using regular expressions, I'm trying to extract from a text file just the emails from input lines. I am using the non-greedy method as the emails are repeated 2 times in the same line. Here is my code:</p>
<pre><code>import re
f_hand = open('mail.txt')
for line in f_hand:
line.rstrip()
if re.findall('\S+@\S+?',line): print re.findall('\S+@\S+?',line)
</code></pre>
<p>however this is what i"m getting instead of just the email address:</p>
<pre><code>['href="mailto:secretary@abc-mediaent.com">sercetary@a']
</code></pre>
<p>What shall I use in <code>re.findall</code> to get just the email out? </p>
| 0 |
2016-09-23T16:35:50Z
| 39,665,931 |
<p>\S accepts many characters that aren't valid in an e-mail address. Try a regular expression of</p>
<pre><code>[a-zA-Z0-9-_.]+@[a-zA-Z0-9-_.]+\\.[a-zA-Z0-9-_.]+
</code></pre>
<p>(presuming you are not trying to support Unicode -- it seems that you aren't since your input is a "text file").</p>
<p>This will require a "." in the server portion of the e-mail address, and your match will stop on the first character that is not valid within the e-mail address.</p>
| 1 |
2016-09-23T16:54:49Z
|
[
"python",
"regex",
"python-2.7",
"non-greedy"
] |
REGEX extracting specific part non greedy
| 39,665,633 |
<p>I'm new to Python 2.7. Using regular expressions, I'm trying to extract from a text file just the emails from input lines. I am using the non-greedy method as the emails are repeated 2 times in the same line. Here is my code:</p>
<pre><code>import re
f_hand = open('mail.txt')
for line in f_hand:
line.rstrip()
if re.findall('\S+@\S+?',line): print re.findall('\S+@\S+?',line)
</code></pre>
<p>however this is what i"m getting instead of just the email address:</p>
<pre><code>['href="mailto:secretary@abc-mediaent.com">sercetary@a']
</code></pre>
<p>What shall I use in <code>re.findall</code> to get just the email out? </p>
| 0 |
2016-09-23T16:35:50Z
| 39,667,321 |
<p>This is the format of an email address - <a href="https://tools.ietf.org/html/rfc5322#section-3.4.1" rel="nofollow">https://tools.ietf.org/html/rfc5322#section-3.4.1</a>.</p>
<p>Keeping that in mind the regex that you need is - <code>r"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)"</code>. <em>(This works without having to depend on the text surrounding an email address.)</em></p>
<p>The following lines of code -</p>
<pre><code>html_str = r'<a href="mailto:sachin.gokhale@indiacast.com">sachin.gokhale@indiacast.com</a>'
email_regex = r"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)"
print re.findall(email_regex, html_str)
</code></pre>
<p>yields - </p>
<pre><code>['sachin.gokhale@indiacast.com', 'sachin.gokhale@indiacast.com']
</code></pre>
<p>P.S. - I got the regex for email addresses by googling for "<em>email address regex</em>" and clicking on the first site - <a href="http://emailregex.com/" rel="nofollow">http://emailregex.com/</a></p>
| 0 |
2016-09-23T18:25:24Z
|
[
"python",
"regex",
"python-2.7",
"non-greedy"
] |
Split large JSON file into batches of 100 at a time to run through an API
| 39,665,699 |
<p>I am so close to being done with this tool I am developing, but as a junior developer with NO senior programmer to work with, I am stuck. I have a script in Python that takes data from our database and converts it to JSON to be run through an address validation API. I have it all working, but the API only accepts 100 objects at a time. I need to break up the file with X objects into batches of 100 to be run, then stored into the same output file. Here is a snippet of my script structure:</p>
<pre><code>for row in rows:
d = collections.OrderedDict()
d['input_id'] = str(row.INPUT_ID)
d['addressee'] = row.NAME
d['street'] = row.ADDRESS
d['city'] = row.CITY
d['state'] = row.STATE
d['zipcode'] = row.ZIP
d['candidates'] = row.CANDIDATES
obs_list.append(d)
json.dump(obs_list, file)
ids_file = '.csv'
cur.execute(input_ids)
columns = [i[0] for i in cur.description]
ids_input = cur.fetchall()
#ids_csv = csv.writer(with open('.csv','w',newline=''))
with open('.csv','w',newline='') as f:
ids_csv = csv.writer(f,delimiter=',')
ids_csv.writerow(columns)
ids_csv.writerows(ids_input)
print('Run through API')
url = 'https://api.'
headers = {'content-type': 'application/json'}
</code></pre>
<p>This is where I assume I need to do the loop to break it up:</p>
<pre><code>with open('.json', 'r') as run:
dict_run = run.readlines()
dict_ready = (''.join(dict_run))
#lost :(
for object in dict_ready:
# do something with object to only run 100 at a time
r = requests.post(url, data=dict_ready, headers=headers)
ss_output = r.text
output = 'C:\\Users\\TurnerC1\\Desktop\\ss_output.json'
with open(output,'w') as of:
of.write(ss_output)
</code></pre>
<p>At the moment I have about 4,000 of these in a file to be run through the API that only accepts 100 at a time. I'm sure there is an easy answer; I am just burnt out doing this by myself lol. Any help is greatly appreciated.</p>
<p>sample json:</p>
<pre><code>[
{
"street":"1 Santa Claus",
"city":"North Pole",
"state":"AK",
"candidates":10
},
{
"addressee":"Apple Inc",
"street":"1 infinite loop",
"city":"cupertino",
"state":"CA",
"zipcode":"95014",
"candidates":10
}
]'
</code></pre>
| 1 |
2016-09-23T16:40:52Z
| 39,665,784 |
<p>So assuming the rest of your code works, this will give the API a break every 100 rows for 10 seconds. You will need to import the time and json modules.</p>
<pre><code>for i, obj in enumerate(dict_ready):
    r = requests.post(url, data=json.dumps(obj), headers=headers)
    if i % 100 == 0 and i > 0:
        time.sleep(10)
</code></pre>
| 0 |
2016-09-23T16:46:15Z
|
[
"python",
"json"
] |
Split large JSON file into batches of 100 at a time to run through an API
| 39,665,699 |
<p>I am so close to being done with this tool I am developing, but as a junior developer with NO senior programmer to work with, I am stuck. I have a script in Python that takes data from our database and converts it to JSON to be run through an address validation API. I have it all working, but the API only accepts 100 objects at a time. I need to break up the file with X objects into batches of 100 to be run, then stored into the same output file. Here is a snippet of my script structure:</p>
<pre><code>for row in rows:
d = collections.OrderedDict()
d['input_id'] = str(row.INPUT_ID)
d['addressee'] = row.NAME
d['street'] = row.ADDRESS
d['city'] = row.CITY
d['state'] = row.STATE
d['zipcode'] = row.ZIP
d['candidates'] = row.CANDIDATES
obs_list.append(d)
json.dump(obs_list, file)
ids_file = '.csv'
cur.execute(input_ids)
columns = [i[0] for i in cur.description]
ids_input = cur.fetchall()
#ids_csv = csv.writer(with open('.csv','w',newline=''))
with open('.csv','w',newline='') as f:
ids_csv = csv.writer(f,delimiter=',')
ids_csv.writerow(columns)
ids_csv.writerows(ids_input)
print('Run through API')
url = 'https://api.'
headers = {'content-type': 'application/json'}
</code></pre>
<p>This is where I assume I need to do the loop to break it up:</p>
<pre><code>with open('.json', 'r') as run:
dict_run = run.readlines()
dict_ready = (''.join(dict_run))
#lost :(
for object in dict_ready:
# do something with object to only run 100 at a time
r = requests.post(url, data=dict_ready, headers=headers)
ss_output = r.text
output = 'C:\\Users\\TurnerC1\\Desktop\\ss_output.json'
with open(output,'w') as of:
of.write(ss_output)
</code></pre>
<p>At the moment I have about 4,000 of these in a file to be run through the API that only accepts 100 at a time. I'm sure there is an easy answer; I am just burnt out doing this by myself lol. Any help is greatly appreciated.</p>
<p>sample json:</p>
<pre><code>[
{
"street":"1 Santa Claus",
"city":"North Pole",
"state":"AK",
"candidates":10
},
{
"addressee":"Apple Inc",
"street":"1 infinite loop",
"city":"cupertino",
"state":"CA",
"zipcode":"95014",
"candidates":10
}
]
</code></pre>
| 1 |
2016-09-23T16:40:52Z
| 39,665,824 |
<p>As I read it, you have a range of objects, and you want to break it up into subsets, each of which is no more than 100 items.</p>
<p>Assuming that <code>dict_ready</code> is a list of objects (if it's not, modify your code to make it so):</p>
<pre><code>count = 100
subsets = (dict_ready[x:x + count] for x in range(0, len(dict_ready), count))

for subset in subsets:
    # serialize each batch back to JSON before posting
    r = requests.post(url, data=json.dumps(subset), headers=headers)
</code></pre>
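<p>The slicing idea can be checked on plain data without the API call (a runnable sketch; the batch size and sample records are illustrative):</p>

```python
count = 100
# stand-in for obs_list: 250 fake records
records = [{"input_id": str(i)} for i in range(250)]

# consecutive slices of at most `count` items each
batches = [records[x:x + count] for x in range(0, len(records), count)]

sizes = [len(b) for b in batches]   # 250 records -> 100, 100, 50
```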
| 2 |
2016-09-23T16:48:15Z
|
[
"python",
"json"
] |
Split large JSON file into batches of 100 at a time to run through an API
| 39,665,699 |
<p>I am so close to being done with this tool I am developing, but as a junior developer with NO senior programmer to work with I am stuck. I have a script in Python that takes data from our database and converts it to JSON to be run through an address validation API. I have it all working, but the API only accepts 100 objects at a time. I need to break the file of X objects into batches of 100, run them, and store the results in the same output file. Here is the snippet of my script structure:</p>
<pre><code>for row in rows:
d = collections.OrderedDict()
d['input_id'] = str(row.INPUT_ID)
d['addressee'] = row.NAME
d['street'] = row.ADDRESS
d['city'] = row.CITY
d['state'] = row.STATE
d['zipcode'] = row.ZIP
d['candidates'] = row.CANDIDATES
obs_list.append(d)
json.dump(obs_list, file)
ids_file = '.csv'
cur.execute(input_ids)
columns = [i[0] for i in cur.description]
ids_input = cur.fetchall()
#ids_csv = csv.writer(with open('.csv','w',newline=''))
with open('.csv','w',newline='') as f:
ids_csv = csv.writer(f,delimiter=',')
ids_csv.writerow(columns)
ids_csv.writerows(ids_input)
print('Run through API')
url = 'https://api.'
headers = {'content-type': 'application/json'}
</code></pre>
<p>this is where i assume i need to do the loop to break it up </p>
<pre><code>with open('.json', 'r') as run:
dict_run = run.readlines()
dict_ready = (''.join(dict_run))
#lost :(
for object in dict_ready:
# do something with object to only run 100 at a time
r = requests.post(url, data=dict_ready, headers=headers)
ss_output = r.text
output = 'C:\\Users\\TurnerC1\\Desktop\\ss_output.json'
with open(output,'w') as of:
of.write(ss_output)
</code></pre>
<p>At the moment I have about 4,000 of these in a file to be run through an API that only accepts 100 at a time. I'm sure there is an easy answer, I am just burnt out doing this by myself lol. Any help is greatly appreciated.</p>
<p>sample json:</p>
<pre><code>[
{
"street":"1 Santa Claus",
"city":"North Pole",
"state":"AK",
"candidates":10
},
{
"addressee":"Apple Inc",
"street":"1 infinite loop",
"city":"cupertino",
"state":"CA",
"zipcode":"95014",
"candidates":10
}
]
</code></pre>
| 1 |
2016-09-23T16:40:52Z
| 39,665,844 |
<p>try this as your second chunk of code</p>
<pre><code>with open('.json', 'r') as run:
    dict_run = json.load(run)

ss_output = []
for i in range(0, len(dict_run), 100):
    # slice out the next batch of at most 100 objects
    dict_ready = json.dumps(dict_run[i:i + 100])
    r = requests.post(url, data=dict_ready, headers=headers)
    ss_output.extend(r.json())

output = 'C:\\Users\\TurnerC1\\Desktop\\ss_output.json'
with open(output, 'w') as of:
    json.dump(ss_output, of)
</code></pre>
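<p>The per-batch JSON round-trip and result collection can be exercised without the network (a sketch; <code>fake_post</code> is a hypothetical stand-in for <code>requests.post(...).json()</code> that just echoes the parsed batch back):</p>

```python
import json

records = [{"street": str(n)} for n in range(7)]
payload = json.dumps(records)           # what the script wrote to the .json file

def fake_post(data):
    # stand-in for the API call: parse the posted batch and return it
    return json.loads(data)

loaded = json.loads(payload)
ss_output = []
for i in range(0, len(loaded), 3):      # batch size 3 just for the demo
    batch = json.dumps(loaded[i:i + 3])
    ss_output.extend(fake_post(batch))
```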
| 1 |
2016-09-23T16:49:42Z
|
[
"python",
"json"
] |
Tensorflow: value error with variable_scope
| 39,665,702 |
<p>This is my code below:</p>
<pre><code>'''
Tensorflow LSTM classification of 16x30 images.
'''
from __future__ import print_function
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell
import numpy as np
from numpy import genfromtxt
from sklearn.cross_validation import train_test_split
import pandas as pd
'''
a Tensorflow LSTM that will sequentially input several lines from each single image
i.e. The Tensorflow graph will take a flat (1,480) features image as it was done in Multi-layer
perceptron MNIST Tensorflow tutorial, but then reshape it in a sequential manner with 16 features each and 30 time_steps.
'''
blaine = genfromtxt('./Desktop/Blaine_CSV_lstm.csv',delimiter=',') # CSV transform to array
target = [row[0] for row in blaine] # 1st column in CSV as the targets
data = blaine[:, 1:481] #flat feature vectors
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.05, random_state=42)
f=open('cs-training.csv','w') #1st split for training
for i,j in enumerate(X_train):
k=np.append(np.array(y_train[i]),j )
f.write(",".join([str(s) for s in k]) + '\n')
f.close()
f=open('cs-testing.csv','w') #2nd split for test
for i,j in enumerate(X_test):
k=np.append(np.array(y_test[i]),j )
f.write(",".join([str(s) for s in k]) + '\n')
f.close()
new_data = genfromtxt('cs-training.csv',delimiter=',') # Training data
new_test_data = genfromtxt('cs-testing.csv',delimiter=',') # Test data
x_train=np.array([ i[1::] for i in new_data])
ss = pd.Series(y_train) #indexing series needed for later Pandas Dummies one-hot vectors
y_train_onehot = pd.get_dummies(ss)
x_test=np.array([ i[1::] for i in new_test_data])
gg = pd.Series(y_test)
y_test_onehot = pd.get_dummies(gg)
# General Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 33
display_step = 10
# Tensorflow LSTM Network Parameters
n_input = 16 # MNIST data input (img shape: 28*28)
n_steps = 30 # timesteps
n_hidden = 128 # hidden layer num of features
n_classes = 20 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
# Define weights
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
def RNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, n_steps, n_input)
# Required shape: 'n_steps' tensors list of shape (batch_size, n_input)
# Permuting batch_size and n_steps
x = tf.transpose(x, [1, 0, 2])
# Reshaping to (n_steps*batch_size, n_input)
x = tf.reshape(x, [-1, n_input])
# Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
x = tf.split(0, n_steps, x)
# Define a lstm cell with tensorflow
with tf.variable_scope('cell_def'):
lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
with tf.variable_scope('rnn_def'):
outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = RNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
batch_x = np.split(x_train, 15)
batch_y = np.split(y_train_onehot, 15)
for index in range(len(batch_x)):
ouh1 = batch_x[index]
ouh2 = batch_y[index]
# Reshape data to get 28 seq of 28 elements
ouh1 = np.reshape(ouh1,(batch_size, n_steps, n_input))
sess.run(optimizer, feed_dict={x: ouh1, y: ouh2}) # Run optimization op (backprop)
if step % display_step == 0:
# Calculate batch accuracy
acc = sess.run(accuracy, feed_dict={x: ouh1, y: ouh2})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: ouh1, y: ouh2})
print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print("Optimization Finished!")
</code></pre>
<p>and I am getting the below error that it seems i am re-iterating over the same variable on lines 92 and 97, and i am concerned that it might be a case of incompatibility with Tensorflow 0.10.0 on the RNN def side:</p>
<pre><code>ValueError: Variable RNN/BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/home/mohsen/lstm_mnist.py", line 92, in RNN
outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32)
File "/home/mohsen/lstm_mnist.py", line 97, in <module>
pred = RNN(x, weights, biases)
File "/home/mohsen/anaconda2/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
builtins.execfile(filename, *where)
</code></pre>
<p>What could be the origin of this error and how i can resolve it?</p>
<p>EDIT: from the original repo where i build upon my code the same variable_scope problem persists <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py" rel="nofollow">https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py</a></p>
| 2 |
2016-09-23T16:41:11Z
| 39,704,568 |
<p>You are not iterating over the same variable in lines 92 and 97, since those will always be in the same namespace, at least in the current setting, because you are calling one namespace from within another (one is embedded in the RNN function). So your effective variable scope will be something like <code>'backward/forward'</code>.</p>
<p>Hence the problem, in my guess, is in lines 89 and 92, since both "live" in the same namespace (see above), and both may introduce a variable called <code>RNN/BasicLSTMCell/Linear/Matrix</code>. So you should change your code to the following:</p>
<pre><code># Define a lstm cell with tensorflow
with tf.variable_scope('cell_def'):
lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
with tf.variable_scope('rnn_def'):
outputs, states = tf.nn.rnn(lstm_cell, x, dtype=tf.float32)
</code></pre>
<p>This makes the LSTMCell initialization live in one namespace - <code>"cell_def/*"</code>, and the initialization of the complete RNN in another - <code>"rnn_def/*"</code>.</p>
| 1 |
2016-09-26T13:44:10Z
|
[
"python",
"numpy",
"machine-learning",
"tensorflow"
] |
matplotlib place axes within figure box
| 39,665,712 |
<p>This is my code (I am running it in a jupyter notebook on OS X )</p>
<pre><code>% matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure()
fig.set_facecolor('gray')
ax = fig.add_axes([0.2, 0.2, 0.2, 0.2])
print fig.get_figwidth()
</code></pre>
<p>I was expecting to see a large gray figure box with a small white axes box in the bottom left hand corner. What I get is a small white axes box with a small gray box surrounding it. </p>
<p>I am obviously missing some setting or other. How do I get what I was expecting?</p>
<p><a href="http://i.stack.imgur.com/1WT2o.png" rel="nofollow"><img src="http://i.stack.imgur.com/1WT2o.png" alt="What I see"></a></p>
| 0 |
2016-09-23T16:41:44Z
| 39,667,989 |
<p>After some more experimentation and digging I found the answer in the <a href="http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-matplotlib" rel="nofollow">matplotlib magic</a> documentation referenced from <a href="http://stackoverflow.com/questions/37864735/matplotlib-and-ipython-notebook-displaying-exactly-the-figure-that-will-be-save?rq=1">here</a>.</p>
<p>The IPython magics enforce <code>bbox_inches='tight'</code> by default, which causes the figure bbox to shrink to fit the axes.</p>
<p>The trick is to add the magic line</p>
<p><code>% config InlineBackend.print_figure_kwargs = {'bbox_inches':None}</code></p>
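<p>Outside the notebook, the corresponding knob is the <code>bbox_inches</code> argument of <code>Figure.savefig</code>; leaving it as <code>None</code> keeps the full figure box. A minimal sketch using the non-interactive Agg backend (the output filename is arbitrary):</p>

```python
import matplotlib
matplotlib.use("Agg")                     # headless backend, no display needed
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(4, 3))
fig.set_facecolor("gray")
ax = fig.add_axes([0.2, 0.2, 0.2, 0.2])   # small white axes near the lower-left
# bbox_inches=None keeps the whole gray figure box instead of cropping to the axes
fig.savefig("full_box.png", bbox_inches=None)
```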
| 0 |
2016-09-23T19:11:12Z
|
[
"python",
"matplotlib"
] |
matplotlib node alignment and custom line style
| 39,665,724 |
<p>I'm plotting network graphs (water distribution networks) using Bokeh and/or matplotlib. From the reference software the plots look like this:</p>
<p><img src="http://i.stack.imgur.com/8myGf.jpg" alt="Overview from EPAnet"></p>
<p>As you can see pumps and water towers have their own little symbols.
I'm using matplotlib and bokeh to plot the same graphs (with a little more info on the system state):</p>
<p><img src="http://i.stack.imgur.com/HQiew.png" alt="Current Bokeh plot"></p>
<p>As you can see we have squares and triangles as symbols now. So I would like to either add my own symbols based on some vector graphic of the symbols in the first plot, or at least rotate the triangles to be aligned with the arcs, such that they point along them. Any ideas on how to achieve either? I find the bokeh documentation rather confusing (as you can tell I'm a civil engineer not a programmer)</p>
| 0 |
2016-09-23T16:42:41Z
| 39,665,889 |
<p>Instead of rotating the triangle, you may consider <a href="http://matplotlib.org/examples/pylab_examples/arrow_simple_demo.html" rel="nofollow">arrows</a>.
If you really want to rotate the triangle, I normally rotate the three points of the triangle around its center (by rotation matrix). </p>
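<p>The point rotation mentioned above can be sketched with a small helper (hypothetical function name; pure stdlib math, rotating about the triangle's centroid by default):</p>

```python
import math

def rotate_polygon(points, angle_rad, center=None):
    """Rotate (x, y) vertices about a center (default: the centroid)."""
    if center is None:
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
    else:
        cx, cy = center
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]

# e.g. align a marker triangle with an arc by rotating it to the arc's angle
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
flipped = rotate_polygon(triangle, math.pi)   # 180 degrees about the centroid
```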
<p>About custom symbols, I have never imported any external symbols into a matplotlib figure. I usually create the symbol as a polygon and then draw it using <a href="http://matthiaseisen.com/pp/patterns/p0203/" rel="nofollow">polygon patches</a>.</p>
<p>Hope it helps.</p>
| 1 |
2016-09-23T16:52:09Z
|
[
"python",
"matplotlib",
"plot",
"bokeh"
] |
wxPython - code highlighting and pygment
| 39,665,775 |
<p>I am trying to use Pygments for some code highlighting in a wxPython RichTextCtrl.</p>
<p>I can't find much online (other than broken links) about achieving this.</p>
<p>Here is some sample code. I've tried a few different formatters and they all fail. I believe Editra uses Pygments and wxPython, but its source is difficult to navigate.</p>
<pre><code>import wx
import wx.richtext
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters.rtf import RtfFormatter
lexer = get_lexer_by_name("python", stripall=True)
formatter = RtfFormatter()
code = """ # Comment
a = 5
print(a)
print(b)
"""
formatted_code = highlight(code, lexer, formatter)
########################################################################
class MyFrame(wx.Frame):
# ----------------------------------------------------------------------
def __init__(self):
wx.Frame.__init__(self, None, title='Richtext Test')
sizer = wx.BoxSizer(wx.VERTICAL)
self.rt = wx.richtext.RichTextCtrl(self)
self.rt.SetMinSize((300, 200))
self.rt.ChangeValue(formatted_code)
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.rt, 1, wx.EXPAND | wx.ALL, 6)
self.SetSizer(sizer)
self.Show()
# ----------------------------------------------------------------------
if __name__ == "__main__":
app = wx.App(False)
frame = MyFrame()
app.MainLoop()
</code></pre>
<p>Thanks for any help</p>
| 0 |
2016-09-23T16:45:48Z
| 39,705,793 |
<p>I ended up using StyledTextCtrl as suggested in the comments. It turns out there are 2 demos included with the wxPython source, the 2nd of which does exactly what I was trying. I would post the code but it is ~400 lines.</p>
| 0 |
2016-09-26T14:41:51Z
|
[
"python",
"wxpython",
"wxwidgets",
"wx"
] |
How to remove all the elements in the array without changing its size in python
| 39,665,960 |
<p>If I created an array </p>
<pre><code>b = [[] for _ in xrange(10)]
</code></pre>
<p>and I store some numbers in it like this</p>
<pre><code>b = [[], [1], [22, 132], [3, 123], [], [], [6], [], [], [89]]
</code></pre>
<p>Now i want to delete all the elements and it should look like this</p>
<pre><code> b = [[], [], [], [], [], [], [], [], [], []]
</code></pre>
<p>but when i use this </p>
<pre><code> del b[:]
print b
</code></pre>
<p>It becomes <code>b = []</code></p>
<p>I know <code>del b[:]</code> works for a list, but how do I delete the elements from each inner list of the array?</p>
| -4 |
2016-09-23T16:56:59Z
| 39,666,197 |
<p>You need to loop over <code>b</code>, either setting each element to the empty list or deleting the contents of that element:</p>
<pre><code>for i in xrange(len(b)):
b[i] = []
</code></pre>
<p>or</p>
<pre><code>for i in xrange(len(b)):
del b[i][:]
</code></pre>
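<p>Since the inner lists are mutated in place, the index is not actually needed; iterating over the sublists directly works too (a sketch):</p>

```python
b = [[], [1], [22, 132], [3, 123]]

for inner in b:
    del inner[:]        # empty each sublist in place; outer length is unchanged
```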
| 0 |
2016-09-23T17:12:21Z
|
[
"python"
] |
Button position in kivy gridlayout
| 39,665,964 |
<p>I'm trying to move this button to the top-right but no matter what I do I just can't move it. It is <strong>always</strong> in the bottom-left and never leaves this position.</p>
<p>This is my .py:</p>
<pre><code> #!/usr/bin/python
# coding=UTF-8
import kivy
from kivy.uix.gridlayout import GridLayout
from kivy.app import App
from kivy.uix.button import Button
from kivy.lang import Builder
Builder.load_file('listadex.kv')
class TestS(GridLayout):
def bt1(self):
print 'Olar galera' #layout.add_widget(bt1.Button(text='TestS'))
class SegundaTela(App):
def build(self):
#layout = GridLayout(cols=2, row_force_default=False, row_default_height=10)
#layout.add_widget(bt1.Button(text='TestS'))
#layout.add_widget(bt2.Button(text='TestA'))
#layout.add_widget(bt3.Button(text='TestD'))
#layout.add_widget(bt4.Button(text='TestMC'))
#return layout
CF = TestS()
return CF
SegundaTela().run()
</code></pre>
<p>And this is my .kv</p>
<pre><code> <TestS>:
GridLayout:
rows: 1
cols: 1
padding: 3
spacing: 3
Button:
text: 'botao1'
on_press: root.bt1()
pos_hint: {'center_x':.15}
</code></pre>
| 0 |
2016-09-23T16:57:12Z
| 39,668,212 |
<p>I suggest using a <code>RelativeLayout</code>, for example:</p>
<pre><code>RelativeLayout:
Button:
text: 'botao1'
on_press: root.bt1()
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
</code></pre>
<p>Change <code>center_x</code> and <code>center_y</code> until the button sits where you need it.</p>
| 2 |
2016-09-23T19:26:24Z
|
[
"python",
"python-2.7",
"kivy",
"kivy-language"
] |
Parsing XML with untangle throws unknown key error in function but works at command prompt
| 39,666,022 |
<p>I'm running Python 2.7.12 in Anaconda 4.1.1. I installed untangle to parse a pretty complex XML document.</p>
<p>Here's my code:</p>
<pre><code>import untangle
obj = untangle.parse('ear.xml')
for rd in obj.SaData.Session.Test.Data.RecordedData:
tls = rd.Measured.TestLines
tl = tls.Testline
for line in tl:
snl = line.SnLevel.cdata
pn = line.PresentNoise.cdata
print snl + " " + pn
</code></pre>
<p>This returns the following error message:</p>
<p>IndexError: Unknown key </p>
<p>But if I immediately run tl = tls.Testline from the command prompt, I don't get any error.</p>
<p>Gotta be something simple but I'm a noob so help appreciated.</p>
<p>EDIT: I can't attach a file and the fully expanded XML is too big to enter here. I will try to present a partially expanded version to give some sense of how the file is organized.</p>
<pre><code><SaData Version="2" xsi:schemaLocation="uuid:ee2fbfd9-47a5-4dc8-a9eb-42d9995802ab SaData.xsd">
<ClientInfo></ClientInfo>
<Session><Platform FirmwareVersion=""></Platform><Created>2016-09-21T11:08:58</Created>
<Changed>2016-09-21T11:08:58</Changed>
<Module Version="2.0.0.0">DPOAE</Module>
<ProtocolName>DP 2 - 10 kHz (8/octave)</ProtocolName>
<Settings></Settings>
<Test><TestName>DP-Gram</TestName>
<Settings></Settings>
<Data>
<RecordedData>
<Settings></Settings>
<Measured>
<Earside>Left</Earside>
<TestType>DPGram</TestType>
<Readonly>false</Readonly>
<PeakPressure>-5</PeakPressure>
<TestStatus>9</TestStatus>
<TestLines>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
<TestLine></TestLine>
</TestLines>
<TimeStamp>2016-09-19T12:28:11.7110965-05:00</TimeStamp><Duration>PT1M30S</Duration>
</Measured>
<Calculated></Calculated>
<PrivateData></PrivateData>
</RecordedData>
<RecordedData></RecordedData>
</Data>
</Test>
</Session></SaData>
</code></pre>
| 0 |
2016-09-23T17:00:58Z
| 39,667,877 |
<p>It was a stupid typo. I wrote Testline when I should have written TestLine. Sorry to waste everyone's time.</p>
<p>Dessie</p>
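<p>XML element names are case-sensitive, which is easy to demonstrate with the standard library (a sketch with a toy document; <code>untangle</code> presumably raises its <code>IndexError: Unknown key</code> for the same reason when the name's case is wrong):</p>

```python
import xml.etree.ElementTree as ET

doc = "<TestLines><TestLine>1</TestLine><TestLine>2</TestLine></TestLines>"
root = ET.fromstring(doc)

# 'Testline' (wrong case) matches nothing; 'TestLine' matches both children
wrong = root.findall("Testline")
right = root.findall("TestLine")
```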
| 0 |
2016-09-23T19:04:22Z
|
[
"python",
"xml"
] |
how to compile cython code
| 39,666,048 |
<p>I want to compile my code. I want to write my own cpp file that calls functions from other libraries.</p>
<p>my cpp code</p>
<pre><code>//my_vl.h
int my_vl_k(int normalized_feat_set, int k){}
</code></pre>
<p>my pxd file</p>
<pre><code>#my_vl.pxd
import libc.stdlib
cdef extern from "my_vl.h":
int my_vl_k(int normalized_feat_set, int k)
</code></pre>
<p>I know I won't need the .h file and the .pxd file for now but it's good practice, and if you can help me on how to include that would really help as well.</p>
<p>my cpp file:</p>
<pre><code>//my_vl.cpp
extern "C" {
#include <vl/random.h>
#include <vl/generic.h>
#include <vl/kmeans.h>
#include <vl/mathop.h>
}
#include <cstdlib>
#include <stdio.h>
int my_vl_k(int normalized_feat_set, int K){
int r = 5;
return r;
}
</code></pre>
<p>my pyx file:</p>
<pre><code>#my_vl.pyx
cimport my_vl
cdef extern from "my_vl.cpp":
int my_vl_k(int, int)
</code></pre>
<p>my setup file:</p>
<pre><code>#setup.my_val.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
sourcefiles = ['my_vl.pyx', 'my_vl.cpp']
ext_modules = [Extension("my_vl",
sourcefiles,
include_dirs = ['/path/to/vlfeat-0.9.20'],
libraries = ['vl'],
library_dirs = ['/path/to/vlfeat-0.9.20/bin/glnxa64/']
)]
setup(
name = 'my_val app',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
</code></pre>
<p>error I get is:</p>
<pre><code>python setup.my_val.py build_ext --inplace
running build_ext
skipping 'my_vl.c' Cython extension (up-to-date)
building 'my_vl' extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/path/to/vlfeat-0.9.20 -I/usr/include/python2.7 -c my_vl.cpp -o build/temp.linux-x86_64-2.7/my_vl.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ [enabled by default]
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/path/to/vlfeat-0.9.20 -I/usr/include/python2.7 -c my_vl.cpp -o build/temp.linux-x86_64-2.7/my_vl.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ [enabled by default]
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/my_vl.o build/temp.linux-x86_64-2.7/my_vl.o -L/path/to/vlfeat-0.9.20/bin/glnxa64/ -lvl -o /mnt/disk2/work/visual_clusters/my_vl.so
build/temp.linux-x86_64-2.7/my_vl.o: In function `my_vl_k(int, int)':
/path/to/my_vl.cpp:45: multiple definition of `my_vl_k(int, int)'
build/temp.linux-x86_64-2.7/my_vl.o:/path/to/my_vl.cpp:45: first defined here
collect2: error: ld returned 1 exit status
error: command 'c++' failed with exit status 1
</code></pre>
<p>Not sure what I have to do</p>
| -2 |
2016-09-23T17:02:36Z
| 39,666,191 |
<p>In <code>my_vl.h</code> you should use header guards:</p>
<p>my_vl.h:</p>
<pre><code>#ifndef MY_VL_H
#define MY_VL_H
// your function declaration
#endif
</code></pre>
| 0 |
2016-09-23T17:12:01Z
|
[
"python",
"cython"
] |
how to compile cython code
| 39,666,048 |
<p>I want to compile my code. I want to write my own cpp file that calls functions from other libraries.</p>
<p>my cpp code</p>
<pre><code>//my_vl.h
int my_vl_k(int normalized_feat_set, int k){}
</code></pre>
<p>my pxd file</p>
<pre><code>#my_vl.pxd
import libc.stdlib
cdef extern from "my_vl.h":
int my_vl_k(int normalized_feat_set, int k)
</code></pre>
<p>I know I won't need the .h file and the .pxd file for now but it's good practice, and if you can help me on how to include that would really help as well.</p>
<p>my cpp file:</p>
<pre><code>//my_vl.cpp
extern "C" {
#include <vl/random.h>
#include <vl/generic.h>
#include <vl/kmeans.h>
#include <vl/mathop.h>
}
#include <cstdlib>
#include <stdio.h>
int my_vl_k(int normalized_feat_set, int K){
int r = 5;
return r;
}
</code></pre>
<p>my pyx file:</p>
<pre><code>#my_vl.pyx
cimport my_vl
cdef extern from "my_vl.cpp":
int my_vl_k(int, int)
</code></pre>
<p>my setup file:</p>
<pre><code>#setup.my_val.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
sourcefiles = ['my_vl.pyx', 'my_vl.cpp']
ext_modules = [Extension("my_vl",
sourcefiles,
include_dirs = ['/path/to/vlfeat-0.9.20'],
libraries = ['vl'],
library_dirs = ['/path/to/vlfeat-0.9.20/bin/glnxa64/']
)]
setup(
name = 'my_val app',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
</code></pre>
<p>error I get is:</p>
<pre><code>python setup.my_val.py build_ext --inplace
running build_ext
skipping 'my_vl.c' Cython extension (up-to-date)
building 'my_vl' extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/path/to/vlfeat-0.9.20 -I/usr/include/python2.7 -c my_vl.cpp -o build/temp.linux-x86_64-2.7/my_vl.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ [enabled by default]
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/path/to/vlfeat-0.9.20 -I/usr/include/python2.7 -c my_vl.cpp -o build/temp.linux-x86_64-2.7/my_vl.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ [enabled by default]
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/my_vl.o build/temp.linux-x86_64-2.7/my_vl.o -L/path/to/vlfeat-0.9.20/bin/glnxa64/ -lvl -o /mnt/disk2/work/visual_clusters/my_vl.so
build/temp.linux-x86_64-2.7/my_vl.o: In function `my_vl_k(int, int)':
/path/to/my_vl.cpp:45: multiple definition of `my_vl_k(int, int)'
build/temp.linux-x86_64-2.7/my_vl.o:/path/to/my_vl.cpp:45: first defined here
collect2: error: ld returned 1 exit status
error: command 'c++' failed with exit status 1
</code></pre>
<p>Not sure what I have to do</p>
| -2 |
2016-09-23T17:02:36Z
| 39,668,693 |
<p>You provide the definition of <code>my_vl_k</code> in your header file (<code>{}</code>). Instead write</p>
<pre><code>int my_vl_k(int normalized_feat_set, int k);
</code></pre>
<p>(replace the curly brackets, which provide a useless empty definition, with a semicolon, which merely declares that the function exists).</p>
| 0 |
2016-09-23T20:02:22Z
|
[
"python",
"cython"
] |
how to use the `query` method to check if elements of a column contain a specific string
| 39,666,081 |
<p>I know how to check if a column contains a string. My preferred method is to use <code>.str.contains</code>. However, that returns a boolean array that I have to use as a mask on the original dataframe. The convenience of <code>query</code> is that it returns the already filtered dataframe.</p>
<p>consider the <code>df</code></p>
<pre><code>df = pd.DataFrame(np.array(list('abcdefghijklmno')).reshape(5, 3),
columns=list('XYZ')).add('w')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/fC22s.png" rel="nofollow"><img src="http://i.stack.imgur.com/fC22s.png" alt="enter image description here"></a></p>
<p>Using <code>str.contains</code></p>
<pre><code>df[df.Y.str.contains('b')]
</code></pre>
<p><a href="http://i.stack.imgur.com/tAFQW.png" rel="nofollow"><img src="http://i.stack.imgur.com/tAFQW.png" alt="enter image description here"></a></p>
<p>But I have a preference to use <code>query</code></p>
<pre><code>df.query('Y == "bw"')
</code></pre>
<p><a href="http://i.stack.imgur.com/tAFQW.png" rel="nofollow"><img src="http://i.stack.imgur.com/tAFQW.png" alt="enter image description here"></a></p>
<p>The problem is, I don't know how to use <code>query</code> to check for substrings. I wanted something similar to this.</p>
<pre><code>df.query('Y like "b%"')
</code></pre>
| 1 |
2016-09-23T17:04:51Z
| 39,666,369 |
<p>This is currently not supported; <code>query</code> only implements a subset of operations, and essentially none of the string methods.</p>
<p>Just a sidenote to the comment, <code>query</code> does support a vectorized version of the <code>in</code> keyword.</p>
<pre><code>df.query('X in ["aw", "dw"]')
Out[9]:
X Y Z
0 aw bw cw
1 dw ew fw
</code></pre>
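<p>Until <code>query</code> grows string support, a boolean mask remains the practical stand-in for the SQL-ish <code>Y like "b%"</code> (a sketch with toy data):</p>

```python
import pandas as pd

df = pd.DataFrame({"X": ["aw", "dw", "gw"],
                   "Y": ["bw", "ew", "hw"]})

# boolean mask: rows whose Y starts with 'b', i.e. the `like "b%"` idea
out = df[df["Y"].str.startswith("b")]
```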
| 4 |
2016-09-23T17:23:48Z
|
[
"python",
"pandas"
] |
How do I go into a directory without being given its full path in Python 2?
| 39,666,121 |
<p>I am currently trying to go into a folder and call a Python 2 script, but I cannot find any answer on how to go into a folder without using its full path. As an example, in DOS I would normally type this:</p>
<pre><code>C:\unknownpath\> cd otherpath
C:\unknownpath\otherpath\>
</code></pre>
<p>Thanks for any help.</p>
| 0 |
2016-09-23T17:07:47Z
| 39,666,241 |
<p>Try this:</p>
<pre><code>import os
os.chdir('otherpath')
</code></pre>
<p>This at least matches your DOS example, and will change your working directory to <code>otherpath</code> relative to the directory the command is run from. For example if you are in <code>/home/myusername/</code>, then this will take you to <code>/home/myusername/otherpath/</code>. You can also use <code>.</code> for the current directory or <code>..</code> to move back one directory. So if you are in <code>/home/myusername/Desktop/</code>, <code>os.chdir('..')</code> would change the working directory to <code>/home/myusername/</code> and <code>os.chdir('../Documents/</code> would change you to <code>/home/myusername/Documents/</code>, etc.</p>
<p>Forgive my use of Unix-style paths, but you should be able to easily translate these commands to Windows paths if that is the platform you are on. I don't want to attempt to use Windows paths in my examples because I won't be able to test their efficacy. </p>
| 1 |
2016-09-23T17:15:02Z
|
[
"python"
] |
How do I go into a directory without being given its full path in Python 2?
| 39,666,121 |
<p>I am currently trying to go into a folder and call a Python 2 script, but I cannot find any answer on how to go into a folder without using its full path. As an example, in DOS I would normally type this:</p>
<pre><code>C:\unknownpath\> cd otherpath
C:\unknownpath\otherpath\>
</code></pre>
<p>Thanks for any help.</p>
| 0 |
2016-09-23T17:07:47Z
| 39,666,281 |
<p><code>os.chdir</code> works on relative path.</p>
<pre><code>>>> os.getcwd()
'C:\\Users\\sba001\\PycharmProjects'
>>> os.listdir('.')
['untitled', 'untitled1', 'untitled2', 'untitled3', 'untitled4', 'untitled5']
>>> os.chdir('untitled')
>>> os.getcwd()
'C:\\Users\\sba001\\PycharmProjects\\untitled'
</code></pre>
| 0 |
2016-09-23T17:17:29Z
|
[
"python"
] |
Logical operations with array of strings in Python
| 39,666,136 |
<p>I know the following logical operation works with numpy: </p>
<pre><code>A = np.array([True, False, True])
B = np.array([1.0, 2.0, 3.0])
C = A*B = array([1.0, 0.0, 3.0])
</code></pre>
<p>But the same isn't true if B is an array of strings. Is it possible to do the following: </p>
<pre><code>A = np.array([True, False, True])
B = np.array(['eggs', 'milk', 'cheese'])
C = A*B = array(['eggs', '', 'cheese'])
</code></pre>
<p>That is, a string multiplied by <code>False</code> should yield an empty string. Can this be done without a loop in Python (it doesn't have to use numpy)?</p>
<p>Thanks!</p>
| 2 |
2016-09-23T17:08:51Z
| 39,666,157 |
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html"><code>np.where</code></a> for making such selection based on a mask -</p>
<pre><code>np.where(A,B,'')
</code></pre>
<p>Sample run -</p>
<pre><code>In [4]: A
Out[4]: array([ True, False, True], dtype=bool)
In [5]: B
Out[5]:
array(['eggs', 'milk', 'cheese'],
dtype='|S6')
In [6]: np.where(A,B,'')
Out[6]:
array(['eggs', '', 'cheese'],
dtype='|S6')
</code></pre>
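<p>An alternative that keeps the original <code>A*B</code> spirit is element-wise string repetition via <code>np.char.multiply</code>, since <code>s * 1 == s</code> and <code>s * 0 == ''</code> (a sketch; <code>np.where</code> above is the more idiomatic choice):</p>

```python
import numpy as np

A = np.array([True, False, True])
B = np.array(["eggs", "milk", "cheese"])

# repeat each string 0 or 1 times according to the boolean mask
C = np.char.multiply(B, A.astype(int))
```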
| 5 |
2016-09-23T17:10:07Z
|
[
"python",
"string",
"numpy",
"logical-operators"
] |