title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How to improve the run time of a python code
| 39,583,361 |
<p>I am trying to solve the below problem from an online coding site.</p>
<p><strong>QUESTION:</strong>
Fredo is pretty good at dealing with large numbers. So, once his friend Zeus gave him an array of N numbers, followed by Q queries which he has to answer. In each query, he defines the type of the query and the number f for which Fredo has to answer. Each query is of one of the following two types:
Type 0: For this query, Fredo has to answer with the first number in the array (starting from index 0) whose frequency is at least equal to f.
Type 1: For this query, Fredo has to answer with the first number in the array whose frequency is exactly equal to f.
Now, Fredo answers all his queries, but Zeus wonders how to verify them. So, he asks you to write a code for the same.
Note: If there is no number which is the answer to the query, output 0.
Use fast I/O.</p>
<p>Input :
The first line of the input contains N , the size of the array
The next line contains N space separated integers.
The next line contains Q, denoting the number of queries.
Then follow Q lines, each line having two integers type and f, denoting the type of query and the frequency for which you have to answer the query.</p>
<p>Output:
You have to print the answer for each query in a separate line.</p>
<p>Input Constraints:</p>
<p>1 ≤ N ≤ 10^6</p>
<p>1 ≤ A[i] ≤ 10^18</p>
<p>1 ≤ Q ≤ 10^6</p>
<p>0 ≤ type ≤ 1</p>
<p>1 ≤ f ≤ 10^18</p>
<p>SAMPLE INPUT</p>
<p>6</p>
<p>1 2 2 1 2 3</p>
<p>5</p>
<p>0 1</p>
<p>0 2</p>
<p>1 2</p>
<p>1 3</p>
<p>0 3</p>
<p>SAMPLE OUTPUT</p>
<p>1</p>
<p>1</p>
<p>1</p>
<p>2</p>
<p>2</p>
<p><strong>Solution:</strong>
Here is the solution that I have tried</p>
<pre><code>from collections import Counter
import sys
tokenizedInput = sys.stdin.read().split()
t = int(tokenizedInput[0])
a = []
for i in range(t):
    s = int(tokenizedInput[i+1])
    a.append(s)
collection = Counter(a)
key = collection.keys()
value = collection.values()
q = int(tokenizedInput[i+2])
k = i+2
for j in range(q):
    query = int(tokenizedInput[k+2*j+1])
    f = int(tokenizedInput[k+2*j+2])
    for w in range(t):
        index = key.index(a[w])
        if query == 0:
            if value[index] >= f:
                print a[w]
                break
        else:
            if value[index] == f:
                print a[w]
                break
        if w == t-1:
            print 0
</code></pre>
<p>This code runs properly and gives the correct output for smaller test cases, but exceeds the time limit on larger ones. Can someone please suggest what improvements can be made to this code to improve its speed?</p>
| 0 |
2016-09-19T23:04:43Z
| 39,584,617 |
<p>Some suggestions: do the <code>int()</code> conversion of <code>tokenizedInput</code> once, up front, rather than calling <code>int()</code> over and over in the loop; a dictionary like <code>collection</code> is very efficient, so don't second-guess it by extracting the keys and values, use it as it was intended; precalculate anything you can before the loop; simplify, simplify, simplify.</p>
<p>I've reworked your code along the suggestions above, plus other tweaks, see if it makes sense to you and performs the exercise within the time limit:</p>
<pre><code>import sys
from collections import Counter
tokenizedInput = map(int, sys.stdin.read().split())
N = tokenizedInput[0]
array = tokenizedInput[1:N + 1]
collection = Counter(array)
Q = tokenizedInput[N + 1]
k = N + 2
for j in range(Q):
    offset = 2 * j + k
    query, frequency = tokenizedInput[offset:offset + 2]
    for w in range(N):
        value = collection[array[w]]
        if query == 0:
            if value >= frequency:
                print array[w]
                break
        else:
            if value == frequency:
                print array[w]
                break
    else: # aka "no break"
        print 0
</code></pre>
<p>Most folks would have handled this with separate read statements to input the various scalar and array values. You chose to do it in one read at the beginning, which is fine, but you must take care in your design that that initial choice doesn't get in your way later on.</p>
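<p>A further sketch (not part of the original answer, using the <code>array</code> and <code>collection</code> defined above): for the largest inputs even the simplified nested loop can stay too slow, so you could precompute, in one pass over the array, the first value whose count is at least / exactly each frequency, making every query a dictionary lookup.</p>
<pre><code>first_at_least = {}   # f -> first array value whose count >= f
first_exactly = {}    # f -> first array value whose count == f
seen = set()
for x in array:
    if x in seen:
        continue
    seen.add(x)
    c = collection[x]
    if c not in first_exactly:
        first_exactly[c] = x
    for f in range(1, c + 1):          # total work is O(N) over all x
        if f not in first_at_least:
            first_at_least[f] = x

# each query is then O(1):
#   type 0 -> first_at_least.get(f, 0)
#   type 1 -> first_exactly.get(f, 0)
</code></pre>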
| 0 |
2016-09-20T02:01:30Z
|
[
"python",
"performance",
"for-loop",
"runtime"
] |
Pandas Dataframe return index with inaccurate decimals
| 39,583,363 |
<p>I have a Pandas Dataframe like this:</p>
<pre><code> 0 1 2 3 4 5 \
event_at
0.00 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
0.01 0.975381 0.959061 0.979856 0.985625 0.986080 0.976601
0.02 0.959103 0.932374 0.966486 0.976037 0.976791 0.961114
0.03 0.946154 0.911362 0.955820 0.968362 0.969353 0.948785
0.04 0.935378 0.894024 0.946924 0.961940 0.963129 0.938518
0.05 0.926099 0.879201 0.939248 0.956385 0.957744 0.929672
0.06 0.917608 0.865726 0.932212 0.951282 0.952796 0.921574
......
0.96 0.072472 0.012264 0.117352 0.217737 0.228561 0.082670
0.97 0.066553 0.010632 0.109468 0.207225 0.217870 0.076244
0.98 0.060532 0.009069 0.101313 0.196119 0.206555 0.069677
0.99 0.054657 0.007642 0.093212 0.184828 0.195031 0.063237
1.00 0.019128 0.001314 0.039558 0.100442 0.108064 0.023328
</code></pre>
<p>I want to get all indexes</p>
<pre><code>>>> df.index
[0.0, 0.01, 0.02, 0.029999999999999999, 0.040000000000000001, 0.050000000000000003, 0.059999999999999998,
...
0.95999999999999996, 0.96999999999999997, 0.97999999999999998, 0.98999999999999999, 1.0]
# What I expect is like:
[0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06,
...
0.96, 0.97, 0.98, 0.99, 1.0]
</code></pre>
<p>This floating point problem makes me get his exception:</p>
<pre><code>>>> df.loc[0.35].values
Traceback (most recent call last):
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1395, in _has_valid_type
error()
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1390, in error
(key, self.obj._get_axis_name(axis)))
KeyError: 'the label [0.35] is not in the [index]'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "J:\Workspace\dataset_loader.py", line 171, in <module>
print(y_pred_cox_alldep.loc[0.35].values)
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1296, in __getitem__
return self._getitem_axis(key, axis=0)
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1466, in _getitem_axis
self._has_valid_type(key, axis)
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1403, in _has_valid_type
error()
File "I:\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1390, in error
(key, self.obj._get_axis_name(axis)))
KeyError: 'the label [0.35] is not in the [index]'
</code></pre>
| 2 |
2016-09-19T23:04:51Z
| 39,583,454 |
<p>you can do it this way (assuming we want to get a row with a <code>0.96</code> index, which is internally represented as <code>0.95999999999</code>):</p>
<pre><code>In [466]: df.index
Out[466]: Float64Index([0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.95999999999, 0.97, 0.98, 0.99, 1.0], dtype='float64')
In [467]: df.ix[df.index[np.abs(df.index - 0.96) < 1e-6]]
Out[467]:
0 1 2 3 4 5
0.96 0.072472 0.012264 0.117352 0.217737 0.228561 0.08267
</code></pre>
<p>or, if you can change (round) your index:</p>
<pre><code>In [430]: df.index = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.95999999999, 0.97, 0.98, 0.99, 1.0]
In [431]: df
Out[431]:
0 1 2 3 4 5
0.00 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
0.01 0.975381 0.959061 0.979856 0.985625 0.986080 0.976601
0.02 0.959103 0.932374 0.966486 0.976037 0.976791 0.961114
0.03 0.946154 0.911362 0.955820 0.968362 0.969353 0.948785
0.04 0.935378 0.894024 0.946924 0.961940 0.963129 0.938518
0.05 0.926099 0.879201 0.939248 0.956385 0.957744 0.929672
0.06 0.917608 0.865726 0.932212 0.951282 0.952796 0.921574
0.96 0.072472 0.012264 0.117352 0.217737 0.228561 0.082670
0.97 0.066553 0.010632 0.109468 0.207225 0.217870 0.076244
0.98 0.060532 0.009069 0.101313 0.196119 0.206555 0.069677
0.99 0.054657 0.007642 0.093212 0.184828 0.195031 0.063237
1.00 0.019128 0.001314 0.039558 0.100442 0.108064 0.023328
In [432]: df.index
Out[432]: Float64Index([0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.95999999999, 0.97, 0.98, 0.99, 1.0], dtype='float64')
In [433]: df.ix[.96]
... skipped ...
KeyError: 0.96
</code></pre>
<p>let's round the index:</p>
<pre><code>In [434]: df.index = df.index.values.round(2)
In [435]: df.index
Out[435]: Float64Index([0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.96, 0.97, 0.98, 0.99, 1.0], dtype='float64')
In [436]: df.ix[.96]
Out[436]:
0 0.072472
1 0.012264
2 0.117352
3 0.217737
4 0.228561
5 0.082670
Name: 0.96, dtype: float64
</code></pre>
| 2 |
2016-09-19T23:15:34Z
|
[
"python",
"pandas",
"numpy",
"dataframe"
] |
Binding callbacks to minimize and maximize events in Toplevel windows
| 39,583,495 |
<p>I've read through related answers and it seems that the accepted way to do this is by binding callbacks to <code><Map></code> and <code><Unmap></code> events in the Toplevel widget. I've tried the following but to no effect:</p>
<pre><code>from Tkinter import *
tk = Tk()
def visible(event):
    print 'visible'
def invisible(event):
    print 'invisible'
tk.bind('<Map>', visible)
tk.bind('<Unmap>', invisible)
tk.mainloop()
</code></pre>
<p>I'm running python 2.7 on Linux. Could this be related to window manager code in different operating systems?</p>
<p>Calling <code>tk.iconify()</code> before <code>tk.mainloop()</code> has no effect either. In fact, the only command that produces the correct behavior is <code>tk.withdraw()</code> which is certainly not the same thing as minimizing the window. Additionally, if <code><Map></code> and <code><Unmap></code> events are triggered by calling <code>pack()</code>, <code>grid()</code>, or <code>place()</code>, why is <code><Map></code> triggered when the application window is minimized on Windows and/or Mac, as <a href="http://stackoverflow.com/a/3840216/2592879">this</a> and <a href="http://stackoverflow.com/a/26160588/2592879">this</a> answer suggest. And why would they be triggered when calling <code>withdraw()</code> and <code>deiconify()</code> on Linux?</p>
| 1 |
2016-09-19T23:21:34Z
| 39,585,436 |
<p>For me, on Win10, your code works perfectly, with the caveat that the middle frame button produces 'visible' whether it means 'maximize' or 'restore'. So maximize followed by restore results in 2 new 'visibles' becoming visible.</p>
<p>I did not particularly expect this because <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/event-types.html" rel="nofollow">this reference</a> says Map is produced when </p>
<blockquote>
<p>A widget is being mapped, that is, made visible in the application. This will happen, for example, when you call the widget's .grid() method. </p>
</blockquote>
<p>Toplevels are not managed by geometry. The more authoritative <a href="http://www.tcl.tk/man/tcl8.6/TkCmd/bind.htm#M13" rel="nofollow">tk doc</a> says </p>
<blockquote>
<p>Map, Unmap</p>
<p>The Map and Unmap events are generated whenever the mapping state of a window changes.</p>
<p>Windows are created in the unmapped state. Top-level windows become mapped when they transition to the normal state, and are unmapped in the withdrawn and iconic states.</p>
</blockquote>
<p>Try adding <code>tk.iconify()</code> before the mainloop call. This should be doing the same as the minimize button. If it does not result in 'invisible', then there appears to be a tcl/tk bug on Linux.</p>
| 0 |
2016-09-20T03:54:40Z
|
[
"python",
"linux",
"tkinter"
] |
Binding callbacks to minimize and maximize events in Toplevel windows
| 39,583,495 |
<p>I've read through related answers and it seems that the accepted way to do this is by binding callbacks to <code><Map></code> and <code><Unmap></code> events in the Toplevel widget. I've tried the following but to no effect:</p>
<pre><code>from Tkinter import *
tk = Tk()
def visible(event):
    print 'visible'
def invisible(event):
    print 'invisible'
tk.bind('<Map>', visible)
tk.bind('<Unmap>', invisible)
tk.mainloop()
</code></pre>
<p>I'm running python 2.7 on Linux. Could this be related to window manager code in different operating systems?</p>
<p>Calling <code>tk.iconify()</code> before <code>tk.mainloop()</code> has no effect either. In fact, the only command that produces the correct behavior is <code>tk.withdraw()</code> which is certainly not the same thing as minimizing the window. Additionally, if <code><Map></code> and <code><Unmap></code> events are triggered by calling <code>pack()</code>, <code>grid()</code>, or <code>place()</code>, why is <code><Map></code> triggered when the application window is minimized on Windows and/or Mac, as <a href="http://stackoverflow.com/a/3840216/2592879">this</a> and <a href="http://stackoverflow.com/a/26160588/2592879">this</a> answer suggest. And why would they be triggered when calling <code>withdraw()</code> and <code>deiconify()</code> on Linux?</p>
| 1 |
2016-09-19T23:21:34Z
| 39,586,786 |
<h3>Unmapping on Linux</h3>
<p>The term <code>Unmap</code> has a quite different meaning on Linux than it has on Windows. On Linux, <em>unmapping</em> a window means making it (nearly) untraceable; it does not appear in the application's icon, nor is it listed anymore in the output of <code>wmctrl -l</code>. We can unmap / map a window with the commands:</p>
<pre><code>xdotool windowunmap <window_id>
</code></pre>
<p>and:</p>
<pre><code>xdotool windowmap <window_id>
</code></pre>
<p>To see if we can even possibly make tkinter detect the window's state <em>minimized</em>, I added a thread to your basic window, printing the window's state once per second, using:</p>
<pre><code>root.state()
</code></pre>
<p>Minimized or not, the thread always printed:</p>
<pre><code>normal
</code></pre>
<h3>Workaround</h3>
<p>Luckily, if you <em>must</em> be able to detect the window's minimized state, on Linux we have alternative tools like <code>xprop</code> and <code>wmctrl</code>. Although it is about as dirty as it gets, it can be scripted reliably inside your application.</p>
<p>As requested in a comment, below a simplified example to create your own version of the bindings with external tools.</p>
<p><a href="http://i.stack.imgur.com/pLSI1.png" rel="nofollow"><img src="http://i.stack.imgur.com/pLSI1.png" alt="enter image description here"></a></p>
<h3>How it works</h3>
<ul>
<li>When the window appears (the application starts), We use <code>wmctrl -lp</code> to get the window's <code>id</code> by checking both <em>name</em> and <em>pid</em> (<code>tkinter</code> windows have pid 0).</li>
<li>Once we have the <code>window id</code>, we can check if the string <code>_NET_WM_STATE_HIDDEN</code> is in output of <code>xprop -id <window_id></code>. If so, the window is minimized.</li>
</ul>
<p>Then we can easily use <code>tkinter</code>'s <a href="http://stackoverflow.com/a/459131/1391444">after() method</a> to include a periodic check. In the example below, the comments should speak for themselves.</p>
<h3>What we need</h3>
<p>We need both <a href="https://linux.die.net/man/1/wmctrl" rel="nofollow">wmctrl</a> and <a href="https://linux.die.net/man/1/xprop" rel="nofollow">xprop</a> to be installed. On Debian-based systems:</p>
<pre><code>sudo apt-get install wmctrl xprop
</code></pre>
<h3>The code example</h3>
<pre><code>import subprocess
import time
from Tkinter import *
class TestWindow:
    def __init__(self, master):
        self.master = master
        self.wintitle = "Testwindow"
        self.checked = False
        self.state = None
        button = Button(self.master, text = "Press me")
        button.pack()
        self.master.after(0, self.get_state)
        self.master.title(self.wintitle)

    def get_window(self):
        """
        get the window by title and pid (tkinter windows have pid 0)
        """
        return [w.split() for w in subprocess.check_output(
            ["wmctrl", "-lp"]
            ).decode("utf-8").splitlines() if self.wintitle in w][-1][0]

    def get_state(self):
        """
        get the window state by checking if _NET_WM_STATE_HIDDEN is in the
        output of xprop -id <window_id>
        """
        try:
            """
            checked = False is to prevent repeatedly fetching the window id
            (saving fuel in the loop). after the window is determined, it passes further checks.
            """
            self.match = self.get_window() if self.checked == False else self.match
            self.checked = True
        except IndexError:
            pass
        else:
            win_data = subprocess.check_output(["xprop", "-id", self.match]).decode("utf-8")
            if "_NET_WM_STATE_HIDDEN" in win_data:
                newstate = "minimized"
            else:
                newstate = "normal"
            # only take action if state changes
            if newstate != self.state:
                print newstate
            self.state = newstate
        # check once per half a second
        self.master.after(500, self.get_state)

def main():
    root = Tk()
    app = TestWindow(root)
    root.mainloop()

if __name__ == '__main__':
    main()
</code></pre>
| 1 |
2016-09-20T06:05:25Z
|
[
"python",
"linux",
"tkinter"
] |
Binding callbacks to minimize and maximize events in Toplevel windows
| 39,583,495 |
<p>I've read through related answers and it seems that the accepted way to do this is by binding callbacks to <code><Map></code> and <code><Unmap></code> events in the Toplevel widget. I've tried the following but to no effect:</p>
<pre><code>from Tkinter import *
tk = Tk()
def visible(event):
    print 'visible'
def invisible(event):
    print 'invisible'
tk.bind('<Map>', visible)
tk.bind('<Unmap>', invisible)
tk.mainloop()
</code></pre>
<p>I'm running python 2.7 on Linux. Could this be related to window manager code in different operating systems?</p>
<p>Calling <code>tk.iconify()</code> before <code>tk.mainloop()</code> has no effect either. In fact, the only command that produces the correct behavior is <code>tk.withdraw()</code> which is certainly not the same thing as minimizing the window. Additionally, if <code><Map></code> and <code><Unmap></code> events are triggered by calling <code>pack()</code>, <code>grid()</code>, or <code>place()</code>, why is <code><Map></code> triggered when the application window is minimized on Windows and/or Mac, as <a href="http://stackoverflow.com/a/3840216/2592879">this</a> and <a href="http://stackoverflow.com/a/26160588/2592879">this</a> answer suggest. And why would they be triggered when calling <code>withdraw()</code> and <code>deiconify()</code> on Linux?</p>
| 1 |
2016-09-19T23:21:34Z
| 39,590,204 |
<p>My own implementation of the <a href="http://stackoverflow.com/a/39586786/2592879">hack</a> suggested by Jacob.</p>
<pre><code>from Tkinter import Tk, Toplevel
import subprocess
class CustomWindow(Toplevel):

    class State(object):
        NORMAL = 'normal'
        MINIMIZED = 'minimized'

    def __init__(self, parent, **kwargs):
        Toplevel.__init__(self, parent, **kwargs)
        self._state = CustomWindow.State.NORMAL
        self.protocol('WM_DELETE_WINDOW', self.quit)
        self.after(50, self._poll_window_state)

    def _poll_window_state(self):
        id = self.winfo_id() + 1
        winfo = subprocess.check_output(
            ['xprop', '-id', str(id)]).decode('utf-8')
        if '_NET_WM_STATE_HIDDEN' in winfo:
            state = CustomWindow.State.MINIMIZED
        else:
            state = CustomWindow.State.NORMAL
        if state != self._state:
            sequence = {
                CustomWindow.State.NORMAL: '<<Restore>>',
                CustomWindow.State.MINIMIZED: '<<Minimize>>'
            }[state]
            self.event_generate(sequence)
            self._state = state
        self.after(50, self._poll_window_state)

if __name__ == '__main__':
    root = Tk()
    root.withdraw()
    window = CustomWindow(root)

    def on_restore(event):
        print 'restore'

    def on_minimize(event):
        print 'minimize'

    window.bind('<<Restore>>', on_restore)
    window.bind('<<Minimize>>', on_minimize)

    root.mainloop()
</code></pre>
| 0 |
2016-09-20T09:18:40Z
|
[
"python",
"linux",
"tkinter"
] |
Converting a dictionary of tuple arrays to a CSV
| 39,583,508 |
<p>I'm trying to convert a dictionary structured like this:</p>
<pre><code>{
    'AAA': [ ('col1', 1), ('col2', 2), ('col3', 3) ],
    'BBB': [ ('col2', 1), ('col3', 4) ],
    'CCC': [ ('col4', 7) ]
}
</code></pre>
<p>...into a csv structured like this:</p>
<pre><code>key col1, col2, col3, col4
AAA 1 2 3
BBB 1 4
CCC 7
</code></pre>
<p>To be specific, I don't know what the columns will be named, or which columns will need to be created, until runtime, with the exception of the <code>key</code> column, which corresponds directly to the keys. If data isn't supplied for a given column, then it is considered to be empty.</p>
<p>Is there a simple way to do this in Python? I'm trying to avoid excessively re-shuffling the data into different structures, and all the examples I've seen for numpy involve parallel lists. I'm open to using libraries such as numpy and pandas.</p>
| 1 |
2016-09-19T23:23:01Z
| 39,583,796 |
<p>There isn't a simple way to do what you're asking for without processing your dictionary first.</p>
<p>Python has a csv library: <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a> but you have to have your data in the right format before using it. Your best bet is the <a href="https://docs.python.org/2/library/csv.html#csv.DictWriter" rel="nofollow"><code>DictWriter</code></a> class which can take a dict as each row. Your tuples can be easily converted to dicts, so all you need to be able to use this class is to get a list of the fieldnames (column names).</p>
<p>Here is how I printed your info into a csv:</p>
<pre><code>from csv import DictWriter
d = { 'AAA': [ ('c1', 1), ('c2', 2), ('c3', 3)],
      'BBB': [ ('c2', 1), ('c3', 4)],
      'CCC': [ ('c4', 7)]
    }

# convert dictionary of tuples into list of dictionaries
# and gather fieldnames at the same time
rows = []
fieldnames = set()
for k in d.keys():
    # a list of (k, v) tuples can be converted to a dict
    # but watch out for duplicate keys!
    tmp = dict(d[k])
    fieldnames.update(tmp.keys())
    tmp['key'] = k
    rows.append(tmp)

# add key to the front of the list, since sets are unordered
# you could sort the fieldnames however you want here
fieldnames = ['key'] + list(fieldnames)

# open the file and write the csv
with open('out.csv', 'w') as csvfile:
    writer = DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
</code></pre>
| 1 |
2016-09-20T00:04:39Z
|
[
"python",
"csv",
"dictionary"
] |
Python Multiprocessing and Queue
| 39,583,557 |
<p>I am new to the amazing world of Python and am developing a test system consisting of a continuous sense-and-test run. I have three or more while loops, of which one is the producer and the other two are consumers. I do not understand multiprocessing very well. Here is some sample code: the first loop creates data and the second loop gets the data. How do I implement this with infinite while loops? I will stop the loops in the main program, but I am asking for your kind help to understand the data exchange between the while loops.</p>
<pre><code>from multiprocessing import Process,Queue
from time import sleep
q=Queue()
cnt=0
def send():
    global cnt
    while True:
        sleep(1)
        cnt=cnt+1
        q.put(cnt,False)
        print ("data Send:",cnt)

def rcv():
    while True:
        sleep(1)
        newdata=q.get(cnt, False)
        print ("data Received",newdata)

if __name__=='__main__':
    p1=Process(target=send)
    p2=Process(target=rcv)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
</code></pre>
| 2 |
2016-09-19T23:30:06Z
| 39,583,700 |
<pre><code>NUM_CONSUMERS = 2   # assumed value; not defined in the original snippet

def send():
    cnt = -1
    while True:
        cnt += 1
        yield cnt

def rcv(value):
    pass  # logic goes here

consumers = [rcv] * NUM_CONSUMERS

idx = 0
for val in send():
    consumers[idx](val)
    idx += 1
    idx %= NUM_CONSUMERS    # wrap around over all consumers
</code></pre>
| -1 |
2016-09-19T23:49:56Z
|
[
"python"
] |
Python Multiprocessing and Queue
| 39,583,557 |
<p>I am new to the amazing world of Python and am developing a test system consisting of a continuous sense-and-test run. I have three or more while loops, of which one is the producer and the other two are consumers. I do not understand multiprocessing very well. Here is some sample code: the first loop creates data and the second loop gets the data. How do I implement this with infinite while loops? I will stop the loops in the main program, but I am asking for your kind help to understand the data exchange between the while loops.</p>
<pre><code>from multiprocessing import Process,Queue
from time import sleep
q=Queue()
cnt=0
def send():
    global cnt
    while True:
        sleep(1)
        cnt=cnt+1
        q.put(cnt,False)
        print ("data Send:",cnt)

def rcv():
    while True:
        sleep(1)
        newdata=q.get(cnt, False)
        print ("data Received",newdata)

if __name__=='__main__':
    p1=Process(target=send)
    p2=Process(target=rcv)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
</code></pre>
| 2 |
2016-09-19T23:30:06Z
| 39,586,063 |
<p>I would suggest you dive into the <a href="https://docs.python.org/2/library/multiprocessing.html#exchanging-objects-between-processes" rel="nofollow">documentation</a> of the multiprocessing library you are using.</p>
<p>Basically, you have two options: Queue and Pipe. Right now you are using a Queue,</p>
<pre><code>q.put(cnt,False)
...
newdata=q.get(cnt, False)
</code></pre>
<p>this will crash because you will try to get data from an empty Queue at some point, so you will need to check the queue status before reading from it.</p>
<pre><code>while not q.empty():
    newdata = q.get()
</code></pre>
<p>Other than that, if you want to have multiple receivers, you need to think about some kind of mutex (see <a href="https://docs.python.org/2/library/multiprocessing.html#synchronization-between-processes" rel="nofollow">multiprocessing.Lock</a>) or multiprocessing.Pipe, since if one reader process is just getting a value from the queue while another one is checking the status of the queue, the latter will fail because the queue will actually be empty when it tries to read from it.</p>
<p>However, for this minimal example, using a mutex (a <em>mutually exclusive lock</em>, which prevents multiple processes from accessing the same memory at the same time) will most likely negate the advantage gained from using multiple cores. That said, if the different processes actually do some heavy calculation with the values before/after accessing the queue, the benefit gained will be greater than the loss from using a lock.</p>
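<p>To make the data exchange concrete, here is a minimal sketch (not from the original answer) of one producer and two consumers sharing a single Queue: the queue is passed to each process explicitly, the blocking <code>q.get()</code> removes the need to poll <code>q.empty()</code>, and one <code>None</code> sentinel per consumer tells the consumers to stop.</p>
<pre><code>from multiprocessing import Process, Queue
from time import sleep

def send(q):
    for cnt in range(1, 11):    # finite run, just for the sketch
        sleep(1)
        q.put(cnt)
        print "data Send:", cnt

def rcv(q, name):
    while True:
        newdata = q.get()       # blocks until an item arrives
        if newdata is None:     # sentinel: stop this consumer
            break
        print "data Received by", name, ":", newdata

if __name__ == '__main__':
    q = Queue()
    consumers = [Process(target=rcv, args=(q, 'c%d' % i)) for i in range(2)]
    for c in consumers:
        c.start()
    producer = Process(target=send, args=(q,))
    producer.start()
    producer.join()
    for _ in consumers:
        q.put(None)             # one sentinel per consumer
    for c in consumers:
        c.join()
</code></pre>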
| 2 |
2016-09-20T05:05:56Z
|
[
"python"
] |
Python dictionary: number of keys with specific number of values
| 39,583,568 |
<p>I have a huge dictionary that looks like this: </p>
<pre><code>data = {'this': [{'DT': 100}], 'run': [{'NN': 215}, {'VB': 2}], 'the': [{'NNP': 6}, {'JJ': 7}, {'DT': 39517}]}
</code></pre>
<p>What I would like to do is run queries that would return, for example, the number of keys with exactly two values, in which case the answer is one because <code>'run'</code> is the only one with exactly two values, <code>{'NN': 215}</code> & <code>{'VB': 2}</code>.</p>
<p>I think this could be done with a regular expression but I could not find out how.</p>
| 0 |
2016-09-19T23:31:36Z
| 39,583,732 |
<p>This will do the job:</p>
<pre><code>print len( filter( lambda x: len( x ) == 2, data.values() ) )
</code></pre>
<p>The lambda returns true when the length of an item is 2. <code>filter()</code> selects only those items where the lambda returns true, and then we count the length of the sequence returned by <code>filter()</code>. <code>data.values()</code> allows us to filter on the values from the dict rather than the keys (which is what plain <code>data</code> would've given us). Since you only wanted the count, the values are all that's needed.</p>
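<p>An equivalent way to get the same count without building an intermediate list (not part of the original answer) is a generator expression:</p>
<pre><code>print sum(1 for v in data.values() if len(v) == 2)
</code></pre>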
| 2 |
2016-09-19T23:55:06Z
|
[
"python",
"dictionary"
] |
Python dictionary: number of keys with specific number of values
| 39,583,568 |
<p>I have a huge dictionary that looks like this: </p>
<pre><code>data = {'this': [{'DT': 100}], 'run': [{'NN': 215}, {'VB': 2}], 'the': [{'NNP': 6}, {'JJ': 7}, {'DT': 39517}]}
</code></pre>
<p>What I would like to do is run queries that would return, for example, the number of keys with exactly two values, in which case the answer is one because <code>'run'</code> is the only one with exactly two values, <code>{'NN': 215}</code> & <code>{'VB': 2}</code>.</p>
<p>I think this could be done with a regular expression but I could not find out how.</p>
| 0 |
2016-09-19T23:31:36Z
| 39,583,757 |
<p>You do not need a <code>regex</code> to achieve this; regexes are for parsing strings. A better way is to create a new list storing the keys whose value is a list with <code>len() == 2</code>:</p>
<pre><code>data = {'this': [{'DT': 100}], 'run': [{'NN': 215}, {'VB': 2}], 'the': [{'NNP': 6}, {'JJ': 7}, {'DT': 39517}]}
key_list = [k for k, v in data.items() if len(v) == 2]
# key_list => ['run']
</code></pre>
<p>To get the number of such keys, call <code>len</code> on it:</p>
<pre><code>>>> len(key_list)
1
</code></pre>
| 0 |
2016-09-20T00:00:08Z
|
[
"python",
"dictionary"
] |
Python dictionary: number of keys with specific number of values
| 39,583,568 |
<p>I have a huge dictionary that looks like this: </p>
<pre><code>data = {'this': [{'DT': 100}], 'run': [{'NN': 215}, {'VB': 2}], 'the': [{'NNP': 6}, {'JJ': 7}, {'DT': 39517}]}
</code></pre>
<p>What I would like to do is run queries that would return, for example, the number of keys with exactly two values, in which case the answer is one because <code>'run'</code> is the only one with exactly two values, <code>{'NN': 215}</code> & <code>{'VB': 2}</code>.</p>
<p>I think this could be done with a regular expression but I could not find out how.</p>
| 0 |
2016-09-19T23:31:36Z
| 39,583,934 |
<p>Just get the lengths and count the desired length.</p>
<pre><code>>>> map(len, data.values()).count(2)
1
</code></pre>
| 0 |
2016-09-20T00:22:59Z
|
[
"python",
"dictionary"
] |
Box Plot of grouped data in Pandas
| 39,583,581 |
<p>I am trying to plot a boxplot of a grouped dataset.</p>
<p>Imagine my data set looks like this</p>
<pre><code>Gender | Age
------ | ------
Male | 20
------ | ------
Female | 40
------ | ------
Female | 45
------ | ------
Unknown| 5
------ | ------
Male | 80
------ | ------
Female | 30
------ | ------
Unknown| 50
------ | ------
Male | 12
</code></pre>
<p>Now what I want to do is to plot a box plot which shows the Mean age of all three genders in the same plot figure which looks something like this:</p>
<p><a href="http://i.stack.imgur.com/BR0iK.png" rel="nofollow"><img src="http://i.stack.imgur.com/BR0iK.png" alt="Multi Box plots"></a></p>
<p>Currently what I have done is to group my dataset by Gender.</p>
<pre><code>data = data.groupby("Gender")
data["Age"].plot(kind="box")
</code></pre>
<p>But what this does is produce one box plot like this -</p>
<p><a href="http://i.stack.imgur.com/mdrh1.png" rel="nofollow"><img src="http://i.stack.imgur.com/mdrh1.png" alt="enter image description here"></a></p>
<p>How do I unstack them and produce a more meaningful visualization?</p>
| 1 |
2016-09-19T23:32:38Z
| 39,584,062 |
<p>I was able to figure it out on my own.</p>
<p>Using the seaborn package, one can simply do this</p>
<pre><code>sns.boxplot(data["Age"], groupby=data["Gender"])
</code></pre>
<p>and it renders a beautiful grouped boxplot.</p>
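<p>A pandas-only alternative (not part of the original answer), assuming <code>data</code> is the original, ungrouped DataFrame:</p>
<pre><code>import matplotlib.pyplot as plt

data.boxplot(column='Age', by='Gender')
plt.show()
</code></pre>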
| 0 |
2016-09-20T00:41:45Z
|
[
"python",
"matplotlib",
"plot"
] |
Spark Logistic Regression Error Dimension Mismatch
| 39,583,623 |
<p>I just started using spark and am trying to run a logistic regression.
I keep getting this error: </p>
<pre><code>Caused by: java.lang.IllegalArgumentException: requirement failed:
Dimensions mismatch when adding new sample. Expecting 21 but got 17.
</code></pre>
<p>The number of features that I have is 21, but I'm not sure what the 17 means here or what to do about it.
My code is here:</p>
<pre><code>from pyspark.mllib.regression import LabeledPoint
from numpy import array
def isfloat(string):
    try:
        float(string)
        return True
    except ValueError:
        return False

def parse_interaction(line):
    line_split = line.split(",")
    # leave_out = [1,2,3]
    clean_line_split = line_split[3:24]
    retention = 1.0
    if line_split[0] == '0.0':
        retention = 0.0
    return LabeledPoint(retention, array([map(float,i) for i in clean_line_split if isfloat(i)]))
training_data = raw_data.map(parse_interaction)
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from time import time
t0 = time()
logit_model = LogisticRegressionWithLBFGS.train(training_data)
tt = time() - t0
print "Classifier trained in {} seconds".format(round(tt,3))
</code></pre>
| 0 |
2016-09-19T23:38:51Z
| 39,585,121 |
<p>Looks like some problem with the raw data. I guess some of the values are not passing the <strong><code>isfloat</code></strong> validation. Try printing the values to the console; it will help you identify the offending lines.</p>
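<p>A quick way to do that, sketched here under the assumption that the data is loaded as in the question (this is not part of the original answer): parse the lines and keep only the points whose feature vector is not 21 long.</p>
<pre><code>parsed = raw_data.map(parse_interaction)
bad = parsed.filter(lambda lp: len(lp.features) != 21)   # malformed rows
print bad.count()
for lp in bad.take(5):
    print lp
</code></pre>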
| 0 |
2016-09-20T03:09:53Z
|
[
"python",
"apache-spark",
"pyspark",
"logistic-regression"
] |
Spark Logistic Regression Error Dimension Mismatch
| 39,583,623 |
<p>I just started using spark and am trying to run a logistic regression.
I keep getting this error: </p>
<pre><code>Caused by: java.lang.IllegalArgumentException: requirement failed:
Dimensions mismatch when adding new sample. Expecting 21 but got 17.
</code></pre>
<p>The number of features that I have is 21, but I'm not sure what the 17 means here or what to do about it.
My code is here:</p>
<pre><code>from pyspark.mllib.regression import LabeledPoint
from numpy import array
def isfloat(string):
    try:
        float(string)
        return True
    except ValueError:
        return False

def parse_interaction(line):
    line_split = line.split(",")
    # leave_out = [1,2,3]
    clean_line_split = line_split[3:24]
    retention = 1.0
    if line_split[0] == '0.0':
        retention = 0.0
    return LabeledPoint(retention, array([map(float,i) for i in clean_line_split if isfloat(i)]))
training_data = raw_data.map(parse_interaction)
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from time import time
t0 = time()
logit_model = LogisticRegressionWithLBFGS.train(training_data)
tt = time() - t0
print "Classifier trained in {} seconds".format(round(tt,3))
</code></pre>
| 0 |
2016-09-19T23:38:51Z
| 39,585,625 |
<p>The error comes from a matrix multiplication whose dimensions do not match: the feature array is not getting all 21 values. I suggest you set values to 0 when they are not floats, as that is (seemingly) what you want.</p>
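<p>A hedged sketch of that change, based on the <code>parse_interaction</code> function from the question (not the answerer's own code): substitute 0.0 for any field that does not parse as a float, so every feature vector keeps the full length of 21.</p>
<pre><code>def parse_interaction(line):
    line_split = line.split(",")
    clean_line_split = line_split[3:24]
    retention = 1.0
    if line_split[0] == '0.0':
        retention = 0.0
    # keep the vector length fixed: non-floats become 0.0 instead of being dropped
    features = [float(i) if isfloat(i) else 0.0 for i in clean_line_split]
    return LabeledPoint(retention, array(features))
</code></pre>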
| 0 |
2016-09-20T04:22:21Z
|
[
"python",
"apache-spark",
"pyspark",
"logistic-regression"
] |
Pandas group by, filter & plot
| 39,583,634 |
<p>I have a dataframe </p>
<pre><code>Date rule_name
Jan 1 2016 A
Feb 4 2016 B
Jun 6 2016 C
Feb 5 2016 B
Feb 9 2016 D
Jun 5 2016 A
</code></pre>
<p>And so on ...</p>
<p>I am hoping to get a dataframe for each rule similar to below:
E.g. Dataframe for rule_name A:</p>
<pre><code>date counts (rule_name) %_rule_name
Jan 16 1 100
Feb 16 0 0
Jun 16 1 50
</code></pre>
<p>E.g Dataframe for rule_name B:</p>
<pre><code>date counts (rule_name) %_rule_name
Jan 16 0 0
Feb 16 2 66.6
Jun 16 0 0
</code></pre>
<p>Etc.</p>
<p>My current solution:</p>
<pre><code>rule_names = df['rule_name'].unique().tolist()
for i in rule_names:
    df_temp = df[df['rule_name'] == i]
    df_temp = df.groupby(df['date'].map(lambda x: str(x.year) + '-' + str(x.strftime('%m')))).count()
    df_temp.plot(kind='line', title = 'Rule Name: ' + str(i))
</code></pre>
<p>As you can see I am unable to get the % of rule_name and am only plotting the count of rule_name. I feel like there is (a) a solution and (b) a better solution than iterating through each rule name and plotting, but I am unable to figure it out, unfortunately.</p>
| 2 |
2016-09-19T23:39:29Z
| 39,583,669 |
<p><strong><em>Solution</em></strong><br>
Use <code>df.Date.str.split().str[0]</code> to get months</p>
<pre><code>df.groupby([df.Date.str.split().str[0]]).rule_name.value_counts(True) \
.unstack(fill_value=0).mul(100).round(1)
</code></pre>
<p><a href="http://i.stack.imgur.com/YI8TV.png" rel="nofollow"><img src="http://i.stack.imgur.com/YI8TV.png" alt="enter image description here"></a></p>
<p><strong><em>Plot</em></strong></p>
<pre><code>df.groupby([df.Date.str.split().str[0]]).rule_name.value_counts(True) \
.unstack(fill_value=0).mul(100).round(1).plot.bar()
</code></pre>
<p><a href="http://i.stack.imgur.com/ujd23.png" rel="nofollow"><img src="http://i.stack.imgur.com/ujd23.png" alt="enter image description here"></a></p>
<p><strong><em>Validate Counts</em></strong> </p>
<pre><code>df.groupby([df.Date.str.split().str[0], df.rule_name]).size().unstack(fill_value=0)
</code></pre>
<p><a href="http://i.stack.imgur.com/s8nJS.png" rel="nofollow"><img src="http://i.stack.imgur.com/s8nJS.png" alt="enter image description here"></a></p>
| 3 |
2016-09-19T23:45:02Z
|
[
"python",
"pandas",
"plot",
"group-by",
"filtering"
] |
Not sure solution is correct for python classes
| 39,583,721 |
<p>Here is the question:
create a Car class with the following data attributes:</p>
<pre><code>__year_model
__make
__speed
</code></pre>
<p>The class should have an <code>__init__</code> method that accepts the car's year model
and make as arguments. The values should be assigned to the object's <code>__year_model</code> and <code>__make</code> attributes. It should assign 0 to the <code>__speed</code> data attribute.</p>
<p>It should also have the following method:</p>
<p>accelerate: accelerate should add 5 to the speed data attribute each time it
is called.</p>
<p>brake: this method should subtract 5 from the speed</p>
<p>get_speed: this should display the current speed</p>
<p>Next, design a program that creates a Car object, and then calls the accelerate method
five times. After each call to the accelerate method, get the current speed of the car and
display it. Then call the brake method five times. After each call to the brake method, get
the current speed of the car and display it.</p>
<pre><code>class Car:
    def __init__(self,year_model,make,speed):
        self.year_mode = year_model
        self.make = make
        self.speed = 0
    def accelerate(self,accelerate):
        self.accelerate = accelerate
        speed = 0
        for x in range(5):
            speed = speed + 5
            print speed
    def brake(self,brake):
        self.brake = brake
        speed = 0
        for x in range(5):
            brake = brake - 5
            print brake
    def get_speed(self):
        return self.speed
test = Car(2006,'Honda',100)
</code></pre>
<p>I know test is the instance. Is it also the object?
Thank you</p>
| 1 |
2016-09-19T23:53:46Z
| 39,583,749 |
<p>Yes, <code>test</code> is an object, i.e. an instance of a class <em>is</em> an object.</p>
<p>Your class needs some work. According to the spec, <code>speed</code> is not supposed to be passed to <code>__init__()</code>. Remove that and initialise <code>self.__speed</code> to <code>0</code> in <code>__init__()</code>.</p>
<p><code>accelerate()</code> and <code>brake()</code> are not supposed to take arguments - each method should either increase the current speed by 5, or reduce the speed by 5. Therefore these methods do not require any arguments (other than <code>self</code>).</p>
<p>So <code>accelerate()</code> should be this:</p>
<pre><code>def accelerate(self):
    self.__speed += 5 # shorthand for self.__speed = self.__speed + 5
</code></pre>
<p>which simply increases the speed by 5 each time it is called. Note that <code>self</code> is used so that the member <code>__speed</code> is incremented, not some temporary local varible. You can do the same for <code>brake()</code>.</p>
<p>Other than that, add the tests that your assignment specifies: call <code>accelerate</code> 5 times printing the speed after each call. Similarly for the <code>brake</code> method. Does the output agree with what you would expect?</p>
<p>Test like this:</p>
<pre><code>car = Car(2006,'Honda')
for i in range(5):
    car.accelerate()
    print(car.get_speed())

for i in range(5):
    car.brake()
    print(car.get_speed())
</code></pre>
<p>Oh, and name your members as per the spec: <code>__speed</code>, not <code>speed</code> etc.</p>
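<p>Putting those pieces together, a minimal sketch of the whole class as the spec describes it (my reading of the assignment, not an authoritative solution):</p>
<pre><code>class Car:
    def __init__(self, year_model, make):
        self.__year_model = year_model
        self.__make = make
        self.__speed = 0          # spec: speed starts at 0

    def accelerate(self):
        self.__speed += 5         # add 5 on each call

    def brake(self):
        self.__speed -= 5         # subtract 5 on each call

    def get_speed(self):
        return self.__speed
</code></pre>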
| 2 |
2016-09-19T23:58:15Z
|
[
"python",
"class",
"object"
] |
Simple Sorting Python Objects - Indexing Error
| 39,583,729 |
<p>I thought this would be a super simple sorting exercise but I'm not understanding the errors I'm getting.</p>
<p>I'm creating a simple <code>Book</code> class and then adding two entries. I am then sorting via <code>sorted</code> and using a <code>lambda</code> so that I sort by the <code>isbn</code> attribute.</p>
<pre><code>books = []
class Book:
    def __init__(self, title, author, isbn):
        self.title, self.author, self.isbn = title, author, isbn
    def __repr__(self):
        return repr((self.title, self.author, self.isbn))
#add 2 items
b = Book('Russian for All', 'Chompsky', '334')
books.append(b)
c = Book('English for All', 'Zöd', '229')
books.append(c)
#sort items by the isbn number
sorted_books = sorted(books, key=lambda book: book[2])
</code></pre>
<p><em>a)</em> I believe that <code>__repr__</code> is needed so that I get a nice-looking tuple for each entry when adding it to the array, as such:</p>
<pre><code>print(books)
[('Russian for All', 'Chompsky', '334'), ('English for All', 'Zöd', '229')]
</code></pre>
<p><em>b)</em> Because this looks like a standard list of tuples, using the sorted function should work without issue, but alas I get this error:</p>
<pre><code>Traceback (most recent call last):
File "Untitled 2.py", line 19, in <module>
sorted_books = sorted(books, key=lambda book: book[2])
File "Untitled 2.py", line 19, in <lambda>
sorted_books = sorted(books, key=lambda book: book[2])
TypeError: 'Book' object does not support indexing
</code></pre>
<p>It's as if I'm seeing a nice tuple representing the objects, but they're actually objects that can't be indexed by the sorting function.</p>
| 0 |
2016-09-19T23:54:44Z
| 39,583,741 |
<p>No, you need to use:</p>
<pre><code>sorted_books = sorted(books, key=lambda book: book.isbn)
</code></pre>
<p>The result:</p>
<pre><code>In [2]: sorted_books = sorted(books, key=lambda book: book.isbn)
In [3]: sorted_books
Out[3]: [('English for All', 'Zöd', '229'), ('Russian for All', 'Chompsky', '334')]
</code></pre>
<p>You need to access the book object attribute <code>isbn</code>. The <code>__repr__</code> is just that, a string representation <em>for printing</em>.</p>
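<p>An equivalent alternative (not part of the original answer) is <code>operator.attrgetter</code>, which avoids the lambda:</p>
<pre><code>from operator import attrgetter

sorted_books = sorted(books, key=attrgetter('isbn'))
</code></pre>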
| 1 |
2016-09-19T23:57:24Z
|
[
"python",
"sorting"
] |
Iterating through multiple text files and comparing
| 39,583,740 |
<p>I'm trying to write a function that puts text files into a list and then iterates through the files to find exact and partial copies, to weed out people who may have cheated by plagiarising their work. I start by using my class roster and adding .txt to each name to find their assignments and whether they've even completed the assignment or not. I have over 500 students' papers to read. With the code I've written so far, it is literally iterating word by word within the .txt files, so I'm getting WAY TOO many "cheated"s back. PLEASE HELP.</p>
<pre><code>def Cheaters():
    file = open("roster.txt", "r")
    L = []
    for i in file:
        new = [i[:-1], ".txt"]
        new2 = "".join(new)
        if i not in L:
            L.append(new2)
    for j in L:
        try:
            file2 = open(j, "r")
            for n in file2:
                for m in file2:
                    if n == m:
                        print("Cheated")
        except:
            print("No work submitted")
</code></pre>
| 1 |
2016-09-19T23:57:08Z
| 39,584,310 |
<p>Try this. You may need to modify it for your file structure, but it should be close.</p>
<pre><code>import re
from itertools import product
def hash_sentences(document):
    # remove all characters except those below, replace with a space
    # split into a list
    cleaned_text = re.sub(r'[^A-z0-9,;:\.\?! ]', ' ', document)
    sentences = re.split(r'[\?.!\.]', cleaned_text)
    # the less than 5 removes short sentences like "Dr."
    # return a hash of the sentences for comparison
    return [hash(s.strip().lower()) for s in sentences if len(s) > 5]

def compare_documents(doc1, doc2):
    hash1 = hash_sentences(doc1)
    hash2 = hash_sentences(doc2)
    # return the percentage of sentences of doc1 that are in doc2
    return sum((h in hash2) for h in hash1) / float(len(hash1))

# get list of document file names (strip the newline before adding the extension)
with open('roster.txt', 'r') as fp:
    doc_fnames = [d.strip() + '.txt' for d in fp.readlines()]

# create dictionary of file names and content
doc_dict = {}
for fname in doc_fnames:
    try:
        with open(fname, 'r') as fp:
            doc_dict[fname] = fp.read()
    except:
        print('No submission: %s' %fname)

# iterate through the pairs of documents
for doc_pair in product(doc_dict.keys(), doc_dict.keys()):
    pct = compare_documents(doc_dict[doc_pair[0]], doc_dict[doc_pair[1]])
    print('Percentage of %s sentences in %s: %0.2f%%' %(doc_pair[0], doc_pair[1], 100*pct))
</code></pre>
| 0 |
2016-09-20T01:18:42Z
|
[
"python",
"list",
"file",
"iteration"
] |
NaN from sparse_softmax_cross_entropy_with_logits in Tensorflow
| 39,583,752 |
<p>I am getting NaN when I attempt to use the sparse_softmax_cross_entropy_with_logits loss function in tensorflow. I have a simple network, something like:</p>
<pre><code>layer = tf.nn.relu(tf.matmul(inputs, W1) + b1)
layer = tf.nn.relu(tf.matmul(inputs, W2) + b2)
logits = tf.matmul(inputs, W3) + b3
loss = tf.sparse_softmax_cross_entropy_with_logits(logits, labels)
</code></pre>
<p>I have many classes (~10000), so I imagine I am getting NaN because the logit corresponding to the correct class in at least one of my examples got truncated to zero. Is there a way to avoid this?</p>
| 0 |
2016-09-19T23:58:44Z
| 39,588,018 |
<p><code>tf.sparse_softmax_cross_entropy_with_logits</code> handles the case of <code>log(0)</code> for you; you don't have to worry about it.</p>
<p>Usually a <code>NaN</code> is due to a high learning rate in your optimization algorithm. Try lowering it until the <code>NaN</code> errors disappear and the loss starts to decrease.</p>
| 1 |
2016-09-20T07:21:53Z
|
[
"python",
"tensorflow"
] |
NaN from sparse_softmax_cross_entropy_with_logits in Tensorflow
| 39,583,752 |
<p>I am getting NaN when I attempt to use the sparse_softmax_cross_entropy_with_logits loss function in tensorflow. I have a simple network, something like:</p>
<pre><code>layer = tf.nn.relu(tf.matmul(inputs, W1) + b1)
layer = tf.nn.relu(tf.matmul(inputs, W2) + b2)
logits = tf.matmul(inputs, W3) + b3
loss = tf.sparse_softmax_cross_entropy_with_logits(logits, labels)
</code></pre>
<p>I have many classes (~10000), so I imagine I am getting NaN because the logit corresponding to the correct class in at least one of my examples got truncated to zero. Is there a way to avoid this?</p>
| 0 |
2016-09-19T23:58:44Z
| 39,588,174 |
<p>The <code>NaN</code> error probably occurs when one of the softmaxed logits gets truncated to 0, as you have said, and then it performs log(0) to compute the cross-entropy error.</p>
<p>To avoid this, as it is suggested in <a href="http://stackoverflow.com/questions/33712178/tensorflow-nan-bug/33713196#33713196">this other answer</a>, you could clip the values of the softmax output so that they are never zero.</p>
<pre><code>out = tf.clip_by_value(out,1e-10,100.0)
</code></pre>
<p>Or you could add a small constant to avoid having zeros:</p>
<pre><code>out = out + 1e-10
</code></pre>
<p>The problem with it is that the softmax function is applied on the logits internally by <code>sparse_softmax_cross_entropy_with_logits()</code> so you can not change its behavior.</p>
<p>To overcome this, code the cross entropy error yourself and add the constant <code>1e-10</code> to the output of the softmax, not to the logits.</p>
<pre><code>loss = -tf.reduce_sum(labels*tf.log(tf.nn.softmax(logits) + 1e-10))
</code></pre>
<p>Be aware that with the <code>sparse_softmax_cross_entropy_with_logits()</code> function the variable <code>labels</code> was the numeric value of the label, but if you implement the cross-entropy loss yourself, <code>labels</code> have to be the one-hot encoding of these numeric labels.</p>
<p><strong>Update:</strong> I have corrected the answer thanks to the comment by <a href="http://stackoverflow.com/users/997378/mdaoust">@mdaoust</a>. As he said the zeros are only relevant after the softmax function has been applied to the logits, not before.</p>
| 1 |
2016-09-20T07:29:42Z
|
[
"python",
"tensorflow"
] |
NaN from sparse_softmax_cross_entropy_with_logits in Tensorflow
| 39,583,752 |
<p>I am getting NaN when I attempt to use the sparse_softmax_cross_entropy_with_logits loss function in tensorflow. I have a simple network, something like:</p>
<pre><code>layer = tf.nn.relu(tf.matmul(inputs, W1) + b1)
layer = tf.nn.relu(tf.matmul(inputs, W2) + b2)
logits = tf.matmul(inputs, W3) + b3
loss = tf.sparse_softmax_cross_entropy_with_logits(logits, labels)
</code></pre>
<p>I have many classes (~10000), so I imagine I am getting NaN because the logit corresponding to the correct class in at least one of my examples got truncated to zero. Is there a way to avoid this?</p>
| 0 |
2016-09-19T23:58:44Z
| 39,602,346 |
<p>It actually turns out that some of my labels were out of range (e.g. a label of 14000, when my logits matrix is just 150 x 10000). This results in a NaN rather than an error.</p>
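<p>A quick sanity check for this (a sketch with assumed variable names, not from the original post) is to verify the labels with NumPy before training:</p>
<pre><code>import numpy as np

num_classes = 10000                     # must match the second dimension of the logits
labels_arr = np.asarray(batch_labels)   # assumed: the label array fed to the graph
bad = labels_arr[(labels_arr < 0) | (labels_arr >= num_classes)]
print("out-of-range labels: %s" % bad)
</code></pre>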
| 0 |
2016-09-20T19:29:12Z
|
[
"python",
"tensorflow"
] |
Pyspark DataFrame - How to use variables to make join?
| 39,583,773 |
<p>I'm having a bit of trouble making a join on two DataFrames using Spark DataFrames in Python. I have two data frames whose columns I had to rename in order to make them unique to each data frame, so that later I could tell which column is which. I did this to rename the columns (firstDf and secondDf are Spark DataFrames created using the function createDataFrame):</p>
<pre><code>oldColumns = firstDf.schema.names
newColumns = list(map(lambda x: "{}.{}".format('firstDf', x), oldColumns))
firstDf = firstDf.toDF(*newColumns)
</code></pre>
<p>I repeated this for the second DataFrame. Then I tried to join them, using the following code:</p>
<pre><code>from pyspark.sql.functions import *
firstColumn = 'firstDf.firstColumn'
secondColumn = 'secondDf.firstColumn'
joinedDF = firstDf.join(secondDf, col(firstColumn) == col(secondColumn), 'inner')
</code></pre>
<p>Using it like this I get the following error:</p>
<blockquote>
<p>AnalysisException "cannot resolve 'firstDf.firstColumn' given input columns: [firstDf.firstColumn, ...];"</p>
</blockquote>
<p>This was only to illustrate that the column exists in the input columns array.</p>
<p>If I don't rename the DataFrames columns I'm able to join them using this piece of code:</p>
<pre><code>joinedDf = firstDf.join(secondDf, firstDf.firstColumn == secondDf.firstColumn, 'inner')
</code></pre>
<p>But this give me a DataFrame with ambiguous column names.</p>
<p>Any ideas on how to approach this?</p>
| 2 |
2016-09-20T00:02:43Z
| 39,583,831 |
<p>Generally speaking don't use dots in names. These have special meaning (can be used either to determine the table or to access <code>struct</code> fields) and require some additional work to be correctly recognized.</p>
<p>For equi joins all you need is a column name:</p>
<pre><code>from pyspark.sql.functions import col
firstDf = spark.createDataFrame([(1, "foo")], ("firstColumn", "secondColumn"))
secondDf = spark.createDataFrame([(1, "foo")], ("firstColumn", "secondColumn"))
column = 'firstColumn'
firstDf.join(secondDf, [column], 'inner')
## DataFrame[firstColumn: bigint, secondColumn: string, secondColumn: string]
</code></pre>
<p>For complex cases use table aliases:</p>
<pre><code>firstColumn = 'firstDf.firstColumn'
secondColumn = 'secondDf.firstColumn'
firstDf.alias("firstDf").join(
    secondDf.alias("secondDf"),
    # After alias prefix resolves to table name
    col(firstColumn) == col(secondColumn),
    "inner"
)
## DataFrame[firstColumn: bigint, secondColumn: string, firstColumn: bigint, secondColumn: string]
</code></pre>
<p>You could also use parent frames directly:</p>
<pre><code>column = 'firstColumn'
firstDf.join(secondDf, firstDf[column] == secondDf[column])
</code></pre>
| 0 |
2016-09-20T00:10:02Z
|
[
"python",
"apache-spark",
"pyspark",
"spark-dataframe",
"pyspark-sql"
] |
Python: how would I write a keyevent?
| 39,583,782 |
<p>So simply put, I want my code to call an event like OnKeyPress("Keyname") or something each time I press a key. I don't want to use tkinter.</p>
<p>I have an Update method, defined in a parent class, that gets called 20 times a second.
I have several child classes that have that function called. On some of them I want to use a key press event to make parts of the robot I am working on move (e.g. while I am pressing the up arrow the robot will move forward). tkinter is, well, not designed for this; it's more for drawing content on a screen, and I want to make my code more efficient for its purpose. Also, I would prefer to write my own rather than use a pre-made one so I can get a better understanding of how it works.</p>
| 0 |
2016-09-20T00:03:28Z
| 39,583,899 |
<p>This should work:</p>
<pre><code>class _GetchUnix:
    def __init__(self):
        import tty, sys
    def __call__(self):
        import sys, tty, termios
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(sys.stdin.fileno())
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        return ch

class _GetchWindows:
    def __init__(self):
        import msvcrt
    def __call__(self):
        import msvcrt
        return msvcrt.getch()

class _Getch:
    def __init__(self):
        try:
            self.impl = _GetchWindows()
        except ImportError:
            self.impl = _GetchUnix()
    def __call__(self): return self.impl()

getch = _Getch()
while True:
    if getch.impl():
        print getch.impl()
</code></pre>
<p>Everything up to the <code>while</code> loop is what you need. Then, whenever you call <code>getch.impl()</code>, it will wait for an input, but if there is none, it will move on. That's why I used the <code>while</code> loop.</p>
<p><strong>DISCLAIMER</strong><br>
This is not my code, please check out:<br>
<a href="http://code.activestate.com/recipes/134892/" rel="nofollow">http://code.activestate.com/recipes/134892/</a></p>
| 0 |
2016-09-20T00:18:26Z
|
[
"python",
"events",
"keypress"
] |
Python: how would I write a keyevent?
| 39,583,782 |
<p>So simply put, I want my code to call an event like OnKeyPress("Keyname") or something each time I press a key. I don't want to use tkinter.</p>
<p>I have an Update method, defined in a parent class, that gets called 20 times a second.
I have several child classes that have that function called. On some of them I want to use a key press event to make parts of the robot I am working on move (e.g. while I am pressing the up arrow the robot will move forward). tkinter is, well, not designed for this; it's more for drawing content on a screen, and I want to make my code more efficient for its purpose. Also, I would prefer to write my own rather than use a pre-made one so I can get a better understanding of how it works.</p>
| 0 |
2016-09-20T00:03:28Z
| 39,584,910 |
<p>I urge you to reconsider not using tkinter. Ignoring the GUI stuff, it has a tested, cross-platform, <em>asynchronous</em> event loop system that handles both scheduled (timed) events and key press and release events. And it is written in C and updated as platforms change. (See the comments for the recipe @Ty quoted about platform changes.) You will find it very difficult to reproduce the same facilities in Python.</p>
<p>Usage would start with</p>
<pre><code>import tkinter
root = tkinter.Tk()
root.withdraw() # make the window invisible
</code></pre>
<p>Thereafter, one can ignore the GUI stuff and only use root.bind(key event, function) and root.after(milliseconds, function, *args). (In the meanwhile, the option to use graphics to interact with the robot in the future would still be there.)</p>
<p>Given existing move_forward and stop_move functions, here is how to make a robot move while pressing up-arrow.</p>
<pre><code>root.bind('<KeyPress-Up>', move_forward)
root.bind('<KeyRelease-Up>', stop_move)
</code></pre>
<p>About the importance of being asynchronous (non-blocking): I presume you call an update function 20 times a second by calling time.sleep(.05) between each update. This is a blocking call and during this period your code is frozen. It cannot do anything and cannot respond to events.</p>
<p>Suppose you develop two or more animations for the robot: say one for the arms, one for the head (or whatever is appropriate for your robot). Each would consist of a series of timed commands, implemented by root.after calls. Now suppose you want to run two or more animations in parallel. Assuming that each only requires a small fraction of CPU time, this would be trivial with tkinter because each would run asynchronously and not block the program from doing other things. With a blocking event system, you would have to write a new animation for each combination you want.</p>
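<p>For example, a minimal sketch (assumed names, not from the original answer) of replacing a sleep-based update loop with a non-blocking <code>root.after()</code> loop that still runs about 20 times per second:</p>
<pre><code>def update():
    # ... do one sensor/robot update step here ...
    root.after(50, update)   # reschedule itself in 50 ms without blocking

root.after(50, update)
root.mainloop()
</code></pre>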
| 1 |
2016-09-20T02:42:41Z
|
[
"python",
"events",
"keypress"
] |
unicode for arrow keys?
| 39,583,798 |
<p>I am trying to make a game where you can instantly have the user's input be transmitted to the computer instead of having to press <code>enter</code> every time. I know how to do that, but I cannot seem to find the unicode number for the arrow keys. Is there unicode for that, or am I just going to be stuck with wasd?</p>
<pre><code>class _GetchUnix:
    def __init__(self):
        import tty, sys
    def __call__(self):
        import sys, tty, termios
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(sys.stdin.fileno())
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        return ch

class _GetchWindows:
    def __init__(self):
        import msvcrt
    def __call__(self):
        import msvcrt
        return msvcrt.getch()

class _Getch:
    def __init__(self):
        try:
            self.impl = _GetchWindows()
        except ImportError:
            self.impl = _GetchUnix()
    def __call__(self): return self.impl()

getch = _Getch()
</code></pre>
<p>I'm using <code>getch.impl()</code> as a trial-and-error input: if there's a key being pressed when the function is called, it returns that key and moves on. If there's no key being pressed, it just moves on.<br>
I'm using Python 2.7.10</p>
| 0 |
2016-09-20T00:04:55Z
| 39,584,425 |
<p>Start by reading the relevant doc for msvcrt.</p>
<blockquote>
<p>msvcrt.kbhit()</p>
<p>Return true if a keypress is waiting to be read.</p>
<p>msvcrt.getch()</p>
<p>Read a keypress and return the resulting character as a byte string. Nothing is echoed to the console. This call will block if a keypress is not already available, but will not wait for Enter to be pressed. If the pressed key was a special function key, this will return '\000' or '\xe0'; the next call will return the keycode. The Control-C keypress cannot be read with this function.</p>
</blockquote>
<p>Notice that getch blocks and requires two calls for special function keys, which include arrow keys (they initially return <code>b'\xe0'</code>).</p>
<p>Then use sys.platform and write two versions of a get_arrow function.</p>
<pre><code>import sys
if sys.platform == 'win32':
import msvcrt as ms
d = {b'H': 'up', b'K': 'lt', b'P': 'dn', b'M': 'rt'}
def get_arrow():
if ms.kbhit() and ms.getch() == b'\xe0':
return d.get(ms.getch(), None)
else:
return None
else: # unix
...
</code></pre>
<p>I experimentally determined the mapping of keys to codes with the following code. (This will not work when run in IDLE and maybe not in other GUI frameworks, as getch conflicts with GUI handling of the keyboard.)</p>
<pre><code>>>> import msvcrt as ms
>>> for i in range(8): print(ms.getch())
...
b'\xe0'
b'H'
b'\xe0'
b'K'
b'\xe0'
b'P'
b'\xe0'
b'M'
</code></pre>
<p>I tested the function on Windows with</p>
<pre><code>while True:
dir = get_arrow()
if dir: print(dir)
</code></pre>
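<p>For the unix branch, a rough sketch would read the escape sequence an arrow key sends. My assumption here is a VT100-style terminal, where an arrow arrives as the three bytes <code>ESC [ A/B/C/D</code>; a lone Escape press would make the extra reads block:</p>
<pre><code>else:  # unix
    import sys, termios, tty, select
    d = {'A': 'up', 'B': 'dn', 'C': 'rt', 'D': 'lt'}
    def get_arrow():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)          # single-character reads, no Enter needed
            if select.select([sys.stdin], [], [], 0)[0]:   # non-blocking check
                if sys.stdin.read(1) == '\x1b' and sys.stdin.read(1) == '[':
                    return d.get(sys.stdin.read(1), None)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
        return None
</code></pre>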
| 0 |
2016-09-20T01:36:08Z
|
[
"python",
"python-2.7",
"unicode",
"python-unicode"
] |
Matplotlib squishes axis labels when setting labels on twiny() axis
| 39,583,802 |
<p>I'm trying to add a second set of tic marks to my plot. I'm getting the original tic marks with <code>get_xticks()</code>, converting them to what I want (in this example simply adding 100) and calling <code>set_xticks()</code> on an axis I got from <code>ax.twiny()</code>.</p>
<p>When I do this, my twin axis labels are all crammed to the right as seen in the top right of the graph. </p>
<p>Here's the code I used:</p>
<pre><code> ax2 = ax.twiny()
ax2.set_xticks(ax.get_xticks()+100)
</code></pre>
<p><a href="http://i.stack.imgur.com/LyAAA.png" rel="nofollow"><img src="http://i.stack.imgur.com/LyAAA.png" alt="enter image description here"></a></p>
<p>FYI:</p>
<pre><code> print(ax.get_xticks())
[ 950. 1000. 1050. 1100. 1150. 1200. 1250.]
</code></pre>
| 0 |
2016-09-20T00:05:07Z
| 39,583,986 |
<p>Try this (incomplete code; needs to be merged with OPs):</p>
<pre><code>from matplotlib.ticker import FuncFormatter
def shift(x, pos):
return x+100
formatter = FuncFormatter(shift)
ax2 = ax1.twiny()
ax2.xaxis.set_major_formatter(formatter)
ax2.set_xlim(ax1.get_xlim())
</code></pre>
<p>Read also <a href="http://stackoverflow.com/questions/13772673/matplotlib-imshow-twiny-problems">this question</a>.</p>
| 0 |
2016-09-20T00:30:05Z
|
[
"python",
"matplotlib",
"plot"
] |
Static files not being served on Django
| 39,583,896 |
<p>I am trying to serve static files in Django in development (<code>DEBUG=True</code>) mode. I have a directory structure like this:</p>
<pre><code>my_project/
...
static/
img.png
</code></pre>
<p>In my <code>settings.py</code> I have this:</p>
<pre><code>STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILE_DIRS = [
os.path.join(BASE_DIR, "static"),
]
STATIC_URL = '/static/'
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
</code></pre>
<p>and in <code>my_project/urls.py</code> I have this:</p>
<pre><code>from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
urlpatterns = [
# ...
url(r'^admin/', include(admin.site.urls)),
url(r'^', include('app.landing.urls')),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>This all seems to be correct. When I visit <a href="http://127.0.0.1:8000/static/img.png" rel="nofollow">http://127.0.0.1:8000/static/img.png</a> though, I get a 404.</p>
<p>What am I doing wrong?</p>
| 1 |
2016-09-20T00:17:59Z
| 39,585,592 |
<p>You can debug this in many different ways. Here's my approach.</p>
<p>main settings.py:</p>
<pre><code>DEBUG = False
TDEBUG = True
</code></pre>
<p>urls.py:</p>
<pre><code>from django.conf import settings
import os
if settings.TDEBUG:
    urlpatterns += [
        url(r'^static/(?P<path>.*)$', 'django.views.static.serve',
            {'document_root': os.path.join(os.path.dirname(__file__), 'static')}),
    ]
</code></pre>
| 0 |
2016-09-20T04:17:16Z
|
[
"python",
"django"
] |
Multiple Levels of Sorting without module
| 39,583,953 |
<p><a href="https://wiki.python.org/moin/HowTo/Sorting" rel="nofollow">This article</a> states that you can use multiple levels of sorting with the <code>operator</code> module.</p>
<blockquote>
<p>The operator module functions allow multiple levels of sorting. For example, to sort by grade then by age: </p>
</blockquote>
<pre><code>>>>sorted(student_objects, key=attrgetter('grade', 'age'))
[('john', 'A', 15), ('dave', 'B', 10), ('jane', 'B', 12)]
</code></pre>
<p>This should be possible to do the standard way without needing a module:</p>
<pre><code>sorted(student_objects, key=lambda student: student.age #somehow add another
</code></pre>
<p>I can't figure out the standard way to do this though, is it possible?</p>
| 0 |
2016-09-20T00:26:08Z
| 39,583,964 |
<p>Do what the function in <code>operator</code> does and return a tuple:</p>
<pre><code>key=lambda student: (student.grade, student.age)
</code></pre>
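<p>If one of the keys should sort in the opposite direction, you can negate a numeric field (or sort in two passes, since Python's sort is stable), e.g.:</p>
<pre><code># ascending grade, then descending age
key=lambda student: (student.grade, -student.age)
</code></pre>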
| 2 |
2016-09-20T00:27:27Z
|
[
"python",
"sorting"
] |
Why is 'é' and 'é' encoding to different bytes?
| 39,583,993 |
<h1>Question</h1>
<p>Why is the same character encoding to different bytes in different parts of my code base?</p>
<h1>Context</h1>
<p>I have a unit test that generates a temporary file tree and then checks to make sure my scan actually finds the file in question.</p>
<pre><code>def test_unicode_file_name():
test_regex = "é"
file_tree = {"files": ["é"]} # File created with python.open()
with TempTree(file_tree) as tmp_tree:
import pdb; pdb.set_trace()
result = tasks.find_files(test_regex, root_path=tmp_tree.root_path)
expected = [os.path.join(tmp_tree.root_path, "é")]
assert result == expected
</code></pre>
<h1>Function that's failing</h1>
<pre><code>for dir_entry in scandir(current_path):
if dir_entry.is_dir():
dirs_to_search.append(dir_entry.path)
if dir_entry.is_file():
testing = dir_entry.name
if filename_regex.match(testing):
results.append(dir_entry.path)
</code></pre>
<h1>PDB Session</h1>
<p>When I started digging into things I found that the test character (copied from my unit test) and the character in <code>dir_entry.name</code> encoded to different bytes.</p>
<pre><code>(Pdb) testing
'é'
(Pdb) 'é'
'é'
(Pdb) testing == 'é'
False
(Pdb) testing in 'é'
False
(Pdb) type(testing)
<class 'str'>
(Pdb) type('é')
<class 'str'>
(Pdb) repr(testing)
"'é'"
(Pdb) repr('é')
"'é'"
(Pdb) 'é'.encode("utf-8")
b'\xc3\xa9'
(Pdb) testing.encode("utf-8")
b'e\xcc\x81'
</code></pre>
| 2 |
2016-09-20T00:31:04Z
| 39,585,206 |
<p>Your operating system (MacOS, at a guess) has converted the filename <code>'é'</code> to <a href="https://en.wikipedia.org/wiki/Unicode_equivalence#Normal_forms" rel="nofollow">Unicode Normal Form D</a>, decomposing it into an unaccented <code>'e'</code> and a combining acute accent. You can see this clearly with a quick session in the Python interpreter:</p>
<pre><code>>>> import unicodedata
>>> e1 = b'\xc3\xa9'.decode()
>>> e2 = b'e\xcc\x81'.decode()
>>> [unicodedata.name(c) for c in e1]
['LATIN SMALL LETTER E WITH ACUTE']
>>> [unicodedata.name(c) for c in e2]
['LATIN SMALL LETTER E', 'COMBINING ACUTE ACCENT']
</code></pre>
<p>To ensure that you're comparing like with like, you can convert the filename given by <code>dir_entry.name</code> back to Normal Form C before testing it against your regex:</p>
<pre><code>import unicodedata
for dir_entry in scandir(current_path):
if dir_entry.is_dir():
dirs_to_search.append(dir_entry.path)
if dir_entry.is_file():
testing = unicodedata.normalize('NFC', dir_entry.name)
if filename_regex.match(testing):
results.append(dir_entry.path)
</code></pre>
| 3 |
2016-09-20T03:20:59Z
|
[
"python",
"python-3.x",
"unicode",
"normalization"
] |
Reading .obj file and split lines
| 39,584,021 |
<p>I made a simple code to read a .obj file that looks like this</p>
<pre><code>g cube
v 0.0 0.0 0.0
v 0.0 0.0 1.0
v 0.0 1.0 0.0
v 0.0 1.0 1.0
v 1.0 0.0 0.0
v 1.0 0.0 1.0
v 1.0 1.0 0.0
v 1.0 1.0 1.0
f 1//2 7//2 5//2
f 1//2 3//2 7//2
f 1//6 4//6 3//6
f 1//6 2//6 4//6
f 3//3 8//3 7//3
f 3//3 4//3 8//3
f 5//5 7//5 8//5
f 5//5 8//5 6//5
f 1//4 5//4 6//4
f 1//4 6//4 2//4
f 2//1 6//1 8//1
f 2//1 8//1 4//1
</code></pre>
<p>And the python code looks like this</p>
<pre><code>class objeto():
def __init__(self, obj = None):
if obj:
self.cargar_obj(obj)
def cargar_obj(self, archivo):
with open(archivo, 'r') as obj:
datos = obj.read()
lineas = datos.splitlines()
self.vertices = []
self.superficies = []
for linea in lineas:
elem = linea.split()
if elem:
if elem[0] == 'v':
v = vertice(float(elem[1]), float(elem[2]), float(elem[3]))
self.vertices.append(v)
elif elem[0] == 'f':
vs = []
for i in range(1, len(elem)):
vs.append(self.vertices[int(elem[i].split('/'))])
f = cara(vs)
self.superficies.append(f)
else:
pass
</code></pre>
<p>The problem appears to be this line:</p>
<pre><code>vs.append(self.vertices[int(elem[i].split('/'))])
</code></pre>
<p>because when I try to run the code, the followed TypeError appears</p>
<pre><code>int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
<p>I'm a newbie in Python, so I don't seem to get around this error, can somebody give me some advice? Thanks.</p>
<p>I'm using Python 3.x, and ipython to run .py files</p>
| 0 |
2016-09-20T00:35:11Z
| 39,584,038 |
<p>The error message is pretty clear: you are passing <code>int</code> a list (the result of calling <code>split</code>); maybe you only meant to call it with one of the elements of that list.</p>
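<p>For example, <code>'1//2'.split('/')</code> gives <code>['1', '', '2']</code>; the vertex index is the first element, and .obj indices start at 1 while your list starts at 0, so something along these lines should work:</p>
<pre><code>vs.append(self.vertices[int(elem[i].split('/')[0]) - 1])
</code></pre>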
| 0 |
2016-09-20T00:38:05Z
|
[
"python",
"python-3.x"
] |
Reading .obj file and split lines
| 39,584,021 |
<p>I made a simple code to read a .obj file that looks like this</p>
<pre><code>g cube
v 0.0 0.0 0.0
v 0.0 0.0 1.0
v 0.0 1.0 0.0
v 0.0 1.0 1.0
v 1.0 0.0 0.0
v 1.0 0.0 1.0
v 1.0 1.0 0.0
v 1.0 1.0 1.0
f 1//2 7//2 5//2
f 1//2 3//2 7//2
f 1//6 4//6 3//6
f 1//6 2//6 4//6
f 3//3 8//3 7//3
f 3//3 4//3 8//3
f 5//5 7//5 8//5
f 5//5 8//5 6//5
f 1//4 5//4 6//4
f 1//4 6//4 2//4
f 2//1 6//1 8//1
f 2//1 8//1 4//1
</code></pre>
<p>And the python code looks like this</p>
<pre><code>class objeto():
def __init__(self, obj = None):
if obj:
self.cargar_obj(obj)
def cargar_obj(self, archivo):
with open(archivo, 'r') as obj:
datos = obj.read()
lineas = datos.splitlines()
self.vertices = []
self.superficies = []
for linea in lineas:
elem = linea.split()
if elem:
if elem[0] == 'v':
v = vertice(float(elem[1]), float(elem[2]), float(elem[3]))
self.vertices.append(v)
elif elem[0] == 'f':
vs = []
for i in range(1, len(elem)):
vs.append(self.vertices[int(elem[i].split('/'))])
f = cara(vs)
self.superficies.append(f)
else:
pass
</code></pre>
<p>The problem appears to be this line:</p>
<pre><code>vs.append(self.vertices[int(elem[i].split('/'))])
</code></pre>
<p>because when I try to run the code, the followed TypeError appears</p>
<pre><code>int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
<p>I'm a newbie in Python, so I don't seem to get around this error, can somebody give me some advice? Thanks.</p>
<p>I'm using Python 3.x, and ipython to run .py files</p>
| 0 |
2016-09-20T00:35:11Z
| 39,584,318 |
<p>Recreate your case with one line:</p>
<pre><code>In [435]: line='f 1//2 7//2 5//2'
In [436]: elem = line.split()
In [437]: elem
Out[437]: ['f', '1//2', '7//2', '5//2']
In [438]: elem[0]
Out[438]: 'f'
</code></pre>
<p>split on <code>/</code> behaves as I expect:</p>
<pre><code>In [439]: for i in range(1,len(elem)):
...: print(elem[i].split('/'))
...:
['1', '', '2']
['7', '', '2']
['5', '', '2']
</code></pre>
<p>Your code has problems applying <code>int</code> to that list of strings:</p>
<pre><code>In [440]: for i in range(1,len(elem)):
...: print(int(elem[i].split('/')))
...
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
<p>So we need to apply <code>int()</code> to the individual strings, not the list. But now the empty string is giving me problems.</p>
<pre><code>In [441]: for i in range(1,len(elem)):
...: print([int(e) for e in elem[i].split('/')])
...:
ValueError: invalid literal for int() with base 10: ''
</code></pre>
<p>The rest is left as an exercise for the reader.</p>
<p>Couldn't resist:</p>
<pre><code>[int(e) for e in elem[i].replace('//','/').split('/')]
[int(e) for e in elem[i].split('/') if e]
</code></pre>
| 1 |
2016-09-20T01:20:00Z
|
[
"python",
"python-3.x"
] |
Folder creation with timestamp in python
| 39,584,028 |
<p>Hi, I am a beginner in python and not well versed with file operations. I am writing a python script for logging. Below is my code snippet:</p>
<pre><code>infile = open('/home/nitish/profiles/Site_info','r')
lines = infile.readlines()
folder_output = '/home/nitish/profiles/output/%s'%datetime.now().strftime('%Y-%m-%d-%H:%M:%S')
folder = open(folder_output,"w")
for index in range(len(lines)):
URL = lines[index]
cmd = "curl -L " +URL
curl = subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE)
file_data = curl.stdout.read()
print file_data
filename = '/home/nitish/profiles/output/log-%s.html'%datetime.now().strftime('%Y-%m-%d-%H:%M:%S')
output = open(filename,"w")
output.write(file_data)
output.close()
folder.close()
infile.close()
</code></pre>
<p>I don't know if this is correct. I wish to create a new folder with a timestamp every time the script is run and place all the output from the for loop into the folder with the timestamp.</p>
<p>Thanks for your help in advance</p>
| 0 |
2016-09-20T00:36:43Z
| 39,584,131 |
<p>You have trailing newlines on all the urls so that would not work, you won't get past <code>folder = open(folder_output,"w")</code> as you are trying to create a file not a folder, there is also no need for a subprocess. You can do it all using standard lib functions:</p>
<pre><code>from os import mkdir
import urllib.request
from datetime import datetime
now = datetime.now
new_folder = '/home/nitish/profiles/output/{}'.format(now().strftime('%Y-%m-%d-%H:%M:%S'))
# actually make the folder
mkdir(new_folder)
# now open the urls file and strip the newlines
with open('/home/nitish/profiles/Site_info') as f:
for url in map(str.strip, f):
# open a new file for each request and write to new folder
with open("{}/log-{}.html".format(new_folder, now().strftime('%Y-%m-%d-%H:%M:%S')), "w") as out:
out.write(urllib.request.urlopen(url).read())
</code></pre>
<p>For Python 2, use <code>import urllib</code> and <code>urllib.urlopen</code>, or better yet use <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a>.</p>
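<p>With requests, the download loop could look like this (reusing <code>new_folder</code> and <code>now</code> from above, and assuming requests is installed):</p>
<pre><code>import requests

with open('/home/nitish/profiles/Site_info') as f:
    for url in map(str.strip, f):
        out_name = "{}/log-{}.html".format(new_folder, now().strftime('%Y-%m-%d-%H:%M:%S'))
        with open(out_name, "wb") as out:
            out.write(requests.get(url).content)   # .content is the raw response bytes
</code></pre>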
| 0 |
2016-09-20T00:51:27Z
|
[
"python",
"timestamp",
"folder"
] |
unsupported operand type(s) for ** or pow(): 'method' and 'int'
| 39,584,033 |
<pre><code>from tkinter import *
import math
import sys
def quit():
root.destroy()
def a_b_c():
print_a()
print_b()
print_c()
calculation()
return
def print_a():
get_a = a.get()
printing_a = Label(root, text=get_a).grid(row=8, column=1)
return
def print_b():
get_b = b.get()
printing_b =Label(root, text=get_b).grid(row=12, column=1)
return
def print_c():
get_c = c.get()
printing_c =Label(root, text=get_c).grid(row=16, column=1)
return
root = Tk()
a = StringVar()
b = StringVar()
c = StringVar()
root.title('Solving Quadratic Equations')
quit_b = Button(root, text="quit",command=quit).grid(row=1, column=1)
go_b = Button(root, text="go", command=a_b_c).grid(row=1, column=2)
welcome = Label(root, text="Welcome to Solving Quadratic Equations with GlaDOS",font=("Helvetica",13))
welcome.grid(row=2, column=1)
instructions = Label(root, text="So how do i use this program? is what you may ask yourself. So, for example, \n you have the equation 2x^2+5x+8=0. So the 2x^2 is a but you don't put the\n whole thing in you just but the 2 from the start in. The next thing is b\n in this case b = 5 and c is equal to 8. Once you have all you number in the boxes \n hit the go button. Remember you don't need the x's. ", font=("Helvetica",11))
instructions.grid(row=3, column=1)
line = Label(root, text="************************************************************************************************************").grid(row=4, column=1)
input_a = Label(root, text="Pls input A here", font=("Helvetica",11)).grid(row=6, column=1)
entry_a = Entry(root,textvariable=a).grid(row=7,column=1)
line = Label(root, text="************************************************************************************************************").grid(row=9, column=1)
input_b = Label(root, text="Pls input B here", font=("Helvetica",11)).grid(row=10, column=1)
entry_b = Entry(root,textvariable=b).grid(row=11,column=1)
line = Label(root, text="*************************************************************************************************************").grid(row=13, column=1)
input_c = Label(root, text="Pls input C here", font=("Helvetica",11)).grid(row=14, column=1)
entry_c = Entry(root,textvariable=c).grid(row=15,column=1)
two_a = a.get
two_b = b.get
two_c = c.get
d = two_b**2-4*two_a*two_c
def calculation():
if d < 0:
no_solution = Label(root, text="This equation has no real solution").grid(row=19, column=1)
elif d == 0:
x = (-two_b+math.sqrt(two_b**2-4*two_a*two_c))/2*two_a
one_solution = Label(root, text="This equation has one solutions: {}".format(x)).grid(row=20, column=1)
else:
x1 = (-two_b+math.sqrt((two_b**2)-(4*(two_a*two_c))))/(2*two_a)
x1 = (-two_b-math.sqrt((two_b**2)-(4*(two_a*two_c))))/(2*two_a)
two_solution= label(root, text="This equation has two solutions: {} or {} ".format(x1,x2)).grid(row=21, colum=1)
root.mainloop()
</code></pre>
<p>Why does it say unsupported operand type(s) for ** or pow(): 'method' and 'int'? Can someone help me change it so it works? It is for school and the teacher can't help me. I am trying to make a program that helps you solve quadratic equations, and the part at the bottom (def calculation) doesn't work.
Thanks for helping me :)</p>
| 0 |
2016-09-20T00:37:47Z
| 39,584,143 |
<p>You are assigning a few values:</p>
<pre><code>two_a = a.get
two_b = b.get
two_c = c.get
</code></pre>
<p>And then doing calculations:</p>
<pre><code>d = two_b**2-...
</code></pre>
<p>However, <code>a.get</code> is a method that retrieves the value of that <code>StringVar</code>. To actually call it and retrieve the value, you have to... well, call it, with parentheses:</p>
<pre><code>two_a = a.get()
</code></pre>
<p>Furthermore, you will then have strings. Cast them to integers or floating-point numbers with <code>int</code> or <code>float</code>:</p>
<pre><code>two_a = int(a.get())
# or:
two_a = float(a.get())
</code></pre>
<p>Then your arithmetic will work as expected.</p>
| 2 |
2016-09-20T00:52:31Z
|
[
"python",
"python-3.x",
"math"
] |
unsupported operand type(s) for ** or pow(): 'method' and 'int'
| 39,584,033 |
<pre><code>from tkinter import *
import math
import sys
def quit():
root.destroy()
def a_b_c():
print_a()
print_b()
print_c()
calculation()
return
def print_a():
get_a = a.get()
printing_a = Label(root, text=get_a).grid(row=8, column=1)
return
def print_b():
get_b = b.get()
printing_b =Label(root, text=get_b).grid(row=12, column=1)
return
def print_c():
get_c = c.get()
printing_c =Label(root, text=get_c).grid(row=16, column=1)
return
root = Tk()
a = StringVar()
b = StringVar()
c = StringVar()
root.title('Solving Quadratic Equations')
quit_b = Button(root, text="quit",command=quit).grid(row=1, column=1)
go_b = Button(root, text="go", command=a_b_c).grid(row=1, column=2)
welcome = Label(root, text="Welcome to Solving Quadratic Equations with GlaDOS",font=("Helvetica",13))
welcome.grid(row=2, column=1)
instructions = Label(root, text="So how do i use this program? is what you may ask yourself. So, for example, \n you have the equation 2x^2+5x+8=0. So the 2x^2 is a but you don't put the\n whole thing in you just but the 2 from the start in. The next thing is b\n in this case b = 5 and c is equal to 8. Once you have all you number in the boxes \n hit the go button. Remember you don't need the x's. ", font=("Helvetica",11))
instructions.grid(row=3, column=1)
line = Label(root, text="************************************************************************************************************").grid(row=4, column=1)
input_a = Label(root, text="Pls input A here", font=("Helvetica",11)).grid(row=6, column=1)
entry_a = Entry(root,textvariable=a).grid(row=7,column=1)
line = Label(root, text="************************************************************************************************************").grid(row=9, column=1)
input_b = Label(root, text="Pls input B here", font=("Helvetica",11)).grid(row=10, column=1)
entry_b = Entry(root,textvariable=b).grid(row=11,column=1)
line = Label(root, text="*************************************************************************************************************").grid(row=13, column=1)
input_c = Label(root, text="Pls input C here", font=("Helvetica",11)).grid(row=14, column=1)
entry_c = Entry(root,textvariable=c).grid(row=15,column=1)
two_a = a.get
two_b = b.get
two_c = c.get
d = two_b**2-4*two_a*two_c
def calculation():
if d < 0:
no_solution = Label(root, text="This equation has no real solution").grid(row=19, column=1)
elif d == 0:
x = (-two_b+math.sqrt(two_b**2-4*two_a*two_c))/2*two_a
one_solution = Label(root, text="This equation has one solutions: {}".format(x)).grid(row=20, column=1)
else:
x1 = (-two_b+math.sqrt((two_b**2)-(4*(two_a*two_c))))/(2*two_a)
x1 = (-two_b-math.sqrt((two_b**2)-(4*(two_a*two_c))))/(2*two_a)
two_solution= label(root, text="This equation has two solutions: {} or {} ".format(x1,x2)).grid(row=21, colum=1)
root.mainloop()
</code></pre>
<p>Why does it say unsupported operand type(s) for ** or pow(): 'method' and 'int'? Can someone help me change it so it works? It is for school and the teacher can't help me. I am trying to make a program that helps you solve quadratic equations, and the part at the bottom (def calculation) doesn't work.
Thanks for helping me :)</p>
| 0 |
2016-09-20T00:37:47Z
| 39,584,166 |
<p>Please read and study this SO help page about <a href="https://stackoverflow.com/help/mcve">mcves</a>.
Here is an mcve based on the code you posted. It produces the exact same error message on the last line.</p>
<pre><code>import tkinter as tk
root = tk.Tk()
b = tk.StringVar(root)
two_b = b.get
d = two_b**2
</code></pre>
<p>Raising a method to a power is nonsensical. You need to call () the method and convert the string to a number. Do the latter either by using DoubleVar or IntVar instead of StringVar or by passing the result of get to float() or int() before doing arithmetic on it.</p>
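<p>In code, that looks something like:</p>
<pre><code>two_b = float(b.get())   # call get() and convert the returned string to a number
d = two_b**2             # now the arithmetic works
</code></pre>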
| 0 |
2016-09-20T00:56:21Z
|
[
"python",
"python-3.x",
"math"
] |
How do I delete duplicate lines and create a new file without duplicates?
| 39,584,043 |
<p>I searched on here and found many postings, however none that I can implement into the following code.</p>
<pre><code>with open('TEST.txt') as f:
seen = set()
for line in f:
line_lower = line.lower()
if line_lower in seen and line_lower.strip():
print(line.strip())
else:
seen.add(line_lower)
</code></pre>
<p>I can find the duplicate lines inside my TEST.txt file which contains hundreds of URLs.</p>
<p>However I need to remove these duplicates and create a new text file with these removed and all other URLs intact.</p>
<p>I will be Checking this newly created file for 404 errors using r.status_code.</p>
<p>In a nutshell I basically need help getting rid of duplicates so I can check for dead links. thanks for your help.</p>
| 1 |
2016-09-20T00:38:32Z
| 39,584,745 |
<p>this is something you could use:</p>
<pre><code>import linecache
with open('pizza11.txt') as f:
for i, l in enumerate(f):
pass
x=i+1
k=0
i=2
j=1
initial=linecache.getline('pizza11.txt', 1)
clean= open ('clean.txt','a')
clean.write(initial)
while i<(x+1):
a=linecache.getline('pizza11.txt', i)
while j<i:
b=linecache.getline('pizza11.txt', j)
if a==b:
k=k+1
j=j+1
if k==0:
clean= open ('clean.txt','a')
clean.write(a)
k=0
j=1
i=i+1
</code></pre>
<p>With this you go through every line and compare it with the lines before it; if it does not match any previously written line, it gets appended to the output document.</p>
<p>pizza11 is the name of a text file on my computer that I use to try things like this out; you would just need to change that to whatever your starting file is. Your output file with no duplicates would be clean.txt.</p>
| 0 |
2016-09-20T02:17:48Z
|
[
"python",
"hyperlink",
"duplicates"
] |
How do I delete duplicate lines and create a new file without duplicates?
| 39,584,043 |
<p>I searched on here and found many postings, however none that I can implement into the following code.</p>
<pre><code>with open('TEST.txt') as f:
seen = set()
for line in f:
line_lower = line.lower()
if line_lower in seen and line_lower.strip():
print(line.strip())
else:
seen.add(line_lower)
</code></pre>
<p>I can find the duplicate lines inside my TEST.txt file which contains hundreds of URLs.</p>
<p>However I need to remove these duplicates and create a new text file with these removed and all other URLs intact.</p>
<p>I will be Checking this newly created file for 404 errors using r.status_code.</p>
<p>In a nutshell I basically need help getting rid of duplicates so I can check for dead links. thanks for your help.</p>
| 1 |
2016-09-20T00:38:32Z
| 39,598,759 |
<p>Sounds simple enough, but what you did looks overcomplicated. I think the following should be enough:</p>
<pre><code>with open('TEST.txt', 'r') as f:
unique_lines = set(f.readlines())
with open('TEST_no_dups.txt', 'w') as f:
f.writelines(unique_lines)
</code></pre>
<p>A couple things to note:</p>
<ul>
<li>If you are going to use a set, you might as well dump all the lines at creation, and <code>f.readlines()</code>, which returns the list of all the lines in your file, is perfect for that.</li>
<li><code>f.writelines()</code> will write a sequence of lines to your files, but using a set breaks the order of the lines. So if that matters to you, I suggest replacing the last line by <code>f.writelines(sorted(unique_lines, key=whatever you need))</code></li>
</ul>
| 1 |
2016-09-20T15:57:46Z
|
[
"python",
"hyperlink",
"duplicates"
] |
dask dataframe how to convert column to to_datetime
| 39,584,118 |
<p>I am trying to convert one column of my dataframe to datetime. Following the discussion here <a href="https://github.com/dask/dask/issues/863" rel="nofollow">https://github.com/dask/dask/issues/863</a> I tried the following code:</p>
<pre><code>import dask.dataframe as dd
df['time'].map_partitions(pd.to_datetime, columns='time').compute()
</code></pre>
<p>But I am getting the following error message</p>
<pre><code>ValueError: Metadata inference failed, please provide `meta` keyword
</code></pre>
<p>What exactly should I put under meta? should I put a dictionary of ALL the columns in df or only of the 'time' column? and what type should I put? I have tried dtype and datetime64 but none of them work so far.</p>
<p>Thank you and I appreciate your guidance,</p>
<p><strong>Update</strong></p>
<p>I will include here the new error messages:</p>
<p>1) Using Timestamp</p>
<pre><code>df['trd_exctn_dt'].map_partitions(pd.Timestamp).compute()
TypeError: Cannot convert input to Timestamp
</code></pre>
<p>2) Using datetime and meta</p>
<pre><code>meta = ('time', pd.Timestamp)
df['time'].map_partitions(pd.to_datetime,meta=meta).compute()
TypeError: to_datetime() got an unexpected keyword argument 'meta'
</code></pre>
<p>3) Just using date time: gets stuck at 2%</p>
<pre><code> In [14]: df['trd_exctn_dt'].map_partitions(pd.to_datetime).compute()
[ ] | 2% Completed | 2min 20.3s
</code></pre>
<p>Also, I would like to be able to specify the format in the date, as i would do in pandas:</p>
<pre><code>pd.to_datetime(df['time'], format = '%m%d%Y'
</code></pre>
<p><strong>Update 2</strong></p>
<p>After updating to Dask 0.11, I no longer have problems with the meta keyword. Still, I can't get it past 2% on a 2GB dataframe.</p>
<pre><code>df['trd_exctn_dt'].map_partitions(pd.to_datetime, meta=meta).compute()
[ ] | 2% Completed | 30min 45.7s
</code></pre>
<p><strong>Update 3</strong></p>
<p>worked better this way:</p>
<pre><code>def parse_dates(df):
return pd.to_datetime(df['time'], format = '%m/%d/%Y')
df.map_partitions(parse_dates, meta=meta)
</code></pre>
<p>I'm not sure whether it's the right approach or not</p>
| 1 |
2016-09-20T00:50:15Z
| 39,593,279 |
<h3>Use <code>astype</code></h3>
<p>You can use the <code>astype</code> method to convert the dtype of a series to a NumPy dtype</p>
<pre><code>df.time.astype('M8[us]')
</code></pre>
<p>There is probably a way to specify a Pandas style dtype as well (edits welcome)</p>
<h3>Use map_partitions and meta</h3>
<p>When using black-box methods like <code>map_partitions</code>, dask.dataframe needs to know the type and names of the output. There are a few ways to do this listed in the docstring for <code>map_partitions</code>.</p>
<p>You can supply an empty Pandas object with the right dtype and name</p>
<pre><code>meta = pd.Series([], name='time', dtype=pd.Timestamp)
</code></pre>
<p>Or you can provide a tuple of <code>(name, dtype)</code> for a Series or a dict for a DataFrame</p>
<pre><code>meta = ('time', pd.Timestamp)
</code></pre>
<p>Then everything should be fine</p>
<pre><code>df.time.map_partitions(pd.Timestamp, meta=meta)
</code></pre>
<p>If you were calling <code>map_partitions</code> on <code>df</code> instead then you would need to provide the dtypes for everything. That isn't the case in your example though.</p>
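<p>If you also need to pass a format string, as in your updates, wrapping the call in a small function keeps things simple (a sketch, assuming the column is named <code>time</code>):</p>
<pre><code>meta = ('time', 'datetime64[ns]')

def parse_dates(s):
    return pd.to_datetime(s, format='%m/%d/%Y')

df['time'].map_partitions(parse_dates, meta=meta)
</code></pre>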
| 1 |
2016-09-20T11:49:19Z
|
[
"python",
"pandas",
"dask"
] |
How to convert for loops to while loops in Python?
| 39,584,119 |
<p>I need to change all the for loops to while loops. How do I do it for the code below? I have commented on what each for block does.</p>
<pre><code>def createClusters(k, centroids, datadict, repeats):
for apass in range(repeats):
print("****PASS",apass,"****")
clusters = []
for i in range(k):
clusters.append([])
for akey in datadict: #Creating empty list of distances
distances = []
for clusterIndex in range(k): #Calculating the distances between data point and centroid and placing them into the list of distances.
dist = euclidD(datadict[akey],centroids[clusterIndex])
distances.append(dist)
mindist = min(distances) #centroids are recalculated
index = distances.index(mindist)
clusters[index].append(akey)
dimensions = len(datadict[1]) #Specifies the dimension of exam score which will be one.
for clusterIndex in range(k): #Sum include the sum for each dimension of data point
sums = [0]*dimensions #Sum initialized to zero
for akey in clusters[clusterIndex]:
datapoints = datadict[akey] #Each data point will have a data key in data dictionary
for ind in range(len(datapoints)): #Calculates sum of components continuously
sums[ind] = sums[ind] + datapoints[ind]
for ind in range(len(sums)): #Calculates the average
clusterLen = len(clusters[clusterIndex])
if clusterLen != 0:
sums[ind] = sums[ind]/clusterLen
centroids[clusterIndex] = sums #Assigning average to centroid list at proper positions
for c in clusters:
print ("CLUSTER") #Prints all the data of clusters after each pass
for key in c:
print(datadict[key], end=" ")
print()
return clusters
</code></pre>
| -5 |
2016-09-20T00:50:15Z
| 39,584,219 |
<p>Why on earth would you want to convert all the <code>for</code> loops to <code>while</code> loops?<br>
Just to show how ugly this would be, consider a canonical <code>for</code> loop:</p>
<pre><code>for i in iterable:
...
</code></pre>
<p>Would turn into:</p>
<pre><code>it = iter(iterable)
while True:
try:
i = next(it)
except StopIteration:
break
...
</code></pre>
<p>Ugly!!!</p>
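<p>That said, for the <code>range</code>-based loops in your code the translation is mechanical:</p>
<pre><code># for i in range(k):      becomes:
i = 0
while i < k:
    # loop body goes here
    i += 1
</code></pre>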
| 0 |
2016-09-20T01:03:39Z
|
[
"python"
] |
Removing extra characters from a string with a specific pattern PHP
| 39,584,160 |
<p>I am moving data from the output of a python function to php and then converting it to JSON and sending it to a javascript function that calls it with AJAX. </p>
<p>The format should look like this:</p>
<pre><code>{"data_name" : [1.02, 3.013, -24.12, 39], "data_name_2" : [-0.32151], "data_name_3" : [0.321, -21.42425, 225125.002]}
</code></pre>
<p>Right now, python is returning the data via <code>json.dumps</code> and the output is kind of strange once it reaches PHP. Since the data is already json from Python, <code>json_encode</code> does not work properly, but just echoing the json data produces this:</p>
<pre><code>"[{\"data_name\" : [[1.02], [3.013], [-24.12], [39]]}, {\"data_name_2\" : -0.32151}, {\"data_name_3\" : [[0.321], [-21.42425], [225125.002]]}, null]"
</code></pre>
<p>So how can I do the following things?</p>
<ol>
<li>Remove the first <code>"[</code> from the beginning of the string</li>
<li>Remove all the backslashes before the quotes (so <code>\"data_name\"</code> becomes <code>"data_name"</code>)</li>
<li>Remove <code>}, {</code> from each of the fields (except the first one)</li>
<li>For the 2D arrays, remove the inner <code>[]</code> symbols, so that <code>[[1.02], [2.02]]</code> becomes <code>[1.02, 2.02]</code></li>
<li>For the single numbers, add a <code>[]</code> around them, so <code>-0.32151</code> becomes <code>[-0.32151]</code></li>
<li>Finally, once I reach <code>}, null]"</code>, I want to remove <code>, null]"</code> and just keep the end <code>}</code>.</li>
</ol>
<p>I understand a little about <code>strpos</code> but I'm not sure how I can incorporate that with some form of regex, or if I need to manually loop through the string to take care of each of the steps. If anyone has an idea of how to start I would appreciate it.</p>
| 0 |
2016-09-20T00:55:49Z
| 39,584,830 |
<p>I agree with Sammitch. </p>
<p>The quick fix for 2. is <a href="http://php.net/manual/en/function.stripslashes.php" rel="nofollow">stripslashes</a></p>
<p>For the rest, you may be able to create patterns with <a href="http://php.net/manual/en/function.preg-replace.php" rel="nofollow">preg_replace</a> that can filter it correctly, but why not just change the python output?</p>
| 0 |
2016-09-20T02:31:06Z
|
[
"javascript",
"php",
"python",
"json"
] |
Why is my list not getting reset on every loop in Python
| 39,584,181 |
<p>I have a function that needs to take in a list called edges. I need to preform many loops and on each loop I change properties values of edges. At the beginning of the next loop I need edges to take on its original value. To try and deal with this I have cast my list as a tuple so it wouldn't be mutable and assigned it to permEdges. I then initialize trialEdges from permEdges so that I can make changes to trialEdges. The changes to trialEdges are staying around for the next loop and I don't know why. </p>
<p>If someone could explain why the output isn't consistent and how I can accomplish this goal of having the same list at the start of every loop I would be forever grateful :)</p>
<pre><code>def randomMinCut(edges):
permEdges = tuple(edges)
tries = 3
for i in xrange(tries):
trialEdges = list(permEdges)
print trialEdges
trialEdges[0][0] = 100
return
</code></pre>
<p>sample input and output:</p>
<pre><code>input:
[[1, 2], [1, 3], [1, 4], [1, 7]]
output:
[[1, 2], [1, 3], [1, 4], [1, 7]]
[[100, 2], [1, 3], [1, 4], [1, 7]]
[[100, 2], [1, 3], [1, 4], [1, 7]]
</code></pre>
<p>The first row is what I want repeated 3 times</p>
| 0 |
2016-09-20T00:58:49Z
| 39,584,226 |
<p><code>permEdges</code> may be a tuple, but its contents are not; remember that both <code>permEdges</code> and <code>trialEdges</code> are storing pointers to four two-element lists. When you set <code>trialEdges[0][0] = 100</code>, you're <em>modifying</em> but not <em>replacing</em> the first element of <code>trialEdges</code>, which is also the first element of <code>permEdges</code>.</p>
<p>There are several ways to achieve what you want, but <a href="https://docs.python.org/2/library/copy.html" rel="nofollow"><code>copy.deepcopy</code></a> is probably the best:</p>
<pre><code>import copy
def randomMinCut(edges):
tries = 3
for i in xrange(tries):
trialEdges = copy.deepcopy(edges)
print trialEdges
trialEdges[0][0] = 100
return
</code></pre>
| 4 |
2016-09-20T01:04:48Z
|
[
"python",
"list",
"tuples"
] |
pandas.Series.get fails with: object has no attribute 'values'
| 39,584,228 |
<p>I don't seem to be able to call Series.get on a Series object. </p>
<pre><code>>> print col
0 1
1 1
2 0
Name: a, dtype: float64
>>> counts = col.value_counts()
>>> print counts
1 2
0 1
dtype: int64
</code></pre>
<p>... makes sense. 2 ones. 1 zero</p>
<pre><code>>>> print type(counts)
<class 'pandas.core.series.Series'>
</code></pre>
<p>... OK. The result is a Series. How can I read out the elements? According to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.get.html#pandas.Series.get" rel="nofollow">Series.get</a>, and the docstring for counts.get, I should be able to:</p>
<pre><code>zeros = counts.get(0,0)
ones = counts.get(1,0)
</code></pre>
<p>... but this fails with:</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'values'
</code></pre>
<p>What I have misunderstood?</p>
<pre><code>>>> help(counts.get)
Help on method get in module pandas.core.series:
get(self, label, default=None) method of pandas.core.series.Series instance
Returns value occupying requested label, default to specified
missing value if not present. Analogous to dict.get
Parameters
----------
label : object
Label value looking for
default : object, optional
Value to return if label not in index
Returns
-------
y : scalar
</code></pre>
<p>In:</p>
<pre><code>>>> print counts
1 2
0 1
</code></pre>
<p>aren't 1 and 0 the labels?</p>
| 0 |
2016-09-20T01:04:57Z
| 39,584,660 |
<p>Seems to be a bug in pandas 0.15.</p>
<p>Upgrading to pandas 0.18 resolves this.</p>
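<p>If upgrading is not immediately possible, one workaround is to go through a plain dict first:</p>
<pre><code>counts_dict = counts.to_dict()
zeros = counts_dict.get(0, 0)
ones = counts_dict.get(1, 0)
</code></pre>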
| 0 |
2016-09-20T02:06:51Z
|
[
"python",
"pandas"
] |
Loaded pickle data is much larger in memory than on disk and seems to leak. (Python 2.7)
| 39,584,233 |
<p>I'm having a memory issue. I have a pickle file that I wrote with the Python 2.7 cPickle module. This file is 2.2GB on disk. It contains a dictionary of various nestings of dictionaries, lists, and numpy arrays. </p>
<p>When I load this file (again using cPickle on Python 2.7) the python process ends up using 5.13GB of memory. Then, if I delete the reference to the loaded data the data usage drops by 2.79GB. At the end of the program there is still another 2.38GB that has not been cleaned up.</p>
<p>Is there some cache or memoization table that cPickle keeps in the backend? Where is this extra data coming from? Is there a way to clear it?</p>
<p>There are no custom objects in the loaded cPickle, just dicts, lists, and numpy arrays. I can't wrap my head around why its behaving this way. </p>
<hr>
<h3>The offending example</h3>
<p>Here is a simple script I wrote to demonstrate the behavior: </p>
<pre><code>from six.moves import cPickle as pickle
import time
import gc
import utool as ut
print('Create a memory tracker object to snapshot memory usage in the program')
memtrack = ut.MemoryTracker()
print('Print out how large the file is on disk')
fpath = 'tmp.pkl'
print(ut.get_file_nBytes_str('tmp.pkl'))
print('Report memory usage before loading the data')
memtrack.report()
print(' Load the data')
with open(fpath, 'rb') as file_:
data = pickle.load(file_)
print(' Check how much data it used')
memtrack.report()
print(' Delete the reference and check again')
del data
memtrack.report()
print('Check to make sure the system doesnt want to clean itself up')
print(' This never does anything. I dont know why I bother')
time.sleep(1)
gc.collect()
memtrack.report()
time.sleep(10)
gc.collect()
for i in range(10000):
time.sleep(.001)
print(' Check one more time')
memtrack.report()
</code></pre>
<p>And here is its output</p>
<pre><code>Create a memory tracker object to snapshot memory usage in the program
[memtrack] +----
[memtrack] | new MemoryTracker(Memtrack Init)
[memtrack] | Available Memory = 12.41 GB
[memtrack] | Used Memory = 39.09 MB
[memtrack] L----
Print out how large the file is on disk
2.00 GB
Report memory usage before loading the data
[memtrack] +----
[memtrack] | diff(avail) = 0.00 KB
[memtrack] | [] diff(used) = 12.00 KB
[memtrack] | Available Memory = 12.41 GB
[memtrack] | Used Memory = 39.11 MB
[memtrack] L----
Load the data
Check how much data it used
[memtrack] +----
[memtrack] | diff(avail) = 5.09 GB
[memtrack] | [] diff(used) = 5.13 GB
[memtrack] | Available Memory = 7.33 GB
[memtrack] | Used Memory = 5.17 GB
[memtrack] L----
Delete the reference and check again
[memtrack] +----
[memtrack] | diff(avail) = -2.80 GB
[memtrack] | [] diff(used) = -2.79 GB
[memtrack] | Available Memory = 10.12 GB
[memtrack] | Used Memory = 2.38 GB
[memtrack] L----
Check to make sure the system doesnt want to clean itself up
This never does anything. I dont know why I bother
[memtrack] +----
[memtrack] | diff(avail) = 40.00 KB
[memtrack] | [] diff(used) = 0.00 KB
[memtrack] | Available Memory = 10.12 GB
[memtrack] | Used Memory = 2.38 GB
[memtrack] L----
Check one more time
[memtrack] +----
[memtrack] | diff(avail) = -672.00 KB
[memtrack] | [] diff(used) = 0.00 KB
[memtrack] | Available Memory = 10.12 GB
[memtrack] | Used Memory = 2.38 GB
[memtrack] L----
</code></pre>
<hr>
<h3>Sanity Check 1 (garbage collection)</h3>
<p>As a sanity check here is a script that allocates the same amount of data and then deletes it, the processes cleans itself up perfectly. </p>
<p>Here is the script:</p>
<pre><code>import numpy as np
import utool as ut
memtrack = ut.MemoryTracker()
data = np.empty(2200 * 2 ** 20, dtype=np.uint8) + 1
print(ut.byte_str2(data.nbytes))
memtrack.report()
del data
memtrack.report()
</code></pre>
<p>And here is the output</p>
<pre><code>[memtrack] +----
[memtrack] | new MemoryTracker(Memtrack Init)
[memtrack] | Available Memory = 12.34 GB
[memtrack] | Used Memory = 39.08 MB
[memtrack] L----
2.15 GB
[memtrack] +----
[memtrack] | diff(avail) = 2.15 GB
[memtrack] | [] diff(used) = 2.15 GB
[memtrack] | Available Memory = 10.19 GB
[memtrack] | Used Memory = 2.19 GB
[memtrack] L----
[memtrack] +----
[memtrack] | diff(avail) = -2.15 GB
[memtrack] | [] diff(used) = -2.15 GB
[memtrack] | Available Memory = 12.34 GB
[memtrack] | Used Memory = 39.10 MB
[memtrack] L----
</code></pre>
<hr>
<h3>Sanity Check 2 (ensuring types)</h3>
<p>Just to do a sanity check that there are no custom types in this list this these are the set of types that occur in this structure.
data itself is a dict with the following keys:
['maws_lists', 'int_rvec', 'wx_lists', 'aid_to_idx', 'agg_flags', 'agg_rvecs', 'gamma_list', 'wx_to_idf', 'aids', 'fxs_lists', 'wx_to_aids'].
The following script is specific to the particular nesting of this structure, but it exhaustively shows the types used in this container:</p>
<pre><code>print(data.keys())
type_set = set()
type_set.add(type(data['int_rvec']))
type_set.add(type(data['wx_to_aids']))
type_set.add(type(data['wx_to_idf']))
type_set.add(type(data['gamma_list']))
type_set.update(set([n2.dtype for n1 in data['agg_flags'] for n2 in n1]))
type_set.update(set([n2.dtype for n1 in data['agg_rvecs'] for n2 in n1]))
type_set.update(set([n2.dtype for n1 in data['fxs_lists'] for n2 in n1]))
type_set.update(set([n2.dtype for n1 in data['maws_lists'] for n2 in n1]))
type_set.update(set([n1.dtype for n1 in data['wx_lists']]))
type_set.update(set([type(n1) for n1 in data['aids']]))
type_set.update(set([type(n1) for n1 in data['aid_to_idx'].keys()]))
type_set.update(set([type(n1) for n1 in data['aid_to_idx'].values()]))
</code></pre>
<p>The output of type set is</p>
<pre><code>{bool,
dtype('bool'),
dtype('uint16'),
dtype('int8'),
dtype('int32'),
dtype('float32'),
NoneType,
int}
</code></pre>
<p>which shows that all sequences end up resolving to None, a standard python type or a standard numpy type. You'll have to trust me that the iterable types are all lists and dicts. </p>
<hr>
<p>In short my question is: </p>
<ul>
<li>Why does loading a 2GB pickle file end up with 5GB of memory used in RAM? </li>
<li>Why does only 2.5GB/5GB get cleaned up when the recently loaded data is garbage collected? </li>
<li>Is there anything that can be done to reclaim this lost memory?</li>
</ul>
| 1 |
2016-09-20T01:05:34Z
| 39,584,495 |
<p>One possible culprit here is that Python, by design, overallocates data structures like lists and dictionaries to make appending to them faster, because memory allocations are slow. For example, on a 32-bit Python, an empty list has a <code>sys.getsizeof()</code> of 36 bytes. Add one element and it becomes 52 bytes. It remains 52 bytes until it has five elements, at which point it becomes 68 bytes. So, clearly, when you appended the first element, Python allocated enough memory for four, and then it allocated enough memory for four more when you added the fifth element (LEELOO DALLAS). As the list grows, the amount of padding added grows larger and larger: essentially Python grows the allocation by a proportional chunk each time you fill it up. </p>
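<p>A quick way to watch the overallocation happen (the exact byte counts depend on the Python version and whether it is a 32- or 64-bit build) is something like:</p>
<pre><code>import sys

x = []
print(sys.getsizeof(x))      # size of the empty list
for i in range(6):
    x.append(i)
    print(sys.getsizeof(x))  # only jumps when the preallocated slots run out
</code></pre>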
<p>So I expect there is something like that going on, since the pickle protocol does not appear to store the length of pickled objects, at least for the Python data types, so it is essentially reading one list or dictionary item at a time and appending it, and Python is growing the object as items are added just as described above. Depending on how the size of the objects shake out when you unpickle your data, you might have a lot of extra space left over in your lists and dictionaries. (Not sure how <code>numpy</code> objects are stored, however; they might be more compact.) </p>
<p>Potentially there are also some temporary objects being allocated as well, which would help explain how the memory usage got that large.</p>
<p>Now, when you make a copy of a list or dictionary, Python knows exactly how many items it has and can allocate exactly the right amount of memory for the copy. If a hypothetical 5-element list <code>x</code> is allocated 68 bytes because it is expected to grow to 8 elements, the copy <code>x[:]</code> is allocated 56 bytes because that's exactly the right amount. So you could give that a shot with one of your more sizable objects after loading, and see if it helps noticeably.</p>
<p>But it might not. Python doesn't necessarily release memory back to the OS when objects are destroyed. Instead, it may hold on to the memory in case it needs to allocate more objects of the same kind (which is pretty likely), because reusing memory you already have is less costly than releasing that memory only to re-allocate it later. So although Python might not have given the memory back to the OS, that doesn't mean there's a leak. It's available for use by the rest of your script, the OS just can't see it. There isn't a way to force Python to give it back in this case.</p>
<p>I don't know what <code>utool</code> is (I found a Python package by that name but it doesn't seem to have a <code>MemoryTracker</code> class) but depending on what it's measuring, it might be showing the OS's take on it, not Python's. In this case, what you're seeing is essentially your script's <em>peak</em> memory use, since Python is holding onto that memory in case you need it for something else. If you never use it, it will eventually be swapped out by the OS and the physical RAM will be given to some other process that needs it.</p>
<p>Bottom line, the amount of memory your script is using is not a problem to be solved in itself, and in general is not something you will need to concern yourself with. (That's why you're using Python in the first place!) Does your script work, and does it run quickly enough? Then you're fine. Python and NumPy are both mature and widely used software; the likelihood of finding a true, previously-undetected memory leak of this size in something as frequently used as the <code>pickle</code> library is pretty slim. </p>
<p>If available, it would be interesting to compare your script's memory usage with the amount of memory used by the script that <em>writes</em> the data.</p>
| 3 |
2016-09-20T01:44:56Z
|
[
"python",
"python-2.7",
"numpy",
"pickle"
] |
PCA in sklearn vs numpy is different.
| 39,584,263 |
<p>Am I misunderstanding something? This is my code.</p>
<p><strong>using sklearn</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
pca = decomposition.PCA(n_components=3)
x = np.array([
[0.387,4878, 5.42],
[0.723,12104,5.25],
[1,12756,5.52],
[1.524,6787,3.94],
])
pca.fit_transform(x)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>array([[ -4.25324997e+03, -8.41288672e-01, -8.37858943e-03],
[ 2.97275001e+03, -1.25977271e-01, 1.82476780e-01],
[ 3.62475003e+03, -1.56843494e-01, -1.65224286e-01],
[ -2.34425007e+03, 1.12410944e+00, -8.87390454e-03]])
</code></pre>
<p><strong>Using numpy methods</strong> </p>
<pre><code>x_std = StandardScaler().fit_transform(x)
cov = np.cov(X.T)
ev , eig = np.linalg.eig(cov)
a = eig.dot(x_std.T)
</code></pre>
<p><strong>Output</strong> </p>
<pre><code>array([[ 1.38252552, -1.25240764, 0.2133338 ],
[-0.53279935, -0.44541231, -0.77988021],
[-0.45230635, 0.21983192, -1.23796328],
[-0.39741982, 1.47798804, 1.80450969]])
</code></pre>
<p>I have kept all 3 components but it doesn't seem to allow me to retain my original data.</p>
<p>May I know why this is so?</p>
| 0 |
2016-09-20T01:09:34Z
| 39,586,059 |
<p>Don't use <code>StandardScaler</code>. Instead, just subtract the mean of each column from <code>x</code>:</p>
<pre><code>In [92]: xm = x - x.mean(axis=0)
In [93]: cov = np.cov(xm.T)
In [94]: evals, evecs = np.linalg.eig(cov)
In [95]: xm.dot(evecs)
Out[95]:
array([[ -4.2532e+03, -8.3786e-03, -8.4129e-01],
[ 2.9728e+03, 1.8248e-01, -1.2598e-01],
[ 3.6248e+03, -1.6522e-01, -1.5684e-01],
[ -2.3443e+03, -8.8739e-03, 1.1241e+00]])
</code></pre>
<p>That last result contains the same information as the <code>sklearn</code> result, but the order of the columns is different.</p>
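<p>If you want the columns in decreasing order of explained variance (the order <code>sklearn</code> uses), sort the eigenvectors by their eigenvalues first; individual columns may still differ by a sign flip, which is normal for PCA:</p>
<pre><code>order = np.argsort(evals)[::-1]   # largest eigenvalue first
xm.dot(evecs[:, order])
</code></pre>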
| 2 |
2016-09-20T05:05:38Z
|
[
"python",
"numpy",
"scikit-learn"
] |
Python Crash Course 8-15, Not Defined Error
| 39,584,329 |
<p>I'm reading the book 'Python Crash Course' by Eric Matthes and I seem to be having issues with exercise 8-15.</p>
<p>8-15 says: "Put the functions for the example print_models.py in a separate file called printing_functions.py. Write an import statement at the top of print_models.py, and modify the file to use the imported functions.</p>
<p><strong>Here is my code for the printing_functions.py module:</strong></p>
<pre><code>def print_models(unprinted_designs, completed_models):
"""Simulate printing each design, until none are left.
Move each design to completed_models after printing."""
while unprinted_designs:
current_design = unprinted_designs.pop()
# Simulate creating a 3D print from the design.
print("Printing model: " + current_design)
completed_models.append(current_design)
def show_completed_models(completed_models):
"""Show all the models that were printed."""
print("\nThe following models have been printed:")
for completed_model in completed_models:
print(completed_model)
unprinted_designs = ['iphone case', 'robot pendant', 'dodecahedron']
completed_models = []
print_models(unprinted_designs, completed_models)
show_completed_models(completed_models)
</code></pre>
<p><strong>Here is my code for exercise 8-15</strong></p>
<pre><code>import printing_functions as pf
pf.print_models(unprinted_designs, completed_models)
pf.show_completed_models(completed_models)
unprinted_designs = ['iphone case', 'robot pendant', 'dodecahedron']
completed_models = []
print_models(unprinted_designs, completed_models)
show_completed_models(completed_models)
</code></pre>
<p>When I run this code, I'm able to receive the same output as in the module. However I also receive and error at the bottom stating that 'unprinted_designs' is not defined. But I clearly have this variable defined near the bottom of my code as a list so I do not understand why I am getting this error. </p>
<p>Does anyone have any ideas as to what I may be doing wrong? Any feedback would be greatly appreciated. Thank you for your time.</p>
| 0 |
2016-09-20T01:22:04Z
| 39,584,403 |
<p>You've mostly done the correct thing, but variables need to be assigned before you can use them. </p>
<p>Also, since you've separated / moved the functions to a module, you have to use those imported ones while the variables remain in this module </p>
<pre><code>import printing_functions as pf
unprinted_designs = ['iphone case', 'robot pendant', 'dodecahedron']
completed_models = []
pf.print_models(unprinted_designs, completed_models)
pf.show_completed_models(completed_models)
</code></pre>
<p>Then, remove the unnecessary lines at the end of the other module. Only the function definitions should be there </p>
| 1 |
2016-09-20T01:32:17Z
|
[
"python"
] |
Loop through all pixels of 2 images and replace the black pixels with white
| 39,584,343 |
<p>I have 2 co-located images, both created in a similar way and both have the size of 7,221 x 119 pixels.</p>
<p>I want to loop through all the pixels of both images. If the pixels are black on the first image and also black on the second image then turn it into white, otherwise, no change.</p>
<p>How can I do it with python?</p>
| 0 |
2016-09-20T01:24:41Z
| 39,584,624 |
<p>I suggest the use of the Pillow library (<a href="https://python-pillow.org/" rel="nofollow">https://python-pillow.org/</a>) which is a fork of the PIL library.</p>
<p>Here's something from the Pillow docs: <a href="http://pillow.readthedocs.io/en/3.1.x/reference/PixelAccess.html" rel="nofollow">http://pillow.readthedocs.io/en/3.1.x/reference/PixelAccess.html</a></p>
<p>And a couple of Stackoverflow questions that may help you:</p>
<p><a href="http://stackoverflow.com/questions/3596433/is-it-possible-to-change-the-color-of-one-individual-pixel-in-python">Is it possible to change the color of one individual pixel in Python?</a></p>
<p><a href="http://stackoverflow.com/questions/13167269/changing-pixel-color-python">Changing pixel color Python</a></p>
<p>I guess you'd just have to open both images, loop through each pixel of each image, compare the pixels, then replace if necessary.</p>
| 0 |
2016-09-20T02:03:06Z
|
[
"python",
"image"
] |
Loop through all pixels of 2 images and replace the black pixels with white
| 39,584,343 |
<p>I have 2 co-located images, both created in a similar way and both have the size of 7,221 x 119 pixels.</p>
<p>I want to loop through all the pixels of both images. If the pixels are black on the first image and also black on the second image then turn it into white, otherwise, no change.</p>
<p>How can I do it with python?</p>
| 0 |
2016-09-20T01:24:41Z
| 39,584,922 |
<p>This should hopefully be pretty close to what you're looking for.</p>
<pre><code>from PIL import Image
from PIL import ImageFilter
im = Image.open('a.png')
imb = Image.open('b.png')
pix = im.load()
pixb = imb.load()                     # pixel access for the second image
width, height = im.size
for w in xrange(width):
    for h in xrange(height):
        r,g,b,a = pix[(w,h)]          # pixel from the first image
        rb, gb, bb, ab = pixb[(w,h)]  # matching pixel from the second image
        if not (r+g+b+rb+gb+bb): #all RGB values 0, i.e. black in both images
            pix[w,h] = (255,255,255,255)
im.save('test','BMP')
</code></pre>
| 0 |
2016-09-20T02:44:15Z
|
[
"python",
"image"
] |
Crawling Links from JSON file
| 39,584,415 |
<p>So, I am new to the world of web crawlers and I'm having a little difficulty crawling a simple JSON file and retrieving links from it. I am using scrapy framework to try and accomplish this. </p>
<p>My JSON example file: </p>
<pre><code>{
"pages": [
{
"address":"http://foo.bar.com/p1",
"links": ["http://foo.bar.com/p2",
"http://foo.bar.com/p3", "http://foo.bar.com/p4"]
},
{
"address":"http://foo.bar.com/p2",
"links": ["http://foo.bar.com/p2",
"http://foo.bar.com/p4"]
},
{
"address":"http://foo.bar.com/p4",
"links": ["http://foo.bar.com/p5",
"http://foo.bar.com/p1", "http://foo.bar.com/p6"]
},
{
"address":"http://foo.bar.com/p5",
"links": []
},
{
"address":"http://foo.bar.com/p6",
"links": ["http://foo.bar.com/p7",
"http://foo.bar.com/p4", "http://foo.bar.com/p5"]
}
]
}
</code></pre>
<p>My items.py file</p>
<pre><code>import scrapy
from scrapy.item import Item, Field
class FoobarItem(Item):
# define the fields for your item here like:
title = Field()
link = Field()
</code></pre>
<p>My spider file</p>
<pre><code>from scrapy.spider import Spider
from scrapy.selector import Selector
from foobar.items import FoobarItem
class MySpider(Spider):
name = "foo"
allowed_domains = ["localhost"]
start_urls = ["http://localhost/testdata.json"]
def parse(self, response):
yield response.url
</code></pre>
<p>Eventually I would like to crawl the file and return the links in the object without duplicates but right now I am even struggling to crawl the json. I thought the code above would crawl through the json object and return the links but my output file is empty. Not sure what I'm doing wrong but any help would be appreciated</p>
| 0 |
2016-09-20T01:33:36Z
| 39,584,790 |
<p>The first thing you need is a way to parse the JSON file; the <code>json</code> lib should do nicely. The next bit is to run your crawler over the URLs you pull out of it.</p>
<pre><code>import json

with open("myExample.json", 'r') as infile:
    contents = json.load(infile)

# contents is now a dictionary; its "pages" key holds the list of page objects.
# Iterate through each page dict and fetch the pieces you need.
links_list = []
for page in contents["pages"]:
    links_list.append(page["address"])
    for link in page["links"]:
        links_list.append(link)

# get rid of dupes
links_list = list(set(links_list))
# do rest of your crawling with list of links
</code></pre>
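<p>If you'd rather keep everything inside the Scrapy callback instead of pre-parsing the file, a minimal sketch (assuming the JSON really is served at the start URL, as in your spider) could look like this:</p>
<pre><code>import json
from scrapy.spider import Spider

class MySpider(Spider):
    name = "foo"
    allowed_domains = ["localhost"]
    start_urls = ["http://localhost/testdata.json"]

    def parse(self, response):
        data = json.loads(response.body)
        seen = set()
        for page in data["pages"]:
            for url in [page["address"]] + page["links"]:
                if url not in seen:          # skip duplicates
                    seen.add(url)
                    yield {"link": url}      # or populate your FoobarItem here
</code></pre>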
| -1 |
2016-09-20T02:25:06Z
|
[
"python",
"scrapy",
"web-crawler"
] |
python script to check if module is present else install module
| 39,584,442 |
<p>I have this simple Python script. I want to include some sort of condition that checks for the Python module (in my example below <code>subprocess</code>) before running it. If the module is not present, install the module then run the script. If the module is present, skip the installation of the module and run the script. I am struggling with most of the similar scenarios I am seeing online.</p>
<pre><code>import subprocess
ls_output = subprocess.check_output(['ls'])
print ls_output
</code></pre>
| 1 |
2016-09-20T01:37:38Z
| 39,584,932 |
<p>Here is how you can check if pycurl is installed:</p>
<pre><code># if you want to now install pycurl and run it, it is much trickier
# technically, you might want to check if pip is installed as well
import sys
import pip
def install(package):
pip.main(['install', package])
try:
import pycurl
except ImportError:
print 'pycurl is not installed, installing it now!'
install('pycurl')
# not recommended because http://stackoverflow.com/questions/7271082/how-to-reload-a-modules-function-in-python
import pycurl
. . . .
# better option:
print 'just installed pycurl, please rerun this script at your convenience'
sys.exit(1)
</code></pre>
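<p>Applied to your original snippet: <code>subprocess</code> is part of the standard library (since Python 2.4), so it never needs installing, but the same try/except pattern works for any third-party module:</p>
<pre><code>try:
    import subprocess    # standard library, so this import should always succeed
except ImportError:
    raise SystemExit("subprocess is missing from this Python installation")

ls_output = subprocess.check_output(['ls'])
print ls_output
</code></pre>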
| 1 |
2016-09-20T02:45:16Z
|
[
"python",
"python-2.7"
] |
Pandas Percent Change on Non-Descending Cells
| 39,584,544 |
<p>I'm new to Pandas and Stack Overflow, so please bear with me. I'm trying to calculate the percent change on two times (e.g., for a race, not time of day). So suppose I have five athletes. I've formatted the .csv to give me something like the following:</p>
<pre><code>In [3]: df
Out [3]:
Athlete Time Seconds
1 Gavin 0:17:00 1020
2 Noah 0:17:45 1065
3 Chris 0:18:46 1126
4 David 0:21:40 1300
5 Travis 0:23:11 1391
</code></pre>
<p>I used a function to convert the times to seconds to make the next step easier, but if I don't need to do this please let me know. What I'm wondering is how to calculate the percent difference from some specified person who might not be first (i.e., the change won't be descending from the fastest time). I'd like to be able to enter a name and have it calculate from that. So if I picked 'Chris', the output would be the following:</p>
<pre><code> Athlete Time Seconds Percent_Diff
1 Gavin 0:17:00 1020 -9.4
2 Noah 0:17:45 1065 -5.4
3 Chris 0:18:46 1126 0
4 David 0:21:40 1300 15.5
5 Travis 0:23:11 1391 23.5
</code></pre>
<p>I've found this way to choose a row by name:</p>
<pre><code>(df1.loc[df1['Athlete'] == 'Chris']['Seconds'])
</code></pre>
<p>This produces the row for Chris. Is there a way for me to use pct_change() for this, regardless of which name I choose? How do I do this? Thanks! </p>
| 1 |
2016-09-20T01:50:57Z
| 39,585,018 |
<pre><code>ref = df1.loc[df1['Athlete'] == 'Chris', 'Seconds'].iloc[0]
df1['Percent_Diff'] = (df1['Seconds'] / ref - 1) * 100
</code></pre>
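<p>The first line picks out Chris's <code>Seconds</code> value with a boolean mask (the athlete names are not the index), and the second expresses every time relative to it; the <code>* 100</code> is only there to match the percentage scale shown in the question.</p>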
| 1 |
2016-09-20T02:58:44Z
|
[
"python",
"pandas"
] |
Plotting an array of vectors in Python (pyplot)
| 39,584,547 |
<p>I'm trying to plot a large array of vectors using pyplot. Right now I've got</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import operator
t = np.arange(0, np.pi, .01)
def pos1(t):
x1 = .72 * np.cos(13*t)
y1 = .72 * np.sin(13*t)
return x1, y1
def pos2(t):
x2 = np.cos(8*t)
y2 = np.sin(8*t)
return x2, y2
def vec(t):
x1 = pos1(t)[0]
x2 = pos2(t)[0]
y1 = pos1(t)[1]
y2 = pos2(t)[1]
x = np.subtract(x1, x2)
y = np.subtract(y1, y2)
return x, y
X = pos2(t)[0]
Y = pos2(t)[1]
U = vec(t)[0]
V = vec(t)[1]
plot1 = plt.figure()
plt.quiver(X, Y, U, V, headlength=4)
plt.show(plot1)
</code></pre>
<p>Where <code>pos1(t)</code>, <code>pos2(t)</code> and <code>vec(t)</code> are functions that return a tuple of the form <code>([a,...,z],[a1,...,z1])</code>.</p>
<p>This plot gives me something close to what I want, but the vector lengths are all wrong. the two functions, <code>pos1(t),pos2(t)</code> return a tuple of the point on a particular curve, and the <code>vec(t)</code> function is their difference, leading to a vector from a point on the first curve to a point on the second. My plot has the correct direction, but not magnitude. </p>
<p><a href="http://i.stack.imgur.com/4sod4.png" rel="nofollow"><img src="http://i.stack.imgur.com/4sod4.png" alt="enter image description here"></a></p>
| 1 |
2016-09-20T01:51:41Z
| 39,612,879 |
<p><code>quiver</code> autoscales the length of the arrows by default, so it seems <code>quiver</code> is not what you need here.</p>
<p><strong>Using regular <code>plot</code>:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0, 2 * np.pi, 0.01)
x0 = np.sin(8 * t)
y0 = np.cos(8 * t)
x1 = 0.72 * np.sin(13 * t)
y1 = 0.72 * np.cos(13 * t)
data = np.column_stack((x0, x1, y0, y1)).reshape(-1, 2)
plt.plot(*data, color='black')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
</code></pre>
<p><strong>Result:</strong></p>
<p><a href="http://i.imgur.com/iFEpcU5.png" rel="nofollow"><img src="http://i.imgur.com/iFEpcU5.png" alt="Result"></a>
<strong>Original:</strong></p>
<p><a href="http://i.stack.imgur.com/vB5Pm.gif" rel="nofollow"><img src="http://i.stack.imgur.com/vB5Pm.gif" alt="Original"></a></p>
| 1 |
2016-09-21T09:40:17Z
|
[
"python",
"arrays",
"numpy"
] |
Process command using python
| 39,584,565 |
<p>I want to pick up the command and its arguments in Python.</p>
<p>I can use </p>
<pre><code>process=os.popen('ps -elf').read().split("\n")
</code></pre>
<p>and then use regular expression to extract the command but its ugly.</p>
<p>psutils returns a process name but not the actual commands and arguments</p>
<p>Is there a simple way to do this?</p>
| 0 |
2016-09-20T01:54:21Z
| 39,585,228 |
<p><code>psutil</code> can get the command line arguments:</p>
<pre><code>import psutil
for p in psutil.process_iter():
cmd_line = p.cmdline()
if cmd_line:
print(cmd_line)
</code></pre>
<p>EDIT: updated to fix the issue found by @Keir</p>
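<p>Processes can also exit (or deny access) while you iterate, so it may be worth guarding each call, for example:</p>
<pre><code>import psutil

for p in psutil.process_iter():
    try:
        cmd_line = p.cmdline()
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue    # the process vanished or is off limits; skip it
    if cmd_line:
        print(cmd_line)
</code></pre>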
| 0 |
2016-09-20T03:23:53Z
|
[
"python",
"process",
"command"
] |
Process command using python
| 39,584,565 |
<p>I want to pick up the command and its arguments in Python.</p>
<p>I can use </p>
<pre><code>process=os.popen('ps -elf').read().split("\n")
</code></pre>
<p>and then use regular expression to extract the command but its ugly.</p>
<p>psutils returns a process name but not the actual commands and arguments</p>
<p>Is there a simple way to do this?</p>
| 0 |
2016-09-20T01:54:21Z
| 39,606,006 |
<p>The last suggestion is almost correct; with older psutil releases <code>cmdline</code> is an attribute rather than a method, so there it should be:</p>
<pre><code>for p in psutil.process_iter():
cline = p.cmdline
if cline: print(cline)
</code></pre>
| 0 |
2016-09-21T01:05:50Z
|
[
"python",
"process",
"command"
] |
How to get request object in scrapy pipeline
| 39,584,817 |
<p>I know that when the pipelines are called, it means the request have been stopped, generally we should do some validation,persist job based on the extracted item, it seems there is no sense to get request in the pipeline.</p>
<p>However I found it may useful in certain situation,in my application I use two pipelines: <code>FilesPipeline</code> and <code>MysqlStorePipeline</code>.</p>
<p>When an item is extracted, the <code>FilesPipeline</code> will tried to send request to get the image of the item, and save them to the db after completed.</p>
<p>However I use a download middleware <code>RandomProxy</code> at the sametime, which will get a proxy record randomly from the database, and set it to the request meta. But the proxy is not granted can be used all the time.</p>
<p>So the following may happen:</p>
<p>When retrieve the item, a proxy <code>http://proxy1</code> is used, but it can not be used, thanks to the retry middleware, scrapy will try again, and another proxy <code>http://proxy2</code> is fetched from db, if it can be used, an item is generated, then <code>FilesPipeline</code> will tried to download the image for the item by sending an image request which will be filled with a proxy say it is <code>http://proxy3</code>, once the proxy3 can not be used, scrapy will retry too. But there are chances of getting bad proxies during all the retry. Then the item will be dropped because of no bound image fetched which MUST can not be empty.</p>
<p>Furthermore, the image request does not contain a referer which may be blocked by the server sometime.</p>
<p>So I wonder if the origin request used to extract an item can be accessed through the pipeline.</p>
<p>Is this possible or other suggestion?</p>
| 0 |
2016-09-20T02:29:43Z
| 39,585,287 |
<p>Here are two approaches:</p>
<ol>
<li><p>Add a dummy field to the item to store whatever you want in the spider code, and later retrieve the value (and pop out the field) in the item pipeline (see the sketch after this list).</p></li>
<li><p>Instead of using an item pipeline, use a <a href="http://doc.scrapy.org/en/latest/topics/spider-middleware.html" rel="nofollow">spider middleware</a>. In its <code>process_spider_output</code> method you could access both the response and the spider output.</p></li>
</ol>
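<p>A minimal sketch of the first approach. The item class, field name and pipeline below are illustrative, not part of your existing code:</p>
<pre><code>import scrapy

class MyItem(scrapy.Item):
    name = scrapy.Field()
    origin_request = scrapy.Field()    # dummy field used only to carry the request

class MySpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def parse(self, response):
        item = MyItem(name='something')
        item['origin_request'] = response.request    # stash the originating request
        yield item

class RequestAwarePipeline(object):
    def process_item(self, item, spider):
        request = item.pop('origin_request')    # retrieve the request and drop the field
        spider.logger.info('item came from %s with meta %r', request.url, request.meta)
        return item
</code></pre>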
| 2 |
2016-09-20T03:32:15Z
|
[
"python",
"scrapy"
] |
Loop through a finite language described by regex in python
| 39,584,908 |
<p>The given input is a regex that describe a finite language. Is there a simple way to enumerate the language in python (or in other programming language)?</p>
<p>The following is what I expect:</p>
<p>Psuedocode:</p>
<pre><code>for x in r'[a-c]':
print(x)
</code></pre>
<p>Output:</p>
<pre><code>a
b
c
</code></pre>
| 2 |
2016-09-20T02:42:25Z
| 39,630,865 |
<p>There's no way to do this with the built-in <code>re</code> module.</p>
<p>Instead, what you need to do is build your own regular expression parser and use that to generate your language.</p>
<p>Just to see if I could do it, I made a <em>basic</em> regular expression parser and generator. The code is 410 lines long including some minimal documentation, so probably too big to fit here, so it's in a <a href="https://gist.github.com/jearls/a8235e13c21f2c2a283dd4080f3cee17" rel="nofollow">gist</a>.</p>
<p>Sample output:</p>
<pre><code>$ python regen.py '[a-c]'
'a'
'b'
'c'
</code></pre>
<p>Sequence of items:</p>
<pre><code>$ python regen.py '[a-c][1-5]'
'a1'
'a2'
'a3'
'a4'
'a5'
'b1'
'b2'
'b3'
'b4'
'b5'
'c1'
'c2'
'c3'
'c4'
'c5'
</code></pre>
<p>Alternate items:</p>
<pre><code>$ python regen.py '[a-c]|[1-5]'
'a'
'b'
'c'
'1'
'2'
'3'
'4'
'5'
</code></pre>
<p>Infinite operators are capped at 5 repetitions...</p>
<pre><code>$ python regen.py 'a*'
''
'a'
'aa'
'aaa'
'aaaa'
'aaaaa'
</code></pre>
<p>but finite operators are not:</p>
<pre><code>$ python regen.py 'a{3,6}'
'aaa'
'aaaa'
'aaaaa'
'aaaaaa'
</code></pre>
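<p>If the regexes you need are limited to plain character classes like the <code>[a-c]</code> in the question, you don't need a full parser; a small hand-rolled enumerator (a sketch, separate from the gist above) is enough:</p>
<pre><code>def enumerate_char_class(pattern):
    """Enumerate a regex made of a single character class, e.g. '[a-c]' or '[a-c1-5]'."""
    assert pattern.startswith('[') and pattern.endswith(']')
    body, chars, i = pattern[1:-1], [], 0
    while i < len(body):
        if i + 2 < len(body) and body[i + 1] == '-':
            # expand a range such as a-c into its individual characters
            chars.extend(chr(c) for c in range(ord(body[i]), ord(body[i + 2]) + 1))
            i += 3
        else:
            chars.append(body[i])
            i += 1
    return chars

for x in enumerate_char_class('[a-c]'):
    print(x)
</code></pre>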
| 0 |
2016-09-22T05:25:28Z
|
[
"python",
"regex"
] |
Python Pandas Data sampling/aggregation
| 39,584,916 |
<p>I have a huge comma separated datetime, <code>unique_id</code> dataset which looks as below.</p>
<pre><code>datetime, unique_id
2016-09-01 19:50:01, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:50:02, ddd20611d47597435412739db48b0cb04599e340
2016-09-01 19:50:10, 5b8776d7dc0b83f9bd9ad70a403a5f605e37d4d4
2016-09-01 19:50:14, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:50:20, 902c4428e08f4324a70a5a4bbfabb657c4a9ffc3
2016-09-01 19:50:23, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:51:10, a2e6521c66e7207398ffe3d4e5bab449f75e616d
2016-09-01 19:51:11, a2e6521c66e7207398ffe3d4e5bab449f75e616d
2016-09-01 19:51:20, f7cfa02eeb3feed2a0f616185312925e4190c66b
2016-09-01 19:51:30, 0bb21868b55b832f1315438ccdb9c508cf37b8b4
2016-09-01 19:51:40, cb3cfe7bc2fa40d20db23ddc209d2062e10c2ce3
2016-09-01 19:51:50, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:51:55, 099ba09cd602f9d9bb20f5ebc195686dc133b464
2016-09-01 19:52:00, c300e6a54013ee56facab294e326aa523cd4c60a
2016-09-01 19:53:01, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:53:04, 902c4428e08f4324a70a5a4bbfabb657c4a9ffc3
2016-09-01 19:53:10, 5b8776d7dc0b83f9bd9ad70a403a5f605e37d4d4
2016-09-01 19:53:11, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:53:17, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:53:20, 0fe1560c790c78b960b66e7d7336dd76d2ea12cf
2016-09-01 19:53:40, ddd20611d47597435412739db48b0cb04599e340
</code></pre>
<p>Using Python Pandas, I would like to get count of <code>unique ids</code> per <code>minute</code>.
For ex.</p>
<pre><code>datetime, count(unique_id)
2016-09-01 19:50:00, 5
2016-09-01 19:51:00, 6
2016-09-01 19:52:00, 1
2016-09-01 19:53:00, 6
</code></pre>
<p>I tried using <code>pandas.DataFrame.resample</code> but looks like that is not the way to approach this problem. </p>
<pre><code>resampled_data = raw_df.set_index(pd.DatetimeIndex(raw_df["datetime"])).resample("1T")
</code></pre>
<p>Any pointers on how to solve this problem would be greatly appreciated.</p>
| 1 |
2016-09-20T02:43:18Z
| 39,584,979 |
<p>You can set the datetime as the index and use <code>pandas.TimeGrouper</code> as the grouping key; it groups the data frame at the specified time frequency, after which you can count the number of unique ids:</p>
<pre><code>import pandas as pd
df.set_index(pd.to_datetime(df.datetime)).groupby(pd.TimeGrouper(freq = "min"))['unique_id'].nunique()
# datetime
#2016-09-01 19:50:00 5
#2016-09-01 19:51:00 6
#2016-09-01 19:52:00 1
#2016-09-01 19:53:00 6
#Freq: T, Name: unique_id, dtype: int64
</code></pre>
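<p>On newer pandas versions, where <code>TimeGrouper</code> has been deprecated, <code>pd.Grouper</code> is the drop-in replacement:</p>
<pre><code>df.set_index(pd.to_datetime(df.datetime)).groupby(pd.Grouper(freq="min"))['unique_id'].nunique()
</code></pre>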
| 2 |
2016-09-20T02:53:55Z
|
[
"python",
"python-2.7",
"pandas",
"group",
"aggregate"
] |
Python Pandas Data sampling/aggregation
| 39,584,916 |
<p>I have a huge comma separated datetime, <code>unique_id</code> dataset which looks as below.</p>
<pre><code>datetime, unique_id
2016-09-01 19:50:01, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:50:02, ddd20611d47597435412739db48b0cb04599e340
2016-09-01 19:50:10, 5b8776d7dc0b83f9bd9ad70a403a5f605e37d4d4
2016-09-01 19:50:14, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:50:20, 902c4428e08f4324a70a5a4bbfabb657c4a9ffc3
2016-09-01 19:50:23, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:51:10, a2e6521c66e7207398ffe3d4e5bab449f75e616d
2016-09-01 19:51:11, a2e6521c66e7207398ffe3d4e5bab449f75e616d
2016-09-01 19:51:20, f7cfa02eeb3feed2a0f616185312925e4190c66b
2016-09-01 19:51:30, 0bb21868b55b832f1315438ccdb9c508cf37b8b4
2016-09-01 19:51:40, cb3cfe7bc2fa40d20db23ddc209d2062e10c2ce3
2016-09-01 19:51:50, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:51:55, 099ba09cd602f9d9bb20f5ebc195686dc133b464
2016-09-01 19:52:00, c300e6a54013ee56facab294e326aa523cd4c60a
2016-09-01 19:53:01, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:53:04, 902c4428e08f4324a70a5a4bbfabb657c4a9ffc3
2016-09-01 19:53:10, 5b8776d7dc0b83f9bd9ad70a403a5f605e37d4d4
2016-09-01 19:53:11, 2b8a2d7179fe08f8c87d125ad5bc41b5eb79d06f
2016-09-01 19:53:17, bca8ca1c91d283212faaade44c6185956265cc09
2016-09-01 19:53:20, 0fe1560c790c78b960b66e7d7336dd76d2ea12cf
2016-09-01 19:53:40, ddd20611d47597435412739db48b0cb04599e340
</code></pre>
<p>Using Python Pandas, I would like to get count of <code>unique ids</code> per <code>minute</code>.
For ex.</p>
<pre><code>datetime, count(unique_id)
2016-09-01 19:50:00, 5
2016-09-01 19:51:00, 6
2016-09-01 19:52:00, 1
2016-09-01 19:53:00, 6
</code></pre>
<p>I tried using <code>pandas.DataFrame.resample</code> but looks like that is not the way to approach this problem. </p>
<pre><code>resampled_data = raw_df.set_index(pd.DatetimeIndex(raw_df["datetime"])).resample("1T")
</code></pre>
<p>Any pointers on how to solve this problem would be greatly appreciated.</p>
| 1 |
2016-09-20T02:43:18Z
| 39,586,252 |
<p>I think you need to select the <code>Series</code> with <code>['unique_id']</code> and add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.nunique.html" rel="nofollow"><code>Resampler.nunique</code></a>:</p>
<pre><code>resampled_data = (raw_df.set_index(pd.DatetimeIndex(raw_df["datetime"]))
                        .resample("1T")['unique_id']
                        .nunique())
print (resampled_data)
2016-09-01 19:50:00 5
2016-09-01 19:51:00 6
2016-09-01 19:52:00 1
2016-09-01 19:53:00 6
Freq: T, Name: unique_id, dtype: int64
</code></pre>
| 2 |
2016-09-20T05:23:33Z
|
[
"python",
"python-2.7",
"pandas",
"group",
"aggregate"
] |
How to make a function that replaces the exponent feature?
| 39,584,924 |
<p>Before hand yes this is a homework question. Well there are two I'd like to ask for help with. The first one is a function that needs to take two numbers (base, exp) and then multiply base by itself by the amount of times that exponent represents. eg. base = 2, exp = 3. it would be 2*2*2. So far this is what I have:</p>
<pre><code>def iterPower(base, exp):
if base and exp <= 0:
return("The answer is 0")
elif base and exp == 0:
return("1")
else:
for i in range(exp):
ans = base * base
return ans
print("iterPower(0, 0): should be 1 == " \
+ str(iterPower(0, 0)))
print("iterPower(5, 3): should be 125 == " \
+ str(iterPower(5,3)))
print("iterPower(-2.0, 3): should be -8.0 == " \
+ str(iterPower(-2.0, 3)))
</code></pre>
<p>However when I run the above function. The first test gives me none, the second test gives me just 25, and the last test gives me 4.0. I ran the code itself through a code visualizer to see what's happening(or not happening) and I see that the range is not being utilized. I also notice that as I have it (base * base) would not give a correct answer either. I tried googling and going through the already answered questions here and I come up with nothing. </p>
<p>edit: I know there's a built in operator but the point of the assignment is to not use the operator. The paper practically has giant bold letters saying it's use is banned -,-</p>
| 0 |
2016-09-20T02:44:21Z
| 39,585,063 |
<p>You probably just want:</p>
<pre><code>ans = 1
for i in range(exp):
ans *= base
return ans
</code></pre>
<p>That goes in place of your final <code>else</code> branch.<br>
You also don't need the middle condition.</p>
| 1 |
2016-09-20T03:03:54Z
|
[
"python",
"python-3.x"
] |
How to make a function that replaces the exponent feature?
| 39,584,924 |
<p>Before hand yes this is a homework question. Well there are two I'd like to ask for help with. The first one is a function that needs to take two numbers (base, exp) and then multiply base by itself by the amount of times that exponent represents. eg. base = 2, exp = 3. it would be 2*2*2. So far this is what I have:</p>
<pre><code>def iterPower(base, exp):
if base and exp <= 0:
return("The answer is 0")
elif base and exp == 0:
return("1")
else:
for i in range(exp):
ans = base * base
return ans
print("iterPower(0, 0): should be 1 == " \
+ str(iterPower(0, 0)))
print("iterPower(5, 3): should be 125 == " \
+ str(iterPower(5,3)))
print("iterPower(-2.0, 3): should be -8.0 == " \
+ str(iterPower(-2.0, 3)))
</code></pre>
<p>However when I run the above function. The first test gives me none, the second test gives me just 25, and the last test gives me 4.0. I ran the code itself through a code visualizer to see what's happening(or not happening) and I see that the range is not being utilized. I also notice that as I have it (base * base) would not give a correct answer either. I tried googling and going through the already answered questions here and I come up with nothing. </p>
<p>edit: I know there's a built in operator but the point of the assignment is to not use the operator. The paper practically has giant bold letters saying it's use is banned -,-</p>
| 0 |
2016-09-20T02:44:21Z
| 39,585,085 |
<p>It should work.</p>
<pre><code>def iterPower(base, exp):
    if exp < 0:    # checking exp (not base) keeps iterPower(-2.0, 3) == -8.0 working
        return 0
    ans = 1
    for i in range(exp):
        ans *= base
    return ans
</code></pre>
<p>First, check how the conditionals work in Python: <code>base and exp <= 0</code> is different from <code>base <= 0 and exp <= 0</code>.</p>
<p>Second, blocks in Python are defined according to their indentation, so these two loops do different things:</p>
<pre><code>for i in range(exp):
    ans *= base
    return ans    # return inside the loop: the function exits after the first pass

for i in range(exp):
    ans *= base
return ans        # return after the loop: every multiplication runs first
</code></pre>
<p>Third, <code>base * base</code> is constant if <code>base</code> doesn't change, with that in mind compare <code>ans = base * base</code> to <code>ans = ans * base</code>.</p>
| 0 |
2016-09-20T03:06:05Z
|
[
"python",
"python-3.x"
] |
How to make a function that replaces the exponent feature?
| 39,584,924 |
<p>Before hand yes this is a homework question. Well there are two I'd like to ask for help with. The first one is a function that needs to take two numbers (base, exp) and then multiply base by itself by the amount of times that exponent represents. eg. base = 2, exp = 3. it would be 2*2*2. So far this is what I have:</p>
<pre><code>def iterPower(base, exp):
if base and exp <= 0:
return("The answer is 0")
elif base and exp == 0:
return("1")
else:
for i in range(exp):
ans = base * base
return ans
print("iterPower(0, 0): should be 1 == " \
+ str(iterPower(0, 0)))
print("iterPower(5, 3): should be 125 == " \
+ str(iterPower(5,3)))
print("iterPower(-2.0, 3): should be -8.0 == " \
+ str(iterPower(-2.0, 3)))
</code></pre>
<p>However when I run the above function. The first test gives me none, the second test gives me just 25, and the last test gives me 4.0. I ran the code itself through a code visualizer to see what's happening(or not happening) and I see that the range is not being utilized. I also notice that as I have it (base * base) would not give a correct answer either. I tried googling and going through the already answered questions here and I come up with nothing. </p>
<p>edit: I know there's a built in operator but the point of the assignment is to not use the operator. The paper practically has giant bold letters saying it's use is banned -,-</p>
| 0 |
2016-09-20T02:44:21Z
| 39,585,105 |
<p>You can also use a while loop to shorten your code:</p>
<pre><code>def iterPower(base, exp):
answer = 1
while exp > 0:
answer *= base
exp -= 1
return answer
</code></pre>
| 0 |
2016-09-20T03:08:03Z
|
[
"python",
"python-3.x"
] |
How to make a function that replaces the exponent feature?
| 39,584,924 |
<p>Before hand yes this is a homework question. Well there are two I'd like to ask for help with. The first one is a function that needs to take two numbers (base, exp) and then multiply base by itself by the amount of times that exponent represents. eg. base = 2, exp = 3. it would be 2*2*2. So far this is what I have:</p>
<pre><code>def iterPower(base, exp):
if base and exp <= 0:
return("The answer is 0")
elif base and exp == 0:
return("1")
else:
for i in range(exp):
ans = base * base
return ans
print("iterPower(0, 0): should be 1 == " \
+ str(iterPower(0, 0)))
print("iterPower(5, 3): should be 125 == " \
+ str(iterPower(5,3)))
print("iterPower(-2.0, 3): should be -8.0 == " \
+ str(iterPower(-2.0, 3)))
</code></pre>
<p>However when I run the above function. The first test gives me none, the second test gives me just 25, and the last test gives me 4.0. I ran the code itself through a code visualizer to see what's happening(or not happening) and I see that the range is not being utilized. I also notice that as I have it (base * base) would not give a correct answer either. I tried googling and going through the already answered questions here and I come up with nothing. </p>
<p>edit: I know there's a built in operator but the point of the assignment is to not use the operator. The paper practically has giant bold letters saying it's use is banned -,-</p>
| 0 |
2016-09-20T02:44:21Z
| 39,585,112 |
<p>There are a couple of issues with your code.</p>
<ol>
<li><p>The statement <code>if base and exp <= 0</code> basically translates to <code>if (base) and (exp <= 0)</code>. Here the conditions are evaluated separately. You may want to use <code>if base <= 0 and exp <= 0:</code> (or something similar).</p></li>
<li><p>The statement <code>ans = base * base</code> basically squares <code>base</code> and assigns the result into <code>ans</code>. Wouldn't <code>ans = ans * base</code> be better (ensure you initialize <code>ans</code> to 1 before you do that)?</p></li>
<li><p>The return statement (<code>return ans</code>) is inside the for loop, which means that the loop body will only be executed once. You may want to put it outside the loop.</p></li>
</ol>
<p>Hope this helps.</p>
| 0 |
2016-09-20T03:08:50Z
|
[
"python",
"python-3.x"
] |
How to make a function that replaces the exponent feature?
| 39,584,924 |
<p>Before hand yes this is a homework question. Well there are two I'd like to ask for help with. The first one is a function that needs to take two numbers (base, exp) and then multiply base by itself by the amount of times that exponent represents. eg. base = 2, exp = 3. it would be 2*2*2. So far this is what I have:</p>
<pre><code>def iterPower(base, exp):
if base and exp <= 0:
return("The answer is 0")
elif base and exp == 0:
return("1")
else:
for i in range(exp):
ans = base * base
return ans
print("iterPower(0, 0): should be 1 == " \
+ str(iterPower(0, 0)))
print("iterPower(5, 3): should be 125 == " \
+ str(iterPower(5,3)))
print("iterPower(-2.0, 3): should be -8.0 == " \
+ str(iterPower(-2.0, 3)))
</code></pre>
<p>However when I run the above function. The first test gives me none, the second test gives me just 25, and the last test gives me 4.0. I ran the code itself through a code visualizer to see what's happening(or not happening) and I see that the range is not being utilized. I also notice that as I have it (base * base) would not give a correct answer either. I tried googling and going through the already answered questions here and I come up with nothing. </p>
<p>edit: I know there's a built in operator but the point of the assignment is to not use the operator. The paper practically has giant bold letters saying it's use is banned -,-</p>
| 0 |
2016-09-20T02:44:21Z
| 39,585,114 |
<p>I think the code you want is:</p>
<pre><code>def iterPower( base, exp ):
    if exp < 0:
        return 0
    elif exp == 0:
        return 1
    elif exp == 1:
        return base
    else:
        ans = base
        for i in range( 1, exp ):
            ans *= base
        return ans
</code></pre>
<p>The modifications are:</p>
<ul>
<li>The function always returns a number rather than a string.</li>
<li>Negative exponents return 0.</li>
<li>An exponent of 0 returns 1.</li>
<li>An exponent of 1 returns <code>base</code>.</li>
<li>Exponents of 2 and greater multiply <code>base</code> by itself <code>exp</code> times.</li>
</ul>
| 0 |
2016-09-20T03:09:10Z
|
[
"python",
"python-3.x"
] |
How to install virtualenv on Centos6?
| 39,584,981 |
<p>I want to apply my Flask project on my workplace Centos6. So I followed guide from google to install pip, virtualenv, and flask, but I cannot successfully install either pip or virtualenv.</p>
<p>What I have done is this:</p>
<p>1) <a href="http://sharadchhetri.com/2014/05/30/install-pip-centos-rhel-ubuntu-debian/" rel="nofollow">http://sharadchhetri.com/2014/05/30/install-pip-centos-rhel-ubuntu-debian/</a></p>
<pre><code>#rpm -ivh httplink://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
#yum install -y python-pip
(version 7.1.0-1.el6)
#pip install virtualenv
</code></pre>
<p>this gives</p>
<blockquote>
<p>urllib3 will issue (InsecurePlatformWarning)</p>
</blockquote>
<p>2) <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4</a></p>
<pre><code>#curl httplink://raw.githubusercontent.com/pypa/pip/master/contrib/get-pip.py | python2.7 -
</code></pre>
<p>this gives</p>
<blockquote>
<p>curl: (77) Problem with the SSL CA cert (path? access rights?)</p>
</blockquote>
<p>3) <a href="http://www.ylabs.co.kr/index.php?document_srl=31854&mid=board_dev_python&order_type=asc&sort_index=title" rel="nofollow">http://www.ylabs.co.kr/index.php?document_srl=31854&mid=board_dev_python&order_type=asc&sort_index=title</a></p>
<p>with python2.7</p>
<pre><code>#cd /tmp
#wget --no-check-certificate httplink://pypi.python.org/packages/source/s/setuptools/setuptools-1.4.2.tar.gz
#tar -xvf setuptools-1.4.2.tar.gz
#cd setuptools-1.4.2
#python2.7 setup.py install
</code></pre>
<p>This gives</p>
<pre><code>Searching for pip
Reading httplink://pypi.python.org/simple/pip/
Download error on httplink://pypi.python.org/simple/pip/: [Errno 0] _ssl.c:343: error:00000000:lib(0):func(0):reason(0) -- Some packages may not be found!
Couldn't find index page for 'pip' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading httplink://pypi.python.org/simple/
Download error on httplink://pypi.python.org/simple/: [Errno 0] _ssl.c:343: error:00000000:lib(0):func(0):reason(0) -- Some packages may not be found!
No local packages or download links found for pip
error: Could not find suitable distribution for Requirement.parse('pip')
</code></pre>
<p>4) <a href="http://novafactory.net/archives/3074" rel="nofollow">http://novafactory.net/archives/3074</a></p>
<pre><code># wget httplink://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
# python3.4 ez_setup.py
# easy_install-3.4 pip
# pip3.4 install virtualenv
</code></pre>
<p>This gives</p>
<pre><code>Downloading/unpacking virtualenv
Cannot fetch index base URL httplink://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement virtualenv
Cleaning up...
No distributions at all found for virtualenv
Storing debug log for failure in /root/.pip/pip.log
</code></pre>
<p>My system is CentOS release 6.8 (Final), python 2.6/2.7/3.4</p>
<pre><code>pip3 -V :
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
pip2 -V :
pip 7.1.0 from /usr/lib/python2.6/site-packages (python 2.6)
</code></pre>
| -3 |
2016-09-20T02:54:10Z
| 39,605,794 |
<p>Thank you for the advice.</p>
<p>What I ended up doing was not using pip at all; instead I downloaded the Python module packages directly.</p>
<p>I went to the link <a href="http://pypi.python.org/simple/" rel="nofollow">http://pypi.python.org/simple/</a>
(which I got from the error message in my third attempt above) and downloaded the [package] I was looking for, then ran:</p>
<pre><code>tar -xvf [package]
cd [package]
python setup.py install
</code></pre>
<p>That way I could install Flask and virtualenv without using pip.</p>
| 0 |
2016-09-21T00:39:14Z
|
[
"python",
"flask",
"pip",
"virtualenv",
"centos6"
] |
Run time call within pprint
| 39,585,003 |
<p>How to pass value inside pprint at run time?</p>
<pre><code>import nltk, sys
from pprint import pprint
from nltk.corpus import framenet as fn
#Word = raw_input("enter a word: ")
pprint(fn.frames(r'(?i)Medical_specialties'))
f = fn.frame(256)
f.ID
f.name
f.definition
print f
print '\b'
pprint(sorted([x for x in f.FE]))
pprint(f.frameRelations)
print
</code></pre>
<p>At run time, I need to get a word from the user and pass it to fn.frames function in place of Medical_specialties, which in turn throws a list of frames as frame ID relevant to the word. Then I can call those numbers to query further.</p>
<p>Output:</p>
<pre><code>[<frame ID=256 name=Medical_specialties>]
</code></pre>
| 0 |
2016-09-20T02:56:45Z
| 39,594,091 |
<p>Although you happen to be working with the <code>nltk</code>, your question has nothing to do with it (or with <code>pprint</code>, for that matter): You need to input a string from the user, then stick <code>"(?i)"</code> in front to construct your desired regexp. </p>
<p>Since that's all you need to do, the simplest solution is this:</p>
<pre><code>word = raw_input("enter a word: ")
rexp = "(?i)" + word
pprint(fn.frames(rexp))
</code></pre>
<p>A more powerful way to put together strings is with python's <a href="https://docs.python.org/2/library/stdtypes.html#string-formatting" rel="nofollow">C-style string formatting</a>, or the newer <a href="https://docs.python.org/2/library/string.html#format-string-syntax" rel="nofollow"><code>format()</code></a> syntax. E.g., to specify word boundaries before and after the input "word", you'd do it like this (C-style syntax):</p>
<pre><code>rexp = r"(?i)\b%s\b" % word
</code></pre>
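<p>If the user might type characters that are special in regular expressions, it may also be worth escaping the input first (an optional refinement):</p>
<pre><code>import re
rexp = r"(?i)\b%s\b" % re.escape(word)
</code></pre>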
<p>You'll probably find the above links hard to digest, so try <a href="https://pyformat.info/" rel="nofollow">this exposition</a> or <a href="http://stackoverflow.com/questions/5082452/python-string-formatting-vs-format">this high-voted SO question.</a></p>
| 1 |
2016-09-20T12:27:53Z
|
[
"python",
"python-2.7",
"function",
"pprint"
] |
Quickest way to find smallest Hamming distance in a list of fixed-length hexes
| 39,585,069 |
<p>I'm using <a href="https://github.com/JohannesBuchner/imagehash" rel="nofollow">Imagehash</a> in Python to generate 48-digit hexadecimal hashes of around 30,000 images, which I'm storing in a list of dictionaries (the phashes as well as some other image properties). For example:</p>
<pre><code>[{"name":"name1", "phash":"a12a5e81127d890a7c91897edc752b506657233f56c594b7e6575e24e457d465"},
{"name":"name2", "phash":"a1aa7e011367812a7c9181be9975a9e86657239f3ec09697e6565a24e50bf477"}
...
{"name":"name30000", "phash":"a1aa7e05136f810afc9181ba9951a9686617239f3ec4d497e6765a04e52bfc77"}]
</code></pre>
<p>I then have video input from a Raspberry Pi which is phashed, and that hash is compared to this database (given the nature of the Pi camera, the test hash from the video stream will NOT ever match the hashes in the database). Right now I'm doing a dumb loop, which takes about 5 seconds to loop through and check the Hamming distance of each of the ~30,000 pre-calculated hashes, which is way too slow. The Imagehash library I'm using means that the Hamming distance can be calculated simply by doing <code>dbHash1 - testHash</code>. Apparently sorting and doing <code>bisect</code> is not the way to approach this, since sorting is irrelevant to Hamming distances. So, I assume there must be a faster way to get this done? I've read <a href="http://stackoverflow.com/questions/6389841/efficiently-find-binary-strings-with-low-hamming-distance-in-large-set">this question</a> regarding metric spaces, but I wanted to check if there's a (relatively) simple Python implementation that someone knows of.</p>
| 0 |
2016-09-20T03:04:36Z
| 39,585,339 |
<p><a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.pdist.html" rel="nofollow">Scipy's pairwise distance function </a> supports Hamming distances. I'd try that.</p>
| -1 |
2016-09-20T03:39:29Z
|
[
"python",
"hamming-distance"
] |
Quickest way to find smallest Hamming distance in a list of fixed-length hexes
| 39,585,069 |
<p>I'm using <a href="https://github.com/JohannesBuchner/imagehash" rel="nofollow">Imagehash</a> in Python to generate 48-digit hexadecimal hashes of around 30,000 images, which I'm storing in a list of dictionaries (the phashes as well as some other image properties). For example:</p>
<pre><code>[{"name":"name1", "phash":"a12a5e81127d890a7c91897edc752b506657233f56c594b7e6575e24e457d465"},
{"name":"name2", "phash":"a1aa7e011367812a7c9181be9975a9e86657239f3ec09697e6565a24e50bf477"}
...
{"name":"name30000", "phash":"a1aa7e05136f810afc9181ba9951a9686617239f3ec4d497e6765a04e52bfc77"}]
</code></pre>
<p>I then have video input from a Raspberry Pi which is phashed, and that hash is compared to this database (given the nature of the Pi camera, the test hash from the video stream will NOT ever match the hashes in the database). Right now I'm doing a dumb loop, which takes about 5 seconds to loop through and check the Hamming distance of each of the ~30,000 pre-calculated hashes, which is way too slow. The Imagehash library I'm using means that the Hamming distance can be calculated simply by doing <code>dbHash1 - testHash</code>. Apparently sorting and doing <code>bisect</code> is not the way to approach this, since sorting is irrelevant to Hamming distances. So, I assume there must be a faster way to get this done? I've read <a href="http://stackoverflow.com/questions/6389841/efficiently-find-binary-strings-with-low-hamming-distance-in-large-set">this question</a> regarding metric spaces, but I wanted to check if there's a (relatively) simple Python implementation that someone knows of.</p>
| 0 |
2016-09-20T03:04:36Z
| 39,671,900 |
<p>I got an answer from the guy behind ImageHash, <a href="https://github.com/JohannesBuchner" rel="nofollow">Johannes Buchner</a>.</p>
<p>I can store the DB as a 2d matrix:</p>
<pre><code>arr = []
for dbHash in db:
arr.append(dbHash.hash.flatten())
arr = numpy.array(arr)
</code></pre>
<p>Then I can do the comparison against all at the same time:</p>
<pre><code>binarydiff = arr != testhash.hash.reshape((1,-1))
hammingdiff = binarydiff.sum(axis=1)
closestdbHash_i = numpy.argmin(hammingdiff)
closestdbHash = db[closestdbHash_i]
</code></pre>
| 0 |
2016-09-24T02:25:03Z
|
[
"python",
"hamming-distance"
] |
Number refuses to divide
| 39,585,070 |
<p>I have made a simple function called "Approx" which multiplies two numbers together then divides them by two. When I use the function by itself it works great but it seems in the hunk of code I have it doesn't divide the number in half and I have no idea why. This is my code where is the error and how can I fix it?</p>
<pre><code>import math
def Approx(low,high):
base = low * high
return base/2
root = float(input("What to approx the sqrt of : "))
vague = float(input("How off can it be? : "))
wrong = True
oroot = root
root = math.floor(float(root))
trunk = root + 1
step = 0
while wrong:
if Approx(root,trunk) > oroot - vague and Approx(root,trunk) < oroot:
print("Done. " + str(step) + " steps taken.")
else:
if Approx(root,trunk) > oroot:
temproot = root
root = Approx(root,trunk)
trunk = temproot
step += 1
print("Step " + str(step) + " finished. Approx is " + str(Approx(root,trunk)))
else:
temptrunk = trunk
trunk = Approx(root,trunk)
root = trunk
step += 1
print("Step " + str(step) + " finished. Approx is " + str(Approx(root,trunk)))
if step > 50:
print("More than fifty steps required.")
wrong = False
</code></pre>
| 0 |
2016-09-20T03:04:43Z
| 39,605,552 |
<p>Your function works the way you describe it; however, I don't understand how you use it in the rest of the code.</p>
<p>It seems like you are trying to approximate square roots using a variant of Newton's method, but it's hard to understand how you implement it. Some variables in your code are assigned but never used (what is <code>temptrunk</code> for?), and it's hard to tell whether that's intended or a mistake.</p>
<p>If it is indeed the newton method you'd like to implement, you'll want to have an approximation function that converges to the target value. In order to do that, you compute the arithmetic mean of a guess and your target value divided by this guess (<code>new_guess = mean([old_guess, target/old_guess])</code>). Once you have that, you just need to compare the difference between <code>new_guess</code> and <code>target</code>, and once it reaches a given threshold (in your code, <code>vague</code>), you can break of the loop.</p>
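<p>For reference, a minimal sketch of that iteration (the names are illustrative rather than taken from your code):</p>
<pre><code>def approx_sqrt(target, vague, max_steps=50):
    guess = max(target, 1.0)                      # any positive starting guess will do
    for step in range(max_steps):
        guess = (guess + target / guess) / 2.0    # mean of guess and target/guess
        if abs(guess * guess - target) < vague:
            print("Done. " + str(step + 1) + " steps taken.")
            return guess
    print("More than " + str(max_steps) + " steps required.")
    return guess

print(approx_sqrt(10.0, 0.001))
</code></pre>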
<p>There are multiple ways to improve other aspects of your code:</p>
<ul>
<li>I'd advise against using a sentinel value for breaking out of the loop, <code>break</code> statements are more explicit.</li>
<li><p>You can directly make a loop have maximum number of steps, with :</p>
<pre><code>for step in range(MAX_STEPS):
guess = ... # try to guess
if abs(target - guess) < delta:
break
else:
print("Maximum steps reached.")
</code></pre>
<p>The <code>else</code> block will only be called if <code>break</code> is <strong>not</strong> reached.</p></li>
</ul>
| 1 |
2016-09-21T00:05:31Z
|
[
"python",
"python-3.x"
] |
Number refuses to divide
| 39,585,070 |
<p>I have made a simple function called "Approx" which multiplies two numbers together then divides them by two. When I use the function by itself it works great but it seems in the hunk of code I have it doesn't divide the number in half and I have no idea why. This is my code where is the error and how can I fix it?</p>
<pre><code>import math
def Approx(low,high):
base = low * high
return base/2
root = float(input("What to approx the sqrt of : "))
vague = float(input("How off can it be? : "))
wrong = True
oroot = root
root = math.floor(float(root))
trunk = root + 1
step = 0
while wrong:
if Approx(root,trunk) > oroot - vague and Approx(root,trunk) < oroot:
print("Done. " + str(step) + " steps taken.")
else:
if Approx(root,trunk) > oroot:
temproot = root
root = Approx(root,trunk)
trunk = temproot
step += 1
print("Step " + str(step) + " finished. Approx is " + str(Approx(root,trunk)))
else:
temptrunk = trunk
trunk = Approx(root,trunk)
root = trunk
step += 1
print("Step " + str(step) + " finished. Approx is " + str(Approx(root,trunk)))
if step > 50:
print("More than fifty steps required.")
wrong = False
</code></pre>
| 0 |
2016-09-20T03:04:43Z
| 39,605,809 |
<p>It looks to me like it definitely <strong>does</strong> divide by two; it is just that dividing by two doesn't undo multiplying two large numbers together. For example, say you wanted to find the square root of <code>10</code>. <code>trunk</code> is set to <code>11</code>. <code>Approx(root, trunk)</code> is <code>10 * 11 / 2 = 55</code>. This is set to <code>root</code> and <code>trunk</code> becomes the old <code>root</code>, <code>10</code>. Now you have <code>55</code> and <code>10</code> instead of <code>10</code> and <code>11</code>. Repeat this a few times and you end up with <code>inf</code>. Look more into the method you're trying to implement (is it the Babylonian method?) and see where your program and the method differ. That is likely the source of your woes, and not a lack of division. </p>
| 1 |
2016-09-21T00:41:33Z
|
[
"python",
"python-3.x"
] |
How to increment training Theano saved models?
| 39,585,131 |
<p>I have a trained model by <a href="http://www.deeplearning.net/software/theano/" rel="nofollow">Theano</a>, and there are new training data I want to increase the mode, How could I do this?</p>
| 0 |
2016-09-20T03:11:21Z
| 39,589,279 |
<p>Initialize the model with the pre-trained weights and perform gradient updates for the new examples, but you do have to take care of the learning rate and other parameters (depending on your optimizer). You may also try storing the optimizer's parameters as well, and initializing the optimizer with those stored values, to make sure the new training data does not drastically change the trained model.</p>
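<p>A minimal sketch of the idea, assuming <code>params</code> is the list of Theano shared variables holding the model's weights (the file name is arbitrary):</p>
<pre><code>import cPickle as pickle

# after the first training run: dump the current weights
with open('model.pkl', 'wb') as f:
    pickle.dump([p.get_value() for p in params], f, protocol=pickle.HIGHEST_PROTOCOL)

# later, before training on the new data: restore them into the same graph
with open('model.pkl', 'rb') as f:
    for p, value in zip(params, pickle.load(f)):
        p.set_value(value)

# ...then keep calling your existing train/update function on the new examples,
# possibly with a smaller learning rate.
</code></pre>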
| 1 |
2016-09-20T08:34:20Z
|
[
"python",
"numpy",
"machine-learning",
"neural-network",
"theano"
] |
Python: print the dictionary elements which has multiple values assigned for each key
| 39,585,179 |
<p>I've got a dictionary like the one, below:</p>
<pre><code>{ "amplifier": ["t_audio"],
"airbag": ["t_trigger"],
"trigger": ["t_sensor1", "t_sensor2"],
"hu": ["t_fused"],
"cam": ["t_front", "t_ldw", "t_left", "t_nivi", "t_rear_camera", "t_right"],
"video_screen": ["t_video"] }
</code></pre>
<p>as you can see, there are some elements which have more than one value assigned for each key. I'd like to extract those values as string, separately within (preferably) a for loop then print them out. Printed result should be something like this:</p>
<pre><code>group(amplifier, t_audio)
group(airbag, t_trigger)
group(trigger, t_sensor1)
group(trigger, t_sensor2)
group(hu, t_fused)
group(cam, t_front)
group(cam, t_ldw)
...
...
</code></pre>
<p>I can easily perform this on a normal dictionary where each key has only one values but got almost confused about this one(sorry if I'm newbe to Python...). Any kind of help is appreciated on how to get this result.</p>
| 1 |
2016-09-20T03:17:57Z
| 39,585,212 |
<pre><code>for k, v in mydict.iteritems():
    for vv in v:
        print "group(%s, %s)" % (k, vv)
        # or, without string formatting:
        # print "group(" + k + ", " + vv + ")"
        # or, in Python 3: print("group({}, {})".format(k, vv))
</code></pre>
| 0 |
2016-09-20T03:22:08Z
|
[
"python",
"list",
"dictionary"
] |
Python: print the dictionary elements which has multiple values assigned for each key
| 39,585,179 |
<p>I've got a dictionary like the one, below:</p>
<pre><code>{ "amplifier": ["t_audio"],
"airbag": ["t_trigger"],
"trigger": ["t_sensor1", "t_sensor2"],
"hu": ["t_fused"],
"cam": ["t_front", "t_ldw", "t_left", "t_nivi", "t_rear_camera", "t_right"],
"video_screen": ["t_video"] }
</code></pre>
<p>as you can see, there are some elements which have more than one value assigned for each key. I'd like to extract those values as string, separately within (preferably) a for loop then print them out. Printed result should be something like this:</p>
<pre><code>group(amplifier, t_audio)
group(airbag, t_trigger)
group(trigger, t_sensor1)
group(trigger, t_sensor2)
group(hu, t_fused)
group(cam, t_front)
group(cam, t_ldw)
...
...
</code></pre>
<p>I can easily perform this on a normal dictionary where each key has only one values but got almost confused about this one(sorry if I'm newbe to Python...). Any kind of help is appreciated on how to get this result.</p>
| 1 |
2016-09-20T03:17:57Z
| 39,585,231 |
<pre><code>for key in my_dict:              # my_dict is your dictionary
    for value in my_dict[key]:
        print "group(%s, %s)" % (key, value)
</code></pre>
| 0 |
2016-09-20T03:24:07Z
|
[
"python",
"list",
"dictionary"
] |
Python: print the dictionary elements which has multiple values assigned for each key
| 39,585,179 |
<p>I've got a dictionary like the one, below:</p>
<pre><code>{ "amplifier": ["t_audio"],
"airbag": ["t_trigger"],
"trigger": ["t_sensor1", "t_sensor2"],
"hu": ["t_fused"],
"cam": ["t_front", "t_ldw", "t_left", "t_nivi", "t_rear_camera", "t_right"],
"video_screen": ["t_video"] }
</code></pre>
<p>as you can see, there are some elements which have more than one value assigned for each key. I'd like to extract those values as string, separately within (preferably) a for loop then print them out. Printed result should be something like this:</p>
<pre><code>group(amplifier, t_audio)
group(airbag, t_trigger)
group(trigger, t_sensor1)
group(trigger, t_sensor2)
group(hu, t_fused)
group(cam, t_front)
group(cam, t_ldw)
...
...
</code></pre>
<p>I can easily perform this on a normal dictionary where each key has only one values but got almost confused about this one(sorry if I'm newbe to Python...). Any kind of help is appreciated on how to get this result.</p>
| 1 |
2016-09-20T03:17:57Z
| 39,585,280 |
<p>Very simple: loop through each key in the dictionary. Since each value is going to be a list with one or more elements, just loop through those and print the string you need:</p>
<pre><code>d = {'amplifier': ['t_audio'], 'hu': ['t_fused'], 'trigger': ['t_sensor1', 't_sensor2'], 'cam': ['t_front', 't_ldw', 't_left', 't_nivi', 't_rear_camera', 't_right'], 'airbag': ['t_trigger'], 'video_screen': ['t_video']}
for key in d:
for value in d[key]:
print 'group({},{})'.format(key,value)
</code></pre>
<p>You can see it in action here: <a href="https://eval.in/645071" rel="nofollow">https://eval.in/645071</a></p>
| 1 |
2016-09-20T03:31:11Z
|
[
"python",
"list",
"dictionary"
] |
BeautifulSoup - How do I get all <div> from a class in html?
| 39,585,184 |
<p>I am trying to get a list of all NFL teams from a website and I am very close. I am able to get some data, but I can't drill down far enough to get what I want.</p>
<p>My code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
f = open('C:\Users\Josh\Documents\Python\outFileRoto.txt', 'w')
errorFile = open('C:\Users\Josh\Documents\Python\errors.txt', 'w')
r = requests.get('https://rotogrinders.com/team-stats/nfl-allowed?sport=nfl&position=QB&site=draftkings&range=season')
data = r.text
#soup = BeautifulSoup(urllib2.urlopen('http://games.espn.com/ffl/tools/projections?startIndex=' +str(x).read(), 'html')
soup = BeautifulSoup(data, 'html.parser')
leftTable = soup.find('div', attrs={'class' : 'rgt-bdy left'})
#f.write("LEFT TABLE\n" + str(leftTable) + '\n')
rightCol = leftTable.find('div', attrs={'class' : 'rgt-colwrap'})
for row in rightCol.findAll('div'):
#col = row.findAll('div')
#f.write("col" + str(col))
try:
name = str(row)
f.write("----------------------------COLUMN---------------------------\n" + name + '\n')
except Exception as e:
errorFile.write (str(x) + ">>>>>>>>>>>>" + str(a) + "<<<<<<<<<<<<<<ROW" + str(row) + '\n')
pass
f.close
errorFile.close
</code></pre>
<p>The problem is that I get this:</p>
<pre><code>----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</code></pre>
<p>But I need this:</p>
<p><img src="http://i.stack.imgur.com/gPuXf.png" alt="NFL Teams"></p>
| 0 |
2016-09-20T03:18:41Z
| 39,585,457 |
<p>To drill down further, get beautifulsoup to return the div that has the class "rgt-col", and the style "display: block;".</p>
<p>Once you have that, drill down further by finding all the divs within that div, but ignoring the first result. Or you can also get all the divs that do not have a class.</p>
<p>EDIT 1:
This answer was written on the assumption that the HTML was already available and that all that was needed was to drill down to the specific elements. However, as mentioned by Padraic Cunningham and Casey wireman, the desired data is dynamically loaded, so that HTML is not available in the first place. The first step would therefore be to obtain it, either by identifying and loading the JSON endpoint or by using a browser-automation tool such as selenium.</p>
<p>EDIT 2:
In this case, however, it seems that the desired data is already in the html, in json format. All that's left is to parse this, as was done by Padraic Cunningham in his answer.</p>
| 1 |
2016-09-20T03:58:22Z
|
[
"python",
"python-2.7",
"beautifulsoup",
"python-requests"
] |
BeautifulSoup - How do I get all <div> from a class in html?
| 39,585,184 |
<p>I am trying to get a list of all NFL teams from a website and I am very close. I am able to get some data, but I can't drill down far enough to get what I want.</p>
<p>My code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
f = open('C:\Users\Josh\Documents\Python\outFileRoto.txt', 'w')
errorFile = open('C:\Users\Josh\Documents\Python\errors.txt', 'w')
r = requests.get('https://rotogrinders.com/team-stats/nfl-allowed?sport=nfl&position=QB&site=draftkings&range=season')
data = r.text
#soup = BeautifulSoup(urllib2.urlopen('http://games.espn.com/ffl/tools/projections?startIndex=' +str(x).read(), 'html')
soup = BeautifulSoup(data, 'html.parser')
leftTable = soup.find('div', attrs={'class' : 'rgt-bdy left'})
#f.write("LEFT TABLE\n" + str(leftTable) + '\n')
rightCol = leftTable.find('div', attrs={'class' : 'rgt-colwrap'})
for row in rightCol.findAll('div'):
#col = row.findAll('div')
#f.write("col" + str(col))
try:
name = str(row)
f.write("----------------------------COLUMN---------------------------\n" + name + '\n')
except Exception as e:
errorFile.write (str(x) + ">>>>>>>>>>>>" + str(a) + "<<<<<<<<<<<<<<ROW" + str(row) + '\n')
pass
f.close
errorFile.close
</code></pre>
<p>The problem is that I get this:</p>
<pre><code>----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</code></pre>
<p>But I need this:</p>
<p><img src="http://i.stack.imgur.com/gPuXf.png" alt="NFL Teams"></p>
| 0 |
2016-09-20T03:18:41Z
| 39,586,338 |
<p>It looks like your main problem is that the table you're interested in is dynamically built by JS. See this answer for info on scraping dynamically loaded content. <a href="http://stackoverflow.com/questions/17597424/how-to-retrieve-the-values-of-dynamic-html-content-using-python">How to retrieve the values of dynamic html content using Python</a></p>
<p>Alternatively, it looks like they have the initialization data that they generate the table with in the page, you could scrape that and parse the array that way if you don't want to go to the trouble of setting up things like selenium.</p>
| 0 |
2016-09-20T05:29:54Z
|
[
"python",
"python-2.7",
"beautifulsoup",
"python-requests"
] |
BeautifulSoup - How do I get all <div> from a class in html?
| 39,585,184 |
<p>I am trying to get a list of all NFL teams from a website and I am very close. I am able to get some data, but I can't drill down far enough to get what I want.</p>
<p>My code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
f = open('C:\Users\Josh\Documents\Python\outFileRoto.txt', 'w')
errorFile = open('C:\Users\Josh\Documents\Python\errors.txt', 'w')
r = requests.get('https://rotogrinders.com/team-stats/nfl-allowed?sport=nfl&position=QB&site=draftkings&range=season')
data = r.text
#soup = BeautifulSoup(urllib2.urlopen('http://games.espn.com/ffl/tools/projections?startIndex=' +str(x).read(), 'html')
soup = BeautifulSoup(data, 'html.parser')
leftTable = soup.find('div', attrs={'class' : 'rgt-bdy left'})
#f.write("LEFT TABLE\n" + str(leftTable) + '\n')
rightCol = leftTable.find('div', attrs={'class' : 'rgt-colwrap'})
for row in rightCol.findAll('div'):
#col = row.findAll('div')
#f.write("col" + str(col))
try:
name = str(row)
f.write("----------------------------COLUMN---------------------------\n" + name + '\n')
except Exception as e:
errorFile.write (str(x) + ">>>>>>>>>>>>" + str(a) + "<<<<<<<<<<<<<<ROW" + str(row) + '\n')
pass
f.close
errorFile.close
</code></pre>
<p>The problem is that I get this:</p>
<pre><code>----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Team<span class="icn-arw-down"></span></div>
----------------------------COLUMN---------------------------
<div class="rgt-col">
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</div>
----------------------------COLUMN---------------------------
<div class="rgt-hdr">Abbr<span class="icn-arw-down"></span></div>
</code></pre>
<p>But I need this:</p>
<p><img src="http://i.stack.imgur.com/gPuXf.png" alt="NFL Teams"></p>
| 0 |
2016-09-20T03:18:41Z
| 39,589,610 |
<p>The data is in <em>json</em> format in the page source, inside the <code>$(document).ready(function()</code> call which is what loads the data you see in your browser. You just need to find the correct <em>script</em> tag with <em>bs4</em>, pull the array out with a regex, then call <em>json.loads</em> on the result to get a list of dicts:</p>
<pre><code>In [1]: from bs4 import BeautifulSoup
In [2]: import requests
In [3]: import re
In [4]: import json
In [5]: soup = BeautifulSoup(requests.get("https://rotogrinders.com/team-stats/nfl-allowed?sport=nfl&position=QB&site=draftkings&range=season").content)
In [6]: script = soup.find("script", text=re.compile(r'data\s+=\s+')).text
In [7]: data = json.loads(re.search(r"data\s+=\s+(\[.*?\])", script).group(1))
In [8]: print(data)
[{u'fuml': 0, u'tyds': 11, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'63.64%', u'tchs': 7, u'rutd': 0, u'payds': 371, u'rec': 0, u'ruyds': 11, u'patd': 2, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'24.94', u'ruypc': u'1.57', u'att': 55, u'ruatt': 7, u'team': u'Baltimore Ravens', u'reyds': 0, u'cmp': 35, u'abbr': u'BAL'}, {u'fuml': 0, u'tyds': 29, u'tar': 0, u'gp': 2, u'int': 3, u'rztar': 0, u'retd': 0, u'pct': u'52.78%', u'tchs': 5, u'rutd': 0, u'payds': 448, u'rec': 0, u'ruyds': 29, u'patd': 5, u'reypc': u'0.00', u'rzatt': 2, u'fpts': u'40.82', u'ruypc': u'5.80', u'att': 72, u'ruatt': 5, u'team': u'Cincinnati Bengals', u'reyds': 0, u'cmp': 38, u'abbr': u'CIN'}, {u'fuml': 0, u'tyds': 2, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'57.32%', u'tchs': 3, u'rutd': 0, u'payds': 580, u'rec': 0, u'ruyds': 2, u'patd': 4, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'40.40', u'ruypc': u'0.67', u'att': 82, u'ruatt': 3, u'team': u'Cleveland Browns', u'reyds': 0, u'cmp': 47, u'abbr': u'CLE'}, {u'fuml': 0, u'tyds': 15, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'62.89%', u'tchs': 3, u'rutd': 0, u'payds': 695, u'rec': 0, u'ruyds': 15, u'patd': 1, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'34.30', u'ruypc': u'5.00', u'att': 97, u'ruatt': 3, u'team': u'Pittsburgh Steelers', u'reyds': 0, u'cmp': 61, u'abbr': u'PIT'}, {u'fuml': 0, u'tyds': 24, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'62.32%', u'tchs': 10, u'rutd': 0, u'payds': 421, u'rec': 0, u'ruyds': 24, u'patd': 3, u'reypc': u'0.00', u'rzatt': 2, u'fpts': u'33.24', u'ruypc': u'2.40', u'att': 69, u'ruatt': 10, u'team': u'Chicago Bears', u'reyds': 0, u'cmp': 43, u'abbr': u'CHI'}, {u'fuml': 0, u'tyds': 31, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'70.00%', u'tchs': 6, u'rutd': 0, u'payds': 623, u'rec': 0, u'ruyds': 31, u'patd': 6, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'56.02', u'ruypc': u'5.17', u'att': 80, u'ruatt': 6, u'team': u'Detroit Lions', u'reyds': 0, u'cmp': 56, u'abbr': u'DET'}, {u'fuml': 0, u'tyds': -1, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'65.71%', u'tchs': 3, u'rutd': 0, u'payds': 606, u'rec': 0, u'ruyds': -1, u'patd': 3, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'38.14', u'ruypc': u'-0.33', u'att': 70, u'ruatt': 3, u'team': u'Green Bay Packers', u'reyds': 0, u'cmp': 46, u'abbr': u'GBP'}, {u'fuml': 2, u'tyds': 48, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'58.44%', u'tchs': 7, u'rutd': 1, u'payds': 484, u'rec': 0, u'ruyds': 48, u'patd': 3, u'reypc': u'0.00', u'rzatt': 2, u'fpts': u'41.16', u'ruypc': u'6.86', u'att': 77, u'ruatt': 7, u'team': u'Minnesota Vikings', u'reyds': 0, u'cmp': 45, u'abbr': u'MIN'}, {u'fuml': 1, u'tyds': -3, u'tar': 0, u'gp': 1, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'67.65%', u'tchs': 4, u'rutd': 0, u'payds': 258, u'rec': 0, u'ruyds': -3, u'patd': 1, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'13.02', u'ruypc': u'-0.75', u'att': 34, u'ruatt': 4, u'team': u'Buffalo Bills', u'reyds': 0, u'cmp': 23, u'abbr': u'BUF'}, {u'fuml': 1, u'tyds': 28, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'64.56%', u'tchs': 8, u'rutd': 0, u'payds': 584, u'rec': 0, u'ruyds': 28, u'patd': 4, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'43.16', u'ruypc': u'3.50', u'att': 79, u'ruatt': 8, u'team': u'Miami Dolphins', u'reyds': 0, u'cmp': 51, u'abbr': u'MIA'}, {u'fuml': 0, u'tyds': 36, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': 
u'68.29%', u'tchs': 8, u'rutd': 0, u'payds': 660, u'rec': 0, u'ruyds': 36, u'patd': 4, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'47.00', u'ruypc': u'4.50', u'att': 82, u'ruatt': 8, u'team': u'New England Patriots', u'reyds': 0, u'cmp': 56, u'abbr': u'NEP'}, {u'fuml': 0, u'tyds': 7, u'tar': 0, u'gp': 1, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'76.67%', u'tchs': 3, u'rutd': 0, u'payds': 366, u'rec': 0, u'ruyds': 7, u'patd': 1, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'21.34', u'ruypc': u'2.33', u'att': 30, u'ruatt': 3, u'team': u'New York Jets', u'reyds': 0, u'cmp': 23, u'abbr': u'NYJ'}, {u'fuml': 2, u'tyds': 14, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'54.55%', u'tchs': 4, u'rutd': 0, u'payds': 402, u'rec': 0, u'ruyds': 14, u'patd': 1, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'21.48', u'ruypc': u'3.50', u'att': 66, u'ruatt': 4, u'team': u'Houston Texans', u'reyds': 0, u'cmp': 36, u'abbr': u'HOU'}, {u'fuml': 0, u'tyds': 12, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'73.61%', u'tchs': 3, u'rutd': 0, u'payds': 606, u'rec': 0, u'ruyds': 12, u'patd': 3, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'41.44', u'ruypc': u'4.00', u'att': 72, u'ruatt': 3, u'team': u'Indianapolis Colts', u'reyds': 0, u'cmp': 53, u'abbr': u'IND'}, {u'fuml': 1, u'tyds': 25, u'tar': 0, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'62.71%', u'tchs': 7, u'rutd': 1, u'payds': 419, u'rec': 0, u'ruyds': 25, u'patd': 6, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'51.26', u'ruypc': u'3.57', u'att': 59, u'ruatt': 7, u'team': u'Jacksonville Jaguars', u'reyds': 0, u'cmp': 37, u'abbr': u'JAC'}, {u'fuml': 0, u'tyds': 39, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'54.79%', u'tchs': 4, u'rutd': 0, u'payds': 496, u'rec': 0, u'ruyds': 39, u'patd': 1, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'29.74', u'ruypc': u'9.75', u'att': 73, u'ruatt': 4, u'team': u'Tennessee Titans', u'reyds': 0, u'cmp': 40, u'abbr': u'TEN'}, {u'fuml': 0, u'tyds': 20, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'63.51%', u'tchs': 2, u'rutd': 0, u'payds': 571, u'rec': 0, u'ruyds': 20, u'patd': 4, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'41.84', u'ruypc': u'10.00', u'att': 74, u'ruatt': 2, u'team': u'Dallas Cowboys', u'reyds': 0, u'cmp': 47, u'abbr': u'DAL'}, {u'fuml': 0, u'tyds': 12, u'tar': 0, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'60.67%', u'tchs': 2, u'rutd': 0, u'payds': 490, u'rec': 0, u'ruyds': 12, u'patd': 1, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'27.80', u'ruypc': u'6.00', u'att': 89, u'ruatt': 2, u'team': u'New York Giants', u'reyds': 0, u'cmp': 54, u'abbr': u'NYG'}, {u'fuml': 1, u'tyds': 37, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'60.00%', u'tchs': 5, u'rutd': 0, u'payds': 425, u'rec': 0, u'ruyds': 37, u'patd': 0, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'20.70', u'ruypc': u'7.40', u'att': 55, u'ruatt': 5, u'team': u'Philadelphia Eagles', u'reyds': 0, u'cmp': 33, u'abbr': u'PHI'}, {u'fuml': 0, u'tyds': 4, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'73.13%', u'tchs': 2, u'rutd': 1, u'payds': 592, u'rec': 0, u'ruyds': 4, u'patd': 3, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'44.08', u'ruypc': u'2.00', u'att': 67, u'ruatt': 2, u'team': u'Washington Redskins', u'reyds': 0, u'cmp': 49, u'abbr': u'WAS'}, {u'fuml': 0, u'tyds': 13, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'73.08%', u'tchs': 6, u'rutd': 0, u'payds': 580, u'rec': 0, u'ruyds': 13, u'patd': 7, 
u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'54.50', u'ruypc': u'2.17', u'att': 78, u'ruatt': 6, u'team': u'Atlanta Falcons', u'reyds': 0, u'cmp': 57, u'abbr': u'ATL'}, {u'fuml': 0, u'tyds': 30, u'tar': 0, u'gp': 2, u'int': 4, u'rztar': 0, u'retd': 0, u'pct': u'56.45%', u'tchs': 8, u'rutd': 1, u'payds': 421, u'rec': 0, u'ruyds': 30, u'patd': 3, u'reypc': u'0.00', u'rzatt': 2, u'fpts': u'36.84', u'ruypc': u'3.75', u'att': 62, u'ruatt': 8, u'team': u'Carolina Panthers', u'reyds': 0, u'cmp': 35, u'abbr': u'CAR'}, {u'fuml': 1, u'tyds': 12, u'tar': 0, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'70.89%', u'tchs': 6, u'rutd': 0, u'payds': 687, u'rec': 0, u'ruyds': 12, u'patd': 1, u'reypc': u'0.00', u'rzatt': 3, u'fpts': u'38.68', u'ruypc': u'2.00', u'att': 79, u'ruatt': 6, u'team': u'New Orleans Saints', u'reyds': 0, u'cmp': 56, u'abbr': u'NOS'}, {u'fuml': 0, u'tyds': 10, u'tar': 0, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'62.16%', u'tchs': 3, u'rutd': 0, u'payds': 653, u'rec': 0, u'ruyds': 10, u'patd': 5, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'52.12', u'ruypc': u'3.33', u'att': 74, u'ruatt': 3, u'team': u'Tampa Bay Buccaneers', u'reyds': 0, u'cmp': 46, u'abbr': u'TBB'}, {u'fuml': 1, u'tyds': 76, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'53.42%', u'tchs': 14, u'rutd': 1, u'payds': 391, u'rec': 0, u'ruyds': 76, u'patd': 2, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'37.24', u'ruypc': u'5.43', u'att': 73, u'ruatt': 14, u'team': u'Denver Broncos', u'reyds': 0, u'cmp': 39, u'abbr': u'DEN'}, {u'fuml': 0, u'tyds': 6, u'tar': 0, u'gp': 2, u'int': 2, u'rztar': 0, u'retd': 0, u'pct': u'63.77%', u'tchs': 4, u'rutd': 0, u'payds': 511, u'rec': 0, u'ruyds': 6, u'patd': 2, u'reypc': u'0.00', u'rzatt': 0, u'fpts': u'30.04', u'ruypc': u'1.50', u'att': 69, u'ruatt': 4, u'team': u'Kansas City Chiefs', u'reyds': 0, u'cmp': 44, u'abbr': u'KCC'}, {u'fuml': 1, u'tyds': 5, u'tar': 0, u'gp': 2, u'int': 1, u'rztar': 0, u'retd': 0, u'pct': u'71.05%', u'tchs': 2, u'rutd': 0, u'payds': 819, u'rec': 0, u'ruyds': 5, u'patd': 7, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'64.26', u'ruypc': u'2.50', u'att': 76, u'ruatt': 2, u'team': u'Oakland Raiders', u'reyds': 0, u'cmp': 54, u'abbr': u'OAK'}, {u'fuml': 1, u'tyds': 49, u'tar': 0, u'gp': 2, u'int': 3, u'rztar': 0, u'retd': 0, u'pct': u'66.33%', u'tchs': 7, u'rutd': 1, u'payds': 692, u'rec': 0, u'ruyds': 49, u'patd': 4, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'53.58', u'ruypc': u'7.00', u'att': 98, u'ruatt': 7, u'team': u'San Diego Chargers', u'reyds': 0, u'cmp': 65, u'abbr': u'SDC'}, {u'fuml': 2, u'tyds': 24, u'tar': 1, u'gp': 2, u'int': 4, u'rztar': 0, u'retd': 0, u'pct': u'60.00%', u'tchs': 8, u'rutd': 0, u'payds': 507, u'rec': 1, u'ruyds': 21, u'patd': 2, u'reypc': u'3.00', u'rzatt': 1, u'fpts': u'28.68', u'ruypc': u'3.00', u'att': 85, u'ruatt': 7, u'team': u'Arizona Cardinals', u'reyds': 3, u'cmp': 51, u'abbr': u'ARI'}, {u'fuml': 0, u'tyds': 41, u'tar': 1, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'62.86%', u'tchs': 15, u'rutd': 0, u'payds': 424, u'rec': 1, u'ruyds': 57, u'patd': 1, u'reypc': u'-16.00', u'rzatt': 1, u'fpts': u'29.06', u'ruypc': u'4.07', u'att': 70, u'ruatt': 14, u'team': u'Los Angeles Rams', u'reyds': -16, u'cmp': 44, u'abbr': u'LAR'}, {u'fuml': 1, u'tyds': 47, u'tar': 0, u'gp': 2, u'int': 3, u'rztar': 0, u'retd': 0, u'pct': u'54.67%', u'tchs': 9, u'rutd': 0, u'payds': 483, u'rec': 0, u'ruyds': 47, u'patd': 4, u'reypc': u'0.00', u'rzatt': 1, u'fpts': u'39.02', u'ruypc': u'5.22', u'att': 75, 
u'ruatt': 9, u'team': u'San Francisco Niners', u'reyds': 0, u'cmp': 41, u'abbr': u'SFO'}, {u'fuml': 0, u'tyds': 22, u'tar': 0, u'gp': 2, u'int': 0, u'rztar': 0, u'retd': 0, u'pct': u'57.63%', u'tchs': 8, u'rutd': 1, u'payds': 425, u'rec': 0, u'ruyds': 22, u'patd': 0, u'reypc': u'0.00', u'rzatt': 2, u'fpts': u'28.20', u'ruypc': u'2.75', u'att': 59, u'ruatt': 8, u'team': u'Seattle Seahawks', u'reyds': 0, u'cmp': 34, u'abbr': u'SEA'}]
In [9]: print([d["team"] for d in data])
[u'Baltimore Ravens', u'Cincinnati Bengals', u'Cleveland Browns', u'Pittsburgh Steelers', u'Chicago Bears', u'Detroit Lions', u'Green Bay Packers', u'Minnesota Vikings', u'Buffalo Bills', u'Miami Dolphins', u'New England Patriots', u'New York Jets', u'Houston Texans', u'Indianapolis Colts', u'Jacksonville Jaguars', u'Tennessee Titans', u'Dallas Cowboys', u'New York Giants', u'Philadelphia Eagles', u'Washington Redskins', u'Atlanta Falcons', u'Carolina Panthers', u'New Orleans Saints', u'Tampa Bay Buccaneers', u'Denver Broncos', u'Kansas City Chiefs', u'Oakland Raiders', u'San Diego Chargers', u'Arizona Cardinals', u'Los Angeles Rams', u'San Francisco Niners', u'Seattle Seahawks']
</code></pre>
<p>On a side note, use <em>raw strings</em> for your paths and open your files using <code>with</code> so they are closed automatically.</p>
<pre><code>with open(r'C:\Users\Josh\Documents\Python\outFileRoto.txt', 'w') as f:
    # write to f here; the file is closed automatically when the block exits
</code></pre>
| 1 |
2016-09-20T08:50:01Z
|
[
"python",
"python-2.7",
"beautifulsoup",
"python-requests"
] |
mac two version python conflict
| 39,585,238 |
<p>I installed python3.5 on my mac; the installation was automatic. But these days I found there was already python2 on my mac, and every module I installed through pip went to <code>/Library/Python/2.7/site-packages</code>.</p>
<p>I found that python3 is installed at <code>/Library/Frameworks/Python.framework/Versions/3.5</code>.</p>
<p>Now I downloaded mysql-connector-python and installed it, but its install location is <code>python2.7/site-packages</code>. When I open PyCharm, whose default interpreter is python3.5, I cannot use mysql-connector. Does anybody know how to solve this?</p>
| 0 |
2016-09-20T03:25:42Z
| 39,585,904 |
<p>For the mysql-connector installation problem, I found the solution:</p>
<p>Go to the python3 bin directory and find the pip command there. That pip is shadowed by the system python2 pip command, so if you want to install the MySQL-python module into the python3.x site-packages you should cd to that bin directory and run <code>./pip install MySQL-python</code>. It downloads the module successfully but installation fails with <code>ImportError: No module named 'ConfigParser'</code>. I googled the error and found there is no such module in python3; use its fork instead: <code>mysqlclient</code>.</p>
<p><strong>NOTE:</strong> In order not to conflict with the system default python2 pip command, cd into the python3 bin directory and run <code>./pip install mysqlclient</code>; that succeeds.</p>
| 0 |
2016-09-20T04:50:11Z
|
[
"python",
"osx"
] |
How to add a class instance from user input in python?
| 39,585,258 |
<pre><code>class Student(object):
def __init__(self, name, chinese = 0, math = 0, english = 0):
self.name = name
self.chinese = chinese
self.math = math
self.english = english
self.total = self.chinese + self.math + self.english
        Student.list.append(name)
</code></pre>
<p>I'm trying to write a grade management system, all the student's score are stored in classes of their name. How can I add new instances to the class Student based on user input?</p>
<pre><code> name = raw_input("Please input the student's name:")
chinese = input("Please input Chinese score:")
math = input("Please input Math score:")
english = input("Please input English score:")
name = Student(name, chinese, math, english)
# eval(name)
# name = Student(name, chinese, math, english)
</code></pre>
<p>I've tried with these method but nothing works out.</p>
| 0 |
2016-09-20T03:27:54Z
| 39,585,294 |
<pre><code>import pprint
class Student():
#blah blah blah
if __name__ == "__main__":
list_of_students = []
while True:
if raw_input("Add student? (y/n)") == "n":
break
# ask your questions here
        list_of_students.append(Student(name, chinese, math, english))  # built from the answers collected above
for student in list_of_students:
pprint.pprint(student.__dict__)
</code></pre>
| 0 |
2016-09-20T03:33:29Z
|
[
"python"
] |
How to add a class instance from user input in python?
| 39,585,258 |
<pre><code>class Student(object):
def __init__(self, name, chinese = 0, math = 0, english = 0):
self.name = name
self.chinese = chinese
self.math = math
self.english = english
self.total = self.chinese + self.math + self.english
        Student.list.append(name)
</code></pre>
<p>I'm trying to write a grade management system, all the student's score are stored in classes of their name. How can I add new instances to the class Student based on user input?</p>
<pre><code> name = raw_input("Please input the student's name:")
chinese = input("Please input Chinese score:")
math = input("Please input Math score:")
english = input("Please input English score:")
name = Student(name, chinese, math, english)
# eval(name)
# name = Student(name, chinese, math, english)
</code></pre>
<p>I've tried with these method but nothing works out.</p>
| 0 |
2016-09-20T03:27:54Z
| 39,585,551 |
<p>Try doing it the following way. :</p>
<pre><code>from collections import defaultdict
class Student:
def __init__(self, name=None, chinese=None, math=None, english=None):
self.student_info = defaultdict(list)
if name is not None:
self.student_info['name'].append(name)
self.student_info['chinese'].append(chinese)
self.student_info['math'].append(math)
self.student_info['english'].append(english)
def add_student(self, name, chinese, math, english):
if name is not None:
self.student_info['name'].append(name)
self.student_info['chinese'].append(chinese)
self.student_info['math'].append(math)
self.student_info['english'].append(english)
return None
</code></pre>
<p>In your original question's code, there is no add method (just one constructor). Hence you cannot add any new student's information to the object. </p>
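<p>For completeness, a minimal usage sketch of the class above with the input code from the question (Python 2, so <code>raw_input</code>/<code>input</code> as in the original):</p>
<pre><code>students = Student()  # start with an empty container
name = raw_input("Please input the student's name:")
chinese = input("Please input Chinese score:")
math = input("Please input Math score:")
english = input("Please input English score:")
students.add_student(name, chinese, math, english)
print students.student_info['name']  # all names entered so far
</code></pre>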
| 0 |
2016-09-20T04:09:56Z
|
[
"python"
] |
Get unique intersection values of two sets
| 39,585,328 |
<p>I'd like to get the indexes of unique vectors using hash (for matrices it is efficient) but np.intersect1d does not give indices, it gives values. np.in1d on the other hand does give indices but not unique ones. I zipped a dict to make it work but it doesn't seem like the most efficient. I am new to python so trying to see if there is a better way to do this. Thanks for the help!</p>
<p>code:</p>
<pre><code>import numpy as np
import hashlib
x=np.array([[1, 2, 3],[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y=np.array([[4, 5, 6], [7, 8, 9],[1, 2, 3]])
xhash=[hashlib.sha1(row).digest() for row in x]
yhash=[hashlib.sha1(row).digest() for row in y]
z=np.intersect1d(xhash,yhash)
idx=list(range(len(xhash)))
d=dict(zip(xhash,idx))
unique_idx=[d[i] for i in z] #is there a better way to get this or boolean array
print(unique_idx)
uniques=np.array([x[i] for i in unique_idx])
print(uniques)
</code></pre>
<p>output:</p>
<pre><code>>>> [2, 3, 1]
[[4 5 6]
[7 8 9]
[1 2 3]]
</code></pre>
<p>I'm having a similar issue for np.unique() where it doesn't give me any indexes.</p>
| 3 |
2016-09-20T03:38:37Z
| 39,587,314 |
<p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author) has efficient functionality for doing things like this (and related functionality):</p>
<pre><code>import numpy_indexed as npi
uniques = npi.intersection(x, y)
</code></pre>
<p>Note that this solution does not use hashing, but bitwise equality of the elements of the sequence; so no risk of hash collisions, and likely a lot faster in practice.</p>
| 0 |
2016-09-20T06:40:42Z
|
[
"python",
"numpy",
"hash",
"intersection"
] |
Get unique intersection values of two sets
| 39,585,328 |
<p>I'd like to get the indexes of unique vectors using hash (for matrices it is efficient) but np.intersect1d does not give indices, it gives values. np.in1d on the other hand does give indices but not unique ones. I zipped a dict to make it work but it doesn't seem like the most efficient. I am new to python so trying to see if there is a better way to do this. Thanks for the help!</p>
<p>code:</p>
<pre><code>import numpy as np
import hashlib
x=np.array([[1, 2, 3],[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y=np.array([[4, 5, 6], [7, 8, 9],[1, 2, 3]])
xhash=[hashlib.sha1(row).digest() for row in x]
yhash=[hashlib.sha1(row).digest() for row in y]
z=np.intersect1d(xhash,yhash)
idx=list(range(len(xhash)))
d=dict(zip(xhash,idx))
unique_idx=[d[i] for i in z] #is there a better way to get this or boolean array
print(unique_idx)
uniques=np.array([x[i] for i in unique_idx])
print(uniques)
</code></pre>
<p>output:</p>
<pre><code>>>> [2, 3, 1]
[[4 5 6]
[7 8 9]
[1 2 3]]
</code></pre>
<p>I'm having a similar issue for np.unique() where it doesn't give me any indexes.</p>
| 3 |
2016-09-20T03:38:37Z
| 39,592,711 |
<p>Use np.unique's return_index option to get the indices of the unique values selected by in1d:</p>
<p>code:</p>
<pre><code>import numpy as np
import hashlib
x=np.array([[1, 2, 3],[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y=np.array([[1, 2, 3], [7, 8, 9]])
xhash=[hashlib.sha1(row).digest() for row in x]
yhash=[hashlib.sha1(row).digest() for row in y]
z=np.in1d(xhash,yhash)
##Use unique to get unique indices to ind1 results
_,unique=np.unique(np.array(xhash)[z],return_index=True)
##Compute indices by indexing an array of indices
idx=np.array(range(len(xhash)))
unique_idx=(np.array(idx)[z])[unique]
print('x=',x)
print('unique_idx=',unique_idx)
print('x[unique_idx]=',x[unique_idx])
</code></pre>
<p>Output:</p>
<pre><code>x= [[1 2 3]
[1 2 3]
[4 5 6]
[7 8 9]]
unique_idx= [3 0]
x[unique_idx]= [[7 8 9]
[1 2 3]]
</code></pre>
| 0 |
2016-09-20T11:19:40Z
|
[
"python",
"numpy",
"hash",
"intersection"
] |
Python current time comparison with other time
| 39,585,338 |
<p>I am looking for a comparison of two times in Python. One time is the real time from the computer and the other time is stored in a string formatted like <code>"01:23:00"</code>.</p>
<pre><code>import time
ctime = time.strptime("%H:%M:%S") # this always takes system time
time2 = "08:00:00"
if (ctime > time2):
print "foo"
</code></pre>
| 2 |
2016-09-20T03:39:20Z
| 39,585,471 |
<p><a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">https://docs.python.org/2/library/datetime.html</a></p>
<p>The <code>datetime</code> module will parse dates, times, or combined date-time values into objects that can be compared.</p>
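<p>A minimal sketch of the idea, applied to the strings from the question (assuming only the time of day matters):</p>
<pre><code>from datetime import datetime

time2 = "08:00:00"
# parse the stored string into a time object and compare it with the current time of day
stored = datetime.strptime(time2, "%H:%M:%S").time()
if datetime.now().time() > stored:
    print "foo"
</code></pre>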
| 0 |
2016-09-20T03:59:39Z
|
[
"python",
"python-2.7",
"datetime"
] |
Python current time comparison with other time
| 39,585,338 |
<p>I am looking for a comparison of two times in Python. One time is the real time from the computer and the other time is stored in a string formatted like <code>"01:23:00"</code>.</p>
<pre><code>import time
ctime = time.strptime("%H:%M:%S") # this always takes system time
time2 = "08:00:00"
if (ctime > time2):
print "foo"
</code></pre>
| 2 |
2016-09-20T03:39:20Z
| 39,585,673 |
<pre><code>import datetime
now = datetime.datetime.now()
my_time_string = "01:20:33"
my_datetime = datetime.datetime.strptime(my_time_string, "%H:%M:%S")
# I am supposing that the date must be the same as now
my_datetime = now.replace(hour=my_datetime.time().hour, minute=my_datetime.time().minute, second=my_datetime.time().second, microsecond=0)
if (now > my_datetime):
print "Hello"
</code></pre>
<p><strong>EDIT:</strong></p>
<p>The above solution was not taking into account leap second days (<code>23:59:60</code>). Below is an updated version that deals with such cases:</p>
<pre><code>import datetime
import calendar
import time
now = datetime.datetime.now()
my_time_string = "23:59:60" # leap second
my_time_string = now.strftime("%Y-%m-%d") + " " + my_time_string # I am supposing the date must be the same as now
my_time = time.strptime(my_time_string, "%Y-%m-%d %H:%M:%S")
my_datetime = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=calendar.timegm(my_time))
if (now > my_datetime):
print "Foo"
</code></pre>
| 2 |
2016-09-20T04:27:01Z
|
[
"python",
"python-2.7",
"datetime"
] |
Python current time comparison with other time
| 39,585,338 |
<p>I am looking for a comparison of two times in Python. One time is the real time from the computer and the other time is stored in a string formatted like <code>"01:23:00"</code>.</p>
<pre><code>import time
ctime = time.strptime("%H:%M:%S") # this always takes system time
time2 = "08:00:00"
if (ctime > time2):
print "foo"
</code></pre>
| 2 |
2016-09-20T03:39:20Z
| 39,586,091 |
<pre><code>from datetime import datetime
current_time = datetime.strftime(datetime.utcnow(),"%H:%M:%S") #output: 11:12:12
mytime = "10:12:34"
if current_time > mytime:
print "Time has passed."
</code></pre>
| 0 |
2016-09-20T05:08:39Z
|
[
"python",
"python-2.7",
"datetime"
] |
Show how a projectile (turtle) travels over time
| 39,585,354 |
<p>I am new to Python, and currently having a rough time with turtle graphics. This is what I am trying to solve</p>
<blockquote>
<p>On Turtellini (the planet where Python turtles live) the
transportation system propels turtles with a giant slingshot. A
particular turtle's original location (x0, y0) is (-180, -100). He is
then shot upward at an initial vertical velocity (vy) of 88 units per
second and a horizontal velocity (vx) of 20 units per second to the
right. He travels for 16 seconds. The acceleration due to gravity (g)
is 11 units per second squared. The location of the turtle at a
given second (t) is calculated as follows: x = x0 + vx * t and y = y0
+ vy * t - g/2 * t^2. This program is to show how a turtle travels over this period of time.</p>
</blockquote>
<p>The output should be like this:</p>
<p><img src="http://i.stack.imgur.com/7sqkv.jpg" alt="Output Image"></p>
<p>Here is what I should do:</p>
<ul>
<li>set up the constants (vertical velocity, horizontal velocity,
gravity) and variables (x and y coordinates)</li>
<li>set up the turtle by giving him a proper shape, putting his tail up,
moving him to the initial position, and putting his tail down</li>
<li>make a loop that repeats for seconds 1 through 16 inclusive; in each
iteration of the loop display the values of the x and y variables (in the
shell window), move the turtle to those coordinates, have the turtle stamp
his shape, and calculate the new values for the x and y variables</li>
<li>after the loop terminates, move the turtle to the last calculated
coordinates, change his color, and stamp his shape, then wait for a mouse click</li>
</ul>
<p>My code so far:</p>
<pre><code>import turtle
def main():
wn = turtle.Screen()
turtellini = turtle.Turtle()
t = int(input("Blab blab blab: "))
x0 = -180
y0 = -100
vx = 20
vy = 88
g = 11
x = (float(x0 + vx * t))
y = (float(y0 + vy * t - g / 2 * t**2))
turtellini.color("black")
turtellini.shape("turtle")
turtellini.up()
turtellini.goto(-180,-100)
turtellini.down()
for i in range(1,16,1):
turtellini.stamp()
turtellini.forward(i)
turtellini.right(i)
print(x)
print(y)
if __name__ == "__main__":
main()
</code></pre>
<p>I know I am doing badly, but can anyone help me solve this problem?</p>
| 0 |
2016-09-20T03:41:55Z
| 39,736,969 |
<p>You seem to have most of the parts and pieces. The biggest issue I see is you didn't put your x,y calculation in the loop. The loop iteration variable <code>i</code> is really <code>t</code> in your motion equations. Each time you calculate a new x,y you simply move the turtle to that position:</p>
<pre><code>import turtle
from math import pi, atan
x0, y0 = -180, -100 # initial location
vx, vy = 20.0, 88.0 # initial velocity in units per second
travel_time = 16 # seconds
g = 11.0 # acceleration due to gravity in units per second squared
turtellini = turtle.Turtle(shape='turtle', visible=False)
turtellini.penup()
turtellini.radians() # to make turtle compatible with math.atan()
turtellini.setheading(pi / 2) # straight up
turtellini.goto(x0, y0)
turtellini.pendown()
turtellini.showturtle()
turtellini.stamp()
for t in range(1, travel_time + 1):
x = x0 + vx * t
y = y0 + vy * t - g / 2 * t**2
turtellini.goto(x, y)
print(x, y)
angle = atan((vy * t - g * t**2) / (vx * t)) # a guess!
turtellini.setheading(angle)
turtellini.stamp()
turtle.exitonclick()
</code></pre>
<p>Unlike the gold standard image, I assumed the turtle was aerodynamic like a bullet and travelled head first through the flight. I don't know, and couldn't quickly find, the formula for the flight angle of a projectile so I guessed from the existing formulas:</p>
<p><a href="http://i.stack.imgur.com/jsnrY.png" rel="nofollow"><img src="http://i.stack.imgur.com/jsnrY.png" alt="enter image description here"></a></p>
| 0 |
2016-09-28T02:12:30Z
|
[
"python",
"turtle-graphics"
] |
Deskew Text with OpenCV and Python (RotatedRect, minAreaRect)
| 39,585,407 |
<p>I'm new with OpenCV and I want to deskew an image that have a skewed text. First I read the image in GrayScale and Binarize it, then I try to do <a href="http://opencvpython.blogspot.com.ar/2012/06/contours-2-brotherhood.html" rel="nofollow">this</a>:</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread('m20.jpg',0)
ret,byw = cv2.threshold(img,127,255,cv2.THRESH_BINARY_INV)
_, contours, hierarchy = cv2.findContours(byw.copy(), cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]
draw = cv2.cvtColor(byw, cv2.COLOR_GRAY2BGR)
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(draw, [box], 0, (0, 255, 0), 2)
</code></pre>
<p>But this doesn't work because findContours() expects to receive an image with a body shape.
Another way I tried is to translate this C++ code:</p>
<pre><code>// Read image
Mat3b img = imread("path_to_image");
// Binarize image. Text is white, background is black
Mat1b bin;
cvtColor(img, bin, COLOR_BGR2GRAY);
bin = bin < 200;
// Find all white pixels
vector<Point> pts;
findNonZero(bin, pts);
// Get rotated rect of white pixels
RotatedRect box = minAreaRect(pts);
if (box.size.width > box.size.height)
{
swap(box.size.width, box.size.height);
box.angle += 90.f;
}
Point2f vertices[4];
box.points(vertices);
for (int i = 0; i < 4; ++i)
{
line(img, vertices[i], vertices[(i + 1) % 4], Scalar(0, 255, 0));
}
// Rotate the image according to the found angle
Mat1b rotated;
Mat M = getRotationMatrix2D(box.center, box.angle, 1.0);
warpAffine(bin, rotated, M, bin.size());
</code></pre>
<p>And i have this:</p>
<pre><code>draw = cv2.cvtColor(byw, cv2.COLOR_GRAY2BGR)
data = np.array(byw)
subzero = np.nonzero(data)
subuno = np.reshape(subzero,(17345,2)) # this is because cv2.minAreaRect() receives a Nx2 numpy
rect = cv2.minAreaRect(subuno)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(draw,[box],0,(0,255,0),2)
</code></pre>
<p>But then again the result is not the expected one; the rectangle is not well positioned.
<hr>
It also occurs to me that I might try to draw the box with a <code>for</code> loop like in the C++ code, but I don't know how to obtain the vertices from
<code>box = cv2.boxPoints(rect)</code>. Please help!</p>
| 0 |
2016-09-20T03:50:58Z
| 39,585,700 |
<p>Maybe you can check this out: <a href="http://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/" rel="nofollow">http://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/</a></p>
<p>In that link, the author deskews or transforms the entire document (and thus also the contained text), however, it depends on finding the edges of the document, based on the contours found in an image.</p>
<p>He takes it further in this following tutorial: <a href="http://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/" rel="nofollow">http://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/</a></p>
<p>His solutions work because he can adjust the entire document based on the detected position, orientation, and skewness of the documents. Adjusting the position of the document as a whole in effect adjusts everything found inside the document, including the text.</p>
<p>However, I believe what you're asking is you want to deskew text even <em>without</em> detecting any document edges and contours. If this is the case, then I'm supposing that you'll need to provide another basis or standard to base your text deskewing on (i.e. detect that there are letters in the image, then detect how skewed the letters are based on your standard, then adjust the letters), which may be a non-trivial exercise.</p>
| 0 |
2016-09-20T04:30:13Z
|
[
"python",
"c++",
"opencv",
"vertices",
"skew"
] |
Python USB Serial works in IDLE but not when run as a file
| 39,585,435 |
<p>I'm currently attempting to write over USB serial to an Arduino Nano of mine using Python. However, what I've discovered is that (using the <em>exact</em> same code), the code works perfectly when I type it into IDLE, but when I save it to a file and attempt to run from there, for some reason the Arduino is never receiving the data. I've checked and in both locations the correct version of Python is being used (2.7.9) (I unfortunately can't use Python 3 due to other libraries I'm using).</p>
<p>The code I'm using:</p>
<pre><code>import serial
ser = serial.Serial(port='/dev/ttyUSB0', baudrate=9600)
ser.write('0')
print ser.readline()
</code></pre>
<p>When I run it in IDLE just by typing in the lines individually, the correct behavior is seen: the Arduino responds (turning a servo) and echoes back the data it was sent, which is printed correctly. Running from a saved file however, the servo does not respond and no echo is received.</p>
<p>Any ideas?</p>
| 0 |
2016-09-20T03:54:34Z
| 39,585,773 |
<p>I somehow missed this answer on SO before (<a href="http://stackoverflow.com/questions/28192190/pyserial-write-works-fine-in-python-interpreter-but-not-python-script?rq=1">pySerial write() works fine in Python interpreter, but not Python script</a>), but it turns out that I needed to add a time.sleep(2) after opening the serial port. My guess is that in IDLE the time it took for me to type the next line accounted for this delay, but it was happening instantly in code.</p>
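<p>Applied to the snippet from the question, that looks like this (a sketch; 2 seconds is simply what worked for me, since the Nano typically resets when the serial port is opened):</p>
<pre><code>import serial
import time

ser = serial.Serial(port='/dev/ttyUSB0', baudrate=9600)
time.sleep(2)  # give the Arduino time to reset after the port is opened
ser.write('0')
print ser.readline()
</code></pre>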
| 0 |
2016-09-20T04:37:35Z
|
[
"python",
"arduino",
"raspberry-pi",
"pyserial"
] |
Is there a way to return a dictionary value without quotations?
| 39,585,570 |
<p>Sorry if this is somewhere out there but I just couldn't find a solution to what I was looking for. I want to return the value of my dictionary without the quotations and can't figure out what I'm doing wrong.</p>
<pre><code>def read_wiktionary():
answer = dict()
f = open('wiktionary.txt', 'r')
for line in f:
word,value = line.rstrip('\n').split(' ')
answer[word] = (value)
return answer
</code></pre>
| -2 |
2016-09-20T04:12:55Z
| 39,585,608 |
<p>The quotation is just used as a separator between different keys and values so it cannot be removed.The quotation doesn't affect your values in the dictionary.</p>
| 0 |
2016-09-20T04:20:10Z
|
[
"python",
"dictionary",
"return"
] |
Is there a way to return a dictionary value without quotations?
| 39,585,570 |
<p>Sorry if this is somewhere out there but I just couldn't find a solution to what I was looking for. I want to return the value of my dictionary without the quotations and can't figure out what I'm doing wrong.</p>
<pre><code>def read_wiktionary():
answer = dict()
f = open('wiktionary.txt', 'r')
for line in f:
word,value = line.rstrip('\n').split(' ')
answer[word] = (value)
return answer
</code></pre>
| -2 |
2016-09-20T04:12:55Z
| 39,585,676 |
<p>When you read a file, everything is read as a string. To get a float value, you'll need to cast it to a float.</p>
<pre><code>def read_wiktionary():
answer = dict()
f = open('wiktionary.txt', 'r')
for line in f:
word, value = line.rstrip('\n').split(' ')
answer[word] = float(value)
return answer
</code></pre>
<p>As an aside, it's generally good practice to use <code>with</code> to open a file so that it will automatically be closed when it exits the block. From <a href="https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow">the docs</a>:</p>
<blockquote>
<p>It is good practice to use the with keyword when dealing with file
objects. This has the advantage that the file is properly closed after
its suite finishes, even if an exception is raised on the way.</p>
</blockquote>
<p>So, I would suggest:</p>
<pre><code>def read_wiktionary():
answer = {}
with open('wiktionary.txt', 'r') as f:
for line in f:
word, value = line.rstrip('\n').split(' ')
answer[word] = float(value)
return answer
</code></pre>
| 0 |
2016-09-20T04:27:20Z
|
[
"python",
"dictionary",
"return"
] |
Python UDP communication using Socket, check data received
| 39,585,724 |
<p>I'm pretty new to Python and trying to write code to receive strings over UDP connections. The problem I have now is that I need to receive data from 2 sources, and I want the program to continue looping if there is no data from either or both of them; but currently, if there is no data from source 2, it will stop there and wait for the data. How do I solve this?
I was thinking about using an if statement, but I don't know how to check whether the incoming data is empty or not. Any ideas will be appreciated!</p>
<pre><code>import socket
UDP_IP1 = socket.gethostname()
UDP_PORT1 = 48901
UDP_IP2 = socket.gethostname()
UDP_PORT2 = 48902
sock1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock1.bind((UDP_IP1, UDP_PORT1))
sock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock2.bind((UDP_IP2, UDP_PORT2))
while True:
if sock1.recv != None:
data1, addr = sock1.recvfrom(1024)
data1_int = int(data1)
print "SensorTag[1] RSSI:", data1_int
if sock2.recv != None:
data2, addr = sock2.recvfrom(1024)
data2_int = int(data2)
print "SensorTag[2] RSSI:", data2_int
</code></pre>
| 0 |
2016-09-20T04:32:50Z
| 39,585,988 |
<p>If <a href="https://docs.python.org/2/library/select.html" rel="nofollow">select</a> doesn't work out for you, you can always throw the sockets into threads. You'll just have to be careful about the shared data and put proper mutexes around it. See <a href="https://docs.python.org/2/library/threading.html#threading.Lock" rel="nofollow">threading.Lock</a> for help there.</p>
<pre><code>import socket
import threading
import time
UDP_IP1 = socket.gethostname()
UDP_PORT1 = 48901
UDP_IP2 = socket.gethostname()
UDP_PORT2 = 48902
sock1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock1.bind((UDP_IP1, UDP_PORT1))
sock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock2.bind((UDP_IP2, UDP_PORT2))
def monitor_socket(name, sock):
    while True:
        # recvfrom blocks until data arrives on this socket
        data, addr = sock.recvfrom(1024)
        data_int = int(data)
        print name, data_int
t1 = threading.Thread(target=monitor_socket, args=["SensorTag[1] RSSI:", sock1])
t1.daemon = True
t1.start()
t2 = threading.Thread(target=monitor_socket, args=["SensorTag[2] RSSI:", sock2])
t2.daemon = True
t2.start()
while True:
# We don't want to while 1 the entire time we're waiting on other threads
time.sleep(1)
</code></pre>
<p><em>Note this wasn't tested due to not having two UDP sources running.</em></p>
| 1 |
2016-09-20T04:59:48Z
|
[
"python",
"sockets",
"udp"
] |
Global list is appending nothing
| 39,585,833 |
<pre><code>answers = []
def search(visit_order, nodes_to_visit, distance):
if len(nodes_to_visit) == 0:
print visit_order
answers.append(visit_order)
return
else:
for node in nodes_to_visit:
nodes_to_visit.remove(node)
visit_order.append(node)
search(visit_order, nodes_to_visit, 0)
visit_order.remove(node)
nodes_to_visit.append(node)
search([],nodes, 0)
print answers
</code></pre>
<p>I have a global list <code>answers</code> and a recursive function that goes through the given <code>nodes_to_visit</code> list and adds <code>visit_order</code> to the <code>answers</code> list when there are no more <code>nodes_to_visit</code>.</p>
<p>When I print <code>Visit_order</code> right before appending, I get a correct value. However, when I print <code>answers</code>, I only get list of lists such as <code>[[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]</code>. What is the problem?</p>
<p>For example, if I give search([],[1,2,3,4],0) as the input it is supposed to give me something like
[[3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4], [3, 1, 2, 4]]
but it gives me [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []] instead.</p>
| 0 |
2016-09-20T04:43:42Z
| 39,586,364 |
<p>So the problem is you are appending <em>the same object</em> to <code>answers</code> which you then empty. Check the output of <code>[id(e) for e in answers]</code> and you should see the same object ids. A quick fix is to append <em>a copy</em> by using <code>answers.append(list(visit_order))</code> or <code>answers.append(visit_order[:])</code></p>
<pre><code>In [4]: search([],[1,2,3,4],0)
In [5]: answers
Out[5]:
[[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]]
In [6]: [id(e) for e in answers]
Out[6]:
[140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400,
140399731251400]
In [7]:
</code></pre>
<p>However, if I change the function to:</p>
<pre><code>def search(visit_order, nodes_to_visit, distance):
if len(nodes_to_visit) == 0:
answers.append(visit_order[:])
return
else:
for node in nodes_to_visit:
nodes_to_visit.remove(node)
visit_order.append(node)
search(visit_order, nodes_to_visit, 0)
visit_order.remove(node)
nodes_to_visit.append(node)
</code></pre>
<p>Now...</p>
<pre><code>In [8]: answers = []
In [9]: search([],[1,2,3,4],0)
In [10]: answers
Out[10]:
[[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 3, 4, 2],
[1, 3, 4, 2],
[1, 3, 2, 4],
[1, 3, 2, 4],
[2, 4, 3, 1],
[2, 4, 3, 1],
[2, 3, 1, 4],
[2, 3, 1, 4],
[2, 3, 4, 1],
[2, 3, 4, 1],
[3, 1, 4, 2],
[3, 1, 4, 2],
[3, 4, 2, 1],
[3, 4, 2, 1],
[3, 4, 1, 2],
[3, 4, 1, 2],
[3, 2, 1, 4],
[3, 2, 1, 4],
[3, 1, 4, 2],
[3, 1, 4, 2],
[3, 1, 2, 4],
[3, 1, 2, 4]]
In [11]:
</code></pre>
| 0 |
2016-09-20T05:31:55Z
|
[
"python",
"list",
"global"
] |
Swagger Integration in Django
| 39,585,869 |
<p>I need to integrate Swagger in Django. So, can anyone discuss the steps to integrate Swagger in Django. I need the full description.</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-20T04:46:59Z
| 39,586,908 |
<p>If you're trying to use Swagger, then I'm sure you're building a (REST) API. So, answering your broad question about Swagger integration with Django: you can use Django Rest Framework + Swagger (which I recommend) or Swagger only. Here is what you need to do in each case:</p>
<h2>Django Rest Framework + Swagger</h2>
<pre><code>pip install django djangorestframework django-rest-swagger
</code></pre>
<p>then in your <code>settings.py</code> just include:</p>
<pre><code>INSTALLED_APPS = [
...,
'rest_framework',
'rest_framework_swagger',
]
</code></pre>
<p>then in your <code>urls.py</code>:</p>
<pre><code>from rest_framework import routers
from yourapp.accounts.views import UserViewSet
router = routers.DefaultRouter()
router.register(r'users', UserViewSet)
urlpatterns = [
url(r'^api/v1/', include(router.urls)),
...
]
</code></pre>
<p>in <code>views.py</code>:</p>
<pre><code>from rest_framework import viewsets
class UserViewSet(viewsets.ReadOnlyModelViewSet):
"""
This viewset automatically provides `list` and `detail` actions.
"""
queryset = User.objects.all()
serializer_class = UserSerializer
</code></pre>
<p>and after visiting the home page <code>/</code>, you will get the swagger interface with all endpoints rendered (registered in the router as in the example above).</p>
<h2>Django + Swagger</h2>
<pre><code>pip install django django-rest-swagger
</code></pre>
<p>then in your <code>settings.py</code>:</p>
<pre><code>INSTALLED_APPS = [
...
'rest_framework_swagger',
...
]
</code></pre>
<p>in <code>urls.py</code>:</p>
<pre><code>from django.conf.urls import url
from views import schema_view
urlpatterns = [
url('/', schema_view),
...
]
</code></pre>
<p>and in your example <code>views.py</code>:</p>
<pre><code>from rest_framework.decorators import api_view, renderer_classes
from rest_framework import response, schemas
from rest_framework_swagger.renderers import OpenAPIRenderer, SwaggerUIRenderer
@api_view()
@renderer_classes([OpenAPIRenderer, SwaggerUIRenderer])
def schema_view(request):
generator = schemas.SchemaGenerator(title='Bookings API')
return response.Response(generator.get_schema(request=request))
</code></pre>
<p>Reference:</p>
<p><a href="https://django-rest-swagger.readthedocs.io/en/latest/" rel="nofollow">Django REST Swagger</a></p>
<p><a href="http://www.django-rest-framework.org/tutorial/6-viewsets-and-routers/" rel="nofollow">Django REST Framework</a></p>
| 1 |
2016-09-20T06:14:12Z
|
[
"python",
"django",
"python-2.7",
"swagger",
"swagger-ui"
] |
what's wrong with my code? countconsonant
| 39,585,895 |
<pre><code>def countConsonant (s):
"""Count consonants.
'y' and 'Y' are not consonants.
Params: s (string)
Returns: (int) #consonants in s (either case)
"""
# do not use a brute force solution:
# think of a short elegant solution (mine is 5 lines long);
# do not use lines longer than 80 characters long
# INSERT YOUR CODE HERE, replacing 'pass'
countConsonant = 0
for index in s:
if index == 'bcdfghjklmnpqrstvwxyz':
countConsonant += 1
return countConsonant
print (countConsonant ('"Carpe diem", every day.')) # should return 8
</code></pre>
| -7 |
2016-09-20T04:49:40Z
| 39,585,918 |
<p><code>==</code> checks for equality, and that's probably not what you want. You have to use the <code>in</code> operator to check membership, and in this case membership of a character in a string. It follows this general syntax:</p>
<pre><code>if x in y:
</code></pre>
<p>Where <code>x</code> is the operand or the one being checked if membership is present in <code>y</code>. Applying that to this case, replace your <code>if</code> statement to this:</p>
<pre><code>if index in 'bcdfghjklmnpqrstvwxyz':
</code></pre>
<p>This will then check if the certain character is in the given consonant string. Also, one thing to note, you only check lower case. That means <code>C</code> in <code>Carpe diem</code> is ignored giving the result of 9. To ignore case try:</p>
<pre><code>if index.lower() in 'bcdfghjklmnpqrstvwxyz':
</code></pre>
<p>This will make the string lowercase when checking.</p>
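<p>If, per the docstring, <code>'y'</code> and <code>'Y'</code> should not be counted (the expected result of 8 suggests this), a compact variant would be:</p>
<pre><code>def countConsonant(s):
    return sum(1 for ch in s.lower() if ch in 'bcdfghjklmnpqrstvwxz')

print (countConsonant('"Carpe diem", every day.'))  # returns 8
</code></pre>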
| 3 |
2016-09-20T04:51:24Z
|
[
"python"
] |
what's wrong with my code? countconsonant
| 39,585,895 |
<pre><code>def countConsonant (s):
"""Count consonants.
'y' and 'Y' are not consonants.
Params: s (string)
Returns: (int) #consonants in s (either case)
"""
# do not use a brute force solution:
# think of a short elegant solution (mine is 5 lines long);
# do not use lines longer than 80 characters long
# INSERT YOUR CODE HERE, replacing 'pass'
countConsonant = 0
for index in s:
if index == 'bcdfghjklmnpqrstvwxyz':
countConsonant += 1
return countConsonant
print (countConsonant ('"Carpe diem", every day.')) # should return 8
</code></pre>
| -7 |
2016-09-20T04:49:40Z
| 39,587,592 |
<p>With the statement </p>
<p><code>for index in s:</code> </p>
<p>you are iterating over all the characters in s. So the condition </p>
<p><code>if index == 'bcdfghjklmnpqrstvwxyz':</code></p>
<p>always evaluates to <code>False</code>, so the function returns 0 in every case.</p>
<p>To check the membership of a character in a string, you need to use the <code>in</code> operator rather than <code>==</code>, which checks for the equality of the values. </p>
<p>If your algorithm is case independent, you could either lowercase the input or add all the corresponding uppercase letters to the string <code>'bcdfghjklmnpqrstvwxz'</code>. Since <code>'y'</code> and <code>'Y'</code> are not consonants as per your requirement, you need to remove <code>y</code> from the string <code>'bcdfghjklmnpqrstvwxyz'</code>. And your final code is</p>
<pre><code>def countConsonant (s):
"""Count consonants.
'y' and 'Y' are not consonants.
Params: s (string)
Returns: (int) #consonants in s (either case)
"""
countConsonant = 0
for index in s.lower():
if index in 'bcdfghjklmnpqrstvwxz':
countConsonant += 1
return countConsonant
print (countConsonant ('"Carpe diem", every day.')) # should return 8
</code></pre>
| 0 |
2016-09-20T06:57:43Z
|
[
"python"
] |
tkinter tag_config does not work
| 39,585,905 |
<p>I am building a notepad-like application in tkinter-python. There is an option to change the font of the text written in the text field of the application.</p>
<p>I have created a Font Chooser popup screen to be called from the main window on clicking the 'font' menu; it basically creates a FontChooser class object and passes it to the main window, which sets the font in the main window.</p>
<p>A sample of the code where the font is getting set in the main window is:</p>
<pre><code>root = Tix.Tk(className="Notepad")
notepad = ScrolledText(root, width=100, height=100)
def open_font():
font = MyFont.askChooseFont(root)
notepad.tag_add("bt", "sel.first", "sel.last")
notepad.tag_config("bt", font=font.getFontTuple())
</code></pre>
<p>Now when I first run the application, select a portion of text and change the font, it works correctly. But after that, whatever portion of text I select and change the font for, it ignores the selection and applies the font to the whole text. Can anyone let me know what the problem is here?</p>
| 0 |
2016-09-20T04:50:14Z
| 39,607,923 |
<p>IDLE uses tag_config to syntax-color Python code, and it has worked on all Python versions and major OSes for the last 15 years.</p>
<p>To have some idea of why it seems to fail for you, you need to find an <a href="https://stackoverflow.com/help/mcve">MCVE</a> that fails. Start without tix and scrollbars. (Tix is deprecated in 3.6 and bugs are not being fixed.) Also notice that your code uses the same tag for each selection, so that when you change the configuration, it applies to all previous selections.</p>
<p>Here is simplified code that works as intended and expected.</p>
<pre><code>import tkinter as tk
import time
root = tk.Tk()
text = tk.Text(root)
text.pack()
text.insert('1.0', "line 1\nline 2\nline 3\n")
text.tag_add('bg', '1.0', '1.4')
text.tag_config('bg', background='red')
root.update()
time.sleep(1)
text.tag_add('bg', '2.0', '2.4')
text.tag_config('bg', background='blue')
root.update()
</code></pre>
<p>You could try modifying it step by step until it either reproduces your problem or does what you want.</p>
<p>EDIT with example modification: use 'sel.first' and 'sel.last' instead of hard-coded indexes.</p>
<pre><code>import tkinter as tk
import time
root = tk.Tk()
text = tk.Text(root)
text.pack()
text.insert('1.0', "line 1\nline 2\nline 3\n")
root.update() # make text visible for selection
input('select some text')
text.tag_add('bg', 'sel.first', 'sel.last')
text.tag_config('bg', background='red')
root.update() # make change visible
input('select some text')
text.tag_add('bg', 'sel.first', 'sel.last')
text.tag_config('bg', background='blue')
root.update() # make 2nd change visible
input('look at result')
</code></pre>
<p>Run in the console. Move the tk window so the console and GUI are both visible. Make a selection as prompted. Click on the console* and hit return to allow the input statement to return. Repeat. The result for me is that both selections, but not everything, turn blue. I suggest changing the font instead of the bg color for the next experiment.</p>
<ul>
<li>On Windows, the selection highlighting in the tk windows disappears when one clicks on the console because Windows only allows visible selection in one window at a time. However, the select markers are still present in the text widget so that tag_add still works.</li>
</ul>
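<p>Building on the note above that your code uses the same tag for every selection: a minimal sketch of a fix in the question's <code>open_font</code> would be to use a fresh tag name for each font change (the tag-naming scheme here is just an illustration):</p>
<pre><code>import itertools
_font_tags = itertools.count()

def open_font():
    font = MyFont.askChooseFont(root)
    tag = 'font%d' % next(_font_tags)  # a new tag per change, so earlier selections keep their font
    notepad.tag_add(tag, 'sel.first', 'sel.last')
    notepad.tag_config(tag, font=font.getFontTuple())
</code></pre>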
| 0 |
2016-09-21T05:02:32Z
|
[
"python",
"tkinter"
] |
Python- obtain maximum value in an interval
| 39,586,287 |
<p>I have a .CSV file (a list) that contains 43142 rows and 2 columns.</p>
<p>When plotting the list's values x vs y:</p>
<pre><code> import numpy as np
import matplotlib.pyplot as plt
filename=np.genfromtxt('list.CSV',delimiter=',')
plt.plot(filename[:,0],filename[:,1])
</code></pre>
<p>I get a graph which has multiple maxima values and looks like this:
<a href="http://i.stack.imgur.com/4XDpq.png" rel="nofollow">x vs y values of list.CSV</a></p>
<p>What I want to do is, given an approximate interval in the x values in which the peaks are positioned, find the maximum values and the corresponding indices in the list.</p>
<p><em>e.g.</em> if there's a maximum <strong>y</strong> value in the interval <strong>x=(2720,2730)</strong> (refer to figure <a href="http://i.stack.imgur.com/ZNJwZ.png" rel="nofollow">2</a>), I want to find the exact index at which the value is maximum.</p>
| 1 |
2016-09-20T05:26:32Z
| 39,586,753 |
<p>If you have a range <code>xmin < x < xmax</code> then this should work (taking <code>x=filename[:,0]</code> and <code>y=filename[:,1]</code>) :</p>
<pre><code>idx = np.where(y==np.max(y[(x>xmin)&(x<xmax)]))[0][0]
</code></pre>
<p>This will return a single index corresponding to the maximum y value in the given range.</p>
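<p>As a usage sketch on made-up data (this variant uses <code>argmax</code> on the masked slice, which keeps the index inside the interval even if the same y value also occurs outside it):</p>
<pre><code>import numpy as np

x = np.array([2700., 2710., 2720., 2725., 2730., 2740.])
y = np.array([1.0, 2.0, 5.0, 9.0, 4.0, 9.0])
xmin, xmax = 2715, 2735

mask = (x > xmin) & (x < xmax)
idx = np.flatnonzero(mask)[np.argmax(y[mask])]  # index restricted to the interval
print idx, x[idx], y[idx]   # 3 2725.0 9.0
</code></pre>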
| 0 |
2016-09-20T06:02:36Z
|
[
"python",
"python-2.7",
"numpy",
"argmax"
] |
(Scrapy) How to get the CSS rule for a HTML element?
| 39,586,331 |
<p>I am building a crawler using Scrapy. I need to get the font-family assigned to a particular HTML element.</p>
<p>Let's say there is a css file, styles.css, which contains the following:</p>
<pre><code>p {
font-family: "Times New Roman", Georgia, Serif;
}
</code></pre>
<p>And in the HTML page there is text as follows:</p>
<pre><code><p>Hello how are you?</p>
</code></pre>
<p>It's easy for me to extract the text using Scrapy; however, I would also like to know the font-family applied to <em>Hello how are you?</em></p>
<p>I am hoping it is simply a case of (imaginary XPATH) <code>/p[font-family]</code> or something like that.</p>
<p>Do you know how I can do this?</p>
<p>Thanks for your thoughts.</p>
| 0 |
2016-09-20T05:29:14Z
| 39,589,441 |
<p>You need to download and parse the css separately. For css parsing you can use <a href="https://pythonhosted.org/tinycss/" rel="nofollow">tinycss</a> or even a regex:</p>
<pre><code>import tinycss
class MySpider(Spider):
name='myspider'
start_urls = [
'http://some.url.com'
]
css_rules = {}
def parse(self, response):
# find css url and parse it
css_url = response.xpath("").extract_first()
yield Request(css_url, self.parse_css)
def parse_css(self, response):
parser = tinycss.make_parser()
stylesheet = parser.parse_stylesheet(response.body)
for rule in stylesheet.rules:
if not getattr(rule, 'selector'):
continue
path = rule.selector.as_css()
css = [d.value.as_css() for d in rule.declarations]
self.css_rules[path] = css
</code></pre>
<p>Now you have a dictionary with css paths and their attributes that you can use later in your spider request chain to assign some values:</p>
<pre><code>def parse_item(self, response):
item = {}
item['name'] = response.css('div.name').extract_first()
name_css = []
    for k, v in self.css_rules.items():  # css_rules collected in parse_css
if 'div' in k and '.name' in k:
name_css.append(v)
item['name_css'] = name_css
</code></pre>
| 1 |
2016-09-20T08:42:24Z
|
[
"python",
"xpath",
"scrapy"
] |
Kivy scatter region is limited to window size
| 39,586,388 |
<p>I'm basically running into issues with the only "grabbable" scatter regions being entirely defined by the size of the window I am viewing the program in, and not the size of the scatter.</p>
<p>Here's a working example of the bug:</p>
<pre><code>from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.scatter import Scatter
class AppFrame(GridLayout):
def __init__(self,**kwargs):
super(AppFrame,self).__init__(**kwargs)
self.myscatter=Scatter(
width=2000,
height=200,
do_rotation=False,
do_scale=True,
do_translation=True)
self.add_widget(self.myscatter)
self.layout=GridLayout(cols=30,width=2000,height=200)
self.myscatter.add_widget(self.layout)
for i in range(300):
self.layout.add_widget(Button(text=str(i)))
class TestApp(App):
def build(self):
return AppFrame(cols=3)
if __name__ == '__main__':
TestApp().run()
</code></pre>
<p>On my screen, if I use the scatter to move to the right (increasing with the buttons), I can't grab anything past button ~10-11. If I resize my screen, I can grab a bit farther. If I make my screen small, the grabbable area shrinks and might not even be reachable. How on earth can I fix this? </p>
<p>I'm using the Kivy environment with the Android emulator, so I have a pretty big screen. Unfortunately if I moved this to a phone, the screen size would shrink significantly, making this bug effectively disable scrolling.</p>
<p>(The toplevel layout is for adding a menu. The actual menu isn't included in this example as it's not necessary to recreate the bug)</p>
<p>The main things I'm trying to figure out are:</p>
<ol>
<li>Is this an issue with the code or with the android emulator?</li>
<li>If its an issue with the code, can it be fixed and how?</li>
<li>If it can't be fixed, how else can I get this functionality?</li>
</ol>
<p>UPDATE:
After the comment from George Bou, I've isolated the problem to be within the scatter's BBOX. On creation, the scatter bbox size is 2000x200. However, after window creation it is 800x600 (the default size of the screen that pops up).</p>
<p>Annoyingly, I can't figure out how to fix this. If I change the scatter's width/height at any point (e.g., in a button callback), it immediately gets reset back to 800x600 whenever a translation/zoom occurs. Does anyone know how to make the scatter stop automatically resizing its bbox to the window?</p>
| 0 |
2016-09-20T05:34:31Z
| 39,605,397 |
<p>Ok. I couldn't figure out a way to do it with the vanilla scatter object itself, but I made a workaround that seems to work well enough.</p>
<p>Basically, Scatter controls what's grabbable through the <code>collide_point</code> method in its class. That method references the scatter's own width/height (which are irritatingly reset to the window size). So I got things to work by creating a custom scatter class that overrides <code>collide_point</code> with bounds I can set myself.</p>
<pre><code>class CustScatter(Scatter):
    def collide_point(self, x, y):
        # test against our own bounds instead of the scatter's window-sized bbox
        x, y = self.to_local(x, y)
        return 0 <= x <= self.xboundval and 0 <= y <= self.yboundval

    def custSetBounds(self, xval, yval):
        self.xboundval = xval
        self.yboundval = yval
</code></pre>
<p>To use this, I just make sure to set the bounds (<code>custSetBounds</code>) immediately after I create an instance of the class.</p>
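<p>For illustration, a minimal sketch of that usage inside the <code>AppFrame.__init__</code> from the question (the widget names and the 2000x200 size are taken from the question, not from the original answer):</p>
<pre><code>self.myscatter = CustScatter(
    do_rotation=False,
    do_scale=True,
    do_translation=True)
# match the grabbable area to the 2000x200 layout, not the window
self.myscatter.custSetBounds(2000, 200)
self.add_widget(self.myscatter)
</code></pre>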
<p>Kivy <em>really</em> makes me miss Tkinter...</p>
| 0 |
2016-09-20T23:45:32Z
|
[
"python",
"kivy"
] |
How to compare decimal numbers available in columns of pandas dataframe?
| 39,586,398 |
<p>I want to compare decimal values stored in two columns of a pandas DataFrame.</p>
<p>I have a dataframe:</p>
<pre><code>import pandas as pd

data = {'AA' :{0:'-14.35',1:'632.0',2:'619.5',3:'352.35',4:'347.7',5:'100'},
        'BB' :{0:'-14.3500',1:'632.0000',2:'619.5000',3:'352.3500',4:'347.7000',5:'200'}
        }

df1 = pd.DataFrame(data)
print df1
</code></pre>
<p><strong>The dataframe looks like this:</strong></p>
<pre><code>       AA        BB
0  -14.35  -14.3500
1   632.0  632.0000
2   619.5  619.5000
3  352.35  352.3500
4   347.7  347.7000
5     100       200
</code></pre>
</code></pre>
<p>I want to compare the <code>AA</code> and <code>BB</code> columns. As shown in the dataframe above, the values in both columns are the same except in the <strong>5th</strong> row; the only difference elsewhere is the trailing zeros.</p>
<p>If the <code>AA</code> and <code>BB</code> values are the same, I want the result of this comparison in a third column, <code>Result</code>, i.e. <code>True</code> or <code>False</code>.</p>
<p><strong>Expected result:</strong></p>
<pre><code>       AA      BB  Result
0  -14.35  -14.35    True
1   632.0   632.0    True
2   619.5   619.5    True
3  352.35  352.35    True
4   347.7   347.7    True
5     100     200   False
</code></pre>
</code></pre>
<p>How can I compare these decimal values?</p>
| 3 |
2016-09-20T05:35:18Z
| 39,586,425 |
<p>You need to cast the columns to <code>float</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a> and then compare them, because the <code>type</code> of the values in the columns is <code>string</code>. Then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow"><code>mask</code></a> with the boolean column <code>Result</code> as the condition:</p>
<pre><code>print (type(df1.ix[0,'AA']))
<class 'str'>
print (type(df1.ix[0,'BB']))
<class 'str'>

df1['Result'] = df1.AA.astype(float) == df1.BB.astype(float)
df1.BB = df1.BB.mask(df1.Result,df1.AA)
print (df1)
       AA      BB  Result
0  -14.35  -14.35    True
1   632.0   632.0    True
2   619.5   619.5    True
3  352.35  352.35    True
4   347.7   347.7    True
5     100     200   False
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>df1['Result'] = df1.AA.astype(float) == df1.BB.astype(float)
df1.ix[df1.Result, 'BB'] = df1.AA
print (df1)
       AA      BB  Result
0  -14.35  -14.35    True
1   632.0   632.0    True
2   619.5   619.5    True
3  352.35  352.35    True
4   347.7   347.7    True
5     100     200   False
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#len(df) = 6k
df1 = pd.concat([df1]*1000).reset_index(drop=True)
In [31]: %timeit df1.ix[df1.Result, 'BB'] = df1.AA
The slowest run took 4.88 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 1.19 ms per loop
In [33]: %timeit df1.BB = df1.BB.mask(df1.Result,df1.AA)
1000 loops, best of 3: 900 µs per loop
</code></pre>
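<p>A side note beyond the original answer: exact <code>float</code> equality works here because the strings differ only in trailing zeros. If the two columns could also differ by tiny rounding noise, <code>numpy.isclose</code> would be a safer comparison — an illustrative sketch:</p>
<pre><code>import numpy as np

# True where the values agree within numpy's default tolerances
df1['Result'] = np.isclose(df1.AA.astype(float), df1.BB.astype(float))
</code></pre>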
| 2 |
2016-09-20T05:37:14Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Python tracing, doesn't show the indexes
| 39,586,439 |
<p>Hey people, I just have a quick question; for some of you it might be very simple, but please help out.
Let's say we've got:</p>
<pre><code>--- modulename: test, funcname: <module>
test.py(1): nums = [3, 1, 2, 10]
test.py(3): where = 0
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(9):         where = number
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(5): for number in range(1, len(nums)):
test.py(11): answer = nums[where]
 --- modulename: trace, funcname: _unsettrace
trace.py(80): sys.settrace(None)
</code></pre>
<p>So as you can see, it doesn't show me the output I need. I need to see the values it puts into each variable at every stage of the loop. Is there a way to get that done?</p>
| 0 |
2016-09-20T05:38:46Z
| 39,586,540 |
<p>Add print statements where you want to see the output.</p>
<p><strong>Version:</strong></p>
<pre><code>[root@dsp-centos ~]# python -V
Python 2.7.5
[root@dsp-centos ~]#
[root@dsp-centos ~]# python -m trace --version
trace 2.0
[root@dsp-centos ~]#
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>nums = [33, 21, 4, 8]
where = 0
for number in range(1, len(nums)):
    print number
    if nums[number] < nums[where]:
        where = number
        print where
        answer = nums[where]
        print answer
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> [root@dsp-centos ~]# python -m trace -t a.py
--- modulename: a, funcname: <module>
a.py(3): nums = [33, 21, 4, 8]
a.py(5): where = 0
a.py(7): for number in range(1, len(nums)):
a.py(8):     print number
1
a.py(9):     if nums[number] < nums[where]:
a.py(11):         where = number
a.py(12):         print where
1
a.py(14):         answer = nums[where]
a.py(15):         print answer
21
a.py(7): for number in range(1, len(nums)):
a.py(8):     print number
2
a.py(9):     if nums[number] < nums[where]:
a.py(11):         where = number
a.py(12):         print where
2
a.py(14):         answer = nums[where]
a.py(15):         print answer
4
a.py(7): for number in range(1, len(nums)):
a.py(8):     print number
3
a.py(9):     if nums[number] < nums[where]:
a.py(7): for number in range(1, len(nums)):
 --- modulename: trace, funcname: _unsettrace
trace.py(80): sys.settrace(None)
[root@dsp-centos ~]#
</code></pre>
| 0 |
2016-09-20T05:45:56Z
|
[
"python",
"tracing"
] |
Python tracing, doesn't show the indexes
| 39,586,439 |
<p>Hey people, I just have a quick question; for some of you it might be very simple, but please help out.
Let's say we've got:</p>
<pre><code>--- modulename: test, funcname: <module>
test.py(1): nums = [3, 1, 2, 10]
test.py(3): where = 0
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(9):         where = number
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(5): for number in range(1, len(nums)):
test.py(7):     if nums[number] < nums[where]:
test.py(5): for number in range(1, len(nums)):
test.py(11): answer = nums[where]
 --- modulename: trace, funcname: _unsettrace
trace.py(80): sys.settrace(None)
</code></pre>
<p>So as you can see, it doesn't show me the output I need. I need to see the values it puts into each variable at every stage of the loop. Is there a way to get that done?</p>
| 0 |
2016-09-20T05:38:46Z
| 39,586,770 |
<p>You need to add print statements at the right places to see what goes into each variable at each step:</p>
<pre><code>nums = [33, 21, 4, 8]
where = 0

for number in range(1, len(nums)):
    print number
    if nums[number] < nums[where]:
        print where
        where = number

answer = nums[where]
print answer
</code></pre>
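<p>If you would rather not sprinkle prints through the script itself, a custom trace function can dump the interesting locals on every executed line. This is a minimal illustrative sketch (Python 2, to match the code above; the function and variable names are only examples), not a feature of the <code>trace</code> module:</p>
<pre><code>import sys

def find_min(nums):
    where = 0
    for number in range(1, len(nums)):
        if nums[number] < nums[where]:
            where = number
    return nums[where]

def show_locals(frame, event, arg):
    # on each executed line, print the loop variables if they exist yet
    if event == 'line':
        loc = frame.f_locals
        print 'line %d: number=%r where=%r' % (
            frame.f_lineno, loc.get('number'), loc.get('where'))
    return show_locals

sys.settrace(show_locals)
print find_min([3, 1, 2, 10])
sys.settrace(None)
</code></pre>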
| 0 |
2016-09-20T06:04:06Z
|
[
"python",
"tracing"
] |
Referencing an object of a model with all foreign keys in Django admin UI
| 39,586,476 |
<p>I am developing my Django app to process data into the DB from the admin frontend.
I have a table in which both fields are foreign keys.</p>
<pre><code>class exam_questions(models.Model):
    exam_id=models.ForeignKey(exams, on_delete=models.CASCADE)
    question_id=models.ForeignKey(questions, on_delete=models.CASCADE)

    def __int__(self):
        return self.exam_id
</code></pre>
<p>When I try to return the above field, every row added to the table is shown in the admin UI as <code>exam_questions object</code>.
Is there a way to get the actual value of the column displayed in the admin UI?</p>
<p>I have other tables with different definitions, and I am able to display the required field from those. The issue is observed only when all of the fields in the model are foreign keys.</p>
<p>Any help is appreciated.</p>
| 0 |
2016-09-20T05:41:42Z
| 39,586,618 |
<p>You need to provide a <code>__str__</code> or <code>__unicode__</code> method (depending on your Python version) on the model for the Django admin to be able to list items with a meaningful label.</p>
<pre><code>from django.utils.encoding import python_2_unicode_compatible


@python_2_unicode_compatible
class exam_questions(models.Model):
    exam_id=models.ForeignKey(exams, on_delete=models.CASCADE)
    question_id=models.ForeignKey(questions, on_delete=models.CASCADE)

    def __int__(self):
        return self.exam_id

    def __str__(self):
        return '%s - %s' % (self.exam_id.name, self.question_id.name)
</code></pre>
<p>If you're coding for cross-version compatibility, decorate your model with <code>@python_2_unicode_compatible</code> and only override the <code>__str__</code> method.</p>
<p>As an aside, why do your ForeignKey fields have <em>_id</em> in their names? Django automatically adds <em>_id</em> to the database column of an FK, which points to the id of the related instance.</p>
| 2 |
2016-09-20T05:51:52Z
|
[
"python",
"mysql",
"django"
] |