title (string, length 10–172) | question_id (int64, 469–40.1M) | question_body (string, length 22–48.2k) | question_score (int64, -44–5.52k) | question_date (string, length 20) | answer_id (int64, 497–40.1M) | answer_body (string, length 18–33.9k) | answer_score (int64, -38–8.38k) | answer_date (string, length 20) | tags (list)
---|---|---|---|---|---|---|---|---|---
Bug with a program for a guessing game | 39,902,412 | <p>I am creating a program where a user has to guess a random number in a specific range and has 3 tries to do so. After each try, if the guess is correct you tell the user they won, and if the guess is wrong you tell them it's wrong and how many tries they have remaining. When the number of tries exceeds 3 without a correct guess, you tell the user they are out of tries and that they lost. When the game is over, after a correct guess or 3 wrong tries, you ask if the user wants to play again. If they say "yes", you start the game again and if they say "no", you end the game.
This is the code I have come up with:</p>
<pre><code>import random
x =round(random.random()*5)
guess = False
tries = 0
while guess != True:
if tries<=3:
inp = input("Guess the number: ")
if tries<3:
if inp == x:
guess = True
print "Correct! You won"
else:
print "Wrong guess! Try again."
tries = tries + 1
print "Your remaining tries are",(3-tries)
if tries>2:
if inp == x:
guess = True
print "Correct! You won"
else:
print "Wrong guess! You are out of tries"
print "Game Over!"
again = raw_input("Do you want to play again? ")
if again == "no":
print "Okay thanks for playing!"
elif again =="yes":
print "Great!"
</code></pre>
<p>Now when you guess the correct number before 3 tries, the program seems to be working fine, but even when the number of wrong tries exceeds 3, the program keeps asking for more guesses but I want it to stop the program right there. The second thing I don't get is how do I start the whole game again when the user wants to play again?
How can I modify my code to fix these errors?</p>
| 1 | 2016-10-06T17:39:52Z | 39,902,537 | <p>The simplest way to stop a loop when some condition becomes true is to use <code>break</code>. So when the player guesses correctly, the code would look like</p>
<pre><code>if inp == x:
guess = True
print "Correct! You won"
break
</code></pre>
<p>Additionally, you should break the loop when you run out of guesses, otherwise it will keep running</p>
<pre><code>else:
print "Wrong guess! You are out of tries"
print "Game Over!"
break
</code></pre>
<p>To let the player play again, you can have a variable <code>playing</code> and use it in a while loop that wraps the whole game, and when they say they don't want to play anymore, make it <code>False</code> so that the outer loop ends.</p>
<pre><code>playing = True
while playing:
x = round(random.random()*5)
guess = False
tries = 0
while guess != True:
...
again = raw_input("Do you want to play again? ")
if again == "no":
print "Okay thanks for playing!"
playing = False
elif again =="yes":
print "Great!"
</code></pre>
<p>As you noted in comments on another answer, when the player gets their third guess wrong, the game tells them they have 0 guesses left and to try again, and then tells them they are out of tries and that the game is over. Since you want it to only tell them they are out of tries and that the game is over, you can change the main block of code to:</p>
<pre><code>while guess != True:
inp = input("Guess the number: ")
if inp == x:
guess = True
print "Correct! You won"
else:
tries = tries + 1
if tries == 3:
print "Wrong guess! You are out of tries"
print "Game Over!"
break
else:
print "Wrong guess! Try again."
print "Your remaining tries are",(3-tries)
</code></pre>
| 3 | 2016-10-06T17:47:36Z | [
"python",
"python-2.7"
]
|
Bug with a program for a guessing game | 39,902,412 | <p>I am creating a program where a user has to guess a random number in a specific range and has 3 tries to do so. After each try, if the guess is correct you tell the user they won, and if the guess is wrong you tell them it's wrong and how many tries they have remaining. When the number of tries exceeds 3 without a correct guess, you tell the user they are out of tries and that they lost. When the game is over, after a correct guess or 3 wrong tries, you ask if the user wants to play again. If they say "yes", you start the game again and if they say "no", you end the game.
This is the code I have come up with:</p>
<pre><code>import random
x =round(random.random()*5)
guess = False
tries = 0
while guess != True:
if tries<=3:
inp = input("Guess the number: ")
if tries<3:
if inp == x:
guess = True
print "Correct! You won"
else:
print "Wrong guess! Try again."
tries = tries + 1
print "Your remaining tries are",(3-tries)
if tries>2:
if inp == x:
guess = True
print "Correct! You won"
else:
print "Wrong guess! You are out of tries"
print "Game Over!"
again = raw_input("Do you want to play again? ")
if again == "no":
print "Okay thanks for playing!"
elif again =="yes":
print "Great!"
</code></pre>
<p>Now when you guess the correct number before 3 tries, the program seems to be working fine, but even when the number of wrong tries exceeds 3, the program keeps asking for more guesses but I want it to stop the program right there. The second thing I don't get is how do I start the whole game again when the user wants to play again?
How can I modify my code to fix these errors?</p>
 | 1 | 2016-10-06T17:39:52Z | 39,903,070 | <p>This is a slightly different version of a solution to your problem:</p>
<pre><code>import random
def play():
x = round(random.random()*5)
guess = False
tries = 0
while guess is False and tries < 3:
inp = raw_input("Guess the number: ")
if int(inp) == x:
guess = True
print "Correct! You won"
else:
print "Wrong guess! Try again."
tries = tries + 1
print "Your remaining tries are",(3-tries)
if tries == 3:
print("Game Over!")
play()
again = "maybe"
while again != "no":
again = raw_input("Do you want to play again? yes OR no? \n")
if again == "yes":
print "Great!"
play()
print "Okay thanks for playing!"
</code></pre>
| 2 | 2016-10-06T18:20:06Z | [
"python",
"python-2.7"
]
|
Maya Python - Set object pivot to selection center | 39,902,493 | <p>I'm trying to move the selected object's pivot to the center of the object's selected vertices.</p>
<p>When I run the code I don't receive any errors and almost everything works as intended. However, the pivot of my selected object (obj) doesn't seem to set itself to the locator xform (piv).</p>
<pre><code>import maya.cmds as cmds
sel = cmds.ls(sl=True)
print sel
obj = cmds.ls(*sel, o=True)
print obj
selVerts = cmds.ls(sl=True)
tempClstr = cmds.cluster()
pos = cmds.xform(tempClstr[1], q=True, ws=True, rp=True)
loc = cmds.spaceLocator()
cmds.move(pos[0], pos[1], pos[2])
cmds.delete(tempClstr)
piv = cmds.xform (loc[1], piv=True, q=True, ws=True)
print piv
cmds.xform( obj, ws=True, piv=(piv[0], piv[1], piv[2]) )
</code></pre>
<p>Need some help on this one fast.
Any extra eyes that can spot what I'm missing would be greatly appreciated.</p>
| 1 | 2016-10-06T17:44:48Z | 39,911,586 | <p>I think the main issue was that when you were using <code>obj = cmds.ls(*sel, o=True)</code>, it was only capturing the object's shape node instead of its transform. You can use <code>cmds.listRelatives</code> to get the shape's transform. You also don't need to create the locator as the cluster already gives you the position.</p>
<p>This seems to work for me, though you may want to consider some additional error checking for the selection portion as it assumes a lot.</p>
<pre><code>import maya.cmds as cmds
sel = cmds.ls(sl=True)
shapes = cmds.ls(sel, o=True)
obj = cmds.listRelatives(shapes[0], f=True, parent=True)[0]
selVerts = cmds.ls(sl=True)
tempClstr = cmds.cluster()
piv = cmds.xform(tempClstr[1], q=True, ws=True, rp=True)
cmds.delete(tempClstr)
cmds.xform(obj, ws=True, piv=(piv[0], piv[1], piv[2]) )
</code></pre>
| 2 | 2016-10-07T07:18:33Z | [
"python",
"python-2.7",
"maya"
]
|
Pandas - df.loc assigning NaN when used to replace columns based on a different df | 39,902,514 | <p>I'm trying to assign values of some columns based on another column mapping them by one single key. The problem is that I don't think the mapping is being used correctly, because it is assigning NaN to the columns.</p>
<p>I should be mapping them by 'SampleID'.</p>
<p>Here is the DF I want to assign values to</p>
<pre><code>>>> df.ix[new_df['SampleID'].isin(pooled['SampleID']), cols]
Volume_Received Quantity massug
88280 2.0 15.0 1.0
88282 3.0 55.0 5.0
88284 2.5 46.2 3.0
88286 2.0 98.0 5.0
229365 2.0 8.4 3.0
229366 3.0 15.9 3.0
229367 1.5 7.7 2.0
233666 1.5 50.8 3.0
233667 4.0 60.2 5.0
</code></pre>
<p>This is the new value I have for them</p>
<pre><code>>>> numerical
Volume_Received Quantity massug
SampleID
sample8 10.0 75.0 5.0
sample70 15.0 275.0 25.0
sample72 12.5 231.0 15.0
sample89 6.0 294.0 15.0
sample90 4.0 16.8 6.0
sample96 6.0 31.8 6.0
sample97 3.0 15.4 4.0
sample99 3.0 101.6 6.0
sample100 8.0 120.4 10.0
</code></pre>
<p>I'm using this command to assign the values:</p>
<pre><code>df.ix[df['SampleID'].isin(pooled['SampleID']), cols] = numerical[cols]
</code></pre>
<p>Where pooled is basically <code>pooled = df[df['type'] == 'Pooled']</code> and <code>cols</code> is a list with the three columns shown above. After I run the code above I receive NaN in all the values. I think I'm telling pandas to get values where it does not exist because of the mapping and it's returning something null which is being converted to NaN (assumption). </p>
 | 0 | 2016-10-06T17:45:59Z | 39,903,100 | <p>The indexes do not match.</p>
<p>You can use</p>
<pre><code>df.ix[df['SampleID'].isin(pooled['SampleID']), cols] = numerical[cols].values
</code></pre>
<p>but only if the sizes are exactly the same!</p>
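<p>A sketch of an index-aligned alternative (assuming, as in the question, that <code>SampleID</code> is a column of <code>df</code>, the index of <code>numerical</code>, and that <code>cols</code> is the question's list of the three columns):</p>
<pre><code># align by SampleID instead of by position
tmp = df.set_index('SampleID')
tmp.update(numerical[cols])   # overwrites matching rows/columns, aligned on SampleID
df = tmp.reset_index()
</code></pre>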
| 1 | 2016-10-06T18:21:57Z | [
"python",
"pandas"
]
|
Pandas groupby object in legend on plot | 39,902,522 | <p>I am trying to plot a pandas <code>groupby</code> object using the code <code>fil.groupby('imei').plot(x=['time'],y = ['battery'],ax=ax, title = str(i))</code> </p>
<p>The problem is the plot legend lists <code>['battery']</code> as the legend value. Given it's drawing a line for each item in the <code>groupby</code> object, it makes more sense to plot those values in the legend instead. However I'm not sure how to do that. Any help would be appreciated.</p>
<p>Data</p>
<pre><code> time imei battery_raw
0 2016-09-30 07:01:23 862117020146766 42208
1 2016-09-30 07:06:23 862117024146766 42213
2 2016-09-30 07:11:23 862117056146766 42151
3 2016-09-30 07:16:23 862117995146745 42263
4 2016-09-30 07:21:23 862117020146732 42293
</code></pre>
<p>Full code </p>
<pre><code>for i in entity:
fil = df[(df['entity_id']==i)]
fig, ax = plt.subplots(figsize=(18,6))
fil.groupby('imei').plot(x=['time'],y = ['battery'],ax=ax, title = str(i))
plt.legend(fil.imei)
plt.show()
</code></pre>
<p>Current plot </p>
<p><a href="http://i.stack.imgur.com/rDNvf.png" rel="nofollow"><img src="http://i.stack.imgur.com/rDNvf.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T17:46:35Z | 39,906,127 | <p>Slightly tidied data:</p>
<pre><code> date time imei battery_raw
0 2016-09-30 07:01:23 862117020146766 42208
1 2016-09-30 07:06:23 862117020146766 42213
2 2016-09-30 07:11:23 862117020146766 42151
3 2016-09-30 07:16:23 862117995146745 42263
4 2016-09-30 07:21:23 862117995146745 42293
</code></pre>
<p>Complete example code:</p>
<pre><code>import matplotlib.pyplot as plt
fil = pd.read_csv('imei.csv', sep=r'\s*', engine='python')
fig, ax = plt.subplots(figsize=(18,6))
for name, group in fil.groupby('imei'):
group.plot(x=pd.to_datetime(group['time']), y='battery_raw', ax=ax, label=name)
plt.show()
</code></pre>
<p>The x-values have to be converted to datetime for plotting to come out right, as usual. You could do that in the dataframe, too. </p>
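<p>Doing the conversion in the dataframe itself would be a one-liner (using the column name from the question):</p>
<pre><code>fil['time'] = pd.to_datetime(fil['time'])
</code></pre>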
<p>Result, labeled by imei:</p>
<p><a href="http://i.stack.imgur.com/wHM4b.png" rel="nofollow"><img src="http://i.stack.imgur.com/wHM4b.png" alt="enter image description here"></a>
(NOTE: edited to get rid of an oddity I tripped over the first time. If you pass a list as the <code>y</code> argument to <code>group.plot</code>, the list IDs will be used as the line labels, presumably as a handy default for when you're plotting several dependent variables at once.</p>
<pre><code>#for name, group in fil.groupby('imei'):
# group.plot(x=['time'], y=['battery_raw'], ax=ax, label=name)
</code></pre>
<p>)</p>
| 2 | 2016-10-06T21:48:27Z | [
"python",
"pandas",
"matplotlib",
"plot"
]
|
loop in python code to avoid repetition and reduce the code | 39,902,529 | <p>I would like to create a loop and continue the script as long as data is not None.
I'm bad with loops. </p>
<pre><code>import requests
import re
import urllib
import json
url = 'http://bla/bla.html'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:42.0) Gecko/20100101 Firefox/42.0 Iceweasel/42.0', 'Referer': ''}
r1 = requests.get(url, headers=headers)
data_a = re.findall('data-a="(.*?)"', r1.text)[0]
data_a = urllib.unquote_plus(data_a)
data_a ={'p': data_a}
r2 = requests.post('http://bla/bla/get', headers=headers, data=data_a)
jdata = json.loads(r2.text)
data = jdata["data"]
newa = re.findall('newa": "(.*?)"', r2.text)[0]
newa = urllib.unquote_plus(newa)
newa ={'p': newa}
if data == None:
print r1.text
else:
r3 = requests.post('http://bla/bla/get', headers=headers, data=newa)
jdata = json.loads(r3.text)
data1 = jdata["data"]
newa = re.findall('newa": "(.*?)"', r3.text)[0]
newa = urllib.unquote_plus(newa)
newa ={'p': newa}
if data1 == None:
print r1.text + data
else:
r4 = requests.post('http://bla/bla/get', headers=headers, data=newa)
jdata = json.loads(r4.text)
data2 = jdata["data"]
newa = re.findall('newa": "(.*?)"', r4.text)[0]
newa = urllib.unquote_plus(newa)
newa ={'p': newa}
if data2 == None:
print r1.text + data + data1
else:
...............
</code></pre>
<p>I suppose I need a 'for i in' loop but I still do not understand how it works.<br>
The hardest part for me is the accumulation r1.text + data + data1 .....<br>
Thanks a lot for your help :)</p>
 | 0 | 2016-10-06T17:47:00Z | 39,907,502 | <p>OK, I got the solution with help from a user on a forum. Now it's clearer:</p>
<pre><code>import requests
import re
import urllib
import json

url = 'http://bla/bla.html'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:42.0) Gecko/20100101 Firefox/42.0 Iceweasel/42.0', 'Referer': ''}
r = requests.get(url, headers=headers).text
data_a = urllib.unquote_plus(re.findall('data-a="(.*?)"', r)[0])
#print data_a
data_a ={'p': data_a}
nr = requests.post('http://bla/bla/get/', headers=headers, data=data_a).text
jdata = json.loads(nr)
data = jdata["data"]
#print data
while data != None :
r=r+data
newa = re.findall('newa": "(.*?)"', nr)[0]
newa = urllib.unquote_plus(newa)
newa ={'p': newa}
nr = requests.post('http://bla/bla/get/', headers=headers, data=newa).text
jdata = json.loads(nr)
data = jdata["data"]
else:
print r
</code></pre>
| 0 | 2016-10-07T00:15:10Z | [
"python",
"loops"
]
|
how to insert a widget to another, if they are from different classes. and how to create a universal function of closing the child widgets? | 39,902,546 | <p>This is my code. I can't add label2 to self.main, and I don't know how to write a generic function that would close the child widgets specified in its arguments.</p>
<pre><code>import tkinter
class mainwin:
def __init__(self):
self.root = tkinter.Tk()
self.main = tkinter.Canvas(self.root, width=200, height=400)
self.main.place(x=0, y=0, relwidth=1, relheight=1)
self.main.config(bg='green')
self.root.mainloop()
class addlabel:
def __init__(self):
self.label2 = tkinter.Label(mainwin.main, height=2, width=50, text='Hello Noob!!')
#can't put on the canvas 'main'
self.label2.place(x=0, y=50)
self.exit_button = tkinter.Button(self.label2, text='Exit')
self.exit.button.bind('<1>', quit_from_widget)
'''
class quit_from_widget:
def __init__(self):
# what code should be written here, to quit any child widget.
'''
mainwin()
addlabel()
</code></pre>
| -1 | 2016-10-06T17:48:22Z | 39,902,949 | <p>You might be able to use: </p>
<pre><code>mylist = parent.winfo_children();
</code></pre>
<p>Then use a for loop and destroy() to close them off</p>
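<p>A minimal sketch of that loop (plain Tkinter calls):</p>
<pre><code>for child in parent.winfo_children():
    child.destroy()   # remove every child widget of 'parent'
</code></pre>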
<p>The main reason you can't put the label in is that you are calling mainloop() before addlabel(). The program runs through the code and doesn't execute addlabel() until you close the window created by mainwin().</p>
<p>Secondly, you can't do <code>mainwin.main</code>; <code>main</code> is an attribute of an instance, not of the class itself. Instead, try adding a parent parameter to your addlabel like so:</p>
<pre><code>class addlabel:
def __init__(self, parent):
self.label2 = tkinter.Label(parent, height=2, width=50, text='Hello Noob!!')
self.label2.place(x=0, y=50)
self.exit_button = tkinter.Button(self.label2, text='Exit')
self.exit_button.bind('<1>', quit)
</code></pre>
<p>Then, when you call the function in the mainwin class (before the <code>self.root.mainloop()</code> line) you would write:</p>
<pre><code>addlabel(self.main)
</code></pre>
| 0 | 2016-10-06T18:13:15Z | [
"python",
"class",
"tkinter",
"widget"
]
|
how to insert a widget to another, if they are from different classes. and how to create a universal function of closing the child widgets? | 39,902,546 | <p>This is my code. I can't add label2 to self.main, and I don't know how to write a generic function that would close the child widgets specified in its arguments.</p>
<pre><code>import tkinter
class mainwin:
def __init__(self):
self.root = tkinter.Tk()
self.main = tkinter.Canvas(self.root, width=200, height=400)
self.main.place(x=0, y=0, relwidth=1, relheight=1)
self.main.config(bg='green')
self.root.mainloop()
class addlabel:
def __init__(self):
self.label2 = tkinter.Label(mainwin.main, height=2, width=50, text='Hello Noob!!')
#can't put on the canvas 'main'
self.label2.place(x=0, y=50)
self.exit_button = tkinter.Button(self.label2, text='Exit')
self.exit.button.bind('<1>', quit_from_widget)
'''
class quit_from_widget:
def __init__(self):
# what code should be written here, to quit any child widget.
'''
mainwin()
addlabel()
</code></pre>
| -1 | 2016-10-06T17:48:22Z | 39,903,723 | <pre><code>import tkinter
class mainwin:
def __init__(self):
self.root = tkinter.Tk()
self.main = tkinter.Canvas(self.root, width=200, height=400)
self.main.place(x=0, y=0, relwidth=1, relheight=1)
self.main.config(bg='green')
self.root.mainloop()
class CustomLabel(tkinter.Frame):
def __init__(self, parent):
tkinter.Frame.__init__(self, parent)
self.label = tkinter.Label(self, height=20, width=30, bg='Red', fg='white', text='Hello')
self.exit_button = tkinter.Button(self, command=self.destroy)
# pack these widgets into this frame. You can use grid or
# place, but pack is easiest for such a simple layout
self.exit_button.pack(side="right")
self.label.pack(side="left", fill="both", expand=True)
window = mainwin()
label = CustomLabel(window.main)
</code></pre>
<p>Nothing happens: it gives a green background and the child widget is not visible, but on closing it raises an error:<br>
....
(widgetName, self._w) + extra + self._options(cnf))
_tkinter.TclError: can't invoke "frame" command: application has been destroyed</p>
<p>Process finished with exit code 1</p>
| -1 | 2016-10-06T18:59:21Z | [
"python",
"class",
"tkinter",
"widget"
]
|
How to add wxpython frame style to disable frame resizing | 39,902,556 | <p>I built a sample program using <code>wxpython</code> and I need to disable frame resizing.</p>
<p>I know I should use a <strong>wx.Frame style</strong> but I don't know where I can add this in my code.</p>
<p><code>style=wx.DEFAULT_FRAME_STYLE ^ wx.RESIZE_BORDER</code></p>
<p>Code:</p>
<pre><code>class Example(wx.Frame):
def __init__(self, *args, **kwargs):
super(Example, self).__init__(*args, **kwargs)
self.AppUI()
def AppUI(self):
panel = wx.Panel(self)
sizer = wx.GridBagSizer(5, 5)
logo = wx.StaticBitmap(panel, bitmap=wx.Bitmap('./logo.png'))
sizer.Add(logo, pos=(0, 2), flag=wx.TOP|wx.CENTER|wx.ALIGN_CENTER, border=3)
line = wx.StaticLine(panel)
sizer.Add(line, pos=(1, 0), span=(1, 5), flag=wx.EXPAND|wx.BOTTOM, border=10)
# rest of code ........
#.....
#.....
#app required
self.SetSize((550, 560))
self.SetTitle('Example App')
self.Centre()
self.SetSizer(sizer)
self.Show(True)
def main():
app = wx.App()
Example(None)
app.MainLoop()
if __name__ == '__main__':
main()
</code></pre>
 | 0 | 2016-10-06T17:48:56Z | 39,902,964 | <p>I solved my problem by dropping <code>**kwargs</code> from the superclass <code>__init__</code> call and passing the style explicitly.</p>
<pre><code>class Example(wx.Frame):
def __init__(self, *args, **kwargs):
super(Example, self).__init__(*args, style=wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX | wx.CLIP_CHILDREN)
self.AppUI()
#rest of code....
</code></pre>
| 0 | 2016-10-06T18:13:59Z | [
"python",
"python-2.7",
"python-3.x",
"wxpython"
]
|
GridSearch with SVM producing IndexError | 39,902,562 | <p>I'm building a classifier using an SVM and want to perform a Grid Search to help automate finding the optimal model. Here's the code:</p>
<pre><code>from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
X.shape # (22343, 323)
y.shape # (22343, 1)
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=0.4, random_state=0
)
tuned_parameters = [
{
'estimator__kernel': ['rbf'],
'estimator__gamma': [1e-3, 1e-4],
'estimator__C': [1, 10, 100, 1000]
},
{
'estimator__kernel': ['linear'],
'estimator__C': [1, 10, 100, 1000]
}
]
model_to_set = OneVsRestClassifier(SVC(), n_jobs=-1)
clf = GridSearchCV(model_to_set, tuned_parameters)
clf.fit(X_train, y_train)
</code></pre>
<p>and I get the following error message (this isn't the whole stack trace. just the last 3 calls): </p>
<pre><code>----------------------------------------------------
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in split(self, X, y, groups)
88 X, y, groups = indexable(X, y, groups)
89 indices = np.arange(_num_samples(X))
---> 90 for test_index in self._iter_test_masks(X, y, groups):
91 train_index = indices[np.logical_not(test_index)]
92 test_index = indices[test_index]
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in _iter_test_masks(self, X, y, groups)
606
607 def _iter_test_masks(self, X, y=None, groups=None):
--> 608 test_folds = self._make_test_folds(X, y)
609 for i in range(self.n_splits):
610 yield test_folds == i
/anaconda/lib/python3.5/site-packages/sklearn/model_selection/_split.py in _make_test_folds(self, X, y, groups)
593 for test_fold_indices, per_cls_splits in enumerate(zip(*per_cls_cvs)):
594 for cls, (_, test_split) in zip(unique_y, per_cls_splits):
--> 595 cls_test_folds = test_folds[y == cls]
596 # the test split can be too big because we used
597 # KFold(...).split(X[:max(c, n_splits)]) when data is not 100%
IndexError: too many indices for array
</code></pre>
<p>Also, when I try reshaping the arrays so that the y is (22343,) I find that the GridSearch never finishes even if I set the tuned_parameters to only default values.</p>
<p>And here are the versions for all of the packages if that helps:</p>
<p>Python: 3.5.2</p>
<p>scikit-learn: 0.18</p>
<p>pandas: 0.19.0</p>
| 1 | 2016-10-06T17:49:28Z | 39,902,864 | <p>It seems that there is no error in your implementation.</p>
<p>However, as it's mentioned in the <code>sklearn</code> documentation, the "fit time complexity is more than quadratic with the number of samples which makes it hard to scale to dataset with more than a couple of <code>10000</code> samples". <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow">See documentation here</a></p>
<p>In your case, you have <code>22343</code> samples, which can lead to some computational problems/memory issues. That is why when you do your default CV it takes a lot of time. Try to <strong>reduce</strong> your train set using <code>10000</code> samples or less.</p>
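<p>A rough sketch of that subsampling (assuming <code>X_train</code>/<code>y_train</code> are NumPy arrays as in the question; the 10000 cutoff is arbitrary):</p>
<pre><code>import numpy as np

n_sub = 10000
idx = np.random.RandomState(0).choice(len(X_train), size=n_sub, replace=False)
# .ravel() gives the 1-d target array that scikit-learn expects
clf.fit(X_train[idx], y_train[idx].ravel())
</code></pre>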
| 3 | 2016-10-06T18:07:38Z | [
"python",
"pandas",
"machine-learning",
"scikit-learn",
"svm"
]
|
Right way to set variables at python class | 39,902,629 | <p>Which is the right way to work with variables inside a class?</p>
<p>1- setting them as class attributes where we get them and access them from class itself:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.topology = topology
self.buildLayers()
def buildLayers(self):
for layer in self.topology:
#do thing
</code></pre>
<p>2- passing them through methods that we need them without assign at class if they are not really useful variables:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.buildLayers(topology)
def buildLayers(self, topology):
for layer in topology:
#do thing
</code></pre>
<p>3- a mix of the above two:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.topology = topology
self.buildLayers(self.topology) # or self.buildLayers(topology) ?
def buildLayers(self, topology):
for layer in topology:
#do thing
</code></pre>
<p>I think that the first one is correct, but it doesn't let you reuse the function for different purposes without assigning a new value to the variable, which would look like this:</p>
<pre><code>self.topology = x
self.buildLayers()
</code></pre>
<p>That looks weird, and you don't immediately see that changing <code>self.topology</code> affects the call to <code>self.buildLayers()</code>.</p>
| 0 | 2016-10-06T17:53:00Z | 39,902,712 | <p>In general the first way is the "really object oriented" way, and much preferred over the second and the third.</p>
<p>If you want your buildLayers function to be able to change the topology occasionally, give it a parameter topology with default value = None.</p>
<p>As long as you don't pass that parameter when calling buildLayers, it will use self.topology. If you pass it, it will use the one you passed, and (if you wish) change self.topology to it.</p>
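<p>A small sketch of that pattern, using the names from the question:</p>
<pre><code>def buildLayers(self, topology=None):
    if topology is None:
        topology = self.topology   # fall back to the instance attribute
    for layer in topology:
        # do thing
        pass
</code></pre>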
<p>By the way, while it's wise to stick to rules like this in the beginning, there are no real dogmas in programming. As experience grows, you'll find that to each rule there are many perfectly sane exceptions.</p>
<p>That's the fun of programming, you never stop learning from experience.</p>
| 2 | 2016-10-06T17:57:33Z | [
"python",
"class",
"variables",
"standards"
]
|
Right way to set variables at python class | 39,902,629 | <p>Which is the right way to work with variables inside a class?</p>
<p>1- setting them as class attributes where we get them and access them from class itself:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.topology = topology
self.buildLayers()
def buildLayers(self):
for layer in self.topology:
#do thing
</code></pre>
<p>2- passing them through methods that we need them without assign at class if they are not really useful variables:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.buildLayers(topology)
def buildLayers(self, topology):
for layer in topology:
#do thing
</code></pre>
<p>3- a mix of the above two:</p>
<pre><code>class NeuralNetwork(object):
def __init__(self, topology):
self.topology = topology
self.buildLayers(self.topology) # or self.buildLayers(topology) ?
def buildLayers(self, topology):
for layer in topology:
#do thing
</code></pre>
<p>I think that the first one is correct, but it doesn't let you reuse the function for different purposes without assigning a new value to the variable, which would look like this:</p>
<pre><code>self.topology = x
self.buildLayers()
</code></pre>
<p>That looks weird, and you don't immediately see that changing <code>self.topology</code> affects the call to <code>self.buildLayers()</code>.</p>
 | 0 | 2016-10-06T17:53:00Z | 39,902,732 | <p>The 1st one is correct and recommended, as that way the data will be object-dependent.</p>
| 0 | 2016-10-06T17:58:59Z | [
"python",
"class",
"variables",
"standards"
]
|
Printing a list of files from a directory | 39,902,695 | <p>I was wondering how to print a list of files from a directory. I know this should be very easy to do, but I'm blanking out on how to do it. My second method, search_characteristics(directory), should return the list of files found in that directory given a key press. The 3rd method, take_action(directory1), should print the files returned by what you input under search_directory(directory); there will be more under that method later, but for now let's focus on getting the list of files to print. </p>
<p>Here's what it should do.</p>
<p>The third line of the input specifies the action that should be taken on each of the interesting files found in the search. No matter what, you should always print the file's path, on its own line of output, to the console when you find an interesting one; the action chosen here specifies what else should be done with it.</p>
<p>Here's my code.</p>
<pre><code>import os
import os.path
import shutil
from pathlib import Path
import pathlib
def search_files():
exist = Path(directory)
if exist.exists():
return directory
else:
print("Error")
print("Try again: ")
return search_files()
def search_characteristics(directory):
interesting = input()
interesting1=interesting.split(" ")
if (interesting1[0] == 'N'):
path1 = os.path.join(directory, interesting1[1])
if(os.path.isfile(path1)):
return path1
else:
return search_characteristics(directory)
print(path1)
return path1
elif interesting1[0] == 'E':
for file in os.listdir(directory):
if file.endswith(interesting1[1]):
return file
elif interesting1[0] == 'S':
for file in os.listdir(directory):
try:
if os.path.getsize(file) > int(interesting1[1]):
return file
except:
print('Only Numbers after S please.')
return search_characteristics(directory)
else:
print("Error")
return search_characteristics(directory)
def take_action(directory1):
action = input()
action1=action.split(" ")
if (action1[0] == 'P'):
print(directory1)
if __name__ == '__main__':
directory = input()
search_files()
directory1=search_characteristics(directory)
take_action(directory1)
</code></pre>
<p>When I run it, it only seems to return the first file from the list of files that is supposed to be returned. I'm also not sure if I'm reading what it should do correctly. </p>
| 0 | 2016-10-06T17:56:46Z | 39,902,803 | <p>Try <code>os.walk()</code></p>
<pre><code>list(os.walk("."))[0]
</code></pre>
<p>will give you, for the current folder, a tuple of its path, its sub-directories and its files.</p>
<p>EDIT</p>
<p>Maybe this is more suited to your needs </p>
<pre><code>filter(lambda x : os.path.isdir(x) , os.listdir("."))
</code></pre>
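<p>Since the question is about files rather than sub-directories, the same idea with <code>os.path.isfile</code> may be closer to what is needed (a sketch, not part of the original answer):</p>
<pre><code>files = [f for f in os.listdir(".") if os.path.isfile(os.path.join(".", f))]
</code></pre>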
| 0 | 2016-10-06T18:03:29Z | [
"python"
]
|
Printing a list of files from a directory | 39,902,695 | <p>I was wondering how to print a list of files from a directory. I know this should be very easy to do, but I'm blanking out on how to do it. My second method, search_characteristics(directory), should return the list of files found in that directory given a key press. The 3rd method, take_action(directory1), should print the files returned by what you input under search_directory(directory); there will be more under that method later, but for now let's focus on getting the list of files to print. </p>
<p>Here's what it should do.</p>
<p>The third line of the input specifies the action that should be taken on each of the interesting files found in the search. No matter what, you should always print the file's path, on its own line of output, to the console when you find an interesting one; the action chosen here specifies what else should be done with it.</p>
<p>Here's my code.</p>
<pre><code>import os
import os.path
import shutil
from pathlib import Path
import pathlib
def search_files():
exist = Path(directory)
if exist.exists():
return directory
else:
print("Error")
print("Try again: ")
return search_files()
def search_characteristics(directory):
interesting = input()
interesting1=interesting.split(" ")
if (interesting1[0] == 'N'):
path1 = os.path.join(directory, interesting1[1])
if(os.path.isfile(path1)):
return path1
else:
return search_characteristics(directory)
print(path1)
return path1
elif interesting1[0] == 'E':
for file in os.listdir(directory):
if file.endswith(interesting1[1]):
return file
elif interesting1[0] == 'S':
for file in os.listdir(directory):
try:
if os.path.getsize(file) > int(interesting1[1]):
return file
except:
print('Only Numbers after S please.')
return search_characteristics(directory)
else:
print("Error")
return search_characteristics(directory)
def take_action(directory1):
action = input()
action1=action.split(" ")
if (action1[0] == 'P'):
print(directory1)
if __name__ == '__main__':
directory = input()
search_files()
directory1=search_characteristics(directory)
take_action(directory1)
</code></pre>
<p>When I run it, it only seems to return the first file from the list of files that is supposed to be returned. I'm also not sure if I'm reading what it should do correctly. </p>
 | 0 | 2016-10-06T17:56:46Z | 39,904,265 | <p>First, let me presume to offer a bit of very general advice. Get yourself some software that enables you to try out small snippets of code, if you don't have that already. On Windows the best I know of is PythonWin; you're looking for a REPL program. I mention this because I notice you were very close to an answer with the mention of Path in your code. If you had experimented a little with that you would have been on top of the answer to your principal question.</p>
<p>Getting a list of files in a directory:</p>
<pre><code>import os
from pathlib import Path
path=Path('C:\\Python34')
for fileName in path.iterdir():
fileName.name
</code></pre>
<p>In response to one or two of the comments: Rather than depending on globals and on user input the usual practice is to fall back on a default. In this case that would be the current directory. To do this, make None the default directory and check whether it has been declared in functions; use current working directory otherwise. </p>
<pre><code>def search_files(directory=None):
if not directory:
        directory = os.getcwd()
</code></pre>
| 0 | 2016-10-06T19:33:40Z | [
"python"
]
|
How to produce the following three dimensional array? | 39,902,759 | <p>I have a cube of size <code>N * N * N</code>, say <code>N=8</code>. Each dimension of the cube is discretised to 1, so that I have labelled points <code>(0,0,0), (0,0,1)..(N,N,N)</code>. At each labelled point, I would like to assign a random value, and thus produce an array which stores a value at each vertex. For example <code>val[0,0,0]=1, val[0,0,1]=1.2 val[0,1,0]=1.3</code>, ...</p>
<p>How do I write Python code to achieve this?</p>
| -1 | 2016-10-06T18:00:58Z | 39,903,131 | <p>You could simply generate lists of lists. While not in any way efficient, it would allow you to access your cube like <code>val[0][0][0]</code>.</p>
<pre><code>arr = [[[] for _ in range(8)] for _ in range(8)]
arr[0][0].append(1)
</code></pre>
| 1 | 2016-10-06T18:23:37Z | [
"python",
"arrays",
"python-2.7"
]
|
How to produce the following three dimensional array? | 39,902,759 | <p>I have a cube of size <code>N * N * N</code>, say <code>N=8</code>. Each dimension of the cube is discretised to 1, so that I have labelled points <code>(0,0,0), (0,0,1)..(N,N,N)</code>. At each labelled point, I would like to assign a random value, and thus produce an array which stores a value at each vertex. For example <code>val[0,0,0]=1, val[0,0,1]=1.2 val[0,1,0]=1.3</code>, ...</p>
<p>How do I write Python code to achieve this?</p>
| -1 | 2016-10-06T18:00:58Z | 39,903,155 | <p>For large matrices, look into using <code>numpy</code>. This is the problem that it's designed to solve</p>
| 0 | 2016-10-06T18:25:15Z | [
"python",
"arrays",
"python-2.7"
]
|
How to produce the following three dimensional array? | 39,902,759 | <p>I have a cube of size <code>N * N * N</code>, say <code>N=8</code>. Each dimension of the cube is discretised to 1, so that I have labelled points <code>(0,0,0), (0,0,1)..(N,N,N)</code>. At each labelled point, I would like to assign a random value, and thus produce an array which stores a value at each vertex. For example <code>val[0,0,0]=1, val[0,0,1]=1.2 val[0,1,0]=1.3</code>, ...</p>
<p>How do I write Python code to achieve this?</p>
| -1 | 2016-10-06T18:00:58Z | 39,903,161 | <p>Did you mean this:</p>
<pre><code>import numpy as np
n = 5
val = np.zeros((n, n, n))  # Create a 3d array full of 0's
val[0,0,0] = 11
val[0,0,1] = 33
print(val[0, 0])
array([ 11., 33., 0., 0., 0.])
</code></pre>
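<p>Since the question asks for a random value at every vertex, a one-line sketch (note the <code>n + 1</code> points per axis if both 0 and N are labelled):</p>
<pre><code>val = np.random.rand(n + 1, n + 1, n + 1)   # uniform random value at every vertex
print(val[0, 0, 1])
</code></pre>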
| 1 | 2016-10-06T18:25:37Z | [
"python",
"arrays",
"python-2.7"
]
|
Python to CSV is splitting string into two columns when I want one | 39,902,832 | <p>I am scraping a page with BeautifulSoup, and part of the logic is that sometimes part of the contents of a <code><td></code> tag can have a <code><br></code> in it. </p>
<p>So sometimes it looks like this:</p>
<pre class="lang-html prettyprint-override"><code><td class="xyz">
text 1
<br>
text 2
</td>
</code></pre>
<p>and sometimes it looks like this:</p>
<pre class="lang-html prettyprint-override"><code><td class="xyz">
text 1
</td>
</code></pre>
<p>I am looping through this and adding to an output_row list that I eventually add to a list of lists. Whether I see the former format or the latter, I want the text to be in one cell. </p>
<p>I've found a way to determine if I am seeing the <code><br></code> tag because the td.string shows up as none and I also know that text 2 always has 'ABC' in it. So:</p>
<pre class="lang-python prettyprint-override"><code> elif td.string == None:
if 'ABC' in td.contents[2]:
new_string = td.contents[0] + ' ' + td.contents[2]
output_row.append(new_string)
print(new_string)
else:
#this is for another situation and it works fine
</code></pre>
<p>As I print this in a Jupyter Notebook, it shows up as "text 1 text 2" as one line. But when I open up my CSV, it is in two different columns. So when td.string has contents (meaning no <code><br></code> tag), text 1 shows up in one column, but when I get to the pieces that have a <code><br></code> tag, all my data gets shifted. </p>
<p>I'm not sure why it shows up as two different strings (two columns) when I concatenate them before appending them to the list. </p>
<p>I'm writing to file like this:</p>
<pre class="lang-python prettyprint-override"><code>with open('C:/location/file.csv', 'w',newline='') as csv_file:
writer=csv.writer(csv_file,delimiter=',')
#writer.writerow(headers)
for row in output_rows:
writer.writerow(row)
csv_file.close
</code></pre>
| 4 | 2016-10-06T18:05:02Z | 39,903,427 | <p>You can handle both cases using <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text" rel="nofollow"><code>get_text()</code></a> with "strip" and "separator":</p>
<pre><code>from bs4 import BeautifulSoup
dat="""
<table>
<tr>
<td class="xyz">
text 1
<br>
text 2
</td>
<td class="xyz">
text 1
</td>
</tr>
</table>
"""
soup = BeautifulSoup(dat, 'html.parser')
for td in soup.select("table > tr > td.xyz"):
print(td.get_text(separator=" ", strip=True))
</code></pre>
<p>Prints:</p>
<pre><code>text 1 text 2
text 1
</code></pre>
| 2 | 2016-10-06T18:39:56Z | [
"python",
"csv",
"beautifulsoup"
]
|
I'm starting out using tkinter | 39,902,922 | <p>I am just starting out with tkinter in Python 3.5 and I don't understand it well yet, but hopefully I will over time.<br>
Anyway, what I was asking is how do I create a text box that a user can type into? I know it is possible but don't yet know how to do it as, as I said at the beginning, I'm just starting out.<br>
If there is any scaffolding code for a box that would allow a user to enter text, it would be a great help if you could show it to me.<br>
Also, to interact with this input, would it be the same as normal Python? </p>
<p>I know this is a lot to be asking but any responses are welcome,<br>
Thanks in advance :)</p>
| -6 | 2016-10-06T18:11:12Z | 39,958,831 | <p>Both of the following links contain code relating to text entry.</p>
<p><a href="http://effbot.org/tkinterbook/entry.htm" rel="nofollow">http://effbot.org/tkinterbook/entry.htm</a></p>
<p><a href="http://effbot.org/tkinterbook/text.htm" rel="nofollow">http://effbot.org/tkinterbook/text.htm</a></p>
<p>Entry is a single line box.
Text is a multiple line box and supports text formatting</p>
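<p>A minimal sketch of a window with a single-line entry box and a button that reads it (plain Tkinter, Python 3):</p>
<pre><code>import tkinter as tk

root = tk.Tk()
entry = tk.Entry(root, width=30)
entry.pack()

def show():
    print(entry.get())   # whatever the user typed, as an ordinary string

tk.Button(root, text="OK", command=show).pack()
root.mainloop()
</code></pre>
<p>Interacting with the value is plain Python from there, since <code>entry.get()</code> returns a normal string.</p>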
| 0 | 2016-10-10T12:56:29Z | [
"python",
"tkinter"
]
|
Python - Cannot join thread - No multiprocessing | 39,903,009 | <p>I have this piece of code in my program, where the OnDone function is an event handler in a wxPython GUI. When I click the button DONE, the OnDone event fires, which then does some work and starts the thread self.tStart with target function StartEnable. I want to join this thread back using self.tStart.join(). However I am getting an error as follows: </p>
<pre><code>Exception in thread StartEnablingThread:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "//wagnernt.wagnerspraytech.com/users$/kundemj/windows/my documents/Production GUI/Trial python Codes/GUI_withClass.py", line 638, in StartEnable
self.tStart.join()
File "C:\Python27\lib\threading.py", line 931, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
</code></pre>
<p>I have not got this type of error before. Could any one of you guys tell me what I am missing here. </p>
<pre><code> def OnDone(self, event):
self.WriteToController([0x04],'GuiMsgIn')
self.status_text.SetLabel('PRESSURE CALIBRATION DONE \n DUMP PRESSURE')
self.led1.SetBackgroundColour('GREY')
self.add_pressure.Disable()
self.tStart = threading.Thread(target=self.StartEnable, name = "StartEnablingThread", args=())
self.tStart.start()
def StartEnable(self):
while True:
time.sleep(0.5)
if int(self.pressure_text_control.GetValue()) < 50:
print "HELLO"
self.start.Enable()
self.tStart.join()
print "hello2"
break
</code></pre>
<p>I want to join the thread after the "if" condition has executed. Until then I want the thread to run. </p>
| 0 | 2016-10-06T18:16:32Z | 39,904,295 | <p>When the <code>StartEnable</code> method is executing, it is running on the <em>StartEnablingThread</em> you created in the <code>__init__</code> method. You cannot join the current thread. This is clearly stated in the documentation for the <a href="https://docs.python.org/2/library/threading.html#threading.Thread.join" rel="nofollow">join</a> call.</p>
<blockquote>
<p>join() raises a RuntimeError if an attempt is made to join the current thread as that would cause a deadlock. It is also an error to join() a thread before it has been started and attempts to do so raises the same exception.</p>
</blockquote>
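<p>A minimal sketch of the distinction: <code>join()</code> is called from some <em>other</em> thread, typically the one that started the worker.</p>
<pre><code>import threading
import time

def worker():
    time.sleep(1)      # do some work; never call t.join() in here

t = threading.Thread(target=worker)
t.start()
t.join()               # fine: the main thread waits for the worker to finish
</code></pre>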
| 1 | 2016-10-06T19:35:27Z | [
"python",
"multithreading",
"wxpython"
]
|
Python - Cannot join thread - No multiprocessing | 39,903,009 | <p>I have this piece of code in my program, where the OnDone function is an event handler in a wxPython GUI. When I click the button DONE, the OnDone event fires, which then does some work and starts the thread self.tStart with target function StartEnable. I want to join this thread back using self.tStart.join(). However I am getting an error as follows: </p>
<pre><code>Exception in thread StartEnablingThread:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "//wagnernt.wagnerspraytech.com/users$/kundemj/windows/my documents/Production GUI/Trial python Codes/GUI_withClass.py", line 638, in StartEnable
self.tStart.join()
File "C:\Python27\lib\threading.py", line 931, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
</code></pre>
<p>I have not got this type of error before. Could any one of you guys tell me what I am missing here. </p>
<pre><code> def OnDone(self, event):
self.WriteToController([0x04],'GuiMsgIn')
self.status_text.SetLabel('PRESSURE CALIBRATION DONE \n DUMP PRESSURE')
self.led1.SetBackgroundColour('GREY')
self.add_pressure.Disable()
self.tStart = threading.Thread(target=self.StartEnable, name = "StartEnablingThread", args=())
self.tStart.start()
def StartEnable(self):
while True:
time.sleep(0.5)
if int(self.pressure_text_control.GetValue()) < 50:
print "HELLO"
self.start.Enable()
self.tStart.join()
print "hello2"
break
</code></pre>
<p>I want to join the thread after the "if" condition has executed. Until then I want the thread to run. </p>
 | 0 | 2016-10-06T18:16:32Z | 39,904,373 | <p>I have some bad news. Threading in Python won't help you much here; your best bet is to look at using only 1 thread or using multiple processes. If you really need threads then you may need to look at a different language like C# or C. Have a look at <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">https://docs.python.org/2/library/multiprocessing.html</a> </p>
<p>The reason threading is limited in Python is the global interpreter lock (GIL). It only lets one thread execute Python code at a time, so there is no true parallel multi-threading in Python, but there are people working on it. <a href="http://pypy.org/" rel="nofollow">http://pypy.org/</a> </p>
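<p>A minimal sketch of the multiprocessing route mentioned above (the worker function is made up for illustration):</p>
<pre><code>from multiprocessing import Pool

def square(x):         # hypothetical CPU-bound task
    return x * x

if __name__ == '__main__':
    pool = Pool(4)
    print(pool.map(square, range(10)))
    pool.close()
    pool.join()
</code></pre>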
| -1 | 2016-10-06T19:40:06Z | [
"python",
"multithreading",
"wxpython"
]
|
Python - Cannot join thread - No multiprocessing | 39,903,009 | <p>I have this piece of code in my program, where the OnDone function is an event handler in a wxPython GUI. When I click the button DONE, the OnDone event fires, which then does some work and starts the thread self.tStart with target function StartEnable. I want to join this thread back using self.tStart.join(). However I am getting an error as follows: </p>
<pre><code>Exception in thread StartEnablingThread:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "//wagnernt.wagnerspraytech.com/users$/kundemj/windows/my documents/Production GUI/Trial python Codes/GUI_withClass.py", line 638, in StartEnable
self.tStart.join()
File "C:\Python27\lib\threading.py", line 931, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
</code></pre>
<p>I have not got this type of error before. Could any one of you guys tell me what I am missing here. </p>
<pre><code> def OnDone(self, event):
self.WriteToController([0x04],'GuiMsgIn')
self.status_text.SetLabel('PRESSURE CALIBRATION DONE \n DUMP PRESSURE')
self.led1.SetBackgroundColour('GREY')
self.add_pressure.Disable()
self.tStart = threading.Thread(target=self.StartEnable, name = "StartEnablingThread", args=())
self.tStart.start()
def StartEnable(self):
while True:
time.sleep(0.5)
if int(self.pressure_text_control.GetValue()) < 50:
print "HELLO"
self.start.Enable()
self.tStart.join()
print "hello2"
break
</code></pre>
<p>I want to join the thread after the "if" condition has executed. Until then I want the thread to run. </p>
| 0 | 2016-10-06T18:16:32Z | 39,904,589 | <h2>Joining Is Waiting</h2>
<p><em>Joining</em> a thread actually means waiting for another thread to finish.</p>
<p>So, in <code>thread1</code>, there can be code which says:</p>
<pre><code>thread2.join()
</code></pre>
<p>That means <em>"stop here and do not execute the next line of code until <code>thread2</code> is finished"</em>.</p>
<p>If you did (in <code>thread1</code>) the following, that would fail with the error from the question:</p>
<pre><code>thread1.join() # RuntimeError: cannot join current thread
</code></pre>
<h2>Joining Is Not Stopping</h2>
<p>Calling <code>thread2.join()</code> does not cause <code>thread2</code> to stop, nor even signal to it in any way that it should stop.</p>
<p>A thread stops when its target function exits. Often, a thread is implemented as a loop which checks for a signal (a variable) which tells it to stop, e.g.</p>
<pre><code>def run():
while whatever:
# ...
if self.should_abort_immediately:
print 'aborting'
return
</code></pre>
<p>Then, the way to stop the thread is to do:</p>
<pre><code>thread2.should_abort_immediately = True # tell the thread to stop
thread2.join() # entirely optional: wait until it stops
</code></pre>
<h1>The Code from the Question</h1>
<p>That code already implements the stopping correctly with the <code>break</code>. The <code>join</code> should just be deleted.</p>
<pre><code> if int(self.pressure_text_control.GetValue()) < 50:
print "HELLO"
self.start.Enable()
print "hello2"
break
</code></pre>
| 1 | 2016-10-06T19:54:11Z | [
"python",
"multithreading",
"wxpython"
]
|
SoftLayer_Virtual_Guest_Block_Device_Template_Group.addLocations returns true but no locations added | 39,903,063 | <p>Trying to copy a private image to another datacenter using Python. Code below returns true but transaction is not initiated and image is not copied.</p>
<pre><code>info=client['SoftLayer_Virtual_Guest_Block_Device_Template_Group'].addLocations({'id':449494},id=123456)
</code></pre>
<p>id is confirmed valid - using id with other services (getBlockDevices,getStatus,getChildren) provides appropriate responses.</p>
<p>what is wrong with the code above?</p>
 | 0 | 2016-10-06T18:19:26Z | 39,903,561 | <p>You should send the template for locations between <strong>[ ]</strong>; please try the following:</p>
<pre><code>info=client['SoftLayer_Virtual_Guest_Block_Device_Template_Group'].addLocations([{'id':449494}],id=123456)
</code></pre>
<p>References:</p>
<ul>
<li><a href="http://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest_Block_Device_Template_Group/addLocations" rel="nofollow">SoftLayer_Virtual_Guest_Block_Device_Template_Group::addLocations</a></li>
</ul>
| 0 | 2016-10-06T18:48:29Z | [
"python",
"softlayer"
]
|
How to switch context in Dragonfly | 39,903,075 | <p>I have tried Dragonfly, the Python module for handling speech recognition, and successfully ran the notepad example with Windows speech recognition. Now I would like to try something more general, but I cannot find how contexts are switched, i.e. grammars loaded. There are always lines like:</p>
<pre><code>grammar = Grammar("Eclipse", context=DynamicContext(winContext, nixContext))
grammar.add_rule(rules)
grammar.load()
</code></pre>
<p>But the context is always tied to an executable or window title. How do I switch between grammars at will, like a word command or at least mouse click, key press?</p>
| 1 | 2016-10-06T18:20:20Z | 39,927,097 | <p>Create a rule which calls a function which does this:</p>
<pre><code>grammar.disable()
other_grammar.enable()
</code></pre>
<p>Have a look at <code>grammar_base.py</code> for other relevant functions.</p>
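<p>A rough sketch of such a rule (assuming dragonfly's <code>MappingRule</code> and <code>Function</code>, and that both grammars have already been created and loaded):</p>
<pre><code>from dragonfly import MappingRule, Function

def switch_to_other():
    grammar.disable()
    other_grammar.enable()

switch_rule = MappingRule(
    name="switcher",
    mapping={"switch grammar": Function(switch_to_other)},
)
grammar.add_rule(switch_rule)
</code></pre>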
| 0 | 2016-10-07T23:30:08Z | [
"python",
"python-dragonfly"
]
|
efficiently replace values from a column to another column Pandas DataFrame | 39,903,090 | <p>I have a Pandas DataFrame like the following one: </p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0 0.4 0.4
4 0 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I want to replace the <code>col1</code> values with the values in the second column (<code>col2</code>) only if <code>col1</code> values are equal to 0, and after (for the zero values remaining), do it again but with the third column (<code>col3</code>). The Desired Result is the next one:</p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0.4 0.4 0.4
4 0.3 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I did it using the <code>pd.replace</code> function, but it seems too slow. I think there must be a faster way to accomplish that. </p>
<pre><code>df.col1.replace(0,df.col2,inplace=True)
df.col1.replace(0,df.col3,inplace=True)
</code></pre>
<p>Is there a faster way to do that, using some other function instead of the <code>pd.replace</code> function, or maybe by slicing <code>df.col1</code>?</p>
<p>PS1. Additional info: the original goal of the problem is to replace the zero values (in <code>col1</code>) with the statistic mode of some groups of ids. That is why I have <code>col2</code>(best mode, first option of replacement) and <code>col3</code>(last mode, still functional but not so desirable as <code>col2</code>). </p>
| 2 | 2016-10-06T18:21:33Z | 39,903,800 | <p>I'm not sure if it's faster, but you're right that you can slice the dataframe to get your desired result.</p>
<pre><code>df.col1[df.col1 == 0] = df.col2
df.col1[df.col1 == 0] = df.col3
print(df)
</code></pre>
<p>Output:</p>
<pre><code> col1 col2 col3
0 0.2 0.3 0.3
1 0.2 0.3 0.3
2 0.4 0.4 0.4
3 0.3 0.0 0.3
4 0.0 0.0 0.0
5 0.1 0.4 0.4
</code></pre>
<p>Alternatively if you want it to be more terse (though I don't know if it's faster) you can combine what you did with what I did.</p>
<pre><code>df.col1[df.col1 == 0] = df.col2.replace(0, df.col3)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> col1 col2 col3
0 0.2 0.3 0.3
1 0.2 0.3 0.3
2 0.4 0.4 0.4
3 0.3 0.0 0.3
4 0.0 0.0 0.0
5 0.1 0.4 0.4
</code></pre>
| 2 | 2016-10-06T19:03:41Z | [
"python",
"pandas",
"replace",
"dataframe"
]
|
efficiently replace values from a column to another column Pandas DataFrame | 39,903,090 | <p>I have a Pandas DataFrame like the following one: </p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0 0.4 0.4
4 0 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I want to replace the <code>col1</code> values with the values in the second column (<code>col2</code>) only if <code>col1</code> values are equal to 0, and after (for the zero values remaining), do it again but with the third column (<code>col3</code>). The Desired Result is the next one:</p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0.4 0.4 0.4
4 0.3 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I did it using the <code>pd.replace</code> function, but it seems too slow. I think there must be a faster way to accomplish that. </p>
<pre><code>df.col1.replace(0,df.col2,inplace=True)
df.col1.replace(0,df.col3,inplace=True)
</code></pre>
<p>Is there a faster way to do that, using some other function instead of the <code>pd.replace</code> function, or maybe by slicing <code>df.col1</code>?</p>
<p>PS1. Additional info: the original goal of the problem is to replace the zero values (in <code>col1</code>) with the statistic mode of some groups of ids. That is why I have <code>col2</code>(best mode, first option of replacement) and <code>col3</code>(last mode, still functional but not so desirable as <code>col2</code>). </p>
| 2 | 2016-10-06T18:21:33Z | 39,903,944 | <p>Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a> is faster. Using a similar pattern as you used with <code>replace</code>:</p>
<pre><code>df['col1'] = np.where(df['col1'] == 0, df['col2'], df['col1'])
df['col1'] = np.where(df['col1'] == 0, df['col3'], df['col1'])
</code></pre>
<p>However, using a nested <code>np.where</code> is slightly faster:</p>
<pre><code>df['col1'] = np.where(df['col1'] == 0,
np.where(df['col2'] == 0, df['col3'], df['col2']),
df['col1'])
</code></pre>
<p><strong>Timings</strong></p>
<p>Using the following setup to produce a larger sample DataFrame and timing functions:</p>
<pre><code>df = pd.concat([df]*10**4, ignore_index=True)
def root_nested(df):
df['col1'] = np.where(df['col1'] == 0, np.where(df['col2'] == 0, df['col3'], df['col2']), df['col1'])
return df
def root_split(df):
df['col1'] = np.where(df['col1'] == 0, df['col2'], df['col1'])
df['col1'] = np.where(df['col1'] == 0, df['col3'], df['col1'])
return df
def pir2(df):
df['col1'] = df.where(df.ne(0), np.nan).bfill(axis=1).col1.fillna(0)
return df
def pir2_2(df):
slc = (df.values != 0).argmax(axis=1)
return df.values[np.arange(slc.shape[0]), slc]
def andrew(df):
df.col1[df.col1 == 0] = df.col2
df.col1[df.col1 == 0] = df.col3
return df
def pablo(df):
df['col1'] = df['col1'].replace(0,df['col2'])
df['col1'] = df['col1'].replace(0,df['col3'])
return df
</code></pre>
<p>I get the following timings:</p>
<pre><code>%timeit root_nested(df.copy())
100 loops, best of 3: 2.25 ms per loop
%timeit root_split(df.copy())
100 loops, best of 3: 2.62 ms per loop
%timeit pir2(df.copy())
100 loops, best of 3: 6.25 ms per loop
%timeit pir2_2(df.copy())
1 loop, best of 3: 2.4 ms per loop
%timeit andrew(df.copy())
100 loops, best of 3: 8.55 ms per loop
</code></pre>
<p>I tried timing your method, but it's been running for multiple minutes without completing. As a comparison, timing your method on just the 6 row example DataFrame (not the much larger one tested above) took 12.8 ms.</p>
| 4 | 2016-10-06T19:11:46Z | [
"python",
"pandas",
"replace",
"dataframe"
]
|
efficiently replace values from a column to another column Pandas DataFrame | 39,903,090 | <p>I have a Pandas DataFrame like the following one: </p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0 0.4 0.4
4 0 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I want to replace the <code>col1</code> values with the values in the second column (<code>col2</code>), but only where the <code>col1</code> values are equal to 0, and afterwards (for the zero values that remain) do it again with the third column (<code>col3</code>). The desired result is the following:</p>
<pre><code> col1 col2 col3
1 0.2 0.3 0.3
2 0.2 0.3 0.3
3 0.4 0.4 0.4
4 0.3 0 0.3
5 0 0 0
6 0.1 0.4 0.4
</code></pre>
<p>I did it using the <code>pd.replace</code> function, but it seems too slow. I think there must be a faster way to accomplish this. </p>
<pre><code>df.col1.replace(0,df.col2,inplace=True)
df.col1.replace(0,df.col3,inplace=True)
</code></pre>
<p>Is there a faster way to do this, using some other function instead of <code>pd.replace</code>, or maybe by slicing <code>df.col1</code>?</p>
<p>PS1. Additional info: the original goal of the problem is to replace the zero values (in <code>col1</code>) with the statistical mode of some groups of ids. That is why I have <code>col2</code> (best mode, first option for replacement) and <code>col3</code> (last mode, still functional but not as desirable as <code>col2</code>). </p>
| 2 | 2016-10-06T18:21:33Z | 39,904,139 | <p>approach using <code>pd.DataFrame.where</code> and <code>pd.DataFrame.bfill</code></p>
<pre><code>df['col1'] = df.where(df.ne(0), np.nan).bfill(axis=1).col1.fillna(0)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/SeDin.png" rel="nofollow"><img src="http://i.stack.imgur.com/SeDin.png" alt="enter image description here"></a></p>
<p>Another approach using <code>np.argmax</code></p>
<pre><code>def pir2(df):
slc = (df.values != 0).argmax(axis=1)
return df.values[np.arange(slc.shape[0]), slc]
</code></pre>
<p>I know there is a better way to use <code>numpy</code> to slice. I just can't think of it at the moment.</p>
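<p>For completeness, a minimal usage sketch (assuming the sample <code>df</code> above and that <code>numpy</code> is imported as <code>np</code>) that assigns the result back to <code>col1</code>:</p>
<pre><code># first non-zero value per row; rows that are all zero keep 0
df['col1'] = pir2(df[['col1', 'col2', 'col3']])
</code></pre>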
| 2 | 2016-10-06T19:25:37Z | [
"python",
"pandas",
"replace",
"dataframe"
]
|
How to assert/verify successful login using Python / Selenium Webdriver | 39,903,144 | <p>I have done a little research and could not find any answer specific to Selenium WebDriver with Python.
I can successfully sign into the page but I cannot find way(s) to verify that the login was successful. Page title does not work for me since it does not change.
Python Selenium documentation does not have any good explanation or examples.
All I want to do after this code is to put a line and assert that the username "Tuto" is visible on the page</p>
<pre><code>LoginButtonLocator = "//a[contains(text(), 'Login')]"
facebookConnectButtonLocator = "//a[contains(text(), 'Connect with Facebook')]"
facebookLoginLocatorID = "email"
facebookPasswordLocatorID = "pass"
facebookLoginButtonLocatorID = "loginbutton"
LoginButtonElement = WebDriverWait(driver, 20).until(lambda driver: driver.find_element_by_xpath(LoginButtonLocator))
LoginButtonElement.click()
facebookConnectButtonElement = WebDriverWait(driver, 20).until(lambda driver: driver.find_element_by_xpath(facebookConnectButtonLocator))
facebookConnectButtonElement.click()
facebookLoginElement = WebDriverWait(driver, 20).until(lambda driver: driver.find_element_by_id(facebookLoginLocatorID))
facebookLoginElement.send_keys(facebookID)
facebookPasswordElement = WebDriverWait(driver, 20).until(lambda driver: driver.find_element_by_id(facebookPasswordLocatorID))
facebookPasswordElement.send_keys(facebookPW)
facebookLoginButtonElement = WebDriverWait(driver, 20).until(lambda driver: driver.find_element_by_id(facebookLoginButtonLocatorID))
facebookLoginButtonElement.click()
</code></pre>
 | 0 | 2016-10-06T18:24:39Z | 39,905,433 | <p>I am working with the JavaScript API instead of the Python version, so the syntax is different, but here is how I would go about it (using mocha as a test framework):</p>
<pre><code>facebookLoginButtonElement.click().then(function(_) {
driver.findElement(By.xpath('//a[text() = "Tuto"]')).then(function(userLink) {
assert.ok(userLink);
});
});
</code></pre>
<p>In the JavaScript version, if <code><a ...>Tuto</a></code> can't be found there would be an error before the callback is called, so the assertion would be redundant (we would only get there if the link was found), but I find it self-documenting, so I add the assert.</p>
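<p>A rough Python equivalent, reusing the <code>WebDriverWait</code> pattern from the question (the XPath for the username link is an assumption):</p>
<pre><code># minimal sketch: wait for the "Tuto" link and assert that it is visible
userLinkElement = WebDriverWait(driver, 20).until(
    lambda driver: driver.find_element_by_xpath("//a[text() = 'Tuto']"))
assert userLinkElement.is_displayed()
</code></pre>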
| 0 | 2016-10-06T20:53:17Z | [
"python",
"selenium",
"selenium-webdriver"
]
|
Checking if a letter is in a character array python | 39,903,179 | <p>How would I write a small program that takes an array of characters and a search character and prints true if the search character is in the array, and false if it is not?</p>
| -2 | 2016-10-06T18:26:36Z | 39,903,227 | <p>In Python this kind of thing is very easy.</p>
<p><code>search_character in arr</code></p>
<p>Evaluates to False when an item is not in an array and True otherwise.</p>
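<p>For example, a minimal sketch of the requested program (the array contents are an assumption):</p>
<pre><code>arr = ['a', 'b', 'c', 'd']   # example character array
print('c' in arr)            # True
print('z' in arr)            # False
</code></pre>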
| 2 | 2016-10-06T18:28:56Z | [
"python",
"arrays",
"character"
]
|
Creating multilabel HDF5 file for caffe | 39,903,197 | <p>I created HDF5 data using the following python script and added an HDF5 data layer. However, when I try to train caffe using this data it keeps complaining: </p>
<blockquote>
<pre><code>Check failed: num_spatial_axes_ == 2 (0 vs. 2) kernel_h & kernel_w can only be used for 2D convolution
</code></pre>
</blockquote>
<p>Here is what my data looks like:</p>
<ol>
<li><p>Data (1x3253), label (1x128) binary. I sliced the 128 into 16 bytes and translated each to decimal to use it as a multilabel. So a typical key would look like (20, 38, 123, 345, ...), i.e. 1x16, and I have 1,000,000 data items like (1). For now I am just using the first byte, so I will have one integer as a label.</p>
<pre><code> DIR ="/x/"
h5_fn= os.path.join('/x/h5Data_train.h5')
from numpy import genfromtxt
dim=64000
InputData=np.arange(3253)
data=np.arange(dim*3253)
data.shape=(dim,3253)
fileList=[os.path.join(i) for folder, subdir,files in os.walk(DIR) for i in files]
for i in range(0,len(fileList)):
InputData=np.genfromtxt(DIR+fileList[i], delimiter=',',skip_header=24)
data[i]=InputData
label=np.arange(dim)
labelData=np.genfromtxt(DIR+'label_file',comments='\t',dtype=None)
for i in range(0,dim):
label[i]=int(labelData[i][0:2],16)
print "Creating HDF5..."
with h5py.File(h5_fn,'w') as f:
f['InputData']=data
f['label']=label
text_fn=os.path.join('/x/hdf5.txt')
with open(text_fn,'w') as f:
f.write('h5_fn')
</code></pre></li>
</ol>
<p>This script creates the HDF5 file, but I suspect that the error from caffe is related to how I created it. Can someone tell me if there is anything wrong with how I created the HDF5? Also, is there any way to check that the HDF5 file created is as you want? Thanks!</p>
| 1 | 2016-10-06T18:27:32Z | 39,940,555 | <h3>The problem:</h3>
<p>Caffe, by default, expects its data to be 4D: batch_size -by- channel -by- height -by- width.<br>
In your <a href="http://stackoverflow.com/q/39888054/1714410">model</a> you assume each sample is of shape 1-by-1-by-3253, that is: your data is 1D, with only the width dimension being non-singleton.  This is an important detail since you apply convolution along the width dimension.<br>
On the other hand, your HDF5 data is only 2D, and caffe interprets it as <code>dim</code> examples with 3253 <em>channels</em>, each of width and height 1.<br>
Now you can understand the error message you get: you have a convolution layer with <code>kernel_width</code> and <code>kernel_height</code> params, but the data (as far as caffe understands it) has width and height of 1.</p>
<h3>A solution:</h3>
<p>You simply need to <code>reshape</code> your <code>data</code>:</p>
<pre><code>data.shape=(dim,1,1,3253)
</code></pre>
<p>Now <code>data</code> has 1 channel and height 1 for each sample and width of 3253.</p>
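<p>To answer the side question about verifying the file, a minimal sketch that reopens the HDF5 file with <code>h5py</code> and prints the dataset shapes caffe will see (the file name is taken from the script above):</p>
<pre><code>import h5py
with h5py.File('/x/h5Data_train.h5', 'r') as f:
    print f['InputData'].shape   # should be (dim, 1, 1, 3253) after the reshape
    print f['label'].shape       # should be (dim,)
</code></pre>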
<hr>
<p>PS,<br>
You are writing to <code>'/x/hdf5.txt'</code> the actual <em>string</em> <code>'h5_fn'</code> instead of the string stored in the variable <code>h5_fn</code>...</p>
| 1 | 2016-10-09T06:06:02Z | [
"python",
"machine-learning",
"deep-learning",
"caffe",
"hdf5"
]
|
How to display data in tabular format on a txt file | 39,903,199 | <p>I have been stuck on this python problem for hours. I'm trying to figure out how to write the data that can be manually entered above into a txt file in a way that it shows up in a two row eight column table. The contents in name_array are supposed to be headers and the contents in data_array are the actual data pieces.</p>
<pre><code>name = str(raw_input( "Enter the student's name: "))
medianScore = float(raw_input("Enter the median group score for quizzes:"))
indScore = float(raw_input("Enter the score of the individual quiz: "))
assignmentScore = float(raw_input("Enter the score of the assignment: "))
test1Score = float(raw_input("Enter the score of exam one: "))
test2Score = float(raw_input("Enter the score of exam two: "))
test3Score = float(raw_input("Enter the score of the final exam: "))
fileName = str(raw_input("Enter the name of the file you would like to create: "))
f = file(fileName + ".txt" , a)
finalScore = ((medianScore * .14) + (indScore * .14) + (assignmentScore * .12) + (test1Score * .15) +(test2Score * .20) + (test3Score * .25))
data_array = [name, finalScore, test3Score, test1Score, test2Score, assignmentScore, indScore, medianScore]
name_array = [ "Student", "Final Grade", "Final Exam", "Exam 1", "Exam 2", "Assignments", "Solo Quizzes", "Group Quizzes"]
</code></pre>
| 0 | 2016-10-06T18:27:33Z | 39,903,249 | <p>Have you tried something like:</p>
<pre><code>output_file = 'out.txt'
# open in write mode so the file is created; the scores are floats, so convert to str before joining
with open(output_file, 'w') as out_file:
    out_file.write('\t'.join(name_array) + '\n')
    out_file.write('\t'.join(str(item) for item in data_array) + '\n')
</code></pre>
| -1 | 2016-10-06T18:30:12Z | [
"python",
"file",
"tabular"
]
|
How to display data in tabular format on a txt file | 39,903,199 | <p>I have been stuck on this python problem for hours. I'm trying to figure out how to write the data that can be manually entered above into a txt file in a way that it shows up in a two row eight column table. The contents in name_array are supposed to be headers and the contents in data_array are the actual data pieces.</p>
<pre><code>name = str(raw_input( "Enter the student's name: "))
medianScore = float(raw_input("Enter the median group score for quizzes:"))
indScore = float(raw_input("Enter the score of the individual quiz: "))
assignmentScore = float(raw_input("Enter the score of the assignment: "))
test1Score = float(raw_input("Enter the score of exam one: "))
test2Score = float(raw_input("Enter the score of exam two: "))
test3Score = float(raw_input("Enter the score of the final exam: "))
fileName = str(raw_input("Enter the name of the file you would like to create: "))
f = file(fileName + ".txt" , a)
finalScore = ((medianScore * .14) + (indScore * .14) + (assignmentScore * .12) + (test1Score * .15) +(test2Score * .20) + (test3Score * .25))
data_array = [name, finalScore, test3Score, test1Score, test2Score, assignmentScore, indScore, medianScore]
name_array = [ "Student", "Final Grade", "Final Exam", "Exam 1", "Exam 2", "Assignments", "Solo Quizzes", "Group Quizzes"]
</code></pre>
| 0 | 2016-10-06T18:27:33Z | 39,903,667 | <p>If you want to simply output a csv-like file you can use the <code>csv</code> package:</p>
<pre><code>import csv
writer = csv.writer(f, delimiter='\t')
writer.writerow(name_array)
writer.writerow(data_array)
</code></pre>
<p>It will output:</p>
<pre><code>Student Final Grade Final Exam Exam 1 Exam 2 Assignments Solo Quizzes Group Quizzes
asd 3.88 6 4 5 3 2 1
</code></pre>
<p>In this example use <code>tab</code>s as separator, but you can cange it with any char you want. See <a href="https://docs.python.org/2/library/csv.html#csv.writer" rel="nofollow">this documentation</a> for more options.</p>
<hr>
<p>Instead if you want something more human-readable you can use intead the <code>tabulate</code> package:</p>
<pre><code>from tabulate import tabulate
f.write(tabulate([data_array], headers=name_array))
</code></pre>
<p>It will produce:</p>
<pre><code>Student Final Grade Final Exam Exam 1 Exam 2 Assignments Solo Quizzes Group Quizzes
--------- ------------- ------------ -------- -------- ------------- -------------- ---------------
asd 3.88 6 4 5 3 2 1
</code></pre>
<p>See <a href="https://pypi.python.org/pypi/tabulate" rel="nofollow">this documentation</a> for more options to format your table.</p>
| 1 | 2016-10-06T18:55:40Z | [
"python",
"file",
"tabular"
]
|
Python: Create file from list of dictionaries | 39,903,222 | <p>I have about 5000 .gz files from which I have to extract the data which is in form of "list of dictionaries". </p>
<p>Sample Source Data :</p>
<pre><code>{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048"}
{"user" : "M101", "ip" : "192.0.0.1", "usage" : "5000",
"time" : "12038098048", "Device" : "iOS" , "user_type" : "Premium"}
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "A101", "ip" : "192.0.0.3", "usage" : "2000",
"time" : "12038098048", "user_type" : "Platinum" }
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048" }
</code></pre>
<p>Each line above represents data for a particular event; users <code>J101</code> and <code>T101</code> reported data twice, so they each have 2 rows. </p>
<p>I am in the initial phase of writing this code, so I started by extracting data from one .gz file and trying to see if I can parse the data of interest and create a .txt or .csv file. </p>
<p>My requirement is to get only a few attributes from these files, like <code>user</code>, <code>ip</code>, <code>time</code> and <code>usage</code>.</p>
<p>Below is the code I wrote to extract the data from a .gz file and store it in the form of a list of dictionaries.</p>
<pre><code>import gzip
from collections import defaultdict
import json
import csv
e_dict = { 'userid' : { 'e_name' : 'user'},
'ipaddr' : { 'e_name' : 'ip' },
'event_time' : { 'e_name' : 'time' },
'usage_in_mb' : { 'e_name' : 'usage' }
}
dict_list = []
inputdict = defaultdict(lambda: None)
count_valueerror = 0
class parser(object):
def read_entries(self):
count = 0
with gzip.open('testfile.gz', 'r') as test:
for row in test:
try:
# Few rows are empty in the source file and have a new line character
if row == "\n":
continue
else:
# Changing the type of each row in file to string type for parsing dictionary
row_new = json.loads(row)
for key, val in e_dict.iteritems():
if val['e_name'] in row_new:
inputdict[key] = row_new[val['e_name']]
except ValueError:
count_valueerror += 1
dict_list.append(inputdict)
def create_csv(self):
with open('dict.csv', 'wb') as csv_file:
for row in dict_list:
for key, val in row:
csvwriter = csv.DictWriter(csv_file, fieldnames= row.keys(), extrasaction='raise', dialect='excel')
csvwriter.writeheader()
csvwriter.writerows(val)
return csv_file
</code></pre>
<p>The <code>create_csv</code> method isn't working correctly. I am not sure how to parse the <code>dict_list</code> and take each dictionary object to write it in csv/text file.</p>
<p>I am getting this error
<code>ValueError: dict contains fields not in fieldnames: 'p</code> for <code>create_csv</code> method.</p>
| 0 | 2016-10-06T18:28:37Z | 39,903,526 | <p>At some point in your code you have row set to a string, say,</p>
<pre><code>'{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA", "time" : "12038098048"}'
</code></pre>
<p>Then to get what you want calculate,</p>
<pre><code>[eval(row)[_] for _ in ['user', 'ip', 'time', 'usage']]
</code></pre>
<p>to obtain a result like,</p>
<pre><code>['J101', '192.0.0.0', '12038098048', '1000']
</code></pre>
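<p>Since each row in the sample is valid JSON, <code>json.loads</code> is a safer alternative to <code>eval</code> here; a minimal sketch:</p>
<pre><code>import json
parsed = json.loads(row)
[parsed.get(k) for k in ['user', 'ip', 'time', 'usage']]
# ['J101', '192.0.0.0', '12038098048', '1000']   (keys missing from a row come back as None)
</code></pre>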
| 0 | 2016-10-06T18:46:26Z | [
"python",
"json",
"csv",
"dictionary"
]
|
Python: Create file from list of dictionaries | 39,903,222 | <p>I have about 5000 .gz files from which I have to extract the data which is in form of "list of dictionaries". </p>
<p>Sample Source Data :</p>
<pre><code>{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048"}
{"user" : "M101", "ip" : "192.0.0.1", "usage" : "5000",
"time" : "12038098048", "Device" : "iOS" , "user_type" : "Premium"}
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "A101", "ip" : "192.0.0.3", "usage" : "2000",
"time" : "12038098048", "user_type" : "Platinum" }
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048" }
</code></pre>
<p>Each line above represents data for a particular event; users <code>J101</code> and <code>T101</code> reported data twice, so they each have 2 rows. </p>
<p>I am in the initial phase of writing this code, so I started by extracting data from one .gz file and trying to see if I can parse the data of interest and create a .txt or .csv file. </p>
<p>My requirement is to get only a few attributes from these files, like <code>user</code>, <code>ip</code>, <code>time</code> and <code>usage</code>.</p>
<p>Below is the code I wrote to extract the data from a .gz file and store it in the form of a list of dictionaries.</p>
<pre><code>import gzip
from collections import defaultdict
import json
import csv
e_dict = { 'userid' : { 'e_name' : 'user'},
'ipaddr' : { 'e_name' : 'ip' },
'event_time' : { 'e_name' : 'time' },
'usage_in_mb' : { 'e_name' : 'usage' }
}
dict_list = []
inputdict = defaultdict(lambda: None)
count_valueerror = 0
class parser(object):
def read_entries(self):
count = 0
with gzip.open('testfile.gz', 'r') as test:
for row in test:
try:
# Few rows are empty in the source file and have a new line character
if row == "\n":
continue
else:
# Changing the type of each row in file to string type for parsing dictionary
row_new = json.loads(row)
for key, val in e_dict.iteritems():
if val['e_name'] in row_new:
inputdict[key] = row_new[val['e_name']]
except ValueError:
count_valueerror += 1
dict_list.append(inputdict)
def create_csv(self):
with open('dict.csv', 'wb') as csv_file:
for row in dict_list:
for key, val in row:
csvwriter = csv.DictWriter(csv_file, fieldnames= row.keys(), extrasaction='raise', dialect='excel')
csvwriter.writeheader()
csvwriter.writerows(val)
return csv_file
</code></pre>
<p>The <code>create_csv</code> method isn't working correctly. I am not sure how to parse the <code>dict_list</code> and take each dictionary object to write it in csv/text file.</p>
<p>I am getting this error
<code>ValueError: dict contains fields not in fieldnames: 'p</code> for <code>create_csv</code> method.</p>
| 0 | 2016-10-06T18:28:37Z | 39,905,159 | <p>I think the problem might be in your CSV file writer method. You seem to be writing the header of the file and a row with the data <strong>for every key of every row</strong>.</p>
<p>You can try something like this:</p>
<pre><code>def create_csv(dict_list):
with open('dict.csv', 'w') as csv_file:
# Create writer, using first item's keys as header values
csvwriter = csv.DictWriter(csv_file, fieldnames=dict_list[0].keys(), extrasaction='raise', dialect='excel')
# Write the header
csvwriter.writeheader()
# Iterate rows in dictionary list
for row in dict_list:
# Write row
csvwriter.writerow(row)
return csv_file
</code></pre>
<p>I tried on my machine and it works. Let me know if that's what you needed.</p>
| 0 | 2016-10-06T20:33:07Z | [
"python",
"json",
"csv",
"dictionary"
]
|
Python: Create file from list of dictionaries | 39,903,222 | <p>I have about 5000 .gz files from which I have to extract the data which is in form of "list of dictionaries". </p>
<p>Sample Source Data :</p>
<pre><code>{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048"}
{"user" : "M101", "ip" : "192.0.0.1", "usage" : "5000",
"time" : "12038098048", "Device" : "iOS" , "user_type" : "Premium"}
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "A101", "ip" : "192.0.0.3", "usage" : "2000",
"time" : "12038098048", "user_type" : "Platinum" }
{"user" : "T101", "usage" : "10", "Location" : "AK","time" : "12038098048"}
{"user" : "J101", "ip" : "192.0.0.0", "usage" : "1000", "Location" : "CA",
"time" : "12038098048" }
</code></pre>
<p>Each line above represents data for a particular event; users <code>J101</code> and <code>T101</code> reported data twice, so they each have 2 rows. </p>
<p>I am in the initial phase of writing this code, so I started by extracting data from one .gz file and trying to see if I can parse the data of interest and create a .txt or .csv file. </p>
<p>My requirement is to get only a few attributes from these files, like <code>user</code>, <code>ip</code>, <code>time</code> and <code>usage</code>.</p>
<p>Below is the code I wrote to extract the data from a .gz file and store it in the form of a list of dictionaries.</p>
<pre><code>import gzip
from collections import defaultdict
import json
import csv
e_dict = { 'userid' : { 'e_name' : 'user'},
'ipaddr' : { 'e_name' : 'ip' },
'event_time' : { 'e_name' : 'time' },
'usage_in_mb' : { 'e_name' : 'usage' }
}
dict_list = []
inputdict = defaultdict(lambda: None)
count_valueerror = 0
class parser(object):
def read_entries(self):
count = 0
with gzip.open('testfile.gz', 'r') as test:
for row in test:
try:
# Few rows are empty in the source file and have a new line character
if row == "\n":
continue
else:
# Changing the type of each row in file to string type for parsing dictionary
row_new = json.loads(row)
for key, val in e_dict.iteritems():
if val['e_name'] in row_new:
inputdict[key] = row_new[val['e_name']]
except ValueError:
count_valueerror += 1
dict_list.append(inputdict)
def create_csv(self):
with open('dict.csv', 'wb') as csv_file:
for row in dict_list:
for key, val in row:
csvwriter = csv.DictWriter(csv_file, fieldnames= row.keys(), extrasaction='raise', dialect='excel')
csvwriter.writeheader()
csvwriter.writerows(val)
return csv_file
</code></pre>
<p>The <code>create_csv</code> method isn't working correctly. I am not sure how to parse the <code>dict_list</code> and take each dictionary object to write it in csv/text file.</p>
<p>I am getting this error
<code>ValueError: dict contains fields not in fieldnames: 'p</code> for <code>create_csv</code> method.</p>
| 0 | 2016-10-06T18:28:37Z | 39,905,489 | <p>Modify...</p>
<p>First, build two lists and one dictionary (<strong>cveFieldName</strong>, <strong>eventFieldName</strong>, <strong>inputdict</strong>):</p>
<pre><code>inputdict = {}
e_list = ('userid', 'user'), ('ipaddr', 'ip'),\
('event_time', 'time'), ('usage_in_mb', 'usage'),\
('test_1', 'test1'), ('test_2', 'test2'),\
('test_3', 'test4'), ('test_5', 'test6')
cveFieldName, eventFieldName = zip(*e_list)
</code></pre>
<p>Then use the <strong>eventFieldName</strong> list in <code>read_entries</code>; the <strong>inputdict.clear()</strong> call can be removed:</p>
<pre><code>def read_entries(self):
count_valueerror = 0
with gzip.open('test.gz', 'r') as test:
for row in test:
try:
# Few rows are empty in the source file
# and have a new line character
if row == "\n":
continue
else:
# Changing the type of each row in file
# to string type for parsing dictionary
row_new = json.loads(row)
for idx, x in enumerate(eventFieldName):
inputdict[cveFieldName[idx]] = row_new[x] if x in row_new else ''
except ValueError as e:
print e
count_valueerror += 1
dict_list.append(dict(inputdict))
# inputdict.clear()
</code></pre>
<p><strong>cveFieldName</strong> use</p>
<pre><code>def create_csv(self):
with open('dict.csv', 'wb') as csv_file:
csvwriter = csv.DictWriter(
csv_file,
fieldnames=cveFieldName,
extrasaction='raise',
dialect='excel')
csvwriter.writeheader()
for row in dict_list:
try:
csvwriter.writerow(row)
except Exception as e:
print e
return csv_file
</code></pre>
<p>dict.csv</p>
<pre><code>userid,ipaddr,event_time,usage_in_mb,test_1,test_2,test_3,test_5
J101,192.0.0.0,12038098048,1000,,,,
M101,192.0.0.1,12038098048,5000,,,,
T101,,12038098048,10,,,,
A101,192.0.0.3,12038098048,2000,,,,
T101,,12038098048,10,,,,
J101,192.0.0.0,12038098048,1000,,,,
</code></pre>
<p><s><strong>inputdict.clear()</strong> <= Required instruction</p>
<pre><code>def read_entries(self):
count_valueerror = 0
with gzip.open('test.gz', 'r') as test:
for row in test:
# import pdb; pdb.set_trace()
try:
# Few rows are empty in the source file
# and have a new line character
if row == "\n":
continue
else:
# Changing the type of each row in file
# to string type for parsing dictionary
row_new = json.loads(row)
for key, val in e_dict.iteritems():
if val['e_name'] in row_new:
inputdict[key] = row_new[val['e_name']]
except ValueError as e:
print e
count_valueerror += 1
dict_list.append(dict(inputdict))
inputdict.clear() # <==== very important
def create_csv(self):
with open('dict.csv', 'wb') as csv_file:
csvwriter = csv.DictWriter(
csv_file,
fieldnames=['userid', 'ipaddr', 'event_time', 'usage_in_mb'],
extrasaction='raise',
dialect='excel')
csvwriter.writeheader()
for row in dict_list:
try:
csvwriter.writerow(row)
except Exception as e:
print e
return csv_file
</code></pre>
<p>Output without the <code>inputdict.clear()</code> instruction:</p>
<pre><code>userid,ipaddr,event_time,usage_in_mb
J101,192.0.0.0,12038098048,1000
M101,192.0.0.1,12038098048,5000
T101,192.0.0.1,12038098048,10 <=== M101 ip address
A101,192.0.0.3,12038098048,2000
T101,192.0.0.3,12038098048,10 <=== A101 ip address
J101,192.0.0.0,12038098048,1000
</code></pre>
<p></s></p>
| 0 | 2016-10-06T20:56:12Z | [
"python",
"json",
"csv",
"dictionary"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
| 32 | 2016-10-06T18:29:51Z | 39,903,338 | <p>Use <code>.get()</code> with a default argument of <code>"N/A"</code> which will be used if the key does not exist:</p>
<pre><code>nObject.TextString = self.var.jobDetails.get("Overall Weight", "N/A")
</code></pre>
<h1>Update</h1>
<p>If empty strings need to be handled, simply modify as follows:</p>
<pre><code>nObject.TextString = self.var.jobDetails.get("Overall Weight") or "N/A"
</code></pre>
<p>This will set <code>nObject.TextString</code> to "N/A" if a <code>KeyError</code> is raised, or if the value retrieved is empty: <code>''</code>, <code>[]</code>, etc.</p>
| 9 | 2016-10-06T18:34:43Z | [
"python",
"python-2.7"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
 | 32 | 2016-10-06T18:29:51Z | 39,903,350 | <p>Use the <code>get()</code> function for dictionaries. It will return <code>None</code> if the key doesn't exist, or, if you specify a second value, it will use that as the default. Then your syntax will look like:</p>
<pre><code>nObject.TextString = self.var.jobDetails.get('Overall Weight', 'N/A')
</code></pre>
| 16 | 2016-10-06T18:35:11Z | [
"python",
"python-2.7"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
| 32 | 2016-10-06T18:29:51Z | 39,903,442 | <p>I think this is a good case for setting the default value in advance</p>
<pre><code>if nObject.TextString == "overall_weight":
nObject.TextString = "N/A"
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
except KeyError:
pass
</code></pre>
<p><strong>RADICAL RETHINK</strong></p>
<p>Ditch that first answer (just keeping it because it got an upvote). If you really want to go pythonic, (and you always want to set a value on TextString) replace the whole thing with</p>
<pre><code>nObject.TextString = (nObject.TextString == "overall_weight"
and self.var.jobDetails.get("Overall Weight")
or "N/A")
</code></pre>
<p>Python <code>and</code> and <code>or</code> operations return their last calculated value, not True/False and you can use that to walk through the combinations.</p>
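<p>A couple of quick illustrations of that short-circuit behaviour (the values are assumptions):</p>
<pre><code>"" or "N/A"          # -> 'N/A'  (empty string is falsey, so the right operand is returned)
"12.5" or "N/A"      # -> '12.5'
False and "ignored"  # -> False  (and short-circuits, returning the left operand)
</code></pre>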
| 3 | 2016-10-06T18:40:43Z | [
"python",
"python-2.7"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
| 32 | 2016-10-06T18:29:51Z | 39,903,519 | <p>Use <code>dict.get()</code> which will return the value associated with the given key if it exists otherwise <code>None</code>. (Note that <code>''</code> and <code>None</code> are both falsey values.) If <code>s</code> is true then assign it to <code>nObject.TextString</code> otherwise give it a value of <code>"N/A"</code>.</p>
<pre><code>if nObject.TextString == "overall_weight":
nObject.TextString = self.var.jobDetails.get("Overall Weight") or "N/A"
</code></pre>
| 63 | 2016-10-06T18:45:36Z | [
"python",
"python-2.7"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
| 32 | 2016-10-06T18:29:51Z | 39,904,193 | <p>How about <a href="http://effbot.org/zone/python-with-statement.htm" rel="nofollow"><code>with</code></a>:</p>
<pre><code>key = 'Overall Weight'
with n_object.text_string = self.var.job_details[key]:
if self.var.job_details[key] is None \
or if self.var.job_details[key] is '' \
or if KeyError:
n_object.text_string = 'N/A'
</code></pre>
| -2 | 2016-10-06T19:29:13Z | [
"python",
"python-2.7"
]
|
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | <p>I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. <code>"overall_weight"</code> in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the <code>"overall_weight"</code> keywords should be replaced with <code>"N/A"</code>. I was wondering if there was a more pythonic way to combine the <code>KeyError</code> exception and the <code>else</code> to both go to <code>nObject.TextString = "N/A"</code> so its not typed twice.</p>
<pre><code>if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
</code></pre>
<p>Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.</p>
<ol>
<li><p><code>dict[key]</code> exists and points to a non-empty string. <code>TextString</code> replaced with the value assigned to <code>dict[key]</code>.</p></li>
<li><p><code>dict[key]</code> exists and points to a empty string. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
<li><p><code>dict[key]</code> doesn't exist. <code>TextString</code> replaced with <code>"N/A"</code>.</p></li>
</ol>
| 32 | 2016-10-06T18:29:51Z | 39,907,658 | <p>The Zen of Python says "Explicit is better than implicit." I have found this to be very true in my own experience. When I write a piece of code I think to my self, "Will I understand what this means a year from now?" If the answer is "no" then it needs to be re-written or documented. The accepted answer relies on remembering the implementation of dict.get to know how it will handle the corner cases. Since the OP has 3 clear criteria, I would instead document them clearly in an if statement.</p>
<pre><code>if nObject.TextString == "overall_weight" and \
"Overall Weight" in self.var.jobDetails and \
self.var.jobDetails["Overall Weight"] != "":
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
</code></pre>
<p>It's certainly more verbose... but that's a good thing. There is no question when reading this what the behavior will be.</p>
| 2 | 2016-10-07T00:35:17Z | [
"python",
"python-2.7"
]
|
Breaking down rows in Pyspark DataFrame | 39,903,246 | <p>I have a PySpark DataFrame in this format:</p>
<pre><code> dbn | bus | subway | score
----------|----------------|----------|--------
XYZ12 | B1, B44, B66 | A, C | 59
ZYY3 | B8, B3, B7 | J, Z | 66
</code></pre>
<p>What I want to do is attach the score to every individual bus and subway line; however, I want to work on one column at a time, so I'll start with bus. Ultimately, what I want my DataFrame to look like is this (when I'm working with the bus column):</p>
<pre><code>dbn | bus | subway | score
---------|-----------|---------|-------
XYZ12 | B1 | A, C | 59
XYZ12 | B44 | A, C | 59
XYZ12 | B66 | A, C | 59
ZYY3 | B8 | J, Z | 66
ZYY3 | B3 | J, Z | 66
ZYY3 | B7 | J, Z | 66
</code></pre>
<p>How would I go about doing this?</p>
 | 0 | 2016-10-06T18:30:07Z | 39,904,913 | <p>You can use the <code>explode</code> function, which expects an <code>array</code> or a <code>map</code> column as input. If <code>bus</code> is a string, you can use string processing functions, like <code>split</code>, to break it into pieces first. Let's assume this scenario:</p>
<pre><code>df = sc.parallelize([
("XYZ12", "B1, B44, B66", "A, C", 59),
("ZYY3 ", "B8, B3, B7", "J, Z", 66)
]).toDF(["dbn", "bus", "subway", "score"])
</code></pre>
<p>First import required functions:</p>
<pre><code>from pyspark.sql.functions import col, explode, split, trim
</code></pre>
<p>add column:</p>
<pre><code>with_bus_exploded = df.withColumn("bus", explode(split("bus", ",")))
</code></pre>
<p>and <code>trim</code> leading / trailing spaces:</p>
<pre><code>with_bus_trimmed = with_bus_exploded.withColumn("bus", trim(col("bus")))
</code></pre>
<p>Finally the result is:</p>
<pre class="lang-none prettyprint-override"><code>+-----+---+------+-----+
| dbn|bus|subway|score|
+-----+---+------+-----+
|XYZ12| B1| A, C| 59|
|XYZ12|B44| A, C| 59|
|XYZ12|B66| A, C| 59|
|ZYY3 | B8| J, Z| 66|
|ZYY3 | B3| J, Z| 66|
|ZYY3 | B7| J, Z| 66|
+-----+---+------+-----+
</code></pre>
| 0 | 2016-10-06T20:16:21Z | [
"python",
"dataframe",
"pyspark",
"pyspark-sql"
]
|
how framing byte packet to time length from android to python server? | 39,903,258 | <p>I'm trying to develop an application which sends PCM data to a Python server.</p>
<p>I used AudioRecord library to get real-time audio signal.</p>
<p>And this is the source code.</p>
<pre><code>/*------ setting audio recording ------*/
private static final int SAMPLE_RATE = 44100;
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private boolean isRecording = true;
private AudioRecord recorder = null;
private Thread recordingThread;
private AudioTrack player;
//byte[] TotalByteMessage;
/*------ about socket communication ------*/
public DatagramSocket socket;
private int port = 7979;
String IP = "192.168.0.4";
/*------ Recording, Playing and Sending packets method ------*/
private void startStreaming() {
recordingThread = new Thread(new Runnable() {
@Override
public void run() {
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
try {
/*------about socket------*/
socket = new DatagramSocket();
Log.d(LOG_NW, "Socket Created!");
DatagramPacket packet;
InetAddress destination = InetAddress.getByName(IP);
Log.d(LOG_NW, "Address retrieved!");
/*------setting recording && playing------*/
//get MinBufferSize for audio recording
int Buffer_Size = AudioRecord.getMinBufferSize(SAMPLE_RATE,
RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
Log.d(LOG_Audio, "Min buffer size is " + Buffer_Size);
if (Buffer_Size == AudioRecord.ERROR || Buffer_Size == AudioRecord.ERROR_BAD_VALUE) {
Buffer_Size = SAMPLE_RATE * 2;
}
recorder = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION,
SAMPLE_RATE, RECORDER_CHANNELS,
RECORDER_AUDIO_ENCODING, Buffer_Size);
if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
Log.d(LOG_Audio, "Audio Record can't initialize!");
return;
}
player = new AudioTrack(AudioManager.STREAM_MUSIC,
SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
RECORDER_AUDIO_ENCODING, Buffer_Size,
AudioTrack.MODE_STREAM);
Log.d(LOG_Audio, "ready for playing music by using audiotrack");
player.setPlaybackRate(SAMPLE_RATE);
byte[] audioBuffer = new byte[Buffer_Size];
Log.d(LOG_Audio, "AudioBuffer created of size " + Buffer_Size);
recorder.startRecording();
Log.d(LOG_Audio, "Start Recording!");
player.play();
Log.d(LOG_Audio, "Start Playing!");
while (isRecording == true) {
//reading data from MIC into buffer
recorder.read(audioBuffer, 0, audioBuffer.length);
player.write(audioBuffer, 0, audioBuffer.length);
//putting buffer in the packet
packet = new DatagramPacket(audioBuffer, audioBuffer.length, destination, port);
socket.send(packet);
Log.d(LOG_NW, "packet sending to " + destination + " with port : " + port);
}
} catch (UnknownHostException e) {
Log.d(LOG_Audio, "UnknownHostException");
} catch (IOException e) {
Log.d(LOG_Audio, "IOException");
}
}
}); // end of recordingThread
recordingThread.start();
}
</code></pre>
<hr>
<p>and this is the python server code.</p>
<pre><code>import socket
import numpy as np
import matplotlib.pyplot as plt
IP = "192.168.0.4"
server_address = (IP, 7979)
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
buffer_size = 3584
server.bind(server_address)
i = True
while(i):
print "Listening...\n"
packet, client = server.recvfrom(buffer_size)
#Convert packet to numpy array
signal = np.fromstring(packet, dtype=np.int16)
i=False
server.close()
</code></pre>
<p>With this python code, I receive only one packet.</p>
<p>But later, I'll make a list to receive several packets.</p>
<p>I want to make a frame with a length of 64 ms (in time) for windowing and FFT (Fast Fourier Transform), but the problem is that a packet is 3584 bytes.</p>
<p>So I don't know how to do the windowing and FFT on the byte packets from Android in the Python server.</p>
<p>How can I make a frame using time length? </p>
| 0 | 2016-10-06T18:30:50Z | 39,903,599 | <p>You have a problem with your while loop in your python code</p>
<pre><code>i = True
while(i):
print "Listening...\n"
packet, client = server.recvfrom(buffer_size)
#Convert packet to numpy array
signal = np.fromstring(packet, dtype=np.int16)
i=False
server.close()
</code></pre>
<p>This will enter the loop, since <code>i == True</code>, then go through one iteration, and set <code>i = False</code> at the end of the first iteration, which will cause the loop to finish.</p>
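<p>To get to the framing part of the question: one way is to keep appending incoming samples to a buffer and cut it into 64 ms frames (about 2822 samples at 44100 Hz, 16-bit mono). A minimal sketch, reusing <code>server</code> and <code>buffer_size</code> from the question (the frame handling details are assumptions):</p>
<pre><code>import numpy as np
FRAME_SAMPLES = int(0.064 * 44100)   # 64 ms at 44100 Hz
buffered = np.array([], dtype=np.int16)
while True:
    packet, client = server.recvfrom(buffer_size)
    buffered = np.concatenate([buffered, np.fromstring(packet, dtype=np.int16)])
    while len(buffered) >= FRAME_SAMPLES:
        frame = buffered[:FRAME_SAMPLES]
        buffered = buffered[FRAME_SAMPLES:]
        spectrum = np.fft.rfft(frame * np.hanning(FRAME_SAMPLES))
        # ... process the 64 ms spectrum here ...
</code></pre>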
| 0 | 2016-10-06T18:51:10Z | [
"android",
"python",
"sockets",
"fft"
]
|
Scikit-learn fails to import only in Jupyter notebook | 39,903,337 | <p>I have Anaconda installed on OS X. I am able to import sklearn from a python terminal and an IPython terminal. But when I try to import sklearn from a Jupyter notebook, I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-8fd979e02004> in <module>()
----> 1 import sklearn
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/sklearn/__init__.py in <module>()
55 else:
56 from . import __check_build
---> 57 from .base import clone
58 __check_build # avoid flakes unused variable error
59
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/sklearn/base.py in <module>()
10 from scipy import sparse
11 from .externals import six
---> 12 from .utils.fixes import signature
13 from .utils.deprecation import deprecated
14 from .exceptions import ChangedBehaviorWarning as _ChangedBehaviorWarning
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/sklearn/utils/__init__.py in <module>()
9
10 from .murmurhash import murmurhash3_32
---> 11 from .validation import (as_float_array,
12 assert_all_finite,
13 check_random_state, column_or_1d, check_array,
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/sklearn/utils/validation.py in <module>()
16
17 from ..externals import six
---> 18 from ..utils.fixes import signature
19 from .deprecation import deprecated
20 from ..exceptions import DataConversionWarning as _DataConversionWarning
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/sklearn/utils/fixes.py in <module>()
288 from ._scipy_sparse_lsqr_backport import lsqr as sparse_lsqr
289 else:
--> 290 from scipy.sparse.linalg import lsqr as sparse_lsqr
291
292
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/scipy/sparse/linalg/__init__.py in <module>()
110 from __future__ import division, print_function, absolute_import
111
--> 112 from .isolve import *
113 from .dsolve import *
114 from .interface import *
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/__init__.py in <module>()
4
5 #from info import __doc__
----> 6 from .iterative import *
7 from .minres import minres
8 from .lgmres import lgmres
/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/iterative.py in <module>()
5 __all__ = ['bicg','bicgstab','cg','cgs','gmres','qmr']
6
----> 7 from . import _iterative
8
9 from scipy.sparse.linalg.interface import LinearOperator
ImportError: dlopen(/Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/_iterative.so, 2): Library not loaded: /usr/local/lib/libgcc_s.1.dylib
Referenced from: /Users/joe/anaconda/envs/data_env/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/_iterative.so
Reason: image not found
</code></pre>
<p>I can import numpy, scipy, and pandas fine from the Jupyter notebook. It is just sklearn that fails.</p>
<p>I have also tried creating a new conda environment (<code>conda create -n test_env jupyter notebook matplotlib scipy numpy pandas scikit-learn</code>), but the error persists in the new environment as well.</p>
| 0 | 2016-10-06T18:34:42Z | 39,906,922 | <p>I managed to figure out what was going on, so I'll post my solution here in case anyone else runs into the same problem. As it turns out, I had modified the <code>DYLD_FALLBACK_LIBRARY_PATH</code> environment variable in my <code>.bashrc</code> file when I had installed another piece of software. Restoring this environment variable to its default fixed the problem for me.</p>
<p>(Incidentally, scikit-learn was failing to import in a standard Python terminal as well. I didn't initially realize this because I was testing the Python terminal in an environment in which I had accidentally restored the environment variables to their defaults, overwriting the change I had made in my <code>.bashrc</code> file.)</p>
| 2 | 2016-10-06T23:03:09Z | [
"python",
"scikit-learn",
"importerror",
"jupyter-notebook"
]
|
PyCharm can't find module beatifulsoup4 | 39,903,388 | <p>I installed beautifulsoup4 through the Project Interpreter in PyCharm, and when I try to import it, it says that the module doesn't exist. I've checked the spelling a billion times; I don't get it.</p>
| -2 | 2016-10-06T18:38:09Z | 39,903,449 | <blockquote>
<p>can't find module <code>beatifulsoup4</code></p>
</blockquote>
<p>You should be importing <code>bs4</code>, not <code>beautifulsoup4</code>:</p>
<pre><code>from bs4 import BeautifulSoup
</code></pre>
| 3 | 2016-10-06T18:41:00Z | [
"python",
"module",
"beautifulsoup",
"pycharm"
]
|
Raspberry Pi python website with plotly graph updates | 39,903,438 | <p>I am currently working on a project from home where I have a network of Arduinos sending data (temperature, humidity, etc.) to a Raspberry Pi. I want to make the Pi take the data and, using plotly, make a variety of graphs and then embed said graphs into a website that automatically updates at a set interval. I already have the network up and running; I am just stuck on how to get the graphs onto an HTML page and have them update. I was considering just running a Python script that makes a webpage and re-writes it with the new graphs every time. This seems highly inefficient, so I was wondering if there is a better way of doing it?</p>
| 0 | 2016-10-06T18:40:30Z | 39,915,124 | <p>Some time ago I had a very similar problem. A very simple solution was to use Python3's <code>http.server</code> to return a <code>JSON</code> with a time stamp and the temperature.</p>
<pre><code># !/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import random
import json
import time
def send_header(BaseHTTPRequestHandler):
BaseHTTPRequestHandler.send_response(200)
BaseHTTPRequestHandler.send_header('Access-Control-Allow-Origin', '*')
    BaseHTTPRequestHandler.send_header('Content-type', 'application/json')
BaseHTTPRequestHandler.end_headers()
class MyRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
# returns the temperature
if self.path == '/temperature':
send_header(self)
self.wfile.write(bytes(json.dumps({'time': time.strftime('%H:%M:%S', time.gmtime()), 'temperature': random.randint(0, 100)}), 'utf-8'))
if __name__ == '__main__':
# start server
server = HTTPServer(('', 8099), MyRequestHandler)
server.serve_forever()
</code></pre>
<p>The data is then received via simple vanilla <code>JavaScript</code> and put into <code>plotly</code>. Every 1000 ms a request is sent to the server and the graph is updated accordingly.</p>
<pre><code><html>
<head>
<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
<script>
var temperatures;
var temperatures_x = [];
var temperatures_y = [];
var server_url = "";
//basic request handler
function createRequest() {
var result = null;
if (window.XMLHttpRequest) {
// FireFox, Safari, etc.
result = new XMLHttpRequest();
if (typeof result.overrideMimeType != "undefined") {
result.overrideMimeType("text/xml"); // Or anything else
}
} else if (window.ActiveXObject) {
// MSIE
result = new ActiveXObject("Microsoft.XMLHTTP");
}
return result;
}
//gets the temperature from the Python3 server
function update_temperatures() {
var req = createRequest();
req.onreadystatechange = function () {
if (req.readyState !== 4) {
return;
}
temperatures = JSON.parse(req.responseText);
return;
};
req.open("GET", server_url + "/temperature", true);
req.send();
return;
}
//updates the graph
function update_graph() {
update_temperatures();
temperatures_x.push(temperatures.time)
temperatures_y.push(temperatures.temperature)
Plotly.newPlot('graph_t', [{x: temperatures_x, y: temperatures_y}]);
}
//initializes everything
window.onload = function () {
document.getElementById("url").onchange = function () {
server_url = document.getElementById("url").value;
};
server_url = document.getElementById("url").value;
//timer for updating the functions
var t_cpu = setInterval(update_graph, 1000);
};
</script>
</head>
<body>
<li>
URL and port<input type="text" id="url" value="http://localhost:8099">
</li>
<div class="plotly_graph" id="graph_t"></div>
</body>
</html>
</code></pre>
| 0 | 2016-10-07T10:29:00Z | [
"python",
"html",
"website",
"plotly",
"raspberry-pi3"
]
|
antlr4 grammar non greedy | 39,903,531 | <p>I'm trying to write a grammar that allows me to write any expression in an if statement. </p>
<p>My if statement will be something like below:
if [ x == 1 ] [Do some stuff]</p>
<p>The expression is supposed to be any Python expression. </p>
<p>If I use the non-greedy match like below, how can I specify '[' or ']' as part of the expression? List comprehensions will be a problem with my grammar. </p>
<pre><code>ifval
: (SPACE)* IF (SPACE|WORD)* SQRLBRACE .*? SQRRBRACE (WORD|SPACE)* <blah> <blah>;
WORD : ('a'..'z' | 'A'..'Z'| '_' | '-')+;
NUM : [0-9];
NEWLINE : '\r'? '\n' | '\r';
SPACE : (' ' | '\t') ;
SQRRBRACE: ']';
SQRLBRACE: '[';
</code></pre>
| 0 | 2016-10-06T18:46:43Z | 39,908,724 | <p>Generalized, use a typical statement formulation:</p>
<pre><code>stmt : ifval
| ....
;
ifval : IF expr body ;
expr : LBRACK expr? RBRACK
| NOT expr
| WORD ( LBRACK WORD? RBRACK )? // value or array[idx]
| ....
;
body : WORD
| LBRACE stmt* RBRACE
| ....
;
</code></pre>
<p>The <code>expr</code> rule will handle the occurrence of optional and nested brackets through recursion.</p>
<p>BTW, almost always better to hide whitespace in the grammar, even for Python/whitespace sensitive languages. By hiding, the WS remains easily accessible for computing indentation level. And, the grammar is not then polluted by having to specify every conceivable location of SPACE*.</p>
| 1 | 2016-10-07T03:04:12Z | [
"python",
"antlr4"
]
|
How to get actual line number of where python exception occurred? | 39,903,798 | <p>I have the following python decorator in my file <code>decorators.py</code></p>
<pre><code>def catch_exceptions(function): #Line #1
@wraps(function) #Line #2
def decorator(*args, **kwargs): #Line #3
try: #Line #4
return function(*args, **kwargs) #Line #5
except Exception as e: #Line #6
exc_type, exc_obj, exc_tb = sys.exc_info() #Line #7
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1] #Line #8
print "E=%s, F=%s, L=%s" % (str(e), fname, exc_tb.tb_lineno) #Line #9
return decorator #Line #10
</code></pre>
<p>In another file <code>my_file.py</code>, I use the <code>catch_exceptions</code> decorator like this:</p>
<pre><code>from decorators import catch_exceptions #Line #1
@catch_exceptions #Line #2
def my_method():                           #Line #3
print (10/0 - 5/0) #Line #4
</code></pre>
<p>When I run it, I get the following output:</p>
<pre><code>E=integer division or modulo by zero, F=decorators.py, L=5
</code></pre>
<p>Instead of it reporting the exception location as <code>decorators.py</code>, line #5, How can I get it to report the actual file and line number of where the exception originally occurred? That would be line #4 in <code>my_file.py</code>.</p>
| 1 | 2016-10-06T19:03:37Z | 39,903,979 | <p>You may want to look into the <code>traceback</code> module</p>
<pre><code>import traceback
try:
function_that_raises_exception()
except Exception:
traceback.print_exc()
</code></pre>
<p>It will print the entire stack trace.</p>
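<p>If you would rather capture the trace as a string (for example, to log it from inside the decorator), <code>traceback.format_exc()</code> returns the same text; a minimal sketch:</p>
<pre><code>import traceback
try:
    function_that_raises_exception()
except Exception:
    trace = traceback.format_exc()   # the full stack trace as a string
    print trace
</code></pre>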
| 3 | 2016-10-06T19:15:04Z | [
"python",
"django",
"exception-handling"
]
|
How to get actual line number of where python exception occurred? | 39,903,798 | <p>I have the following python decorator in my file <code>decorators.py</code></p>
<pre><code>def catch_exceptions(function): #Line #1
@wraps(function) #Line #2
def decorator(*args, **kwargs): #Line #3
try: #Line #4
return function(*args, **kwargs) #Line #5
except Exception as e: #Line #6
exc_type, exc_obj, exc_tb = sys.exc_info() #Line #7
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1] #Line #8
print "E=%s, F=%s, L=%s" % (str(e), fname, exc_tb.tb_lineno) #Line #9
return decorator #Line #10
</code></pre>
<p>In another file <code>my_file.py</code>, I use the <code>catch_exceptions</code> decorator like this:</p>
<pre><code>from decorators import catch_exceptions #Line #1
@catch_exceptions #Line #2
def my_method() #Line #3
print (10/0 - 5/0) #Line #4
</code></pre>
<p>When I run it, I get the following output:</p>
<pre><code>E=integer division or modulo by zero, F=decorators.py, L=5
</code></pre>
<p>Instead of it reporting the exception location as <code>decorators.py</code>, line #5, How can I get it to report the actual file and line number of where the exception originally occurred? That would be line #4 in <code>my_file.py</code>.</p>
| 1 | 2016-10-06T19:03:37Z | 39,904,221 | <p>If you want to do it as you described then</p>
<pre><code>from functools import wraps
import sys, os, traceback
def catch_exceptions(function):
@wraps(function)
def decorator(*args, **kwargs):
try:
return function(*args, **kwargs)
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
print "E=%s, F=%s, L=%s" % (str(e), traceback.extract_tb(exc_tb)[-1][0], traceback.extract_tb(exc_tb)[-1][1]) )
return decorator
</code></pre>
<p>But it's still the <a href="https://docs.python.org/2/library/traceback.html#module-traceback" rel="nofollow"><code>traceback</code></a> module that you need to know about.</p>
<p>I believe the <code>filename</code> that was being printed was also a mistake. </p>
<p>So <code>exc_tb</code> is the actual <code>traceback</code> object, and its data is extracted by <a href="https://docs.python.org/2/library/traceback.html#traceback.extract_tb" rel="nofollow"><code>extract_tb()</code></a>, which will do:</p>
<blockquote>
<p>Return a list of up to limit "pre-processed" stack trace entries extracted from the traceback object tb. It is useful for alternate formatting of stack traces. If limit is omitted or <code>None</code>, all entries are extracted. A "pre-processed" stack trace entry is a 4-tuple (filename, line number, function name, text) representing the information that is usually printed for a stack trace.</p>
</blockquote>
<p>So the second-to-last element of <code>traceback.extract_tb(exc_tb)</code> corresponds to the frame inside the decorator, and the last element corresponds to the frame in your function. The last index (<code>-1</code>) is therefore the one we need: <code>traceback.extract_tb(exc_tb)[-1][0]</code> is the filename of your desired file (not <em>decorators.py</em>), and <code>traceback.extract_tb(exc_tb)[-1][1]</code> is the line number where the exception was raised.</p>
| 2 | 2016-10-06T19:30:54Z | [
"python",
"django",
"exception-handling"
]
|
docker-py getarchive destination folder | 39,903,822 | <p>I am following the instructions given in this link for get_archive()<a href="https://docker-py.readthedocs.io/en/stable/api/#get_archive" rel="nofollow">link</a>
but instead of creating my own container I was trying to use an already running container (in my case "docker-nginx") as the input string, with '/usr/share/nginx/html' (the folder on the nginx server where my content resides) as the path. I do get the stat output for that folder, but I want to know how I can give a destination folder to this function, and if I cannot, where is my extracted tar file? I was not able to figure out where my file was downloaded.</p>
<p>here is my code
<code>strm,stat=c.get_archive(container_name,'/usr/share/nginx/html')</code></p>
<p>my output for print(strm) is <code><requests.packages.urllib3.response.HTTPResponse object at 0x7fe3581e1250>
</code>
my output for print(stat) is <code>{u'linkTarget': u'', u'mode': 2147484157, u'mtime': u'2016-10-05T09:37:17.928258508-05:00', u'name': u'html', u'size': 4096}</code></p>
 | 0 | 2016-10-06T19:04:55Z | 39,937,617 | <p>I found out how this function works: it returns the raw response of a request made to our container. An explanation of raw responses can be found <a href="http://docs.python-requests.org/en/master/user/quickstart/#raw-response-content" rel="nofollow">in this link</a>.
In order to extract the raw content we can use the code below, which simply reads the raw data and writes it out, matching the output file (HTML in my case).</p>
<pre><code>raw_data=strm.read()
f= open('out.html', 'w')
f.write(raw_data)
</code></pre>
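<p>Note that the stream returned by <code>get_archive</code> is a tar archive, so if you want the files themselves rather than the raw bytes, a minimal sketch using the standard <code>tarfile</code> module may help (the destination path below is an assumption):</p>
<pre><code>import io
import tarfile
raw_data = strm.read()
# the stream holds a tar archive of the requested path inside the container
archive = tarfile.open(fileobj=io.BytesIO(raw_data))
archive.extractall(path='/tmp/extracted_html')  # assumed destination folder
archive.close()
</code></pre>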
| 0 | 2016-10-08T21:33:59Z | [
"python",
"docker",
"raw-data",
"dockerpy"
]
|
Print file contents using cat in subprocess | 39,903,838 | <p>I am using a subprocess call in python where I have to print a file contents using <code>cat</code>. The file name is a variable that I generate in the python code itself. This is my code:</p>
<pre><code>pid = str(os.getpid())
tmp_file_path = "/tmp/" + pid + "/data_to_synnet"
synnet_output = subprocess.check_output(["cat echo '%s'"%tmp_file_path], shell=True)
</code></pre>
<p>The above code throws an error saying <code>cat: echo: No such file or directory</code>.</p>
<p>However, when I use only <code>subprocess.check_output(["echo '%s'"%tmp_file_path], shell=True)</code>, the variable name is printed correctly.</p>
<p>Also, I tried doing this (<code>cat echo $tmp_file_name</code>) in the command line and it works. Can someone please tell what is wrong?</p>
| 0 | 2016-10-06T19:05:55Z | 39,903,866 | <p>The command you want is this:</p>
<pre><code>"cat '%s'"%tmp_file_path
</code></pre>
<p>Just get rid of the "echo" word.</p>
<p>Alternatively,</p>
<pre><code> synnet_output = subprocess.check_output(["cat", tmp_file_path], shell=False)
</code></pre>
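<p>If the goal is simply to get the file's contents into a Python variable, you may not need a subprocess at all; a minimal sketch reading the file directly (same <code>tmp_file_path</code> as above):</p>
<pre><code>with open(tmp_file_path) as f:
    synnet_output = f.read()
</code></pre>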
| 2 | 2016-10-06T19:07:36Z | [
"python",
"shell",
"subprocess"
]
|
Read in csv file faster | 39,903,867 | <p>I am currently reading in a large csv file (around 100 million lines), using command along the lines of that described in <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a> e.g. :</p>
<pre><code>import csv
with open('eggs.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
process_row(row)
</code></pre>
<p>This is proving rather slow, I suspect because each line is read in individually (requiring lots of read calls to the hard drive). Is there any way of reading the whole csv file in at once, and then iterating over it? Although the file itself is large in size (e.g. 5Gb), my machine has sufficient ram to hold that in memory.</p>
| 0 | 2016-10-06T19:07:37Z | 39,903,946 | <pre><code>import pandas as pd
df =pd.DataFrame.from_csv('filename.csv')
</code></pre>
<p>This will read it in as a pandas dataframe so you can do all sorts of fun things with it</p>
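<p>For the space-delimited, <code>|</code>-quoted file in the question, a sketch using <code>pd.read_csv</code> may be closer to what you need (the <code>header=None</code> and the iteration style are assumptions):</p>
<pre><code>import pandas as pd
# reads the whole file into memory in a single call
df = pd.read_csv('eggs.csv', sep=' ', quotechar='|', header=None)
for row in df.itertuples(index=False):
    process_row(row)
</code></pre>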
| 3 | 2016-10-06T19:11:49Z | [
"python",
"csv"
]
|
Read in csv file faster | 39,903,867 | <p>I am currently reading in a large csv file (around 100 million lines), using command along the lines of that described in <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a> e.g. :</p>
<pre><code>import csv
with open('eggs.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
process_row(row)
</code></pre>
<p>This is proving rather slow, I suspect because each line is read in individually (requiring lots of read calls to the hard drive). Is there any way of reading the whole csv file in at once, and then iterating over it? Although the file itself is large in size (e.g. 5Gb), my machine has sufficient ram to hold that in memory.</p>
| 0 | 2016-10-06T19:07:37Z | 39,903,947 | <blockquote>
<p>my machine has sufficient ram to hold that in memory.</p>
</blockquote>
<p>Well then, call <code>list</code> on the <em>iterator</em>:</p>
<pre><code>spamreader = list(csv.reader(csvfile, delimiter=' ', quotechar='|'))
</code></pre>
| 1 | 2016-10-06T19:11:51Z | [
"python",
"csv"
]
|
Read in csv file faster | 39,903,867 | <p>I am currently reading in a large csv file (around 100 million lines), using command along the lines of that described in <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a> e.g. :</p>
<pre><code>import csv
with open('eggs.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
process_row(row)
</code></pre>
<p>This is proving rather slow, I suspect because each line is read in individually (requiring lots of read calls to the hard drive). Is there any way of reading the whole csv file in at once, and then iterating over it? Although the file itself is large in size (e.g. 5Gb), my machine has sufficient ram to hold that in memory.</p>
| 0 | 2016-10-06T19:07:37Z | 39,903,955 | <p>Yes, there is a way to read the entire file at once:</p>
<pre><code>with open('eggs.csv', 'rb', 5000000000) as ...:
...
</code></pre>
<p>Reference: <a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">https://docs.python.org/2/library/functions.html#open</a></p>
| 1 | 2016-10-06T19:12:17Z | [
"python",
"csv"
]
|
Streaming video using pyzmq | 39,903,926 | <p>I am using the Python bindings for zmq to send messages both between processes and across a network. So far, no problem. I now have an application where I want to stream data recovered from a small camera on a Raspberry Pi. I have no issues actually recording the data but now I want to be able to stream that data. I have an example on how to do this using the low level <code>sockets</code> api but would really like to use <code>zmq</code> for this application.</p>
<p>The <code>picamera</code> <a href="https://picamera.readthedocs.io/en/release-1.10/recipes1.html#recording-to-a-network-stream" rel="nofollow">documentation</a> has the example below on how to use a network stream to send data:</p>
<pre><code>import socket
import time
import picamera
# Connect a client socket to my_server:8000 (change my_server to the
# hostname of your server)
client_socket = socket.socket()
client_socket.connect(('my_server', 8000))
# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
with picamera.PiCamera() as camera:
camera.resolution = (640, 480)
camera.framerate = 24
# Start a preview and let the camera warm up for 2 seconds
camera.start_preview()
time.sleep(2)
# Start recording, sending the output to the connection for 60
# seconds, then stop
camera.start_recording(connection, format='h264')
camera.wait_recording(60)
camera.stop_recording()
finally:
connection.close()
client_socket.close()
</code></pre>
<p><code>zmq</code> does not implement a <code>buffer</code> interface, so you can not simply change the <code>socket</code> object for a <code>zmq.Context()</code> object. I feel like this should be relatively simple, but have been unable to find any information on this. Does anyone have any suggestions on how to use the <code>zmq</code> api in this case?</p>
| 0 | 2016-10-06T19:10:28Z | 39,948,569 | <h2>RPi side: <code>picamera.PiCamera()</code> API rulez:</h2>
<p>Besides the file-based output, RPi has another alternative for the <strong><code>output</code></strong> attribute, worth a try.</p>
<blockquote>
<p><a href="https://picamera.readthedocs.io/en/release-1.12/api_camera.html" rel="nofollow">If output is not a <strong><code>string</code></strong>, but is an object with a <strong><code>write()</code></strong> method, it is assumed to be a file-like object and the video data is appended to it (the implementation only assumes the object has a <code>write()</code> method - no other methods are required but <strong><code>flush()</code></strong> will be called at the end of recording if it is present).<br><br>If output is not a <code>string</code>, and has no <code>write()</code> method it is assumed to be a writeable object implementing the buffer protocol. In this case, the video frames will be written sequentially to the underlying buffer (which must be large enough to accept all frame data).</a></p>
</blockquote>
<h2>So?</h2>
<p>The most straightforward way is to create a class for an interfacing object, equipped with both <strong><code>.write()</code></strong> and <strong><code>.flush()</code></strong> methods, that transfers the relevant video data to the messaging zmq-sockets instantiated under the central <code>zmq.Context()</code> (these may use more than just the one trivial <strong><code>PUB/SUB</code></strong> archetype, so as to have out-of-band signalling and similar nice features from the ZeroMQ Scalable Formal Communications Framework).</p>
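<p>A minimal sketch of such a file-like wrapper, assuming a plain <strong><code>PUB</code></strong> socket and an arbitrary endpoint:</p>
<pre><code>import zmq
class ZmqVideoOutput(object):
    """File-like object forwarding encoded camera chunks to a PUB socket."""
    def __init__(self, endpoint='tcp://*:5556'):   # endpoint is an assumption
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.PUB)
        self.socket.bind(endpoint)
    def write(self, data):
        # picamera calls this with chunks of the H.264 stream
        self.socket.send(data)
    def flush(self):
        # called once when recording stops; nothing is buffered locally
        pass
# usage: camera.start_recording(ZmqVideoOutput(), format='h264')
</code></pre>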
<h2>Performance?</h2>
<p>If you run into doubts about I/O performance, there are steps that can be taken on the ZeroMQ side. Instantiate the central I/O-pumping <code>Context</code> with more than a single I/O thread via <code>zmq.Context( numOfIOthreads )</code>, and perhaps balance the load with different <strong><code>AFFINITY</code></strong> settings applied through <code>aSocketINSTANCE.setsockopt()</code>, so as to "reserve" relative I/O priorities among the socket instances. This pre-reserves the shared resources and protects the minimum level of access to I/O threads that your design's performance requirements call for (as far as the hardware can support and benefit from that level of concurrency).</p>
| 0 | 2016-10-09T21:10:24Z | [
"python",
"sockets",
"raspberry-pi",
"zeromq",
"pyzmq"
]
|
Django: TypeError: int() argument must be a string, a bytes-like object or a number, not | 39,903,959 | <p>Please help! I tried searching for an answer, but I think this issue is too specific to have a generalized enough solution. </p>
<p>It's very difficult for me to pinpoint when, exactly, this error started. I've attempted too many changes now to know when the site was last working. I'm very new to this, and entirely self-taught, at that. I can assure you, it will be apparent.</p>
<p>When attempting to migrate, I receive this error:</p>
<pre><code>Apply all migrations: admin, auth, contenttypes, purchase_log, sessions
Running migrations:
Applying purchase_log.0009_auto_20161005_1524...Traceback (most recent call la
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
utility.execute()
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
self.execute(*args, **cmd_options)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
output = self.handle(*args, **options)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
fake_initial=fake_initial,
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_i
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
state = migration.apply(state, schema_editor)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
operation.database_forwards(self.app_label, schema_editor, old_state, projec
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
field,
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
self._remake_table(model, create_fields=[field])
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
self.effective_default(field)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
default = field.get_db_prep_save(default, self.connection)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
return self.target_field.get_db_prep_save(value, connection=connection)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
prepared=False)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
value = self.get_prep_value(value)
File "C:\Users\jdcar\AppData\Local\Programs\Python\Python35-32\lib\site-packag
return int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not
</code></pre>
<p>I'm going insane, and have no idea where to start! Please help!</p>
<p>edit: Here is the .models.py</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
class Store(models.Model):
name = models.CharField(max_length=250)
owner = models.ForeignKey(User)
def __str__(self):
return self.name
class Product(models.Model):
type = models.CharField(max_length=250)
owner = models.ForeignKey(User)
def __str__(self):
return self.type
class Receipt(models.Model):
store = models.ForeignKey(Store)
date = models.DateField()
line_items = models.ManyToManyField(Product, through='ReceiptProduct')
owner = models.ForeignKey(User)
def __str__(self):
return self.store.name + ': ' + str(self.date)
class ReceiptProduct(models.Model):
receipt = models.ForeignKey(Receipt)
product = models.ForeignKey(Product)
price = models.FloatField()
sale = models.BooleanField()
description = models.CharField(max_length=500, null=True, blank=True)
owner = models.ForeignKey(User)
def __str__(self):
return self.product.type
</code></pre>
<p>edit: Here is migration 0009_auto_20161005_1524.py</p>
<pre><code># -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-10-05 19:24
from __future__ import unicode_literals
from django.conf import settings
import django.contrib.auth.models
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('purchase_log', '0008_receiptproduct_sale'),
]
operations = [
migrations.AddField(
model_name='product',
name='owner',
field=models.ForeignKey(default=django.contrib.auth.models.User, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='receipt',
name='owner',
field=models.ForeignKey(default=django.contrib.auth.models.User, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='receiptproduct',
name='owner',
field=models.ForeignKey(default=django.contrib.auth.models.User, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='store',
name='owner',
field=models.ForeignKey(default=django.contrib.auth.models.User, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
]
</code></pre>
| -1 | 2016-10-06T19:12:54Z | 39,923,313 | <p>Problem solved, thanks to @MosesKoledoye. </p>
<p>I deleted the migrations folder inside the app that was causing the problem. And recreated it by running '<code>python manage.py makemigrations <appname></code>' I then migrated to the server, and everything was great. </p>
<p>Thank you, @MosesKoledoye</p>
| 0 | 2016-10-07T17:59:27Z | [
"python",
"django",
"typeerror"
]
|
How can I convert a categorical index to a float? | 39,903,986 | <p>I have a categorical index of wind directions in a pandas dataframe.</p>
<pre><code>print (self.Groups.index)
CategoricalIndex([22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
categories=[22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
ordered=True, name='Dir', dtype='category')
</code></pre>
<p>I am trying to use this index to plot a wind rose eg.</p>
<pre><code> ax.bar(self.Groups.index,self.Groups['wind_speed'])
</code></pre>
<p>but I need to convert the index to radians for it to properly display on a polar plot.</p>
<p>Is there a way to convert the the Categorical index to a float?</p>
| -1 | 2016-10-06T19:15:25Z | 39,904,114 | <p>Use <code>astype</code> to perform the conversion:</p>
<pre><code>self.Groups.index = self.Groups.index.astype('float')
</code></pre>
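<p>Since the target is a polar plot, the float values can then be converted to radians, e.g. with NumPy (this assumes the index values are compass directions in degrees):</p>
<pre><code>import numpy as np
theta = np.deg2rad(self.Groups.index.astype('float'))
ax.bar(theta, self.Groups['wind_speed'])
</code></pre>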
| 1 | 2016-10-06T19:24:11Z | [
"python",
"pandas"
]
|
How can I convert a categorical index to a float? | 39,903,986 | <p>I have a categorical index of wind directions in a pandas dataframe.</p>
<pre><code>print (self.Groups.index)
CategoricalIndex([22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
categories=[22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
ordered=True, name='Dir', dtype='category')
</code></pre>
<p>I am trying to use this index to plot a wind rose eg.</p>
<pre><code> ax.bar(self.Groups.index,self.Groups['wind_speed'])
</code></pre>
<p>but I need to convert the index to radians for it to properly display on a polar plot.</p>
<p>Is there a way to convert the the Categorical index to a float?</p>
| -1 | 2016-10-06T19:15:25Z | 39,904,181 | <p>You can use the <code>to_numeric</code> function to convert the index. It gives more control of the expected behavior in case of failure.</p>
<pre><code># Test data
index = pd.CategoricalIndex([22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
categories=[22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5],
ordered=True, name='Dir', dtype='category')
index
df = DataFrame([22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5], index=index)
# Converting the index
df.index = pd.to_numeric(df.index)
print(df.index)
# Float64Index([22.5, 67.5, 112.5, 157.5, 202.5, 247.5, 292.5, 337.5], dtype='float64', name='Dir')
</code></pre>
| 1 | 2016-10-06T19:28:21Z | [
"python",
"pandas"
]
|
Python Beautifulsoup access text in tags? | 39,904,089 | <p>I am having trouble finding and returning the value that appears to be in the <code><b></code> tag, I have no luck when reading any of the tags. </p>
<p>I don't want to post a hundred lines of the view-source info and am not sure how to properly post the link to it but here is the webpage if you would be able to view the page source yourself <a href="http://yugiohprices.com/card_price?name=Dark+Magician" rel="nofollow">http://yugiohprices.com/card_price?name=Dark+Magician</a> </p>
<p>the information I am trying to retrieve
<a href="https://postimg.org/image/5fwxfqjqf/" rel="nofollow">https://postimg.org/image/5fwxfqjqf/</a></p>
<p>Here is the code I am using</p>
<pre><code>import requests
from bs4 import BeautifulSoup
r = requests.get('http://yugiohprices.com/card_price?name=Dark+Magician');
soup = BeautifulSoup(r.content, "lxml")
print soup.find('b').text
</code></pre>
<p>this is the output</p>
<p>Home
| Top 100 | Browse Cards | Browse Sets</p>
<p>Purchase Statistics
| Watchlist | Card Pricer</p>
<p>Sell My Cards | Price Alerts | Blog | FAQ | Settings</p>
<p>No matter what I change or try I am unable to access the "LDK2-ENY10" text</p>
| 1 | 2016-10-06T19:22:24Z | 39,913,353 | <p>You can see the page takes a while to load the data, the data is requested through an Ajax request so what requests returns is not what you see in your browser. You can mimic the ajax request with a simple get to <a href="http://yugiohprices.com/get_card_prices/Dark+Magician" rel="nofollow">http://yugiohprices.com/get_card_prices/Dark+Magician</a>, passing a <em>timestamp</em>:</p>
<pre><code>import requests
from time import time
r = requests.get("http://yugiohprices.com/get_card_prices/Dark+Magician?_={}".format(int(time())))
print(r.content)
</code></pre>
<p>That you will see gives you all the details about the card, so to get what you want just find the <em>anchor</em> with the <em>href</em> starting with <em>/browse_sets?set</em>:</p>
<pre><code>In [1]: import requests
...: from time import time
...: from bs4 import BeautifulSoup
...:
...: r = requests.get("http://yugiohprices.com/get_card_prices/Dark+Magician?
...: _={}".format(int(time())))
...: soup = BeautifulSoup(r.content, "lxml")
...: print(soup.select_one("a[href^=/browse_sets?set]").text)
...:
Legendary Decks II
In [2]:
</code></pre>
| 1 | 2016-10-07T08:58:03Z | [
"python",
"beautifulsoup"
]
|
Pandas/SQL-calculate the pecentage based on different Group | 39,904,128 | <p>I want to calculate the percentage of each group and a new column
For example:
data=</p>
<pre><code>Group Value
A 1
A 2
A 1
B 4
B 4
B 8
</code></pre>
<p>and I want to get:</p>
<pre><code>Group Value Percentage
A 1 0.25
A 2 0.50
A 1 0.25
B 4 0.25
B 4 0.25
B 8 0.50
</code></pre>
<p>How to get it with pandas function or SQL? Thanks!</p>
| 1 | 2016-10-06T19:24:53Z | 39,904,202 | <p>Try to use</p>
<pre><code>SELECT A.Group,
A.Value,
( A.Value / (SELECT Sum(b.Value)
FROM yourtable b
WHERE b.Group = A.Group) ) Percentage
FROM yourtable A
</code></pre>
| 1 | 2016-10-06T19:29:35Z | [
"python",
"sql",
"pandas",
"dataframe"
]
|
Pandas/SQL-calculate the pecentage based on different Group | 39,904,128 | <p>I want to calculate the percentage of each group and a new column
For example:
data=</p>
<pre><code>Group Value
A 1
A 2
A 1
B 4
B 4
B 8
</code></pre>
<p>and I want to get:</p>
<pre><code>Group Value Percentage
A 1 0.25
A 2 0.50
A 1 0.25
B 4 0.25
B 4 0.25
B 8 0.50
</code></pre>
<p>How to get it with pandas function or SQL? Thanks!</p>
| 1 | 2016-10-06T19:24:53Z | 39,904,779 | <p>Using pandas:</p>
<pre><code>df['Percentage'] = df.groupby('Group')['Value'].transform(lambda x: x / x.sum())
</code></pre>
<p>output:</p>
<pre><code> Group Value Percentage
0 A 1 0.25
1 A 2 0.50
2 A 1 0.25
3 B 4 0.25
4 B 4 0.25
5 B 8 0.50
</code></pre>
| 2 | 2016-10-06T20:06:11Z | [
"python",
"sql",
"pandas",
"dataframe"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,389 | <p>If you need to take into account things like skovorodkin's example in the comment, </p>
<pre><code>[(1, 4), (4, 8), (8, 10)]
</code></pre>
<p>(or even more complex examples), then one way to do efficiently would be using graphs. </p>
<p>Say you create a digraph (possibly using <a href="https://networkx.github.io/" rel="nofollow"><code>networkx</code></a>), where each pair is a node, and there is an edge from <em>(a, b)</em> to node <em>(c, d)</em> if <em>b == c</em>. Now run <a href="https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.algorithms.dag.topological_sort.html" rel="nofollow">topological sort</a>, iterate according to the order, and merge accordingly. You should take care to handle nodes with two (or more) outgoing edges properly.</p>
<hr>
<p>I realize your question states you'd like to avoid loops on account of the long list size. Conversely, for long lists, I doubt you'll find even an efficient linear time solution using list comprehension (or something like that). Note that you cannot sort the list in linear time, for example.</p>
<hr>
<p>Here is a possible implementation:</p>
<p>Say we start with</p>
<pre><code>l = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>It simplifies the following to remove duplicates, so let's do:</p>
<pre><code>l = list(set(l))
</code></pre>
<p>Now to build the digraph:</p>
<pre><code>import networkx as nx
import collections
g = nx.DiGraph()
</code></pre>
<p>The vertices are simply the pairs:</p>
<pre><code>g.add_nodes_from(l)
</code></pre>
<p>To build the edges, we need a dictionary:</p>
<pre><code>froms = collections.defaultdict(list)
for p in l:
froms[p[0]].append(p)
</code></pre>
<p>Now we can add the edges:</p>
<pre><code>for p in l:
for from_p in froms[p[1]]:
g.add_edge(p, from_p)
</code></pre>
<p>Next two lines are unneeded - they're just here to show what the graph looks like at this point:</p>
<pre><code>>>> g.nodes()
[(25, 30), (14, 16), (10, 13), (8, 10), (1, 4), (19, 25)]
>>> g.edges()
[((8, 10), (10, 13)), ((19, 25), (25, 30))]
</code></pre>
<p>Now, let's sort the pairs by topological sort:</p>
<pre><code>l = nx.topological_sort(g)
</code></pre>
<p>Finally, here's the tricky part. The result will be a DAG. We have to to traverse things recursively, but remember what we visited already.</p>
<p>Let's create a dict of what we visited:</p>
<pre><code>visited = {p: False for p in l}
</code></pre>
<p>Now a recursive function, that given a node, returns the maximum range edge from any node reachable from it:</p>
<pre><code>def visit(p):
neighbs = g.neighbors(p)
if visited[p] or not neighbs:
visited[p] = True
return p[1]
mx = max([visit(neighb_p) for neighb_p in neighbs])
visited[p] = True
return mx
</code></pre>
<p>We're all ready. Let's create a list for the final pairs:</p>
<pre><code>final_l = []
</code></pre>
<p>and visit all nodes:</p>
<pre><code>for p in l:
if visited[p]:
continue
final_l.append((p[0], visit(p)))
</code></pre>
<p>Here's the final result:</p>
<pre><code>>>> final_l
[(1, 4), (8, 13), (14, 16)]
</code></pre>
| 6 | 2016-10-06T19:41:27Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,486 | <p>Here is one optimized recursion approach:</p>
<pre><code>In [44]: def find_intersection(m_list):
for i, (v1, v2) in enumerate(m_list):
for j, (k1, k2) in enumerate(m_list[i + 1:], i + 1):
if v2 == k1:
m_list[i] = (v1, m_list.pop(j)[1])
return find_intersection(m_list)
return m_list
</code></pre>
<p>Demo:</p>
<pre><code>In [45]: lst = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
In [46]: find_intersection(lst)
Out[46]: [(1, 4), (8, 13), (19, 30), (14, 16)]
</code></pre>
| 2 | 2016-10-06T19:47:03Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,568 | <p>If they don't overlap, then you can sort them, and then just combine adjacent ones.</p>
<p>Here's a generator that yields the new tuples:</p>
<pre><code>def combine_ranges(L):
L = sorted(L) # Make a copy as we're going to remove items!
while L:
start, end = L.pop(0) # Get the first item
while L and L[0][0] == end:
# While the first of the rest connects to it, adjust
# the end and remove the first of the rest
_, end = L.pop(0)
yield (start, end)
print(list(combine_ranges(List)))
</code></pre>
<p>If speed is important, use a <code>collections.deque</code> instead of a list, so that the <code>.pop(0)</code> operations can be in constant speed.</p>
| 5 | 2016-10-06T19:52:43Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,580 | <p>non-recursive approach, using sorting (I've added more nodes to handle complex case):</p>
<pre><code>l = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30), (30,34), (38,40)]
l = sorted(l)
r=[]
idx=0
while idx<len(l):
local=idx+1
previous_value = l[idx][1]
# search longest string
while local<len(l):
if l[local][0]!=previous_value:
break
previous_value = l[local][1]
local+=1
# store tuple
r.append((l[idx][0],l[local-1][1]))
idx = local
print(r)
</code></pre>
<p>result:</p>
<pre><code>[(1, 4), (8, 13), (14, 16), (19, 34), (38, 40)]
</code></pre>
<p>The only drawback is that original sort order is not preserved. I don't know if it's a problem.</p>
| 2 | 2016-10-06T19:53:09Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,588 | <p>The list is first sorted and adjacent pairs of (min1, max1), (min2, max2) are merged together if they overlap.</p>
<pre><code>MIN=0
MAX=1
def normalize(intervals):
isort = sorted(intervals)
for i in range(len(isort) - 1):
if isort[i][MAX] >= isort[i + 1][MIN]:
vmin = isort[i][MIN]
vmax = max(isort[i][MAX], isort[i + 1][MAX])
isort[i] = None
isort[i + 1] = (vmin, vmax)
return [r for r in isort if r is not None]
List1 = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
List2 = [(1, 4), (4, 8), (8, 10)]
print(normalize(List1))
print(normalize(List2))
#[(1, 4), (8, 13), (14, 16), (19, 30)]
#[(1, 10)]
</code></pre>
| 1 | 2016-10-06T19:54:05Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,621 | <p>You can use a dictionary to map the different end indices to the range ending at that index; then just iterate the list sorted by start index and merge the segments accordingly:</p>
<pre><code>def join_lists(lst):
ending = {} # will map end position to range
for start, end in sorted(lst): # iterate in sorted order
if start in ending:
ending[end] = (ending[start][0], end) # merge
del ending[start] # remove old value
else:
ending[end] = (start, end)
return list(ending.values()) # return remaining values from dict
</code></pre>
<p>Alternatively, as pointed out by <a href="http://stackoverflow.com/questions/39904161/organizing-list-of-tuples/39904621#comment67105721_39904621">Tomer W in comments</a>, you can do without the sorting, by iterating the list twice, making this solution take only linear time (<em>O(n)</em>) w.r.t. the length of the list.</p>
<pre><code>def join_lists(lst):
ending = {} # will map end position to range
# first pass: add to dictionary
for start, end in lst:
ending[end] = (start, end)
# second pass: lookup and merge
for start, end in lst:
if start in ending:
ending[end] = (ending[start][0], end)
del ending[start]
# return remaining values from dict
return list(ending.values())
</code></pre>
<p>Examples output, for both cases:</p>
<pre><code>>>> join_lists([(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)])
[(1, 4), (8, 13), (14, 16), (19, 30)]
>>> join_lists(lst = [(1, 4), (4, 8), (8, 10)])
[(1, 10)]
</code></pre>
| 2 | 2016-10-06T19:56:07Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Organizing list of tuples | 39,904,161 | <p>I have a list of tuples which I create dynamically.</p>
<p>The list appears as:</p>
<pre><code>List = [(1,4), (8,10), (19,25), (10,13), (14,16), (25,30)]
</code></pre>
<p>Each tuple <code>(a, b)</code> of list represents the range of indexes from a certain table.</p>
<p>The ranges <code>(a, b) and (b, d)</code> is same in my situation as <code>(a, d)</code></p>
<p>I want to merge the tuples where the 2nd element matches the first of any other.</p>
<p>So, in the example above, I want to merge <code>(8, 10), (10,13)</code> to obtain <code>(8,13)</code> and remove <code>(8, 10), (10,13)</code></p>
<p><code>(19,25) and (25,30)</code> merge should yield <code>(19, 30)</code></p>
<p>I don't have a clue where to start. The tuples are non overlapping.</p>
<p>Edit: I have been trying to just avoid any kind of for loop as I have a pretty large list</p>
| 6 | 2016-10-06T19:27:24Z | 39,904,762 | <p>The following should work. It breaks tuples into individual numbers, then finds the tuple bound on each cluster. This should work even with difficult overlaps, like <code>[(4, 10), (9, 12)]</code></p>
<p>It's a very simple fix.</p>
<pre><code># First turn your list of tuples into a list of numbers:
my_list = []
for item in List: my_list = my_list + [i for i in range(item[0], item[1]+1)]
# Then create tuple pairs:
output = []
a = False
for x in range(max(my_list)+1):
if (not a) and (x in my_list): a = x
if (a) and (x+1 not in my_list):
output.append((a, x))
a = False
print output
</code></pre>
| 1 | 2016-10-06T20:05:00Z | [
"python",
"algorithm",
"list",
"tuples"
]
|
Nginx + uwsgi + django + odbc - issues with not finding odbc driver, possibly related to wrong user somewhere | 39,904,216 | <p>I'm trying to connect to teradata through odbc from django under CentOS. The problem is that odbc cannot find teradata driver when run under django. If I run the script from python directly (or through django's <code>./manage command</code>) it works fine, which makes me to believe that something in the chain above is being run under wrong (possibly "nginx") user, as I assume teradata (odbc?) is unable to locate <code>.odbc.ini</code> in the home directory of the current user. </p>
<p>I had similar problem under Debian and solved it by changing uwsgi user to match, though CentOS uwsgi config is slightly different and the same change doesn't help.</p>
<p>The error I'm getting:</p>
<pre><code>ERROR - ('DRIVER_NOT_FOUND', "No driver found for 'Teradata'. Available drivers: PostgreSQL,MySQL")
</code></pre>
<p><code>.odbc.ini</code> with teradata config is located under /home/myuser/.odbc.ini</p>
<blockquote>
<p>/etc/systemd/system/uwsgi.service</p>
</blockquote>
<pre><code>[Unit]
Description=uWSGI Emperor service
[Service]
ExecStartPre=/usr/bin/bash -c 'mkdir -p /run/uwsgi; chown myuser:myuser /run/uwsgi'
ExecStart=/usr/bin/uwsgi --emperor /etc/uwsgi/sites --logto /home/myuser/log.log
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
</code></pre>
<blockquote>
<p>uwsgi app.ini config</p>
</blockquote>
<pre><code>[uwsgi]
chmod-socket = 664
chown-socket = nginx #cannot change this to myuser, getting 502
uid = myuser #this is what fixed it on Debian
gid = myuser #this is what fixed it on Debian
vhost = true
plugins = python
socket = /home/myuser/app.sock
master = true
enable-threads = true
processes = 2
module = app.wsgi
wsgi-file = /home/myuser/wsgi.py
chdir = /home/myuser/app/
post-buffering = 1
buffer-size = 32768
vacuum = false
</code></pre>
<blockquote>
<p>nginx config</p>
</blockquote>
<pre><code>upstream django {
server unix:/home/myuser/app.sock;
}
server {
listen 80;
server_name myapp.com;
location / {
uwsgi_pass django;
include /home/myuser/uwsgi_params;
}
}
</code></pre>
<blockquote>
<p>teradata connection log (seeing correct user here too)</p>
</blockquote>
<pre><code>/********************************************************************************
* Application Name: myapp
* Version: 1.0
* Run Number: xxx
* Host: pushnotif01
* Platform: Linux-centos
* OS User: myuser
* Python Version: 2.7.5
* Python Compiler: GCC 4.8.5 20150623 (Red Hat 4.8.5-4)
* Python Build: ('default', 'Sep 15 2016 22:37:39')
* UdaExec Version: 15.10.0.18
* Program Name: uwsgi
* Working Dir: /home/myuser
* Log Dir: /home/myuser/logs
* Log File: /home/myuser/logs/app.log
* Config Files: [u'/etc/udaexec.ini: Not Found', u'/home/myuser/udaexec.ini: Not Found', u'/home/myuser/app/udaexec.ini: Not Found']
* Query Bands: ApplicationName=myapp;Version=1.0;JobID=20161006121335-43;ClientUser=myuser;Production=False;udaAppLogFile=/home/myuser/logs/app.log;UtilityName=PyTd;UtilityVersion=15.10.0.18
********************************************************************************
</code></pre>
<blockquote>
<p>App launch script:</p>
</blockquote>
<pre><code>sudo service nginx restart
sudo service uwsgi restart
</code></pre>
<p>If I try to create a test file from within the same code, it is created under the correct <code>myuser</code> user, so it seems that django is running under the right user, yet odbc is still unable to find its config for some reason.</p>
| 0 | 2016-10-06T19:30:23Z | 39,907,024 | <p>After trial and error this made it work for now:</p>
<ol>
<li><p>Add <code>HOME</code> env variable to uwsgi's app.ini config pointing to home dir of the right user:</p>
<pre><code>env=HOME=/home/myuser
</code></pre></li>
<li><p>Pass <code>odbcLibPath</code> param explicitly when initializing the teradata connection:</p>
<pre><code>udaExec = teradata.UdaExec(odbcLibPath='/opt/teradata/client/15.00/odbc_64/lib/libodbc.so', ...)
</code></pre></li>
</ol>
| 0 | 2016-10-06T23:13:15Z | [
"python",
"django",
"nginx",
"teradata",
"uwsgi"
]
|
Python - No module named | 39,904,239 | <p>I have the following code with few modules:</p>
<pre><code>import Persistence.Image as img
import sys
def main():
print(sys.path)
original_image = img.Image.open_image()
if __name__ == "__main__":
main()
</code></pre>
<p>(I've created my own Image module)</p>
<p>And so I'm getting the following error claiming that the Persistence module does not exist:</p>
<pre><code>Traceback (most recent call last):
File "/home/ulises/PycharmProjects/IntelligentPuzzle/Puzzle.py", line 1, in <module>
import Persistence.Image as img
ImportError: No module named Persistence.Image
</code></pre>
<p>I've been searching for this problem here but can't find anything that worked to solve this as the directory tree seems to be correct as you can see on this image:</p>
<p><a href="http://i.stack.imgur.com/EgAkA.png" rel="nofollow"><img src="http://i.stack.imgur.com/EgAkA.png" alt="enter image description here"></a></p>
<p>I'm using ubuntu if it's any use.</p>
<p>Thanks and regards!</p>
 | 0 | 2016-10-06T19:31:49Z | 39,904,345 | <p>I don't believe that you are importing with proper syntax. You need to use <code>from Persistence import Image as img</code>. For example:</p>
<pre><code>>>> import cmath.sqrt as c_sqrt
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import cmath.sqrt
ImportError: No module named 'cmath.sqrt'; 'cmath' is not a package
>>> from cmath import sqrt as c_sqrt
>>> c_sqrt(-1)
1j
</code></pre>
| -1 | 2016-10-06T19:38:17Z | [
"python",
"python-2.7"
]
|
Python - No module named | 39,904,239 | <p>I have the following code with few modules:</p>
<pre><code>import Persistence.Image as img
import sys
def main():
print(sys.path)
original_image = img.Image.open_image()
if __name__ == "__main__":
main()
</code></pre>
<p>(I've created my own Image module)</p>
<p>And so I'm getting the following error claiming that the Persistence module does not exist:</p>
<pre><code>Traceback (most recent call last):
File "/home/ulises/PycharmProjects/IntelligentPuzzle/Puzzle.py", line 1, in <module>
import Persistence.Image as img
ImportError: No module named Persistence.Image
</code></pre>
<p>I've been searching for this problem here but can't find anything that worked to solve this as the directory tree seems to be correct as you can see on this image:</p>
<p><a href="http://i.stack.imgur.com/EgAkA.png" rel="nofollow"><img src="http://i.stack.imgur.com/EgAkA.png" alt="enter image description here"></a></p>
<p>I'm using ubuntu if it's any use.</p>
<p>Thanks and regards!</p>
 | 0 | 2016-10-06T19:31:49Z | 39,905,562 | <p>The <code>Persistence</code> package does not exist in that source tree. There is a <em>"Persistence"</em> directory there, but it is not a package, because it does not contain an <code>__init__.py</code> file.</p>
<p>From the <a href="https://docs.python.org/2/tutorial/modules.html#packages" rel="nofollow">Python documentation</a>:</p>
<blockquote>
<p>The <code>__init__.py</code> files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, <code>__init__.py</code> can just be an empty file, but it can also execute initialization code for the package or set the <code>__all__</code> variable, described later.</p>
</blockquote>
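<p>In this case it should be enough to create an empty <code>__init__.py</code> inside the <em>Persistence</em> directory, e.g. (the relative path is an assumption based on the screenshot):</p>
<pre><code># run once from the project root; afterwards "import Persistence.Image" works
open('Persistence/__init__.py', 'a').close()
</code></pre>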
| 1 | 2016-10-06T21:01:35Z | [
"python",
"python-2.7"
]
|
deploying python code to heroku not working | 39,904,261 | <p>I have added the buildpack for python, it works.</p>
<p>But when I do a git push, it doesn't work. </p>
<p>I get the below error all the time.</p>
<pre><code>(venv) D:\Projects\ecommerce\clone\cut_veggies>heroku buildpacks:set heroku/python
Buildpack set. Next release on cutveggie will use heroku/python.
Run git push heroku master to create a new release using this buildpack.
(venv) D:\Projects\ecommerce\clone\cut_veggies>git push heroku master
Counting objects: 308, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (305/305), done.
Writing objects: 100% (308/308), 410.14 KiB | 0 bytes/s, done.
Total 308 (delta 193), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Failed to detect set buildpack https://codon- buildpacks.s3.amazonaws.com/buildpacks/heroku/python.tgz
remote: More info: https://devcenter.heroku.com/articles/buildpacks#detection-failure
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to cutveggie.
remote:
To https://git.heroku.com/cutveggie.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/cutveggie.git'
</code></pre>
<p>Can anyone tell me what I am missing?</p>
| 0 | 2016-10-06T19:33:01Z | 39,906,307 | <p>I've had this problem with my application too. Adding a <code>requirements.txt</code> file with the dependencies to the root of the project solved it. Check <a href="https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-python" rel="nofollow">here</a> for more details. You can check <a href="https://github.com/heroku/python-getting-started/blob/master/requirements.txt" rel="nofollow">here</a> for an example too.</p>
| 0 | 2016-10-06T22:03:05Z | [
"python",
"django",
"heroku"
]
|
Python, code works, but openning via bat file get error | 39,904,383 | <p>The following is a simple working GUI program. However, when I try to open it from a <code>bat</code> file, it gives an error.</p>
<p>Bat file (2 lines):</p>
<pre><code>ch10.2.py
pause
</code></pre>
<p>The error I receive is:</p>
<p>[error message in text format to be included here]</p>
<p>My code:</p>
<pre><code># Lazy Buttons 2
# Demonstrates using a class with Tkinter
from tkinter import *
class Application(Frame):
""" A GUI application with three buttons. """
def __init__(self, master):
""" Initialize the Frame. """
super(Application, self).__init__(master)
self.grid()
self.create_widgets()
def create_widgets(self):
""" Create three buttons that do nothing. """
# create first button
self.bttn1 = Button(self, text = "I do nothing!")
self.bttn1.grid()
# create second button
self.bttn2 = Button(self)
self.bttn2.grid()
self.bttn2.configure(text = "Me too!")
# create third button
self.bttn3 = Button(self)
self.bttn3.grid()
self.bttn3["text"] = "Same here!"
# main
root = Tk()
root.title("Lazy Buttons 2")
root.geometry("200x85")
app = Application(root)
root.mainloop()
</code></pre>
| 0 | 2016-10-06T19:40:45Z | 39,905,585 | <p>It's giving error on <code>super(Application, self).__init__(master)</code>.<br>
If you replace it with <code>Frame.__init__(self, master)</code>, it works.
See below </p>
<pre><code># Lazy Buttons 2
# Demonstrates using a class with Tkinter
from tkinter import *
class Application(Frame):
""" A GUI application with three buttons. """
def __init__(self, master):
""" Initialize the Frame. """
Frame.__init__(self, master)
#super(Application, self).__init__(master)
self.grid()
self.createWidgets()
def createWidgets(self):
""" Create three buttons that do nothing. """
# create first button
self.bttn1 = Button(self, text = "I do nothing!")
self.bttn1.grid()
# create second button
self.bttn2 = Button(self)
self.bttn2.grid()
self.bttn2.configure(text = "Me too!")
# create third button
self.bttn3 = Button(self)
self.bttn3.grid()
self.bttn3["text"] = "Same here!"
# main
root = Tk()
root.title("Lazy Buttons 2")
root.geometry("200x85")
app = Application(root)
root.mainloop()
</code></pre>
<p><strong>OUTPUT</strong><br>
<a href="http://i.stack.imgur.com/cHekK.png" rel="nofollow"><img src="http://i.stack.imgur.com/cHekK.png" alt="enter image description here"></a></p>
| 0 | 2016-10-06T21:03:25Z | [
"python"
]
|
Locating Lazy Load Elements While Scrolling in PhantomJS in Python | 39,904,477 | <p>I'm using python and Webdriver to scrape data from a page that dynamically loads content as the user scrolls down the page (lazy load). I have a total of 30 data elements, while only 15 are displayed without first scrolling down.</p>
<p>I am locating my elements, and getting their values in the following way, after scrolling to the bottom of the page multiple times until each element has loaded:</p>
<pre><code># Get All Data Items
all_data = self.driver.find_elements_by_css_selector('div[some-attribute="some-attribute-value"]')
# Iterate Through Each Item, Get Value
data_value_list = []
for d in all_data:
# Get Value for Each Data item
data_value = d.find_element_by_css_selector('div[class="target-class"]').get_attribute('target-attribute')
#Save Data Value to List
data_value_list.append(data_value)
</code></pre>
<p>When I execute the above code using ChromeDriver, while leaving the browser window up on my screen, I get all 30 data values to populate my <code>data_value_list</code>. When I execute the above code using ChromeDriver, with the window minimized, my list <code>data_value_list</code> is only populated with the initial 15 data values. </p>
<p>The same issue occurs while using PhantomJS, limiting my <code>data_value_list</code> to only the initially-visible data values on the page. </p>
<p>Is there a way to load these types of elements while having the browser minimized and, ideally, while utilizing PhantomJS?</p>
<p>NOTE: I'm using an action chain to scroll down using the following approach <code>.send_keys(Keys.PAGE_DOWN).perform()</code> for a calculated number of times.</p>
| 0 | 2016-10-06T19:46:47Z | 39,905,354 | <p>I had the exact same issue. The solution I found was to execute javascript code in the virtual browser to force elements to scroll to the bottom. </p>
<p>Before putting the JavaScript command into Selenium, I recommend opening up your page in Firefox and inspecting the elements to find the scrollable content. The element should encompass all of the dynamic rows, but it should <em>not</em> include the scrollbar. Then, after selecting the element with JavaScript, you can scroll it to the bottom by setting its scrollTop attribute to its scrollHeight attribute.</p>
<p>Then, you will need to test scrolling the content in the browser. The easiest way to select the element is by ID if the element has an id, but other ways will work. To select an element with the id "scrollableContent" and scroll it to the bottom, execute the following code in your browser's javascript console:</p>
<pre><code>e = document.getElementById('scrollableContent'); e.scrollTop = e.scrollHeight;
</code></pre>
<p>Of course, this will only scroll to the bottom of the currently loaded content; you will need to repeat this after new content loads if you need to scroll multiple times. Also, I have no way of figuring out how to find the exact element; for me it is trial and error.</p>
<p>This is some code I tried out. However, I feel it can be improved, and should be for applications that are intended to test code or scrape unpredictably. I couldn't figure out how to explicitly wait until more elements were loaded (maybe get the number of elements, scroll to the bottom, then wait for subelement + 1 to show up, and if they don't exit the loop), so I hardcoded 5 scroll events and used time.sleep. time.sleep is ugly and can lead to issues, partly because it depends on the speed of your machine.</p>
<pre><code>def scrollElementToBottom(driver, element_id):
time.sleep(.2)
for i in range(5):
driver.execute_script("e = document.getElementById('" + element_id + "'); e.scrollTop = e.scrollHeight;")
time.sleep(.2)
</code></pre>
<p>The caveat is that the following solution worked with the Firefox driver, but I see no reason why it shouldn't work with your setup.</p>
| 0 | 2016-10-06T20:48:23Z | [
"python",
"web-scraping",
"webdriver",
"phantomjs"
]
|
Efficient, large-scale competition scoring in Python | 39,904,506 | <p>Consider a large dataframe of scores <code>S</code> containing entries like the following. Each row represents a contest between a subset of the participants <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>. </p>
<pre><code> A B C D
0.1 0.3 0.8 1
1 0.2 NaN NaN
0.7 NaN 2 0.5
NaN 4 0.6 0.8
</code></pre>
<p>The way to read the matrix above is: looking at the first row, the participant <code>A</code> scored <code>0.1</code> in that round, <code>B</code> scored <code>0.3</code>, and so forth.</p>
<p>I need to build a triangular matrix <code>C</code> where <code>C[X,Y]</code> stores how much better participant <code>X</code> was than participant <code>Y</code>. More specifically, <code>C[X,Y]</code> would hold the <strong>mean</strong> % difference in score between <code>X</code> and <code>Y</code>. </p>
<p>From the example above:</p>
<pre><code>C[A,B] = 100 * ((0.1 - 0.3)/0.3 + (1 - 0.2)/0.2) = 333.33%
</code></pre>
<p>My matrix <code>S</code> is huge, so I am hoping to take advantage of JIT (Numba?) or built-in methods in <code>numpy</code> or <code>pandas</code>. I certainly want to avoid having a nested loop, since <code>S</code> has millions of rows.</p>
<p>Does an efficient algorithm for the above have a name?</p>
| 3 | 2016-10-06T19:48:15Z | 39,905,034 | <p>Let's look at a NumPy based solution and thus let's assume that the input data is in an array named <code>a</code>. Now, the number of pairwise combinations for 4 such variables would be <code>4*3/2 = 6</code>. We can generate the IDs corresponding to such combinations with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.triu_indices.html#numpy.triu_indices" rel="nofollow"><code>np.triu_indices()</code></a>. Then, we index into the columns of <code>a</code> with those indices. We perform the subtractions and divisions and simply add the columns ignoring the NaN affected results with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nansum.html" rel="nofollow"><code>np.nansum()</code></a> for the desired output.</p>
<p>Thus, we would have an implementation like so -</p>
<pre><code>R,C = np.triu_indices(a.shape[1],1)
out = 100*np.nansum((a[:,R] - a[:,C])/a[:,C],0)
</code></pre>
<p>Sample run -</p>
<pre><code>In [121]: a
Out[121]:
array([[ 0.1, 0.3, 0.8, 1. ],
[ 1. , 0.2, nan, nan],
[ 0.7, nan, 2. , 0.5],
[ nan, 4. , 0.6, 0.8]])
In [122]: out
Out[122]:
array([ 333.33333333, -152.5 , -50. , 504.16666667,
330. , 255. ])
In [123]: 100 * ((0.1 - 0.3)/0.3 + (1 - 0.2)/0.2) # Sample's first o/p elem
Out[123]: 333.33333333333337
</code></pre>
<p>If you need the output as <code>(4,4)</code> array, we can use <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.squareform.html" rel="nofollow"><code>Scipy's squareform</code></a> -</p>
<pre><code>In [124]: from scipy.spatial.distance import squareform
In [125]: out2D = squareform(out)
</code></pre>
<p>Let's convert to a pandas dataframe for a good visual feedback -</p>
<pre><code>In [126]: pd.DataFrame(out2D,index=list('ABCD'),columns=list('ABCD'))
Out[126]:
A B C D
A 0.000000 333.333333 -152.500000 -50
B 333.333333 0.000000 504.166667 330
C -152.500000 504.166667 0.000000 255
D -50.000000 330.000000 255.000000 0
</code></pre>
<p>Let's compute <code>[B,C]</code> manually and check back -</p>
<pre><code>In [127]: 100 * ((0.3 - 0.8)/0.8 + (4 - 0.6)/0.6)
Out[127]: 504.1666666666667
</code></pre>
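<p>Note that the question asks for the <strong>mean</strong> % difference, while the code above sums the per-contest terms. If an actual mean over the non-NaN contests is wanted, a small variation (just a sketch of the idea) is to swap <code>np.nansum</code> for <code>np.nanmean</code> -</p>
<pre><code>R,C = np.triu_indices(a.shape[1],1)
out_mean = 100*np.nanmean((a[:,R] - a[:,C])/a[:,C],0)  # averages over rows, ignoring NaNs
</code></pre>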
| 3 | 2016-10-06T20:23:38Z | [
"python",
"pandas",
"numpy",
"matrix",
"numba"
]
|
How to run multiple functions using thread measuring time? | 39,904,525 | <p>I want to run two functions concurrently until they both return True up to 60 sec timeout.</p>
<p>This is what I have:</p>
<pre><code>import time
start_time = time.time()
timeout = time.time() + 60
a_result = b_result = False
a_end_time = b_end_time = None
a_duration = b_duration = None
while time.time() < timeout :
if not a_result:
a_result = func_a()
if a_result:
a_end_time = time.time()
if not b_result:
b_result = func_b()
if b_result:
b_end_time = time.time()
if a_result and b_result:
break
if a_end_time:
a_duration = a_end_time - start_time
if b_end_time:
b_duration = b_end_time - start_time
print a_duration,b_duration
if not (a_result and b_result):
raise Exception("exceeded timeout")
</code></pre>
<p>How can I improve this using threading?</p>
| 0 | 2016-10-06T19:49:08Z | 39,906,255 | <p>I think it would be easiest to implement if each function timed itself:</p>
<pre><code>import random
import time
import threading
MAX_TIME = 20
a_lock, b_lock = threading.Lock(), threading.Lock()
a_result = b_result = False
a_duration = b_duration = None
def func_a():
global a_result, a_duration
start_time = time.time()
time.sleep(random.randint(1, MAX_TIME)) # do something...
a_duration = time.time() - start_time
with a_lock:
a_result = True
def func_b():
global b_result, b_duration
start_time = time.time()
time.sleep(random.randint(1, MAX_TIME)) # do something...
b_duration = time.time() - start_time
with b_lock:
b_result = True
th1 = threading.Thread(target=func_a)
th1.daemon = True
th2 = threading.Thread(target=func_b)
th2.daemon = True
th1.start()
th2.start()
timeout = time.time() + MAX_TIME
while time.time() < timeout:
    if a_result and b_result:
        break
    time.sleep(0.1)  # avoid spinning at full speed while polling the flags
if not (a_result and b_result):
raise Exception("exceeded timeout")
print('func_a: {} secs, func_b: {} secs'.format(a_duration, b_duration))
</code></pre>
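<p>As a variation (a sketch only, not required for the above to work), the polling loop can be replaced by joining each thread with a deadline and then checking whether it is still alive:</p>
<pre><code>deadline = time.time() + MAX_TIME
for th in (th1, th2):
    th.join(max(0, deadline - time.time()))  # wait only for the remaining time budget

if th1.is_alive() or th2.is_alive():
    raise Exception("exceeded timeout")
print('func_a: {} secs, func_b: {} secs'.format(a_duration, b_duration))
</code></pre>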
| 1 | 2016-10-06T21:59:35Z | [
"python",
"multithreading"
]
|
Access JSON data from API | 39,904,546 | <p>I am trying to write a script to download images from an API. I have set up a loop that is as follows: </p>
<pre><code> response = requests.get(url, params=query)
json_data = json.dumps(response.text)
pythonVal = json.loads(json.loads(json_data))
print(pythonVal)
</code></pre>
<p>The print(pythonVal) returns: </p>
<pre><code> {
"metadata": {
"code": 200,
"message": "OK",
"version": "v2.0"
},
"data": {
"_links": {
"self": {
"href": "redactedLink"
}
},
"id": "123456789",
"_fixed": true
,
"type": "IMAGE",
"source": "social media",
"source_id": "1234567890_1234567890",
"original_source": "link",
"caption": "caption",
"video_url": null,
"share_url": "link",
"date_submitted": "2016-07-11T09:34:35+00:00",
"date_published": "2016-09-11T16:30:26+00:00",
</code></pre>
<p>I keep getting an error that reads:</p>
<pre><code>UnicodeEncodeError: 'ascii' codec can't encode character '\xc4' in
position 527: ordinal not in range(128)
</code></pre>
<p>For the pythonVal variable, if I just have it set to <code>json.loads(json_data)</code>, it prints out the JSON response, but then when I try doing <code>pythonVal['data']</code> I get another error that reads: </p>
<pre><code>TypeError: string indices must be integers
</code></pre>
<p>Ultimately I'd like to be able to get data from it by doing something like</p>
<pre><code> pythonVal['data']['_embedded']['uploader']['username']
</code></pre>
<p>Thanks for your input! </p>
 | 1 | 2016-10-06T19:51:17Z | 39,904,655 | <p>Why are you doing <code>json.loads()</code> twice? Change:</p>
<pre><code>json.loads(json.loads(json_data))
</code></pre>
<p>to:</p>
<pre><code>json.loads(json_data)
</code></pre>
<p>and it should work.</p>
<p>Now, since you are getting the error <code>TypeError: string indices must be integers</code> on doing <code>pythonVal['data']</code>, it means that the value of <code>pythonVal</code> is still a <code>str</code> and not a <code>dict</code>. Instead do:</p>
<pre><code>for item in pythonVal:
print item
</code></pre>
<p><em>Please also mention the sample JSON content with the question, if you want better help from others :)</em></p>
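<p>As a side note (an optional simplification, assuming you are using the <code>requests</code> library as in the question), you can let <code>requests</code> decode the JSON body for you and skip the dumps/loads round-trip entirely:</p>
<pre><code>response = requests.get(url, params=query)
pythonVal = response.json()      # already a dict parsed from the response body
print(pythonVal['data']['id'])   # e.g. "123456789" for the sample shown above
</code></pre>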
| 1 | 2016-10-06T19:57:31Z | [
"python",
"json"
]
|
Access JSON data from API | 39,904,546 | <p>I am trying to write a script to download images from an API. I have set up a loop that is as follows: </p>
<pre><code> response = requests.get(url, params=query)
json_data = json.dumps(response.text)
pythonVal = json.loads(json.loads(json_data))
print(pythonVal)
</code></pre>
<p>The print(pythonVal) returns: </p>
<pre><code> {
"metadata": {
"code": 200,
"message": "OK",
"version": "v2.0"
},
"data": {
"_links": {
"self": {
"href": "redactedLink"
}
},
"id": "123456789",
"_fixed": true
,
"type": "IMAGE",
"source": "social media",
"source_id": "1234567890_1234567890",
"original_source": "link",
"caption": "caption",
"video_url": null,
"share_url": "link",
"date_submitted": "2016-07-11T09:34:35+00:00",
"date_published": "2016-09-11T16:30:26+00:00",
</code></pre>
<p>I keep getting an error that reads:</p>
<pre><code>UnicodeEncodeError: 'ascii' codec can't encode character '\xc4' in
position 527: ordinal not in range(128)
</code></pre>
<p>For the pythonVal variable, if I just have it set to <code>json.loads(json_data)</code>, it prints out the JSON response, but then when I try doing <code>pythonVal['data']</code> I get another error that reads: </p>
<pre><code>TypeError: string indices must be integers
</code></pre>
<p>Ultimately I'd like to be able to get data from it by doing something like</p>
<pre><code> pythonVal['data']['_embedded']['uploader']['username']
</code></pre>
<p>Thanks for your input! </p>
| 1 | 2016-10-06T19:51:17Z | 39,904,697 | <p>Put the following on top of your code. This works by overriding the native ascii encoding of Python to UTF-8. </p>
<pre><code># -*- coding: utf-8 -*-
</code></pre>
<p>The second error is because you have already gotten the string, and you need integer indices to get the characters of the string. </p>
| 0 | 2016-10-06T20:00:39Z | [
"python",
"json"
]
|
how to loop through certain directories in a directory structure in python? | 39,904,559 | <p>I have the following directory structure</p>
<p><a href="http://i.stack.imgur.com/AR4Sv.png" rel="nofollow"><img src="http://i.stack.imgur.com/AR4Sv.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/vEnMe.png" rel="nofollow"><img src="http://i.stack.imgur.com/vEnMe.png" alt="enter image description here"></a></p>
<p>As you can see in the pics, there are a lot of .0 files in different directories. This directory structure exists for 36 folders(Human_C1 to C36) and each Human_C[num] folder has a 1_image_contours folder which has a contours folder with all related .0 files. </p>
<p>These .0 files contain some co-ordinates(x,y). I wish to loop through all these files, take the data in them and put it in an excel sheet(I am using pandas for this). </p>
<p>The problem is, how I loop through only these set of files and none else? (there can be .0 files in contour_image folders also)</p>
<p>Thanks in advance</p>
| 0 | 2016-10-06T19:51:57Z | 39,904,640 | <p>Since your structure is not recursive I would recommend this:</p>
<pre><code>import glob
zero_files_list = glob.glob("spinux/generated/Human_C*/*/contours/*.0")
for f in zero_files_list:
print("do something with "+f)
</code></pre>
<p>Run it from the parent directory of <code>spinux</code> or you'll have no match!</p>
<p>It will expand the pattern for the fixed directory tree above, just as if you used <code>ls</code> or <code>echo</code> in a linux shell.</p>
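<p>Since the question mentions collecting the coordinates into an excel sheet with pandas, here is a minimal sketch of that step (it assumes the .0 files are plain text with comma-separated x,y pairs and that an Excel writer engine such as openpyxl is installed; adjust to the real file format):</p>
<pre><code>import glob
import pandas as pd

frames = []
for f in glob.glob("spinux/generated/Human_C*/*/contours/*.0"):
    df = pd.read_csv(f, header=None, names=["x", "y"])  # assumed two-column layout
    df["source_file"] = f
    frames.append(df)

pd.concat(frames).to_excel("contours.xlsx", index=False)
</code></pre>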
| 3 | 2016-10-06T19:57:02Z | [
"python",
"file",
"loops",
"operating-system"
]
|
Setting up 'encoding' in Python's gzip.open() doesn't seem to work | 39,904,587 | <p>Even though I tried to specify the encoding in Python's gzip.open(), it seems to always use cp1252.py to decode the file's content.</p>
<p>My code:</p>
<pre><code>with gzip.open('file.gz', 'rt', 'cp1250') as f:
content = f.read()
</code></pre>
<p>Response:</p>
<blockquote>
<p>File "C:\Python34\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 52893: character maps to undefined</p>
</blockquote>
| 0 | 2016-10-06T19:54:03Z | 39,904,877 | <h1>Python 3.x</h1>
<p><code>gzip.open</code> is <a href="https://docs.python.org/3.4/library/gzip.html#gzip.open" rel="nofollow">defined</a> as:</p>
<blockquote>
<p>gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None)</p>
</blockquote>
<p>Therefore, <code>gzip.open('file.gz', 'rt', 'cp1250')</code> sends it these arguments:</p>
<ul>
<li>filename = 'file.gz'</li>
<li>mode = 'rt'</li>
<li>compresslevel = 'cp1250'</li>
</ul>
<p>This is clearly wrong, because the intention is to use 'cp1250' encoding.
The <code>encoding</code> argument can either be sent as the fourth positional argument or as a keyword argument:</p>
<pre><code>gzip.open('file.gz', 'rt', 5, 'cp1250') # 4th positional argument
gzip.open('file.gz', 'rt', encoding='cp1250') # keyword argument
</code></pre>
<h1>Python 2.x</h1>
<p><a href="https://docs.python.org/2/library/gzip.html#gzip.open" rel="nofollow">Python 2 version of <code>gzip.open</code></a> does not take the <code>encoding</code> argument and it does not accept text modes, so the decoding has to be done explicitly after reading the data:</p>
<pre><code>with gzip.open('file.gz', 'rb') as f:
data = f.read()
decoded_data = data.decode('cp1250')
</code></pre>
| 0 | 2016-10-06T20:14:11Z | [
"python",
"python-3.x",
"character-encoding",
"gzip"
]
|
Why can't class variables be used in __init__ keyword arg? | 39,904,590 | <p>I can't find any documentation on when exactly a class can reference itself. In the following it will fail. This is because the class has been created but not initialized until after <code>__init__</code>'s first line, correct?</p>
<pre><code>class A(object):
class_var = 'Hi'
def __init__(self, var=A.class_var):
self.var = var
</code></pre>
<p>So in the use case where I want to do that is this the best solution:</p>
<pre><code>class A(object):
class_var = 'Hi'
def __init__(self, var=None)
if var is None:
var = A.class_var
self.var = var
</code></pre>
<p>Any help or documentation appreciated!</p>
 | 4 | 2016-10-06T19:54:13Z | 39,904,740 | <p>Python scripts are interpreted as you go. When the interpreter reaches the <code>def __init__(...)</code> line it evaluates the default argument values immediately, and at that point the name <code>A</code> isn't bound yet (the class object is only created after its whole body has finished executing). The same goes for <code>self</code>, which is just a parameter and only exists inside the function body.</p>
<p>However, the class body is executed top to bottom, so by the time the <code>def</code> line runs, <code>class_var</code> is already defined in the class namespace and you can simply use that one.</p>
<pre><code>class A(object):
class_var = 'Hi'
def __init__(self, var=class_var):
self.var = var
</code></pre>
<p>but I am not super certain that this will be stable across different interpreters...</p>
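<p>One more thing to keep in mind (an extra note, with a small illustrative example): default argument values are evaluated once, when the <code>def</code> statement runs, so changing <code>A.class_var</code> later will not affect the default:</p>
<pre><code>class A(object):
    class_var = 'Hi'
    def __init__(self, var=class_var):
        self.var = var

A.class_var = 'Bye'
print A().var    # still prints 'Hi' -- the default was captured at definition time
</code></pre>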
| 6 | 2016-10-06T20:03:32Z | [
"python",
"class",
"variables",
"keyword-argument"
]
|
IndentationError running python script | 39,904,634 | <p>I'm very new to python and I'm trying to run this script but keep getting indentation errors in this part:</p>
<pre><code>while (time.time()-self.time) &lt; self.limit
</code></pre>
<p>I have tried to remove all indentation and then re-indent in different ways but nothing seems to work. Does anyone have an idea? I'm using Spyder to run this.</p>
<pre><code>start_time = time.time() #grabs the system time
keyword_list = ['twitter'] #track list
from pymongo import MongoClient
import json
class listener(StreamListener):
def __init__(self, start_time, time_limit=60):
self.time = start_time
self.limit = time_limit
def on_data(self, data):
while (time.time()-self.time) &lt; self.limit:
try:
client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']
tweet = json.loads(data)
collection.insert(tweet)
return True
except BaseException, e:
print 'failed ondata,', str(e)
time.sleep(5)
pass
exit()
def on_error(self, status):
print statuses
</code></pre>
| -4 | 2016-10-06T19:56:45Z | 39,904,661 | <p>The methods of your class must be indented under your <code>class</code> statement and the body of the methods must be indented under the <code>def</code> line. The body of the <code>try</code> and <code>except</code> blocks must be indented under those statements.</p>
<pre><code>start_time = time.time() #grabs the system time
keyword_list = ['twitter'] #track list
from pymongo import MongoClient
import json
class listener(StreamListener):
def __init__(self, start_time, time_limit=60):
self.time = start_time
self.limit = time_limit
def on_data(self, data):
while (time.time()-self.time) < self.limit:
try:
client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']
tweet = json.loads(data)
collection.insert(tweet)
return True
except BaseException, e:
print 'failed ondata,', str(e)
time.sleep(5)
exit()
def on_error(self, status):
        print status
</code></pre>
| 0 | 2016-10-06T19:58:13Z | [
"python"
]
|
IndentationError running python script | 39,904,634 | <p>I'm very new to python and I'm trying to run this script but keep getting indentation errors in this part:</p>
<pre><code>while (time.time()-self.time) &lt; self.limit
</code></pre>
<p>I have tried to remove all indentation and then re-indent in different ways but nothing seems to work. Does anyone have an idea? I'm using Spyder to run this.</p>
<pre><code>start_time = time.time() #grabs the system time
keyword_list = ['twitter'] #track list
from pymongo import MongoClient
import json
class listener(StreamListener):
def __init__(self, start_time, time_limit=60):
self.time = start_time
self.limit = time_limit
def on_data(self, data):
while (time.time()-self.time) &lt; self.limit:
try:
client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']
tweet = json.loads(data)
collection.insert(tweet)
return True
except BaseException, e:
print 'failed ondata,', str(e)
time.sleep(5)
pass
exit()
def on_error(self, status):
print statuses
</code></pre>
 | -4 | 2016-10-06T19:56:45Z | 39,904,924 | <p>You need to fix the syntax of your while condition statement.</p>
<p>First, as several others have mentioned, <code>&lt</code> is not a python operator, nor do you want a semicolon after it. The proper syntax would be <code>while (time.time()-self.time) < self.limit:</code></p>
<p>The semicolon is not strictly necessary in any python code. It finds use only occasionally, in specific cases where code becomes easier to read with multiple statements on one line. Those statements should not, however, contain keywords like <code>while</code>, <code>if</code>, or <code>for</code>. For now (while you are learning python) I would recommend completely avoiding semicolons.</p>
<p>Lastly (and this may have been Stack Overflow not formatting correctly), your functions <code>on_data</code> and <code>on_error</code> have improper indentation. It is good practice in python 2.7, and mandatory in 3.x, not to mix tabs and spaces (really, just use spaces). As for indentation rules, see <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">the official style guide</a> or <a href="https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Indentation" rel="nofollow">wikipedia's explanation</a>.</p>
| 0 | 2016-10-06T20:17:03Z | [
"python"
]
|
Clarification on python websockets example | 39,904,635 | <p>From the example found here: <a href="http://websockets.readthedocs.io/en/stable/intro.html" rel="nofollow">http://websockets.readthedocs.io/en/stable/intro.html</a></p>
<p>Can someone explain what the parameter 'path' does here? Is it a tuple for the host and port needed by websocket.serve()?</p>
<pre><code>import asyncio
import websockets
async def hello(websocket, path):
name = await websocket.recv()
print("< {}".format(name))
greeting = "Hello {}!".format(name)
await websocket.send(greeting)
print("> {}".format(greeting))
start_server = websockets.serve(hello, 'localhost', 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>
| 0 | 2016-10-06T19:56:45Z | 39,904,763 | <p>The <a href="http://websockets.readthedocs.io/en/stable/api.html#websockets.server.serve" rel="nofollow">documentation for <code>websockets.serve</code></a> says that its first argument is <code>ws_handler</code>:</p>
<blockquote>
<p>ws_handler is the WebSocket handler. It must be a coroutine accepting two arguments: a WebSocketServerProtocol and the request URI.</p>
</blockquote>
<p>In the <code>hello</code> handler the second argument is not used, but the function must still accept it, because <code>websockets.serve</code> always passes the request URI to the handler.</p>
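<p>For illustration only (a hypothetical handler, not from the linked example), the <code>path</code> argument can be used to serve different behaviour per request URI:</p>
<pre><code>async def handler(websocket, path):
    if path == '/hello':
        name = await websocket.recv()
        await websocket.send("Hello {}!".format(name))
    else:
        await websocket.send("Unknown path: {}".format(path))
</code></pre>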
| 0 | 2016-10-06T20:05:03Z | [
"python",
"function",
"websocket"
]
|
treatment of mouse events opencv gui vs pyqt | 39,904,646 | <p>I was working with OpenCV gui functions for a while, and the possibilities are a little restricting for python users. Today I started with Pyqt and come across the following conclusion: qt is really confusing. </p>
<p>Now the question concerning mouse events:</p>
<p>In OpenCV I just do the following:</p>
<pre><code>import cv2
cv2.namedWindow('Window',1)
def CallBackFunc(event,x,y,flags,param):
global xc,yc,evt,flg
xc,yc,evt,flg=x,y,event,flags
cv2.setMouseCallback('Window', CallBackFunc)
</code></pre>
<p>This opens a separate thread, which constantly refreshes the global variables <code>xc,yc,evt,flg</code>, and I can access them anywhere, at any time I want. If I want to stop the refreshing, I just do a <code>cv2.setMouseCallback('Window',nothing)</code>, whereby <code>nothing</code> is</p>
<pre><code>def nothing():
pass
</code></pre>
<p>It may not be the most beautiful way of dealing with mouse events, but I am fine with it. How can I achieve such freedom with PyQt?</p>
<p>EDIT:</p>
<p>For example, the following script is displaying a white circle, and constantly drawing a text into it. </p>
<pre><code>import sys
from PySide import QtGui
import numpy as np
import cv2
class QCustomLabel (QtGui.QLabel):
def __init__ (self, parent = None):
super(QCustomLabel, self).__init__(parent)
self.setMouseTracking(True)
def mouseMoveEvent (self, eventQMouseEvent):
self.x,self.y=eventQMouseEvent.x(),eventQMouseEvent.y()
cvImg=np.zeros((900,900),dtype=np.uint8)
cv2.circle(cvImg,(449,449),100,255,-1)
cv2.putText(cvImg,"x at {}, y at {}".format(self.x,self.y),(375,455), cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,0),1,cv2.LINE_AA)
height, width= cvImg.shape
bytearr=cvImg.data
qImg = QtGui.QImage(bytearr, width, height, QtGui.QImage.Format_Indexed8)
self.setPixmap(QtGui.QPixmap.fromImage(qImg))
def mousePressEvent (self, eventQMouseEvent):
self.evt=eventQMouseEvent.button()
class QCustomWidget (QtGui.QWidget):
def __init__ (self, parent = None):
super(QCustomWidget, self).__init__(parent)
self.setWindowOpacity(1)
# Init QLabel
self.positionQLabel = QCustomLabel(self)
# Init QLayout
layoutQHBoxLayout = QtGui.QHBoxLayout()
layoutQHBoxLayout.addWidget(self.positionQLabel)
self.setLayout(layoutQHBoxLayout)
self.show()
if QtGui.QApplication.instance() is not None:
myQApplication=QtGui.QApplication.instance()
else:
myQApplication = QtGui.QApplication(sys.argv)
myQTestWidget = QCustomWidget()
myQTestWidget.show()
myQApplication.exec_()
</code></pre>
<p>The problem here is that this is all executed inside the QCustomLabel class, inside the mouseMoveEvent function. But I want a separate function, let's call it <code>drawCircle</code>, outside of that class, which has access to the mouse position and events. With opencv this would be no problem at all. And it would take only a fraction of the writing effort that is needed for a pyqt implementation.</p>
<p>I think the right question is: Why dont I like pyqt yet?</p>
| 0 | 2016-10-06T19:57:15Z | 39,907,062 | <p>You can use an event-filter to avoid having to subclass the <code>QLabel</code>:</p>
<pre><code>class QCustomWidget (QtGui.QWidget):
def __init__ (self, parent = None):
super(QCustomWidget, self).__init__(parent)
self.setWindowOpacity(1)
# Init QLabel
self.positionQLabel = QtGui.QLabel(self)
self.positionQLabel.setMouseTracking(True)
self.positionQLabel.installEventFilter(self)
# Init QLayout
layoutQHBoxLayout = QtGui.QHBoxLayout()
layoutQHBoxLayout.addWidget(self.positionQLabel)
self.setLayout(layoutQHBoxLayout)
self.show()
def eventFilter(self, source, event):
if event.type() == QtCore.QEvent.MouseMove:
self.drawCircle(event.x(), event.y())
return super(QCustomWidget, self).eventFilter(source, event)
def drawCircle(self, x, y):
        pass  # whatever drawing logic goes here
</code></pre>
| 0 | 2016-10-06T23:17:45Z | [
"python",
"opencv",
"mouseevent",
"pyqt4"
]
|
How to set proxy settings on MacOS using python | 39,904,647 | <p>How to change the internet proxy settings using python in MacOS to set <code>Proxy server</code> and <code>Proxy port</code></p>
<p>I do that with windows using this code:</p>
<pre><code>import _winreg as winreg
INTERNET_SETTINGS = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows\CurrentVersion\Internet Settings', 0, winreg.KEY_ALL_ACCESS)
def set_key(name, value):
_, reg_type = winreg.QueryValueEx(INTERNET_SETTINGS, name)
winreg.SetValueEx(INTERNET_SETTINGS, name, 0, reg_type, value)
set_key('ProxyEnable', 0)
set_key('ProxyOverride', u'*.local;<local>') # Bypass the proxy for localhost
set_key('ProxyServer', u'proxy.example.com:8080')
</code></pre>
<p>is this possible to do it on MacOS ?</p>
 | 1 | 2016-10-06T19:57:17Z | 39,934,154 | <p>After a long time of searching, I found this way of changing the proxy on macOS using Python.</p>
<p>We need to use <code>networksetup</code> via terminal.</p>
<p>To set http proxy server on MacOS using python:</p>
<pre><code>import os
proxy = "proxy.example.com"
port = 8080
def Proxy_on():
    os.system('networksetup -setwebproxy Ethernet ' + proxy + ' ' + str(port))  # the int port must be converted to str
Proxy_on()
</code></pre>
<p>and to turn it off:</p>
<pre><code>import os
proxy = "proxy.example.com"
port = 8080
def Proxy_off():
os.system('networksetup -setwebproxystate Ethernet off')
Proxy_off()
</code></pre>
<p>If the network service isn't named just <strong>"Ethernet"</strong>, you may need to parse <code>networksetup -listallnetworkservices</code> or <code>-listnetworkserviceorder</code> to get the correct name.</p>
| 0 | 2016-10-08T15:30:43Z | [
"python",
"osx",
"python-2.7",
"python-3.x",
"proxy"
]
|
WxPython : Button Events | 39,904,648 | <p>Is there a way, that I could bind two events to a button in a sequential way. What I mean is as following: </p>
<p>In this example buttons 1 & 2 are bonded to their events respectively. </p>
<pre><code>self.button1 = wx.Button(panel, -1, 'BUTTON1')
self.button1.Bind(wx.EVT_BUTTON, self.OnButton1)
self.button2 = wx.Button(panel, -1, 'Button2')
self.button2.Bind(wx.EVT_BUTTON, self.OnButton2)
def OnButton1(self,event):
a = 1
b = 3
print a+b # basically some commands
# Here I want to execute OnButton2 EVENT - So I use the following LINE
OnButton2(event)
# But the above command should not be executed unless the OnButton2 event has processed in the past - (Meaning unless the Button2 event has happen in the past) So How would I solve that problem.
# It should be like IF CONDITION (BUTTON 2 HAS BEEN PRESSED IN THE PAST) - EXECUTE: SOME BASIC COMMANDS LIKE PRINT "HELLO"
def OnButton2(self,event):
c = 4
d = 5
print c*d
</code></pre>
| 0 | 2016-10-06T19:57:17Z | 39,921,164 | <pre><code> self.button1 = wx.Button(panel, -1, 'BUTTON1')
self.button1.Bind(wx.EVT_BUTTON, self.OnButton1)
self.button2 = wx.Button(panel, -1, 'Button2')
self.button2.Bind(wx.EVT_BUTTON, self.OnButton2)
def OnButton1(self,event):
#why not call a function directly beforehand?
self.DoSomething(args)
a = 1
b = 3
print a+b # basically some commands
def OnButton2(self,event):
self.DoSomething(args)
def DoSomething(self,args):
c = 4
d = 5
print c*d
</code></pre>
<p>Why not just use a separate function that OnButton1 can call?</p>
<p>I am not sure why one would wait for the OnButton2 event if button one has been clicked.</p>
<p>In the above, you can pass some argument(s) to DoSomething, have it return some value, and then continue.</p>
<p>For example</p>
<pre><code> def OnButton1(self,event):
value = self.DoSomething(11)
a = 1
b = 3
new_value = a + b + value
def DoSomething(self, args):
c = 4
d = 5
e = args
value = c + d + e
print value
return value
</code></pre>
<p>Updated: have OnButton2 set some instance value. Then, when OnButton1 checks and sees that it is no longer None, you know the OnButton2 event has been raised:</p>
<pre><code> self.value = None
def OnButton1(self,event):
if not self.value:
return #still None, OnButton2 event hasn't been raised
#else, continue working with value
value = self.DoSomething(11)
a = 1
b = 3
new_value = a + b + value
def OnButton2(self,event):
self.value = 5
</code></pre>
| 0 | 2016-10-07T15:44:42Z | [
"python",
"button",
"wxpython"
]
|
Error "ODBC data type -150 is not supported" when connecting sqlalchemy to mssql | 39,904,693 | <p>I keep running into an odd error when attempting to connect python sqlalchemy to a msssql server/database. I need to use sqlalchemy as it is (from what I've been told) the only way to connect pandas dataframes to mssql. </p>
<p>I have tried connecting sqlalchemy two different ways:</p>
<ol>
<li><p>using full connection string:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
usrCnnStr = "mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr
engine = sa.create_engine(usrCnnStr)
connection = engine.connect()
connection.execute("select getdate() as dt from mydb.dbo.dk_rcdtag")
connection.close()
</code></pre></li>
<li><p>using DSN:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrDsn = 'myDb'
params = ulp.quote_plus(usrDsn)
engine = sa.create_engine("mssql+pyodbc://cryo:pass@myDb")
conn = engine.connect()
conn.execute('select getdate() as dt')
conn.close()
</code></pre></li>
</ol>
<p>Both methods return the same error: </p>
<pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]
</code></pre>
<p>I am not sure how to get around this error; when I execute the "SELECT SERVERPROPERTY('ProductVersion')" in mssql, it works fine but comes back with a data type of "sql_variant". </p>
<p>Is there any way to get around this? </p>
 | 2 | 2016-10-06T20:00:18Z | 39,905,565 | <p>IIRC, this is because you can't select non-cast functions directly, since they don't return a datatype <em>pyodbc</em> recognizes.</p>
<p>Try this:</p>
<pre><code>SELECT CAST(GETDATE() AS DATETIME) AS dt
</code></pre>
<p>Also, you may want to use <code>CURRENT_TIMESTAMP</code>, which is ANSI standard SQL, instead of <code>GETDATE()</code>: <a href="http://stackoverflow.com/questions/186572/retriving-date-in-sql-server-current-timestamp-vs-getdate">Retriving date in sql server, CURRENT_TIMESTAMP vs GetDate()</a></p>
<p>I'm not sure where your product version select is coming from, but hopefully this gets you on the right path. I'll amend the answer if we figure out more.</p>
| 0 | 2016-10-06T21:02:05Z | [
"python",
"sql-server",
"sqlalchemy",
"pyodbc"
]
|
Error "ODBC data type -150 is not supported" when connecting sqlalchemy to mssql | 39,904,693 | <p>I keep running into an odd error when attempting to connect python sqlalchemy to a msssql server/database. I need to use sqlalchemy as it is (from what I've been told) the only way to connect pandas dataframes to mssql. </p>
<p>I have tried connecting sqlalchemy two different ways:</p>
<ol>
<li><p>using full connection string:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
usrCnnStr = "mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr
engine = sa.create_engine(usrCnnStr)
connection = engine.connect()
connection.execute("select getdate() as dt from mydb.dbo.dk_rcdtag")
connection.close()
</code></pre></li>
<li><p>using DSN:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrDsn = 'myDb'
params = ulp.quote_plus(usrDsn)
engine = sa.create_engine("mssql+pyodbc://cryo:pass@myDb")
conn = engine.connect()
conn.execute('select getdate() as dt')
conn.close()
</code></pre></li>
</ol>
<p>Both methods return the same error: </p>
<pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]
</code></pre>
<p>I am not sure how to get around this error; when I execute the "SELECT SERVERPROPERTY('ProductVersion')" in mssql, it works fine but comes back with a data type of "sql_variant". </p>
<p>Is there any way to get around this? </p>
| 2 | 2016-10-06T20:00:18Z | 39,908,496 | <p>I upgraded to sqlalchemy 1.1 today and ran into a similar issue with connections that were working before. Bumped back to 1.0.15 and no problems. Not the best answer, more of a workaround, but it may work if you are on 1.1 and need to get rolling.</p>
<p>If you are unsure of your version:</p>
<pre><code>>>> import sqlalchemy
>>> sqlalchemy.__version__
</code></pre>
| 3 | 2016-10-07T02:34:04Z | [
"python",
"sql-server",
"sqlalchemy",
"pyodbc"
]
|
Error "ODBC data type -150 is not supported" when connecting sqlalchemy to mssql | 39,904,693 | <p>I keep running into an odd error when attempting to connect python sqlalchemy to a msssql server/database. I need to use sqlalchemy as it is (from what I've been told) the only way to connect pandas dataframes to mssql. </p>
<p>I have tried connecting sqlalchemy two different ways:</p>
<ol>
<li><p>using full connection string:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
usrCnnStr = "mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr
engine = sa.create_engine(usrCnnStr)
connection = engine.connect()
connection.execute("select getdate() as dt from mydb.dbo.dk_rcdtag")
connection.close()
</code></pre></li>
<li><p>using DSN:</p>
<pre><code>import sqlalchemy as sa
import urllib.parse as ulp
usrDsn = 'myDb'
params = ulp.quote_plus(usrDsn)
engine = sa.create_engine("mssql+pyodbc://cryo:pass@myDb")
conn = engine.connect()
conn.execute('select getdate() as dt')
conn.close()
</code></pre></li>
</ol>
<p>Both methods return the same error: </p>
<pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]
</code></pre>
<p>I am not sure how to get around this error; when I execute the "SELECT SERVERPROPERTY('ProductVersion')" in mssql, it works fine but comes back with a data type of "sql_variant". </p>
<p>Is there any way to get around this? </p>
| 2 | 2016-10-06T20:00:18Z | 39,910,381 | <p>This is most certainly a bug introduced in <a href="https://bitbucket.org/zzzeek/sqlalchemy/issues/3814">Issue 3814</a>, new in SQLAlchemy 1.1.0, where they introduce <code>SELECT SERVERPROPERTY('ProductVersion')</code> to fetch server version for the pyodbc MSSQL driver. Downgrading to 1.0.15 will make the code work again, but hopefully the SQLAlchemy devs will make the new version lookup scheme work better in a new patch release.</p>
<p>(There is <a href="https://bitbucket.org/zzzeek/sqlalchemy/issues/3820/connection-fails-against-sql-server-2016">an issue</a> already reported in the SQLAlchemy issue tracker, I would add this comment there, but bitbucket can't log me in.)</p>
| 5 | 2016-10-07T05:58:04Z | [
"python",
"sql-server",
"sqlalchemy",
"pyodbc"
]
|
Extracting between tokens with NLTK | 39,904,719 | <p>I'm experimenting with NLTK to help me parse some text. So far , just using the sent_tokenize function has been very helpful in organizing the text. As an example I have:</p>
<pre><code>1 Robins Drive owned by Gregg S. Smith was sold to TeStER, LLC of 494 Bridge Avenue, Suite 101-308, Sheltville AZ 02997 for $27,000.00.
</code></pre>
<p>using:</p>
<pre><code>words =pos_tag(word_tokenize(sentence))
</code></pre>
<p>I get:</p>
<pre><code>[('1', 'CD'), ('Robins', 'NNP'), ('Drive', 'NNP'), ('owned', 'VBN'), ('by', 'IN'), ('Gregg', 'NNP'), ('S.', 'NNP'), ('Smith', 'NNP'), ('was', 'VBD'), ('sold', 'VBN'), ('to', 'TO'), ('TeStER', 'NNP'), (',', ','), ('LLC', 'NNP'), ('of', 'IN'), ('494', 'CD'), ('Bridge', 'NNP'), ('Avenue', 'NNP'), (',', ','), ('Suite', 'NNP'), ('101-308', 'CD'), (',', ','), ('Sheltville', 'NNP'), ('AZ', 'NNP'), ('02997', 'CD'), ('for', 'IN'), ('$', '$'), ('27,000.00', 'CD'), ('.', '.')]
</code></pre>
<p>I have been looking at various tutorials and the book <a href="http://www.nltk.org/book/" rel="nofollow">http://www.nltk.org/book/</a> , but I'm not sure of the best approach to extracting between 2 tokens. For example I want to select the tokens between "owned by" and "was sold to" to get the owner name. how can I best use NLTK functions and python to do this?</p>
| 2 | 2016-10-06T20:01:55Z | 39,904,811 | <p>This is one solution to select tokens between the words you wrote:</p>
<pre><code>import re
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'owned by(.*?)was sold to')
string = '1 Robins Drive owned by Gregg S. Smith was sold to TeStER, LLC of 494 Bridge Avenue, Suite 101-308, Sheltville AZ 02997 for $27,000.00.'
s = tokenizer.tokenize(string)
</code></pre>
<p>returns:</p>
<pre><code>[' Gregg S. Smith ']
</code></pre>
| 2 | 2016-10-06T20:08:33Z | [
"python",
"nltk"
]
|
Regex: exception to negative character class | 39,904,862 | <p>Using Python with Matthew Barnett's regex module.</p>
<p>I have this string:</p>
<pre><code>The well known *H*rry P*tter*.
</code></pre>
<p>I'm using this regex to process the asterisks to obtain <code><em>H*rry P*tter</em></code>:</p>
<pre><code>REG = re.compile(r"""
(?<!\p{L}|\p{N}|\\)
\*
([^\*]*?) # I need this part to deal with nested patterns; I really can't omit it
\*
(?!\p{L}|\p{N})
""", re.VERBOSE)
</code></pre>
<h3>PROBLEM</h3>
<p>The problem is that this regex doesn't match this kind of strings unless I protect intraword asterisks first (I convert them to decimal entities), which is awfully expensive in documents with lots of asterisks.</p>
<h3>QUESTION</h3>
<p>Is it possible to tell the negative class to block at internal asterisks <strong>only if</strong> they are not surrounded by word characters?</p>
<p>I tried these patterns in vain:</p>
<ul>
<li><code>([^(?:[^\p{L}|\p{N}]\*[^\p{L}|\p{N}])]*?)</code></li>
<li><code>([^(?<!\p{L}\p{N})\*(?!\p{L}\p{N})]*?)</code></li>
</ul>
| 2 | 2016-10-06T20:12:53Z | 39,905,163 | <p>I suggest a single regex replacement for the cases like you mentioned above:</p>
<pre><code>re.sub(r'\B\*\b([^*]*(?:\b\*\b[^*]*)*)\b\*\B', r'<em>\1</em>', s)
</code></pre>
<p>See the <a href="https://regex101.com/r/SLEeCB/1" rel="nofollow">regex demo</a></p>
<p><strong>Details</strong>:</p>
<ul>
<li><code>\B\*\b</code> - a <code>*</code> that is preceded with a non-word boundary and followed with a word boundary</li>
<li><code>([^*]*(?:\b\*\b[^*]*)*)</code> - Group 1 capturing:
<ul>
<li><code>[^*]*</code> - 0+ chars other than <code>*</code></li>
<li><code>(?:\b\*\b[^*]*)*</code> - zero or more sequences of:
<ul>
<li><code>\b\*\b</code> - a <code>*</code> enclosed with word boundaries</li>
<li><code>[^*]*</code> - 0+ chars other than <code>*</code></li>
</ul></li>
</ul></li>
<li><code>\b\*\B</code> - a <code>*</code> that is followed with a non-word boundary and preceded with a word boundary</li>
</ul>
<p>More information on word boundaries and non-word boundaries:</p>
<ul>
<li><a href="http://www.regular-expressions.info/wordboundaries.html" rel="nofollow"><em>Word boundaries</em> at regular-expressions.info</a></li>
<li><a href="http://stackoverflow.com/questions/6664151/difference-between-b-and-b-in-regex"><em>Difference between <code>\b</code> and <code>\B</code> in regex</em></a></li>
<li><a href="http://stackoverflow.com/questions/4541573/what-are-non-word-boundary-in-regex-b-compared-to-word-boundary"><em>What are non-word boundary in regex (<code>\B</code>), compared to word-boundary?</em></a></li>
</ul>
| 1 | 2016-10-06T20:33:21Z | [
"python",
"regex"
]
|
What is `$6$rounds=` when running Passlib with Python? | 39,904,882 | <p>I'm generating SHA-512 encoded password keys with Python's <code>Passlib</code>'s command.</p>
<p><code>python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.encrypt(getpass.getpass())"</code></p>
<p>This is per Ansible documentation: <a href="http://docs.ansible.com/ansible/faq.html#how-do-i-generate-crypted-passwords-for-the-user-module" rel="nofollow">http://docs.ansible.com/ansible/faq.html#how-do-i-generate-crypted-passwords-for-the-user-module</a>).</p>
<p>It prompts for a password, which I input. And then it returns the key.</p>
<p>Regardless of the password I input, all keys created begin with <code>$6$rounds=...</code></p>
<p>What does this mean? Is this part of the key?</p>
 | -1 | 2016-10-06T20:14:38Z | 39,904,985 | <p>This prefix identifies the scheme used for the hash: in the case of <code>sha512_crypt</code>, <code>6</code> indicates <code>sha512</code> and <code>rounds=x</code> indicates the number of rounds used to compute the hash. It is part of the stored hash string itself, so keep it together with the rest of the output.</p>
<p>Also current <code>NIST</code> standards suggest <code>pbkdf2_sha256</code> for password hashing.</p>
| 2 | 2016-10-06T20:20:15Z | [
"python",
"encryption",
"passlib"
]
|
Pandas cast all object columns to category | 39,904,889 | <p>I want an elegant function to cast all object columns in a pandas data frame to categories.</p>
<p><code>df[x] = df[x].astype("category")</code> performs the type cast, and <code>df.select_dtypes(include=['object'])</code> would sub-select all object columns. However, this results in a loss of the other columns / a manual merge is required. Is there a solution which "just works in place" or does not require a manual cast?</p>
<h1>edit</h1>
<p>I am looking for something similar as <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html</a> for a conversion to categorical data</p>
| 2 | 2016-10-06T20:15:05Z | 39,906,514 | <p>use <code>apply</code> and <code>pd.Series.astype</code> with <code>dtype='category'</code></p>
<p>Consider the <code>pd.DataFrame</code> <code>df</code></p>
<pre><code>df = pd.DataFrame(dict(
A=[1, 2, 3, 4],
B=list('abcd'),
C=[2, 3, 4, 5],
D=list('defg')
))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/qcECx.png" rel="nofollow"><img src="http://i.stack.imgur.com/qcECx.png" alt="enter image description here"></a></p>
<pre><code>df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
A 4 non-null int64
B 4 non-null object
C 4 non-null int64
D 4 non-null object
dtypes: int64(2), object(2)
memory usage: 200.0+ bytes
</code></pre>
<p>Let's use <code>select_dtypes</code> to include all <code>'object'</code> types to convert, and recombine with a <code>select_dtypes</code> that excludes them.</p>
<pre><code>df = pd.concat([
df.select_dtypes([], ['object']),
df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category')
], axis=1).reindex_axis(df.columns, axis=1)
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
A 4 non-null int64
B 4 non-null category
C 4 non-null int64
D 4 non-null category
dtypes: category(2), int64(2)
memory usage: 208.0 bytes
</code></pre>
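<p>If you just want the conversion to happen in place without rebuilding the frame, a plain loop over the object columns is a simple alternative (my suggestion, not required by the approach above):</p>
<pre><code>for col in df.select_dtypes(['object']).columns:
    df[col] = df[col].astype('category')

df.info()  # B and D now show dtype "category"
</code></pre>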
| 2 | 2016-10-06T22:21:57Z | [
"python",
"pandas",
"casting",
"categorical-data"
]
|
Why should I rebuild Caffe each time I want to use a new package? | 39,905,248 | <p>I am very new to Ubuntu and I just started learning Caffe. The question might sound stupid, as I have no idea how the 'make' command works. </p>
<p>Anyway, I installed Caffe following the instruction in this link: <a href="https://gist.github.com/titipata/f0ef48ad2f0ebc07bcb9" rel="nofollow">Caffe Installation</a> and everything works fine for me. </p>
<p>However, recently I decided to try two packages which are based on Caffe: <a href="https://github.com/rbgirshick/py-faster-rcnn" rel="nofollow">Faster R-CNN</a> and <a href="https://github.com/zhaoweicai/mscnn" rel="nofollow">MS-CNN</a>. In their installation instructions are mentioned that I need to '<em>make</em>' Caffe and pycaffe. It might make sense as both of them added some new layers to Caffe. But do I have to really make Caffe and pycaffe again or is there any other way to install these packages?
Then what should I do with my previous Caffe folder? do I have to just remove it? Then how can I have both packages simultaneously when each has its own copy of caffe? </p>
<p>P/S: when I want to make any of these Caffe, the $PYTHONPATH$ should point to the python folder of that caffe or it generates an error.</p>
| 0 | 2016-10-06T20:40:39Z | 39,906,633 | <p>You have to 'make' caffe for each installation. I don't know of any other way. This is because each version might have different layers. </p>
<p>You can have multiple versions of caffe on your system. There is no need to remove one in order to make another. You'll just have to change the $PYTHONPATH$ to whichever caffe you want to use. </p>
| 1 | 2016-10-06T22:32:34Z | [
"python",
"package",
"install",
"ubuntu-14.04",
"caffe"
]
|
How do I import a submodule with web2py? | 39,905,256 | <p>I'm trying to do a dynamic import (using "__import__()") of a submodule in web2py and it doesn't seem to be finding the submodule.</p>
<p>Here's an example path:
web2py/app/modules/module.py <-- This works
web2py/app/modules/module_folder/submodule.py <-- This won't get spotted.</p>
<p>Right now as a workaround I'm using 'exec()' but I'd rather not do that.</p>
<hr>
<p><strong>Answers to questions:</strong></p>
<p><em>"Do you have __init__.py in module_folder?"</em></p>
<p>Yep.</p>
<p><em>"What exactly is the line of code you write to import web2py/app/modules/module_folder/submodule.py?"</em></p>
<pre><code>mapping_module = __import__('communication_templates.mappings.%s' % module_name)
</code></pre>
<p><em>"what is the error?"</em></p>
<pre><code><type 'exceptions.AttributeError'> 'module' object has no attribute 'mapping_dict'
</code></pre>
<p>Explanation: I'm trying to get a variable named 'mapping_dict' from the module once I've loaded it.</p>
| 0 | 2016-10-06T20:41:30Z | 39,910,270 | <p>The problem here is that <code>__import__</code> does not do the most intuitive thing (neither does <code>import</code> btw).</p>
<p>Here's what happens:</p>
<pre><code>>>> import sys
>>> x = __import__('package1.package2.module')
>>> x
<module 'package1' from 'C:\\Temp\\package1\\__init__.py'>
</code></pre>
<p>Even though <code>package1.package2.module</code> was imported, the variable returned is actually <code>package1</code>.</p>
<p>So, to access something that is in <code>package1.package2.module</code>, one has to dig down again:</p>
<pre><code>>>> x.package2.module.Class
<class 'package1.package2.module.Class'>
</code></pre>
<p><strong>Is it then the same as importing just <code>package1</code>?</strong></p>
<p>Not really:</p>
<pre><code>>>> x = __import__('package1')
>>> x
<module 'package1' from 'C:\\Temp\\package1\\__init__.py'>
>>> x.package2.module.Class
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'package1' has no attribute 'package2'
</code></pre>
<h2>Why?</h2>
<p>Well, <code>__import__</code> is the same as `import, that is, these are the same:</p>
<pre><code>package1 = __import__('package1.package2.module')
# is the same as:
import package1.package2.module
</code></pre>
<p>And:</p>
<pre><code>package1 = __import__('package1')
# is the same as:
import package1
</code></pre>
<p>In each of those cases, you get just one local variable <code>package1</code>.</p>
<p>In the case of importing <code>package1.package2.module</code>, <code>package2</code> is also imported and stored in <code>package2</code> attribute of package <code>package1</code> etc.</p>
<h1>Solution</h1>
<p>To access <code>package1.package2.module.Class</code>, if <code>package1.package2.module</code> is in a string:</p>
<pre><code>>>> s = 'package1.package2.module'
>>> module = __import__(s)
>>> for submodule_name in s.split('.')[1:]:
... module = getattr(module, submodule_name)
...
>>> module.Class
<class 'package1.package2.module.Class'>
</code></pre>
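<p>A simpler route on Python 2.7+/3.1+ (an additional option, equivalent in effect to the loop above) is <code>importlib.import_module</code>, which returns the deepest submodule directly:</p>
<pre><code>>>> import importlib
>>> module = importlib.import_module('package1.package2.module')
>>> module.Class
<class 'package1.package2.module.Class'>
</code></pre>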
| 0 | 2016-10-07T05:49:04Z | [
"python",
"web2py"
]
|
Python: Saving to CSV file, accidentally writing to next column instead of row after manually opening the file | 39,905,281 | <p>I've noticed a really weird bug and didn't know if anyone else had seen this / knows how to stop it. </p>
<p>I'm writing to a CSV file using this:</p>
<pre><code>def write_to_csv_file(self, object, string):
with open('data_model_1.csv', 'a') as f:
writer = csv.writer(f)
writer.writerow([object, string])
</code></pre>
<p>and then write to the file:</p>
<pre><code>self.write_to_csv_file(self.result['outputLabel'], string)
</code></pre>
<p>If I open the CSV file to look at the results, the next time I write to the file, it will start in column 3 of the last line (column 1 is object, column 2 is string).</p>
<p>If I run <code>self.write_to_csv_file(self.result['outputLabel'], string)</code> multiple times without manually opening the file (obviously I open the file in the Python script), everything is fine. </p>
<p>It's only when I open the file so I get the issue of starting on Column 3. </p>
<p>Any thoughts on how to fix this?</p>
| -1 | 2016-10-06T20:43:21Z | 39,905,521 | <p>You're opening the file in <strong>a</strong>ppend <a href="https://docs.python.org/3.6/library/functions.html#open" rel="nofollow">mode</a>, so the data is appended to the end of the file. If the file doesn't end in a newline, rows may get concatenated. Try writing a newline to the file before appending new rows:</p>
<pre><code>with open("data_model_1.csv", "a") as f:
f.write("\n")
</code></pre>
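<p>If you want to avoid inserting blank rows when the file already ends with a newline, a more defensive variant (just a sketch) only appends one when needed:</p>
<pre><code>import os

def ensure_trailing_newline(path):
    # append a newline only if the file exists, is non-empty and does not already end with one
    if os.path.exists(path) and os.path.getsize(path) > 0:
        with open(path, 'rb') as f:
            f.seek(-1, os.SEEK_END)
            last = f.read(1)
        if last not in (b'\n', b'\r'):
            with open(path, 'ab') as f:
                f.write(b'\n')
</code></pre>
<p>Call <code>ensure_trailing_newline('data_model_1.csv')</code> before appending the next row.</p>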
| 1 | 2016-10-06T20:58:43Z | [
"python",
"csv"
]
|
Unexpected behaviour in python multiprocessing | 39,905,360 | <p>I'm trying to understand the following odd behavior observed using the Python <code>multiprocessing</code> module.</p>
<p>Sample testClass:</p>
<pre><code>import os
import multiprocessing

class testClass(multiprocessing.Process):
def __del__(self):
print "__del__ PID: %d" % os.getpid()
print self.a
def __init__(self):
multiprocessing.Process.__init__(self)
print "__init__ PID: %d" % os.getpid()
self.a = 0
def run(self):
print "method1 PID: %d" % os.getpid()
self.a = 1
</code></pre>
<p>And a little test program:</p>
<pre><code>from testClass import testClass

print "Start"
proc_list = []
proc_list.append(testClass())
proc_list[-1].start()
proc_list[-1].join()
print "End"
</code></pre>
<p>This produces:</p>
<pre><code>Start
__init__ PID: 89578
method1 PID: 89585
End
__del__ PID: 89578
0
</code></pre>
<p>Why does it not print <code>1</code>?</p>
<p>I'm guessing that it's related to the fact that <code>run</code> is actually being executed on a different process as can be seen. If this is the expected behavior how is everyone using multiprocessing where processes have an expensive <code>__init__</code> as in processes that need to open a database?</p>
<p>And shouldn't this behaviour be better highlighted in multiprocessing documentation?</p>
| 0 | 2016-10-06T20:48:41Z | 39,905,471 | <p>You can wrap your expensive initialization inside a context manager:</p>
<pre><code>def run(self):
with expensive_initialization() as initialized_object:
do_some_logic_here(initialized_object)
</code></pre>
<p>You will have a chance to properly initialize your object before calling <code>do_some_logic_here</code>, and to properly release the resources after leaving the context manager's block.</p>
<p>See <a href="https://docs.python.org/3.6/reference/datamodel.html#context-managers" rel="nofollow">documentation</a>.</p>
| 0 | 2016-10-06T20:55:28Z | [
"python",
"python-multiprocessing"
]
|
Unexpected behaviour in python multiprocessing | 39,905,360 | <p>I'm trying to understand the following odd behavior observed using the Python <code>multiprocessing</code> module.</p>
<p>Sample testClass:</p>
<pre><code>import os
import multiprocessing

class testClass(multiprocessing.Process):
def __del__(self):
print "__del__ PID: %d" % os.getpid()
print self.a
def __init__(self):
multiprocessing.Process.__init__(self)
print "__init__ PID: %d" % os.getpid()
self.a = 0
def run(self):
print "method1 PID: %d" % os.getpid()
self.a = 1
</code></pre>
<p>And a little test program:</p>
<pre><code>from testClass import testClass

print "Start"
proc_list = []
proc_list.append(testClass())
proc_list[-1].start()
proc_list[-1].join()
print "End"
</code></pre>
<p>This produces:</p>
<pre><code>Start
__init__ PID: 89578
method1 PID: 89585
End
__del__ PID: 89578
0
</code></pre>
<p>Why does it not print <code>1</code>?</p>
<p>I'm guessing that it's related to the fact that <code>run</code> is actually being executed on a different process as can be seen. If this is the expected behavior how is everyone using multiprocessing where processes have an expensive <code>__init__</code> as in processes that need to open a database?</p>
<p>And shouldn't this behaviour be better highlighted in multiprocessing documentation?</p>
| 0 | 2016-10-06T20:48:41Z | 39,906,198 | <p>When calling <code>start()</code>, the interpreter forks and creates a child process, which gets a copy of the page tables from the parent. These point to pages which are marked as readonly and only copied when written (COW). The interpreter, when executing <code>run</code> in the child process, accesses the child's copy of the <code>PyObject</code> which represents <code>a</code>. The parent's memory is not touched. The child also gets a copy of the file descriptors table, which means that if a connection is opened by the parent, that file descriptor is inherited by the child. You can see (with <code>strace</code>) that the child is created via <code>clone</code> without <code>CLONE_FILES</code>:</p>
<pre><code>clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f460d8929d0)
</code></pre>
<p>Therefore, from clone man page:</p>
<blockquote>
<p>If CLONE_FILES is not set, the child process inherits a copy of all file descriptors opened in the calling process at the time of clone(). (The duplicated file descriptors in the child refer to the same open file descriptions (see open(2)) as the corresponding file descriptors in the calling process.) Subsequent operations that open or close file descriptors, or change file descriptor flags, performed by either the calling process or the child process do not affect the other process. </p>
</blockquote>
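<p>Because the child's writes never reach the parent's copy, state that must be visible to the parent has to go through an explicitly shared object. A minimal sketch of one way to do that (using <code>multiprocessing.Value</code>; queues or pipes are other options):</p>
<pre><code>import multiprocessing

class testClass(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)
        self.a = multiprocessing.Value('i', 0)  # lives in shared memory

    def run(self):
        self.a.value = 1                        # the write is visible to the parent

proc = testClass()
proc.start()
proc.join()
print proc.a.value                              # prints 1
</code></pre>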
| 0 | 2016-10-06T21:54:28Z | [
"python",
"python-multiprocessing"
]
|
How to use multi-level indexing in pyomo with a set and a rangeset? | 39,905,366 | <p>I have multiple levels of indices in my model in <code>pyomo</code>, and I need to be able to index variables like this:</p>
<pre><code>model.b['a',1]
</code></pre>
<p>But this doesn't seem possible for some reason. I can use multilevel indexing like this:</p>
<pre><code>model = ConcreteModel()
model.W = RangeSet(0,1)
model.I = RangeSet(0,4)
model.J = RangeSet(0,4)
model.K = RangeSet(0,3)
model.B = Var(model.W, model.I, model.J, model.K)
model.B[1,2,3,0] # access the variable using the indices - THIS WORKS!!
</code></pre>
<p>But this does not work, however:</p>
<pre><code>model = ConcreteModel()
model.W = Set(['a','b'])
model.I = RangeSet(0,4)
model.b = Var(model.W, model.I) # I can't even create this - throws exception
</code></pre>
<p>...it throws the exception:</p>
<pre><code>TypeError: Cannot index a component with an indexed set
</code></pre>
<p>Why does the first one work and not the second one?</p>
| 0 | 2016-10-06T20:48:58Z | 39,905,929 | <p>The problem is that when you write</p>
<pre><code>model.W = Set(['a','b'])
</code></pre>
<p>you are actually creating an indexed Set object rather than a Set with the values in the provided list. This is because all Pyomo component constructors treat positional arguments as indexing sets.</p>
<p>You can fix this by adding the "initialize" keyword before your list of values</p>
<pre><code>model.W = Set(initialize=['a','b'])
</code></pre>
<p>The same would be true if you provided a list of integers rather than strings</p>
<pre><code>model.I = Set(initialize=[0,1,2,3,4])
</code></pre>
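<p>With <code>W</code> declared via <code>initialize</code>, the two-level indexing from the question works as intended; a short sketch:</p>
<pre><code>from pyomo.environ import *

model = ConcreteModel()
model.W = Set(initialize=['a','b'])
model.I = RangeSet(0,4)
model.b = Var(model.W, model.I)
model.b['a',1]  # indexes the variable without raising the TypeError
</code></pre>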
| 1 | 2016-10-06T21:31:20Z | [
"python",
"python-2.7",
"pyomo"
]
|