title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Why does it not find the variable? | 39,766,919 | <p>So below is my code, and I need to get the variable 'p' from the Entry widget, set it as a new variable name, then print it. For some reason, I get the following error: 'NameError: name 'p' is not defined'. I have absolutely no idea how to fix it and this is my last resort. Please help me.</p>
<p>Code:</p>
<pre><code>import tkinter as tk # python3
#import Tkinter as tk # python

self = tk
TITLE_FONT = ("Helvetica", 18, "bold")

#-------------------FUNCTIONS-------------------#
def EnterP():
    b1 = p.get()
    print (p.get())

def EnterS(*self):
    print (self.s.get())

def EnterB(*args):
    print (b.get())

def EnterN(*args):
    print (n.get())
#-----------------------------------------------#

class SampleApp(tk.Tk):
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)

        # the container is where we'll stack a bunch of frames
        # on top of each other, then the one we want visible
        # will be raised above the others
        container = tk.Frame(self)
        container.pack(side="top", fill="both", expand=True)
        container.grid_rowconfigure(0, weight=1)
        container.grid_columnconfigure(0, weight=1)

        self.frames = {}
        for F in (Home, Population, Quit):
            page_name = F.__name__
            frame = F(parent=container, controller=self)
            self.frames[page_name] = frame

            # put all of the pages in the same location;
            # the one on the top of the stacking order
            # will be the one that is visible.
            frame.grid(row=0, column=0, sticky="nsew")

        self.show_frame("Home")

    def show_frame(self, page_name):
        '''Show a frame for the given page name'''
        frame = self.frames[page_name]
        frame.tkraise()

class Home(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller
        label = tk.Label(self, text="Home Page", font=TITLE_FONT)
        label.pack(side="top", fill="x", pady=10)

        button1 = tk.Button(self, text="Population",
                            command=lambda: controller.show_frame("Population"))
        button5 = tk.Button(self, text = "Quit",
                            command=lambda: controller.show_frame("Quit"))
        button1.pack()
        button5.pack()

class Population(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller
        label = tk.Label(self, text="Enter Generation 0 Values", font=TITLE_FONT)
        label.pack(side="top", fill="x", pady=10)

        #Population Number
        w = tk.Label(self, text="Enter the value for the Population")
        w.pack()
        p = tk.Entry(self)
        p.pack()
        pb = tk.Button(self, text="OK", command = EnterP)
        pb.pack()

        #Survival Rates
        w = tk.Label(self, text="Enter the value of Survival Rates")
        w.pack()
        s = tk.Entry(self)
        s.pack()
        sb = tk.Button(self, text="OK", command = EnterS)
        sb.pack()

        #Birth Rates
        w = tk.Label(self, text="ENter the value for the Birth Rate")
        w.pack()
        b = tk.Entry(self)
        b.pack()
        bb = tk.Button(self, text="OK", command = EnterB)
        bb.pack()

        #Number of New Generations To Model
        w = tk.Label(self, text="Enter the number of New Generatiions")
        w.pack()
        n = tk.Entry(self)
        n.pack()
        nb = tk.Button(self, text="OK", command = EnterN)
        nb.pack()

        button = tk.Button(self, text="<<< BACK",
                           command=lambda: controller.show_frame("Home"))
        button.pack()

class Quit(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller
        label = tk.Label(self, text="Are you sure you want to quit?", font=TITLE_FONT)
        label.pack(side="top", fill="x", pady=10)
        yes = tk.Button(self, text="Yes")
        yes.pack()
        no = tk.Button(self, text = "No",
                       command = lambda: controller.show_frame("Home"))
        no.pack()

if __name__ == "__main__":
    app = SampleApp()
    app.mainloop()
</code></pre>
| -3 | 2016-09-29T09:45:39Z | 39,767,215 | <p>I won't try to give a complete answer (for there might be more than one error in your program), but let me tell you what your problem is:</p>
<p>You didn't take into account <strong>variable scopes</strong>. Whenever you name a variable, it is accessible only in a specific region of your code.</p>
<p>In your code, for instance, <code>p</code> is defined inside <code>class Population.__init__</code> scope. <code>EnterP</code> is defined <strong>outside</strong> of this scope, hence cannot access <code>p</code>.</p>
<p>Each language has its own way to handle such scopes. Here is something you should read on this topic:</p>
<p><a href="https://python-textbok.readthedocs.io/en/latest/Variables_and_Scope.html" rel="nofollow">https://python-textbok.readthedocs.io/en/latest/Variables_and_Scope.html</a></p>
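<p>To make the scope point concrete, here is a minimal sketch (hypothetical names, not a drop-in fix for the program above): a plain module-level function cannot see a variable that was local to <code>__init__</code>, but a method can reach the same value through the instance:</p>

```python
class Population:
    def __init__(self):
        self.p = "some value"   # stored on the instance, so it outlives __init__

    def enter_p(self):
        print(self.p)           # works: reached through the instance


def enter_p_broken():
    print(p)                    # NameError: 'p' was local to another scope


pop = Population()
pop.enter_p()                   # prints "some value"
```

<p>In the code above, the analogous fix would be to store the entry widget as <code>self.p</code> inside <code>Population.__init__</code> and turn <code>EnterP</code> into a method of that class.</p>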
<p>Good luck!</p>
| 0 | 2016-09-29T10:00:52Z | [
"python",
"variables",
"tkinter",
"nameerror"
]
|
Quit button assistance needed | 39,767,084 | <p>I'm trying to make a program for this topic I'm doing and I've managed to get some of it done, but when it comes to quitting my tkinter menu it doesn't close unless I close it manually. I've got the button for the option to close it, but it doesn't work. Can anyone help with my problem? Here's my code below.</p>
<pre><code>import sys
import tkinter
from tkinter import*
import time

global v
global popJ
popJ = 0

def genInput(): #Allows the user to input the data
    gen = Toplevel()
    gen.wm_title("Data Input")

    v = IntVar()
    ent1 = Entry(gen, textvariable = v).pack()
    ent1Txt = Label(gen, text = 'Please input Juvenile Populations')
    ent1Txt.pack()

    v2 = StringVar()
    ent2 = Entry(gen, textvariable = v2)
    ent2Txt = Label(gen, text = 'Please input Adult Populations')
    ent2.pack()
    ent2Txt.pack()

    v3 = StringVar()
    ent3 = Entry(gen, textvariable = v3)
    ent3Txt = Label(gen, text = 'Please input Senile Populations')
    ent3.pack()
    ent3Txt.pack()

    v4 = StringVar()
    ent4 = Entry(gen, textvariable = v4)
    ent4Txt = Label(gen, text = 'Please input Survival rates for Juveniles')
    ent4.pack()
    ent4Txt.pack()

    v5 = StringVar()
    ent5 = Entry(gen, textvariable = v5)
    ent5Txt = Label(gen, text = 'Please input Survival rates for Adults')
    ent5.pack()
    ent5Txt.pack()

    v6 = StringVar()
    ent6 = Entry(gen, textvariable = v6)
    ent6Txt = Label(gen, text = 'Please input Survival rates for Seniles')
    ent6.pack()
    ent6Txt.pack()

    v7 = StringVar()
    ent7 = Entry(gen, textvariable = v7)
    ent7Txt = Label(gen, text = 'Please input the birth rate')
    ent7.pack()
    ent7Txt.pack()

    v8 = StringVar()
    ent8 = Entry(gen, textvariable = v8)
    ent8Txt = Label(gen, text = 'Number of Generations')
    ent8.pack()
    ent8Txt.pack()

    def quit1(): # Needs to be here or it breaks the program
        gen.destroy()
        return

    def submit():
        global popJ
        popJ = v.get()
        popJtxt = Label(gen, text= v.get()).pack()
        return

    submit1= Button(gen, text="Submit")
    submit1.pack()
    submit1.configure(command = submit)

    return1 = Button(gen, text = 'Return to Menu')
    return1.pack(pady=30)
    return1.configure(command = quit1)
    return

def genView(): # should display the data
    disp = Toplevel()
    disp.wm_title('Displaying data Values')
    popJuvenilesTxt = Label (disp, text = popJ)
    popJuvenilesTxt.grid(row =1, column = 1)

def menu(): # creates the gui menu
    menu = Tk()
    menu.wm_title("Greenfly model")

    genInp = Button(menu,text = "Set Generation Values")
    genVew = Button(menu,text = 'Dysplay Generation Values')
    modelCal = Button(menu,text = 'Run model')
    exportData = Button(menu,text = 'Export Data')
    quitProgram = Button(menu,text = 'Quit')

    genTxt = Label(menu, text= 'Input the Generation values')
    genvTxt = Label (menu, text = 'View the current generation values')
    modelTxt = Label (menu, text = 'Run the model')
    exportTxt = Label (menu, text = 'Export data')
    quitTxt = Label (menu, text= 'Exit the program')

    genInp.grid(row=1, column=1)
    genVew.grid(row=2, column=1)
    modelCal.grid(row=3, column=1)
    exportData.grid(row=4 , column=1)
    quitProgram.grid(row=5, column=1)

    genTxt.grid(row=1, column = 2)
    genvTxt.grid(row=2, column = 2)
    modelTxt.grid(row=3, column = 2)
    exportTxt.grid(row=4, column = 2)
    quitTxt.grid(row=5, column = 2)

    genInp.configure(command = genInput)
    genVew.configure(command = genView)

    menu.mainloop()

menu()
</code></pre>
| -2 | 2016-09-29T09:54:23Z | 39,767,287 | <p>First off, the <code>def function</code> needs to be in line with the others.
Secondly, do <code>import os</code> at the beginning of your program and then change the function to:</p>
<pre><code>def quit1(): # Needs to be here or it breaks the program
    os._exit(0)
    return
</code></pre>
| 0 | 2016-09-29T10:03:44Z | [
"python",
"button",
"tkinter"
]
|
Quit button assistance needed | 39,767,084 | <p>I'm trying to make a program for this topic I'm doing and I've managed to get some of it done, but when it comes to quitting my tkinter menu it doesn't close unless I close it manually. I've got the button for the option to close it, but it doesn't work. Can anyone help with my problem? Here's my code below.</p>
<pre><code>import sys
import tkinter
from tkinter import*
import time

global v
global popJ
popJ = 0

def genInput(): #Allows the user to input the data
    gen = Toplevel()
    gen.wm_title("Data Input")

    v = IntVar()
    ent1 = Entry(gen, textvariable = v).pack()
    ent1Txt = Label(gen, text = 'Please input Juvenile Populations')
    ent1Txt.pack()

    v2 = StringVar()
    ent2 = Entry(gen, textvariable = v2)
    ent2Txt = Label(gen, text = 'Please input Adult Populations')
    ent2.pack()
    ent2Txt.pack()

    v3 = StringVar()
    ent3 = Entry(gen, textvariable = v3)
    ent3Txt = Label(gen, text = 'Please input Senile Populations')
    ent3.pack()
    ent3Txt.pack()

    v4 = StringVar()
    ent4 = Entry(gen, textvariable = v4)
    ent4Txt = Label(gen, text = 'Please input Survival rates for Juveniles')
    ent4.pack()
    ent4Txt.pack()

    v5 = StringVar()
    ent5 = Entry(gen, textvariable = v5)
    ent5Txt = Label(gen, text = 'Please input Survival rates for Adults')
    ent5.pack()
    ent5Txt.pack()

    v6 = StringVar()
    ent6 = Entry(gen, textvariable = v6)
    ent6Txt = Label(gen, text = 'Please input Survival rates for Seniles')
    ent6.pack()
    ent6Txt.pack()

    v7 = StringVar()
    ent7 = Entry(gen, textvariable = v7)
    ent7Txt = Label(gen, text = 'Please input the birth rate')
    ent7.pack()
    ent7Txt.pack()

    v8 = StringVar()
    ent8 = Entry(gen, textvariable = v8)
    ent8Txt = Label(gen, text = 'Number of Generations')
    ent8.pack()
    ent8Txt.pack()

    def quit1(): # Needs to be here or it breaks the program
        gen.destroy()
        return

    def submit():
        global popJ
        popJ = v.get()
        popJtxt = Label(gen, text= v.get()).pack()
        return

    submit1= Button(gen, text="Submit")
    submit1.pack()
    submit1.configure(command = submit)

    return1 = Button(gen, text = 'Return to Menu')
    return1.pack(pady=30)
    return1.configure(command = quit1)
    return

def genView(): # should display the data
    disp = Toplevel()
    disp.wm_title('Displaying data Values')
    popJuvenilesTxt = Label (disp, text = popJ)
    popJuvenilesTxt.grid(row =1, column = 1)

def menu(): # creates the gui menu
    menu = Tk()
    menu.wm_title("Greenfly model")

    genInp = Button(menu,text = "Set Generation Values")
    genVew = Button(menu,text = 'Dysplay Generation Values')
    modelCal = Button(menu,text = 'Run model')
    exportData = Button(menu,text = 'Export Data')
    quitProgram = Button(menu,text = 'Quit')

    genTxt = Label(menu, text= 'Input the Generation values')
    genvTxt = Label (menu, text = 'View the current generation values')
    modelTxt = Label (menu, text = 'Run the model')
    exportTxt = Label (menu, text = 'Export data')
    quitTxt = Label (menu, text= 'Exit the program')

    genInp.grid(row=1, column=1)
    genVew.grid(row=2, column=1)
    modelCal.grid(row=3, column=1)
    exportData.grid(row=4 , column=1)
    quitProgram.grid(row=5, column=1)

    genTxt.grid(row=1, column = 2)
    genvTxt.grid(row=2, column = 2)
    modelTxt.grid(row=3, column = 2)
    exportTxt.grid(row=4, column = 2)
    quitTxt.grid(row=5, column = 2)

    genInp.configure(command = genInput)
    genVew.configure(command = genView)

    menu.mainloop()

menu()
</code></pre>
| -2 | 2016-09-29T09:54:23Z | 39,787,009 | <p>For Tkinter you can just pass <code>gen.quit</code> to the command of a button widget, like so:</p>
<pre><code>close = Button(gen, text = 'Close', command = gen.quit).pack()
</code></pre>
| 0 | 2016-09-30T08:30:10Z | [
"python",
"button",
"tkinter"
]
|
Reading large text files line-by-line is still using all my memory | 39,767,227 | <p>I'm writing a simple program that is supposed to read a large file (263.5gb to be exact) with JSON on each line (<a href="https://www.reddit.com/r/datasets/comments/3mg812/full_reddit_submission_corpus_now_available_2006/" rel="nofollow">link here</a>). I've done some research and the best method I've found is to read each line by line. Mine looks like this (<a href="https://trinket.io/python/4d1aa528e4" rel="nofollow">full code here</a>):</p>
<pre><code>with open(dumpLocation, "r") as f:
    for line in f:
        # Read line, convert to dictionary and assign it to 'c'
        c = json.loads(f.readline())
        for n in files:
            if n.lower() in c["title"].lower():
                try:
                    # Collect data
                    timestamp = str(c["retrieved_on"])
                    sr_id = c["subreddit_id"]
                    score = str(c["score"])
                    ups = str(c["ups"])
                    downs = str(c["downs"])
                    title = ('"' + c["title"] + '"')
                    # Append data to file
                    files[n].write(timestamp + ","
                                   + sr_id + ","
                                   + score + ","
                                   + ups + ","
                                   + downs + ","
                                   + title + ","
                                   + "\n")
                    found += 1
                except:
                    numberOfErrors += 1
                    errors[comments] = sys.exc_info()[0]
        comments += 1
        # Updates user
        print("Comments scanned: " + str(comments) + "\nFound: " + str(found) + "\n")
</code></pre>
<p>Now I can get this to run, and it ran for a good hour before it crashed (approximately 1.3 million lines). I noticed in the processes that the memory usage was slowly growing and got to about 2gb before crashing.</p>
<p>There is approximately 200 million lines I need to sort through, and I am also writing to files if specific words are found (searched for 5, found 337 before crash). Is there a better way of doing this? My computer usually only has about 2gb RAM to spare</p>
| 0 | 2016-09-29T10:01:19Z | 39,768,216 | <p>You have a memory leak here:</p>
<pre><code>except:
    numberOfErrors += 1
    errors[comments] = sys.exc_info()[0]
</code></pre>
<p>With a huge number of input lines, number of errors can also be huge, especially if you have some error in your algorithm.</p>
<p>Plain <code>except</code> is evil because it hides all errors, even syntax errors in your code. You should handle only specific exception types that you expect to happen on real data, and make try-except block as narrow as possible.</p>
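<p>A small sketch of the narrower pattern (with made-up sample records, not the asker's Reddit dump): catch only the parse and lookup failures you expect, and count them instead of storing every exception object:</p>

```python
import json

records = ['{"score": 1}', 'not json at all', '{"title": "x"}']
number_of_errors = 0

for raw in records:
    try:
        c = json.loads(raw)        # may raise ValueError on malformed JSON
        score = c["score"]         # may raise KeyError on a missing field
    except (ValueError, KeyError):
        number_of_errors += 1      # count it, but keep nothing in memory

print(number_of_errors)            # prints 2
```

<p>A genuine bug in the loop body (say, a misspelled key) would then surface immediately instead of silently filling the <code>errors</code> dict.</p>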
| 1 | 2016-09-29T10:47:45Z | [
"python",
"file",
"optimization",
"text"
]
|
Reading large text files line-by-line is still using all my memory | 39,767,227 | <p>I'm writing a simple program that is supposed to read a large file (263.5gb to be exact) with JSON on each line (<a href="https://www.reddit.com/r/datasets/comments/3mg812/full_reddit_submission_corpus_now_available_2006/" rel="nofollow">link here</a>). I've done some research and the best method I've found is to read each line by line. Mine looks like this (<a href="https://trinket.io/python/4d1aa528e4" rel="nofollow">full code here</a>):</p>
<pre><code>with open(dumpLocation, "r") as f:
    for line in f:
        # Read line, convert to dictionary and assign it to 'c'
        c = json.loads(f.readline())
        for n in files:
            if n.lower() in c["title"].lower():
                try:
                    # Collect data
                    timestamp = str(c["retrieved_on"])
                    sr_id = c["subreddit_id"]
                    score = str(c["score"])
                    ups = str(c["ups"])
                    downs = str(c["downs"])
                    title = ('"' + c["title"] + '"')
                    # Append data to file
                    files[n].write(timestamp + ","
                                   + sr_id + ","
                                   + score + ","
                                   + ups + ","
                                   + downs + ","
                                   + title + ","
                                   + "\n")
                    found += 1
                except:
                    numberOfErrors += 1
                    errors[comments] = sys.exc_info()[0]
        comments += 1
        # Updates user
        print("Comments scanned: " + str(comments) + "\nFound: " + str(found) + "\n")
</code></pre>
<p>Now I can get this to run, and it ran for a good hour before it crashed (approximately 1.3 million lines). I noticed in the processes that the memory usage was slowly growing and got to about 2gb before crashing.</p>
<p>There is approximately 200 million lines I need to sort through, and I am also writing to files if specific words are found (searched for 5, found 337 before crash). Is there a better way of doing this? My computer usually only has about 2gb RAM to spare</p>
| 0 | 2016-09-29T10:01:19Z | 39,783,866 | <p>I worked out where the memory leak was. On this line where I was printing to the console after every line:</p>
<pre><code> print("Comments scanned: " + str(comments) + "\nFound: " + str(found) + "\n")
</code></pre>
<p>Print 200 million times and your computer is bound to run out of memory trying to hold it all in the console at once. Removed it and it worked perfectly :)</p>
| 0 | 2016-09-30T04:47:32Z | [
"python",
"file",
"optimization",
"text"
]
|
How one can run xgboost on hadoop cluster for distributed model training? | 39,767,280 | <p>I am trying to build a CTR prediction model using XGBoost on 100 million impressions for contextual ads, and in order to achieve this, I want to try XGBoost on hadoop as I have all of the impressions data available in HDFS.</p>
<p>Can someone cite a working tutorial for the same for python? </p>
| 0 | 2016-09-29T10:03:27Z | 39,769,979 | <p>There are many ways to do it:</p>
<ol>
<li><p>If in case you have some lower level logical grouping say CTR for some item department and you want to make localized models for departments then you can go for map reduce type of setting. It will make sure all data belonging to single department will end up in single YARN container and you can build a model over that data. NLineInputFormat is a clever trick to make this map only process than map reduce based process which will give you significant speed boost up.</p></li>
<li><p>You can do distributed machine learning using Spark version of XGBoost for more refer <a href="http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html" rel="nofollow">http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html</a></p></li>
<li><p>If you are in the process of deciding your infrastructure as well, then give AWS a try as explained here. It's not hadoop, but indeed pseudo-distributed machine learning: <a href="https://xgboost.readthedocs.io/en/latest/tutorials/aws_yarn.html" rel="nofollow">https://xgboost.readthedocs.io/en/latest/tutorials/aws_yarn.html</a></p></li>
</ol>
| 0 | 2016-09-29T12:11:12Z | [
"python",
"hadoop",
"machine-learning",
"xgboost"
]
|
How to use SHA256-HMAC in python code? | 39,767,297 | <p><a href="https://doc.periscopedata.com/doc/embed-api" rel="nofollow">I am taking message and key from this URL</a></p>
<pre><code>import hmac
import hashlib
import base64
my = "/api/embedded_dashboard?data=%7B%22dashboard%22%3A7863%2C%22embed%22%3A%22v2%22%2C%22filters%22%3A%5B%7B%22name%22%3A%22Filter1%22%2C%22value%22%3A%22value1%22%7D%2C%7B%22name%22%3A%22Filter2%22%2C%22value%22%3A%221234%22%7D%5D%7D"
key = "e179017a-62b0-4996-8a38-e91aa9f1"
print(hashlib.sha256(my + key).hexdigest())
</code></pre>
<p>I am getting this result:</p>
<pre><code>2df1d58a56198b2a9267a9955c31291cd454bdb3089a7c42f5d439bbacfb3b88
</code></pre>
<p>Expecting result:</p>
<pre><code>adcb671e8e24572464c31e8f9ffc5f638ab302a0b673f72554d3cff96a692740
</code></pre>
| 0 | 2016-09-29T10:04:04Z | 39,767,589 | <p>You are not making use of <code>hmac</code> at all in your code. </p>
<p>Typical way to use <code>hmac</code>, construct an HMAC object from your key, message and identify the hashing algorithm by passing in its constructor:</p>
<pre><code>h = hmac.new( key, my, hashlib.sha256 )
print( h.hexdigest() )
</code></pre>
<p>That should output </p>
<pre><code>adcb671e8e24572464c31e8f9ffc5f638ab302a0b673f72554d3cff96a692740
</code></pre>
<p>for your example data.</p>
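<p>One caveat worth adding: the snippet above is Python 2 style. On Python 3, <code>hmac.new</code> requires <code>bytes</code> for both the key and the message, so the same computation (same key and URL string as in the question) would be written as:</p>

```python
import hmac
import hashlib

# key and message from the question, as bytes
key = b"e179017a-62b0-4996-8a38-e91aa9f1"
msg = b"/api/embedded_dashboard?data=%7B%22dashboard%22%3A7863%2C%22embed%22%3A%22v2%22%2C%22filters%22%3A%5B%7B%22name%22%3A%22Filter1%22%2C%22value%22%3A%22value1%22%7D%2C%7B%22name%22%3A%22Filter2%22%2C%22value%22%3A%221234%22%7D%5D%7D"

digest = hmac.new(key, msg, hashlib.sha256).hexdigest()
print(digest)
```

<p>Passing <code>str</code> values on Python 3 raises a <code>TypeError</code> instead of silently computing the wrong digest.</p>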
| 1 | 2016-09-29T10:18:31Z | [
"python",
"oauth",
"sha256",
"hmac"
]
|
How to download large files in Python 2 | 39,767,343 | <p>I'm trying to download large files (approx. 1GB) with mechanize module, but I have been unsuccessful. I've been searching for similar threads, but I have found only those, where the files are publicly accessible and no login is required to obtain a file. But this is not my case as the file is located in the private section and I need to login before the download. Here is what I've done so far.</p>
<pre><code>import mechanize
g_form_id = ""
def is_form_found(form1):
return "id" in form1.attrs and form1.attrs['id'] == g_form_id
def select_form_with_id_using_br(br1, id1):
global g_form_id
g_form_id = id1
try:
br1.select_form(predicate=is_form_found)
except mechanize.FormNotFoundError:
print "form not found, id: " + g_form_id
exit()
url_to_login = "https://example.com/"
url_to_file = "https://example.com/download/files/filename=fname.exe"
local_filename = "fname.exe"
br = mechanize.Browser()
br.set_handle_robots(False) # ignore robots
br.set_handle_refresh(False) # can sometimes hang without this
br.addheaders = [('User-agent', 'Firefox')]
response = br.open(url_to_login)
# Find login form
select_form_with_id_using_br(br, 'login-form')
# Fill in data
br.form['email'] = 'email@domain.com'
br.form['password'] = 'password'
br.set_all_readonly(False) # allow everything to be written to
br.submit()
# Try to download file
br.retrieve(url_to_file, local_filename)
</code></pre>
<p>But I'm getting an error when 512MB is downloaded:</p>
<pre><code>Traceback (most recent call last):
  File "dl.py", line 34, in <module>
    br.retrieve(br.retrieve(url_to_file, local_filename)
  File "C:\Python27\lib\site-packages\mechanize\_opener.py", line 277, in retrieve
    block = fp.read(bs)
  File "C:\Python27\lib\site-packages\mechanize\_response.py", line 199, in read
    self.__cache.write(data)
MemoryError: out of memory
</code></pre>
<p>Do you have any ideas how to solve this?
Thanks</p>
| 1 | 2016-09-29T10:06:46Z | 39,767,529 | <p>Try downloading/writing it by chunks. Seems like file takes all your memory.</p>
<p>You should specify Range header for your request if server supports it.</p>
<p><a href="https://en.wikipedia.org/wiki/List_of_HTTP_header_fields" rel="nofollow">https://en.wikipedia.org/wiki/List_of_HTTP_header_fields</a></p>
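<p>The chunked idea looks roughly like this (sketched with an in-memory stand-in for the HTTP response, since the asker's URL requires a login):</p>

```python
import io

def save_in_chunks(src, dst, chunk_size=8192):
    """Copy src to dst one chunk at a time instead of buffering everything."""
    while True:
        block = src.read(chunk_size)
        if not block:
            break
        dst.write(block)

response_body = io.BytesIO(b"x" * 100000)   # pretend 100 kB network response
out = io.BytesIO()                          # pretend local file
save_in_chunks(response_body, out, chunk_size=1024)
print(out.tell())                           # prints 100000
```

<p>Only one chunk (here 1 kB) is resident at a time, regardless of the total download size; the same loop works with any file-like response object opened for streaming.</p>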
| -1 | 2016-09-29T10:15:55Z | [
"python",
"download",
"mechanize"
]
|
How to download large files in Python 2 | 39,767,343 | <p>I'm trying to download large files (approx. 1GB) with mechanize module, but I have been unsuccessful. I've been searching for similar threads, but I have found only those, where the files are publicly accessible and no login is required to obtain a file. But this is not my case as the file is located in the private section and I need to login before the download. Here is what I've done so far.</p>
<pre><code>import mechanize

g_form_id = ""

def is_form_found(form1):
    return "id" in form1.attrs and form1.attrs['id'] == g_form_id

def select_form_with_id_using_br(br1, id1):
    global g_form_id
    g_form_id = id1
    try:
        br1.select_form(predicate=is_form_found)
    except mechanize.FormNotFoundError:
        print "form not found, id: " + g_form_id
        exit()

url_to_login = "https://example.com/"
url_to_file = "https://example.com/download/files/filename=fname.exe"
local_filename = "fname.exe"

br = mechanize.Browser()
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this
br.addheaders = [('User-agent', 'Firefox')]

response = br.open(url_to_login)

# Find login form
select_form_with_id_using_br(br, 'login-form')

# Fill in data
br.form['email'] = 'email@domain.com'
br.form['password'] = 'password'
br.set_all_readonly(False)    # allow everything to be written to
br.submit()

# Try to download file
br.retrieve(url_to_file, local_filename)
</code></pre>
<p>But I'm getting an error when 512MB is downloaded:</p>
<pre><code>Traceback (most recent call last):
  File "dl.py", line 34, in <module>
    br.retrieve(br.retrieve(url_to_file, local_filename)
  File "C:\Python27\lib\site-packages\mechanize\_opener.py", line 277, in retrieve
    block = fp.read(bs)
  File "C:\Python27\lib\site-packages\mechanize\_response.py", line 199, in read
    self.__cache.write(data)
MemoryError: out of memory
</code></pre>
<p>Do you have any ideas how to solve this?
Thanks</p>
| 1 | 2016-09-29T10:06:46Z | 39,768,280 | <p>You can use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow"><em>bs4</em></a> and <a href="http://docs.python-requests.org/en/master/user/quickstart/" rel="nofollow"><em>requests</em></a> to get you logged in then write the <em>streamed</em> content. There are a few form fields required including a <code>_token_</code> field that is definitely necessary:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
from urlparse import urljoin

data = {'email': 'email@domain.com', 'password': 'password'}
base = "https://support.codasip.com"

with requests.Session() as s:
    # update headers
    s.headers.update({'User-agent': 'Firefox'})
    # use bs4 to parse the form fields
    soup = BeautifulSoup(s.get(base).content)
    form = soup.select_one("#frm-loginForm")
    # works as it is a relative path. Not always the case.
    action = form["action"]
    # Get rest of the fields, ignore password and email.
    for inp in form.find_all("input", {"name":True,"value":True}):
        name, value = inp["name"], inp["value"]
        if name not in data:
            data[name] = value
    # login
    s.post(urljoin(base, action), data=data)
    # get protected url
    with open(local_filename, "wb") as f:
        for chk in s.get(url_to_file, stream=True).iter_content(1024):
            f.write(chk)
</code></pre>
| 1 | 2016-09-29T10:50:54Z | [
"python",
"download",
"mechanize"
]
|
How to generate spatial weight matrix in python script? | 39,767,425 | <p>I need to create spatial weights matrices for different point shapefiles, so I tried to batch process in a stand-alone Python script. Here is the example code exported from the ModelBuilder in the ArcGIS 10.2 software. </p>
<pre><code>import arcpy
test_shp = "D:\\My Documents\\ArcGIS\\test.shp"
tset_swm = "D:\\My Documents\\ArcGIS\\tset.swm"
arcpy.GenerateSpatialWeightsMatrix_stats(test_shp, "MyID", tset_swm,
"K_NEAREST_NEIGHBORS", "EUCLIDEAN",
"1", "", "4", "ROW_STANDARDIZATION",
"", "", "", "")
</code></pre>
<p>The problem here is that there are no output files or messages. And interestingly, when I add </p>
<pre><code>print "hello world"
</code></pre>
<p>After the code execution, it should print a string "hello world" on the console, but there is no such output either!</p>
<p>Could anyone explain me what I'm doing wrong and how I can fix this?</p>
| 0 | 2016-09-29T10:10:42Z | 39,898,933 | <p>I don't see anything wrong here. But isn't tset_swm in your code the output you are looking for? That is the output spatial weight matrix you generated based on your input shapefile. </p>
| 0 | 2016-10-06T14:39:24Z | [
"python",
"arcgis",
"arcpy"
]
|
Looping over a slice copy of a list | 39,767,440 | <p>I am trying to understand the difference between <strong>looping over a list</strong> and <strong>looping over a "slice" copy of a list</strong>.</p>
<p>So, for example, in the following list, the element whose length is greater than 6 is appended to the beginning of the list:</p>
<pre><code>words = ['cat', 'window', 'blahblah']
for word in words[:]:
    if len(word) > 6:
        words.insert(0, word)

print(words)
words = ['blahblah', 'cat', 'window', 'blahblah']
</code></pre>
<p>I, then, run the following to see why it is not the correct way of doing it, but my interpreter freezes and I have to exit. Why does this happen? I am simply appending something to the start of my list which is allowed since lists are mutable...</p>
<pre><code>for word in words:
    if len(word) > 6:
        words.insert(0, word)
</code></pre>
<p>Could someone please help me understand why this last bit stops my program?</p>
| 1 | 2016-09-29T10:11:30Z | 39,767,564 | <p>The list <code>words</code> has three elements. The copy of <code>words</code> also does. You iterate over the copy, insert something in <code>words</code> if the current element is longer than 6 characters, and are done.</p>
<p>Now let's see what happens when you iterate over <code>words</code> directly:</p>
<p>The first two iteration steps are fine, because the condition is False. But since <code>len('blahblah') > 6</code>, you are now inserting <code>'blahblah'</code> at the beginning of the list. Now the list looks like this:</p>
<p><code>['blahblah', 'cat', 'window', 'blahblah']</code></p>
<p>You've just seen the third element, so now the loop continues and looks at the fourth element, but because you inserted something at the beginning of the list the rest of the list shifted and the new fourth element is <code>'blahblah'</code> again. <code>blahblah</code> is still longer than 6 characters, you insert it again in the beginning and are stuck in an infinite loop:</p>
<pre><code>['cat', 'window', 'blahblah']
  ^
['cat', 'window', 'blahblah']
         ^
['cat', 'window', 'blahblah']
                   ^
['blahblah', 'cat', 'window', 'blahblah']
                               ^
['blahblah', 'blahblah', 'cat', 'window', 'blahblah']
                                            ^
...
</code></pre>
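<p>You can verify the terminating behaviour directly: the snapshot made by <code>words[:]</code> stays three elements long no matter how much <code>words</code> itself grows, so the loop runs exactly three times:</p>

```python
words = ['cat', 'window', 'blahblah']

for word in words[:]:          # iterate over a fixed copy
    if len(word) > 6:
        words.insert(0, word)  # mutates words, not the copy being iterated

print(words)                   # prints ['blahblah', 'cat', 'window', 'blahblah']
```

<p>Dropping the <code>[:]</code> makes the loop and the mutation target the same list, which is exactly the runaway sequence sketched above.</p>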
| 4 | 2016-09-29T10:17:26Z | [
"python"
]
|
Looping over a slice copy of a list | 39,767,440 | <p>I am trying to understand the difference between <strong>looping over a list</strong> and <strong>looping over a "slice" copy of a list</strong>.</p>
<p>So, for example, in the following list, the element whose length is greater than 6 is appended to the beginning of the list:</p>
<pre><code>words = ['cat', 'window', 'blahblah']
for word in words[:]:
    if len(word) > 6:
        words.insert(0, word)

print(words)
words = ['blahblah', 'cat', 'window', 'blahblah']
</code></pre>
<p>I, then, run the following to see why it is not the correct way of doing it, but my interpreter freezes and I have to exit. Why does this happen? I am simply appending something to the start of my list which is allowed since lists are mutable...</p>
<pre><code>for word in words:
    if len(word) > 6:
        words.insert(0, word)
</code></pre>
<p>Could someone please help me understand why this last bit stops my program?</p>
| 1 | 2016-09-29T10:11:30Z | 39,767,590 | <p>In your first approach, you are performing a shallow copy of your list <code>words</code>, iterating over it and appending long words to the list <code>words</code>. So you iterate over a fixed list, and extend a different list.</p>
<p>With your last approach, the list <code>words</code> grows with each iteration, so you are in an endless loop as you are looping over it and it keeps growing.</p>
| 2 | 2016-09-29T10:18:34Z | [
"python"
]
|
Looping over a slice copy of a list | 39,767,440 | <p>I am trying to understand the difference between <strong>looping over a list</strong> and <strong>looping over a "slice" copy of a list</strong>.</p>
<p>So, for example, in the following list, the element whose length is greater than 6 is appended to the beginning of the list:</p>
<pre><code>words = ['cat', 'window', 'blahblah']
for word in words[:]:
    if len(word) > 6:
        words.insert(0, word)

print(words)
words = ['blahblah', 'cat', 'window', 'blahblah']
</code></pre>
<p>I, then, run the following to see why it is not the correct way of doing it, but my interpreter freezes and I have to exit. Why does this happen? I am simply appending something to the start of my list which is allowed since lists are mutable...</p>
<pre><code>for word in words:
    if len(word) > 6:
        words.insert(0, word)
</code></pre>
<p>Could someone please help me understand why this last bit stops my program?</p>
| 1 | 2016-09-29T10:11:30Z | 39,767,656 | <p><code>words[:]</code> means a new copy of the <code>words</code> and length is fixed. So you are iterating over a fixed list.</p>
<p>But in the second situation, iterating over and extending the same list which makes it infinity.</p>
<pre><code>words = ['cat', 'window', 'blahblah']
new_words = [_ for _ in words if len(_) > 6]
print(new_words)
</code></pre>
| 1 | 2016-09-29T10:21:35Z | [
"python"
]
|
Looping over a slice copy of a list | 39,767,440 | <p>I am trying to understand the difference between <strong>looping over a list</strong> and <strong>looping over a "slice" copy of a list</strong>.</p>
<p>So, for example, in the following list, the element whose length is greater than 6 is appended to the beginning of the list:</p>
<pre><code>words = ['cat', 'window', 'blahblah']
for word in words[:]:
if len(word) > 6:
words.insert(0, word)
print(words)
words = ['blahblah', 'cat', 'window', 'blahblah']
</code></pre>
<p>I, then, run the following to see why it is not the correct way of doing it, but my interpreter freezes and I have to exit. Why does this happen? I am simply appending something to the start of my list which is allowed since lists are mutable...</p>
<pre><code>for word in words:
if len(word) > 6:
words.insert(0, word)
</code></pre>
<p>Could someone please help me understand why this last bit stops my program?</p>
| 1 | 2016-09-29T10:11:30Z | 39,767,671 | <p>When you do <code>words[:]</code>, you are iterating over a copy of the list, whereas with <code>words</code> you are iterating over the original list.</p>
<p>In case <code>II</code>, your interpreter freezes because when you reach the last index, the condition is satisfied and you insert the item at the beginning of the list. That shifts everything one position to the right, so there is now one more index for the loop to visit. The condition is satisfied again, and it keeps going like this, resulting in an infinite loop.</p>
<p>Whereas that is not the case with <code>words[:]</code>, as the list you are appending to and the one you are iterating over are different.</p>
| 2 | 2016-09-29T10:22:34Z | [
"python"
]
|
Looping over a slice copy of a list | 39,767,440 | <p>I am trying to understand the difference between <strong>looping over a list</strong> and <strong>looping over a "slice" copy of a list</strong>.</p>
<p>So, for example, in the following list, the element whose length is greater than 6 is appended to the beginning of the list:</p>
<pre><code>words = ['cat', 'window', 'blahblah']
for word in words[:]:
if len(word) > 6:
words.insert(0, word)
print(words)
words = ['blahblah', 'cat', 'window', 'blahblah']
</code></pre>
<p>I, then, run the following to see why it is not the correct way of doing it, but my interpreter freezes and I have to exit. Why does this happen? I am simply appending something to the start of my list which is allowed since lists are mutable...</p>
<pre><code>for word in words:
if len(word) > 6:
words.insert(0, word)
</code></pre>
<p>Could someone please help me understand why this last bit stops my program?</p>
| 1 | 2016-09-29T10:11:30Z | 39,767,689 | <p>The following loop</p>
<pre><code>for word in words:
    # do something
</code></pre>
<p>is roughly equivalent to the following one (when <code>words</code> is a list and not another kind of <code>iterable</code>):</p>
<pre><code>i = 0
while i != len(words):
word = words[i]
    # do something (assuming no 'continue' here)
i += 1
</code></pre>
<p>Therefore, when while looping you insert something into your list <em>before the current position</em>, then during the next iteration you end up handling the same element as for the previous iteration. In your case this results in an infinite loop.</p>
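<p>A bounded demonstration of this effect (a sketch with an artificial safety cap so it terminates): each insert at index 0 shifts the current element one position to the right, so the loop keeps landing on the same long word.</p>

```python
words = ['cat', 'window', 'blahblah']
seen = []

for i, word in enumerate(words):
    seen.append(word)
    if len(word) > 6:
        words.insert(0, word)  # shifts the current element one slot right
    if i >= 6:                 # artificial cap; without it this never ends
        break

print(seen)  # ['cat', 'window', 'blahblah', 'blahblah', ...]
```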
| 2 | 2016-09-29T10:23:29Z | [
"python"
]
|
Flask CGI app issues with print statement | 39,767,441 | <p>I have a simple flask application deployed using CGI+Apache running on a shared hosting server.</p>
<p>The app runs Flask 0.11.1 on Python 2.6 along with Flask-Mail 0.9.1.</p>
<p>One of the pages in the app has a contact form that sends an email and redirects back to the same page.</p>
<p>The contact form has a POST action to '/sendmail' that is defined in Flask controller as follows - </p>
<pre><code>@app.route("/sendmail", methods=['GET','POST'])
def send_mail():
print "Sending Email"
mail = SendMail(app)
mail.send_mail(request.form['name'], request.form['mail'], request.form['phoneNo'], request.form['message'])
return render_template('contact.html')
</code></pre>
<p>Here's the issue - </p>
<ol>
<li>With the code above, the app sends me an email successfully; however, it then gives an error that '/sendmail' was not found. The template fails to render.</li>
<li>If I remove the print statement from the snippet, the app renders contact.html successfully after sending the email.</li>
</ol>
<p>What explains the behaviour of print statement in the snippet? Considering the execution is sequential, shouldn't the block fail at the print statement itself without sending the email instead of failing during rendering the template?</p>
| 0 | 2016-09-29T10:11:32Z | 39,767,611 | <p>The print statement itself should not cause an error; it is a statement like any other. Rather, since you are not checking for <code>request.method == 'POST'</code>, a plain GET request to this route will raise an error when the form fields are accessed. To redirect back to the same page, use <code>return redirect("/sendmail")</code>, and do not forget to import it from Flask: <code>from flask import redirect</code>.</p>
| 0 | 2016-09-29T10:19:24Z | [
"python",
"flask"
]
|
Syntax error when installing csc-pysparse | 39,767,528 | <p>I am new to Python and I am trying to install recsys package.</p>
<p><a href="http://ocelma.net/software/python-recsys/build/html/installation.html" rel="nofollow">http://ocelma.net/software/python-recsys/build/html/installation.html</a></p>
<p>For this I need to install some prerequisite packages, so I have to run this using pip:</p>
<p>pip install csc-pysparse networkx divisi2</p>
<p>But whenever i run this i get the following in logs</p>
<pre><code> Collecting csc-pysparse
Using cached csc-pysparse-1.1.1.4.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\64\AppData\Local\Temp\pip-build-wn7_65_9\csc-pysparse\
setup.py", line 33
print 'setuptools module not found.'
^
SyntaxError: Missing parentheses in call to 'print'
----------------------------------------
</code></pre>
<p>Command "python setup.py egg_info" failed with error code 1 in C:\Users\i054564\
AppData\Local\Temp\pip-build-wn7_65_9\csc-pysparse\</p>
<p>I checked that setuptools exist in my python installation here</p>
<p>C:\Python34\lib\site-packages</p>
<p>I have tried everything from uninstalling setuptools to installing it again and running the upgrade command, but it does not work.</p>
<p>I am not able to figure out why setuptools is not found. Is it not on the path where pip resolves it from?</p>
<p>cheers,</p>
<p>Saurav</p>
| 0 | 2016-09-29T10:15:52Z | 39,767,936 | <p>The error is coming from the installation code of the recsys package. In order to avoid this error, you need to install setuptools separately.</p>
<p>For debian machines, the below command will work.</p>
<pre><code>sudo apt-get install python3-setuptools
</code></pre>
<p>For other machines, please check out the installation instructions at <a href="http://stackoverflow.com/questions/14426491/python-3-importerror-no-module-named-setuptools">the link</a></p>
<p>Once setuptools package is installed, you can proceed with csc-pysparse installation. </p>
| -1 | 2016-09-29T10:35:14Z | [
"python",
"python-3.x",
"pip"
]
|
Syntax error when installing csc-pysparse | 39,767,528 | <p>I am new to Python and I am trying to install recsys package.</p>
<p><a href="http://ocelma.net/software/python-recsys/build/html/installation.html" rel="nofollow">http://ocelma.net/software/python-recsys/build/html/installation.html</a></p>
<p>For this I need to install some prerequisite packages, so I have to run this using pip:</p>
<p>pip install csc-pysparse networkx divisi2</p>
<p>But whenever i run this i get the following in logs</p>
<pre><code> Collecting csc-pysparse
Using cached csc-pysparse-1.1.1.4.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\64\AppData\Local\Temp\pip-build-wn7_65_9\csc-pysparse\
setup.py", line 33
print 'setuptools module not found.'
^
SyntaxError: Missing parentheses in call to 'print'
----------------------------------------
</code></pre>
<p>Command "python setup.py egg_info" failed with error code 1 in C:\Users\i054564\
AppData\Local\Temp\pip-build-wn7_65_9\csc-pysparse\</p>
<p>I checked that setuptools exist in my python installation here</p>
<p>C:\Python34\lib\site-packages</p>
<p>I have tried everything from uninstalling setuptools to installing it again and running the upgrade command, but it does not work.</p>
<p>I am not able to figure out why setuptools is not found. Is it not on the path where pip resolves it from?</p>
<p>cheers,</p>
<p>Saurav</p>
| 0 | 2016-09-29T10:15:52Z | 39,768,227 | <p>The code triggering the error is Python 2-specific and is illegal in Python 3.</p>
<p>Apparently, <code>csc-pysparse</code> doesn't support Python 3 (<a href="https://github.com/rspeer/csc-pysparse/blob/master/README" rel="nofollow">its <code>README</code></a> only mentions 2.6) and <a href="https://github.com/rspeer/csc-pysparse" rel="nofollow">looks abandoned</a> (6 years since last commit).</p>
<p>Some guys out there <a href="https://github.com/ocelma/python-recsys/issues/19" rel="nofollow">suggest replacing it with SciPy</a>.</p>
| 0 | 2016-09-29T10:48:33Z | [
"python",
"python-3.x",
"pip"
]
|
Django messages framework not displaying message? | 39,767,570 | <p>I'm trying to build a community portal and am working on displaying one-time message to the user such as "login successful" and the like; and am working with Django's messages framework. </p>
<p>My template has the following line which currently does nothing:</p>
<pre><code>{% if messages %}{% for message in messages %}<script>alert({{ message }});</script>{% endfor %}{% endif %}
</code></pre>
<p>Strangely, each of the following works:</p>
<pre><code>{% if messages %}{% for message in messages %}<script>alert();</script>{% endfor %}{% endif %}
{% if messages %}{% for message in messages %}<script>alert("Welcome");</script>{% endfor %}{% endif %}
</code></pre>
<p>From this, I am concluding that I'm not creating, storing, or passing the messages correctly. However, I'm checking the docs and my syntax seems fine.</p>
<p>My message creation; views.py:</p>
<pre><code>def login(request):
userName = request.POST.get('usrname',None)
userPass = request.POST.get('psw',None)
user = authenticate(username=sanitize_html(userName), password=userPass)
if user is not None:
if user.is_active:
auth.login(request, user)
messages.add_message(request, messages.INFO, 'Successfully logged in!')
else:
messages.add_message(request, messages.INFO, 'Login not successful. Please try again.')
return HttpResponseRedirect('/home/')
</code></pre>
<p>My message retrieval and passing, views.py (maps to url '/home/'):</p>
<pre><code>def test(request):
messagealert = []
mess = get_messages(request)
for message in mess:
messagealert.append(message)
if request.user.is_authenticated():
student_user = get_student_user(request.user)
student_user.first_name = request.user.first_name
student_user.last_name = request.user.last_name
student_user.email = request.user.email
content = {
'student_user': student_user,
'messages': messagealert,
}
else:
content = {
'student_user': None,
'messages': messagealert,
}
return render_to_response('index3.html', content)
</code></pre>
<p>My index3.html template is the template with the line given above.</p>
<p>What am I missing here?</p>
| 0 | 2016-09-29T10:17:50Z | 39,767,716 | <p>Don't use <code>render_to_response</code> in your <code>test</code> view. It doesn't run context processors which are required to insert things like <code>messages</code> - and other useful items such as <code>user</code> - into the context.</p>
<p>Use <code>render</code> instead:</p>
<pre><code>return render(request, 'index3.html', context)
</code></pre>
| 1 | 2016-09-29T10:24:54Z | [
"python",
"django",
"authentication",
"authorization",
"django-messages"
]
|
pip install MySQL-python failed in linux | 39,767,586 | <p>I've created a Django project and it works well. Now I'm trying to configure it with MySQL, so I searched Google and tried several approaches, but when I run <code>pip install MySQL-python</code> the below problem occurs:</p>
<pre><code>Collecting MySQL-python
Using cached MySQL-python-1.2.5.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-lwubgejh/MySQL-python/setup.py", line 13, in <module>
from setup_posix import get_config
File "/tmp/pip-build-lwubgejh/MySQL-python/setup_posix.py", line 2, in <module>
from ConfigParser import SafeConfigParser
ImportError: No module named 'ConfigParser'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-lwubgejh/MySQL-python/
</code></pre>
<p>I couldn't find any solution for this error:
<code>Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-lwubgejh/MySQL-python/</code></p>
<p>thank you for help.</p>
| 0 | 2016-09-29T10:18:25Z | 39,767,673 | <p>This library is not compatible with Python 3. Follow the <a href="https://docs.djangoproject.com/en/1.10/ref/databases/#mysql-db-api-drivers" rel="nofollow">advice in the Django docs</a> and install mysqlclient instead.</p>
| 2 | 2016-09-29T10:22:37Z | [
"python",
"mysql",
"linux",
"django",
"python-3.4"
]
|
NLTK sentiment vader: ordering results | 39,767,603 | <p>I've just run the Vader sentiment analysis on my dataset: </p>
<pre><code>from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk import tokenize
sid = SentimentIntensityAnalyzer()
for sentence in filtered_lines2:
print(sentence)
ss = sid.polarity_scores(sentence)
for k in sorted(ss):
print('{0}: {1}, '.format(k, ss[k]), )
print()
</code></pre>
<p>Here a sample of my results:</p>
<pre><code>Are these guests on Samsung and Google event mostly Chinese Wow Theyre
boring
Google Samsung
('compound: 0.3612, ',)
()
('neg: 0.12, ',)
()
('neu: 0.681, ',)
()
('pos: 0.199, ',)
()
Adobe lose 135bn to piracy Report
('compound: -0.4019, ',)
()
('neg: 0.31, ',)
()
('neu: 0.69, ',)
()
('pos: 0.0, ',)
()
Samsung Galaxy Nexus announced
('compound: 0.0, ',)
()
('neg: 0.0, ',)
()
('neu: 1.0, ',)
()
('pos: 0.0, ',)
()
</code></pre>
<p>I want to know how many times "compound" is equal, greater or less than zero.</p>
<p>I know that probably it is very easy but I'm really new to Python and coding in general.
I've tried in a lot of different ways to create what I need but I can't find any solution. </p>
<p>(please edit my question if the "sample of results" is incorrect, because i don't know the right way to write it)</p>
| 0 | 2016-09-29T10:19:08Z | 39,769,725 | <p>You can use a simple counter for each of the classes:</p>
<pre><code>positive, negative, neutral = 0, 0, 0
</code></pre>
<p>Then, inside the sentence loop, test the compound value and increase the corresponding counter:</p>
<pre><code> ...
if ss['compound'] > 0:
positive += 1
elif ss['compound'] == 0:
neutral += 1
    else:
        negative += 1
</code></pre>
<p>After the loop, <code>positive</code>, <code>neutral</code> and <code>negative</code> hold the respective counts.</p>
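<p>Running those counters over the three sample compound scores from the question (no NLTK needed to check the counting logic itself):</p>

```python
compounds = [0.3612, -0.4019, 0.0]  # sample compound scores from the question

positive, negative, neutral = 0, 0, 0
for c in compounds:
    if c > 0:
        positive += 1
    elif c == 0:
        neutral += 1
    else:
        negative += 1

print(positive, neutral, negative)  # 1 1 1
```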
| 1 | 2016-09-29T11:58:38Z | [
"python",
"python-3.x",
"nltk"
]
|
NLTK sentiment vader: ordering results | 39,767,603 | <p>I've just run the Vader sentiment analysis on my dataset: </p>
<pre><code>from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk import tokenize
sid = SentimentIntensityAnalyzer()
for sentence in filtered_lines2:
print(sentence)
ss = sid.polarity_scores(sentence)
for k in sorted(ss):
print('{0}: {1}, '.format(k, ss[k]), )
print()
</code></pre>
<p>Here a sample of my results:</p>
<pre><code>Are these guests on Samsung and Google event mostly Chinese Wow Theyre
boring
Google Samsung
('compound: 0.3612, ',)
()
('neg: 0.12, ',)
()
('neu: 0.681, ',)
()
('pos: 0.199, ',)
()
Adobe lose 135bn to piracy Report
('compound: -0.4019, ',)
()
('neg: 0.31, ',)
()
('neu: 0.69, ',)
()
('pos: 0.0, ',)
()
Samsung Galaxy Nexus announced
('compound: 0.0, ',)
()
('neg: 0.0, ',)
()
('neu: 1.0, ',)
()
('pos: 0.0, ',)
()
</code></pre>
<p>I want to know how many times "compound" is equal, greater or less than zero.</p>
<p>I know that probably it is very easy but I'm really new to Python and coding in general.
I've tried in a lot of different ways to create what I need but I can't find any solution. </p>
<p>(please edit my question if the "sample of results" is incorrect, because i don't know the right way to write it)</p>
| 0 | 2016-09-29T10:19:08Z | 39,769,760 | <p>This is by far not the most pythonic way of doing it, but I think it is the easiest to understand if you don't have much experience with Python. Essentially, you create a dictionary with zero counts and increment the matching entry in each of the three cases.</p>
<pre><code>from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk import tokenize
sid = SentimentIntensityAnalyzer()
res = {"greater":0,"less":0,"equal":0}
for sentence in filtered_lines2:
ss = sid.polarity_scores(sentence)
if ss["compound"] == 0.0:
res["equal"] +=1
elif ss["compound"] > 0.0:
res["greater"] +=1
else:
res["less"] +=1
print(res)
</code></pre>
| 1 | 2016-09-29T12:00:44Z | [
"python",
"python-3.x",
"nltk"
]
|
NLTK sentiment vader: ordering results | 39,767,603 | <p>I've just run the Vader sentiment analysis on my dataset: </p>
<pre><code>from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk import tokenize
sid = SentimentIntensityAnalyzer()
for sentence in filtered_lines2:
print(sentence)
ss = sid.polarity_scores(sentence)
for k in sorted(ss):
print('{0}: {1}, '.format(k, ss[k]), )
print()
</code></pre>
<p>Here a sample of my results:</p>
<pre><code>Are these guests on Samsung and Google event mostly Chinese Wow Theyre
boring
Google Samsung
('compound: 0.3612, ',)
()
('neg: 0.12, ',)
()
('neu: 0.681, ',)
()
('pos: 0.199, ',)
()
Adobe lose 135bn to piracy Report
('compound: -0.4019, ',)
()
('neg: 0.31, ',)
()
('neu: 0.69, ',)
()
('pos: 0.0, ',)
()
Samsung Galaxy Nexus announced
('compound: 0.0, ',)
()
('neg: 0.0, ',)
()
('neu: 1.0, ',)
()
('pos: 0.0, ',)
()
</code></pre>
<p>I want to know how many times "compound" is equal, greater or less than zero.</p>
<p>I know that probably it is very easy but I'm really new to Python and coding in general.
I've tried in a lot of different ways to create what I need but I can't find any solution. </p>
<p>(please edit my question if the "sample of results" is incorrect, because i don't know the right way to write it)</p>
| 0 | 2016-09-29T10:19:08Z | 39,770,040 | <p>I might define a function that returns the type of inequality represented by a compound score:</p>
<pre><code>def inequality_type(val):
if val == 0.0:
return "equal"
elif val > 0.0:
return "greater"
return "less"
</code></pre>
<p>Then use this on the compound scores of all the sentences to increment the count of the corresponding inequality type.</p>
<pre><code>from collections import defaultdict
def count_sentiments(sentences):
# Create a dictionary with values defaulted to 0
counts = defaultdict(int)
# Create a polarity score for each sentence
for score in map(sid.polarity_scores, sentences):
# Increment the dictionary entry for that inequality type
counts[inequality_type(score["compound"])] += 1
return counts
</code></pre>
<p>You could then call it on your filtered lines.</p>
<p>However, this can be obviated by just using <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
def count_sentiments(sentences):
# Count the inequality type for each score in the sentences' polarity scores
return Counter((inequality_type(score["compound"]) for score in map(sid.polarity_scores, sentences)))
</code></pre>
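<p>The counting logic can be checked without NLTK by feeding in the sample compound scores from the question:</p>

```python
from collections import Counter

def inequality_type(val):
    if val == 0.0:
        return "equal"
    return "greater" if val > 0.0 else "less"

scores = [0.3612, -0.4019, 0.0]  # sample compound scores from the question
counts = Counter(inequality_type(v) for v in scores)

print(dict(counts))  # {'greater': 1, 'less': 1, 'equal': 1}
```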
| 0 | 2016-09-29T12:13:55Z | [
"python",
"python-3.x",
"nltk"
]
|
pandas assign with new column name as string | 39,767,718 | <p>I recently discovered pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow">"assign" method</a> which I find very elegant.
My issue is that the name of the new column is assigned as keyword, so it cannot have spaces or dashes in it. </p>
<pre><code>df = DataFrame({'A': range(1, 11), 'B': np.random.randn(10)})
df.assign(ln_A = lambda x: np.log(x.A))
A B ln_A
0 1 0.426905 0.000000
1 2 -0.780949 0.693147
2 3 -0.418711 1.098612
3 4 -0.269708 1.386294
4 5 -0.274002 1.609438
5 6 -0.500792 1.791759
6 7 1.649697 1.945910
7 8 -1.495604 2.079442
8 9 0.549296 2.197225
9 10 -0.758542 2.302585
</code></pre>
<p>but what if I want to name the new column "ln(A)" for example?
E.g. </p>
<pre><code>df.assign(ln(A) = lambda x: np.log(x.A))
df.assign("ln(A)" = lambda x: np.log(x.A))
File "<ipython-input-7-de0da86dce68>", line 1
df.assign(ln(A) = lambda x: np.log(x.A))
SyntaxError: keyword can't be an expression
</code></pre>
<p>I know I could rename the column right after the .assign call, but I want to understand more about this method and its syntax.</p>
| 1 | 2016-09-29T10:25:08Z | 39,767,887 | <p><code>assign</code> expects a bunch of keyword arguments and will, in turn, create columns named after those keywords. That's handy, but you can't pass an expression as a keyword. This is spelled out by @EdChum in the comments with this <a href="https://docs.python.org/3.2/reference/lexical_analysis.html#identifiers" rel="nofollow">link</a></p>
<p>use <code>insert</code> instead for inplace transformation</p>
<pre><code>df.insert(2, 'ln(A)', np.log(df.A))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/TUyep.png" rel="nofollow"><img src="http://i.stack.imgur.com/TUyep.png" alt="enter image description here"></a></p>
<hr>
<p>use <code>concat</code> if you don't want inplace</p>
<pre><code>pd.concat([df, np.log(df.A).rename('log(A)')], axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/TUyep.png" rel="nofollow"><img src="http://i.stack.imgur.com/TUyep.png" alt="enter image description here"></a></p>
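<p>If you do want to stay with <code>assign</code>, keyword arguments can also be supplied via <code>**</code> dictionary unpacking, which accepts key strings that are not valid Python identifiers (a sketch):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': range(1, 4)})

# ** unpacking allows keyword names that would be illegal as identifiers
df = df.assign(**{'ln(A)': np.log(df['A'])})

print(df.columns.tolist())  # ['A', 'ln(A)']
```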
| 1 | 2016-09-29T10:33:29Z | [
"python",
"pandas",
"assign",
"columnname"
]
|
Does __bin__ (or __binary__) operator exists in python? | 39,767,750 | <p>In Python, the <code>__oct__</code> and <code>__hex__</code> operators exists to implement specific bahavior for <code>oct()</code> and <code>hex()</code>. See <a href="https://docs.python.org/2/reference/datamodel.html?#object.__oct__" rel="nofollow">Emulating numeric types</a></p>
<p>But I don't understand why <code>__bin__</code> (or <code>__binary__</code>) doesn't exist, whereas the <code>bin()</code> function exists in the built-ins. See <a href="https://docs.python.org/2/library/functions.html#bin" rel="nofollow">Built-in Functions</a>.</p>
<p>Am I missing something? Any reason?</p>
<h2>Changes in Python 3</h2>
<p>I have found this reference <a href="https://docs.python.org/3.5/whatsnew/3.0.html?operators-and-special-methods" rel="nofollow">Operators And Special Methods</a> in "What's New In Python 3.0":</p>
<blockquote>
<p>The <code>__oct__()</code> and <code>__hex__()</code> special methods are removed – <code>oct()</code> and <code>hex()</code> use <code>__index__()</code> now to convert the argument to an integer.</p>
</blockquote>
| 1 | 2016-09-29T10:26:57Z | 39,768,083 | <p>You can use <a href="https://docs.python.org/2/reference/datamodel.html?#object.__index__" rel="nofollow"><code>object.__index__</code></a> to handle <code>bin()</code> calls in Python 2. From Python 3 onwards it handles <code>hex()</code> and <code>oct()</code> as well, but in Python 2 those two still go through <code>__hex__</code> and <code>__oct__</code>.</p>
<p>From <a href="https://docs.python.org/3/reference/datamodel.html?#object.__index__" rel="nofollow">Python 3 docs</a>:</p>
<blockquote>
<p><code>object.__index__(self)</code></p>
<p>Called to implement <code>operator.index()</code>, and whenever Python needs to
losslessly convert the numeric object to an integer object (such as in
slicing, or in the built-in <code>bin()</code>, <code>hex()</code> and <code>oct()</code> functions).
Presence of this method indicates that the numeric object is an
integer type. Must return an integer.</p>
</blockquote>
<p>It is not (clearly) documented in Python 2, but it works there as well:</p>
<pre><code>>>> class A(object):
... def __index__(self):
... return 100
...
>>> bin(A())
'0b1100100'
</code></pre>
<p>The CPython code for <a href="https://hg.python.org/cpython/file/2.7/Python/bltinmodule.c#l219" rel="nofollow"><code>bin()</code></a> internally calls <a href="https://hg.python.org/cpython/file/2.7/Objects/abstract.c#l1819" rel="nofollow"><code>PyNumber_ToBase</code></a> which in turn calls <a href="https://hg.python.org/cpython/file/2.7/Objects/abstract.c#l1490" rel="nofollow"><code>PyNumber_Index</code></a> and this function invokes the <code>nb_index</code> slot on that object.</p>
| 2 | 2016-09-29T10:41:18Z | [
"python",
"python-2.7",
"operators"
]
|
Getting substring based on another column in a pandas dataframe | 39,767,787 | <p>Hi is there a way to get a substring of a column based on another column?</p>
<pre><code>import pandas as pd
x = pd.DataFrame({'name':['bernard','brenden','bern'],'digit':[2,3,3]})
x
digit name
0 2 bernard
1 3 brenden
2 3 bern
</code></pre>
<p>What i would expect is something like:</p>
<pre><code>for row in x.itertuples():
print row[2][:row[1]]
be
bre
ber
</code></pre>
<p>where the result is the substring of name based on digit.</p>
<p>I know if I really want to I can create a list based on the itertuples function but does not seem right and also, I always try to create a vectorized method.</p>
<p>Appreciate any feedback.</p>
| 2 | 2016-09-29T10:28:57Z | 39,767,811 | <p>Use <code>apply</code> with <code>axis=1</code> for a row-wise operation, with a <code>lambda</code> so you can access each column for slicing:</p>
<pre><code>In [68]:
x = pd.DataFrame({'name':['bernard','brenden','bern'],'digit':[2,3,3]})
x.apply(lambda x: x['name'][:x['digit']], axis=1)
Out[68]:
0 be
1 bre
2 ber
dtype: object
</code></pre>
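<p>Since <code>apply</code> with <code>axis=1</code> is a Python-level loop under the hood, a plain comprehension over <code>zip</code> gives the same result and is usually faster (a sketch):</p>

```python
import pandas as pd

x = pd.DataFrame({'name': ['bernard', 'brenden', 'bern'], 'digit': [2, 3, 3]})

# Slice each name by its row's digit without per-row apply overhead
x['sub'] = [name[:digit] for name, digit in zip(x['name'], x['digit'])]

print(x['sub'].tolist())  # ['be', 'bre', 'ber']
```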
| 3 | 2016-09-29T10:29:58Z | [
"python",
"pandas",
"dataframe"
]
|
Cannot run tensorflow examples | 39,767,788 | <p>I am trying to run this tensorflow example: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/text_classification_character_cnn.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/text_classification_character_cnn.py</a></p>
<p>However it keeps failing at the stage to open the tar file. This is the error message I am getting:</p>
<pre><code>Successfully downloaded dbpedia_csv.tar.gz 1613 bytes.
Traceback (most recent call last):
File "text_classification_character_cnn.py", line 110, in <module>
tf.app.run()
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "text_classification_character_cnn.py", line 87, in main
'dbpedia', test_with_fake_data=FLAGS.test_with_fake_data, size='large')
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/__init__.py", line 64, in load_dataset
return DATASETS[name](size, test_with_fake_data)
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/text_datasets.py", line 48, in load_dbpedia
maybe_download_dbpedia(data_dir)
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/text_datasets.py", line 40, in maybe_download_dbpedia
tfile = tarfile.open(archive_path, 'r:*')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 1672, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
</code></pre>
<p>Any help would be much appreciated</p>
| 1 | 2016-09-29T10:29:09Z | 40,031,713 | <p>What about trying to open the tar file using the full path? By the way, the link given is now a 404 Not Found.</p>
| 0 | 2016-10-13T21:59:11Z | [
"python",
"tensorflow"
]
|
Cannot run tensorflow examples | 39,767,788 | <p>I am trying to run this tensorflow example: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/text_classification_character_cnn.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/text_classification_character_cnn.py</a></p>
<p>However it keeps failing at the stage to open the tar file. This is the error message I am getting:</p>
<pre><code>Successfully downloaded dbpedia_csv.tar.gz 1613 bytes.
Traceback (most recent call last):
File "text_classification_character_cnn.py", line 110, in <module>
tf.app.run()
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "text_classification_character_cnn.py", line 87, in main
'dbpedia', test_with_fake_data=FLAGS.test_with_fake_data, size='large')
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/__init__.py", line 64, in load_dataset
return DATASETS[name](size, test_with_fake_data)
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/text_datasets.py", line 48, in load_dbpedia
maybe_download_dbpedia(data_dir)
File "/Users/alechewitt/Envs/solar_detection/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/text_datasets.py", line 40, in maybe_download_dbpedia
tfile = tarfile.open(archive_path, 'r:*')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.py", line 1672, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
</code></pre>
<p>Any help would be much appreciated</p>
| 1 | 2016-09-29T10:29:09Z | 40,047,743 | <p>When you get that error, you can look at the downloaded dbpedia_csv.tar.gz in a text editor, and you might find that it is actually a 404 webpage. The file you want seems to be available here as well (I found this link <a href="https://github.com/zhangxiangxiao/Crepe" rel="nofollow">here</a>):
<a href="https://drive.google.com/drive/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M" rel="nofollow">https://drive.google.com/drive/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M</a>
Download that file (at your own risk) and replace it manually. Then you can run your script again.</p>
| 0 | 2016-10-14T16:16:27Z | [
"python",
"tensorflow"
]
|
Can't install psycopg2 package through pip install... Is this because of Sierra? | 39,767,810 | <p>I am working on a project for one of my lectures and I need to download the package psycopg2 in order to work with the postgresql database in use. Unfortunately, when I try to pip install psycopg2 the following error pops up:</p>
<pre><code>ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
</code></pre>
<p>Does anyone know why this is happening? Is it because Sierra has not supported some packages? Thanks in advance!</p>
| 2 | 2016-09-29T10:29:47Z | 39,769,266 | <p>It looks like the openssl package is not installed. Try installing it and <code>pip install</code> again. I'm not a macos user, but I believe that <a href="http://brew.sh/" rel="nofollow"><code>brew</code></a> simplifies package management on that platform.</p>
<p>You might also need to install the Python development and postgresql development packages.</p>
| 0 | 2016-09-29T11:39:07Z | [
"python",
"pip",
"psycopg2"
]
|
Can't install psycopg2 package through pip install... Is this because of Sierra? | 39,767,810 | <p>I am working on a project for one of my lectures and I need to download the package psycopg2 in order to work with the postgresql database in use. Unfortunately, when I try to pip install psycopg2 the following error pops up:</p>
<pre><code>ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
</code></pre>
<p>Does anyone know why this is happening? Is it because Sierra has not supported some packages? Thanks in advance!</p>
| 2 | 2016-09-29T10:29:47Z | 39,800,677 | <p>I fixed this by installing Command Line Tools</p>
<pre><code>xcode-select --install
</code></pre>
<p>then installing openssl via Homebrew and manually linking my homebrew-installed openssl to pip:</p>
<pre><code>env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip install psycopg2
</code></pre>
<p>on macOS Sierra 10.12.1</p>
| 11 | 2016-09-30T22:01:18Z | [
"python",
"pip",
"psycopg2"
]
|
Can't install psycopg2 package through pip install... Is this because of Sierra? | 39,767,810 | <p>I am working on a project for one of my lectures and I need to download the package psycopg2 in order to work with the postgresql database in use. Unfortunately, when I try to pip install psycopg2 the following error pops up:</p>
<pre><code>ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
</code></pre>
<p>Does anyone know why this is happening? Is it because Sierra has not supported some packages? Thanks in advance!</p>
| 2 | 2016-09-29T10:29:47Z | 40,039,093 | <ol>
<li><p>Install/update Xcode developer tools </p>
<pre><code>xcode-select --install
</code></pre></li>
<li><p>Query postgres path</p>
<pre><code>find / -name pg_config 2>/dev/null
</code></pre></li>
<li><p>Install psycopg2, use the path you got in <strong>step 2</strong>. Mine was '/usr/local/Cellar/postgresql/9.5.0/bin/pg_config'</p>
<pre><code>PATH=$PATH:/usr/local/Cellar/postgresql/9.5.0/bin/ pip install psycopg2
</code></pre></li>
</ol>
| 0 | 2016-10-14T09:00:09Z | [
"python",
"pip",
"psycopg2"
]
|
What is the best way to collect errors and send a summary? | 39,767,881 | <p>I have a script executing several independent functions in turn. I would like to collect the errors/exceptions happening along the way, in order to send an email with a summary of the errors.</p>
<p>What is the best way to raise these errors/exceptions and collect them, while allowing the script to complete and go through all the steps? They are independent, so it does not matter if one crashes. The remaining ones can still run.</p>
<pre><code>def step_1():
    # Code that can raise errors/exceptions

def step_2():
    # Code that can raise errors/exceptions

def step_3():
    # Code that can raise errors/exceptions

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors()

if '__name__' == '__main__':
    main()
</code></pre>
<p>Should I wrap each step in a try..except block in the main() function? Should I use a decorator on each step function, in addition to an error collector?</p>
| 0 | 2016-09-29T10:33:15Z | 39,767,932 | <p>Use simple <code>try</code>/<code>except</code> statements and log the exceptions; that is the standard way to collect all your errors.</p>
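<p>A minimal runnable sketch of that approach (the failing step here is a stand-in; the logging output goes to stderr):</p>

```python
import logging

logging.basicConfig(level=logging.ERROR)
collected = []

def step_1():
    raise ValueError("step_1 failed")

def step_2():
    return "ok"

for step in (step_1, step_2):
    try:
        step()
    except Exception as e:
        # Log it and keep it for the summary e-mail.
        logging.error("%s raised: %s", step.__name__, e)
        collected.append(e)

print(len(collected))  # 1
```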
| 0 | 2016-09-29T10:35:03Z | [
"python"
]
|
What is the best way to collect errors and send a summary? | 39,767,881 | <p>I have a script executing several independent functions in turn. I would like to collect the errors/exceptions happening along the way, in order to send an email with a summary of the errors.</p>
<p>What is the best way to raise these errors/exceptions and collect them, while allowing the script to complete and go through all the steps? They are independent, so it does not matter if one crashes. The remaining ones can still run.</p>
<pre><code>def step_1():
    # Code that can raise errors/exceptions

def step_2():
    # Code that can raise errors/exceptions

def step_3():
    # Code that can raise errors/exceptions

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors()

if '__name__' == '__main__':
    main()
</code></pre>
<p>Should I wrap each step in a try..except block in the main() function? Should I use a decorator on each step function, in addition to an error collector?</p>
| 0 | 2016-09-29T10:33:15Z | 39,768,191 | <p>There are two options:</p>
<ol>
<li>Use decorator in which you catch all exceptions and save it somewhere.</li>
<li>Add try/except everywhere.</li>
</ol>
<p>Using a decorator might be much better and cleaner, and the code will be easier to maintain.</p>
<p>How to store the errors? That's your decision. You can add them to a list, or create a logging class that receives exceptions and lets you retrieve them after everything is done… It depends on your project and the size of the code.</p>
<p>Simple logging class:</p>
<pre><code>class LoggingClass(object):
    def __init__(self):
        self.exceptions = []

    def add_exception(self, exception):
        self.exceptions.append(exception)

    def get_all(self):
        return self.exceptions
</code></pre>
<p>Create an instance of the class in your script, catch exceptions in the decorator, and add them to the class (however, a global variable might also be OK).</p>
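<p>For illustration, a runnable sketch of that wiring; the decorator and step names below are my own, not part of the answer above:</p>

```python
import functools

class LoggingClass(object):
    def __init__(self):
        self.exceptions = []

    def add_exception(self, exception):
        self.exceptions.append(exception)

    def get_all(self):
        return self.exceptions

collector = LoggingClass()

def collect_errors(func):
    # Decorator: run the step, stash any exception, let the script continue.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            collector.add_exception(e)
    return wrapper

@collect_errors
def step_1():
    raise ValueError("step_1 broke")

@collect_errors
def step_2():
    return "fine"

step_1()
step_2()
print(len(collector.get_all()))  # 1
```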
| 0 | 2016-09-29T10:46:31Z | [
"python"
]
|
What is the best way to collect errors and send a summary? | 39,767,881 | <p>I have a script executing several independent functions in turn. I would like to collect the errors/exceptions happening along the way, in order to send an email with a summary of the errors.</p>
<p>What is the best way to raise these errors/exceptions and collect them, while allowing the script to complete and go through all the steps? They are independent, so it does not matter if one crashes. The remaining ones can still run.</p>
<pre><code>def step_1():
    # Code that can raise errors/exceptions

def step_2():
    # Code that can raise errors/exceptions

def step_3():
    # Code that can raise errors/exceptions

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors()

if '__name__' == '__main__':
    main()
</code></pre>
<p>Should I wrap each step in a try..except block in the main() function? Should I use a decorator on each step function, in addition to an error collector?</p>
| 0 | 2016-09-29T10:33:15Z | 39,768,492 | <p>You could wrap each function in try/except, usually better for small simple scripts.</p>
<pre><code>def step_1():
    # Code that can raise errors/exceptions

def step_2():
    # Code that can raise errors/exceptions

def step_3():
    # Code that can raise errors/exceptions

def main():
    try:
        step_1_result = step_1()
        log.info('Result of step_1 was {}'.format(step_1_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_1_result = e

    try:
        step_2_result = step_2()
        log.info('Result of step_2 was {}'.format(step_2_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_2_result = e

    try:
        step_3_result = step_3()
        log.info('Result of step_3 was {}'.format(step_3_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_3_result = e

    send_email_with_collected_errors(
        step_1_result,
        step_2_result,
        step_3_result
    )

if __name__ == '__main__':
    main()
</code></pre>
<p>For something more elaborate you could use a decorator that'd construct a list of errors/exceptions caught. For example</p>
<pre><code>class ErrorIgnore(object):
    def __init__(self, errors, errorreturn = None, errorcall = None):
        self.errors = errors
        self.errorreturn = errorreturn
        self.errorcall = errorcall

    def __call__(self, function):
        def returnfunction(*args, **kwargs):
            try:
                return function(*args, **kwargs)
            except Exception as E:
                if type(E) not in self.errors:
                    raise E
                if self.errorcall is not None:
                    self.errorcall(E, *args, **kwargs)
                return self.errorreturn
        return returnfunction
</code></pre>
<p>Then you could use it like this:</p>
<pre><code>exceptions = []

def errorcall(E, *args):
    print 'Exception raised {}'.format(E)
    exceptions.append(E)

@ErrorIgnore(errors = [ZeroDivisionError, ValueError], errorreturn = None, errorcall = errorcall)
def step_1():
    # Code that can raise errors/exceptions
    ...

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors(exceptions)

if __name__ == '__main__':
    main()
</code></pre>
| 1 | 2016-09-29T11:02:43Z | [
"python"
]
|
TypeError: match() takes from 2 to 3 positional arguments but 5 were given | 39,767,891 | <p>Full Error: </p>
<pre><code>Traceback (most recent call last):
File "N:/Computing (Programming)/Code/name.py", line 3, in <module>
valid = re.match("[0-9]","[0-9]","[A-Z]","[a-z]" ,tutorGroup)
TypeError: match() takes from 2 to 3 positional arguments but 5 were given
</code></pre>
<p>My code:</p>
<pre><code>import re
tutorGroup = input("Enter your tutor group - e.g. 10: ")
valid = re.match("[0-9]","[0-9]","[A-Z]","[a-z]" ,tutorGroup)
if valid:
    print("OK!")
else:
    print("Invalid!")
</code></pre>
<p>I'm trying to search a string with a given parameter</p>
| 0 | 2016-09-29T10:33:33Z | 39,767,948 | <p>The problem is that <code>re.match</code> takes 2 or 3 arguments, not 5: first the regex pattern, then the string to match, and optionally a third argument with flags.
If you want to match a single digit or letter, you would use <code>[0-9a-zA-Z]</code> as the regex. If you want multiple letters or digits, you can use <code>[0-9a-zA-Z]+</code>. If you want a run of digits or a run of letters (but not mixed), you can use <code>[0-9]+|[a-zA-Z]+</code>.</p>
<p>Edit: After reading your comment, the regex you want is <code>[0-9]{2}[a-zA-Z]{2}</code></p>
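<p>For illustration, a quick check of that pattern against a couple of made-up inputs:</p>

```python
import re

pattern = r"[0-9]{2}[a-zA-Z]{2}"
print(bool(re.match(pattern, "10Ab")))  # True
print(bool(re.match(pattern, "1A")))    # False
```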
| 1 | 2016-09-29T10:35:45Z | [
"python"
]
|
Create zip archive with multiple files | 39,767,904 | <p>I'm using following code where I pass .pdf file names with their paths to create zip file.</p>
<pre><code>for f in lstFileNames:
    with zipfile.ZipFile('reportDir' + str(uuid.uuid4()) + '.zip', 'w') as myzip:
        myzip.write(f)
</code></pre>
<p>It only archives one file though. <strong>I need to archive all files in my list in one single zip folder.</strong></p>
<p>Before people start to point out, yes I have consulted answers from <a href="http://stackoverflow.com/questions/12881294/django-create-a-zip-of-multiple-files-and-make-it-downloadable">this</a> and <a href="http://stackoverflow.com/questions/67454/serving-dynamically-generated-zip-archives-in-django">this</a> link but the code given there doesn't work for me. The code runs but I can't find generated zip file anywhere in my computer. </p>
<p>A simple straightforward answer would be appreciated. Thanks.</p>
| 0 | 2016-09-29T10:34:11Z | 39,767,950 | <p>The order is incorrect. You are creating a new <code>ZipFile</code> object for each item in <code>lstFileNames</code>.</p>
<p>Should be like this.</p>
<pre><code>with zipfile.ZipFile('reportDir' + str(uuid.uuid4()) + '.zip', 'w') as myzip:
    for f in lstFileNames:
        myzip.write(f)
</code></pre>
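<p>A self-contained version of this fix that you can run end to end; the throwaway files, the temp directory, and the optional <code>ZIP_DEFLATED</code> compression argument are my additions for the example:</p>

```python
import os
import tempfile
import uuid
import zipfile

# Make two throwaway files to archive (names are illustrative).
tmpdir = tempfile.mkdtemp()
lstFileNames = []
for name in ("a.pdf", "b.pdf"):
    path = os.path.join(tmpdir, name)
    with open(path, "w") as fh:
        fh.write("demo")
    lstFileNames.append(path)

zip_path = os.path.join(tmpdir, "reportDir" + str(uuid.uuid4()) + ".zip")
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as myzip:
    for f in lstFileNames:
        # arcname keeps the temp directory path out of the archive.
        myzip.write(f, arcname=os.path.basename(f))

with zipfile.ZipFile(zip_path) as myzip:
    names = sorted(myzip.namelist())
print(names)  # ['a.pdf', 'b.pdf']
```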
| 4 | 2016-09-29T10:35:48Z | [
"python"
]
|
How to detect closed eyes for three seconds? | 39,767,988 | <p>I detect parts such as the face, the eyes area, and the two individual eyes. Normally I can see the face, the eyes area, and both eyes. When I see the face and the eyes area but not the two eyes, that is when closed eyes are detected. Now I want to detect when the eyes have been closed for three seconds. Can someone suggest a solution? I tried the time.sleep() function but it doesn't work: it stops the video stream process.</p>
<pre><code>cas_path = os.getcwd()
eye_path = os.getcwd()
two_eyes_path = os.getcwd()
cas_path += "/haarcascade_frontalface_alt.xml"
eye_path += "/haarcascade_mcs_eyepair_big.xml"
two_eyes_path += "/haarcascade_eye.xml"
faceCascade = cv2.CascadeClassifier(cas_path)
eyesCascade = cv2.CascadeClassifier(eye_path)
twoeyesCascade = cv2.CascadeClassifier(two_eyes_path)
class VideoCamera(object):
def __init__(self):
self.status = "Sharing ?"
self._image = np.zeros((100,200))
self.video = cv2.VideoCapture(0)
(self.video).set(3, 200)
(self.video).set(4, 160)
#success, self._image = self.video.read()
# If you decide to use video.mp4, you must have this file in the folder
# as the main.py.
# self.video = cv2.VideoCapture('video.mp4')
def __del__(self):
self.video.release()
def get_frame(self):
global s
s = ''
global string
string = ''
success, image = self.video.read()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.3,
minNeighbors=5,
minSize=(30, 30),
flags=cv2.cv.CV_HAAR_SCALE_IMAGE
)
count = 0
for (x, y, w, h) in faces:
cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 0), 2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = image[y:y+h, x:x+w]
eyes = eyesCascade.detectMultiScale(roi_gray)
if eyes is not():
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex -10 ,ey - 10),(ex+ew + 10,ey+eh + 10),(0,255,0),2)
twoeyes = twoeyesCascade.detectMultiScale(roi_gray)
firsttime = 1
if twoeyes is not():
for (exx,eyy,eww,ehh) in twoeyes:
cv2.rectangle(roi_color,(exx-5 ,eyy -5 ),(exx+eww -5,eyy+ehh -5 ),(0,0, 255),2)
ret, jpeg = cv2.imencode('.jpg', image)
self.string = jpeg.tostring()
self._image = image
return jpeg.tostring()
def GetBw(self):
image = self._image
ret, jpeg = cv2.imencode('.jpg', image)
self.string = jpeg.tostring()
return jpeg.tostring()
</code></pre>
| -1 | 2016-09-29T10:37:21Z | 39,769,281 | <p>You will want to use a timer for this specific application. time.sleep() basically just halts the program; it does not keep track of time. This should work for what you want:</p>
<pre><code>closed = False
timer = 0

if detect(eyes_closed):
    if not closed:
        # Eyes just closed; remember when.
        timer = time.time()
        closed = True
    elif time.time() - timer >= 3:  # time.time() is in seconds
        print "Eyes have been closed for 3 seconds!"
else:
    closed = False
</code></pre>
<p>I haven't done much work in OpenCV, so I just made the assumption that detect(eyes_closed) detects a set of closed eyes; you will have to correct that yourself.
This will print a message if it detects eyes that have been closed for 3 seconds.</p>
| 0 | 2016-09-29T11:39:41Z | [
"python",
"opencv"
]
|
Python QT how to visualise that the button is clicked | 39,768,065 | <p>I have a <code>QtCreator</code>-generated GUI. After importing it I set images on the buttons, and when I click them the underlying button presumably registers the click, but the <code>QIcon</code> doesn't change in any way. Is there a way to make the click visible?
This is my button code:</p>
<pre><code> self.pushButton.setIcon(QtGui.QIcon('artwork/player_rew'))
self.pushButton.setIconSize(QtCore.QSize(48, 48))
self.pushButton.setStyleSheet('QPushButton{border: 0px solid;}')
</code></pre>
| 0 | 2016-09-29T10:40:23Z | 39,769,120 | <p>You can use the <code>:pressed</code> <a href="http://doc.qt.io/qt-5/stylesheet-reference.html#list-of-pseudo-states" rel="nofollow">pseudo-state</a> in your style sheet to specify the behaviour when the button is pressed:</p>
<pre><code>self.pushButton.setStyleSheet("""
QPushButton{
border: 0px solid;
}
QPushButton:pressed {
border: 0px solid;
image: url(some_different_image);
background-color: red;
}
""")
</code></pre>
<p>More information can also be found in the <a href="http://doc.qt.io/qt-5/stylesheet-examples.html#customizing-a-qpushbutton-using-the-box-model" rel="nofollow">Qt Style Sheets Examples</a></p>
| 0 | 2016-09-29T11:32:39Z | [
"python",
"qt",
"qt-creator",
"pyqt5"
]
|
Sphinx - what is different between toctree and content? | 39,768,133 | <p>I can create table of contents in two ways:</p>
<pre><code>.. contents::
:local:
depth: 1
</code></pre>
<p>or as </p>
<pre><code>.. toctree::
:maxdepth: 1
index
</code></pre>
<p>What is the difference? Where should I use toctree, and where contents?</p>
| 0 | 2016-09-29T10:43:48Z | 39,770,866 | <p><a href="http://docutils.sourceforge.net/docs/ref/rst/directives.html#table-of-contents" rel="nofollow"><code>.. contents</code></a> is a docutils directive (docutils being the underlying library which defines ReST and associated utilities) and <strong><em>automatically</em></strong> generates a table of contents from headlines within the current topic.</p>
<p><a href="http://www.sphinx-doc.org/en/stable/markup/toctree.html" rel="nofollow"><code>.. toctree</code></a> is a Sphinx-defined directive in which you <strong>explicitly list documents</strong> whose TOCs will be listed out.</p>
<p>You'd use <code>.. contents</code> for example within a document to generate an overview of contents within the page, e.g.:</p>
<pre><code>===================
Curing World Hunger
===================
.. contents::
:depth: 1
Abstract
========
…
Problem description
===================
…
</code></pre>
<p>You'd use <code>.. toctree</code> within an index document which contains basically nothing else:</p>
<pre><code>=================
Scientific papers
=================
Below is a list of papers published here:
.. toctree::
:maxdepth: 2
curing_hunger
   …
</code></pre>
<p><code>.. toctree</code> takes a list of documents to process, <code>.. contents</code> does not.</p>
| 3 | 2016-09-29T12:51:54Z | [
"python",
"python-sphinx"
]
|
Python SSH using Popen | 39,768,169 | <p>How do I <code>ssh</code> using <code>Popen</code> by answering yes by default when faced with this?</p>
<pre><code>The authenticity of host '`XXXXX`' can't be established.
ECDSA key fingerprint is SHA256:LA2RqbdzD8Uxgi36KWOM12giS9T+ceOQYhYjVKReMks.
Are you sure you want to continue connecting (yes/no)?
</code></pre>
<p>this is what I'm trying</p>
<pre><code>for IP in ['10.32.253.250']:
    remoteCall = subprocess.Popen(["ssh", "co_user@"+IP, "sudo svstat /etc/sv/nagios_CheckDisk_Alertz_daemon",], stdout=subprocess.PIPE, stderr=subprocess.PIPE);
    Response, err = remoteCall.communicate()
    if(re.search("authenticity of host", Response)):
        stdin.write('yes')
    print IP + " " +Response
</code></pre>
| 0 | 2016-09-29T10:45:13Z | 39,768,243 | <p>From the man page:</p>
<blockquote>
<p>StrictHostKeyChecking</p>
<p>If this flag is set to 'yes', ssh(1) will never automatically add host
keys to the ~/.ssh/known_hosts file, and refuses to connect to hosts
whose host key has changed. This provides maximum protection against
trojan horse attacks, though it can be annoying when the
/etc/ssh/ssh_known_hosts file is poorly maintained or when connections
to new hosts are frequently made. This option forces the user to
manually add all new hosts. If this flag is set to 'no', ssh will
automatically add new host keys to the user known hosts files. If this
flag is set to 'ask', new host keys will be added to the user known
host files only after the user has confirmed that is what they really
want to do, and ssh will refuse to connect to hosts whose host key has changed.</p>
</blockquote>
<p>You can set StrictHostKeyChecking to "no" if you want ssh/scp to automatically accept new keys without prompting. On the command line:</p>
<pre><code>scp -o StrictHostKeyChecking=no ...
</code></pre>
<p>You can also enable batch mode:</p>
<pre><code>BatchMode
</code></pre>
<blockquote>
<p>If set to 'yes', passphrase/password querying will be disabled. This
option is useful in scripts and other batch jobs where no user is
present to supply the password. The argument must be 'yes' or 'no'.
The default is 'no'. With BatchMode=yes, ssh/scp will fail instead of
prompting (which is often an improvement for scripts).</p>
</blockquote>
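<p>Applied to the question's <code>Popen</code> call, a sketch: the host and remote command come from the question, and the actual connection is left commented out because it needs a reachable server.</p>

```python
import subprocess

def ssh_command(ip, remote_cmd):
    # -o StrictHostKeyChecking=no auto-accepts unknown host keys;
    # -o BatchMode=yes makes ssh fail instead of prompting.
    return ["ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "BatchMode=yes",
            "co_user@" + ip,
            remote_cmd]

cmd = ssh_command("10.32.253.250",
                  "sudo svstat /etc/sv/nagios_CheckDisk_Alertz_daemon")
print(cmd[:3])  # ['ssh', '-o', 'StrictHostKeyChecking=no']
# To actually run it:
# remoteCall = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Response, err = remoteCall.communicate()
```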
| -3 | 2016-09-29T10:49:13Z | [
"python",
"ssh",
"subprocess",
"popen"
]
|
Python SSH using Popen | 39,768,169 | <p>How do I <code>ssh</code> using <code>Popen</code> by answering yes by default when faced with this?</p>
<pre><code>The authenticity of host '`XXXXX`' can't be established.
ECDSA key fingerprint is SHA256:LA2RqbdzD8Uxgi36KWOM12giS9T+ceOQYhYjVKReMks.
Are you sure you want to continue connecting (yes/no)?
</code></pre>
<p>this is what I'm trying</p>
<pre><code>for IP in ['10.32.253.250']:
    remoteCall = subprocess.Popen(["ssh", "co_user@"+IP, "sudo svstat /etc/sv/nagios_CheckDisk_Alertz_daemon",], stdout=subprocess.PIPE, stderr=subprocess.PIPE);
    Response, err = remoteCall.communicate()
    if(re.search("authenticity of host", Response)):
        stdin.write('yes')
    print IP + " " +Response
</code></pre>
| 0 | 2016-09-29T10:45:13Z | 39,771,367 | <p>I'm not sure how to do it with subprocess, but here is another way, using paramiko, that may be helpful. FYI.</p>
<pre><code>import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.168.1.6", 22, "username", "password")
stdin, stdout, stderr = ssh.exec_command('ls')  # your command here
content = stdout.readlines()
print content
ssh.close()
</code></pre>
| 0 | 2016-09-29T13:14:51Z | [
"python",
"ssh",
"subprocess",
"popen"
]
|
Converting python tuple, lists, dictionaries containing pandas objects (series/dataframes) to json | 39,768,230 | <p>I know I can convert pandas object like <code>Series</code>, <code>DataFrame</code> to json as follows:</p>
<pre><code>series1 = pd.Series(np.random.randn(5), name='something')
jsonSeries1 = series1.to_json() #{"0":0.0548079371,"1":-0.9072821424,"2":1.3865642993,"3":-1.0609052074,"4":-3.3513341839}
</code></pre>
<p>However, what should I do when that series is encapsulated inside another data structure, say a dictionary, as follows:</p>
<pre><code>seriesmap = {"key1":pd.Series(np.random.randn(5), name='something')}
</code></pre>
<p>How do I convert above map to json like this:</p>
<pre><code>{"key1":{"0":0.0548079371,"1":-0.9072821424,"2":1.3865642993,"3":-1.0609052074,"4":-3.3513341839}}
</code></pre>
<p><code>simplejson</code> does not work:</p>
<pre><code> jsonObj = simplejson.dumps(seriesmap)
</code></pre>
<p>gives</p>
<pre><code>Traceback (most recent call last):
File "C:\..\py2.py", line 86, in <module>
jsonObj = json.dumps(seriesmap)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\__init__.py", line 380, in dumps
return _default_encoder.encode(obj)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 275, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 357, in iterencode
return _iterencode(o, 0)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 252, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 0 -0.038824
1 -0.047297
2 -0.887672
3 -1.510238
4 0.900217
Name: something, dtype: float64 is not JSON serializable
</code></pre>
<p>To generalize this even further, I want to convert arbitrary object to json. The arbitrary object may be simple int, string or of complex types such that tuple, list, dictionary containing pandas objects along with other types. In dictionary the pandas object may lie at arbitrary depth as some key's value. I want to safely convert such structure to valid json. Is it possible?</p>
<p><strong>Update</strong></p>
<p>I just tried encapsulating DataFrame as a value of one of the keys of a dictionary and converting that dictionary to json by encapsulating in another DataFrame (as suggested in below answer). But seems that it does not work:</p>
<pre><code>import pandas as pd
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
mapDict = {"key1":df}
print(pd.DataFrame(mapDict).to_json())
</code></pre>
<p>This gave:</p>
<pre><code>Traceback (most recent call last):
File "C:\Mahesh\repos\JavaPython\JavaPython\bin\py2.py", line 80, in <module>
print(pd.DataFrame(mapDict).to_json())
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 224, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 360, in _init_dict
return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 5231, in _arrays_to_mgr
index = extract_index(arrays)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 5270, in extract_index
raise ValueError('If using all scalar values, you must pass'
ValueError: If using all scalar values, you must pass an index
</code></pre>
| 2 | 2016-09-29T10:48:38Z | 39,768,286 | <p>call <code>pd.DataFrame</code> on <code>seriesmap</code> then use <code>to_json</code></p>
<pre><code>pd.DataFrame(seriesmap).to_json()
'{"key1":{"0":0.8513342674,"1":-1.3357052602,"2":0.2102391775,"3":-0.5957492995,"4":0.2356552588}}'
</code></pre>
| 1 | 2016-09-29T10:51:15Z | [
"python",
"json",
"pandas"
]
|
Plotting a type of Histogram | 39,768,288 | <p>I have data in the following format</p>
<pre><code>0 0.69
1 0.87
1 0.87
0 0.87
0 0.87
</code></pre>
<p>So the first column is either zero or one. Second column is a decimal number. If you look at the table, at 0.69, there is only one zero and no ones. Also at 0.87, there are two zeros and two ones. I want to plot it so that the x-axis is the decimal number. Y-axis has two plots. One will be number of zeros at that decimal number and the other is the number of ones. Also assume that I have this table in pandas dataframe format. </p>
| 0 | 2016-09-29T10:51:16Z | 39,768,385 | <p>use <code>groupby</code>, <code>size</code>, and <code>unstack</code></p>
<pre><code>df.groupby([0, 1]).size().rename_axis([None, None]).unstack(0).plot.bar()
</code></pre>
<p><a href="http://i.stack.imgur.com/9Tqna.png" rel="nofollow"><img src="http://i.stack.imgur.com/9Tqna.png" alt="enter image description here"></a></p>
| 3 | 2016-09-29T10:57:24Z | [
"python",
"pandas",
"numpy",
"histogram"
]
|
Incorrect pattern displayed | 39,768,309 | <p>In the second while loop the asterisk(*) is displayed just once for every cycle. </p>
<pre><code>import sys

n = 0
a = 0
while (n < 6):
    n = n + 1
    while(a < n):
        sys.stdout.write('*')
        a = a +1
    print ''
</code></pre>
<p>Pattern displayed is :</p>
<pre><code>*
*
*
*
*
*
</code></pre>
| -5 | 2016-09-29T10:52:15Z | 39,768,460 | <p>Assuming you want it to print out 6 patterns of 6 stars with a line between, this is what you want to do:</p>
<pre><code>import sys

n = 0
a = 0
while (n < 6):
    n = n + 1
    a = 0
    while(a < n):
        sys.stdout.write('*')
        a = a + 1
    print ''
</code></pre>
| 0 | 2016-09-29T11:00:47Z | [
"python"
]
|
Incorrect pattern displayed | 39,768,309 | <p>In the second while loop the asterisk(*) is displayed just once for every cycle. </p>
<pre><code>import sys

n = 0
a = 0
while (n < 6):
    n = n + 1
    while(a < n):
        sys.stdout.write('*')
        a = a +1
    print ''
</code></pre>
<p>Pattern displayed is :</p>
<pre><code>*
*
*
*
*
*
</code></pre>
| -5 | 2016-09-29T10:52:15Z | 39,768,624 | <p>Here's a possible solution for your version:</p>
<pre><code>n = 0
a = 0
while (n < 6):
    n = n + 1
    a = 0
    while(a < n):
        print('*', end="")
        a = a + 1
    print('')
</code></pre>
<p>If you want a shorter version, here's a possible one:</p>
<pre><code>print('\n'.join(['*'*i for i in range(1,7)]))
</code></pre>
| 0 | 2016-09-29T11:08:55Z | [
"python"
]
|
Replace whole string if it contains substring in pandas | 39,768,547 | <p>Im sorry if this is a very obvious and easy question but I've been searching the internet for a solution but couldn't come up with one that works (and I don't really have much knowledge in python). I want to replace all strings that contain a specific substring. So for example if I have this dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'name': ['Bob', 'Jane', 'Alice'],
'sport': ['tennis', 'football', 'basketball']})
</code></pre>
<p>I could replace football with the string 'ball sport' like this:</p>
<pre><code>df.replace({'sport': {'football': 'ball sport'}})
</code></pre>
<p>What I want though is to replace everything that contains ball (in this case football and basketball) with 'ball sport'. Something like this:</p>
<pre><code>df.replace({'sport': {'[strings that contain ball]': 'ball sport'}})
</code></pre>
| 2 | 2016-09-29T11:05:30Z | 39,768,574 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html#pandas.Series.str.contains" rel="nofollow"><code>str.contains</code></a> to mask the rows that contain 'ball' and then overwrite with the new value:</p>
<pre><code>In [71]:
df.loc[df['sport'].str.contains('ball'), 'sport'] = 'ball sport'
df
Out[71]:
name sport
0 Bob tennis
1 Jane ball sport
2 Alice ball sport
</code></pre>
<p>To make it case-insensitive, pass <code>case=False</code>:</p>
<pre><code>df.loc[df['sport'].str.contains('ball', case=False), 'sport'] = 'ball sport'
</code></pre>
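<p>Alternatively, <code>df.replace</code> itself accepts regular expressions when you pass <code>regex=True</code>, which stays closest to the syntax you tried:</p>

```python
import pandas as pd

df = pd.DataFrame({'name': ['Bob', 'Jane', 'Alice'],
                   'sport': ['tennis', 'football', 'basketball']})

# regex=True lets the nested-dict form of replace match substrings.
res = df.replace({'sport': {r'.*ball.*': 'ball sport'}}, regex=True)
print(res['sport'].tolist())  # ['tennis', 'ball sport', 'ball sport']
```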
| 2 | 2016-09-29T11:06:46Z | [
"python",
"pandas"
]
|
Replace whole string if it contains substring in pandas | 39,768,547 | <p>Im sorry if this is a very obvious and easy question but I've been searching the internet for a solution but couldn't come up with one that works (and I don't really have much knowledge in python). I want to replace all strings that contain a specific substring. So for example if I have this dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'name': ['Bob', 'Jane', 'Alice'],
'sport': ['tennis', 'football', 'basketball']})
</code></pre>
<p>I could replace football with the string 'ball sport' like this:</p>
<pre><code>df.replace({'sport': {'football': 'ball sport'}})
</code></pre>
<p>What I want though is to replace everything that contains ball (in this case football and basketball) with 'ball sport'. Something like this:</p>
<pre><code>df.replace({'sport': {'[strings that contain ball]': 'ball sport'}})
</code></pre>
| 2 | 2016-09-29T11:05:30Z | 39,768,594 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>apply</code></a> with a lambda. The <code>x</code> parameter of the lambda function will be each value in the 'sport' column:</p>
<pre><code>df.sport = df.sport.apply(lambda x: 'ball sport' if 'ball' in x else x)
</code></pre>
| 2 | 2016-09-29T11:07:23Z | [
"python",
"pandas"
]
|
Replace whole string if it contains substring in pandas | 39,768,547 | <p>Im sorry if this is a very obvious and easy question but I've been searching the internet for a solution but couldn't come up with one that works (and I don't really have much knowledge in python). I want to replace all strings that contain a specific substring. So for example if I have this dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'name': ['Bob', 'Jane', 'Alice'],
'sport': ['tennis', 'football', 'basketball']})
</code></pre>
<p>I could replace football with the string 'ball sport' like this:</p>
<pre><code>df.replace({'sport': {'football': 'ball sport'}})
</code></pre>
<p>What I want though is to replace everything that contains ball (in this case football and basketball) with 'ball sport'. Something like this:</p>
<pre><code>df.replace({'sport': {'[strings that contain ball]': 'ball sport'}})
</code></pre>
| 2 | 2016-09-29T11:05:30Z | 39,768,663 | <p>you can use <code>str.replace</code></p>
<pre><code>df.sport.str.replace(r'(^.*ball.*$)', 'ball sport')
0 tennis
1 ball sport
2 ball sport
Name: sport, dtype: object
</code></pre>
<p>reassign with</p>
<pre><code>df['sport'] = df.sport.str.replace(r'(^.*ball.*$)', 'ball sport')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/5tJgT.png" rel="nofollow"><img src="http://i.stack.imgur.com/5tJgT.png" alt="enter image description here"></a></p>
| 2 | 2016-09-29T11:10:55Z | [
"python",
"pandas"
]
|
Not expected result received with scipy's curve_fit | 39,768,562 | <p>I was trying to do multivariate logarithmic regression on my data with scipy curve_fit and as a result expect to get a line, but get a curve.
Here is the code I used:</p>
<pre><code>Quercetin=[23,195,6,262,272,158,79,65,136,198]
Naringenin=[11,4,8,6,6,7,6,9,7,9]
Rutin=[178,165,93,239,202,3325,4427,7607,3499,1762]
TEAC=[23,189,37,265,290,267,362,388,364,321]
import matplotlib.pyplot as plt
import scipy
from scipy.optimize import curve_fit
import numpy as np
def func(x, a, b, c, d,e):
m=np.log(a*x[0]+b*x[1]+c*x[2])
return d*(m)+e
x=scipy.array([Quercetin, Naringenin,Rutin])
y=scipy.array(TEAC)
popt, pcov = curve_fit(func, x ,y)
print (popt)
plt.plot(func(x,*popt),y,'ro-')
plt.show()
</code></pre>
<p>And I get this result:
<a href="http://i.stack.imgur.com/6RbYt.png" rel="nofollow"><img src="http://i.stack.imgur.com/6RbYt.png" alt="enter image description here"></a></p>
<p>while I want to get something like this :</p>
<p><a href="http://i.stack.imgur.com/RSbe3.png" rel="nofollow"><img src="http://i.stack.imgur.com/RSbe3.png" alt="enter image description here"></a></p>
<p>Could anyone please give me a hint on what I am doing wrong?
If this matters I use Python 3.5 from Anaconda on Windows 10. </p>
| 0 | 2016-09-29T11:06:17Z | 39,775,906 | <p>(I'm assuming that the question is really about plotting).
You are asking <code>matplotlib</code> to plot red dots (<code>'ro'</code>) connected by straight lines (<code>-</code>). Matplotlib obliges and connects them in whatever order they are given.</p>
<p>If you want to plot a line, just plot it separately:</p>
<pre><code>In [58]: yres = func(x, *popt)
In [59] plt.plot(yres, y, 'ro')
Out[59]: [<matplotlib.lines.Line2D at 0x7f5c0796b828>]
In [60]: plt.plot([0, 400], [0, 400], '-')
Out[60]: [<matplotlib.lines.Line2D at 0x7f5c07a9c860>]
</code></pre>
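<p>If you do want the fitted values connected as a single line instead, sort the points first (a small sketch with made-up values; <code>plot</code> connects points in exactly the order you give them):</p>

```python
import numpy as np

yres = np.array([3.0, 1.0, 2.0])   # fitted values, in original (unsorted) order
y = np.array([30.0, 10.0, 20.0])   # observed values

order = np.argsort(yres)           # indices that put yres in ascending order
# plt.plot(yres[order], y[order], '-') would now draw left to right
print(yres[order])  # [1. 2. 3.]
```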
| 2 | 2016-09-29T16:50:02Z | [
"python",
"scipy",
"curve-fitting"
]
|
pretty much lost : self.parent.parent.__class__ | 39,768,630 | <p>I needed to serialize a self-referencing hierarchy </p>
<pre><code>class Systm(models.Model):
...
parent = models.ForeignKey('self', on_delete=models.CASCADE, blank = True, null = True, related_name="children")
</code></pre>
<p>and I was able to achieve that with the following code:</p>
<pre><code>class RecursiveField(serializers.Serializer):
def to_representation(self, value):
serializer = self.parent.parent.__class__(value, context=self.context)
return serializer.data
class SystmSerializer(serializers.ModelSerializer):
children = RecursiveField(many = True, read_only = True)
class Meta:
model = models.Systm
fields = ('id', 'name', 'type', 'children')
</code></pre>
<p>It works and everything is fine, except that all I did was copy and paste the code; I have no idea how or why it works. It is pretty annoying and I want to understand it.<br>
I put '<code>print</code>' statements into '<code>to_representation(...)</code>' but it is still not clear. I learnt that there is recursion when '<code>self.parent.parent.__class__(value, context=self.context)</code>' is executed, but I am not sure why.<br>
I'd be really grateful if somebody could explain it to me.</p>
<p>Thanks,<br>
V.</p>
 | 0 | 2016-09-29T11:09:19Z | 39,772,910 | <p>This basically works by getting the top serializer class and instantiating it to return the serialized value of a child. The <code>parent</code> attribute of a serializer (or field) is the serializer instance that declared this as a field.</p>
<p><code>self.parent</code> is the List Serializer implicitly created when declaring <code>many=True</code> (see <a href="http://www.django-rest-framework.org/api-guide/serializers/#listserializer" rel="nofollow">ListSerializer</a>).
<code>self.parent.parent</code> is the <code>SystmSerializer</code> instance.</p>
<p>My guess is that this code only works because the recursive field has been declared with <code>many=True</code>. If you had <code>child = RecursiveField(read_only = True)</code> the code would be:</p>
<pre><code>class RecursiveField(serializers.Serializer):
def to_representation(self, value):
serializer = self.parent.__class__(value, context=self.context)
return serializer.data
</code></pre>
<p>To make it simple, when you (or DRF) calls <code>serialializer_instance.data</code>, it makes a call to <code>to_representation</code> to serialize the object instance.</p>
<p>Recursion comes into play when doing the <code>SystmSerializer.to_representation</code>:</p>
<ol>
<li>when <code>to_representation(instance)</code> is called, it will call <code>to_representation(field_instance)</code> on every field.</li>
<li>One of these fields is the <code>recursiveField</code>, and more exactly the <a href="http://www.django-rest-framework.org/api-guide/serializers/#listserializer" rel="nofollow">ListSerializer</a> associated with the the <code>recursiveField</code></li>
<li>Calling <code>to_representation</code> on the <a href="http://www.django-rest-framework.org/api-guide/serializers/#listserializer" rel="nofollow">ListSerializer</a> will call <code>recursiveField.to_representation(child)</code> for all children (if any) of the instance.</li>
<li>This is when recursion starts: the implementation of <code>recursiveField.to_representation</code> instantiates a new <code>SystmSerializer</code> instance on the child <code>Systm</code> object and then calls <code>SystmSerializer.data</code> which itself triggers a <code>to_representation</code> call... and you are back to step 1 with a new <code>SystmSerializer</code> instance on the child <code>Systm</code> object.</li>
</ol>
<p>Recursion ends when there are no children. Of course, you can run into endless recursion if a child references an ancestor.</p>
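<p>The same recursion can be mimicked with plain classes (a toy model of the serializer, not DRF itself): serializing a node means serializing each of its children with "the same serializer", until a node has no children.</p>

```python
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def serialize(node):
    # Mirrors SystmSerializer.to_representation: each child is
    # serialized by recursively applying the same function.
    return {'name': node.name,
            'children': [serialize(child) for child in node.children]}

tree = Node('root', [Node('a', [Node('a1')]), Node('b')])
print(serialize(tree))
```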
| 0 | 2016-09-29T14:20:48Z | [
"python",
"django",
"django-rest-framework"
]
|
Finding the exact string in input file | 39,768,676 | <p>I am looking for the lines starting with "ND" in an input file that looks like this:</p>
<pre><code>ND 195 4.53434033e+006 5.62453069e+006 2.56369141e+002
ND 196 4.53436645e+006 5.62443565e+006 2.56452118e+002
NS 129 113 97 82 58 59 37 22 17 18
NS 5 6 12 26 42 64 62 85 102 117
</code></pre>
<p>I wrote a code like this:</p>
<pre><code>from __future__ import print_function
found_ND = False
text = "ND"
with open('input.txt', 'r') as f, open('output.dat', 'w') as outfile:
for line in f:
if text in line:
found_ND = True
if found_ND:
#do whatever you want
try:
line = line.strip()
columns = line.split()
i = float(columns[1])
x = float(columns[2])
y = float(columns[3])
z = float(columns[4])
print(z)
print("{:.2f}".format(z), file=outfile)
except ValueError:
pass
</code></pre>
<p>But in the result I also get the fourth column of the lines starting with "NS".
The result looks like this:</p>
<pre><code>256.37
256.45
82.00
26.00
</code></pre>
<p>How can I write the code to avoid the lines starting with <code>"NS"</code>?</p>
 | 1 | 2016-09-29T11:11:27Z | 39,768,757 | <p>Well, you're using a flag <code>found_ND</code>, making it <code>True</code> if the line is found (which happens for the first line) and then never changing it back to <code>False</code>: </p>
<pre><code>if text in line:
found_ND = True
</code></pre>
<p><em><code>found_ND</code> will be <code>True</code> for all following iterations</em>. In short, just don't use a flag, you don't need it:</p>
<pre><code>for line in f:
if text in line:
#do whatever you want
try:
line = line.strip()
columns = line.split()
# .. and so on.
</code></pre>
<p>or, if you strictly want to check the beginning (that is, the line might contain <code>'ND'</code> elsewhere) use <code>startswith</code> as @Wiktor suggested:</p>
<pre><code>for line in f:
if line.startswith('ND '):
# ...
</code></pre>
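<p>Putting it together, a minimal self-contained sketch (using <code>io.StringIO</code> in place of your real input and output files, so the file handling here is only a stand-in):</p>

```python
import io

data = io.StringIO(
    "ND 195 4.53434033e+006 5.62453069e+006 2.56369141e+002\n"
    "ND 196 4.53436645e+006 5.62443565e+006 2.56452118e+002\n"
    "NS 129 113 97 82 58 59 37 22 17 18\n"
)

out = []
for line in data:
    if line.startswith('ND '):        # only lines that begin with "ND"
        columns = line.split()
        z = float(columns[4])         # fifth column, index 4
        out.append("{:.2f}".format(z))

print(out)  # ['256.37', '256.45']
```

<p>The "NS" lines never enter the <code>startswith</code> branch, so their fourth column no longer leaks into the output.</p>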
| 2 | 2016-09-29T11:15:29Z | [
"python",
"string",
"python-3.x"
]
|
set xticks label frequency with python | 39,768,742 | <p>I want to reduce the frequency of x tick marks. The x values are dates:</p>
<pre><code>date=['20120101','20120101','20120101',...'20121231']
</code></pre>
<p>I have 24 time steps for each day of the year, and I would like to label the xticks every 24 time steps (i.e., each day). </p>
<p>Here is the code that I use:</p>
<pre><code>date=[]
val=[]
for lig in file(liste.txt):
ligne=lig.split('')
date.append(ligne[0])
val.append(ligne[1])
plt.plot(date,val)
plt.setp(plt.gca().xaxis.get_majorticklabels(),rotation=90)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%y%m%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=4))
plt.show()
</code></pre>
<p>And here is my plot; there are no xtick labels!</p>
<p><a href="http://i.stack.imgur.com/XmF1K.png" rel="nofollow"><img src="http://i.stack.imgur.com/XmF1K.png" alt="enter image description here"></a></p>
| 0 | 2016-09-29T11:14:40Z | 39,769,409 | <p>Since there is no <em>complete</em> and working example, we have to guess: Is your <code>date</code> properly recognized as datetime object? If not, the <code>set_major_locator</code> in the date-format would result in no x-ticks.</p>
<p>You can achieve this using something like <code>date = datetime.datetime.strptime(date, '%Y%m%d')</code>.</p>
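<p>For example, converting the strings before plotting (a sketch with made-up values; only the parsing step matters here):</p>

```python
from datetime import datetime

raw = ['20120101', '20120102', '20120103']
dates = [datetime.strptime(d, '%Y%m%d') for d in raw]

# matplotlib now recognises these as datetime objects, so
# mdates.DayLocator / DateFormatter can place and label the ticks.
print(dates[0])  # 2012-01-01 00:00:00
```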
| 1 | 2016-09-29T11:45:15Z | [
"python",
"matplotlib"
]
|
Error in parsing HL7 using hl7apy | 39,768,782 | <p>I am using <strong>hl7apy</strong> to parse HL7 files in Python and I am following <strong><a href="https://msarfati.wordpress.com/2015/06/20/python-hl7-v2-x-and-hl7apy-introduction-and-parsing-part-1/" rel="nofollow">this</a></strong> link. When I use the sample.hl7 I get the desired result, but when I use my own HL7 file I get the error below:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "hl7apy/parser.py", line 82, in parse_message
m.structure_by_name)
File "hl7apy/parser.py", line 144, in parse_segments
reference))
File "hl7apy/parser.py", line 189, in parse_segment
reference=reference)
File "hl7apy/core.py", line 1564, in __init__
validation_level, traversal_parent)
File "hl7apy/core.py", line 632, in __init__
self._find_structure(reference)
File "hl7apy/core.py", line 808, in _find_structure
structure = ElementFinder.get_structure(self, reference)
File "hl7apy/core.py", line 524, in get_structure
raise InvalidName(element.classname, element.name)
hl7apy.exceptions.InvalidName: Invalid name for Segment:
</code></pre>
<p>I don't understand what I am doing wrong.</p>
<p><strong>EDIT:</strong>
This is the sample that I am using.</p>
<pre><code>MSH|^~\&|SQ|BIN|SMS|BIN|20121009151949||ORU^R01|120330003918|P|2.2
PID|1|K940462|T19022||TEIENT|JYSHEE|1957009|F|MR^^RM^MR^DR^MD^3216|7|0371 HOES LANE^0371 HOES LANE^NORTH CENTRE^FL^0854^INDIA^P^98|^UA^|(21)2-921|203960|ENG^ENGLISH^HL7096^ENG^ENGLISH^9CLAN|U|^HINU^|^^^T1M|05-1-900||||NW HAVEN||||PAS|NOTH CETRE|
PV1|1|I|BDE^BDE||||960^FALK,HENRY^^^MD|||MED|||||||960^FALK,HENRY^^^MD||22599|||||||||||||||||||||||||20160613102300||||
ORC|RE|10112|1705||D||^^^20103102300^216061102300||201208100924|PS||10084^BRUCE^PALTHROW|||201606310230|
OBR|1|10112|1705|1786-6^HEMOGOI A1C|||201606131300|201606131300||SGR||||201208056||1029^BONE,EAN|3-266-91|||||201280058||CH|F||R^^^2012070957|||||104^VRNEY,SCT|
OBX|1|NM|1856-6^LOINC^LN^HEMOGOI A1C^L||5.9|%|4.2-6.3||||F|||20160613|A^^L
</code></pre>
<p><strong>Code:</strong> This is the code that I am using to parse my HL7 files, the same code I used to parse the sample HL7 file mentioned in the above link.</p>
<pre><code>from hl7apy import parser
from hl7apy.exceptions import UnsupportedVersion
hl7 = open('ICD9.hl7', 'r').read()
try:
m = parser.parse_message(hl7)
except UnsupportedVersion:
m = parser.parse_message(hl7.replace("n", "r"))
</code></pre>
 | 0 | 2016-09-29T11:17:00Z | 39,769,267 | <p>You should show a message from your file here for a definitive answer. </p>
<p>But the most probable reason is that the content of your file does not follow the HL7 rules. Are you sure that you use the correct segment delimiter (ASCII 13, hex 0D)? Do you use non-standard segment names?</p>
<p>Just a check with <a href="http://try-it.caristix.com:9030/default.aspx" rel="nofollow">Free Online HL7 Messages Validation</a> gives these errors:</p>
<pre><code>ID ELEMENT_TYPE POSITION LINE_NO VALIDATION ERROR
1 Field MSH.9.3 1 Component required
2 Field PID.7 2 Invalid date time format : '1957009'
3 Component PID.9.7 2 Invalid table entry value : '3216' for table Name Type
4 Component PID.9.7 2 Value '3216' length (4) exceed limit (1)
5 Component PID.11.6 2 Invalid table entry value : 'INDIA' for table Country Code
6 Component PID.11.6 2 Value 'INDIA' length (5) exceed limit (3)
7 Field PID.12 2 Field should not contain component(s)
8 Component PID.18.1 2 Field required but has no value.
9 Field ORC.5 4 Invalid table entry value : 'D' for table Order status
10 Component ORC.7.4 4 Invalid date time format : '20103102300'
11 Component ORC.7.5 4 Invalid date time format : '216061102300'
12 Field ORC.15 4 Invalid date time format : '201606310230'
13 Field OBR.14 5 Invalid date time format : '201208056'
14 Field OBR.22 5 Invalid date time format : '201280058'
15 SubComponent OBR.27.1.1 5 Invalid numeric format : 'R'
16 Component OBR.27.4 5 Invalid date time format : '2012070957'
17 Component OBR.32.2 5 Invalid date time format : 'VRNEY,SCT'
</code></pre>
<p>But this does not explain your error message. Are you sure, that you read the message file and parse the content?</p>
<p>You have an error in your code. It should be</p>
<pre><code>hl7.replace("\n", "\r")
</code></pre>
<p>if you want to replace the wrong segment delimiter.</p>
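<p>A quick sanity check on the delimiter issue that you can run with plain Python, no hl7apy needed (the message content here is shortened and only illustrative):</p>

```python
# Pretend file content where segments are separated by '\n'
# instead of HL7's required '\r'
raw = "MSH|^~\\&|SQ|BIN\nPID|1|K940462\n"

print('\r' in raw)    # False -> wrong delimiter, the parser sees one giant segment

fixed = raw.replace("\n", "\r")
print('\r' in fixed)  # True -> segments are now split correctly
```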
| 2 | 2016-09-29T11:39:07Z | [
"python",
"hl7",
"hl7-v2"
]
|
Difference iterating with tp_iternext or PyIter_Next | 39,768,793 | <p>If I write a C function that does something with an iterable then I create an Iterator first and then loop over it.</p>
<pre><code>iterator = PyObject_GetIter(sequence);
if (iterator == NULL) {
return NULL;
}
while (( item = PyIter_Next(iterator) )) {
...
}
</code></pre>
<p>This works fine but I've also seen some functions using <code>tp_iternext</code>:</p>
<pre><code>iterator = PyObject_GetIter(sequence); // ....
iternext = *Py_TYPE(iterator)->tp_iternext;
while (( item = iternext(iterator) )) {
...
}
</code></pre>
<p>the second approach seems faster (I have only one data point: my Windows computer and my msvc compiler).</p>
<p>Is it just coincidence that the <code>iternext</code> approach is faster and is there any significant difference between these two?</p>
<p>Links to the python documentation of both:
<a href="https://docs.python.org/3/c-api/iter.html#c.PyIter_Next" rel="nofollow">PyIter_Next</a>,
<a href="https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_iternext" rel="nofollow">tp_iternext</a>.
I have read them, but to me it's not clear when and why one should be preferred.</p>
| 0 | 2016-09-29T11:17:20Z | 39,768,987 | <p>The <a href="https://hg.python.org/cpython/file/tip/Objects/abstract.c#l3145" rel="nofollow">source code for <code>PyIter_Next</code></a> shows that it simply retrieves the <code>tp_iternext</code> slot and calls it <strong>and clears a <code>StopIteration</code> exception that may or may not have occurred</strong>.</p>
<p>If you use <code>tp_iternext</code> explicitly you have to check for this <code>StopIteration</code> when exhausting the iterator.</p>
<hr>
<p>By the way: the documentation of <code>tp_iternext</code> also says:</p>
<blockquote>
<p><strong><code>iternextfunc PyTypeObject.tp_iternext</code></strong></p>
<p>An optional pointer to a function that returns the next item in an iterator. When the iterator is exhausted, it must return <code>NULL</code>; <strong>a
<code>StopIteration</code> exception may or may not be set.</strong> When another error
occurs, it must return <code>NULL</code> too. Its presence signals that the
instances of this type are iterators.</p>
</blockquote>
<p>While there is no such mention in <code>PyIter_Next</code>'s documentation.</p>
<p>So <code>PyIter_Next</code> is the simple <em>and safe</em> way of iterating over an iterator. You can use <code>tp_iternext</code> but then you have to be careful to not trigger a <code>StopIteration</code> exception at the end.</p>
| 1 | 2016-09-29T11:26:29Z | [
"python",
"python-c-api"
]
|
Collectd - Perl/Python plugin - registration functions not working | 39,768,848 | <p>I would like to ask about Collectd's Perl and Python plugins and their registration functions. </p>
<p>I tried to code a plugin in Perl (and also in Python), set up read and write functions, and after that register them with Collectd (the plugin_register functions). In neither case did it work. Every time, the logs show:</p>
<blockquote>
<p>Found a configuration for the "my_plugin" plugin, but the plugin
isn't loaded or didn't register a configuration callback.
severity=warning</p>
</blockquote>
<p>I load my plugin in perl.conf.</p>
<p>Below I attach an example plugin taken directly from the collectd-perl documentation. This plugin, as well as my own, gives the same result. </p>
<pre><code>package Collectd::Plugins::FooBar;
use strict;
use warnings;
use Collectd qw( :all );
sub foobar_read
{
my $vl = { plugin => 'foobar', type => 'gauge' };
$vl->{'values'} = [ rand(42) ];
plugin_dispatch_values ($vl);
return 1;
}
sub foobar_write
{
my ($type, $ds, $vl) = @_;
for (my $i = 0; $i < scalar (@$ds); ++$i) {
print "$vl->{'plugin'} ($vl->{'type'}): $vl->{'values'}->[$i]\n";
}
return 1;
}
sub foobar_match
{
my ($ds, $vl, $meta, $user_data) = @_;
if (matches($ds, $vl)) {
return FC_MATCH_MATCHES;
} else {
return FC_MATCH_NO_MATCH;
}
}
plugin_register (TYPE_READ, "foobar", "foobar_read");
plugin_register (TYPE_WRITE, "foobar", "foobar_write");
fc_register (FC_MATCH, "foobar", "foobar_match");
</code></pre>
| 3 | 2016-09-29T11:19:44Z | 39,770,304 | <p>Post your configuration if you can.</p>
<p>The documentation says that the <em><strong><code>LoadPlugin</code></strong></em> configuration goes in the collectd.conf file (not that that seems to matter in your case from your log file).</p>
<p>Put your FooBar.pm module at <strong><em><code>/path/to/perl/plugins/Collectd/Plugins/FooBar.pm</code></em></strong>, matching it with the path that you specified ... (match the case of the plugin name and the plugin .pm file).</p>
<pre><code>LoadPlugin perl
# ...
<Plugin perl>
IncludeDir "/path/to/perl/plugins"
BaseName "Collectd::Plugins"
EnableDebugger ""
LoadPlugin "FooBar"
<Plugin FooBar>
Foo "Bar"
</Plugin>
</Plugin>
</code></pre>
| 1 | 2016-09-29T12:25:46Z | [
"python",
"perl",
"plugins",
"registration",
"collectd"
]
|
Check if Entry widget is selected | 39,768,925 | <p>I'm making a program on the Raspberry Pi with a touchscreen display.
My Python Tkinter program has two entry widgets and one on-screen keypad. I want to use the same keypad for entering data into both entry widgets. </p>
<p>Can anyone tell me how I can check if an entry is selected? Similar to clicking on the Entry with the mouse so that the cursor appears. How can I detect that in Python Tkinter?</p>
<p>Thank you.</p>
| 0 | 2016-09-29T11:23:43Z | 39,769,432 | <p>You can use events and bindigs to catch FocusIn events for your entries.</p>
<pre><code>entry1 = Entry(root)
entry2 = Entry(root)
def callback_entry1_focus(event):
print 'entry1 focus in'
def callback_entry2_focus(event):
print 'entry2 focus in'
entry1.bind("<FocusIn>", callback_entry1_focus)
entry2.bind("<FocusIn>", callback_entry2_focus)
</code></pre>
| 0 | 2016-09-29T11:46:19Z | [
"python",
"tkinter",
"raspberry-pi",
"touchscreen",
"raspberry-pi3"
]
|
Check if Entry widget is selected | 39,768,925 | <p>I'm making a program on the Raspberry Pi with a touchscreen display.
My Python Tkinter program has two entry widgets and one on-screen keypad. I want to use the same keypad for entering data into both entry widgets. </p>
<p>Can anyone tell me how I can check if an entry is selected? Similar to clicking on the Entry with the mouse so that the cursor appears. How can I detect that in Python Tkinter?</p>
<p>Thank you.</p>
| 0 | 2016-09-29T11:23:43Z | 39,770,561 | <p>There is always a widget with the keyboard focus. You can query that with the <code>focus_get</code> method of the root window. It will return whatever widget has keyboard focus. That is the window that should receive input from your keypad. </p>
| 0 | 2016-09-29T12:37:55Z | [
"python",
"tkinter",
"raspberry-pi",
"touchscreen",
"raspberry-pi3"
]
|
Mongo TTL cleanup is not working | 39,768,934 | <p>I'm trying to get Mongo to remove documents with the TTL feature however without success. Have tried many things but mongo doesn't seem to clean up.</p>
<p>My index:</p>
<pre><code> {
"v" : 1,
"key" : {
"date" : 1
},
"name" : "date_1",
"ns" : "history.history",
"expireAfterSeconds" : 60
}
</code></pre>
<p>The date value from document:</p>
<pre><code> "date" : "2016-09-29 11:08:46.461207",
</code></pre>
<p>Output from db.serverStatus().metrics.ttl:</p>
<pre><code>{ "deletedDocuments" : NumberLong(0), "passes" : NumberLong(29) }
</code></pre>
<p>Time output from db.serverStatus():</p>
<pre><code>"localTime" : ISODate("2016-09-29T11:19:45.345Z")
</code></pre>
<p>The only thing I suspect is the way I insert the value from Python. It could be that it's wrong in some way. I have a JSON document which contains the following element:</p>
<pre><code>"date": str(datetime.utcnow()),
</code></pre>
<p>Any clues where the problem might lie?</p>
<p>Thanks,
Janis </p>
| 4 | 2016-09-29T11:24:01Z | 39,773,284 | <p>As you have guessed, the problem is in how you insert the date value. I'll quote the <a href="https://docs.mongodb.com/manual/core/index-ttl/" rel="nofollow">docs</a>:</p>
<blockquote>
<p>If the indexed field in a document is not a date or an array that
holds a date value(s), the document will not expire.</p>
</blockquote>
<p>You are casting the date to a string. If you are using the pymongo driver, it will handle datetimes nicely and convert them to MongoDB's native <a href="https://docs.mongodb.com/manual/reference/bson-types/#date" rel="nofollow">Date type</a>.</p>
<p>This way, the following should work:</p>
<pre><code>"date": datetime.utcnow()
</code></pre>
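<p>A quick way to see the difference (plain Python, no MongoDB needed): a <code>str</code> value is not a date, so the TTL monitor skips it, while a real <code>datetime</code> is what pymongo stores as a BSON Date:</p>

```python
from datetime import datetime

doc_wrong = {"date": str(datetime.utcnow())}  # stored as a string -> never expires
doc_right = {"date": datetime.utcnow()}       # stored as a BSON Date -> TTL works

print(isinstance(doc_wrong["date"], datetime))  # False
print(isinstance(doc_right["date"], datetime))  # True
```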
| 2 | 2016-09-29T14:37:24Z | [
"python",
"mongodb",
"ttl"
]
|
how to delete entire row in csv file and save changes on same file? | 39,769,041 | <p>I'm new to Python and am trying to modify a CSV file so I will be able to delete specific rows with specific fields according to a given list.
In my current code I get the rows I want to delete, but I can't delete them and save the changes to the same file (replace). </p>
<pre><code> import os, sys, glob
import time ,csv
# Open a file
path = 'C:\\Users\\tzahi.k\\Desktop\\netzer\\'
dirs = os.listdir( path )
fileslst = []
alertsCode = ("42001", "42003", "42006","51001" , "51002" ,"61001" ,"61002","71001",
"71002","71003","71004","71005","71006","72001","72002","72003","72004",
"82001","82002","82003","82004","82005","82006","82007","83001","84001")
# This would print the unnesscery codes
for file in dirs:
if "ALERTS" in file.upper() :
fileslst.append(file)
fileslst.sort()
with open(fileslst[-1], 'rb') as csvfile:
csvReader = csv.reader(csvfile)
for row in csvReader:
for alert in alertsCode:
if any(alert in row[2] for s in alertsCode) :
print row
</code></pre>
<p>Any help?</p>
| 0 | 2016-09-29T11:29:03Z | 39,769,261 | <p>Read all the rows into a list using a <em>list comprehension</em> and excluding the unwanted rows. Then <em>rewrite</em> the rows to the file in mode <code>w</code> (write mode) which overwrites or replaces the content of the file:</p>
<pre><code>with open(fileslst[-1], 'rb') as csvfile:
csvReader = csv.reader(csvfile)
clean_rows = [row for row in csvReader if not any(alert in row[2] for alert in alertsCode)]
# csvfile.truncate()
with open(fileslst[-1], 'wb') as csvfile:
csv_writer = csv.writer(csvfile)
csv_writer.writerows(clean_rows)
</code></pre>
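<p>Note that the snippet above uses Python 2 file modes (<code>'rb'</code>/<code>'wb'</code>). A Python 3 sketch of the same idea, self-contained with a temporary file (the alert codes are shortened here just for the example):</p>

```python
import csv
import os
import tempfile

alertsCode = ("42001", "42003")

# Create a small sample file to work on
path = os.path.join(tempfile.mkdtemp(), 'alerts.csv')
with open(path, 'w', newline='') as f:
    csv.writer(f).writerows([['a', 'b', '42001x'], ['c', 'd', '99999']])

# Read and keep only the rows whose third column matches no alert code...
with open(path, newline='') as f:
    clean_rows = [row for row in csv.reader(f)
                  if not any(alert in row[2] for alert in alertsCode)]

# ...then overwrite the same file with the remaining rows
with open(path, 'w', newline='') as f:
    csv.writer(f).writerows(clean_rows)
```

<p>In Python 3 the files are opened in text mode with <code>newline=''</code>, which is what the <code>csv</code> module expects.</p>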
| 3 | 2016-09-29T11:38:56Z | [
"python",
"csv"
]
|
data manipulation/ indexing python vs R | 39,769,196 | <p>I am inexperienced with Python and am unsure what I should search for this particular task. I am trying to find a way to index a list much like I would index a vector in R:</p>
<h1>R</h1>
<pre><code>vec=c(1,2,3)
> vec==1
[1] TRUE FALSE FALSE
</code></pre>
<h1>python</h1>
<pre><code>>>> list_a=[1,2,3]
>>> list_a==1
False
</code></pre>
<h1>separate attempt in python</h1>
<pre><code>for i in list_a:
... i==1
...
False
False
False
</code></pre>
<p>Notice above that it is False for all three values even though the first value is 1???</p>
<h1>And yet</h1>
<pre><code>>>> for i in list_a:
... if i==1:
... print('hello')
...
hello
</code></pre>
<p>Hence, when it comes to Python, I am just looking for a way to index the Python list in the same way as I can in R.</p>
| 0 | 2016-09-29T11:36:24Z | 39,769,295 | <p>What about :</p>
<pre><code>>>> [x == 1 for x in list_a]
[True, False, False]
</code></pre>
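<p>If you want the behaviour closest to R's vectorised comparison, NumPy (assuming it is available to you) does this directly, since comparisons broadcast over the whole array:</p>

```python
import numpy as np

vec = np.array([1, 2, 3])
print(vec == 1)  # [ True False False]
```

<p>The result is a boolean array, which you can also use for indexing, e.g. <code>vec[vec == 1]</code>, just like in R.</p>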
| 3 | 2016-09-29T11:40:00Z | [
"python"
]
|
data manipulation/ indexing python vs R | 39,769,196 | <p>I am inexperienced with Python and am unsure what I should search for this particular task. I am trying to find a way to index a list much like I would index a vector in R:</p>
<h1>R</h1>
<pre><code>vec=c(1,2,3)
> vec==1
[1] TRUE FALSE FALSE
</code></pre>
<h1>python</h1>
<pre><code>>>> list_a=[1,2,3]
>>> list_a==1
False
</code></pre>
<h1>separate attempt in python</h1>
<pre><code>for i in list_a:
... i==1
...
False
False
False
</code></pre>
<p>Notice above that it is False for all three values even though the first value is 1???</p>
<h1>And yet</h1>
<pre><code>>>> for i in list_a:
... if i==1:
... print('hello')
...
hello
</code></pre>
<p>Hence, when it comes to Python, I am just looking for a way to index the Python list in the same way as I can in R.</p>
| 0 | 2016-09-29T11:36:24Z | 39,769,400 | <p>an alternative:</p>
<pre><code>map(lambda x: x == 1, list_a)
#[True, False, False]
</code></pre>
| 1 | 2016-09-29T11:44:54Z | [
"python"
]
|
Parse pandas df column with regex extracting substrings | 39,769,300 | <p>I have a pandas df containing a column composed of text like:</p>
<pre><code>String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text
</code></pre>
<p>I can see that:</p>
<ol>
<li>The start of the text always contains the first string I want to extract</li>
<li>The rest of the strings are in between "::" and ";" </li>
</ol>
<p>I want to create a new column containing:</p>
<pre><code>String1, String2, String3, String4
</code></pre>
<p>All separed by a comma but still in the same column. </p>
<p>How to approach the problem?</p>
<p>Thanks for your help</p>
| 0 | 2016-09-29T11:40:15Z | 39,769,413 | <p>try this:</p>
<pre><code>In [136]: df.txt.str.findall(r'String\d+').str.join(', ')
Out[136]:
0 String1, String2, String3, String4
Name: txt, dtype: object
</code></pre>
<p>Data:</p>
<pre><code>In [137]: df
Out[137]:
txt
0 String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_t...
</code></pre>
<p>Setup:</p>
<pre><code>df = pd.DataFrame({'txt': ['String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text']})
</code></pre>
| 1 | 2016-09-29T11:45:23Z | [
"python",
"regex",
"pandas",
"text"
]
|
Parse pandas df column with regex extracting substrings | 39,769,300 | <p>I have a pandas df containing a column composed of text like:</p>
<pre><code>String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text
</code></pre>
<p>I can see that:</p>
<ol>
<li>The start of the text always contains the first string I want to extract</li>
<li>The rest of the strings are in between "::" and ";" </li>
</ol>
<p>I want to create a new column containing:</p>
<pre><code>String1, String2, String3, String4
</code></pre>
<p>All separated by a comma but still in the same column. </p>
<p>How to approach the problem?</p>
<p>Thanks for your help</p>
| 0 | 2016-09-29T11:40:15Z | 39,772,859 | <p>consider the dataframe <code>df</code> with column <code>txt</code></p>
<pre><code>df = pd.DataFrame(['String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text'] * 10,
columns=['txt'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/bMddU.png" rel="nofollow"><img src="http://i.stack.imgur.com/bMddU.png" alt="enter image description here"></a></p>
<hr>
<p>use a combination of <code>str.split</code> and <code>groupby</code></p>
<pre><code>df.txt.str.split(';', expand=True).stack() \
.str.split('::').str[0].groupby(level=0).apply(list)
0 [String1, String2, String3, String4]
1 [String1, String2, String3, String4]
2 [String1, String2, String3, String4]
3 [String1, String2, String3, String4]
4 [String1, String2, String3, String4]
5 [String1, String2, String3, String4]
6 [String1, String2, String3, String4]
7 [String1, String2, String3, String4]
8 [String1, String2, String3, String4]
9 [String1, String2, String3, String4]
dtype: object
</code></pre>
| 1 | 2016-09-29T14:18:45Z | [
"python",
"regex",
"pandas",
"text"
]
|
Parse pandas df column with regex extracting substrings | 39,769,300 | <p>I have a pandas df containing a column composed of text like:</p>
<pre><code>String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text
</code></pre>
<p>I can see that:</p>
<ol>
<li>The start of the text always contains the first string I want to extract</li>
<li>The rest of the strings are in between "::" and ";" </li>
</ol>
<p>I want to create a new column containing:</p>
<pre><code>String1, String2, String3, String4
</code></pre>
<p>All separated by a comma but still in the same column. </p>
<p>How to approach the problem?</p>
<p>Thanks for your help</p>
| 0 | 2016-09-29T11:40:15Z | 39,773,002 | <p>I would just apply a lambda function to do the operation you want to do (split first on ";", then split on "::" and keep the first element, and join them back):</p>
<pre><code>df['new_col'] = df['old_col'].apply(lambda s: ", ".join(t.split("::")[0] for t in s.split(";")))
</code></pre>
<p>You could also avoid splitting on <code>::</code> since simply stopping before the first <code>:</code> is enough:</p>
<pre><code>df['new_col'] = df['old_col'].apply(lambda s: ", ".join(t[:t.index(":")] for t in s.split(";")))
</code></pre>
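<p>A quick check of that expression on the example row (plain Python, no pandas needed):</p>

```python
s = 'String1::some_text::some_text;String2::some_text::;String3::some_text::some_text;String4::some_text::some_text'

# Split on ';' to get the chunks, keep the part before the first '::'
new = ", ".join(t.split("::")[0] for t in s.split(";"))
print(new)  # String1, String2, String3, String4
```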
| 0 | 2016-09-29T14:24:40Z | [
"python",
"regex",
"pandas",
"text"
]
|
AWS Default region not setting using env variable in boto | 39,769,527 | <p>Python terminal below</p>
<pre><code>In [3]: os.environ['AWS_DEFAULT_REGION']
Out[3]: 'us-west-2'
In [4]: import boto
In [5]: boto.connect_ec2()
Out[5]: EC2Connection:ec2.ap-southeast-1.amazonaws.com
</code></pre>
<p>AWS Default region is not getting set even after using <code>AWS_DEFAULT_REGION</code> as an env variable. Please suggest!</p>
| 0 | 2016-09-29T11:49:48Z | 39,785,007 | <p><strong>Solution:</strong> </p>
<p><code>AWS_DEFAULT_REGION</code> works only with <code>boto3</code>.
I was using <code>boto2</code>. Environment variables for the region are not supported in <code>boto2</code>.</p>
<p>Updating the <code>boto.cfg</code> fixed the problem.</p>
| 0 | 2016-09-30T06:26:52Z | [
"python",
"amazon-web-services",
"boto"
]
|
Detect and count numerical sequence in Python array | 39,769,564 | <p>In a numerical sequence (e.g. one-dimensional array) I want to find different patterns of numbers and count each finding separately. However, the numbers can occur repeatedly but only the basic pattern is important. </p>
<pre><code># Example signal (1d array)
a = np.array([1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1])
# Search for these exact following "patterns": [1,2,1], [1,2,3], [3,2,1]
# Count the number of pattern occurrences
# [1,2,1] = 2 (occurs 2 times)
# [1,2,3] = 1
# [3,2,1] = 1
</code></pre>
<p>I have come up with the Knuth-Morris-Pratt string matching (<a href="http://code.activestate.com/recipes/117214/" rel="nofollow">http://code.activestate.com/recipes/117214/</a>), which gives me the index of the searched pattern.</p>
<pre><code>for s in KnuthMorrisPratt(list(a), [1,2,1]):
print('s')
</code></pre>
<p>The problem is, I don't know how to find the case, where the pattern [1,2,1] "hides" in the sequence [1,2,2,2,1]. I need to find a way to reduce this sequence of repeated numbers in order to get to [1,2,1]. Any ideas?</p>
| -1 | 2016-09-29T11:51:38Z | 39,770,223 | <p>I don't use NumPy and I am quite new to Python, so there might be a better and more efficient solution.</p>
<p>I would write a function like this:</p>
<pre><code>def dac(data, pattern):
count = 0
for i in range(len(data)-len(pattern)+1):
tmp = data[i:(i+len(pattern))]
if tmp == pattern:
            count += 1
return count
</code></pre>
<p>If you want to ignore repeated numbers in the middle of your pattern:</p>
<pre><code>def dac(data, pattern):
    count = 0
    for i in range(len(data) - 1):
        # only start a match at the beginning of a run of equal numbers
        if i > 0 and data[i] == data[i - 1]:
            continue
        tmp = [data[i]]
        j = i
        # collapse repeated numbers until the window is as long as the pattern
        while len(tmp) < len(pattern) and j + 1 < len(data):
            if tmp[-1] != data[j + 1]:
                tmp.append(data[j + 1])
            j += 1
        if tmp == pattern:
            count += 1
    return count
</code></pre>
<p>Hope that might help.</p>
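<p>As a sketch of the same run-collapsing idea using only the standard library, you can first reduce every run of repeated numbers to a single element with <code>itertools.groupby</code> and then slide a window over the collapsed list:</p>

```python
from itertools import groupby

def count_pattern(data, pattern):
    # Collapse runs of equal numbers: [1,2,2,2,1] -> [1,2,1]
    collapsed = [key for key, _ in groupby(data)]
    # Count (possibly overlapping) windows equal to the pattern
    return sum(1 for i in range(len(collapsed) - len(pattern) + 1)
               if collapsed[i:i + len(pattern)] == pattern)

a = [1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1]
print(count_pattern(a, [1, 2, 1]))  # 2
print(count_pattern(a, [1, 2, 3]))  # 1
print(count_pattern(a, [3, 2, 1]))  # 1
```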
| 2 | 2016-09-29T12:21:58Z | [
"python",
"arrays",
"numbers",
"detection"
]
|
Detect and count numerical sequence in Python array | 39,769,564 | <p>In a numerical sequence (e.g. one-dimensional array) I want to find different patterns of numbers and count each finding separately. However, the numbers can occur repeatedly but only the basic pattern is important. </p>
<pre><code># Example signal (1d array)
a = np.array([1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1])
# Search for these exact following "patterns": [1,2,1], [1,2,3], [3,2,1]
# Count the number of pattern occurrences
# [1,2,1] = 2 (occurs 2 times)
# [1,2,3] = 1
# [3,2,1] = 1
</code></pre>
<p>I have come up with the Knuth-Morris-Pratt string matching (<a href="http://code.activestate.com/recipes/117214/" rel="nofollow">http://code.activestate.com/recipes/117214/</a>), which gives me the index of the searched pattern.</p>
<pre><code>for s in KnuthMorrisPratt(list(a), [1,2,1]):
print('s')
</code></pre>
<p>The problem is, I don't know how to find the case, where the pattern [1,2,1] "hides" in the sequence [1,2,2,2,1]. I need to find a way to reduce this sequence of repeated numbers in order to get to [1,2,1]. Any ideas?</p>
| -1 | 2016-09-29T11:51:38Z | 39,770,233 | <p>Here's a one-liner that will do it:</p>
<pre><code>import numpy as np
a = np.array([1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1])
p = np.array([1,2,1])
num = sum(1 for k in
[a[j:j+len(p)] for j in range(len(a) - len(p) + 1)]
if np.array_equal(k, p))
</code></pre>
<p>The innermost part is a list comprehension that generates all pieces of the array that are the same length as the pattern. The outer part sums 1 for every element of this list which matches the pattern.</p>
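<p>The window-generating part of the one-liner can also be written with <code>zip</code> over shifted slices, which some readers find easier to follow — a small sketch:</p>

```python
def windows(seq, n):
    # zip shifted copies of seq to produce every length-n slice
    return list(zip(*(seq[i:] for i in range(n))))

print(windows([1, 2, 3, 4, 5], 3))  # [(1, 2, 3), (2, 3, 4), (3, 4, 5)]
```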
| 1 | 2016-09-29T12:22:19Z | [
"python",
"arrays",
"numbers",
"detection"
]
|
Detect and count numerical sequence in Python array | 39,769,564 | <p>In a numerical sequence (e.g. one-dimensional array) I want to find different patterns of numbers and count each finding separately. However, the numbers can occur repeatedly but only the basic pattern is important. </p>
<pre><code># Example signal (1d array)
a = np.array([1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1])
# Search for these exact following "patterns": [1,2,1], [1,2,3], [3,2,1]
# Count the number of pattern occurrences
# [1,2,1] = 2 (occurs 2 times)
# [1,2,3] = 1
# [3,2,1] = 1
</code></pre>
<p>I have come up with the Knuth-Morris-Pratt string matching (<a href="http://code.activestate.com/recipes/117214/" rel="nofollow">http://code.activestate.com/recipes/117214/</a>), which gives me the index of the searched pattern.</p>
<pre><code>for s in KnuthMorrisPratt(list(a), [1,2,1]):
print('s')
</code></pre>
<p>The problem is, I don't know how to find the case, where the pattern [1,2,1] "hides" in the sequence [1,2,2,2,1]. I need to find a way to reduce this sequence of repeated numbers in order to get to [1,2,1]. Any ideas?</p>
| -1 | 2016-09-29T11:51:38Z | 39,772,046 | <p>The only way I could think of to solve your problem of matching
subpatterns was to use <code>regex</code>.</p>
<p>The following is a demonstration for finding, for example, the sequence <code>[1,2,1]</code> in <code>list1</code>:</p>
<pre><code>import re
list1 = [1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1]
str_list = ''.join(str(i) for i in list1)
print re.findall(r'1+2+1', str_list)
</code></pre>
<p>This will give you as a result:</p>
<pre><code>>>> print re.findall(r'1+2+1', str_list)
['1122221', '1121']
</code></pre>
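<p>Building on the same idea, you can also collapse each run of repeated digits with <code>re.sub</code> before searching, and count overlapping occurrences with a lookahead — a sketch that assumes all values are single digits, as in the example:</p>

```python
import re

list1 = [1,1,2,2,2,2,1,1,1,2,1,1,2,3,3,3,3,3,2,2,1,1,1]
s = ''.join(str(i) for i in list1)

# Collapse every run of a repeated digit down to a single digit
collapsed = re.sub(r'(\d)\1+', r'\1', s)
print(collapsed)  # 121212321

# A lookahead counts overlapping occurrences as well
print(len(re.findall(r'(?=121)', collapsed)))  # 2
```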
| 1 | 2016-09-29T13:43:52Z | [
"python",
"arrays",
"numbers",
"detection"
]
|
Python, create new list based on condition applied to an existing list of same length | 39,769,958 | <p>Ok, I'm sure there is a very easy way to do this, but I'm rusty in python and I can't work out the pythonic way to do this.</p>
<p>I have a list, representing the hours of the day:</p>
<pre><code>import numpy as np
hourOfDay = np.mod(range(0, 100), 24)
</code></pre>
<p>Then I want to create a new list which is a larger value <code>0.4</code>, when the hour is between <code>7</code> and <code>22</code>, and <code>0.2</code> otherwise.</p>
<p>There are several related posts <a href="http://stackoverflow.com/questions/15147696/python-how-to-create-a-new-list-based-on-existing-list-without-certain-objects">here</a> and <a href="http://stackoverflow.com/questions/7406448/python-create-a-new-list-from-a-list-when-a-certain-condition-is-met">here</a>, but they're not quite what I want (they end up with a shorter list, I want the same-length list).</p>
<p>Assuming I needed to use list comprehension I tried this:</p>
<pre><code>newList = [0.4 for hour in hourOfDay if hour <= 7 or hour >= 22 else 0.2]
</code></pre>
| 3 | 2016-09-29T12:10:23Z | 39,770,093 | <p>Your list comprehension was slightly off. Also, if you want <code>0.4</code> when the hour is between <code>7</code> and <code>22</code>, you need <code>7 <= hour <= 22</code>:</p>
<pre><code>import numpy as np
hourOfDay = np.mod(range(0, 100), 24)
newList = [0.4 if 7 <= i <= 22 else 0.2 for i in hourOfDay]
>>> newList
[0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2]
</code></pre>
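<p>Since <code>hourOfDay</code> is already a NumPy array, a fully vectorized alternative (no Python-level loop at all) is <code>np.where</code>:</p>

```python
import numpy as np

hourOfDay = np.mod(range(0, 100), 24)
# 0.4 where 7 <= hour <= 22, 0.2 everywhere else
newArr = np.where((hourOfDay >= 7) & (hourOfDay <= 22), 0.4, 0.2)
print(newArr[:10])  # [0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.4 0.4 0.4]
```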
| 1 | 2016-09-29T12:16:19Z | [
"python",
"list",
"numpy"
]
|
Python, create new list based on condition applied to an existing list of same length | 39,769,958 | <p>Ok, I'm sure there is a very easy way to do this, but I'm rusty in python and I can't work out the pythonic way to do this.</p>
<p>I have a list, representing the hours of the day:</p>
<pre><code>import numpy as np
hourOfDay = np.mod(range(0, 100), 24)
</code></pre>
<p>Then I want to create a new list which is a larger value <code>0.4</code>, when the hour is between <code>7</code> and <code>22</code>, and <code>0.2</code> otherwise.</p>
<p>There are several related posts <a href="http://stackoverflow.com/questions/15147696/python-how-to-create-a-new-list-based-on-existing-list-without-certain-objects">here</a> and <a href="http://stackoverflow.com/questions/7406448/python-create-a-new-list-from-a-list-when-a-certain-condition-is-met">here</a>, but they're not quite what I want (they end up with a shorter list, I want the same-length list).</p>
<p>Assuming I needed to use list comprehension I tried this:</p>
<pre><code>newList = [0.4 for hour in hourOfDay if hour <= 7 or hour >= 22 else 0.2]
</code></pre>
| 3 | 2016-09-29T12:10:23Z | 39,770,117 | <p>You can use a mask, but note that to avoid integer type casting you should create the first array with data type float:</p>
<pre><code>In [15]: hourOfDay = np.mod(range(0, 100), 24, dtype=np.float)
In [16]: mask = np.logical_or(hourOfDay <= 7, hourOfDay >= 22)
In [17]: hourOfDay[mask] = 0.4
In [19]: hourOfDay[~mask] = 0.2
In [20]: hourOfDay
Out[20]:
array([ 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
0.4, 0.4, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4, 0.4,
0.4])
</code></pre>
| 1 | 2016-09-29T12:17:16Z | [
"python",
"list",
"numpy"
]
|
Python, create new list based on condition applied to an existing list of same length | 39,769,958 | <p>Ok, I'm sure there is a very easy way to do this, but I'm rusty in python and I can't work out the pythonic way to do this.</p>
<p>I have a list, representing the hours of the day:</p>
<pre><code>import numpy as np
hourOfDay = np.mod(range(0, 100), 24)
</code></pre>
<p>Then I want to create a new list which is a larger value <code>0.4</code>, when the hour is between <code>7</code> and <code>22</code>, and <code>0.2</code> otherwise.</p>
<p>There are several related posts <a href="http://stackoverflow.com/questions/15147696/python-how-to-create-a-new-list-based-on-existing-list-without-certain-objects">here</a> and <a href="http://stackoverflow.com/questions/7406448/python-create-a-new-list-from-a-list-when-a-certain-condition-is-met">here</a>, but they're not quite what I want (they end up with a shorter list, I want the same-length list).</p>
<p>Assuming I needed to use list comprehension I tried this:</p>
<pre><code>newList = [0.4 for hour in hourOfDay if hour <= 7 or hour >= 22 else 0.2]
</code></pre>
| 3 | 2016-09-29T12:10:23Z | 39,770,349 | <p>One alternative approach is to use <code>map()</code>:</p>
<pre><code>map(lambda x: 0.4 if 7 <= x <= 22 else 0.2, hourOfDay)
</code></pre>
| 1 | 2016-09-29T12:27:53Z | [
"python",
"list",
"numpy"
]
|
Is it possible to run Python without installation (without using packagers like py2exe)? | 39,770,096 | <p>We have a complex tool (coded in Python) for environment validation and configuration. It runs on various Windows flavors. We used this tool within the company so far. However now we want our support engineers to use it on the road. The problem is they will not have permissions to install Python in customer machines.</p>
<p>To address this situation, we looked at utilities like py2exe, cx_freeze and pyInstaller. While they are good, the generated EXE runs into dependency issues in some situations. So we dropped using these tools.</p>
<p>Is it possible to take all python related files in a pen drive and run it directly from it? When we do that, obviously the interpreter complains because the DLLs are not registered on the target machine. I suspect just registering the DLLs may lead to other problems. Is there a simple solution to this?</p>
| 0 | 2016-09-29T12:16:25Z | 39,770,711 | <p>Suppose your script is <code>main.py</code>, </p>
<ul>
<li>copy all the contents of the Python installation directory into <code>project/python</code>, next to your script</li>
<li>search for <code>python*.dll</code> in your Windows directory and add it to <code>project/python</code></li>
<li>create <code>project/main.bat</code>:</li>
</ul>
<p><strong>main.bat</strong></p>
<pre><code>"%~dp0/python/python.exe" "%~dp0/main.py" %*
</code></pre>
<p>The project directory should be:</p>
<pre><code>project
├── python
|   ├── python.exe
|   ├── python27.dll
|   └── ...
├── main.py
└── main.bat
</code></pre>
<p>Now, invoke the program with <code>main.bat</code></p>
| 0 | 2016-09-29T12:45:05Z | [
"python",
"dll",
"installation",
"python-standalone"
]
|
Write in pre-written xlsx format template in python | 39,770,097 | <p>I am a bit new to Python, and I'm working in QGIS to automate some tasks.
I have a pre-written Excel template file where I have to add different data to different columns.</p>
<pre><code>outpufFile = open('d:/template.xlsx','w')
for layer in QgsMapLayerRegistry.instance().mapLayers().values():
print layer.name() +","+ str(layer.featureCount())
line = layer.name()
unicode_line = line.encode('utf-8')
outpufFile.write(unicode_line)
outpufFile.close()
</code></pre>
<p>This is what I have so far, but it corrupts my template, and I also don't know how to access a specific column or row of my file in order to write at exactly the right column and row to fill my template.</p>
| 0 | 2016-09-29T12:16:26Z | 39,770,219 | <p>You need a Python library that understands how to handle xlsx files: </p>
<p><a href="https://openpyxl.readthedocs.io/en/default/" rel="nofollow">openpyxl</a>, for example.</p>
| 0 | 2016-09-29T12:21:54Z | [
"python",
"excel"
]
|
Signal when all urls are loaded in Django | 39,770,214 | <p>I want to perform magic on all url patterns of django:</p>
<p>If they don't have a name, then I want to give them an automated name.</p>
<p>Unfortunately Django seems to load the url patterns lazily.</p>
<p>Implementing this magic-add-name-method is easy and not part of this question.</p>
<p>The problem: Where to call this method? I need to call this method after all urls have been loaded and before the first request gets handled.</p>
<p>Code should work for Django 1.9 and Django 1.10.</p>
<h1>Background</h1>
<p>Django deprecated the use of import-strings like this (<a href="https://code.djangoproject.com/ticket/22384" rel="nofollow">Deprecate the ability to reverse by dotted path)</a></p>
<pre><code>reverse('myapp.views.my_view')
</code></pre>
<p>I have a large legacy code base, and I don't like typing and maintaining redundant characters.</p>
<p>I want all urls without a name to have the name of their corresponding import-string.</p>
<p>Example in urls.py:</p>
<pre><code>url(r'^.../.../(?P<pk>[^/]+)/$', views.my_view))
</code></pre>
<p>I want to set the name of this url automatically to 'myapp.views.my_view'.</p>
| 1 | 2016-09-29T12:21:40Z | 39,773,972 | <p>I wouldn't use signals to change the created urls after the fact but instead write a drop-in replacement wrapper for <code>django.conf.urls.url</code>:</p>
<pre><code>import django.conf.urls

def url(*args, **kwargs):
    if 'name' not in kwargs:
        view = args[1]
        # Build a dotted name such as 'polls.views.index' from the view callable
        kwargs['name'] = '%s.%s' % (view.__module__, view.__name__)
    return django.conf.urls.url(*args, **kwargs)
</code></pre>
<p>That you can then use like</p>
<pre><code># from django.conf.urls import url # Removed to enable auto-naming
from autonameroutes.urls import url
from . import views
app_name = 'polls'
urlpatterns = [
url(r'^$', views.IndexView.as_view(), name='index'),
url(r'^(?P<pk>\d+)/$', views.DetailView.as_view(), name='detail'),
...
]
</code></pre>
<p>The logic should be essentially the same as when changing all routes in the system, but you'd have more control when you want to use it and the change would be more obvious to the developer maintaining the code later on.</p>
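<p>For completeness, the name-derivation step can be sketched as a small standalone helper (plain Python, no Django required):</p>

```python
import json

def dotted_name(obj):
    # e.g. 'polls.views.my_view' for a function defined in polls/views.py
    return '%s.%s' % (obj.__module__, getattr(obj, '__qualname__', obj.__name__))

print(dotted_name(json.dumps))  # json.dumps
```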
| 0 | 2016-09-29T15:10:03Z | [
"python",
"django",
"django-urls",
"django-signals"
]
|
python mpl: how to plot a point on a graph of 3D vectors, and the head of an arrow | 39,770,247 | <p>I have the following code to generate a triangle (from some lines) and a vector in a 3D figure:</p>
<pre><code>from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
import sys
# D is the viewing direction.
# P is the "eye" point.
# A, B, and C are the points of the Triangle being tested against.
# make a shear matrix from the D vector.
def make_shear(d):
print [1, 0, -d[0]/d[2] ];
print [1, 0, -d[1]/d[2]];
print [0, 0, 1/d[2]] ;
return np.array([ [1, 0, -d[0]*1.0/d[2] ], [0, 1, -d[1]*1.0/d[2]], [0, 0, 1.0/d[2]] ]);
def draw_line(ax, A, B, c):
ax.plot([A[0], B[0]], [A[1],B[1]],[A[2],B[2]], color=c)
plt.close('all');
fig1 = plt.figure();
ax = fig1.add_subplot(111, projection='3d')
P = np.array([3, 4, 2]);
P1 = np.array([1, 5, 7]);
D = P1 - P;
M = make_shear(D);
print "D = ", D
#print D[0];
print "M = ";
print M;
Dp = M.dot(D);
A =np.array([-5, 3, 5]);
B =np.array([1, 4, 10]);
C =np.array([-5, 5, 5]);
Ap = M.dot(A-P);
Bp = M.dot(B-P);
Cp = M.dot(C-P);
U = np.dot(Dp, np.cross(Cp, Bp));
V = np.dot(Dp, np.cross(Ap, Cp));
W = np.dot(Dp, np.cross(Bp, Ap));
print "U = ", U;
print "V = ", V;
print "W = ", W;
ax = fig1.add_subplot(111, projection='3d')
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
draw_line(ax, A, B, 'g');
draw_line(ax, B, C, 'g');
draw_line(ax, C, A, 'g');
sf = 5; # scale factor, to make the direction more obvious...not sure this works.
ax.quiver(P[0], P[1], P[2], sf*D[0], sf*D[1], sf*D[2]); # head_width=0.05, head_length=0.1, fc='k', ec=ki");
b = 10;
ax.set_xlim([-b, b]);
ax.set_ylim([-b, b]);
ax.set_zlim([-b, b]);
plt.show();
</code></pre>
<ol>
<li>I want to add a single point the graph, say P, as a small red sphere.</li>
<li>I want add an arrow to the head of the vector drawn in the quiver statement.</li>
</ol>
<p>What is the most straightforward way to do these two things?</p>
| 2 | 2016-09-29T12:22:52Z | 39,771,983 | <ol>
<li><p>Add the following line right after your <code>x.quiver...</code> line.</p>
<pre><code>ax.scatter(P[0], P[1], P[2], c='r')
</code></pre></li>
<li><p>Add the following lines right after your <code>x.quiver...</code> line.</p>
<pre><code>v = [0, 1, 0]
end_v = P + v
ax.quiver(end_v[0], end_v[1], end_v[2], v[0], v[1], v[2])
</code></pre></li>
</ol>
| 2 | 2016-09-29T13:41:37Z | [
"python",
"numpy",
"matplotlib",
"vector",
"point"
]
|
Prepend each line in a string python | 39,770,268 | <p>I have a variable that holds the below value:</p>
<pre><code>From: test@example.com
To: user1@us.oracle.com
Date: Thu Sep 29 04:25:45 2016
Subject: IMAP Append Client FNBJL
MIME-version: 1.0
Content-type: text/plain; charset=UTF-8; format=flowed
hocks burdock steelworks propellants resource querying sitings biscuits lectureship
linearly crimea ghosting inelegant contingency resting fracas margate radiographic
befoul waterline stopover two everlastingly highranking doctrine unsmilingly massproducing
teacups litanies malachite pardon rarer glides nonbelievers humorously clonal tribunes
micrometer paralysing splenetic constitutionalists wavings thoughtfulness herbicide
rerolled ore overflows illicitly aerodynamics ably splittable ditching rouged bulldozer
replayed statistic reconfigured adventurers passionate rewarded decides oxygenated
</code></pre>
<p>Every line in the above string needs to be prepended with <code>X:</code> as shown below</p>
<pre><code>X: From: test@example.com
X: To: user1@us.oracle.com
X: Date: Thu Sep 29 04:25:45 2016
X: Subject: SSSSSSSSS FNBJL
X: MIME-version: 1.0
X: Content-type: text/plain
X:
X: hocks burdock steelworks propellants resource querying sitings biscuits lectureship
X: linearly crimea ghosting inelegant contingency resting fracas margate radiographic
X: befoul waterline stopover two everlastingly highranking doctrine unsmilingly massproducing
X: teacups litanies malachite pardon rarer glides nonbelievers humorously clonal tribunes
X: micrometer paralysing splenetic constitutionalists wavings thoughtfulness herbicide
X: rerolled ore overflows illicitly aerodynamics ably splittable ditching rouged bulldozer
X: replayed statistic reconfigured adventurers passionate rewarded decides oxygenated
</code></pre>
<p>I was thinking of splitting the above string on <code>\n</code> and prepending each line with <code>X:</code></p>
<p>Is there a better approach for this?</p>
| 0 | 2016-09-29T12:24:15Z | 39,771,006 | <p>There are tons of ways to achieve what you want; here are a few:</p>
<pre><code>import re
log = """From: test@example.com
To: user1@us.oracle.com
Date: Thu Sep 29 04:25:45 2016
Subject: IMAP Append Client FNBJL
MIME-version: 1.0
Content-type: text/plain; charset=UTF-8; format=flowed
hocks burdock steelworks propellants resource querying sitings biscuits lectureship
linearly crimea ghosting inelegant contingency resting fracas margate radiographic
befoul waterline stopover two everlastingly highranking doctrine unsmilingly massproducing
teacups litanies malachite pardon rarer glides nonbelievers humorously clonal tribunes
micrometer paralysing splenetic constitutionalists wavings thoughtfulness herbicide
rerolled ore overflows illicitly aerodynamics ably splittable ditching rouged bulldozer
replayed statistic reconfigured adventurers passionate rewarded decides oxygenated"""
def f1(text):
return "X: " + text.replace("\n", "\nX: ")
def f2(text):
return "X: " + re.sub('\n', '\nX: ', text)
def f3(text):
return "\n".join(["X: {0}".format(l) for l in text.split("\n")])
if __name__ == "__main__":
print(log)
for f in [f1, f2, f3]:
print('-' * 80)
print(f(log))
</code></pre>
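<p>There is also a standard-library helper for exactly this: <code>textwrap.indent</code> prepends a prefix to the lines of a string. By default it skips whitespace-only lines, so pass a <code>predicate</code> if the blank line between the headers and the body should get the prefix too:</p>

```python
import textwrap

text = "From: test@example.com\nTo: user1@us.oracle.com\n\nbody line"
# predicate=lambda line: True prefixes every line, including blank ones
print(textwrap.indent(text, "X: ", predicate=lambda line: True))
```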
| 1 | 2016-09-29T12:57:41Z | [
"python",
"python-2.7"
]
|
Is there an operator form of map in python? | 39,770,352 | <p>In Mathematica it is possible to write Map[f,list] as f/@list where /@ is the operator form of Map. In python there is map(f, list) but is there a similar operator form or a package providing this?</p>
<p>The application is that deeply nested transformations using many maps end up with a lot of brackets whereas operator chaining can be simpler to read (and type).</p>
| 1 | 2016-09-29T12:28:07Z | 39,770,551 | <p>There is no easy way to do that. Python doesn't provide any way to define custom operators, and the set of operators it provides is pretty standard and mostly used for things like numbers and strings. The <code>map</code> object doesn't support anything like that, but nothing prevents you from writing your own class:</p>
<pre><code>class Mapper:
def __init__(self, f):
self.f = f
def __matmul__(self, other):
return map(self.f, other)
</code></pre>
<p>Used as:</p>
<pre><code>In [3]: list(Mapper(lambda x: x+1) @ [1,2,3,4,5])
Out[3]: [2, 3, 4, 5, 6]
</code></pre>
<p>You can similarly introduce a <code>Filter</code> class:</p>
<pre><code>class Filter:
def __init__(self, p):
self.p = p
def __matmul__(self, other):
return filter(self.p, other)
</code></pre>
<p>Used as:</p>
<pre><code>In [5]: list(Filter(lambda x: x%2==0) @ range(10))
Out[5]: [0, 2, 4, 6, 8]
</code></pre>
<p>And in fact you can see that these classes are almost identical, so you could generalize them.</p>
<hr>
<p>Note: <code>@</code> as an operator is new to python3.5.</p>
<hr>
<p>One problem with using this is that <code>@</code> is <em>left</em> associative, which means you aren't able to compose these functions. You can use something like <code>**</code>, which is right-associative, to compose them easily:</p>
<pre><code>class Filter:
def __init__(self, p):
self.p = p
def __pow__(self, other):
return filter(self.p, other)
class Mapper:
def __init__(self, f):
self.f = f
def __pow__(self, other):
return map(self.f, other)
</code></pre>
<p>Which allows:</p>
<pre><code>In [13]: Filter(lambda x: x%2==0) ** Mapper(lambda x: x+1) ** range(10)
Out[13]: <filter at 0x7fe0696bcd68>
</code></pre>
<hr>
<p>For completeness: here's an implementation that generalizes this concept and also works with <code>@</code> by combining the transformations:</p>
<pre><code>class Apply:
def __init__(self, f):
self.f = f
def __matmul__(self, seq_or_apply):
if isinstance(seq_or_apply, Apply):
return Apply(lambda seq: self.f(seq_or_apply.f(seq)))
return self.f(seq_or_apply)
class Mapper(Apply):
def __init__(self, f):
super().__init__(lambda x: map(f, x))
class Filter(Apply):
def __init__(self, p):
super().__init__(lambda x: filter(p, x))
from functools import reduce
class Reduce(Apply):
def __init__(self, op, init):
super().__init__(lambda seq: reduce(op, seq, init))
</code></pre>
<p>Used as:</p>
<pre><code>In [26]: import operator as op
In [27]: Reduce(op.add, -7) @ Filter(lambda x: x%2==0) @ Mapper(lambda x: x+1) @ range(10)
Out[27]: 23
</code></pre>
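<p>If the goal is mainly to avoid deeply nested brackets rather than to get true operator syntax, a plain <code>pipe</code> helper built on <code>functools.reduce</code> gets you most of the way without any classes — a sketch:</p>

```python
from functools import reduce

def pipe(value, *funcs):
    # Thread value through funcs from left to right
    return reduce(lambda acc, f: f(acc), funcs, value)

result = pipe(range(10),
              lambda seq: map(lambda x: x + 1, seq),
              lambda seq: filter(lambda x: x % 2 == 0, seq),
              list)
print(result)  # [2, 4, 6, 8, 10]
```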
| 2 | 2016-09-29T12:37:35Z | [
"python",
"functional-programming"
]
|
Scikit-learn, get accuracy scores for each class | 39,770,376 | <p>Is there a built-in way for getting accuracy scores for each class separately? I know in sklearn we can get overall accuracy by using <code>metrics.accuracy_score</code>. Is there a way to get the breakdown of accuracy scores for individual classes? Something similar to <code>metrics.classification_report</code>.</p>
<pre><code>from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
</code></pre>
<p><code>classification_report</code> does not give accuracy scores:</p>
<pre><code>print(classification_report(y_true, y_pred, target_names=target_names, digits=4))
Out[9]: precision recall f1-score support
class 0 0.5000 1.0000 0.6667 1
class 1 0.0000 0.0000 0.0000 1
class 2 1.0000 0.6667 0.8000 3
avg / total 0.7000 0.6000 0.6133 5
</code></pre>
<p>Accuracy score gives only the overall accuracy:</p>
<pre><code>accuracy_score(y_true, y_pred)
Out[10]: 0.59999999999999998
</code></pre>
| 1 | 2016-09-29T12:29:03Z | 39,770,819 | <p>You can code it yourself: the accuracy is nothing more than the ratio between the well-classified samples (true positives and true negatives) and the total number of samples you have.</p>
<p>Then, for a given class, instead of considering all the samples, you only take into account those of your class.</p>
<p>You can then try this:
Let's first define a handy function.</p>
<pre><code>def indices(l, val):
retval = []
last = 0
while val in l[last:]:
i = l[last:].index(val)
retval.append(last + i)
last += i + 1
return retval
</code></pre>
<p>The function above will return the indices in the list <strong>l</strong> of a certain value <strong>val</strong></p>
<pre><code>def class_accuracy(y_pred, y_true, cls):
    # 'class' is a reserved word in Python, so the argument is named 'cls'
    index = indices(y_true, cls)
    y_pred = [y_pred[k] for k in index]
    y_true = [y_true[k] for k in index]
    tp = sum(1 for k in range(len(y_pred)) if y_true[k] == y_pred[k])
    return tp / float(len(y_pred))
</code></pre>
<p>The last function will return the in-class accuracy that you look for.</p>
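<p>For the example in the question, the whole per-class breakdown can also be computed in a few lines with <code>collections.Counter</code> (here "per-class accuracy" means the fraction of samples of each true class that were predicted correctly, i.e. per-class recall):</p>

```python
from collections import Counter

def per_class_accuracy(y_true, y_pred):
    totals = Counter(y_true)                                     # samples per true class
    hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)  # correct per class
    return {cls: hits[cls] / float(totals[cls]) for cls in totals}

print(per_class_accuracy([0, 1, 2, 2, 2], [0, 0, 2, 2, 1]))
# e.g. class 0 -> 1.0, class 1 -> 0.0, class 2 -> 0.666...
```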
| 2 | 2016-09-29T12:49:57Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
Import Error for python pg module | 39,770,527 | <p>I am having trouble using the pg module in my code. I've installed it using pip. But when I go to run it I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "Contract_gen.py", line 2, in <module>
import pg
File "C:\Python27\lib\site-packages\pg\__init__.py", line 1, in <module>
from .core import (
File "C:\Python27\lib\site-packages\pg\core.py", line 6, in <module>
from . import glfw
File "C:\Python27\lib\site-packages\pg\glfw.py", line 140, in <module>
raise ImportError("Failed to load GLFW3 shared library.")
ImportError: Failed to load GLFW3 shared library.
</code></pre>
| -1 | 2016-09-29T12:36:16Z | 39,771,237 | <p>It seems that the <code>pg</code> module requires the <code>GLFW3</code> library. Download and install it and the error should be gone. If you use macOS you can get it via <code>brew</code>.</p>
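<p>If you are not sure whether the library is already present, a quick check with the standard library's <code>ctypes.util.find_library</code> will tell you whether a GLFW shared library is discoverable on the current system (a diagnostic sketch, not a fix):</p>

```python
import ctypes.util

# Returns a library name/path string if the loader can find it, else None
path = ctypes.util.find_library("glfw") or ctypes.util.find_library("glfw3")
print("GLFW shared library:", path)
```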
| 0 | 2016-09-29T13:09:25Z | [
"python",
"module",
"pg"
]
|
Numpy 2D array indexing without out of bound and with clipping value | 39,770,863 | <p>I have indices array </p>
<pre><code>a = np.array([
[0, 0],
[1, 1],
[1, 9]
])
</code></pre>
<p>And 2D array</p>
<pre><code>b = np.array([
[0, 1, 2, 3],
[5, 6, 7, 8]
])
</code></pre>
<p>I can do this one</p>
<pre><code>b[a[:, 0], a[:, 1]]
</code></pre>
<p>But it'll raise an '<strong>out of bounds</strong>' exception, because 9 is out of range.
I need a <em>very fast way</em> to slice the array by indices, and it would be ideal if I could set a clip value, e.g.:</p>
<pre><code>np.indexing_with_clipping(array=b, indices=a, clipping_value=0)
> array([0, 6, --> 0 = clipped value <--])
</code></pre>
| 3 | 2016-09-29T12:51:43Z | 39,771,165 | <p>Here's an approach -</p>
<pre><code>def indexing_with_clipping(arr, indices, clipping_value=0):
idx = np.where(indices < arr.shape,indices,clipping_value)
return arr[idx[:, 0], idx[:, 1]]
</code></pre>
<p>Sample runs -</p>
<pre><code>In [266]: arr
Out[266]:
array([[0, 1, 2, 3],
[5, 6, 7, 8]])
In [267]: indices
Out[267]:
array([[0, 0],
[1, 1],
[1, 9]])
In [268]: indexing_with_clipping(arr,indices,clipping_value=0)
Out[268]: array([0, 6, 5])
In [269]: indexing_with_clipping(arr,indices,clipping_value=1)
Out[269]: array([0, 6, 6])
In [270]: indexing_with_clipping(arr,indices,clipping_value=2)
Out[270]: array([0, 6, 7])
In [271]: indexing_with_clipping(arr,indices,clipping_value=3)
Out[271]: array([0, 6, 8])
</code></pre>
<hr>
<p>With focus on memory and performance efficiency, here's an approach that modifies the indices within the function -</p>
<pre><code>def indexing_with_clipping_v2(arr, indices, clipping_value=0):
indices[indices >= arr.shape] = clipping_value
return arr[indices[:, 0], indices[:, 1]]
</code></pre>
<p>Sample run -</p>
<pre><code>In [307]: arr
Out[307]:
array([[0, 1, 2, 3],
[5, 6, 7, 8]])
In [308]: indices
Out[308]:
array([[0, 0],
[1, 1],
[1, 9]])
In [309]: indexing_with_clipping_v2(arr,indices,clipping_value=2)
Out[309]: array([0, 6, 7])
</code></pre>
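<p>If clamping out-of-range indices to the nearest valid edge (rather than substituting a fixed <code>clipping_value</code>) is acceptable for your use case, <code>np.clip</code> gives an even shorter variant:</p>

```python
import numpy as np

def indexing_with_edge_clip(arr, indices):
    # Clamp every index into [0, dim - 1] along its axis
    idx = np.clip(indices, 0, np.array(arr.shape) - 1)
    return arr[idx[:, 0], idx[:, 1]]

b = np.array([[0, 1, 2, 3], [5, 6, 7, 8]])
a = np.array([[0, 0], [1, 1], [1, 9]])
print(indexing_with_edge_clip(b, a))  # [0 6 8]
```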
| 1 | 2016-09-29T13:05:25Z | [
"python",
"arrays",
"numpy",
"multidimensional-array"
]
|
Numpy 2D array indexing without out of bound and with clipping value | 39,770,863 | <p>I have indices array </p>
<pre><code>a = np.array([
[0, 0],
[1, 1],
[1, 9]
])
</code></pre>
<p>And 2D array</p>
<pre><code>b = np.array([
[0, 1, 2, 3],
[5, 6, 7, 8]
])
</code></pre>
<p>I can do this one</p>
<pre><code>b[a[:, 0], a[:, 1]]
</code></pre>
<p>But it'll raise an '<strong>out of bounds</strong>' exception, because 9 is out of range.
I need a <em>very fast way</em> to slice the array by indices, and it would be ideal if I could set a clip value, e.g.:</p>
<pre><code>np.indexing_with_clipping(array=b, indices=a, clipping_value=0)
> array([0, 6, --> 0 = clipped value <--])
</code></pre>
| 3 | 2016-09-29T12:51:43Z | 39,771,262 | <p>You can use list comprehension:</p>
<pre><code>b[
    [min(x, b.shape[0]-1) for x in a[:, 0]],
    [min(y, b.shape[1]-1) for y in a[:, 1]]
]
</code></pre>
<p><strong>edit</strong> I used the last valid index as your clipping value, but you can replace the <code>min()</code> function with whatever you want (e.g. a ternary expression)</p>
<p><strong>edit2</strong> OK, based on clarification in comments and all the python-fu that I could put together, this snippet finally does what you need:</p>
<pre><code>clipping_value = -1
tmp=np.append(b,[[clipping_value],[clipping_value]],axis=1)
tmp[zip(*[((x,y) if (x<b.shape[0] and y<b.shape[1]) else (0,b.shape[1])) for (x,y) in zip(a.transpose()[0],a.transpose()[1])])]
</code></pre>
<p>It is the same as above; it just creates the ndarray <code>tmp</code>, which is a copy of <code>b</code> but contains the <code>clipping_value</code> as its last element, and then uses my previous solution to set indices so that they point to the last element if either of the indices is bigger than the dimensions of <code>b</code>.</p>
<p>I learned that there is reverse to the <code>zip</code> function and that the numpy arrays accept lists as indices. It was fun. Thanks.</p>
| 1 | 2016-09-29T13:10:10Z | [
"python",
"arrays",
"numpy",
"multidimensional-array"
]
|
Can the runtime of calling Python from Excel (via xlwings RunPython) be improved? | 39,770,894 | <p>I have written a Python script which uses the xlwings library, in order to get the function inputs from a particular spreadsheet. The idea is for me to run this code from Excel (having imported the xlwings module as a VBA module). I can either run it like this (i.e. from Excel, using the RunPython command in VBA) or I can set a Mock Caller when running the script in PyCharm, which makes it easier for me to enter the function inputs.</p>
<p>This set-up works fine, and when running the script in PyCharm it takes about 2 seconds on average. However when I run it from Excel, it takes about 30 seconds on average (given the same set of data inputs both times).</p>
<p>So it looks like the time it takes Excel to call Python is relatively significant - does anyone have any advice on how to speed this up? Is this roughly how long it's meant to take to run python from Excel?</p>
<p>Thanks in advance</p>
| 0 | 2016-09-29T12:52:43Z | 39,773,115 | <p>Currently, the default behaviour of xlwings is to fire up a new interpreter session every time you call <code>RunPython</code>. This naturally comes with an overhead, although on modern systems it's usually just an additional 2-3 seconds rather than what you see.<br>
On Windows you can, however, switch <code>OPTIMIZED_CONNECTION = True</code> in the VBA settings, which will then use a COM server that keeps running in between calls; read more about it in the <a href="http://docs.xlwings.org/en/stable/vba.html#settings" rel="nofollow">docs</a>.</p>
| 0 | 2016-09-29T14:29:46Z | [
"python",
"excel",
"vba",
"pycharm",
"xlwings"
]
|
Python Boolean comparison | 39,771,027 | <p>I'm testing Python's Boolean expressions. When I run the following code:</p>
<pre><code>x = 3
print type(x)
print (x is int)
print (x is not int)
</code></pre>
<p>I get the following results:</p>
<pre><code><type 'int'>
False
True
</code></pre>
<p>Why is (x is int) returning false and (x is not int) returning true when clearly x is an integer type?</p>
| 1 | 2016-09-29T12:58:46Z | 39,771,204 | <p>The best way to do this is to use <code>isinstance()</code></p>
<p>so in your case:</p>
<pre><code>x = 3
print isinstance(x, int)
</code></pre>
<p>Regarding python <code>is</code></p>
<blockquote>
<p>The operators <code>is</code> and <code>is not</code> test for object identity: x is y is true
if and only if x and y are the same object.</p>
</blockquote>
<p>Taken from <a href="https://docs.python.org/2/reference/expressions.html#not-in" rel="nofollow">docs</a></p>
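<p>One subtlety worth knowing: <code>isinstance</code> also accepts subclasses, and <code>bool</code> is a subclass of <code>int</code>. A small sketch of the difference:</p>

```python
x = 3
print(isinstance(x, int))     # True
print(isinstance(True, int))  # True:  bool is a subclass of int
print(type(True) is int)      # False: type() ignores subclassing
```

<p>If you need to reject booleans, check <code>type(x) is int</code> explicitly.</p>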
| 2 | 2016-09-29T13:07:45Z | [
"python",
"boolean",
"boolean-logic",
"boolean-expression"
]
|
Python Boolean comparison | 39,771,027 | <p>I'm testing Python's Boolean expressions. When I run the following code:</p>
<pre><code>x = 3
print type(x)
print (x is int)
print (x is not int)
</code></pre>
<p>I get the following results:</p>
<pre><code><type 'int'>
False
True
</code></pre>
<p>Why is (x is int) returning false and (x is not int) returning true when clearly x is an integer type?</p>
| 1 | 2016-09-29T12:58:46Z | 39,771,338 | <p>If you want to use <code>is</code> you should do:</p>
<pre><code>>>> print (type(x) is int)
True
</code></pre>
| 0 | 2016-09-29T13:13:40Z | [
"python",
"boolean",
"boolean-logic",
"boolean-expression"
]
|
Python Boolean comparison | 39,771,027 | <p>I'm testing Python's Boolean expressions. When I run the following code:</p>
<pre><code>x = 3
print type(x)
print (x is int)
print (x is not int)
</code></pre>
<p>I get the following results:</p>
<pre><code><type 'int'>
False
True
</code></pre>
<p>Why is (x is int) returning false and (x is not int) returning true when clearly x is an integer type?</p>
| 1 | 2016-09-29T12:58:46Z | 39,771,421 | <p>Try typing these into your interpreter: </p>
<pre><code>type(x)
int
x is 3
x is not 3
type(x) is int
type(x) is not int
</code></pre>
<p>The reason that <code>x is int</code> is false is that it is asking if the number <code>3</code> and the Python int class represent the same object. It should be fairly clear that this is false. </p>
<p>As a side note, Python's <code>is</code> keyword can work in some unexpected ways if you don't know exactly what it is doing, and you should almost certainly avoid it whenever you are testing equality. That being said, experimenting with it outside of your actual program is a very good idea.</p>
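<p>A small deterministic example of the difference between equality and identity to experiment with:</p>

```python
x = [1, 2]
y = [1, 2]
z = x
print(x == y)  # True:  equal values
print(x is y)  # False: two distinct list objects
print(x is z)  # True:  two names for the same object
```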
| 1 | 2016-09-29T13:17:33Z | [
"python",
"boolean",
"boolean-logic",
"boolean-expression"
]
|
Why is the adjoint of a matrix in numpy obtained by np.matrix.getH() | 39,771,056 | <p>when I want to get the adjoint of a numpy array, I have to type</p>
<pre><code>A = np.matrix([...])
A.getH()
</code></pre>
<p>I am curious about the naming. Why is it</p>
<pre><code>np.matrix.getH()?
</code></pre>
<p>In contrast, transpose and conjugate are implemented as</p>
<pre><code>ndarray.transpose()
ndarray.conjugate()
</code></pre>
| 1 | 2016-09-29T13:00:01Z | 39,771,380 | <p>The <strong>Hermitian transpose</strong> <em>A*</em> of a matrix <em>A</em> with complex entries, i.e. the transpose of its complex conjugate, is exactly what is meant by the adjoint matrix.</p>
<p>Long story short, <strong>getH</strong> smells like <em>get Hermitian transpose</em>.</p>
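<p>A quick check (a sketch, assuming numpy is installed) that <code>getH</code> is just the conjugate transpose:</p>

```python
import numpy as np

A = np.matrix([[1 + 2j, 3], [4j, 5]])
H1 = A.getH()    # Hermitian (conjugate) transpose
H2 = A.H         # property shorthand for getH()
H3 = A.conj().T  # conjugate, then transpose
print(np.array_equal(H1, H2) and np.array_equal(H1, H3))  # True
```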
| 3 | 2016-09-29T13:15:31Z | [
"python",
"numpy"
]
|
Top-k on a list of dict in python | 39,771,064 | <p>Is there an easy way to perform the max k number of key:values pair in this example </p>
<pre><code>s1 = {'val' : 0}
s2 = {'val': 10}
s3 = {'val': 5}
s4 = {'val' : 4}
s5 = {'val' : 6}
s6 = {'val' : 7}
s7 = {'val' : 3}
shapelets = [s1,s2,s3,s4,s5,s6,s7]
</code></pre>
<p>I want to get the max 5 numbers in the shapelets list, knowing that each dict contains a key named "val" to which a value is assigned.
The solution resides in iterating through the list of dict elements and getting the max n numbers (in this case the max 5 values).</p>
<p>What would be a simple solution? Does the operator library in Python support such an operation?</p>
| 2 | 2016-09-29T13:00:24Z | 39,771,145 | <p>Here's a working example:</p>
<pre><code>s1 = {'val': 0}
s2 = {'val': 10}
s3 = {'val': 5}
s4 = {'val': 4}
s5 = {'val': 6}
s6 = {'val': 7}
s7 = {'val': 3}
shapelets = [s1, s2, s3, s4, s5, s6, s7]
print(sorted(shapelets, key=lambda x: x['val'])[-5:])
</code></pre>
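<p>If the five largest are wanted largest-first, the same idea with <code>reverse=True</code> and a head slice works too (an equivalent variant, not from the original answer):</p>

```python
shapelets = [{'val': v} for v in (0, 10, 5, 4, 6, 7, 3)]
top5 = sorted(shapelets, key=lambda d: d['val'], reverse=True)[:5]
print([d['val'] for d in top5])  # [10, 7, 6, 5, 4]
```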
| 2 | 2016-09-29T13:04:32Z | [
"python",
"list",
"dictionary"
]
|
Top-k on a list of dict in python | 39,771,064 | <p>Is there an easy way to perform the max k number of key:values pair in this example </p>
<pre><code>s1 = {'val' : 0}
s2 = {'val': 10}
s3 = {'val': 5}
s4 = {'val' : 4}
s5 = {'val' : 6}
s6 = {'val' : 7}
s7 = {'val' : 3}
shapelets = [s1,s2,s3,s4,s5,s6,s7]
</code></pre>
<p>I want to get the max 5 numbers in the shapelets list, knowing that each dict contains a key named "val" to which a value is assigned.
The solution resides in iterating through the list of dict elements and getting the max n numbers (in this case the max 5 values).</p>
<p>What would be a simple solution? Does the operator library in Python support such an operation?</p>
| 2 | 2016-09-29T13:00:24Z | 39,771,309 | <p>You can use <a href="https://docs.python.org/3.0/library/heapq.html" rel="nofollow"><code>heapq</code></a>:</p>
<pre><code>import heapq
s1 = {'val': 0}
s2 = {'val': 10}
s3 = {'val': 5}
s4 = {'val': 4}
s5 = {'val': 6}
s6 = {'val': 7}
s7 = {'val': 3}
shapelets = [s1, s2, s3, s4, s5, s6, s7]
heapq.nlargest(5,[dct['val'] for dct in shapelets])
# [10, 7, 6, 5, 4]
</code></pre>
<p><code>heapq</code> is likely to be faster than <code>sorted</code> for large <code>lists</code> if you only want a few of the largest values. Some discussions of <code>heapq</code> vs. <code>sorted</code> are <a href="http://stackoverflow.com/questions/24666602/python-heapq-vs-sorted-complexity-and-performance">here</a>.</p>
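<p>Note that <code>nlargest</code> also takes a <code>key</code> argument, so you can get the dicts themselves back rather than just the values (a small sketch):</p>

```python
import heapq

shapelets = [{'val': v} for v in (0, 10, 5, 4, 6, 7, 3)]
top5 = heapq.nlargest(5, shapelets, key=lambda d: d['val'])
print([d['val'] for d in top5])  # [10, 7, 6, 5, 4]
```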
| 1 | 2016-09-29T13:12:35Z | [
"python",
"list",
"dictionary"
]
|
Top-k on a list of dict in python | 39,771,064 | <p>Is there an easy way to perform the max k number of key:values pair in this example </p>
<pre><code>s1 = {'val' : 0}
s2 = {'val': 10}
s3 = {'val': 5}
s4 = {'val' : 4}
s5 = {'val' : 6}
s6 = {'val' : 7}
s7 = {'val' : 3}
shapelets = [s1,s2,s3,s4,s5,s6,s7]
</code></pre>
<p>I want to get the max 5 numbers in the shapelets list, knowing that each dict contains a key named "val" to which a value is assigned.
The solution resides in iterating through the list of dict elements and getting the max n numbers (in this case the max 5 values).</p>
<p>What would be a simple solution? Does the operator library in Python support such an operation?</p>
| 2 | 2016-09-29T13:00:24Z | 39,771,876 | <p>You could do it in linear time using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argpartition.html" rel="nofollow">numpy.argpartition</a>:</p>
<pre><code>from operator import itemgetter
import numpy as np
arr = np.array(list(map(itemgetter("val"), shapelets)))
print(arr[np.argpartition(arr, -5)][-5:])
</code></pre>
<p>The 5 max values will not necessarily be in order, if you want that then you would need to sort the k elements returned.</p>
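<p>Sorting just those k elements afterwards keeps the overall cost at O(n + k log k) (a sketch of that follow-up step):</p>

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7, 8])
top5_unordered = arr[np.argpartition(arr, -5)[-5:]]  # 5 largest, any order
top5 = np.sort(top5_unordered)[::-1]                 # now largest first
print(top5)  # [8 7 6 5 4]
```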
| 1 | 2016-09-29T13:37:08Z | [
"python",
"list",
"dictionary"
]
|
Python gradient descent - cost keeps increasing | 39,771,075 | <p>I'm trying to implement gradient descent in python and my loss/cost keeps increasing with every iteration.</p>
<p>I've seen a few people post about this, and saw an answer here: <a href="http://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy">gradient descent using python and numpy</a></p>
<p>I believe my implementation is similar, but cant see what I'm doing wrong to get an exploding cost value:</p>
<pre><code>Iteration: 1 | Cost: 697361.660000
Iteration: 2 | Cost: 42325117406694536.000000
Iteration: 3 | Cost: 2582619233752172973298548736.000000
Iteration: 4 | Cost: 157587870187822131053636619678439702528.000000
Iteration: 5 | Cost: 9615794890267613993157742129590663647488278265856.000000
</code></pre>
<p>I'm testing this on a dataset I found online (LA Heart Data): <a href="http://www.umass.edu/statdata/statdata/stat-corr.html" rel="nofollow">http://www.umass.edu/statdata/statdata/stat-corr.html</a></p>
<p>Import code:</p>
<pre><code>dataset = np.genfromtxt('heart.csv', delimiter=",")
x = dataset[:]
x = np.insert(x,0,1,axis=1) # Add 1's for bias
y = dataset[:,6]
y = np.reshape(y, (y.shape[0],1))
</code></pre>
<p>Gradient descent:</p>
<pre><code>def gradientDescent(weights, X, Y, iterations = 1000, alpha = 0.01):
theta = weights
m = Y.shape[0]
cost_history = []
for i in xrange(iterations):
residuals, cost = calculateCost(theta, X, Y)
gradient = (float(1)/m) * np.dot(residuals.T, X).T
theta = theta - (alpha * gradient)
# Store the cost for this iteration
cost_history.append(cost)
print "Iteration: %d | Cost: %f" % (i+1, cost)
</code></pre>
<p>Calculate cost:</p>
<pre><code>def calculateCost(weights, X, Y):
m = Y.shape[0]
residuals = h(weights, X) - Y
squared_error = np.dot(residuals.T, residuals)
return residuals, float(1)/(2*m) * squared_error
</code></pre>
<p>Calculate hypothesis:</p>
<pre><code>def h(weights, X):
return np.dot(X, weights)
</code></pre>
<p>To actually run it:</p>
<pre><code>gradientDescent(np.ones((x.shape[1],1)), x, y, 5)
</code></pre>
| 2 | 2016-09-29T13:00:51Z | 39,771,368 | <p>Assuming that your derivation of the gradient is correct, you are using: <code>=-</code> and you should be using: <code>-=</code>. Instead of updating <code>theta</code>, you are reassigning it to <code>- (alpha * gradient)</code></p>
<p>EDIT (after the above issue was fixed in the code):</p>
<p>I ran the code on what I believe is the right dataset and was able to get the cost to behave by setting <code>alpha=1e-7</code>. If you run it for <code>1e6</code> iterations you should see it converge. This approach appears very sensitive to the learning rate on this dataset. </p>
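<p>An aside that often helps with this kind of learning-rate sensitivity (not part of the fix above): standardize each feature column before running gradient descent. A minimal sketch with made-up data:</p>

```python
import numpy as np

# Made-up feature matrix (not the heart dataset); the bias column of ones
# added in the question should be excluded from this scaling.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # [1. 1.]
```

<p>With every column on the same scale, a single learning rate much larger than 1e-7 usually converges.</p>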
| 2 | 2016-09-29T13:14:53Z | [
"python",
"numpy",
"machine-learning",
"regression",
"gradient-descent"
]
|
Python gradient descent - cost keeps increasing | 39,771,075 | <p>I'm trying to implement gradient descent in python and my loss/cost keeps increasing with every iteration.</p>
<p>I've seen a few people post about this, and saw an answer here: <a href="http://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy">gradient descent using python and numpy</a></p>
<p>I believe my implementation is similar, but cant see what I'm doing wrong to get an exploding cost value:</p>
<pre><code>Iteration: 1 | Cost: 697361.660000
Iteration: 2 | Cost: 42325117406694536.000000
Iteration: 3 | Cost: 2582619233752172973298548736.000000
Iteration: 4 | Cost: 157587870187822131053636619678439702528.000000
Iteration: 5 | Cost: 9615794890267613993157742129590663647488278265856.000000
</code></pre>
<p>I'm testing this on a dataset I found online (LA Heart Data): <a href="http://www.umass.edu/statdata/statdata/stat-corr.html" rel="nofollow">http://www.umass.edu/statdata/statdata/stat-corr.html</a></p>
<p>Import code:</p>
<pre><code>dataset = np.genfromtxt('heart.csv', delimiter=",")
x = dataset[:]
x = np.insert(x,0,1,axis=1) # Add 1's for bias
y = dataset[:,6]
y = np.reshape(y, (y.shape[0],1))
</code></pre>
<p>Gradient descent:</p>
<pre><code>def gradientDescent(weights, X, Y, iterations = 1000, alpha = 0.01):
theta = weights
m = Y.shape[0]
cost_history = []
for i in xrange(iterations):
residuals, cost = calculateCost(theta, X, Y)
gradient = (float(1)/m) * np.dot(residuals.T, X).T
theta = theta - (alpha * gradient)
# Store the cost for this iteration
cost_history.append(cost)
print "Iteration: %d | Cost: %f" % (i+1, cost)
</code></pre>
<p>Calculate cost:</p>
<pre><code>def calculateCost(weights, X, Y):
m = Y.shape[0]
residuals = h(weights, X) - Y
squared_error = np.dot(residuals.T, residuals)
return residuals, float(1)/(2*m) * squared_error
</code></pre>
<p>Calculate hypothesis:</p>
<pre><code>def h(weights, X):
return np.dot(X, weights)
</code></pre>
<p>To actually run it:</p>
<pre><code>gradientDescent(np.ones((x.shape[1],1)), x, y, 5)
</code></pre>
| 2 | 2016-09-29T13:00:51Z | 39,779,431 | <p>In general, if your cost is increasing, the very first thing to check is whether your learning rate is too large. In such cases, the rate causes the cost function to jump over the optimal value and increase upwards towards infinity. Try different small values of your learning rate. When I face the problem that you describe, I usually repeatedly try 1/10 of the learning rate until I can find a rate where J(w) decreases.</p>
<p>Another problem might be a bug in your derivative implementation. A good way to debug is to do a gradient check to compare the analytic gradient versus the numeric gradient.</p>
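<p>A minimal sketch of such a gradient check (central differences; the toy function here is an assumption for illustration, not the asker's cost):</p>

```python
import numpy as np

def numeric_gradient(f, w, eps=1e-6):
    """Central-difference approximation of the gradient of f at w."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        step = np.zeros_like(w)
        step.flat[i] = eps
        grad.flat[i] = (f(w + step) - f(w - step)) / (2 * eps)
    return grad

# Toy check: f(w) = sum(w**2) has analytic gradient 2*w
f = lambda w: np.sum(w ** 2)
w = np.array([1.0, -2.0, 3.0])
print(np.allclose(numeric_gradient(f, w), 2 * w))  # True
```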
| 1 | 2016-09-29T20:28:59Z | [
"python",
"numpy",
"machine-learning",
"regression",
"gradient-descent"
]
|
Import Pandas Into Python | 39,771,274 | <p>I just installed Python 3.5.2. I am working in the shell/IDLE environment and attempting to import Pandas. </p>
<p>However when I write: import pandas </p>
<p>I get the following:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/bartogre/Desktop/Program1.py", line 1, in <module>
import pandas
ImportError: No module named 'pandas'
</code></pre>
<p>How do I add any module to the library Python 3.5.2 is reading? I do not want to work in Anaconda.</p>
<p>I watched this video: <a href="https://www.youtube.com/watch?v=ddpYVA-7wq4" rel="nofollow">https://www.youtube.com/watch?v=ddpYVA-7wq4</a></p>
<p>And below is my output from CMD:</p>
<pre><code>C:\Users\bartogre>
C:\Users\bartogre>cd c:\users\bartogre\desktop\pyodbc-master
c:\Users\bartogre\Desktop\pyodbc-master>python setup.py
c:\Users\bartogre\Desktop\pyodbc-master>python setup.py install
'git' is not recognized as an internal or external command,
operable program or batch file.
WARNING: git describe failed with: 1
WARNING: Unable to determine version. Using 3.0.0.0
C:\Program Files (x86)\Anaconda3\lib\site-packages\setuptools-27.2.0-py3.5.egg\s
etuptools\dist.py:340: UserWarning: The version specified ('3.0.0-unsupported')
is an invalid version, this may not work as expected with newer versions of setu
ptools, pip, and PyPI. Please see PEP 440 for more details.
running install
running bdist_egg
running egg_info
writing pyodbc.egg-info\PKG-INFO
writing dependency_links to pyodbc.egg-info\dependency_links.txt
writing top-level names to pyodbc.egg-info\top_level.txt
reading manifest file 'pyodbc.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pyodbc.egg-info\SOURCES.txt'
installing library code to build\bdist.win32\egg
running install_lib
running build_ext
building 'pyodbc' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++
Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools
</code></pre>
| 1 | 2016-09-29T13:10:47Z | 39,781,034 | <p>A bit of background: a system can have multiple Python installations. On Windows, each is a directory with python.exe and Lib/site-packages/. To use a package with a particular python.exe, you must install into the corresponding site-packages.</p>
<p>In your case, 'python' invokes 'C:\Program Files (x86)\Anaconda3\python.exe'. Do you have another python installation that you want to be working with? </p>
<p>In any case, the current standard way to install packages on Windows is with pip. The best way to run it in the console is</p>
<pre><code>some/path> <some python> -m pip install package
</code></pre>
<p>where <code><some python></code> is either <code>python</code> to invoke the default install or something else to get another install. Pip first goes to pypi.python.org to find the package. If the package contains C code, it may find an appropriate pre-built binary or try to compile locally, which requires the correct version of the Visual C++ compiler.</p>
<p>If pip does not find a pre-built binary for your install, I would do the following. For about 200 packages, unofficial binaries are available at <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a>. The site has been up, and a lifesaver for Windows users, for at least a decade, and I and many others have used it. Christoph gives instructions on how to download a file and then use pip to install it.</p>
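<p>A quick way to see which installation a given <code>python</code> command actually maps to (a generic check you can run from any interpreter):</p>

```python
import sys

print(sys.executable)          # path of the interpreter being invoked
print(sys.version.split()[0])  # its version
print(sys.prefix)              # root of that installation
```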
| 0 | 2016-09-29T22:34:08Z | [
"python",
"pandas",
"importerror",
"python-3.5"
]
|
Import Pandas Into Python | 39,771,274 | <p>I just installed Python 3.5.2. I am working in the shell/IDLE environment and attempting to import Pandas. </p>
<p>However when I write: import pandas </p>
<p>I get the following:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/bartogre/Desktop/Program1.py", line 1, in <module>
import pandas
ImportError: No module named 'pandas'
</code></pre>
<p>How do I add any module to the library Python 3.5.2 is reading? I do not want to work in Anaconda.</p>
<p>I watched this video: <a href="https://www.youtube.com/watch?v=ddpYVA-7wq4" rel="nofollow">https://www.youtube.com/watch?v=ddpYVA-7wq4</a></p>
<p>And below is my output from CMD:</p>
<pre><code>C:\Users\bartogre>
C:\Users\bartogre>cd c:\users\bartogre\desktop\pyodbc-master
c:\Users\bartogre\Desktop\pyodbc-master>python setup.py
c:\Users\bartogre\Desktop\pyodbc-master>python setup.py install
'git' is not recognized as an internal or external command,
operable program or batch file.
WARNING: git describe failed with: 1
WARNING: Unable to determine version. Using 3.0.0.0
C:\Program Files (x86)\Anaconda3\lib\site-packages\setuptools-27.2.0-py3.5.egg\s
etuptools\dist.py:340: UserWarning: The version specified ('3.0.0-unsupported')
is an invalid version, this may not work as expected with newer versions of setu
ptools, pip, and PyPI. Please see PEP 440 for more details.
running install
running bdist_egg
running egg_info
writing pyodbc.egg-info\PKG-INFO
writing dependency_links to pyodbc.egg-info\dependency_links.txt
writing top-level names to pyodbc.egg-info\top_level.txt
reading manifest file 'pyodbc.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'pyodbc.egg-info\SOURCES.txt'
installing library code to build\bdist.win32\egg
running install_lib
running build_ext
building 'pyodbc' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++
Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools
</code></pre>
| 1 | 2016-09-29T13:10:47Z | 39,812,411 | <p>This is all great feedback - I installed via conda, and I am using Spyder rather than the IDLE environment. Please see the output below from CMD.</p>
<pre><code>Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Windows\system32>conda install pyodbc
Using Anaconda Cloud api site https://api.anaconda.org
Fetching package metadata: ....
Solving package specifications: .........

Package plan for installation in environment C:\Users\Hal\Anaconda3:

The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    conda-env-2.6.0            |                0          498 B
    python-3.5.2               |                0         30.3 MB
    pyodbc-3.0.10              |           py35_1          48 KB
    ruamel_yaml-0.11.14        |           py35_0         217 KB
    conda-4.2.9                |           py35_0         428 KB
    ------------------------------------------------------------
                                           Total:        31.0 MB

The following NEW packages will be INSTALLED:

    pyodbc:      3.0.10-py35_1
    ruamel_yaml: 0.11.14-py35_0

The following packages will be UPDATED:

    conda:     4.0.5-py35_0 --> 4.2.9-py35_0
    conda-env: 2.4.5-py35_0 --> 2.6.0-0
    python:    3.5.1-4      --> 3.5.2-0

Proceed ([y]/n)? y

Fetching packages ...
conda-env-2.6. 100% |###############################| Time: 0:00:00   0.00  B/s
python-3.5.2-0 100% |###############################| Time: 0:00:20   1.56 MB/s
pyodbc-3.0.10- 100% |###############################| Time: 0:00:00 788.82 kB/s
ruamel_yaml-0. 100% |###############################| Time: 0:00:00 837.06 kB/s
conda-4.2.9-py 100% |###############################| Time: 0:00:00 969.21 kB/s
Extracting packages ...
[      COMPLETE      ]|##################################################| 100%
Unlinking packages ...
[      COMPLETE      ]|##################################################| 100%
Linking packages ...
[      COMPLETE      ]|##################################################| 100%

C:\Windows\system32>
</code></pre>
| 0 | 2016-10-01T23:24:51Z | [
"python",
"pandas",
"importerror",
"python-3.5"
]
|
how to slice and combine specific row values in a Pandas groupby? | 39,771,306 | <p>Consider the following dataframe</p>
<pre><code>df = pd.DataFrame({'group1' : ['A', 'A', 'A', 'A',
'A', 'A', 'A', 'A'],
'group2' : ['C', 'C', 'C', 'C',
'C', 'E', 'E', 'E'],
'time' : [-6,-5,-4,-3,-2,-6,-3,-4] ,
'col': [1,2,3,4,5,6,7,8]})
df
Out[36]:
col group1 group2 time
0 1 A C -6
1 2 A C -5
2 3 A C -4
3 4 A C -3
4 5 A C -2
5 6 A E -6
6 7 A E -3
7 8 A E -4
</code></pre>
<p>my objective is to create a column that contains, for each group in <code>['group1','group2']</code> the ratio of <code>col</code> evaluated at <code>time = -6</code> divided by <code>col</code> evaluated at <code>time = -4</code>. </p>
<p>That is, for group <code>['A','C']</code>, I expect this column to be equal to 1/3, and for group <code>['A','E']</code> it is 6/8. Both <code>group1</code> and <code>group2</code> take on many different values in the data.</p>
<p>How can I get that in Pandas?</p>
<p>Something like </p>
<pre><code> df.groupby(['group1','group2']).transform(lambda x: x.ix[x['time'] == -6,'col'] / x.ix[x['time'] == -4,'col'])
</code></pre>
<p>does not work.
Any ideas?</p>
<p>Thanks!</p>
| 2 | 2016-09-29T13:12:26Z | 39,771,656 | <p>You could do it without <code>groupby</code> like this:</p>
<pre><code>dfm = pd.merge(df[df.time == -4],df[df.time == -6],on=["group1","group2"])
dfm['Div'] = dfm.col_y.div(dfm.col_x)
df = pd.merge(df,dfm[['group1','group2','Div']],on=["group1","group2"])
</code></pre>
<p>Output:</p>
<pre><code> col group1 group2 time Div
0 1 A C -6 0.333333
1 2 A C -5 0.333333
2 3 A C -4 0.333333
3 4 A C -3 0.333333
4 5 A C -2 0.333333
5 6 A E -6 0.750000
6 7 A E -3 0.750000
7 8 A E -4 0.750000
</code></pre>
| 4 | 2016-09-29T13:27:08Z | [
"python",
"pandas"
]
|
how to slice and combine specific row values in a Pandas groupby? | 39,771,306 | <p>Consider the following dataframe</p>
<pre><code>df = pd.DataFrame({'group1' : ['A', 'A', 'A', 'A',
'A', 'A', 'A', 'A'],
'group2' : ['C', 'C', 'C', 'C',
'C', 'E', 'E', 'E'],
'time' : [-6,-5,-4,-3,-2,-6,-3,-4] ,
'col': [1,2,3,4,5,6,7,8]})
df
Out[36]:
col group1 group2 time
0 1 A C -6
1 2 A C -5
2 3 A C -4
3 4 A C -3
4 5 A C -2
5 6 A E -6
6 7 A E -3
7 8 A E -4
</code></pre>
<p>my objective is to create a column that contains, for each group in <code>['group1','group2']</code> the ratio of <code>col</code> evaluated at <code>time = -6</code> divided by <code>col</code> evaluated at <code>time = -4</code>. </p>
<p>That is, for group <code>['A','C']</code>, I expect this column to be equal to 1/3, and for group <code>['A','E']</code> it is 6/8. Both <code>group1</code> and <code>group2</code> take on many different values in the data.</p>
<p>How can I get that in Pandas?</p>
<p>Something like </p>
<pre><code> df.groupby(['group1','group2']).transform(lambda x: x.ix[x['time'] == -6,'col'] / x.ix[x['time'] == -4,'col'])
</code></pre>
<p>does not work.
Any ideas?</p>
<p>Thanks!</p>
| 2 | 2016-09-29T13:12:26Z | 39,772,563 | <p>Your solution as a ridiculously long list comprehension (the most pythonic way, btw). Also, your question makes sense; note that the ratio for group A,C works out to 1/3 (col is 1 at time -6 and 3 at time -4).</p>
<pre><code>summary = [(name,group[group.time == -6].col.values[0],group[group.time == -4].col.values[0]) for name,group in df.groupby(['group1','group2'])]
pd.DataFrame(summary, columns=['group', 'numerator', 'denominator'])
</code></pre>
| 1 | 2016-09-29T14:04:58Z | [
"python",
"pandas"
]
|
how to slice and combine specific row values in a Pandas groupby? | 39,771,306 | <p>Consider the following dataframe</p>
<pre><code>df = pd.DataFrame({'group1' : ['A', 'A', 'A', 'A',
'A', 'A', 'A', 'A'],
'group2' : ['C', 'C', 'C', 'C',
'C', 'E', 'E', 'E'],
'time' : [-6,-5,-4,-3,-2,-6,-3,-4] ,
'col': [1,2,3,4,5,6,7,8]})
df
Out[36]:
col group1 group2 time
0 1 A C -6
1 2 A C -5
2 3 A C -4
3 4 A C -3
4 5 A C -2
5 6 A E -6
6 7 A E -3
7 8 A E -4
</code></pre>
<p>my objective is to create a column that contains, for each group in <code>['group1','group2']</code> the ratio of <code>col</code> evaluated at <code>time = -6</code> divided by <code>col</code> evaluated at <code>time = -4</code>. </p>
<p>That is, for group <code>['A','C']</code>, I expect this column to be equal to 1/3, and for group <code>['A','E']</code> it is 6/8. Both <code>group1</code> and <code>group2</code> take on many different values in the data.</p>
<p>How can I get that in Pandas?</p>
<p>Something like </p>
<pre><code> df.groupby(['group1','group2']).transform(lambda x: x.ix[x['time'] == -6,'col'] / x.ix[x['time'] == -4,'col'])
</code></pre>
<p>does not work.
Any ideas?</p>
<p>Thanks!</p>
| 2 | 2016-09-29T13:12:26Z | 39,772,974 | <p>Another way using <code>groupby</code> with a custom function:</p>
<pre><code>def time_selection(row):
N_r = row.loc[row['time'] == -6, 'col'].squeeze()
D_r = row.loc[row['time'] == -4, 'col'].squeeze()
return (N_r/D_r)
pd.merge(df, df.groupby(['group1','group2']).apply(time_selection).reset_index(name='div'))
</code></pre>
<p><a href="http://i.stack.imgur.com/C8nO6.png" rel="nofollow"><img src="http://i.stack.imgur.com/C8nO6.png" alt="Image"></a></p>
| 1 | 2016-09-29T14:23:34Z | [
"python",
"pandas"
]
|
confused about the readline() return in Python | 39,771,366 | <p>I am a beginner in python. However, I have some problems when I try to use the readline() method.</p>
<pre><code>f=raw_input("filename> ")
a=open(f)
print a.read()
print a.readline()
print a.readline()
print a.readline()
</code></pre>
<p>and my txt file is </p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>However, when I tried to run it on a Mac terminal, I got this:</p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>It seems that readline() is not working at all.
But when I disable print a.read(), the readline() gets back to work.</p>
<p>This confuses me a lot. Is there any solution where I can use read() and readline() at the same time? </p>
| 1 | 2016-09-29T13:14:49Z | 39,771,467 | <p>When you open a file you get a pointer to some place in the file (by default: the beginning). Now whenever you run <code>.read()</code> or <code>.readline()</code> this pointer moves:</p>
<ol>
<li><code>.read()</code> reads until the end of the file and moves the pointer to the end (thus further calls to any reading give nothing)</li>
<li><code>.readline()</code> reads until newline is seen and sets the pointer after it</li>
<li><code>.read(X)</code> reads X bytes and sets the pointer at <code>CURRENT_LOCATION + X</code> (or the end)</li>
</ol>
<p>If you wish you can manually move that pointer by issuing <code>a.seek(X)</code> call where <code>X</code> is a place in file (seen as an array of bytes). For example this should give you the desired output:</p>
<pre><code>print a.read()
a.seek(0)
print a.readline()
print a.readline()
print a.readline()
</code></pre>
| 5 | 2016-09-29T13:19:18Z | [
"python"
]
|
confused about the readline() return in Python | 39,771,366 | <p>I am a beginner in python. However, I have some problems when I try to use the readline() method.</p>
<pre><code>f=raw_input("filename> ")
a=open(f)
print a.read()
print a.readline()
print a.readline()
print a.readline()
</code></pre>
<p>and my txt file is </p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>However, when I tried to run it on a Mac terminal, I got this:</p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>It seems that readline() is not working at all.
But when I disable print a.read(), the readline() gets back to work.</p>
<p>This confuses me a lot. Is there any solution where I can use read() and readline() at the same time? </p>
| 1 | 2016-09-29T13:14:49Z | 39,771,555 | <p>You need to understand the concept of file pointers. When you read the file, it is fully consumed, and the pointer is at the end of the file. </p>
<blockquote>
<p>It seems that the readline() is not working at all. </p>
</blockquote>
<p>It is working as expected. There are no lines to read. </p>
<blockquote>
<p>when I disable print a.read(), the readline() gets back to work.</p>
</blockquote>
<p>Because the pointer is at the beginning of the file, and the lines can be read </p>
<blockquote>
<p>Is there any solution that I can use read() and readline() at the same time?</p>
</blockquote>
<p>Sure. Flip the ordering of reading a few lines, then the remainder of the file, or seek the file pointer back to a position that you would like. </p>
<p>Also, don't forget to close the file when you are finished reading it </p>
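<p>A <code>with</code> block handles the closing for you, even if an error occurs part-way (a sketch using a temp file rather than the asker's input):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'readline_demo.txt')
with open(path, 'w') as f:
    f.write('aaaaaaaaa\nbbbbbbbbb\nccccccccc\n')

with open(path) as a:
    first = a.readline()  # pointer is now just past the first line
    rest = a.read()       # reads from there to the end
print(a.closed)  # True: closed automatically on leaving the block
```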
| 1 | 2016-09-29T13:23:15Z | [
"python"
]
|
confused about the readline() return in Python | 39,771,366 | <p>I am a beginner in python. However, I have some problems when I try to use the readline() method.</p>
<pre><code>f=raw_input("filename> ")
a=open(f)
print a.read()
print a.readline()
print a.readline()
print a.readline()
</code></pre>
<p>and my txt file is </p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>However, when I tried to run it on a Mac terminal, I got this:</p>
<pre><code>aaaaaaaaa
bbbbbbbbb
ccccccccc
</code></pre>
<p>It seems that readline() is not working at all.
But when I disable print a.read(), the readline() gets back to work.</p>
<p>This confuses me a lot. Is there any solution where I can use read() and readline() at the same time? </p>
| 1 | 2016-09-29T13:14:49Z | 39,771,607 | <p>The file object <code>a</code> remembers its position in the file.</p>
<ul>
<li><code>a.read()</code> reads from the current position to end of the file (moving the position to the end of the file)</li>
<li><code>a.readline()</code> reads from the current position to the end of the line (moving the position to the next line)</li>
<li><code>a.seek(n)</code> moves to position n in the file (without returning anything)</li>
<li><code>a.tell()</code> returns the position in the file.</li>
</ul>
<p>So try putting the calls to readline first. You'll notice that now the read call won't return the whole file, just the remaining lines (maybe none), depending on how many times you called readline. And play around with seek and tell to confirm what's going on.</p>
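<p>A small experiment with <code>tell</code> and <code>seek</code> (using <code>io.StringIO</code> as a stand-in for an open file):</p>

```python
import io

a = io.StringIO('aaaaaaaaa\nbbbbbbbbb\nccccccccc\n')
pos0 = a.tell()       # 0: at the start
line = a.readline()   # consumes 'aaaaaaaaa\n'
pos1 = a.tell()       # 10: 9 characters plus the newline
a.seek(0)             # jump back to the start
chunk = a.read(9)     # 'aaaaaaaaa'
print(pos0, pos1, repr(chunk))
```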
<p>Details <a href="https://docs.python.org/2/library/stdtypes.html#bltin-file-objects" rel="nofollow">here</a>.</p>
| 0 | 2016-09-29T13:25:03Z | [
"python"
]
|
Addition speed of numpy arrays with different contiguous-type | 39,771,450 | <p>Numpy arrays are stored with different contiguous types (C- and F-). When using numpy.swapaxes(), the contiguous type gets changed. I need to add two multidimensional arrays (3d to be more specific), one of which comes from another array with swapped axes. What I've noticed is that when the first axis gets swapped with the last axis, in the case of a 3d array, the contiguous type changes from C- to F-. And adding two arrays with different contiguous type is extremely slow (~6 times slower than adding two C-contiguous arrays). However, if other axes are swapped (0-1 or 1-2), the resulting array would have false flags for both C- and F- contiguous (non-contiguous). The weird thing to me is that adding one array of C-configuous and one array neither C- nor F- contiguous, is in fact only slightly slower than adding two arrays of same type. Here are my two questions:</p>
<ol>
<li><p>Why does it seem to be different for C-&F-contiguous array addition and C-&non-contiguous array addition? Is it caused by a different rearranging mechanism, or simply because the rearranging distance between C- and F-contiguous is the longest of all possible axis orders?</p></li>
<li><p>If I have to add a C-contiguous array and a F-contiguous/non-contiguous array, what is the best way to accelerate the speed?</p></li>
</ol>
<p>Below is a minimal example of what I encountered. The four printed durations on my computer are 2.0s (C-contiguous + C-contiguous), 12.4s (C-contiguous + F-contiguous), 3.4s (C-contiguous + non-contiguous) and 3.3s (C-contiguous + non-contiguous).</p>
<pre><code>import numpy as np
import time
np.random.seed(1234)
a = np.random.random((300, 400, 500)) # C-contiguous
b = np.swapaxes(np.random.random((500, 400, 300)), 0, 2) # F-contiguous
c = np.swapaxes(np.random.random((300, 500, 400)), 1, 2) # Non-contiguous
d = np.swapaxes(np.random.random((400, 300, 500)), 0, 1) # Non-contiguous
t = time.time()
for n in range(10):
result = a + a
print(time.time() - t)
t = time.time()
for n in range(10):
result = a + b
print(time.time() - t)
t = time.time()
for n in range(10):
result = a + c
print(time.time() - t)
t = time.time()
for n in range(10):
result = a + d
print(time.time() - t)
</code></pre>
| 0 | 2016-09-29T13:18:37Z | 39,771,851 | <p>These flags (<code>C</code> and <code>F</code>) denote whether a matrix (or multi-dimensional array) is stored in row-major order (<code>C</code>, as in the C language, which uses row-major storage) or column-major order (<code>F</code>, as in Fortran, which uses column-major storage).</p>
<p>Neither order is inherently faster; it is just a storage convention. Whichever one you use, performance is the same in isolation.</p>
<p>However, what makes an enormous difference is whether the arrays are contiguous or not. If they are contiguous, you will get good timings because of caching effects, vectorization and other optimizations the compiler might apply.</p>
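<p>One workaround for the second question — a sketch, not something from the answer above — is to copy the mismatched operand into C order once with <code>np.ascontiguousarray</code> before doing repeated additions, so the one-time copy cost is amortized over the fast C+C additions that follow:</p>

```python
import numpy as np

np.random.seed(1234)
a = np.random.random((300, 400, 500))                     # C-contiguous
b = np.swapaxes(np.random.random((500, 400, 300)), 0, 2)  # F-contiguous view

assert a.flags["C_CONTIGUOUS"]
assert not b.flags["C_CONTIGUOUS"] and b.flags["F_CONTIGUOUS"]

# Pay the layout-conversion cost once...
b_c = np.ascontiguousarray(b)
assert b_c.flags["C_CONTIGUOUS"]

# ...then every later addition is the fast C + C case from the timings,
# and the result is identical element-for-element.
assert np.array_equal(a + b, a + b_c)
```

Whether this wins overall depends on how many times the mismatched array is reused; for a single addition the copy may cost as much as it saves.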
| 0 | 2016-09-29T13:35:52Z | [
"python",
"arrays",
"performance",
"numpy",
"contiguous"
]
|