python threads operating serially
Question: I am a python noob. I am trying to design a visual stimulus for experiments in
my lab. The stimulus should stop when the user presses a key. The entire
experiment is timing sensitive and so I cannot run the key check serially.
The code that I wrote looks like this.
* * *
class designStim:
    '''
    This is class to hold functions designing the stimulus. There are other functions
    in this class but the only one that is called is the following
    '''
    def deisgnStart(self)
        #This function returns a dictionary variable containing
        #the raw stimulus
* * *
class dispStim(threading.Thread):
    def __init__(self,stimdata,q):
        threading.Thread.__init__(self)
        #*Assign the other data from stimdata to self*
    def run(self):
        #Code to run the stimulus
* * *
class checkKbdAbort(threading.Thread):
    def __init__(self,q):
        threading.Thread.__init__(self)
    def run(self):
        #Code to check for keyboard key press. In case of key press,
        #use the queue to abort dispStim*

if __name__ == '__main__':
    a=designStim('test')
    stimdata=a.designStart()
    q=Queue()
    thread1=dispStim(stimdata,q)
    thread2=checkKbdAbort(q)
    thread1.start()
    thread2.start()
* * *
This code works when I run serially which leads me to believe that my display
scripts are correct. However, when I run the code in this form, the two
threads are not run in parallel. Thread1 executes and then thread2 runs.
During the thread1 run, the stimulus does not display. Is there a mistake I am
making during the class initialization/call? Please help
Thank you
Mathew
Answer: While I cannot answer your question for sure (since I don't see a complete
example and thus do not know why things seem to be executing serially), and
putting aside what appear to be typos in the code copied into your question,
what you are attempting to do appears basically correct. If you subclass
threading.Thread and provide a run() definition, and if you launch each thread
instance via start(), the threads should run in parallel. (Thread 1 _may_ get
a small head start since it is kicked off first from the parent thread, but
I'm going to guess that it does not matter in this case.)
Here's a working example that shows where threads should be working in
parallel:
import sys
import threading

# Using sys.stdout.write() instead of print; it seems nicer for
# demoing threads in CPython :)

class Fizz(threading.Thread):
    def run(self):
        for i in xrange(10):
            sys.stdout.write("fizz\n")

class Buzz(threading.Thread):
    def run(self):
        for i in xrange(10):
            sys.stdout.write("buzz\n")

fizz = Fizz()
buzz = Buzz()
fizz.start()
buzz.start()
And here's what my output looks like:
C:\Users\Vultaire>test.py
fizz
fizz
fizz
fizz
fizz
buzz
buzz
buzz
buzz
fizz
buzz
fizz
fizz
buzz
fizz
fizz
buzz
buzz
buzz
buzz
The threads in your example appear to be started in this way, so they _should_
be running in parallel. I'm guessing the problem is somewhere else? Or perhaps
the threads are running in parallel and just didn't seem like it? (Please
provide a more complete code example if you need further help! :))
|
App Engine bulkloader transformation for datetime
Question: I'm trying to load a datetime field with the following format.
2013-02-05T10:09:38-08:00
- property: event_time
  external_name: datetime
  import_transform: transform.import_date_time('%Y-%m-%dT%H:%M:%S%z')
However, the transformer does not accept the %z directive. According to the
Python docs, this directive is platform dependent. Apparently, App Engine
doesn't support it. Here's the error I get on the bulkloader,
ValueError: 'z' is a bad directive in format '%Y-%m-%dT%H:%M:%S%z'
This issue seems to be well established in the context of datetime parsing and
there are workarounds. But what to do with the bulkloader? I don't have the
same flexibility as other Python solutions.
Answer: I store only naïve datetime objects, so I haven't run into this problem
myself. But it would seem to be possible to write a custom export/import pair
that includes something to recreate the timezone offset... e.g. the export
might be something like:
def export_datetime_with_gmt_offset(dt):
    offset_minutes = int(dt.utcoffset().total_seconds() // 60)  # timedelta has no .minutes
    return transform.export_date_time('%Y-%m-%dT%H:%M:%S')(dt) + ' ' + str(offset_minutes)
(Code above not tested, but with luck it'll set you on the right path.)
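For the import side, here is a minimal sketch (not bulkloader-specific, and the helper name is mine) of parsing such a timestamp without `%z`, normalizing to a naive UTC datetime:
import_datetime_with_offset could then be wrapped into a custom import_transform.
    from datetime import datetime, timedelta

    def import_datetime_with_offset(s):
        # s looks like '2013-02-05T10:09:38-08:00'
        base, sign, offset = s[:-6], s[-6], s[-5:]
        dt = datetime.strptime(base, '%Y-%m-%dT%H:%M:%S')
        hours, minutes = map(int, offset.split(':'))
        delta = timedelta(hours=hours, minutes=minutes)
        # subtract a positive offset (or add a negative one) to get UTC
        return dt - delta if sign == '+' else dt + delta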
|
Is there an equivalent of a bootstrap.php for python?
Question: In php, people often call a bootstrap file to set variables used throughout a
program. I have a python program that calls methods from different modules. I
want those methods from different modules to share some variables. Can I set
these variables up in something like a boostrap.py? Or is this not very
"pythonic" because a module should contain all of the variables it needs?
Answer: The best way would be to create something called a `settings.py` file that
houses all your shared variables of importance.
This approach is followed by the Django web framework, which uses a
`settings.py` file to house all the data that needs to be shared, for example
database logins and static file roots.
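A minimal sketch of the idea (file and variable names are made up):
    # settings.py
    DB_HOST = 'localhost'
    DEBUG = True

    # any other module in the program
    import settings

    if settings.DEBUG:
        print('connecting to', settings.DB_HOST)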
|
How to convert this non-tail-recursion function to a loop or a tail-recursion version?
Question: I've been curious about this for a long time. It is often easy and elegant
for a human to write a non-tail-recursive function for something complicated,
but it can be very slow and easily hits Python's recursion limit:
def moves_three(n, ini=0, med=1, des=2):
    '''give an int -> return a tuple of moves'''
    if n == 1:
        return ((ini,des),)
    return moves_three(n-1, ini=ini, med=des, des=med) + \
           ((ini, des),) + \
           moves_three(n-1, ini=med, med=ini, des=des)

if __name__ == '__main__':
    moves_three(100) # maybe after several hours you can see the result.
    len(moves_three(10000))
So, how to change `moves_three` to a tail recursion one or a loop (better)?
More important, are there any essays to talk about this? Thanks.
Answer: Even with an iterative form, this isn't going to get any faster. The problem
isn't the recursion limit; you're still an order of magnitude below the
recursion limit. The problem is that the size of your output is `O(2^n)`. For
`n=100`, you have to build a tuple of about a thousand billion billion billion
elements. It doesn't matter how you build it; you'll never finish.
If you want to convert this to iteration anyway, that can be done by managing
state with an explicit stack instead of the call stack:
def moves_three(n, a=0, b=1, c=2):
    first_entry = True
    stack = [(first_entry, n, a, b, c)]
    output = []
    while stack:
        first_entry, n1, a1, b1, c1 = stack.pop()
        if n1 == 1:
            output.append((a1, c1))
        elif first_entry:
            stack.append((False, n1, a1, b1, c1))
            stack.append((True, n1-1, a1, c1, b1))
        else:
            output.append((a1, c1))
            stack.append((True, n1-1, b1, a1, c1))
    return tuple(output)
Confusing, isn't it? A tuple `(True, n, a, b, c)` on the stack represents
entering a function call with arguments `n, a, b, c`. A tuple `(False, n, a,
b, c)` represents returning to the `(True, n, a, b, c)` call after
`moves_three(n-1, a, c, b)` ends.
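As a quick sanity check, the iterative version agrees with the recursive one for small n (I traced this output by hand against both versions):
    print(moves_three(3))
    # ((0, 2), (0, 1), (2, 1), (0, 2), (1, 0), (1, 2), (0, 2))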
|
Why is a website's response in python's `urllib.request` different to a request sent directly from a web-browser?
Question: I have a program that takes a URL and gets a response from the server using
`urllib.request`. It all works fine, but I tested it a little more and
realised that when I put in a URL such as <http://google.com> into my browser,
I got a different page (which had a doodle and a science fair promotion etc.)
but with my program it was just plain Google with nothing special on it.
It is probably due to redirection, but if the request from my program goes
through the same router and DNS, surely the output should be exactly the same?
Here is the code:
"""
This is a simple browsing widget that handles user requests, with the
added condition that all proxy settings are ignored. It outputs in the
default web browser.
"""
# This imports some necessary libraries.
import tkinter as tk
import webbrowser
from tempfile import NamedTemporaryFile
import urllib.request
def parse(data):
"""
Removes junk from the data so it can be easily processed.
:rtype : list
:param data: A long string of compressed HTML.
"""
data = data.decode(encoding='UTF-8') # This makes data workable.
lines = data.splitlines() # This clarifies the lines for writing.
return lines
class Browser(object):
"""This creates an object for getting a direct server response."""
def __init__(self, master):
"""
Sets up a direct browsing session and a GUI to manipulate it.
:param master: Any Tk() window in which the GUI is displayable.
"""
# This creates a frame within which widgets can be stored.
frame = tk.Frame(master)
frame.pack()
# Here we create a handler that ignores proxies.
proxy_handler = urllib.request.ProxyHandler(proxies=None)
self.opener = urllib.request.build_opener(proxy_handler)
# This sets up components for the GUI.
tk.Label(frame, text='Full Path').grid(row=0)
self.url = tk.Entry(frame) # This takes the specified path.
self.url.grid(row=0, column=1)
tk.Button(frame, text='Go', command=self.browse).grid(row=0, column=2)
# This binds the return key to calling the method self.browse.
master.bind('<Return>', self.browse)
def navigate(self, query):
"""
Gets raw data from the queried server, ready to be processed.
:rtype : str
:param query: The request entered into 'self.url'.
"""
# This contacts the domain and parses it's response.
response = self.opener.open(query)
html = response.read()
return html
def browse(self, event=None):
"""
Wraps all functionality together for data reading and writing.
:param event: The argument from whatever calls the method.
"""
# This retrieves the input given by the user.
location = self.url.get()
print('\nUser inputted:', location)
# This attempts to access the server and gives any errors.
try:
raw_data = self.navigate(location)
except Exception as e:
print(e)
# This executes assuming there are no errors.
else:
clean_data = parse(raw_data)
# This creates and executes a temporary HTML file.
with NamedTemporaryFile(suffix='.html', delete=False) as cache:
cache.writelines(line.encode('UTF-8') for line in clean_data)
webbrowser.open_new_tab(cache.name)
print('Done.')
def main():
"""Using a main function means not doing everything globally."""
# This creates a window that is always in the foreground.
root = tk.Tk()
root.wm_attributes('-topmost', 1)
root.title('DirectQuery')
# This starts the program.
Browser(root)
root.mainloop()
# This allows for execution as well as for importing.
if __name__ == '__main__':
main()
Note: I don't know if it is something to do with the fact that it is
instructed to ignore proxies? My computer doesn't have any proxy settings
turned on by the way. Also, if there is a way that I can get the same
response/output as a web browser such as chrome would, I would love to hear
it.
Answer: In order to answer your general question you need to understand how the web
site in question operates, so this isn't really a Python question. Web sites
frequently detect the browser's "make and model" with special detection code,
often (as indicated in the comment on your question) starting with the User-
Agent: HTTP header.
It would therefore make sense for Google's home page not to include any
JavaScript-based functionality if the User-Agent identifies itself as a
program.
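For example, a minimal sketch (the User-Agent string is just an arbitrary browser-like example) of sending such a header with `urllib.request`:
    import urllib.request

    request = urllib.request.Request(
        'http://google.com',
        headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:27.0) '
                               'Gecko/20100101 Firefox/27.0'})
    response = urllib.request.urlopen(request)
    html = response.read()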
|
Generate all unique permutations of 2d array
Question: I would like to be able to generate all unique permutations of a 2d array in
python.
For example, take this 2d array [[1,1],[0,0]]. I would like back:
[[0,0],
 [1,1]]

[[0,1],
 [0,1]]

[[0,1],
 [1,0]]

[[1,0],
 [0,1]]

[[1,0],
 [1,0]]

[[1,1],
 [0,0]]
Answer: You can do it like this
d = [[1, 1], [0, 0]]
from itertools import permutations, chain
from pprint import pprint
pprint(sorted([i[:2], i[2:]] for i in set(permutations(chain.from_iterable(d)))))
**Output**
[[[0, 0], [1, 1]],
[[0, 1], [0, 1]],
[[0, 1], [1, 0]],
[[1, 0], [0, 1]],
[[1, 0], [1, 0]],
[[1, 1], [0, 0]]]
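If the array is bigger than 2x2, the same idea generalizes by re-chunking each flat permutation into rows; a sketch under the assumption that all rows have the same width (the function name is mine):
    from itertools import permutations, chain

    def unique_permutations_2d(d):
        n = len(d[0])  # row width, assumed uniform
        flat = set(permutations(chain.from_iterable(d)))
        return sorted([list(p[i:i+n]) for i in range(0, len(p), n)] for p in flat)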
|
Python yield multiple assignment
Question: I generally try to use yield whenever I can, but I don't get how I'd do it on
code like this:
numbers = [1,2,3,4,5,6,7,8,9,10]

def odd_and_even(numbers):
    odd = []
    even = []
    for number in numbers:
        if number % 2:
            odd.append(number)
        else:
            even.append(number)
    return (odd, even)
Obviously, in the above case, I could rewrite it as two different functions,
but for something more complex, that might be quite computationally expensive
because I'd be running twice as many checks.
How could the above code be made to use yield?
Answer: Using [`partition()`
function](http://nedbatchelder.com/blog/201306/filter_a_list_into_two_parts.html):
import itertools

def partition(items, predicate=bool):
    a, b = itertools.tee((predicate(item), item) for item in items)
    return ((item for pred, item in a if not pred),
            (item for pred, item in b if pred))

odd, even = partition(numbers, lambda n: n % 2 == 0)
If `odd`, `even` are not consumed nearly in sync or if the predicate is not
expensive then your list version (from the question) should be faster.
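A quick usage sketch; both results are lazy generators, so consume them as needed:
    numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    odd, even = partition(numbers, lambda n: n % 2 == 0)
    print(list(odd))   # [1, 3, 5, 7, 9]
    print(list(even))  # [2, 4, 6, 8, 10]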
|
Speed up vlookup like operation using pandas in python
Question: I have written some code to essentially do a excel style vlookup on two pandas
dataframes and want to speed it up.
The structure of the data frames is as follows: dbase1_df.columns:
'VALUE', 'COUNT', 'GRID', 'SGO10GEO'
merged_df.columns:
'GRID', 'ST0, 'ST1', 'ST2', 'ST3', 'ST4', 'ST5', 'ST6', 'ST7', 'ST8', 'ST9',
'ST10'
sgo_df.columns:
'mkey', 'type'
To combine them, I do the following:
1. For each row in dbase1_df, find the row where its 'SGO10GEO' value matches the 'mkey' value of sgo_df. Obtain the 'type' from that row in sgo_df.
2. 'type' contains an integer ranging from 0 to 10. Create a column name by appending 'ST' to type.
3. Find the value in merged_df, where its 'GRID' value matches the 'GRID' value in dbase1_df and the column name is the one we obtained in step 2. Output this value into a csv file.
# Read in dbase1 dbf into data frame
dbase1_df = pandas.DataFrame.from_csv(dbase1_file,index_col=False)
merged_df = pandas.DataFrame.from_csv('merged.csv',index_col=False)
lup_out.writerow(["VALUE","TYPE",EXTRACT_VAR.upper()])
# For each unique value in dbase1 data frame:
for index, row in dbase1_df.iterrows():
    # 1. Find the soil type corresponding to the mukey
    tmp = sgo_df.type.values[sgo_df['mkey'] == int(row['SGO10GEO'])]
    if tmp.size > 0:
        s_type = 'ST'+tmp[0]
        val = int(row['VALUE'])
        # 2. Obtain hmu value
        tmp_val = merged_df[s_type].values[merged_df['GRID'] == int(row['GRID'])]
        if tmp_val.size > 0:
            hmu_val = tmp_val[0]
            # 3. Output into data frame: VALUE, hmu value
            lup_out.writerow([val,s_type,hmu_val])
        else:
            err_out.writerow([merged_df['GRID'], type, row['GRID']])
Is there anything here that might be a speed bottleneck? Currently it takes me
around 20 minutes for around ~500,000 rows in dbase1_df; ~1,000 rows in
merged_df and ~500,000 rows in sgo_df.
thanks!
Answer: You need to use the merge operation in Pandas to get a better performance. I'm
not able to test the below code since I don't have the data but at minimum it
should help you to get the idea:
import pandas as pd
dbase1_df = pd.DataFrame.from_csv('dbase1_file.csv',index_col=False)
sgo_df = pd.DataFrame.from_csv('sgo_df.csv',index_col=False)
merged_df = pd.DataFrame.from_csv('merged_df.csv',index_col=False)
#you need to use the same column names for common columns to be able to do the merge operation in pandas , so we changed the column name to mkey
dbase1_df.columns = [u'VALUE', u'COUNT', u'GRID', u'mkey']
#Below operation merges the two dataframes
Step1_Merge = pd.merge(dbase1_df,sgo_df)
#We need to add a new column to concatenate ST and type
Step1_Merge['type_2'] = Step1_Merge['type'].map(lambda x: 'ST'+str(x))
# We need to change the shape of merged_df and move columns to rows to be able to do another merge operation
id = merged_df.ix[:,['GRID']]
a = pd.merge(merged_df.stack(0).reset_index(1), id, left_index=True, right_index=True)
# We also need to change the automatically generated name to type_2 to be able to do the next merge operation
a.columns = [u'type_2', 0, u'GRID']
result = pd.merge(Step1_Merge,a,on=[u'type_2',u'GRID'])
|
How to execute a python script and write output to txt file?
Question: I'm executing a .py file, which spits out a given string. This command works
fine
execfile ('file.py')
But I want the output (in addition to it being shown in the shell) written
into a text file.
I tried this, but it's not working :(
execfile ('file.py') > ('output.txt')
All I get is this:
tugsjs6555
False
I guess "False" is referring to the output file not being successfully written
:(
Thanks for your help
Answer: What you're doing is comparing the output of `execfile('file.py')` against the
string `'output.txt'`.
you can do what you want to do with subprocess
#!/usr/bin/env python
import subprocess

with open("output.txt", "w+") as output:
    subprocess.call(["python", "./script.py"], stdout=output)
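Alternatively, a minimal sketch of keeping everything in one interpreter by redirecting `sys.stdout` around the `execfile` call (Python 2):
    import sys

    with open('output.txt', 'w') as output:
        sys.stdout = output
        try:
            execfile('file.py')
        finally:
            sys.stdout = sys.__stdout__  # always restore the real stdout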
|
How to read a large file set
Question: I am very new to Python. So please give specific advice. I am using Python
3.2.2.
I need to read a large file set in my computer. Now I can not even open it. To
verify the directory of the file, I used:
>>> import os
>>> os.path.dirname(os.path.realpath('a9000006.txt'))
It gives me the location `'C:\\Python32'`
Then I wrote up codes to open it:
>>> file=open('C:\\Python32\a9000006.txt','r')
Traceback (most recent call last):
  File "<pyshell#29>", line 1, in <module>
    file=open('C:\\Python32\a9000006.txt','r')
IOError: [Errno 22] Invalid argument: 'C:\\Python32\x079000006.txt'
Then I tried another one:
>>> file=open('C:\\Python32\\a9000006.txt','r')
Traceback (most recent call last):
  File "<pyshell#33>", line 1, in <module>
    file=open('C:\\Python32\\a9000006.txt','r')
IOError: [Errno 2] No such file or directory: 'C:\\Python32\\a9000006.txt'
Then another one:
>>> file=open(r'C:\Python32\a9000006.txt','r')
Traceback (most recent call last):
  File "<pyshell#35>", line 1, in <module>
    file=open(r'C:\Python32\a9000006.txt','r')
IOError: [Errno 2] No such file or directory: 'C:\\Python32\\a9000006.txt'
The file is saved in the Python folder. But, it is in a folder, so the path is
`D\Software\Python\Python3.2.2\Part1\Part1awards_1990\awd_1990_00`. It is
multiple layers of folders.
Also, can anyone share how to read the abstract section of all files in that
folder? Thanks.
Answer: `\a` is the ASCII bell character, not a backslash and an `a`. Use forward
slashes instead of backslashes:
open('C:/Python32/a9000006.txt')
and use the actual path to the file instead of `C:/Python32/a9000006.txt` It's
not clear from your question what that path might be; you seem like you might
already know the path, but you're misusing `realpath` in a way that seems like
you're trying to use it to search for the file. `realpath` doesn't do that.
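Once the real folder is known, a minimal sketch of opening every .txt file in it (the path below is pieced together from the question, so treat it as an assumption):
    import os

    folder = 'D:/Software/Python/Python3.2.2/Part1/Part1awards_1990/awd_1990_00'
    for name in os.listdir(folder):
        if name.endswith('.txt'):
            with open(os.path.join(folder, name)) as f:
                contents = f.read()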
|
Python 2&3: both urllib & requests POST data mysteriously disappears
Question: I'm using Python to scrape data from a number of web pages that have simple
HTML input forms, like the 'Username:' form at the bottom of this page:
<http://www.w3schools.com/html/html_forms.asp> (this is just a simple example
to illustrate the problem)
Firefox Inspect Element indicates this form field has the following HTML
structure:
<form name="input0" target="_blank" action="html_form_action.asp" method="get">
  Username:
  <input name="user" size="20" type="text"></input>
  <input value="Submit" type="submit"></input>
</form>
All I want to do is fill out this form and get the resulting page:
<http://www.w3schools.com/html/html_form_action.asp?user=ThisIsMyUserName>
Which is what is produced in my browser by entering 'ThisIsMyUserName' in the
'Username' field and pressing 'Submit'. However, every method that I have
tried (details below) returns the contents of the original page containing the
unaltered form without any indication the form data I submitted was
recognized, i.e. I get the content from the first link above in response to my
request, when I expected to receive the content of the second link.
I suspect the problem has to do with `action="html_form_action.asp"` in the
form above, or perhaps some kind of hidden field I'm missing (I don't know
what to look for - I'm new to form submission). Any suggestions?
## HERE IS WHAT I'VE TRIED SO FAR:
* * *
Using urllib.requests in Python 3:
import urllib.request
import urllib.parse
# Create dict of form values
example_data = urllib.parse.urlencode({'user': 'ThisIsMyUserName'})
# Encode dict
example_data = example_data.encode('utf-8')
# Create request
example_url = 'http://www.w3schools.com/html/html_forms.asp'
request = urllib.request.Request(example_url, data=example_data)
# Create opener and install
my_url_opener = urllib.request.build_opener() # no handlers
urllib.request.install_opener(my_url_opener)
# Open the page and read content
web_page = urllib.request.urlopen(request)
content = web_page.read()
# Save content to file
my_html_file = open('my_html_file.html', 'wb')
my_html_file.write(content)
But what is returned to me and saved in 'my_html_file.html' is the original
page containing the unaltered form without any indication that my form data
was recognized, i.e. I get this page in response:
http://www.w3schools.com/html/html_forms.asp
...which is the same thing I would have expected if I made this request
without the data parameter at all (which would change the request from a POST
to a GET).
Naturally the first thing I did was check whether my request was being
constructed properly:
# Just double-checking the request is set up correctly
print("GET or POST?", request.get_method())
print("DATA:", request.data)
print("HEADERS:", request.header_items())
Which produces the following output:
GET or POST? POST
DATA: b'user=ThisIsMyUserName'
HEADERS: [('Content-length', '21'), ('Content-type', 'application/x-www-form-
urlencoded'), ('User-agent', 'Python-urllib/3.3'), ('Host',
'www.w3schools.com')]
So it appears the POST request has been structured correctly. After re-reading
the documentation and unsuccessfully searching the web for an answer to this
problem, I moved on to a different tool: the requests module. I attempted to
perform the same task:
import requests
example_url = 'http://www.w3schools.com/html/html_forms.asp'
data_to_send = {'user': 'ThisIsMyUserName'}
response = requests.post(example_url, params=data_to_send)
contents = response.content
And I get the same exact result. At this point I'm thinking maybe this is a
Python 3 issue. So I fire up my trusty Python 2.7 and try the following:
import urllib, urllib2
data = urllib.urlencode({'user' : 'ThisIsMyUserName'})
resp = urllib2.urlopen('http://www.w3schools.com/html/html_forms.asp', data)
content = resp.read()
And I get the same result again! For thoroughness I figured I'd attempt to
achieve the same result by encoding the dictionary values into the url and
attempting a GET request:
# Using Python 3
# Construct the url for the GET request
example_url = 'http://www.w3schools.com/html/html_forms.asp'
form_values = {'user': 'ThisIsMyUserName'}
example_data = urllib.parse.urlencode(form_values)
final_url = example_url + '?' + example_data
print(final_url)
This spits out the following value for final_url:
http://www.w3schools.com/html/html_forms.asp?user=ThisIsMyUserName
I plug this into my browser and I see that this page is exactly the same as
the original page, which is exactly what my program is downloading.
I've also tried adding additional headers and cookie support to no avail.
I've tried everything I can think of. Any idea what could be going wrong?
Answer: The form states an action and a method; you are ignoring both. The method
states the form uses `GET`, not `POST`, and the action tells you to send the
form data to `html_form_action.asp`.
The `action` attribute acts like any other URL specifier in an HTML page;
unless it starts with a scheme (so with `http://...`, `https://...`, etc.) it
is relative to the current base URL of the page.
The `GET` HTTP method adds the URL-encoded form parameters to the target URL
with a question mark:
import urllib.request
import urllib.parse
# Create dict of form values
example_data = urllib.parse.urlencode({'user': 'ThisIsMyUserName'})
# Create request
example_url = 'http://www.w3schools.com/html/html_form_action.asp'
get_url = example_url + '?' + example_data
# Open the page and read content
web_page = urllib.request.urlopen(get_url)
print(web_page.read().decode(web_page.info().get_param('charset', 'utf8')))
or, using `requests`:
import requests
example_url = 'http://www.w3schools.com/html/html_form_action.asp'
data_to_send = {'user': 'ThisIsMyUserName'}
response = requests.get(example_url, params=data_to_send)
contents = response.text
print(contents)
In both examples I also decoded the response to Unicode text (something
`requests` makes easier for me with the `response.text` attribute).
|
change default path of django administration
Question: I am new to Django. I am using django administration for basic crud purpose. I
found that template for django admin resides at
C:\Python27\Lib\site-packages\django\contrib\admin\templates\admin
I need to change it as my own location .. i created one folder "template" on
base dir of project and added following lines
STATIC_URL = os.path.join(BASE_DIR, 'templates') + '/'
TEMPLATE_DIRS = (
    os.path.join(BASE_DIR, 'templates'),
)
i copied all files from `C:\Python27\Lib\site-
packages\django\contrib\admin\templates` to `basedir/templates`
but still it is referencing to `C:\Python27\Lib\site-
packages\django\contrib\admin\templates`
what is the best way?
Answer: Try this; hope it helps you:
import os

PROJECT_PATH = os.path.realpath(os.path.dirname(__file__))
...
#MEDIA_ROOT = PROJECT_PATH + '/media/'
TEMPLATE_DIRS = (
    PROJECT_PATH + '/templates/',  # trailing comma makes this a tuple
)
|
Can't understand 500: internal error with Django, Apache, Mod_python
Question: I'm getting a 500: Internal error when I try to start my Apache server with
django. I tried the steps given in the previous questions [Django with Apache
500 Error](http://stackoverflow.com/questions/20262164/django-with-
apache-500-error) and [Django Apache mod_wsgi
500](http://stackoverflow.com/questions/15097155/django-apache-mod-wsgi-500)
but neither of them did the trick.
My Wsgi file:
import os
import sys
sys.path = ['var/www/first'] + sys.path
sys.path.append('var/www/first')
os.eviron['DJANGO_SETTINGS_MODULE'] = 'first.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
My conf file:
<VirtualHost *:80>
    WSGIScriptAlias / /home/alok/Documents/first.wsgi
    Servername website.com
    Alias /static /var/www/first/static/
    <Directory /var/www/first/>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
I tried changing the permissions to 777 on the folder for the wsgi file, but it
still won't work. Any suggestions?
Answer: `os.eviron` is a typo. You mean `os.environ`.
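The corrected line in the .wsgi file would be:
    os.environ['DJANGO_SETTINGS_MODULE'] = 'first.settings'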
Your Apache error log probably would have shown you that.
|
Import variables into functions from external file in Python
Question: I am trying to write a script that is rather long, uses several different data
sources and can go wrong at many different stages. Having to restart the whole
process from the beginning each time some error in the input data is
discovered is not too much fun so I though I would save variables (actually
paths to created data files) to a backup file then read in this file and pick
up where I left off. Unfortunately
from previousrun import *
only imports the variables locally and I can't use import in each function as
Python tells me its not allowed at module level. Is there any way of importing
an unknown number of variables from another file and have them globally
available?
Answer: Use this in your function:
`globals().update(importlib.import_module("previousrun").__dict__)`
and `import importlib` at the top. (Note that updating `locals()` inside a
function has no effect in CPython, which is why `globals()` is used here to
make the names available module-wide.)
|
Why does my code only write the last line?
Question: I'm writing a list to file but it only writes the last line.
Here is my code. I'm on Python 2.7.
server=os.listdir('.') #contents of the current directory
for files in server:
    public_html = []
    if os.path.isfile(files) == True :
        pass
    elif os.path.isdir(files) == True :
        public_html.insert(0, files)
    print public_html
    f = open("index.html","w")
    f.write("<html>\n<head>\n<meta charset='utf-8'>\n<title></title>\n<link rel='stylesheet' href='css/normalize.css'>\n<script src=''></script></head>\n<body>")
    for folder in public_html:
        print folder
        f.write("<a>" + folder + "<a/>" + "\n")
        f.close()
Answer: Whenever you do `open('path/to/file',"w")` it blanks the file before writing
to it. This is called "write mode" and more info can be found [in the
docs](http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-
files). Instead, open the file in "append mode" (`'a'`), like this:
    ...
    elif os.path.isdir(files):  # == True is redundant here
        public_html.insert(0, files)
    print public_html
    f = open('index.html','a')
    ...
In addition, you're trying to close your file object in every iteration
through your `public_html` list! This won't work, and will probably throw
exceptions when you try to call `write` on a closed object. Dedent that once
so it's after your loop
for folder in public_html:
    print folder
    f.write("<a>" + folder + "<a/>" + "\n")
f.close()
THAT BEING SAID, I think you're mostly going about this the wrong way...
from bs4 import BeautifulSoup # http://www.crummy.com/software/BeautifulSoup/

directories = [dir_ for dir_ in os.listdir('.') if os.path.isdir(dir_)]
soup = BeautifulSoup("<html>\n<head>\n<meta charset='utf-8'>\n<title></title>\n<link rel='stylesheet' href='css/normalize.css'>\n<script src=''></script></head>\n<body>")
for directory in directories:
    tag = soup.new_tag('a') # can do ('a', href='link/path')
    tag.string = directory
    soup.body.append(tag)
with open('index.html','w') as index:
    index.write(soup.prettify())
This is more useful because you can more-easily control the contents of the
HTML, including throwing `href` on those `<a>`s!
|
BeautifulSoup - scraping a forum page
Question: I'm trying to scrape a forum discussion and export it as a csv file, with rows
such as "thread title", "user", and "post", where the latter is the actual
forum post from each individual.
I'm a complete beginner with Python and BeautifulSoup so I'm having a really
hard time with this!
My current problem is that all the text is split into one character per row in
the csv file. Is there anyone out there who can help me out? It would be
fantastic if someone could give me a hand!
Here's the code I've been using:
from bs4 import BeautifulSoup
import csv
import urllib2
f = urllib2.urlopen("https://silkroad5v7dywlc.onion.to/index.php?action=printpage;topic=28536.0")
soup = BeautifulSoup(f)
b = soup.get_text().encode("utf-8").strip() #the posts contain non-ascii words, so I had to do this
writer = csv.writer(open('silkroad.csv', 'w'))
writer.writerows(b)
Answer: Ok here we go. Not quite sure what I'm helping you do here, but hopefully you
have a good reason to be analyzing silk road posts.
You have a few issues here, the big one is that you aren't parsing the data at
all. **What you're essentially doing with .get_text() is going to the page,
highlighting the whole thing, and then copying and pasting the whole thing to
a csv file.**
So here is what you should be trying to do:
1. Read the page source
2. Use soup to break it into sections you want
3. Save sections in parallel arrays for author, date, time, post, etc
4. Write data to csv file row by row
I wrote some code to show you what that looks like, it should do the job:
from bs4 import BeautifulSoup
import csv
import urllib2

# get page source and create a BeautifulSoup object based on it
print "Reading page..."
page = urllib2.urlopen("https://silkroad5v7dywlc.onion.to/index.php?action=printpage;topic=28536.0")
soup = BeautifulSoup(page)

# if you look at the HTML all the titles, dates,
# and authors are stored inside of <dt ...> tags
metaData = soup.find_all("dt")

# likewise the post data is stored
# under <dd ...>
postData = soup.find_all("dd")

# define where we will store info
titles = []
authors = []
times = []
posts = []

# now we iterate through the metaData and parse it
# into titles, authors, and dates
print "Parsing data..."
for html in metaData:
    text = BeautifulSoup(str(html).strip()).get_text().encode("utf-8").replace("\n", "") # convert the html to text
    titles.append(text.split("Title:")[1].split("Post by:")[0].strip()) # get Title:
    authors.append(text.split("Post by:")[1].split(" on ")[0].strip()) # get Post by:
    times.append(text.split(" on ")[1].strip()) # get date

# now we go through the actual post data and extract it
for post in postData:
    posts.append(BeautifulSoup(str(post)).get_text().encode("utf-8").strip())

# now we write data to csv file
# ***csv files MUST be opened with the 'b' flag***
csvfile = open('silkroad.csv', 'wb')
writer = csv.writer(csvfile)

# create template
writer.writerow(["Time", "Author", "Title", "Post"])

# iterate through and write all the data
for time, author, title, post in zip(times, authors, titles, posts):
    writer.writerow([time, author, title, post])

# close file
csvfile.close()

# done
print "Operation completed successfully."
**EDIT:** Included solution that can read files from directory and use data
from that
Okay, so you have your HTML files in a directory. You need to get a list of
files in the directory, iterate through them, and append to your csv file for
each file in the directory.
**This is the basic logic of our new program.**
If we had a function called processData() that took a file path as an argument
and appended data from the file to your csv file here is what it would look
like:
# the directory where we have all our HTML files
dir = "myDir"

# our csv file
csvFile = "silkroad.csv"

# insert the column titles to csv
csvfile = open(csvFile, 'wb')
writer = csv.writer(csvfile)
writer.writerow(["Time", "Author", "Title", "Post"])
csvfile.close()

# get a list of files in the directory
fileList = os.listdir(dir)

# define variables we need for status text
totalLen = len(fileList)
count = 1

# iterate through files and read all of them into the csv file
for htmlFile in fileList:
    path = os.path.join(dir, htmlFile) # get the file path
    processData(path) # process the data in the file
    print "Processed '" + path + "'(" + str(count) + "/" + str(totalLen) + ")..." # display status
    count = count + 1 # increment counter
**As it happens our _processData()_ function is more or less what we did
before, with a few changes.**
So this is very similar to our last program, with a few small changes:
1. We write the column headers first thing
2. Following that we open the csv with the 'ab' flag to append
3. We import os to get a list of files
_Here's what that looks like:_
from bs4 import BeautifulSoup
import csv
import urllib2
import os # added this import to process files/dirs

# ** define our data processing function
def processData( pageFile ):
    ''' take the data from an html file and append to our csv file '''
    f = open(pageFile, "r")
    page = f.read()
    f.close()
    soup = BeautifulSoup(page)

    # if you look at the HTML all the titles, dates,
    # and authors are stored inside of <dt ...> tags
    metaData = soup.find_all("dt")

    # likewise the post data is stored
    # under <dd ...>
    postData = soup.find_all("dd")

    # define where we will store info
    titles = []
    authors = []
    times = []
    posts = []

    # now we iterate through the metaData and parse it
    # into titles, authors, and dates
    for html in metaData:
        text = BeautifulSoup(str(html).strip()).get_text().encode("utf-8").replace("\n", "") # convert the html to text
        titles.append(text.split("Title:")[1].split("Post by:")[0].strip()) # get Title:
        authors.append(text.split("Post by:")[1].split(" on ")[0].strip()) # get Post by:
        times.append(text.split(" on ")[1].strip()) # get date

    # now we go through the actual post data and extract it
    for post in postData:
        posts.append(BeautifulSoup(str(post)).get_text().encode("utf-8").strip())

    # now we write data to csv file
    # ***csv files MUST be opened with the 'b' flag***
    csvfile = open('silkroad.csv', 'ab')
    writer = csv.writer(csvfile)

    # iterate through and write all the data
    for time, author, title, post in zip(times, authors, titles, posts):
        writer.writerow([time, author, title, post])

    # close file
    csvfile.close()

# ** start our process of going through files
# the directory where we have all our HTML files
dir = "myDir"

# our csv file
csvFile = "silkroad.csv"

# insert the column titles to csv
csvfile = open(csvFile, 'wb')
writer = csv.writer(csvfile)
writer.writerow(["Time", "Author", "Title", "Post"])
csvfile.close()

# get a list of files in the directory
fileList = os.listdir(dir)

# define variables we need for status text
totalLen = len(fileList)
count = 1

# iterate through files and read all of them into the csv file
for htmlFile in fileList:
    path = os.path.join(dir, htmlFile) # get the file path
    processData(path) # process the data in the file
    print "Processed '" + path + "'(" + str(count) + "/" + str(totalLen) + ")..." # display status
    count = count + 1 # increment counter
|
Python sequence error
Question: I have written code to solve various integrals using the midpoint method. I
had it working for some other functions; however, when trying to compute this
particular function's integral (you will see it within the code), I am running
into this error: "setting an array element with a sequence." If someone could
point out what is causing this issue, it would be much appreciated.
Edit: I have marked the line where the error is occurring.
Here is my code:
from matplotlib.pylab import *

N = 1000
xi = 1.0
xf = 4.0
dx = (xf - xi)/N
x = zeros(N+1)
F = zeros(N+1)
k = m = 1
Z = 2*sqrt((2*m)/2)
A = 1
x[0] = xi
F[0] = 0.0
for i in range(1,N+1):
    x[i] = x[i-1] + dx
    xmid = (x[i] + x[i-1])/2.0
    F[i] = F[i-1] + dx*(Z*sqrt(k*A**4 - k*x**4)) #error here
print 'F at', xf, ' = ', F[N]
plot(F,x,'b')
xlabel('F')
ylabel('x')
show()
Answer: At runtime on the first iteration of the loop, the problematic line adds a
scalar and an array, which broadcasts to an array:
F[i-1] = 0.0
dx*(Z*sqrt(k*A**4 - k*x**4)) = array([ 0. , nan, 0.006, ..., 0.006, 0.006, 0.006])
So at runtime it's trying to assign an array to a single index, which isn't
well-defined for the one-dimensional `F` array:
F[i] = array([ 0. , nan, 0.006, ..., 0.006, 0.006, 0.006])
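A tiny standalone reproduction of the error:
    import numpy as np

    F = np.zeros(5)
    F[1] = np.zeros(5)  # ValueError: setting an array element with a sequence.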
Check your formula, but I'm guessing you probably intended to do something
like this instead:
dx*(Z*sqrt(k*A**4 - k*x[i]**4))
|
Randomizing Color Python Fractal
Question: I am currently working on a fractal generator, and I've got the fractal built
and functioning to my desired specifications, although I am looking to add
randomized coloring to it. Currently
the whole fractal shows as one random color after the program has run,
but I would like each level, or even each leg, of the fractal to be a
different random color. I am using pygame to generate this fractal and an RGB
value for the color.
import pygame
import random

fractal_level = 3
colblue = (0,61,103)
colbk = (0,0,0)
color = random.sample(xrange(0,255), 3)

def fract_draw(x, y, width, height, count):
    pygame.draw.line(screen,color,[x + width*.25,height//2+y],[x + width*.75,height//2+y],2)
    pygame.draw.line(screen,color,[x+width*.25,(height*.5)//2+y],[x+width*.25,(height*1.5)//2+y],2)
    pygame.draw.line(screen,color,[x + width*.75,(height*.5)//2+y],[x + width*.75,(height*1.5)//2+y],2)
    if count > 0:
        count -= 1
        fract_draw(x, y, width // 2, height // 2, count)
        fract_draw(x + width // 2, y, width // 2, height // 2, count)
        fract_draw(x, y + width // 2, width // 2, height // 2, count)
        fract_draw(x + width // 2, y + width // 2, width // 2, height // 2, count)

pygame.init()
size = [750, 750]
screen = pygame.display.set_mode(size)
clock = pygame.time.Clock()
screen.fill(colblue)
fract_draw(0, 0, 750, 750, fractal_level)
pygame.display.flip()
clock.tick(20)
Sorry if the code is not 100% optimized with a sys exit and pygame exit as
I've been attempting to keep the code down to a minimum.
Answer: Simplest way:
import pygame
import random

fractal_level = 3
colblue = (0,61,103)
colbk = (0,0,0)

def fract_draw(x, y, width, height, count):
    color = random.sample(xrange(0,255), 3)
    pygame.draw.line(screen,color,[x + width*.25,height//2+y],[x + width*.75,height//2+y],2)
    color = random.sample(xrange(0,255), 3)
    pygame.draw.line(screen,color,[x+width*.25,(height*.5)//2+y],[x+width*.25,(height*1.5)//2+y],2)
    color = random.sample(xrange(0,255), 3)
    pygame.draw.line(screen,color,[x + width*.75,(height*.5)//2+y],[x + width*.75,(height*1.5)//2+y],2)
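That gives every leg its own random color. For per-level coloring instead, a sketch (assuming the rest of the code stays the same) that derives the color from the recursion depth `count`:
    level_colors = [random.sample(xrange(0, 255), 3) for _ in range(fractal_level + 1)]

    def fract_draw(x, y, width, height, count):
        color = level_colors[count]  # one shared random color per depth
        # ... the three pygame.draw.line calls and the recursion as before ...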
|
Logging data to CSV with python
Question: I am trying to update a log file from a python script. I have a script that
generates 2 variables, `inside` and `outside`, and a log file templog.csv
I need to generate the date and time and then write the whole lot with commas
to the file.
I have done this already as a shell script but would like to include it all in
one python script.
Thanks.
Answer:
import time
import csv

row = [time.ctime(), time.time(), inside, outside]
with open('templog.csv', 'a') as f:
    w = csv.writer(f)
    w.writerow(row)
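If the file really needs separate date and time columns (as the `date,time,inside,outside` format suggests), a variant sketch for building the row:
    from datetime import datetime

    now = datetime.now()
    row = [now.strftime('%Y-%m-%d'), now.strftime('%H:%M:%S'), inside, outside]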
|
creating multiple excel worksheets using data in a pandas dataframe
Question: Just started using pandas and python.
I have a worksheet which I have read into a dataframe and the applied forward
fill (ffill) method to.
I would then like to create a single excel document with two worksheets in it.
One worksheet would have the data in the dataframe before the ffill method is
applied and the next would have the dataframe which has had the ffill method
applied.
Eventually I intend to create one worksheet for every unique instance of data
in a certain column of the dataframe.
I would then like to apply some vba formatting to the results - but i'm not
sure which dll or addon or something I would need to call excel vba using
python to format headings as bold and add color etc.
I've had partial success in that xlsxwriter will create a new workbook and add
sheets, but `dataframe.to_excel` operations don't seem to work on the workbooks
it creates; the workbooks open but the sheets are blank.
Thanks in advance.
import os
import time
import pandas as pd
import xlwt
from xlwt.Workbook import *
from pandas import ExcelWriter
import xlsxwriter
#set folder to import files from
path = r'path to some file'
#folder = os.listdir(path)
#for loop goes here
#get date
date = time.strftime('%Y-%m-%d',time.gmtime(os.path.getmtime(path)))
#import excel document
original = pd.DataFrame()
data = pd.DataFrame()
original = pd.read_excel(path,sheetname='Leave',skiprows=26)
data = pd.read_excel(path,sheetname='Leave',skiprows=26)
print (data.shape)
data.fillna(method='ffill',inplace=True)
#the code for creating the workbook and worksheets
wb= Workbook()
ws1 = wb.add_sheet('original')
ws2 = wb.add_sheet('result')
original.to_excel(writer,'original')
data.to_excel(writer,'result')
writer.save('final.xls')
Answer: Your sample code is almost correct except you need to create the `writer`
object and you don't need to use the `add_sheet()` methods. The following
should work:
# ...
writer = pd.ExcelWriter('final.xlsx')
data.to_excel(writer,'original')
# data.fillna() or similar.
data.to_excel(writer,'result')
writer.save()
# ...
The correct syntax for this is shown at the end of the Pandas
[`DataFrame.to_excel()`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.to_excel.html) docs.
See also [Working with Python Pandas and
XlsxWriter](https://xlsxwriter.readthedocs.org/working_with_pandas.html).
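For the formatting part of your question (bold headings, colors) you don't need VBA; a hedged sketch using the XlsxWriter engine's format objects, per the pages linked above:
    writer = pd.ExcelWriter('final.xlsx', engine='xlsxwriter')
    data.to_excel(writer, 'result')
    workbook = writer.book
    bold = workbook.add_format({'bold': True})
    writer.sheets['result'].set_row(0, None, bold)  # make the header row bold
    writer.save()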
|
python serial 100% cpu
Question: A year ago I needed a script to capture input from a serial device and send it
to a web browser. (A touch sensor attached to a 3d printed Egyptian tablet in
a Museum.) I had originally intended to use Perl but as that wasn't playing
ball and I only had a few hours before launch I opted for Python (I'm not a
python dev). I made a script that worked fine and has been for some time, with
the only issue being that the script uses 100% CPU. How can I get Python to
read from the serial port without using the whole CPU, while still being
responsive regardless of when the input is pressed?
My script is below:
#!/usr/bin/python
import time
import serial
import sys
from subprocess import call
import traceback

myport = 0
ser = serial.Serial()

def readkey():
    while 1:
        out = '';
        while ser.inWaiting() > 0:
            out += ser.read(1);
            if out != '\xf8' and out != '\xf9':
                call(["xdotool", "key", "F8"])
                call(["xdotool", "type", str(ord(out))])
                call(["xdotool", "key", "F9"])
                out = ''

def main_sys():
    global ser
    print "Opening Stela serial port"
    ser.open();
    ser.isOpen();
    print "Starting Stela subsystem"
    while 1:
        try:
            readkey()
            break
        except Exception as e:
            print "caught os error".format(e)
            time.sleep(1)
            main_sys()

def init():
    global ser
    global myport
    while 1:
        try:
            theport = '/dev/ttyACM'+str(myport)
            print "Trying " + theport
            ser = serial.Serial(
                port=theport,
                baudrate=115200,
                parity=serial.PARITY_NONE,
                stopbits=serial.STOPBITS_ONE,
                bytesize=serial.EIGHTBITS
            )
            main_sys()
            break;
        except Exception as e:
            traceback.print_exc()
            myport += 1
            if myport > 5:
                myport = 0
            time.sleep(1)
            init()

init()
Answer: Add a `time.sleep` for a short period at the end of your `readKey`-loop. It
will let other processes run.
Also, be aware that `call` is blocking until the operation is finished.
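A minimal sketch of where the sleep would go (the 10 ms value is an arbitrary choice):
    def readkey():
        while 1:
            out = ''
            while ser.inWaiting() > 0:
                out += ser.read(1)
                # ... handle `out` as before ...
            time.sleep(0.01)  # yield the CPU between polls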
|
Waiting for xml file writing to finish in python
Question: I have written this script to write data to an xml file. It writes
correctly, but I want to wait for the XML write to finish before executing
other code, because that code depends on the written data. How can I wait for
the write to finish?
import xml.etree.ElementTree as ET

tree = ET.parse('output.xml')
root = tree.getroot()
for country in root.findall('country'):
    country.attrib['value'] = data
tree.write('output.xml')
Above code is example code and `data` can be in more amount, so it will take
time to write. some body let me know how to do that.
Answer: [Python File write()
Method](http://www.tutorialspoint.com/python/file_write.htm) states
> The method write() writes a string str to the file. There is no return
> value. Due to buffering, the string may not actually show up in the file
> until the flush() or close() method is called.
Thus, if you do not want to close the file after the write (so that other
functions can open and read it) you can put
tree.flush() # does not work because no flush method in ElementTree
immediately after the write.
I see that the manual shows the write explicitly into the file name as opposed
to a file descriptor. However, this seems to be done in interactive mode.
If the script can handle file descriptors, then the flush() or close() method
on the file descriptor would be useful.
An example shown elsewhere gives an example of writing to a file using file
descriptors. This would open and close the file as well as allowing the flush.
The exact method is left up to you (:-)
text = ET.tostring(reply)
self.wfile.write(text)
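A minimal sketch of the file-descriptor approach for this case (ElementTree's write() also accepts an open file object):
    import os

    with open('output.xml', 'wb') as f:
        tree.write(f)
        f.flush()
        os.fsync(f.fileno())  # optionally force the data to disk
    # the with-block closes the file, so later code can safely read it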
|
C - Signed and Unsigned integer
Question: I'm delving into C because I need to use the ctypes library in Python to allow
for keyboard control. I'm trying to learn how the following code works:
import ctypes
import time

SendInput = ctypes.windll.user32.SendInput

# C struct redefinitions
PUL = ctypes.POINTER(ctypes.c_ulong)

class KeyBdInput(ctypes.Structure):
    _fields_ = [("wVk", ctypes.c_ushort),
                ("wScan", ctypes.c_ushort),
                ("dwFlags", ctypes.c_ulong),
                ("time", ctypes.c_ulong),
                ("dwExtraInfo", PUL)]

class HardwareInput(ctypes.Structure):
    _fields_ = [("uMsg", ctypes.c_ulong),
                ("wParamL", ctypes.c_short),
                ("wParamH", ctypes.c_ushort)]

class MouseInput(ctypes.Structure):
    _fields_ = [("dx", ctypes.c_long),
                ("dy", ctypes.c_long),
                ("mouseData", ctypes.c_ulong),
                ("dwFlags", ctypes.c_ulong),
                ("time",ctypes.c_ulong),
                ("dwExtraInfo", PUL)]

class Input_I(ctypes.Union):
    _fields_ = [("ki", KeyBdInput),
                ("mi", MouseInput),
                ("hi", HardwareInput)]

class Input(ctypes.Structure):
    _fields_ = [("type", ctypes.c_ulong),
                ("ii", Input_I)]

# Actual functions
def PressKey(hexKeyCode):
    extra = ctypes.c_ulong(0)
    ii_ = Input_I()
    ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0, 0, ctypes.pointer(extra) )
    x = Input( ctypes.c_ulong(1), ii_ )
    SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))

def ReleaseKey(hexKeyCode):
    extra = ctypes.c_ulong(0)
    ii_ = Input_I()
    ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0x0002, 0, ctypes.pointer(extra) )
    x = Input( ctypes.c_ulong(1), ii_ )
    SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))

def AltTab():
    '''
    Press Alt+Tab and hold Alt key for 2 seconds in order to see the overlay
    '''
    PressKey(0x012) #Alt
    PressKey(0x09) #Tab
    ReleaseKey(0x09) #~Tab
    time.sleep(2)
    ReleaseKey(0x012) #~Alt

if __name__ =="__main__":
    AltTab()
The part I'm not understanding is related to signed and unsigned integers:
int has a range of -32768 to 32767
unsigned int has a range of 0 to 65535
I read: "The total range of numbers that can be displayed by a 2 byte number
is 2^16, since you have 16 bits that can represent a number. 2^16 is the same
as 65536, which since we count from 0, is the same as 0 - 65535. This
obviously matches up with the values for an unsigned int, so you can see that
this is how that type operates."
This seems to make sense, but there's one thing I don't understand:
1 byte = 8 bits; 2 bytes = 16 bits
so why is a 2 byte number referred to as 2^16 rather than 2^8?
Answer: A 2 byte number has 16 bits (2 x 8 bits). The function that tells you the
highest unsigned number that can be represented by a given number of bits is
`2^n-1`, so for instance 8 bits can represents numbers 0 to 255, 16 bits 0 to
65,535, etc.
The reason for this is simple. Consider the first number that _cannot_ be
represented by (say) 16 bits. That would be 1 with 16 zeros, as that's the
smallest binary number with 17 digits. That's `2^16`. So the largest number
that can be represented that way is `2^16-1`.
Also note that the size of `int` in C will depend on your C compiler. It may
not always be 2 bytes long.
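You can see both ranges directly with the ctypes types from the script above:
    import ctypes

    print(ctypes.c_ushort(65535).value)  # 65535, the largest 16-bit unsigned value
    print(ctypes.c_ushort(65536).value)  # 0: 2^16 wraps around
    print(ctypes.c_short(32768).value)   # -32768: wraps into the signed range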
|
More problems extracting frames from GIFs
Question: Following my previous question ([Gifs opened with python have broken
frames](http://stackoverflow.com/questions/21990868/gifs-opened-with-python-
have-broken-frames)) I now have code that works sometimes.
For example, this code
from PIL import Image

img = Image.open('pigs.gif')
counter = 0
collection = []
current = img.convert('RGBA')
while True:
    try:
        current.save('original%d.png' % counter, 'PNG')
        img.seek(img.tell()+1)
        current = Image.alpha_composite(current, img.convert('RGBA'))
        counter += 1
    except EOFError:
        break
…works on most GIFs perfectly, but on others it produces weird results. For
example, when applied to this 2-frame GIF:

It produces these two frames:
 
The first one is ok, the second one not so much.
What now?
Answer: Sounds like you want to do this:
while True:
    try:
        current.save('original%d.gif' % counter)
        img.seek(img.tell()+1)
        current = img.convert('RGBA')
        counter += 1
    except EOFError:
        break
|
Drawing ellipses on matplotlib basemap projections-How to extend the basemap class
Question: I am new to python and matplotlib (and stackoverflow). Can you please tell me
how do I extend my basemap class with this ellipse function? The original post
"Drawing ellipses on matplotlib basemap projections" from regeirk is exact
what I need but I do not know how to extend the class.
Here is the code from regeirk: [Drawing ellipses on matplotlib basemap
projections](http://stackoverflow.com/questions/8161144/drawing-ellipses-on-
matplotlib-basemap-projections)
I do not know how to implement it extending the basemap class. I have never
done this before.
I hope I provided all the info.
Thanks.
Answer: With python, you can extend a class without needing to modify the Basemap
sourcecode itself. Simply importing the following code (maybe by just having
it inline in your script) will modify the functionality of a class (In this
case, we may as well modify the Basemap class):
from mpl_toolkits.basemap import Basemap

def ellipse(self, x0, y0, a, b, n, ax=None, **kwargs):
    print 'Hello world!'

Basemap.ellipse = ellipse
Now, when you create a Basemap instance it will have the appropriate "ellipse"
method.
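For example (substitute the real drawing code from the linked answer for the print):
    m = Basemap(projection='cyl')
    m.ellipse(0, 0, 10, 5, 100)  # prints 'Hello world!' with the stub above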
See also <http://dietbuddha.blogspot.co.uk/2012/12/python-metaprogramming-
dynamically.html>
|
Python - removing everything from a string except certain characters
Question: Not sure if this question has been asked before, but I couldn't find it, so
here it is:
randomList = ["ACGT","A#$..G","..,/\]AGC]]]T"]
randomList2 = []
for i in randomList:
if i <contains any characters other than "A",C","G", or "T">:
<add a string without junk to randomList2>
How would I do all the things within <>? Thanks,
Answer:
>>> randomList = ["ACGT","A#$..G","..,/\]AGC]]]T"]
>>> import re
>>> [re.sub("[^ACGT]+", "", s) for s in randomList]
['ACGT', 'AG', 'AGCT']
`[^ACGT]+` matches one or more (`+`) characters except `ACGT`.
Some timings:
>>> import timeit
>>> setup = '''randomList = ["ACGT","A#$..G","..,/\]AGC]]]T"]
... import re'''
>>> timeit.timeit(setup=setup, stmt='[re.sub("[^ACGT]+", "", s) for s in randomList]')
8.197133132976195
>>> timeit.timeit(setup=setup, stmt='[re.sub("[^ACGT]", "", s) for s in randomList]')
9.395620040786165
Without `re`, it's faster (see @cmd's answer):
>>> timeit.timeit(setup=setup, stmt="[''.join(c for c in s if c in 'ACGT') for s in randomList]")
6.874829817476666
Even faster (see @JonClement's comment):
>>> setup='''randomList = ["ACGT","A#$..G","..,/\]AGC]]]T"]\nascii_exclude = ''.join(set('ACGT').symmetric_difference(map(chr, range(256))))'''
>>> timeit.timeit(setup=setup, stmt="""[item.translate(None, ascii_exclude) for item in randomList]""")
2.814761871275735
Also possible:
>>> setup='randomList = ["ACGT","A#$..G","..,/\]AGC]]]T"]'
>>> timeit.timeit(setup=setup, stmt="[filter(set('ACGT').__contains__, item) for item in randomList]")
4.341086316883207
|
Error using Requests in a frozen app
Question: I am trying to use the excellent requests library in a frozen app. The code
works fine when interpreted, but it stops working when I generate the dist
executable.
I tried this solution, but it is not working ([Requests library: missing file
after cx_freeze](http://stackoverflow.com/questions/15157502/requests-library-
missing-file-after-cx-freeze))
My setup.py file:
import esky.bdist_esky
from esky.bdist_esky import Executable as Executable_Esky
from cx_Freeze import setup, Executable
from myapp import VERSION
import requests.certs

packages = [
    'PIL',
    '_winreg',
    'esky',
]
includes = [
    'PySide',
    'sys',
    'os',
    'datetime',
    'threading',
    'Queue',
    'uuid',
    'requests',
]
excludes = [
    'TKinter',
    'tcl',
    'ttk',
]
include_files = ["icon-16px.ico",
                 "icon-32px.ico",
                 "logo-t-160x56.png",
                 ]

setup(
    scripts = [
        Executable_Esky(
            "myapp.py",
            gui_only = False,
            icon = "icon-16px.ico",
        ),
    ],
    data_files = include_files,
    options={"build_exe":
                 {"packages": packages,
                  "includes": includes,
                  "include_files": include_files + [(requests.certs.where(),'cacert.pem')],
                  "excludes": excludes,
                  "optimize": 2,
                  "icon": "icon-16px.ico",
                  },
             "bdist_esky": {
                 'freezer_module': "cxfreeze",
                 'includes': includes,
                 'excludes': excludes,
             },
             },
    executables = [Executable(script="myapp.py", base="Win32GUI")],
)
Traceback:
Traceback (most recent call last):
  File "C:\Users\Fernando\Dropbox\the all-seeing boss\myapp_client\testes\cx_freeze\qt_gui\interface_qt.py", line 45, in login
    r = requests.post(url, data=data)
  File "C:\Python27\lib\site-packages\requests\api.py", line 88, in post
    return request('post', url, data=data, **kwargs)
  File "C:\Python27\lib\site-packages\requests\api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 383, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 486, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python27\lib\site-packages\requests\adapters.py", line 385, in send
    raise SSLError(e)
requests.exceptions.SSLError: [Errno 185090050] _ssl.c:343: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
Answer: Or you can use:
import requests.certs
build_exe_options = {"include_files":[(requests.certs.where(),'cacert.pem')]}
|
The pygame drawing functions leave pixel-wide gaps. Why?
Question: After converting a piece of code (that animates a pattern of rectangles) from
Java to Python, I noticed that the animation that the code produced seemed
quite glitchy. I managed to reproduce the problem with a minimal example as
follows:
import pygame
SIZE = 200
pygame.init()
DISPLAYSURF = pygame.display.set_mode((SIZE, SIZE))
D = 70.9
xT = 0.3
yT = 0
#pygame.draw.rect(DISPLAYSURF, (255,0,0), (0, 0, SIZE, SIZE))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (xT, yT, D, D))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (xT+D, yT+D, D, D))
pygame.draw.rect(DISPLAYSURF, (0,0,0), (xT, yT+D, D, D))
pygame.draw.rect(DISPLAYSURF, (0,0,0), (xT+D, yT, D, D))
pygame.display.update()
This code generates the following image:

Notice that the squares don't line up perfectly in the middle. Uncommenting
the commented line in the code above results in the following image, which
serves to illuminate the problem further:

It seems that there are pixel-wide gaps in the black and white pattern, even
though it can be seen in the code (by the data that is passed in the calls to
pygame.draw.rect()) that this shouldn't be the case. What is the reason for
this behaviour, and how can I fix it?
(This didn't happen in Java, [here](http://pastebin.com/52PuNjEr) is a piece
of Java code corresponding to the Python code above).
Answer: Looking at the rendered picture in an image editor, the pixel distances can be
confirmed as such:

Expanding the function calls (i.e. performing the additions manually), one can
see that the input arguments to draw the white rectangles are of the form
pygame.draw.rect(DISPLAYSURF, (255,255,255), ( 0.3, 0, 70.9, 70.9))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (71.2, 70.9, 70.9, 70.9))
Since fractions of pixels do not make sense screen-wise, the input must be
discretized in some way. Pygame (or SDL, as mentioned in the comments to the
question) seems to choose truncating, which in practice transforms the drawing
commands to:
pygame.draw.rect(DISPLAYSURF, (255,255,255), ( 0, 0, 70, 70))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (71, 70, 70, 70))
which corresponds to the dimensions in the rendered image. If AWT draws it
differently, my guess is that it uses rounding (of some sort) instead of
truncating. This could be investigated by trying different rendering inputs,
or by digging in the documentation.
If one wants pixel perfect rendering, using floating points as input is not
well defined. If one keeps to the integers, the result should be independent
of renderer, though.
* * *
**EDIT**: I'll expand a bit in case anyone else finds this, since I couldn't find
much info on this behavior apart from the source code.
The function call in question takes the following input arguments
([documentation](http://www.pygame.org/docs/ref/draw.html#pygame.draw.rect)):
pygame.draw.rect(Surface, color, Rect, width=0)
where `Rect` is a specific object defined by a top-left coordinate, a width
and a height. By design it only handles integer attributes, since it is meant
as a low-level "this is what you see on the screen" data type. The data type
handles floats by truncating:
>>> import pygame
>>> r = pygame.Rect((1, 1, 8, 12))
>>> r.bottomright
(9, 13)
>>> r.bottomright = (9.9, 13.5)
>>> r.bottomright
(9, 13)
>>> r.bottomright = (11.9, 13.5)
>>> r.bottomright
(11, 13)
i.e., a regular `(int)` cast is done.
The `Rect` object is not meant as a "store the coordinates for my sprite"
object, but as a "this is what the screen will represent" object. Floating
points are certainly useful for the former purpose, and the designer would
probably want to keep an internal list of floats to store this information.
Otherwise, incrementing a screen position by e.g. `r.left += 0.8` (where `r`
is the `Rect` object) would never move `r` at all.
The problem in the question comes from (quite reasonably) assuming that the
right `x` coordinate of the rectangle will at least be calculated as something
like `x₂ = int(x₁ + width)`, but since the function call implicitly transforms
the input tuple to a `Rect` object before proceeding, and since `Rect` will
truncate its input arguments, it will instead calculate it as `x₂ = int(x₁) +
int(width)`, which is not always the same for float input.
To create a `Rect` using rounding rules, one could e.g. define a wrapper like:
def rect_round(x1, y1, w, h):
"""Returns pygame.Rect object after applying sane rounding rules.
Args:
x1, y1, w, h:
(x1, y1) is the top-left coordinate of the rectangle,
w is width,
h is height.
Returns:
pygame.Rect object.
"""
r_x1 = round(x1)
r_y1 = round(y1)
r_w = round(x1 - r_x1 + w)
r_h = round(y1 - r_y1 + h)
return pygame.Rect(map(int, (r_x1, r_y1, r_w, r_h)))
(or modified for other rounding rules) and then call the draw function as e.g.
pygame.draw.rect(DISPLAYSURF, (255,255,255), rect_round(71.2, 70.9, 70.9, 70.9))
One will never bypass the fact that the pixel by definition is the smallest
addressable unit on the screen, though, so this solution might also have its
quirks.
* * *
Related thread on the Pygame mailing list from 2005: [Suggestion: make Rect
use float
coordinates](http://osdir.com/ml/python.pygame/2005-04/msg00029.html)
|
python pandas: how to loop over dataframe and add columns
Question: I need a loop to do what this code is doing and automatically generate columns
ep1 ep2 and so on..
df['ep1'] = df.ep1.apply(lambda x: datetime.datetime(x.year,x.month,1))
df['ep2'] = df.ep1.apply(lambda x: datetime.datetime((x+datetime.timedelta(days=40)).year,(x+datetime.timedelta(days=40)).month,1))
df['ep3'] = df.ep2.apply(lambda x: datetime.datetime((x+datetime.timedelta(days=40)).year,(x+datetime.timedelta(days=40)).month,1))
where the ep vector is the first day of months between df.opdate and
df.closdate.
as a start
import pandas as pd
import datetime
d = {'closdate' : pd.Series([datetime.datetime(2014, 3, 2), datetime.datetime(2014, 2, 2)]),'opdate' : pd.Series([datetime.datetime(2014, 1, 1), datetime.datetime(2014, 1, 1)])}
df=pd.DataFrame(d)
df['ep1'] = df.opdate.apply(lambda x: x if x > datetime.datetime(2014,1,1) else datetime.datetime(2014,1,1))
df['ep1'] = df.ep1.apply(lambda x: datetime.datetime(x.year,x.month,1))
df['ep2'] = df.ep1.apply(lambda x: datetime.datetime((x+datetime.timedelta(days=40)).year,(x+datetime.timedelta(days=40)).month,1))
df['ep3'] = df.ep2.apply(lambda x: datetime.datetime((x+datetime.timedelta(days=40)).year,(x+datetime.timedelta(days=40)).month,1))
How do I loop until ep is larger than df.closdate?
Answer: Use `where` instead of `apply` and add days with `np.timedelta64`
import numpy as np
from pandas import Timestamp
months = range(1, 13)
df['ep0'] = df.opdate.where(df.opdate > Timestamp('20140101'), Timestamp('20140101'))
for month in months:
colname = 'ep%d' % month
prev_colname = 'ep%d' % (month - 1)
df[colname] = df[prev_colname] + np.timedelta64(40, 'D')
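The question also asks how to stop once ep passes df.closdate; one way is to
mask the generated columns after the loop above. A sketch where NaT marks
months falling beyond each row's close date:

    for month in months:
        colname = 'ep%d' % month
        # keep the value only while it is on or before the close date
        df[colname] = df[colname].where(df[colname] <= df.closdate)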
|
Heroku App crashes immediately with R10 and H10 errors
Question: My app runs fine locally using `foreman run`, and when I execute my
`runserver.py` file using `python runserver.py`. When I push it to Heroku, it
just crashes. I even made changes to my Procfile: `web: python runserver.py
${PORT}` so that Heroku will bind to a port number, but to no avail... I've
been at this problem for almost 3 days now, first with my `Procfile` and now
with Heroku... any help would be gladly appreciated. Additionally, I am using
Python with the Flask framework for this project -- I came across Heroku
forward, but it seems to be only for RoR applications.
2014-02-24T02:24:50.146153+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2014-02-24T02:24:51.323561+00:00 heroku[web.1]: Process exited with status 137
2014-02-24T02:24:51.333621+00:00 heroku[web.1]: State changed from starting to crashed
2014-02-24T02:24:51.334368+00:00 heroku[web.1]: State changed from crashed to starting
2014-02-24T02:24:55.793531+00:00 heroku[web.1]: Starting process with command `python runserver.py`
2014-02-24T02:24:57.117683+00:00 app[web.1]: * Running on http://127.0.0.1:5000/
2014-02-24T02:24:57.117683+00:00 app[web.1]: * Restarting with reloader
2014-02-24T02:23:43.987388+00:00 heroku[api]: Deploy c55f7b6 by shaunktw@gmail.com
2014-02-24T02:23:43.987478+00:00 heroku[api]: Release v8 created by shaunktw@gmail.com
2014-02-24T02:25:56.204701+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2014-02-24T02:25:56.204929+00:00 heroku[web.1]: Stopping process with SIGKILL
2014-02-24T02:25:57.495657+00:00 heroku[web.1]: Process exited with status 137
Procfile:
web: python runserver.py ${PORT}
runserver.py:
from intro_to_flask import app
app.run(debug=True)
Answer: I found the answer to this issue... essentially I had to bind a port and
specify the host that I am using. I modified my runserver.py file as follows:
import os
from intro_to_flask import app
port = int(os.environ.get("PORT", 5000))
app.run(debug=True, host='0.0.0.0', port=port)
It's probably not the most elegant way of doing it, but it works.
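Since the script now reads the port from the environment, the Procfile entry
from the question can, I believe, be simplified to:

    web: python runserver.py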
|
Scipy expit: Unexpected behavour. NaNs
Question: Noticed some _nan_'s appearing unexpectedly in my data (and propagating,
NaN-ing everything they touched). Did some careful investigation and
produced a minimal working example:
>>> import numpy
>>> from scipy.special import expit
>>> expit(709)
1.0
>>> expit(710)
nan
Expit is the inverse logit. [Scipy documentation
here](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.expit.html).
Which tells us: `expit(x) = 1/(1+exp(-x))`
So `1+exp(-709)==1.0` so that `expit(709)=1.0` Seems fairly reasonable,
rounding `exp(-709)==0`.
However, what is going on with `expit(710)`?
`expit(710)==nan` implies that `1+exp(-710)==0`, which implies: `exp(-710)=-1`
which is not right at all.
**What is going on?**
I am fixing it with:
    import numpy as np

    def sane_expit(x):
        x = np.minimum(x, 700 * np.ones_like(x))  # cap at 700 to avoid overflow
        return expit(x)
But this is going to be a bit slower, because extra op, and the python
overhead.
I am using numpy 1.8.0 and scipy 0.13.2
Answer: > What is going on?
The function is evidently not coded to deal with such large inputs, and
encounters an overflow during the internal calculations.
The significance of the number 710 is that `math.exp(709)` can be represented
as `float`, whereas `math.exp(710)` cannot:
In [27]: import math
In [28]: math.exp(709)
Out[28]: 8.218407461554972e+307
In [29]: math.exp(710)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
----> 1 math.exp(710)
OverflowError: math range error
Might be worth [filing a bug against SciPy](http://www.scipy.org/scipylib/bug-
report.html).
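Until a fix lands, a workaround that avoids the internal overflow entirely is
to compute the logistic branch-wise. A sketch (not SciPy's implementation,
just the standard numerically stable formulation):

    import numpy as np

    def stable_expit(x):
        # For x >= 0, 1/(1+exp(-x)) only exponentiates non-positive values;
        # for x < 0, exp(x)/(1+exp(x)) does the same, so exp() never overflows.
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.empty_like(x)
        pos = x >= 0
        out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
        ex = np.exp(x[~pos])
        out[~pos] = ex / (1.0 + ex)
        return out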
|
Running python files from command prompt
Question: I have written a python program in eclipse that imports the mechanize module.
It works perfectly there. When I run the .py file from the command prompt, it
shows this error: "No module named mechanize". How do I rectify this?
Answer: Make sure that Eclipse and the command prompt are using the same Python
version. Simply typing `$ python` on the command line shows you the version
you are using there.
The mechanize module must be in your site-packages folder in order for python
to find it.
(C:\Python\Lib\site-packages)
If the module is not in your site-packages folder then you can install it as
follows:
Download the source code from
<http://pypi.python.org/packages/source/m/mechanize/mechanize-0.2.5.tar.gz>
Now extract and install the package (these are the commands on Linux; on Mac or
Windows the steps might be slightly different):

    $ tar zxvf mechanize-0.2.5.tar.gz
    $ sudo python setup.py install
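Alternatively, if you have pip available, installing straight from PyPI is
simpler:

    $ pip install mechanize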
|
Django: Cron job is not executing python script
Question: I am using csvimporter to import a csv file into a Django model. I have 2
scripts - one Python script to import the file:
import subprocess
subprocess.call("python manage.py csvimport --model='csv_reader.csv' /Users/path_to_csv", shell = True)
And a django script to delete objects from the model:
from csv_reader.models import *
csv.objects.all().delete()
Both of the scripts work fine when run manually from the shell. But when I add
a cron job to execute the scripts, it doesn't work, although they show up in
the cron log:
Feb 25 10:21:00 Liubous-MacBook-Pro.local /usr/sbin/cron[43055]: (yudasinal1) CMD (/Users/path_to_script)
I tried adding a cronjob like this:
DJANGO_SETTINGS_MODULE=project.settings
* * * * * /Users/path_to_csv/test_subprocess.py
Where in the actual script I added `#!/usr/bin/env python` at the top of the
file.
As well as I tried adding this cronjob:
DJANGO_SETTINGS_MODULE=project.settings
* * * * * python /Users/path_to_csv/test_subprocess.py
All of them are logged into cron log, but unfortunately, the actual functions
are not being executed.
Any help would be appreciated!
Answer: # Step 1: Add Shebang to script
Unix scripts use a line called
"[Shebang](https://en.wikipedia.org/wiki/Shebang_%28Unix%29)"
So your first line should look like this:
#!/usr/bin/env python
# Step 2: Make script executable
1. Go to the folder with your script `myscript.py`
2. Execute `chmod +x myscript.py` in console.
3. Verify that it is executable by executing it with `./myscript.py`.
# Step 3: Add it to CRON
1. Type `crontab -e` in terminal.
2. Add a line like this:
30 13 * * * /home/yourusername/myscript.py
3. Verify with `crontab -l` that everything worked.
(see [cyberciti.biz](http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-
under-linux-or-unix-oses/) for more information)
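One detail that often bites Django scripts under cron: cron runs with a minimal
environment and its own working directory, so relative paths and `manage.py`
lookups that work in your shell can fail silently. A sketch of a crontab entry
that handles both (the project path is an assumption), with output captured
for debugging:

    DJANGO_SETTINGS_MODULE=project.settings
    * * * * * cd /Users/path_to_project && /usr/bin/python test_subprocess.py >> /tmp/cron.log 2>&1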
# Debugging python scripts
import datetime
import getpass
now = datetime.datetime.now()
# Open the log file in append mode
with open("/home/user/myscript.log", "a") as f:
    f.write("Script started at %i.%i.%i (%i:%i:%i) by %s\n" % (now.day, now.month, now.year, now.hour, now.minute, now.second, getpass.getuser()))
[...]
with open("/home/user/myscript.log", "a") as f:
    f.write("File 'xy' was opened.\n")
|
How to process excel file headers using pandas/python
Question: I am trying to read
<https://www.whatdotheyknow.com/request/193811/response/480664/attach/3/GCSE%20IGCSE%20results%20v3.xlsx>
using pandas.
Having saved it my script is
import sys
import pandas as pd
inputfile = sys.argv[1]
xl = pd.ExcelFile(inputfile)
# print xl.sheet_names
df = xl.parse(xl.sheet_names[0])
print df.head()
However this does not seem to process the headers properly as it gives
GCSE and IGCSE1 results2,3 in selected subjects4 of pupils at the end of key stage 4 Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 Unnamed: 6 Unnamed: 7 Unnamed: 8 Unnamed: 9 Unnamed: 10
0 Year: 2010/11 (Final) NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 Coverage: England NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 1. Includes International GCSE, Cambridge Inte... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 2. Includes attempts and achievements by these... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
All of this should be treated as comments.
If you load the spreadsheet into libreoffice, for example, you can see that
the column headings are correctly parsed and appear in row 15 with drop down
menus to let you select the items you want.
How can you get pandas to automatically detect where the column headers are
just as libreoffice does?
Answer: `pandas` is (are?) processing the file correctly, and exactly the way you
asked it (them?) to. You didn't specify a `header` value, which means that it
defaults to picking up the column names from the 0th row. The first few rows
of cells aren't comments in some fundamental way, they're just not cells
you're interested in.
Simply tell `parse` you want to skip some rows:
>>> xl = pd.ExcelFile("GCSE IGCSE results v3.xlsx")
>>> df = xl.parse(xl.sheet_names[0], skiprows=14)
>>> df.columns
Index([u'Local Authority Number', u'Local Authority Name', u'Local Authority Establishment Number', u'Unique Reference Number', u'School Name', u'Town', u'Number of pupils at the end of key stage 4', u'Number of pupils attempting a GCSE or an IGCSE', u'Number of students achieving 8 or more GCSE or IGCSE passes at A*-G', u'Number of students achieving 8 or more GCSE or IGCSE passes at A*-A', u'Number of students achieving 5 A*-A grades or more at GCSE or IGCSE'], dtype='object')
>>> df.head()
Local Authority Number Local Authority Name \
0 201 City of london
1 201 City of london
2 202 Camden
3 202 Camden
4 202 Camden
Local Authority Establishment Number Unique Reference Number \
0 2016005 100001
1 2016007 100003
2 2024104 100049
3 2024166 100050
4 2024196 100051
School Name Town \
0 City of London School for Girls London
1 City of London School London
2 Haverstock School London
3 Parliament Hill School London
4 Regent High School London
Number of pupils at the end of key stage 4 \
0 105
1 140
2 200
3 172
4 174
Number of pupils attempting a GCSE or an IGCSE \
0 104
1 140
2 194
3 169
4 171
Number of students achieving 8 or more GCSE or IGCSE passes at A*-G \
0 100
1 108
2 SUPP
3 22
4 0
Number of students achieving 8 or more GCSE or IGCSE passes at A*-A \
0 87
1 75
2 0
3 7
4 0
Number of students achieving 5 A*-A grades or more at GCSE or IGCSE
0 100
1 123
2 0
3 34
4 SUPP
[5 rows x 11 columns]
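For what it's worth, recent pandas versions let you do the same thing in a
single call (a minimal sketch):

    >>> import pandas as pd
    >>> df = pd.read_excel("GCSE IGCSE results v3.xlsx", skiprows=14)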
|
How to handle a matplotlib pick Artist event within a dynamically created QMdiAreaSubWindow? (Example Given - Partially Working!)
Question: I am trying to create a Qt4 application using Python and matplotlib, but I got
stuck on a behaviour which is not quite clear to me.
This application has a QMdiArea which holds the dynamically created graphs in
individual subWindows. Everything seems to work fine except for the Artist
picking events, which are defined while plotting the data. Apparently the
"pick_event" mpl_connection is wiped out when creating a QMdiAreaSubWindow
instance within the MainWindow class.
Curiously, the picking works if the same piece of code is executed outside the
MainWindow class (see the bottom of the listing).
I wrote a simplified version of my code in order to reproduce the behaviour;
maybe someone could give me a clue about what I am doing wrong.
Cheers!
#!/usr/bin/python
import sys
from PyQt4.QtGui import QWidget, QPushButton, QMainWindow, QMdiArea, QVBoxLayout, QApplication
from PyQt4.QtCore import Qt
from pylab import *
from matplotlib.backends.backend_qt4agg import (
FigureCanvasQTAgg as FigureCanvas,
NavigationToolbar2QTAgg as NavigationToolbar)
from matplotlib.backend_bases import key_press_handler
class MyMainWindow(QMainWindow):
""" Defines a simple MainWindow with a QPushButton that plots a Random Wave Fucntion
which must be shown in a Window within a QMdiArea Widget.
"""
def __init__(self, parent=None):
"""
"""
super(MyMainWindow,self).__init__(parent)
self.setWidgets()
def setWidgets(self, ):
""" Createsthe QPushButton and QMdiArea Widgets, organising them in
QVBoxLayout as a central Widget of the MainWindow
"""
vBox = QVBoxLayout()
mainFrame = QWidget()
self._plotGraphButton = QPushButton("Plot Graph")
self._plotGraphButton.clicked.connect(self.plotRandom)
self._mdiArea = QMdiArea()
vBox.addWidget(self._plotGraphButton)
vBox.addWidget(self._mdiArea)
mainFrame.setLayout(vBox)
self.setCentralWidget(mainFrame)
# This is the function called when the Plot Graph Button is pressed
#and where the Picking event does not work.
# When the button is pressed a new window with the plot is shown, but
#it is not possible to drag the rectangle patch with the mouse.
def plotRandom(self, ):
""" Generates and Plots a random wave function (+noise) embedding into a
QMdiAreaSubWindow.
"""
print "Plotting!!"
x = linspace(0,10,1000)
w = rand(1)*10
y = 100*rand(1)*sin(2*pi*w*x)+rand(1000)
p = PlotGraph(x,y)
child = self._mdiArea.addSubWindow(p.plotQtCanvas())
child.show()
class PlotGraph(object):
"""
"""
def __init__(self, x,y):
""" This class plots the data and encapsulates the figure instance in a FigureCanvasQt4Agg,
which can be used to create a QMdiArea SubWindow.
A rectangle patch is added to the plot and linked to the methods that
can drag it horizontally in the graph.
Arguments:
- `x`: Data
- `y`: Data
"""
self._x = x
self._dx = x[1]-x[0]
self._y = y
def _createPlotWidget(self, ):
""" Creates a figure and a NavigationBar organising them vertically into a QWidget,
which can be used by a QMdiArea.addSubWindow method.
"""
self._mainFrame = QWidget()
self._fig = figure(facecolor="white")
self._canvas = FigureCanvas(self._fig)
self._canvas.setParent(self._mainFrame)
self._canvas.setFocusPolicy(Qt.StrongFocus)
# Standard NavigationBar and button press management
self._mplToolbar = NavigationToolbar(self._canvas, self._mainFrame)
self._canvas.mpl_connect('key_press_event', self.on_key_press)
# Layouting
vbox = QVBoxLayout()
vbox.addWidget(self._canvas) # the matplotlib canvas
vbox.addWidget(self._mplToolbar)
self._mainFrame.setLayout(vbox)
def plotQtCanvas(self, ):
""" Plots data using matplotlib, adds a draggable Rectangle and connects the dragging
methods to the mouse events
"""
self._createPlotWidget()
ax = self._fig.add_subplot(111)
ax.plot(self._x,self._y)
ax.set_xlim(self._x[0],self._x[-1])
ax.set_ylim(-max(self._y)*1.1,max(self._y)*1.1)
xlim = ax.get_xlim()
ylim = ax.get_ylim()
wd = (xlim[1]-xlim[0])*0.1
ht = (ylim[1]-ylim[0])
rect = Rectangle((xlim[0],ylim[0]),wd,ht,alpha=0.3,color="g",picker=True)
ax.add_patch(rect)
# Connecting Events to Rectangle dragging methods
self._canvas.mpl_connect("pick_event",self.on_pick)
self._canvas.mpl_connect("button_release_event",self.on_release)
return self._mainFrame
def on_pick(self,event):
""" Manages the Artist Picking event. This method register which
Artist was picked and connects the rectOnMove method to the mouse
motion_notify_event SIGNAL.
Arguments:
- `event`:
"""
if isinstance(event.artist, Rectangle):
rectWd = event.artist.get_width()
if event.mouseevent.button == 1:
self._dragged = event.artist
self._id = self._canvas.mpl_connect("motion_notify_event",self.rectOnMove)
def rectOnMove(self, event):
""" After being picked, updates the new position of the Artist.
Arguments:
- `event`:
"""
rectWd = self._dragged.get_width()
if event.xdata:
i = event.xdata
n2 = rectWd/2.0
if i>=n2 and i<(self._x[-1]-n2):
self._dragged.set_x(i-n2)
self._canvas.draw()
def on_release(self,event):
""" When the mouse button is released, simply disconnect the
SIGNAL motion_notify_event and the rectOnMove method.
Arguments:
- `event`:
"""
self._canvas.mpl_disconnect(self._id)
def on_key_press(self, event):
# implement the default mpl key press events described at
# http://matplotlib.org/users/navigation_toolbar.html#navigation-keyboard-shortcuts
key_press_handler(event, self._canvas, self._mplToolbar)
if __name__ == '__main__':
qApp = QApplication(sys.argv)
MainWindow = MyMainWindow()
# By calling the piece of code bellow everything works fine and the
# the rectangle patch can be dragged, as expected.
# This piece of code "theoretically" does the same thing as the
# method plotRandom() defined in the class MyMainWindow.
print "Plotting!!"
x = linspace(0,10,1000)
w = rand(1)*10
y = 100*rand(1)*sin(2*pi*w*x)
####################################################################
p = PlotGraph(x,y)
child = MainWindow._mdiArea.addSubWindow(p.plotQtCanvas())
child.show()
MainWindow.show()
sys.exit(qApp.exec_())
Answer: This does not work:
p = PlotGraph(x,y)
child = MainWindow._mdiArea.addSubWindow(p.plotQtCanvas())
child.show()
p = PlotGraph(x,y)
child = MainWindow._mdiArea.addSubWindow(p.plotQtCanvas())
child.show()
This works:
p = PlotGraph(x,y)
child = MainWindow._mdiArea.addSubWindow(p.plotQtCanvas())
child.show()
p2 = PlotGraph(x,y)
child2 = MainWindow._mdiArea.addSubWindow(p2.plotQtCanvas())
child2.show()
Edit: Solution:
do this in `setWidgets`:
> self.plotList = []
and this in `plotRandom`:
> self.plotList.append(p)
Keeping a reference prevents the `PlotGraph` instance from being garbage-collected
as soon as `plotRandom` returns; matplotlib only holds weak references to
bound-method callbacks, so once `p` goes out of scope the pick handler silently
disappears.
And this solves another problem I encountered: not every mouse click will
trigger a picker event, so you can't disconnect a connection that was never
made. Add this in the `on_release` function:
    try:
        self._canvas.mpl_disconnect(self._id)
    except AttributeError:
        pass
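Applied to the question's code, the fix looks like this (a sketch; only the
reference-keeping lines are new):

    def setWidgets(self):
        # ... existing widget setup ...
        self.plotList = []  # holds PlotGraph instances so they are not collected

    def plotRandom(self):
        x = linspace(0, 10, 1000)
        w = rand(1) * 10
        y = 100 * rand(1) * sin(2 * pi * w * x) + rand(1000)
        p = PlotGraph(x, y)
        self.plotList.append(p)  # keep a strong reference; callbacks now survive
        child = self._mdiArea.addSubWindow(p.plotQtCanvas())
        child.show()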
|
Python zipfile library fix
Question: I've got a problem. I've made a simple zip file with password 12345. Now, when
I try to recover the password using brute-force, zipfile picks the wrong
password. It says it found password aaln0, but the extracted file is completely
empty. Is there a way to 'fix' the library? Or is there a replacement for it?
Thanks
Program code:
#!/usr/bin/env python
import itertools
import threading
import argparse
import time
import zipfile
import string
global found
found = False
def extract_zip(zFile, password):
"""
Extract archive with password
"""
try:
zFile.extractall(pwd=password)
write("[+] Password found:", password, "\n")
global found
found = True
except Exception, e:
pass
def write(*args):
print "[%s] %s" % (time.ctime(), " ".join(args))
def main_loop(zFile, length):
"""
Main loop
"""
write("[*] Python Brute-Force zip cracker")
write("[*] Zipfile: %s; password length: %s" % (zFile, length))
try:
zfile = zipfile.ZipFile(zFile)
except:
write("Cannot open zip file")
exit(1)
for combo in itertools.imap(''.join, itertools.product(string.letters + string.digits,
repeat=length)):
if found:
break
thread = threading.Thread(target=extract_zip, args=(zfile, combo))
thread.start()
if not found:
write("[-] Password not found")
def main():
"""
Main function
"""
parser = argparse.ArgumentParser(usage="brute-force-zipcracker.py -f <zipfile> -l <password length>")
parser.add_argument("-f", "--zipfile", help="specify zip file", type=str)
parser.add_argument("-l", "--length", type=int, help="password length", default=5)
args = parser.parse_args()
if (args.zipfile == None):
print parser.usage
exit(0)
main_loop(args.zipfile, args.length)
if __name__ == '__main__':
main()
Answer: First of all, you're doing:
for combo in itertools.imap(...):
if found:
break
thread = ...
thread.start()
if not found:
...
Just look at it for a second.
`found` is defined in a thread, but you're starting multiple threads and you
globally hope that it will be set in one of the threads. How can you ensure
that the correct thread has done the proper job? What if there's a false
positive in one of the threads and you don't bother to retrieve the value from
each individual thread? Careful with your threads!
Secondly, if the threads don't finish in time for your `combo` loop you'll end
up in `if not found` because the threads haven't finished running yet to find
what you're looking for, especially if you have a larger zip-file that takes a
few seconds to complete (a successful password would start unzipping the file
and it could take minutes, and `found` will not be set until that process is
done).
Also note that classic zip encryption only verifies a single check byte before
extraction, so roughly 1 in 256 wrong passwords will pass that initial check
and "extract" to garbage or an empty file; that is most likely what happened
with `aaln0`. And finally, it would be neat to know the parameters used to
protect this zip-file.
Edit:
You could also give us more information in the format of:
zFile.debug = 3  # debug is an attribute, not a method
zFile.testzip()
zFile.extractall(pwd=password)
And other useful things from `zipfile.ZipInfo(filename)`
## To the solution then
#!/usr/bin/env python
import itertools
import argparse
import zipfile
import string

def extract_zip(filename, password):
    try:
        zFile = zipfile.ZipFile(filename)
        zFile.extractall(pwd=password)
        return True
    except Exception:
        # A wrong password raises RuntimeError in CPython 2; corrupt data or
        # bad CRCs can raise other errors, so be permissive here.
        return False

def main_loop(filename, length):
    print "[*] Python Brute-Force zip cracker"
    print "[*] Zipfile: %s; password length: %s" % (filename, length)
    cracked = False
    for combo in itertools.imap(''.join, itertools.product(string.letters + string.digits, repeat=length)):
        cracked = extract_zip(filename, combo)
        if cracked:
            print "Yaay, your password is:", combo
            break
    if not cracked:
        print "Sorry, no luck.."

def main():
    parser = argparse.ArgumentParser(usage="brute-force-zipcracker.py -f <zipfile> -l <password length>")
    parser.add_argument("-f", "--zipfile", help="specify zip file", type=str)
    parser.add_argument("-l", "--length", type=int, help="password length", default=5)
    args = parser.parse_args()
    if args.zipfile is None:
        print parser.usage
        exit(0)
    main_loop(args.zipfile, args.length)

if __name__ == '__main__':
    main()
|
OpenCV darken oversaturated webcam image
Question: I have a (fairly cheap) webcam which produces images which are far lighter
than they should be. The camera does have brightness correction - the
adjustments are obvious when moving from light to dark - but it is
consistently far too bright.
I am looking for a way to reduce the brightness without iterating over the
entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or
better, is there a standard way of sending hints to a webcam to reduce the
brightness?
import cv2
# create video capture
cap = cv2.VideoCapture(0)
window = cv2.namedWindow("output", 1)
while True:
# read the frames
_,frame = cap.read()
cv2.imshow("output",frame)
if cv2.waitKey(33)== 27:
break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
Answer: I forgot Raspberry Pi is just running a regular OS. What an awesome machine.
Thanks for the code which confirms that you just have a regular cv2 image.
Vectorized scaling (without looping over each pixel in Python) should be simple.
Below just scales every pixel. It would be easy to add a few lines to
normalize the image if it has a major offset.
import numpy
#...
scale = 0.5 # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
#...
Does that look like the start of what you want?
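As for sending hints to the webcam itself: if the driver exposes it,
`VideoCapture.set` can ask the camera to lower brightness or exposure
directly. Support varies a lot by backend and camera, so treat this as a
sketch using the OpenCV 2.x property names:

    import cv2

    cap = cv2.VideoCapture(0)
    # Many V4L2 drivers expect normalized 0.0-1.0 values; cheap cameras may
    # ignore these settings entirely.
    cap.set(cv2.cv.CV_CAP_PROP_BRIGHTNESS, 0.3)
    cap.set(cv2.cv.CV_CAP_PROP_EXPOSURE, -4)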
|
Python, Generating random string of brackets
Question: I am looking to generate random lengths and patterns of square brackets for
example, [] ][ [] ][ [] [[ ]] []
I have so far managed to get my program to generate brackets randomly, but
randomly in terms of how many times it generates them, so currently my program
is giving me results such as,
[] [] [] [] [] []
[] [] []
[] [] [] [] []
So there is no randomness within the brackets, only randomness in the number
of brackets displayed.
I want to know how I can make the order of the brackets random AS WELL as the
amount of brackets on show.
Here is my code so far,
import random
import string
def randomGen(N):
return random.randint(1,N)
char1 = '['
char2 = ']'
finalist = []
newList = []
newList2 = []
newValue = randomGen(99)
for i in range(newValue):
newList = char1
newList2 = char2
finalist.append(newList + newList2)
for everChar in finalist:
print everChar,
Thanks.
Answer: You could use `random.sample` to select the index for where to place, say,
left brackets. Then place right-brackets everywhere else:
In [119]: import random
In [122]: N = 10
In [125]: idx = set(random.sample(range(N), N//2))
In [126]: idx
Out[126]: {0, 1, 4, 5, 7}
In [127]: ''.join(['[' if i in idx else ']' for i in range(N)])
Out[127]: '[[]][[][]]'
Given your examples, I assumed you want an equal number of left and right
brackets. If not, use jonrsharpe's solution.
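Putting it together with a random length as well, a sketch building on the
same idea:

    import random

    def random_brackets(max_pairs=49):
        # Pick a random even length, choose which positions get '[', and
        # fill the rest with ']' so the counts stay equal.
        n = 2 * random.randint(1, max_pairs)
        idx = set(random.sample(range(n), n // 2))
        return ''.join('[' if i in idx else ']' for i in range(n))

    print random_brackets()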
|
What is the canonical way of handling sys arguments in Python?
Question: Let's say I want to make a hashing script:
### some code here
import hashlib

def hashlib_based(path, htype='md5', block_size=2**16):
    hash = hashlib.new(htype)  # safer than eval(htype)
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(block_size), ''):
            hash.update(block)  # update one object; don't re-create it per block
    return hash.hexdigest()
### some code here
As you can see, I have the opportunity to use different flags to allow me to
change the hash type or the block size when I call the script from the command
line (for example `./myscript.py -sha1 -b 512 some_file.ext`). The thing is, I
don't have any clue how I should do this in order to keep my code as clean
and readable as possible. How do I deal with `sys.argv`?
First of all, how do I check if the user uses the correct flags? I need to do
that in order to print out a **usage** message. Do I make a list with all the
flags, then I check if the user uses one that is in that list?
Should I do all these things inside `main()` or should I do place them in a
different function?
Should I construct my flags with a hyphen-minus in front of them (like this:
-a, -b) or without one? To check if a certain flag is present in sys.argv, do
I simply do something like:
if '-v' in sys.argv:
verbose = True
?
Because sys.argv has indexes, what is the best way to ignore the order of the
flags - or in other words, should `./myscript.py -a -b` be the same as
`./myscript.py -b -a`? While it certainly makes the job easier for the common
user, is it common practice to do so?
[I saw something similar but for
C#.](http://stackoverflow.com/questions/491595/best-way-to-parse-command-line-
arguments-in-c?rq=1) Is there a similar concept in Python? The thing is, as
simple as these things are, they get out of hands quickly - for me at least. I
end up doing a mess. What is your approach to this problem?
Answer: For _really simple_ use cases, such as checking the presence of one argument,
you can do a check like you're showing, i.e.:
if '-v' in sys.argv: ...
which is the quick'n dirty way of checking arguments. But once your project
gets a bit more serious, you definitely need to use an argument parsing
library.
And there are a few ones to handle argument parsing: there is the now
deprecated `getopt` (I won't give a link), the most common one is
[argparse](http://docs.python.org/3.3/library/argparse.html) which is included
in any python distribution.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-a', '--a-long', help='a help')
parser.add_argument('-b', '--b-long', help='b help')
args = parser.parse_args()
then you can call your script as `script -a -b` or `script -b -a`, which are
equivalent. And you've got `script -h` for free! :-)
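Applied to the hashing script from the question, an `argparse` version might
look like this (the option names are my own choice):

    import argparse

    parser = argparse.ArgumentParser(description='Hash a file.')
    parser.add_argument('path', help='file to hash')
    parser.add_argument('-t', '--type', default='md5',
                        choices=['md5', 'sha1', 'sha256'],
                        help='hash algorithm to use')
    parser.add_argument('-b', '--block-size', type=int, default=2**16,
                        help='read block size in bytes')
    args = parser.parse_args()
    print hashlib_based(args.path, args.type, args.block_size)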
Though, my preference is now over [docopt](http://docopt.org), imho, which is
way simpler and more elegant, for the same example:
"""
My script.
usage:
myscript -a | --along
myscript -b | --blong
Options:
-a --along a help
-b --blong b help
"""
from docopt import docopt
arguments = docopt(__doc__, version='myscript 1.0')
print(arguments)
HTH
|
Create pdf with tooltips in python
Question: This a python copy of the popular and highly upvoted [Create pdf with tooltips
in R](http://stackoverflow.com/questions/4691780/create-pdf-with-tooltips-
in-r) .
Simple question: Is there a way to plot a graph from python in a pdf file and
include tooltips?
Answer: You can use the matplotlib pgf backend to do that. You can then load extra
packages in the preamble; in this case I am using pdfcomment.
This is a very simple example, but I think you can go from here!
import matplotlib as mpl
mpl.use("pgf")
pgf_with_pdflatex = {
"pgf.texsystem": "pdflatex",
"pgf.preamble": [
r"\usepackage[author={me}]{pdfcomment}",
]
}
mpl.rcParams.update(pgf_with_pdflatex)
import matplotlib.pyplot as plt
plt.figure(figsize=(4.5,2.5))
plt.plot(range(5))
for i in range(5):
plt.text(i,i,r"\pdftooltip{o}{(%d,%d)}"%(i,i))
plt.savefig("tooltips.pdf")
There is a slight misplacement of the character "o", but that can be fixed
with a few tweaks. It is also much simpler than the R case.
PS: The tooltips can only be visualised with Acrobat Reader.
Hope it helps you.
|
why networkx.draw() produces nothing?
Question: I'm new to Python and I'm using IPython. I'm starting to learn about NetworkX,
but right at the starting point I've noticed that networkx.draw() is not
working. Here is my code:
import networkx as nx
g = nx.Graph()
g.add_nodes_from([1,2,3,4])
nx.draw(g)
but nothing is drawn!
Answer: I believe you could show this via PyPlot:
<http://matplotlib.org/api/pyplot_api.html>
NetworkX has some great examples on their website:
<http://networkx.github.io/documentation/latest/examples/index.html>
A similar question and answer was posted here: [Draw graph in
NetworkX](http://stackoverflow.com/questions/19212979/draw-graph-in-networkx)
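In short, `nx.draw` only draws onto the current matplotlib figure; nothing
appears until you tell matplotlib to render it. A minimal sketch:

    import matplotlib.pyplot as plt
    import networkx as nx

    g = nx.Graph()
    g.add_nodes_from([1, 2, 3, 4])
    nx.draw(g)
    plt.show()  # opens a window; use plt.savefig('graph.png') to write a file instead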
|
Creating password using Python passlib
Question: I'm trying to use the following that another user posted as an answer to a
different question:
>>> # import the hash algorithm
>>> from passlib.hash import sha256_crypt
>>> # generate new salt, and hash a password
>>> hash = sha256_crypt.encrypt("toomanysecrets")
>>> hash
But when I type `from passlib.hash import sha256_crypt` I get the following
error:
Traceback (most recent call last): File "<stdin>", line 1, in
<module> ImportError: No module named passlib.hash
>>>
I have already done `pip install passlib`. Any ideas?
Result of running: `pip install passlib`:
Downloading/unpacking passlib Downloading passlib-1.6.2.tar.gz (408kB): 408kB downloaded Running setup.py egg_info for package passlib
Installing collected packages: passlib Running setup.py install for passlib
error: could not create '/Library/Python/2.7/site-packages/passlib': Permission denied
Complete output from command /usr/bin/python -c "import setuptools;__file__='/private/var/folders/2t/1yj5qss57xz8sb7p9wymtkdr0000gn/T/pip_build_<user>/passlib/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/2t/1yj5qss57xz8sb7p9wymtkdr0000gn/T/pip-epiHNK-record/install-record.txt
--single-version-externally-managed:
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/passlib
copying passlib/__init__.py -> build/lib/passlib
copying passlib/apache.py -> build/lib/passlib
copying passlib/apps.py -> build/lib/passlib
copying passlib/context.py -> build/lib/passlib
copying passlib/exc.py -> build/lib/passlib
copying passlib/hash.py -> build/lib/passlib
copying passlib/hosts.py -> build/lib/passlib
copying passlib/ifc.py -> build/lib/passlib
copying passlib/registry.py -> build/lib/passlib
copying passlib/win32.py -> build/lib/passlib
creating build/lib/passlib/ext
copying passlib/ext/__init__.py -> build/lib/passlib/ext
creating build/lib/passlib/ext/django
copying passlib/ext/django/__init__.py -> build/lib/passlib/ext/django
copying passlib/ext/django/models.py -> build/lib/passlib/ext/django
copying passlib/ext/django/utils.py -> build/lib/passlib/ext/django
creating build/lib/passlib/handlers
copying passlib/handlers/__init__.py -> build/lib/passlib/handlers
copying passlib/handlers/bcrypt.py -> build/lib/passlib/handlers
copying passlib/handlers/cisco.py -> build/lib/passlib/handlers
copying passlib/handlers/des_crypt.py -> build/lib/passlib/handlers
copying passlib/handlers/digests.py -> build/lib/passlib/handlers
copying passlib/handlers/django.py -> build/lib/passlib/handlers
copying passlib/handlers/fshp.py -> build/lib/passlib/handlers
copying passlib/handlers/ldap_digests.py -> build/lib/passlib/handlers
copying passlib/handlers/md5_crypt.py -> build/lib/passlib/handlers
copying passlib/handlers/misc.py -> build/lib/passlib/handlers
copying passlib/handlers/mssql.py -> build/lib/passlib/handlers
copying passlib/handlers/mysql.py -> build/lib/passlib/handlers
copying passlib/handlers/oracle.py -> build/lib/passlib/handlers
copying passlib/handlers/pbkdf2.py -> build/lib/passlib/handlers
copying passlib/handlers/phpass.py -> build/lib/passlib/handlers
copying passlib/handlers/postgres.py -> build/lib/passlib/handlers
copying passlib/handlers/roundup.py -> build/lib/passlib/handlers
copying passlib/handlers/scram.py -> build/lib/passlib/handlers
copying passlib/handlers/sha1_crypt.py -> build/lib/passlib/handlers
copying passlib/handlers/sha2_crypt.py -> build/lib/passlib/handlers
copying passlib/handlers/sun_md5_crypt.py -> build/lib/passlib/handlers
copying passlib/handlers/windows.py -> build/lib/passlib/handlers
creating build/lib/passlib/tests
copying passlib/tests/__init__.py -> build/lib/passlib/tests
copying passlib/tests/__main__.py -> build/lib/passlib/tests
copying passlib/tests/_test_bad_register.py -> build/lib/passlib/tests
copying passlib/tests/backports.py -> build/lib/passlib/tests
copying passlib/tests/test_apache.py -> build/lib/passlib/tests
copying passlib/tests/test_apps.py -> build/lib/passlib/tests
copying passlib/tests/test_context.py -> build/lib/passlib/tests
copying passlib/tests/test_context_deprecated.py -> build/lib/passlib/tests
copying passlib/tests/test_ext_django.py -> build/lib/passlib/tests
copying passlib/tests/test_handlers.py -> build/lib/passlib/tests
copying passlib/tests/test_handlers_bcrypt.py -> build/lib/passlib/tests
copying passlib/tests/test_handlers_django.py -> build/lib/passlib/tests
copying passlib/tests/test_hosts.py -> build/lib/passlib/tests
copying passlib/tests/test_registry.py -> build/lib/passlib/tests
copying passlib/tests/test_utils.py -> build/lib/passlib/tests
copying passlib/tests/test_utils_crypto.py -> build/lib/passlib/tests
copying passlib/tests/test_utils_handlers.py -> build/lib/passlib/tests
copying passlib/tests/test_win32.py -> build/lib/passlib/tests
copying passlib/tests/tox_support.py -> build/lib/passlib/tests
copying passlib/tests/utils.py -> build/lib/passlib/tests
creating build/lib/passlib/utils
copying passlib/utils/__init__.py -> build/lib/passlib/utils
copying passlib/utils/compat.py -> build/lib/passlib/utils
copying passlib/utils/des.py -> build/lib/passlib/utils
copying passlib/utils/handlers.py -> build/lib/passlib/utils
copying passlib/utils/md4.py -> build/lib/passlib/utils
copying passlib/utils/pbkdf2.py -> build/lib/passlib/utils
creating build/lib/passlib/utils/_blowfish
copying passlib/utils/_blowfish/__init__.py -> build/lib/passlib/utils/_blowfish
copying passlib/utils/_blowfish/_gen_files.py -> build/lib/passlib/utils/_blowfish
copying passlib/utils/_blowfish/base.py -> build/lib/passlib/utils/_blowfish
copying passlib/utils/_blowfish/unrolled.py -> build/lib/passlib/utils/_blowfish
creating build/lib/passlib/_setup
copying passlib/_setup/__init__.py -> build/lib/passlib/_setup
copying passlib/_setup/docdist.py -> build/lib/passlib/_setup
copying passlib/_setup/stamp.py -> build/lib/passlib/_setup
copying passlib/tests/sample1.cfg -> build/lib/passlib/tests
copying passlib/tests/sample1b.cfg -> build/lib/passlib/tests
copying passlib/tests/sample1c.cfg -> build/lib/passlib/tests
copying passlib/tests/sample_config_1s.cfg -> build/lib/passlib/tests
running install_lib
creating /Library/Python/2.7/site-packages/passlib
error: could not create '/Library/Python/2.7/site-packages/passlib': Permission denied
---------------------------------------- Cleaning up... Command /usr/bin/python -c "import setuptools;__file__='/private/var/folders/2t/1yj5qss57xz8sb7p9wymtkdr0000gn/T/pip_build_<user>/passlib/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/2t/1yj5qss57xz8sb7p9wymtkdr0000gn/T/pip-epiHNK-record/install-record.txt
--single-version-externally-managed failed with error code 1 in /private/var/folders/2t/1yj5qss57xz8sb7p9wymtkdr0000gn/T/pip_build_<user>/passlib Storing complete log in /Users/<user>/Library/Logs/pip.log
Answer: You’re getting a ‘permission denied’ error. Try
sudo pip install passlib
or
pip install --user passlib
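Alternatively, install into a virtualenv, which writes to a directory you own
and needs no elevated permissions:

    $ virtualenv venv
    $ source venv/bin/activate
    $ pip install passlib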
|
Python: Pass a function as a variable with one input fixed
Question: Say I have a two dimensional function f(x,y) and another function G(function)
that takes a function as an input. BUT, G only takes one dimensional functions
as input, and I want to pass f to G with the second variable as a fixed
parameter.
Right now, I am just declaring a third function h that sets y to a set value.
This is what it looks like in some form:
def f(x,y):
something something something
    return z
def G(f):
something something something
def h(x):
c= something
    return f(x,c)
G(h)
At some point I was also making y a default parameter that I would change each
time.
Neither of these are as readable as if I was somehow able to call
G(f(x,c))
but that particular syntax doesn't work. What is the best way to do this?
Answer: The
[functools.partial](http://docs.python.org/2/library/functools.html#functools.partial)
function can be used to do this (note, it's not entirely clear where `c` comes
from in your example code, so I've assumed it's some constant).
import functools
def f(x,y):
return x+y
c = 3
G = functools.partial(f, c)
G(4)
I think this is more explicit than the lambda approaches suggested so far.
Edit: replacing the right-most argument purely positionally is not possible,
but `partial` also accepts keyword arguments, so you can bind `y` directly:

    import functools

    def f(x,y):
        return x+y

    c = 3
    G = functools.partial(f, y=c)
    G(4)  # calls f(4, 3)

If binding by keyword is not an option, you could introduce a wrapper which
handles the switching:

    def h(c,y):
        return f(y,c)

    G = functools.partial(h, c)
    G(4)

But I think you start to sacrifice readability and maintainability at this
point...
|
URL UTF-8 Decoding Python
Question: I am having some data in URL format and I want to decode it using Python. I
tried the (accepted) answer
[here](https://stackoverflow.com/questions/3563126/url-encoding-decoding-with-
python) but I am still not getting the correct decoding. My code is as
follows:
import urllib2
name = '%D0%BD%D0%BE%D1%82%D0%B8%D1%84%D0%B8%D0%BA%D0%B0%D1%82%D0%BE%D1%80-%D0%BE%D0%BB%D0%B8%D0%BC%D0%BF%D0%B8%D0%B9%D1%81%D0%BA%D0%B8%D1%85-%D0%B8'
print urllib2.unquote(urllib2.quote(name.encode("utf8"))).decode("utf8")
This should print `нотификатор-олимпийских-и` but it prints
`%D0%BD%D0%BE%D1%82%D0%B8%D1%84%D0%B8%D0%BA%D0%B0%D1%82%D0%BE%D1%80-%D0%BE%D0%BB%D0%B8%D0%BC%D0%BF%D0%B8%D0%B9%D1%81%D0%BA%D0%B8%D1%85-%D0%B8`
so I tried unquoting it again
print
urllib2.unquote(urllib2.unquote(urllib2.quote(name.encode("utf8"))).decode("utf8"))
but it gives me `ноÑиÑикаÑоÑ-олимпийÑкиÑ-и`
I am not sure why this happens. Can anyone please explain where I am going
wrong and how to correct my mistake?
Answer: Too many quote/unquote operations: you get a UTF-8 string that is _already_
URL-encoded, why are you UTF-8 and URL encoding it again?
import urllib

unquoted = urllib.unquote(name)
print unquoted.decode('utf-8')
# нотификатор-олимпийских-и
|
python multiprocessing, do the processes share a common variable?
Question: I have this:
#!/usr/bin/env python
import multiprocessing
class MultiprocessingTest(object):
def __init__(self):
self.cmd = ''
def for_process_A(self):
self.cmd = "AA"
print "%s executing and cmd is %s" % (multiprocessing.current_process().name, self.cmd)
def for_process_B(self):
self.cmd = "BB"
print "%s executing and cmd is %s" % (multiprocessing.current_process().name, self.cmd)
if __name__ == '__main__':
obj = MultiprocessingTest()
process_A = multiprocessing.Process(target=obj.for_process_A, name='process_A')
process_B = multiprocessing.Process(target=obj.for_process_B, name='process_B')
process_A.start()
process_B.start()
process_A.join()
process_B.join()
Question:
Do the two processes share the variable `cmd`?
Do both processes have a separate class `MultiprocessingTest` definition and
work off of that?
Independent copies of which data exists in the two processes?
I am trying to understand from a theoretical standpoint what is actually
happening here. Can you please comment on that?
Test Run o/p:
$ ./commonvar.py
process_A executing and cmd is AA
process_B executing and cmd is BB
Answer: Processes don't share data. Each process is a separate container with the
following resources, generally speaking:
* Code to execute
* Stack
* Processor time
Processes interact with the outside world through pipes.
So to answer your questions:
* Processes will not share `cmd` variable.
* Processes will have separate copies of the class code.
* All the program data will be independent.
Further Explanation:
Behind the scenes, [fork system
call](http://en.wikipedia.org/wiki/Fork_%28system_call%29) is used to create a
process (assuming you are using *nix). Processes are heavier compared to
threads because of the overhead involved in [switching the
context](http://en.wikipedia.org/wiki/Context_switch).
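If you actually want the processes to share state, you need an explicit IPC
primitive. A minimal sketch with `multiprocessing.Value`, which lives in
shared memory:

    import multiprocessing

    def worker(shared):
        with shared.get_lock():  # Value carries a lock for safe updates
            shared.value += 1

    if __name__ == '__main__':
        counter = multiprocessing.Value('i', 0)
        p = multiprocessing.Process(target=worker, args=(counter,))
        p.start()
        p.join()
        print counter.value  # prints 1; the child's change is visible here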
|
KeyError when writing NumPy values to GEXF with NetworkX
Question: Hi everyone, I'd like to compute node coordinates and then export the graph to
GEXF and process it with Gephi. However, when I run the following code
import networkx as nx
import numpy as np
....
area_ratios = [np.sum(new[:,0])/Stotal, np.sum(new[:,1])/Stotal, np.sum(new[:,2])/Stotal]
X = np.array([0, -sqrt(3)/2 * area_ratios[1] , sqrt(3)/2 * area_ratios[2]])
Y = np.array([ area_ratios[0], -1/2 * area_ratios[1] , -1/2 * area_ratios[2]])
point = (np.sum(X), np.sum(Y))
graph.add_node(node_name, {'x-coord': np.asscalar(point[0]*SCALE_FACTOR),
'y-coord': np.asscalar(point[1]*SCALE_FACTOR), 'size': Stotal*3})
nx.write_gexf(graph, PATH + 'mygraph.gexf')
it gives me a `KeyError: <type 'numpy.float64'>` even though `np.asscalar` is
meant to convert the relevant attributes to the compatible python type.
Any ideas?
Answer: Looks like this was solved a long time ago but I found that my code was having
a similar problem using float values from a pandas data frame. The solution
was in the comments but it took me a while to figure it out so I thought I
might clarify.
If you are making your nodes from a dataframe like this:
G.add_node(df2.loc[row,door_col],
attr_dict={'dropoff':df2.loc[row,'A'],
'pageLoadTime':df2.loc[row,'B'],
'pageviews':df2.loc[row,'C'],
'sessions':df2.loc[row,'D'],
'entrances':df2.loc[row,'E'],
'exits':df2.loc[row,'F'],
'timeOnPage':df2.loc[row,'G'],
'classesB':df2.loc[row,'H']})
Assuming columns A-G are floats, they are np.float64 values, not Python float
values, so nx.write_gexf() will crash. However, the easy fix is to coerce them into simple
values using something like this:
G.add_node(df2.loc[row,door_col],
attr_dict={'dropoff':float(df2.loc[row,'A']),
'pageLoadTime':float(df2.loc[row,'B']),
'pageviews':float(df2.loc[row,'C']),
'sessions':float(df2.loc[row,'D']),
'entrances':float(df2.loc[row,'E']),
'exits':float(df2.loc[row,'F']),
'timeOnPage':float(df2.loc[row,'G']),
'classesB':str(df2.loc[row,'H'])})
There are a lot of tools that struggle with np.float64 types. Converting them
is always the easy option.
|
Converting ipython notebook to html with separate images
Question: I have an ipython notebook with a mixture of SVG and PNG graphs. I can export
it to html without any trouble, but it embeds the images as encoded text in
the body of the `.html` file.
I'm calling:
ipython nbconvert --to html mynotebook.ipynb
The output at the command line includes:
[NbConvertApp] Converting notebook mynotebook.ipynb to html
[NbConvertApp] Support files will be in mynotebook_files/
but no such directory is created, and there are no files in it.
There are related posts
([1](http://stackoverflow.com/questions/12502187/ipython-notebook-to-html-for-
blog-post?rq=1) ,[2](http://stackoverflow.com/questions/19082746/convert-
ipython-notebook-to-mediawiki)
,[3](http://stackoverflow.com/questions/20039058/blogging-with-ipython-
notebook) ,[4](http://stackoverflow.com/questions/20609114/how-to-post-
ipython-notebook-thread-into-wordpress-blog) ) but they either don't fix this
specific issue, or refer to the olden days when NBconvert was a separate
library.
[This
document](http://nbviewer.ipython.org/github/Carreau/posts/blob/master/06-NBconvert-
Doc-Draft.ipynb) explains how to solve this problem on the old way of doing
things too.
I've tried to use:
ipython nbconvert --config mycfg.py
With
c = get_config()
c.NbConvertApp.notebooks = ["mynotebook.ipynb"]
in the `.py` file, but that's as fas as I've got.
What I'm looking for is a way to make the png files, and preferably the svg
files, go into a folder. Ideally as easily as possible!
Answer: Thanks to [Thomas K](http://stackoverflow.com/users/434217/thomas-k)'s nudge
I've had some success in getting this to work. Consider this a proto-answer
until I have a chance to get my head around all the nuances of the problem.
There will probably be errors, but this is my understanding of what's
happening.
To override the behaviour of the default `ipython nbconvert --to html
mynotebook.ipynb` command, you need to specify a configuration file and call it
like this: `ipython nbconvert --config mycfg.py`, where `mycfg.py` is a file in
the same directory as your notebooks. Mine looks like this:
c = get_config()
c.NbConvertApp.notebooks = ["mynotebook.ipynb"]
c.NbConvertApp.export_format = 'html'
c.Exporter.preprocessors = ['extractoutput.ExtractOutputPreprocessor']
Where `["mynotebook.ipynb"]` is the file, or list of files, that I want to
convert. The part that controls _how_ the notebook gets converted is
`'extractoutput.ExtractOutputPreprocessor'` in this case.
**`extractoutput`**`.ExtractOutputPreprocessor` refers to `extractoutput.py`,
which is also in the same directory as the notebooks (although I don't think
it needs to be).
`extractoutput`**`.ExtractOutputPreprocessor`** refers to a function in
`extractoutput.py` that specifies how the output will be processed.
In my case the content of this file is taken exactly from the [IPython
repo](https://github.com/ipython/ipython/blob/master/IPython/nbconvert/preprocessors/extractoutput.py)
with a small modification. Line 22 (`from .base import Preprocessor`) produces
a
`ValueError: Attempted relative import in non-package`
[error](https://www.inkling.com/read/learning-python-mark-
lutz-4th/chapter-23/package-relative-imports) because it doesn't know where to
look for the package. When changed to
`from`**`IPython.nbconvert.preprocessors`**`.base import Preprocessor`
then it works and all the image assets are put into the `mynotebook_files`
directory.
I didn't need to edit the [HTML output
template](https://github.com/ipython/ipython/tree/master/IPython/nbconvert/templates/html)
in this case, it knew where to look anyway.
|
How do I add a method to a class from a third-party Python module without editing the original module
Question: I'm using the `Basemap` object from `basemap` module in the matplotlib toolkit
(`mpl_toolkits.basemap.Basemap`). In `basemap`'s `__init__.py` file (i.e. the
`mpl_toolkits.basemap.__init__` module), a method `drawparallels` is defined
which draws latitudes on the map. I aim to duplicate that method to make a new
method called `drawmlat`, making some adjustments in order to plot magnetic
latitudes instead of geographic latitudes.
Ideally, I want the new `drawmlat` to be equivalent to the original
`drawparallel` (a bound method of the instances of Basemap that I can call
with using `BasemapInstance.drawmlats()`), and I do not want to modify the
original file. How would I accomplish this?
I have tried variations of the "recipe" `MyObj.method = MethodType(new_method,
None, MyObj)`, but without placing anything in the original source file, the
new method does not have access to globals etc. from the Basemap module (e.g.
defined in its `__init__.py`).
If it seems I have misunderstood something, I probably have - I am more or
less completely new to object-oriented programming.
Answer: Python is highly modifiable. Just add your function to the _class_ :
from mpl_toolkits.basemap import Basemap
def drawmlat(self, arg1, arg2, kw=None):
pass
Basemap.drawmlat = drawmlat
Now the `Basemap` class has a `drawmlat` method; call it on instances and
`self` will be bound to the instance object. When looking up the method on
instances, the function will automatically be bound as a method for you.
Anything defined in the `Basemap.__init__` method that you need to care about
are attributes on `self`.
Having looked over the `mpl_toolkits.basemap.__init__` module, I do see that
the `drawparallel` method relies on a _few_ globals; you can import those from
the module into your own namespace:
from mpl_toolkits.basemap import Basemap, _cylproj, _pseudocyl
This is no different from other imports you'd make; the original
`drawparallel` method also relies on `import numpy as np` and `from
matplotlib.lines import Line2D`, which make both `np` and `Line2D` globals in
the original module.
|
How to log errors in python script (process)
Question: I need to log errors in a Python script which is running as a daemon process.
My current code:
import ips
import sys
import time
import logging
import os
import string
import json
import multiprocessing
import MySQLdb as mdb
from daemon import Daemon
logging.basicConfig(format = u'%(levelname)-8s [%(asctime)s] %(message)s', level = logging.INFO, filename = u'/var/log/info.log')
class MyDaemon(Daemon):
def run(self):
print 'the script was run'
logging.info("the script was run")
while True:
[My code Here]
time.sleep(10)
if __name__ == "__main__":
daemon = MyDaemon('/var/run/pid.pid')
if len(sys.argv) > 1:
if 'start' == sys.argv[1]:
daemon.start()
elif 'stop' == sys.argv[1]:
daemon.stop()
elif 'restart' == sys.argv[1]:
daemon.restart()
else:
print "Unknown command"
sys.exit(2)
sys.exit(0)
else:
print "usage: %s start|stop|restart" % sys.argv[0]
sys.exit(2)
Sometimes my process shuts down but I don't see an error report. How can I see
an error report when the process shuts down?
Answer: Use the logging module to capture an exception. E.g.:

    import logging
    import serial

    try:
        ser = serial.Serial(com_port)
    except serial.SerialException as e:
        logging.debug("Could not open serial port %s: %s", com_port, e)

I don't know what your code does inside the loop, but I'd advise adding
explicit returns (or logged exceptions) for the failure paths. If the daemon
breaks for a specific reason, you then get a hint in the log instead of a
silent shutdown.
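For the daemon in the question specifically, here is a sketch of wrapping the
main loop so any crash leaves a traceback in the log before the process dies:

    def run(self):
        logging.info("the script was run")
        while True:
            try:
                pass  # [My code Here]
            except Exception:
                # logging.exception logs at ERROR level and includes the traceback
                logging.exception("Daemon loop crashed")
                raise
            time.sleep(10)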
|
Running tests with coverage using django-jenkins
Question: I've got a couple of Django projects that I work on, and I use Jenkins for
continuous integration purposes. I've had that arrangement up and running for
a while and it works well.
I'd like to be able to generate automated test coverage reports and have
Jenkins handle them as well. It looked to me like [django-
jenkins](https://github.com/kmmbvnr/django-jenkins) was the way to go for
that, so I installed it and `coverage`.
Here's the relevant sections of my `settings.py`:
# Jenkins integration
INSTALLED_APPS += ('django_jenkins',)
JENKINS_TASKS = (
'django_jenkins.tasks.with_coverage',
'django_jenkins.tasks.run_pylint',
'django_jenkins.tasks.django_tests',
)
PROJECT_APPS = ['myapp']
Now, I can run `python manage.py jtest`, and it works as expected. However, if
I run `python manage.py jenkins`, it errors:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 76, in load_command_class
return module.Command()
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django_jenkins/management/commands/__init__.py", line 61, in __init__
for module_name in self.get_task_list()]
File "/home/matthew/Projects/blah/venv/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
ImportError: No module named django_tests
I'm using the standard Django `TestCase` and `LiveServerTestCase` classes as
the basis of my tests. Any idea where I'm going wrong here? The documentation
seems to imply `django_tests` has been removed, but I can't find any
indication as to how you run the Django tests now.
I'm using Django 1.6.2.
Answer: Just realised I've been a bit of a numpty. All I needed to do was drop the
`django_tests` line, like this:
# Jenkins integration
INSTALLED_APPS += ('django_jenkins',)
JENKINS_TASKS = (
'django_jenkins.tasks.with_coverage',
'django_jenkins.tasks.run_pylint',
)
PROJECT_APPS = ['myapp']
And `django-jenkins` will run the tests without having to explicitly request
that it does so.
|
Behavior of python method in absence of return statement
Question: I have a question about the change in program behavior that the absence of a
_return_ statement leads to in a Python method.
The _count_ method below counts the number of digits in a given integer. With
the chunk of code below I get the result 4, which is the expected result.
def count(x,acc=0):
if x==0:
return acc
return count(x/10,acc+1)
print "Count is %s" %(count(1234))
**Result:** _Count is 4_
If I modify the above method so that the last statement does not contain the
'return' statement the result I get is 'None'.
def count(x,acc=0):
if x==0:
return acc
count(x/10,acc+1)
print "Count is %s" %(count(1234))
**Result: Count is None**
(The version of Python I used is: 2.7.3)
Does the above behavior result from the fact that Python doesn't do tail
call optimization, or is there other reasoning involved?
A similar chunk of code in perl (which AFAIK doesn't do tail call
optimization) provides the expected result without 'return' being part of the
last statement.
sub counter {
my ($n,$acc) = @_;
return $acc if ($n==0);
counter(int($n/10), $acc+1);
}
print "Count is:" . counter(1234,0) ."\n"
**Result: Count is:4**
(The versions of Perl I ran above code chunk are : 5.14.4 and 5.8.5).
My questions are:
* Is Tail Call optimization the reason for the behavior shown in above chunk of Python code.
* If that were the case then why behavior of perl code differs, which doesn't do TCO either.
Answer: This has nothing to do with the absence of _tail call optimization_. Functions
in Python need the keyword `return` explicitly; otherwise they are assumed to
return `None`.
I know Ruby doesn't behave that way: it returns the value of the last executed
expression. With Perl it must be the same.
It is nothing that clever, just the fact that Python functions behave that way
:)
See the disassembly of both Python functions below. The one with the `return`
calls the function and returns the value left on top of the stack. The one
without it discards the call's result (`POP_TOP`), then loads the constant
`None` and returns that instead.
def count(x,acc=0):
if x==0:
return acc
return count(x/10,acc+1)
def count2(x,acc=0):
if x==0:
return acc
count(x/10,acc+1)
In [7]: import dis
In [8]: dis.dis(count)
2 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (0)
6 COMPARE_OP 2 (==)
9 POP_JUMP_IF_FALSE 16
3 12 LOAD_FAST 1 (acc)
15 RETURN_VALUE
4 >> 16 LOAD_GLOBAL 0 (count)
19 LOAD_FAST 0 (x)
22 LOAD_CONST 2 (10)
25 BINARY_DIVIDE
26 LOAD_FAST 1 (acc)
29 LOAD_CONST 3 (1)
32 BINARY_ADD
33 CALL_FUNCTION 2
36 RETURN_VALUE
In [9]: dis.dis(count2)
2 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (0)
6 COMPARE_OP 2 (==)
9 POP_JUMP_IF_FALSE 16
3 12 LOAD_FAST 1 (acc)
15 RETURN_VALUE
4 >> 16 LOAD_GLOBAL 0 (count)
19 LOAD_FAST 0 (x)
22 LOAD_CONST 2 (10)
25 BINARY_DIVIDE
26 LOAD_FAST 1 (acc)
29 LOAD_CONST 3 (1)
32 BINARY_ADD
33 CALL_FUNCTION 2
36 POP_TOP
37 LOAD_CONST 0 (None)
40 RETURN_VALUE
|
Python subprocess communicate kills my process
Question: Why does communicate kill my process? I want an interactive process but
communicate does something so that I cannot take raw_input any more in my
process.
from sys import stdin
from threading import Thread
from time import sleep
if __name__ == '__main__':
print("Still Running\n")
x = raw_input()
i = 0
while ('n' not in x ) :
print("Still Running " + str(i) + " \r\n")
x = raw_input()
i += 1
print("quit")
print(aSubProc.theProcess.communicate('y'))
print(aSubProc.theProcess.communicate('y'))
exception!
self.stdin.write(input)
ValueError: I/O operation on closed file
Answer: The `communicate` and `wait` methods of `Popen` objects close the `PIPE` after
the process returns. If you want to stay in communication with the process,
try something like this:
import subprocess
proc = subprocess.Popen("some_process", stdout=subprocess.PIPE, stdin=subprocess.PIPE)
proc.stdin.write("input")
proc.stdout.readline()
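One caveat: writes can sit in the pipe buffer, so flushing after each write
(and making sure the child flushes its own output) helps keep either side
from blocking on a read:

    proc.stdin.write("input\n")
    proc.stdin.flush()
    line = proc.stdout.readline()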
|
Reference an arbitrary row and field in another table
Question: Is there any form (data type, inherence..) of implement in postgresql
something like this:
CREATE TABLE log (
datareferenced table_row_column_reference,
logged boolean
);
The referenced data may be any row field from the database. My objective is
implement something like this without use Procedural Language or implement it
in a higher layer, using only a relational approach and without modify the
rest of the tables. Another feature may be referencial integrity, example:
-- Table foo (id, field1, field2, fieldn)
-- ('bar', '2014-01-01', 4.33, Null)
-- Table log (datareferenced, logged)
-- ({table foo -> id:'bar' -> field2 } <=> 4.33, True)
DELETE FROM foo where id='bar';
-- as result, on cascade, deleted both rows.
I have an application build onto a MVC pattern. The logic is written in
Python. The application is a management tool, very data intensive. My goal is
implement a module that could store additional information per every data
present in the DDBB. Per example, a client have a serie of attributes (name,
address, phone, email ...) across multiple tables, and I want that the app
could store metadata-like for every registry from all the DDBB. A metadata
could be last modification, or a user flag, etc.
I have implemented the metadata model (in postgres), its mapping to objects
and a parcial API. But the part left is the most important, the glue. My plan
B is create that glue in the data mapping layer as a module. Something like
this:
address= person.addresses[0]
address.saveMetadata('foo', 'bar')
-- in the superclass of Address
def saveMetadata(self, code, value):
self.mapper.metadata_adapter.save(self, code, value)
-- in the metadata adapter class:
def save(self, entity, code, value):
sql = """update value=%s from metadata_values
where code=%s and idmetadata=
(select id from metadata_rels mr
where mr.schema=%s and mr.table=%s and
mr.field=%s and mr.recordpk=%s)""" % (value, code,
self.class2data[entity.__class__]["schema"],
self.class2data[entity.__class__]["table"],
self.class2data[entity.__class__]["field"],
entity.id)
self.mapper.execute(sql)
def read(self, entity , code):
sql = """select mv.value
from metadata_values mv
join metadata_rels mr on mv.idmetadata=mr.id
where mv.code=%s and mr.schema=%s and mr.table=%s and
mr.field=%s and mr.recordpk=%s""" % (code,
self.class2data[entity.__class__]["schema"],
self.class2data[entity.__class__]["table"],
self.class2data[entity.__class__]["field"],
entity.id )
return self.mapper.execute(sql)
But it would add overhead between python and postgresql, complicate Python
logic, and using PL and triggers may be very laborious and bug-prone. That is
why i'm looking at doing the same at the DDBB level.
Answer: No, there's nothing like that in PostgreSQL.
You could build triggers yourself to do it, probably using a composite type.
But you've said (for some reason) you don't want to use PL/PgSQL, so you've
ruled that out. Getting RI triggers right is quite hard, though, and you must
apply a trigger to the referencing _and_ referenced ends.
Frankly, this seems like a square peg, round hole kind of problem. Are you
sure PostgreSQL is the right choice for this application?
Describe your needs and goal in context. Why do you want this? What problem
are you trying to solve? Maybe there's a better way to approach the same
problem one step back...
|
pyparsing: grammar for list of Dictionaries (erlang)
Question: I'm trying to build a grammar to parse an Erlang tagged tuple list, and map
this to a Dict in pyparsing. I'm having problems when I have a list of Dicts.
The grammar works if there is just one Dict, but when I add a second I
can't work out how to get it to parse.
Current (simplified grammar code (I removed the bits of the language not
necessary in this case):
#!/usr/bin/env python2.7
from pyparsing import *
# Erlang config file definition:
erlangAtom = Word( alphas + '_')
erlangString = dblQuotedString.setParseAction( removeQuotes )
erlangValue = Forward()
erlangList = Forward()
erlangElements = delimitedList( erlangValue )
erlangCSList = Suppress('[') + erlangElements + Suppress(']')
erlangList <<= Group( erlangCSList )
erlangTaggedTuple = Group( Suppress('{') + erlangAtom + Suppress(',') +
erlangValue + Suppress('}') )
erlangDict = Dict( Suppress('[') + delimitedList( erlangTaggedTuple ) +
Suppress(']') )
erlangValue <<= ( erlangAtom | erlangString |
erlangTaggedTuple |
erlangDict | erlangList )
if __name__ == "__main__":
working = """
[{foo,"bar"}, {baz, "bar2"}]
"""
broken = """
[
[{foo,"bar"}, {baz, "bar2"}],
[{foo,"bob"}, {baz, "fez"}]
]
"""
w = erlangValue.parseString(working)
print w.dump()
b = erlangValue.parseString(broken)
print "b[0]:", b[0].dump()
print "b[1]:", b[1].dump()
This gives:
[['foo', 'bar'], ['baz', 'bar2']]
- baz: bar2
- foo: bar
b[0]: [['foo', 'bar'], ['baz', 'bar2'], ['foo', 'bob'], ['baz', 'fez']]
- baz: fez
- foo: bob
b[1]:
Traceback (most recent call last):
File "./erl_testcase.py", line 39, in <module>
print "b[1]:", b[1].dump()
File "/Library/Python/2.7/site-packages/pyparsing.py", line 317, in __getitem__
return self.__toklist[i]
IndexError: list index out of range
i.e. `working` works, but `broken` doesn't parse as two lists.
Any ideas?
**Edit:** Tweaked testcase to be more explicit about expected output.
Answer: Ok, so I have never worked with pyparsing before, so excuse me if my solution
does not make sense. Here we go:
As far as I understand what you need is three main structures. The most common
mistake you made was grouping delimitedLists. They are already grouped, so you
have an issue of double grouping. Here are my definitions:
for {a,"b"}:
erlangTaggedTuple = Dict(Group(Suppress('{') + erlangAtom + Suppress(',') + erlangValue + Suppress('}') ))
for [{a,"b"}, {c,"d"}]:
erlangDict = Suppress('[') + delimitedList( erlangTaggedTuple ) + Suppress(']')
for the rest:
erlangList <<= Suppress('[') + delimitedList( Group(erlangDict|erlangList) ) + Suppress(']')
So my fix for your code is:
#!/usr/bin/env python2.7
from pyparsing import *
# Erlang config file definition:
erlangAtom = Word( alphas + '_')
erlangString = dblQuotedString.setParseAction( removeQuotes )
erlangValue = Forward()
erlangList = Forward()
erlangTaggedTuple = Dict(Group(Suppress('{') + erlangAtom + Suppress(',') +
erlangValue + Suppress('}') ))
erlangDict = Suppress('[') + delimitedList( erlangTaggedTuple ) + Suppress(']')
erlangList <<= Suppress('[') + delimitedList( Group(erlangDict|erlangList) ) + Suppress(']')
erlangValue <<= ( erlangAtom | erlangString |
erlangTaggedTuple |
erlangDict| erlangList )
if __name__ == "__main__":
working = """
[{foo,"bar"}, {baz, "bar2"}]
"""
broken = """
[
[{foo,"bar"}, {baz, "bar2"}],
[{foo,"bob"}, {baz, "fez"}]
]
"""
w = erlangValue.parseString(working)
print w.dump()
b = erlangValue.parseString(broken)
print "b[0]:", b[0].dump()
print "b[1]:", b[1].dump()
Which gives the output:
[['foo', 'bar'], ['baz', 'bar2']]
- baz: bar2
- foo: bar
b[0]: [['foo', 'bar'], ['baz', 'bar2']]
- baz: bar2
- foo: bar
b[1]: [['foo', 'bob'], ['baz', 'fez']]
- baz: fez
- foo: bob
Hope that helps, cheers!
|
A simple Hello World setuptools package and installing it with pip
Question: I'm having trouble figuring out how to install my package using setuptools,
and I've tried reading the documentation on it and SO posts, but I can't get
it to work properly. I'm trying to get a simple helloworld application to
work. This is how far I got:
helloworld.py:
print("Hello, World!")
README.txt:
Hello, World! readme
MANIFEST.in:
recursive-include images *.gif
setup.py:
from setuptools import setup, find_packages
setup(
name='helloworld',
version='0.1',
license='BSD',
author='gyeh',
author_email='hello@world.com',
url='http://www.hello.com',
long_description="README.txt",
packages=find_packages(),
scripts = ['helloworld.py'],
package_data={
"" : ["images/*.gif"]
},
data_files=[('images', ['images/hello.gif'])],
description="Hello World testing setuptools",
)
And I have a blank file called images/hello.gif that I want to include in my
package as additional data. The folder structure looks like this:
testsetup/
|-- helloworld.py
|-- images/
|-- --- hello.gif
|-- MANIFEST.in
|-- README.txt
|-- setup.py
When I run `python setup.py sdist`, it generates the `dist` and
`helloworld.egg-info` successfully. When I look at SOURCES.txt under egg-info,
it contains the script and the image under the images folder, and the tarball
under dist contains them as well.
However, when I try to run `pip install --user helloworld-0.1.tar.gz` on the
tarball, it successfully installs it, but I can't find the program files
helloworld.py and images/hello.gif.
When I look under `$HOME/.local/lib/python3.3/site-packages/`, I see the egg-
info folder and all of it's contents installed there. But the
`$HOME/.local/bin` folder doesn't even exist. Are the program files stores
elsewhere? What am I doing wrong here? I'm running Arch Linux.
Answer: Okay, so after some effort, I finally managed to get a simple **"hello
world"** example working for setuptools. The Python documentation is usually
amazing, but I wish the documentation was better on this in particular.
I'm going to write a fairly detailed guide on how I achieved this, and I'll
assume no prior background on the reader on this topic. I hope this comes in
handy for others...
In order to get this example set up, we'll be creating a package (actually two
of them, one for the data files). This is the directory structure we'll end up
with:
test-setuptools/
|-- helloworld/
|-- --- hello.py
|-- --- images/
|-- --- --- hello.gif
|-- --- --- __init__.py
|-- --- __init__.py
|-- MANIFEST.in
|-- README.txt
|-- setup.py
Here are the steps:
1. Create the `helloworld` package.
1.1 Create the `helloworld/` folder as shown in the directory structure above.
1.2 Add a blank file called `__init__.py` in the `helloworld/` folder.
If you don't add it, the package won't be recognized (run `touch __init__.py`
to create the file on linux/mac machines). If you want some code to be
executed every time the package is imported, include it in the `__init__.py`
file.
1.3 Create the `hello.py` script file to demonstrate the package
functionality.
Here is the code for `hello.py`:
import os
"""
Open additional data files using the absolute path,
otherwise it doesn't always find the file.
"""
# The absolute path of the directory for this file:
_ROOT = os.path.abspath(os.path.dirname(__file__))
class Hello(object):
def say_hello(self):
return "Hello, World!"
def open_image(self):
print("Reading image.gif contents:")
# Get the absolute path of the image's relative path:
absolute_image_path = os.path.join(_ROOT, 'images/hello.gif')
with open(absolute_image_path, "r") as f:
for line in f:
print(line)
1.4 Create the `images/` folder inside the `helloworld/` folder.
Make another blank `__init__.py` file, because this folder will also be a
package.
1.5 Create the `hello.gif` file inside the `images/` folder.
This file won't be an actual gif file. Instead, add plain text just to
demonstrate that non-script files can be added and read.
I added the following code in `hello.gif`:
This should be the data inside hello.gif...
...but this is just to demonstrate setuptools,
so it's a dummy gif containing plain text
1.6 Test your package
Run `python` from the `test-setuptools` folder, which will open the python
interpreter.
Type `import helloworld.hello` to import the `hello.py` script in the
`helloworld` package. The import should be successful, indicating that you
successfully created a package. Make sure that the package in the `images/`
folder also works, by typing `import helloworld.images`
Try instantiating the object that we wrote in `hello.py`. Type the following
commands to make sure everything works as expected:
`hey = helloworld.hello.Hello()`
`hey.say_hello()`
`hey.open_image()`
2. Create the `setup.py` file and the remaining files.
2.1 Create a simple `README.txt` file. Mine just has the text: `Hello, World!
Readme` inside.
2.2 Create a `MANIFEST.in` file with the following contents:
`include helloworld/images/hello.gif`
This is very **important** because it tells setuptools to include the
additional data in the source distribution (which we'll generate in a later
step). Without this, you won't be able to install additional, non `.py` data
to your package. See
[this](http://docs.python.org/2/distutils/sourcedist.html#manifest-template)
for additional details and commands.
2.3 Create the `setup.py` file (see the code below).
The most important attributes are `packages`, `include_package_data`, and
`package_data`.
The `packages` attribute contains a list of the packages you want to include
for setuptools. We want to include both the `helloworld` package and the
`helloworld.images` package that contains our additional data `hello.gif`.
You can make setuptools automatically find these by adding the `from
setuptools import find_packages` import and running the imported
`find_packages()` function. Run the interpreter from the `test-setuptools`
folder and test this command to see which packages are found.
The `package_data` attribute tells setuptools to include additional data. It's
this command, the `helloworld.images` package, and the `MANIFEST.in` file
which allow you to install additional data.
The `'helloworld.images' : ['hello.gif']` key/value pair tells setuptools to
include `hello.gif` inside the `helloworld.images` package if it exists. You
can also say `'' : ['*.gif']` to include any .gif file in any of the included
packages.
The `include_package_data` attribute set to `True` is also necessary for this
to work.
You can include additional metadata for the package like I have (I think an
`author` is necessary). It's a good idea to add classifiers. Additional info
can be found [here](http://docs.python.org/2/distutils/setupscript.html).
Here is the entire `setup.py` code:
from setuptools import setup
setup(
name='helloworld',
version='0.1',
license='BSD',
author='gyeh',
author_email='hello@world.com',
url='http://www.hello.com',
long_description="README.txt",
packages=['helloworld', 'helloworld.images'],
include_package_data=True,
package_data={'helloworld.images' : ['hello.gif']},
description="Hello World testing setuptools",
)
3. Install and test your package with setuptools.
3.1 Create the source distribution
Run `python setup.py sdist` from the `test-setuptools/` folder to generate the
source distribution.
This will create a `dist/` folder containing your package, and a
`helloworld.egg-info/` folder containing metadata such as `SOURCES.txt`.
Check `SOURCES.txt` to see if the `hello.gif` image file is included
there.
Open the `.tar.gz` file under the `dist/` folder. You should see all of the
files described in the directory structure we made earlier, including
`hello.gif` and `hello.py`.
3.2 Install the distribution
Install the .tar.gz distribution file by running `pip install --user
helloworld-0.1.tar.gz` from the `dist/` folder.
Check that the package was successfully installed by running `pip list`. The
package `helloworld` should be there.
That's it! now you should be able to test your package under any folder. Open
up the interpreter in any folder except `test-setuptools`, and try importing
the package using `import helloworld.hello`. It should work. Then try the
commands to instantiate the object and open the image file using the
`hey.open_image()` command again. It should still work!
You can view exactly which files were installed by pip, and where they are, by
uninstalling the package. Mine looked like this:
[gyeh@gyeh package]$ pip uninstall helloworld
Uninstalling helloworld:
/home/gyeh/.local/lib/python3.3/site-packages/helloworld-0.1-py3.3.egg-info
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/__init__.py
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/__pycache__/__init__.cpython-33.pyc
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/__pycache__/hello.cpython-33.pyc
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/hello.py
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/images/__init__.py
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/images/__pycache__/__init__.cpython-33.pyc
/home/gyeh/.local/lib/python3.3/site-packages/helloworld/images/hello.gif
Proceed (y/n)? y
Successfully uninstalled helloworld
As you can see, it successfully installed the additional data file
`hello.gif`, and because we converted the relative path to an absolute path in
`hello.py`, it can read the file just fine.
You can then share this package on PyPI for the rest of the world to use! The
instructions on uploading to PyPI are fairly straightforward and can be found
[here](http://docs.python.org/3/distutils/packageindex.html) and
[here](http://docs.python.org/3.1/distutils/uploading.html).
Once it's online in PyPI, people can search for your package using `pip
search`. Or alternatively, running `pip install --user [package-name]` will
tell pip to search the online PyPI directory for that package name. If it
exists, it will install it.
You can run that command for any python package that in PyPI for an easy
install so you aren't mucking around with build files.
I hope this saves people a bunch of headaches.
|
How to use a map with *args to unpack a tuple in a python function call
Question: I am currently doing a merge over a set of variables that I'd like to
parallelize. My code looks something like this:
mergelist = [
('leftfile1', 'rightfile1', 'leftvarname1', 'outputname1'),
('leftfile1', 'rightfile1', 'leftvarname2', 'outputname2'),
('leftfile2', 'rightfile2', 'leftvarname3', 'outputname3')
]
def merger(leftfile,rightfile,leftvarname,outvarname):
do_the_merge
for m in mergelist:
merger(*m)
Ordinarily, to speed up long loops, I would replace the for m in mergelist
with something like....
from multiprocessing import Pool
p = Pool(8)
p.map(merger(m), mergelist)
p.close()
But since I'm using the star to unpack the tuple, it's not clear to me how to
map this correctly. How do I get the `*m`?
Answer: A `lambda` won't work here: `Pool.map` pickles the function it is given in
order to send it to the worker processes, and lambdas cannot be pickled.
Define a small module-level wrapper that unpacks the tuple instead:

    def merger_star(args):
        return merger(*args)
    p = Pool(8)
    p.map(merger_star, mergelist)

On Python 3.3+ you can skip the wrapper and use `p.starmap(merger, mergelist)`.
|
Open File Dialog freezes after tkinter askopenfilename method is called
Question: I'm trying to simply get a file name from the user by
`tkinter.filedialog.askopenfilename()`. The function returns fine and the code
below displays the file name okay but the dialog window doesn't close
immediately after hitting 'open' or 'cancel', it freezes. I'm using python
3.3.3 or OSX 10.9.1 and tcl/tK 8.5.9.
from tkinter import *
from tkinter.messagebox import *
from tkinter.filedialog import *
top = Tk()
top.withdraw()
file_name = filedialog.askopenfilename()
print (file_name)
Answer: You don't need to specify module name in
file_name = filedialog.askopenfilename()
Try
file_name = askopenfilename()
instead
|
Convert Python's internal str to print equivalent
Question: Currently I have:
>> class_name = 'AEROSPC\xc2\xa01A'
>> print(class_name)
>> AEROSPC 1A
>> 'AEROSPC 1A' == class_name
>> False
How can I convert `class_name` into 'AEROSPC 1A'? Thanks!
Answer: ## Convert to Unicode
You get interesting errors when converting that directly; I first decoded it as UTF-8:
my_utf8 = 'AEROSPC\xc2\xa01A'.decode('utf8', 'ignore')
my_utf8
returns:
u'AEROSPC\xa01A'
and then I normalize the string, the \xa0 is a non-breaking space.
import unicodedata
my_normed_utf8 = unicodedata.normalize('NFKC', my_utf8)
print my_normed_utf8
prints:
AEROSPC 1A
## Convert back to String
which I can then convert back to an ASCII string:
my_str = str(my_normed_utf8)
print my_str
prints:
AEROSPC 1A
|
Vector to string, save in file and again to vector? Python
Question: This is my first question, sorry for my English.
I have already searched, but I didn't know quite what to search for; I tried
different ways and keywords, but found nothing.
The problem is this:
I'm writing some scripts in Blender with Python and I want to use configparser
to save and load items, etc. But if, for example, I want to save a color,
which in Blender is a Vector of 4 floats, putting the vector in a file with
configparser obviously saves a string in the file. Example (the variable
`config` is all the configparser stuff):
vector_color = [ 0.1, 0.8, 0.2, 1.0 ]
config.set("section", "item", vector_color)
it's gonna save this:
> [section]
> item = <Vector (0.1, 0.8, 0.2, 1.0)>
That is good because it stores the vector, but now the problem: I want to
load the vector back. How can I do that? If I load it, it is a string, so how
can I convert it back to a vector? I tried eval(), literal_eval(),
config.read_string(), and many other likely functions.
So, in fewer words:
how can I convert this string:
"<Vector (0.1, 0.8, 0.2, 1.0)>"
to this vector
[0.1, 0.8, 0.2, 1.0]
Answer: If you need to save the actual object - save a _list_ instead of a file with
the characters "[1, 2, 3]" in it - I would highly recommend looking at
`pickle`.
import pickle
my_list = [1, 1, 2, 3, 5]
pickled_list = pickle.dumps(my_list)
f = open('my_file.py', 'w')
f.write(pickled_list)
f.close()
#a wild coding appears...
#you used python... it's super effective
f = open('my_file.py', 'r')
read_file = f.read()
my_loaded_list = pickle.loads(read_file)
f.close()
my_loaded_list ##should be the list you just saved
'pickle' is great. Check the official Python docs.
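If you do need to turn the already-saved string back into a list, a minimal
sketch (assuming the `<Vector (...)>` format shown in the question) is to
pull the numbers out with a regex:

    import re
    s = "<Vector (0.1, 0.8, 0.2, 1.0)>"
    vec = [float(x) for x in re.findall(r'[-+]?\d*\.\d+|\d+', s)]
    print vec  # [0.1, 0.8, 0.2, 1.0]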
|
Python: Create File Directories based on Dictionary Key names
Question: I wonder if there is a way to turn a dictionary into a directory structure.
For a example a dictionary with following keys:
dict['dir1']['subdir1']['subsubdir']['folder1']
['subdir2']['subsubdir']['folder1']['folder2']['folder3']
['subdir3']['subsubdir']
would result to a three directories (where each subdirectory correspond to
dictionary key name:
/dir1/subdir1/subsubdir/folder1/
/dir1/subdir2/subsubdir/folder1/folder2/folder3/
/dir1/subdir3/subsubdir/
Answer:
from collections import defaultdict
import os
import os.path
def tree():
return defaultdict(tree)
d = tree()
d["dir1"]["subdir1"]["subsubdir"]["folder1"]
d["dir1"]["subdir2"]["subsubdir"]["folder1"]["folder2"]["folder3"]
d["dir1"]["subdir3"]["subsubdir"]
def rec(directory, current_path):
    # Descend while there are deeper keys; create the path once a leaf
    # (an empty dict) is reached.
    if len(directory):
        for direc in directory:
            rec(directory[direc], os.path.join(current_path, direc))
    else:
        os.makedirs(current_path)
rec(d, "")
**Output**
~/Desktop$ tree dir1
dir1
|-- subdir1
| `-- subsubdir
| `-- folder1
|-- subdir2
| `-- subsubdir
| `-- folder1
| `-- folder2
| `-- folder3
`-- subdir3
`-- subsubdir
10 directories, 0 files
|
Python mockito - Mocking a class which is being instantiated from the testable function
Question: I am bit lost while writing the test case for
**UserCompanyRateLimitValidation** class. I am finding difficulty in mocking
the class which is being instantiated from inside this class.
class UserCompanyRateLimitValidation:
def __init__(self, user_public_key):
self.adapter = UserAdapter(user_public_key)
container = self.adapter.get_user_company_rate_limit()
super(UserCompanyRateLimitValidation, self).__init__(container,\
UserCompanyRateLimitValidation.TYPE)
I have to test this class. I have written test case something like this. I
have tried to mock the UserAdapter class but I am not able to do so
completely.
def test_case_1():
self.user_public_key = 'TEST_USER_PUBLIC_KEY_XXXXXX1234567890XXXXX'
UserAdapter_mock = mock(UserAdapter)
when(UserAdapter_mock).get_user_company_rate_limit().\
thenReturn(get_fake_container_object())
self.test_obj = UserCompanyRateLimitValidation(self.user_public_key)
Here if you see I have mocked get_user_company_rate_limit() call from the
testable function, `container = self.adapter.get_user_company_rate_limit()`
but I am still not able to figure out the way in which I can mock this call,
self.adapter = UserAdapter(user_public_key)
Answer: It is quite simple if you know the trick.
Creating an object in Python is very much like a function call to the class
object. `UserCompanyRateLimitValidation` is 'invoking'
`UserAdapter(user_public_key)`. You want to stub the return value of that
'call' to return `UserAdapter_mock`.
You can stub this like you would stub a function in a module. The line you're
missing is:
when(module_declaring_UserAdapter)\
.UserAdapter(self.user_public_key)\
.thenReturn(UserAdapter_mock)
After that, calling
`module_declaring_UserAdapter.UserAdapter(self.user_public_key)` will return
`UserAdapter_mock`.
Here's the link to the section in the manual:
<https://code.google.com/p/mockito-python/wiki/Stubbing#Modules>
You have to be careful to choose the right `module_declaring_UserAdapter`, due
to the way the `from ... import ...` statement works. From your code, I'd say
you have to pick the module in which `UserCompanyRateLimitValidation` is
declared.
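Putting the two snippets together, the whole test might look like this sketch
(`module_declaring_UserAdapter` stands for whatever module
`UserCompanyRateLimitValidation` imports `UserAdapter` from):

    from mockito import mock, when

    def test_case_1():
        user_public_key = 'TEST_USER_PUBLIC_KEY_XXXXXX1234567890XXXXX'
        UserAdapter_mock = mock(UserAdapter)
        when(UserAdapter_mock).get_user_company_rate_limit()\
            .thenReturn(get_fake_container_object())
        when(module_declaring_UserAdapter)\
            .UserAdapter(user_public_key)\
            .thenReturn(UserAdapter_mock)
        test_obj = UserCompanyRateLimitValidation(user_public_key)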
|
Managing dictionary memory size in python
Question: I have a program which imports a text file through standard input and
aggregates the lines into a dictionary. However the input file is very large
(1Tb order) and I wont have enough space to store the whole dictionary in
memory (running on 64Gb ram machine). Currently Iv got a very simple clause
which outputs the dictionary once it has reached a certain length (in this
case 100) and clears the memory. The output can then be aggregated at later
point.
So i want to: output the dictionary once memory is full. what is the best way
of managing this? Is there a function which gives me the current memory usage?
Is this costly to keep on checking? Am I using the right tactic?
import sys
X_dic = dict()
# Used to print the dictionary in required format
def print_dic(dic):
for key, value in dic.iteritems():
print "{0}\t{1}".format(key, value)
for line in sys.stdin:
value, key = line.strip().split(",")
if (not key in X_dic):
X_dic[key] = []
X_dic[key].append(value)
# Limit size of dic.
if( len(X_dic) == 100):
print_dic(X_dic) # Print and clear dictionary
X_dic = dict()
# Now output
print_dic(X_dic)
Answer: The module `resource` provides some information on how many resources (memory,
etc.) you are using. See
[here](http://pythonforbiologists.com/index.php/measuring-memory-usage-in-python/)
for a nice little usage example.
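For example, a minimal check of this process's peak memory usage (on Linux
`ru_maxrss` is reported in kilobytes; on OS X it is in bytes):

    import resource
    print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss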
On a Linux system (I don't know where you are) you can watch the contents of
the file `/proc/meminfo`. As part of the _proc_ file system it is updated
automatically.
But I object to the whole strategy of monitoring the memory and using up as
much of it as possible, actually. I'd rather propose to dump the dictionary
regularly (after 1M entries have been added or so). It probably will speed up
your program to keep the dict as small as possible; also it presumably will
have advantages for later processing if all dumps are of similar size. If you
dump a huge dict which fit into your whole memory when nothing else was using
memory, then you later will have trouble re-reading that dict if something
else is currently using some of your memory. So then you would have to create
a situation in which nothing else is using memory (e.g. reboot or similar).
Not very convenient.
|
How to decrypt 3DES in ECB mode (using a wordlist)?
Question: I have some encrypted texts (encrypted with 3DES in ECB mode without salt).
**My question: How can I decrypt them using a wordlist? (or without one?)**
Example:
Encrypted text:
Xfi+h4Ir6l7zXCP+N4EPvQ==
The wordlist for this:
foo
bar
marketing
The original text before encrypting was "marketing" (just to make the
example complete).
I tried with the commented python script:
$ cat 3des.py
#!/usr/local/bin/python
from pyDes import *
data = "marketing"
k = des("DESCRYPT", CBC, "\0\0\0\0\0\0\0\0", pad=None, padmode=PAD_PKCS5)
d = k.encrypt(data)
print "Encrypted: %r" % d
print "Decrypted: %r" % k.decrypt(d)
assert k.decrypt(d, padmode=PAD_PKCS5) == data
testing it:
$ python 3des.py
Encrypted: '\xabd\xfc\x98x\x86\x8d\xb5A\xba\x8e\x12,\x1f\x83\xb5'
Decrypted: 'marketing'
$
A little help please?
Answer: If you're going to use pyDes, you'll first have to configure the script
to use 3DES... Here's the provided pyDes example, modified to use 3DES:
from pyDes import *
data = "marketing"
k = triple_des("DESCRYPTDESCRYPT", ECB, "\0\0\0\0\0\0\0\0", pad=None, padmode=PAD_PKCS5)
d = k.encrypt(data)
print "Encrypted: %r" % d
print "Decrypted: %r" % k.decrypt(d)
assert k.decrypt(d, padmode=PAD_PKCS5) == data
To break it down...
The following line contains the class initialization information:
k = triple_des("DESCRYPTDESCRYPT", ECB, "\0\0\0\0\0\0\0\0", pad=None, padmode=PAD_PKCS5)
From the documentation, the params are as follows:
(key, [mode], [IV], [pad], [padmode])
key -> Bytes containing the encryption key. 8 bytes for DES, 16 or 24 bytes
for Triple DES
mode -> Optional argument for encryption type, can be either
pyDes.ECB (Electronic Code Book) or pyDes.CBC (Cypher Block Chaining)
IV -> Optional Initial Value bytes, must be supplied if using CBC mode.
Length must be 8 bytes.
pad -> Optional argument, set the pad character (PAD_NORMAL) to use during
all encrypt/decrpt operations done with this instance.
padmode -> Optional argument, set the padding mode (PAD_NORMAL or PAD_PKCS5)
to use during all encrypt/decrpt operations done with this instance.
So, in my modified example, I've configured the params like this...
Key: DESCRYPTDESCRYPT
Mode: ECB
IV: "\0\0\0\0\0\0\0\0"
pad: None
padmode: PAD_PKCS5
So, from here, you'll need to change the 'data' variable above to the
ciphertext you want to decrypt and then load your wordlist into an array, set
up a loop to iterate the values in the array through the 'key' param...
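A minimal sketch of that loop, with the assumptions flagged in the comments
(that the ciphertext is base64-encoded as it appears above, and that the key
was derived by repeating the word out to the 24 bytes 3DES requires):

    import base64
    from pyDes import triple_des, ECB, PAD_PKCS5

    # assumption: the ciphertext shown in the question is base64-encoded
    ciphertext = base64.b64decode("Xfi+h4Ir6l7zXCP+N4EPvQ==")

    for word in open("wordlist.txt"):
        word = word.strip()
        if not word:
            continue
        # assumption: the key is the word repeated/truncated to 24 bytes;
        # real schemes may derive the key differently
        key = (word * (24 // len(word) + 1))[:24]
        try:
            plain = triple_des(key, ECB, padmode=PAD_PKCS5).decrypt(ciphertext)
            print word, "->", repr(plain)
        except Exception:
            pass  # wrong key length or bad padding -- not this word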
|
Python: How to sort a list of lists by the most common first element?
Question: How do you sort a list of lists by the count of the first element? For
example, if I had the following list below, I'd want the list to be sorted so
that all the 'University of Georgia' entries come first, then the 'University
of Michigan' entries, and then the 'University of Florida' entry.
l = [['University of Michigan','James Jones','phd'],
['University of Georgia','Anne Greene','ba'],
['University of Michigan','Frank Kimball','ma'],
['University of Florida','Nate Franklin','ms'],
['University of Georgia','Sara Dean','ms'],
['University of Georgia','Beth Johnson','bs']]
Answer:
from collections import Counter
c = Counter(item[0] for item in l)
print sorted(l, key = lambda x: -c[x[0]])
**Output**
[['University of Georgia', 'Anne Greene', 'ba'],
['University of Georgia', 'Sara Dean', 'ms'],
['University of Georgia', 'Beth Johnson', 'bs'],
['University of Michigan', 'James Jones', 'phd'],
['University of Michigan', 'Frank Kimball', 'ma'],
['University of Florida', 'Nate Franklin', 'ms']]
Vanilla dict version:
c = {}
for item in l:
c[item[0]] = c.get(item[0], 0) + 1
print sorted(l, key = lambda x: -c[x[0]])
`defaultdict` version:
from collections import defaultdict
c = defaultdict(int)
for item in l:
c[item[0]] += 1
print sorted(l, key = lambda x: -c[x[0]])
|
python Eclipse - how to break/pause on warning?
Question: I found this which allows to break on exception.
[Break on exception in pydev](http://stackoverflow.com/questions/455552/break-
on-exception-in-pydev/6655894#6655894)
However, what I'd like is to break on a warning. This is the warning I get;
when this or another warning is reported, I'd like to break at that point.
RuntimeWarning: invalid value encountered in double_scalars yv = Nv(v, U*r)/Nv(v, U*r_)
Thanks in advance.
Answer: Unlike an exception, which is associated with several flow-control mechanisms,
a warning is simply text that is output to the console - more exactly, to
[`stderr`](https://en.wikipedia.org/wiki/Stderr#Standard_error_.28stderr.29).
A possible way to break on warnings would thus be intercepting calls to
`stderr`:
class MyStderr(object):
    def __init__(self, original_stderr):
        self.original_stderr = original_stderr
    def my_break(self):
        import pdb; pdb.set_trace()
    def write(self, *args, **kwargs):
        self.my_break()
        self.original_stderr.write(*args, **kwargs)
    def writelines(self, *args, **kwargs):
        self.my_break()
        self.original_stderr.writelines(*args, **kwargs)
    def __getattr__(self, name):
        # delegate anything else (flush, isatty, ...) to the real stderr
        return getattr(self.original_stderr, name)

import sys
sys.stderr = MyStderr(sys.stderr)
This should launch the interactive `pdb` debugger.
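A simpler alternative, if stopping on warnings is the real goal: the
`warnings` module can promote matching warnings to exceptions, which the
debugger then breaks on like any other exception:

    import warnings
    warnings.simplefilter("error", RuntimeWarning)

For warnings raised inside NumPy, such as the `invalid value encountered in
double_scalars` above, `numpy.seterr(invalid='raise')` has the same effect.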
|
Python client certificate authentication over https is failing
Question: I'm attempting to get https client authentication working using [this sample
code](http://stackoverflow.com/a/4464435/789671) in Python 2.7. Unfortunately,
the client script doesn't appear to be authenticating correctly and I've not
been able to track down why.
I generated a test CA and server/client certificates as follows:
# Generate CA key and certificate
openssl genrsa -des3 -out test.ca.key 8192
openssl req -new -key test.ca.key -x509 -days 30 -out test.ca.crt
# Generate server key and certificate
openssl genrsa -out www.testsite.com.key 1024
openssl req -new -key www.testsite.com.key -out www.testsite.com.csr
openssl x509 -req -days 30 -in www.testsite.com.csr -CA test.ca.crt -CAkey test.ca.key -CAcreateserial -out www.testsite.com.crt
# Generate client key and certificate
openssl genrsa -out testclient.key 1024
openssl req -new -key testclient.key -out testclient.csr
openssl x509 -req -days 30 -in testclient.csr -CA test.ca.crt -CAkey test.ca.key -CAcreateserial -out testclient.crt
Now if I generate a PKCS#12 certificate:
openssl pkcs12 -export -clcerts -in testclient.crt -inkey testclient.key -out testclient.p12
...and import testclient.p12 into Firefox, I can browse the test site as
expected, so the server and keys would appear to be configured properly.
However, when trying the sample code referenced above, as follows:
import urllib2, httplib
class HTTPSClientAuthHandler(urllib2.HTTPSHandler):
def __init__(self, key, cert):
urllib2.HTTPSHandler.__init__(self)
self.key = key
self.cert = cert
def https_open(self, req):
return self.do_open(self.getConnection, req)
def getConnection(self, host, timeout=300):
return httplib.HTTPSConnection(host, key_file=self.key, cert_file=self.cert)
opener = urllib2.build_opener(HTTPSClientAuthHandler('testclient.key', 'testclient.crt') )
response = opener.open("https://www.testsite.com/")
print response.read()
... I get a 403 error:
Traceback (most recent call last):
File "./cert2.py", line 21, in <module>
response = opener.open("https://www.testsite.com/")
File "/usr/lib64/python2.6/urllib2.py", line 395, in open
response = meth(req, response)
File "/usr/lib64/python2.6/urllib2.py", line 508, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.6/urllib2.py", line 433, in error
return self._call_chain(*args)
File "/usr/lib64/python2.6/urllib2.py", line 367, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 516, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
Where did I go astray?
EDIT: Here's the site's Apache config:
<VirtualHost *:80>
ServerAdmin webmaster@testsite.com
ServerName www.testsite.com
DocumentRoot /var/www/testsite/
<Directory "/">
Options FollowSymLinks
AllowOverride None
</Directory>
RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^/?(.*) https://www.testsite.com/$1 [L]
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
<VirtualHost *:443>
ServerAdmin webmaster@testsite.com
DocumentRoot /var/www/testsite/
ServerName www.testsite.com
SSLEngine on
SSLCertificateFile /etc/ssl/certs/www.testsite.com.crt
SSLCertificateKeyFile /etc/ssl/private/www.testsite.com.key
SSLCertificateChainFile /etc/ssl/ca/test.ca.crt
SSLCACertificateFile /etc/ssl/ca/test.ca.crt
<Location />
SSLRequireSSL
SSLVerifyClient require
SSLVerifyDepth 10
</Location>
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/testsite>
Options FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/testsite/
<Directory "/usr/lib/cgi-bin/testsite">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
</VirtualHost>
Answer: I think this isn't a cert issue. HTTP 403 (Forbidden) is an HTTP status code
sent to you by the HTTP server through the secured channel, so SSL seems to
be working fine. You are just trying to access something you are not
authorized for.
Additional note: check the same URL in a browser too!
|
Pymongo BSON Binary save and retrieve?
Question: I'm working in Python with MongoDB trying to save an array of floats tightly.
I can **create and store correctly**,
but **I CANNOT RETRIEVE THE DATA IN A USABLE FORMAT.**
>>> import random, array, pymongo
>>> from bson.binary import Binary as BsonBinary
>>> con = pymongo.Connection('localhost', 27017)
>>> mm = con['testDatabase']
>>> vals = [random.random() *100 for x in range(1, 5)]
>>> vals
[2.9962593, 64.5582810776, 32.3781311717, 82.0606953423]
>>> varray = array.array('f', vals)
>>> varray
array('f', [2.9962593, 64.5582810776, 32.3781311717, 82.0606953423])
>>> vstring = varray.tostring()
>>> vstring
'\xb7\xc2?@\xd7\x1d\x81B5\x83\x01B\x13\x1f\xa4B'
>>> vbson = BsonBinary(vstring, 5)
>>> vbson
Binary('\xb7\xc2?@\xd7\x1d\x81B5\x83\x01B\x13\x1f\xa4B', 5)
>>> doc1 = { 'something': 1 , 'else' : vbson}
>>> doc1
{'something': 1, 'else': Binary('\xb7\xc2?@\xd7\x1d\x81B5\x83\x01B\x13\x1f\xa4B', 5)}
>>> mm.test1.insert(doc1)
ObjectID('530f7af1d809d80d3db1f635')
>>> gotdoc = mm.test1.find_one()
>>> gotdoc
{u'_id': ObjectId('530f7af1d809d80d3db1f635'), u'something': 3, u'else': Binary('\xb7\xc2?@\xd7\x1d\x81B5\x83\x01B\x13\x1f\xa4B', 5)}
>>> gotfield = gotdoc['else']
>>> gotfield
Binary('\xb7\xc2?@\xd7\x1d\x81B5\x83\x01B\x13\x1f\xa4B', 5)
>>> from bson import BSON
>>> BSON.decode(gotfield)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method decode() must be called with BSON instance as first argument (got Binary instance instead)
>>> gotfield.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xb7 in position 0: ordinal not in range(128)
>>>
**Once I get my Python string back, I can get my array of random floats back.
But how?**
Answer: Let's go through the errors:
1. The first error appears simply because you need an actual BSON object. Note, that you have never encoded any data - creating `bson.binary.Binary` object does not mean invoking `BSON.encode()`.
2. And that is where PyMongo cheats you a bit. The `bson.binary.Binary` is a runtime-patched `str` or `bytes` instance ([see source](https://github.com/mongodb/mongo-python-driver/blob/master/bson/binary.py)). That is why you get the second error: what you call is actually `str.decode()`, not `BSON.decode()`. So, `gotfield` _contains_ the random float data you've stored initially, but the object itself has some different methods (e.g. `repr()`) bound to it.
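In other words, `gotfield` already holds the raw bytes that were stored. A
minimal sketch of recovering the floats, mirroring the `array.tostring()`
call used when saving:

    import array
    varray2 = array.array('f')
    varray2.fromstring(str(gotfield))  # Binary is a str subclass in Python 2
    print list(varray2)  # the original floats, as 32-bit approximations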
|
One producer thread and multiple non-blocking simultaneous consumer threads (MROW) python
Question: I have one producer thread reading some data and processing them and other
threads reading that that, I was using a condition variable in the producer in
order to avoid busy wait and notify when the new adquisition of data is ready,
but every consumer takes the lock when they wanna read that data so the other
consumer threads have to wait, what would be the better aproach to get all the
consumers read the data at once when the producer is done?,
Here is some of the code:
The produccer:
condition = threading.Condition()
def get_data():
while(1):
global actumulated_data, sdev1, mean1, number_of_adq, instant_data
time.sleep(1.3660619/1000000.0)
number_of_adq = number_of_adq + 1
fpga.write_int('adc_data_ctrl',0)
fpga.write_int('adc_data_ctrl',7)
data = fpga.read('adc_data_bram',8192,0)
data2 = map(hex, map(ord, data))
for i in range(8192):
data2[i] = fix8_7(twos_comp(int(data2[i],16),8))
condition.acquire() # condition variable
instant_data = numpy.array(data2)
actumulated_data = numpy.concatenate((actumulated_data,instant_data),axis=0)
condition.notifyAll()
condition.release()
One of the consumers:
def plot_adc_out():
while(1):
condition.acquire()
plt.clf()
plt.plot(instant_data)
plt.grid(True)
plt.xlim(0, len(instant_data))
plt.ylim(-1.5,1.5)
plt.draw()
condition.wait()
condition.release()
So there is a few consumers like this one, I was thinking on something like in
C++ with read and write locks in which if the lock is write lock is gonna be
blocking but if is a reader lock is non blocking so can be a lot of threads
reading that item at the same time.
[Readers–writer
lock](http://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock)
pthread_rwlock_wrlock
pthread_rwlock_rdlock
But I belive that python dosen't have read and write locks.
Answer: The correct way for threads to communicate is by using locks, but the
queue.Queue data structure offers a much more convenient mechanism for inter-
thread communication, with all the ugly locking stuff comfortably hidden under
the covers. Here's an example adapted from from my O'Reilly School of
Technology classes that converts a string to upper case using by piping each
character to a separate thread. Note the code below is Python 3, but it is
easily adapted to Python 2.
Everything is started off by the control thread (which imports the output and
worker threads shown below from their respective modules). It first creates
the input and output queues. These are standard FIFOs, with a limit of 50%
more than the number of worker threads to avoid locking up too much memory in
buffered objects. Then it creates and starts the output thread, and finally
creates and starts as many worker threads as configured by the WORKERS
constant. Worker threads get from the input queue and put to the output queue.
"""
control.py: Creates queues, starts output and worker threads,
and pushes inputs into the input queue.
"""
from queue import Queue
from output import OutThread
from worker import WorkerThread
WORKERS = 10
inq = Queue(maxsize=int(WORKERS*1.5))
outq = Queue(maxsize=int(WORKERS*1.5))
ot = OutThread(WORKERS, outq)
ot.start()
for i in range(WORKERS):
w = WorkerThread(inq, outq)
w.start()
instring = input("Words of wisdom: ")
for work in enumerate(instring):
inq.put(work)
for i in range(WORKERS):
inq.put(None)
inq.join()
print("Control thread terminating")
The Worker threads have been cast so as to make interactions easy. The work
units received from the input queue are (index, character) pairs, and the
output units are also pairs. The processing is split out into a separate
method to make subclassing easier—simply override the process() method.
"""
worker.py: a sample worker thread that receives input
through one Queue and routes output through another.
"""
from threading import Thread
class WorkerThread(Thread):
def __init__(self, iq, oq, *args, **kw):
"""Initialize thread and save Queue references."""
Thread.__init__(self, *args, **kw)
self.iq, self.oq = iq, oq
def run(self):
while True:
work = self.iq.get()
if work is None:
self.oq.put(None)
print("Worker", self.name, "done")
self.iq.task_done()
break
i, c = work
result = (i, self.process(c)) # this is the "work"
self.oq.put(result)
self.iq.task_done()
def process(self, s):
"""This defines how the string is processed to produce a result"""
return s.upper()
The output thread simply has to extract output packets from a queue where they
are placed by the worker threads. As each worker thread terminates, it posts a
None to the queue. When a None has been received from each thread, the output
thread terminates. The output thread is told on initialization how many worker
threads there are, and each time it receives another None it decrements the
worker count until eventually there are no workers left. At that point, the
output thread terminates. Since the worker threads aren't guaranteed to return
in any particular order the results can be sorted. Without sorting you can see
the order the results arrived in.
"""
output.py: The output thread for the miniature framework.
"""
identity = lambda x: x
import threading
class OutThread(threading.Thread):
def __init__(self, N, q, sorting=True, *args, **kw):
"""Initialize thread and save queue reference."""
threading.Thread.__init__(self, *args, **kw)
self.queue = q
self.workers = N
self.sorting = sorting
self.output = []
def run(self):
"""Extract items from the output queue and print until all done."""
while self.workers:
p = self.queue.get()
if p is None:
self.workers -= 1
else:
# This is a real output packet
self.output.append(p)
print("".join(c for (i, c) in (sorted if self.sorting else identity)(self.output)))
print ("Output thread terminating"
|
Minimum, mean and maximum distance between points 3-D in Python
Question: I have a list of x,y,z points. Using the formula to find the distance between
two points in 3-D
import math
import numpy as np
point0 = x0, y0, z0
point1 = x1, y1, z1
dist = math.sqrt((x0-x1)**2+(y0-y1)**2+(z0-z1)**2)
def dist3d((x0, y0, z0), (x1, y1, z1)):
return math.sqrt((x0-x1)**2+(y0-y1)**2+(z0-z1)**2)
I wish to write an optimized loop and store the distances:
points = [(472765.09, 6191522.78, 13.0), (472764.82, 6191524.09, 9.0),
(472763.8, 6191525.68, 8.0), (472764.07, 6191524.39, 16.0)]
dist01 = dist3d(points[0], points[1])
dist02 = dist3d(points[0], points[2])
dist03 = dist3d(points[0], points[3])
dist12 = dist3d(points[1], points[2])
dist13 = dist3d(points[1], points[3])
dist23 = dist3d(points[2], points[3])
dist_l = [dist01, dist02, dist03, dist12, dist13, dist23]
dist_max = max(dist_l)
dist_min = min(dist_l)
dist_mean = np.average(dist_l)
I wrote the following function (it's not optimized)
def dist3d((x0, y0, z0), (x1, y1, z1)):
return math.sqrt((x0-x1)**2+(y0-y1)**2+(z0-z1)**2)
def dist_3d(obs):
dist_list = list()
while len(obs) != 1:
obs_g = [(obs[0], x) for x in obs[1:]]
dist_list.append([dist3d(obs_g[i][0], obs_g[i][1]) for i in xrange(len(obs_g))])
obs.pop(0)
return dist_list
points = [(472765.09, 6191522.78, 13.0), (472764.82, 6191524.09, 9.0), (472763.8, 6191525.68, 8.0), (472764.07, 6191524.39, 16.0)]
print dist_3d(points)
[[4.217700795331081, 5.922339064664832, 3.554222840244929], [2.1374049685457694, 7.046453008421205], [8.107835716151763]]
Answer: If you don't mind using scipy, this is fairly trivial:
import numpy as np
import scipy.spatial.distance as distance
points = np.array([(472765.09, 6191522.78, 13.0), (472764.82, 6191524.09, 9.0), (472763.8, 6191525.68, 8.0), (472764.07, 6191524.39, 16.0)])
dist = distance.pdist(points)
print dist.max()
print dist.min()
print np.median(dist)
print np.average(dist)
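`pdist` returns the pairwise distances as a condensed 1-D vector; if a full
matrix is more convenient, `squareform` converts between the two
representations:

    from scipy.spatial.distance import squareform
    print squareform(dist)  # 4x4 symmetric distance matrix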
|
Python: Testing Serial Ports for Answer
Question: I'm trying to build a short code that will test all Serial COM ports (I'm on
windows) for reply. For example, I have a Arduino connected on COM3, and when
it connects, it sends a serial message.
I want the Python script to automatically detect which is the right COM port
to use when it runs.
I have the function that lists all the ports, but I can't make it work to test
all of them and detect which one the Arduino is connected to.
**Python:**
import serial
import time
import _winreg as winreg
import itertools
import datetime
def enumerate_serial_ports():
""" Uses the Win32 registry to return an
iterator of serial (COM) ports
existing on this computer.
"""
path = 'HARDWARE\\DEVICEMAP\\SERIALCOMM'
try:
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
except WindowsError:
return  # registry key missing, so there are no ports to list
for i in itertools.count():
try:
val = winreg.EnumValue(key, i)
yield str(val[1])
except EnvironmentError:
break
connected = False
for porta in enumerate_serial_ports():
print "TRYING PORT: " + porta
start = datetime.datetime.now()
delta=0
ser = serial.Serial(porta,9600,timeout=0)
if ser.isOpen():
while ser.isOpen() and delta < 1:
delta = (datetime.datetime.now()-start).seconds
r = ser.read()
if r == None:
print "connected!"
**Arduino Code**
void setup(){
// Open serial connection.
Serial.begin(9600);
pinMode(13, OUTPUT);
Serial.write(6);
}
void loop(){
}
Thank you!!
Answer: This is a common problem in that there is no "connection" for serial ports. As
you see, when you open() a serial port, that has no relation to whether
something is plugged in on the other end of the wires. All the open() calls
are doing is establishing the handles to the serial port hardware inside that
machine. The open() guarantees nothing about the other side of the cable.
The concept of "connection" is something you must create. Connections are
normally created by some protocol handshake between the two devices that
establishes to both that they are talking to who they expect.
Two general approaches are:
* probe and check for a response
* listen for a beacon
I warn in this related question that the first approach of pushing bytes at an
unknown device is not the safest approach:
[Related question](http://stackoverflow.com/questions/21615591/how-can-i-keep-
the-serial-address-consistent-on-my-arduino-uno-mac-osx)
You have chosen the second approach where the Python side passively listens
for a beacon. You need something similar to this, where you receive some
predefined message and respond when you receive:
# loop for a short time listening for a *
if r != None:
if r.endswith("*"):
ser.write("I_hear_you")
print "connected"
The Arduino side powers up and broadcasts the beacon. It continues to
broadcast until someone answers
bool beaconmode = true;       // declarations assumed; the original sketch omitted them
bool connectedmode = false;

void loop() {
  if (beaconmode) {
    delay(1000);
    Serial.write("*");
    // Serial.read() and check for "I_hear_you"
    if (/* someone heard my beacon */) {
      beaconmode = false;
      // now the two devices are "connected"
      connectedmode = true;
    }
  }
  if (connectedmode) {
    // do normal stuff
  }
}
If you are seeking to be even more robust, this handshake could be expanded
one step. As it is, it only confirms that the Python side heard the Arduino
side. The added step would be for the Arduino to acknowledge by transmitting
"I heard that you heard me". Then you would have something similar to how the
3 way handshake used to establish TCP connections. .
|
Positional Inverted Index in Python
Question: I recently developed a Python program that makes an inverted index out of
terms in a certain document. I now want to create position postings, such as
to, 993427:
⟨ 1, 6: ⟨7, 18, 33, 72, 86, 231⟩;
2, 5: ⟨1, 17, 74, 222, 255⟩; 4, 5: ⟨8, 16, 190, 429, 433⟩; 5, 2: ⟨363, 367⟩;
7, 3: ⟨13, 23, 191⟩; …⟩
I know the code is not complete as described above, I'm just trying to
implement functionality.
from pprint import pprint as pp
from collections import Counter
import pprint
import re
import sys
import string
import fileinput
try:
reduce
except:
from functools import reduce
try:
raw_input
except:
raw_input = input
def readIn(fileglob): #Reads in multiple files and strips punctation/uppercase.
texts, words = {}, set()
for txtfile in (fileglob):
with open(txtfile, 'r') as splitWords:
txt = splitWords.read().lower().split()
txt = str(txt)
txt = re.findall(r'\w+', txt)
words |= set(txt)
texts[txtfile.split('\\')[-1]] = txt
return texts, words
def search(indexes): # Inverted index, based off the book and the web.
return reduce(set.intersection,
(index[word] for word in indexes),
set(texts.keys()))
def getWordBins(posOfWords):
cnt = Counter()
for word in posOfWords:
cnt[posOfWords] += 1
return cnt
def main(fileList, topWords):
tempArray = []
for x in range(1,len(fileList)):
tempArray.append(fileList[x])
texts, words = readIn(tempArray)
index = {word:set(txt
for txt, wrds in texts.items() if word in wrds)
for word in words}
test =({k + " " + str(len(v)) + " " + str(sorted(v)) for k,v in index.items()})
txt = readIn(fileList)
posWord = getWordBins(txt)
for key, value in posWord.most_common(topWords):
print key, value
#Writes out the information requested to a ".idx" file.
doc = open("document.idx", "w")
doc.write("# INPUT DOCUMENT REFERENCE LEGEND\n")
for fileNumber in range(1, len(fileList)):
doc.write(str(fileNumber) + "\t" + fileList[fileNumber] + "\n")
doc.write("# INVERTED INDEX RESULTS\n")
tempTest = []
for x in test:
tempTest.append(x.split(" "))
for x in tempTest:
tempStr = ""
for y in x:
tempStr += y + "\t"
doc.write(tempStr + "\n")
doc.close
main(sys.argv, sys.argv)
This is what I have so far, the only new functionality is the getWordBins
function, and the loop:
txt = readIn(fileList)
posWord = getWordBins(txt)
for key, value in posWord.most_common(topWords):
print key, value
Now, what happens when I try to run the code is this:
Traceback (most recent call last):
File "Intro3.py", line 82, in <module>
main(sys.argv, sys.argv)
File "Intro3.py", line 60, in main
posWord = getWordBins(txt)
File "Intro3.py", line 41, in getWordBins
cnt[posOfWords] += 1
TypeError: unhashable type: 'dict'
Any guidance with this troubling error is gladly received. It is not a
dictionary, so why the error? Thanks for your time!
Answer: Where you're doing:
cnt[posOfWords] += 1
I think you might mean:
cnt[word] += 1
Your `readIn` function also returns a dict and a set, so your `txt` variable is
a tuple of (`dict`, `set`).
So your problem boils down to trying to use a tuple holding a dict as a key
(which I doubt is your intent). And it wouldn't work for `cnt[word] += 1`,
because that would still be trying to use a dict as a key too. You need to do
this, probably:
txt, _ = readIn(fileList)
and then this might work:
cnt[word] += 1
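Putting the answer's pieces together, a sketch (note `readIn` returns the texts
dict keyed by filename, so to count words you likely want to iterate over its
values, not its keys):

    def getWordBins(posOfWords):
        cnt = Counter()
        for word in posOfWords:
            cnt[word] += 1
        return cnt

    texts, _ = readIn(fileList)   # unpack the (dict, set) tuple
    posWord = getWordBins(word for words in texts.values() for word in words)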
|
Python Classify commands
Question: I am writing a Python script to classify IPs by country according to
another file. For example, I have 2 files in the script dir.
IPCountries.txt contains :-
192.168.1.1 | US,
188.100.0.0 | AU,
and the file arrange.txt contains :-
0="US,CA,UK,GE,"
1="AU,EG,"
Now the script should read each line in the IPCountries.txt file, take the value
after "|" (e.g. "US"), match it against the groups in arrange.txt, and write the
line into a new file called 0.txt.
The problem is that I do not know how to do this. I used some info to write the
following code, but I am stuck at the loop at the end, as you can see here:
import re
import os
filepath = 'arrange.txt'
with open(filepath) as file:
txt = file.read()
mapping = re.findall(r'(\d+)="(.*)"', txt)
ip = open("IPCountries.txt",'r')
for line in ip:
Any help with the loop, or a suggestion for how to do it with the same process
and files?
Thanks
Answer: You could use something like
for line in ip:
ip, country = [e.strip() for e in line.split("|")]
country = country[:-1] # Strip off comma at the end
I'm not sure what you intend to do with these variables, but the basic
extraction process could look like my example code.
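A fuller sketch of the whole flow, assuming each group number from arrange.txt
names its output file (0.txt, 1.txt, ...):

    import re

    # Build {country_code: group_number} from arrange.txt
    with open('arrange.txt') as f:
        mapping = re.findall(r'(\d+)="(.*)"', f.read())
    country_to_group = {}
    for group, countries in mapping:
        for country in countries.split(','):
            if country:
                country_to_group[country] = group

    # Route each line of IPCountries.txt to its group file
    with open('IPCountries.txt') as ip_file:
        for line in ip_file:
            ip, country = [e.strip() for e in line.split('|')]
            country = country.rstrip(',')  # strip the trailing comma
            group = country_to_group.get(country)
            if group is not None:
                with open(group + '.txt', 'a') as out:
                    out.write(line)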
|
Speeding up Fourier-related transform computations in python (OpenCV)
Question: I have an image and I need to compute a fourier-related transform over it
called Short Time Fourier Transform (for extra mathematical info
check:<http://en.wikipedia.org/wiki/Short-time_Fourier_transform>).
In order to do that I need to :
(1) place a window at the starting pixel of the image (x,y)=(M/2,M/2)
(2) Truncate the image using this window
(3) Compute the FFT of the truncated image, save results.
(4) Incrementally slide the window to the right
(5) Go to step 3, until window reaches the end of the image
However, I need to perform the aforementioned calculation in real time, but it
is rather slow!
Is there any way to speed up the aforementioned process?
I also include my code:
height, width = final_frame.shape
M=2
for j in range(M/2, height-M/2):
for i in range(M/2, width-M/2):
face_win=final_frame[j-M/2:j+M/2, i-M/2:i+M/2]
#these steps are perfomed in order to speed up the FFT calculation process
height_win, width_win = face_win.shape
fftheight=cv2.getOptimalDFTSize(height_win)
fftwidth=cv2.getOptimalDFTSize(width_win)
right = fftwidth - width_win
bottom = fftheight - height_win
bordertype = cv2.BORDER_CONSTANT
nimg = cv2.copyMakeBorder(face_win,0,bottom,0,right,bordertype, value = 0)
dft = cv2.dft(np.float32(face_win),flags = cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft)
magnitude_spectrum = 20*np.log(cv2.magnitude(dft_shift[:,:,0],dft_shift[:,:,1]))
Answer: Of course the bulk of your time is going to be spent in the FFT's and other
transformation code, but I took a shot at easy optimizations of the other
parts.
# Changes
* Frame size calculations are the same every loop so move them out (~nil improvement)
* Type coercion from uint8 to float32 can be done once on the whole image rather than converting each frame. (small but measurable improvement)
* If the window size is already the same as the optimal size (I guess it always will be if you keep M as a power of 2), then don't do the bordered copy. Just use the `face_win` view as-is. (small but measurable improvement)
Total improvement 26s --> 22s. Not much but there it is.
# Standalone Code (just add `1024x768.jpg`)
import time
import cv2
import numpy as np
# image loading for anybody else who wants to use this
final_frame = cv2.imread('1024x768.jpg')
final_frame = cv2.cvtColor(final_frame, cv2.COLOR_BGR2GRAY)
final_frame_f32 = final_frame.astype(np.float32) # moved out of the loop
# base data
M = 4
height, width = final_frame.shape
# various calculations moved out of the loop
m_half = M//2
height_win, width_win = [2 * m_half] * 2 # can you even use odd values for M?
fftheight = cv2.getOptimalDFTSize(height_win)
fftwidth = cv2.getOptimalDFTSize(width_win)
bordertype = cv2.BORDER_CONSTANT
right = fftwidth - width_win
bottom = fftheight - height_win
start = time.time()
for j in range(m_half, height-m_half):
for i in range(m_half, width-m_half):
face_win = final_frame_f32[j-m_half:j+m_half, i-m_half:i+m_half]
# only copy for border if necessary
if (fftheight, fftwidth) == (height_win, width_win):
nimg = face_win
else:
nimg = cv2.copyMakeBorder(face_win, 0, bottom, 0, right, bordertype, value=0)
dft = cv2.dft(nimg, flags=cv2.DFT_COMPLEX_OUTPUT)
dft_shift = np.fft.fftshift(dft)
magnitude_spectrum = 20 * np.log(cv2.magnitude(dft_shift[:, :, 0], dft_shift[:, :, 1]))
elapsed = time.time() - start
print elapsed
# Bugs
* I fixed these in the code above but I didn't edit your original since you may have intended it to be that way
* you calculate nimg but then use the original face_win in the dft
* to be explicit, I changed `M/2` etc. to `M//2`
|
How to code a Python function that accepts float, list or numpy.array?
Question: I have the following simple Python function:
def get_lerp_factor( a, x, b ):
if x <= a: return 0.
if x >= b: return 1.
return (x - a) / (b - a)
Many numpy functions, like numpy.sin(x) can handle a float or an array.
So how can I extend this in the same manner, so that it can also handle a
numpy array for x?
def get_lerp_factor( a, x_maybe_array, b ):
out = (x_maybe_array - a) / (b - a) # this should work...
# but now I have to clamp each element of out between 0 and 1
Would I have to specifically check the type of x, and branch accordingly?
How about:
def get_lerp_factor( a, x_anything, b ):
x = np.array( x_anything )
out = ...(x)
# now typecast out back into the same type as x... will this work?
?
Answer: You need
[`numpy.asarray`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.asarray.html).
This takes as its first argument:
> Input data, in any form that can be converted to an array. This includes
> lists, lists of tuples, tuples, tuples of tuples, tuples of lists and
> ndarrays.
and it returns:
> Array interpretation of `a`. No copy is performed if the input is already an
> ndarray.
So you can implement your function like this:
import numpy as np
def get_lerp_factor(a, x, b):
a, x, b = np.asarray(a), np.asarray(x), np.asarray(b)
return ((x - a) / (b - a)).clip(0, 1)
This works for scalars:
>>> get_lerp_factor(0, 9, 16)
0.5625
and also for iterables:
>>> get_lerp_factor(2, range(8), 6)
array([ 0. , 0. , 0. , 0.25, 0.5 , 0.75, 1. , 1. ])
|
Invoking destroy() method on window after mainloop in tkinter
Question: I'm relatively new to Python and Tkinter and I am striving to get my
head around how mainloop and the after method work. More specifically, I want to create a
splashScreen, which goes away after a timeframe, and then the actual
mainWindow is shown.
My code looks like this:
class SplashScreen:
# some code here
def destroyMe(self):
self.destroy()
def destroySplashScreen(self, timeToSleep=0):
if timeToSleep > 0:
self.master.after(timeToSleep*1000, self.destroyMe())
# some other code here
if __name__ == '__main__':
root = Toplevel()
mySP = SplashScreen(root)
mySP.populateSplashScreen()
mySP.destroySplashScreen(5)
root.mainloop()
However, what the code does is to create the whole window after the timeframe
given (5 sec.) without any content. Instead, it should create it, wait 5 sec.
and then destroy it.
Answer: Here is a working example.
`after` expects the function name only (without `()`), and `destroy()` needs to
be called on `self.master`:
from Tkinter import *
class SplashScreen:
# some code here
def __init__(self, master):
self.master = master
self.master.title("SplashScreen")
def destroyMe(self):
self.master.destroy()
def destroySplashScreen(self, timeToSleep=0):
if timeToSleep > 0:
self.master.after(timeToSleep*1000, self.destroyMe)
# some other code here
if __name__ == '__main__':
root = Toplevel()
mySP = SplashScreen(root)
#mySP.populateSplashScreen()
mySP.destroySplashScreen(3)
root.mainloop()
* * *
BTW: `Toplevel` is used to create a "child" window, so (in this example) it
automatically creates a "parent" window as well - which is why I added
`title("SplashScreen")`.
BTW: if you use `command=` in a widget, it also expects a function name
without `()`.
If you use (in `command` or `after`) a function with `()`, you run that function
immediately and its **result** is assigned to `command=` or used as the
parameter for `after`.
|
Find and copy a line using regex in Python
Question: I am new to this forum and to programming and apologize in advance if I
violate any of the forum rules. I have researched this extensively, but I
couldn't find a solution for my problem.
So I have a very long file that has this general structure:
data="""
20.020001 563410 9
20.520001 577410 20
21.022001 591466 9
21.522001 605466 120
23.196001 652338 2
25.278001 710634 7
25.780001 724690 144
26.280001 738690 9
26.782001 752746 40
27.282001 766746 9
27.784001 780802 140
29.372001 825266 2
31.458001 883674 7
31.958002 897674 8
32.458002 911674 9
32.958002 925674 10
"""
I imported the file using
with open("C:\blablabla\text.txt", 'r+') as infile:
data = infile.read()
Now I am trying to use a regular expression to find all lines that end with
140 through 146, so I did this:
items=re.findall('.......................14[0-6]\n',data,re.MULTILINE)
for x in items:
print x
This works, but when I now try to copy those lines that contain the regular
expression,
for x in items:
if items in data:
data.write(items)
I get the following error:
if items in data:
TypeError: 'in <string>' requires string as left operand, not list
I understand what the problem is, but I don't know how to solve it. How can I
feed the left operand a string when the outcome of my regex is a list?
Any help is much appreciated!
Answer: You should simply handle each line separately:
data = infile.readlines()
for line in data:
if re.match('.......................14[0-6]\n', line):
print line[:-1]
The last character of the line is a trailing newline, which would be
duplicated by the one the `print` statement includes.
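If the goal is to copy the matching lines somewhere rather than just print
them, you can collect them and write them out; a sketch with a cleaner pattern
anchored at the end of the line (the output filename is just an illustration):

    import re

    with open("C:\\blablabla\\text.txt") as infile:
        matched = [line for line in infile
                   if re.search(r'\s14[0-6]\s*$', line)]

    with open("C:\\blablabla\\matched.txt", "w") as outfile:
        outfile.writelines(matched)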
|
Installing Anaconda on Ubuntu 13.10 giving an error message
Question: I ran the installer of Anaconda and at the end I got this message:
...
installing: zlib-1.2.7-0 ...
installing: anaconda-1.9.1-np18py27_0 ...
installing: _cache-0.0-x0 ...
Anaconda-1.9.1-Linux-x86_64.sh: line 389: /home/ohm/anaconda/pkgs/python-2.7.6-1/bin/python: cannot execute binary file
ERROR:
cannot execute native linux-64 binary, output from 'uname -a' is:
Linux ohm-ThinkCentre-M57 3.11.0-17-generic #31-Ubuntu SMP Mon Feb 3 21:53:31 UTC 2014 i686 i686 i686 GNU/Linux
ohm@ohm-ThinkCentre-M57:~/Downloads$
When I try to import one of the modules, like scipy, it doesn't let me.. What
could be the problem?
Answer: I think you need to download the 32-bit version of Anaconda: the
`uname -a` output shows `i686`, i.e. a 32-bit system, while you ran the
`x86_64` (64-bit) installer. You should be able to get it from the
[downloads](http://continuum.io/downloads) page.
|
Python reportlab put an image in a canvasmaker
Question: I'd like to put an image (a barcode more precisely) in a pdf doc generated by
reportlab. I can put it in a table. That works perfectly with
createBarcodeDrawing().
The point is that I'd like the barcode to change on each page. Thus, I want to
put it in a canvasmaker.
Whatever method I use (drawImage(), drawInLineImage(),...), I always have an
error. I even tried to use CustomImage from [Reportlab [ Platypus ] - Image
does not render](http://stackoverflow.com/questions/13018786/reportlab-
platypus-image-does-not-render) without any success. Consequently, my question
is how can I draw an image in a canvas.Canvas ?
Can anybody help? Thank you in advance - Dom (I am not a professional)
Following a remark I read on <http://www.tylerlesmann.com/2009/jan/28/writing-
pdfs-python-adding-images/>, I tried:
img = 'apple-logo.jpg'
self.drawInlineImage('C:\\'+img,20,20)
This works, while `'C:\apple-logo.jpg'` doesn't (in the latter, `\a` is read as
an escape character)! Nevertheless, I still don't know how to draw my barcode
without writing it to a file first! If someone manages to do it, I would really
appreciate it. Bye
from reportlab.pdfgen import canvas
from reportlab.lib.units import mm
class NumberedCanvas(canvas.Canvas):
def __init__(self, *args, **kwargs):
canvas.Canvas.__init__(self, *args, **kwargs)
self._saved_page_states = []
def showPage(self):
self._saved_page_states.append(dict(self.__dict__))
self._startPage()
def save(self):
"""add page info to each page (page x of y)"""
num_pages = len(self._saved_page_states)
for state in self._saved_page_states:
self.__dict__.update(state)
page_num = self._pageNumber
mybarcode = createBarcodeDrawing('QR', value= 'www.mousevspython.com - Page %s'%page_num)
self.drawInlineImage(mybarcode,20,20)
canvas.Canvas.showPage(self)
canvas.Canvas.save(self)
def main():
import sys
import urllib2
from cStringIO import StringIO
from reportlab.platypus import SimpleDocTemplate, Image, Paragraph, PageBreak
from reportlab.lib.styles import ParagraphStyle, getSampleStyleSheet
#This is needed because ReportLab accepts the StringIO as a file-like object,
#but doesn't accept urllib2.urlopen's return value
def get_image(url):
u = urllib2.urlopen(url)
return StringIO(u.read())
styles = getSampleStyleSheet()
styleN = ParagraphStyle(styles['Normal'])
# build doc
if len(sys.argv) > 1:
fn = sys.argv[1]
else:
fn = "filename.pdf"
doc = SimpleDocTemplate(open(fn, "wb"))
elements = [
Paragraph("Hello,", styleN),
Image(get_image("http://www.red-dove.com/images/rdclogo.gif")),
PageBreak(),
Paragraph("world!", styleN),
Image(get_image("http://www.python.org/images/python-logo.gif")),
]
doc.build(elements, canvasmaker=NumberedCanvas)
if __name__ == "__main__":
main()
Answer: You were nearly there. Just replace `self.drawInlineImage(mybarcode,20,20)`
with `mybarcode.drawOn(self, 20, 20)`. The barcode is not really an image, but
rather a barcode drawing object which you can export to an image.
As a side note: you are using the NumberedCanvas, which is kind of a hack to get
the total page count. As I see it you don't really need it, as you are just
using the current page number. If you don't need the total page count, you can
just define a canvas drawing function which draws the barcode on each page. For
this you would do something like this:
from reportlab.graphics.barcode import createBarcodeDrawing

def draw_barcode(canvas, doc):
canvas.saveState()
page_num = canvas._pageNumber
mybarcode = createBarcodeDrawing('QR', value= 'www.mousevspython.com - Page %s'%page_num)
mybarcode.drawOn(canvas, 20, 20)
canvas.restoreState()
[...]
# doc.build(elements, canvasmaker=NumberedCanvas)
doc.build(elements, onFirstPage=draw_barcode, onLaterPages=draw_barcode)
|
How do I programmatically split a multi-page tiff into single pages using Adobe Acrobat and it's exposed COM Objects?
Question: I want to programmatically (using Python) split a multi-page tiff into single
pages using Adobe Acrobat's exposed COM Objects.
I am writing this in order to answer my own question in order to put a viable
answer out there, as I did not find anyone doing this on SO or any other
forum.
Please, let me know what you think about my solution and feel free to leave
your way of doing this.
Answer: Here is one way:
from win32com.client import Dispatch

PAGE_DIV = '_pages'  # assumed suffix for the output name; pick any separator you like

def acrobat_split(f_path, f_name, f_ext):
    # Connect to Adobe Acrobat.
    avDoc = Dispatch("AcroExch.AVDoc")
    # Open the input file (as a pdf).
    src = f_path + '\\' + f_name + f_ext
    avDoc.Open(src, src)
    pdDoc = avDoc.GetPDDoc()
    page_ct = pdDoc.GetNumPages()
    # Set dst; Acrobat numbers the per-page TIFFs itself when exporting.
    dst = f_path + '\\' + f_name + PAGE_DIV + ".tif"
    jsObject = pdDoc.getJSObject()
    # Here you can save as many other types by using, for instance: "com.adobe.acrobat.xml"
    jsObject.saveAs(dst, "com.adobe.acrobat.tiff")
    pdDoc.Close()
    del pdDoc
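A hypothetical call (Acrobat must be installed and licensed for the `AcroExch`
COM objects to be registered):

    acrobat_split('C:\\scans', 'multipage', '.pdf')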
|
why does this vimscript print more than it should?
Question: Here is the code:
function! test()
python << endpy
import vim
buf = vim.current.buffer
(row1, col1) = buf.mark('<')
(row2, col2) = buf.mark('>')
for i in range(row1, row2+1):
print i
endpy
endfunction
If you do `so %` and then `ggVG`, `:call test()`, you will see it prints from
1 to 10 many times. Why?
Answer: First, custom Vim functions should start with a capital letter. So it
should be `function! Test()`.
Second, when you call a function on a range of lines, Vim actually calls that
function once for each line in the range. So if you select 10 lines, the
function will be called ten times.
To fix this just add the word range after the function declaration so that vim
knows that you will handle the range yourself.
function! Test() range
python << endpy
import vim
buf = vim.current.buffer
(row1, col1) = buf.mark('<')
(row2, col2) = buf.mark('>')
for i in range(row1, row2+1):
print i
endpy
endfunction
Take a look at `:h E124`, `:h a:firstline` and `:h function-range-example`
|
How do I install pyPDF2 module using windows?
Question: As a newbie... I am having difficulties installing pyPDF2 module. I have
downloaded. Where and how do I install (setup.py) so I can use module in
python interpreter?
Answer: To install from a setup.py file under Windows you can use the
command line:
1. hit windows key
2. type cmd
3. execute the command line (the black window)
4. type `cd C:\Users\User\Downloads\pyPDF2` to go into the directory where the `setup.py` is (this is where mine is after downloading). The path can be copied from the Explorer window.
5. type `dir`; now you should see the name setup.py in the listing of all contents
6. type `C:\python27\python.exe setup.py install` I use Python2.7 here. Use `C:\python33\python.exe setup.py install` for python 3.3 and so on. You can follow these instructions now if you wish: <http://docs.python.org/2/install/index.html>
Another way, which does not show anything when there are problems, is:
1. create a shortcut to `setup.py`
2. open the properties of the shortcut. There should be a path like this: `C:\Users\User\Downloads\pyPDF2\setup.py` (this is where my setup.py is)
3. you modify that path in the following way:
"C:\Users\User\Downloads\pyPDF2\setup.py" install
The `"` are important if you have white spaces in the path name
4. click OK to save the modifications to the setup.py - shortcut
5. double-click the setup.py - shortcut.
In all cases you may need to restart your python to be able to import the
module.
When you do this feel free to post your solution also with pictures for other
newbies looking for it.
|
java.io.IOException: Broken pipe on increasing number of mappers/reducers, a lot
Question: I am running MapReduce job on a hadoop cluster of 6 nodes with 4 map tasks and
10 reduce tasks configured.
Mappers/Reducers fail a lot as I increase the number of map/reduce tasks.

I encounter the following error:
**stderr logs**
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 143
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeReducer.reduce(PipeReducer.java:130)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
and this:
**syslog logs**
2014-03-01 15:11:30,118 WARN org.apache.hadoop.streaming.PipeMapRed: java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:260)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at java.io.DataOutputStream.flush(DataOutputStream.java:106)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:569)
at org.apache.hadoop.streaming.PipeReducer.reduce(PipeReducer.java:130)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
2014-03-01 15:11:30,118 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed failed!
2014-03-01 15:11:30,121 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2014-03-01 15:11:30,146 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2014-03-01 15:11:30,146 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName hduser for UID 1001 from the native implementation
2014-03-01 15:11:30,147 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 143
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeReducer.reduce(PipeReducer.java:130)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
2014-03-01 15:11:30,149 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201402281751_0042_r_000004_0
2014-03-01 15:11:31,252 INFO org.apache.hadoop.streaming.PipeMapRed: Records R/W=983976/1957694
Even the simplest program is creating this problem:
**mapper.py**
#!/usr/bin/env python
import sys
for line in sys.stdin:
if line:
print "%s\n%s"%(line, line)
**reducer.py**
#!/usr/bin/env python
import sys
for line in sys.stdin:
if line:
print "%s"%(line)
I am using the following command to run hadoop.
hadoop jar /usr/local/hadoop/contrib/streaming/hadoop-streaming-1.0.3.jar -D mapred.reduce.tasks=10 -file /home/hduser/code/K1D/code1/mapper2.py -mapper mapper2.py -file /home/hduser/code/K1D/code1/reducer2.py -reducer reducer2.py -input /user/hduser/data-out/part-00000 -output /user/hduser/data-out1 -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
Can you suggest anything?
Answer: I ran into something similar and realized that I didn't have the exact version
of python installed into one of my data nodes. I had
#!/usr/bin/env python
which I changed to
#!/usr/bin/env python2.7
|
How to get the length of a mp3 link in python?
Question: I need to get an mp3's length (from a URL) and convert the length to
seconds for a delay. Does anyone know how to do it?
Answer: Here is an example of retrieving the resource (mp3 file), and printing the
track length via [mutagen](https://code.google.com/p/mutagen/) library:
from urllib import urlretrieve
from mutagen.mp3 import MP3
url = 'http://example.com/foo.mp3'
filename, headers = urlretrieve(url)
audio = MP3(filename)
print audio.info.length
|
Python - Sentiment Analysis using Pointwise Mutual Information
Question:
from __future__ import division
import urllib
import json
from math import log
def hits(word1,word2=""):
query = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=%s"
if word2 == "":
results = urllib.urlopen(query % word1)
else:
results = urllib.urlopen(query % word1+" "+"AROUND(10)"+" "+word2)
json_res = json.loads(results.read())
google_hits=int(json_res['responseData']['cursor']['estimatedResultCount'])
return google_hits
def so(phrase):
num = hits(phrase,"excellent")
#print num
den = hits(phrase,"poor")
#print den
ratio = num / den
#print ratio
sop = log(ratio)
return sop
print so("ugly product")
I need this code to calculate the Point wise Mutual Information which can be
used to classify reviews as positive or negative. Basically I am using the
technique specified by Turney(2002):
<http://acl.ldc.upenn.edu/P/P02/P02-1053.pdf> as an example for an
unsupervised classification method for sentiment analysis.
As explained in the paper, the semantic orientation of a phrase is negative if
the phrase is more strongly associated with the word "poor" and positive if it
is more strongly associated with the word "excellent".
The code above calculates the SO of a phrase. I use Google to calculate the
number of hits and compute the SO (as AltaVista is no longer available).
The values computed are very erratic. They don't stick to a particular
pattern. For example SO("ugly product") turns out be 2.85462098541 while
SO("beautiful product") is 1.71395061117. While the former is expected to be
negative and the other positive.
Is there something wrong with the code? Is there an easier way to calculate SO
of a phrase (using PMI) with any Python library,say NLTK? I tried NLTK but was
not able to find any explicit method which computes the PMI.
Answer: Generally, calculating PMI is tricky since the formula will change depending
on the size of the ngram that you want to take into consideration:
Mathematically, for bigrams, you can simply consider:
log(p(a,b) / ( p(a) * p(b) ))
Programmatically, let's say you have calculated all the frequencies of the
unigrams and bigrams in your corpus, you do this:
import math

def pmi(word1, word2, unigram_freq, bigram_freq):
prob_word1 = unigram_freq[word1] / float(sum(unigram_freq.values()))
prob_word2 = unigram_freq[word2] / float(sum(unigram_freq.values()))
prob_word1_word2 = bigram_freq[" ".join([word1, word2])] / float(sum(bigram_freq.values()))
return math.log(prob_word1_word2/float(prob_word1*prob_word2),2)
This is a code snippet from an MWE library but it's in its pre-development
stage (<https://github.com/alvations/Terminator/blob/master/mwe.py>). But do
note that it's for parallel MWE extraction, so here's how you can "hack" it to
extract monolingual MWE:
$ wget https://dl.dropboxusercontent.com/u/45771499/mwe.py
$ printf "This is a foo bar sentence .\nI need multi-word expression from this text file.\nThe text file is messed up , I know you foo bar multi-word expression thingy .\n More foo bar is needed , so that the text file is populated with some sort of foo bar bigrams to extract the multi-word expression ." > src.txt
$ printf "" > trg.txt
$ python
>>> import codecs
>>> from mwe import load_ngramfreq, extract_mwe
>>> # Calculates the unigrams and bigrams counts.
>>> # More superfluously, "Training a bigram 'language model'."
>>> unigram, bigram, _ , _ = load_ngramfreq('src.txt','trg.txt')
>>> sent = "This is another foo bar sentence not in the training corpus ."
>>> for threshold in range(-2, 4):
... print threshold, [mwe for mwe in extract_mwe(sent.strip().lower(), unigram, bigram, threshold)]
[out]:
-2 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
-1 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
0 ['this is', 'foo bar', 'bar sentence']
1 ['this is', 'foo bar', 'bar sentence']
2 ['this is', 'foo bar', 'bar sentence']
3 ['foo bar', 'bar sentence']
4 []
For further details, I find this thesis a quick and easy introduction to MWE
extraction: "Extending the Log Likelihood Measure to Improve Collocation
Identification", see <http://goo.gl/5ebTJJ>
|
Redirection of stdout to console and file with verbosity turned on or off
Question: I have written some code just to practice verbosity in python. Verbosity is
embedded by means of the `ArgumentParser` module. However, I'd also like to
write the `stdout` to file also when verbosity is disabled:
#!/usr/bin/python
import sys
def printable1():
print "1"
def printable2():
print "2"
def printable3():
print "3"
def Main1():
printable1()
printable2()
def Main2():
printable2()
printable3()
class Logger(object):
def __init__(self):
self.terminal = sys.stdout
self.log = open("logfile2.log", "a")
def write(self, message):
self.terminal.write(message)
self.log.write(message)
if __name__ == "__main__":
from argparse import ArgumentParser
parser = ArgumentParser(description='PC Test',version="1.0")
parser.add_argument('--nopc',action='store_true', help='Do not perform test on the PC')
parser.add_argument('--pc', action='store_true', help='Do perform test on the PC')
# VERBOSITY
parser.add_argument('--vmode', dest='verbose', action='store_true',
help='Enable printing of status messages to stdout.')
args = parser.parse_args()
sys.stdout = Logger()
if args.verbose:
if args.pc:
Main1()
elif args.nopc:
Main2()
else:
Main1()
Main2()
This code writes the stdout to file only when verbosity is enabled with the
`--vmode` argument. Can you help me to find a solution?
Answer: Instead of reinventing the wheel, you could use the
[logging](http://docs.python.org/2/library/logging.html) module to handle
printing to terminal and writing into a file. The verbosity part is still up
to you:
import logging
import logging.handlers
log = logging.getLogger(__name__)
log.addHandler(logging.StreamHandler()) # Prints to console.
log.addHandler(logging.handlers.RotatingFileHandler('logfile2.log'))
log.setLevel(logging.INFO) # Set logging level here.
Since now on, you can use that regular `log` object to perform logging in your
script, and the entries will be both sent to console and to the file:
log.info('test')
log.warning('test')
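To get the behaviour you describe - always write to the file, but only print to
the console when `--vmode` is given - you can set a different level per handler.
A sketch, reusing `args` from your ArgumentParser:

    import logging
    import logging.handlers

    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)

    console = logging.StreamHandler()
    # the console only talks when --vmode is set
    console.setLevel(logging.DEBUG if args.verbose else logging.WARNING)
    log.addHandler(console)

    logfile = logging.handlers.RotatingFileHandler('logfile2.log')
    logfile.setLevel(logging.DEBUG)  # the file always gets everything
    log.addHandler(logfile)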
Also, I recommend using
[`dictConfig`](http://docs.python.org/2/library/logging.config.html) to handle
registering loggers and handlers - a more declarative alternative to the way
above (you'll still need to instantiate the `log` object as above).
|
Get all class names in a Python package
Question: I need to get the list of all classes in a Python package. First I get
all the filenames (this works fine; took it from stackoverflow):
from os import listdir, getcwd
from os.path import isfile, join
mypath = getcwd()
onlyfiles = [ f for f in listdir(mypath) if isfile(join(mypath,f)) ]
Then I inspect all files and it doesn't work properly:
for x in onlyfiles:
for name, obj in inspect.getmembers(x):
if inspect.isclass(obj):
print obj
The output is:
<type 'str'>
....
<type 'str'>
However, the following code works properly:
for name, obj in inspect.getmembers(example.py):
if inspect.isclass(obj):
print obj
Could you help me to figure out what the mistake is?
Answer: Try this, using the
[`inspect`](http://docs.python.org/3.2/library/inspect.html) module:
import sys
import inspect
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
This works within the module it is run in; `sys.modules[__name__]` is the live
module object, which is what `getmembers` needs (not a filename string).
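To apply this across the files you listed, import each one first and hand the
module object to `getmembers` - a sketch assuming the .py files are importable
from the current directory:

    import inspect
    import importlib
    from os import listdir
    from os.path import isfile, join, splitext

    mypath = '.'  # the package/script directory
    for f in listdir(mypath):
        name, ext = splitext(f)
        if isfile(join(mypath, f)) and ext == '.py':
            module = importlib.import_module(name)
            for cls_name, obj in inspect.getmembers(module, inspect.isclass):
                print obj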
|
send email to recipient from .txt in python 3.3
Question: I am trying to send an email with smtp in python 3.3 to a recipient listed in
a text file. The error I receive is:
`session.sendmail(sender, recipient, msg.as_string())`
`smtplib.SMTPRecipientsRefused: {}`
Where is the error in sendmail? Thanks!
Full code below:
#!/usr/bin/python
import os, re
import sys
import smtplib
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
mails = open('/path/emails.txt','r+')
mailList = mails.read()
mailList = [i.strip() for i in mails.readlines()]
directory = "/path/Desktop/"
SMTP_SERVER = 'smtp.gmail.com'
SMTP_PORT = 587
sender = 'Sender@gmail.com'
password = "Sender'sPassword"
recipient = [i.strip() for i in mails.readlines()]
subject = 'Python (-++-) Test'
message = 'Images attached.'
def main():
msg = MIMEMultipart()
msg['Subject'] = 'Python (-++-) Test'
msg['To'] = recipient
msg['From'] = sender
files = os.listdir(directory)
pngsearch = re.compile(".png", re.IGNORECASE)
files = filter(pngsearch.search, files)
for filename in files:
path = os.path.join(directory, filename)
if not os.path.isfile(path):
continue
img = MIMEImage(open(path, 'rb').read(), _subtype="png")
img.add_header('Content-Disposition', 'attachment', filename=filename)
msg.attach(img)
part = MIMEText('text', "plain")
part.set_payload(message)
msg.attach(part)
session = smtplib.SMTP(SMTP_SERVER, SMTP_PORT)
session.ehlo()
session.starttls()
session.ehlo
session.login(sender, password)
session.sendmail(sender, recipient, msg.as_string())
session.quit()
if __name__ == '__main__':
main()
Answer: You want to double-check your recipients code. It looks like you're trying to
consume the contents of the file more than once, which won't work--the file
objects should be understood as a stream, not a block of data, so once you've
done `f.read()` or `[i.strip() for i in mails.readlines()]` one time, that
stream is empty, so doing it a second time is going to produce an empty list.
You should check this yourself by printing `recipient`
then try this:
mails = open('/path/emails.txt','r+')
#mailList = mails.read()
#mailList = [i.strip() for i in mails.readlines()]
directory = "/path/Desktop/"
SMTP_SERVER = 'smtp.gmail.com'
SMTP_PORT = 587
sender = 'Sender@gmail.com'
password = "Sender'sPassword"
recipient = [i.strip() for i in mails.readlines()]
print(recipient)
subject = 'Python (-++-) Test'
message = 'Images attached.'
Now you should have a populated recipient list, and on to the next issue!
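One likely next issue: the `To` header of the message must be a string, not a
list (`session.sendmail` accepts a list of recipients, but the `email` headers
do not), so something along these lines:

    msg['To'] = ', '.join(recipient)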
|
Python connect to IRC function - must be integer
Question: I made a basic IRC bot that would join a certain channel and say a defined
phrase, as part of learning Python. However, are there changes in the latest
version of Python?
import socket
nick = 'TigerBot'
passdwd = '*****'
port = '6667'
net = 'irc.snoonet.org'
chan = '#WritingPrompts'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Was ist da IRC Socket?
sock.connect ((net, port)) # Open server connection
irc.recv (4096) # Buffer setup
irc.send('nick ' + nick + '\r\n') # What is your name?
irc.send('USER AffixBot AffixBot AffixBot :Affix IRC\r\n') #Send User Info to the server
irc.send('JOIN ' + chan + '\r\n') # Join the pre defined channel
irc.send('PRIVMSG ' + chan + ' :Hello.\r\n') #Send a Message to the channel
while True: #While Connection is Active
data = irc.recv (4096) #Make Data the Receive Buffer
if data.find('PING') != -1: # If PING
irc.send('PONG ' + data.split()[1] + '\r\n') # Then PONG
Line 11 is the problem - apparently, what I've got down as a string needs to
be an integer. How can I go about this?
Answer: Change your definition of port to be an integer.
port = 6667
not
port = '6667'
In general, if you have an existing string value that needs to be an int for a
specific call, you can cast by calling `int(str)`, which will return the value
of str as an integer or raise a `ValueError` if it can't cast.
|
How to group multiples classes defined in several files in one package or namespace?
Question: I have a C# background and am learning Python, and I am confused about
packages vs. namespaces. I want to define a number of classes in different .py
files that somehow belong to the same namespace (a C#-style namespace). How can
I do this in Python? I am using Python Tools for Visual Studio.
Answer: [Each `.py` file is a
"module."](http://docs.python.org/2/tutorial/modules.html#modules) You can
[define a package](http://docs.python.org/2/tutorial/modules.html#packages),
which is simply a collection of modules, rooted in some directory. The most
important thing to note is that each subdirectory should contain an
`__init__.py` file:
> The `__init__.py` files are required to make Python treat the directories as
> containing packages; this is done to prevent directories with a common name,
> such as `string`, from unintentionally hiding valid modules that occur later
> on the module search path. In the simplest case, `__init__.py` can just be
> an empty file, but it can also execute initialization code for the package
> or set the `__all__` variable, described later.
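As a minimal sketch, a C#-style "namespace" of classes spread over several files
could look like this (all names are illustrative):

    mypackage/
        __init__.py
        module_a.py      # defines ClassOne
        module_b.py      # defines ClassTwo

    # client code
    from mypackage.module_a import ClassOne
    from mypackage.module_b import ClassTwo

You can also re-export the classes from `mypackage/__init__.py` (e.g.
`from .module_a import ClassOne`) so callers can simply write
`from mypackage import ClassOne`.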
|
How to make web based python interactive shell
Question: How do sites like <https://www.pythonanywhere.com/try-ipython/> work?
They probably do several `exec` commands, or interfacing with ipython.
However, this can be extremely insecure if they didn't do any "preventive
action" (which they did). A mere (and evil) user can do something like
import shutil, os
and do something bad.
How is technically `web based python interactive shell` possible? and how
could one ensure that the interactive-shell doesn't impact anything bad to the
provider?
Answer: PythonAnywhere dev here. We use a combination of a filesystem jail, low-
privilege accounts, ulimit restrictions, and cgroups to sandbox people. Plus
some complicated iptables routing.
We're likely to move on to LXC or Docker in the future -- we chose the
specific combination that we use now based on what was ready for production
when we released the first version of our system back in 2012, and it if we
were starting from scratch today we'd do it differently.
That's not to say that our current system is bad -- it works really well. But
it does mean that it involves a lot of code that we could strip out if we used
the stuff that's available now, and simpler code is obviously better :-)
[edit] I should also add that you might find [this talk I did at
EuroPython](http://www.youtube.com/watch?v=U_qp8u_BH_E) interesting. It
doesn't touch on the security aspects of how the shell works, but it is
relevant to the subject of your question (how to make a web-based Python
interactive shell) because it covers a bunch of the stuff required for running
a shell in a browser and connecting it to a Python process running on a
server.
|
How to execute a git command in a specified path?
Question: I want to execute a git command in a Python program. I have tried
os.system("git-command"). As we know, git commands can be executed correctly
only in directories that contain a repository. I tried printing the current
path, and it is not what I hoped for; it does not contain a repository. Now my
question is how to execute a git command in a specified path.
Answer: Use the [`subprocess`
module](http://docs.python.org/2/library/subprocess.html); pick one of the
functions that suits your needs (based on what output you need). The functions
all take a `cwd` argument that lets you specify the directory to operate in:
import subprocess
output = subprocess.check_output(['git', 'status'], cwd='/path/to/git/workingdir')
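If your git is 1.8.5 or newer, git itself also accepts a `-C` option to set the
working directory, which achieves the same thing:

    import subprocess
    output = subprocess.check_output(['git', '-C', '/path/to/git/workingdir', 'status'])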
|
Counting chars in a file | python 3x
Question: I'm wondering how I can count, for example, all "s" characters and
print their number from a text file that I'm importing? I tried a few times to
do it on my own, but I'm still doing something wrong. If someone could give me
some tips I would really appreciate it :)
Answer: Open the file; the `"r"` means it is opened in read-only mode.
filetoread = open("./filename.txt", "r")
With this loop, you iterate over all the lines in the file and count the
number of times the character _chartosearch_ appears. Finally, the value is
printed (note the `str()` conversion - you cannot concatenate an `int` to a
string):

    total = 0
    chartosearch = 's'
    for line in filetoread:
        total += line.count(chartosearch)
    filetoread.close()
    print("Number of " + chartosearch + ": " + str(total))
|
Twython update_status_with_media error
Question: I have the following code:
photo = open(os.path.join("images", localFileName), 'rb')
tweetThis = "status"
twitter.update_status_with_media(status=tweetThis, media=photo)
Here is the traceback:
twitter.update_status_with_media(status=status, media=photo)
File "/usr/local/lib/python2.7/site-packages/twython/endpoints.py", line 107, in update_status_with_media
return self.post('statuses/update_with_media', params=params)
File "/usr/local/lib/python2.7/site-packages/twython/api.py", line 234, in post
return self.request(endpoint, 'POST', params=params, version=version)
File "/usr/local/lib/python2.7/site-packages/twython/api.py", line 224, in request
content = self._request(url, method=method, params=params, api_call=url)
File "/usr/local/lib/python2.7/site-packages/twython/api.py", line 194, in _request
retry_after=response.headers.get('retry-after'))
twython.exceptions.TwythonError: Twitter API returned a 403 (Forbidden), Status creation failed: Tweet creation failed.
I have tested `twitter.update_status(status='TEST')` which works correctly
meaning I have the correct credentials and permissions. What's wrong with the
media version?
Answer: Try changing `os.path.join("images", localFileName)` to just `/path/to/file`.
Not completely sure, but this is the only likely thing that is going wrong.
Also, don't set your variable for your status to be `status`, it shadows the
`twython` syntax: `twython.twitter.update_status_with_media(status, media)`
Here is an example:
from twython import Twython
from time import strftime
CONSUMER_KEY = '***'
CONSUMER_SECRET = '***'
ACCESS_KEY = '***'
ACCESS_SECRET = '***'
#load your twitter credentials
twyapi = Twython(CONSUMER_KEY,CONSUMER_SECRET,ACCESS_KEY,ACCESS_SECRET)
photo = open('/Users/aj8uppal/Desktop/images.jpg', 'rb')
twyapi.update_status_with_media(status='I love python!', media=photo)
|
Import a sequence of .svg files into FontForge as glyphs and output a font file
Question: I want to create a font with a large volume of glyphs. Think Japanese kanji,
in the thousands. So there will definitely be some scripting / batch
processing required. Luckily FontForge supports python scripting! Unluckily I
haven't been able to get it working. [sadface]
Firstly, thanks to the user [Hoff](http://stackoverflow.com/users/102181/hoff)
for posting code [here](http://stackoverflow.com/questions/12713444/inverted-
glyph-bitmap-svg-via-autotrace-glyph-via-fontforge) that answered a big part
of my question. But upon running his script I encounter problems which raise
more questions:
Failed to find NameList: AGL For New Fonts
Warning: Font contained no glyphs
_Updates:_
* The "font contained no glyphs" error is apparently a bug in FontForge that occurs when the font contains one or less glyph. Adding a second glyph 'B' resolved this.
* I found the same syntax can be used whether saving .ttf .sfd .otf etc.
* The NameList failure actually doesn't prevent the font file from being written. I was happy to discover this, but still don't understand how to provide the NameList it wants.
Here is Hoff's code:
import fontforge
font = fontforge.open('blank.sfd')
glyph = font.createMappedChar('A')
glyph.importOutlines('sourceimg.svg')
font.generate('testfont.ttf')
After five hours of struggling yesterday with building FontForge (a confusing
process on a Mac), I appear to have it up and running properly. I had at first
installed a pre-built version from a .dmg only to find it lacked python
support. But since Hoff seemed not to encounter the same error I did, I'm not
ruling out a build issue.
Either way, I don't understand the error involving AGL. What is AGL? [I looked
it up](http://fontforge.org/encodingmenu.html#namelist): "Adobe Glyph List - a
standard glyph naming convention". Sounds like FontForge tried to map Unicode
values to glyph names and couldn't.
So, why the AGL NameList problem? Thanks in advance for any help.
Answer: Try to rebuild your Fonforge. Because the code should work. I tested it and it
runs fine.
I successfully installed Fontforge with Python extension with
[Homebrew](http://brew.sh/). This is the info:
>
> allcaps$ brew info fontforge
> fontforge: stable 20120731, HEAD
> http://fontforge.org/
> /usr/local/Cellar/fontforge/20120731 (377 files, 16M) *
> Built from source with: --with-x
> From:
> https://github.com/Homebrew/homebrew/commits/master/Library/Formula/fontforge.rb
> ==> Dependencies
> Required: gettext ✘, fontconfig ✔
> Recommended: jpeg ✔, libtiff ✔
> Optional: cairo ✔, pango ✘, libspiro ✘, czmq ✘
> ==> Options
> --with-cairo
> Build with cairo support
> --with-czmq
> Build with czmq support
> --with-gif
> Build with GIF support
> --with-libspiro
> Build with libspiro support
> --with-pango
> Build with pango support
> --with-x
> Build with X11 support, including FontForge.app
> --without-jpeg
> Build without jpeg support
> --without-libpng
> Build without libpng support
> --without-libtiff
> Build without libtiff support
> --without-python
> Build without python support
> --HEAD
> install HEAD version
> ==> Caveats
> Set PYTHONPATH if you need Python to find the installed site-packages:
> export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
>
> .app bundles were installed.
> Run `brew linkapps` to symlink these to /Applications.
>
Run `brew install fontforge` (with all the flags you need), set `PYTHONPATH` as
shown in the caveats above, and run `brew linkapps`.
## UPDATE
Start with a empty font so the font isn't the problem:
import fontforge
font = fontforge.font() # create a new font
To include a glyphlist (shouldn't be necessary) Download:
<http://partners.adobe.com/public/developer/en/opentype/glyphlist.txt> and
then:
import fontforge
fontforge.loadNamelist('glyphlist.txt') # load a name list
...
Create the glyph by its code point - the signature is `createChar(uni[, name])`. 'A' is 65, so:
char = font.createChar(65)
Glyphs and their code points:
>>> for c in u'ABC 賢治': print ord(c),
65 66 67 32 36066 27835
The Unicode Consortium defines the Unicode standard. The 'CJK Unified
Ideographs' live in 'Basic Multilingual Plane (BMP)'.
Glyphs without a Unicode point can be referenced within a font by name; they are
useful for OpenType features or as building blocks to compose new glyphs. You
can create them like this:
font.createChar(-1, 'some_name')
## UPDATE 2
You should name all glyphs that occur in the [Adobe Glyph
List](http://partners.adobe.com/public/developer/en/opentype/glyphlist.txt) by
their AGL glyph name. The rest of the glyphs should be named `uniXXXX` where
`XXXX` is the Unicode index. During development you can use any human readable
name. So use your own naming and replace it when you generate the font for
shipping. [See Typophile](http://typophile.com/node/10026).
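Putting the pieces together for the original batch goal, a sketch that assumes
the SVGs are named after their code points (e.g. `uni4E00.svg` - that naming
convention is my assumption, not part of the question):

    import glob
    import os
    import fontforge

    font = fontforge.font()
    for path in glob.glob('svgs/uni*.svg'):
        name = os.path.splitext(os.path.basename(path))[0]  # e.g. 'uni4E00'
        codepoint = int(name[3:], 16)                       # hex part after 'uni'
        glyph = font.createChar(codepoint)
        glyph.importOutlines(path)
        glyph.width = 1000                                  # default advance width
    font.generate('batchfont.ttf')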
|
Read from CSV file and make plot
Question: I have a little problem I hope someone could help me with. I'm not the best at
python.
I have a "CSV" file that I have to manipulate. I have 3 questions I hope you
could help with.
**1: Print the first two lines**
I think I've done the first one already - I printed the first two lines.
import csv
from pprint import pprint
data = open('iphonevsandroid.csv')
pprint (data.readlines(2))
data.close()
I getting data like this:
['week,iphone,android\n',
'2004-01-04 - 2004-01-10,0,0\n',
'2004-01-11 - 2004-01-17,0,0\n',
'2004-01-18 - 2004-01-24,0,0\n',
'2004-01-25 - 2004-01-31,0,0\n',
'2004-02-01 - 2004-02-07,0,0\n',
'2004-02-08 - 2004-02-14,0,0\n',
'2004-02-15 - 2004-02-21,0,0\n',
'2004-02-22 - 2004-02-28,0,0\n',
'2004-02-29 - 2004-03-06,0,0\n',
'2004-03-07 - 2004-03-13,0,0\n',
'2004-03-14 - 2004-03-20,0,0\n',
**2: Parse the first field to convert it to a single date object (hint: use
datetime.strptime). You can choose any of the two dates.**
import csv
import datetime
data = open('iphonevsandroid.csv')
reader1 = csv.reader(data)
for row in reader1:
print row[0]
This will print the first field of each row, but how do I get one of the dates?
I have to plot it later.
Answer: I would use the string.split method.
In your case, you can split by ' - '
So if:
row = ['week,iphone,android\n',
'2004-01-04 - 2004-01-10,0,0\n',
'2004-01-11 - 2004-01-17,0,0\n',
'2004-01-18 - 2004-01-24,0,0\n',
'2004-01-25 - 2004-01-31,0,0\n',
'2004-02-01 - 2004-02-07,0,0\n',
'2004-02-08 - 2004-02-14,0,0\n',
'2004-02-15 - 2004-02-21,0,0\n',
'2004-02-22 - 2004-02-28,0,0\n',
'2004-02-29 - 2004-03-06,0,0\n',
'2004-03-07 - 2004-03-13,0,0\n',
'2004-03-14 - 2004-03-20,0,0\n']
We can take the first time value as:
first_date_row = row[1]
Output: '2004-01-04 - 2004-01-10,0,0\n'
You see that the first date is separated from the second one by ' - '.
so:
first_date = first_date_row.split(' - ')[0]
Output: '2004-01-04'
How do we get the second date? We can follow a similar procedure to the one we
used for the first date, but take the first comma after the second date as the
separator.
Thus:
second_date = first_date_row.split(' - ')[1].split(',')[0]
Output: '2004-01-10'
Please note, these dates are just text for Python; you will need to transform
them into date objects, e.g. with `datetime.strptime` (after
`from datetime import datetime`):
first_date_formated = datetime.strptime(first_date, '%Y-%m-%d')
second_date_formated = datetime.strptime(second_date, '%Y-%m-%d')
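For the plotting part of the question, a minimal sketch with matplotlib (column
names taken from the sample data above; this assumes the file parses cleanly):

    import csv
    from datetime import datetime
    import matplotlib.pyplot as plt

    dates, iphone, android = [], [], []
    with open('iphonevsandroid.csv') as f:
        reader = csv.reader(f)
        next(reader)                    # skip the "week,iphone,android" header
        for row in reader:
            first_date = row[0].split(' - ')[0]
            dates.append(datetime.strptime(first_date, '%Y-%m-%d'))
            iphone.append(int(row[1]))
            android.append(int(row[2]))

    plt.plot(dates, iphone, label='iphone')
    plt.plot(dates, android, label='android')
    plt.legend()
    plt.show()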
|
python scrapy how to code the parameter instead of using cmd: use Custom code in Scrapy
Question: I am using Scrapy 0.20 with Python 2.7.
I used to do this in cmd:
    -s JOBDIR=crawls/somespider-1
to handle the duplicated items. **Note: I already made the changes in settings.**
I don't want to use that in cmd.
**Is there any way I can type it in code inside my spider?**
thanks
Answer: It's easy. Use `DropItem` in pipelines.py to drop duplicate items, and
you can use a custom command to set the parameter inside the program.
[Here is an example of custom commands in
Scrapy](http://doc.scrapy.org/en/latest/topics/commands.html)
With a custom command (say `scrapy mycommand`) you can effectively run
`-s JOBDIR=crawls/somespider-1` without typing it every time.
Example:
Create a directory `commands` next to your `scrapy.cfg` file. Inside the
directory, create a file `mycommand.py`:
from scrapy.command import ScrapyCommand
from scrapy.cmdline import execute
class Command(ScrapyCommand):
requires_project = True
def short_desc(self):
return "This is your custom command"
def run(self, args, opts):
args.append('scrapy')
args.append('crawl')
        args.append('spider')  # add whatever your syntax needs; in my case I want "scrapy crawl spider" in cmd
        args.append('-s')                          # pass the setting here...
        args.append('JOBDIR=crawls/somespider-1')  # ...so it no longer has to be typed in cmd
        execute(args)  # send a list as parameter, with the command as a single element of it
Now go to the cmd line and type `scrapy mycommand`. Then your magic is ready :-)
|
Python: Naming with Acronyms
Question: In Python code, what is the canonical way of dealing with well-known acronyms
when naming classes, methods and variables?
Consider, for example, a class dealing with RSS feeds. Would that rather be
this:
class RSSClassOne:
def RSSMethod(self):
self.RSS_variable = True
class ClassRSSTwo:
def MethodRSS(self):
self.variable_RSS = True
or this:
class RssClassOne:
def rssMethod(self):
self.rss_variable = True
class ClassRssTwo:
def MethodRss(self):
self.variable_rss = True
I.e. what is more important, keeping the acronym capitalization or the
recommendations of PEP 008?
**Edit:** from the answers, I conclude that this would be the way to go:
class RSSClassOne:
def rss_method(self):
self.rss_variable = True
class ClassRSSTwo:
def method_rss(self):
self.variable_rss = True
Answer: Well, it turns out that PEP 8 already has this topic covered
[here](http://legacy.python.org/dev/peps/pep-0008/#descriptive-naming-styles):
> Note: When using abbreviations in CapWords, capitalize all the letters of
> the abbreviation. Thus `HTTPServerError` is better than `HttpServerError`.
In other words, the Python convention for names containing acronyms is:
1. Keep acronyms uppercase in class names (usually, the only part of Python that uses CapWords).
2. Everywhere else, make them lowercase in order to comply with the other [naming conventions](http://legacy.python.org/dev/peps/pep-0008/#naming-conventions).
Below is a demonstration with the [`ipaddress`
module](http://docs.python.org/3/library/ipaddress.html):
>>> import ipaddress # IP is lowercase because this is a module
>>> ipaddress.IPv4Address # IP is uppercase because this is a class
<class 'ipaddress.IPv4Address'>
>>> ipaddress.ip_network # IP is lowercase because this is a function
<function ip_network at 0x0242C468>
>>>
|
How can I autospec mock attributes that are None by default in python 3?
Question: Consider this code:
import unittest
from unittest.mock import patch
class Foo(object):
def __init__(self, bar=None):
self.bar = bar
def methodA(self):
print("In methodA")
def methodB(self):
print("In methodB")
def my_func(bar):
foo = Foo(bar)
if foo.bar:
foo.methodA()
foo.methodB()
class MyTestCase(unittest.TestCase):
def test_my_func(self):
bar = None
with patch("__main__.Foo", autospec=True) as foo:
my_func(bar)
foo.methodB.assert_called_once_with()
if __name__ == '__main__':
unittest.main()
The idea is fairly simple. I have a function whose behavior switches on the
presence or absence of an instance attribute. I'm trying to write a unit test
that verifies only certain `Foo` methods are executed, depending on the
attribute.
Based on the mock library's
[patch](http://docs.python.org/3.3/library/unittest.mock.html#unittest.mock.patch)
and [autospeccing](http://docs.python.org/3.3/library/unittest.mock.html#auto-
speccing) docs, I thought setting `autospec=True` in the `patch()` context
manager would suffice. It doesn't. The generated `Mock()` correctly includes
mocks of `methodA` and `methodB`, but the test fails with this error:
======================================================================
ERROR: test_my_func (__main__.MyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "so.py", line 28, in test_my_func
my_func(bar)
File "trash.py", line 18, in my_func
if foo.bar:
File "/.../python3.3/unittest/mock.py", line 549, in __getattr__
raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'bar'
I'm sure I'm missing something obvious, but I can't seem to figure out what.
How can I unittest `my_func()`?
Answer: Part of my problem was a misunderstanding of patch's behavior. The context
manager I set up is returning an instance of the `__main__.Foo` mock, not the
_same instance_ used in `my_func()`. To put it another way, even when I was
able to mock `Foo` properly, without autospec, I couldn't execute
`assert_called_once_with()` over any of its methods: It wasn't the same
object.
One solution is to mock the method itself. This works:
def test_my_func(self):
bar = None
with patch('__main__.Foo.methodB') as mock_methodB:
my_func(bar)
mock_methodB.assert_called_once_with()
Another method would be modifying `my_func()` to return foo:
def my_func(bar):
foo = Foo(bar)
if foo.bar:
foo.methodA()
foo.methodB()
return foo
Because the function returns the mock under test, the following should work:
def test_my_func(self):
    bar = None
    with patch('__main__.Foo', spec=True) as MockFoo:
        MockFoo.return_value.bar = None  # the spec can't see attrs set in __init__
        foo = my_func(bar)
    assert foo.methodB.called
    assert not foo.methodA.called
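For completeness, the same `return_value` trick also rescues the original
`autospec=True` approach without modifying `my_func()`. A sketch, assuming the
`Foo` and `my_func` definitions from the question; `bar` is set by hand
because attributes created in `__init__()` are invisible to the spec:

import unittest
from unittest.mock import patch

class MyTestCase(unittest.TestCase):
    def test_my_func(self):
        bar = None
        with patch("__main__.Foo", autospec=True) as MockFoo:
            instance = MockFoo.return_value   # what Foo(bar) returns inside my_func()
            instance.bar = None               # supply the instance attribute by hand
            my_func(bar)
            instance.methodB.assert_called_once_with()
            self.assertFalse(instance.methodA.called)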
|
Python ElementTree: find element by its child's text using XPath
Question: I'm trying to locate an element that has certain text value in one of its
child. For example,
<peers>
    <peer>
        <offset>1</offset>
        <tag>TRUE</tag>
    </peer>
    <peer>
        <offset>2</offset>
        <tag>FALSE</tag>
    </peer>
</peers>
from this XML document I would like to directly locate `tag` in a `peer`
element whose `offset` value is 1.
So for that purpose I have an XPath expression as follows:
./peers/peer[offset='1']/tag
however, using this expression with ElementTree's `Element.find()` method fails
and gives `None` rather than the `tag` element I'm after:
from xml.etree.ElementTree import fromstring
doc = fromstring("<peers><peer><offset>1</offset><tag>TRUE</tag></peer><peer><offset>2</offset><tag>FALSE</tag></peer></peers>")
tag = doc.find("./peers/peer[offset='1']/tag")
print tag
=> None
I'm inclined to believe that either my XPath expression above is wrong, or
this is due to ElementTree supporting only a subset of XPath, per its
documentation. Looking for help. Thank you.
Answer: Using `lxml.etree` directly (the same idea _should_ carry over to
`ElementTree`'s `find()`/`findall()`, within its limited XPath subset), you
can achieve the result like this:
doc = lxml.etree.fromstring(...)
tag_elements = doc.xpath("/peers/peer/offset[text()='1']/../tag")
`tag_elements` will be the **list** of `<tag>` elements belonging to `<peer>`
elements whose `<offset>` child contains the text 1.
Given this input (a second matching `<peer>` element is added to emphasize
that `tag_elements` is a list):
<peers>
    <peer>
        <offset>1</offset>
        <tag>TRUE</tag>
    </peer>
    <peer>
        <offset>1</offset>
        <tag>OTHER</tag>
    </peer>
    <peer>
        <offset>2</offset>
        <tag>FALSE</tag>
    </peer>
</peers>
`tag_elements` will contain two elements:
for tag in tag_elements:
    print tag.text
-> TRUE
-> OTHER
**UPDATE** :
`doc.xpath("/peers/peer[offset=1]/tag")` also works fine, but
`doc.xpath("./peers/peer[offset=1]/tag")` does not: `doc` _is_ the `<peers>`
element, so the relative path looks for a `<peers>` child nested inside
`<peers>`, while the absolute path is evaluated from the document root.
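The same pitfall explains the stdlib failure in the question:
`xml.etree.ElementTree` in Python 2.7+ does support the `[tag='text']`
predicate, but `fromstring()` already returns the `<peers>` element, so the
path must not repeat it. A minimal sketch against the question's document:

from xml.etree.ElementTree import fromstring

doc = fromstring("<peers><peer><offset>1</offset><tag>TRUE</tag></peer>"
                 "<peer><offset>2</offset><tag>FALSE</tag></peer></peers>")
# doc *is* the <peers> element, so drop the leading "peers" step
tag = doc.find("./peer[offset='1']/tag")
print tag.text
=> TRUE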
|
Python - Share Variable Between Function and Its Decorator
Question: (Python 2.7) Since decorators cannot share local variables with the function
they decorate, how can I make `object_list` available to the decorator? I have
a few functions that will use the `raw_turn_over_mailer()` decorator, and I
would like to keep `object_list` local to each decorated function if possible.
def raw_turn_over_mailer(function):
    @wraps(function)
    def wrapper(requests):
        original_function = function(requests)
        if object_list:
            ....
        return original_function
    return wrapper

@raw_turn_over_mailer
def one(requests):
    object_list = [x for x in requests
                   if x.account_type.name == 'AccountType1']

@raw_turn_over_mailer
def two(requests):
    object_list = [x for x in requests
                   if x.account_type.name == 'AccountType2']

@periodic_task(run_every=crontab(hour="*", minute="*", day_of_week="*"))
def turn_over_mailer():
    HOURS = 1000
    requests = Request.objects.filter(completed=False, date_due__gte=hours_ahead(0), date_due__lte=hours_ahead(HOURS))
    if requests:
        one(requests)
        two(requests)
Answer: I can't actually run this, but I think it does what you say you want. It
creates a `wrapper()` function that calls the original (which now simply
returns an object list) and then post-processes the result; the wrapper itself
returns nothing:
from functools import wraps

def raw_turn_over_mailer(function):
    @wraps(function)
    def wrapper(requests):
        object_list = function(requests)  # call original
        if object_list:
            pass  # .... (post-processing elided in the original)
    return wrapper
@raw_turn_over_mailer
def one(requests):
    return [x for x in requests if x.account_type.name == 'AccountType1']

@raw_turn_over_mailer
def two(requests):
    return [x for x in requests if x.account_type.name == 'AccountType2']
This seems like a convoluted way to post-process the results of calling a
function. You could instead skip the decorator, call the post-processing
function directly, and pass it the function to call to obtain the object
list; see the sketch below.
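A minimal sketch of that plainer alternative; `process_requests` and its body
are hypothetical names for illustration, not from the original code:

def one(requests):
    return [x for x in requests if x.account_type.name == 'AccountType1']

def process_requests(get_object_list, requests):
    # call the passed-in function, then post-process whatever it returns
    object_list = get_object_list(requests)
    if object_list:
        pass  # send the turn-over mail for object_list here

# inside the periodic task:
# process_requests(one, requests)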
|