Determining the current state of a URL from a list of states containing required and optional items in Python/Flask Question: I'd like to specify a list of possible states a URL can be in by declaring required and optional query string parameters. Here's some pseudo code to maybe illustrate what I mean...

    sm.add('state1').args.required('name').optional('phone')
    sm.add('state2').args.required('name', 'address').optional('phone')

I'd then like to figure out which state is the closest match to Flask's `request.args` object with an API such as...

    sm.best_match(request.args)

I'm assuming sets will be involved, but I'm rather clueless. Answer: In pseudo-Python code:

    from bisect import bisect_left, bisect_right, insort
    from collections import namedtuple

    # Not a finished product, just some ideas of the data structures
    # you would need to get this working.

    # States is a (name, set(required fields), set(optional fields)) tuple
    states = namedtuple('States', 'name required optional')
    heap = []

    def add_to_tree(state, heap=heap):
        total_required = len(state.required)
        total_optional = len(state.optional)
        total = total_required + total_optional
        # We store states on the heap ordered by how many
        # required / optional arguments they have
        insort(heap, (total_required, total_optional, state))

    INF = float('inf')
    MIN_STATE = states('', set(), set())

    # Need custom comparators here for string and set
    # that always compare greater than what they are compared against
    def biggest(type):
        # Left as an exercise for the reader.

    MAX_STATE = states(biggest(str), biggest(set), biggest(set))

    def best_match(args, heap=heap):
        total_args = len(args)
        first_match = bisect_left(heap, (0, total_args, MIN_STATE))
        # note: bisect's keyword argument is lo, not low
        last_match = bisect_right(heap, (total_args, INF, MAX_STATE), lo=first_match)
        # islice does not support a negative step, so slice and reverse instead
        potential_matches = reversed(heap[first_match:last_match])
        for _r, _o, state in potential_matches:
            if is_match(args, state):
                return state

    def is_match(args, state):
        # TODO: implement this
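For a more concrete starting point, here is a minimal, hedged sketch of the set-based matcher the question describes; the `StateMatcher` class and its scoring rule are illustrative assumptions, not part of any existing library:

    # A minimal sketch, assuming a state matches when all its required
    # parameters are present and no parameter falls outside required/optional.
    class StateMatcher:
        def __init__(self):
            self.states = []

        def add(self, name, required, optional=()):
            self.states.append((name, frozenset(required), frozenset(optional)))

        def best_match(self, args):
            keys = set(args)
            best, best_score = None, -1
            for name, required, optional in self.states:
                if not required <= keys:
                    continue  # a required parameter is missing
                if keys - required - optional:
                    continue  # an unexpected parameter is present
                score = len(required) + len(keys & optional)
                if score > best_score:
                    best, best_score = name, score
            return best

    # Usage, mirroring the pseudo code in the question:
    # sm = StateMatcher()
    # sm.add('state1', ['name'], ['phone'])
    # sm.add('state2', ['name', 'address'], ['phone'])
    # sm.best_match(request.args)  # -> 'state2' for ?name=x&address=y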
How do I get POST data from a free text field? Question: Doing a modified version of the polls tutorial. Comments work with the database when I go in the python manage.py shell but I can't get it to actually read the post data. Any time I post a comment, the page re-renders but no comment in the database. Here are my models for an individual Entry and a Comment import datetime from django.db import models from django.utils import timezone from django.forms import ModelForm class Entry(models.Model): title = models.CharField(max_length=200) body = models.TextField() pub_date = models.DateTimeField('date published') class Comment(models.Model): entry = models.ForeignKey(Entry) comment = models.TextField() comment_date = models.DateTimeField() In the Python shell, I'm able to create comments (that show up in the admin) perfectly. >>> from blog.models import Entry, Comment >>> e = Entry.objects.get(pk=1) >>> from django.utils import timezone >>> e.comment_set.create(comment="isn't it pretty to think so?", comment_date=timezone.now()) <Comment: isn't it pretty to think so?> In the detail.html view of each blog entry, a user can add a comment. <h1>{{ entry.title }}</h1> <p>{{ entry.body }}</p> <p>{{ entry.tags_set.all }}</p> <form action="{% url 'blog:comment' entry.id %}" method="post"> {% csrf_token %} <textarea name="comment101" style="width:300px; height: 70px; maxlength="300"; display:none;"> </textarea></br> <input type="submit" name="comment101" value="Add comment" /> </form> Views for detail and comment: from django.utils import timezone from django.shortcuts import render, get_object_or_404 from django.http import HttpResponse, HttpResponseRedirect from django.core.urlresolvers import reverse from blog.models import Entry, Tags, Comment def detail(request, entry_id): entry = get_object_or_404(Entry, pk=entry_id) return render(request, 'entries/detail.html', {'entry': entry}) def comment(request, entry_id): p = get_object_or_404(Entry, pk=entry_id) add_comment = request.POST['comment101'] #get input name comment from POST data p.comment_set.create(comment="add_comment", comment_date=timezone.now()) return HttpResponseRedirect(reverse('blog:detail', args=(p.id))) I've exhausted all I know. I tried adding name='comment101' every input/form in detail.html and my comment view replicates exactly what I did in the Python shell. Lastly, if anyone could point me to something to debug code involving POST data (for Mac), that'd be helpful. Thank you. Answer: I would use c = Comment(comment=add_comment, comment_date=timezone.now(), entry=p) c.save() That might be of some use. Also if you want to debug code it is best to place import pdb; pdb.set_trace() which gives you interactive console wherever you want it. In case you want to use something better use ipdb (pip install ipdb) import ipdb; ipdb.set_trace()
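One detail worth spelling out from the question's view code: the comment text is passed as the string literal `"add_comment"` rather than the variable, and `args=(p.id)` is not a tuple. A corrected version of the view, combining that with the answer's suggestion, might look like:

    def comment(request, entry_id):
        p = get_object_or_404(Entry, pk=entry_id)
        add_comment = request.POST['comment101']
        # Pass the variable, not the literal string "add_comment"
        p.comment_set.create(comment=add_comment, comment_date=timezone.now())
        # args must be a tuple: (p.id,) -- (p.id) is just a parenthesized int
        return HttpResponseRedirect(reverse('blog:detail', args=(p.id,)))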
Counting using DictReader Question: Still new to Python, this is how far I've managed to get:

    import csv
    import sys
    import os.path

    #VARIABLES
    reader = None
    col_header = None
    total_rows = None
    rows = None

    #METHODS
    def read_csv(csv_file): #Read and display CSV file w/ HEADERS
        global reader, col_header, total_rows, rows
        #Open assign dictionaries to reader
        with open(csv_file, newline='') as csv_file:
            #restval = blank columns = - /// restkey = extra columns +
            reader = csv.DictReader(csv_file, fieldnames=None, restkey='+', restval='-', delimiter=',', quotechar='"')
            try:
                col_header = reader.fieldnames
                print('The headers: ' + str(reader.fieldnames))
                for row in reader:
                    print(row)
                #Calculate number of rows
                rows = list(reader)
                total_rows = len(rows)
            except csv.Error as e:
                sys.exit('file {}, line {}: {}'.format(csv_file, reader.line_num, e))

    def calc_total_rows():
        print('\nTotal number of rows: ' + str(total_rows))

My issue is that, when I attempt to count the number of rows, it comes up as 0 (impossible because csv_file contains 4 rows, and they print on screen). I've placed the '#Calculate number of rows' code above my print row loop and it works, however the rows then don't print. It's as if each task is stealing the dictionary from one another? How do I solve this? Answer: The problem is that the `reader` object behaves like a file as it's iterating through the CSV. Firstly you iterate through in the `for` loop, and print each row. Then you try to create a list from what's left - which is now empty as you've iterated through the whole file. The length of this empty list is 0. Try this instead:

    rows = list(reader)
    for row in rows:
        print(row)
    total_rows = len(rows)
How to calculate sine and cosine without importing math? Question: I started programming in python not too long ago and I am having trouble with a part of a program. The program will ask for input from the user and he can input: A, B, C, M, or Q. I have completed the A, M, and Q part but I can't figure out how to do the parts for B (calculate the sine of the number you want) and C (calculate the cosine). All the information I was given was: > The power series approximation for the sine of X can be expressed as: > sine(X) = X – (X^3/3!) + (X^5/5!) – (X^7/7!) + (X^9/9!) .... Note that an > individual term in that power series can be expressed as: (-1)^k * X^(2k+1) / > (2k+1)! where k = 0, 1, 2, 3, …. Oooh, and (but for this a while loop should do right?): When computing the sine of X or the cosine of X, the program will expand the power series until the absolute value of the next term in the series is less than 1.0e-8 (the specified epsilon). That term will not be included in the approximation. And I can't use import math. Can anyone give me an idea of how I can do this? I sincerely have no idea of where to even start hahaha. Thanks in advance! ***Hey guys, I've been trying to do this for the last 3 hours. I'm really new to programming and some of yours answers made it a bit more understandable for me but my program is not working, I really don't know how to do this. And yes, I went to speak with a tutor today but he didn't know either. So yeah, I guess I'll just wait until I get the program graded by my teacher and then I can ask him how it was supposed to be done. Thank you for all the answers though, I appreciate them! :) Answer: It seems that you aren't supposed to import `math` because you are supposed to write your own function to compute sine. You are supposed to use the power series approximation. I suggest you start by writing a factorial function, then write a loop that uses this factorial function to compute the power series. If you still can't figure it out, I suggest you talk to your teacher or a teacher's assistant.
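To make the hint concrete, here is a minimal sketch of the power-series loop. Instead of a separate factorial function it updates each term incrementally, using term(k+1) = term(k) * -X^2 / ((2k)(2k+1)), which follows directly from the series given in the question and avoids recomputing factorials; treat it as one possible shape for the assignment, not the required one:

    def sine(x, epsilon=1.0e-8):
        term = x       # the k = 0 term of the series is X itself
        total = 0.0
        k = 0
        while abs(term) >= epsilon:   # stop before adding a term below epsilon
            total += term
            k += 1
            # (-1)**k * x**(2*k+1) / factorial(2*k+1), derived from the last term:
            term *= -x * x / ((2 * k) * (2 * k + 1))
        return total

    # cosine is the same loop with term = 1.0 as the k = 0 starting term and
    # the update term *= -x * x / ((2 * k - 1) * (2 * k))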
Search a webpage and grab certain data in Python 3 Question: Im trying to get the text from a webpage with Python 3.3 and then search through that text for certain strings. When I find a matching string I need to save the following text. For example I take this page: <http://gatherer.wizards.com/Pages/Card/Details.aspx?name=Dark%20Prophecy> and I need to save the text after each category (card text, rarity, etc) in the card info. Currently Im using beautiful Soup but get_text causes a UnicodeEncodeError and doesnt return an iterable object. Here is the relevant code: urlStr = urllib.request.urlopen( 'http://gatherer.wizards.com/Pages/Card/Details.aspx?name=' + cardName ).read() htmlRaw = BeautifulSoup(urlStr) htmlText = htmlRaw.get_text for line in htmlText: line = line.strip() if "Converted Mana Cost:" in line: cmc = line.next() message += "*Converted Mana Cost: " + cmc +"* \n\n" elif "Types:" in line: type = line.next() message += "*Type: " + type +"* \n\n" elif "Card Text:" in line: rulesText = line.next() message += "*Rules Text: " + rulesText +"* \n\n" elif "Flavor Text:" in line: flavor = line.next() message += "*Flavor Text: " + flavor +"* \n\n" elif "Rarity:" in line: rarity == line.next() message += "*Rarity: " + rarity +"* \n\n" Answer: I am not familiar with BeautifulSoup any more but I ran this code - not to give you a complete answer but to point you in the right direction import urllib from lxml import html mypage = urllib.urlopen('http://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=264') dir(mypage) ['__doc__', '__init__', '__iter__', '__module__', '__repr__', 'close', 'code', 'fileno', 'fp', 'getcode', 'geturl', 'headers', 'info', 'next', 'read', 'readline', 'readlines', 'url'] page = mypage.readlines() len(page) 526 page[0] '<?xml version="1.0" encoding="utf-8" ?>\r\n' string = ''.join([apage for apage in page]) tree = html.fromstring(string) elements = [e for e in tree.iter()] for e in elements: if 'cardtextbox' in e.values(): e, e.text_content() (<Element div at 0x31a7ba0>, 'Enchant creature') (<Element div at 0x31a7bf8>, "Enchanted creature has protection from red. This effect doesn't remove Red Ward.") I clearly do not know what I am doing but I was poking at it. It seemed to me that the values you are trying to identify are values of attribute dictionaries and so I knew enough to figure out this much. It would take some more poking at it to list all of the attributes you want to identify but I think this should get you started.
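A note on the question's code, before the exploratory answer above: `htmlRaw.get_text` is missing its parentheses, so `htmlText` is the bound method rather than a string, and `rarity == line.next()` compares instead of assigns. A hedged BeautifulSoup sketch in the spirit of the original attempt (the `label`/`value` class names are an assumption about Gatherer's markup and would need checking against the page source):

    import urllib.request
    from bs4 import BeautifulSoup

    url = ('http://gatherer.wizards.com/Pages/Card/Details.aspx'
           '?name=Dark%20Prophecy')
    soup = BeautifulSoup(urllib.request.urlopen(url).read())

    # get_text is a method, so it must be called: get_text(), not get_text.
    text = soup.get_text()

    # Pairing each category label with the value element that follows it is
    # sturdier than scanning raw text lines; the class names are assumptions.
    for label in soup.find_all('div', class_='label'):
        value = label.find_next_sibling('div', class_='value')
        if value is not None:
            print(label.get_text(strip=True), value.get_text(strip=True))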
Create new file for each instance Python Question: I have a file with id numbers along with the specifics of each event that is logged (time, temp, location). I want python to group all of the same i.d.'s into their own unique files, storing all of the event specifics from each record. That is to say, go through each record; if the id does not have a log file, create one; if it does, log the new record into that id's file. Example input

    1, 11:00, 70, port A
    1, 11:02, 70, port B
    2, 11:00, 40, blink
    3, 11:00, 30, front

Desired output file name "1" with :[11:00, 70, port A ; 11:02, 70, port B ] file name "2" with :[11:00, 40, blink] file name "3" with :[11:00, 30, front] I am very new to python and I am having trouble finding a reference guide. If anyone knows a good place where I can look for an answer I would appreciate it. Answer: This is pretty straightforward - presuming your file is as you described it; however I am in Python 2.7 so I am not sure what the differences could be.

    all_lines = open('c:\\path_to_my_file.txt').readlines()

    from collections import defaultdict
    my_classified_lines = defaultdict(list)

    for line in all_lines:
        data_type, value1, value2, value3 = line.split(',')
        my_classified_lines[data_type].append(','.join([value1, value2, value3]))

    for data_type in my_classified_lines:
        outref = open('c:\\directory\\' + data_type + '.txt', 'w')
        outref.writelines(my_classified_lines[data_type])
        outref.close()

to understand this you need to learn about dictionaries - useful containers for data - file operations and loops. I found Dive into Python a great resource when I was starting up. I may not have exactly what you want after looking at your output; mine would be like

    11:00, 70, port A
    11:02, 70, port B

I think you are stating that you want a list like object with semi-colons as the separator, which to me suggests you are asking for a string with brackets around it. If your output is really as you describe it then try this

    for data_type in my_classified_lines:
        outref = open('c:\\directory\\' + data_type + '.txt', 'w')
        out_list = [';'.join(my_classified_lines[data_type])]
        outref.writelines(out_list)  # should be only one line
        outref.close()
How do keys work in min and max? Question: I run through the following sequence of statements: >>> a =range(10) >>> min(a,key=lambda(x):x<5.3) 6 >>> max(a,key=lambda(x):x<5.3) 0 The min and max give the exact opposite of what I was expecting. The python documentation on min and max is pretty sketchy. Can anyone explain to me how the "key" works? Thanks. Answer: ## Explanation of the `key` argument Key works like this: a_list = ['apple', 'banana', 'canary', 'doll', 'elephant'] min(a_list, key=len) returns `'doll'`, and max(a_list, key=len) returns `'elephant'` You provide it with a function, and it uses the minimum or maximum of the results of the function applied to each item to determine which item to return in the result. ## Application If your function returns a boolean, like yours, for min it'll return the first of the minimum of True or False, (which is False) which would be 6 or the first of the max (True) which would be 0. To see this: >>> a [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> import pprint >>> pprint.pprint(dict((i, i<5.3) for i in a)) {0: True, 1: True, 2: True, 3: True, 4: True, 5: True, 6: False, 7: False, 8: False, 9: False} Why? >>> min([True, False]) False >>> max([True, False]) True ## Explanation Why is `True` greater than `False`? >>> True == 1 True >>> False == 0 True >>> issubclass(bool, int) True It turns out that `True` and `False` are very closely related to `1` and `0`. They even evaluate the same respectively.
Adding up values in a 2D array in Python Question: I have a numpy 2D array as follows gona = array([['a1', 3], ['a2', 5], ['a3', 1], ['a3', 2], ['a3', 1], ['a1', 7]]) This array has 2 columns What I want to do is create an array with 2 columns. Column 1 should have 'a1' , 'a2', 'a3' values in its' rows and column 2 should have summation of those corresponding values. new_gona = array([['a1', 10], ['a2', 5], ['a3', 4]]) Here, corresponding values are taken as follows. 'a1' : 3 + 7 = 10 'a2' : 5 'a3' : 1 + 2 + 1 = 4 What would be an easy method to achieve this? Answer: Use pandas and its indexing magic: import pandas as pd import numpy as np gona = np.array([['a1', 3], ['a2', 5], ['a3', 1], ['a3', 2], ['a3', 1], ['a1', 7]]) # Create series where second items are data and first items are index series = pd.Series(gona[:,1],gona[:,0],dtype=np.float) # Compute sums across index sums = series.sum(level=0) # Construct new array in the format you want new_gona = np.array(zip(sums.index,sums.values)) new_gona # out[]: # array([['a1', '10.0'], # ['a2', '5.0'], # ['a3', '4.0']], # dtype='|S4') It's also notable that `np.array`s can only hold one datatype. So your mixing of strings and numeric types needs to be corrected for by specifying `dtype=np.float`. You can use `np.int` if you want.
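If pulling in pandas feels heavy for this, a pure-NumPy sketch of the same group-and-sum, using `np.unique` and `np.bincount`, would be:

    import numpy as np

    keys = gona[:, 0]
    vals = gona[:, 1].astype(float)            # the array stores strings
    labels, inverse = np.unique(keys, return_inverse=True)
    sums = np.bincount(inverse, weights=vals)  # one sum per unique key
    new_gona = np.column_stack([labels, sums])
    # -> rows ('a1', 10.0), ('a2', 5.0), ('a3', 4.0), coerced back to strings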
Tree interval representation in python Question: Consider I have a tree built with import networkx as nx dg = nx.DiGraph() dg.add_edges_from([('a', 'b'), ('a', 'c'), ('b', 'd'), ('b', 'e')]) I want an interval representation of the following tree, what I mean is [1, [2, [3, 4], [5, 6], 7], [8,9 ], 10] Where the nesting corresponds to the tree. Are there any functions in networkx or in an other library that allow to do this? Answer: I don't think you will find a lib which do that. However it's easy with a DFS. Here is a recursive implementation: def tree(dg): def walk(root, start): children = dg.successors(root) if children: cur = start+1 res = [start] for c in children: l = walk(c, cur) res.append(l) cur = l[-1]+1 res.append(cur) return res else: return [start, start+1] roots = [i for i in dg.nodes() if dg.in_degree(i) == 0] assert(len(roots) == 1) root = roots[0] return walk(root, 1) Then: >>> tree(dg) [1, [2, 3], [4, [5, 6], [7, 8], 9], 10] By the way, you did notice that my answer is not the same as yours. You didn't specify any ordering on the children. If you want to, just sort the children list according to the desired ordering.
How can I execute interactive external commands? Question: I usually write bash scripts, but I am writing one now in python. So I have the problem that I want to run an interactive command which asks for some user data, so ideally I would like to pass control of stdin, stdout to bash, and then go back to my python script when the command has been executed correctly. Problem is: I haven't been able to do it with os.system. And I would also like to capture the exit status of the command that I run. Answer:

    from subprocess import Popen, STDOUT, PIPE
    from time import sleep

    x = Popen('du -h', shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
    while x.poll() is None:
        sleep(0.25)
    print('Command finished successfully with the following exit status:', x.poll())
    print('And this was the output given by the command:')
    print(x.stdout.read())
    x.stdout.close()
    x.stdin.close()
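Since the question is specifically about handing stdin/stdout over to an interactive command, note that the piped version above won't let the user type answers at a prompt. A hedged sketch that inherits the terminal and still captures the exit status (`some_interactive_command` is a placeholder):

    import subprocess

    # Without stdin=/stdout= arguments the child inherits the parent's
    # terminal, so an interactive prompt asking for user data works normally.
    status = subprocess.call('some_interactive_command', shell=True)
    print('exit status:', status)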
Getting a Syntax Error: if len(sys.argv) =! 5 no clue why the code is wrong Question: Hello I'm working on my first python script and I've got a > Syntax Error: if len(sys.argv) =! 5:: I really have no clue what is causing it. I'm using Python v3.3.3 in Wing IDE 5.0 on a windows box. It's my first script but I do know other programming languages so I don't care if the answer is difficult to understand. It's probably a noobish error.. Does it maybe have something to do with the new syntax?

    import shodan
    import requests
    import sys

    SHODAN_API_KEY = "ENTER API KEY IN HERE"

    api = shodan.Shodan(SHODAN_API_KEY)
    iptotal = ('IP list')
    pagenmbr = 1

    if __name__ == "__main__":
        if len(sys.argv) =! 5:
            print('Usage: <query> <username> <password> <lastpagenumber')
            sys.exit(0)
        query = sys.argv[1]
        username = sys.argv[2]
        password = sys.argv[3]
        endpage = sys.argv[4]
        iteratePage(pagenmbr)

    def iteratePage(pagenmbr):
        try:
            ...
        except (shodan.APIError, e):
            print ('Error: %s' % e)
        pagenmbr = pagenmbr + 1
        if pagenmbr <= endpage:
            iteratePage(pagenmbr)
        print(iptotal)
        #Append succeeded items to file
        with open("outputsbb.txt", "a") as myfile:
            myfile.write(iptotal)

I left out the try commands to save some space. If anybody has had this error before or could help me with this, please reply and help a fellow coder out; it's really appreciated. Answer: You inverted the `!` and the `=` in your condition. You wanted to write:

    if len(sys.argv) != 5:
        #...

See [list of Python operators](http://www.python.org/doc//current/library/operator.html#mapping-operators-to-functions)
Python test for junction point target Question: Under Windows with Python 2.7 is there a way to check if a folder is the **target** of any junction points? And if so, find which symlink leads to it? For example, in a dos shell create a junction point using mklink C:\>mklink /J C:\junction C:\Users Junction created for C:\junction <<===>> C:\Users And in python (assuming no prior knowledge that this junction exists) test "C:\Users" if it is the target of any junction points, returning a list of symbolic links if True, in this case: ['C:\junction'] Answer: Here's something I put together using some of the code in an ActiveState recipe titled [_Windows directory walk using ctypes_](http://code.activestate.com/recipes/578629-windows-directory-walk- using-ctypes/). There's probably a more direct way to do it than using the win32 `FindFirstFile` and `FindNextFile` functions, but this seems to work in my limited testing. import os import sys import ctypes from ctypes import Structure from ctypes import byref import ctypes.wintypes as wintypes from ctypes import addressof FILE_ATTRIBUTE_DIRECTORY = 16 # (0x10) FILE_ATTRIBUTE_REPARSE_POINT = 1024 # (0x400) MAX_PATH = 260 GetLastError = ctypes.windll.kernel32.GetLastError class FILETIME(Structure): _fields_ = [("dwLowDateTime", wintypes.DWORD), ("dwHighDateTime", wintypes.DWORD)] class WIN32_FIND_DATAW(Structure): _fields_ = [("dwFileAttributes", wintypes.DWORD), ("ftCreationTime", FILETIME), ("ftLastAccessTime", FILETIME), ("ftLastWriteTime", FILETIME), ("nFileSizeHigh", wintypes.DWORD), ("nFileSizeLow", wintypes.DWORD), ("dwReserved0", wintypes.DWORD), ("dwReserved1", wintypes.DWORD), ("cFileName", wintypes.WCHAR * MAX_PATH), ("cAlternateFileName", wintypes.WCHAR * 20)] def find_junctions(folder): """ Return a list of subdirectories in folder which are junction points """ if not os.path.isdir(folder): return False folder = unicode(folder) if not folder.startswith(u'\\\\?\\'): if folder.startswith(u'\\\\'): # network drive folder = u'\\\\?\\UNC' + folder[1:] else: # local drive folder = u'\\\\?\\' + folder junction_points = [] data = WIN32_FIND_DATAW() h = ctypes.windll.kernel32.FindFirstFileW(os.path.join(folder, u'*'), byref(data)) last_error = ctypes.windll.kernel32.GetLastError() if h < 0: ctypes.windll.kernel32.FindClose(h) if not sys.stderr.isatty(): print >> sys.stderr, ('Failed to find first file %s' % os.path.join(folder, u'*')) if last_error != 5: # access denied. raise WindowsError('FindFirstFileW %s, Error: %d' % (folder, ctypes.windll.kernel32.GetLastError())) return [] if (data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY and data.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT): if data.cFileName not in (u'.', u'..'): junction_points.append(data.cFileName[:]) try: while ctypes.windll.kernel32.FindNextFileW(h, byref(data)): if (data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY and data.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT): if data.cFileName not in (u'.', u'..'): junction_points.append(data.cFileName[:]) except WindowsError as e: if not sys.stderr.isatty(): print >> sys.stderr, ( 'Failed to find next file %s, handle %d, buff addr: 0x%x' % (os.path.join(folder, u'*'), h, addressof(data))) ctypes.windll.kernel32.FindClose(h) return junction_points def is_junction_point(folder): dirpath, folder = os.path.split(os.path.abspath(folder)) return unicode(folder) in find_junctions(dirpath)
Yahoo Finance - Python Web Scraper - Key Statistics and Financial Statements Question: I am fairly new to programming, and this is my first project after reading various guides. I am trying to scrape data from the Yahoo Finance Key Statistics page and Financial Statements (ie. <http://finance.yahoo.com/q/ks?s=GOOG+Key+Statistics>). The links to the financials is at the bottom of the key statistics page. The code for the key statistics function seems to work. But for the statement function, the entry variable used in pattern3 does not obtain negative values. The problem is especially apparent for the cash flow statement. For negative values, entry should look like entry = '<td align="right">(.+?)</td>' Am I approaching this correctly? Is there a simple way obtain all the values of the financial statements and put them into a list? My Code in Python 2.7: import urllib import re keystat = '<td class="yfnc_tabledata1">(.+?)</td>' date = '<th scope="col" style="border-top:2px solid #000;text-align:right; font- weight:bold">(.+?)</th>' #obtain the date; only works for income statement total = '<strong>(.+?)&nbsp;&nbsp;</strong>' #obtain data for any totals from statements entry = '<td align="right">(.+?)&nbsp;&nbsp;</td>' #obtain data for any entries on statements that are not totals def keystatfunc(symbol): url = 'http://finance.yahoo.com/q/ks?s=' + symbol + '+Key+Statistics' htmlfile = urllib.urlopen(url) htmltext = htmlfile.read() regex = '<span id="yfs_j10_' + symbol + '">(.+?)</span>' pattern = re.compile(regex) pattern2 = re.compile(keystat) marketcap = re.findall(pattern, htmltext) keystats = re.findall(pattern2, htmltext) return (marketcap + keystats[1:31]) #creates a list with all the data on key statistics page) def statement(symbol, period, statementtype): #period: "quarter" or "annually"; statementtype: is, bs, or cf (income statement, balance sheet, cash flow statement) if period == "quarterly" and statementtype == "bs": url = 'http://finance.yahoo.com/q/bs?s=' + symbol elif period == "annual" and statementtype == "bs": url = 'http://finance.yahoo.com/q/bs?s=' + symbol + '&annual' elif period == "quarterly" and statementtype == "is": url = 'http://finance.yahoo.com/q/is?s=' + symbol + '&annual' elif period == "annual" and statementtype == "is": url = 'http://finance.yahoo.com/q/is?s=' + symbol + '&annual' elif period == "quarterly" and statementtype == "cf": url = 'http://finance.yahoo.com/q/cf?s=' + symbol + '&annual' elif period == "annual" and statementtype == "cf": url = 'http://finance.yahoo.com/q/cf?s=' + symbol + '&annual' htmlfile = urllib.urlopen(url) htmltext = htmlfile.read() pattern = re.compile(date) pattern2 = re.compile(total) pattern3 = re.compile(entry) dates = re.findall(pattern, htmltext) totals = re.findall(pattern2, htmltext) entries = re.findall(pattern3, htmltext) return (dates + totals + entries) print keystatfunc("goog") print statement("goog", "annual", "cf") Answer: I don't believe the method you are using to extract the info is the most reliable way but I changed your code a little bit to capture the info you need. 
I updated the regular expression to check for parenthesis and added a section at the end to replace import urllib import re keystat = '<td class="yfnc_tabledata1">(.+?)</td>' date = '<th scope="col" style="border-top:2px solid #000;text-align:right; font- weight:bold">(.+?)</th>' #obtain the date; only works for income statement total = '<strong>(.+?)&nbsp;&nbsp;</strong>' #obtain data for any totals from statements entry = '<td align="right">(\(?.+?\)?)</td>' #obtain data for any entries on statements that are not totals def keystatfunc(symbol): url = 'http://finance.yahoo.com/q/ks?s=' + symbol + '+Key+Statistics' htmlfile = urllib.urlopen(url) htmltext = htmlfile.read() regex = '<span id="yfs_j10_' + symbol + '">(.+?)</span>' pattern = re.compile(regex) pattern2 = re.compile(keystat) marketcap = re.findall(pattern, htmltext) keystats = re.findall(pattern2, htmltext) return (marketcap + keystats[1:31]) #creates a list with all the data on key statistics page) def statement(symbol, period, statementtype): #period: "quarter" or "annually"; statementtype: is, bs, or cf (income statement, balance sheet, cash flow statement) if period == "quarterly" and statementtype == "bs": url = 'http://finance.yahoo.com/q/bs?s=' + symbol elif period == "annual" and statementtype == "bs": url = 'http://finance.yahoo.com/q/bs?s=' + symbol + '&annual' elif period == "quarterly" and statementtype == "is": url = 'http://finance.yahoo.com/q/is?s=' + symbol + '&annual' elif period == "annual" and statementtype == "is": url = 'http://finance.yahoo.com/q/is?s=' + symbol + '&annual' elif period == "quarterly" and statementtype == "cf": url = 'http://finance.yahoo.com/q/cf?s=' + symbol + '&annual' elif period == "annual" and statementtype == "cf": url = 'http://finance.yahoo.com/q/cf?s=' + symbol + '&annual' htmlfile = urllib.urlopen(url) htmltext = htmlfile.read() pattern = re.compile(date) pattern2 = re.compile(total) pattern3 = re.compile(entry) dates = re.findall(pattern, htmltext) totals = re.findall(pattern2, htmltext) entries = re.findall(pattern3, htmltext) entriesFixed = [] for e in entries: entriesFixed.append(e.replace('&nbsp;','')) return (dates + totals + entriesFixed) print keystatfunc("goog")
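Picking up the answer's caveat that regexes against HTML are fragile, here is a hedged BeautifulSoup sketch for the key-statistics part; it reuses the `yfnc_tabledata1` class from the regex above, and everything else about the page layout is an assumption:

    import urllib
    from bs4 import BeautifulSoup

    def keystatfunc_bs(symbol):
        url = 'http://finance.yahoo.com/q/ks?s=' + symbol + '+Key+Statistics'
        soup = BeautifulSoup(urllib.urlopen(url).read())
        # The same cells the keystat regex targets, but parsed structurally,
        # so negative values in parentheses come through untouched.
        cells = soup.find_all('td', {'class': 'yfnc_tabledata1'})
        return [c.get_text(strip=True) for c in cells]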
Python: How I can split edges with n nodes Question: Is it possible to add additional nodes to the network by splitting existing edges with given number of nodes? Say, I generate a random graph, then I want to "cut" all existing edges into n equal segments with nodes at the ends. Thanks Answer: Yes use [`remove_edges_from`](http://networkx.lanl.gov/reference/generated/networkx.MultiGraph.remove_edges_from.html): In [49]: import networkx as nx g = nx.krackhardt_kite_graph() g.edges() Out[49]: [(0, 1), (0, 2), (0, 3), (0, 5), (1, 3), (1, 4), (1, 6), (2, 3), (2, 5), (3, 4), (3, 5), (3, 6), (4, 6), (5, 6), (5, 7), (6, 7), (7, 8), (8, 9)] Now remove edges: g.remove_edges_from(g.edges()) g.edges() Out[51]: [] In [52]: g.nodes() Out[52]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] You can now arbritratily add new nodes and edges
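The answer covers removing edges; to actually "cut" each edge into n equal segments as asked, a sketch like the following works (it assumes integer node labels so fresh node names can be generated; adapt the naming for other label types):

    import networkx as nx

    def split_edges(g, n):
        """Replace each edge (u, v) with a path of n segments,
        inserting n - 1 new nodes between u and v."""
        next_node = max(g.nodes()) + 1   # assumes integer node labels
        for u, v in list(g.edges()):
            g.remove_edge(u, v)
            prev = u
            for _ in range(n - 1):
                g.add_edge(prev, next_node)
                prev = next_node
                next_node += 1
            g.add_edge(prev, v)

    g = nx.krackhardt_kite_graph()
    split_edges(g, 3)   # every original edge becomes 3 segments / 2 new nodes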
Why isn't CSS working with apache2+mod_wsgi+python3+bottle? Question: Background: I am trying to stand up an Amazon EC2 instance using the ubuntu server. I have installed Python3.3.2, mod_wsgi-3.5, apache2, and bottle and jinja2 for python3. I can get a regular webpage to load using all of these components, e.g. it recognizes the jinja2 templates, and correctly interpolates variables passed in the python code to the html files. Furthermore, if I change the html in `views/home.tmpl` to have `<body bgcolor="#b0c4de">` then I get the appropriate color. Problem: I want to implement a good level of abstraction (and learn CSS in general), so I want my pages to have an external CSS to manage the HTML page attributes, etc. But I can't get things to work properly and I can't seem to figure out why. Minimum (non)working example of the code: My directory structure is: ubuntu@ip-172-31-47-7:/var/www/helloworld$ ls -lrtR .: -rw-rw-r-- 1 ubuntu ubuntu 162 Feb 4 23:55 adapter.wsgi -rwxrwxr-x 1 www-data www-data 1044 Feb 5 04:10 helloworld.py drwxrwxr-x 3 www-data www-data 4096 Feb 5 04:14 views ./views: drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 5 04:04 css -rw-rw-r-- 1 www-data www-data 431 Feb 5 04:14 home.tmpl ./views/css: -rw-rw-r-- 1 ubuntu ubuntu 34 Feb 5 04:04 homestyle.css `adapter.wsgi` is just a wrapper to launch bottle.default_app(): import sys, os, bottle sys.path = ["/var/www/helloworld/"] + sys.path os.chdir(os.path.dirname(__file__)) import helloworld application = bottle.default_app() `helloworld.py` is pretty simple as well: from bottle import default_app, debug, get, post, request, route, run from bottle import jinja2_template as template from bottle import jinja2_view as view @route("/hello") def hello(name=None): return template('home.tmpl', name=name) `views/home.tmpl` has some jinja2-specific code, but is short. note the `<link ...>` line; I feel like this is where I'm having the trouble: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"> <html lang="en"> <head> <link rel="stylesheet" type="text/css" href="views/css/homestyle.css"> {% block head %} <title>{% block title %} {% endblock %}Jinja2-Templated Webpage!</title> {% endblock %} </head> <body> {% if name is string: %} Hello {{ name.title() }} {% else: %} Hello world... {% endif %} </body> and my `views/css/homestyle.css` is as simple as could be: body {background-color: #b0c4de;} I have tried moving the placement of homestyle.css to be in the `views` directory, or in the "top-level" directory (`/var/www/helloworld` here); I've also tried using different links in my `href=`, including an absolute path. All to no avail, I cannot get this CSS to color my webpage. Any help is greatly appreciated! Answer: If possible, let Apache serve your css file; don't put it under `views`. And remember that your stylesheet's href is relative to your web page's URI, not to the directory where Bottle runs. So, if you hit the page at `http://myhost/hello`, then use this: <link rel="stylesheet" type="text/css" href="css/homestyle.css"> and put `homestyle.css` in `/var/www/helloworld/css/`. Hope that helps!
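Alternatively, if you'd rather not rely on Apache for static files during development, Bottle can serve the stylesheet itself with `static_file`. A hedged sketch (the route is relative to the `/pico` mount point, so the page would reference `href="/pico/css/homestyle.css"`; letting Apache serve it, as the answer recommends, is faster in production):

    from bottle import route, static_file

    @route('/css/<filename>')
    def serve_css(filename):
        # Resolves to /var/www/helloworld/css/homestyle.css and friends.
        return static_file(filename, root='/var/www/helloworld/css')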
Using Python and BeautifulSoup (Saved webpage source codes into a local file) Question: Could you help me with this problem, please? (Environment: Python 2.7 + BeautifulSoup 4.3.2) I am trying to using Python and BeautifulSoup to pick up information on a webpage. Because the webpage is in the company website requires login and redirection, so I copy the source codes of the target page into a file and save it as “example.html” in C:\ for the convenience of practicing. This the a part of the original codes: <tr class="ghj"> <td><span class="city-sh"><sh src="./citys/1.jpg" alt="boy" title="boy" /></span><a href="./membercity.php?mode=view&amp;u=12563">port_new_cape</a></td> <td class="position"><a href="./search.php?id=12563&amp;sr=positions" title="Search positions">452</a></td> <td class="details"><div>South</div></td> <td>May 09, 1997</td> <td>Jan 23, 2009 12:05 pm&nbsp;</td> </tr> The codes so far I worked out is: from bs4 import BeautifulSoup import re import urllib2 url = "C:\example.html" page = urllib2.urlopen(url) soup = BeautifulSoup(page.read()) cities = soup.find_all('span', {'class' : 'city-sh'}) for city in cities: print city ** this is just the first stage of testing so somewhat not completed. However when I run it, it gives error message, seems it’s improper to use “urllib2.urlopen” to open a local file. Traceback (most recent call last): File "C:\Python27\Testing.py", line 8, in page = urllib2.urlopen(url) File "C:\Python27\lib\urllib2.py", line 127, in urlopen return _opener.open(url, data, timeout) File "C:\Python27\lib\urllib2.py", line 404, in open response = self._open(req, data) File "C:\Python27\lib\urllib2.py", line 427, in _open 'unknown_open', req) File "C:\Python27\lib\urllib2.py", line 382, in _call_chain result = func(*args) File "C:\Python27\lib\urllib2.py", line 1247, in unknown_open raise URLError('unknown url type: %s' % type) URLError: So could you please teach me, in what way, I can practice by using a local file? Thank you. Answer: The best way to open a local file with BeautifulSoup is to pass it an open file handler directly. <http://www.crummy.com/software/BeautifulSoup/bs4/doc/#making-the-soup> from bs4 import BeautifulSoup soup = BeautifulSoup(open("C:\\example.html")) for city in soup.find_all('span', {'class' : 'city-sh'}): print(city)
Cartoonish style plots in MATLAB or Python? Question: There is an excellent post on [Mathematica here](http://mathematica.stackexchange.com/questions/11350/xkcd-style-graphs) on how to make plots and graphs more cartoony and warm to an audience. I was wondering if anyone knows of similar ways we can do this in either MATLAB, or possibly python. Is something like this possible? (It is also known as xkcd-style plots) Answer: Yes there is (in Python at least)! here's a simple example (basically ripped from a pyplot tutorial): import numpy as np import matplotlib.pyplot as plt # evenly sampled time at 200ms intervals t = np.arange(0., 5., 0.2) plt.xkcd(scale=1, length=100, randomness=2) plt.plot(t, t, 'r', t, t**2, 'b', t, t**3, 'g') plt.show() just make sure you have a recent enough version of matplotlib and you should be good to go here's the docs on the xkcd function - <http://matplotlib.org/api/pyplot_api.html?highlight=xkcd#matplotlib.pyplot.xkcd> when I run the code above on my computer I see: ![enter image description here](http://i.stack.imgur.com/ktl6j.png)
Error in opening a file in python Question: Let's consider the below code:

    fp=open('PR1.txt','r')
    ch=fp.readlines()
    print "%s" % (' '.join(ch))
    print "\n"
    fp.close()

Above code gives an error:

    IOError: [Errno 2] No such file or directory: 'PR1.txt'

But when I am providing its full location, i.e.

    fp=open('D:/PR1.txt','r')

then it is working properly... Is it necessary to provide the full location of the file, or is there some other way too? Answer: No, it is not necessary, but you need to be certain you are running your script with the right working directory. Your script working directory is evidently _not_ `D:/`. In practice, it is better to only use relative paths if you are in full control of the working directory. You can get the current working directory with [`os.getcwd()`](http://docs.python.org/2/library/os.html#os.getcwd) and set it with [`os.chdir()`](http://docs.python.org/2/library/os.html#os.chdir) but using absolute paths is usually better. For paths relative to the _current module or script_, use the `__file__` global to produce a directory name:

    import os.path

    here = os.path.dirname(os.path.abspath(__file__))

then use `os.path.join()` to make relative paths absolute in reference to `here`.
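Putting the answer's two pieces together, a short sketch:

    import os.path

    # Resolve PR1.txt relative to this script, not the working directory.
    here = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(here, 'PR1.txt')) as fp:
        print ' '.join(fp.readlines())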
TypeError: coercing to Unicode: need string or buffer, list found Question: trying to get this data parsing script up and running. It works as far as the data manipulation is concerned; what I'm trying to do is set this up so I can enter multiple user defined CSV's with a single command. e.g.

> python script.py One.csv Two.csv Three.csv

Also if you have any advice on how to automate the naming of the output CSV so that if `input = test.csv`, `output = test1.csv` Getting TypeError: coercing to Unicode: need string or buffer, list found for the line

    for line in csv.reader(open(args.infile)):

My code:

    import csv
    import pprint
    pp = pprint.PrettyPrinter(indent=4)

    res = []

    import argparse
    parser = argparse.ArgumentParser()
    #parser.add_argument("infile", nargs="*", type=str)
    #args = parser.parse_args()
    parser.add_argument ("infile", metavar="CSV", nargs="+", type=str, help="data file")
    args = parser.parse_args()
    with open("out.csv","wb") as f:
        output = csv.writer(f)

        for line in csv.reader(open(args.infile)):
            for item in line[2:]:
                #to skip empty cells
                if not item.strip():
                    continue
                item = item.split(":")
                item[1] = item[1].rstrip("%")
                print([line[1]+item[0],item[1]])
                res.append([line[1]+item[0],item[1]])
                output.writerow([line[1]+item[0],item[1].rstrip("%")])

I don't really understand what is going on with the error, so if someone could explain in layman's terms? Bear in mind I am new to programming/python as a whole and am basically learning alone, so if possible could you explain what is going wrong and how to fix it, so I can note it down for future reference Answer: `args.infile` is a _list_ of filenames, not one filename. Loop over it:

    import os.path

    for filename in args.infile:
        base, ext = os.path.splitext(filename)
        with open("{}1{}".format(base, ext), "wb") as outf, open(filename, 'rb') as inf:
            output = csv.writer(outf)
            for line in csv.reader(inf):

Here I used `os.path.splitext()` to split extension and base filename so you can generate a new output filename adding `1` to the base.
Unable to connect to Android device from Jenkins Question: So I used to call a MonkeyRunner script from within Jenkins to connect to a USB Android device and run some automated tests, however MonkeyRunner itself was rather unstable so I switched to the fantastic AndroidViewClient and ported my test script over to use this pure Python API. The new Python script works fine when invoked independently from the shell, however the goal is to call this script from Jenkins as a post build step. The problem I have is the initial connection to the USB Android device. The script below is a snippet from the actual test script - this is the part that tests if a USB device is present and obtains its serial number before connecting to the Android device. This works fine from the ubuntu shell, however it fails to connect when invoked from a Jenkins 'Execute Shell'.

    #! /usr/bin/env python
    # Import Class Files
    import re
    import sys
    import os
    import time
    import commands
    import signal
    import subprocess
    import codecs

    ubuntuHome = os.getenv('HOME')
    sdkRootDefault = ubuntuHome + '/dev_env/ADT/sdk'
    sdkRoot = os.getenv('ANDROID_SDK_ROOT',sdkRootDefault)
    platformTools = sdkRoot + '/platform-tools'

    # Find the attached devices
    adbcmd = platformTools + "/./adb devices |grep -v attached |grep device |head -n 1 | cut -f1"
    p = subprocess.Popen(adbcmd, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    serialnoIn = (p.stdout.readline()).strip()

    # No devices found then exit
    if len(serialnoIn) == 0 or serialnoIn is None:
        print ("ERROR: No devices found")
        sys.exit(1)

    print "INFO: Trying Connection to " + serialnoIn + "..."

From Ubuntu Shell the output is:

> INFO: Trying Connection to 3a005473...

From Jenkins 'Execute Shell' build step

> ERROR: No devices found

Interestingly when this same snippet was called from within the original MonkeyRunner script it worked fine. So I am thinking MonkeyRunner does some initial setup my Python script isn't doing? I am not enough of a Python expert to know how to configure USB devices. Any help is appreciated. Answer: Okay the answer was simple really - it helps if jenkins can make use of the sdk tools and AndroidViewClient paths!

> sudo setfacl -Rm u:jenkins:rwx ...fixed it
sort 2d array by date in python 3.3 Question: I have 3 arrays that are 2D, and inside them is a string that is a date. I would like to sort these by that date. The arrays are all structured like this:

    array1 = [[1,'29-04-2013','U11'],[2,'20-05-2013','U11']]
    array2 = [[1,'06-05-2013','U13'],[2,'03-06-2013','U13']]
    array3 = [[1,'06-03-2013','U15'],[2,'03-07-2013','U15']]

I would like to get them into an array like this:

    all = [[1,'06-03-2013','U15'],[1,'29-04-2013','U11'],[1,'06-05-2013','U13'],[2,'20-05-2013','U11'],[2,'03-06-2013','U13'],[2,'03-07-2013','U15']]

I just need some sort of way to approach this, as I haven't got a clue how I would do it. Thanks for the help in advance Answer:

    big_array = array1 + array2 + array3

    import dateutil.parser as p
    # dayfirst=True so '06-05-2013' parses as 6 May, matching "%d-%m-%Y"
    print sorted(big_array, key=lambda x: p.parse(x[1], dayfirst=True))

If for some reason you are opposed to dateutil.parser:

    import datetime
    print sorted(big_array, key=lambda x: datetime.datetime.strptime(x[1], "%d-%m-%Y"))

the reason I recommend datetime over the regular time module is that datetime can see as far in the future as I've tested, while the time module only works up to like 2035. However, you can also do it with the time module:

    import time
    print sorted(big_array, key=lambda x: time.strptime(x[1], "%d-%m-%Y"))
Email Error help me with syntax error Question: import smtplib from email import encoders from email.message import Message from email.mime.audio import MIMEAudio from email.mime.base import MIMEBase from email.mime.image import MIMEImage from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText msg = MIMEMultipart() msg.attach(MIMEText(file("P:/Email/test.txt").read())) sender = 'sender@email.com' reciever = 'reciever@email.com' msg = 'Hello' # Credentials (if needed) username = 'user' password = 'pass' # The actual mail send server = smtplib.SMTP('localhost') server.starttls() server.login(username,password) server.sendmail(sender, reciever, msg) server.quit() Traceback (most recent call last): File "attach2.py", line 27, in server.sendmail(sender, reciever, msg) File "C:\Python33\lib\smtplib.py", line 775, in sendmail (code, resp) = self.data(msg) File "C:\Python33\lib\smtplib.py", line 516, in data q = _quote_periods(msg) File "C:\Python33\lib\smtplib.py", line 167, in quote_periods return re.sub(br'(?m)^.', b'..', bindata) File "C:\Python33\lib\re.py", line 170, in sub return _compile(pattern, flags).sub(repl, string, count) TypeError: expected string or buffer why am i seeing this error message. is there something wrong with my python library file? Answer: The previous line is missing a closing parenthesis. ... msg = MIMEMultipart() msg.attach(MIMEText(file("P:/Email/test.txt").read())) # line missing a parenthesis sender = 'whosends@something.com' ...
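Beyond the missing parenthesis, the question's script also reassigns `msg` from the `MIMEMultipart` object to the plain string `'Hello'`, and in Python 3 `file()` no longer exists. A hedged sketch of the send path, keeping the question's names and SMTP details:

    import smtplib
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    msg = MIMEMultipart()
    msg['From'] = sender = 'sender@email.com'
    msg['To'] = reciever = 'reciever@email.com'
    msg['Subject'] = 'Hello'
    with open('P:/Email/test.txt') as f:   # file() is gone in Python 3
        msg.attach(MIMEText(f.read()))

    server = smtplib.SMTP('localhost')
    server.starttls()
    server.login(username, password)
    server.sendmail(sender, reciever, msg.as_string())  # serialize the message
    server.quit()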
python pyodbc query loop only returns one result Question: I've included the code below. Basically, I have a table (called `forumPosts`) that has the columns `user_id`,`children`,`content`,`id`. `children` is a string of ids for `forumPosts` that are follow up forum posts (I'm not in charge of organizing children posts into the same table as parent posts). This string is formatted like `'["id1","id2",...]'`. My goal is to produce a data set that looks like: parent_user_id,child_user_id_1,child_user_id_2,... The issue here is that the below code only prints a single output. Why does it do this? Thanks for your help. import pyodbc #connect to database, create db cursor cnxn = pyodbc.connect('cnxnstring', autocommit=True) cursor = cnxn.cursor() for row in cursor.execute("select user_id,children from myTable where children!='[]' and user_id!='null'"): children=row[1].replace('"','').replace('[','').replace(']','').split(',') userid=row[0] for child in children: print cursor.execute("select user_id from myTable where id = ?",child).fetchone() sample data for the columns requested: id | user_id | children hgjegh4pjbr44p | gd6v7134AUa | ["asdf34dfg3sdfq", "asdegh4pjbrxx3"] asdegh4pjbrxx3 | xzf7134AUax | ["hgjegh4pjbr44p"] hgjegh4pjbr44p | NULL | [] asdf34dfg3sdfq | adfcv34skax | [] Answer: ## Explanation Using the sample data provided, the output is: ('adfcv34skax', ) None A row for `asdegh4pjbrxx3` ID is not returned because of leading whitespace. This can be observed by changing the last `print` line to: print child Which outputs: asdf34dfg3sdfq asdegh4pjbrxx3 hgjegh4pjbr44p `' asdegh4pjbrxx3'` isn't equal to `'asdegh4pjbrxx3'`, which returns a null result. ## Solution Replace the `children` assignment line with the following to strip necessary characters and whitespace: children=[child.strip('"[] ') for child in row[1].split(',')] I think a list comprehension with the single `strip` call is more readable than nested `replace`. Alternately, [ast.literal_eval](http://docs.python.org/2/library/ast.html#ast.literal_eval) should work as well: children=ast.literal_eval(row[1])
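One more hedged note beyond the whitespace issue: calling `cursor.execute` inside a loop that is still iterating over the same cursor's result set can discard the pending rows, which by itself would truncate the output. Fetching the outer rows first (or using a second cursor) sidesteps that:

    # Hedged sketch: materialize the outer result set before issuing new queries.
    rows = cursor.execute("select user_id, children from myTable "
                          "where children != '[]' and user_id != 'null'").fetchall()
    child_cursor = cnxn.cursor()
    for user_id, children_raw in rows:
        children = [c.strip('"[] ') for c in children_raw.split(',')]
        for child in children:
            print child_cursor.execute(
                "select user_id from myTable where id = ?", child).fetchone()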
Using a tree in python to get values Question: So I am trying to create a Tree using Python to be able to try and read a text file, which has repeating quantities within the file, and try to create a tree out of these values and return the sentences with the Top 3 values (Explained in more detail below). First of all I searched on [wikipedia](http://en.wikipedia.org/wiki/Tree_%28data_structure%29) on how a tree is created and have also seen previous examples on stackoverflow like: [This one](http://stackoverflow.com/questions/9499782/how-do-i-classify-this- value-using-a-decision-tree). and [This one](http://stackoverflow.com/questions/2824232/getting-a-tables-values-into- a-tree). However I have only been able to do this so far as code goes: import fileinput setPhrasesTree = 0 class Branch(): def __init__(self, value): self.left = None self.right = None self.value = value class Tree(): def __init__(self): self.root = None self.found = False #lessThan function needed to compare strings def lessThan(self, a, b): if len(a) < len(b): loopCount = len(a) else: loopCount = len(b) for pos in range(0, loopCount): if a[pos] > b[pos]: return False return True def insert(self, value): self.root = self.insertAtBranch(self.root, value) def exists(self, value): #set the class variable found to False to assume it is not there self.found = False self.findAtBranch(self.root, value) return self.found #Used to fine a value in a tree def findAtBranch(self, branch, value): if branch == None: pass else: if branch.value == value: self.found = True else: self.findAtBranch(branch.left, value) self.findAtBranch(branch.right, value) def insertAtBranch(self, branch, value): if branch == None: return Branch(value) else: if self.lessThan(branch.value, value): branch.right = self.insertAtBranch(branch.right, value) else: branch.left = self.insertAtBranch(branch.left, value) return branch def loadTree(filename, treeType): if treeType == setPhrasesTree: for sentence in fileinput.input("setPhrases.txt"): print(sentence) setPhrases.insert(sentence[:-1]) def findSentenceType(sentence): if sentence.exists(sentence): return setPhrasesTree Here is what text file looks like. Bare in mind that it is purposefully laid out like this and not with a quantity value next to it (file name = setPhrases.txt): Hi my name is Dave. Thank-You. What is your name? I have done all my homework. What time is dinner? What is your name? Thank-You. Hi my name is Dave. What is your name? I have done all my homework. What is your name? Can you bring me a drink Please? Can you bring me a drink Please? What is your name? Hi my name is Dave. What is your name? Can you bring me a drink Please? Here is what I am trying to get my code to do. I need it to recognize that the first sentence, in the file, is the starting node. And then it needs to tally up all the other sentences that are the same and add a value to that sentence and just use the tree to be able to do this. (I have originally done this in another way, however I need to use a tree to be able to tally up and do all the other stuff) Here is what I mean: ![enter image description here](http://i.stack.imgur.com/dYpYc.png) I then want to be able to return the top 3 `Phrases` with the highest frequencies. So in this case the system would return the sentences (in this order): What is your name? Hi my name is Dave. Can you bring me a drink please? Any help is much appreciated. Also thank-you for your time. Answer: Here you go, an implementation using a dictionary. Is this what you want? 
import collections def count_lines(): d = collections.defaultdict(int) for line in open( "phrases.txt" ): d[ line.strip() ] += 1 # we use the negative count as sort key, so the biggest ends up first a = sorted( d.items(), key=lambda x : -x[1] ) for n, u in enumerate( a[:3] ): print( u[0], "# count=", u[1] ) count_lines()
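Since the assignment apparently requires a tree rather than a dictionary, here is a hedged sketch of the same tally built on the question's `Branch`/`Tree` shape; each branch carries a count that insertion bumps:

    class CountingBranch(object):
        def __init__(self, value):
            self.left = None
            self.right = None
            self.value = value
            self.count = 1

    def insert(branch, value):
        if branch is None:
            return CountingBranch(value)
        if value == branch.value:
            branch.count += 1              # repeated sentence: bump the tally
        elif value < branch.value:         # plain string comparison as the key
            branch.left = insert(branch.left, value)
        else:
            branch.right = insert(branch.right, value)
        return branch

    def collect(branch, acc):
        if branch is not None:
            acc.append((branch.count, branch.value))
            collect(branch.left, acc)
            collect(branch.right, acc)
        return acc

    root = None
    for line in open("setPhrases.txt"):
        root = insert(root, line.strip())
    for count, phrase in sorted(collect(root, []), reverse=True)[:3]:
        print(phrase, "# count=", count)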
filter a Linux log file using Python Question: I want to filter a log file to keep all lines matching a certain pattern. I want to do this with Python. Here's my first attempt: #!/usr/bin/env python from sys import argv script, filename = argv with open(filename) as f: for line in f: try: e = line.index("some_term_I_want_to_match") except: pass else: print(line) How can I improve this to: * save the result to a new file of similar name (i.e., a different extension) * use regex to make it more flexible/powerful. (I'm just learning Python. This question is as much about learning Python as it is about accomplishing this particular result.) **OK, here's what I came up with so far...** But how do you do the equivalent of prepending an `r` as in the following line re.compile(r"\s*") where the string is **not** a string literal, as in the next line? re.compile(a_string_variable) Other than that, I think this updated version does the job: #!/usr/bin/env python from sys import argv import re import os import argparse #requires Python 2.7 or above parser = argparse.ArgumentParser(description='filters a text file on the search phrase') parser.add_argument('-s','--search', help='search phrase or keyword to match',required=True) parser.add_argument('-f','--filename', help='input file name',required=True) parser.add_argument('-v','--verbose', help='display output to the screen too', required=False, action="store_true") args = parser.parse_args() keyword = args.search original_file = args.filename verbose = args.verbose base_file, ext = os.path.splitext(original_file) new_file = base_file + ".filtered" + ext regex_c = re.compile(keyword) with open(original_file) as fi: with open(new_file, 'w') as fo: for line in fi: result = regex_c.search(line) if(result): fo.write(line) if(verbose): print(line) Can this be easily improved? Answer: Well, you know, you have answered most of your questions yourself already :) For regular expression matching use [`re` module](http://docs.python.org/3/library/re.html) (the doc has pretty explanatory examples). You already have made use `open()` function for opening a file. Use the same function for open files for writing, just provide a corresponding `mode` parameter ("w" or "a" combined with "+" if you need, see `help(open)` in the Python interactive shell). That's it.
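On the leftover `r`-prefix question: the prefix only changes how a string *literal* is parsed (backslashes reach the string verbatim); a variable already holds its final characters, so `re.compile(a_string_variable)` needs no equivalent. If the variable should match literally rather than as a pattern, wrap it in `re.escape`:

    import re

    assert r"\s*" == "\\s*"      # the r prefix is purely string-literal syntax

    keyword = "some_term"        # read at runtime, e.g. from argparse
    regex_c = re.compile(keyword)            # fine: no r prefix needed
    literal = re.compile(re.escape(keyword)) # use if keyword is a literal phrase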
Setting up pico behind apache using mod_wsgi Question: I'm trying to use [pico](https://github.com/fergalwalsh/pico) for a small project. It works fine using the standard server shipped with pico but I'm unable to get it running on apache. I've already gone through [this](http://modwsgi.readthedocs.org/en/latest/configuration-guides/running- a-basic-application.html) guide and everything went smoothly so I know that mod_wsgi is configured correctly. I followed the WSGI set up instructions on the pico wiki to the letter but this is the error I get when trying to access my page: GET localhost/pico/client.js 404 (NOT FOUND) Which results in the "pico is not defined" reference error. All my test code is located in /var/www/ (I've tried other locations as well) pico was installed using pip and is located in /usr/local/lib/python2.7/dist- packages/pico (I even tried modifying the access permission of the files in pico). pico.wsgi is located in /var/www/pico/pico.wsgi My pico.wsgi: import pico.server import sys sys.stdout = sys.stderr # sys.stdout access restricted by mod_wsgi path = '/var/www/' # the modules you want to be usable by Pico if path not in sys.path: sys.path.insert(0, path) # Set the WSGI application handler application = pico.server.wsgi_app I might not be using my "path" variable as intended but I don't see what else it should reference. My httpd.conf: WSGIScriptAlias /pico /var/www/pico/pico.wsgi <Directory /var/www/> Order allow,deny Allow from all </Directory> My index.html (located in /var/www/): <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <link rel="stylesheet" type="text/css" href="style.css"> <script src="http://d3js.org/d3.v3.js"></script> <script src="/pico/client.js"></script> <script src="picoTest.js"></script> </head> <body> <div id="container"> <div id="toolbar"></div> <div id="graph"></div> </div> </body> </html> And finally, sys.path printed from pico.wsgi: ['/var/www/', '/var/www/pico', '/usr/local/lib/python2.7/dist-packages/pip-1.3.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/virtualenv-1.9.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/distribute-0.6.35-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/numpy-1.7.0-py2.7-linux-x86_64.egg', '/usr/local/lib/python2.7/dist-packages/gevent_websocket-0.3.6-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/South-0.8.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/daemon-1.0-py2.7.egg', '/home/kjartan/work/risk/maynard', '/var/www/pico', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-installer', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode'] I'm new to apache and wsgi in general so there might be something obvious I'm missing. 
Answer: I think the WSGIScriptAlias line should be WSGIScriptAlias /pico /var/www/pico/pico.wsgi/pico Note the `/pico` after the `.wsgi` This became necessary after an update to pico a while ago but I forgot to update the [wiki](https://github.com/fergalwalsh/pico/wiki/WSGI). Apologies for that.
Not able to go on with the Request object and callback mechanism of scrapy spider-crawler Question: I am new to Scrapy and python.This is my spider-crawler from scrapy.spider import Spider from scrapy.selector import Selector from scrapy.http import Request from tutorial.settings import * from tutorial.items import * class DmozSpider(Spider): name = "dmoz" allowed_domains = ["m.timesofindia.com"] start_urls = ["http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html"] def parse(self, response): sel = Selector(response) torrent = DmozItem() items=[] links = sel.xpath('//div[@class="gapleftm"]/ul[@class="content"]/li') for ti in sel.xpath("//a[@class='pda']/text()").extract(): yield DmozItem(title=ti) for url in sel.xpath("//a[@class='pda']/@href").extract(): yield DmozItem(link=url) yield Request(url, callback=self.my_parse) def my_parse(self, response): sel = Selector(response) self.log('A response from my_parse just arrived!') for text in sel.xpath("//body/text()").extract(): yield DmozItem(desc=text) pass here i am trying to collect all the urls that are in tag and then calling my callback function but code fails to enter my_parse function. Am I missing something. This is my console log root@yogesh-System-model:~/pythonTest/tutorial# scrapy crawl dmoz -o mypune13.txt 2014-02-06 16:15:01+0530 [scrapy] INFO: Scrapy 0.22.0 started (bot: tutorial) 2014-02-06 16:15:01+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django 2014-02-06 16:15:01+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'FEED_URI': 'mypune13.txt', 'BOT_NAME': 'tutorial'} 2014-02-06 16:15:01+0530 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState 2014-02-06 16:15:02+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 2014-02-06 16:15:02+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 2014-02-06 16:15:02+0530 [scrapy] INFO: Enabled item pipelines: 2014-02-06 16:15:02+0530 [dmoz] INFO: Spider opened 2014-02-06 16:15:02+0530 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2014-02-06 16:15:02+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023 2014-02-06 16:15:02+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Crawled (200) <GET http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> (referer: None) 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Front Page'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times City'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Nation'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Auto Expo 
2014'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Global'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Editorial'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Business'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Sport'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Pune Times'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'NEWS DIGEST'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Cong denied Pranab chance to be PM: Modi'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Mom, daughter badly hurt in mishap at theme park'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'13 Indians now head major global firms,4 studied at St Stephens'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'9.7cr new voters added across India in 5 years'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Exit bond money for AFMC grads hiked up to Rs 30 lakh'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'SC revisiting death sentences, stays 3 more'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Jr college teachers call off HSC exams boycott plan'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Tourists from 180 countries to get visa on arrival now'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'50 of 58 new Rajya Sabha members are crorepatis'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'2G spectrum bids touch Rs 50,000 crore'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Discoms loss may be Tata Powers gain'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Colleges, schools work till last min to give 
hall tickets'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Front Page'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times City'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Nation'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Auto Expo 2014'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Global'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Editorial'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Business'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Times Sport'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'title': u'Pune Times'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Front+Page&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Front+Page&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Filtered offsite request to 'mobiletoi.timesofindia.com': <GET http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Front+Page&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Front+Page&edname=&publabel=TOI> 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Times+City&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Times+City&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Times+Nation&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Times+Nation&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Auto+Expo+2014&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Auto+Expo+2014&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': 
u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Times+Global&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Times+Global&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Editorial&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Editorial&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Times+Business&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Times+Business&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Times+Sport&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Times+Sport&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Pune+Times&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Pune+Times&edname=&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?article=yes&pageid=3&sectid=edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune&edname=&articleid=Ar00300&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?article=yes&pageid=3&sectid=edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune&edname=&articleid=Ar00301&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] DEBUG: Scraped from <200 http://mobiletoi.timesofindia.com/htmldbtoi/TOIPU/20140206/TOIPU_articles__20140206.html> {'link': u'http://mobiletoi.timesofindia.com/mobile.aspx?article=yes&pageid=3&sectid=edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune&edname=&articleid=Ar00302&publabel=TOI'} 2014-02-06 16:15:03+0530 [dmoz] INFO: Closing spider (finished) 2014-02-06 16:15:03+0530 [dmoz] INFO: Stored jsonlines feed (62 items) in: mypune13.txt 2014-02-06 16:15:03+0530 [dmoz] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 279, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 11226, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2014, 2, 6, 10, 45, 3, 542688), 'item_scraped_count': 62, 'log_count/DEBUG': 66, 'log_count/INFO': 8, 'request_depth_max': 1, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2014, 2, 6, 10, 45, 2, 127946)} 2014-02-06 16:15:03+0530 [dmoz] INFO: Spider closed (finished) Answer: Your console log shows 
that your request for `http://mobiletoi.timesofindia.com/mobile.aspx?sect_articles=yes&sectname=Front+Page&edid=&edlabel=TOIPU&mydateHid=06-02-2014&pubname=Times+of+India+-+Pune+-+Front+Page&edname=&publabel=TOI` was filtered:

    Filtered offsite request to 'mobiletoi.timesofindia.com'

Scrapy has an [`OffsiteMiddleware`](http://doc.scrapy.org/en/latest/topics/spider-middleware.html#scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware) on by default:

> This middleware filters out every request whose host names aren't in the
> spider's allowed_domains attribute.

You need to include 'mobiletoi.timesofindia.com' in `allowed_domains`, like this:

    allowed_domains = ["m.timesofindia.com", "mobiletoi.timesofindia.com"]

Otherwise, the `OffsiteMiddleware` spider middleware will receive the requests yielded with `yield Request(url, callback=self.my_parse)`, decide that the domain doesn't match, and discard them, with no callback being called at all.
Python FTP Upload with Multiprocessing - Uploaded Files not complete Question: I was trying to create an FTP upload with multiprocessing, as it is described here in many different ways. The script already uploads the files I choose, but the upload breaks off every time after roughly 90 KB. Does anybody have a hint about what I did wrong? Thanks in advance. Regards Peter

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    import ftplib
    from ftplib import FTP
    from multiprocessing import Process
    import os

    ##### Config for the upload path ####
    Path='c:/'
    #####################################

    def uploadZip(zipName,PathUpload):
        # XML file to upload
        # Upload to data exchange - test
        ftpsportschau= FTP('xxxxx')
        ftpsportschau.login ('xxx', 'xxxx')
        zipDatei = open('%s%s' %(PathUpload,zipName),'r')
        try:
            ftpsportschau.storbinary('STOR %s' % zipName, zipDatei)
        except ftplib.error_perm:
            print "PermError: cannot upload file %s" % zipName
        except ftplib.error_temp:
            print "TempError: cannot upload file %s" % zipName
        zipDatei.close()
        ftpsportschau.quit()
        return

    def UploadAsync(FileListe,PathAsync):
        ''' Function for the upload with parallel processes.
        It does not contain the upload command itself, only the spawning of the processes. '''
        print ' List in Async'+'\n'
        print FileListe
        try:
            for filename in FileListe:
                p = Process(target=uploadZip, args=(filename,PathAsync))
                p.start()
        except:
            print 'An error has occured'

    def Files(PathFiles):
        ''' Function for determining the files '''
        UploadListe=[]
        Files=os.listdir(PathFiles)
        print Files
        print '\n'
        for files in Files:
            if files.endswith('.zip'):
                UploadListe.append(files)
            else:
                continue
        print UploadListe
        print '\n'
        return UploadListe

    if __name__ == "__main__":
        UploadAsync(Files(Path),Path)

Answer: Since you seem to be transferring files in binary format, you should use the `'b'` qualifier in your open:

    zipDatei = open('%s%s' %(PathUpload,zipName),'rb')
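A minimal sketch of the upload function with that change applied (the host and credentials are placeholders, as in the question):

    import ftplib

    def upload_zip(zip_name, path_upload):
        ftp = ftplib.FTP('ftp.example.com')   # hypothetical host
        ftp.login('user', 'password')         # hypothetical credentials
        # 'rb' prevents Windows text mode from translating line endings
        # and from treating a stray 0x1A byte as end-of-file, either of
        # which silently truncates a binary upload.
        with open('%s%s' % (path_upload, zip_name), 'rb') as zip_file:
            ftp.storbinary('STOR %s' % zip_name, zip_file)
        ftp.quit()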
How to call a class object's method if it is not assigned to a variable in python? Question: Please see the code below: how do I call the `GetCurrentSelection()` method of the `Choice()` object? The `Choice()` object is not assigned to a variable.

    import wx

    class ChoiceFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, -1, 'Choice Example', size=(250, 200))
            panel = wx.Panel(self, -1)
            sampleList = ['zero', 'one', 'two', 'three', 'four', 'five',
                          'six', 'seven', 'eight']
            wx.StaticText(panel, -1, "Select one:", (15, 20))
            wx.Choice(panel, -1, (85, 18), choices=sampleList)

    if __name__ == '__main__':
        app = wx.PySimpleApp()
        ChoiceFrame().Show()
        app.MainLoop()

If it was written this way:

    choice_object = wx.Choice(panel, -1, (85, 18), choices=sampleList)

Then I would do:

    choice_object.GetCurrentSelection()

But it has no variable name, so how can I call the `GetCurrentSelection()` method in the original code above?

Answer: Depending on your needs, you could call the `GetCurrentSelection` method when you create the `Choice`:

    wx.Choice(panel, -1, (85, 18), choices=sampleList).GetCurrentSelection()

I don't know why you're not just storing the choice, though. Variables exist for a reason.
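If the selection is needed later, for example from an event handler, the usual pattern is to keep the widget as an attribute. A minimal sketch (shortened list, otherwise the frame from the question):

    import wx

    class ChoiceFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, -1, 'Choice Example', size=(250, 200))
            panel = wx.Panel(self, -1)
            sampleList = ['zero', 'one', 'two', 'three']
            # Keep a reference so other methods can query the widget later.
            self.choice = wx.Choice(panel, -1, (85, 18), choices=sampleList)
            self.choice.Bind(wx.EVT_CHOICE, self.on_choice)

        def on_choice(self, event):
            # Index of the item the user just selected.
            print self.choice.GetCurrentSelection()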
Loop gives correct output if script is run in steps but not when the script runs from scratch Question: My script contains a while loop:

    import numpy as np

    ptf = 200 # profile depth
    dz = 5
    DsD0 = 0.02
    D0 = 0.16 # cm2/sec at 20°C
    Ds= D0 * DsD0
    eps= 0.3
    R= 8.314
    Ptot=101300
    Te = 20
    dt = 120
    modellzeit = 86400*3
    J=modellzeit/dt
    PiA = 0.04
    CA = PiA*1000/Ptot
    respannual = 10 # t C ha-1 a-1
    respmol = respannual/12*10**6/10000/(365*24)
    respvol_SI = respmol * R * (Te+273)/(Ptot*3600)
    respvol = respvol_SI * 100
    I= ptf/dz
    S = np.zeros(40)
    for i in range(40):
        if i <= 4:
            S[i] = respvol/(2*4*dz)
        if i > 4 and i <= 8:
            S[i] = respvol/(4*4*dz)
        if i > 8 and i <= 16:
            S[i] = respvol/(8*4*dz)
    Calt = np.repeat(CA,len(range(int(I+1))))
    Cakt = Calt.copy()
    res_out = range(1,int(J),1)
    Cresult = np.array(Cakt)
    faktor = dt*Ds/(dz*dz*eps)
    timestep=0
    #%%
    while timestep <= J:
        timestep = timestep+1
        for ii in range(int(I)):
            if ii == 0:
                s1 = Calt[ii+1]
                s2 = -3 * Calt[ii]
                s3 = 2 * CA
            elif ii == int(I-1):
                s1 = 0
                s2 = -1 * Calt[ii]
                s3 = Calt[ii-1]
            else:
                s1 = Calt[ii+1]
                s2 = -2 * Calt[ii]
                s3 = Calt[ii-1]
            result = Calt[ii]+S[ii]*dt/eps+faktor*(s1+s2+s3)
            print(result)
            Cakt[ii] = result
        Cresult = np.vstack([Cresult,Cakt])
        Calt = Cakt.copy()

What is interesting: if I run the complete script, print(result) gives me different (and incorrect) values. But if I enter all my constants first and then run the loop part of the code (shown above) on its own, the loop performs well and delivers the output I want. Any idea why this might happen? I am on Python 2.7.5 / Mac OS X 10.9.1 / Spyder 2. Answer: You are using python 2.7.5, so division of integers gives integer results. I suspect that is not what you want. For example, the term `respannual/12` will be 0, so `respmol` is 0. Either change your constants to floating point values (e.g. `respannual = 10.0`), or add `from __future__ import division` at the top of your script.
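A quick illustration of the difference, assuming Python 2.7 (the future import must appear before other statements in the file):

    # demo.py -- minimal sketch of the fix
    from __future__ import division  # must be the first statement

    respannual = 10
    print respannual / 12    # 0.8333..., not 0
    print respannual // 12   # 0, if truncating division is actually wanted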
Skip comment lines of different types in csv.DictReader Question: I have several tab delimited files that I want to read into dicts using csv.DictReader. Each file contains several comment lines starting with '#' or '\t' before the start of the actual data. The number of comment lines varies between files. I've been trying the methods outlined in [this post](http://stackoverflow.com/questions/14158868/python-skip-comment-lines-marked-with-in-csv-dictreader) but cannot seem to get it working. Here is my current code:

    def load_database_snps(inputFile):
        '''This function takes a txt tab delimited input file (in house database) and returns a list of dictionaries for each variant'''
        idStore = [] #empty list for storing variant records
        with open(inputFile, 'r+') as varin:
            idStoreDictgroup = csv.DictReader((row for row in varin if row.startswith('hr', 1, 2)),delimiter='\t') #create a generator; dictionary per snp (row) in the file
            idStoreDictgroup.fieldnames = [field.strip() for field in idStoreDictgroup.fieldnames] #strip whitespace from field names
            print(type(idStoreDictgroup))
            for d in idStoreDictgroup: #iterate over dictionaries in varin_dictgroup
                print(d)
                idStore.append(d) #attach to var_list
        return idStore

Here is an example of an input file:

    ## SM=Sample,AD=Total Allele Depth, DP=Total Depth
    ## het;;; and homo;;; are breakdowns of variant read counts per sample - chr1:10002921 T>G AD=34 het:4;11;7;12 (sum=34)
        Hetereozygous   Homozygous
    Chr Start   End ref |A| |C| |G| |T| HetCount    |A| |C| |G| |T| HomCount    TotalCount  SampleCount
    chr1    10001102    10001102    T   0   0   SM=1;AD=22;DP=38    0   1   0   0   0   0   0   1   138 het:22; homo:-
    chr1    10002921    10002921    T   0   0   SM=4;AD=34;DP=63    0   4   0   0   0   0   0   4   138 het:4;11;7;12;  homo:-

The lines I want to read in all begin with 'Chr' or 'chr'. I think it is not working because I need to iterate over the reader to reformat the field names, which exhausts the generator before the rows can be read into the dictionaries. The error message I get is:

    Traceback (most recent call last):
      File "snp_freq_V1-1_export.py", line 99, in <module>
        snp_check_wrapper(inputargs.snpstocheck, inputargs.snp_database_location)
      File "snp_freq_V1-1_export.py", line 92, in snp_check_wrapper
        snpDatabase = load_database_snps(databaseInputFile) #store database variants in snp_database (a dictionary)
      File "snp_freq_V1-1_export.py", line 53, in load_database_snps
        idStoreDictgroup.fieldnames = [field.strip() for field in idStoreDictgroup.fieldnames] #strip whitespace from field names
    TypeError: 'NoneType' object is not iterable

I have tried doing the inverse of my current code and explicitly excluding rows starting with '#' and '\t'. But this also didn't work and just gave me a blank dictionary. Answer: What you should be able to do is skip all the preceding lines _until_ something with a `chr` starts, such as:

    import csv
    from itertools import dropwhile

    with open('somefile') as fin:
        start = dropwhile(lambda L: not L.lower().lstrip().startswith('chr'), fin)
        for row in csv.DictReader(start, delimiter='\t'):
            # do something
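One caveat: `dropwhile` only skips the leading run of comment lines; once a line starting with 'chr' is seen, everything after it (including any later comment lines) is passed through to the reader. A tiny demonstration with made-up lines:

    from itertools import dropwhile

    lines = ['# comment\n', '\tanother comment\n',
             'Chr\tStart\tEnd\n', 'chr1\t100\t200\n']
    start = dropwhile(lambda L: not L.lower().lstrip().startswith('chr'),
                      lines)
    print(list(start))   # ['Chr\tStart\tEnd\n', 'chr1\t100\t200\n']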
Does diesel support python 3? Question: Interested in the diesel networking library (<http://diesel.io>). Will it run on python 3.3? Answer: It appears Diesel does not support Python 3.3 (or any 3.X version) at this point. Source: <https://github.com/jamwt/diesel/blob/master/setup.py>

    import select, sys, os

    assert sys.version_info >= (2, 6), \
        "Diesel requires python 2.6 (or greater 2.X release)"
Run code in python script on shutdown signal Question: I have a python script that runs in the background on startup. It is started by an entry in a run.sh file which is called from /etc/rc.local. The exact entry is "sudo python /home/pi/run/main.py &". The system is a Raspberry Pi running wheezy. The script is running, no problem so far. If a shutdown command is sent to the system (via console "sudo shutdown -h now") I need the script to not abort right away but to execute some code first. That's what I've got so far:

    #!/usr/bin/env python
    import atexit

    @atexit.register
    def byebye():
        c = "End"
        datei = open("/home/pi/logfile",'a+b')
        datei.write(c + "\n")
        datei.close()

    def main():
        while True:
            ...do anything...

    main()

Right now it seems to just exit the main loop on shutdown. Do I need to use a different way to shut down the system so the signal is transmitted to my script, or did I maybe misunderstand the usage of the "@atexit" method? Any ideas? Thanks Answer: `shutdown` sends the `SIGTERM` signal, which `atexit` does not handle. Nor will context managers, `finally` blocks, etc.

    import signal
    signal.getsignal(signal.SIGTERM)
    Out[64]: 0 #i.e. nothing

Contrast this with, say, ctrl-C:

    signal.getsignal(signal.SIGINT)
    Out[65]: <function signal.default_int_handler> #i.e. something

You can register your `byebye` function with `signal` to run instead of the default action, which simply terminates the interpreter:

    signal.signal(signal.SIGTERM,byebye)

If you do the above you'll need to do two things:

* change the signature of `byebye` to accept the two arguments that `signal` will pass to it.
* you should do something like call `sys.exit()` at the end of your `byebye` function to allow python to gracefully close up shop.

You could alternatively do some combination of `signal` _and_ `atexit`:

    import sys
    signal.signal(signal.SIGTERM, lambda num, frame: sys.exit(0))

Which would drop right into your current code. This ensures the atomicity of your cleanup operation (i.e. `byebye` is guaranteed to be the last I/O operation) at the cost of being a bit clunky.
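Putting that together with the script from the question, a minimal sketch (the log path is taken from the question):

    #!/usr/bin/env python
    import signal
    import sys
    import time

    def byebye(signum, frame):
        # Runs when the process receives SIGTERM, e.g. during shutdown.
        with open("/home/pi/logfile", "a") as datei:
            datei.write("End\n")
        sys.exit(0)  # let Python unwind and run any remaining cleanup

    signal.signal(signal.SIGTERM, byebye)

    def main():
        while True:
            time.sleep(1)  # ...do anything...

    main()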
Python, load and list a .csv in a numpy array Question: I am loading a .csv file with this code:

    import csv
    import numpy as np
    import scipy.spatial

    points = np.array([((int(R), int(G), int(B)),float(X), float(Y), float(Z)) for R, G, B, X, Y, Z in csv.reader(open('XYZcolorlist_D65.csv'))]) # load R,G,B,X,Y,Z coordinates of 'points' in a np.array
    print points

And that works fine. However, if I add this further line, where I am trying to compute a Delaunay triangulation with `scipy` (<http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html>):

    tri = scipy.spatial.Delaunay(points[1, 2, 3]) # do the triangulation

I get this error message:

    Traceback (most recent call last):
      File "C:\Users\gary\Documents\EPSON STUDIES\delaunay.py", line 15, in <module>
        tri = scipy.spatial.Delaunay(points[1, 2, 3])
    IndexError: too many indices

Obviously the syntax `scipy.spatial.Delaunay(points[1, 2, 3])` isn't correct. What is it that I am doing wrong? EDIT: If it's easier to handle, I can also use this line to import (np arrays should all hold the same type of data?):

    points = np.array([(float(R), float(G), float(B), float(X), float(Y), float(Z)) for R, G, B, X, Y, Z in csv.reader(open('XYZcolorlist_D65.csv'))]) # load R,G,B,X,Y,Z coordinates of 'points' in a np.array

Then I would need to skip the first 3 values in each row... Answer: The error is in the slicing of the `numpy` array. To get the coordinates of the points, either version of your code is fine. First version of your code, where the first column of `points` is a tuple of RGB values:

    tri = scipy.spatial.Delaunay(points[:, 1:])

Second version, where the RGB values are flattened, taking up 3 columns, you need to skip three columns:

    tri = scipy.spatial.Delaunay(points[:, 3:])

In fact, you can use `np.loadtxt` for reading in the data (yielding the second version):

    points = np.loadtxt('XYZcolorlist_D65.csv', delimiter=',')
newline in text extraction from pdf Question: I am coding a function for extracting text from a pdf, using the [pyPdf](https://pypi.python.org/pypi/pyPdf) library. Extracting was okay, but I am encountering a couple of problems, like it excluding the newlines. So I found a way to add a newline after each sentence, like this:

    # Iterate pages
    for i in range(0, pdf.getNumPages()):
        # Extract text from page and add to content
        content += pdf.getPage(i).extractText()
        content = content.replace('. ', '. <br />')
        pages += content
    # Collapse whitespace
    content = " ".join(pages.replace(u"\xa0", " ").strip().split())

The problem is that even instances like this:

    1. Apple

became like this:

    1. <br />Apple

Which it shouldn't be. I just want to add a newline at the end of every sentence. Is there a way to check or determine when a sentence ends? Or to check whether it is a numbered item? Answer: A hackish solution is to perform the replacement only if the full stop is not immediately preceded by a digit. Change the line `content = content.replace('. ', '. <br />')` to the following:

    import re
    content = re.sub(r'([^0-9])\. ', r'\1. <br />', content)
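An equivalent variant is a negative lookbehind, which avoids having to capture and re-insert the preceding character:

    import re

    content = "1. Apple pie. Done. "
    content = re.sub(r'(?<![0-9])\. ', '. <br />', content)
    print content   # 1. Apple pie. <br />Done. <br />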
Extract Unicode data from CSV file Question: I have a CSV file like this, encoded in UTF-8:

    # id english_word part_of_speech malayalam_definition
    174569 .net n പുത്തന്‍ കമ്പ്യൂട്ടര്‍ സാങ്കേതികത ഭാഷ
    116102 A bad patch n കുഴപ്പം പിടിച്ച സമയം
    115869 A bed of nails n പ്രയാസപ്പെടുന്ന അവസ്ഥ
    200587 A bed of nails idm ശരശയ്യ
    115768 A bed of roses n സുഖകരമായ അവസ്ഥ
    115767 A bed of roses n പൂമെത്ത
    113832 A bed of thorn n അസുഖകരമായ അവസ്ഥ
    113665 A bed roses n പൂമെത്ത

I have to extract all the Unicode data from the file for rows having the `n` tag.

    import csv
    with open('some.csv', newline='\t', encoding='utf-8') as f:
        reader = csv.reader(f)
        for row in reader:
            print(row)

This is the code I have, but it is not working: it produces no output. Any suggestions? Python 2.7 Answer: You have to parse the file with `csv.reader` and keep only the rows whose part-of-speech column is `n`. First, import the csv package:

    import csv

After that, read the csv file and write the matching rows out:

    with open('mycsv.csv','r') as f:
        with open('n.csv','w') as new_file:
            file_read = csv.reader(f, delimiter='\t')
            writer = csv.writer(new_file, delimiter='\t')
            for row in file_read:
                # column 2 is the part-of-speech tag
                if len(row) > 2 and row[2] == 'n':
                    writer.writerow(row)

The delimiter field can be a tab, a semicolon, a comma, or whatever your file actually uses.
Parsing XML with XPath in Python 3 Question: I have the following xml:

    <document>
      <internal-code code="201">
        <internal-desc>Biscuits Wrapped</internal-desc>
        <top-grouping>Finished</top-grouping>
        <web-category>Biscuits</web-category>
        <web-sub-category>Biscuits (Wrapped)</web-sub-category>
      </internal-code>
      <internal-code code="202">
        <internal-desc>Biscuits Sweet</internal-desc>
        <top-grouping>Finished</top-grouping>
        <web-category>Biscuits</web-category>
        <web-sub-category>Biscuits (Sweets)</web-sub-category>
      </internal-code>
      <internal-code code="221">
        <internal-desc>Biscuits Savoury</internal-desc>
        <top-grouping>Finished</top-grouping>
        <web-category>Biscuits</web-category>
        <web-sub-category>Biscuits For Cheese</web-sub-category>
      </internal-code>
      ....
    </document>

I have loaded it into a tree using this code:

    try:
        groups = etree.parse(PRODUCT_GROUPS_XML_FILEPATH)
        root = groups.getroot()
        internalGroup = root.findall("./internal-code")
        LOG.append("[INFO] product groupings file loaded and parsed ok")
    except Exception as e:
        LOG.append("[ERROR] PRODUCT GROUPINGS XML FILE ACCESS PROBLEM")
        LOG.append("[***TERMINATED***]")
        writelog()
        exit()

I would like to use XPath to find the correct internal-code element and then be able to access the child nodes of that group. So if I am searching for internal-code 221 and want web-category, I would do something like:

    internalGroup.find("internal-code", 221).get("web-category").text

I am not experienced with XML and Python and I have been staring at this for ages. All help very gratefully received. Thanks Answer: According to the [`xml.etree.ElementTree`](http://docs.python.org/3/library/xml.etree.elementtree.html) documentation:

> ## XPath support
>
> This module provides **limited support for XPath expressions** for locating
> elements in a tree. The goal is to support a small subset of the abbreviated
> syntax; a full XPath engine is outside the scope of the module.

Use [`lxml`](http://lxml.de/):

    >>> import lxml.etree as ET
    >>>
    >>> s = '''
    ... <document>
    ...   <internal-code code="201">
    ...     <internal-desc>Biscuits Wrapped</internal-desc>
    ...     <top-grouping>Finished</top-grouping>
    ...     <web-category>Biscuits</web-category>
    ...     <web-sub-category>Biscuits (Wrapped)</web-sub-category>
    ...   </internal-code>
    ...   <internal-code code="202">
    ...     <internal-desc>Biscuits Sweet</internal-desc>
    ...     <top-grouping>Finished</top-grouping>
    ...     <web-category>Biscuits</web-category>
    ...     <web-sub-category>Biscuits (Sweets)</web-sub-category>
    ...   </internal-code>
    ...   <internal-code code="221">
    ...     <internal-desc>Biscuits Savoury</internal-desc>
    ...     <top-grouping>Finished</top-grouping>
    ...     <web-category>Biscuits</web-category>
    ...     <web-sub-category>Biscuits For Cheese</web-sub-category>
    ...   </internal-code>
    ... </document>
    ... '''
    >>>
    >>> root = ET.fromstring(s)
    >>> for text in root.xpath('.//internal-code[@code="221"]/web-category/text()'):
    ...     print(text)
    ...
    Biscuits
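That said, if adding lxml is not an option, the limited XPath subset in the standard library is actually enough for this particular lookup on Python 2.7 / 3.2 and later; something along these lines (the file path is assumed):

    import xml.etree.ElementTree as ET

    root = ET.parse('groups.xml').getroot()  # hypothetical path
    node = root.find('.//internal-code[@code="221"]/web-category')
    if node is not None:
        print(node.text)   # Biscuits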
Unloading a module in Python 3.x Question: It's clear to me that there was no clean solution in Python 2 to unload a module, and this was a known bug that was set to be corrected. The posts [How do I unload (reload) a Python module?](http://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module/487718) and [Remove an imported python module](http://stackoverflow.com/questions/8781257/remove-an-imported-python-module) from 2009 and 2010 confirm this lack of support for unloading a module. I wonder if this was solved in Python 3.x. When I do `import os`, `del os`, `dir()`, the `os` module is not there (at least not visible, usable). Is it gone? Answer: The `sys.modules` dict does still hold a reference to the module:

    >>> import sys
    >>> import six
    >>> del six
    >>> sys.modules["six"]
    <module 'six' from '/usr/lib64/python3.3/site-packages/six.py'>

`sys.modules` still holds a reference, so I don't think you can unload a module in Python 3.3 either.
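If the goal is merely to force a fresh import rather than to free the memory, deleting the `sys.modules` entry is usually enough:

    import sys
    import six

    del sys.modules['six']   # the next "import six" re-executes the module
    import six               # fresh module object

    # Note: old references elsewhere still point at the previous module
    # object, which is why this is a re-import, not a true unload.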
Extract all tables and h4 from html Question: I have an html file from which I want to extract all tables and h4 elements. That is, I want to take only the tables and h4 elements from the file and use them somewhere else. I am using Notepad++ and looking for a python script to do so.

    <html>
    // header
    <body>
    <div>
    <h4></h4>
    <h4></h4>
    <table>
    // some rows with cells here
    </table>
    // maybe some content here
    <table>
    // a form and other stuff
    </table>
    // probably some more text
    </div>
    </body>
    </html>

Thanks Answer: I suggest using the [BeautifulSoup](https://pypi.python.org/pypi/beautifulsoup4) module. You could accomplish what you want by doing:

    from bs4 import BeautifulSoup

    code = open("file.html")
    html = code.read()
    soup = BeautifulSoup(html)

    htags = soup.find_all('h4')
    tabletags = soup.find_all('table')

    for h in htags:
        print h.text
    for table in tabletags:
        print table.text
Python: separating a list by unique values Question: I have the following list of lists:

    xlist = [['instructor','plb','error0992'],['instruction','address','00x0993'],['data','address','017x112']]

I am trying to implement a string algorithm where at one step it needs to separate the above list into several lists. The separation criterion is to first find the index with the least number of unique token values and then separate the list using those unique token values. (Here a token is an element of an inner list.) For example, in the above xlist, the least number of unique tokens resides in the 2nd index => ('plb','address','address'). So I need to break this list into the following two lists:

    list1 = [['instruction','address','00x0993'],['data','address','017x112']]
    list2 = [['instructor','plb','error0992']]

I am new to python. This is my first project. Can anybody suggest a good method? Perhaps a suitable list comprehension? Or a brief explanation of the steps I should follow. Answer: Pure Python, in-memory solution. (For when you have the RAM.)

To get the name sets, I transpose xlist and then form a set of each transposed element, which removes any duplication. mintokenset just finds the set with the smallest number of items. minindex finds which column of the inner lists mintokenset corresponds to. lists is initialised to have enough empty inner lists. The for loop takes that information to split the inner lists appropriately.

    >>> from pprint import pprint as pp
    >>>
    >>> xlist = [['instructor','plb','error0992'],['instruction','address','00x0993'],['data','address','017x112']]
    >>> sets = [set(transposedcolumn) for transposedcolumn in zip(*xlist)]
    >>> pp(sets)
    [{'instructor', 'data', 'instruction'},
     {'plb', 'address'},
     {'00x0993', '017x112', 'error0992'}]
    >>> mintokenset = min(sets, key=lambda x:len(x))
    >>> mintokenset
    {'plb', 'address'}
    >>> minindex = sets.index(mintokenset)
    >>> minindex
    1
    >>> mintokens = sorted(mintokenset)
    >>> mintokens
    ['address', 'plb']
    >>> lists = [[] for _ in mintokenset]
    >>> lists
    [[], []]
    >>> for innerlist in xlist:
            lists[mintokens.index(innerlist[minindex])].append(innerlist)

    >>> pp(lists)
    [[['instruction', 'address', '00x0993'], ['data', 'address', '017x112']],
     [['instructor', 'plb', 'error0992']]]
    >>>

**Following on from the above doodle, for big data**, assume it is stored in a file (one inner list per line, comma separated). The file can be read once and mintokenset and minindex found using a complicated generator expression that should reduce the RAM requirement. The output is similarly stored in as many output files as necessary using another generator expression to read the input file a second time and switch input records to their appropriate output file. Data should stream through with little overall RAM usage.
    from pprint import pprint as pp

    def splitlists(logname):
        with open(logname) as logf:
            #sets = [set(transposedcolumn) for transposedcolumn in zip(*(line.strip().split(',') for line in logf))]
            mintokenset, minindex = \
                min(((set(transposedcolumn), i)
                     for i, transposedcolumn in
                     enumerate(zip(*(line.strip().split(',') for line in logf)))),
                    key=lambda x:len(x[0]))
        mintokens = sorted(mintokenset)
        lists = [open(r'C:\Users\Me\Code\splitlists%03i.dat' % i, 'w')
                 for i in range(len(mintokenset))]
        with open(logname) as logf:
            for innerlist in (line.strip().split(',') for line in logf):
                lists[mintokens.index(innerlist[minindex])].write(','.join(innerlist) + '\n')
        for filehandle in lists:
            filehandle.close()

    if __name__ == '__main__':
        # File splitlists.log has the following input
        '''\
    instructor,plb,error0992
    instruction,address,00x0993
    data,address,017x112'''
        logname = 'splitlists.log'
        splitlists(logname)
        # Creates the following two output files:
        # splitlists000.dat
        '''\
    instruction,address,00x0993
    data,address,017x112'''
        # splitlists001.dat
        '''\
    instructor,plb,error0992'''
Can't use WebRTC DataChannels with Chrome when Firefox initiates the connection Question: I am trying to create a simple webpage using WebRTC DataChannels that sends pings/pongs between browsers. When **Chrome** initiates the connection and then **Chrome** connects, it **works**. When **Firefox** initiates the connection and then **Firefox** connects, it **works**. When **Chrome** initiates the connection and then **Firefox** connects, it **works**. But when **Firefox** initiates the connection and then **Chrome** connects, it **doesn't work**. Chrome never receives data sent by Firefox. I'm using Firefox 26 and Chromium 32, on Archlinux. Here is my JavaScript code: <!DOCTYPE html> <html> <head> <title>WebRTC test</title> <meta charset="utf-8"> </head> <body> <button id="create" disabled>Create data channel</button> <script type="text/javascript"> // DOM var create = document.getElementById('create'); // Compatibility window.RTCPeerConnection = window.RTCPeerConnection || window.mozRTCPeerConnection || window.webkitRTCPeerConnection; window.RTCSessionDescription = window.RTCSessionDescription || window.mozRTCSessionDescription || window.webkitRTCSessionDescription; window.RTCIceCandidate = window.RTCIceCandidate || window.mozRTCIceCandidate || window.webkitRTCIceCandidate; // Create a WebRTC object var rtc = new RTCPeerConnection(null); // Create a data channel var sendChannel = rtc.createDataChannel('pingtest', {reliable: false}); var myMsg = 'ping'; function setRecvChannel(recvChannel) { recvChannel.onmessage = function(event) { if(event.data.indexOf('\x03\x00\x00\x00\x00\x00\x00\x00\x00') === 0) { console.log('-> ' + window.btoa(event.data)); return; // Received channel's name, ignore } console.log('-> ' + event.data); window.setTimeout(function() { console.log('<- ' + myMsg); sendChannel.send(myMsg); }, 500); }; } // Chrome and Firefox sendChannel.onopen = function(event) { setRecvChannel(sendChannel); if(myMsg === 'ping') { console.log('<- ' + myMsg); sendChannel.send(myMsg); } }; // Firefox rtc.ondatachannel = function(event) { setRecvChannel(event.channel); }; // ICE rtc.onicecandidate = function(event) { if(event.candidate) { console.log('<- ' + JSON.stringify(event.candidate)); ws.send(JSON.stringify(event.candidate)); } }; // Signaling channel var ws = new WebSocket('ws://127.0.0.1:49300/'); ws.onopen = function() { create.disabled = false; }; ws.onmessage = function(event) { console.log('-> ' + event.data); var data = JSON.parse(event.data); if(data.sdp) { rtc.setRemoteDescription(new RTCSessionDescription(data)); if(data.type === 'offer') { myMsg = 'pong'; rtc.createAnswer(function(anwser) { rtc.setLocalDescription(anwser, function () { console.log('<- ' + JSON.stringify(anwser)); ws.send(JSON.stringify(anwser)); }); }, console.error); } } else { rtc.addIceCandidate(new RTCIceCandidate(data)); } }; ws.onclose = function() { create.disabled = true; }; // Create an offer create.onclick = function() { rtc.createOffer(function(offer) { rtc.setLocalDescription(offer, function () { offer.sdp = offer.sdp; console.log(offer.sdp); console.log('<- ' + JSON.stringify(offer)); ws.send(JSON.stringify(offer)); }); }, console.error); }; </script> </body> </html> Here is the WebSocket-based signaling server I've created only for test purposes, it simply listens on port 49300 and broadcasts data received from clients to other clients: #!/usr/bin/python #-*- encoding: Utf-8 -*- from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR from string import printable from threading 
import Thread from base64 import b64encode from struct import unpack from hashlib import sha1 PORT = 49300 activeSocks = [] def SignalingChannel(ip, port, sock): print 'Connection from %s:%s' % (ip, port) # Handling the HTTP request try: headers = sock.recv(8184) assert headers.upper().startswith('GET') assert headers.endswith('\r\n\r\n') data = headers.strip().replace('\r', '').split('\n')[1:] headers = {} for header in data: name, value = header.split(':', 1) headers[name.strip().lower()] = value.strip() assert headers['host'] assert 'upgrade' in headers['connection'].lower() assert 'websocket' in headers['upgrade'].lower() assert headers['sec-websocket-version'] == '13' assert len(headers['sec-websocket-key']) == 24 guid = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11' accept = b64encode(sha1(headers['sec-websocket-key'] + guid).digest()) sock.send('HTTP/1.1 101 Switching Protocols\r\n' + 'Connection: Upgrade\r\n' + 'Upgrade: websocket\r\n' + 'Sec-WebSocket-Accept: %s\r\n' % accept + '\r\n') except: try: msg = 'This is a RFC 6455 WebSocket server.\n' sock.send('HTTP/1.1 400 Bad Request\r\n' + 'Connection: Close\r\n' + 'Content-Length: %d\r\n' % len(msg) + 'Content-Type: text/plain; charset=us-ascii\r\n' + 'Sec-WebSocket-Version: 13\r\n' + '\r\n' + msg) except: pass sock.close() print 'Disconnection from %s:%s' % (ip, port) return activeSocks.append(sock) try: data = sock.recv(2) while len(data) == 2: frame = data[0] + chr(ord(data[1]) & 0b01111111) opcode = ord(data[0]) & 0b00001111 mask = ord(data[1]) & 0b10000000 paylen = ord(data[1]) & 0b01111111 if paylen == 126: data = sock.recv(2) frame += data paylen = unpack('>H', data)[0] elif paylen == 127: data = sock.recv(8) frame += data paylen = unpack('>Q', data)[0] if mask: mask = sock.recv(4) data = '' received = True while received and len(data) < paylen: received = sock.recv(paylen - len(data)) data += received if mask: unmasked = '' for i in xrange(len(data)): unmasked += chr(ord(data[i]) ^ ord(mask[i % 4])) else: unmasked = data frame += unmasked if opcode != 8: print '-- From port %d --' % port if all(ord(c) < 127 and c in printable for c in unmasked): print unmasked else: print repr(unmasked) for destSock in activeSocks: if destSock != sock: destSock.send(frame) else: break data = sock.recv(2) except: pass activeSocks.remove(sock) sock.close() print 'Disconnection from %s:%s' % (ip, port) listenSock = socket(AF_INET, SOCK_STREAM) listenSock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) listenSock.bind(('0.0.0.0', PORT)) listenSock.listen(20) print 'Listening on port 49300...' while True: clientSock, (ip, port) = listenSock.accept() Thread(target=SignalingChannel, args=(ip, port, clientSock)).start() To run the code, launch the signaling server, open the webpage in two browser tabs, click the "Create data channel" button and look at the web console. Any idea? Answer: Looking through the Chrome/Firefox bug trackers it looks like this issue has been identified and resolved, but only in [Chrome Canary 33.0.1715.0](https://code.google.com/p/webrtc/issues/detail?id=2279#c17) or higher. If you're unwilling to require the Chrome Canary build mentioned above, you can detect the bad peer combination, and have your 'offer' button signal the other client to make an offer. Pseudocode: socket.onMessage(msg) { if(msg == "request-offer"){ doOffer(); } ... 
} createDataChannelButton.onClick() { if(!canCreateChannelBasedOnBrowser){ socket.send("request-offer"); } else { doOffer(); } } With your example code: <!DOCTYPE html> <html> <head> <title>WebRTC test</title> <meta charset="utf-8"> </head> <body> <button id="create" disabled>Create data channel</button> <script type="text/javascript"> // DOM // CHANGE: Add basic browser detection based on google's adapter.js file. var rtcBrowserVersion = 0; var rtcCanInitiateDataOffer = false; if (navigator.mozGetUserMedia) { rtcBrowserVersion = parseInt(navigator.userAgent.match(/Firefox\/([0-9]+)\./)[1], 10); rtcCanInitiateDataOffer = true; } else if (navigator.webkitGetUserMedia) { // Chrome Canary reports major version 35 for me. Can't find a reliable resource to confirm // canary versions. rtcBrowserVersion = parseInt(navigator.userAgent.match(/Chrom(e|ium)\/([0-9]+)\./)[2], 10); rtcCanInitiateDataOffer = rtcBrowserVersion >= 35; } var create = document.getElementById('create'); // Compatibility window.RTCPeerConnection = window.RTCPeerConnection || window.mozRTCPeerConnection || window.webkitRTCPeerConnection; window.RTCSessionDescription = window.RTCSessionDescription || window.mozRTCSessionDescription || window.webkitRTCSessionDescription; window.RTCIceCandidate = window.RTCIceCandidate || window.mozRTCIceCandidate || window.webkitRTCIceCandidate; // Create a WebRTC object var rtc = new RTCPeerConnection(null); // Create a data channel var sendChannel = rtc.createDataChannel('pingtest', {reliable: false}); var myMsg = 'ping'; function setRecvChannel(recvChannel) { recvChannel.onmessage = function(event) { if(event.data.indexOf('\x03\x00\x00\x00\x00\x00\x00\x00\x00') === 0) { console.log('-> ' + window.btoa(event.data)); return; // Received channel's name, ignore } console.log('-> ' + event.data); window.setTimeout(function() { console.log('<- ' + myMsg); sendChannel.send(myMsg); }, 500); }; } // Chrome and Firefox sendChannel.onopen = function(event) { setRecvChannel(sendChannel); if(myMsg === 'ping') { console.log('<- ' + myMsg); sendChannel.send(myMsg); } }; // Firefox rtc.ondatachannel = function(event) { setRecvChannel(event.channel); }; // ICE rtc.onicecandidate = function(event) { if(event.candidate) { console.log('<- ' + JSON.stringify(event.candidate)); ws.send(JSON.stringify(event.candidate)); } }; // Signaling channel var ws = new WebSocket('ws://127.0.0.1:49300/'); ws.onopen = function() { create.disabled = false; }; ws.onmessage = function(event) { console.log('-> ' + event.data); var data = JSON.parse(event.data); if(data.sdp) { rtc.setRemoteDescription(new RTCSessionDescription(data)); if(data.type === 'offer') { myMsg = 'pong'; rtc.createAnswer(function(anwser) { rtc.setLocalDescription(anwser, function () { console.log('<- ' + JSON.stringify(anwser)); ws.send(JSON.stringify(anwser)); }); }, console.error); } } // CHANGE: Chrome with offer bug asked to initiate the offer. 
          else if(data.initiate === true){
            doOffer();
          }
          else {
            rtc.addIceCandidate(new RTCIceCandidate(data));
          }
        };
        ws.onclose = function() {
          create.disabled = true;
        };

        // Create an offer
        // CHANGE: Create function for offer, so that it may be called from ws.onmessage
        function doOffer(){
          rtc.createOffer(function(offer) {
            rtc.setLocalDescription(offer, function () {
              offer.sdp = offer.sdp;
              console.log(offer.sdp);
              console.log('<- ' + JSON.stringify(offer));
              ws.send(JSON.stringify(offer));
            });
          }, console.error);
        }

        create.onclick = function() {
          // CHANGE: If this client is not able to negotiate a data channel, send a
          // message to the peer asking them to offer the channel.
          if(rtcCanInitiateDataOffer){
            doOffer();
          } else {
            ws.send(JSON.stringify({initiate:true}));
          }
        };

Related Issues:

* WebRTC - [Test SCTP data channel interop with Firefox](https://code.google.com/p/webrtc/issues/detail?id=2279) _Closed as Fixed 11/25/13_
* Chromium - [SCTP Data Channel fails to connect if Chrome offers and FF answers](https://code.google.com/p/webrtc/issues/detail?id=2540#c3) - _Closed as Fixed 11/4/13_

_It is also worth noting that there is currently a [max message buffer size of 16KB](https://code.google.com/p/webrtc/issues/detail?id=2279#c18) when sending data to a Chrome peer. If you are sending large data messages, you will need to break them into 16KB chunks before transmitting them over the data channel._
recursive function not incrementing as expected - python/django Question: I'm creating a tree app using Django models. Basically a user inputs the names of different nodes, and if a node is connected to another, a recursive function adds 1 to the parent node's "nodes" property. As soon as I add nodes that increase a parent's total to >2, it stops incrementing and stays at two. Here's the code:

    class Node(models.Model):
        name = models.CharField(max_length=200)
        parent = models.ForeignKey('self', null=True)
        nodes = models.IntegerField(default=0)

        def __unicode__(self):
            return self.name

The user inputs text. For example, let's say a user enters the following:

    "animal(dog)"
    "dog(golden retriever)" # animal nodes should be 2 (and it is)
    "golden retriever(old yeller)" # animal nodes should now be 3 but it remains 2

This should generate a tree of this structure:

    animal (3 nodes)
    /
    dog (2 nodes)
    |
    golden retriever (1 node)
    |
    old yeller (0 nodes)

Somehow the node_ancestors method functions correctly only at first. Any ideas on what's going wrong in my views?

    def node_ancestors(node):
        ancestor = node.parent
        if ancestor != None:
            ancestor.nodes += 1
            print ancestor, " just added a node. it now has %d nodes. " % (ancestor.nodes)
            node.save()
            node_ancestors(ancestor)

    def index(request):
        nodes = Node.objects.all()
        node_names = [a.name for a in nodes]
        if request.method == 'POST':
            node_string = request.POST.get('get_node')
            print node_string
            index = node_string.find('(')
            parent = node_string[0:index]
            child = node_string[index+1:len(node_string)-1]
            if parent not in moment_names and child not in node_names:
                parent = Node(name=parent, nodes=1)
                parent.save()
                child = Node(name=child, parent=parent)
                child.save()
                print "parent is", parent
                print "child is", child
                print "parent nodes: ", parent.nodes
                print "child nodes: ", child.nodes
            elif parent in node_names and child not in node_names:
                parent_model = nodes.get(name=parent)
                node_ancestors(parent_model) # adds 1 to all nodes superior to parent node
                child = Node(name=child, parent=parent_model)
                child.save()
            elif parent not in node_names and child in node_names:
                parent = Node(name=parent, nodes=child_model.nodes+1)
                parent.save()
                print "parent is", parent
        return render(request, 'nodes_app/index.html')

Answer: For this kind of thing you could use Django MPTT (<http://django-mptt.github.io/django-mptt/>); it is designed to build tree models and has a [lot of built-in methods](http://django-mptt.github.io/django-mptt/models.html) to get/display ancestors/children. Here is a simple example built from the code you gave:

    from django.db import models
    from mptt.models import MPTTModel, TreeForeignKey

    class Species(MPTTModel):
        name = models.CharField(max_length=50, unique=True)
        parent = TreeForeignKey('self', null=True, blank=True, related_name='children')

        class MPTTMeta:
            order_insertion_by = ['name']

    dog = Species()
    dog.name = "Dog"
    dog.save()

    golden = Species()
    golden.name = "Golden Retriever"
    golden.parent = dog
    golden.save()

    old = Species()
    old.name = "Old Yeller"
    old.parent = golden
    old.save()

    print(dog.level)    # 0
    print(golden.level) # 1
    print(old.level)    # 2

    ancestors = old.get_ancestors(include_self=True)
    for species in ancestors:
        print(species.name)
    # Dog
    # Golden Retriever
    # Old Yeller
How do I import another python file provided as command-line-argument? Question: This is what `a.py` looks like:

    import sys

    def test_import(another_python_file):
        import another_python_file as b
        b.run_me()

    if __name__ == '__main__':
        print sys.argv
        test_import(sys.argv[1].strip())

This is what `b.py` looks like:

    def run_me():
        print 'I am script b'

When I run it, I get

    $ python a.py b.py
    ['a.py', 'b.py']
    Traceback (most recent call last):
      File "a.py", line 10, in <module>
        test_import(sys.argv[1].strip())
      File "a.py", line 5, in test_import
        import another_python_file as b
    ImportError: No module named another_python_file

**What I need:** I would expect it to import `b.py` and print `I am script b`. What am I missing? Answer: a.py:

    import os
    import sys

    def test_import(another_python_file):
        b = __import__(another_python_file)
        b.run_me()

    if __name__ == '__main__':
        print sys.argv
        test_import(os.path.splitext(sys.argv[1])[0])

b.py:

    def run_me():
        print 'I am script b'

Running it:

    $ python a.py b.py
    ['a.py', 'b.py']
    I am script b

I was able to do that referring to <http://www.diveintopython.net/functional_programming/dynamic_import.html>
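On Python 2.7 and later the same thing can be written with importlib, which is a little clearer than calling `__import__` directly:

    import importlib
    import os
    import sys

    def test_import(module_name):
        b = importlib.import_module(module_name)
        b.run_me()

    if __name__ == '__main__':
        # 'b.py' -> 'b'; the module must be importable (e.g. on sys.path)
        test_import(os.path.splitext(sys.argv[1])[0])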
Python implementing simple web data storage Question: I am trying to develop a python `PyQt` program that allows users to enter data about personal particulars and review it at a later time for processing purposes. The program will be used by fewer than 5 people at the same time, so I am thinking of using an `Sqlite3` database, as I believe it should be able to cope with that amount of traffic. The framework I have in mind is that the clients will each have their own copy of my python pyqt program on their machine. Whenever they perform any operation that requires a data read/write, the program will connect to the server through the internet and read/write from the `sqlite.db` on the server. Basically, the server will be nothing but remote data storage. Currently, I am able to create the required `GUI` for data input by using various widgets like `QLineEdit`, `QComboBox`, `QTextEdit` and so on. But I have never done network programming before, so I have no idea how to implement a server that stores the sqlite data file for my software. So my questions are: (1) if I have a PC that has a 24/7 internet connection, how do I set it up so that it can act as a server that stores the data file for my software? (2) In what way can/should my program communicate with that server through the internet? Even if you can't give me an exact answer, I would appreciate any information I can look up and study. Any constructive advice will be appreciated. FYI: all the PCs will be running Windows XP SP3 32 bits. Answer: There are different ways for a client to communicate with a server. You can use:

1. [XMLRPC](http://docs.python.org/2/library/xmlrpclib.html) to create an object with methods that are called on the server side (a minimal sketch follows below)
2. HTTP and [REST](https://en.wikipedia.org/wiki/REST) for the server, with the [requests or urllib](http://stackoverflow.com/questions/2018026/should-i-use-urllib-or-urllib2-or-requests) library for the client
3. For the latter you can use flask, bottle, django or other frameworks to create a website that serves the content [(tutorials)](http://python.opentechschool.org/)
4. [Pyro](https://github.com/irmen/Pyro4) to remotely access the objects on the server. Useful if the clients should also communicate with each other.
5. Your own protocol. You will learn a lot and come to value the other options.
6. _The list is not complete_

I suggest that you have a look at XMLRPC if that fits. For number 2 I can say that many APIs use such an HTTP interface (twitter, github, facebook, google) and it is easy for other people to use too.

**Security** is important. I am not an expert. If you send the username and password in plain text, then use SSL to encrypt the connection. If you cannot get SSL to work with python, you can use [stunnel](https://www.stunnel.org/index.html).
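For option 1, a minimal XMLRPC sketch using only the Python 2 standard library; the host, port, database file and table layout here are all assumptions, not part of the question:

    # server.py -- runs on the always-on PC
    import sqlite3
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def add_person(name, phone):
        conn = sqlite3.connect('people.db')
        conn.execute('CREATE TABLE IF NOT EXISTS people (name TEXT, phone TEXT)')
        conn.execute('INSERT INTO people VALUES (?, ?)', (name, phone))
        conn.commit()
        conn.close()
        return True

    def list_people():
        conn = sqlite3.connect('people.db')
        rows = conn.execute('SELECT name, phone FROM people').fetchall()
        conn.close()
        return rows

    server = SimpleXMLRPCServer(('0.0.0.0', 8000))
    server.register_function(add_person)
    server.register_function(list_people)
    server.serve_forever()

And the client side, called from the PyQt application:

    # client.py
    import xmlrpclib

    proxy = xmlrpclib.ServerProxy('http://your-server-address:8000/')  # hypothetical address
    proxy.add_person('Alice', '555-0100')
    print proxy.list_people()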
running series of interactive shell commands in bash/python/perl script Question: Currently, I do the following steps:

a. Grep for the pid of a process and kill it:

    ps -aux | grep foo.bar # process of interest
    kill -9 pid_of_foo.bar # kill the process

b. Start the virtualenv:

    cd {required_folder}
    sudo virtualenv folder/
    cd {folder2}
    source bin/activate

c. Start manage.py in shell mode:

    cd {required folder}
    sudo python manage.py shell

d. In the interactive manage shell, execute the following commands:

    from core import *
    foo.bar.bz.clear.state()
    exit

e. Execute a script:

    /baz/maz/foo

In bash we can write down a series of commands; however, is it possible to run the interactive shell in django using bash and execute commands? I was wondering if the above steps can be scripted. Thanks Answer: You need a script like this one:

    #!/bin/bash

    # kill all foo.bar's instances
    for pid in $(ps -aux | grep foo.bar | grep -v grep | awk '{print $2;}'); do
      kill $pid
    done

    # start virtualenv
    cd {required_folder}
    ...

    # Start the manage.py in shell mode
    cd {required folder}
    cat << EOF | sudo python manage.py shell
    from core import *
    foo.bar.bz.clear.state()
    exit
    EOF

    # Execute a script
    /baz/maz/foo

The key point of the script is the HEREDOC python snippet. Take a look at the example I've just tried in a console:

    [alex@galene ~]$ cat <<EOF_MARK | python -
    > import sys
    > print "Hello, world from python %s" % sys.version
    > exit
    > EOF_MARK
    Hello, world from python 2.7.6 (default, Nov 22 2013, 22:57:56)
    [GCC 4.7.2 20121109 (ALT Linux 4.7.2-alt7)]
    [alex@galene ~]$ _
Getting Exception when setting django with mysql-server Question: First of all I installed django on my Ubuntu machine:

    sudo apt-get install python-django

After that I created my first project using the following command, and a directory named 'mysite' was created:

    django-admin.py startproject mysite

Here, running the following command starts a server, and I checked in the browser that the minimal server that comes with django was working perfectly:

    python manage.py runserver

Then, I installed mysql-server by running this command:

    sudo apt-get install mysql-server

After this I created a database in MySQL named 'django_first', and used the following command to install the python module mysqldb successfully:

    sudo apt-get install python-mysqldb

Up to this point no issue occurred. But after setting all the necessary fields in the settings.py file:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'django_first',
            'USER': 'root',
            'PASSWORD': 'password',
            'HOST': '',
            'PORT': '',
        }
    }

running this command:

    python manage.py runserver

gives the following errors:

    Traceback (most recent call last):
      File "manage.py", line 10, in <module>
        execute_from_command_line(sys.argv)
      File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
        utility.execute()
      File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
        self.execute(*args, **options.__dict__)
      File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 280, in execute
        translation.activate('en-us')
      File "/usr/local/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 130, in activate
        return _trans.activate(language)
      File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 188, in activate
        _active.value = translation(language)
      File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 177, in translation
        default_translation = _fetch(settings.LANGUAGE_CODE)
      File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 159, in _fetch
        app = import_module(appname)
      File "/usr/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
        __import__(name)
      File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 6, in <module>
        from django.contrib.admin.sites import AdminSite, site
      File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/sites.py", line 4, in <module>
        from django.contrib.admin.forms import AdminAuthenticationForm
      File "/usr/local/lib/python2.7/site-packages/django/contrib/admin/forms.py", line 6, in <module>
        from django.contrib.auth.forms import AuthenticationForm
      File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/forms.py", line 17, in <module>
        from django.contrib.auth.models import User
      File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/models.py", line 48, in <module>
        class Permission(models.Model):
      File "/usr/local/lib/python2.7/site-packages/django/db/models/base.py", line 96, in __new__
        new_class.add_to_class('_meta', Options(meta, **kwargs))
      File "/usr/local/lib/python2.7/site-packages/django/db/models/base.py", line 264, in add_to_class
        value.contribute_to_class(cls, name)
      File "/usr/local/lib/python2.7/site-packages/django/db/models/options.py", line 124, in contribute_to_class
        self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
      File "/usr/local/lib/python2.7/site-packages/django/db/__init__.py", line 34, in __getattr__
        return getattr(connections[DEFAULT_DB_ALIAS], item)
      File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 198, in __getitem__
        backend = load_backend(db['ENGINE'])
      File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 113, in load_backend
        return import_module('%s.base' % backend_name)
      File "/usr/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
        __import__(name)
      File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 17, in <module>
        raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
    django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb

Looks to be an issue with the mysqldb module not being installed, but that's not exactly it, as I reran the installation command for it. Any ideas about this? Answer: `MySQLdb` is just the module that connects to MySQL, but it's part of a larger package called `MySQL-Python`. Just do: pip install MySQL-python And it should take care of everything (you probably need to `sudo` that command). In fact, I'd probably suggest you do: pip install MySQL-python --upgrade To assure it installs the latest version. If that doesn't change anything, follow [these instructions](http://edwards.sdsu.edu/labsite/index.php/lab- blog/404-installing-mysqldb-module-for-python-2-7-for-ubuntu-12-04) to make sure you get the latest version and all required dependencies p.s. in general, always prefer `pip` over `apt-get` for installing python packages, since it gets updated a lot more frequently. For example, the `django` version you have installed using the `apt-get` method might very well be a very old release. To fix that, delete the django folder in your site-packages and do: pip install django==1.6.2 #this is the latest version
PyYAML replace dash in keys with underscore Question: I would like to map some configuration parameters from YAML directly onto Python argument names. Just wondering if there is a way, without writing extra code (to modify keys afterwards), to let the YAML parser replace a dash '-' in a key with an underscore '_'. some-parameter: xyz some-other-parameter: 123 Should become, when parsed with PyYAML (or maybe another lib), a dictionary with the values: {'some_parameter': 'xyz', 'some_other_parameter': 123} Then I can pass the dictionary to a function as named parameters: foo(**parsed_data) I know I can iterate through the keys afterwards and modify their values, but I don't want to do that :) Answer: At least for your stated case, you don't need to transform the keys. Given: import pprint def foo(**kwargs): print 'KWARGS:', pprint.pformat(kwargs) If you set: values = { 'some-parameter': 'xyz', 'some-other-parameter': 123, } And then call: foo(**values) You get: KWARGS: {'some-other-parameter': 123, 'some-parameter': 'xyz'} If your goal is actually to call a function like this: def foo(some_parameter=None, some_other_parameter=None): pass Then sure, you would need to map the key names. But you could just do this: foo(**dict((k.replace('-','_'),v) for k,v in values.items()))
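If you really do want the parser itself to hand back underscored keys, so no call site ever has to post-process, one way is to register a custom mapping constructor on a Loader subclass. This is a sketch for flat mappings like the example; `UnderscoreLoader` and `underscore_mapping` are made-up names:

    import yaml

    class UnderscoreLoader(yaml.SafeLoader):
        pass

    def underscore_mapping(loader, node):
        # build the mapping normally, then rewrite dashes in string keys
        mapping = loader.construct_mapping(node)
        return dict((k.replace('-', '_') if isinstance(k, str) else k, v)
                    for k, v in mapping.items())

    UnderscoreLoader.add_constructor(
        yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, underscore_mapping)

    data = yaml.load("some-parameter: xyz\nsome-other-parameter: 123",
                     Loader=UnderscoreLoader)
    # data == {'some_parameter': 'xyz', 'some_other_parameter': 123}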
How do I sign in using urllib and then access webpages using that authority in Python? Question: So, what I want to do is sign in to Wallbase.cc and then get the tags for a NSFW wallpaper (you need to be signed in for that). It seems as if I can sign in fine, but when I try to access the wallpaper page it throws up a 403 error. This is the code I'm using: import urllib2 import urllib import cookielib import re username = 'xxxx' password = 'xxxx' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) urllib2.install_opener(opener) payload = { 'csrf' : '371b3b4bd0d1990048354e2056cd36f20b1d7088', 'ref' : 'aHR0cDovL3dhbGxiYXNlLmNjLw==', 'username' : username, 'password' : password } login_data = urllib.urlencode(payload) req = urllib2.Request('http://wallbase.cc/user/login', login_data) url = "http://wallbase.cc/wallpaper/2098029" #Opens url of each pic usock = urllib2.urlopen(url) data = usock.read() usock.close() Any ideas? Btw, the wallpaper used isn't actually NSFW; it was incorrectly flagged. Answer: You can try the mechanize library: <http://wwwsearch.sourceforge.net/mechanize/> Here is an example: import re import mechanize br = mechanize.Browser() br.open("http://www.example.com/") # follow second link with element text matching regular expression response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1) assert br.viewing_html() print br.title() print response1.geturl() print response1.info() # headers print response1.read() # body br.select_form(name="order") # Browser passes through unknown attributes (including methods) # to the selected HTMLForm. br["cheeses"] = ["mozzarella", "caerphilly"] # (the method here is __setitem__) # Submit current form. Browser calls .close() on the current response on # navigation, so this closes response1 response2 = br.submit() # print currently selected form (don't call .submit() on this, use br.submit()) print br.form response3 = br.back() # back to cheese shop (same data as response1) # the history mechanism returns cached response objects # we can still use the response, even though it was .close()d response3.get_data() # like .seek(0) followed by .read() response4 = br.reload() # fetches from server for form in br.forms(): print form # .links() optionally accepts the keyword args of .follow_/.find_link() for link in br.links(url_regex="python.org"): print link br.follow_link(link) # takes EITHER Link instance OR keyword args br.back()
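Before switching libraries, note a likely bug in the original snippet: the login `Request` is built but never opened, so no session cookie ever lands in the cookie jar (and the hard-coded `csrf` token will go stale, too). A minimal sketch of the missing step, using the question's own variables:

    # actually send the login request so the CookieJar picks up the session
    login_resp = urllib2.urlopen(req)
    login_resp.read()
    login_resp.close()

    # only now fetch the protected page through the same (installed) opener
    usock = urllib2.urlopen("http://wallbase.cc/wallpaper/2098029")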
Combing 2 python Lists and sorting by date while preserving source Question: import datetime dic1 = [datetime.datetime(2014, 2, 4, 17, 48, 4), datetime.datetime(2014, 2, 4, 17, 48, 4), datetime.datetime(2014, 2, 4, 17, 58, 18), datetime.datetime(2014, 2, 4, 17, 58, 18), datetime.datetime(2014, 2, 5, 1, 8, 13), datetime.datetime(2014, 2, 5, 1, 8, 13), datetime.datetime(2014, 2, 5, 1, 8, 45), datetime.datetime(2014, 2, 5, 1, 8, 45), datetime.datetime(2014, 2, 5, 15, 40, 54), datetime.datetime(2014 , 2, 5, 15, 40, 54), datetime.datetime(2014, 2, 5, 15, 49, 41)] dic2 = [datetime.datetime(2014, 2, 5, 15, 49, 41), datetime.datetime(2014, 2, 5, 17, 43, 26), datetime.datetime(2014, 2, 5, 17, 43, 26), datetime.datetime(2014, 2, 5, 22, 36), datetime.datetime(2014, 2, 5, 22, 36), datetime.datetime(2014, 2, 6, 15, 26, 54), datetime.datetime(2014, 2, 6, 15, 26, 54), datetime.datetime(2014, 2, 6, 21, 19, 42), datetime.datetime(2014, 2, 6, 21, 19, 42), datetime.datetime(2014, 2, 7, 0, 9, 3), datetime.datetime(2014, 2, 7, 0, 9, 3), datetime.datetime(2014, 2, 7, 16, 15, 11), datetime.datetime(2014, 2, 7, 16, 15, 11), datetime.datetime(2014, 2, 7, 16, 33, 33)] for i in dic1: print i, " source is dic1" print "--" for i in dic2: print i, " source is dic2" This outputs data like this: 2014-02-04 17:48:04 source is dic1 2014-02-04 17:48:04 source is dic1 2014-02-04 17:58:18 source is dic1 2014-02-04 17:58:18 source is dic1 2014-02-05 01:08:13 source is dic1 2014-02-05 01:08:13 source is dic1 2014-02-05 01:08:45 source is dic1 2014-02-05 01:08:45 source is dic1 2014-02-05 15:40:54 source is dic1 2014-02-05 15:40:54 source is dic1 2014-02-05 15:49:41 source is dic1 2014-02-05 15:49:41 source is dic2 2014-02-05 17:43:26 source is dic2 2014-02-05 17:43:26 source is dic2 2014-02-05 22:36:00 source is dic2 2014-02-05 22:36:00 source is dic2 2014-02-06 15:26:54 source is dic2 2014-02-06 15:26:54 source is dic2 2014-02-06 21:19:42 source is dic2 2014-02-06 21:19:42 source is dic2 2014-02-07 00:09:03 source is dic2 2014-02-07 00:09:03 source is dic2 2014-02-07 16:15:11 source is dic2 2014-02-07 16:15:11 source is dic2 2014-02-07 16:33:33 source is dic2 **What I am trying to do is combine the 2 lists in chronological order while preserving the source (Like below). Any way to do this?** 2014-02-07 16:15:11 source is dic1 2014-02-07 16:33:33 source is dic2 2014-02-07 18:09:03 source is dic1 2014-02-07 20:15:11 source is dic1 Answer: You'd produce tuples with `(datetime, source)` and merge the two lists; no need to use sorting if you do your merging intelligently. Here is a iterable merger I wrote for a [different answer](http://stackoverflow.com/questions/14465154/sorting-text-file-by- using-python/14465236#14465236); it can be made more efficient still using `heapq` but this one is more readable: import operator def mergeiter(*iterables, **kwargs): """Given a set of sorted iterables, yield the next value in merged order Takes an optional `key` callable to compare values by. 
""" iterables = [iter(it) for it in iterables] iterables = {i: [next(it), i, it] for i, it in enumerate(iterables)} if 'key' not in kwargs: key = operator.itemgetter(0) else: key = lambda item, key=kwargs['key']: key(item[0]) while True: value, i, it = min(iterables.values(), key=key) yield value try: iterables[i][0] = next(it) except StopIteration: del iterables[i] if not iterables: raise You'd use it like this: source1 = ((dt, 'source is dic1') for dt in dic1) source2 = ((dt, 'source is dic2') for dt in dic2) for dt, source in mergeiter(source1, source2): print dt, source The `source1` and `source2` inputs are generator expressions; they only produce values as they are iterated over. By looping over `mergeiter()` values from either generator are produced, in order, with their source attached. This is also _very_ memory efficient; no copies are made of the input lists, only enough data is kept in memory to determine the next value to output. For your sample data, this produces: >>> source1 = ((dt, 'source is dic1') for dt in dic1) >>> source2 = ((dt, 'source is dic2') for dt in dic2) >>> for dt, source in mergeiter(source1, source2): ... print dt, source ... 2014-02-04 17:48:04 source is dic1 2014-02-04 17:48:04 source is dic1 2014-02-04 17:58:18 source is dic1 2014-02-04 17:58:18 source is dic1 2014-02-05 01:08:13 source is dic1 2014-02-05 01:08:13 source is dic1 2014-02-05 01:08:45 source is dic1 2014-02-05 01:08:45 source is dic1 2014-02-05 15:40:54 source is dic1 2014-02-05 15:40:54 source is dic1 2014-02-05 15:49:41 source is dic1 2014-02-05 15:49:41 source is dic2 2014-02-05 17:43:26 source is dic2 2014-02-05 17:43:26 source is dic2 2014-02-05 22:36:00 source is dic2 2014-02-05 22:36:00 source is dic2 2014-02-06 15:26:54 source is dic2 2014-02-06 15:26:54 source is dic2 2014-02-06 21:19:42 source is dic2 2014-02-06 21:19:42 source is dic2 2014-02-07 00:09:03 source is dic2 2014-02-07 00:09:03 source is dic2 2014-02-07 16:15:11 source is dic2 2014-02-07 16:15:11 source is dic2 2014-02-07 16:33:33 source is dic2 Unfortunately, your sample input data uses two sources that do not overlap in their timestamp ranges. Your output sample does use sources that'd mix. Using those as input would look like: >>> dic1 = [datetime.datetime(2014, 2, 7, 16, 15, 11), datetime.datetime(2014, 2, 7, 18, 9, 3), datetime.datetime(2014, 2, 7, 20, 15, 11)] >>> dic2 = [datetime.datetime(2014, 2, 7, 16, 33, 33)] >>> source1 = ((dt, 'source is dic1') for dt in dic1) >>> source2 = ((dt, 'source is dic2') for dt in dic2) >>> for dt, source in mergeiter(source1, source2): ... print dt, source ... 2014-02-07 16:15:11 source is dic1 2014-02-07 16:33:33 source is dic2 2014-02-07 18:09:03 source is dic1 2014-02-07 20:15:11 source is dic1
Simple PyQt demo from book doesn't work Question: I'm brand new to GUI programming under Python and just got the book "Rapid GUI Programming with Python and QT" by Summerfield. The very first simple example ("pop-up alert in 25 lines") on page 112 works, but my attempt to exactly replicate the second example ("an expression evaluator in 30 lines") on page 116 produces only a blank window, with no visible fields for either entry or output and not even a window title. This is under Mac OS X 10.8.5 using the latest Enthought Canopy 64-bit Python installation (1.2.0.1610) The complete contents of PyQtdemo.pyw is from __future__ import division import sys from math import * from PyQt4.QtCore import * from PyQt4.QtGui import * class Form(QDialog): def _init__(self, parent=None): super(Form,self).__init__(parent) self.browser = QTextBrowser() self.lineedit = QLineEdit("Type an expression and press Enter") self.lineedit.selectAll() layout = QVBoxLayout() layout.addWidget(self.browser) layout.addWidget(self.lineedit) self.setLayout(layout) self.lineedit.setFocus() self.connect(self.lineedit, SIGNAL("returnPressed()"), self.updateUi) self.setWindowTitle("Calculate") def updateUi(self): try: text = unicode(self.lineedit.text()) self.browser.append("%s = <b>%s</b>" % (text, eval(text))) except: self.browser.append("<font color=red>%s is invalid!</font>" % text) app = QApplication(sys.argv) print dir(app) form = Form() form.show() app.exec_() It seems to me there are only the following possibilities: 1. there's a typo in my code that I've overlooked; 2. there's something wrong with how I'm invoking the script (e.g., "python PyQtdemo.pyw"); 3. there's something wrong with my PyQt 4.10.3-1 installation; 4. there's an error in the book. Answer: It's a typo in your code. The `Form.__init__` method is missing an initial underscore, and so it never gets called. (PS: This also explains why the incorrect indentation of the `updateUi` method doesn't raise an `AttributeError` when it's referenced in `self.connect`).
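For reference, the corrected constructor line (note the two leading underscores) looks like this:

    def __init__(self, parent=None):
        super(Form, self).__init__(parent)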
How to fix the AttributeError: 'str' object has no attribute '_radius'? Question: I have this Python code which seems very straightforward, but when I try to run it I get the error above. You can view the full error message below too. What am I doing wrong? Thank you.

    import math

    class Circle2D(object):
        def __init__(self, x = 0, y = 0, radius = 0):
            self._x = x
            self._y = y
            self._radius = radius

        def __str__(self):
            return "Circle with center (" + str(self._x) + ", " + str(self._y) + ")"

        def getX(self):
            return self._x

        def getY(self):
            return self._y

        def getArea(self):
            return (math.pi * self._radius**2)

        def getPerimeter(self):
            return (math.pi * 2 * self._radius)

        def containsPoint(self, x, y):
            if (((x - self._x)**2 + (y - self._y)**2) < self._radius**2):
                return True
            else:
                return False

        def contains(self, second):
            distance = math.sqrt((self._x - second._x)**2 + (self._y - second._y)**2)
            if ((second._radius + distance) <= self._radius):
                return True
            else:
                return False

        def overlaps(self, second):
            distance = math.sqrt((self._y - second._y)**2 + (self._x - second._x)**2)
            if (distance <= (self._radius + second._radius)):
                return True
            else:
                return False

        def __contains__(self, anotherCircle):
            distance = math.sqrt((self._x - anotherCircle._x)**2 + (self._y - anotherCircle._y)**2)
            if(self._radius >= (anotherCircle._radius + distance)):
                return True
            else:
                return False

        def __cmp__(self, anotherCircle):
            if self._radius > anotherCircle._radius:
                return 1
            elif self._radius < anotherCircle._radius:
                return -1
            else:
                return 0

        def __eq__(self, anotherCircle):
            if self._radius == anotherCircle._radius:
                return True
            else:
                return False

        def __ne__(self, anotherCircle):
            if self._radius == anotherCircle._radius:
                return False
            else:
                return True

When I run it, after a few steps the shell shows:

    Traceback (most recent call last):
      File "C:\Users\wxwdd_000\Desktop\HW_2.py", line 124, in <module>
        main()
      File "C:\Users\wxwdd_000\Desktop\HW_2.py", line 121, in main
        print 'c1 == "Hello"?', c1 == "Hello"
      File "C:\Users\wxwdd_000\Desktop\HW_2.py", line 57, in __eq__
        if self._radius == anotherCircle._radius:
    AttributeError: 'str' object has no attribute '_radius'

How can I fix the code? Answer: `Circle2D.__eq__` assumes `anotherCircle` is a `Circle2D` instance, but you're passing a str object. To handle that, you need to check the instance type:

    def __eq__(self, anotherCircle):
        return isinstance(anotherCircle, Circle2D) and \
               self._radius == anotherCircle._radius
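A slightly more robust variant, a sketch reflecting common Python practice rather than part of the original answer: return `NotImplemented` for foreign types, so Python can fall back to its default comparison instead of silently answering False, and derive `__ne__` from `__eq__`:

    def __eq__(self, other):
        # signal "I don't know how to compare" rather than deciding for str etc.
        if not isinstance(other, Circle2D):
            return NotImplemented
        return self._radius == other._radius

    def __ne__(self, other):
        result = self.__eq__(other)
        if result is NotImplemented:
            return result
        return not result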
Python FTP download 550 error Question: I've written an ftp crawler to download specific files. It works up until it finds the specific file it wants to download, and then it throws this error:

    ftplib.error_perm: 550

The file exists in my download folder, but the size of the file is 0 kb. Do I need to convert something in order to get it to download? I can access the ftp manually and download the file without any problems, so I don't think it's the login part (unless there are different ways of logging in??) Here's my code:

    import ftplib
    import re
    import os

    class Reader:
        def __init__(self):
            self.data = ""
        def __call__(self, s):
            self.data += s + "\n"

    ftp = ftplib.FTP("my_ftp_server")
    ftp.login()

    r = Reader()
    ftp.dir(r)

    def get_file_list(folder):
        r = Reader()
        ftp.dir(folder, r)
        print ("Reading folder", folder)
        global tpe
        global name
        for l in r.data.split("\n"):
            if len(l) > 0:
                vars = re.split("[ ]*", l)
                tpe = vars[2]
                name = vars[3]
                if tpe == "<DIR>":
                    get_file_list( folder + "/" + name )
                else:
                    print (folder + name)
                    for name in folder:
                        if vars[3].endswith(('501.zip','551.zip')):
                            if os.path.exists('C:\\download\\' + vars[3]) == False:
                                fhandle = open(os.path.join('C:\\download\\', vars[3]), 'wb')
                                print ('Getting ' + vars[3])
                                ftp.retrbinary('RETR ' + vars[3], fhandle.write)
                                fhandle.close()
                            elif os.path.exists(('C:\\download\\' + vars[3])) == True:
                                print ('File ', vars[3], ' Already Exists, Skipping Download')
        print("-"*30)

    print ("Fetching folders...")
    get_file_list("")

Answer: Your code is probably OK. FTP error 550 is caused by a permission issue on the server side. This error means 'Requested action not taken. File unavailable (e.g., file not found, no access).', as you can find out [here on Wikipedia](http://en.wikipedia.org/wiki/List_of_FTP_server_return_codes) If you expect to have access to it, you should contact the sysadmin to rectify the file permission.
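One concrete thing to check in this particular script (an observation about the code, not a guaranteed diagnosis): `RETR` is issued with the bare file name (`vars[3]`) while the session's working directory is never changed, so for a file listed inside a nested folder the server may report 550 simply because the name doesn't resolve. Passing the folder-qualified path is a cheap experiment:

    # sketch: qualify the remote name with the folder it was listed in
    remote_path = folder + "/" + name
    ftp.retrbinary("RETR " + remote_path, fhandle.write)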
python how to run script in folder Question: This is my python path: PYTHONPATH = D:\PythonPath In the `PythonPath` folder I have a `MyTests` folder that contains a `Script.py`. Also in the `PythonPath` folder I have a `ScrapyingProject` folder. Inside `Script.py` I do this: from ScrapyingProject.ScrapyingProject.spiders.XXXSpider import XXXSpider I got this exception: ImportError: No module named ScrapyingProjectScrapyingProject.spiders.XXXSpider ### Edit: the XXXSpider is in this location: D:\PythonPath\ScrapyingProject2\ScrapyingProject2\spiders.py Answer: Take a look at this to read more about Python modules and packages: <http://docs.python.org/2/tutorial/modules.html> Turn your script-containing folder into a Python package by adding an `__init__.py` file to it. So, in your case, the directory structure should resemble this: PYTHONPATH - ScrapyingProject - __init__.py - script.py Now, in this scheme, `ScrapyingProject` becomes your Python package. Any `.py` file inside the folder becomes a Python module. You can import a Python module by its dot-expanded path starting from `PYTHONPATH`. Something like: `from ScrapyingProject.script import XXXSpider` The same logic can be extended by nesting multiple packages inside each other. A nested package, for example, looks like: PYTHONPATH - ScrapyingProject2 - __init__.py - ScrapyingProject2 - __init__.py - script.py Now, a package-nested script.py can be imported as: from ScrapyingProject2.ScrapyingProject2 import script Or even: from ScrapyingProject2.ScrapyingProject2.script import XXXSpider (Assuming you have defined `class XXXSpider` inside `script.py`)
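Applied to the actual path from the question's edit, and assuming an `__init__.py` sits in both `ScrapyingProject2` folders so they count as packages, `spiders.py` becomes the module and the import would look like:

    from ScrapyingProject2.ScrapyingProject2.spiders import XXXSpider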
Python: make a list generator JSON serializable Question: How can I concat a list of JSON files into a huge JSON array? I've 5000 files and 550 000 list items. My first try was to use [jq](http://stedolan.github.io/jq/), but it looks like jq -s is not optimized for a large input. jq -s -r '[.[][]]' *.js This command works, but takes way too long to complete and I really would like to solve this with Python. Here is my current code: def concatFiles(outName, inFileNames): def listGenerator(): for inName in inFileNames: with open(inName, 'r') as f: for item in json.load(f): yield item with open(outName, 'w') as f: json.dump(listGenerator(), f) I'm getting: TypeError: <generator object listGenerator at 0x7f94dc2eb3c0> is not JSON serializable Any attempt to load all files into RAM will trigger the OOM-killer of Linux. Do you have any ideas? Answer: You should derive from `list` and override the `__iter__` method. import json def gen(): yield 20 yield 30 yield 40 class StreamArray(list): def __iter__(self): return gen() # report a non-zero length so json doesn't short-circuit to '[]' def __len__(self): return 1 a = [1,2,3] b = StreamArray() print(json.dumps([1,a,b])) Result is `[1, [1, 2, 3], [20, 30, 40]]`.
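An alternative in the spirit of the original concatFiles that sidesteps the serializability question entirely: write the surrounding brackets and commas yourself and json.dump one item at a time, so at most one input file is parsed in memory at once. A sketch:

    import json

    def concatFiles(outName, inFileNames):
        with open(outName, 'w') as out:
            out.write('[')
            first = True
            for inName in inFileNames:
                with open(inName, 'r') as f:
                    for item in json.load(f):
                        if not first:
                            out.write(',')
                        json.dump(item, out)   # serialize one element at a time
                        first = False
            out.write(']')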
Generating random strings with few characters Question: I'd like to generate unique random strings with Python, so I'm leveraging [uuid4](http://docs.python.org/2/library/uuid.html#uuid.uuid4). from uuid import uuid4 def generate_random_string(): return uuid4().hex The problem is that it generates strings with too many characters. One possible solution that I came up with is to convert the string from base 16 to base 62 (digits + lowercase/uppercase letters). It works, but the result still has too many characters (22 to be more precise). Something between 6 and 10 chars would be great, but I'm afraid they might not be unique some time in the future. Is there any way to overcome this? Answer: This gives control over the length of the generated string, however, as @Keven mentioned in the comments, if you want less chance of collision in the future, you need to use more characters. Though it seems quite unlikely. from random import sample from string import digits, ascii_letters chars = digits + ascii_letters #'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' gen_str = sample(chars, 10) **example output:** ['2', '8', 'G', 'o', 'U', 'e', 'K', 'f', 'w', '7'] >>> ''.join(gen_str) '28GoUeKfw7' **info on[random module](http://docs.python.org/2/library/random.html)**
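Worth knowing before adopting this: `random.sample` draws without replacement, so no character ever repeats within one string, which slightly shrinks the space of possible outputs. If repeats are fine (and they usually are for identifiers), drawing each position independently is the more common pattern. A sketch:

    from random import choice
    from string import digits, ascii_letters

    chars = digits + ascii_letters
    gen_str = ''.join(choice(chars) for _ in range(10))  # repeats allowed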
Script not iterating over multiple lines with count Question: I have a script where Python should take each line at a time and do lots of stuff (alignment and co.). So I tried to use a count in order to iterate over every line in my input file. However, when I run it, it only uses the last line of the input file and runs the rest with it until the end. So the rest of the script is alright, but the iteration is not. As a test, I tried with only 4 lines; this is the iterating part of the script:

    for line in open(sys.argv[1]):
        count+=1
        if count < 4 :
            continue
        elif count > 4 :
            break

I tried to write a test script to see if it does run every line:

    count = 0
    file = open('mclOutput2', 'r')
    while True:
        count+=1
        if count < 4:
            print file.readlines()
        elif count > 4 :
            break

And this is the output I get:

    ['mono|comp78360_c0_seq1\tpoly|comp71317_c0_seq1\tturc|comp70178_c0_seq1\tturc|comp19023_c0_seq1\n', 'mono|comp78395_c0_seq1\trubr|comp23732_c0_seq1\trugi|comp32227_c0_seq1\tsulc|comp11641_c0_seq1\n', 'mono|comp80301_c0_seq1\tnegl|comp30782_c0_seq1\tphar|comp29363_c0_seq1\tpoly|comp53026_c0_seq2\n', 'mono|comp80554_c0_seq1\tnegl|comp27459_c0_seq1\tpoly|comp57863_c0_seq2\trugi|comp11691_c0_seq1\n']
    []
    []

I am not really sure how to fix it; any ideas what I am doing wrong? Answer: Better code:

    from itertools import islice

    def skip_lines(inf, n):
        list(islice(inf, n))

    with open(sys.argv[1]) as inf:
        skip_lines(inf, 4)
        for count,line in enumerate(inf, 4):
            print("do your stuff here")

(As for the test script: `file.readlines()` consumes the whole file on the first call, so every later call returns an empty list, which is exactly what your output shows.)

**Edit:** Looking at your data (quoted in your .readlines output), you want something like

    GET_LINES = 4

    with open(sys.argv[1]) as inf:
        for count,line in zip(range(1, GET_LINES+1), inf):
            data = [pairs.split('|') for pairs in line.strip().split('\t')]
            print("{:>3d}: {}".format(count, data))

which gives

      1: [['mono', 'comp78360_c0_seq1'], ['poly', 'comp71317_c0_seq1'], ['turc', 'comp70178_c0_seq1'], ['turc', 'comp19023_c0_seq1']]
      2: [['mono', 'comp78395_c0_seq1'], ['rubr', 'comp23732_c0_seq1'], ['rugi', 'comp32227_c0_seq1'], ['sulc', 'comp11641_c0_seq1']]
      3: [['mono', 'comp80301_c0_seq1'], ['negl', 'comp30782_c0_seq1'], ['phar', 'comp29363_c0_seq1'], ['poly', 'comp53026_c0_seq2']]
      4: [['mono', 'comp80554_c0_seq1'], ['negl', 'comp27459_c0_seq1'], ['poly', 'comp57863_c0_seq2'], ['rugi', 'comp11691_c0_seq1']]
How would you make this script run faster? Question:

    import urllib2
    import time

    def hunt(url, start="<blockquote>", end="</blockquote>"):
        while 1:
            x = urllib2.urlopen(url)
            y = x.read()
            print y[y.find(start):y.find(end)]
            time.sleep(1)

I'm trying to get constant updates of a single element on a web page, including a time interval to avoid getting banned by the server. It doesn't have to be Python, btw. Answer: Let's try an experiment to compare the speed of `str.find()` to `re.search()`:

    import timeit

    setup = '''
    import urllib2
    import re

    start = "<body>"
    end = "</body>"
    url = 'http://www.stackoverflow.com'
    req = urllib2.urlopen(url)
    res = req.read()
    regex = re.compile('%s.+?%s' % (start, end))
    '''

    timeit.timeit('''res[res.find(start):res.find(end)]''', setup = setup, number = 1000)
    timeit.timeit('''res[res.find(start):res.rfind(end)]''', setup = setup, number = 1000)
    timeit.timeit('''regex.search(res)''', setup = setup, number = 1000)

With this we get:

    0.16357661195633
    0.08454644330907968
    0.2768974693601649

So it looks like `str.find()` has a pretty decent speed, but if you know your end quote is going to be closer to the end than the beginning you can speed things up with `str.rfind()`. Another thing you can do is to use multiple threads. Start one thread that constantly fetches URLs and puts them in a Queue, then have another thread process the Queue. This way, when the first thread sleeps while waiting on IO, the second thread will be processing the string from the previous URL. Something roughly along these lines:

    import Queue
    import threading
    import urllib2

    q = Queue.Queue()
    results = []
    url = 'http://www.google.com/'
    start = '<body>'
    end = '</body>'

    def get_urls():
        while 1:
            req = urllib2.urlopen(url)
            res = req.read()
            print "putting data len", len(res)
            q.put(res)

    def process_url():
        while 1:  # keep consuming; get_urls() produces forever
            url_data = q.get()
            result = url_data[url_data.find(start):url_data.find(end)]
            results.append(result)
            q.task_done()

    putter_thread = threading.Thread(target = get_urls)
    getter_thread = threading.Thread(target = process_url)

    putter_thread.start()
    getter_thread.start()
Alternative way of connecting to PIL library? Question: Usually the library PIL is imported as follows:

    from PIL import ImageTk, Image

I would like to import it this way:

    import PIL

but my version does not work. Here's the code:

    import os, sys
    import tkinter
    import PIL

    main = tkinter.Tk()

    catalogImg1 = 'imgs'
    nameImg1 = 'n.jpg'
    pathImg1 = os.path.join(catalogImg1, nameImg1)
    openImg = PIL.Image.open(pathImg1)
    renderImg = PIL.ImageTk.PhotoImage(openImg)
    tkinter.Label(main, image=renderImg).pack()
    main.mainloop()

The error message is:

> Traceback (most recent call last):
> File "C:\Python33\projects\PIL_IMAGETK\ImageTK_photoimage - копия.py", line 11, in
> openImg = PIL.Image.open(pathImg1)
> AttributeError: 'module' object has no attribute 'Image'

Answer: Importing a package (`PIL`) does not automatically import subpackages or submodules (`PIL.Image`, `PIL.ImageTk`), unless the package itself does it. Explicitly import the submodules. Replace the following line:

    import PIL

with:

    import PIL.Image
    import PIL.ImageTk
String Subsequence Kernel and SVM using Python Question: How can I use the Subsequence String Kernel (SSK) [Lodhi 2002] to train an SVM (Support Vector Machine) in Python? Answer: I have come to a solution using the Shogun Library. You have to install it from the commit [0891f5a38bcb](https://code.google.com/p/shogun-toolbox/source/detail?r=0891f5a38bcb927d3c2349f1d3a006b975893d11) as later revisions would mistakenly remove the needed classes. This is a working example:

    import numpy
    from shogun.Features import *
    from shogun.Kernel import *
    from shogun.Classifier import *
    from shogun.Evaluation import *
    from modshogun import StringCharFeatures, RAWBYTE
    from shogun.Kernel import SSKStringKernel

    strings = ['cat', 'doom', 'car', 'boom']
    test = ['bat', 'soon']

    train_labels = numpy.array([1, -1, 1, -1])
    test_labels = numpy.array([1, -1])

    features = StringCharFeatures(strings, RAWBYTE)
    test_features = StringCharFeatures(test, RAWBYTE)

    # 1 is n and 0.5 is lambda as described in Lodhi 2002
    sk = SSKStringKernel(features, features, 1, 0.5)

    # Train the Support Vector Machine
    labels = BinaryLabels(train_labels)
    C = 1.0
    svm = LibSVM(C, sk, labels)
    svm.train()

    # Prediction
    predicted_labels = svm.apply(test_features).get_labels()
    print predicted_labels
pro = pro*(x[ip] - x[ir]) giving tuple index out of range error new to python Question: This is the code; please help, I'm getting a **tuple index out of range** error on line 19:

    import math
    n = 7
    x = 0.5,1.2,2.1,2.9,3.6,4.5,5.7
    y = 3.2,5.2,9.3,14.6,20.5,30.1,45.2
    xx = 3.4
    yy = y[0]
    fact = 1
    for i in range(0,n):
        fact = fact*(xx - x[i])
        s = 0.0
        i1 = i + 2
        for ip in range(0,i1):
            pro = 1.0
            for ir in range(0,i1):
                if (ir == ip):
                    continue
                pro = pro*(x[ip] - x[ir])
            s = s + y[ip]/pro
        yy = yy + s*fact
    print "x=%5.2f y=%5.2f" %(xx,yy)

Answer: You're setting `i1 = i + 2` in your loop. Since `i` runs from 0 to n-1, `i1` climbs to n+1, so the inner loops end up indexing `x[ip]` with values past the end of the tuple. If you put a `print i1` before the `for ip` loop, you can see exactly where it goes wrong.
BeautifulSoup Finding Subdirectories Question: I'm trying to figure out how to find sub-directories on a webpage with BeautifulSoup in Python. I have an idea of how I would do it. This is what I had in mind:

    from bs4 import BeautifulSoup

    html = '''<a href="/images/pic.png">images</a>
    <a href="google.com">google</a>'''

    soup = BeautifulSoup(html)
    links = soup.find_all('a', href=True)
    for link in links:
        print link['href']

The above would print back all links on a page, though. How would I get it to print back just sub-directories, like "/images/pic.png" in the example? Although I want to use BeautifulSoup, any other module would be fine. Answer: Add an `if` condition on `link['href']`. For example, with the assumption that sub-directories will have at least two `/` in the path, you can use `link['href'].count('/') >= 2` as the condition. Sample:

    from bs4 import BeautifulSoup

    html = '''<a href="/images/pic.png">images</a>
    <a href="google.com">google</a>'''

    soup = BeautifulSoup(html)
    links = soup.find_all('a', href=True)
    for link in links:
        if link['href'].count('/') >= 2:
            print link['href']

If you mean relative paths by "sub-directories", you can use `link['href'].startswith('/')` as the condition.
Flashcard Game: How to keep score Question: I'm trying to build a flashcard game in Python for Python terms. A few of the specs I'd like it to have are: 1) The program should store scores in a file 2) Scores should keep track of each word and / or key and keep track of how many times it was guessed correctly and incorrectly 3) After each guess, the program should tell the user how many times the user has guessed that key correctly and incorrectly I'm very new to Python and would appreciate the simplest explanation on how to make this work (if at all possible). Here's what I have: from random import choice import sys print("Welcome! Please type 'Start' to begin or 'Quit' to leave") user_answer = raw_input() if (user_answer == "quit"): quit() file = [] words = { "break": "Stops repeat of a block or loop", "else": "comes after 'if' to offer an alternative option", "if": "informs computer on how to react depending on key response from user", "index": "Return the index in the list of the first item whose value is x", "dict": "Associates one thing to another no matter what it is", "import": "To call a command", "def": "to define a word", "print": "to send a message to the screen for the user to see", "for": "One way to start a loop", "while": "Another way to start a loop", "elif": "When an 'If'/'Else' situation calls for more than one 'Else'", "from": "directs the computer to a location from which to import a command" } score = file[] key = choice(words.keys()) remaining_questions = 3 while remaining_questions > 0: print("Which command can accomplish: " + words[key] + "...?") user_guess = raw_input() print(str(user_guess == key)) remaining_questions = remaining_questions - 1 Answer: Well, let's start with the basics: you can store the results in a dict as well and use the `pickle` module to dump that dict into a file. So, the dict structure can be like this:

    answers[word]['correct']
    answers[word]['wrong']

Here's a short snippet that starts that dict with its default values (0s for all words and guesses) if the file doesn't exist, and loads the dict from the file if it does exist, using `pickle`:

    from os.path import isfile
    from pickle import load

    if isfile('answers.bin'):
        # loads the dict from the previously "pickled" file if it exists
        answers = load(open('answers.bin', 'rb'))
    else:
        # create a "default" dict with 0s for every word
        answers = {}
        for word in words.keys():
            answers[word] = {'correct': 0, 'wrong': 0}

Then, we build an `if` statement to check whether the user answered correctly:

    if key == user_guess:  # there's no need to cast `user_guess` to str
        answers[key]['correct'] += 1
    else:
        answers[key]['wrong'] += 1

Finally, after the while block, we persist that answers dict with pickle, so we can load it later if needed:

    from pickle import dump
    dump(answers, open('answers.bin', 'wb'))
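To cover the third requirement (telling the user the running tally after each guess), a line like this could follow the if/else above; it's a sketch using the same names:

    print("You have guessed '%s' correctly %d time(s) and incorrectly %d time(s)." % (
        key, answers[key]['correct'], answers[key]['wrong']))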
Why does Python 2.7 sign extend HDC returned from GetDC() on x64? Question: On Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)], why is the return value of **GetDC()** sign extended?

    from ctypes import *
    hDC1 = windll.user32.GetDC(None)
    print hex(hDC1), hDC1, type(hDC1)
    hDC2 = windll.user32.GetDC(None)
    print hex(hDC2), hDC2, type(hDC2)

Which will sometimes result in negative return values:

    0x6a0116c1 1778456257 <type 'int'>     # OK
    -0x53fed994 -1409210772 <type 'int'>   # Why negative?

I believe HANDLEs returned by GetDC() always fit within 32 bits (even on x64), but ctypes seems to be sign extending the latter GetDC() return value if the highest bit in the 32-bit value is set (about 50% of the time). Later, attempting to use this negative-value HANDLE fails in subsequent GDI calls. This seems like such a major issue that I can't understand what I'm missing. **Edit**: This post seems to get to the root of the issue: [Why does setting ctypes dll.function.restype=c_void_p return long?](http://stackoverflow.com/questions/17840144/why-does-setting-ctypes-dll-function-restype-c-void-p-return-long) In our case, adding:

    class c_void_p(c_void_p):
        pass

to the beginning of the module where restype is defined for each GDI library function seems to have fixed the problem. Answer: ctypes assumes that a function returns a signed int unless you tell it otherwise. Set the function _restype_ to an unsigned value:

    from ctypes import *
    windll.user32.GetDC.restype = c_void_p
    hDC1 = windll.user32.GetDC(None)

ctypes doesn't know anything about parameters and return values, just the entry point of the function. It takes a guess but lets you override with _argtypes_ and _restype_.
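For completeness, and as an extension of the answer rather than something it originally included, declaring the argument types as well keeps ctypes from guessing on the 64-bit boundary for parameters too:

    from ctypes import windll, c_void_p

    windll.user32.GetDC.restype = c_void_p
    windll.user32.GetDC.argtypes = [c_void_p]  # HWND; None becomes a NULL pointer

    hDC = windll.user32.GetDC(None)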
Instantiate object "at module level" in Django Question: I want to use an [mpd client](https://github.com/Mic92/python-mpd2) in a Django project and would like to avoid having more connections to the mpd server than necessary. I think the easiest way to achieve this is to reuse the mpd-client object instead of creating a new object for every request. In short, I'd like to do something very similar to this: [Django: Keep a persistent reference to an object?](http://stackoverflow.com/questions/18130698/django-keep-a-persistent- reference-to-an-object). @daniel-roseman states this is easy to achieve by simply instantiating the object at module level. However, as a python-newbie I don't quite understand what that means. So far I have created a module (see below) that would reconnect to mpd in case of a disconnect and saved this module to `<Project>/<app>/lib/MPDProxy.py`. How would I instantiate this (mpd-)object at module level? # MPDProxy.py from mpd import MPDClient, MPDError class MPDProxy: def __init__(self, host="localhost", port=6600, timeout=10): self.client = MPDClient() self.host = host self.port = port self.client.timeout = timeout self.connect(host, port) def connect(self, host, port): self.client.connect(host, port) self.client.consume(1) # when we call self.client.next() the previous stream is deleted from the playlist if len(self.client.playlist()) > 1: cur = (self.client.playlist()[0][6:]) self.client.clear() self.add(cur) def add(self, url): try: self.client.add(url) except ConnectionError: self.connect(self.host, self.port) self.client.add(url) def play(self): try: self.client.play() except ConnectionError: self.connect(self.host, self.port) self.client.play() def stop(self): try: self.client.stop() except ConnectionError: self.connect(self.host, self.port) self.client.stop() def next(self): try: self.client.next() except ConnectionError: self.connect(self.host, self.port) self.client.next() def current_song(self): try: return self.client.currentsong() except ConnectionError: self.connect(self.host, self.port) return self.client.current_song() def add_and_play(self, url): self.add(url) if self.client.status()['state'] != "play": self.play() self.next() Answer: I just mean the bottom level of that module, at the same indentation as the "from mpd..." and "class MPDProxy..." lines. So, at the bottom of the file, without indenting at all, put `proxy = MPDProxy()` \- and now you can reference that instance from anywhere by importing it with `from lib.MPDProxy import proxy`.
Writing a Python script to print out an array of recs in lldb Question: I need help with the `SBValue` class used in the `lldb` Python module, which is used for creating scripts for lldb debugging sessions. I am in the process of porting my kext test and debug system from `gdb` to `lldb` so I can start using the latest version of `Xcode` and `Mac OS 10.9.1`. Part of this process is rewriting my gdb debug scripts in Python so they can be used with lldb. I have the two-Mac setup working, and can drop into lldb and poke around in the kernel of the victim Mac. I can also get my Python script called when I am running an lldb session. I'm stuck, though, in that I am unable to figure out how to display the contents of an array of records. For instance, given an array of `TraceRec`s:

    typedef struct {
        mach_timespec_t timeStamp;
        thread_t thread;
        TraceRecordType recordType;
        char entry[kEntrySize];
    } TraceRec;

a class which holds an array of trace records:

    class com_softraid_TraceLog {
    private:
        UInt32 fMaxNumberEntries;
        UInt32 fNextEntryNumber;
        TraceRec * fTraceRecArray;
        .
        .
        .

and a global in the kernel which points to an object of this class:

    extern com_softraid_TraceLog * com_softraid_gTraceLogPtr;

I can use the following Python script to get the value of com_softraid_gTraceLogPtr in lldb:

    #! /usr/bin/env python
    # -*- coding: utf-8 -*-
    import lldb
    import commands
    import optparse
    import shlex

    def __lldb_init_module(debugger, internal_dict):
        debugger.HandleCommand('command script add -f sr_debug_macros.srtrace srtrace')

    def srtrace(debugger, user_input, result, internal_dict):
        """srtrace [number_entries] dump out that number of entries, default = 0 = all entries"""
        target = debugger.GetSelectedTarget()
        traceLog = target.FindFirstGlobalVariable("com_softraid_gTraceLogPtr")
        traceRecordArray = traceLog.GetChildMemberWithName("fTraceRecArray")
        maxNumberEntries = traceLog.GetChildMemberWithName("fMaxNumberEntries").GetValueAsUnsigned()
        nextEntryNumber = traceLog.GetChildMemberWithName("fNextEntryNumber").GetValueAsUnsigned()
        print >>result, "SRLog Current Entry: %d, Log Size: %d" % (nextEntryNumber, maxNumberEntries)
        print >>result, traceRecordArray

and the output is:

    (lldb) srtrace
    SRLog Current Entry: 388, Log Size: 8192
    (TraceRec *) fTraceRecArray = 0xffffff80a48fd000

but I can't figure out how to display the values in the fields of any of the records in the array. I have tried most of the methods in the SBValue class without any luck. Does anyone understand how this is supposed to work? Any help would be great. P.S.: If anyone else is trying to get this to work, the first step should be to update to Xcode 5.1 b5. The version of lldb which ships with Xcode 5.1 b3 crashes frequently when displaying the contents of subclasses of IOKit classes. Answer: You're looking for `SBValue::GetChildAtIndex()`, but you need to use the long form of that API. 
For instance, with a standalone user process C file, #include <stdio.h> #include <stdlib.h> #include <stdint.h> typedef struct { int datum; } TraceRec; typedef struct { uint32_t fMaxNumberEntries; uint32_t fNextEntryNumber; TraceRec *fTraceRecArray; } com_softraid_TraceLog; com_softraid_TraceLog *com_softraid_gTraceLogPtr; int main () { com_softraid_TraceLog log; com_softraid_gTraceLogPtr = &log; log.fTraceRecArray = (TraceRec *) malloc (sizeof (TraceRec) * 100); log.fMaxNumberEntries = 100; log.fNextEntryNumber = 4; log.fTraceRecArray[0].datum = 0; log.fTraceRecArray[1].datum = 1; log.fTraceRecArray[2].datum = 2; log.fTraceRecArray[3].datum = 3; puts ("break here"); return 0; } we can experiment a little in the interactive script mode: (lldb) br s -p break (lldb) r (lldb) scri >>> debugger = lldb.debugger >>> target = debugger.GetSelectedTarget() >>> traceLog = target.FindFirstGlobalVariable("com_softraid_gTraceLogPtr") >>> traceRecordArray = traceLog.GetChildMemberWithName("fTraceRecArray") >>> print traceRecordArray.GetChildAtIndex(1, lldb.eNoDynamicValues, 1) (TraceRec) [1] = { datum = 1 } >>> print traceRecordArray.GetChildAtIndex(2, lldb.eNoDynamicValues, 1) (TraceRec) [2] = { datum = 2 } >>> There's also `SBValue::GetPointeeData()` which would give you the raw bytes of each member of the array in an `SBData` object but then you'd need to coerce those bytes back into your structure so I wouldn't go that way.
Adding many scrolledpanels to one scrolledPanel in wxPython Question: Level: Beginner I am developing a GUI with wxPython on Windows 7. The Python version is 2.7 and the wxPython version is 3. My GUI looks as shown in the image below: ![Screenshot](http://i.stack.imgur.com/ZpojA.png) I would like to take some time to explain what I am trying to achieve. As you may have noticed from the first image, there are 4 different scrolled panels in my GUI, named `panel-1` to `panel-4`. A vertical BoxSizer is applied to each scrolled panel. Each scrolled panel in turn contains a BoxSizer and a horizontal `wx.StaticLine`. The reason for using a horizontal line was to make it easier for the user to read the values for a particular serial number. (It will be easy for a user to read all the values in different panels for a particular serial number.) Finally, all these four scrolled panels are added to a horizontal BoxSizer named `panelSizer`. Currently the scrolled panels scroll individually. One important thing is that all the scrolled panels will always contain the same number of components/elements. Problem: I need a way to add all these scrolled panels to one sizer or panel in such a way that there is only one scroll bar on the right side, and when I scroll this scrollbar all the scrolled panels (panel-1 to panel-4) should scroll simultaneously. I would like to have something like this: ![image 2](http://i.stack.imgur.com/d1SVH.png) I think this option is better from the GUI perspective. I have tried a couple of things. For example, I created a new scrolled panel named `main panel` and applied the BoxSizer named `panelSizer` (it contains all the panels from panel-1 to panel-4) using the `SetSizer()` function, but the scroll bar doesn't appear; even the individual scroll bars don't appear! Also, I have checked that there are enough components in the panels so that scroll bars should appear. I tried to change the four scrolled panels to simple panels, add them to panelSizer, and then apply the sizer to the `main panel`, but nothing solved the problem. Can anyone suggest what is wrong with my approach? Any suggestions? Thank you for your time. Answer: Well, I found the solution myself, so here it goes. I had to disable scrolling on panel-1 to panel-4. This can be done by not calling the `SetupScrolling()` method on them. The scrolling should instead be enabled on the mainPanel, like this:

    mainPanel.SetupScrolling()

For an example: consider that you have two scrolled panels named `panel1` and `panel2`. If you want to add these two scrolled panels to another scrolled panel named `mainPanel`, you first create the two scrolled panels without enabling scrolling on them, then add them to a sizer, and then apply this sizer to the `mainPanel`. Code example:

    mainPanel = wx.lib.scrolledpanel.ScrolledPanel(self, -1,style=wx.SIMPLE_BORDER)
    mainPanel.SetupScrolling()

    panel1 = wx.lib.scrolledpanel.ScrolledPanel(mainPanel, -1, style=wx.SIMPLE_BORDER)
    panel2 = wx.lib.scrolledpanel.ScrolledPanel(mainPanel, -1, style=wx.SIMPLE_BORDER)

    panelSizer = wx.BoxSizer(wx.HORIZONTAL)
    panelSizer.Add(panel1)
    panelSizer.Add(panel2)
    mainPanel.SetSizer(panelSizer)
How to save a Java object in Jython/Python Question: I'm building a Python UI using Tkinter. For the needs of the program, I have to connect Python with Java to do some stuff, so I'm using a simple Jython script as a linker. I can't use Tkinter with Jython because it's not supported. `Python (ui.py) -> Jython (linker.py) -> Java (compiled in jars)` To call the Jython function from Python I use `subprocess` as follows: **ui.py:**

    cmd = 'jython linker.py'
    my_env = os.environ
    my_env["JYTHONPATH"] = tons_of_jars
    subprocess.Popen(cmd, shell=True, env=my_env)

Then, in the Jython file, `linker.py`, I import the Java classes already added to the JYTHONPATH, create an object with the name `m`, and call some functions of the Java class. **linker.py:**

    import handler.handler

    m = handler.handler(foo, moo, bar)
    m.schedule(moo)
    m.getFullCalendar()
    m.printgantt()

The thing is that the `m` object I've created will be destroyed after the execution of `jython linker.py` ends. So the question is: Is it possible to save that `m` object somewhere so I can call it from `ui.py` whenever I want? If it's not possible, is there any other way to do this? Thanks in advance. Answer: I finally solved it by using `ObjectOutputStream`.

    from java import io

    def saveObject(x, fname="object.bin"):
        outs = io.ObjectOutputStream(io.FileOutputStream(fname))
        outs.writeObject(x)
        outs.close()

    def loadObject(fname="object.bin"):
        ins = io.ObjectInputStream(io.FileInputStream(fname))
        x=ins.readObject()
        ins.close()
        return x
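Hooked into linker.py, usage might look like the sketch below (the file name is made up). One caveat worth stating explicitly: Java's ObjectOutputStream can only persist objects whose class implements java.io.Serializable, so this assumes the handler class does; otherwise writeObject throws NotSerializableException.

    # in linker.py, after working with the object
    m = handler.handler(foo, moo, bar)
    m.schedule(moo)
    saveObject(m, "handler_state.bin")

    # in a later jython run, restore it instead of rebuilding it
    m = loadObject("handler_state.bin")
    m.getFullCalendar()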
How to size a panel to fit a grid Question: Here is the simplest example I can make of what I am trying to do. It is a grid inside a panel:

    #!/usr/bin/env python
    import wx
    import wx.grid

    app = wx.App(False)

    class InfoPane(wx.grid.Grid):
        def __init__(self, parent):
            wx.grid.Grid.__init__(self, parent)

            # Set up the presentation
            self.SetRowLabelSize(0)
            self.CreateGrid(1, 6)
            self.SetColLabelAlignment( wx.ALIGN_LEFT, wx.ALIGN_CENTRE )
            self.SetColLabelValue(0, "Name")
            self.SetColLabelValue(1, "Status")
            self.SetColLabelValue(2, "")
            self.SetColLabelValue(3, "File")
            self.SetColLabelValue(4, "Last Action")
            self.SetColLabelValue(5, "Other Info")

    frame = wx.Frame(None)
    panel = wx.Panel(frame)
    info_pane = InfoPane(panel)
    note_sizer = wx.BoxSizer()
    note_sizer.Add(info_pane, 1, wx.EXPAND)
    panel.SetSizerAndFit(note_sizer)
    frame.Show()
    app.MainLoop()

I want the whole thing to size itself so that the contents of the grid are all visible. What happens at the moment is this: ![screenie](http://i.stack.imgur.com/7u6qe.png) (note: edit - I took the notebook out for even more simplification; the result looks the same) Ideas? Thanks! Answer: You can get the size of the Grid and resize its parent panel with it. Try this:

    panel.SetSizerAndFit(note_sizer)
    gridSize = info_pane.GetVirtualSize()
    frame.Show()
    panel.SetSize(gridSize)
    frame.Fit()
Simulating the Knight Sequence Tour Question: I am currently trying to write a simple multi-threading program using Python. However, I have run into a bug I think I am missing. I am trying to simply write a program that uses a brute force approach to the problem below: ![How the knight must move...](http://i.stack.imgur.com/u2VOw.gif) As can be seen from the image, there is a chess board where the knight travels all respective squares. My approach is to simply try each possible way, where each possible way is a new thread. If at the end of a thread there are no possible moves, count how many squares have been visited; if it is equal to 63, write the solution to a simple text file... The code is as below:

    from thread import start_new_thread
    import sys

    i=1

    coor_x = raw_input("Please enter x[0-7]: ")
    coor_y = raw_input("Please enter y[0-7]: ")
    coordinate = int(coor_x), int(coor_y)

    def checker(coordinates, previous_moves):
        possible_moves = [(coordinates[0]+1, coordinates[1]+2), (coordinates[0]+1, coordinates[1]-2),
                          (coordinates[0]-1, coordinates[1]+2), (coordinates[0]-1, coordinates[1]-2),
                          (coordinates[0]+2, coordinates[1]+1), (coordinates[0]+2, coordinates[1]-1),
                          (coordinates[0]-2, coordinates[1]+1), (coordinates[0]-2, coordinates[1]-1)]
        to_be_removed = []
        for index in possible_moves:
            (index_x, index_y) = index
            if index_x < 0 or index_x > 7 or index_y < 0 or index_y > 7:
                to_be_removed.append(index)
        for index in previous_moves:
            if index in possible_moves:
                to_be_removed.append(index)
        if not to_be_removed:
            for index in to_be_removed:
                possible_moves.remove(index)
        if len(possible_moves) == 0:
            if not end_checker(previous_moves):
                print "This solution is not correct"
        else:
            return possible_moves

    def end_checker(previous_moves):
        if len(previous_moves) == 63:
            writer = open("knightstour.txt", "w")
            writer.write(previous_moves)
            writer.close()
            return True
        else:
            return False

    def runner(previous_moves, coordinates, i):
        if not end_checker(previous_moves):
            process_que = checker(coordinates, previous_moves)
            for processing in process_que:
                previous_moves.append(processing)
                i = i+1
                print "Thread number:"+str(i)
                start_new_thread(runner, (previous_moves, processing, i))
        else:
            sys.exit()

    previous_move = []
    previous_move.append(coordinate)
    runner(previous_move, coordinate, i)
    c = raw_input("Type something to exit !")

I am open to all suggestions... My sample output is as below:

    Please enter x[0-7]: 4
    Please enter y[0-7]: 0
    Thread number:2
    Thread number:3
    Thread number:4
    Thread number:5Thread number:4
    Thread number:5
    Thread number:6Thread number:3Thread number:6Thread number:5Thread number:6
    Thread number:7
    Thread number:6Thread number:8
    Thread number:7
    Thread number:8Thread number:7
    Thread number:8
    Thread number:4
    Thread number:5
    Thread number:6Thread number:9Thread number:7Thread number:9
    Thread number:10
    Thread number:11
    Thread number:7
    Thread number:8
    Thread number:9
    Thread number:10
    Thread number:11
    Thread number:12
    Thread number:5Thread number:5
    Thread number:6
    Thread number:7
    Thread number:8
    Thread number:9
    Thread number:6
    Thread number:7
    Thread number:8
    Thread number:9

It seems for some reason the number of threads is stuck at 12... Any help would be most welcomed... Thank you Answer: Your so-called _Quest of the [Knights Who Say Ni](https://en.wikipedia.org/wiki/Knights_who_say_Ni)_ problem, while a clever rephrasing for asking a Python question, is more widely known as the [_Knights Tour_](https://en.wikipedia.org/wiki/Knights_tour "https://en.wikipedia.org/wiki/Knights_tour") mathematical problem. 
Given that and the fact you're a [math teacher](http://www.daniweb.com/members/5504/johnroach1985), I suspect your question's likely a fool's errand (aka [snipe hunt](https://en.wikipedia.org/wiki/Snipe_hunt)) and that you're fully aware of the following fact: According to a [section](https://en.wikipedia.org/wiki/Knights_tour#Brute_force_algorithms "click to go directly to that section of article") of Wikipedia's article on the Knights Tour problem:

> 5.1 Brute force algorithms
>
> A brute-force search for a knight's tour is impractical on all but the
> smallest boards; for example, on an 8x8 board there are approximately
> 4x10^51 possible move sequences**∗**, and it is well beyond the capacity
> of modern computers (or networks of computers) to perform operations
> on such a large set.
>
**∗** Exactly 3,926,356,053,343,005,839,641,342,729,308,535,057,127,083,875,101,072 of them according to a [footnote](http://www.josiahland.com/archives/781 "Enumerating the Kight's Tour - http://www.josiahland.com/archives/781") link.
How to display an image once you click on a thumbnail? Question: Please help me fix this script.

    import os, sys
    import tkinter
    from PIL import ImageTk, Image

    DIR_IMGS = 'imgs'
    DIR_THUMBS = 'thumbs'

    imgfiles = os.listdir(DIR_IMGS)
    thumbfiles = os.listdir(DIR_THUMBS)

    root = tkinter.Tk()
    root.geometry('900x700')

    links = []

    def showItem(imgfile):
        print(imgfile)
        pathImg = os.path.join(DIR_IMGS, imgfile)
        print(pathImg)
        renderImg = ImageTk.PhotoImage(file=pathImg)
        popup = tkinter.Toplevel()
        tkinter.Button(popup, image=renderImg).pack()

    def createThumbs():
        for imgfile in imgfiles:
            pathImg1 = os.path.join(DIR_IMGS, imgfile)
            pathImg2 = os.path.join(DIR_THUMBS, imgfile)
            openImg = Image.open(pathImg1)
            openImg.thumbnail((100, 100))
            openImg.save('thumbs/' + imgfile)

    def outputButtons():
        for thumbfile in thumbfiles:
            pathImg = os.path.join(DIR_THUMBS, thumbfile)
            renderImg = ImageTk.PhotoImage(file=pathImg)
            but = tkinter.Button(root, image=renderImg)
            but.pack(side='left')
            but.bind('<Button-1>', lambda event, thumbfile=thumbfile: showItem(thumbfile))
            links.append(renderImg)

    createThumbs()
    outputButtons()
    root.mainloop()

I wrote the script following an example from the popular book "Programming Python" by Mark Lutz, but for some strange reason it does not work. There is no error message on the screen, but the image in the popup window is not displayed (the popup shows up blank). Answer: The (somewhat large) image is garbage collected before the window has a chance to display it. You'll need to keep a reference around to the image to show it. I've taken the solution from [here](http://compgroups.net/comp.lang.python/tkinter-buttons-with-image/1637656), and your `showItem` function can look as follows:

    def showItem(imgfile):
        print(imgfile)
        pathImg = os.path.join(DIR_IMGS, imgfile)
        print(pathImg)
        renderImg = ImageTk.PhotoImage(file=pathImg)
        popup = tkinter.Toplevel()
        button = tkinter.Button(popup, image=renderImg)
        button.image = renderImg
        button.pack()
How to use unordered_map in cython? Question: I want a step-by-step guide on how to use unordered_map in Cython. I've copied the file unordered_map.pxd into Cython/Includes/libcpp from <https://gist.github.com/ikuyamada/3265267>, and I use 3 other files: main.py:

    import pyximport; pyximport.install()
    from foo import F
    F()

foo.pyx:

    from libcpp.unordered_map cimport unordered_map

    def F():
        cdef unordered_map[int, int] my_map
        my_map[1]=11
        my_map[2]=12
        print my_map[1],my_map[2]

foo.pyxbld: (to compile foo.pyx into C++)

    def make_ext(modname, pyxfilename):
        from distutils.extension import Extension
        return Extension(name=modname,
                         sources=[pyxfilename],
                         language='C++')

When I run test.py, I get the error:

    foo.cpp
    C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
    C:\Users\kitov\.pyxbld\temp.win-amd64-2.7\Release\pyrex\foo.cpp(316) : fatal error C1083: Cannot open include file: 'tr1/unordered_map': No such file or directory

I use Win7, Python 2.7 64bit, VS2008 Professional. Answer: If you've already compiled `foo`, I don't think you'll need to run `pyximport.install()`. That might have messed it up, since `pyximport` re-compiles the source code, but you didn't provide any compiler args to `pyximport`, so it compiled with default settings (i.e. not knowing `foo` is a cpp source file). main.py:

    from foo import F
    F()

foo.pyx: <-- don't forget the x

    # distutils: language = c++
    from libcpp.unordered_map cimport unordered_map

    def F():
        cdef unordered_map[int, int] my_map
        my_map[1]=11
        my_map[2]=12
        print my_map[1],my_map[2]
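If pyximport keeps getting in the way, a plain setup.py build is usually the most predictable route. This is just a sketch; `cythonize()` reads the `# distutils: language = c++` directive from foo.pyx on its own:

    # setup.py -- build in place with: python setup.py build_ext --inplace
    from distutils.core import setup
    from Cython.Build import cythonize

    setup(ext_modules=cythonize("foo.pyx"))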
PyQt5 tray icon disappears Question: Using PyQt5 5.2 and Python 2.7.6 on Arch Linux with XMonad 0.11 and trayer (or stalonetray). I wrote up a little demo program:

    #!/usr/bin/env python2
    from PyQt5 import QtGui, QtWidgets

    import signal
    signal.signal(signal.SIGINT, signal.SIG_DFL)

    app = QtWidgets.QApplication([])
    icon = QtGui.QIcon('clock.png')
    tray = QtWidgets.QSystemTrayIcon(icon)
    tray.show()
    app.exec_()

(clock.png is just some 256x256 icon I found) If my tray is running, the tray icon shows up fine, though the transparent background seems to be ignored. If the tray gets restarted, which happens from time to time as I recompile XMonad or switch monitor setups, the tray icon disappears and only shows a thin black vertical bar, which I usually can't interact with. The rest of my usual tray icons (Spotify, Parcellite, nm-applet, Dropbox) display just fine. Answer: Qt 5 (at least up to the current 5.2.1 stable version) does not play well with most trays under X11. This has been going on for quite some time. Relevant bug reports: * <https://bugreports.qt.io/browse/QTBUG-31762> * <https://bugreports.qt.io/browse/QTBUG-35658>
LaTeX document and BoundingBox of an eps file created by matplotlib Question: I am having a problem in including an eps file generated by matplotlib into a LaTeX document. The size of the figure does not seems to be recognized correctly, and the caption overlaps with the figure. Please see the image below. This is the image of the latex document which includes figures generated by matplotlib. The LaTeX source file and the python source code for the plotting are shown further below. ======================== ![Image of the LaTeX document.](http://i.stack.imgur.com/4pIMe.jpg) ======================= Figure 1. is overlapped by the caption. It seems that the LaTeX recognizes the figure to have a smaller size than the actual size. Figure 2. is the same eps file as Figure 1., but the `bb` parameters were specified in `includegraphics` command in the LaTeX document. The BoundingBox of the eps file is `%%BoundingBox: 18 180 594 612`, and the `bb` parameters were set as `bb=0 0 594 612`. The first two values are changed to zero while the last two values are kept. Then, Figure 2. looks good. The size of the figure seems to be recognized correctly. I did not have this type of problem in other computers so far, and I wonder what is causing the problem. I am not sure if is the problem of matplotlib or LaTex, and I would like to have suggestions about how to find the source of the problem. The version of matplotlib package is 1.1.1rc, and OS is Ubuntu 12.04. I processed the LaTeX document by `latex` command and then `dvipdfm` command. >>> import matplotlib >>> matplotlib.__version__ '1.1.1rc' $ latex --version pdfTeX 3.1415926-2.5-1.40.14 (TeX Live 2013) kpathsea version 6.1.1 Copyright 2013 Peter Breitenlohner (eTeX)/Han The Thanh (pdfTeX). There is NO warranty. Redistribution of this software is covered by the terms of both the pdfTeX copyright and the Lesser GNU General Public License. For more information about these matters, see the file named COPYING and the pdfTeX source. Primary author of pdfTeX: Peter Breitenlohner (eTeX)/Han The Thanh (pdfTeX). Compiled with libpng 1.5.16; using libpng 1.5.16 Compiled with zlib 1.2.7; using zlib 1.2.7 Compiled with xpdf version 3.03 $ dvipdfm --version This is dvipdfmx-20130405 by the DVIPDFMx project team, modified for TeX Live, an extended version of dvipdfm-0.13.2c developed by Mark A. Wicks. Copyright (C) 2002-2013 by the DVIPDFMx project team This is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. Here is the LaTeX source file. \documentclass{article} \usepackage[dvips]{graphicx,color} %\usepackage{amsmath,amssymb} %\usepackage[top=1in,bottom=1in,left=1in,right=1in]{geometry} \begin{document} This is the first paragraph of the text. Today is a good day. \begin{figure}[ht] \begin{center} \includegraphics[width=.5\linewidth]{fig.eps} \caption{This is the caption of the figure included without specifying bb parameters.} \label{fig1} \end{center} \end{figure} This is the second paragraph of the text written below the first figure environment. Tomorrow will be a bad day. \begin{figure}[hb] \begin{center} \includegraphics[bb=0 0 594 612, width=.5\linewidth]{fig.eps} \caption{This is the caption of the figure included with the first two bb parameters set zero.} \label{fig2} \end{center} \end{figure} % Note that fig.eps has the following bounding box information. 
% $ grep BoundingBox fig.eps % %%BoundingBox: 18 180 594 612 \end{document} Here is the python source code used for plotting. #!/usr/bin/python import matplotlib.pyplot as plt plt.plot([0, 1, 2], [0, 2, 4], '-b') plt.savefig('fig.eps') Answer: First, you should set the figure dimensions using the `plt.figure()` function with the `figsize=(x,y)` option. You should also set the bounding box in the `plt.savefig()` function with the `bbox_inches='tight'` option, which should remove the extra whitespace around your figure. Some other things you can try include setting the backend to 'PS' if you haven't already using: import matplotlib as mpl mpl.use('PS') Additionally, I use the `format='eps'` option in my savefig function, though it shouldn't be necessary since you already have the eps extension on your filename, but it doesn't hurt to give it a try.
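Putting those suggestions together, a minimal sketch of the plotting script might look like this (the backend has to be selected before pyplot is imported):

    import matplotlib as mpl
    mpl.use('PS')  # select the PostScript backend before importing pyplot
    import matplotlib.pyplot as plt

    plt.plot([0, 1, 2], [0, 2, 4], '-b')
    # bbox_inches='tight' recomputes the BoundingBox around the actual artwork
    plt.savefig('fig.eps', format='eps', bbox_inches='tight')

With a tight bounding box the lower-left corner should land at the origin, so the manual `bb=` override in the LaTeX source should no longer be needed.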
shorten csv file based on rules python Question: I am stuck writing the following program. I have a csv file

    "SNo","Column1","Column2"
    "A1","X","Y"
    "A2","A","B"
    "A1","X","Z"
    "A3","M","N"
    "A1","D","E"

I want to shorten this csv to follow these rules a.) If the SNo occurs more than once in the file, combine all column1 and column2 entries of that serial number b.) If the same column1 entries and column2 entries occur more than once, then do not combine them twice. Therefore the output of the above should be

    "SNo","Column1","Column2"
    "A1","X,D","Y,Z,E"
    "A2","A","B"
    "A3","M","N"

So far I am reading the csv file, iterating the rows, and checking if the SNo of the next row is the same as the previous row. What's the best way to combine?

    import csv

    temp = "A1"
    col1=""
    col2=""
    col3=""
    with open("C:\\file\\file1.csv","rb") as f:
        reader = csv.reader(f)
        for row in reader:
            if row[0] == temp:
                continue
            col1 = col1+row[1]
            col2=col2+row[2]
            col3=col3+row[3]
            temp = row[0]
            print row[0]+";"+col1+";"+col2+";"+col3
            col1=""
            col2=""
            col3=""

Please let me know a good way to do this. Thanks Answer: The simplest approach is to maintain a dictionary with keys as serial numbers and sets to contain the columns. Then you could do something like the following:

    my_dict = {}
    for row in reader:
        if row[0] not in my_dict:
            my_dict[row[0]] = [set(), set()]
        my_dict[row[0]][0].add(row[1])
        my_dict[row[0]][1].add(row[2])

Writing the file out (to a file opened as `file_out`) would be as simple as iterating through the dictionary using a join command:

    for k in my_dict.keys():
        file_out.write("{0},\"{1}\",\"{2}\"\n".format(
            k,
            ','.join([x for x in my_dict[k][0]]),
            ','.join([x for x in my_dict[k][1]])
        ))
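If you also need the first-seen order of serial numbers and the fully quoted output shown in the question, a sketch along the same lines, using an OrderedDict whose keys double as an ordered set, could look like this:

    import csv
    from collections import OrderedDict

    groups = OrderedDict()  # SNo -> (ordered column1 values, ordered column2 values)
    with open("C:\\file\\file1.csv", "rb") as f:
        reader = csv.reader(f)
        header = next(reader)                # skip the "SNo","Column1","Column2" row
        for sno, col1, col2 in reader:
            c1, c2 = groups.setdefault(sno, (OrderedDict(), OrderedDict()))
            c1[col1] = None                  # OrderedDict keys act as an ordered set
            c2[col2] = None

    with open("out.csv", "wb") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)
        writer.writerow(header)
        for sno, (c1, c2) in groups.items():
            writer.writerow([sno, ",".join(c1), ",".join(c2)])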
Align unicode text in terminal window using default monospace font Question: I am pulling data from the web and want to align it in a table in a terminal window. I can align the text fine in most cases but when the text contains certain symbols or foreign characters things get messy. How can I handle these characters? Here is an example with the problem on the third line of output:

    >>> items = "Apple tree", "Banana plant", "Orange 으르", "Goodbye"
    >>> values = 100, 200, 300, 400
    >>> for i, v in zip(items, values):
    ...     print "%-15s : %-4s" % (i, v)
    ...
    Apple tree      : 100
    Banana plant    : 200
    Orange 으르      : 300
    Goodbye         : 400
    >>>

Note: I quoted all the items correctly. The `"Orange"` closing quotes don't show correctly here on Stack Overflow but they display fine in the terminal window. **UPDATE:** I have added a bounty to this question. I am looking for a solution that can be implemented without too much additional code and without using external libraries. It should also work with python 2.7+ and 3.x (conditionals that test for versions and apply different fixes would be fine). Also it should not require any additional system configuration or changing of fonts or changing any terminal settings of a standard Debian/Ubuntu installation. Answer: The special behaviour for those particular characters can be identified using the [East Asian width](http://www.unicode.org/reports/tr11/) property from their Unicode data. Taking the suggestion from [Programmatically tell if a Unicode character takes up more than one character space in a terminal](http://stackoverflow.com/questions/7086856/programmatically-tell-if-a-unicode-character-takes-up-more-than-one-character-sp) and using that value for alignment:

    #!/usr/bin/python3

    import unicodedata

    items = "Apple tree", "Banana plant", "Orange 으르", "Goodbye"
    values = 100, 200, 300, 400
    for i, v in zip(items, values):
        eawid = len(i) + sum(1 for v in i if unicodedata.east_asian_width(v) == 'W')
        pad = ' ' * (15 - eawid)
        print("%s%s : %-4s" % (i, pad, v))

gives:

    Apple tree      : 100
    Banana plant    : 200
    Orange 으르     : 300
    Goodbye         : 400

This may appear misaligned if your browser is using a 1.5-width glyph for those characters; in my terminal, `plan` is exactly the same width as `으르`. Syntax here is Python 3, but the same technique works in 2.7.
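One caveat: the East Asian width property also reports 'F' (Fullwidth) for characters that take two terminal cells, so if your data may contain those, a small helper that counts both is safer (the loop body would then use `display_width(i)` in place of the manual `eawid` computation):

    def display_width(text):
        # 'W' (Wide) and 'F' (Fullwidth) characters occupy two terminal cells
        return sum(2 if unicodedata.east_asian_width(ch) in ('W', 'F') else 1
                   for ch in text)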
Django: Class based view instantiated for each request, is it efficient? Question: I found this in the Django documentation:

> While your class is instantiated for each request dispatched to it, class attributes set through the as_view() entry point are configured only once at the time your URLs are imported.

Wouldn't instantiating the view for each request be inefficient? I understand Python is good at garbage collection, but couldn't it be a cause of concern during high traffic, as it may lead to the server running out of memory before the objects can be garbage collected? Answer: Besides the comment from jpmc26, I would guess it's not a big problem. If you follow the workflow Django goes through from when a request comes in until the response is rendered, there are many more steps involved that instantiate objects. The class-based view is probably the least of your concerns, assuming you didn't implement it to be blocking.
Problems writing to file in python while using tweepy Question: I am working on a project for which I need to extract tweets. I have used tweepy in python for the same, using various sources of information available online, as I'm not very comfortable with python. I am coming across two issues I haven't been able to resolve by googling! 1) I want approx 1000 tweets to get stored in a file. I guess I could use a count variable for the same but don't know where and how to use it. Basically, how do I terminate the program once I get 1000 tweets? 2) When printing to file, I get an error that reads:

    File "Tweet3.py", line 20, in on_status
        print "Tweet Text : %s"%status.text
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 65: ordinal not in range(128)

How could I possibly resolve this error? Here is the code:

    import sys
    import tweepy
    import webbrowser

    fp=open("Tweets.txt","w")
    Q=['Earthquake','Flood']#Filters
    c_key = '...'
    c_secret = '...'
    a_token= '...'
    a_token_sec= '...'

    auth = tweepy.OAuthHandler(c_key, c_secret)
    auth.set_access_token(a_token, a_token_sec)

    class CustomStreamListener(tweepy.StreamListener):
        def on_status(self, status):
            print "----------NEW TWEET!-----------"
            print "Tweet Text : %s"%status.text
            fp.write(status.text)
            print "Author's name : %s"%status.author.screen_name
            print "Time/Date of creation : %s"%status.created_at
            print "Source of Tweet : %s"%status.source
            print "Coordinates : %s"%status.coordinates

    streaming_api = tweepy.streaming.Stream(auth, CustomStreamListener(), timeout=60)
    print "Displaying Tweets for filters :"
    #print Q
    #streaming_api.filter(follow=None, track=Q)
    streaming_api.filter(locations=[-125,25,-65,48], async=False)

Answer: Encode the text first before writing it to the file:

    status.text.encode('utf8')

**EDIT:** Try this instead:

    import codecs

    fp = codecs.open("Tweets.txt", "w", "utf-8")
    fp.write(status.text)

**EDIT:** Create a counter and increment it every time a new tweet occurs, e.g.:

    counter = 0
    MAX_TWEETS = 1000

within the on_status method:

    counter += 1
    if counter >= MAX_TWEETS:
        sys.exit()
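Putting both pieces together: a module-level `counter` is not writable from inside `on_status` without a `global` declaration, so it is cleaner to keep the count on the listener itself. Returning False from a tweepy callback disconnects the stream, which terminates more cleanly than `sys.exit()`; a sketch:

    class CustomStreamListener(tweepy.StreamListener):
        def __init__(self, max_tweets=1000, *args, **kwargs):
            super(CustomStreamListener, self).__init__(*args, **kwargs)
            self.counter = 0
            self.max_tweets = max_tweets

        def on_status(self, status):
            fp.write(status.text.encode('utf8') + '\n')
            self.counter += 1
            if self.counter >= self.max_tweets:
                return False  # tells tweepy to disconnect the stream

    streaming_api = tweepy.streaming.Stream(auth, CustomStreamListener(max_tweets=1000), timeout=60)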
Opening A large JSON file in Python with no newlines for csv conversion Python 2.6.6 Question: I am attempting to convert a very large json file to csv. I have been able to convert a small file of this type to a 10 record (for example) csv file. However, when trying to convert a large file (on the order of 50000 rows in the csv file) it does not work. The data was created by a curl command with the -o pointing to the json file to be created. The file that is output does not have newline characters in it. The csv file will be written with csv.DictWriter() and (where data is the json file input) has the form

    rowcount = len(data['MainKey'])
    colcount = len(data['MainKey'][0]['Fields'])

I then loop through the range of the rows and columns to get the csv dictionary entries

    csvkey = data['MainKey'][recno]['Fields'][colno]['name']
    cvsval = data['MainKey'][recno]['Fields'][colno]['Values']['value']

I attempted to use the answers from other questions, but they did not work with a big file (`du -m bigfile.json = 157`) and the files that I want to handle are even larger. An attempt to get the size of each line,

    myfile = open('file.json','r')
    line = myfile.readline()
    print len(line)

shows that this reads the entire file as a full string. Thus, one small file will show a length of 67744, while a larger file will show 163815116. An attempt to read the data directly with

    data=json.load(infile)

gives the error that other questions have discussed for the large files. An attempt to use the

    def json_parse(self, fileobj, decoder=JSONDecoder(), buffersize=2048):

generator and yield results as shown in [another answer](http://stackoverflow.com/questions/21708192/how-do-i-use-the-json-module-to-read-in-one-json-object-at-a-time/21709058#21709058) works with a 72 kb file (10 rows, 22 columns) but seems to either lock up or take an interminable amount of time for an intermediate sized file of 157 mb (from du -m bigfile.json). Note that a debug print shows that each chunk is 2048 in size as specified by the default input argument. It appears that it is trying to go through the entire 163815116 (shown from the len above) in 2048-byte chunks. If I change the chunk size to 32768, simple math shows that it would take 5,000 cycles through the loop to process the file. A change to a chunk size of 524288 exits the loop approximately every 11 chunks but should still take approximately 312 chunks to process the entire file. If I can get it to stop at the end of each row item, I would be able to process that row and send it to the csv file based on the form shown below. vi on the small file shows that it is of the form

    {"MainKey":[{"Fields":[{"Value": {'value':val}, 'name':'valname'}, {'Value': {'value':val}, 'name':'valname'}}], (other keys)},{'Fields' ... }] (other keys on MainKey level) }

I cannot use ijson as I must set this up for systems that I cannot import additional software for. Answer: I wound up using a chunk size of 8388608 (0x800000 hex) in order to process the files. I then processed the lines that had been read in as part of the loop, keeping count of rows processed and rows discarded. At each process function, I added the number to the totals so that I could keep track of total records processed. This appears to be the way that it needs to go. Next time a question like this is asked, please emphasize that a large chunk size must be specified and not the 2048 as shown in the original answer.
The loop goes

    first = True
    for data in self.json_parse(inf):
        records = len(data['MainKey'])
        columns = len(data['MainKey'][0]['Fields'])
        if first:
            # Initialize output as DictWriter
            ofile, outf, fields = self.init_csv(csvname, data, records, columns)
            first = False
        reccount, errcount = self.parse_records(outf, data, fields, records)

Within the parsing routine

    for rec in range(records):
        currec = data['MainKey'][rec]
        # If each column count can be different
        columns = len(currec['Fields'])
        retval, valrec = self.build_csv_row(currec, columns, fields)

To parse the columns use

    for col in range(columns):
        dataname = currec['Fields'][col]['name']
        dataval = currec['Fields'][col]['Values']['value']

Thus the references now work and the processing is handled correctly. The large chunk apparently allows the processing to be fast enough to handle the data while being small enough not to overload the system.
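For reference, the chunked generator used above is built around `JSONDecoder.raw_decode`, which parses one JSON value off the front of a string and reports how far it got; a sketch with the chunk size that worked here:

    from json import JSONDecoder

    def json_parse(self, fileobj, decoder=JSONDecoder(), buffersize=0x800000):
        buffer = ''
        for chunk in iter(lambda: fileobj.read(buffersize), ''):
            buffer += chunk
            while buffer:
                try:
                    result, index = decoder.raw_decode(buffer)
                    yield result
                    buffer = buffer[index:].lstrip()
                except ValueError:
                    break  # incomplete object at the end; read more data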
Rock, paper, Scissors (python 3.3) Question: Hi, I'm making a rock paper scissors game and I have made the following script so far:

    def main():
        from random import randint
        UserChoices = input("'rock', 'paper' or 'scissors'? \n Input: ")
        if UserChoices == "rock":
            UserChoice = 1
        elif UserChoices == "paper":
            UserChoice = 2
        elif UserChoices == "scissors":
            UserChoice = 3
        CpuChoice = randint(1,3)
        if UserChoice == CpuChoice:
            print("DRAW!")
        elif UserChoice == "1" and CpuChoice== "3":
            print("Rock beats scissors PLAYER WINS!")
            main()
        elif UserChoice == "3" and CpuChoice== "1":
            print("Rock beats scissors CPU WINS")
            main()
        elif UserChoice == "1" and CpuChoice== "2":
            print("Paper beats rock CPU WINS!")
            main()
        elif UserChoice == "2" and CpuChoice== "1":
            print("paper beats rock PLAYER WINS!")
            main()
        elif UserChoice == "2" and CpuChoice== "3":
            print("Scissors beats paper CPU WINS!")
            main()
        elif UserChoice == "3" and CpuChoice== "2":
            print("Scissors beats paper PLAYER WINS!")
            main()
        elif UserChoice == "1" and CpuChoice== "2":
            print("cpu wins")
            main()
        else:
            print("Error: outcome not implemented")
    main()

but when I run it I get the error message I made: "Error: outcome not implemented". Can someone tell me why this is? Thank you. Answer: This and all the other comparisons similar to it:

    elif UserChoice == "1" and CpuChoice == "3":
        ...

should be:

    elif UserChoice == 1 and CpuChoice == 3:

In other words, you should be comparing `int`s with `int`s, instead of `int`s with strings as is happening right now.
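As a side note, once the choices are plain ints, a small table of (winner, loser) pairs removes the need for one branch per outcome; a sketch for Python 3:

    import random

    NAMES = {1: 'Rock', 2: 'Paper', 3: 'Scissors'}
    BEATS = {(1, 3), (3, 2), (2, 1)}  # (winner, loser) pairs

    def main():
        choice = input("'rock', 'paper' or 'scissors'? \n Input: ")
        user = {'rock': 1, 'paper': 2, 'scissors': 3}.get(choice)
        if user is None:
            print("Error: unknown choice")
            return main()
        cpu = random.randint(1, 3)
        if user == cpu:
            print("DRAW!")
        elif (user, cpu) in BEATS:
            print("%s beats %s PLAYER WINS!" % (NAMES[user], NAMES[cpu]))
        else:
            print("%s beats %s CPU WINS!" % (NAMES[cpu], NAMES[user]))
        main()

    main()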
Are unittest base classes good practice? (python/webapp2) Question: I'm rather new to unit-testing and am trying to feel out the best practices for the thing. I've seen several questions on here relating to unit-test inheriting a base class that itself contains several tests, for example: class TestBase(unittest.TestCase): # some standard tests class AnotherTest(TestBase): # run some more tests in addition to the standard tests I think what I've gathered from the community is that it's a better idea to write separate tests for each implementation and use [multiple inheritance](http://stackoverflow.com/questions/1323455/python-unit-test-with- base-and-sub-class). But what if that base class actually doesn't contain any tests - just helpers for all your other tests. For example, let's say I've got some base test class which I've used to store some common methods that most if not all of my other tests will use. Let's also assume that I've got a database model in `models.py` called `ContentModel` **test_base.py** import webtest from google.appengine.ext import testbed from models import ContentModel class TestBase(unittest.TestCase): def setUp(self): self.ContentModel = ContentModel self.testbed = testbed.Testbed() self.testbed.activate() # other useful stuff def tearDown(self): self.testbed.deactivate() def createUser(self, admin=False): # create a user that may or may not be an admin # possibly other useful things It seems this would save me tons of time on all other tests: **another_test.py** from test_base import TestBase class AnotherTest(TestBase): def test_something_authorized(self): self.createUser(admin=True) # run a test def test_something_unauthorized(self): self.createUser(admin=False) # run a test def test_some_interaction_with_the_content_model(self): new_instance = self.ContentModel('foo' = 'bar').put() # run a test > Note: this is based on some of my work in webapp2 on google app engine, but > I expect that an analogous situation arises for pretty much any python web > application ## My Question Is it good practice to use a base/helper class that contains useful methods/variables which all your other tests inherit, or should each test class be "self contained"? Thanks! Answer: Superb question. I think that almost anything you do that automates testing is excellent. That said, the tests really serve as the only **reliable source of documentation.** So the tests should be very easy to read and comprehend. The tests are reliable, unlike comments, because they show what the software really does and how to use it. I like this approach. But you might also try out [nose](http://pythontesting.net/framework/nose/nose-introduction/). Nose is a bit "lighter weight" to set up, and is well supported if you go the continuous integration route with something like [Jenkins](http://jenkins-ci.org/) for automated build/test/deployment. Nose does not format its messages quite as nicely as the xUnit style (IMO, of course). But for many things, you might be willing to give that up. BTW. Python is not Java. So it is perfectly acceptable to reuse just a plain old python function for re-use.
How to apply multiprocessing in my Python code? Question: I need help with applying multiprocessing in my code. I tried reading the multiprocessing section of the Python documentation, but I just don't know how to apply it to what I have. I believe what I do want to use is Pool, though. Below is part of one of the python scripts I have written that will eventually be called to another main script: ## servoRemote.py from franges import drange from cmath import sqrt as csqrt from math import atan, degrees, radians, tan, sqrt, floor import servo # Declare variables for servo conditional statements: UR = False UL = False BR = False BL = False def servoControl(x,y): global UR, UL, BR, BL ytop = round((-csqrt((0.3688**2)*(1-((x-0.5)**2)/(0.2067**2)))+0.5).real,3) ybottom = round((csqrt((0.3688**2)*(1-((x-0.5)**2)/(0.2067**2)))+0.5).real,3) if (x in list(drange(0.5,0.708,0.001,3)) and y in list(drange(ytop,0.501,0.001,3))): UR = True UL = False BR = False BL = False (factor, angle) = linearSF(x,y) (servo1, servo2) = angleSF(factor,angle) servo.move(1,servo1) servo.move(2,servo2) def linearSF(r,s): # Calculates the hypotenuse of the gaze: distr = abs(r-0.4999) dists = abs(0.50-s) theta = atan(dists/distr) b = sqrt(distr**2+dists**2) # Involved in solving for max x coordinate: A = 1+0.31412198*tan(theta)**2 B = (-1-0.31412198*tan(theta)**2)**2 - 4*(1+(tan(theta)**2)/3.183477)*(0.207275+0.0785305*tan(theta)**2) B = csqrt(B) C = 2+((2*tan(theta)**2)/3.183477) # Different x equations: xRight = ((A+B)/C).real xLeft = ((A-B)/C).real if (UR == True and UL == False and BR == False and BL == False): x = xRight y = -sqrt((0.3688**2)*(1-((x-0.5)**2)/(0.2067**2)))+0.5 # Solve for max hypotenuse given an angle, a: a = sqrt(abs(x-0.5)**2+abs(0.5-y)**2) # Final outputs, factor and angle (in degrees): factor = (b/float(a)) angle = degrees(theta) return (factor, angle) def angleSF(factor, angle): # Angular factors: S1U = -0.0025641026*angle + 1.230759 S2R = 0.0025641026*angle + 1 if (UR == True and UL == False and BR == False and BL == False): servo1 = int(floor((S1U*65-78)*factor + 78)) servo2 = int(floor((S2R*65-78)*factor + 78)) return (servo1,servo2) The code above is only for the case when UR == True. There are also other conditional if statements that follow with different conditions. Most of the examples I found using multiprocesses use a finite for loop, but I would like to put this in a while like so: while 1: x = [some continuously incoming data stream] y = [some continuously incoming data stream] servoControl(x,y) Thanks again in advance! I'm sure if I understand how to do it for this one script, I will probably able to figure out how to apply it to the other ones. Answer: Multiprocessing is a bit daunting. The key thing to understand is that you do not want more than one process using a resource (for example, writing to a reference) when other processes are using it (reading or writing). So for your ideal multiprocessing setup, you want each process to work only on resources that it alone is using. So multi-processing in python, can be used effectively when 1) the multiple processes work on different resources and 2) either you have multiple processors on your hardware, or multiple processes can swap between each other on a single (or multiple) processor machine while waiting for events to occur. I would recommend getting your feet wet experimenting with "some toys" that meet the conditions above to get a feel for how and where it works. 
Then revisit your problem and try to implement it in a way consistent with how Python multiprocessing works. Read the documentation thoroughly. There is a join() function that you will likely need, to make sure that all of your processes finish before whatever the next step might be, even if the next step is just to terminate the program.
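For this particular script, a rough sketch with a worker pool might look like the following. Two caveats specific to your code: the module-level flags (UR, UL, BR, BL) live separately in each worker process, and concurrent `servo.move` calls from several processes may need a lock or a single dedicated writer process. `get_xy()` is a placeholder for however you read the incoming data stream.

    from multiprocessing import Pool
    from servoRemote import servoControl

    def get_xy():
        # placeholder: block until the next (x, y) pair arrives
        ...

    if __name__ == '__main__':
        pool = Pool(processes=4)
        try:
            while True:
                x, y = get_xy()
                # hand each measurement to a worker; servoControl is the
                # function from servoRemote.py above
                pool.apply_async(servoControl, (x, y))
        finally:
            pool.close()
            pool.join()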
How to measure distance at angle in image python Question: I'm working on a particle filter for an autonomous robot right now, and am having trouble producing expected distance measurements by which to filter the particles. I have an image that I'm using as a map. Each pixel represents a certain scaled area in the environment. Space the robot can occupy is white, walls are black, and areas that are exterior to the environment are grey. If you are unfamiliar with what a particle filter is, my python code will create a predetermined number of random guesses as to where it might be (x,y,theta) in the white space. It will then measure the distance to the nearest wall with ultrasonic sensors at several angles. The script will compare these measurements with the measurements that would have been expected at each angle for each guessed location/orientation. Those that most closely match the actual measurements will survive while guesses that are less likely to be right will be eliminated. My problem is finding the nearest wall AT a given angle. Say the sensor is measuring at 60°. For each guess, I need to adjust the angle to account for the guessed robot orientation, and then measure the distance to the wall at that angle. It's easy enough to find the nearest wall in the x direction:

    from PIL import Image
    #from matplotlib._png import read_png
    from matplotlib.pyplot import *

    mapp = Image.open("Map.png")
    pixels = mapp.load()
    width = mapp.size[0]
    height = mapp.size[1]
    imshow(mapp)

    pixelWidth = 5
    for x in range(width):
        if mapp.getpixel((x, 100)) == (0,0,0,255): #Identify the first black pixel
            distance = x*pixelWidth
    self.x

The problem is that I can't tell the script to search one pixel at a time going at a 60°, or 23°, or whatever angle. Right now the best thing I can think of is to go in the x direction first, find a black pixel, and then use the tangent of the angle to determine how many pixels I need to move up or down, but there are obvious problems with this, mostly having to do with corners, and I can't imagine how many if statements it's going to take to work around it. Is there another solution? Answer: Okay, I think I found a good approximation of what I'm trying to do, though I'd still like to hear if anyone else has a better solution. By checking the tangent of the angle I've actually traveled so far between each pixel move, I can decide whether to move one pixel in the x-direction, or in the y-direction.

    for i in range(len(angles)):
        angle = self.orientation+angles[i]
        if angle > 360:
            angle -= 360
        x = self.x
        y = self.y
        x1 = x
        y1 = y
        xtoy_ratio = tan(angle*math.pi/180)
        if angle < 90:
            xadd = 1
            yadd = 1
        elif 90 < angle < 180:
            xadd = -1
            yadd = 1
        elif 180 < angle < 270:
            xadd = -1
            yadd = -1
        else:
            xadd = 1
            yadd = -1
        while mapp.getpixel((x, y)) != (0,0,0,255):
            # Cross-multiplied form of (y-y1)/(x-x1) < xtoy_ratio.
            # Multiplying through by xadd keeps the inequality's direction
            # correct for either sign of (x-x1), and moving x first when
            # x == x1 avoids the division by zero of the naive comparison.
            if (y-y1)*xadd < xtoy_ratio*(x-x1)*xadd:
                y += yadd
            else:
                x += xadd
        distance = sqrt((y-y1)**2 + (x-x1)**2)*pixel_width  # ** not ^ (^ is XOR in Python)

The accuracy of this method of course depends a great deal on the actual length represented by each pixel. As long as pixel_width is small, accuracy will be pretty good, but if not, it will generally go pretty far before correcting itself. As I said, I welcome other answers. Thanks
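A division-free alternative that sidesteps the quadrant bookkeeping entirely is to step along the ray's unit vector one pixel at a time; a sketch (assuming image coordinates where y grows downward, and that rays never leave the map; otherwise a bounds check against `mapp.size` would be needed):

    from math import cos, sin, radians, hypot

    def ray_distance(mapp, x0, y0, angle_deg, pixel_width, max_steps=10000):
        dx = cos(radians(angle_deg))
        dy = -sin(radians(angle_deg))  # minus because image y grows downward
        x, y = float(x0), float(y0)
        for _ in range(max_steps):
            x += dx
            y += dy
            if mapp.getpixel((int(x), int(y))) == (0, 0, 0, 255):
                return hypot(x - x0, y - y0) * pixel_width
        return None  # no wall found within max_steps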
Interchange values and keys in OrderedDict Python Question: Here is my OrderedDict

    o=OrderedDict([('xmllist', 123), ('A', 124), ('B', 125), ('C', 126), ('D', 127)])

How can I interchange its keys and values to get

    o=OrderedDict([('A', 123), ('B', 124), ('C', 125), ('D', 126)])

Answer: Using [`zip`](http://docs.python.org/3/library/functions.html#zip) and [`itertools.islice`](http://docs.python.org/3/library/itertools.html#itertools.islice):

    >>> from collections import OrderedDict
    >>> import itertools
    >>>
    >>> o = OrderedDict([('xmllist', 123), ('A', 124), ('B', 125), ('C', 126), ('D', 127)])
    >>> OrderedDict((key1, o[key2]) for key1, key2 in zip(itertools.islice(o, 1, None), o))
    OrderedDict([('A', 123), ('B', 124), ('C', 125), ('D', 126)])

mapping: `o['A'] = o['xmllist']`, `o['B'] = o['A']`, `o['C'] = o['B']`, ...
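Equivalently, since an OrderedDict iterates its keys and values in the same order, you can pair the shifted keys directly with the original values (`zip` stops at the shorter sequence, which drops the last value automatically):

    >>> OrderedDict(zip(list(o)[1:], o.values()))
    OrderedDict([('A', 123), ('B', 124), ('C', 125), ('D', 126)])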
Python Socket errno 10054 only when client.py runs Question: There are some other posts about this issue but none of them helped me with mine. I'm trying to build a really simple server - client relationship in python

server.py:

    #!/usr/bin/python
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1",8889))
    s.listen(1)

    try:
        while True:
            client, add = s.accept()
            data = client.recv(1024)
            if not data:
                print 'No data'
            print data
    finally:
        s.close()

client.py:

    #!/usr/bin/python
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("127.0.0.1",8889))

    try:
        while True:
            message = 'Foo'
            s.send(message)
            ans = s.recv(1024)
            print ans
    finally:
        s.close()

I start by running the server first, but when I try to run the client I'm getting this: `Errno 10054 --> An existing connection was forcibly closed by the remote host` When I request the IP and the related port in a browser, I receive some data. I'm quite new to networking, so please explain what might be obviously wrong in my code. EDIT: The main issue is that the client is somehow wrong, because it also returns an empty string on recv. Thank you in advance Answer: I am guessing: The server accepts one socket and then does

    client, add = s.accept()
    data = client.recv(1024)
    ...
    client, add = s.accept()

The client does this in the meantime:

    s.send(message)
    ans = s.recv(1024) # blocks until timeout

If another client now connects to the server, then `client` is replaced, and the old socket is garbage collected and closed. `s.recv(1024)` will then report that the connection was reset. Have a look at `import select` or twisted (google around) to handle multiple connections at once.
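As for why `ans = s.recv(1024)` returns an empty string even with a single client: the server never sends anything back, and it loops straight back to `accept()` instead of continuing to serve the connection it already has. A minimal echo-style restructuring of the server loop might look like:

    try:
        while True:
            client, addr = s.accept()
            while True:
                data = client.recv(1024)
                if not data:
                    break              # client closed its end
                print data
                client.sendall(data)   # reply, so the client's recv() gets data
            client.close()
    finally:
        s.close()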
Making python import statements 'insensitive' to working dir of launching script Question: I am new to python and am running into an unusual situation with my imports. Basically, I am just trying to organize things into packages. As an example, see this basic directory structure:

    root
     |
    ---------
    |       |
    a       b

If I 'launch' using a script in root and use scripts in 'a' that import scripts from 'b', then it seems like my imports in the scripts in 'a' need to be relative to the location of the 'launch' script (in root). This is OK, but I am using IntelliJ as my IDE and when I am looking at a script in the 'a' directory, IntelliJ thinks the imports are bad because they are not 'relative' to the 'a' directory. Does this make sense? Am I doing something wrong? Thanks

**Edit:** I have imports like these (my 'b' has children):

    from b.bchild import SomeClass
    from b import anotherBChild as blah

I tried adding 'b' as a source root in IntelliJ, but 'b' is a child of the main source root so IntelliJ complains that source directories should not intersect. If I change my imports to be root.b and so on, then I get runtime failures when the script from root is run. Each of my directories has an empty `__init__.py` in it. I should probably re-iterate that everything runs just fine... I just can't seem to get IntelliJ to understand the import statements which made me think I was doing something wrong. Answer: If I understand your question correctly I believe this solution should work: You need to add the parent of your "root" directory to PYTHONPATH. Then you place an empty `__init__.py` file in "root" and every child of "root" and all their children and so on and so forth. When this is done you can import scripts quite easily. Example: Let's say that directory **"a"** contains the script **foo.py** and directory **"b"** contains the script **bar.py**. Now you want to import **bar** in **foo**, simply add this line to **foo**:

    import root.b.bar as bar

EDIT: I have no experience using intellij with python. But if it is anything like PyCharm (both made by jetbrains) then you also need to add the parent of your "root" folder to intellij as you would add a project (like wim said in his comment).
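For concreteness, the layout this assumes looks something like the following, with the directory *containing* `root` on the PYTHONPATH (and marked as the source root in the IDE):

    root/
        __init__.py
        a/
            __init__.py
            foo.py      # can do: import root.b.bar as bar
        b/
            __init__.py
            bar.py

`foo.py` can then use the same `root.b.bar` import regardless of the working directory of the launching script.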
Python BeautifulSoup Extracting PHP Links Question: I'm having a problem in Python with BeautifulSoup. I need to extract all files on the page that end in ".php", but they also have to be local files. They can't be from another website. This is what I have so far:

    from bs4 import BeautifulSoup
    import mechanize
    import sys

    url = sys.argv[1]
    br = mechanize.Browser()
    code = br.open(url)
    html = code.read()
    soup = BeautifulSoup(html)

This is where I get stuck on what to do. I imagine using _soup.findall_ to get all the "a href" tags. Answer: Try like this (passing `href=True` skips anchors that have no href attribute at all):

    import urllib2
    from bs4 import BeautifulSoup

    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page.read())
    for a in soup.findAll('a', href=True):
        if a['href'].endswith('.php'):
            print a['href']
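That only checks the extension, though, and the question also asks for local links only. Resolving each href against the page URL and comparing hosts covers relative links and filters out other sites (a sketch for Python 2, where `url` is the same variable as in the question):

    from urlparse import urljoin, urlparse

    page_host = urlparse(url).netloc
    for a in soup.findAll('a', href=True):
        full = urljoin(url, a['href'])   # resolves relative links
        parsed = urlparse(full)
        if parsed.path.endswith('.php') and parsed.netloc == page_host:
            print full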
Python: separate matrix by column values Question: I have a matrix `A` with 3 columns that looks something like this, but much larger:

    [[10 15 1.0]
     [21 13 1.0]
     [9  14 0.0]
     [14 24 1.0]
     [21 31 0.0]
     ...]

I want to create two separate matrices: one that contains all of the data with the third column=0.0, and another with all the data with the third column=1.0. So essentially I am splitting the data by the values 0.0 or 1.0 in the third column. Answer: If you're using [Numpy](http://docs.scipy.org/doc/numpy/reference/), first find the rows where the third column has your desired value, then extract the rows using [indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html). **Demo**

    >>> import numpy
    >>> A = numpy.array([[1, 0, 1], [2, 0, 1], [3, 0, 0], [4, 0, 0], [5, 0, 0]])
    >>> A1 = A[A[:, 2] == 1, :] # extract all rows with the third column 1
    >>> A0 = A[A[:, 2] == 0, :] # extract all rows with the third column 0
    >>> A0
    array([[3, 0, 0],
           [4, 0, 0],
           [5, 0, 0]])
    >>> A1
    array([[1, 0, 1],
           [2, 0, 1]])
dynamic filename in python Question: I'm facing a problem in coding the file management part of my algorithm. The aim of my file manager is to take a source file, read it line by line, and split each line into a new file according to filters. Data are separated with tabulation `\t`. The filter consists of gathering all lines with the same 4th parameter (you can say it is ordering them). My problem is that I want to have dynamic file names according to that parameter, which is an integer. So the main file will be filled by a part of the algorithm like this:

    Logfile.write("%s\t %s\t %s\t %s\t %s\t %s\n" % (count, BSid, UEid, nbr_RB, Metric,))

The `nbr_RB` will be my filter parameter; it is a random set of integers ranging from 1 to 100. What I want to do is to automate this code:

    open('/usr/local/resultat/file_1', 'w') # here 1 is linked to the nbr_RB[i]
    open('/usr/local/resultat/file_2', 'w')
    .
    .
    .
    open('/usr/local/resultat/file_nbr_RB[i]', 'w')

so that each time we have the `nbr_RB[i]` in the file name; I'm not writing 100 lines. And when I apply the filter:

    ligne = Logfile.split("\n")
    par = ligne.split("\t")
    if par[3] == nbr_RB[1]:
        file_nbrRB[1].write("%s \n" % (ligne))
    elif par[3] == nbr_RB[4]:
        file_nbrRB[4].write("%s \n" % (ligne))
    .
    .
    .
    elif par[3] == nbr_RB[i]:
        file_nbrRB[i].write("%s \n" % (ligne))

I have looked for a solution and found this: For Python versions prior to 2.6, use the string formatting operator %:

    filename = "ME%d.txt" % i

For 2.6 and later, use the str.format() method:

    filename = "ME{0}.txt".format(i)

Though the first example still works in 2.6, the second one is preferred. If you have more than 10 files to name this way, you might want to add leading zeros so that the files are ordered correctly in directory listings:

    filename = "ME%02d.txt" % i
    filename = "ME{0:02d}.txt".format(i)

This will produce file names like ME00.txt to ME99.txt. For more digits, replace the 2 in the examples with a higher number (eg, ME{0:03d}.txt). And:

    import os, sys

    path = "c:/temp"
    for filename in ["chas.txt", "dave.txt"]:
        f = open(os.path.join(path, filename))
        print filename
        print f.read()
        print
        f.close()

Edit: I think I explained my problem really badly; here is what I coded for the first log file, before any treatment:

    logfile= open('/usr/local/Python-3.3.0/my_tests/fichier_log/logfile_%s_%s.txt' %c %z , 'a')

and here is the error:

    Traceback (most recent call last):
      File "/usr/local/Python-3.3.0/my_tests/log_files.py", line 181, in <module>
        main()
      File "/usr/local/Python-3.3.0/my_tests/log_files.py", line 167, in main
        logfile= open('/usr/local/Python-3.3.0/my_tests/fichier_log/logfile_%s_%s.txt' %c %z , 'a')
    TypeError: not enough arguments for format string

Answer: Take a look at this line:

    logfile= open('/long/path/fichier_log/logfile_%s_%s.txt' %c %z , 'a')

The problem is you can not first substitute one, and then the other format parameter. You need to pass _all_ the arguments for the format string at once, as a tuple:

    logfile= open('/long/path/fichier_log/logfile_%s_%s.txt' % (c, z), 'a')

* * *

Also take a look at this:

    ligne = Logfile.split("\n")
    par = ligne.split("\t")
    file_nbrRB[1].write("%s \n" % (ligne))

You are passing _all the lines_ as a list to the format string, which probably is not what you intended to do. Also, `ligne.split` will not work, as `ligne` is a list. Finally, assuming that `Logfile` is a `file`, you probably need to do `Logfile.read().split('\n')` or `Logfile.read().splitlines()` or `Logfile.readlines()` instead. Try this:

    for ligne in Logfile.readlines():
        par = ligne.split("\t")
        if par[3] == nbr_RB[1]:
            file_nbrRB[1].write("%s \n" % (ligne))
        ...
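To avoid writing one `open()` call and one `elif` branch per value of `nbr_RB` altogether, you can keep a dictionary of file handles keyed by the fourth field and open each output file lazily the first time its key appears (a sketch):

    files = {}
    for ligne in Logfile.readlines():
        par = ligne.rstrip('\n').split('\t')
        key = par[3]
        if key not in files:
            files[key] = open('/usr/local/resultat/file_%s' % key, 'w')
        files[key].write(ligne)

    for f in files.values():
        f.close()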
Using python to analyse coin tossing statistics Question: I have been learning python programming on [edX](https://www.edx.org/course/mitx/mitx-6-00x-introduction-computer- science-586) which is a very good course and I can so far fully recommend. Having just watched a TED talk on [Statistics](http://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries.html) I thought great, a simple way of exercising the python skills I have picked up on a real world scenario. The guy gave an example on probability of continually flipping a coin and looking out for two recurring sequences, which he explained, you would think had the same probability of happening which he claimed in fact don't. Put simply he claims the sequence Heads Tails Heads is more likely to occur than Heads Tails Tails as at the end of the first sequence you are are already one third towards repeating the sequence again where at the end of the second sequence you then have to toss a further head to begin the sequence again. This makes perfect sense, so I set about trying to prove it with my small python program shown here. import random HTH = 0 HTT = 0 myList = [] i = 0 numberOfTosses = 1000000 while i < numberOfTosses: myList.append(random.randint(0,1)) i += 1 for i in range (len(myList)): if i+2 >= len(myList): break if myList[i] == 1 and myList[i+1] == 0 and myList[i+2] == 1: HTH +=1 if myList[i] == 1 and myList[i+1] == 0 and myList[i+2] == 0: HTT +=1 print 'HTT :' ,numberOfTosses, HTT, numberOfTosses/HTT print 'HTH :' ,numberOfTosses, HTH, numberOfTosses/HTH So I have run the program many times and changed the max iteration value higher and higher, yet cannot seem to prove his claim that on average the HTH sequence should happen evey 8 tosses and the HTT sequence every 10, as it would seem that I get on average balanced results either way. So my question is where have I gone wrong in my implementation of the problem? Answer: Your expert is right, and your code for what you stated he said is right, but he actually said something else. He says that when you start flipping coins, you should expect to see HTT first come up in an average of 8 flips, and HTH first come up in an average of 10 flips. If you revise your program to test that assertion, it might look like this: import random HTH = 0 HTT = 0 numberOfTrials = 10000 for t in xrange( numberOfTrials ): myList = [ random.randint(0,1), random.randint(0,1), random.randint(0,1) ] flips = 3 HTHflips = HTTflips = 0 while HTHflips == 0 or HTTflips == 0: if HTHflips == 0 and myList[flips-3:flips] == [1,0,1]: HTHflips = flips if HTTflips == 0 and myList[flips-3:flips] == [1,0,0]: HTTflips = flips myList.append(random.randint(0,1)) flips += 1 HTH += HTHflips HTT += HTTflips print 'HTT :', numberOfTrials, HTT, float(HTT)/numberOfTrials print 'HTH :', numberOfTrials, HTH, float(HTH)/numberOfTrials Running that will confirm the expected values of 8 and 10 tosses.
XML parsing ExpatError with xml.dom.minidom at line 1, column 0 Question: I've a Python script which stopped working about a month ago, but until then it had worked flawlessly for months (if not years). A tiny self-contained script is provided below to demonstrate the problem. I ran into an XML parsing problem with xml.dom.minidom and the error message is included below. The weird thing is that if I use an XML validator on the web to validate against this particular URL directly, it is all good. Moreover, if I save the page source in Firefox or Chrome and then validate against the saved XML file, it's also all good. Since the error happened at the very beginning of the input (line 1, column 0) as indicated below, I was wondering if this is an encoding mismatch. However, according to the saved page source in Firefox or Chrome, there is the following at the beginning:

    <?xml version="1.0" encoding="UTF-8"?>

<program> =================================================

    #!/usr/bin/env python
    import urllib2
    from xml.dom.minidom import parseString

    fd = urllib2.urlopen('http://api.worldbank.org/countries')
    data = fd.read()
    fd.close()
    dom = parseString(data)

================================================= <error msg> =================================================

    Traceback (most recent call last):
      File "./bugReport.py", line 9, in <module>
        dom = parseString(data)
      File "/usr/lib/python2.7/xml/dom/minidom.py", line 1931, in parseString
        return expatbuilder.parseString(string)
      File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
        return builder.parseString(string)
      File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
        parser.Parse(string, True)
    xml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 0

=================================================

I'm running Python 2.7.5+ on Ubuntu 13.10. Thanks. Answer: <http://api.worldbank.org/countries> responds with a [`Content-Encoding` of `gzip`](https://en.wikipedia.org/wiki/HTTP_compression) even though urllib2 can't handle it and doesn't ask for it. You have to gunzip the response manually. See: [Does python urllib2 will automaticly uncompress gzip data from fetch webpage](http://stackoverflow.com/questions/3947120/does-python-urllib2-will-automaticly-uncompress-gzip-data-from-fetch-webpage).
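A sketch of the manual decompression for the script above (checking the header first, so an uncompressed response still works):

    import gzip
    import urllib2
    from StringIO import StringIO
    from xml.dom.minidom import parseString

    resp = urllib2.urlopen('http://api.worldbank.org/countries')
    data = resp.read()
    if resp.info().get('Content-Encoding') == 'gzip':
        data = gzip.GzipFile(fileobj=StringIO(data)).read()
    dom = parseString(data)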
Adding items to wxpython combobox...? Question: I am new to python programming and to this forum as well. I am preparing a wxpython program (GUI) which has 2 comboboxes. First is for the Teams and the second for team members. All the data is stored into a access database and I am successful in fetching and adding the teams list to the teams combobox. Now I want the team member combobox to load on the text change event of team combobox. for this I bind it using the EVT_TEXT method and called a self defined method to first take the team name from the team combobox and then run the query and load the relevant team members to the second combobox. to much extent I am successful but the team members are getting added to the team combobox itself. Please have a look at the below code first: import wx, pyodbc class MyFrame(wx.Frame): def __init__(self, Parent, Title): super(MyFrame, self).__init__(None, title=Title, size=(400,400)) #creating the panel in which all widgets will be stored/created. #self.panel1 = wx.Panel(self, pos=(1,1),size=(382,100),style=wx.RAISED_BORDER) #now creating the first Label inside the panel a = teamData() #rows = a.runQueryEmpList("Mama Badi") TmLst = a.runQueryTmList() abc=[] for r in TmLst: abc.append(r.Team_Name) #static box for the employee details self.myvbfrm = wx.StaticBox(self,-1,label="Employee Detail:-",pos=(1,1),size=(380,98)) #items for the employee details LblName = wx.StaticText(self.myvbfrm,-1, 'Team Name:-',(8,20)) LblName2 = wx.StaticText(self.myvbfrm,-1, 'Employee Name:-',(8,50)) # team combobox TeamList = myComboBox(self.myvbfrm,(200,20),(100,50)) TeamList.addItem(abc) # employee combobox #EmpList = wx.ComboBox(self.myvbfrm,-1,"",(200,50),(100,50)) EmpList = myComboBox(self.myvbfrm,(200,50),(100,50)) TeamList.Bind(wx.EVT_TEXT, TeamList.addTeamMember) #creating the panel in which all widgets will be stored/created. self.panel2 = wx.Panel(self, pos=(10,175),size=(365,150),style=wx.RAISED_BORDER) #now creating the first Label inside the panel myFont = wx.Font(8,wx.DECORATIVE, wx.NORMAL, wx.BOLD,True) myLblFont = wx.Font(12,wx.DECORATIVE, wx.NORMAL, wx.BOLD,True) #Lbl.SetFont(myFont) #Lbl.SetForegroundColour((0,0,0)) #Lbl.SetBackgroundColour((204,204,204)) self.SetBackgroundColour((237,237,237)) LblName.SetFont(myLblFont) LblName2.SetFont(myLblFont) # now under this __init__ method i will also initiate the method which # will create the MenuBar and the Menu Items. self.AddMenu() # now i will also have to create a method named as AddMenu so that # can be run (which will add the Menu Items. def AddMenu(self): myMenuBar = wx.MenuBar() myFileMenu = wx.Menu() myEditMenu = wx.Menu() exitBtn = myFileMenu.Append(wx.ID_EXIT, 'Exit App') editBtn = myEditMenu.Append(wx.ID_MOVE_FRAME, 'Move App') myMenuBar.Append(myFileMenu, '&File') myMenuBar.Append(myEditMenu, '&Edit') self.SetMenuBar(myMenuBar) self.Centre() self.Show() class teamData(): def runQueryEmpList(self,Tname): self.Tname = Tname # set up some constants myDb = 'D:\\Python projects\\Python programs\\trial.accdb' DRV = '{Microsoft Access Driver (*.mdb)}' PWD = 'pw' # connect to db conn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s' % (myDb)) cur = conn.cursor() # run a query and get the results SQL = 'SELECT Emp_Name FROM Table1 WHERE Team = ?' 
return cur.execute(SQL, self.Tname).fetchall() cur.close() conn.close() def runQueryTmList(self): # set up some constants myDb = 'D:\\Python projects\\Python programs\\trial.accdb' DRV = '{Microsoft Access Driver (*.mdb)}' PWD = 'pw' # connect to db conn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s' % (myDb)) cur = conn.cursor() # run a query and get the results SQL = 'SELECT Team_Name FROM Teams' return cur.execute(SQL).fetchall() cur.close() conn.close() class myComboBox(wx.ComboBox): def __init__(self, parent, lstposition, lstsize): super(myComboBox, self).__init__(parent, -1, value="", pos=lstposition, size=lstsize) def addItem(self, Lst=[]): self.Lst = Lst for el in self.Lst: self.Append(el) def addTeamMember(self,extra): self.extra = extra a = teamData() rows = a.runQueryEmpList(self.GetValue()) Emp_List=[] for r in rows: self.Append(r.Emp_Name) class myFrm(wx.StaticBox): def __init__(self, parent, lblstring, position, BxSize): super(myFrm, self).__init__(parent,-1,label=lblstring,pos=position,size=BxSize) def borderColor(self): self app = wx.App() frame = MyFrame(None, 'Clysdale Activity Tracker') app.MainLoop() I understand why they are getting added to the team combobox (because I am Append'ing the items to self). How should I refer to the other combobox in `addTeamMember()` method? Answer: Normally, you would bind to a handler in your frame or panel and have the comboboxes defined as class variables: self.TeamList = myComboBox(self.myvbfrm,(200,20),(100,50)) self.TeamList.Bind(wx.EVT_TEXT, self.onUpdate) self.EmpList = myComboBox(self.myvbfrm,(200,50),(100,50)) Then you could just append items in the handler: def onUpdate(self, event): team = self.TeamList.GetValue() if team == "Tigers": self.EmpList.append(some_list) If you want to keep your way of doing things, then create the EmpList object first and pass it into the myComboBox class as another parameter when you create TeamList. Then you can append to it in the TeamList instance. Something like the following should work: import wx, pyodbc class MyFrame(wx.Frame): def __init__(self, Parent, Title): super(MyFrame, self).__init__(None, title=Title, size=(400,400)) #creating the panel in which all widgets will be stored/created. #self.panel1 = wx.Panel(self, pos=(1,1),size=(382,100),style=wx.RAISED_BORDER) #now creating the first Label inside the panel a = teamData() #rows = a.runQueryEmpList("Mama Badi") TmLst = a.runQueryTmList() abc=[] for r in TmLst: abc.append(r.Team_Name) #static box for the employee details self.myvbfrm = wx.StaticBox(self,-1,label="Employee Detail:-",pos=(1,1),size=(380,98)) #items for the employee details LblName = wx.StaticText(self.myvbfrm,-1, 'Team Name:-',(8,20)) LblName2 = wx.StaticText(self.myvbfrm,-1, 'Employee Name:-',(8,50)) # employee combobox #EmpList = wx.ComboBox(self.myvbfrm,-1,"",(200,50),(100,50)) EmpList = EmpComboBox(self.myvbfrm,(200,50),(100,50)) # team combobox TeamList = TeamComboBox(self.myvbfrm,(200,20),(100,50), EmpList) TeamList.addItem(abc) TeamList.Bind(wx.EVT_TEXT, TeamList.addTeamMember) #creating the panel in which all widgets will be stored/created. 
self.panel2 = wx.Panel(self, pos=(10,175),size=(365,150),style=wx.RAISED_BORDER) #now creating the first Label inside the panel myFont = wx.Font(8,wx.DECORATIVE, wx.NORMAL, wx.BOLD,True) myLblFont = wx.Font(12,wx.DECORATIVE, wx.NORMAL, wx.BOLD,True) #Lbl.SetFont(myFont) #Lbl.SetForegroundColour((0,0,0)) #Lbl.SetBackgroundColour((204,204,204)) self.SetBackgroundColour((237,237,237)) LblName.SetFont(myLblFont) LblName2.SetFont(myLblFont) # now under this __init__ method i will also initiate the method which # will create the MenuBar and the Menu Items. self.AddMenu() # now i will also have to create a method named as AddMenu so that # can be run (which will add the Menu Items. def AddMenu(self): myMenuBar = wx.MenuBar() myFileMenu = wx.Menu() myEditMenu = wx.Menu() exitBtn = myFileMenu.Append(wx.ID_EXIT, 'Exit App') editBtn = myEditMenu.Append(wx.ID_MOVE_FRAME, 'Move App') myMenuBar.Append(myFileMenu, '&File') myMenuBar.Append(myEditMenu, '&Edit') self.SetMenuBar(myMenuBar) self.Centre() self.Show() class myComboBox(wx.ComboBox): def __init__(self, parent, lstposition, lstsize): super(myComboBox, self).__init__(parent, -1, value="", pos=lstposition, size=lstsize) def addItem(self, Lst=[]): self.Lst = Lst for el in self.Lst: self.Append(el) def addTeamMember(self,extra): raise NotImplementedError ######################################################################## class EmpComboBox(myComboBox): """""" pass ######################################################################## class TeamComboBox(myComboBox): """""" #---------------------------------------------------------------------- def __init__(self, parent, lstposition, lstsize, empComboBox=None): """Constructor""" super(myComboBox, self).__init__(parent, -1, value="", pos=lstposition, size=lstsize) self.empComboBox = empComboBox def addTeamMember(self,extra): self.extra = extra a = teamData() rows = a.runQueryEmpList(self.GetValue()) for r in rows: self.empComboBox.Append(r.Emp_Name) class myFrm(wx.StaticBox): def __init__(self, parent, lblstring, position, BxSize): super(myFrm, self).__init__(parent,-1,label=lblstring,pos=position,size=BxSize) def borderColor(self): self app = wx.App() frame = MyFrame(None, 'Clysdale Activity Tracker') app.MainLoop() Basically you'll want to subclass your ComboBox class and override the addTeamMember method. Since I couldn't run your code, I wasn't able to test the example above, but I believe it will work (although it might need a tweak here or there).
Python - Json export to txt adds unexpected characters/separators Question: I'm trying to write a json object to a txt file to be used in another program.

    f = {'nt' : 50, 'nt_array': [10,20,30] }
    json_obj = json.dumps(f)
    f=open('out.txt','w')
    f.write(json.dumps(json_obj))
    f.close()

This code produces a txt file with the following content:

    "{\"nt_array\": [10, 20, 30], \"nt\": 50}"

But I want this:

    {"nt_array": [10, 20, 30], "nt": 50}

It adds extra separators: \ and ". Answer: You encoded to JSON **twice**:

    >>> import json
    >>> obj = {'nt' : 50, 'nt_array': [10,20,30] }
    >>> print json.dumps(obj)
    {"nt_array": [10, 20, 30], "nt": 50}
    >>> print json.dumps(json.dumps(obj))
    "{\"nt_array\": [10, 20, 30], \"nt\": 50}"

Just use the [`json.dump()` function](http://docs.python.org/2/library/json.html#json.dump) (no `s` at the end) _once_ and write directly to the file:

    obj = {'nt' : 50, 'nt_array': [10,20,30] }
    with open('out.txt','w') as f:
        json.dump(obj, f)

Note the use of `with` to have the file closed automatically as well.
Python TKinter: frame and canvas will expand horizontally, but not vertically, when window is resized Question: For some reason, I was able to get this TKinter frame (`allValOuterFrame`) to expand both vertically and horizontally when its window is resized, but however, it appears that the canvas that the frame holds, as well as the frame and vertical scrollbar inside that canvas, will only expand horizontally. Could someone please explain why? Here is my code: # At first I only had "from X import *" but then discovered that # it is bad practice, so I also included "import X" statements so # that if I need to use something from tk or ttk explicitly, I can, # but I had 2/3 of my code done at that point so I had to leave in the # "import *" things. try: from Tkinter import * #python 2 import Tkinter as tk from ttk import * import ttk import tkMessageBox as msg import tkFileDialog as openfile except ImportError: from tkinter import * #python 3 import tkinter as tk from tkinter.ttk import * import tkinter.ttk as ttk from tkinter import messagebox as msg from tkinter import filedialog as openfile import csv # My stuff: from extractor import Analysis from extractor import createDictionariesAndLists as getRawData from nitrogenCorrector import correct as nCorrect from carbonCorrector import correct as cCorrect ( ... ) def createAllValWindow(self): allValWindow = Toplevel(self) allValWindow.grab_set() if self.element == "N": allValWindow.title("All Nitrogen Raw Data Values") elif self.element == "C": allValWindow.title("All Carbon Raw Data Values") else: allValWindow.title("All Raw Data Values") allValOuterFrame = tk.Frame(allValWindow,background="#00FF00") allValCanvas = Canvas(allValOuterFrame, borderwidth=0) allValInnerFrame = Frame(allValCanvas, borderwidth=5) def allValOnFrameConfigure(event): allValCanvas.configure(scrollregion=allValCanvas.bbox("all")) allValOuterFrame.grid_rowconfigure(0,weight=1) allValOuterFrame.grid_columnconfigure(0,weight=1) allValInnerFrame.grid_rowconfigure(0,weight=1) allValInnerFrame.grid_columnconfigure(0,weight=1) allValVertScrollbar = Scrollbar(allValOuterFrame, orient="vertical",command=allValCanvas.yview) allValHorizScrollbar = Scrollbar(allValOuterFrame, orient="horizontal",command=allValCanvas.xview) allValCanvas.configure(yscrollcommand=allValVertScrollbar.set, xscrollcommand=allValHorizScrollbar.set) allValVertScrollbar.grid(row=1,column=12,sticky=N+S) allValHorizScrollbar.grid(row=2,column=0,columnspan=12,sticky=E+W) allValCanvas.grid(row=1,column=0,columnspan=12,sticky=N+S+E+W) allValCanvas.create_window((4,4),window=allValInnerFrame,anchor="nw",tags="allValInnerFrame") allValInnerFrame.bind("<Configure>",allValOnFrameConfigure) allValDoneButton = Button(allValWindow,text="Done",command=allValWindow.destroy) allValOuterFrame.pack(fill="both",expand=1) allValDoneButton.pack() textRows = [self.rawHeader] textLabels = [[]] numDiffAminos = len(self.analyses) - 1 # Ignore the "trash" list at the end for singleAminoAnalyses in self.analyses[0:numDiffAminos]: # Once again, ignoring the last list if len(singleAminoAnalyses) < 1: continue for analysis in singleAminoAnalyses: textRows.append([str(analysis.analysisID), str(analysis.row), str(analysis.identifier1), str(analysis.identifier2), str(analysis.comment), str(analysis.peakNumber), str(analysis.rt), str(analysis.component), str(analysis.areaAll), str(analysis.ampl), str(analysis.r), str(analysis.delta)]) textLabels.append([]) for i in range(len(textRows)): if i == 0: listRow = i else: listRow = i+1 for j in 
range(len(textRows[i])): if i == 0: textLabels[i].append(Label(allValInnerFrame,text=textRows[i][j],font=("Fixedsys",10,"bold"))) else: textLabels[i].append(Label(allValInnerFrame,text=textRows[i][j])) if j == 9: textLabels[i][j].grid(row=listRow,column=j,sticky=W+E,padx=(4,10)) else: textLabels[i][j].grid(row=listRow,column=j,sticky=W+E,padx=(4,4)) if i == 0: separator = tk.Frame(allValInnerFrame, height=2, borderwidth=1, bg="black", relief=SUNKEN) separator.grid(row=1,column=0,columnspan=12,sticky=W+E) Answer: It is because you give row 0 of `AllValOuterFrame` a weight of 1, but you put the canvas in row 1. If you move the canvas to row 0 or give row 1 a weight of 1, it will resize the way you expect.
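Concretely, with the layout in the question (the canvas in row 1, the scrollbars in column 12 and row 2), the fix amounts to giving the weight to the row the canvas actually occupies:

    allValOuterFrame.grid_rowconfigure(1, weight=1)    # row 1 holds the canvas
    allValOuterFrame.grid_columnconfigure(0, weight=1)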
python log n choose k

Question: scipy.misc.comb, which returns n choose k, is implemented using the gammaln function. Is there a function that stays in log space? I see there is no scipy.misc.combln or anything similar. It is trivial to implement myself, but it would be convenient if it were already in a package somewhere. I don't see it in scipy.misc, and it feels wasteful to convert to normal space and then back to log.

Answer: Looking at [the source code](https://github.com/scipy/scipy/blob/v0.13.0/scipy/misc/common.py#L246), it seems you're right: it would be trivial to implement, and it likely isn't implemented elsewhere in scipy. On the plus side, the scipy version does some error checking, which you could eliminate if you do those checks elsewhere (this is similar to getting rid of the exponential). If you know you will always have `0 <= k <= N` (with `k` and `N` possibly arrays), it comes down to:

    from scipy import special

    def chooseln(N, k):
        return special.gammaln(N+1) - special.gammaln(N-k+1) - special.gammaln(k+1)
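A quick sanity check of that helper (restated here so the snippet runs on its own): exponentiating the result should reproduce the ordinary binomial coefficient, and since `gammaln` broadcasts, it works elementwise on arrays as well.

    import numpy as np
    from scipy import special

    def chooseln(N, k):
        return special.gammaln(N + 1) - special.gammaln(N - k + 1) - special.gammaln(k + 1)

    # log(52 choose 5) exponentiates back to 2598960.
    print np.exp(chooseln(52, 5))                                    # ~2598960.0
    # Elementwise over arrays: C(10, 3) = 120, C(20, 10) = 184756.
    print np.exp(chooseln(np.array([10.0, 20.0]), np.array([3.0, 10.0])))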
Python: Run Script Under Same Window

Question: I am trying to set up a program where, when someone enters a command, it runs that command, which is a script in a subfolder called "lib". Here is my code:

    import os

    while 1:
        cmd = input(' >: ')
        for file in os.listdir('lib'):
            if file.endswith('.py'):
                try:
                    os.system(str(cmd + '.py'))
                except FileNotFoundError:
                    print('Command Not Found.')

I have a file lib/new_user.py, but when I try to run it I get this error:

    Traceback (most recent call last):
      File "C:/Users/Daniel/Desktop/Wasm/Exec.py", line 8, in <module>
        exec(str(cmd + '.py'))
      File "<string>", line 1, in <module>
    NameError: name 'new_user' is not defined

Does anyone know a way around this? I would prefer the script to be executed in the same window, so it doesn't open a completely new one to run the code. This may be a really noob question, but I have not been able to find anything on this.

Thanks, Daniel Alexander

Answer:

    os.system(os.path.join('lib', cmd + '.py'))

You're invoking `new_user.py`, but it is not in the current directory. You need to construct `lib/new_user.py`. (I'm not sure what any of this has to do with windows.)

However, a better approach for executing Python code from Python is to make the scripts into modules and use `import`:

    import importlib
    cmd_module = importlib.import_module('lib.' + cmd)  # the package arg of import_module only applies to relative names
    cmd_module.execute()

(Assuming you have a function `execute` defined in `lib/new_user.py`, and that `lib` is a package, i.e. contains an `__init__.py`.)
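Putting the pieces together, here is a sketch of the import-based dispatcher. Two assumptions here are not in the question: `lib` must be a package (contain an `__init__.py`), and each command module must define an `execute()` entry point, a convention taken from the answer rather than from the question's scripts.

    import importlib

    def run_command(cmd):
        try:
            module = importlib.import_module('lib.' + cmd)
        except ImportError:
            print('Command Not Found.')
            return
        module.execute()  # runs in this process, so in the same window

    while True:
        run_command(input(' >: ').strip())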
Python: How to download file using range of bytes?

Question: I want to download a file in multi-threaded mode, and I have the following code:

    #!/usr/bin/env python
    import httplib

    def main():
        url_opt = '/film/0d46e21795209bc18e9530133226cfc3/7f_Naruto.Uragannie.Hroniki.001.seriya.a1.20.06.13.mp4'
        headers = {}
        headers['Accept-Language'] = 'en-GB,en-US,en'
        headers['Accept-Encoding'] = 'gzip,deflate,sdch'
        headers['Accept-Charset'] = 'max-age=0'
        headers['Cache-Control'] = 'ISO-8859-1,utf-8,*'
        headers['Cache-Control'] = 'max-age=0'
        headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 5.1)'
        headers['Connection'] = 'keep-alive'
        headers['Accept'] = 'text/html,application/xhtml+xml,application/xml,*/*'
        headers['Range'] = ''
        conn = httplib.HTTPConnection('data09-cdn.datalock.ru:80')
        conn.request("GET", url_opt, '', headers)
        print "Request sent"
        resp = conn.getresponse()
        print resp.status
        print resp.reason
        print resp.getheaders()
        file_for_wirte = open('cartoon.mp4', 'w')
        file_for_wirte.write(resp.read())
        print resp.read()
        conn.close()

    if __name__ == "__main__":
        main()

Here is the output:

    Request sent
    200
    OK
    [('content-length', '62515220'), ('accept-ranges', 'bytes'), ('server', 'nginx/1.2.7'), ('last-modified', 'Thu, 20 Jun 2013 12:10:43 GMT'), ('connection', 'keep-alive'), ('date', 'Fri, 14 Feb 2014 07:53:30 GMT'), ('content-type', 'video/mp4')]

This code works, but I don't understand from the documentation how to download the file in ranges. The server's response says it supports ranges in the 'bytes' unit, with a content size of 62515220:

    ('content-length', '62515220'), ('accept-ranges', 'bytes')

However, this request downloads the whole file. What I want is to first find out, without downloading, whether the server supports HTTP range queries for this file and what its content size is. And how can I create an HTTP query with a range (e.g. 0~25000)?

Answer: Pass a [`Range`](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35) header with `bytes=start_offset-end_offset` as the range specifier. For example, the following code retrieves the first 300 bytes (`0-299`):

    >>> import httplib
    >>> conn = httplib.HTTPConnection('localhost')
    >>> conn.request("GET", '/', headers={'Range': 'bytes=0-299'})  # <----
    >>> resp = conn.getresponse()
    >>> resp.status
    206
    >>> resp.status == httplib.PARTIAL_CONTENT
    True
    >>> resp.getheader('content-range')
    'bytes 0-299/612'
    >>> content = resp.read()
    >>> len(content)
    300

**NOTE** Both `start_offset` and `end_offset` are inclusive.

**UPDATE** If the server does not understand the `Range` header, it will respond with status code 200 (`httplib.OK`) instead of 206 (`httplib.PARTIAL_CONTENT`), and it will send the whole content. To make sure the server responded with partial content, check the status code.

    >>> resp.status == httplib.PARTIAL_CONTENT
    True
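Tying this back to the question's two goals, here is a sketch of the full flow: a HEAD request learns the size and range support without downloading anything, then ranged GETs fetch the body chunk by chunk. The host and path are placeholders, and it assumes the server keeps the connection alive between requests (otherwise open a new connection per chunk).

    import httplib

    HOST, PATH = 'example.com', '/big_file.mp4'  # placeholders
    CHUNK = 25000

    conn = httplib.HTTPConnection(HOST)
    conn.request('HEAD', PATH)
    resp = conn.getresponse()
    resp.read()  # drain the (empty) body so the connection can be reused
    size = int(resp.getheader('content-length'))
    supports_ranges = resp.getheader('accept-ranges') == 'bytes'

    if supports_ranges:
        with open('out.mp4', 'wb') as f:
            for start in xrange(0, size, CHUNK):
                end = min(start + CHUNK, size) - 1  # Range offsets are inclusive
                conn.request('GET', PATH, headers={'Range': 'bytes=%d-%d' % (start, end)})
                part = conn.getresponse()
                assert part.status == httplib.PARTIAL_CONTENT
                f.write(part.read())
    conn.close()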
how to access gmail use python?

Question: I want to access Gmail using Python and OAuth 2.0, so I downloaded oauth2.py from "<http://google-mail-oauth2-tools.googlecode.com/svn/trunk/python/oauth2.py>". This is my demo:

    import oauth2
    import imaplib
    import email
    from oauth2client.client import OAuth2WebServerFlow
    from launchpadlib.credentials import access_token_page

    email = 'karlvorndoenitz@gmail.com'
    client_id = 'client_id'
    client_secret = 'client_secret'

    # Check https://developers.google.com/drive/scopes for all available scopes
    OAUTH_SCOPE = 'https://mail.google.com/'

    # Redirect URI for installed apps
    REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'

    flow = OAuth2WebServerFlow(client_id, client_secret, OAUTH_SCOPE, REDIRECT_URI)
    authorize_url = flow.step1_get_authorize_url()
    print 'Go to the following link in your browser: ' + authorize_url
    authorization_code = raw_input("Please input the code:").strip()
    response = oauth2.AuthorizeTokens(client_id, client_secret, authorization_code)
    access_token = response['access_token']
    auth_string = oauth2.GenerateOAuth2String(email, access_token, base64_encode=True)
    print auth_string
    imap_conn = imaplib.IMAP4_SSL('imap.gmail.com')
    imap_conn.debug = 4
    imap_conn.authenticate('XOAUTH2', lambda x: auth_string)
    imap_conn.select('INBOX')

But the demo has some bugs, and I don't know how to debug it. The information from the console:

    16:58.52 > KOMN1 AUTHENTICATE XOAUTH2
    16:58.79 < +
    16:58.79 write literal size 204
    16:59.27 < + eyJzdGF0dXMiOiI0MDAiLCJzY2hlbWVzIjoiQmVhcmVyIiwic2NvcGUiOiJodHRwczovL21haWwuZ29vZ2xlLmNvbS8ifQ==
    16:59.27 write literal size 204
    16:59.27 < KOMN1 NO Invalid SASL argument. d10if1169757igr.56
    16:59.27 NO response: Invalid SASL argument. d10if1169757igr.56
    Traceback (most recent call last):
      File "/home/karl/workspace/Gmail/download_gmail/download_gmail_api.py", line 33, in <module>
        imap_conn.authenticate('XOAUTH2',lambda x:auth_string)
      File "/usr/lib/python2.7/imaplib.py", line 351, in authenticate
        raise self.error(dat[-1])
    imaplib.error: Invalid SASL argument. d10if1169757igr.56

I need help.

Answer: The `auth_string` should not be base64-encoded; I believe `IMAP4.authenticate` encodes it for you. It worked for me anyway. For example:

    auth_string = 'user=%s\1auth=Bearer %s\1\1' % (user, access_token)
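A minimal sketch of the corrected authentication step: build the raw, un-encoded SASL string yourself and let `imaplib` handle the base64 step. The token value is a placeholder; this assumes you already obtained a valid `access_token` through the OAuth flow above.

    import imaplib

    user = 'karlvorndoenitz@gmail.com'
    access_token = '...'  # placeholder: result of the OAuth flow

    # Raw XOAUTH2 string: note there is no base64_encode=True here.
    auth_string = 'user=%s\1auth=Bearer %s\1\1' % (user, access_token)

    imap_conn = imaplib.IMAP4_SSL('imap.gmail.com')
    imap_conn.authenticate('XOAUTH2', lambda x: auth_string)
    imap_conn.select('INBOX')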
Sieve Of Atkin Implementation in Python

Question: I am trying to implement the Sieve of Atkin algorithm as given on Wikipedia: [Sieve Of Atkin](http://en.wikipedia.org/wiki/Sieve_of_Atkin). Here is my Python implementation so far:

    import math

    is_prime = list()
    limit = 100
    for i in range(5, limit):
        is_prime.append(False)

    for x in range(1, int(math.sqrt(limit)) + 1):
        for y in range(1, int(math.sqrt(limit)) + 1):
            n = 4*x**2 + y**2
            if n <= limit and (n % 12 == 1 or n % 12 == 5):
                # print "1st if"
                is_prime[n] = not is_prime[n]
            n = 3*x**2 + y**2
            if n <= limit and n % 12 == 7:
                # print "Second if"
                is_prime[n] = not is_prime[n]
            n = 3*x**2 - y**2
            if x > y and n <= limit and n % 12 == 11:
                # print "third if"
                is_prime[n] = not is_prime[n]

    for n in range(5, int(math.sqrt(limit))):
        if is_prime[n]:
            for k in range(n**2, limit + 1, n**2):
                is_prime[k] = False

    print 2, 3
    for n in range(5, limit):
        if is_prime[n]:
            print n

I get this error:

    is_prime[n] = not is_prime[n]
    IndexError: list index out of range

This means I am accessing a list index greater than the length of the list. Consider the case when x and y are at the top of their ranges: n = 4x**2 + y**2 will of course be greater than the length of the list. Am I doing something wrong here? Please help.

**EDIT 1** As suggested by Gabe, using

    is_prime = [False] * (limit + 1)

instead of:

    for i in range(5, limit):
        is_prime.append(False)

solved the problem.

Answer: Your problem is that your limit is 100, but your `is_prime` list only has `limit - 5` elements in it, due to being initialized with `range(5, limit)`. Since this code assumes it can access indices up to `limit`, you need `limit + 1` elements in it:

    is_prime = [False] * (limit + 1)

Note that it doesn't matter that `4x^2 + y^2` can be greater than `limit`, because the code always checks `n <= limit` before indexing.
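For reference, here is the question's sieve with the fix applied and wrapped in a function. It is a sketch rather than the canonical algorithm: besides the list-size fix, the elimination loop runs up to sqrt(limit) inclusive and the final scan includes limit itself, two small off-by-one corrections to the original.

    import math

    def atkin(limit):
        is_prime = [False] * (limit + 1)  # indexable up to limit itself
        root = int(math.sqrt(limit))
        for x in range(1, root + 1):
            for y in range(1, root + 1):
                n = 4*x**2 + y**2
                if n <= limit and n % 12 in (1, 5):
                    is_prime[n] = not is_prime[n]
                n = 3*x**2 + y**2
                if n <= limit and n % 12 == 7:
                    is_prime[n] = not is_prime[n]
                n = 3*x**2 - y**2
                if x > y and n <= limit and n % 12 == 11:
                    is_prime[n] = not is_prime[n]
        for n in range(5, root + 1):
            if is_prime[n]:
                for k in range(n**2, limit + 1, n**2):
                    is_prime[k] = False
        return [2, 3] + [n for n in range(5, limit + 1) if is_prime[n]]

    print atkin(100)  # [2, 3, 5, 7, 11, ..., 89, 97]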
Namespacing inside tags with Python lxml's SubElement

Question: I'm using Python and the lxml library to produce an XML file that I want to look like this:

    <item>
      <a:text>hello</a:text>
    </item>

However, I can't manage to produce this. I've tried the following code:

    import lxml.etree as etree
    item = etree.Element('item')
    el = etree.SubElement(item, 'text', nsmap={'a': 'http://example.com/'})

But then I end up with:

    <item>
      <text xmlns="http://example.com/">hello</text>
    </item>

I also tried this after some inspiration from the lxml namespaces documentation (<http://lxml.de/tutorial.html#namespaces>):

    import lxml.etree as etree
    item = etree.Element('item')
    el = etree.SubElement(item, '{a}text')

But that gives me:

    <item>
      <ns1:text xmlns:ns1="a">hello</ns1:text>
    </item>

Is there any way to get the XML format I need with lxml?

Answer: The first thing to note is that this...

    <item>
      <a:text>hello</a:text>
    </item>

...is not valid XML. `a:` is a namespace _prefix_, but somewhere you have to map it to an actual namespace, as in:

    <item xmlns:a="http://example.com/">
      <a:text>hello</a:text>
    </item>

As you read in the lxml documentation, you can use the `{namespace}element` syntax to specify a namespace... but this uses an actual namespace, _not_ a namespace prefix (which is why your second example did not work as expected). You can get what I think you want like this:

    >>> from lxml import etree
    >>> item = etree.Element('item', nsmap={'a': 'http://example.com/'})
    >>> e1 = etree.SubElement(item, '{http://example.com/}text')

Which gives you:

    >>> print etree.tostring(item, pretty_print=True)
    <item xmlns:a="http://example.com/">
      <a:text/>
    </item>

It's also worth noting that from the perspective of XML, the above is _exactly_ equivalent to:

    <item>
      <text xmlns="http://example.com/">hello</text>
    </item>
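To round the answer off, here is a short sketch that produces exactly the XML the question asked for, including the text content:

    from lxml import etree

    NS = 'http://example.com/'
    item = etree.Element('item', nsmap={'a': NS})
    text_el = etree.SubElement(item, '{%s}text' % NS)
    text_el.text = 'hello'
    print etree.tostring(item, pretty_print=True)
    # <item xmlns:a="http://example.com/">
    #   <a:text>hello</a:text>
    # </item>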
Python uses a lot of RAM

Question: I wrote the simple code below to compare the speed of Python and Fortran.

    import time

    start = time.time()
    x = 0
    for i in range(1, 1000000000):
        x = x + i
        x = 0
    end = time.time()
    print end - start

I set x = 0 because in Fortran it overflows. The Fortran version returns the answer in 9 seconds, but when I run the Python version, it takes all 24 GB of my system's RAM and the program crashes. What's the problem?

Answer: You are using `range()`, which produces a list object of all the integers in the range. You produced a list object with 10**9 integers in it. On a 64-bit machine that takes about 30 GB to hold all those objects, provided your OS will let you allocate that much.

Use the [`xrange()` function](http://docs.python.org/2/library/functions.html#xrange) instead; this produces an object that only yields numbers as you iterate:

    >>> range(10)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> xrange(10)
    xrange(10)
    >>> list(xrange(10))
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Summing 1 billion integers will still take time in Python; you can use `sum(xrange(10**9))` to delegate most of the work to C code here, but you'll still be producing 1 billion integer objects for the process. My rather aging 2008 vintage Macbook Pro does this:

    >>> import timeit
    >>> timeit.timeit('sum(xrange(10**9))', number=3)
    38.89629793167114

so it takes, on average, 13 seconds to produce and destroy 1 billion `int` objects. The actual _summing_ is dwarfed by those allocations and deallocations here.

Use [NumPy](http://www.numpy.org/) for calculations involving large datasets instead; a NumPy `array` holds C datatypes, not Python objects, so it will take vastly less memory, and the library offers optimized routines that will beat anything pure Python can do hands-down.
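A sketch of that NumPy route, chunked so peak memory stays around 80 MB (a single `np.arange(1, 10**9)` would itself need roughly 8 GB of int64 storage):

    import numpy as np

    # Sum 1..10**9 - 1, matching the question's range(1, 1000000000),
    # in 10**7-element chunks; each chunk is summed in C over int64s.
    total = 0
    for start in xrange(1, 10**9, 10**7):
        stop = min(start + 10**7, 10**9)
        total += np.arange(start, stop, dtype=np.int64).sum()
    print total  # 499999999500000000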