numpy: apply operation to multidimensional array Question: Assume I have a matrix of matrices, which is an order-4 tensor. What's the best way to apply the same operation to all the submatrices, similar to Map in Mathematica? #!/usr/bin/python3 from pylab import * t=random( (8,8,4,4) ) #t2=my_map(det,t) #then shape(t2) becomes (8,8) **EDIT** Sorry for the bad English, since it's not my native one. I tried `numpy.linalg.det`, but it doesn't seem to cope well with 3D or 4D tensors: >>> import numpy as np >>> a=np.random.rand(8,8,4,4) >>> np.linalg.det(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 1703, in det sign, logdet = slogdet(a) File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 1645, in slogdet _assertRank2(a) File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 155, in _assertRank2 'two-dimensional' % len(a.shape)) numpy.linalg.linalg.LinAlgError: 4-dimensional array given. Array must be two-dimensional **EDIT2 (Solved)** The problem is older numpy version (<1.8) doesn't support inner loop in `numpy.linalg.det`, updating to numpy 1.8 solves the problem. Answer: numpy 1.8 has some gufunc that can do this in C loop: for example, `numpy.linalg.det()` is a gufunc: import numpy as np a = np.random.rand(8,8,4,4) np.linalg.det(a)
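A quick check of the shapes involved, assuming numpy >= 1.8 is installed for the gufunc path; the list-comprehension fallback is only a sketch for older versions:

    import numpy as np

    a = np.random.rand(8, 8, 4, 4)

    # With numpy >= 1.8, det loops over the leading axes in C and
    # returns one determinant per 4x4 submatrix:
    d = np.linalg.det(a)
    print(d.shape)  # (8, 8)

    # Fallback sketch for numpy < 1.8: loop in Python over the submatrices
    d_old = np.array([np.linalg.det(m) for m in a.reshape(-1, 4, 4)]).reshape(8, 8)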
How to make a loop repeat itself n number of times - Python 3 Question: I started programming 2 weeks ago for the first time in my life and I have come across something that I cannot figure out. I am trying to make it so a loop that calculates a median from a set amount of random numbers can repeat itself many times (say 10,000) while storing all the median values into a list. I already have everything up to the point where a list is created from the random integers (numbList) from which the median (listMedian) is calculated. I would just like to be able to repeat this process a number of times while generating a list of all the calculated medians. Sample size is how many numbers that are per list and upper limit determines what the range is for each individual number, thanks! I am using Python 3. import random def median(numbList): srtd = sorted(numbList) mid = len(numbList)//2 if len(numbList) % 2 == 0: return (srtd[mid-1] + srtd[mid]) / 2.0 else: return srtd[mid] sampleSize = int(input("What is your desired sample size? ")) upperLimit = int(input("What is your desired upper limit? ")) numbList = [] totalMedians = [] biggerList = [] while sampleSize > 0: sampleSize -= 1 randomNum = random.randrange(0,upperLimit+1) numbList.append(randomNum) numbList.sort(key=int) listMedian = median(numbList) Answer: Here's a simple example of what you want: #!/usr/bin/python import random def create_list(sampleSize, upperLimit): numbList = [] while sampleSize > 0: sampleSize -= 1 randomNum = random.randrange(0,upperLimit+1) numbList.append(randomNum) numbList.sort(key=int) return numbList def median(numList): list_len = len(numList) if list_len % 2: return numList[list_len / 2] else: return (numList[list_len / 2] + numList[list_len / 2 - 1]) / 2.0 def main(): number_lists = 4 sample_size = 5 upper_limit = 50 lists = [] median_list = [] for i in range(number_lists): lists.append(create_list(sample_size, upper_limit)) for current_list in lists: current_median = median(current_list) print current_list, " : median (", current_median, ")" median_list.append(current_median) print "Median list is ", median_list if __name__ == "__main__": main() which outputs, for example: paul@MacBook:~/Documents/src/scratch$ ./sample.py [3, 18, 20, 26, 46] : median ( 20 ) [18, 22, 38, 44, 49] : median ( 38 ) [28, 29, 34, 42, 43] : median ( 34 ) [4, 21, 27, 31, 46] : median ( 27 ) Median list is [20, 38, 34, 27] paul@MacBook:~/Documents/src/scratch$
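Since the question targets Python 3 while the answer above is Python 2, here is a minimal Python 3 sketch of the same idea, reusing the asker's median() and collecting 10,000 medians; the sample size and upper limit are just example values:

    import random

    def median(numbList):
        srtd = sorted(numbList)
        mid = len(numbList) // 2
        if len(numbList) % 2 == 0:
            return (srtd[mid - 1] + srtd[mid]) / 2.0
        return srtd[mid]

    sample_size = 5      # numbers per list
    upper_limit = 50     # range of each random number
    repeats = 10000      # how many medians to collect

    total_medians = [
        median([random.randrange(0, upper_limit + 1) for _ in range(sample_size)])
        for _ in range(repeats)
    ]
    print(len(total_medians), total_medians[:5])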
Do a maven build in a python script Question: I am checking out source from a given URL using a python script and I want to go to the downloadedFolder/src directory and perform a `mvn clean install`. I want to do it in the same script. Thanks in advance. Answer: You can do the following: import os import subprocess # Context Manager to change current directory. # I looked at this implementation on stackoverflow but unfortunately do not have the link # to credit the user who wrote this part of the code. class changeDir: def __init__(self, newPath): self.newPath = os.path.expanduser(newPath) # Change directory with the new path def __enter__(self): self.savedPath = os.getcwd() os.chdir(self.newPath) # Return back to previous directory def __exit__(self, etype, value, traceback): os.chdir(self.savedPath) # folderPath = path of the folder you want to run mvn clean install on with changeDir(folderPath): # ****** NOTE ******: using shell=True is strongly discouraged since it poses security risks subprocess.call(["mvn", "clean", "install"], shell=True)
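A simpler sketch that avoids both the chdir context manager and shell=True, assuming mvn is on the PATH (on Windows the executable may be mvn.cmd) and that folder_path points at downloadedFolder/src:

    import subprocess

    folder_path = "/path/to/downloadedFolder/src"  # assumption: adjust to your checkout

    # cwd= changes the working directory for the child process only,
    # so the script's own working directory is untouched.
    subprocess.check_call(["mvn", "clean", "install"], cwd=folder_path)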
python xml parse (minidom) Question: I need to read data from this XML file. I don´t know, how I have to read data aaaaa, bbbbb, ccccc, ddddd, eeeee, fffff and ggggg from this XML file. <Episode> <Section type="report" startTime="0" endTime="10"> <Turn startTime="0" endTime="2.284" speaker="spk1"> <Sync time="0"/> aaaaa <Sync time="0.93"/> bbbbb </Turn> <Turn speaker="spk2" startTime="2.284" endTime="6.458"> <Sync time="2.284"/> ccccc <Sync time="3.75"/> ddddd <Sync time="4.911"/> eeeee </Turn> <Turn speaker="spk3" startTime="6.458" endTime="10"> <Sync time="6.458"/> fffff <Sync time="8.467"/> ggggg <Sync time="9.754"/> </Turn> </Section> </Episode> I write this code: # -*- coding: UTF-8-*- from xml.etree import ElementTree as ET import os from xml.dom import minidom dom = minidom.parse("aaa.trs") conference=dom.getElementsByTagName('Turn') for node in conference: conf_name=node.getAttribute('speaker') print conf_name sync=node.getElementsByTagName('Sync') for s in sync: s_name=s.getAttribute('time') print s_name Output is: sp1 sp2 sp3 But the output should be: sp1 aaaaa bbbbb sp2 ccccc ddddd eeeee sp3 fffff ggggg Any suggestions? Thank you. Answer: One way is to get the `nextSibling` of every `Sync` node: conference = dom.getElementsByTagName('Turn') for node in conference: conf_name = node.getAttribute('speaker') print conf_name sync = node.getElementsByTagName('Sync') for s in sync: print s.nextSibling.nodeValue.strip() prints: spk1 aaaaa bbbbb spk2 ccccc ddddd eeeee spk3 fffff ggggg Also, you can achieve the same result with `ElementTree` by getting the `tail` of each `Sync` node: tree = ET.parse("aaa.trs") for turn in tree.findall('.//Turn'): print turn.attrib.get('speaker') for sync in turn.findall('.//Sync'): print sync.tail.strip() Hope that helps.
Django + Apache + mod_wsgi = Bad Request (400) Question: I'm trying to get my app launched on VPS in `Debug=True` mode. I'm using Django 1.6 with Python 2.7. I tried simple wscgi script and found that it works well (basically returns 200 and "Hello world" in text/plain) to the browser. Here's my configuration: **virtual host config** <VirtualHost *:80> ServerName subdomain.domain.info ServerAlias www.subdomain.sigizmund.info WSGIScriptAlias / /var/www/subdomain/index.wsgi Alias /static/ /var/www/subdomain/static/ ErrorLog /tmp/subdomain.error.log CustomLog /tmp/subdomain.custom.log common LogLevel debug <Location "/static/"> Options -Indexes </Location> <Directory /home/sgzmd/code/myproject> Order deny,allow Allow from all </Directory> </VirtualHost> **index.wsgi** import os import sys import site # Add the site-packages of the chosen virtualenv to work with site.addsitedir('/home/sgzmd/.virtualenvs/myenv/local/lib/python2.7/site-packages') PROJECT_PATH = '/home/sgzmd/code/myproject' if PROJECT_PATH not in sys.path: sys.path.insert(0, PROJECT_PATH) os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' # Activate your virtual env activate_env=os.path.expanduser("~/.virtualenvs/myenv/bin/activate_this.py") execfile(activate_env, dict(__file__=activate_env)) import django.core.handlers.wsgi import cStringIO import pprint class LoggingMiddleware: def __init__(self, application): self.__application = application def __call__(self, environ, start_response): errors = environ['wsgi.errors'] pprint.pprint(('REQUEST', environ), stream=errors) def _start_response(status, headers, *args): pprint.pprint(('RESPONSE', status, headers), stream=errors) return start_response(status, headers, *args) return self.__application(environ, _start_response) application = LoggingMiddleware(django.core.handlers.wsgi.WSGIHandler()) I have following logging output: <http://pastebin.com/SnZVEeT1> (from LoggingMiddleware) and `/var/log/django/error.log` while is being created and re-created, remains completely empty. I figured that the app is loading, by editing settings.py of my project, which content is available here: <http://pastebin.com/Byr8RStb> Would appreciate any pointers and ideas, as I'm basically out of options now. Answer: This will not work for a start: activate_env=os.path.expanduser("~/.virtualenvs/myenv/bin/activate_this.py") This is because Apache runs as a special user and ~ will not expand to your own user where virtualenv was setup. No idea if this is relevant or not though.
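A minimal sketch of the fix suggested above: spell out the virtualenv path instead of relying on ~, since Apache's user has a different home directory. The path below is taken from the question and is an assumption about your layout:

    # index.wsgi (excerpt)
    activate_env = "/home/sgzmd/.virtualenvs/myenv/bin/activate_this.py"
    execfile(activate_env, dict(__file__=activate_env))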
Getting empty 'Ssl_cipher' with MySQLdb SSL connection to Amazon RDS Question: I've just spent a week on the problems recorded in this question: [Why does the CA information need to be in a tuple for MySQLdb?](http://stackoverflow.com/questions/21315427/why-does-the-ca- information-need-to-be-in-a-tuple-for-mysqldb) Have now boiled it down to one problem. Here's a script that connects to the MySQL server I have on Amazon RDS: #! /usr/bin/env python import MySQLdb ssl = ({'ca': '/home/James/Downloads/mysql-ssl-ca-cert-copy.pem'},) conn = MySQLdb.connect(host='db.doopdoop.eu-west-n.rds.amazonaws.com', user='user', passwd='pass', ssl=ssl) cursor = conn.cursor() cursor.execute("SHOW STATUS LIKE 'Ssl_Cipher'") print cursor.fetchone() This gives me back `('Ssl_cipher', '')`, which I gather means the connection is not encrypted. Have also tried this using Django's `manage.py shell` function, and got the same result. Why, then, am I getting no exceptions? Data is flowing, certainly, but apparently the security is just being ignored. Any help on where I'm going wrong here would be appreciated. I have tried updating MySQL-Python to 1.2.5 with no success. Answer: Possible workaround for your issue: Use a default file. It will look something like this: [mysql] host = db.doopdoop.eu-west-n.rds.amazonaws.com user = user password = pass ssl-ca = /home/James/Downloads/mysql-ssl-ca-cert-copy.pem Then change your connect() call to: conn = MySQLdb.connect(read_default_file=options_path) where options_path is the path to the file above. This also keeps authentication data out of your code. Django settings will look like this: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'OPTIONS': { 'read_default_file': '/path/to/my.cnf', }, } } Ref: <https://docs.djangoproject.com/en/dev/ref/databases/#connecting-to-the- database>
how to extract reviews from iframeurl returned by amazon api in python? Question: I am trying to get the `text` content of reviews of a given product in `amazon` using its `api`. But I am not able to work it out. Here is what I have: result = api.item_lookup('B00062B6QY', ResponseGroup='Reviews', TruncateReviewsAt=256, IncludeReviewsSummary=False) iframeurl=result.xpath('//*[local-name()="IFrameURL"]/text()')[0].strip() print iframeurl reviews=requests.get(iframeurl) reviews.raise_for_status() #data = json.loads(reviews.text) root = ET.fromstring(reviews.text) print root The output is: http://www.amazon.com/reviews/iframe?akid=helloworld&alinkCode=xm2&asin=B00062B6QY&atag=welcomehome-20&exp=2014-01-28T19%3A06%3A20Z&summary=0&truncate=256&v=2&sig=HIDDEN%3D Traceback (most recent call last): File "amazon_api_new.py", line 36, in <module> root = ET.fromstring(reviews.text) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML parser.feed(text) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed self._raiseerror(v) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror raise err xml.etree.ElementTree.ParseError: mismatched tag: line 867, column 2 PS: I have changed the `iframeurl` printed out just to clear the `api key` details EDIT: image![enter image description here](http://i.stack.imgur.com/6SIa6.png) from `firebug` Answer: instead of using ElementTree, try to load `reviews.text` to [lxml](http://lxml.de/parsing.html) like: >>> from lxml import etree >>> parser = etree.HTMLParser() >>> tree = etree.parse(StringIO(reviews.text), parser) >>> result = etree.tostring(tree.getroot(), ... pretty_print=True, method="html") >>> print(result) ... of course, you can then use [lxml xpath](http://lxml.de/xpathxslt.html) for further parsing
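A slightly fuller sketch of the lxml approach, assuming `reviews` is the requests response from the question; the `reviewText` class name is an assumption about Amazon's markup, which changes over time:

    from StringIO import StringIO   # io.StringIO on Python 3
    from lxml import etree

    parser = etree.HTMLParser()
    tree = etree.parse(StringIO(reviews.text), parser)

    # Pull the review text blocks out with XPath:
    for div in tree.xpath('//div[@class="reviewText"]'):
        print div.xpath('string()').strip()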
Inheriting a base class for nose tests Question: I'm trying to implement an integration test framework using nose. At the core, I'd like a base class that all test classes inherit. I'd like to have a class setup function that is called as well as the per test setup function. When I use `nosetests a_file.py -vs` where *a_file.py* looks like this: from nose import tools class BaseClass(object): def __init__(self): print 'Initialize Base Class' def setup(self): print "\nBase Setup" def teardown(self): print "Base Teardown" @tools.nottest def a_test(self): return 'This is a test.' @tools.nottest def another_test(self): return 'This is another test' class TestSomeStuff(BaseClass): def __init__(self): BaseClass.__init__(self) print 'Initialize Inherited Class' def setup(self): BaseClass.setup(self) print "Inherited Setup" def teardown(self): BaseClass.teardown(self) print 'Inherited Teardown' def test1(self): print self.a_test() def test2(self): print self.another_test() Outputs this: Initialize Base Class Initialize Inherited Class Initialize Base Class Initialize Inherited Class cases.nose.class_super.TestSomeStuff.test1 ... Base Setup Inherited Setup This is a test. Base Teardown Inherited Teardown ok cases.nose.class_super.TestSomeStuff.test2 ... Base Setup Inherited Setup This is another test Base Teardown Inherited Teardown ok ---------------------------------------------------------------------- Ran 2 tests in 0.001s OK How do I make the `__init__`, `setup`, and `teardown` functions class methods? When I attempt this: from nose import tools class BaseClass(object): def __init__(self): print 'Initialize Base Class' @classmethod def setup_class(self): print "\nBase Setup" @classmethod def teardown_class(self): print "Base Teardown" @tools.nottest def a_test(self): return 'This is a test.' 
@tools.nottest def another_test(self): return 'This is another test' class TestSomeStuff(BaseClass): def __init__(self): BaseClass.__init__(self) print 'Initialize Inherited Class' @classmethod def setup_class(self): BaseClass.setup_class(self) print "Inherited Setup" @classmethod def teardown_class(self): BaseClass.teardown_class(self) print 'Inherited Teardown' def test1(self): print self.a_test() def test2(self): print self.another_test() I get this: Initialize Base Class Initialize Inherited Class Initialize Base Class Initialize Inherited Class ERROR ====================================================================== ERROR: test suite for <class 'cases.nose.class_super.TestSomeStuff'> ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run self.setUp() File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp self.setupContext(ancestor) File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in setupContext try_run(context, names) File "/usr/lib/python2.7/dist-packages/nose/util.py", line 478, in try_run return func() File "/home/ryan/project/python_testbed/cases/nose/class_super.py", line 30, in setup_class BaseClass.setup_class(self) TypeError: setup_class() takes exactly 1 argument (2 given) ---------------------------------------------------------------------- Ran 0 tests in 0.001s FAILED (errors=1) Removing the `self` from the super class calls (`BaseClass.setup_class(self)` -> `BaseClass.setup_class()`) seems to fix it...which I don't understand: Initialize Base Class Initialize Inherited Class Initialize Base Class Initialize Inherited Class Base Setup Inherited Setup cases.nose.class_super.TestSomeStuff.test1 ... This is a test. ok cases.nose.class_super.TestSomeStuff.test2 ... This is another test ok Base Teardown Inherited Teardown ---------------------------------------------------------------------- Ran 2 tests in 0.001s OK However, this doesn't help with the `__init__` function. How can I make this a class method? Why does passing in `self` to the super class fail. Does anyone have some info on this? Answer: Class methods take a single implicit argument, (called `cls` by convention, although you have called it `self` too), like instance methods take `self`. When you call BaseClass.setup_class(self) It's really more like BaseClass.setup_class(BaseClass, self) hence the warning over two arguments. Therefore it's fixed when you ditch `self`; as a reminder, change the definitions: @classmethod def setup_class(cls): Oh, and `__init__` makes no sense as a `@classmethod`; it's for setting up instances.
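Putting that together, a minimal sketch of inherited class-level fixtures; the print calls are just markers:

    class BaseClass(object):
        @classmethod
        def setup_class(cls):
            print("Base Setup")

        @classmethod
        def teardown_class(cls):
            print("Base Teardown")

    class TestSomeStuff(BaseClass):
        @classmethod
        def setup_class(cls):
            # cls is passed implicitly when calling through the class,
            # so no explicit argument is needed
            BaseClass.setup_class()
            print("Inherited Setup")

        @classmethod
        def teardown_class(cls):
            BaseClass.teardown_class()
            print("Inherited Teardown")

        def test1(self):
            assert True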
Fields Missing when Parsing XML with Python Question: I am trying to collect all the data about a house using zillow's API. I am getting some fields, yet others are coming back as null. Here is my Python code: from bs4 import BeautifulSoup import requests import urllib, urllib2 import csv url = requests.get("https://raw.github.com/rfarley90/random/master/zillowresults.html") pageText = url.text soup = BeautifulSoup(pageText) useCode = soup.find('useCode') taxAssessmentYear = soup.find('taxAssessmentYear') taxAssessment = soup.find('taxAssessment') yearBuilt = soup.find('yearBuilt') lotSizeSqFt = soup.find('lotSizeSqFt') finishedSqFt = soup.find('finishedSqFt') bathrooms = soup.find('bathrooms') lastSoldDate = soup.find('lastSoldDate') lastSoldPrice = soup.find('lastSoldPrice') zestimate = soup.find('zestimate') amount = soup.find('amount') lastupdated = soup.find('last-updated') valueChangeduration = soup.find('valueChange') valuationRange = soup.find('valuationRange') lowcurrency = soup.find('low') highcurrency = soup.find('high') percentile = soup.find('percentile') localRealEstate = soup.find('localRealEstate') region = soup.find('region') links = soup.find('links') overview = soup.find('overview') forSaleByOwner = soup.find('forSaleByOwner') forSale = soup.find('forSale') array = [ ['useCode ' , useCode], ['taxAssessmentYear ' , taxAssessmentYear], ['taxAssessment ' , taxAssessment], ['yearBuilt ' , yearBuilt], ['lotSizeSqFt ' , lotSizeSqFt], ['finishedSqFt ' , finishedSqFt], ['bathrooms ' , bathrooms], ['lastSoldDate ' , lastSoldDate], ['lastSoldPrice ' , lastSoldPrice], ['zestimate ' , zestimate], ['amount ' , amount], ['lastupdated ' , lastupdated], ['valueChangeduration ' , valueChangeduration], ['valuationRange ' , valuationRange], ['lowcurrency ' , lowcurrency], ['highcurrency ' , highcurrency], ['percentile ' , percentile], ['localRealEstate ' , localRealEstate], ['region ' , region], ['links ' , links], ['overview ' , overview], ['forSaleByOwner ' , forSaleByOwner], ['forSale ' , forSale]] for x in array: print x The results I get have a lot of missing values, as seen below: ['useCode ', None] ['taxAssessmentYear ', None] ['taxAssessment ', None] ['yearBuilt ', None] ['lotSizeSqFt ', None] ['finishedSqFt ', None] ['bathrooms ', <bathrooms>2.0</bathrooms>] ['lastSoldDate ', None] ['lastSoldPrice ', None] ['zestimate ', <zestimate> <amount currency="USD">977262</amount> <last-updated>01/23/2014</last-updated> <oneweekchange deprecated="true"> <valuechange currency="USD" duration="30">-25723</valuechange> <valuationrange> <low currency="USD">928399</low> <high currency="USD">1055443</high> </valuationrange> <percentile>0</percentile> </oneweekchange></zestimate>] ['amount ', <amount currency="USD">977262</amount>] ['lastupdated ', <last-updated>01/23/2014</last-updated>] ['valueChangeduration ', None] ['valuationRange ', None] ['lowcurrency ', <low currency="USD">928399</low>] ['highcurrency ', <high currency="USD">1055443</high>] ['percentile ', <percentile>0</percentile>] ['localRealEstate ', None] ['region ', <region id="46465" name="Mc Lean" type="city"> <links> <overview> http://www.zillow.com/local-info/VA-Mc-Lean/r_46465/ </overview> <forsalebyowner>http://www.zillow.com/mc-lean-va/fsbo/</forsalebyowner> <forsale>http://www.zillow.com/mc-lean-va/</forsale> </links> </region>] ['links ', <links> <homedetails> http://www.zillow.com/homedetails/6870-Churchill-Rd-Mc-Lean-VA-22101/51751742_zpid/ </homedetails> <graphsanddata> 
http://www.zillow.com/homedetails/6870-Churchill-Rd-Mc-Lean-VA-22101/51751742_zpid/#charts-and-data </graphsanddata> <mapthishome>http://www.zillow.com/homes/51751742_zpid/</mapthishome> <comparables>http://www.zillow.com/homes/comps/51751742_zpid/</comparables> </links>] ['overview ', <overview> http://www.zillow.com/local-info/VA-Mc-Lean/r_46465/ </overview>] ['forSaleByOwner ', None] ['forSale ', None] [Finished in 0.6s] Any ideas on what's causing this? Answer: By default, `BeautifulSoup` coerces all tags into lower case. You can see this in your result data above: the `region` tag includes `forsalebyowner` and `forsale` as part of its content, whereas they are `forSaleByOwner` and `forSale` in the original data. Thankfully, you can override this behaviour by specifying that you're using XML when creating the `BeautifulSoup` object, however you'll need to trim away some of the non-XML page content before doing so: url = requests.get("https://raw.github.com/rfarley90/random/master/zillowresults.html") pageText = url.text.split('\n') # exclude initial text & end comment pageXML = ''.join( pageText[1:pageText.index(u'<!--')] ) soup = BeautifulSoup(pageXML, "xml")
Windmill AttributeError: 'module' object has no attribute 'settings' Question: Here is the traceback: File "./test2.py", line 44, in test_scrape client = WindmillTestClient(__name__) File "/usr/local/lib/python2.7/dist-packages/windmill-1.6-py2.7.egg/windmill/authoring/__init__.py", line 142, in __init__ method_proxy = windmill.tools.make_jsonrpc_client() File "/usr/local/lib/python2.7/dist-packages/windmill-1.6-py2.7.egg/windmill/tools/__init__.py", line 35, in make_jsonrpc_client url = urlparse(windmill.settings['TEST_URL']) AttributeError: 'module' object has no attribute 'settings' Here is my test python file (test.py): #!/usr/bin/env python # Generated by the windmill services transformer from windmill.authoring import WindmillTestClient from bs4 import BeautifulSoup import re, urlparse from copy import copy def get_table_info(client): """ Parse HTML page and extract featured image name and link """ # Get Javascript updated HTML page client.waits.forElement(xpath=u"//table[@id='trades']", timeout=40000) response = client.commands.getPageText() assert response['status'] assert response['result'] # Create soup from HTML page and get desired information soup = BeautifulSoup(response['result']) table_info = soup.find("title") return table_inf def test_scrape(): client = WindmillTestClient(__name__) client.open(url='http://www.google.com') test_scrape_() Answer: You're not doing some of the necessary setup required: from windmill.authoring import setup_module, WindmillTestClient from windmill.conf import global_settings import sys global_settings.START_FIREFOX = True # This makes it use Firefox setup_module(sys.modules[__name__]) This should occur before you try to instantiate a `WindmillTestClient`.
Plotting a histogram on punctuation occurrence python Question: I have thousands of sentences in series form (rows) . here's an example: 'After hearing his plea, the judge pardoned him.' 'The weather is quite sunny , though not as the other days.' 'Tom,Bill,Grace and tinkle went fishing,even though it was raining.' so i want a histogram which displays the count of the number of commas in a sentence -- 1. 1comma 2. 1 comma 3. 3 commas. each bar of the histogram should represent one sentence and its height should determine the number of commas. "i'll be doing it for different punctuations as well as some keywords". i have already managed to get the count. all i need to do is display it.. thanks.. this is what i tried.. a1 contains the count of commas for each sentence. i need to plot the no of commas against each sentence X = [i for i in range(len(a1))] plt.bar(X,a1,width=2.5,color="blue") an error that module object has no attribute "bar" . i imported matplotlib as plt in beginning.. Answer: Unless I am mistaken (the comments don't seem to mention this). Your problem is that you recieve the error `module object has no attribute "bar"`. I think this is because you are doing import matplotlib as plt plt.bar(.. If instead you import as import matplotlib.pyplot as plt Then `plt` as a function `bar()` which you can use. Some more information on this can be found [here](http://matplotlib.org/faq/usage_faq.html).
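With the corrected import, a minimal end-to-end sketch using the three example sentences from the question (comma counts 1, 1 and 3):

    import matplotlib.pyplot as plt

    sentences = [
        'After hearing his plea, the judge pardoned him.',
        'The weather is quite sunny , though not as the other days.',
        'Tom,Bill,Grace and tinkle went fishing,even though it was raining.',
    ]
    a1 = [s.count(',') for s in sentences]   # number of commas per sentence

    X = range(len(a1))
    plt.bar(X, a1, width=0.5, color="blue")
    plt.xlabel("sentence index")
    plt.ylabel("number of commas")
    plt.show()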
Reorder an array in python Question: I want to sort an array, so it starts off at order = [0,1,2,3,4,5] #loop around trying all columns` and then will go through, trying all combinations of this so 1,2,3,4,5,0 etc, and stop once it has tried all of them. Is there anyway to do this in python? Answer: If you just want to "rotate" a list, have a look at the [`deque`](http://docs.python.org/2/library/collections.html#deque-objects) class: >>> from collections import deque >>> order = [0,1,2,3,4,5] >>> order.sort() # ensure order is sorted >>> q = deque(order) >>> for _ in xrange(len(q)): ... q.rotate(-1) ... print q ... deque([1, 2, 3, 4, 5, 0]) deque([2, 3, 4, 5, 0, 1]) deque([3, 4, 5, 0, 1, 2]) deque([4, 5, 0, 1, 2, 3]) deque([5, 0, 1, 2, 3, 4]) deque([0, 1, 2, 3, 4, 5]) >>>
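If pulling in deque feels heavy, plain slicing produces the same rotations; a sketch, assuming you just want the list of orderings:

    order = sorted([0, 1, 2, 3, 4, 5])
    rotations = [order[i:] + order[:i] for i in range(len(order))]
    for r in rotations:
        print(r)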
django.contrib.auth get_user_model isn't working with monitio app Question: i have to set up some big project to start working on it, but i don't have access to its creator, so i have nobody to ask. This project make use of [monitio app](https://github.com/mpasternak/django- monitio) to handle notifications. And i got this error: File "******/local/lib/python2.7/site-packages/monitio/models.py", line 1, in <module> from django.contrib.auth import get_user_model ImportError: cannot import name get_user_model I have no idea, why it isn't working, since there is everything installed. Maybe i miss something related to virtualenv or something. I suppose that maybe in some weird way monitio has not access to django.contrib.auth or smthg. Anyway im lost. Every piece of advice will be appreciated Thanks in advance Answer: May be its because of your Django version problem, Which Django version your using? In requirements.txt of Django-monitio said that they are using django>=1.6. And also see the other requirements in the requirements.txt
Backend API Python Question: While implementing backend API to use backend services, I have done code as below: timezone_service.py: class TaskQueueTimeZoneHandler(webapp2.RequestHandler): def get(self): outdict=self.request.params logging.info("Enter In taskqueue") taskqueue.add(url='/api/timezone/setTimeZone',target='timezonebackend') logging.info("Out Of taskqueue") While call /api/timezone/setTimezone if i am using any function or file of main application it gives undefined because its create seprate version for named we given in backends.yaml (I want to used file from my main application py folder) backends: - name: timezonebackend class: B4 instances: 1 options: dynamic start: gapi/timezone_service.py Answer: This isn't really a question about backends. In any Python code, if you want to use a function or variable defined in another file, you need to import it first.
Load other windows when button clicked. PyQt Question: I am trying to call another window from a button click in python 2.7 using PyQt4. The code below opens the AddBooking dialog but immediately closes it. Im new to Gui programming, can somebody please tell me what is wrong with my code? from PyQt4 import QtGui from HomeScreen import Ui_HomeScreen from AddBooking import Ui_AddBooking import sys class HomeScreen(QtGui.QWidget, Ui_HomeScreen): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setupUi(self) self.show() self.Add_Booking_Button.clicked.connect(self.handleButton) def handleButton(self): AddBooking2() class AddBooking2(QtGui.QWidget, Ui_AddBooking): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setupUi(self) self.show() if __name__ == '__main__': app = QtGui.QApplication(sys.argv) window = HomeScreen() window.show() sys.exit(app.exec_()) Answer: Don't use multi-inheritance and neither call `show` function inside class initializer. The problem is that the object you are creating with `AddBooking2()` is a temporal and it's destroyed automatically when the function ends. So you need use some variable to reference that object something like: addbooking = AddBooking2() addbooking.show() Also, since you are working with `QtDesigner` and `pyuic4` tools you can make connections a little bit easier. Said that, your code can be modified: from PyQt4 import QtGui from PyQt4.QtCore import pyqtSlot from HomeScreen import Ui_HomeScreen from AddBooking import Ui_AddBooking import sys class HomeScreen(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.ui = Ui_HomeScreen() self.ui.setupUi(self) @pyqtSlot("") def on_Add_Booking_Button_clicked(self): # The connection is carried by the Ui_* classes generated by pyuic4 addbooking = AddBooking2() addbooking.show() class AddBooking2(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.ui = Ui_AddBooking() self.ui.setupUi(self) if __name__ == '__main__': app = QtGui.QApplication(sys.argv) window = HomeScreen() window.show() sys.exit(app.exec_())
C# equivalent of Python's defaultdict (for lists) in C# Question: What is the C# equivalent of doing: >>> from collections import defaultdict >>> dct = defaultdict(list) >>> dct['key1'].append('value1') >>> dct['key1'].append('value2') >>> dct defaultdict(<type 'list'>, {'key1': ['value1', 'value2']}) For now, I have: Dictionary<string, List<string>> dct = new Dictionary<string, List<string>>(); dct.Add("key1", "value1"); dct.Add("key1", "value2"); but that gives errors like "The best overloaded method match has invalid arguments". Answer: Your first step should be creating the record with specified key. Then you can add additional values to the value list: Dictionary<string, List<string>> dct = new Dictionary<string, List<string>>(); dct.Add("key1", new List<string>{"value1"}); dct["key1"].Add("value2");
Sum 4D array efficiently according to indices contained in another array Question: I have a 4D array, a series of cubes essentially. These cubes are mostly filled with zeroes apart from sub-cubes of values of which I know the locations. I need to sum all these cubes together into one cube. I can do this simply with np.sum along axis=3 but this is part of a monte carlo process and is done a huge number of times. I was wondering since I know where the sub- cubes are in the cubes can I sum them more efficiently since the majority of the summing operation would be adding zeros- these cube are of significant size (>100^3) so if I can it will be a huge saving. I'm newish to Python/Numpy and am finding it hard to get out of the loop mindset! In general am looking for a way to manipulate large n-dimensional arrays together but only in certain parts. I realise masked arrays come to mind here but I have tried and don't think it offers any speed up in this case; unless I'm way off there! Edit: Here are three terrible versions of what I am trying to do - probably won't make much sense out of context - basically involves calculating the effects of multiple charges at a distance from each other but each charge does not affect itself. One of these determines distance on the fly, two use precalculated arrays with the information in but it must be 'aligned' and summed - again probably won't make sense out of context but this is what I have so far def coloumbicForces3(carrierArray, cubeEnergeticDisorderArray, array): cubeLen=100 offsetArray=np.array([[0,0,0],[0,0,+1],[0,0,-1],[0,+1,0],[0,-1,0],[+1,0,0],[-1,0,0]]) indices=np.zeros((7,3,len(carrierArray)), dtype=np.int32) indices1=a=indices.reshape((7*len(carrierArray),3)) tIndices=cubeLen-indices superimposedArray=np.zeros((cubeLen,cubeLen,cubeLen,2+2), dtype=myFloat) sumArray=np.zeros((cubeLen,cubeLen,cubeLen)) for i in range(len(carrierArray)): indices[:,:,i]=offsetArray+carrierArray[i,1:4] for c, carrierC in enumerate(carrierArray[:,0]): for k, carrierK in enumerate(carrierArray[:,0]): if c==k: continue for (x,y,z) in indices[:,:,k]: #print c, indices[:,:,c] if(carrierC==1): superimposedArray[x,y,z,c]=cubeEnergeticDisorderArray[x,y,z,c]=-1*array[cubeLen-x,cubeLen-y, cubeLen-z] else: superimposedArray[x,y,z,c]=cubeEnergeticDisorderArray[x,y,z,c]=array[cubeLen-x,cubeLen-y, cubeLen-z] b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1]))) _,idx = np.unique(b, return_index=True) aUnique=a[idx] for (i,j,k) in aUnique: sumArray[i,j,k]=np.sum(superimposedArray[i,j,k]) for c, carrierC in enumerate(carrierArray[:,0]): for (i,j,k) in aUnique: cubeEnergeticDisorderArray[i,j,k,c]=cubeEnergeticDisorderArray[i,j,k,-2]+sumArray[i,j,k] return cubeEnergeticDisorderArray def coloumbicForces(carrierArray, cubeEnergeticDisorderArray, array): cubeLen= len(cubeEnergeticDisorderArray[0,:,:,0]) superimposedArray=np.zeros((cubeLen,cubeLen,cubeLen,2+2), dtype=myFloat) for k, carrier in enumerate(carrierArray[:,0]): superimposedArray[:,:,:,k]=cubeEnergeticDisorderArray[:,:,:,k]=array[cubeLen-carrierArray[k,1]:2*cubeLen-carrierArray[k,1],cubeLen-carrierArray[k,2]:2*cubeLen-carrierArray[k,2],cubeLen-carrierArray[k,3]:2*cubeLen-carrierArray[k,3]] if (carrier==1): a=superimposedArray[:,:,:,k] b=cubeEnergeticDisorderArray[:,:,:,k] superimposedArray[:,:,:,k]=ne.evaluate("a*-1") cubeEnergeticDisorderArray[:,:,:,k]=ne.evaluate("b*-1") sumArray=ne.evaluate("sum(superimposedArray, axis=3)") for k, carrier in enumerate(carrierArray[:,0]): 
a=cubeEnergeticDisorderArray[:,:,:,k] b=cubeEnergeticDisorderArray[:,:,:,-2] cubeEnergeticDisorderArray[:,:,:,k]=ne.evaluate("sumArray-a+b") return cubeEnergeticDisorderArray def coloumbicForces2(carrierArray, cubeEnergeticDisorderArray, array): x0=carrierArray[0,1] y0=carrierArray[0,2] z0=carrierArray[0,3] x1=carrierArray[1,1] y1=carrierArray[1,2] z1=carrierArray[1,3] cubeEnergeticDisorderArray[x0,y0,z0,0]=cubeEnergeticDisorderArray[x0,y0,z0,-2]-(1.60217657e-19)*2995850595.79/(distance([x0,y0,z0], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0-1,y0,z0,0]=cubeEnergeticDisorderArray[x0-1,y0,z0,-2]-(1.60217657e-19)*2995850595.79/(distance([x0-1,y0,z0], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0+1,y0,z0,0]=cubeEnergeticDisorderArray[x0+1,y0,z0,-2]-(1.60217657e-19)*2995850595.79/(distance([x0+1,y0,z0], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0,y0-1,z0,0]=cubeEnergeticDisorderArray[x0,y0-1,z0,-2]-(1.60217657e-19)*2995850595.79/(distance([x0,y0-1,z0], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0,y0+1,z0,0]=cubeEnergeticDisorderArray[x0,y0+1,z0,-2]-(1.60217657e-19)*2995850595.79/(distance([x0,y0+1,z0], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0,y0,z0-1,0]=cubeEnergeticDisorderArray[x0,y0,z0-1,-2]-(1.60217657e-19)*2995850595.79/(distance([x0,y0,z0-1], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x0,y0,z0+1,0]=cubeEnergeticDisorderArray[x0,y0,z0+1,-2]-(1.60217657e-19)*2995850595.79/(distance([x0,y0,z0+1], carrierArray[1,1:4])*1e-9) cubeEnergeticDisorderArray[x1,y1,z1,1]=cubeEnergeticDisorderArray[x1,y1,z1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1,y1,z1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1-1,y1,z1,1]=cubeEnergeticDisorderArray[x1-1,y1,z1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1-1,y1,z1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1+1,y1,z1,1]=cubeEnergeticDisorderArray[x1+1,y1,z1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1+1,y1,z1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1,y1-1,z1,1]=cubeEnergeticDisorderArray[x1,y1-1,z1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1,y1-1,z1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1,y1+1,z1,1]=cubeEnergeticDisorderArray[x1,y1+1,z1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1,y1+1,z1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1,y1,z1-1,1]=cubeEnergeticDisorderArray[x1,y1,z1-1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1,y1,z1-1], carrierArray[0,1:4])*1e-9) cubeEnergeticDisorderArray[x1,y1,z1+1,1]=cubeEnergeticDisorderArray[x1,y1,z1+1,-2]+(1.60217657e-19)*2995850595.79/(distance([x1,y1,z1+1], carrierArray[0,1:4])*1e-9) return cubeEnergeticDisorderArray Answer: If you know were the sub-cubes are you can use fancy indexing to sum only where you need, for example: import numpy as np from numpy.random import random c1 = random((10, 10, 10, 10)) c2 = random((10, 10, 10, 10)) c3 = np.zeros_like(c2) And here are the indices where I want to sum: i1 = [0, 2, 4, 6] i2 = [0, 1, 3, 7] i3 = [1, 5, 8, 9] i4 = [1, 6, 7, 8] c3[i1,i2,i3,i4] = c1[i1,i2,i3,i4] + c2[i1,i2,i3,i4] Which will sum only at points: `p1(0,0,1,1)`, `p2(2,1,5,6)`, `p3(4,3,8,7)` and `p4(6,7,9,8)`.
String and line formatting in Python Question: I want to make the following command to be formatted in python, in other to comply with the 80 character per line policy: cmd = """elastic-mapreduce --create --alive \ --instance-type m1.xlarge\ --num-instances 5 \ --supported-product mapr \ --name m7 \ --args "--edition,m7" """ Although in the code it will look like multiple lines, when `cmd` gets executed I wanted it to look like elastic-mapreduce --create --alive --instance-type m1.xlarge --num-instances 5 supported-product mapr name m7 args "--edition,m7" That's a similar problem I'm having here: raise ValueError, "%s hadoop %s version is not supported" % (self.distribution, version) that line it's too long, I would like to do something like: raise ValueError,\ "%s hadoop %s version is not supported" % (self.distribution, version) Answer: As for your first one bit of code you could write a method to 'linearize' your whitespace import re def linearize_whitespace_regex(text): formatted = text.strip().replace('\n',' ').replace('\r',' ').replace('\t',' ') formatted = re.sub(r'\s{2,}',' ',formatted) return formatted I used the regular expressions library, this of course could be accomplished without that import using your own parsing: def linearize_whitespace_manual(text): formatted = text.strip().replace('\n',' ').replace('\r',' ').replace('\t',' ') ws_buf = '' format_buf = '' for i in formatted: if i == ' ': if len(ws_buf) < 1: ws_buf += i else: format_buf += ws_buf + i ws_buf = '' return format_buf and output: >>> cmd = """elastic-mapreduce --create --alive \ ... --instance-type m1.xlarge\ ... --num-instances 5 \ ... --supported-product mapr \ ... --name m7 \ ... --args "--edition,m7" ... """ >>> linearize_whitespace_regex(cmd) 'elastic-mapreduce --create --alive --instance-type m1.xlarge--num-instances 5 --supported-product mapr --name m7 --args "--edition,m7"' >>> linearize_whitespace_manual(cmd) 'elastic-mapreduce --create --alive --instance-type m1.xlarge--num-instances 5 --supported-product mapr --name m7 --args "--edition,m7"' As for your second question raising exceptions can also be done in this fashion: raise Exception("My error message") So you could write it like this: raise ValueError("%s hadoop %s version is not supported" % (self.distribution, version)) And fulfill your line requirement in any of these ways: raise ValueError("%s hadoop %s version is not supported" % (self.distribution, version)) raise ValueError( "%s hadoop %s version is not supported" % (self.distribution, version)) raise ValueError( "%s hadoop %s version is not supported" % (self.distribution, version) )
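For the command string specifically, a simpler alternative to the regex is to let str.split() with no argument collapse all runs of whitespace and rejoin with single spaces:

    cmd = """elastic-mapreduce --create --alive
        --instance-type m1.xlarge
        --num-instances 5
        --supported-product mapr
        --name m7
        --args "--edition,m7" """

    one_line = " ".join(cmd.split())   # split() eats newlines, tabs and repeated spaces
    print(one_line)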
subprocess.Popen using relative paths Question: The [docs](http://docs.python.org/2/library/subprocess.html#popen-constructor) for Popen mention that you can't specify your executable path relative to the 'change working directory' kwarg. > If `cwd` is not None, the child’s current directory will be changed to `cwd` > before it is executed. **Note that this directory is not considered when > searching the executable, so you can’t specify the program’s path relative > to`cwd`.** But python's behaviour on my system seems to directly contradict this claim: wim@SDFA100461C:/tmp$ mkdir a wim@SDFA100461C:/tmp$ cp /bin/ls /tmp/a/my_ls wim@SDFA100461C:/tmp$ mkdir b wim@SDFA100461C:/tmp$ touch /tmp/b/potato wim@SDFA100461C:/tmp$ cd /home/wim wim@SDFA100461C:~$ python Python 2.7.5+ (default, Sep 19 2013, 13:48:49) [GCC 4.8.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from subprocess import check_output >>> check_output(['../a/my_ls'], cwd='/tmp/b') 'potato\n' >>> check_output(['../a/my_ls']) OSError: [Errno 2] No such file or directory Is using relative paths to `cwd` something that is platform dependent and shouldn't be relied upon? Or is this a documentation bug? (this question spawns from a comment by glglgl [here](http://stackoverflow.com/q/21406887/674039#comment32299912_21407005)) Answer: Yes, this is platform dependant. On POSIX systems, the process is forked and in the child process a `os.chdir(cwd)` is executed before executing the executable. On Windows however, the [`CreateProcess()` API call](http://msdn.microsoft.com/en- us/library/windows/desktop/ms682425%28v=vs.85%29.aspx) is used and `cwd` is passed in as the `lpCurrentDirectory` parameter. No directory change takes place, and the `CreateProcess()` call does _not_ consult that parameter when looking for the `lpApplicationName` to execute. To keep your application cross-platform, you should not rely on the current working directory to be changed when looking up the executable.
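To stay portable, resolve the executable against cwd yourself before calling Popen, so the lookup no longer depends on platform-specific behaviour; a sketch reusing the paths from the question:

    import os
    from subprocess import check_output

    cwd = '/tmp/b'
    exe = '../a/my_ls'

    # Build an absolute path to the executable relative to cwd:
    exe_abs = os.path.normpath(os.path.join(cwd, exe))
    print(check_output([exe_abs], cwd=cwd))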
Python-How to resolve TypeError Question: import urllib, urllib2 from bs4 import BeautifulSoup, Comment url='http://www.amazon.in/product-reviews/B00EJBA7HC/ref=cm_cr_pr_top_link_1?ie=UTF8&pageNumber=1&showViewpoints=0&sortBy=bySubmissionDateDescending' content = urllib2.urlopen(url).read() soup = BeautifulSoup(content, "html.parser") fooId = soup.find('input',name='ASIN',type='hidden') #Find the proper tag value = fooId['value'] print value I need this code to print the ASIN ID for the product from the URL given. Instead, I get the following error: TypeError: find() got multiple values for keyword argument 'name' Please help. Answer: The reason why this is breaking is because you have the soup.find function signature wrong. There is no first positional argument. The function signature looks like this: def find(self, name=None, attrs={}, recursive=True, text=None, **kwargs) So 'input' is assigned to the first keyword argument(in this case, name). So now you have 2 values assigned to keyword argument 'name'. The correct syntax for what you are trying to do is likely this: fooId = soup.find(name='input', attrs={'name': 'ASIN', 'type': 'hidden'}) This says, find all `<input>`'s in the HTML you are parsing with the attributes listed described in attrs.
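An equivalent form keeps the tag name positional and moves the clashing attributes into attrs, with a guard in case the tag is missing:

    fooId = soup.find('input', attrs={'name': 'ASIN', 'type': 'hidden'})
    value = fooId['value'] if fooId is not None else None
    print value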
Python does not find custom PyQt5 Question: As the offical pyqt5-installation in the ubuntu repositories seem to lack support for QtQuick, I tried to install pyqt5 from source. The installation itself seems to work correctly, but when running a python script that uses PyQt5, python complains that it cannot find that PyQt. After building sip 4.15.5, I downloaded PyQt5.2. It should be compatible to my version of Qt (output of `qmake --version`): QMake version 3.0 Using Qt version 5.2.0 in /opt/qt5.1.1/5.2.0/gcc_64/lib I ran The output of configure.py of pyqt can be found here: <https://gist.github.com/Mitmischer/8677889> . The installation output of pyqt can be found here: <https://gist.github.com/Mitmischer/8677780> . After `sudo make install`, I can see a folder `PyQt5` in `/usr/lib/python3.3/site-packages` which is quite nice. However, if I run cat `PyQt5/__init__.py`, there is no actual code inside: # Copyright (c) 2014 Riverbank Computing Limited <info@riverbankcomputing.com> # # This file is part of PyQt5. # # This file may be used under the terms of the GNU General Public License # version 3.0 as published by the Free Software Foundation and appearing in # the file LICENSE included in the packaging of this file. Please review the # following information to ensure the GNU General Public License version 3.0 # requirements will be met: http://www.gnu.org/copyleft/gpl.html. # # If you do not wish to use this file under the terms of the GPL version 3.0 # then you may purchase a commercial license. For more information contact # info@riverbankcomputing.com. # # This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE # WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Yep, that's all what is inside that file. I don't know whether it's supposed to be that way, but it looks strange to me. Furthermore (`ls PyQt5`): __init__.py QtCore.so QtGui.so QtMultimediaWidgets.so QtPositioning.so QtQuick.so Qt.so QtTest.so QtX11Extras.so _QOpenGLFunctions_2_0.so QtDBus.so QtHelp.so QtNetwork.so QtPrintSupport.so QtSensors.so QtSql.so QtWebKit.so QtXmlPatterns.so QtBluetooth.so QtDesigner.so QtMultimedia.so QtOpenGL.so QtQml.so QtSerialPort.so QtSvg.so QtWidgets.so uic/ Doesn't look that pythonic. As suggested elsewhere, I (hopefully) set my pythonpath appropriately: > echo $PYTHONPATH /usr/lib/python3.3/site-packages/ Now, if I start an interactive `python3.3`-session (or a script), PyQt5 cannot be found: Python 3.3.2+ (default, Oct 9 2013, 14:50:09) [GCC 4.8.1] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from PyQt5 import * Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named 'PyQt5' >>> Has anyone else tried to install PyQt5 from source? What can I do to make PyQt work? Answer: You probrably messed PYTHONPATH in someway. I have successfully built, installed and used PyQT using virtual environment. So here is how to install it using virtualenv. There are tons of tutorals, so please read about it. So install `python-virtualenv`, `virtualenvwrapper` (at least that's what they are called on Debian). mkvirtualenv -p /path/to/python3.3 name workon name cd PyQtSource configure make make install To use this enviorment do: workon name python
Web.py NameError When Importing Module in Module Question: I am creating a web app using web.py on python 2.7.3. I have the following folder structure: start_app.py /app __init__.py /models __init__.py ActionModel.py AreaModel.py /controllers __init__.py world.py /views Whenever I freshly start the app using `python start_app.py`, and visit `world/surrounding` I get the following error <type 'exceptions.ImportError'> at /world/surrounding cannot import name AreaModel Python /home/dev/app/models/ActionModel.py in <module>, line 13 Web GET http://localhost:5000/world/surrounding Line 13 is simply: `from app.models import AreaModel` but I don't see why python is complaining here. If I comment this importing line, it runs fine. However, if I call a different URL, e.g. `world/view`, I get an error that AreaModel is not defined. Once I uncomment the line, it works fine again for all cases (i.e. /surrounding and /view). I am suspecting that this has something to do with the fact that I am "importing in circles", i.e. world.py imports AreaModel, AreaModel imports ActionModel and ActionModel imports AreaModel. I doubt that this is 'the pythonic way' to do things or even the 'MVC way', so I would very much appreciate your enlightening me how to do this properly. Note: app is not in my PYTHONPATH, but I don't think it is needed here, since start_app.py is in the top-level directory and according to [this](http://stackoverflow.com/questions/13221021/cannot-get-imports-to-work- in-web-py-project) all modules should be available. Basically, what it comes down to is: I need the models' functionalities in both the controllers and the models. Is it good practice to "import in circles"? Or is there a nicer approach to do this? Also, is this problem related to python in general or just web.py? **Update:** Added **init**.py files, I had them, but did not include in original question. Sorry for that. **Update:** ActionModel.py includes (among others) a class named BaseAction and a few functions, which return instances or subclasses of BaseAction depending on what type of Action we are dealing with. They are called using e.g. `ActionModel.get_by_id()` @matthew-trevor : Are you suggesting in _a)_ that I should move those functions `get_by_id()` into a class ActionModel? #actionmodel.py class ActionModel(object): def __init__(arg1, arg2, area_class): self.area = area_class() def get_by_id(self, id): return BaseAction(id) class BaseAction(object): def __init__(id): pass I don't see how this should remedy my import problems though. Answer: **The Immediate Problem** You cannot import from folders, but you _can_ import from packages. You can turn any folder into a package by adding an `__init__.py` file to it: start_app.py /app __init__.py /models __init__.py ActionModel.py AreaModel.py /controllers __init__.py world.py /views __init__.py I'm guessing that `ActionModel.py` includes a class of the same name. If so, I recommend renaming the file to `actionmodel.py` to distinguish it from the class. **Circular imports** > Is it good practice to "import in circles"? Or is there a nicer approach to > do this? It's not only bad practice, it just won't work. There are a couple of ways to get around this, which will mostly depend on what you're trying to do: a. In `AreaModel`, import the `ActionModel` module and then reference anything you want to use in it via attribute lookup and vice versa: # areamodel.py import actionmodel def foo(): action = actionmodel.ActionModel(...) 
As long as the references are inside class or function definitions, it will only occur at run time and not during importing, so the circular reference is avoided. b. Turn `models` into a module and put both `ActionModel` and `AreaModel` code inside it. c. Move the shared code/functionality for `ActionModel` and `AreaModel` into a base module they both import from. d. Make your `ActionModel` class (or whatever) accept a class as an input, then pass `AreaModel` into it in `world.py` (ditto for `AreaModel`). This way, `ActionModel` doesn't need to contain a reference to `AreaModel`, it just has to know what to do with it: # actionmodel.py class ActionModel(object): def __init__(arg1, arg2, area_class): self.area = area_class() # areamodel.py class AreaModel(object): def __init__(action_class): self.action = action_class() # world.py from actionmodel import ActionModel from areamodel import AreaModel action = ActionModel('foo', 'bar', AreaModel) area = AreaModel(ActionModel) This is known as object composition.
How to divide string like this into items in Python Question: Here is the string: format db "this is string a", 0, 0Ah And I am trying to split it into this: format db "this is string a" 0 0Ah Is there any way can do this in python 2.7? Thank you! Answer: Use `shlex.split`: s = 'format db "this is string a", 0, 0Ah' import shlex shlex.split(s) Out[18]: ['format', 'db', 'this is string a,', '0,', '0Ah'] Your grammar is a little wonky with the trailing commas, but you can pretty safely `rstrip` that out: [x.rstrip(',') for x in shlex.split(s)] Out[20]: ['format', 'db', 'this is string a', '0', '0Ah']
What happens exactly internally when I terminate my Python script using Ctrl+c? Question: These days I am learning Python's Exception handling features deeply. I encountered `exception SystemExit`. While reading about this from [official Python Docs](http://docs.python.org/2/library/exceptions.html) I got question in mind that what exactly would have happen when I terminate Python script by pressing `Ctrl`+`c`? lets take this sample code: def func1(a,b): print "func1: "+str(a/b) #some more functions def func2(a,b): print "func2: "+str(a/b) #some more functions if __name__=="__main__": import random count=0 for i in range(1000000): count=count+1 print "count: "+str(count) try: func1(random.randint(-2,3),random.randint(-2,3)) except KeyboardInterrupt: raise except: print "error in func1" try: func2(random.randint(-2,3),random.randint(-2,3)) except KeyboardInterrupt: raise except: print "error in func2" print "\n" In this sample code I am catching `KeyboardInterrupt` so I can stop my script by pressing `Ctrl`+`c`. Should I catch `SystemExit` too to make this code more mature? if yes then why? actually this question is source of my main question which appear on title. so don't consider that I am asking two different question in one post. Answer: You usually not need to catch `SystemExit` as it is what makes [`exit()`](http://docs.python.org/2/library/constants.html#exit) and [`sys.exit()`](http://docs.python.org/2/library/sys.html#sys.exit) functions work: > **sys.exit([arg])** > > Exit from Python. This is implemented by raising the `SystemExit` exception, > so cleanup actions specified by `finally` clauses of `try` statements are > honored, and it is possible to intercept the exit attempt at an outer level. Example: try: exit() except SystemExit: print "caught" Therefore, you usually don't want to catch all exceptions in the first place (by using an empty `except:` clause). The best approach is generally to make your exception handlers as specific as possible. It otherwise makes debugging your application exceptionally hard, as it either hides errors entirely or at least makes it hard to diagnose the details.
Python unpickling error when using base64encoding Question: I am trying to use pickling and I can't. It seems that I am doing something wrong. What is it? (using python 2.7) In [2]: import cPickle as pickle In [3]: arr = [] In [4]: tuple = ('name', 'surname', 'addr', 'area') In [5]: arr.append(tuple) In [6]: pickled_data=pickle.dumps(arr).encode("base64") In [7]: pickled_data Out[7]: 'KGxwMQooUyduYW1lJwpTJ3N1cm5hbWUnClMnYWRkcicKUydhcmVhJwp0cDIKYS4=\n' In [8]: new_arr = pickle.loads(pickled_data).decode("base64") --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) ----> 1 new_arr = pickle.loads(pickled_data).decode("base64") UnpicklingError: invalid load key, 'x'. Answer: It should be: new_arr = pickle.loads(pickled_data.decode("base64")) First you decode base64, then unpickle it.
wxPython threading downloads Question: I need to download a list of urls in a wxPython application, but I'm really new to threading, could anyone show me a working example of how to download links and put the output in a wx control. Answer: It would have been nice to see what you had done to at least try to accomplish this. Anyway, there are a couple of good resources for learning how to do threads in wxPython. Here they are: * wiki.wxpython.org/LongRunningTasks * <http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/> I wrote up a tutorial on downloading files using urllib/urllib2 and requests here: * <http://www.blog.pythonlibrary.org/2012/06/07/python-101-how-to-download-a-file/> If you combine these three articles together, you can come up with your answer. However, I went ahead and wrote up a super simple downloader: import requests import os import wx import wx.lib.scrolledpanel as scrolled from threading import Thread from wx.lib.pubsub import pub ######################################################################## class DownloadThread(Thread): """Downloading thread""" #---------------------------------------------------------------------- def __init__(self, gnum, url, fsize): """Constructor""" Thread.__init__(self) self.fsize = fsize self.gnum = gnum self.url = url self.start() #---------------------------------------------------------------------- def run(self): """ Run the worker thread """ local_fname = os.path.basename(self.url) count = 1 while True: if os.path.exists(local_fname): tmp, ext = os.path.splitext(local_fname) cnt = "(%s)" % count local_fname = tmp + cnt + ext count += 1 else: break req = requests.get(self.url, stream=True) total_size = 0 print local_fname with open(local_fname, "wb") as fh: for byte in req.iter_content(chunk_size=1024): if byte: fh.write(byte) fh.flush() total_size += 1024 if total_size < self.fsize: wx.CallAfter(pub.sendMessage, "update_%s" % self.gnum, msg=total_size) print "DONE!" 
wx.CallAfter(pub.sendMessage, "update_%s" % self.gnum, msg=self.fsize) ######################################################################## class MyGauge(wx.Gauge): """""" #---------------------------------------------------------------------- def __init__(self, parent, range, num): """Constructor""" wx.Gauge.__init__(self, parent, range=range) pub.subscribe(self.updateProgress, "update_%s" % num) #---------------------------------------------------------------------- def updateProgress(self, msg): """""" self.SetValue(msg) ######################################################################## class MyPanel(scrolled.ScrolledPanel): """""" #---------------------------------------------------------------------- def __init__(self, parent): """Constructor""" scrolled.ScrolledPanel.__init__(self, parent) self.data = [] self.download_number = 1 # create the sizers self.main_sizer = wx.BoxSizer(wx.VERTICAL) dl_sizer = wx.BoxSizer(wx.HORIZONTAL) # create the widgets lbl = wx.StaticText(self, label="Download URL:") self.dl_txt = wx.TextCtrl(self) btn = wx.Button(self, label="Download") btn.Bind(wx.EVT_BUTTON, self.onDownload) # layout the widgets dl_sizer.Add(lbl, 0, wx.ALL|wx.CENTER, 5) dl_sizer.Add(self.dl_txt, 1, wx.EXPAND|wx.ALL, 5) dl_sizer.Add(btn, 0, wx.ALL, 5) self.main_sizer.Add(dl_sizer, 0, wx.EXPAND) self.SetSizer(self.main_sizer) self.SetAutoLayout(1) self.SetupScrolling() #---------------------------------------------------------------------- def onDownload(self, event): """ Update display with downloading gauges """ url = self.dl_txt.GetValue() try: header = requests.head(url) fsize = int(header.headers["content-length"]) / 1024 sizer = wx.BoxSizer(wx.HORIZONTAL) fname = os.path.basename(url) lbl = wx.StaticText(self, label="Downloading %s" % fname) gauge = MyGauge(self, fsize, self.download_number) sizer.Add(lbl, 0, wx.ALL|wx.CENTER, 5) sizer.Add(gauge, 0, wx.ALL|wx.EXPAND, 5) self.main_sizer.Add(sizer, 0, wx.EXPAND) self.Layout() # start thread DownloadThread(self.download_number, url, fsize) self.download_number += 1 except Exception, e: print "Error: ", e ######################################################################## class DownloaderFrame(wx.Frame): """""" #---------------------------------------------------------------------- def __init__(self): """Constructor""" wx.Frame.__init__(self, None, title="Downloader", size=(800, 400)) panel = MyPanel(self) self.Show() #---------------------------------------------------------------------- if __name__ == "__main__": app = wx.App(False) frame = DownloaderFrame() app.MainLoop() Note: You will need [requests](http://docs.python-requests.org/en/latest/) to use this script.
Python range(%d) % sides Question: I am trying to make a program that uses the turtle module with Python 2.7.5+. The user can enter an integer, and I want to use that number as an argument for range(). Here is my code so far:

    import turtle
    import time

    sides = int(raw_input("Enter the amount of sides you want!"))
    angle = float(360 / sides)
    length = int(raw_input("Enter the length of each side!"))

    #Starts turtle drawer
    turtle = turtle.Turtle()

    def turtleDrawer():
        for i in range (%d) : % (sides)
            turtle.fd(%d) % (length)
            turtle.rt(%d) % (angle)

    ws = turtle.Screen()

    turtleDrawer()
    time.sleep(10)

When this code is executed, it gives me a syntax error, highlighting the `%` sign in the brackets. Answer: `for i in range (%d) : % (sides)` is invalid. This type of [formatting](http://docs.python.org/2/library/stdtypes.html#string-formatting) applies to strings, not to statements such as loops. You are looking for this statement:

    for i in range(sides):
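Applying the same fix to the whole function from the question — the variables are simply used directly, with no `%` formatting anywhere:

    def turtleDrawer():
        for i in range(sides):
            turtle.fd(length)
            turtle.rt(angle)

Note also that `float(360 / sides)` still performs integer division in Python 2 before the conversion, so the fractional part of the angle is lost; `angle = 360.0 / sides` keeps it.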
How to interact with ssh using subprocess module Question: I'm trying to spawn an ssh child process using subprocess. I'm working on Python 2.7.6 on Windows 7 here is my code: from subprocess import * r=Popen("ssh sshserver@localhost", stdout=PIPE) stdout, stderr=r.communicate() print(stdout) print(stderr) The outputs: None stdout should contain: sshserver@localhost's password: Answer: Here's an example of working SSH code that handles the promt for yes/no on the certificate part and also when asked for a password. #!/usr/bin/python import pty, sys from subprocess import Popen, PIPE, STDOUT from time import sleep from os import fork, waitpid, execv, read, write class ssh(): def __init__(self, host, execute='echo "done" > /root/testing.txt', askpass=False, user='root', password=b'SuperSecurePassword'): self.exec = execute self.host = host self.user = user self.password = password self.askpass = askpass self.run() def run(self): command = [ '/usr/bin/ssh', self.user+'@'+self.host, '-o', 'NumberOfPasswordPrompts=1', self.exec, ] # PID = 0 for child, and the PID of the child for the parent pid, child_fd = pty.fork() if not pid: # Child process # Replace child process with our SSH process execv(command[0], command) ## if we havn't setup pub-key authentication ## we can loop for a password promt and "insert" the password. while self.askpass: try: output = read(child_fd, 1024).strip() except: break lower = output.lower() # Write the password if b'password:' in lower: write(child_fd, self.password + b'\n') break elif b'are you sure you want to continue connecting' in lower: # Adding key to known_hosts write(child_fd, b'yes\n') elif b'company privacy warning' in lower: pass # This is an understood message else: print('Error:',output) waitpid(pid, 0) The reason (and correct me if i'm wrong here) for you not being able to read the `stdin` straight away is because SSH runs as a subprocess under a different process ID which you need to read/attach to. Since you're using windows, `pty` will not work. there's two solutions that would work better and that's [pexpect](http://pexpect.sourceforge.net/pexpect.html) and as someone pointed out key-based authentication. In order to achieve a key-based authentication you only need to do the following: On your client, run: `ssh-keygen` Copy your `id_rsa.pub` content (one line) into `/home/user/.ssh/authorized_keys` on the server. And you're done. If not, go with pexpect. import pexpect child = pexpect.spawn('ssh user@host.com') child.expect('Password:') child.sendline('SuperSecretPassword')
Python Pdb giving me a traceback and won't run Question: I set the `Pdb` debugger in my file as I always do, like so: `import pdb; pdb.set_trace()`, and now I keep getting this traceback. I'm not sure what the issue is, and I don't see anything about it online anywhere.

    Traceback (most recent call last):
      File "myfile.py", line 28, in <module>
        pdb.set_trace()
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pdb.py", line 1251, in set_trace
        Pdb().set_trace(sys._getframe().f_back)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pdb.py", line 63, in __init__
        cmd.Cmd.__init__(self, completekey, stdin, stdout)
    TypeError: __init__() takes at most 2 arguments (4 given)

Answer: Check whether you have your own `cmd.py`. That prevents the standard library [`cmd`](http://docs.python.org/2/library/cmd.html) module from being imported. Try the following command:

    python -c "import cmd; print(cmd.__file__)"

It should print something like:

    /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/cmd.py

or

    /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/cmd.pyc

If your own version of `cmd.py` or `cmd.pyc` shows up instead, rename it.
symbol not found when import PySide QtGui in python and mac 10.9 Question: My computer was damaged and forced me to buy a new Mac. I was using MacOS 10.6 with python 2.7.2, PySide 1.0, and Qt 4.7 before. I have setup the new machine by transferring everything from the old computer to the new one. And things have started not working in python. First, need to upgrade python to 2.7.6. Otherwise, will have a segment fault error. This error is fixed. Then need to upgrade Qt to 4.8 and PySide to 1.2.1. I install both by download the binary packages from the site. Import QtCore has no problem. And check that version are OK, both Qt and PySide. However, got symbol not found problem when importing QtGui as indicated in the following. >>> from PySide.QtGui import * Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: dlopen(/Library/Python/2.7/site-packages/PySide/QtGui.so, 2): Symbol not found: __ZN7QLayout11adoptLayoutEPS_ Referenced from: /Library/Python/2.7/site-packages/PySide/QtGui.so Expected in: /Library/Frameworks/QtGui.framework/Versions/4/QtGui in /Library/Python/2.7/site-packages/PySide/QtGui.so Anyone has any idea? Thanks. Answer: Stupid I am. Just find out that I install Qt 4.8.2 instead of the required version 4.8.5. Problem fixed after reinstalled the correct version. Sorry for all the trouble.
Insert Python List into a single column in mySQL Database Question: Hi I am trying to insert a python list into a single column but it keeps giving an error on the syntax. New to this. Appreciate any help. Thanks. from time import time import MySQLdb import urllib import re from bs4 import BeautifulSoup db = MySQLdb.connect("localhost","testuser","test123","testdb" ) cursor = db.cursor() x=1 while x<2: url = "http://search.insing.com/ts/food-drink/bars-pubs/bars-pubs?page=" +str(x) htmlfile = urllib.urlopen(url) soup = BeautifulSoup(htmlfile) reshtml = [h3.a for h3 in soup.find("div", "results").find_all("h3")] reslist = [] for item in reshtml: res = item.text.encode('ascii', 'ignore') reslist.append(' '.join(res.split())) sql = "INSERT INTO insing(name) \ VALUES %r" \ % reslist try: cursor.execute(sql) db.commit() except: db.rollback() db.close() x += 1 The output for SQL is > 'INSERT INTO insing(name) VALUES [\'AdstraGold Microbrewery & Bistro Bar\', > \'Alkaff Mansion Ristorante\', \'Parco Caffe\', \'The Fat Cat Bistro\', > \'Gravity Bar\', \'The Wine Company (Evans Road)\', \'Serenity Spanish Bar & > Restaurant (VivoCity)\', \'The New Harbour Cafe & Bar\', \'Indian Times\', > \'Sunset Bay Beach Bar\', \'Friends @ Jelita\', \'Talk Cock Sing Song @ > Thomson\', \'En Japanese Dining Bar (UE Square)\', \'Magma German Wine > Bistro\', "Tam Kah Shark\'s Fin", \'Senso Ristorante & Bar\', \'Hard Rock > Cafe (HPL House)\', \'St. James Power Station\', \'The St. James\', > \'Brotzeit German Bier Bar & Restaurant (Vivocity)\']' Answer: what about insert into table(name) values ('name1'), ('name2'), ... , ('name36'); [Inserting multiple rows in a single SQL query?](http://stackoverflow.com/questions/452859/inserting-multiple-rows-in- a-single-sql-query) That might help too. **EDIT** I automated the process as well: dataSQL = "INSERT INTO PropertyRow (SWID, Address, APN, PropertyType, PermissableUse, UseDetail, ReviewResult, Analysis, DocReviewed, AqDate, ValuePurchase, ValueCurrent, ValueDate, ValueBasis, ValueSale, SaleDate, PropPurpose, LotSize, Zoning, ParcelValue, EstRevenue, ReqRevenue, EnvHistory, TransitPotential, PlanObjective, PrevHistory, LastUpdDate, LastUpdUser)" fields = "VALUES ("+"'"+str(rawID)+"', " if(cell.ctype != 0): while column < 27: #column 16 will always be blank if (column == 16): column += 1 #column 26 is the end if (column == 26): fields += "'"+str(sh.cell_value(rowx=currentRow, colx=column)) + "'" else: #append to the value string fields += "'"+str(sh.cell_value(rowx=currentRow, colx=column)) + "', " #print fields column+=1 fields += ');' writeFyle.write(dataSQL) writeFyle.write(fields) In this implementation I am writing an insert statement for each row that I wanted to insert. This wasn't necessary but it was much easier.
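Another way to get the same multi-row insert without building the SQL string by hand — a sketch that assumes the `insing` table has a single `name` column, as in the question — is to let MySQLdb parameterize the values with `executemany`, which also sidesteps the quoting problems visible in the generated statement above:

    # reslist is the list of scraped names built in the question's loop
    cursor.executemany("INSERT INTO insing (name) VALUES (%s)",
                       [(name,) for name in reslist])
    db.commit()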
Issuing commands to psuedo shells (pty) Question: I've tried to use the subprocess, popen, os.spawn to get a process running, but it seems as though a pseudo terminal is needed. import pty (master, slave) = pty.openpty() os.write(master, "ls -l") Should send "ls -l" to the slave tty... I tried to read the response os.read(master, 1024), but nothing was available. EDIT: Also tried to create pty's, then open the call in a subprocess -- still didn't work. import pty import subprocess (master, slave) = os.openpty() p = subprocess.Popen("ls", close_fds=True, shell=slave, stdin=slave, stdout=slave) Similar: [Send command and exit using python pty pseudo terminal process](http://stackoverflow.com/questions/8710829/send-command-and-exit- using-python-pty-pseudo-terminal-process) [How do *nix pseudo-terminals work ? What's the master/slave channel?](http://stackoverflow.com/questions/476354/how-do-nix-pseudo- terminals-work-whats-the-master-slave-channel) Answer: Use `pty.spawn` instead of `os.spawn`. Here's a function that runs a command in a separate pty and returns its output as a string: import os import pty def inpty(argv): output = [] def reader(fd): c = os.read(fd, 1024) while c: output.append(c) c = os.read(fd, 1024) pty.spawn(argv, master_read=reader) return ''.join(output) print "Command output: " + inpty(["ls", "-l"])
Finding the nth smallest number in a list? Question: I need an efficient way of getting the nth smallest number AND its index in a list containing up to 15000 entries (so speed is not super crucial). I sadly can't use numpy or any other non-standard library. I'm using Python 2.7. Answer: use `heapq.nsmallest` (and `enumerate` to get the index):

    import random
    nums = [random.randint(1,1000000) for _ in range(10000)]

    import heapq
    import operator

    heapq.nsmallest(10,enumerate(nums),key=operator.itemgetter(1))
    Out[26]:
    [(5544, 35), (1702, 43), (6547, 227), (1540, 253), (4919, 360), (7993, 445), (1608, 495), (5832, 505), (1388, 716), (5103, 814)]
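If you only want the n-th smallest value itself together with its index (rather than the whole list of the n smallest), take the last element of that result — a small helper, still standard library only:

    import heapq
    import operator

    def nth_smallest(nums, n):
        """Return (index, value) of the n-th smallest element (n is 1-based)."""
        return heapq.nsmallest(n, enumerate(nums), key=operator.itemgetter(1))[-1]

    print nth_smallest([7, 2, 9, 4, 4], 3)   # (4, 4): index 4 holds the 3rd smallest value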
celery throws unicodedecodeerror when try to start Question: I want to run example code(tasks.py) from official tutorial: from celery import Celery app = Celery('tasks', broker='amqp://guest@localhost//') @app.task def add(x, y): return x + y I used command "celery -A tasks worker --loglevel=info" as shown in the tutorial. But, when I execute this command python throws UnicodeDecodeError: C:\kaznmu\virtualenvs\example\Scripts>celery -A tasks worker --loglevel=info Traceback (most recent call last): File "C:\kaznmu\virtualenvs\example\Scripts\celery-script.py", line 9, in <mod ule> load_entry_point('celery==3.1.8', 'console_scripts', 'celery')() File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\__main__.py", lin e 30, in main main() File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\celery.py", l ine 80, in main cmd.execute_from_commandline(argv) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\celery.py", l ine 746, in execute_from_commandline super(CeleryCommand, self).execute_from_commandline(argv))) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\base.py", lin e 308, in execute_from_commandline return self.handle_argv(self.prog_name, argv[1:]) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\celery.py", l ine 738, in handle_argv return self.execute(command, argv) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\celery.py", l ine 692, in execute ).run_from_argv(self.prog_name, argv[1:], command=argv[0]) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\worker.py", l ine 175, in run_from_argv return self(*args, **options) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\base.py", lin e 271, in __call__ ret = self.run(*args, **kwargs) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\worker.py", l ine 195, in run hostname = self.simple_format(default_nodename(hostname)) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\base.py", lin e 569, in simple_format return self._simple_format(s, socket.gethostname(), **extra) File "C:\kaznmu\virtualenvs\example\lib\site-packages\celery\bin\base.py", lin e 574, in _simple_format name, _, domain = host.partition('.') UnicodeDecodeError: 'ascii' codec can't decode byte 0xcf in position 5: ordinal not in range(128) * I tried reinstall all python libraries and use different versions, but this doesn't help. How can I run celery? * I use Windows7 and Python2.7 * Official tutorial here <http://docs.celeryproject.org/en/master/getting-started/first-steps-with-celery.html#first-steps> Answer: Problem is russian characters in pc name. Solved by renaming pc name.
How do I make python encrypt both uppercase and lowercase? Question: Basically I want the ciphered phrase as the output with both uppercase being ciphered to uppercase and lowercase being ciphered to lowercase but not any spaces or symbols are ciphered. It can encrypt a paragraph consisting of all upper case and a paragraph consisting of all lower case but not a mix of the two. here is what I have. def encrypt(phrase,move): encription=[] for character in phrase: a = ord(character) if a>64 and a<123: if a!=(91,96): for case in phrase: if case.islower(): alph=["a","b","c","d","e","f","g","h","i","j","k","l","m","n", "o","p","q","r","s","t","u","v","w","x","y","z"] dic={} for i in range(0,len(alph)): dic[alph[i]]=alph[(i+move)%len(alph)] cipherphrase="" for l in phrase: if l in dic: l=dic[l] cipherphrase+=l encription.append(chr(a if 97<a<=122 else 96+a%122)) return cipherphrase else: ALPH=["A","B","C","D","E","F","G","H","I","J","K","L","M","N", "O","P","Q","R","S","T","U","V","W","X","Y","Z"] DIC={} for I in range(0,len(ALPH)): DIC[ALPH[I]]=ALPH[(I+move)%len(ALPH)] cipherphrase="" for L in phrase: if L in DIC: L=DIC[L] cipherphrase+=L encription.append(chr(a if 97<a<=122 else 96+a%122)) return cipherphrase I know its a lot but as you can see im not very good Answer: import string def caesar_cipher(msg, shift): # create a character-translation table trans = dict(zip(string.lowercase, string.lowercase[shift:] + string.lowercase[:shift])) trans.update(zip(string.uppercase, string.uppercase[shift:] + string.uppercase[:shift])) # apply it to the message string return ''.join(trans.get(ch, ch) for ch in msg) then caesar_cipher('This is my 3rd test!', 2) # => 'Vjku ku oa 3tf vguv!'
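A usage note on the function above: because the translation table is built from the shift, passing the negative of the original shift (or equivalently `26 - shift`) turns the same function into the decoder:

    ciphertext = caesar_cipher('This is my 3rd test!', 2)   # 'Vjku ku oa 3tf vguv!'
    plaintext = caesar_cipher(ciphertext, -2)               # back to 'This is my 3rd test!'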
Linux - Weird Python Output Question: When ever i mistype or do a error into the console the following message come up: Traceback (most recent call last): File "/usr/lib/python3.3/site.py", line 629, in <module> main() File "/usr/lib/python3.3/site.py", line 614, in main known_paths = addusersitepackages(known_paths) File "/usr/lib/python3.3/site.py", line 284, in addusersitepackages user_site = getusersitepackages() File "/usr/lib/python3.3/site.py", line 260, in getusersitepackages user_base = getuserbase() # this will also set USER_BASE File "/usr/lib/python3.3/site.py", line 250, in getuserbase USER_BASE = get_config_var('userbase') File "/usr/lib/python3.3/sysconfig.py", line 610, in get_config_var return get_config_vars().get(name) File "/usr/lib/python3.3/sysconfig.py", line 560, in get_config_vars _init_posix(_CONFIG_VARS) File "/usr/lib/python3.3/sysconfig.py", line 432, in _init_posix from _sysconfigdata import build_time_vars File "/usr/lib/python3.3/_sysconfigdata.py", line 6, in <module> from _sysconfigdata_m import * ImportError: No module named '_sysconfigdata_m' I have both Python 2.7 and 3.3 install with Anaconda. I wonder if this is normal or it was a conflict between python 2.7 and 3.3 Answer: Assuming you are using ubuntu, here is the relevant bug report <https://bugs.launchpad.net/ubuntu/+source/python3.3/+bug/1192890> You need to patch your /etc/bash.bashrc. See comment #6 for details
Can't get un-stacked bar plot in python pandas Question: This is weird. I just can't seem to get unstacked bar plot in python pandas (unlike pandas official guide). The bars just seem to be overlapped, instead of placed sideways. Any clue why it would be? df.plot(kind='bar',stacked=False, figsize=(20,15), alpha=0.4) Here is the link to the image: ![enter image description here](http://i.stack.imgur.com/2UAb1.png) and here is a sample df OLS Ridge Lasso EN BN 0.008935 0.013937 0.000000 0.000000 BO 0.037947 0.034341 0.021778 0.021771 BP 0.205764 0.190278 0.184766 0.179000 CB 0.302772 0.106399 0.161487 0.076948 CD 0.464572 0.378660 0.424983 0.401792 CF 0.062425 0.006078 0.000000 -0.000000 CL -0.005794 0.002631 0.000000 0.001082 CN 0.012761 0.011331 0.010272 0.010476 Answer: Ok. So now I have to ask what version of pandas you're on. When I run: from matplotlib import pyplot as plt import pandas try: from io import StringIO except: from StringIO import StringIO data = StringIO("""\ OLS Ridge Lasso EN BN 0.008935 0.013937 0.000000 0.000000 BO 0.037947 0.034341 0.021778 0.021771 BP 0.205764 0.190278 0.184766 0.179000 CB 0.302772 0.106399 0.161487 0.076948 CD 0.464572 0.378660 0.424983 0.401792 CF 0.062425 0.006078 0.000000 -0.000000 CL -0.005794 0.002631 0.000000 0.001082 CN 0.012761 0.011331 0.010272 0.010476 """) df = pandas.read_csv(data, sep='\s+') fig, axes = plt.subplots(nrows=2, figsize=(6, 10)) df.plot(kind='bar', stacked=False, alpha=0.4, ax=axes[0]) df.plot(kind='bar', stacked=True, alpha=0.4, ax=axes[1]) for ax in axes: ax.set_ylim(bottom=0) I get: ![enter image description here](http://i.stack.imgur.com/iilW9.png) I'm on pandas 0.13 via (ana)conda
Python: Can't fix IndentationError: expected an indented block Question: I've messed around with this code in python as much as possible and still can't seem to get it to work #!/usr/bin/env python from time import sleep import os import RPi.GPIO as GPIO GPIO.setmode (GPIO.BCM) GPIO.setup (23, GPIO.IN) while True: if ( GPIO.input (23) == False ): os.system('mpg321 Sioux Goal Horn2.mp3 &') sleep (0.1); Every thing I've done to change it, I always get the same message. ~ $ chmod +x ButtonBasedGoalSound.py ~ $ sudo python ButtonBasedGoalSound.py File "ButtonBasedGoalSound.py" line 9 os.system(''mpg321 Sioux Goal Horn2.mp3 &') ^ IndentationError: expected an indented block I'd appreciate some help! Answer: Check whether you have mixed tabs and spaces on that line. Indent with spaces only! Otherwise you will experience great pain.
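If it isn't obvious by eye which lines are the culprits, a quick check like the one below prints every line of the script that contains a tab character (using the file name from the question); re-indent those lines with spaces only:

    # print every line of the script that contains a tab character
    with open("ButtonBasedGoalSound.py") as f:
        for num, line in enumerate(f, 1):
            if "\t" in line:
                print num, repr(line)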
How to read code file that are in the Python memory? Question: This is the error/traceback I'm actually getting: Traceback (most recent call last): File "/home/apache/tactic/src/tactic/ui/panel/custom_layout_wdg.py", line 619, in process_mako html = template.render(server=my.server, search=Search, sobject=sobject, sobjects=my.sobject_dicts, data=my.data, plugin=plugin, kwargs=my.kwargs) File "/home/apache/tactic/src/mako/template.py", line 189, in render return runtime._render(self, self.callable_, args, data) File "/home/apache/tactic/src/mako/runtime.py", line 403, in _render _render_context(template, callable_, context, *args, **_kwargs_for_callable(callable_, data)) File "/home/apache/tactic/src/mako/runtime.py", line 434, in _render_context _exec_template(inherit, lclcontext, args=args, kwargs=kwargs) File "/home/apache/tactic/src/mako/runtime.py", line 457, in _exec_template callable_(context, *args, **kwargs) File "memory:0x7f2ea8589810", line 19, in render_body ${gl.get_gantt('vfx')} File "/home/apache/gantt_logic.py", line 34, in get_gantt prepend.add_style("width: %spx" %(total_width/days * (start_date - process[1]["start"]).days)) ZeroDivisionError: integer division or modulo by zero I'm not asking how to fix the `ZeroDivisionError` error. That's trivial. What I'm curious to know is that how can I read through files that are in Python memory? In this case `memory:0x7f2ea8589810`. Answer: I do not now of your case but `import linecache` is the module you would ask if you need lines of a file. I also think traceback uses linecache for this line: ${gl.get_gantt('vfx')} So I think developers that want their generated code to be read should create linecache support. try: import linecache print(linecache.getlines("memory:0x7f2ea8589810")) or something like this.
OPC with Python 3.3 Question: I'd like to read the tags from my OPC server (Kepware) with Python 3.3. I have found the OpenOPC project on SourceForge, but it seems it doesn't work with Python 3.3. Do I have other options here? Answer: If your Python program runs on Windows, you can use QuickOPC (<http://www.opclabs.com/products/quickopc/languages-and-tools/python>). It exposes COM objects that provide various OPC operations (reading, writing, browsing, ...). These COM objects can be consumed from Python after importing `win32com.client`. Disclaimer: This is self-promotion.
Xcode 5 iPhone app; all my '[' and ']' match, but "parse expected ']' " still comes up Question: i'm new and learning from a tutorial, i've coded in _python_ before so i have mercilessly hunted for the additional '**]** ' it claims it needs… and even when i put it in it then says, it's unexpected and want to delete it, leading back to the original problem. it says it expects a '**]** ' on the line "**\- (void)createOrOpenDB** " this is the view controller.m #import "ViewController.h" @interface ViewController () { NSMutableArray *ArrayOfPowders; sqlite3 *powderDB; NSString *dbPathSring; } - (IBAction)AddPowderButton:(id)sender; - (IBAction)DeletePowderButton:(id)sender; - (IBAction)ShowAllButton:(id)sender; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; ArrayOfPowders = [[NSMutableArray alloc]init]; [[self mytableview]setDelegate:self]; [[self mytableview]setDataSource:self]; [self createOrOpenDB]; } - (void)createOrOpenDB { NSArray *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *docPath = [path objectAtIndex:0]; dbPathSring = [docPath stringByAppendingPathComponent:@"powders.db"]; char *error; NSFileManager *fileManager = [NSFileManager defaultManager]; if (![fileManager fileExistsAtPath:dbPathSring]) { const char *dbpath = [dbPathSring UTF8String]; // create db here if (sqlite3_open(dbpath, &powderDB)==SQLITE_OK){ const char *sql_stmt = "CREATE TABLE IF NOT EXISTS POWDERS (ID INTEGER PRIMARY KEY AUTOINCREMENT, POWDERNAME TEXT, RALCODE INTEGER, FINISH TEXT)"; sqlite3_exec(powderDB, sql_stmt, NULL, NULL, &error); sqlite3_close(powderDB); } } } -(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1;} -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [ArrayOfPowders count]; } -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *Cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (!Cell){ Cell = [[UITableViewCell alloc]initWithStyle:(UITableViewCellStyleSubtitle) reuseIdentifier:CellIdentifier]; powder *apowder = [ArrayOfPowders objectAtIndex:indexPath.row]; Cell.textLabel.text = apowder.name; Cell.detailTextLabel.text = [NSString stringWithFormat:@"%d",apowder.RAL]; return Cell; } - (void)didReceiveMemoryWarning { [super didRecieveMemoryWarning]; } - (IBAction)addpowderbutton:(id)sender { if (sqlite3_open([dbPathSring UTF8String], &powderDB)==SQLITE_OK){ NSString *inserStmt = [NSString stringWithFormat:@"INSERT INTO POWDERS(NAME,RAL) values ('%s' '%d')",[self.nameField.text UTF8String],[self.RAlField.text intValue]]; const char *insert_stmt = [inserstmt UTF8String]; if (sqlite3_exec(powderDB, insert_stmt, NULL, NULL, &error)==SQLITE_OK) { NSLog(@"powder added"); powder *powder = [[powder alloc]init]; [powder setname:self.namefield.text]; [powder setRALcode:[self.RALfield.text intValue]]; [ArrayOfPowders addObject:powder]; } sqlite3_close(powderDB); } } } - (IBAction)deletepowderbutton:(id)sender { sqlite3_stmt *statement; } - (IBAction)showallpowdersbutton:(id)sender { } - (IBAction)AddPowderButton:(id)sender { } - (IBAction)DeletePowderButton:(id)sender { } - (IBAction)ShowAllButton:(id)sender { } @end viewcontroller.h is; #import <UIKit/UIKit.h> #import "sqlite3.h" #import "powder.h" @interface ViewController : UIViewController<UITableViewDataSource, UITableViewDelegate> @property (weak, nonatomic) IBOutlet 
UITextField *namefield; @property (weak, nonatomic) IBOutlet UITextField *RALfield; @property (weak, nonatomic) IBOutlet UITableView *mytableview; // items on main storyboard are give properties here @end Answer: On the method `createOrOpenDB` you are missing a closing `}` all you need to do is add it at the end like below (As long as this isn't a typo on the copying over to here) - (void)createOrOpenDB { NSArray *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *docPath = [path objectAtIndex:0]; dbPathSring = [docPath stringByAppendingPathComponent:@"powders.db"]; } **UPDATE** Just because of how you have named `ArrayOfPowders` we can tell you aren't familiar with coding conventions so I am going to include [Apples Programming with Objective-c conventions documentation](https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/Conventions/Conventions.html) that I would recommend you have a read of as it will provide a better understanding of when to use uppercase and lowercase, method names, variable names etc
Error while using dpkt in python Question: I'm writing some code to parse a pcap file in python as follows: #!/usr/bin/env python import socket import dpkt import sys import pcap pcapReader = dpkt.pcap.Reader(file("clients.pcap", "rb")) for ts, data in pcapReader: ether = dpkt.ethernet.Ethernet(data) if ether.type != dpkt.ethernet.ETH_TYPE_IP: raise ip = ether.data src = socket.inet_ntoa(ip.src) dst = socket.inet_ntoa(ip.dst) print "%s -> %s" % (src, dst) While compiling, I’m getting the following error message:: Traceback (most recent call last): File "test.py", line 3, in <module> import dpkt ImportError: No module named dpkt Any help is appreciated. Answer: On Ubuntu, just type: sudo apt-get install python-dpkt
Flatten (an irregular) list of lists in Python respecting Pandas Dataframes Question: This is a recurring question here on Stack Overflow, yet the solution given [here](http://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python?answertab=active#tab-top) is still not perfect. Yielding is still (for me) one of the most complex things to use in Python, so I don't know how to fix it myself. When an item within any of the lists given to the function is a Pandas dataframe, the flatten function will return its header instead of the dataframe itself. You can test this explicitly by running the following code:

    import pandas
    import numpy as np
    import collections

    df = pandas.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

    def flatten(l):
        for el in l:
            if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
                for sub in flatten(el):
                    yield sub
            else:
                yield el

Then, if you call the function given in the referenced post:

    list(flatten([df]))
    #['A', 'B', 'C', 'D']

you get the column names instead of a list with the dataframe inside. How can I make the flatten function respect the dataframes? Answer: That `flatten` function will recurse down if the element is an instance of `collections.Iterable` and it's not a string (which is iterable, but we usually want to treat it as a scalar, something we're not going to look inside). Even though `DataFrames` are instances of `collections.Iterable`, it sounds like you want them to be terminal too. In that case:

    if (isinstance(el, collections.Iterable)
        and not isinstance(el, (basestring, pandas.DataFrame))):

After which:

    >>> list(flatten([[1,2], "2", df]))
    [1, 2, '2', <class 'pandas.core.frame.DataFrame'>
    Int64Index: 100 entries, 0 to 99
    Data columns (total 4 columns):
    A    100  non-null values
    B    100  non-null values
    C    100  non-null values
    D    100  non-null values
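Putting that one-line change back into the full function from the question, the complete generator looks like this:

    import collections
    import pandas

    def flatten(l):
        for el in l:
            if (isinstance(el, collections.Iterable)
                    and not isinstance(el, (basestring, pandas.DataFrame))):
                for sub in flatten(el):
                    yield sub
            else:
                # strings and DataFrames are treated as terminal values
                yield el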
Python Selenium WebDriver. How to check/verify that drop-down menu with suggested results is displayed? Question: I wanna make sure that this drop-down menu with suggested results is displayed when I enter something into search field. ![enter image description here](http://i.stack.imgur.com/5Egtb.png) Here is my script (not for Google search) which doesn't work: suggestions = driver.find_element_by_xpath('/html/body/div[6]') print suggestions.get_attribute('display') # >>>None if suggestions == "block": print "Suggestions are displayed!" else: print "Suggestions aren't displayed!" How I understand I have to check that attribute "display" is is "block". If it's "none" it means that drop-down menu isn't displayed. HTML code of menu with suggested results: <div style="display: block; position: absolute; width: 237px; top: 270px; left: 186px;" class="ac_results"> <ul style="max-height: 180px; overflow: auto;"> <li class="ac_even ac_over">Elen<strong>a</strong> J<strong>a</strong>mes De<strong>a</strong>n</li> <li class="ac_odd">Ellie portnov T<strong>a</strong>r<strong>a</strong></li> <li class="ac_even">Elen<strong>a</strong> Q<strong>A</strong></li> <li class="ac_odd">Jessy J<strong>a</strong>mes</li> <li class="ac_even">J<strong>a</strong>mes HotStuff De<strong>a</strong>n</li> <li class="ac_odd">j<strong>a</strong>mess b<strong>a</strong>g de<strong>a</strong>n</li> <li class="ac_even">J<strong>a</strong>mes Hotstuff De<strong>a</strong>n</li> <li class="ac_odd">j<strong>a</strong>mes cool De<strong>a</strong>n</li> <li class="ac_even">J<strong>a</strong>smin1 Gurusw<strong>a</strong>mi1</li> <li class="ac_odd">j<strong>a</strong>mes hotguy de<strong>a</strong>n</li> </ul> </div> My code: suggestions = driver.find_element_by_xpath('/html/body/div[6]') if suggestions.is_displayed(): print "Suggestions are displayed!" else: print "Suggestions aren't displayed!" Answer: 1. Attribute is called `style` 2. You can use `is_displayed()` WebElement's method. It returns `True`, if element is displayed 3. You can use expected conditions. [Here's](http://selenium-python.readthedocs.org/en/latest/api.html#selenium.webdriver.support.expected_conditions.visibility_of) an example. 4. Your code is simply wrong. suggestions = driver.find_element_by_xpath('/html/body/div[6]') style = suggestions.get_attribute('style') if 'block' in style: print "Suggestions are displayed!" else: print "Suggestions aren't displayed!" This can help, but better use number 3 ;) **UPDATE:** As it takes some time to render the needed element, it is a good practice to use waits. Info's [here](http://selenium- python.readthedocs.org/en/latest/waits.html) **UPDATE2:** You can use this code: from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait WebDriverWait(driver, 5).until( EC.visibility_of_element_located((By.XPATH, '/html/body/div[6]')), "Element was not displayed after 5 seconds") If element's visibility is not critical you can use try-except block
Memory error at python Question:

    from __future__ import division
    import dataProcess
    import csv,re
    from collections import OrderedDict
    import itertools

    #######################################################################################
    #               Pruning of N-grams depending upon the frequency of tags               #
    #######################################################################################

    for k in range(2,8):
        filename="Dataset/Cross/N_gram_Features_Pruned/"+str(k)+"_gram.txt"
        filewrite=open(filename,"w")
        CSV_tag_reader=csv.reader(open("Dataset/Cross/N_grams_recored/"+str(k)+"_gram.csv","r"),delimiter=',')
        header_data=CSV_tag_reader.next();
        table = [row for row in CSV_tag_reader]
        values=[]
        result_tag=[]
        for j in range(0,len(header_data)):
            sum1=0
            avg1=0
            for i in range (0,3227):
                sum1=sum1+int(table[i][j])
    ##            print "************************************************************"
    ##            print sum1
            avg1=sum1/3227
    ##        print avg1
            if(avg1>=0.3):
                result_tag.append(header_data[j])
        print len(header_data)
        print len(result_tag)
        print "************************************************************"
        filewrite.write(str(result_tag))

My code counts the frequency of particular words in 3227 samples of data. I have recorded the frequencies of about 277436 words across the 3227 samples, so imagine a csv file with 3227 rows and 60k columns. For each word I sum the frequencies and compute the average, but I get a memory error when I run this code. How can I solve it?

> Error:
> Traceback (most recent call last):
>   File "N_gram_pruning.py", line 15, in <module>
>     table = [row for row in CSV_tag_reader]
> MemoryError

My csv file looks like this: f1 f2 f3 f4.....f277436(header row) 0 9 1 4 70 56 2 66 8 23 (3227 rows...) Answer: The problem is that you're reading the entire file into memory. To avoid this, you may have to restructure your algorithm. It seems that you're operating on every column individually, meaning operations on each column are independent. Therefore, if you transpose your csv files so they can be read line by line, you can iterate over those lines rather than reading them all into memory. Alternatively, you could use [file.seek()](http://docs.python.org/2/library/stdtypes.html#file.seek), though it'll be very slow.
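Another restructuring that avoids both the transpose and the in-memory table — a sketch assuming the same file layout and 0.3 threshold as in the question — is to stream the rows and keep one running sum per column:

    import csv

    reader = csv.reader(open("Dataset/Cross/N_grams_recored/2_gram.csv", "r"), delimiter=',')
    header_data = reader.next()
    sums = [0] * len(header_data)
    rows = 0
    for row in reader:                  # only one row in memory at a time
        rows += 1
        for j, value in enumerate(row):
            sums[j] += int(value)

    result_tag = [name for name, s in zip(header_data, sums)
                  if s / float(rows) >= 0.3]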
Accessing enumeration constants in Excel COM using Python and win32com Question: I'm using the Python 2.7 win32com module to load an MS Excel worksheet from Python:

    book = xlApp.Workbooks.Open("myFile.xls")
    sheet = book.Sheets(1)

Many methods and properties of Range, Worksheet etc. use enumerations like XlDirection, XlFillWith, and so forth. These define constants such as xlDown, xlUp, xlFillWithContents, etc. Are those constants available from win32com so that I could do, for example:

    column = outputsSheet.Range("I5:I150")
    lastRow = column.End(xlInterop.xlDown)
    print "Last row:", lastRow.Row

This doesn't work because xlInterop is not defined. Is there a way to access it using win32com? Discovering the values of constants such as xlDown by trial and error is not very practical. Answer: I'm new to win32com, so I don't know if this helps... try to start Excel with:

    xlApp = win32com.client.gencache.EnsureDispatch("Excel.Application")

This should run makepy, and you can find the resulting Python files in ...\Lib\site-packages\win32com\gen_py\00020813-0000-0000-C000-000000000046x0x1x7 (the last folder might have another name for you, I don't know). Check the `__init__.py` file; there's a bunch of constants defined there. Edit: you can access these constants with:

    from win32com.client import constants as c
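Putting both pieces together with the snippet from the question — a sketch that assumes the same workbook and range, and that the makepy cache has been generated by `EnsureDispatch`:

    import win32com.client
    from win32com.client import constants

    xlApp = win32com.client.gencache.EnsureDispatch("Excel.Application")
    book = xlApp.Workbooks.Open("myFile.xls")
    outputsSheet = book.Sheets(1)

    column = outputsSheet.Range("I5:I150")
    lastRow = column.End(constants.xlDown)   # constants.xlUp etc. work the same way
    print "Last row:", lastRow.Row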
Sending High Importance email through Outlook using Python Question: Using win32com.client package, I'm able to send an HTML email using outlook through Python. However, I'm having a hard time figuring out how to mark an email "high priority" or "high importance". Here is the code I'm using to successfully send out an email (with no priority marking): RTFTEMPLATE = """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN"> <HTML> <HEAD> <META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; = charset=3Dus-ascii"> <META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version = 08.00.0681.000"> <TITLE>%s</TITLE> </HEAD> <BODY> <!-- Converted from text/rtf format --> <P DIR=3DLTR><SPAN LANG=3D"en-us"><FONT = FACE="Times New Roman"> %s </FONT></SPAN><SPAN = LANG=3D"en-us"></SPAN></P> <br> %s </BODY> </HTML>""" Format = { 'UNSPECIFIED' : 0, 'PLAIN' : 1, 'HTML' : 2, 'RTF' : 3} profile = "Outlook" #session = win32com.client.Dispatch("Mapi.Session") outlook = win32com.client.Dispatch("Outlook.Application") #session.Logon(profile) mainMsg = outlook.CreateItem(0) mainMsg.To = "RECIPIENT" mainMsg.Subject = subject mainMsg.BodyFormat = Format['RTF'] mainMsg.HTMLBody = RTFTEMPLATE % (subject,html,bad_table) mainMsg.Send() Answer: You are creating your message through the COM `Outlook Object Model`. This model is fully documented, which can be a great help in situations like this. For instance, the `MailItem` you are creating is documented [here](http://msdn.microsoft.com/en-us/library/office/dn320330.aspx). As you can tell from that page, it has a property [Importance](http://msdn.microsoft.com/en-us/library/office/ff866759.aspx) which you can set to 2 (olImportanceHigh) to mark the message as "high importance". In code mainMsg.Importance = 2
StringVar bound to an Entry doesn't update Entry value Question: I'm starting with GUI in Python and Tkinter and want to make a small window that load an image from a file and show the path of the file and also de image. Until now, I've got my window and the button to pick the image (_tkinter.filedialog.askopenfilename_) but I'm trying to update the _Entry_ that is supposed to show the path using a _tkinter.StringVar_. The value of the StringVar is changing, but not the value shown in the Entry. **Frame Code:** import tkinter as tk import tkinter.filedialog as tkf class ImageViewer(tk.Frame): def __init__(self, master=None): tk.Frame.__init__(self, master) self.master.title("Image Viewer") self.drawButtons() self.grid() def drawButtons(self): self.lblfilename = tk.Label(self, text="Image: ") self.lblfilename.grid(padx=1, column=0, row=0, sticky=tk.N+tk.E+tk.S+tk.W) self.eText = tk.StringVar() self.eText.set("") self.filenametext = tk.Entry(self, background='#fff', width=65, textvariable=self.eText) self.filenametext.grid(padx=10, column=1, row=0, columnspan=2, sticky=tk.N+tk.E+tk.S+tk.W) self.pickerbut = tk.Button(self, text="Load", command=self.picker) self.pickerbut.grid(column=3, row=0, sticky=tk.N + tk.S + tk.W) self.image = tk.Canvas(self, height=480, width=640) self.image.grid(padx=10, pady=5, column=0, row=1, columnspan=4, rowspan=3) self.cancelbut = tk.Button(self, text="Exit", command=self.cancel) self.cancelbut.grid(column=3, row=4, sticky=tk.N + tk.W + tk.S) def picker(self): self.imgpicker = tkf.askopenfilename(parent=self) self.eText.set(self.imgpicker) print(self.eText.get()) def cancel(self): self.master.destroy() **main:** if __name__ == "__main__": imageviewerbutton = tk.Tk() imageviewerbutton.geometry('660x550+100+90') ImageViewer(imageviewerbutton) imageviewerbutton.mainloop() When I print the value of _self.eText_ with _self.eText.get()_ , it show the correct image path, but my Entry remain empty. In my opinion, it is any kind of problem when bindind the StringVar and the Entry, although I've already search on internet and try everything that came to my mind, I didn't found a solution. Can someone give me a hint? Answer: I don't have a python3 instance where I can test this, but it seems to work OK on python 2.7. Are you aware you don't need to use a `StringVar` with an entry widget? In my opinion you almost never need one. You can get and set the value of the entry without it. For example: self.imgpicker = tkf.askopenfilename(parent=self) self.filenametext.delete(0, "end") self.filenametext.insert(0, self.imgpicker)
Paragraph Matching Python Question: **Background information** I have a Python script which generates word documents with the `docx` module. These documents are generated based on a log and then printed and stored as records. However, the log can be edited retroactively, so the document records need to be revised, and these revisions must be tracked. I'm not actually revising the documents, but generating a new one which shows the difference between what is currently in the log, and what will soon be in the log (the log is updated after the revised file is printed). When a revision occurs, my script uses `diff_match_patch` to generate a mark-up of what's changed with the following function: def revFinder(str1,str2): dmp = dmp_module.diff_match_patch() diffs = dmp.diff_main(str1,str2) paratext = [] for diff in diffs: paratext.append((diff[1], '' if diff[0] == 0 else ('s' if diff[0] == -1 else 'b'))) return paratext `docx` can take text either as strings, or by tuple if word-by-word formatting is required, so [see second bullet in "Some Things to Note"] [("Hello, ", ''), ("my name ", 'b'), ("is Brad", 's')] produces > Hello **my name** ~~is Brad~~ * * * **The Problem** `diff_match_patch` is a very efficient code which finds the difference between two texts. Unfortuanly, its a little too efficient, so replacing `redundant` with `dune` results in > ~~re~~ dun~~ant~~**e** This is ugly, but its fine for single words. However, if an entire paragraph gets replaced, the results will be entirely unreadable. That is not ok. Previously I addressed this by collapsing all the text into a single paragraph, but this was less than ideal because it became very cluttered and was still pretty ugly. * * * **The Solution So Far** I have a function which creates the revision document. This function gets passed a list of tuples set up like this: [(fieldName, original, revised)] So the document is set up as Orignial fieldName (With Markup) result of revFinder diffing orignal and revised Revised fieldName revised I assume in order to resolve the problem, I'll need to do some sort of matching between paragraphs to make sure I don't diff two completely separate paragraphs. I'm also assuming this matching will depend if paragraphs are added or removed. Here's the code I have so far: if len(item[1].split('\n')) + len(item[1].split('\n'))) == 2: body.append(heading("Original {} (With Markup)".format(item[0]),2)) body.append(paragraph(revFinder(item[1],item[2]))) body.append(paragraph("",style="BodyTextKeep")) body.append(heading("Revised {}".format(item[0]),2)) body.append(paragraph(item[2])) body.append(paragraph("")) else: diff = len(item[1].split('\n')) - len(item[1].split('\n')) if diff == 0: body.append(heading("Original {} (With Markup)".format(item[0]),2)) for orPara, revPara in zip(item[1].split('\n'),item[2].split('\n')): body.append(paragraph(revFinder(orPara,revPara))) body.append(paragraph("",style="BodyTextKeep")) body.append(heading("Revised {}".format(item[0]),2)) for para in item[2].split('\n'): body.append(paragraph("{}".format(para))) body.append(paragraph("")) elif diff > 0: #Removed paragraphs elif diff < 0: #Added paragraphs So far I've planned on using something like `difflib` to do paragraph matching. But if there's a better way to avoid this problem that is a completely different approach, that's great too. 
* * * **Some Things to Note:** * I'm running Python 2.7.6 32-bit on Windows 7 64-bit * I've made some changes to my local copy of `docx` (namely adding the strike through formatting) so if you test this code you will not be able to replicate what I'm doing in that regard * * * **Description of the Entire Process (with the revision steps in bold):** 1) User opens Python script and uses GUI to add information to a thing called a "Condition Report" (CR) > **NOTE:** A full CR contains 4 parts, all completed by different people. But > each part gets individually printed. All 4 parts are stored together in the > log 2) When the user is finished, the information is saved to a log (described below), and then printed as a `.docx` file 3) The printed document is signed and stored **4)** When the user wants to revise a part of the CR, the open the GUI, and edit the information in each of the fields. I am only concerned about a few of the fields in this question, and those are the multiline text controls (which can result in multiple paragraphs) **5)** Once the user is done with the revision, the code generates the tuple list I described in the "Solution So Far" section, and sends this to the function which generates the revision document **6)** The revision document is created, printed, signed, and stored with the original document for that part of that CR **7)** The log is completely rewritten to include the revised information * * * **The Log:** The log is simply a giant `dict` which stores all the information on all of the CRs. The general format is {"Unique ID Number": [list of CR info]} The log doesn't store past versions of a CR, so when a CR is revised the old information is overwritten (which is what we want for the system). As I mentioned earlier, every time the log is edited, the whole thing is rewritten. To get at the information in the log, I `import` it (since it always lives in the same directory as the script) Answer: Try using the post-diff cleanup options that diff_match_patch that @tzaman mentioned above, in particular, check out the `diff_cleanupSemantic` function which is intended for use when the diff output is intended to be human- readable. Cleanup options are NOT run automatically, since diff_match_patch provides several cleanup options from which you may choose (depending on your needs). Here is an example: import diff_match_patch dmp = diff_match_patch.diff_match_patch() diffs = dmp.diff_main('This is my original paragraph.', 'My paragraph is much better now.') print diffs # pre-cleanup dmp.diff_cleanupSemantic(diffs) print diffs # post cleanup Output: [(-1, 'This is m'), (1, 'M'), (0, 'y'), (-1, ' original'), (0, ' paragraph'), (1, ' is much better now'), (0, '.')] [(-1, 'This is my original paragraph'), (1, 'My paragraph is much better now'), (0, '.')] As you can see, the first diff is optimal but unreadable, while the second dif (after cleanup) is exactly what you are looking for.
Numba code slower than pure python Question: I've been working on speeding up a resampling calculation for a particle filter. As python has many ways to speed it up, I though I'd try them all. Unfortunately, the numba version is incredibly slow. As Numba should result in a speed up, I assume this is an error on my part. I tried 4 different versions: 1. Numba 2. Python 3. Numpy 4. Cython The code for each is below: import numpy as np import scipy as sp import numba as nb from cython_resample import cython_resample @nb.autojit def numba_resample(qs, xs, rands): n = qs.shape[0] lookup = np.cumsum(qs) results = np.empty(n) for j in range(n): for i in range(n): if rands[j] < lookup[i]: results[j] = xs[i] break return results def python_resample(qs, xs, rands): n = qs.shape[0] lookup = np.cumsum(qs) results = np.empty(n) for j in range(n): for i in range(n): if rands[j] < lookup[i]: results[j] = xs[i] break return results def numpy_resample(qs, xs, rands): results = np.empty_like(qs) lookup = sp.cumsum(qs) for j, key in enumerate(rands): i = sp.argmax(lookup>key) results[j] = xs[i] return results #The following is the code for the cython module. It was compiled in a #separate file, but is included here to aid in the question. """ import numpy as np cimport numpy as np cimport cython DTYPE = np.float64 ctypedef np.float64_t DTYPE_t @cython.boundscheck(False) def cython_resample(np.ndarray[DTYPE_t, ndim=1] qs, np.ndarray[DTYPE_t, ndim=1] xs, np.ndarray[DTYPE_t, ndim=1] rands): if qs.shape[0] != xs.shape[0] or qs.shape[0] != rands.shape[0]: raise ValueError("Arrays must have same shape") assert qs.dtype == xs.dtype == rands.dtype == DTYPE cdef unsigned int n = qs.shape[0] cdef unsigned int i, j cdef np.ndarray[DTYPE_t, ndim=1] lookup = np.cumsum(qs) cdef np.ndarray[DTYPE_t, ndim=1] results = np.zeros(n, dtype=DTYPE) for j in range(n): for i in range(n): if rands[j] < lookup[i]: results[j] = xs[i] break return results """ if __name__ == '__main__': n = 100 xs = np.arange(n, dtype=np.float64) qs = np.array([1.0/n,]*n) rands = np.random.rand(n) print "Timing Numba Function:" %timeit numba_resample(qs, xs, rands) print "Timing Python Function:" %timeit python_resample(qs, xs, rands) print "Timing Numpy Function:" %timeit numpy_resample(qs, xs, rands) print "Timing Cython Function:" %timeit cython_resample(qs, xs, rands) This results in the following output: Timing Numba Function: 1 loops, best of 3: 8.23 ms per loop Timing Python Function: 100 loops, best of 3: 2.48 ms per loop Timing Numpy Function: 1000 loops, best of 3: 793 µs per loop Timing Cython Function: 10000 loops, best of 3: 25 µs per loop **Any idea why the numba code is so slow?** I assumed it would be at least comparable to Numpy. _Note: if anyone has any ideas on how to speed up either the Numpy or Cython code samples, that would be nice too:) My main question is about Numba though._ Answer: The problem is that numba can't intuit the type of `lookup`. If you put a `print nb.typeof(lookup)` in your method, you'll see that numba is treating it as an object, which is slow. Normally I would just define the type of `lookup` in a locals dict, but I was getting a strange error. Instead I just created a little wrapper, so that I could explicitly define the input and output types. 
@nb.jit(nb.f8[:](nb.f8[:])) def numba_cumsum(x): return np.cumsum(x) @nb.autojit def numba_resample2(qs, xs, rands): n = qs.shape[0] #lookup = np.cumsum(qs) lookup = numba_cumsum(qs) results = np.empty(n) for j in range(n): for i in range(n): if rands[j] < lookup[i]: results[j] = xs[i] break return results Then my timings are: print "Timing Numba Function:" %timeit numba_resample(qs, xs, rands) print "Timing Revised Numba Function:" %timeit numba_resample2(qs, xs, rands) * * * Timing Numba Function: 100 loops, best of 3: 8.1 ms per loop Timing Revised Numba Function: 100000 loops, best of 3: 15.3 µs per loop * * * You can go even a little faster still if you use `jit` instead of `autojit`: @nb.jit(nb.f8[:](nb.f8[:], nb.f8[:], nb.f8[:])) For me that lowers it from 15.3 microseconds to 12.5 microseconds, but it's still impressive how well autojit does.
How to remove character from tuples in list? Question: How can I remove "(" and ")" from [('(10', '40)'), ('(40', '30)'), ('(20', '20)')] in Python? Answer: Straightforward: use a list comprehension and literal_eval.

    >>> from ast import literal_eval
    >>> tuple_list = [('(10', '40)'), ('(40', '30)'), ('(20', '20)')]
    >>> [literal_eval(','.join(i)) for i in tuple_list]
    [(10, 40), (40, 30), (20, 20)]
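If you'd rather not parse the strings as Python literals, stripping the parentheses directly works too (drop the `int()` call if you want to keep the values as strings):

    >>> tuple_list = [('(10', '40)'), ('(40', '30)'), ('(20', '20)')]
    >>> [tuple(int(s.strip('()')) for s in t) for t in tuple_list]
    [(10, 40), (40, 30), (20, 20)]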
How to do linear regression, taking errorbars into account? Question: I am doing a computer simulation for some physical system of finite size, and after this I am doing extrapolation to the infinity (Thermodynamic limit). Some theory says that data should scale linearly with system size, so I am doing linear regression. The data I have is noisy, but for each data point I can estimate errorbars. So, for example data points looks like: x_list = [0.3333333333333333, 0.2886751345948129, 0.25, 0.23570226039551587, 0.22360679774997896, 0.20412414523193154, 0.2, 0.16666666666666666] y_list = [0.13250359351851854, 0.12098339583333334, 0.12398501145833334, 0.09152715, 0.11167239583333334, 0.10876248333333333, 0.09814170444444444, 0.08560799305555555] y_err = [0.003306749165349316, 0.003818446389148108, 0.0056036878203831785, 0.0036635292592592595, 0.0037034897788415424, 0.007576672222222223, 0.002981084130692832, 0.0034913019065973983] Let's say I am trying to do this in Python. 1. First way that I know is: m, c, r_value, p_value, std_err = scipy.stats.linregress(x_list, y_list) I understand this gives me errorbars of the result, but this does not take into account errorbars of the initial data. 2. Second way that I know is: m, c = numpy.polynomial.polynomial.polyfit(x_list, y_list, 1, w = [1.0 / ty for ty in y_err], full=False) Here we use the inverse of the errorbar for the each point as a weight that is used in the least square approximation. So if a point is not really that reliable it will not influence result a lot, which is reasonable. But I can not figure out how to get something that combines both these methods. What I really want is what _second_ method does, meaning use regression when every point influences the result with different weight. But at the same time I want to know _how accurate_ my result is, meaning, I want to know what are errorbars of the resulting coefficients. How can I do this? Answer: Not entirely sure if this is what you mean, but…using pandas, [statsmodels](http://statsmodels.sourceforge.net/stable/), and patsy, we can compare an ordinary least-squares fit and a weighted least-squares fit which uses the inverse of the noise you provided as a weight matrix (statsmodels will complain about sample sizes < 20, by the way). 
import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.formula.api as sm x_list = [0.3333333333333333, 0.2886751345948129, 0.25, 0.23570226039551587, 0.22360679774997896, 0.20412414523193154, 0.2, 0.16666666666666666] y_list = [0.13250359351851854, 0.12098339583333334, 0.12398501145833334, 0.09152715, 0.11167239583333334, 0.10876248333333333, 0.09814170444444444, 0.08560799305555555] y_err = [0.003306749165349316, 0.003818446389148108, 0.0056036878203831785, 0.0036635292592592595, 0.0037034897788415424, 0.007576672222222223, 0.002981084130692832, 0.0034913019065973983] # put x and y into a pandas DataFrame, and the weights into a Series ws = pd.DataFrame({ 'x': x_list, 'y': y_list }) weights = pd.Series(y_err) wls_fit = sm.wls('x ~ y', data=ws, weights=1 / weights).fit() ols_fit = sm.ols('x ~ y', data=ws).fit() # show the fit summary by calling wls_fit.summary() # wls fit r-squared is 0.754 # ols fit r-squared is 0.701 # let's plot our data plt.clf() fig = plt.figure() ax = fig.add_subplot(111, axisbg='w') ws.plot( kind='scatter', x='x', y='y', style='o', alpha=1., ax=ax, title='x vs y scatter', edgecolor='#ff8300', s=40 ) # weighted prediction wp, = ax.plot( wls_fit.predict(), ws['y'], color='#e55ea2', lw=1., alpha=1.0, ) # unweighted prediction op, = ax.plot( ols_fit.predict(), ws['y'], color='k', ls='solid', lw=1, alpha=1.0, ) leg = plt.legend( (op, wp), ('Ordinary Least Squares', 'Weighted Least Squares'), loc='upper left', fontsize=8) plt.tight_layout() fig.set_size_inches(6.40, 5.12) plt.savefig("so.png", dpi=100, alpha=True) plt.show() ![Black is OLS, pink is WLS](http://i.stack.imgur.com/xnvjq.png) WLS residuals: [0.025624005084707302, 0.013611438189866154, -0.033569595462217161, 0.044110895217014695, -0.025071632845910546, -0.036308252199571928, -0.010335514810672464, -0.0081511479431851663] The mean squared error of the residuals for the weighted fit (`wls_fit.mse_resid` or `wls_fit.scale`) is **0.22964802498892287** , and the r-squared value of the fit is **0.754**. You can obtain a wealth of data about the fits by calling their `summary()` method, and/or doing `dir(wls_fit)`, if you need a list of every available property and method.
Selenium webdriver screenshot not being taken from django Question: I have a functional test 'y1.py' which I have exported from the selenium IDE. It looks like: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import Select from selenium.common.exceptions import NoSuchElementException import unittest, time, re class Y1(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.implicitly_wait(30) self.base_url = "https://www.yahoo.com/" self.verificationErrors = [] self.accept_next_alert = True def test_y1(self): driver = self.driver driver.get(self.base_url) driver.find_element_by_link_text("Weather").click() driver.save_screenshot('out11.png') def tearDown(self): self.driver.quit() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() I have created a django app 'fts' and have placed the script in 'fts/tests.py. I added the line: driver.save_screenshot('out11.png') to the end to get a screenshot. When I run the script from the command line using: $ python manage.py test fts the test passes but no screenshot is taken. How can I fix this? Answer: I personally use `get_screenshot_as_file` instead, this is my code : from selenium import webdriver if __name__ == '__main__': browser = webdriver.Firefox() try: browser.get('http://www.google.com') browser.get_screenshot_as_file('screenshot.png') except Exception as e: print e With `save_screenshot` you have to write the image to a file to keep in memory.
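One more thing worth checking (an assumption, not something stated in the answer above): `save_screenshot` resolves a relative filename against the current working directory of the process, so under `python manage.py test` the file may simply end up somewhere other than where you are looking for it. Writing to an absolute path built from the test module's location makes the output easy to find:

    import os

    screenshot_path = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), 'out11.png')
    driver.save_screenshot(screenshot_path)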
Matplotlib Version Question: Having my system prepped with homebrew and using `pip install matplotlib` after successful installation of numpy and scipy, I'm getting a successful installation. Then, running $ python Python 2.7.6 (default, Jan 30 2014, 20:19:23) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import matplotlib >>> matplotlib.__version__ '1.1.1' This is a very outdated version and noone of my programs run with it. I used `pip uninstall matplotlib` and redid it with `pip install 'the url for 1.3.1'` and it still reads version 1.1.1. Is there a way I can manually delete all python libraries, even python itself, and restart from scratch? Or is this an obvious fix for this? EDIT: I'm running Mac OS X version 10.9. I just reinstalled python 2.7 with scipy, numpy, and matplotlib through macports. Is there a very basic way to see where, when I `import matplotlib` from the python environment, it is calling it from? Like `which` in the terminal? I began using homebrew but switched to macports for more control. Can that be a problem? Do I need to completely remove homebrew? I did get this message at first: `Warning: Error parsing file /Applications/MacPorts/Python 2.7/Python Launcher.app/Contents/MacOS/Python Launcher: Error opening or reading file` but after running `$ sudo port -f deactivate python27` followed by `sudo port activate python27` I no longer have that warning, but I wanted to include this detail for completeness. EDIT 2: Could some things be installing to `opt/local/bin` when they need to be installed to `usr/local/bin`? EDIT 3: To shed some light on this, `print scipy.__version__` reads `0.11.0` which is several outdated, `print numpy.__version__` reads `1.6.2` which is also outdated. However I attempt to install says the installation was successful, which I don't doubt. I suspect it's not linked up together in a correct way. Is there a way delete everything that is connected to python at all and restart? FINAL EDIT: I think the easiest way to handle this is to run `which python` and see what options you have to run python. Because I used homebrew and macports at this time (not recommended) I had four options- a macports install, a package install from python.org, a homebrew install, and the standard 2.6 from Apple. Iterate through these and find which one your installer (`pip` or `easy_install`) is placing your frameworks and run that python when you need certain dependencies. The best way is use only one package manager and run virtual environments if you need different dependencies, but we're all learning as we go. Answer: Using Matplotlib in OSX can give you problems. [In this page](http://matplotlib.org/users/installing.html#build-osx), they say: > The build situation on OSX is complicated by the various places one can get > the libpng and freetype requirements (darwinports, fink, /usr/X11R6) and the > different architectures (e.g., x86, ppc, universal) and the different OSX > version (e.g., 10.4 and 10.5). In the official page of Matplotlib they recommend to use the mkpg installer: > The mkpg installer will have a “zip” extension, and will have a name like > matplotlib-1.2.0-py2.7-macosx10.5_mpkg.zip. The name of the installer > depends on which versions of python, matplotlib, and OSX it was built for. > [...] install to a directory like /Library/Python/2.7/site-packages/ (exact > path depends on your python version). 
In the [OSX Notes section](http://matplotlib.org/faq/installing_faq.html) of the FAQ you have more information about installing on OSX. **Edited**: I haven't found an mpkg for this release, but you can use [this](http://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.3.1/) DMG instead.
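As for seeing where `import matplotlib` is actually loading the module from (the Python equivalent of `which`), the module's `__file__` attribute tells you, and `sys.executable` tells you which interpreter binary is running:

    import sys
    import matplotlib
    print matplotlib.__file__     # path the module was imported from
    print matplotlib.__version__
    print sys.executable          # which python binary is running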
Repeating rows in numpy according to a vector of indices Question: Suppose I have a matrix B: B = [ [0, 1, 2], [2, 3, 4], [5, 6, 7] ] and a vector a: a = [0,0,1,1,2] I need to define a new vector C such that it repeats the rows in B as specified by a, i.e., C = [ [0, 1, 2], [0, 1, 2], [2, 3, 4] [2, 3, 4], [5, 6, 7] ] Is there a trick command to do this in Python? Answer: You can use `a` as an [index array](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays) here. >>> import numpy as np >>> b = np.array([ [0, 1, 2], [2, 3, 4], [5, 6, 7] ]) >>> a = [0,0,1,1,2] >>> b[a] array([[0, 1, 2], [0, 1, 2], [2, 3, 4], [2, 3, 4], [5, 6, 7]]) And from the [docs](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index- arrays): > For all cases of index arrays, what is returned is a **copy** of the > original data, not a view as one gets for slices. In pure Python you can use a list comprehension: >>> B = [ [0, 1, 2], [2, 3, 4], [5, 6, 7] ] >>> [B[x][:] for x in a] [[0, 1, 2], [0, 1, 2], [2, 3, 4], [2, 3, 4], [5, 6, 7]] Note that `[:]` returns a shallow copy of the lists, if the lists contains mutable objects then you'll have to use `copy.deepcopy` to get a completely new copy.
Passing variable to another Python Script Question: I am having difficulty passing a variable from one function to another function in another python script. I have read the other answers but they have not really helped on this subject. This is the first file I want to send the variable to( some code omitted for clarity ) # TestGUI.py from Tkinter import * import serial import os class Testgui: def __init__(self, master): def writetoBOT(self,instruct): ser = serial.Serial(6) ser.baudrate = 9600 ser.parity = serial.PARITY_NONE #set parity check: no parity ser.timeout = 1 #non-block read ser.writeTimeout = 2 #timeout for writ if(ser.isOpen() == False): ser.open() print ser.portstr # check which port was really used ser.write(instruct) else : ser.write(instruct) This is the sceond file: # TestGUI_2.py from TestGUI import Testgui class Tracker: def __init__(self): pass def drive(self,cords, cords1): while( cords >= 320): l='l' Testgui.writetoBOT(l) # This is the problem line TypeError: unbound method writetoBOT() must be called with TestGUI instance as first argument (got str instance instead) Answer: `writetoBOT` takes 2 arguments: `self` and `instruct`. call it with a `Testgui` instance: tgui=Testgui(your_master) tgui.writetoBOT(l) If you want to call it with `Testgui` class, you still need to pass an instance of `Testgui`: tgui=Testgui(your_master) Testgui.writetoBOT(tgui, l)
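If the goal is for `Tracker` to reuse one serial connection, a slightly restructured sketch may be clearer than calling through the class. This is only an illustration (it assumes `writetoBOT` is moved out of `__init__` so it becomes a real method, and that `Tracker` is handed a `Testgui` instance when it is created):

    # TestGUI.py (sketch)
    class Testgui:
        def __init__(self, master):
            self.master = master

        def writetoBOT(self, instruct):
            # a normal method, not nested inside __init__;
            # open the serial port and write `instruct` here
            pass

    # TestGUI_2.py (sketch)
    from TestGUI import Testgui

    class Tracker:
        def __init__(self, gui):
            self.gui = gui  # keep a reference to a Testgui instance

        def drive(self, cords, cords1):
            while cords >= 320:  # loop condition kept from the original
                self.gui.writetoBOT('l')  # call the bound method on the instance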
Python file copying using regex Question: I have a large log file. I want to extract the lines containing `java/javax/or/com` followed by a `./:`. For every line like this, I want to extract some of the corresponding lines which are stack traces and starts with `at`. For example: Line1: java.line.something.somethingexception line 2: at something something line 3: at something something line 4: at something something line 5-20:Junk I don't want to extract. line 21: javax.line.something.somethingexception line 22: at something something line 23: at something something line 24: at something something and so on... Here I want to copy line 1-4 and then again line 21-24. So far my code collects the line which contains the keywords but I'm unable to figure out how to write a specific no of lines after that, skip a few lines and start writing again.These lines which starts with at are random, i.e they can be 100 lines or they can be 250 lines, so no pattern. Here's my code: import re import sys from itertools import islice file = open(sys.argv[1], "r") file1 = open(sys.argv[2],"w") i = 0 for line in file: if re.search(r'[java|javax|org|com]+?[\.|:]+?', line, re.I) and not (re.search(r'at\s', line, re.I) or re.search(r'mdcloginid:|webcontainer|c\.h\.i\.h\.p\.u\.e|threadPoolTaskExecutor|caused\sby', line, re.I)): file1.write(line) This code only extracts the lines containing the keywords, but I'm stuck at how to do the next part,i.e copy the next lines containing at and write them to a new file, stop where 'at' ends. Search for the next line containing keywords and do the same action again. Answer: This can be solved by flag that you set in case you match your specific conditions: java_regex = re.compile(...) # java at_regex = re.compile(...) # at copy = False # flag that control to copy or to not copy to output for line in file_in: if re.search(java_regex, line): # start copying if "java" is in the input copy = True else: if copy and not re.search(at_regex, line): # stop copying if "at" is not in the input copy = False if copy: file_out.write(line)
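Putting the two pieces together, a complete sketch of the copy-flag approach might look like the following. The exact patterns are assumptions based on the description; note that the original `[java|javax|org|com]` is a character class, which matches single characters, so an alternation group `(java|javax|org|com)` is used instead:

    import re
    import sys

    exc_regex = re.compile(r'^\s*(javax?|org|com)\.\S*(exception|error)', re.I)
    at_regex = re.compile(r'^\s*at\s', re.I)

    copy = False
    with open(sys.argv[1]) as file_in, open(sys.argv[2], 'w') as file_out:
        for line in file_in:
            if exc_regex.search(line):
                copy = True        # header line of a stack trace: start copying
            elif copy and not at_regex.search(line):
                copy = False       # the "at ..." frames ended: stop copying
            if copy:
                file_out.write(line)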
Integrating New Relic with Tornado app with gunicorn as a process manager Question: I want to use New Relic to monitor errors in my Async Tornado app with gunicorn as a process manager. When I try to make a request after integrating with New Relic I get the following error File "/Library/Python/2.7/site- packages/newrelic-2.10.1.9/newrelic/hooks/framework_tornado.py", line 30, in request_environment result['REQUEST_URI'] = request.uri AttributeError: 'dict' object has no attribute 'uri' The app is hosted on Heroku requirements.txt # Analytics newrelic==2.10.1.9 Procfile web: newrelic-admin run-program gunicorn -k tornado --bind=0.0.0.0:$PORT opening_application.runserver Answer: The workaround to eliminate the issue is to add the following to the agent configuration file (newrelic.ini): [import-hook:gunicorn.app.base] enabled = false
How to uninstall manually openerp module Question: I have installed a module on openerp v7 that I would like to uninstall. Using the interface fails, i get an error during the uninstall process. Is there a 'manual' way to uninstall a module ? Is it sufficient to remove the module folder under `addons/` or is there any other things to do, to make it in the cleanest way ? Here is the error I get when i try to uninstall a module through the interface: Client Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/http.py", line 204, in dispatch response["result"] = method(self, **self.params) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/controllers/main.py", line 1132, in call_button action = self._call_kw(req, model, method, args, {}) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/controllers/main.py", line 1120, in _call_kw return getattr(req.session.model(model), method)(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/session.py", line 42, in proxy result = self.proxy.execute_kw(self.session._db, self.session._uid, self.session._password, self.model, method, args, kw) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/session.py", line 30, in proxy_method result = self.session.send(self.service_name, method, *args) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/session.py", line 103, in send raise xmlrpclib.Fault(openerp.tools.ustr(e), formatted_info) Server Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/web/session.py", line 89, in send return openerp.netsvc.dispatch_rpc(service_name, method, args) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/netsvc.py", line 292, in dispatch_rpc result = ExportService.getService(service_name).dispatch(method, params) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/service/web_services.py", line 626, in dispatch res = fn(db, uid, *params) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/osv/osv.py", line 188, in execute_kw return self.execute(db, uid, obj, method, *args, **kw or {}) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/osv/osv.py", line 131, in wrapper return f(self, dbname, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/osv/osv.py", line 197, in execute res = self.execute_cr(cr, uid, obj, method, *args, **kw) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/osv/osv.py", line 185, in execute_cr return getattr(object, method)(cr, uid, *args, **kw) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/base/module/module.py", line 495, in button_immediate_uninstall return self._button_immediate_function(cr, uid, ids, self.button_uninstall, context=context) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/addons/base/module/module.py", line 475, in _button_immediate_function _, pool = pooler.restart_pool(cr.dbname, 
update_module=True) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/pooler.py", line 39, in restart_pool registry = RegistryManager.new(db_name, force_demo, status, update_module) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/registry.py", line 218, in new openerp.modules.load_modules(registry.db, force_demo, status, update_module) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/loading.py", line 354, in load_modules loaded_modules, update_module) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/loading.py", line 256, in load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/loading.py", line 188, in load_module_graph load_data(module_name, idref, mode) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/loading.py", line 76, in <lambda> load_data = lambda *args: _load_data(cr, *args, kind='data') File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725- py2.7.egg/openerp/modules/loading.py", line 124, in _load_data tools.convert_xml_import(cr, module_name, fp, idref, mode, noupdate, report) File "/usr/local/lib/python2.7/dist-packages/openerp-7.0_20131016_232725-py2.7.egg/openerp/tools/convert.py", line 945, in convert_xml_import relaxng.assert_(doc) File "lxml.etree.pyx", line 3027, in lxml.etree._Validator.assert_ (src/lxml/lxml.etree.c:129517) AssertionError: Did not expect text in element record content, line 33 Sorry for this long trace.. At first I suspected one of the xml files to wrong, but as I get the same error when I try to install a module manually, I think something else is going wrong but I can't see what. Cheers Answer: Removing the module's code from `addons/`is not enough - this will break OpenERP. **Solution 1 - Try to resolve the error you have during normal uninstall** This method is recommended because OpenERP does the job cleanly. Look at the error you have during uninstallation and try to imagine what could be the problem. Post the error trace here for further help. **Solution 2 - Manual uninstall** My procedure is based on OpenERP v6.0 but it should be very similar in OpenERP v7. 1. Backup your database :). Do it. It's very probable that you break something during this procedure. 2. Find what records were created during the installation or the update of the module. * Look at `__openerp__.py` to see which XML files are taken into account when installing and updating the module. Alternatively, consider all XML files in the module's directories. * Search for records created by this modules during install or update. There should be XML elements like `<record ...model='...'>` inside these files. The `model` attribute tells you in which datatable the record resides. If you are using a Unix-like system, you can try the following command in the module's root directory: `grep -r -n -A 5 --include="*.xml" \<record *` 3. Delete these records. You'll use some database interface tool such as `PgAdmin` or `pqsl` and find the records discovered in the previous step. 
For example, the following XML line defines a record in the `ir_cron` datatable: `<record model="ir.cron" id="ir_cron_account_fiscalyear_close">` Knowing that, you can find the record based on the data defined for this record in the XML file. 4. Find and delete all the menu items defined by the module. As above, search the XML files for `<menuitem ...>` elements. Look for related records in the `ir_ui_menu` datatable. 5. Discover which `models` were defined by the module. Try the following command: `grep -r -n -C 5 --include="*.py" "_name = " *` Only pay attention to models defined in objects that derive from `osv.osv` (`osv.Model` in OpenERP v7). They define persistent models stored in the database. Objects descendant from `osv.osv_memory` (`osv.TransientModel` in v7) are not stored in the database. Be careful and avoid deleting models defined in parent objects. Look at the `_inherit` property of the object to give you an idea about that. In this case, you want to only delete the columns added by your module. Once you discover the models defined by your module, try to delete the corresponding datatables. For example, the model with `_name = "bg_vat.bg_vat"` will have a corresponding table in the database named 'bg_vat_bg_vat'. 6. Finally, remove or just deactivate the module. Look for a record corresponding to your module in `ir_module_module` datatable. You can delete the record or just set the `state` field to `uninstalled`. If you like, you can now delete the module's directory from `addons` but I don't see a reason to do this. I'm sure I missed some cleaning actions (like the records in the `ir_model*` datatable family). I'm also pretty sure this procedure can easily break your OpenERP installation. Make a backup first. :)
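One shortcut for steps 2 and 3 above that may help (the table name is standard in OpenERP 7, but treat this as an assumption about your setup): OpenERP records every XML id a module creates in the `ir_model_data` table, so you can list, and then delete, the module's records from there instead of grepping the XML by hand. From psql or PgAdmin, something like:

    -- see everything the module registered (menus, views, cron jobs, ...)
    SELECT model, res_id, name FROM ir_model_data WHERE module = 'your_module_name';

    -- after deleting the referenced records table by table, clean up the index itself
    DELETE FROM ir_model_data WHERE module = 'your_module_name';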
How to test session in flask resource Question: I'd like to test a resource. The response of that depends on a parameter in session (logged) To test this resource, I've wrote those tests: import app import unittest class Test(unittest.TestCase): def setUp(self): self.app = app.app.test_client() def test_without_session(self): resp = self.app.get('/') self.assertEqual('without session', resp.data) def test_with_session(self): with self.app as c: with c.session_transaction() as sess: sess['logged'] = True resp = c.get('/') self.assertEqual('with session', resp.data) if __name__ == '__main__': unittest.main() My app.py is this: from flask import Flask, session app = Flask(__name__) @app.route('/') def home(): if 'logged' in session: return 'with session' return 'without session' if __name__ == '__main__': app.run(debug=True) When i run the tests I have this error: ERROR: test_pippo_with_session (__main__.Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_pippo.py", line 17, in test_pippo_with_session with c.session_transaction() as sess: File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__ return self.gen.next() File "/home/tommaso/repos/prova-flask/local/lib/python2.7/site-packages/flask/testing.py", line 74, in session_transaction raise RuntimeError('Session backend did not open a session. ' RuntimeError: Session backend did not open a session. Check the configuration I haven't found any solution on google. Answer: If you did not set a custom `app.session_interface`, then you forgot to set a [secret key](https://flask.readthedocs.org/en/latest/api/?highlight=session#flask.Flask.secret_key): def setUp(self): app.config['SECRET_KEY'] = 'sekrit!' self.app = app.app.test_client() This just sets a mock secret key for the tests, but for your application to work you'll need to generate a production secret key, see the [sessions section in the Quickstart documentation](https://flask.readthedocs.org/en/latest/quickstart/#sessions) for tips on how to produce a good secret key. Without a secret key, the default [session implementation](https://flask.readthedocs.org/en/latest/api/?highlight=session#flask.sessions.SecureCookieSessionInterface) fails to create a session.
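For the production key mentioned at the end, the Flask quickstart suggests generating it from the operating system's random source; a quick way is to run this once and store the printed value in your configuration:

    import os
    print repr(os.urandom(24))  # paste the printed value in as your SECRET_KEY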
Faster to add all items to an array then write the array to file, or faster to write the item to a file and then add to array one at a time? Question: So right now I have this (in Python 2.7): if y == ports[0]: Array1.append(x) elif y == ports[1]: Array2.append(x) elif y == ports[2]: Array3.append(x) elif y == ports[3]: Array4.append(x) else: Array5.append(x) for x in Array1: target=open('Array1.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array1.csv\n" for x in Array2: target=open('Array2.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array2.csv\n" for x in Array3: target=open('Array3.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array3.csv\n" for x in Array4: target=open('Array4.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array4.csv\n" for x in Array5: target=open('Array5.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array5.csv\n" Would the program finish quicker if I did: if y == ports[0]: Array1.append(x) target=open('Array1.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array1.csv\n" elif y == ports[1]: Array2.append(x) target=open('Array2.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array2.csv\n" elif y == ports[2]: Array3.append(x) target=open('Array3.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array3.csv\n" elif y == ports[3]: Array4.append(x) target=open('Array4.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array4.csv\n" else: Array5.append(x) target=open('Array5.csv', 'a') target.write(x + ",\n") target.close() print "Added IP address " + x + " to Array5.csv\n" Or would I see any difference at all? Or perhaps there is a third way that is quicker? Does it matter when the item is written to the list? Answer: That probably barely matters. What’s more important is not constantly reopening the file: with open('Array1.csv', 'a') as target: for x in Array1: target.write(x + ",\n") print "Added IP address " + x + " to Array1.csv\n" Also, I don’t know if it applies here, but [the `csv` module](http://docs.python.org/3/library/csv.html) _does_ exist.
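A third option, sketched below under the assumption that the goal is one CSV file per port: open each output file once up front (indexed the same way as the `ports` list) and write each address as soon as it is classified, which avoids both the per-item `open()`/`close()` and the intermediate lists. The `pairs` iterable is hypothetical; it stands for wherever your `(x, y)` values come from:

    filenames = ['Array1.csv', 'Array2.csv', 'Array3.csv', 'Array4.csv', 'Array5.csv']
    files = [open(name, 'a') for name in filenames]

    for x, y in pairs:                 # however the addresses are produced
        if y in ports[:4]:
            index = ports.index(y)
        else:
            index = 4                  # the catch-all file, like the else branch
        files[index].write(x + ",\n")
        print "Added IP address " + x + " to " + filenames[index]

    for f in files:
        f.close()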
Vim plugin to toggle Python function/method arguments between single- and multi-line Question: I'm looking for a Vim plugin that can take a single-line statement like this:

    foo = self.some_method(param1="hi", param2="there")

and turn it into this:

    foo = self.some_method(
        param1="hi",
        param2="there"
    )

Big bonus points if it can append a comma to the last argument, like this:

    foo = self.some_method(
        param1="hi",
        param2="there",
    )

And finally I'd like to be able to turn the multi-line version back into a single line, but just handling the single-to-multi-line scenario alone is sufficient for me. Using `J` to re-join the line is fast enough most of the time. I'm **not** looking for a solution that formats like this:

    foo = self.some_method(param1="hi",
                           param2="there")

Answer: With this plugin: [splitjoin.vim](https://github.com/AndrewRadev/splitjoin.vim). Using your example, start from:

    foo = self.some_method(param1="hi", param2="there", param3="again")

With the cursor between the parentheses, invoke the default mapping `gS` to split the arguments onto separate lines; to return to the original, just use `gJ`. It works for many languages. For Python you can split dicts, lists, tuples, statements and imports.
Installing PyQuery Via Pip Question: I'm attempting to install `PyQuery` via `pip` but I'm getting an error I do not understand. The command I used was: sudo pip install pyquery I get the output below: Requirement already satisfied (use --upgrade to upgrade): pyquery in /usr/local/lib/python2.7/dist-packages Downloading/unpacking lxml>=2.1 (from pyquery) Running setup.py egg_info for package lxml /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.3.0. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt Downloading/unpacking cssselect (from pyquery) Running setup.py egg_info for package cssselect no previously-included directories found matching 'docs/_build' Installing collected packages: lxml, cssselect Running setup.py install for lxml /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.3.0. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt building 'lxml.etree' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/home/imageek/scripts/facebook/build/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w src/lxml/lxml.etree.c:16:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 Complete output from command /usr/bin/python -c "import setuptools;__file__='/home/imageek/scripts/facebook/build/lxml/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-dyUZWZ-record/install-record.txt: /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.3.0. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt running install running build running build_py copying src/lxml/includes/lxml-version.h -> build/lib.linux-i686-2.7/lxml/includes running build_ext building 'lxml.etree' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/home/imageek/scripts/facebook/build/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w src/lxml/lxml.etree.c:16:20: fatal error: Python.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /usr/bin/python -c "import setuptools;__file__='/home/imageek/scripts/facebook/build/lxml/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-dyUZWZ-record/install-record.txt failed with error code 1 Storing complete log in /home/imageek/.pip/pip.log I have a feeling it's something to do with dependencies, but should 'pip' not automatically install dependencies? Answer: You have missing dependencies. 
Try running: `sudo apt-get install libxml2-dev libxslt1-dev python-dev` The build log shows `xslt-config: not found` and `fatal error: Python.h: No such file or directory`: lxml has to be compiled, and that requires the libxml2/libxslt development packages and the Python headers. `pip` does install Python dependencies automatically, but it cannot install these system-level C libraries for you.
Relative imports with unittest in Python Question: I am trying to use Python unittest and relative imports, and I can't seem to figure it out. I know there are a lot of related questions, but none of them have helped so far. Sorry if this is repetitive, but I would really appreciate any help. I was trying to use the syntax from PEP 328 <http://www.python.org/dev/peps/pep-0328/> but I must have something wrong. My directory structure is: project/ __init__.py main_program.py lib/ __init__.py lib_a lib_b tests/ __init__.py test_a test_b I run my tests using: python -m unittest test_module1 test_module2 test_a needs to import both lib/lib_a and main_program. This is the code from test_a I am trying to use for the import: from ..lib import lib_a as lib from ...project import main_program both raise this error: ValueError: Attempted relative import in non-package All of my **init**.py files are currently empty. Any specific advice would be greatly appreciated!! Edit: This may be the answer: [Python Packages?](http://stackoverflow.com/questions/1342975/python-packages) I'm still verifying if this will work. Edit II: To clarify, at this point I have attempted to run my test file in 3 different ways: project/tests $ python -m unittest test_a project/tests $ python -m test_a project/tests $ ./test_a All three fail with the same error as above. When I use the same three syntaxes but in the project directory, I get this error: ValueError: Attempted relative import beyond toplevel package Thanks again. Answer: In my experience it is easiest if your project root is not a package, like so: project/ test.py run.py package/ __init__.py main_program.py lib/ __init__.py lib_a lib_b tests/ __init__.py test_a test_b However, as of python 3.2 , the unittest module provides the `-t` option, which lets you set the top level directory, so you could do (from `package/`): python -m unittest discover -t .. More details at the [unittest docs](http://docs.python.org/3/library/unittest.html#test-discovery).
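With that layout, and `python -m unittest discover -t ..` run from `package/`, the test modules are imported as `package.tests.test_a`, so the relative imports from the question should resolve. A sketch, assuming the directory names shown above and `__init__.py` files in `package/`, `lib/` and `tests/`:

    # package/tests/test_a.py
    from ..lib import lib_a as lib   # resolves to package.lib.lib_a
    from .. import main_program     # resolves to package.main_program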
Python ISRIStemmer for Arabic text Question: I am running the following code in IDLE (Python) and I want to enter an Arabic string and get its stem, but it doesn't work:

    >>> from nltk.stem.isri import ISRIStemmer
    >>> st = ISRIStemmer()
    >>> w = 'حركات'
    >>> join = w.decode('Windows-1256')
    >>> print st.stem(join).encode('Windows-1256').decode('utf-8')

The result of running it is the same text in w, which is 'حركات', which is not the stem. But when I do the following:

    >>> print st.stem(u'اعلاميون')

the result succeeds and returns the stem, which is 'علم'. Why does passing a variable to the stem() function not return the stem? Answer: Ok, I solved the problem by myself using the following:

    >>> w = 'حركات'
    >>> st.stem(w.decode('utf-8'))

and it gives the root correctly, which is "حرك"
Does tuple() copy the elements of the argument? Question: In python, does the built-in function `tuple([iterable])` create a tuple object and fill it with copies of the elements of "iterable", or does it create a tuple containing references to the already existing objects of "iterable"? Answer: `tuple` will iterate the sequence and copy the values. The underlying sequence will be _not_ stored to actually keep the values, but the tuple representation will replace it. So yes, the conversion to a tuple is actual work and not just some nesting of another type. You can see this happening when converting a generator: >>> def gen (): for i in range(5): print(i) yield i >>> g = gen() >>> g <generator object gen at 0x00000000030A9B88> >>> tuple(g) 0 1 2 3 4 (0, 1, 2, 3, 4) As you can see, the generator is immediately iterated, making the values generate. Afterwards, the tuple is self-contained, and no reference to the original source is kept. For reference, `list()` behaves in exactly the same way but creates a list instead. The behaviour that 275365 pointed out (in the now deleted answer) is the standard copying behaviour of Python values. Because everything in Python is an object, you are essentially only working with references. So when references are copied, the underlying object is not copied. The important bit is that non-mutable objects will be recreated whenever their value changes which will not update all previously existing references but just the one reference you are currently changing. That’s why it works like this: >>> source = [[1], [2], [3]] >>> tpl = tuple(source) >>> tpl ([1], [2], [3]) >>> tpl[0].append(4) >>> tpl ([1, 4], [2], [3]) >>> source [[1, 4], [2], [3]] `tpl` still contains a reference to the original objects within the `source` list. As those are lists, they are mutable. Changing a mutable list anywhere will not invalidate the references that exist to that list, so the change will appear in both `source` and `tpl`. The actual source list however is only stored in `source`, and `tpl` has no reference to it: >>> source.append(5) >>> source [[1, 4], [2], [3], 5] >>> tpl ([1, 4], [2], [3])
Contour plot from data in a vtk file using Python Question: I have a set of data stored in a VTK file which represents a cut through a domain with scalar point data in an array. I am trying to produce a contour plot of said scalar to make it look somewhat like the attached picture made using ParaView. I'd rather stick with the vtk libraries than use something else, like Matplotlib, as I think they produce better visualisations in general.![Preview of what I want to achieve](http://i.stack.imgur.com/4dQcU.jpg). I have looked at several examples on-line but none of them work for me (no errors are thrown, all I end up with is an empty render with just the background), all I have been able to do is a surface plot of the data (e.g.: [here](http://public.kitware.com/cgi- bin/cvsweb.cgi/~checkout~/VTK/Examples/ImageProcessing/Python/Contours2D.py) ). Here's the current version of the code I have (very similar to the one that successfully produces the surface plot): # import data reader = vtk.vtkDataSetReader() reader.SetFileName('inputDataFiles/k_zCut.vtk') reader.ReadAllVectorsOn() reader.ReadAllScalarsOn() reader.Update() # access data data = reader.GetOutput() d = data.GetPointData() array=d.GetArray('k') # create the filter contours = vtk.vtkContourFilter() contours.SetInput(reader.GetOutput()) contours.GenerateValues(5,1.,5.) # create the mapper mapper = vtk.vtkPolyDataMapper() mapper.SetInput(contours.GetOutput()) mapper.ScalarVisibilityOff() mapper.SetScalarRange(1., 5.) # create the actor actor = vtk.vtkActor() actor.SetMapper(mapper) # create a rendering window and renderer ren = vtk.vtkRenderer() ren.SetBackground( 0.329412, 0.34902, 0.427451 ) #Paraview blue # Assign actor to the renderer ren.AddActor(actor) renWin = vtk.vtkRenderWindow() renWin.AddRenderer(ren) renWin.SetSize(750, 750) # create a renderwindowinteractor iren = vtk.vtkRenderWindowInteractor() iren.SetRenderWindow(renWin) # render renWin.Render() # screenshot w2if = vtk.vtkWindowToImageFilter() w2if.SetInput(renWin) w2if.Update() w2if.SetMagnification(5.) writer = vtk.vtkPNGWriter() writer.SetFileName("screenshot.png") writer.SetInput(w2if.GetOutput()) writer.Write() # Enable user interface interactor iren.Initialize() iren.Start() Below you can see a shortened part of my input file. Any help will be much appreciated. # vtk DataFile Version 2.0 sampleSurface ASCII DATASET POLYDATA POINTS 34813 float 0 0 0 0 -0.000191589 0 0.000264399 0.000157061 0 0 0.000313389 0 0.000264347 -0.000191923 0 0 -0.000383178 0 -0.000395709 0 0 -0.000395709 0.000156695 0 3.60174e-05 0.000486922 0 0.000528387 0 0 POLYGONS 69284 277136 3 4105 4371 3861 3 4102 3861 4371 3 4656 4371 4373 3 4105 4373 4371 3 3624 3861 3390 3 3621 3390 3861 3 4105 3863 3861 3 3624 3861 3863 3 3188 3390 2990 3 3187 2990 3390 3 3624 3390 3391 3 3188 3391 3390 POINT_DATA 34813 FIELD attributes 1 k 1 34813 float 0.849464 0.391519 1.52947 1.05206 0.391519 0.253736 1.39481 1.39481 0.636517 1.21019 0.640193 0.114295 1.12557 0.644143 0.629569 0.114295 0.485032 0.477396 1.39961 0.0860201 1.66665 1.24058 1.45939 0.483719 1.01318 0.163198 0.317574 0.792821 0.317125 0.658835 Answer: If you'd like to use a more "pythonic" interface to VTK, [consider using `mayavi`/`tvtk`/`mlab`](http://docs.enthought.com/mayavi/mayavi/mlab.html). (Either way, VTK is an excellent choice for this!) `tvtk` is a slightly more pythonic, low-level, python binding to VTK with a handful of really nice features (e.g. transparent usage of numpy arrays). 
Mayavi and mlab give a more high-level interface to VTK. The snippet of the data file that you showed is invalid as-is, but if we use a similar one: # vtk DataFile Version 2.0 Simple VTK file example ASCII DATASET POLYDATA POINTS 9 float 3.0 0.0 0.0 1.0 1.0 0.0 0.0 3.0 0.0 3.0 0.0 1.0 1.0 1.0 1.0 0.0 3.0 1.0 3.0 2.0 2.0 2.0 2.0 2.0 2.0 3.0 2.0 TRIANGLE_STRIPS 2 14 6 0 3 1 4 2 5 6 3 6 4 7 5 8 POINT_DATA 9 SCALARS nodal float LOOKUP_TABLE default 0.0 0.1 0.0 0.3 0.6 0.3 0.8 1.0 0.8 Rendering contours on the surface can be as simple as: from mayavi import mlab source = mlab.pipeline.open('test.vtk') lines = mlab.pipeline.contour_surface(source) mlab.show() ![enter image description here](http://i.stack.imgur.com/H0l6j.png) Or we can get a bit fancier: from mayavi import mlab # Make a figure with a black background fig = mlab.figure(bgcolor=(0,0,0)) # Also see methods like: fig.scene.z_plus_view(), etc fig.scene.camera.azimuth(215) source = mlab.pipeline.open('test.vtk') # Show the surface, colored by the scalars surf = mlab.pipeline.surface(source) # Draw contours of the scalars on the surface lines = mlab.pipeline.contour_surface(source) mlab.show() ![enter image description here](http://i.stack.imgur.com/FGBhy.png)
python: can't work out why len function() is behaving not as expected Question: i have been working on a python tutorial and have come across a problem which i simply cannot work out. google has not turned up anything specific and after a few hours away and much trial and error i still cannot work it out. anyway, the below code is a simplified version of the tutorial. it works fine and prints out that my file is 17 bytes long: from sys import argv from os.path import exists script, file1 = argv file_open = open(file1) file_read = file_open.read() print "the file is %s bytes long" % len(file_read) then the tutorial asks to merge lines 6 and 7 into a single line. if i do it like this it works: from sys import argv from os.path import exists script, file1 = argv file_read = open(file1).read() print "the file is %s bytes long" % len(file_read) but, if i do it like like this then i get an error message which says `TypeError: object of type 'file' has no len()`: from sys import argv from os.path import exists script, file1 = argv file_read = open(file1, "r+") print "the file is %s bytes long" % len(file_read) my problem is i cannot work out why that error message is occurring when i am adding the "r+" to make sure the open file is read. ( although, is it true that read is default anyway so maybe even adding the r+ is unnecessary ) any help would be much appreciated. many thanks :) Answer: I think you forgot the `.read()` in: file_read = open(file1, "r+") so `file_read` is a file object. Try with: file_read = open(file1, "r+").read() and it will return a string as expected.
create an echo server Question: I am new to python and trying to code. I want to create simple echo server _ie whatever I input to the client will simply be echo back by the server_ and if the client user press enter without writing anything then the server will disconnects. It may be very simple but I lacks logic here, any help?? here is my code #!/usr/bin/env python import socket import sys s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #create socket s.bind(("0.0.0.0",12345)) #binding port s.listen(1) #listening for only one client print "I am listening..." conn,addr=s.accept() #accept the connection conn.send("\nEnter your name.\n") data = conn.recv(2048) print "Client name is : ",data conn.send("Now each word you typed will be echo back to you.\n") while len(data)>0: data=conn.recv(2048) if data == '': sys.exit() else: conn.send(data) Answer: The result will contain a newline, but you can simply call [str.strip](http://docs.python.org/2/library/stdtypes.html#str.strip) to filter that out: conn.send("\nEnter your name.\n") data = conn.recv(2048) if not data.strip(): print('Please enter a name') return 1 print("Client name is : ", data)
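Pulling that together, a minimal version of the echo loop might look like this (a sketch of one way to do it, in the same Python 2 style as the question): echo every non-empty line back, and close the connection when the client sends only a newline:

    while True:
        data = conn.recv(2048)
        if not data or not data.strip():   # bare Enter (or client disconnect)
            conn.send("Bye.\n")
            conn.close()
            break
        conn.send(data)                    # echo the line back unchanged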
Can't bind with port 21 on Python Question: I'm trying to build a simple ftp server with python, but I get an error: "An attempt was made to access a socket in a way forbidden by its access permissions" As I understand, it's because of the port number, but what should I do? Here is the code: import socket HOST = '' PORT = 21 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((HOST, PORT)) s.listen(1) conn, addr = s.accept() conn.send("220") data = conn.recv(1024) print data conn.send("331 Password required for", data[5:]) conn.close() Please help.. Answer: This can be a problem from the firewall or the antivirus. Try to disable them for a quick test. If it works, start them again, and setup some exceptions for your program. You may also have another program currently using the port (Apache, IIS, or else). Only one program can access one port.
sending QTreeWidgetItem to a function with python Question: I'm trying to make a program using Python, Komodo and QT4. I'm trying to send a QTreeWidgetItem to a function after it was selected by the user with the mouse. All I was able to do is to move the position of the X and Y of the selected point by the mouse. Can anybody tell me how to send an QTreeWidgetItem to a function? This is my code: from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(_fromUtf8("MainWindow")) MainWindow.resize(800, 600) self.centralwidget = QtGui.QWidget(MainWindow) self.centralwidget.setObjectName(_fromUtf8("centralwidget")) self.treeWidget = QtGui.QTreeWidget(self.centralwidget) self.treeWidget.setGeometry(QtCore.QRect(155, 50, 481, 361)) self.treeWidget.setObjectName(_fromUtf8("treeWidget")) MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtGui.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 25)) self.menubar.setObjectName(_fromUtf8("menubar")) MainWindow.setMenuBar(self.menubar) self.statusbar = QtGui.QStatusBar(MainWindow) self.statusbar.setObjectName(_fromUtf8("statusbar")) MainWindow.setStatusBar(self.statusbar) self.create_popup_menu() self.treeWidget.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) self.treeWidget.customContextMenuRequested.connect(self.on_context_menu) string="default,default:cluster1,default:cluster1:clusterA,default:cluster2,default:cluster2:clusterA" self.retranslateUi(MainWindow) self.buildingTree(string) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None)) self.treeWidget.headerItem().setText(0, _translate("MainWindow", "Assignment1", None)) __sortingEnabled = self.treeWidget.isSortingEnabled() def buildingTree(self, string): arrTree = [] arrPath = [] arrString = [] arrString = string.split(",") root = QtGui.QTreeWidgetItem(self.treeWidget) root.setText(0,arrString[0]) arrTree.append(root) arrString.pop(0) for path in arrString: arrPath = path.split(":") nameOfFather = arrPath[len(arrPath)-2] arrTree.reverse() for node in arrTree: nameOfNode = node.text(0) if nameOfNode == nameOfFather: sonNode = QtGui.QTreeWidgetItem(node) sonNode.setText(0,arrPath[len(arrPath)-1]) arrTree.insert(0,sonNode) arrTree.reverse() break self.treeWidget.setSortingEnabled(False) def new_cluster(self): print "New Cluster" def rename_cluster(self): print "Rename cluster" def delete_cluster(self): print "Delete cluster" def create_popup_menu(self, parent=None): self.popup_menu = QtGui.QMenu(parent) self.popup_menu.addAction("New", self.new_cluster) self.popup_menu.addAction("Rename", self.rename_cluster) self.popup_menu.addSeparator() self.popup_menu.addAction("Delete", self.delete_cluster) def on_context_menu(self, pos): print "open menu" position = QtGui.QTreeWidgetItem(self.treeWidget) print position position = pos print position print "%%%%%%%%%%%%%%%" """ node = self.treeWidget.itemAt(position) print node node = self.treeWidget.setCurrentItem(node) print node.text(0) self.popup_menu.exec_(self.treeWidget.mapToGlobal(pos)) 
global_pos = self.mapToGlobal(pos) t = self.itemAt(pos) self.setCurrentItem(t) """ if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) MainWindow = QtGui.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) Answer: Your code is very unclear, because you are editing the file generated by `pyuic`, rather than importing it into your main application. I have re-structured your code, and used an [event filter](https://qt- project.org/doc/qt-4.8/qobject.html#installEventFilter) to handle the context- menu, which helps simplify the code a little. I have also used `lambda` functions to handle the menu actions. The code below assumes the file from `pyuic` is saved as `mainwindow_ui.py` (which you will obviously need to re- generate). from PyQt4 import QtCore, QtGui from mainwindow_ui import Ui_MainWindow class MainWindow(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): super(MainWindow, self).__init__() self.setupUi(self) self.buildingTree( "default,default:cluster1,default:cluster1:clusterA," "default:cluster2,default:cluster2:clusterA" ) self.treeWidget.viewport().installEventFilter(self) def eventFilter(self, target, event): if (event.type() == QtCore.QEvent.ContextMenu and target is self.treeWidget.viewport()): item = self.treeWidget.itemAt(event.pos()) if item is not None: menu = QtGui.QMenu() menu.addAction( "New", lambda: self.new_cluster(item)) menu.addAction( "Rename", lambda: self.rename_cluster(item)) menu.addSeparator() menu.addAction( "Delete", lambda: self.delete_cluster(item)) menu.exec_(event.globalPos()) return True return super(MainWindow, self).eventFilter(target, event) def new_cluster(self, item): print "New Cluster", item.text(0) def rename_cluster(self, item): print "Rename cluster", item.text(0) def delete_cluster(self, item): print "Delete cluster", item.text(0) def buildingTree(self, string): arrTree = [] arrPath = [] arrString = [] arrString = string.split(",") root = QtGui.QTreeWidgetItem(self.treeWidget) root.setText(0,arrString[0]) arrTree.append(root) arrString.pop(0) for path in arrString: arrPath = path.split(":") nameOfFather = arrPath[len(arrPath)-2] arrTree.reverse() for node in arrTree: nameOfNode = node.text(0) if nameOfNode == nameOfFather: sonNode = QtGui.QTreeWidgetItem(node) sonNode.setText(0,arrPath[len(arrPath)-1]) arrTree.insert(0,sonNode) arrTree.reverse() break self.treeWidget.setSortingEnabled(False) if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec_())
Pyocr doesn't recognize get_available_tools Question: I'm using <https://pypi.python.org/pypi/pyocr/0.1.2> for text recognition from images my script is as follows : from PIL import Image import sys import pyocr import pyocr.builders tools = pyocr.get_available_tools() if len(tools) == 0: print("No OCR tool found") sys.exit(1) tool = tools[0] print("Will use tool '%s'" % (tool.get_name())) langs = tool.get_available_languages() print("Available languages: %s" % ", ".join(langs)) lang = langs[0] print("Will use lang '%s'" % (lang)) txt = tool.image_to_string(Image.open('http://www.domain.com/fr/i/3518721/phone'), lang=lang, builder=pyocr.builders.TextBuilder()) word_boxes = tool.image_to_string(Image.open('http://www.domain.com/fr/i/3518721/phone'), lang=lang, builder=pyocr.builders.WordBoxBuilder()) line_and_word_boxes = tool.image_to_string( Image.open('http://www.domain.com/fr/i/3518721/phone'), lang=lang, builder=pyocr.builders.LineBoxBuilder()) when I run the script i have this error message : > Traceback (most recent call last): File "./test.py", line 6, in tools = > pyocr.get_available_tools() AttributeError: 'module' object has no attribute > 'get_available_tools' What seems to be the problem officers? Answer: Change your imports to look like this: from PIL import Image import sys from pyocr import pyocr from pyocr import builders Now `pyocr.get_available_tools()` will work because you have imported the module. But `pyocr.builders.WordBoxBuilder()` wont work because builders is imported on its own namespace. You'd need to change them to `builders.WordBoxBuilder()` ditto for TextBuilder and LineBoxBuilder.
How to "un-export" something in Python Question: Following a django tutorial, I entered these two lines in my terminal:

    export PYTHONPATH=$PYTHONPATH:/var/www/djangoapp:/var/www/djangoapp/app
    export DJANGO_SETTINGS_MODULE=app.settings.settings

I didn't know what exactly I was doing. The problem is that when I want to execute the **manage.py** file in a Django app, I get the following output: ImportError: Could not import settings 'app.settings.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named settings.settings Since this error started after adding those lines, I want to "rollback" those commands. How should I do that? Thanks in advance! Answer: This is not a python question. It's a shell question, probably bash or perhaps ksh or zsh. Try "unset variablename", like "unset PYTHONPATH". To undo both commands, also run "unset DJANGO_SETTINGS_MODULE". Note that `export` only affects the current shell session (and processes started from it), so simply opening a new terminal also gives you a clean environment.
iTunes win32com Python - AddTrack not working Question: I've been using the following code to try and create a new playlist in iTunes and a song from the main library - its example code i've found but i keep getting the following error when it runs. I've had a look through the iTunes COM interface documentation and it seems that AddTrack is only available under IITLibraryPlaylist but all of the example code Ive found is as below. Can anyone help> Error: AttributeError: win32com.ge_py.iTunes 1.13 Type Library.IITPlaylist instance at 0x34035192 object has no attribute 'AddTrack' Python Code: import win32com.client itunes = win32com.client.gencache.EnsureDispatch ("iTunes.Application") mainLibrary = itunes.LibraryPlaylist tracks = mainLibrary.Tracks playlist = itunes.CreatePlaylist("Sonic Jams") song = tracks.ItemByName('Teen Age Riot') playlist.AddTrack(song) Answer: i managed to get it using this code if anyone else needs it. playlist = win32com.client.CastTo(itunes.CreatePlaylist("New List"), 'IITLibraryPlaylist') song = tracks.ItemByName('Silver Rocket') playlist.AddTrack(song)
Searching in csv files Question: How do I search between the first two commas in a csv file? E.G: CSV FILE: name, surname, age , gender How do I only search for the first two, name and surname, with the user's input? This is what I have. I am looking for the user to search only by a name or surname:

    searchfile = open("python.csv", "r")
    a=input()
    for line in file:
        if a in line:
            print(line)

When I do this the user can enter anything that is in the file and it will print the line. I only want to search the first two commas, which are name and surname. If the input is not there then reject. Answer: You can use the [csv module](http://docs.python.org/2/library/csv.html): it contains an object called a **reader**, which allows you to split each line of your csv file by defining a **delimiter**:

    import csv
    searchfile = open('python.csv', 'rb')
    reader = csv.reader(searchfile, delimiter = ',')
    a = input()
    for row in reader:
        if a in row[0] or a in row[1]:
            print row

Note that if you're looking for a string in the csv file, you should use raw_input() instead of input(): in Python 2, input() evaluates what you type as a Python expression, whereas raw_input() returns it as a plain string.
How to find all ways from up to down in list use python Question: Python v.3.2.3: Need to find all path in this `list[]`, but go down accept only `(↓, ↓+right)` . In finish need create list of `list(all paths)` from `up([0][0])` to `down([6][x])`. Example(list): [[30], [27, 84], [25, 33, 11], [31, 54, 79, 95], [98, 27, 61, 90, 52], [12, 72, 29, 64, 27, 81], [90, 23, 24, 73, 69, 63, 47]] Example `list([all paths][.][.][.][.][.][.][.])`: [[30, 27, 25, 31, 98, 12, 90], [30, 27, 25, 31, 98, 12, 23].............] Code: from random import randint col_str=randint(6,10) '''create pyramid 2d array''' def newpyramid(): masiv=[] index = 0 for i in range(0,col_str): masiv.append([]) index+=1 #print(index) for j in range(0,index): masiv[i].append(randint(10,99)) return masiv b=newpyramid() print(b) '''function that find nodes (↓, ↓+right) ''' def get_list_subnodes(depth, width): try: pyramid[depth+1] except IndexError: return False else: subnodes = [] subnodes1 = [] subnodes2 = [] subnodes1.append(depth + 1) subnodes1.append(width) subnodes2.append(depth + 1) subnodes2.append(width + 1) subnodes.append(subnodes1) subnodes.append(subnodes2) return subnodes '''function go by 2 nodes from get_list_subnodes''' def go_pyramid(depth, width, way): way1.append(pyramid[depth][width]) way2.append(pyramid[depth][width]) #print(depth) list_subnodes = get_list_subnodes(depth, width) print(list_subnodes) if list_subnodes==False: ways.append(way1) ways.append(way2) return go_pyramid(list_subnodes[1][0], list_subnodes[1][1], way1) go_pyramid(list_subnodes[0][0], list_subnodes[0][1], way2) ways=[] go_pyramid(0, 0, way1) print(ways) Answer: What you are trying to achieve is called Cartesian product of a sequence of lists. There is already a library support in python [itertools.product](http://docs.python.org/2/library/itertools.html#itertools.product). lst=[[30], [27, 84], [25, 33, 11], [31, 54, 79, 95], [98, 27, 61, 90, 52], [12, 72, 29, 64, 27, 81], [90, 23, 24, 73, 69, 63, 47]] from itertools import product result = map(list, product(*lst))
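Note that `itertools.product` enumerates every combination of one element per row, which ignores the (↓, ↓+right) restriction: from position `[i][j]` you may only move to `[i+1][j]` or `[i+1][j+1]`. A small recursive sketch that follows only those two moves (the function name is made up):

    def all_paths(pyramid, row=0, col=0):
        # last row: the path ends here
        if row == len(pyramid) - 1:
            return [[pyramid[row][col]]]
        paths = []
        for next_col in (col, col + 1):          # straight down, or down+right
            for tail in all_paths(pyramid, row + 1, next_col):
                paths.append([pyramid[row][col]] + tail)
        return paths

    example = [[30], [27, 84], [25, 33, 11], [31, 54, 79, 95],
               [98, 27, 61, 90, 52], [12, 72, 29, 64, 27, 81],
               [90, 23, 24, 73, 69, 63, 47]]
    ways = all_paths(example)
    print(len(ways))   # 64 paths for 7 rows (2**6)
    print(ways[0])     # [30, 27, 25, 31, 98, 12, 90]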
Encoding issue for Python tool Unidecode on CL Question: I need to convert unicode files to ascii. In case, a letter doesn't exist in ascii, it should be converted to it's closest ascii representation. I'm using the Unidecode tool for it (<https://pypi.python.org/pypi/Unidecode>). It works fine when I use it in the Python interpreter on the CL (thus, by invoking `python` and then importing the libraries and then printing the decoded word like this: `print unidecode(u'äèß')`) Unfortunately, when I try to use this tool directly on the command line (thus, by doing something like `python -c "from unidecode import *; print unidecode(u'äèß')"`, it only prints gibberish (`A$?A"A` to be exact, even though it should've printed (and did in the interpreter) `aess`). This is annoying and I don't know how to solve that issue. I thought it might be due to encoding errors with my Terminal, not being set correctly to utf-8 or something. However, `locale` in my Terminal printed me the following output: > LANG="de_DE.UTF-8" > > LC_COLLATE="de_DE.UTF-8" > > LC_CTYPE="de_DE.UTF-8" > > LC_MESSAGES="de_DE.UTF-8" > > LC_MONETARY="de_DE.UTF-8" > > LC_NUMERIC="de_DE.UTF-8" > > LC_TIME="de_DE.UTF-8" > > LC_ALL="de_DE.UTF-8" Or, might it be due to Python that has problems with StdIn encoding on the command line? It gave me correct output in the python interpreter, but when invoking `python -c` not. Do you guys have an idea? Answer: If you try writing this in a file: #!/bin/python from unidecode import * print unidecode(u'äèß') [Wani@Linux tmp]$ python tmp.py File "tmp.py", line 1 SyntaxError: Non-ASCII character '\xc3' in file tmp.py on line 1, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details [Wani@Linux tmp]$ To fix this, you do: #!/bin/python #coding: utf8 from unidecode import *; print unidecode(u'äèß') [Wani@Linux tmp]$ python tmp.py aeess [Wani@Linux tmp]$ So, you need to call from command-line like this: [Wani@Linux tmp]$ python -c "#coding: utf8 from unidecode import *; print unidecode(u'äèß')" aeess [Wani@Linux tmp]$ python -c "$(echo -e "#coding: utf8\nfrom unidecode import *; print unidecode(u'äèß')")" aeess [Wani@Linux tmp] Further Reading: [Correct way to define Python source code encoding](http://stackoverflow.com/questions/728891/correct-way-to-define- python-source-code-encoding)
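An alternative that avoids putting non-ASCII characters into the `-c` source at all (a sketch, assuming the terminal really is UTF-8 as the `locale` output suggests) is to pipe the text in on stdin and decode it there:

    echo "äèß" | python -c "import sys; from unidecode import unidecode; print unidecode(sys.stdin.read().decode('utf-8'))"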
How to get the output from .jar execution in python codes? Question: I'm writing a Python module that sends SQL to a DBMS and retrieves data. I'm trying to use jdbc jar files instead of native DB drivers. I'm wondering how to execute a jar file from Python and capture its output. I'd also like to know how to pass the SQL string to the jar as an argument. Here is the simplified code. Any help is greatly appreciated. [ java code ]

    public class GetDBResults {
        public static void main(String[] args) {
        // return sql results
        for(int i=0; i<=100; i++){
            // Is this the proper way to generate the output?
            System.out.println(i+"/t"+i*100+1);
            }
        }
    }

[ python code ]

    import subprocess
    subprocess.call(['java', '-jar', './GET_DB_DATA.jar'])
    # how to get results from jar execution?
    # how to pass SQL string to jar execution?

Answer: You can read the output through a pipe:

    >>> from subprocess import Popen, PIPE, STDOUT
    >>> p = Popen(['java', '-jar', './GET_DB_DATA.jar'], stdout=PIPE, stderr=STDOUT)
    >>> for line in p.stdout: print line

As for passing a string to stdin, you can achieve it this way:

    >>> p = Popen(['cat'], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    >>> stdout, stderr = p.communicate(input='passed_string')
    >>> print stdout
    passed_string
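To pass the SQL string as a command-line argument instead of via stdin, it can simply be appended to the argument list (a sketch; the query text is a made-up placeholder and the jar would read it from `args[0]` on the Java side):

    from subprocess import Popen, PIPE, STDOUT

    sql = "SELECT * FROM some_table"            # hypothetical query
    p = Popen(['java', '-jar', './GET_DB_DATA.jar', sql],
              stdout=PIPE, stderr=STDOUT)
    output, _ = p.communicate()                 # collect everything the jar printed
    rows = [line.split('\t') for line in output.splitlines()]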
Inter Document Similarity: Cosine distance Question: **Updated Question:** According to **"perimosocordiae"** s solution I found out the cosine similarity between 2 documents. I have tried to use the solution to find out similarity between 2 Files. But again an error arises in test(), which is Traceback (most recent call last): File "3.py", line 103, in <module> main() File "3.py", line 99, in main test(tf_idf_matrix,count,nltkutil.cosine_distance) File "3.py", line 46, in test doc2 = np.asarray(tdMatrix[j-1].todense()).reshape(-1) File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 281, in __getitem__ return self[key,:] #[i] or [1:2] File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 233, in __getitem__ return self._get_row_slice(row, col) #[i,1:2] File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 320, in _get_row_slice raise IndexError('index (%d) out of range' % i ) IndexError: index (4) out of range I am using one file as the train set and the other file as test set and my objective is to use the `test()` function to output the cosine similarity between 2 files using tf-idf. My code is the following: #! /usr/bin/python -tt from __future__ import division from operator import itemgetter from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer import nltk.cluster.util as nltkutil import numpy as np import re def preprocess(fnin, fnout): fin = open(fnin, 'rb') print fin fout = open(fnout, 'wb') buf = [] for line in fin: line = line.strip() if line.find("-- Document Separator --") > -1: if len(buf) > 0: body = re.sub("\s+", " ", " ".join(buf)) fout.write("%s\n" % (body)) rest = map(lambda x: x.strip(), line.split(": ")) buf = [] else: buf.append(line) fin.close() fout.close() def test(tdMatrix,count,fsim): sims=[] sims = np.zeros((len(tdMatrix.todense()), count)) l=len(tdMatrix.todense()) for i in range(0, l): for j in range(0, count): doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1) doc2 = np.asarray(tdMatrix[j].todense()).reshape(-1) sims[i, j] = fsim(doc1, doc2) print sims def main(): file_set=["corpusA.txt","corpusB.txt"] train=[] test1=[] for file1 in file_set: s="x"+file1 preprocess(file1,s) count_vectorizer = CountVectorizer() m=open("xcorpusA.txt",'r') for i in m: train.append(i.strip()) #print doc #print train count_vectorizer.fit_transform(train) #print "Vocabulary:", count_vectorizer.vocabulary # Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3} m1=open("xcorpusB.txt",'r') for i in m1: test1.append(i.strip()) freq_term_matrix = count_vectorizer.transform(test1) #print freq_term_matrix.todense() tfidf = TfidfTransformer(norm="l2") tfidf.fit(freq_term_matrix) #print "IDF:", tfidf.idf_ tf_idf_matrix = tfidf.transform(freq_term_matrix) print (tf_idf_matrix.toarray()) count=0 s="" for i in tf_idf_matrix.toarray(): for j in i: count+=1 break #print count #print type(tf_idf_matrix) print "Results with Cosine Distance Similarity Measure" test(tf_idf_matrix,count,nltkutil.cosine_distance) if __name__ == "__main__": main() I am looking for any advice from respective mentors. Answer: Your error is in this expression: tdMatrix[tdMatrix[i], :] Your `tdMatrix` is a 2x2 array of floating point numbers, and indexing itself is going to fail. Perhaps you meant: doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
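As a side note, the pairwise similarities can also be computed without any manual row indexing, which sidesteps the out-of-range problem entirely (a sketch using scikit-learn, assuming `tf_idf_matrix` is the sparse matrix built above):

    from sklearn.metrics.pairwise import cosine_similarity

    # sims[i, j] is the cosine similarity between row i and row j of the tf-idf matrix
    sims = cosine_similarity(tf_idf_matrix)
    print sims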
Django runserver error (sqlite2 & sqlite3) Question: I just installed Django and I'm following this tutorial: [Django tutorial](http://www.djangobook.com/en/2.0/chapter02.html.) When I type "python3.3 manage.py runserver" this happens: ninaolo@ninaolo-VirtualBox:~/Documents/Django-projekt/testprojekt$ python3.3 manage.py runserver Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/django/db/backends/sqlite3/base.py", line 30, in <module> from pysqlite2 import dbapi2 as Database ImportError: No module named 'pysqlite2' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/django/db/backends/sqlite3/base.py", line 32, in <module> from sqlite3 import dbapi2 as Database File "/usr/local/lib/python3.3/sqlite3/__init__.py", line 23, in <module> from sqlite3.dbapi2 import * File "/usr/local/lib/python3.3/sqlite3/dbapi2.py", line 26, in <module> from _sqlite3 import * ImportError: No module named '_sqlite3' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python3.3/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.3/site-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.3/site-packages/django/core/management/base.py", line 242, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python3.3/site-packages/django/core/management/base.py", line 280, in execute translation.activate('en-us') File "/usr/local/lib/python3.3/site-packages/django/utils/translation/__init__.py", line 130, in activate return _trans.activate(language) File "/usr/local/lib/python3.3/site-packages/django/utils/translation/trans_real.py", line 188, in activate _active.value = translation(language) File "/usr/local/lib/python3.3/site-packages/django/utils/translation/trans_real.py", line 177, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/usr/local/lib/python3.3/site-packages/django/utils/translation/trans_real.py", line 159, in _fetch app = import_module(appname) File "/usr/local/lib/python3.3/importlib/__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1532, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper File "<frozen importlib._bootstrap>", line 1022, in load_module File "<frozen importlib._bootstrap>", line 1003, in load_module File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper File "<frozen importlib._bootstrap>", line 868, in _load_module File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed File "/usr/local/lib/python3.3/site-packages/django/contrib/admin/__init__.py", line 6, in <module> from django.contrib.admin.sites import AdminSite, site File "/usr/local/lib/python3.3/site-packages/django/contrib/admin/sites.py", line 4, in <module> from django.contrib.admin.forms import AdminAuthenticationForm File "/usr/local/lib/python3.3/site-packages/django/contrib/admin/forms.py", line 6, in <module> from 
django.contrib.auth.forms import AuthenticationForm File "/usr/local/lib/python3.3/site-packages/django/contrib/auth/forms.py", line 17, in <module> from django.contrib.auth.models import User File "/usr/local/lib/python3.3/site-packages/django/contrib/auth/models.py", line 48, in <module> class Permission(models.Model): File "/usr/local/lib/python3.3/site-packages/django/db/models/base.py", line 96, in __new__ new_class.add_to_class('_meta', Options(meta, **kwargs)) File "/usr/local/lib/python3.3/site-packages/django/db/models/base.py", line 264, in add_to_class value.contribute_to_class(cls, name) File "/usr/local/lib/python3.3/site-packages/django/db/models/options.py", line 124, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "/usr/local/lib/python3.3/site-packages/django/db/__init__.py", line 34, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "/usr/local/lib/python3.3/site-packages/django/db/utils.py", line 198, in __getitem__ backend = load_backend(db['ENGINE']) File "/usr/local/lib/python3.3/site-packages/django/db/utils.py", line 113, in load_backend return import_module('%s.base' % backend_name) File "/usr/local/lib/python3.3/importlib/__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.3/site-packages/django/db/backends/sqlite3/base.py", line 35, in <module> raise ImproperlyConfigured("Error loading either pysqlite2 or sqlite3 modules (tried in that order): %s" % exc) django.core.exceptions.ImproperlyConfigured: Error loading either pysqlite2 or sqlite3 modules (tried in that order): No module named '_sqlite3' And if I type "python manage.py runserver" (I have both Python 2.7 and 3.3): ninaolo@ninaolo-VirtualBox:~/Documents/Django-projekt/testprojekt$ python2.7 manage.py runserver Traceback (most recent call last): File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named django.core.management Does anybody understand what's wrong here? Answer: Apparently you have neither `pysqlite2` nor `sqlite3` installed, but you've configured Django to use a SQLite database; one of those database drivers is required in order to use a SQLite database. Also, you have apparently only installed Django in your Python 3 environment. Package library directories are not shared between different versions of Python — each has its own, so packages you want to use with multiple versions of Python need to be installed separately for each version.
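A sketch of the usual fix for both tracebacks, assuming this Python 3.3 under `/usr/local` was built from source: install the SQLite development headers and rebuild Python so the `_sqlite3` module gets compiled, then install Django separately for Python 2.7 if you also want to use it there (each interpreter has its own site-packages):

    sudo apt-get install libsqlite3-dev
    # re-run ./configure && make && sudo make install in the Python 3.3 source tree,
    # then verify the module is importable:
    python3.3 -c "import sqlite3; print(sqlite3.sqlite_version)"
    # the second traceback only means Django is not installed for Python 2.7:
    python2.7 -c "import django; print(django.get_version())"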
wx.ProgressDialog + py2exe leads to application crash Question: This simple code runs very well : import wx app = wx.App(0) frame = wx.Frame(None) test = wx.ProgressDialog('Test', 'Test', maximum = 20, parent = frame, style = wx.PD_CAN_ABORT) app.MainLoop() However, when compiling/packing it into an executable with `py2exe`... from distutils.core import setup import py2exe setup(script_args = ['py2exe'], windows=[{'script':'progressdlgprobblem.py'}], options = {'py2exe': {'compressed':1,'bundle_files': 1}}, zipfile = None) ... then the `.exe` file crashes. **What could be the cause of this crash? Does`wx.ProgressDialog` require some specific additional elements in order to be used with `py2exe`?** * * * _Addendum 1_ : when I remove the `style = wx.PD_CAN_ABORT`, there is no more crash. How can the crash come from the `style` ? But then, the _styling_ is XP-style when launching from the `.exe` : ![enter image description here](http://i.stack.imgur.com/xXEo5.jpg) and different to the styling I get when launching from the `.py` (without py2exe) : ![enter image description here](http://i.stack.imgur.com/Rvpvq.jpg) * * * _Addendum 2_ : when I remove the `'bundle_files': 1`, no more crash. But I would like to keep this bundling into one file only ! **How can this bundling into a single .exe file be the cause of this crash ?** _Addendum 3_ : A big part of the problem is solved by using wx.Python 3.0.1.0b instead of 3.0.0.0 (more details soon). Answer: py2exe (0.6.9) is pretty outdated and doesn't handle newer Windows versions all that well without some additional changes. Specifically, it will by default include some Windows system DLLs which never should be bundled. To prevent this, try altering your setup script as follows: from distutils.core import setup import py2exe # Exclude system DLLs origIsSystemDLL = py2exe.build_exe.isSystemDLL def isSystemDLL(pathname): if os.path.basename(pathname).lower() in ("gdiplus.dll", "mfc90.dll"): return 0 if os.path.basename(pathname).lower() in ("powrprof.dll", ) or \ os.path.basename(pathname).lower().startswith("api-ms-win-"): return 1 return origIsSystemDLL(pathname) py2exe.build_exe.isSystemDLL = isSystemDLL # Add the MS VC9 CRT and common controls manifest as resource to the exe manifest = '''<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="0.0.0.0" processorArchitecture="x86" name="Enter program name here" type="win32" /> <description>Enter program description here</description> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false" /> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="x86" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> </assembly> ''' setup(script_args=['py2exe'], windows=[{'script': 'progressdlgprobblem.py', 'other_resources': [(24, 1, manifest)]}], options = {'py2exe': {'compressed': 1, 'dll_excludes': ['iertutil.dll', 'MPR.dll', 'msvcm90.dll', 'msvcp90.dll', 'msvcr90.dll', 'mswsock.dll', 'urlmon.dll', 'w9xpopen.exe'], 'bundle_files': 1}}, 
zipfile=None) For best interoperability, you'll also want to copy Microsoft.VC90.CRT.manifest, msvcm90.dll, msvcp90.dll, and msvcr90.dll from C:\Windows\winsxs\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91 (the directory name may be slightly different, but the correct version 9.0.21022.8 is important) to your exe's directory so users don't need to install the right version of the MS VC9 CRT themselves. Alternatively to all of this, you could try PyInstaller which automatically handles assembly dependencies and excludes system DLLs by default.
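For reference, the PyInstaller route mentioned at the end is essentially a one-liner (a sketch; `--onefile` produces a single self-contained executable, the analogue of `bundle_files: 1`):

    pip install pyinstaller
    pyinstaller --onefile progressdlgprobblem.py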
python numpy assigning by boolean indexing error "TypeError: array cannot be safely cast to required type" Question: In the last line of the following code I get an "TypeError: array cannot be safely cast to required type". Can you help? Let me explain the code a bit. `randin()` function helps me get an array with elements in the range specified by `lb` and `ub`. In other words, you give `randin()` function two one dimensional numpy arrays `lb` and `ub`, and it returns array `r`, where this inequality holds for all elements `i`: `lb[i] <= r[i] < ub[i]` In the one dimensional array`pos` some elements may be out of the range specified by `lb` and `ub`. I want only those elements to be randomly reproduced. And I want to use my already existing function `randin()` if possible. And of course I want to use fancy boolean indexing, not loops. Thx. import numpy as np def randin(lb, ub): """return random array with elements between the elements of lb and ub. ub is not included.""" r = ((ub - lb) * np.random.random(np.shape(lb)) + lb) return r ## inputs pos = np.array([1, -3, 5, -7, 9, -11]) ub = np.ones_like(pos) * 5 lb = np.zeros_like(pos) ## reproduce and assign out of range elements high = pos > ub low = pos < lb hl = high | low pos[hl] = randin(lb[hl], ub[hl]) Answer: When I initialize `pos` like `pos = np.array([1, -3, 5, -7, 9, -11], dtype = float)` the code runs without problem. So numpy was scared of assigning floats to integer elements. Not a bad idea.
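Two explicit ways to express the same fix (a sketch): either make `pos` a float array up front, or keep it as integers and cast the random values back to the array's dtype, if truncation is acceptable:

    # option 1: keep float precision
    pos = np.array([1, -3, 5, -7, 9, -11], dtype=float)

    # option 2: keep pos as integers and truncate the random floats before assigning
    pos[hl] = randin(lb[hl], ub[hl]).astype(pos.dtype)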
SVD - Matrix transformation Python Question: Trying to compute SVD in Python to find the most significant elements of a spectrum and created a matrix just containing the most significant parts. In python I have: u,s,v = linalg.svd(Pxx, full_matrices=True) This gives 3 matrices back; where "s" contains the magnitudes that corresponds to u, v. In order to construct a new matrix, containing all of the significant parts of the signal, I need to capture the highest values in "s" and match them with the columns in "u" and "v" and the resulting matrix should give me the most significant part of the data. The problem is I don't know how I would do this in Python, for example, how do I find the highest numbers in "s" and select the columns in "u" and "v" in order to create a new matrix? (I'm new to Python and numpy) so any help would be greatly appreciated Edit: import wave, struct, numpy as np, matplotlib.mlab as mlab, pylab as pl from scipy import linalg, mat, dot; def wavToArr(wavefile): w = wave.open(wavefile,"rb") p = w.getparams() s = w.readframes(p[3]) w.close() sd = np.fromstring(s, np.int16) return sd,p def wavToSpec(wavefile,log=False,norm=False): wavArr,wavParams = wavToArr(wavefile) print wavParams return mlab.specgram(wavArr, NFFT=256,Fs=wavParams[2],detrend=mlab.detrend_mean,window=mlab.window_hanning,noverlap=128,sides='onesided',scale_by_freq=True) wavArr,wavParams = wavToArr("wavBat1.wav") Pxx, freqs, bins = wavToSpec("wavBat1.wav") Pxx += 0.0001 U, s, Vh = linalg.svd(Pxx, full_matrices=True) assert np.allclose(Pxx, np.dot(U, np.dot(np.diag(s), Vh))) s[2:] = 0 new_a = np.dot(U, np.dot(np.diag(s), Vh)) print(new_a) Answer: `linalg.svd` returns `s` in descending order. So to select the `n` highest numbers in `s`, you'd simply form s[:n] If you set the smaller values of `s` to zero, s[n:] = 0 then matrix multiplication would take care of "selecting" the appropriate columns of U and V. For example, import numpy as np LA = np.linalg a = np.array([[1, 3, 4], [5, 6, 9], [1, 2, 3], [7, 6, 8]]) print(a) # [[1 3 4] # [5 6 9] # [1 2 3] # [7 6 8]] U, s, Vh = LA.svd(a, full_matrices=False) assert np.allclose(a, np.dot(U, np.dot(np.diag(s), Vh))) s[2:] = 0 new_a = np.dot(U, np.dot(np.diag(s), Vh)) print(new_a) # [[ 1.02206755 2.77276308 4.14651336] # [ 4.9803474 6.20236935 8.86952026] # [ 0.99786077 2.02202837 2.98579698] # [ 7.01104783 5.88623677 8.07335002]] * * * Given the [data here](http://pastebin.com/mBtasJLD), import numpy as np import scipy.linalg as SL import matplotlib.pyplot as plt Pxx = np.genfromtxt('mBtasJLD.txt') U, s, Vh = SL.svd(Pxx, full_matrices=False) assert np.allclose(Pxx, np.dot(U, np.dot(np.diag(s), Vh))) s[2:] = 0 new_a = np.dot(U, np.dot(np.diag(s), Vh)) print(new_a) plt.plot(new_a) plt.show() produces ![enter image description here](http://i.stack.imgur.com/Gxonm.png)
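Equivalently, instead of zeroing entries of `s`, the truncation can be written by explicitly selecting the first `n` columns of `U` and rows of `Vh` (a sketch; since `s` comes back sorted in descending order, these are the most significant components):

    n = 2  # number of singular values to keep
    U, s, Vh = np.linalg.svd(Pxx, full_matrices=False)
    # scale the first n columns of U by the n largest singular values,
    # then multiply by the first n rows of Vh
    approx = np.dot(U[:, :n] * s[:n], Vh[:n, :])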
How to return a character from standard input on OS X in Python? Question: For a Python project that I'm working on I need to tell the user to insert a character and return its value in ASCII code without having to press enter to commit the key. It must also read the input only if my program is the active application, otherwise it must do nothing (like `getch()` in Windows). The OS is OS X. Answer: Normally, data becomes available at stdin after a newline. So you need to put stdin into 'raw mode'. Here is a python program that will read key presses from a raw tty. It stops after 16 presses, because Ctrl-C is not processed either. import sys import termios import tty attr = termios.tcgetattr( sys.stdin ) tty.setraw( sys.stdin ) for i in range(16) : d = sys.stdin.read(1) print ord( d[0] ) print "\r", termios.tcsetattr( sys.stdin, termios.TCSADRAIN, attr )
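One caveat with the snippet above: if an exception interrupts the loop (and note that Ctrl-C arrives as byte 3 in raw mode rather than raising KeyboardInterrupt), the terminal is left in raw mode. A slightly more defensive sketch wraps the restore in `try`/`finally`:

    import sys
    import termios
    import tty

    attr = termios.tcgetattr(sys.stdin)
    try:
        tty.setraw(sys.stdin)
        while True:
            d = sys.stdin.read(1)
            if ord(d[0]) == 3:          # Ctrl-C in raw mode
                break
            print ord(d[0]), "\r"
    finally:
        # always restore the terminal settings
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, attr)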
python how do I determine screen size Question: I need to resize an image. The original is 1024x768. My laptop screen is set to 1366x768. When I go to view the image the bottom is always cut off. I'm guessing it's because the image is 1024x768 but the image size doesn't take into account the box/window the image sits in, so the bottom of the image gets cuts off as a result. What is the size, pixelwise, of the box/window and how do I determine the size of my screen, codewise, so I can reset the size of the image so the entire image will fit on the screen and none of it will get cut off. Or is there a way of having the image autoscale so it will fit the screen height resolution? I'm using PIL. I know I can in the end just new_image = old_image.resize(x, 768-box_height) I just need to know the box height. Answer: The most environment-agnostic way is likely to just ask `tkinter`: import tkinter #python 3 syntax root = tkinter.Tk() root.withdraw() width, height = root.winfo_screenwidth(), root.winfo_screenheight()
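Putting it together with PIL (a sketch; the filename is a made-up placeholder, and fitting to the full screen height rather than subtracting window chrome is an assumption), the image can be scaled so its height matches the screen:

    import tkinter
    from PIL import Image

    root = tkinter.Tk()
    root.withdraw()
    screen_h = root.winfo_screenheight()

    old_image = Image.open('photo.jpg')          # hypothetical file
    w, h = old_image.size
    scale = float(screen_h) / h                  # fit the screen height
    new_image = old_image.resize((int(w * scale), screen_h), Image.ANTIALIAS)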
Python Class Fields Question: I am new to Python having come from mainly Java programming. I am currently pondering over how classes in Python are instantiated. I understand that `__init__()` is like the constructor in Java. However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java? Another thing that makes the transition from Java to Python slightly difficult is that in Java you have to define all the class fields with the type and sometimes an initial value. In Python all of this just seems to disappear and developers can just define new class variables on the fly. For example I have come across a program like so:

    class A(Command.UICommand):
        FIELDS = [
            Field( 'runTimeStepSummary', BOOL_TYPE)
        ]
        def __init__(self, runTimeStepSummary=False):
            self.runTimeStepSummary = runTimeStepSummary

        """Other methods"""

        def execute(self, cont, result):
            self.timeStepSummaries = {}
            """ other code"""

The thing that confuses (and slightly irritates) me is that this A class does not have a field called timeStepSummaries, yet how can a developer in the middle of a method just define a new field? Or is my understanding incorrect? So to be clear, my question is: in Python can we dynamically define new fields on a class during runtime like in this example, or is this timeStepSummaries variable something like a Java private variable? EDIT: I am using python 2.7 Answer: > I understand that `__init__()` is like the constructor in Java. To be more precise, in Python `__new__` is the constructor method, `__init__` is the initializer. When you do `SomeClass('foo', bar='baz')`, the `type.__call__` method basically does:

    def __call__(cls, *args, **kwargs):
        instance = cls.__new__(cls, *args, **kwargs)
        instance.__init__(*args, **kwargs)
        return instance

Generally, most classes will define an `__init__` if necessary, while `__new__` is more commonly used for immutable objects. > However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java? I'm not sure about old-style classes, but this is the case for new-style ones:

    >>> object.__init__
    <slot wrapper '__init__' of 'object' objects>

If no explicit `__init__` is defined, the default will be called. > So to be clear, my question is: in Python can we dynamically define new fields on a class during runtime like in this example Yes.

    >>> class A(object):
    ...     def __init__(self):
    ...         self.one_attribute = 'one'
    ...     def add_attr(self):
    ...         self.new_attribute = 'new'
    ...
    >>> a = A()
    >>> a.one_attribute
    'one'
    >>> a.new_attribute
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'A' object has no attribute 'new_attribute'
    >>> a.add_attr()
    >>> a.new_attribute
    'new'

Attributes can be added to an instance at any time:

    >>> a.third_attribute = 'three'
    >>> a.third_attribute
    'three'

However, it's possible to restrict the instance attributes that can be added through the class attribute `__slots__`:

    >>> class B(object):
    ...     __slots__ = ['only_one_attribute']
    ...     def __init__(self):
    ...         self.only_one_attribute = 'one'
    ...     def add_attr(self):
    ...         self.another_attribute = 'two'
    ...
    >>> b = B()
    >>> b.add_attr()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 6, in add_attr
    AttributeError: 'B' object has no attribute 'another_attribute'

(It's probably important to note that `__slots__` is primarily intended as a _memory optimization_ - by not requiring an object to have a dictionary for storing attributes - rather than as a form of run-time modification prevention.)
How to use random.random() in python Question: Hello, I am working on a problem set and everything has been going well until I got to random.random(). The instructions are to use random.random() to print 10 float numbers from 21.0 to 30.0 inclusive; what I am stuck on is printing the 10 float numbers. An example of my attempt is below.

    e = random.random() * 21.0 <= 30.0
    for x in range(2,10):
        print (e)

However, what it returns is just "True" "True". If I could get some advice on solving this it would be much appreciated, thank you Answer:

    >>> import random
    >>> for i in range(10):
    ...     print random.random() * 9 + 21
    ...
    22.3034067631
    26.7803685261
    26.8129361915
    25.0246844772
    23.7558474791
    24.9746222797
    21.165252633
    26.6308193853
    29.6625880762
    22.3434394977
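If the assignment allows it, `random.uniform()` expresses the same thing more directly (a sketch; `uniform(a, b)` returns a float N with `a <= N <= b`, so the inclusive range is covered):

    import random

    for i in range(10):
        print random.uniform(21.0, 30.0)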
Natural Join Implementation Python Question: I am working on implementing natural join in python. The first two lines show the tables' attributes and the next two lines each table's tuples (rows). Expected Output:

    [['A', 1, 'A', 'a', 'A'], ['A', 1, 'A', 'a', 'Y'], ['A', 1, 'Y', 'a', 'A'], ['A', 1, 'Y', 'a', 'Y'], ['S', 2, 'B', 'b', 'S']]

And what I got:

    [['A', 1, 'A', 'a', 'A', 'Y'], ['A', 1, 'A', 'a', 'A', 'Y']]

I have looked through the code and everything seems to be right, I would appreciate any help.

    t1atts = ('A', 'B', 'C', 'D')
    t2atts = ('B', 'D', 'E')
    t1tuples = [['A', 1, 'A', 'a'],
                ['B', 2, 'Y', 'a'],
                ['Y', 4, 'B', 'b'],
                ['A', 1, 'Y', 'a'],
                ['S', 2, 'B', 'b']]
    t2tuples = [[1, 'a', 'A'],
                [3, 'a', 'B'],
                [1, 'a', 'Y'],
                [2, 'b', 'S'],
                [3, 'b', 'E']]

    def findindices(t1atts, t2atts):
        t1index=[]
        t2index=[]
        for index, att in enumerate(t1atts):
            for index2, att2 in enumerate(t2atts):
                if att == att2:
                    t1index.append(index)
                    t2index.append(index2)
        return t1index, t2index

    def main():
        tpl=0; tpl2=0; i=0; j=0; count=0; result=[]
        t1index, t2index = findindices(t1atts, t2atts)
        for tpl in t1tuples:
            while tpl2 in range(len(t2tuples)):
                i=0; j=0
                while (i in range(len(t1index))) and (j in range(len(t2index))):
                    if tpl[t1index[i]] != t2tuples[tpl2][t2index[j]]:
                        i=len(t1index)
                        j=len(t1index)
                    else:
                        count+=1
                        i+=1
                        j+=1
                if count == len(t1index):
                    extravals = [val for index, val in enumerate(t2tuples[tpl2]) if index not in t2index]
                    temp = tpl
                    tpl += extravals
                    result.append(tpl)
                    tpl = temp
                    count=0
                tpl2+=1
        print result

Answer: OK, here is the solution; please verify and let me know if it works for you. I changed the naming a little to make it clearer to myself:

    #!/usr/bin/python

    table1 = ('A', 'B', 'C', 'D')
    table2 = ('B', 'D', 'E')
    row1 = [['A', 1, 'A', 'a'],
            ['B', 2, 'Y', 'a'],
            ['Y', 4, 'B', 'b'],
            ['A', 1, 'Y', 'a'],
            ['S', 2, 'B', 'b']]
    row2 = [[1, 'a', 'A'],
            [3, 'a', 'B'],
            [1, 'a', 'Y'],
            [2, 'b', 'S'],
            [3, 'b', 'E']]

    def findindices(table1, table2):
        inter = set(table1).intersection(set(table2))
        tup_index1 = [table1.index(x) for x in inter]
        tup_index2 = [table2.index(x) for x in inter]
        return tup_index1, tup_index2

    def main():
        final_lol = list()
        tup_index1, tup_index2 = findindices(table1, table2)
        merge_tup = zip(tup_index1, tup_index2)
        for tup1 in row1:
            for tup2 in row2:
                for m in merge_tup:
                    if tup1[m[0]] != tup2[m[1]]:
                        break
                else:
                    ls = []
                    ls.extend(tup1)
                    ls.append(tup2[-1])
                    final_lol.append(ls)
        return final_lol

    if __name__ == '__main__':
        import pprint
        pprint.pprint(main())

Output:

    [['A', 1, 'A', 'a', 'A'],
     ['A', 1, 'A', 'a', 'Y'],
     ['A', 1, 'Y', 'a', 'A'],
     ['A', 1, 'Y', 'a', 'Y'],
     ['S', 2, 'B', 'b', 'S']]
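Note that `ls.append(tup2[-1])` works here only because `E` happens to be the single non-join attribute of the second table and sits last. A slightly more general sketch (the helper name `join_rows` is made up) keeps every attribute of `tup2` that is not one of the join columns:

    def join_rows(tup1, tup2, merge_tup):
        """Return the joined row, keeping every non-join attribute of tup2."""
        join_cols_2 = set(m[1] for m in merge_tup)   # positions of shared attributes in table2
        return list(tup1) + [v for idx, v in enumerate(tup2) if idx not in join_cols_2]

With that, the `else` branch above becomes `final_lol.append(join_rows(tup1, tup2, merge_tup))`.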
Passing arguments to python unittest Question: I have a functional test 'y1.py' which I am trying to pass arguments to, from within a python/django function. Inside the calling function I have: import unittest, sys import ft1.y1 ft1.y1.testVars = [1, 2, 3, "foo"] unittest.main(module=ft1.y1, argv=sys.argv[:1], exit=False) based on h.ttp://stackoverflow.com/questions/2812132/how-to-pass-variables- using-unittest-suite y1.py: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import Select from selenium.common.exceptions import NoSuchElementException import unittest, time, re class Y1(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.implicitly_wait(30) self.base_url = "https://www.yahoo.com/" self.verificationErrors = [] self.accept_next_alert = True print('tvars' +self.testVars ) .................... if __name__ == "__main__": unittest.main() I'm getting : Traceback (most recent call last): File "F:\envs\r1\driver1\ft1\y1.py", line 17, in setUp print('tvars '+ y1.testVars ) AttributeError: type object 'y1' has no attribute 'testVars' ---------------------------------------------------------------------- Ran 1 test in 2.330s FAILED (errors=1) [02/Feb/2014 23:59:42] "GET /runtest/ HTTP/1.1" 200 7 How can I fix this? EDIT: As suggested I changed the line to : print('tvars' + sys.module[__name__].testVars ) I'm getting: E ====================================================================== ERROR: test_y1 (ft1.y1.y1) ---------------------------------------------------------------------- Traceback (most recent call last): File "F:\envs\r1\driver1\ft1\y1.py", line 17, in setUp print('tvars' + sys.module[__name__].testVars ) AttributeError: 'module' object has no attribute 'module' ---------------------------------------------------------------------- Ran 1 test in 2.981s FAILED (errors=1) Answer: If you are trying to make a reference to the module rather than the class (your `testVars` are on the **module** level) you should probably use `sys.modules[__name__].testVars` which makes: print('tvars' +self.testVars ) Become: print('tvars' + sys.modules[__name__].testVars )
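One more detail worth noting: `testVars` is a list, so concatenating it to a string with `+` will raise a `TypeError` even once the module lookup is right. String formatting avoids that (a tiny self-contained sketch of the pattern used inside `setUp`):

    import sys

    testVars = [1, 2, 3, "foo"]    # set from outside, as in the calling function
    print('tvars %s' % (sys.modules[__name__].testVars,))   # -> tvars [1, 2, 3, 'foo']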
TypeError when passing bytearray to C++ extension Question: Python code: image = urllib2.urlopen('http://localhost/test.png').read() bytes = bytearray(image) print [myext.do_stuff(bytes, mode=1)] C++ code: static PyObject * do_stuff(PyObject *self, PyObject *args, PyObject *kwargs) { PyByteArrayObject *imgdata; char *image; int mode; char *keywords[] = { "image", "mode", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords, &imgdata, &mode)) return NULL; image = PyByteArray_AsString((PyObject*) imgdata); char *result = do_something_more(image, mode); return Py_BuildValue("s", result); } Added: char * do_something_more(char imagebuffer[], int mode) { vector<char> vec(imagebuffer, imagebuffer + sizeof(imagebuffer)); Mat input = imdecode(vec, 1); } Answer: The typeerror is simply due to the fact that the `Y` format specifier does _not_ exist in python2 but only in python3. If you want to pass a `bytearray` in python2 you must use the `O` format specifier. The fact that the result string is just the first few bytes of the actual contect is pretty simple: * `strlen` is a C function that deals with C _null terminated_ strings. Your image data contains some null bytes and hence the function does _not_ return the _actual_ size. * [`PyBuild_Value`](http://docs.python.org/2/c-api/arg.html#Py_BuildValue)'s `s` format specifier takes a C _null terminated_ string and returns a python string object. Since your data contains null bytes not all the content is put in the result. In your C++ code the `char *image` pointer _does_ point to _all_ the data, but you should _not_ rely on C's string functions if your strings contain null bytes. You must always keep track of the length of the string. * * * To make clearer what I mean. Here's a self-contained C extension that can be used to demonstrate your problem: #include <Python.h> static PyObject * do_stuff(PyObject *self, PyObject *args, PyObject *kwargs) { PyByteArrayObject *imgdata; char *image; int mode; char *keywords[] = { "image", "mode", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords, &imgdata, &mode)) return NULL; image = PyByteArray_AsString((PyObject*) imgdata); return Py_BuildValue("s", image); } static PyMethodDef noddy_methods[] = { {"do_stuff", do_stuff, METH_VARARGS | METH_KEYWORDS, "Does stuff"}, {NULL, NULL, 0, NULL} /* Sentinel */ }; void initdemo(void) { (void) Py_InitModule("demo", noddy_methods); } With `setup.py`: from distutils.core import setup, Extension module1 = Extension('demo', sources = ['demo_ext.c']) setup (name = 'Demo', version = '1.0', description = 'This is a demo package', ext_modules = [module1]) Used as: >>> import demo >>> with open('/Path/to/A/PNG/Image.png', 'rb') as f: ... contents = f.read() ... >>> byt = bytearray(contents) >>> byt[:20] bytearray(b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02V') >>> demo.do_stuff(byt) # "truncates" the data '\x89PNG\r\n\x1a\n' Now if you change the `do_stuff` function to: static PyObject * do_stuff(PyObject *self, PyObject *args, PyObject *kwargs) { PyObject *imgdata; char *image; int mode; Py_ssize_t length; char *keywords[] = { "image", "mode", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i", keywords, &imgdata, &mode)) return NULL; image = PyByteArray_AsString(imgdata); length = PyObject_Length(imgdata); PyObject *res = PyString_FromStringAndSize(image, length); return res; } You get: >>> import demo >>> with open('/home/giacomo/Immagini/bad_grouping.png', 'rb') as f: ... contents = f.read() ... 
>>> byt = bytearray(contents) >>> byt[:20] bytearray(b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02V') >>> demo.do_stuff(byt)[:20] '\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02V' As you can see `do_stuff` does _not_ truncate the data anymore. All functions such as `strlen` assume no null byte is in the string and will misbehave when this is not true (as in this case). Also some python's API calls assume C strings, such as `Py_BuildValue`. As you can see the `char *image` **does** contain _all_ the data. The problem is that you are using functions that don't handle it properly.
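A quick way to confirm from the Python side that nothing is truncated any more (a sketch, reusing the `demo` module and placeholder path from the answer above):

    import demo

    with open('/Path/to/A/PNG/Image.png', 'rb') as f:
        contents = f.read()

    result = demo.do_stuff(bytearray(contents))
    # with the length-aware PyString_FromStringAndSize version, nothing is lost
    assert len(result) == len(contents)
    assert result == contents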
OpenMP, Python, C Extension, Memory Access and the evil GIL Question: so I am currently trying to do something like A**b for some 2d ndarray and a double b in parallel for Python. I would like to do it with a C extension using OpenMP (yes I know, there is Cython etc. but at some point I always ran into trouble with those 'high-level' approaches...). So here is the gaussian.c Code for my gaussian.so: void scale(const double *A, double *out, int n) { int i, j, ind1, ind2; double power, denom; power = 10.0 / M_PI; denom = sqrt(M_PI); #pragma omp parallel for for (i = 0; i < n; i++) { for (j = i; j < n; j++) { ind1 = i*n + j; ind2 = j*n + i; out[ind1] = pow(A[ind1], power) / denom; out[ind2] = out[ind1]; } } (A is a square double Matrix, out has the same shape and n is the number of rows/columns) So the point is to update some symmetric distance matrix - ind2 is the transposed index of ind1. I compile it using `gcc -shared -fopenmp -o gaussian.so -lm gaussian.c`. I access the function directly via ctypes in Python: test = c_gaussian.scale test.restype = None test.argtypes = [ndpointer(ctypes.c_double, ndim=2, flags='C_CONTIGUOUS'), # array of sample ndpointer(ctypes.c_double, ndim=2, flags='C_CONTIGUOUS'), # array of sampl ctypes.c_int # number of samples ] The function 'test' is working smoothly as long as I comment the #pragma line - otherwise it ends with error number 139. A = np.random.rand(1000, 1000) + 2.0 out = np.empty((1000, 1000)) test(A, out, 1000) When I change the inner loop to just print ind1 and ind2 it runs smoothly in parallel. It also works, when I just access the ind1 location and leave ind2 alone (even in parallel)! Where do I screw up the memory access? How can I fix this? thank you! Update: Well I guess this is running into the GIL, but I am not yet sure... Update: Okay, I am pretty sure now, that it is evil GIL killing me here, so I altered the example: I now have gil.c: #include <Python.h> #define _USE_MATH_DEFINES #include <math.h> void scale(const double *A, double *out, int n) { int i, j, ind1, ind2; double power, denom; power = 10.0 / M_PI; denom = sqrt(M_PI); Py_BEGIN_ALLOW_THREADS #pragma omp parallel for for (i = 0; i < n; i++) { for (j = i; j < n; j++) { ind1 = i*n + j; ind2 = j*n + i; out[ind1] = pow(A[ind1], power) / denom; out[ind2] = out[ind1]; } } Py_END_ALLOW_THREADS } which is compiled using `gcc -shared -fopenmp -o gil.so -lm gil.c -I /usr/include/python2.7 -L /usr/lib/python2.7/ -lpython2.7` and the corresponding Python file: import ctypes import numpy as np from numpy.ctypeslib import ndpointer import pylab as pl path = '../src/gil.so' c_gil = ctypes.cdll.LoadLibrary(path) test = c_gil.scale test.restype = None test.argtypes = [ndpointer(ctypes.c_double, ndim=2, flags='C_CONTIGUOUS'), ndpointer(ctypes.c_double, ndim=2, flags='C_CONTIGUOUS'), ctypes.c_int ] n = 100 A = np.random.rand(n, n) + 2.0 out = np.empty((n,n)) test(A, out, n) This gives me Fatal Python error: PyEval_SaveThread: NULL tstate Process finished with exit code 134 Now somehow it seems to not be able to save the current thread - but the API doc does not go into detail here, I was hoping that I could ignore Python when writing my C function, but this seems to be quite messy :( any ideas? I found this very helpful: [GIL](http://jessenoller.com/blog/2009/02/01/python- threads-and-the-global-interpreter-lock) Answer: Your problem is much simpler than you think and does not involve GIL in any way. 
You are running in an out-of-bound access to `out[]` when you access it via `ind2` since `j` easily becomes larger than `n`. The reason is simply that you have not applied any data sharing clause to your parallel region and all variables except `i` remain shared (as per default in OpenMP) and therefore subject to data races - in that case multiple simultaneous increments being done by the different threads. Having too large `j` is less of a problem with `ind1`, but not with `ind2` since there the too large value is multiplied by `n` and thus becomes far too large. Simply make `j`, `ind1` and `ind2` private as they should be: #pragma omp parallel for private(j,ind1,ind2) for (i = 0; i < n; i++) { for (j = i; j < n; j++) { ind1 = i*n + j; ind2 = j*n + i; out[ind1] = pow(A[ind1], power) / denom; out[ind2] = out[ind1]; } } Even better, declare them inside the scope where they are being used. That automatically makes them private: #pragma omp parallel for for (i = 0; i < n; i++) { int j; for (j = i; j < n; j++) { int ind1 = i*n + j; int ind2 = j*n + i; out[ind1] = pow(A[ind1], power) / denom; out[ind2] = out[ind1]; } }
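With the race fixed, a quick check from the Python side is to compare the extension's result against a pure-NumPy reference that mirrors what the C loop does (transform the upper triangle, then mirror it); this is a sketch reusing `A`, `out` and `n` from the script above:

    ref = np.power(A, 10.0 / np.pi) / np.sqrt(np.pi)
    iu = np.triu_indices(n)
    expected = np.empty_like(A)
    expected[iu] = ref[iu]        # upper triangle, including the diagonal
    expected.T[iu] = ref[iu]      # mirror it into the lower triangle
    assert np.allclose(out, expected)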
Can't modify global dict variable using multiprocessing in python Question: I'm trying to use multiprocessing to process a numpy array, but I don't know how to get the result of each process back into the dict variable. Using the commented-out code (no multiprocessing) produces what I expect, but when I try to use multiprocessing I can't get anything into `max_rslts`; it stays an empty dict. The example code is:

    from __future__ import print_function
    from multiprocessing import Process
    import numpy as np

    def calc_max(a, step):
        global max_rslts
        max_rslts[step] = a[::step].max()

    if __name__ == '__main__':
        max_rslts = dict()
        a1 = np.arange(16).reshape(4,4)

    #    for i in range(1,4):
    #        calc_max(a1, i)
    #    print(max_rslts)

        jobs = []
        for i in range(1,4):
            p = Process(target=calc_max, args=(a1, i))
            jobs.append(p)
            p.start()

        for j in jobs:
            j.join()

        print(a1)
        print(max_rslts)

Answer: From the [`multiprocessing` programming guidelines](http://docs.python.org/2/library/multiprocessing.html#programming-guidelines):

> Avoid shared state

`global`s are not shared between processes after forking off each sub-process. Use different means to [share state between processes](http://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes) instead. A [`dict` manager](http://docs.python.org/2/library/multiprocessing.html#managers) would work here:

    from __future__ import print_function
    from multiprocessing import Process, Manager
    import numpy as np

    def calc_max(a, step, max_rslts):
        max_rslts[step] = a[::step].max()

    if __name__ == '__main__':
        manager = Manager()
        max_rslts = manager.dict()

        a1 = np.arange(16).reshape(4,4)

        jobs = []
        for i in range(1,4):
            p = Process(target=calc_max, args=(a1, i, max_rslts))
            jobs.append(p)
            p.start()

        for j in jobs:
            j.join()

        print(a1)
        print(max_rslts)
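An alternative that avoids shared state altogether (a sketch): have each worker return its result and assemble them in the parent with a `Pool`, since both the arguments and the return values are picklable here:

    from __future__ import print_function
    from multiprocessing import Pool
    import numpy as np

    def calc_max(args):
        a, step = args
        return step, a[::step].max()

    if __name__ == '__main__':
        a1 = np.arange(16).reshape(4, 4)
        pool = Pool()
        # each worker returns a (step, max) pair; dict() assembles them in the parent
        max_rslts = dict(pool.map(calc_max, [(a1, i) for i in range(1, 4)]))
        pool.close()
        pool.join()
        print(max_rslts)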
Combine two lists of lists into a dictionary python Question: I'm not experienced in programming and I have a problem with combining two lists of parsed sentences (=list within list) into a dictionary. I'm using python 2.6.6. I have two lists of sentences, one in English and the other one in German. The sentences correspond: the first sentence in the German list is a translation of the first sentence in the English list etc. My goal is to be able to access the English sentence, extract the subject and see whether the German subject corresponds to the English one. My two lists look like this (simplified):

    sentences_en = [[u'sid(s1).', u'sentence1', ...], [u'sid(s2).', u'sentence2', ...]]
    sentences_de = [['sid(s1).', 'Satz1', ...], ['sid(s2).', 'Satz2', ...]]

Each item in the list contains the sentence ID ('sid(s1).'), the actual sentence ('sentenceX' or 'SatzX'), as well as more information on the sentence ('...'). Both lists ('sentences_en', 'sentences_de') contain 10000 items. I'd like to map the English to the German sentences in a parallel dictionary that looks like this:

    parallel_dict = {[u'sid(s1).', u'sentence1', ...]:['sid(s1).', 'Satz1', ...], [u'sid(s2).', 'sentence2', ...]:['sid(s2).', 'Satz2', ....]}

As I know that I can't have a list as the key, I tried to turn one of the lists into a tuple and use this as the key (I found this solution on Stack Overflow). Unfortunately, this doesn't seem to work:

    parallel_dict = {}
    tuple_sentences_en = tuple(sentences_en)
    parallel_dict = zip(tuple_sentences_en, sentences_de)

When I print parallel_dict, I get the following structure:

    [([u'sid(s1).', u'sentence1', ...], ['sid(s1).', 'Satz1', ...]), ([u'sid(s2).', u'sentence2', ...], ['sid(s2).', 'Satz2', ...])]

It does map the English sentence 1 to the German sentence 1 etc, but it is most certainly not a dictionary - rather a list of tuples. Does anyone know whether it is even possible to turn this structure into a dictionary with the English sentences as the keys and the German sentences as the values? Or is there a better way to work with parallel data? I'd really appreciate your help! Answer: Since this is for 2.6, use a generator expression for the `dict()` constructor:

    from itertools import izip

    parallel_dict = dict((tuple(en), de) for en, de in izip(sentences_en, sentences_de))

This uses `izip()` to avoid creating an intermediary list of 10000 items. Each English item is converted to a tuple so it can be used as a dictionary key (lists are not hashable), and is paired with the corresponding German item as the value. If you can upgrade to Python 2.7 or 3.x, you can use a dict comprehension (and plain `zip` in Python 3, where `izip` no longer exists):

    {tuple(en): de for en, de in izip(sentences_en, sentences_de)}
resize a 2D numpy array excluding NaN Question: I'm trying to resize a 2D numpy array by a given factor, obtaining a smaller array in output. The array is read from an image file and some of the values should be NaN (Not a Number, np.nan from numpy): it is the result of remote sensing measurements from satellite, and some pixels simply weren't measured. The suitable package I found for this is scipy.misc.imresize, but each pixel in the output array containing a NaN is set to NaN, even if there are some valid data among the original pixels interpolated together. My solution is appended here; what I've done is essentially:

* create a new array based on the original array shape and the desired reduction factor
* create an index array to address all the pixels of the original array to be averaged for each pixel in the new one
* cycle through the new array pixels and average all the not-NaN pixels to obtain the new array pixel value; if there are only NaN, the output will be NaN.

I'm planning to add a keyword to choose between different outputs (average, median, standard deviation of the input pixels and so on). It is working as expected, but on a ~1Mpx image it takes around 3 seconds. Due to my lack of experience in python I'm searching for improvements. Does anyone have a suggestion how to do it better and more efficiently? Does anyone know a library that already implements all that stuff? Thanks. Here you have an example output for random pixel input generated with the code below:

![Example output for random pixel input (see code)](http://i.stack.imgur.com/fschf.png)

    import numpy as np
    import pylab as plt
    from scipy import misc

    def resize_2d_nonan(array,factor):
        """
        Resize a 2D array by different factor on two axis skipping NaN values.
        If a new pixel contains only NaN, it will be set to NaN

        Parameters
        ----------

        array : 2D np array

        factor : int or tuple.
If int x and y factor wil be the same Returns ------- array : 2D np array scaled by factor Created on Mon Jan 27 15:21:25 2014 @author: damo_ma """ xsize, ysize = array.shape if isinstance(factor,int): factor_x = factor factor_y = factor elif isinstance(factor,tuple): factor_x , factor_y = factor[0], factor[1] else: raise NameError('Factor must be a tuple (x,y) or an integer') if not (xsize %factor_x == 0 or ysize % factor_y == 0) : raise NameError('Factors must be intger multiple of array shape') new_xsize, new_ysize = xsize/factor_x, ysize/factor_y new_array = np.empty([new_xsize, new_ysize]) new_array[:] = np.nan # this saves us an assignment in the loop below # submatrix indexes : is the average box on the original matrix subrow, subcol = np.indices((factor_x, factor_y)) # new matrix indexs row, col = np.indices((new_xsize, new_ysize)) # some output for testing #for i, j, ind in zip(row.reshape(-1), col.reshape(-1),range(row.size)) : # print '----------------------------------------------' # print 'i: %i, j: %i, ind: %i ' % (i, j, ind) # print 'subrow+i*new_ysize, subcol+j*new_xsize :' # print i,'*',new_xsize,'=',i*factor_x # print j,'*',new_ysize,'=',j*factor_y # print subrow+i*factor_x,subcol+j*factor_y # print '---' # print 'array[subrow+i*factor_x,subcol+j*factor_y] : ' # print array[subrow+i*factor_x,subcol+j*factor_y] for i, j, ind in zip(row.reshape(-1), col.reshape(-1),range(row.size)) : # define the small sub_matrix as view of input matrix subset sub_matrix = array[subrow+i*factor_x,subcol+j*factor_y] # modified from any(a) and all(a) to a.any() and a.all() # see http://stackoverflow.com/a/10063039/1435167 if not (np.isnan(sub_matrix)).all(): # if we haven't all NaN if (np.isnan(sub_matrix)).any(): # if we haven no NaN at all msub_matrix = np.ma.masked_array(sub_matrix,np.isnan(sub_matrix)) (new_array.reshape(-1))[ind] = np.mean(msub_matrix) else: # if we haven some NaN (new_array.reshape(-1))[ind] = np.mean(sub_matrix) # the case assign NaN if we have all NaN is missing due # to the standard values of new_array return new_array row , cols = 6, 4 a = 10*np.random.random_sample((row , cols)) a[0:3,0:2] = np.nan a[0,2] = np.nan factor_x = 2 factor_y = 2 a_misc = misc.imresize(a, .5, interp='nearest', mode='F') a_2d_nonan = resize_2d_nonan(a,(factor_x,factor_y)) print a print print a_misc print print a_2d_nonan plt.subplot(131) plt.imshow(a,interpolation='nearest') plt.title('original') plt.xticks(arange(a.shape[1])) plt.yticks(arange(a.shape[0])) plt.subplot(132) plt.imshow(a_misc,interpolation='nearest') plt.title('scipy.misc') plt.xticks(arange(a_misc.shape[1])) plt.yticks(arange(a_misc.shape[0])) plt.subplot(133) plt.imshow(a_2d_nonan,interpolation='nearest') plt.title('my.func') plt.xticks(arange(a_2d_nonan.shape[1])) plt.yticks(arange(a_2d_nonan.shape[0])) **EDIT** I add some modification to address [ChrisProsser comment](http://stackoverflow.com/a/21528588/1435167). If I substitute the NaN with some other value, let say the average of the not- NaN pixels, it will affect all the subsequent calculation: the difference between the resampled original array and the resampled array with NaN substituted shows that 2 pixels changed their values. My goal is simply skip all the NaN pixels. 
# substitute NaN with the average value ind_nonan , ind_nan = np.where(np.isnan(a) == False), np.where(np.isnan(a) == True) a_substitute = np.copy(a) a_substitute[ind_nan] = np.mean(a_substitute[ind_nonan]) # substitute the NaN with average on the not-Nan a_substitute_misc = misc.imresize(a_substitute, .5, interp='nearest', mode='F') a_substitute_2d_nonan = resize_2d_nonan(a_substitute,(factor_x,factor_y)) print a_2d_nonan-a_substitute_2d_nonan [[ nan -0.02296697] [ 0.23143208 0. ] [ 0. 0. ]] ![enter image description here](http://i.stack.imgur.com/uVvvj.png) ** 2nd EDIT** To address the [Hooked](http://stackoverflow.com/questions/21527770/resize-a-2d-numpy-array- excluding-nan/21531433#21531433)'s answer I put some additional code. It is an iteresting idea, sadly it interpolates new values over pixels that should be "empty" (NaN) and for my small example generate more NaN than good values. X , Y = np.indices((row , cols)) X_new , Y_new = np.indices((row/factor_x , cols/factor_y)) from scipy.interpolate import CloughTocher2DInterpolator as intp C = intp((X[ind_nonan],Y[ind_nonan]),a[ind_nonan]) a_interp = C(X_new , Y_new) print a print print a_interp [[ nan, nan], [ nan, nan], [ nan, 6.32826577]]) ![enter image description here](http://i.stack.imgur.com/OEeD7.png) Answer: You are operating on small _windows_ of the array. Instead of looping through the array to make the windows, the array can be efficiently restructured by manipulating its strides. The numpy library provides the `as_strided()` function to help with that. An example is provided in the SciPy CookBook [Stride tricks for the Game of Life](http://wiki.scipy.org/Cookbook/GameOfLifeStrides). The following will use a generalized sliding window function found at [Efficient Overlapping Windows with Numpy](http://www.johnvinyard.com/blog/?p=268) \- I will include it at the end. Determine the shape of the new array: rows, cols = a.shape new_shape = rows / 2, cols / 2 Restructure the array into the windows you need, and create an indexing array identifying NaNs: # 2x2 windows of the original array windows = sliding_window(a, (2,2)) # make a windowed boolean array for indexing notNan = sliding_window(np.logical_not(np.isnan(a)), (2,2)) The new array can be made using a list comprehension or a generator expression. # using a list comprehension # make a list of the means of the windows, disregarding the Nan's means = [window[index].mean() for window, index in zip(windows, notNan)] new_array = np.array(means).reshape(new_shape) # generator expression # produces the means of the windows, disregarding the Nan's means = (window[index].mean() for window, index in zip(windows, notNan)) new_array = np.fromiter(means, dtype = np.float32).reshape(new_shape) The generator expression should conserve memory. Using `itertools.izip()` instead of ``zip` should also help if memory is a problem. I just used the list comprehension for your solution. **Your function:** def resize_2d_nonan(array,factor): """ Resize a 2D array by different factor on two axis skipping NaN values. If a new pixel contains only NaN, it will be set to NaN Parameters ---------- array : 2D np array factor : int or tuple. 
If int x and y factor wil be the same Returns ------- array : 2D np array scaled by factor Created on Mon Jan 27 15:21:25 2014 @author: damo_ma """ xsize, ysize = array.shape if isinstance(factor,int): factor_x = factor factor_y = factor window_size = factor, factor elif isinstance(factor,tuple): factor_x , factor_y = factor window_size = factor else: raise NameError('Factor must be a tuple (x,y) or an integer') if (xsize % factor_x or ysize % factor_y) : raise NameError('Factors must be integer multiple of array shape') new_shape = xsize / factor_x, ysize / factor_y # non-overlapping windows of the original array windows = sliding_window(a, window_size) # windowed boolean array for indexing notNan = sliding_window(np.logical_not(np.isnan(a)), window_size) #list of the means of the windows, disregarding the Nan's means = [window[index].mean() for window, index in zip(windows, notNan)] # new array new_array = np.array(means).reshape(new_shape) return new_array I haven't done any time comparisons with your original function, but it should be faster. Many solutions I've seen here on SO _vectorize_ the operations to increase speed/efficiency - I don't quite have a handle on that and don't know if it can be applied to your problem. Searching SO for window, array, moving average, vectorize, and numpy should produce similar questions and answers for reference. **`sliding_window()` from [Efficient Overlapping Windows with Numpy](http://www.johnvinyard.com/blog/?p=268)**: import numpy as np from numpy.lib.stride_tricks import as_strided as ast from itertools import product def norm_shape(shape): ''' Normalize numpy array shapes so they're always expressed as a tuple, even for one-dimensional shapes. Parameters shape - an int, or a tuple of ints Returns a shape tuple ''' try: i = int(shape) return (i,) except TypeError: # shape was not a number pass try: t = tuple(shape) return t except TypeError: # shape was not iterable pass raise TypeError('shape must be an int, or a tuple of ints') def sliding_window(a,ws,ss = None,flatten = True): ''' Return a sliding window over a in any number of dimensions Parameters: a - an n-dimensional numpy array ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size of each dimension of the window ss - an int (a is 1D) or tuple (a is 2D or greater) representing the amount to slide the window in each dimension. If not specified, it defaults to ws. flatten - if True, all slices are flattened, otherwise, there is an extra dimension for each dimension of the input. Returns an array containing each n-dimensional window from a ''' if None is ss: # ss was not provided. the windows will not overlap in any direction. ss = ws ws = norm_shape(ws) ss = norm_shape(ss) # convert ws, ss, and a.shape to numpy arrays so that we can do math in every # dimension at once. ws = np.array(ws) ss = np.array(ss) shape = np.array(a.shape) # ensure that ws, ss, and a.shape all have the same number of dimensions ls = [len(shape),len(ws),len(ss)] if 1 != len(set(ls)): raise ValueError(\ 'a.shape, ws and ss must all have the same length. They were %s' % str(ls)) # ensure that ws is smaller than a in every dimension if np.any(ws > shape): raise ValueError(\ 'ws cannot be larger than a in any dimension.\ a.shape was %s and ws was %s' % (str(a.shape),str(ws))) # how many slices will there be in each dimension? 
newshape = norm_shape(((shape - ws) // ss) + 1) # the shape of the strided array will be the number of slices in each dimension # plus the shape of the window (tuple addition) newshape += norm_shape(ws) # the strides tuple will be the array's strides multiplied by step size, plus # the array's strides (tuple addition) newstrides = norm_shape(np.array(a.strides) * ss) + a.strides strided = ast(a,shape = newshape,strides = newstrides) if not flatten: return strided # Collapse strided so that it has one more dimension than the window. I.e., # the new array is a flat list of slices. meat = len(ws) if ws.shape else 0 firstdim = (np.product(newshape[:-meat]),) if ws.shape else () dim = firstdim + (newshape[-meat:]) # remove any dimensions with size 1 dim = filter(lambda i : i != 1,dim) return strided.reshape(dim)
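For comparison, if the array shape is an exact multiple of the factors and NumPy >= 1.8 is available, the whole operation can also be written as a block reshape plus `np.nanmean`, with no explicit windowing at all (a sketch; the function name `resize_2d_nonan_blocks` is made up):

    import numpy as np

    def resize_2d_nonan_blocks(array, factor):
        """Block-average a 2D array ignoring NaNs; all-NaN blocks stay NaN
        (np.nanmean emits a RuntimeWarning for those blocks)."""
        fx, fy = (factor, factor) if isinstance(factor, int) else factor
        xs, ys = array.shape
        # gather each fx-by-fy block into the last axis, then average over it
        blocks = (array.reshape(xs // fx, fx, ys // fy, fy)
                       .transpose(0, 2, 1, 3)
                       .reshape(xs // fx, ys // fy, fx * fy))
        return np.nanmean(blocks, axis=2)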
Django Admin redirects to 500 error Question: I am getting a 500 error when i login to the django admin interface. I have a ubuntu server 13.10 running nginx uwsgi mysql for my database. ive set it up following [this tutorial](http://blog.richard.do/index.php/2013/04/setting-up-nginx-django- uwsgi-a-tutorial-that-actually-works/#comment-115) (first time I've set up a django production server) my settings.py file is as follows """ Django settings for app_name project. For more information on this file, see https://docs.djangoproject.com/en/1.6/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.6/ref/settings/ """ # Build paths inside the project like this: os.path.join(BASE_DIR, ...) import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXX' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = False TEMPLATE_DEBUG = False ALLOWED_HOSTS = ['website.com', 'www.website.com', 'ip_address'] # Application definition INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'registration', ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) ROOT_URLCONF = 'app_name.urls' WSGI_APPLICATION = 'app_name.wsgi.application' # Database # https://docs.djangoproject.com/en/1.6/ref/settings/#databases DATABASES = { 'default': { 'ENGINE':'django.db.backends.mysql', 'NAME': 'db_name', 'USER': 'username', 'PASSWORD': 'password', 'HOST': '127.0.0.1', } } # Internationalization # https://docs.djangoproject.com/en/1.6/topics/i18n/ LANGUAGE_CODE = 'en-gb' TIME_ZONE = 'Greenwich' USE_I18N = True USE_L10N = True USE_TZ = True # URL that handles the media served from MEDIA_ROOT. Make sure to use a # trailing slash. # Examples: "http://example.com/media/", "http://media.example.com/" MEDIA_URL = '/media/' # Absolute filesystem path to the directory that will hold user-uploaded files. # Example: "/var/www/example.com/media/" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.6/howto/static-files/ STATIC_URL = '/static/' # Absolute path to the directory static files should be collected to. # Don't put anything in this directory yourself; store your static files # in apps' "static/" subdirectories and in STATICFILES_DIRS. # Example: "/var/www/example.com/static/" STATIC_ROOT = os.path.join(BASE_DIR, 'static', 'static-only') # Additional locations of static files STATICFILES_DIRS = ( # Put strings here, like "/home/html/static" or "C:/www/django/static". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. os.path.join(BASE_DIR, 'static', 'static'), ) TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. 
    os.path.join(BASE_DIR, 'static', 'templates'),
)

AUTHENTICATION_BACKENDS = (
    'django.contrib.auth.backends.ModelBackend',
)

ACCOUNT_ACTIVATION_DAYS = 7

I've managed to run sudo python manage.py syncdb and set up my admin user, but when I go to log in it redirects me to my 500.html template page. My uWSGI log file is here:

*** Starting uWSGI 1.9.13-debian (64bit) on [Mon Feb 3 13:11:22 2014] *** compiled with version: 4.8.1 on 16 July 2013 02:12:59 os: Linux-3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 nodename: appname machine: x86_64 clock source: unix pcre jit disabled detected number of CPU cores: 1 current working directory: /var/www/appname.com/src writing pidfile to /tmp/project-master.pid detected binary path: /usr/bin/uwsgi-core setuid() to 33 your processes number limit is 7781 your memory page size is 4096 bytes detected max file descriptor number: 1024 lock engine: pthread robust mutexes uwsgi socket 0 bound to TCP address 127.0.0.1:8889 fd 3 Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1] *** Python threads support is disabled. You can enable it with --enable-threads *** Python main interpreter initialized at 0x1a9f500 your server socket listen backlog is limited to 100 connections your mercy for graceful operations on workers is 60 seconds mapped 145536 bytes (142 KB) for 1 cores *** Operational MODE: single process *** added /var/www/appname.com/src/appname/ to pythonpath. WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x1a9f500 pid: 13398 (default app) *** uWSGI is running in multiple interpreter mode *** spawned uWSGI master process (pid: 13398) spawned uWSGI worker 1 (pid: 13399, cores: 1) [pid: 13399|app: 0|req: 1/1] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:11:26 2014] GET / => generated 1761 bytes in 161 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 2/2] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:13:27 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 3/3] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:13:32 2014] GET /admin/ => generated 1865 bytes in 35 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 4/4] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:13:33 2014] POST /admin/ => generated 1761 bytes in 84 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 5/5] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:19:05 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 6/6] 176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:19:05 2014] GET /favicon.ico => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 7/7] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:19:07 2014] POST /admin/ => generated 1761 bytes in 78 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 8/8] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:30:01 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 9/9] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:30:05 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 10/10]
176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:30:05 2014] GET /favicon.ico => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 11/11] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:30:06 2014] POST /admin/ => generated 1761 bytes in 92 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 12/12] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:00 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 13/13] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:12 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 14/14] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 15/15] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 16/16] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:13 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 17/17] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:15 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 18/18] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:31 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 19/19] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 20/20] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 21/21] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:32 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 22/22] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:33 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 23/23] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:31:34 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 24/24] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:31:36 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 25/25] 176.62.211.192 () {42 vars in 730 bytes} [Mon Feb 3 13:32:00 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 26/26] 176.62.211.192 () {42 vars in 736 bytes} [Mon Feb 3 13:32:27 2014] GET / => generated 1761 bytes in 5 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 27/27] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:32 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 
switches on core 0) [pid: 13399|app: 0|req: 28/28] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:38 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 29/29] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:32:38 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 30/30] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:32:40 2014] GET /admin/ => generated 1865 bytes in 16 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 31/31] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:32:40 2014] GET /accounts/register/ => generated 2839 bytes in 7 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 32/32] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:32:42 2014] GET /accounts/register/ => generated 2839 bytes in 7 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 33/33] 176.62.211.192 () {40 vars in 735 bytes} [Mon Feb 3 13:33:03 2014] GET /accounts/login/ => generated 2336 bytes in 7 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 34/34] 176.62.211.192 () {48 vars in 951 bytes} [Mon Feb 3 13:33:05 2014] POST /accounts/login/ => generated 1761 bytes in 75 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 35/35] 176.62.211.192 () {40 vars in 735 bytes} [Mon Feb 3 13:33:08 2014] GET /accounts/login/ => generated 2336 bytes in 9 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 36/36] 176.62.211.192 () {40 vars in 741 bytes} [Mon Feb 3 13:33:09 2014] GET /accounts/register/ => generated 2839 bytes in 6 msecs (HTTP/1.1 200) 4 headers in 224 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 37/37] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:33:10 2014] GET /admin/ => generated 1865 bytes in 13 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) pid: 13399|app: 0|req: 38/38] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:33:10 2014] GET / => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 39/39] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:33:10 2014] GET / => generated 1761 bytes in 4 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 40/40] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:55:37 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 41/41] 176.62.211.192 () {40 vars in 705 bytes} [Mon Feb 3 13:55:41 2014] GET / => generated 1761 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 42/42] 176.62.211.192 () {40 vars in 717 bytes} [Mon Feb 3 13:55:41 2014] GET /admin/ => generated 1865 bytes in 14 msecs (HTTP/1.1 200) 7 headers in 336 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 43/43] 176.62.211.192 () {42 vars in 717 bytes} [Mon Feb 3 13:55:41 2014] GET /favicon.ico => generated 1761 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 95 bytes (1 switches on core 0) [pid: 13399|app: 0|req: 44/44] 176.62.211.192 () {48 vars in 926 bytes} [Mon Feb 3 13:55:45 2014] POST /admin/ => generated 1761 bytes in 71 msecs (HTTP/1.1 500) 3 headers in 121 bytes (1 switches on core 0) 
I've been searching online for a solution but haven't been able to find anything, so I have resorted to posting on here. Any help on this would be much appreciated. Answer: I have now fixed this: I realised that I had SESSION_COOKIE_SECURE = True in my settings, which was messing up the login. I've restarted the uWSGI process, re-run it with uwsgi.ini, and it all works now. Thanks to everyone that helped me resolve this!
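For anyone else hitting the same symptom (the admin login POST comes back as a 500 with nothing useful in the access log), the settings lines in question look like the sketch below. This is a sketch rather than the poster's exact config: SESSION_COOKIE_SECURE marks the session cookie as HTTPS-only, and a "secure" cookie is never sent back by the browser over plain HTTP, so it should stay off until HTTPS is actually in place.

# settings.py -- only mark cookies as secure once the site is served over HTTPS
SESSION_COOKIE_SECURE = False   # flip to True after moving to HTTPS
CSRF_COOKIE_SECURE = False      # same idea for the CSRF cookie (not shown in the question's settings)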
accessing ArgumentParser variable with environment variable Question: How do I access the prog variable of the parser below?

parser = argparse.ArgumentParser(prog='ipush', description='Utility to push the last commit and email the color diff')
parser.add_argument('-V', '--version', action='version', version='%(prog)s 1.0,$s'PYTHON_VERSION)

How do I access the prog variable from argparse.ArgumentParser, and how do I get PYTHON_VERSION from the environment as well? Answer:

>>> import argparse
>>> parser = argparse.ArgumentParser(prog='ipush',
...     description='Utility to push the last commit and email the color diff')
>>>
>>> parser.prog
'ipush'
>>>

For the Python version:

>>> import sys
>>> sys.version
'2.7.2 (default, Oct 11 2012, 20:14:37) \n[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]'
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0)
>>>

If you want to print_usage:

>>> import argparse
>>> parser = argparse.ArgumentParser(prog='ipush', description='Utility to push the last commit and email the color diff')
>>> p = parser.parse_args()
>>> argument = vars(p)

Then check whether the required arguments were actually supplied and, if not, print the usage:

>>> parser.print_usage()

Here is a small snippet:

#!/usr/bin/python
import argparse


def run(p):
    """ Execute based on the arguments """
    if p.get('create', False) is True:
        # do something
        return True
    elif p.get('find', False) is True:
        # do something
        return True
    else:
        return False


def main():
    parser = argparse.ArgumentParser(prog='ipush',
                                     description='Utility to push the last commit and email the color diff')
    parser.add_argument('--find', action="store_true", help="find the user details")
    parser.add_argument('--create', action='store_true', help="creating the database")
    p = parser.parse_args()
    argument = vars(p)
    if run(argument) is False:
        parser.print_usage()


if __name__ == '__main__':
    main()

Output:

test_script/tmp$ python p.py
usage: ipush [-h] [--find] [--create]
test_script/tmp$ python p.py -h
usage: ipush [-h] [--find] [--create]

Utility to push the last commit and email the color diff

optional arguments:
  -h, --help  show this help message and exit
  --find      find the user details
  --create    creating the database
test_script/tmp$ python p.py --find
test_script/tmp$
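As a hedged aside on what the version string in the question was probably aiming at (PYTHON_VERSION there is the asker's placeholder, not a real environment variable; this sketch takes the interpreter version from the standard platform module instead):

import argparse
import platform

parser = argparse.ArgumentParser(prog='ipush',
                                 description='Utility to push the last commit and email the color diff')
# argparse only substitutes %(prog)s; anything else has to be baked into the string.
parser.add_argument('-V', '--version', action='version',
                    version='%(prog)s 1.0, Python ' + platform.python_version())

print(parser.prog)   # -> ipush

Running it with -V then prints something like "ipush 1.0, Python 2.7.5", depending on the interpreter in use.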
python social auth redirect to an error page conditionally Question: I am using python-social-auth with Django to handle authentication and registration via social media. I've defined

LOGIN_ERROR_URL = '/account/auth-failed/'

and it works well: when there is a problem, the user is redirected there correctly. However, I want to choose the error URL conditionally at run time, because I have a scenario where the flow is invoked via JSON as well as via plain HTML. Any suggestion on how to do that? Answer: It's not really simple to implement, but you can override the default strategy and, in your custom version, override the get_setting() method, like this:

from django.conf import settings
from social.strategies.django_strategy import DjangoStrategy


class CustomDjangoStrategy(DjangoStrategy):
    def get_setting(self, name):
        if name == 'LOGIN_ERROR_URL' and self.request.is_ajax():
            return '/auth/error/ajax'
        else:
            return getattr(settings, name)

Put that into a module and define SOCIAL_AUTH_STRATEGY = app.module.CustomDjangoStrategy in your settings.
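For completeness, a minimal sketch of the settings.py wiring the answer refers to (the dotted path app.module is a placeholder for wherever you put CustomDjangoStrategy; the strategy setting is given as an import-path string):

# settings.py (sketch)
SOCIAL_AUTH_STRATEGY = 'app.module.CustomDjangoStrategy'

# The non-AJAX fallback stays as before.
LOGIN_ERROR_URL = '/account/auth-failed/'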
Creating an object in a method in Python: invalid syntax Question: I'm trying to make a little program to create and administrate accounts with a bank. The code is written in German; I wrote the translation in the comments. Every time I try to run the program the interpreter reports 'invalid syntax', but doesn't highlight the offending line. I can't find the problem, but it has to be located in the method kontoEroeffnen(). I would be very grateful for some help.

import sys

# makes an account with bank
class Konto :
    def __init__( self , laufzeit , ID , password , zinssatz):
        self.ID = ID#the ID of the account
        self.laufzeit = laufzeit#runtime of the account
        self.password = password
        self.zinssatz = zinssatz#interest rate
        self.kontostaende = list(range(laufzeit + 1)#array with the months

    def authentifizierung(self): #authentication of the user
        print("Geben Sie Ihre Konto ID ein")#input of the account-ID
        idl = int(sys.stdin.readline().rstrip())
        print("Geben Sie ihr Passwort ein")#input of the password
        passwort = int(sys.stdin.readline().rstrip())
        if idl == self.id :
            if passwort == self.password :
                return True
            else:
                print("Falsches Passwort")#output:wrong password
                return False
        else:
            print("Erneut versuchen")#output:retry
            return False

    def einzahlen(self) :#pay into the bank
        if authentifizierung() :
            print("Wenn es Ihre Ersteinzahlung ist geben Sie e ein")#output:if this is your first pay, input 'e'
            print("Sonst geben Sie den Monat der Laufzeit ihres Kontos ein")#output:else:input the month of the runtime of your account
            eingabe = sys.stdin.readline().rstrip()
            if eingabe == 'e' :
                print("Geben Sie den Betrag Ihrer Einzahlung ein")#output:you have to input the sum
                self.kontostaende[0] = int(sys.stdin.readline().rstrip())
                #Hier muss noch der Verweis zur Startsequenz hin.... //here I want a jump to the start sequence (not important now...)
            else:
                index = int(eingabe)
                if (index > 0 ) and (index < len(self.kontostaende)):
                    print("Geben Sie den Betrag Ihrer Einzahlung ein")
                    self.kontostaende[index] = int(sys.stdin.readline().rstrip())
                    # Hier muss wieder zur Startsequenz...
                else :
                    einzahlen()

    def kontosteandeVomErstenTagBerechnen(self):#calculate the account balances
        for i in self.kontostaende :
            self.kontostaende[i] = self.kontostaende[0]* pow((1 + self.zinssatz) , (i/self.laufzeit))

    def kontostaendeVomTagDerEinzahlungBerechnen(self , index):
        monate = list(range(laufzeit)) #macht eine Liste der zu veraendernden Elementen //makes a list of the elements that should be changed
        for i in range (index): # |
            del monate[0]       #-|
        for i in monate :
            self.kontostaende[i + index] = self.kontostaende[index] * pow((1 + self.zinssatz) , ( i / (self.laufzeit - index)))


def kontoEröffnen ():#create a new account
    print("Geben Sie die gewuenschte ID Ihres Kontos ein")#input the ID of your account
    ID = int(sys.stdin.readline().rstrip())
    print("Geben Sie das Passwort, das Sie benutzen wollen ein, achten sie darauf es niemandem zu verraten")#input the password of your account
    password = sys.stdin.readline().rstrip()
    print("Geben Sie die Laufzeit des Vertrags ein")#input the runtime of your account
    laufzeit = int(sys.stdin.readline().rstrip())
    print("Geben Sie den Zinssatz Ihres Vertrags ein, in Dezimalschreibweise.")#input the interest rate in decimal
    zinssatz = float(sys.stdin.readline().rstrip())
    konto = Konto(laufzeit , ID , password , zinssatz)
    return konto


#noch nicht fertig // not finished because doesn't work
def startsequenz (): #start sequence....doesn't matter for this.
    print ('''
Geben Sie ein ob sie ein Konto haben oder eröffnen wollen!
geben Sie, wenn Sie ein Konto haben, k ein, Sie e ein.''')
    if sys.stdin.read(1) == e :
        kontoEröffnen() #nicht fertig (not finished)

Answer: You are missing a closing parenthesis in your Konto.__init__ method:

self.kontostaende = list(range(laufzeit + 1)#array with the months
# ------------------------------^ missing

Add the parenthesis there at the end:

self.kontostaende = list(range(laufzeit + 1))
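A side note (an illustration added here, not part of the original answer) on why the error seemed to point at the wrong place: with an unclosed opening parenthesis the parser keeps treating the following lines as part of the same expression, so interpreters of this era report 'invalid syntax' on a later line than the one containing the mistake (recent CPython releases instead say which bracket was never closed). A runnable sketch:

source = (
    "class Demo:\n"
    "    def __init__(self, n):\n"
    "        self.values = list(range(n + 1)   # missing ')'\n"
    "\n"
    "    def other(self):\n"
    "        return self.values\n"
)

try:
    compile(source, "demo.py", "exec")
except SyntaxError as err:
    # The mistake is on line 3, but older parsers only give up further down.
    print("SyntaxError (%s) reported for line %s" % (err.msg, err.lineno))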
Python alternative to import Question: I've got the following code:

def main():
    #init
    #Load config.
    import localconfig
    print localconfig.name
    #update mac adress db, if at all possible:
    try:
        from maclist import maclist
    except:
        import urllib2
        print "Fetching MAC adress db."
        maclist = urllib2.urlopen(localconfig.url)
        fl = open("maclist.py","w")
        fl.write( "#maclist.py generated by "+localconfig.name+"""
#Based on """+localconfig.url+"""
maclist = {}
""")
        print "Generating maclist.py"
        for line in maclist:
            if "#" in line:
                line=line[:line.index("#")]
            line = line.split()
            if line:
                if "-" in line[0]:
                    line[0]=line[0].replace("-",":")
                if "/" in line[0]:
                    ndex = line[0].index("/")
                    ndex = (int(line[0][ndex+1:])//8)*3
                    line[0]=line[0][:ndex-1]
                line=[repr(part) for part in line]
                line = "maclist["+line[0]+"]="+line[1]
                fl.write("=".join(line.split())+"\n")
        fl.close()
        from maclist import maclist
        print "Succesfully loaded maclist.py"

This does work, but I don't like the fact that I'm importing (and thus running) something I downloaded from the internet. Is there a better way to go about this? (Remember, I don't want to re-download every time, only the first time, or if the file has been deleted.) Answer: I suggest you learn how to use JSON, so you can just have a file like this:

{<id>:<mac>, <id>:<mac>}

(Yes, it's similar to Python's dict and you can use it the same way with the right parser.)
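A hedged sketch of what the JSON suggestion can look like in practice (the file name maclist.json and the helper load_maclist are made up for illustration, and the per-line parsing from the question is left out so the caching pattern stays visible):

import json
import os
import urllib2   # the question targets Python 2; use urllib.request on Python 3

CACHE_FILE = "maclist.json"   # hypothetical cache path


def load_maclist(url):
    """Return the MAC prefix mapping, downloading it only if no cache exists."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as fl:
            return json.load(fl)
    maclist = {}
    for line in urllib2.urlopen(url):
        # ... the same per-line parsing as in the question would fill maclist here ...
        pass
    with open(CACHE_FILE, "w") as fl:
        json.dump(maclist, fl)   # plain data on disk; nothing is imported or executed
    return maclist

The point of the switch is that json.load only parses data, whereas importing maclist.py executes whatever ended up in that file.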
Getting Started with C development and GTK+ Question: I'm really a Python developer exclusively, but I'm making my first foray into C programming now, and I'm having a lot of trouble getting started. I can't seem to get the hang of compilation and including libraries. At this point, I'm just identifying the libraries that I need and trying to compile them with a basic "hello, world" app, just to make sure that I have my environment set up to do the actual programming. This is a DBus backend application that will use GIO to connect to DBus.

#include <stdlib.h>
#include <gio/gio.h>

int main (int argc, char *argv[])
{
    printf("hello, world");
    return 0;
}

Then, I try to compile:

~$ gcc main.c
main.c:2:21: fatal error: gio/gio.h: No such file or directory
 #include <gio/gio.h>

I believe that I've installed the correct packages as indicated [here](https://wiki.gnome.org/Apps/DeveloperTools/Installation/Fedora), and gio.h exists at /usr/include/glib-2.0/gio/gio.h. I found a command online to add a search directory to gcc, but that resulted in other errors:

~$ gcc -I /usr/include/glib-2.0/ main.c
In file included from /usr/include/glib-2.0/glib/galloca.h:34:0,
                 from /usr/include/glib-2.0/glib.h:32,
                 from /usr/include/glib-2.0/gobject/gbinding.h:30,
                 from /usr/include/glib-2.0/glib-object.h:25,
                 from /usr/include/glib-2.0/gio/gioenums.h:30,
                 from /usr/include/glib-2.0/gio/giotypes.h:30,
                 from /usr/include/glib-2.0/gio/gio.h:28,
                 from main.c:2:
/usr/include/glib-2.0/glib/gtypes.h:34:24: fatal error: glibconfig.h: No such file or directory
 #include <glibconfig.h>
                        ^
compilation terminated.

There has to be some relatively simple method for setting some options/variables (a makefile?) to automatically include the necessary headers. I'm also going to use Eclipse-CDT or Anjuta as an IDE and would appreciate help to fix the include path (or whatever it's called in C). Any help is greatly appreciated.

Answer: Use pkg-config (and make). See [this answer](http://stackoverflow.com/a/18877584/841108) to a very similar question, and also [this one](http://stackoverflow.com/a/20585574/841108) and [that one](http://stackoverflow.com/a/20146082/841108). Don't forget the -Wall and -g flags to gcc. You don't need an IDE to compile your code: the IDE will just run some gcc commands, so it's better to know how to use them yourself.
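To make the pkg-config advice concrete, here is a minimal sketch (gio-2.0 is the pkg-config package name shipped with GLib; the output name main is arbitrary):

gcc -Wall -g main.c $(pkg-config --cflags --libs gio-2.0) -o main

pkg-config expands to the right -I include paths (including the directory holding the generated glibconfig.h that tripped up the manual -I attempt) and the -l libraries. In a GNU makefile the same idea is usually wired into the flags and the built-in rules do the rest:

# Makefile sketch relying on GNU make's built-in compile and link rules
CFLAGS += -Wall -g $(shell pkg-config --cflags gio-2.0)
LDLIBS += $(shell pkg-config --libs gio-2.0)

main: main.o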