Dataset columns (types and value ranges as reported by the viewer):
title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
How to make a tkinter window stay at the bottom?
39,630,966
<pre><code>import Tkinter
from Tkinter import Button

root = Tkinter.Tk()

def close_window():
    root.destroy()

w = root.winfo_screenwidth()
root.overrideredirect(1)
root.geometry("200x200+%d+200" % (w-210))
root.configure(background='gray10')

btn = Button(text='X', borderwidth=0, highlightthickness=0, bd=0, command=close_window, height="1", width="1")
btn.pack()
btn.config(bg='gray10', fg='white')
btn.config(font=('helvetica', 8))

root.mainloop()
</code></pre> <p>The window always stays on top of all the windows that I open. I want it to stay at the bottom, like the wallpaper. Thanks in advance!</p>
0
2016-09-22T05:32:26Z
39,632,581
<p>You're already doing something close. Get the screen height as well and subtract your window's height (or a bit more) from it.</p> <pre><code>import Tkinter
from Tkinter import Button

root = Tkinter.Tk()

def close_window():
    root.destroy()

screen_width = root.winfo_screenwidth()
screen_height = root.winfo_screenheight()
root.overrideredirect(1)
root.geometry("200x200+{0}+{1}".format(screen_width-210, screen_height-210))
root.configure(background='gray10')

btn = Button(text='X', borderwidth=0, highlightthickness=0, bd=0, command=close_window, height="1", width="1")
btn.pack()
btn.config(bg='gray10', fg='white')
btn.config(font=('helvetica', 8))

root.mainloop()
</code></pre>
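<p>Note: if "stay at the bottom" means the stacking order (always behind other windows) rather than the screen position, the code above only moves the window to the bottom corner of the screen. A minimal sketch of one common approach, re-lowering the window whenever it gains focus, follows; this is an assumption to verify, since behaviour varies by window manager:</p> <pre><code>import Tkinter

root = Tkinter.Tk()
root.lower()  # push the window to the bottom of the stacking order once
# push it back down whenever it would come to the front
root.bind('&lt;FocusIn&gt;', lambda event: root.lower())
root.mainloop()
</code></pre>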
0
2016-09-22T07:15:47Z
[ "python", "user-interface", "tkinter" ]
python client and perl server: packing and unpacking bytes to send/receive
39,631,009
<p>python_client.py</p> <pre><code>def send_one_message(sock, data):
    length = len(data)
    sock.sendall(struct.pack('!I', length))
    sock.sendall(data)
</code></pre> <p>perl_server.pl</p> <pre><code>sub ntohl {
    unpack("I", pack("N", $_[0]));
}

my $line = "";
$client_socket-&gt;recv($line, 4);
my $line_length = ntohl($line);
print "expected to receive $line_length bytes\n";
$client_socket-&gt;recv($line, $line_length);
print "$line\n";
</code></pre> <p>I get this error:<br> <code>Argument "\0\0\0C" isn't numeric in pack</code> in perl_server.pl</p> <p>I don't think I am unpacking correctly in perl_server.pl.</p> <p>Any suggestions?</p>
2
2016-09-22T05:35:38Z
39,631,400
<p>I've changed <code>ntohl</code> on the Perl side:</p> <pre><code>sub ntohl {
    unpack("I", $_[0]);
}
</code></pre> <p>and on the Python side:</p> <pre><code>sock.sendall(struct.pack('I', length))
</code></pre>
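<p>A side note on portability (an addition, not part of the original answer): the fix above switches both sides to native byte order ('I'), which only works while the client and the server happen to share the same endianness. Keeping network byte order, as the question's Python code already did, is more portable; only the Perl unpack needed fixing:</p> <pre><code># python_client.py -- unchanged from the question: '!I' packs a big-endian
# (network byte order) unsigned 32-bit integer
sock.sendall(struct.pack('!I', length))
</code></pre> <pre><code># perl_server.pl -- 'N' is an unsigned 32-bit integer in big-endian (network)
# order, so unpack the received 4 bytes directly instead of packing them again
sub ntohl {
    unpack("N", $_[0]);
}
</code></pre>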
0
2016-09-22T06:04:34Z
[ "python", "perl", "sockets" ]
How to read averaged nodal stress from abaqus odb file using python script?
39,631,065
<p>I know how to read element or element-node stress values (unaveraged) using a Python script.</p> <pre><code>field = stressField.getSubset(region=topCenter, position=INTEGRATION_POINT, elementType='CAX4')
</code></pre> <p>But I want averaged stress values at the nodes. FYI, my odb does not contain node position data for stress (i.e., position=NODAL). </p>
0
2016-09-22T05:40:05Z
39,647,993
<p>This is one of those things that is way more cumbersome than it should be. You need to create xyData:</p> <pre><code>session.xyDataListFromField(odb=odb, outputPosition=ELEMENT_NODAL,
                            variable=(('S', INTEGRATION_POINT), ),
                            elementSets=('PART-1-1.SETNAME', ))
</code></pre> <p>This creates a dictionary with objects for every node of every element and every stress component (i.e. huge). Unfortunately the dictionary is keyed by cumbersome descriptor strings, e.g.:</p> <pre><code>session.xyDataObjects['S:S11 PI:PART-1-1 E: 15 N:2'].data
</code></pre> <p>gives the 11 stress component of node 2 associated with element 15. To make use of the data in a script you need to either construct the strings, or loop over the dictionary and parse the <code>positionDescription</code> for each object.</p> <p>Edit: if you want the nodal averages it's pretty much the same. You do: </p> <pre><code>session.xyDataListFromField(odb=odb, outputPosition=NODAL,
                            variable=(('S', INTEGRATION_POINT), ),
                            nodeSets=('PART-1-1.SETNAME', ))
</code></pre> <p>and the dictionary objects are keyed like:</p> <pre><code>session.xyDataObjects['S:S11 (Avg: 75%) PI:PART-1-1 N:2'].data
</code></pre> <p>Note you can issue multiple <code>session.xyDataListFromField</code> calls and all of the data goes into <code>xyDataObjects</code>. (For example, if you want stress and strain you can process both at once.)</p> <p>For completeness, if you only want a specific component (or components) you can request it as:</p> <pre><code>variable=(('S', INTEGRATION_POINT, ((COMPONENT, 'S11'),)), )
</code></pre>
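<p>To illustrate the "construct the strings" route mentioned above, here is a rough sketch (an addition, assuming the nodal-average key format shown in this answer and that <code>session.xyDataObjects</code> supports <code>keys()</code> like other Abaqus repositories):</p> <pre><code>import re

# collect the averaged S11 value per node by parsing the descriptor keys
s11_by_node = {}
for key in session.xyDataObjects.keys():
    # nodal-average keys look like 'S:S11 (Avg: 75%) PI:PART-1-1 N:2'
    match = re.match(r"S:S11 .*N:\s*(\d+)$", key)
    if match:
        node_label = int(match.group(1))
        s11_by_node[node_label] = session.xyDataObjects[key].data
</code></pre>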
0
2016-09-22T20:05:52Z
[ "python", "abaqus", "stress", "odb" ]
Linear Support Vector Machines multiclass classification with PySpark API
39,631,208
<p>Support Vector Machines do not yet support multiclass classification within Spark, but will in the future, as described on the Spark <a href="http://spark.apache.org/docs/latest/mllib-classification-regression.html" rel="nofollow">page</a>.</p> <p>Is there any release date, or any chance to run multiclass classification with Support Vector Machines through the PySpark API? Thank you for any insights.</p>
0
2016-09-22T05:53:22Z
39,642,967
<p>In practice you can perform multiclass classification using an arbitrary binary classifier and a one-vs-rest strategy. <code>mllib</code> doesn't provide one (there is one in <code>ml</code>) but you can easily build your own. Assuming data looks like this:</p> <pre><code>import numpy as np
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import SVMWithSGD

np.random.seed(323)

classes = [0, 1, 2, 3, 4, 5]

def make_point(classes):
    label = np.random.choice(classes)
    features = np.random.random(len(classes))
    features[label] += 10
    return LabeledPoint(label, features)

data = sc.parallelize([make_point(classes) for _ in range(1000)])
xs = data.take(5)
</code></pre> <p>we can train a separate model for each class:</p> <pre><code>def model_for_class(c, rdd):
    def adjust_label(lp):
        return LabeledPoint(1 if lp.label == c else 0, lp.features)

    model = SVMWithSGD.train(rdd.map(adjust_label))
    model.clearThreshold()
    return model

models = [model_for_class(c, data) for c in classes]
</code></pre> <p>and use it for prediction:</p> <pre><code>[(x.label, np.argmax([model.predict(x.features) for model in models]))
 for x in xs]

## [(0.0, 0), (1.0, 1), (0.0, 0), (5.0, 5), (2.0, 2)]
</code></pre> <p>As a side note, you cannot expect any further developments in <code>pyspark.mllib</code> because it is being deprecated in favor of <code>ml</code>. </p>
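<p>For reference, a minimal sketch of the one-vs-rest helper in <code>ml</code> that this answer alludes to (assuming the DataFrame-based API in Spark 2.0; <code>train_df</code> is a hypothetical DataFrame with "label" and "features" columns):</p> <pre><code>from pyspark.ml.classification import LogisticRegression, OneVsRest

# OneVsRest wraps any binary ml classifier into a multiclass one
ovr = OneVsRest(classifier=LogisticRegression(maxIter=10))
model = ovr.fit(train_df)
predictions = model.transform(train_df)
</code></pre>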
1
2016-09-22T15:21:16Z
[ "python", "apache-spark", "pyspark", "svm", "text-classification" ]
How to connect a socket to another computer's socket through the Internet
39,631,360
<p>I recently had some difficulties connecting a socket to another computer's socket through the Internet; an image is worth a thousand words:</p> <p><a href="http://i.stack.imgur.com/9CseJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/9CseJ.png" alt="enter image description here"></a></p> <p>Computer <strong>A</strong> is running this "<strong>listener.py</strong>" script:</p> <pre><code>import socket

PORT = 50007
BUFFER = 2048
HOST = ''

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, addr = s.accept()
        with conn:
            print('Connected by', addr)
            while True:
                data = conn.recv(BUFFER)
                if not data:
                    break
                conn.sendall(data)
</code></pre> <p>Computer <strong>B</strong> is running this "<strong>sender.py</strong>" script:</p> <pre><code>import socket

HOST = '101.81.83.169'  # The remote host
PORT = 50007            # The same port as used by the server

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b'Hello, world')
</code></pre> <p>So first of all, I run the "<strong>listener</strong>" script on computer <strong>A</strong>. Then, I run the "<strong>sender</strong>" script on computer B. However, when I execute the "<strong>sender</strong>" script, I receive an <strong>error</strong> message which tells me that I am not authorized to connect to this remote address. </p> <p>So I would like to know how I can connect a socket to another socket through the Internet without changing the router configurations.</p> <p>Thank you very much for your help.</p> <p><strong>Edit</strong>: Here is the error message (I didn't execute the same script for some reasons, but it's the same error message):</p> <pre><code>sock.connect(('101.81.83.169',50007))
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused
</code></pre>
0
2016-09-22T06:02:55Z
39,636,475
<p>Computer B can't directly connect to computer A, since computer A has an IP address which is not reachable from the outside. You need to set up a port forwarding rule in the 101.81.83.169 router that redirects incoming connection requests for port 50007 to IP address 192.168.0.4.</p> <p>However, since you say that you are seeking a solution without changing router configurations, you need something different.</p> <p>In this case, you could set up an intermediate server running on the public Internet that both computers can connect to, which then serves as a tunneling platform between them. Solutions for this already exist; for example, have a look at <a href="https://ngrok.com/" rel="nofollow">ngrok</a>, which has Python bindings available.</p>
0
2016-09-22T10:22:12Z
[ "python", "sockets", "networking" ]
How to connect a socket to another computer's socket through the Internet
39,631,360
<p>I recently had some difficulties connecting a socket to another computer's socket through the Internet; an image is worth a thousand words:</p> <p><a href="http://i.stack.imgur.com/9CseJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/9CseJ.png" alt="enter image description here"></a></p> <p>Computer <strong>A</strong> is running this "<strong>listener.py</strong>" script:</p> <pre><code>import socket

PORT = 50007
BUFFER = 2048
HOST = ''

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, addr = s.accept()
        with conn:
            print('Connected by', addr)
            while True:
                data = conn.recv(BUFFER)
                if not data:
                    break
                conn.sendall(data)
</code></pre> <p>Computer <strong>B</strong> is running this "<strong>sender.py</strong>" script:</p> <pre><code>import socket

HOST = '101.81.83.169'  # The remote host
PORT = 50007            # The same port as used by the server

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b'Hello, world')
</code></pre> <p>So first of all, I run the "<strong>listener</strong>" script on computer <strong>A</strong>. Then, I run the "<strong>sender</strong>" script on computer B. However, when I execute the "<strong>sender</strong>" script, I receive an <strong>error</strong> message which tells me that I am not authorized to connect to this remote address. </p> <p>So I would like to know how I can connect a socket to another socket through the Internet without changing the router configurations.</p> <p>Thank you very much for your help.</p> <p><strong>Edit</strong>: Here is the error message (I didn't execute the same script for some reasons, but it's the same error message):</p> <pre><code>sock.connect(('101.81.83.169',50007))
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused
</code></pre>
0
2016-09-22T06:02:55Z
39,636,832
<blockquote> <p>So I would like to know how can I connect a socket to another socket through internet without changing the router configurations.</p> </blockquote> <p>You can't. The public IP address belongs to your router. Your server isn't listening on the router; it is listening on some host behind the router. You have to open that port on your router and forward it to the host your listener is running on (whatever that means in your particular router). Otherwise the router will refuse the connection, as it doesn't have anything listening at that port.</p>
-1
2016-09-22T10:40:30Z
[ "python", "sockets", "networking" ]
How to connect a socket to another computer's socket through the Internet
39,631,360
<p>I recently had some difficulties connecting a socket to another computer's socket through the Internet; an image is worth a thousand words:</p> <p><a href="http://i.stack.imgur.com/9CseJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/9CseJ.png" alt="enter image description here"></a></p> <p>Computer <strong>A</strong> is running this "<strong>listener.py</strong>" script:</p> <pre><code>import socket

PORT = 50007
BUFFER = 2048
HOST = ''

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, addr = s.accept()
        with conn:
            print('Connected by', addr)
            while True:
                data = conn.recv(BUFFER)
                if not data:
                    break
                conn.sendall(data)
</code></pre> <p>Computer <strong>B</strong> is running this "<strong>sender.py</strong>" script:</p> <pre><code>import socket

HOST = '101.81.83.169'  # The remote host
PORT = 50007            # The same port as used by the server

if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b'Hello, world')
</code></pre> <p>So first of all, I run the "<strong>listener</strong>" script on computer <strong>A</strong>. Then, I run the "<strong>sender</strong>" script on computer B. However, when I execute the "<strong>sender</strong>" script, I receive an <strong>error</strong> message which tells me that I am not authorized to connect to this remote address. </p> <p>So I would like to know how I can connect a socket to another socket through the Internet without changing the router configurations.</p> <p>Thank you very much for your help.</p> <p><strong>Edit</strong>: Here is the error message (I didn't execute the same script for some reasons, but it's the same error message):</p> <pre><code>sock.connect(('101.81.83.169',50007))
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused
</code></pre>
0
2016-09-22T06:02:55Z
39,637,444
<p>From the book <a href="http://www.nylxs.com/docs/cmpnet.pdf" rel="nofollow">"Computer Networking: A Top-Down Approach"</a>, there is a very interesting part on page 149 about how BitTorrents work:</p> <blockquote> <p>Each torrent has an infrastructure node called a tracker. When a peer joins a torrent, it registers itself with the tracker and periodically informs the tracker that it is still in the torrent. In this manner, the tracker keeps track of the peers that are participating in the torrent. A given torrent may have fewer than ten or more than a thousand peers participating at any instant of time. <strong>Alice, joins</strong> the torrent, the tracker randomly selects a subset of peers (for concreteness, say 50) from the set of participating peers, and <strong>sends the IP addresses</strong> of these 50 peers to Alice. Possessing this list of peers, Alice attempts <strong>to establish concurrent TCP connections with all the peers on this list</strong>. Let’s call all the peers with which Alice succeeds in establishing a TCP connection “neighboring peers.</p> </blockquote> <p><a href="http://i.stack.imgur.com/5h8On.png" rel="nofollow"><img src="http://i.stack.imgur.com/5h8On.png" alt="image"></a></p> <p>So:</p> <p><strong>- Step 1:</strong> Alice connects to the tracker; the tracker gives Alice the IP addresses of Bob and Mick.</p> <p><strong>- Step 2:</strong> Alice receives the IP addresses of Bob and Mick; then she can try to establish TCP/IP connections for downloading the file.</p> <p>I don't remember having to set up any router configuration when I wanted to download files from BitTorrent.</p> <p>So what am I missing?</p>
0
2016-09-22T11:12:05Z
[ "python", "sockets", "networking" ]
How to understand this raw HTML of Yahoo! Finance when retrieving data using Python?
39,631,386
<p>I've been trying to retrieve the stock price from Yahoo! Finance, like for <a href="http://finance.yahoo.com/quote/AAPL/profile?p=AAPL" rel="nofollow">Apple Inc.</a>. My code is like this (using Python 2):</p> <pre><code>import requests
from bs4 import BeautifulSoup as bs

html = 'http://finance.yahoo.com/quote/AAPL/profile?p=AAPL'
r = requests.get(html)
soup = bs(r.text)
</code></pre> <p>The problem is that when I look at the raw HTML behind this webpage, the class names are dynamic (see the figure below). This makes it hard for BeautifulSoup to get the tags. How can I understand these classes and get the data?</p> <p><a href="http://i.stack.imgur.com/bITK0.png" rel="nofollow">HTML of Yahoo! Finance page</a></p> <p>PS: 1) I know about pandas_datareader.data, but that's for historical data; I want real-time stock data. 2) I don't want to use selenium to open a new browser window.</p>
3
2016-09-22T06:04:04Z
39,632,209
<p>Not sure what you mean by 'dynamic' in this case, but have you considered using CSS selectors?</p> <p>With BeautifulSoup you could get it e.g. like this:</p> <pre><code>soup.select('div#quote-header-info section span')[0]
</code></pre> <p>And there are some variations you could use on the pattern, such as using the '>' filter.</p> <p>You could get the same with just <code>lxml</code>, no need for BeautifulSoup:</p> <pre><code>import lxml.html as html

page = html.parse(url).getroot()
content = page.cssselect('div#quote-header-info section &gt; span:first-child')[0].text
</code></pre> <p>which immediately illustrates a more specific selector. </p> <p>If you're interested in more efficient DOM traversal, research XPath.</p>
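<p>As a small illustration of the '>' filter mentioned above (an added sketch reusing the same selector path):</p> <pre><code># direct-child combinators ('&gt;') make the match stricter than the
# whitespace (descendant) combinator used in the first example
soup.select('div#quote-header-info &gt; section &gt; span')
</code></pre>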
3
2016-09-22T06:55:08Z
[ "python", "html", "beautifulsoup", "web-crawler", "yahoo-finance" ]
How to understand this raw HTML of Yahoo! Finance when retrieving data using Python?
39,631,386
<p>I've been trying to retrieve the stock price from Yahoo! Finance, like for <a href="http://finance.yahoo.com/quote/AAPL/profile?p=AAPL" rel="nofollow">Apple Inc.</a>. My code is like this (using Python 2):</p> <pre><code>import requests
from bs4 import BeautifulSoup as bs

html = 'http://finance.yahoo.com/quote/AAPL/profile?p=AAPL'
r = requests.get(html)
soup = bs(r.text)
</code></pre> <p>The problem is that when I look at the raw HTML behind this webpage, the class names are dynamic (see the figure below). This makes it hard for BeautifulSoup to get the tags. How can I understand these classes and get the data?</p> <p><a href="http://i.stack.imgur.com/bITK0.png" rel="nofollow">HTML of Yahoo! Finance page</a></p> <p>PS: 1) I know about pandas_datareader.data, but that's for historical data; I want real-time stock data. 2) I don't want to use selenium to open a new browser window.</p>
3
2016-09-22T06:04:04Z
39,635,322
<p>The data is obviously populated using <em>reactjs</em>, so you won't be able to parse it reliably using class names etc. You can get all the data in <em>json</em> format from the page source, from the <code>root.App.main</code> script:</p> <pre><code>import requests
from bs4 import BeautifulSoup
import re
from json import loads

soup = BeautifulSoup(requests.get("http://finance.yahoo.com/quote/AAPL/profile?p=AAPL").content)
script = soup.find("script", text=re.compile("root.App.main")).text
data = loads(re.search("root.App.main\s+=\s+(\{.*\})", script).group(1))
print(data)
</code></pre> <p>which gives you a whole load of json; you can go through the data and pick what you need, like below:</p> <pre><code>stores = data["context"]["dispatcher"]["stores"]

from pprint import pprint as pp
pp(stores[u'QuoteSummaryStore'])
</code></pre> <p>which gives you:</p> <pre><code>{u'price': {u'averageDailyVolume10Day': {u'fmt': u'63.06M', u'longFmt': u'63,056,525', u'raw': 63056525}, u'averageDailyVolume3Month': {u'fmt': u'36.53M', u'longFmt': u'36,527,196', u'raw': 36527196}, u'currency': u'USD', u'currencySymbol': u'$', u'exchange': u'NMS', u'exchangeName': u'NasdaqGS', u'longName': u'Apple Inc.', u'marketState': u'PRE', u'maxAge': 1, u'openInterest': {}, u'postMarketChange': {u'fmt': u'0.11', u'raw': 0.11000061}, u'postMarketChangePercent': {u'fmt': u'0.10%', u'raw': 0.0009687416}, u'postMarketPrice': {u'fmt': u'113.66', u'raw': 113.66}, u'postMarketSource': u'DELAYED', u'postMarketTime': 1474502277, u'preMarketChange': {u'fmt': u'0.42', u'raw': 0.41999817}, u'preMarketChangePercent': {u'fmt': u'0.37%', u'raw': 0.0036987949}, u'preMarketPrice': {u'fmt': u'113.97', u'raw': 113.97}, u'preMarketSource': u'FREE_REALTIME', u'preMarketTime': 1474536411, u'quoteType': u'EQUITY', u'regularMarketChange': {u'fmt': u'-0.02', u'raw': -0.019996643}, u'regularMarketChangePercent': {u'fmt': u'-0.02%', u'raw': -0.00017607327}, u'regularMarketDayHigh': {u'fmt': u'113.99', u'raw': 113.989}, u'regularMarketDayLow': {u'fmt': u'112.44', u'raw': 112.441}, u'regularMarketOpen': {u'fmt': u'113.82', u'raw': 113.82}, u'regularMarketPreviousClose': {u'fmt': u'113.57', u'raw': 113.57}, u'regularMarketPrice': {u'fmt': u'113.55', u'raw': 113.55}, u'regularMarketSource': u'FREE_REALTIME', u'regularMarketTime': 1474488000, u'regularMarketVolume': {u'fmt': u'31.57M', u'longFmt': u'31,574,028.00', u'raw': 31574028}, u'shortName': u'Apple Inc.', u'strikePrice': {}, u'symbol': u'AAPL', u'underlyingSymbol': None}, u'price,summaryDetail': {}, u'quoteType': {u'exchange': u'NMS', u'headSymbol': None, u'longName': u'Apple Inc.', u'market': u'us_market', u'messageBoardId': u'finmb_24937', u'quoteType': u'EQUITY', u'shortName': u'Apple Inc.', u'symbol': u'AAPL', u'underlyingExchangeSymbol': None, u'underlyingSymbol': None, u'uuid': u'8b10e4ae-9eeb-3684-921a-9ab27e4d87aa'}, u'summaryDetail': {u'ask': {u'fmt': u'114.00', u'raw': 114}, u'askSize': {u'fmt': u'100', u'longFmt': u'100', u'raw': 100}, u'averageDailyVolume10Day': {u'fmt': u'63.06M', u'longFmt': u'63,056,525', u'raw': 63056525}, u'averageVolume': {u'fmt': u'36.53M', u'longFmt': u'36,527,196', u'raw': 36527196}, u'averageVolume10days': {u'fmt': u'63.06M', u'longFmt': u'63,056,525', u'raw': 63056525}, u'beta': {u'fmt': u'1.52', u'raw': 1.51744}, u'bid': {u'fmt': u'113.68', u'raw': 113.68}, u'bidSize': {u'fmt': u'400', u'longFmt': u'400', u'raw': 400}, u'dayHigh': {u'fmt': u'113.99', u'raw': 113.989}, u'dayLow': {u'fmt': u'112.44', u'raw': 112.441}, u'dividendRate': {u'fmt': u'2.28', 
u'raw': 2.28}, u'dividendYield': {u'fmt': u'2.01%', u'raw': 0.0201}, u'exDividendDate': {u'fmt': u'2016-08-04', u'raw': 1470268800}, u'expireDate': {}, u'fiftyDayAverage': {u'fmt': u'108.61', u'raw': 108.608284}, u'fiftyTwoWeekHigh': {u'fmt': u'123.82', u'raw': 123.82}, u'fiftyTwoWeekLow': {u'fmt': u'89.47', u'raw': 89.47}, u'fiveYearAvgDividendYield': {}, u'forwardPE': {u'fmt': u'12.70', u'raw': 12.701344}, u'marketCap': {u'fmt': u'611.86B', u'longFmt': u'611,857,399,808', u'raw': 611857399808}, u'maxAge': 1, u'navPrice': {}, u'open': {u'fmt': u'113.82', u'raw': 113.82}, u'openInterest': {}, u'payoutRatio': {u'fmt': u'24.80%', u'raw': 0.248}, u'previousClose': {u'fmt': u'113.57', u'raw': 113.57}, u'priceToSalesTrailing12Months': {u'fmt': u'2.78', u'raw': 2.777534}, u'regularMarketDayHigh': {u'fmt': u'113.99', u'raw': 113.989}, u'regularMarketDayLow': {u'fmt': u'112.44', u'raw': 112.441}, u'regularMarketOpen': {u'fmt': u'113.82', u'raw': 113.82}, u'regularMarketPreviousClose': {u'fmt': u'113.57', u'raw': 113.57}, u'regularMarketVolume': {u'fmt': u'31.57M', u'longFmt': u'31,574,028', u'raw': 31574028}, u'strikePrice': {}, u'totalAssets': {}, u'trailingAnnualDividendRate': {u'fmt': u'2.13', u'raw': 2.13}, u'trailingAnnualDividendYield': {u'fmt': u'1.88%', u'raw': 0.018754954}, u'trailingPE': {u'fmt': u'13.24', u'raw': 13.240438}, u'twoHundredDayAverage': {u'fmt': u'102.39', u'raw': 102.39367}, u'volume': {u'fmt': u'31.57M', u'longFmt': u'31,574,028', u'raw': 31574028}, u'yield': {}, u'ytdReturn': {}}, u'symbol': u'AAPL'} </code></pre>
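<p>To show how to pick a single value out of that structure, a small sketch grounded in the output above:</p> <pre><code># the last traded price, as a plain float, from the parsed store
price = stores[u'QuoteSummaryStore']['price']['regularMarketPrice']['raw']
print(price)  # 113.55
</code></pre>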
2
2016-09-22T09:32:47Z
[ "python", "html", "beautifulsoup", "web-crawler", "yahoo-finance" ]
making a python interpreter using javascript
39,631,465
<p>I want to make a Python interpreter using JavaScript. </p> <p>The idea is that you can input Python code, and the JavaScript in the webpage interprets it into JavaScript code, runs it, and returns the result.</p> <p>Because I don't have much experience in this area, I would like some advice from more experienced developers.</p> <p>Thanks very much ...</p>
-1
2016-09-22T06:08:59Z
39,631,568
<p>You can use pypyjs; the detailed procedure is also available at <a href="https://github.com/pypyjs/pypyjs/releases/" rel="nofollow">https://github.com/pypyjs/pypyjs/releases/</a>.</p>
0
2016-09-22T06:16:21Z
[ "javascript", "python" ]
making a python interpreter using javascript
39,631,465
<p>I want to make a python interpreter by using Javascript. </p> <p>Then you can input the python code and the Javascript in the webpage can interpret the code into javascript code and then run the code and return the result.</p> <p>Because I have not much experience in this area, so I want some advice from the senior.</p> <p>Thanks very much ...</p>
-1
2016-09-22T06:08:59Z
39,632,417
<p>Well, writing an interpreter is not really a job for a beginner. You would be better off sending the code to the server side with AJAX and then displaying the result in the page.</p>
0
2016-09-22T07:07:28Z
[ "javascript", "python" ]
How do I ask the user if they want to play again and repeat the while loop?
39,631,540
<p>Running on Python, this is an example of my code: </p> <pre><code>import random

comp = random.choice([1,2,3])

while True:
    user = input("Please enter 1, 2, or 3: ")
    if user == comp
        print("Tie game!")
    elif (user == "1") and (comp == "2")
        print("You lose!")
        break
    else:
        print("Your choice is not valid.")
</code></pre> <p>So this part works. However, how do I exit this loop? After entering a correct input, it keeps asking "Please input 1,2,3".</p> <p>I also want to ask if the player wants to play again:</p> <p><strong>Pseudocode:</strong></p> <pre><code>play_again = input("If you'd like to play again, please type 'yes'")
if play_again == "yes"
    start loop again
else:
    exit program
</code></pre> <p>Is this related to a nested loop somehow?</p>
1
2016-09-22T06:14:05Z
39,631,665
<p>Points about your code:</p> <ol> <li>The code you pasted doesn't have <code>':'</code> after <code>if, elif</code> and <code>else.</code></li> <li>Whatever you want can be achieved using control flow statements like <code>continue and break</code>. <a href="https://docs.python.org/2/tutorial/controlflow.html" rel="nofollow">Please check here for more detail</a>.</li> <li>You need to remove the break from the "You lose!" branch, since you want to ask the user whether they want to play again.</li> <li>The code you have written will never hit "Tie game!" since you are comparing a string with an integer. The user input saved in the variable will be a string, and <code>comp</code>, which is the output of random.choice, will be an integer. You have to convert the user input to an integer with <code>int(user)</code>.</li> <li>Checking whether the user input is valid can be done simply using the <code>in</code> operator.</li> </ol> <p><strong>Code:</strong></p> <pre><code>import random

while True:
    comp = random.choice([1, 2, 3])
    user = raw_input("Please enter 1, 2, or 3: ")
    if user in ("1", "2", "3"):  # validate the raw string first, so int() cannot fail
        if int(user) == comp:
            print("Tie game!")
        else:
            print("You lose!")
    else:
        print("Your choice is not valid.")
    play_again = raw_input("If you'd like to play again, please type 'yes'")
    if play_again == "yes":
        continue
    else:
        break
</code></pre>
3
2016-09-22T06:21:38Z
[ "python", "loops", "while-loop", "nested" ]
Can a pandas DataFrame hold non-scalar values?
39,631,678
<p>I'd like a DataFrame to store non-primitive types, in particular I would need a column that can have lists as its items.</p> <pre><code>pd.DataFrame( [["foo","bar"],["rab","oof"]], ["my-column-of-lists"] ) </code></pre> <p>should result in a DataFrame with one column (<code>my-column-of-lists</code>) and 2 rows (<code>["foo","bar"]</code> and <code>["rab","oof"]</code>).</p> <p>Is that possible?</p>
2
2016-09-22T06:22:27Z
39,631,714
<p>You can use <code>dict</code>:</p> <pre><code>df = pd.DataFrame({'my-column-of-lists': [["foo","bar"], ["rab","oof"]]})
print (df)

  my-column-of-lists
0         [foo, bar]
1         [rab, oof]
</code></pre> <p>Or pass <code>Series</code>:</p> <pre><code>df = pd.DataFrame(pd.Series([["foo","bar"], ["rab","oof"]], name='my-column-of-lists'))
print (df)

  my-column-of-lists
0         [foo, bar]
1         [rab, oof]
</code></pre>
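<p>A third option, staying closer to the list-of-rows call from the question (a quick added sketch): wrap each list in an outer list so that every row has exactly one cell, and name the column explicitly:</p> <pre><code>df = pd.DataFrame([[["foo","bar"]], [["rab","oof"]]], columns=["my-column-of-lists"])
print (df)

  my-column-of-lists
0         [foo, bar]
1         [rab, oof]
</code></pre>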
4
2016-09-22T06:24:23Z
[ "python", "list", "pandas", "dataframe" ]
Python append data reverse
39,631,817
<p>I am trying to use adjacency lists to implement a graph, but when I print(graph.edges()), some of the data comes out reversed as {vertex, node}. Does this mean I shouldn't use append?</p> <pre><code>class Graph(object):
    def __init__(self, graph_dict=None):
        """ initializes a graph object
            If no dictionary or None is given,
            an empty dictionary will be used
        """
        if graph_dict == None:
            graph_dict = {}
        self.__graph_dict = graph_dict

    def vertices(self):
        """ returns the vertices of a graph """
        return list(self.__graph_dict.keys())

    def edges(self):
        """ returns the edges of a graph """
        return self.__generate_edges()

    def __generate_edges(self):
        edges = []
        for vertex in self.__graph_dict:
            for node in self.__graph_dict[vertex]:
                if {node, vertex} not in edges:
                    edges.append({vertex, node})
        return edges


if __name__ == '__main__':
    x = {'10': ['80558', '192929', '266485', '500235', '504757'],
         '4': ['16050', '286286', '310803', '320519', '408108', '448284'],
         '20': ['23251', '25337', '61306', '186932', '218984', '412652', '450610', '476125'],
         '11': ['57761', '107436', '400957', '512424'],
         '15': ['67084', '85444', '252980', '422732', '425706', '428290'],
         '2': ['27133', '62291', '170507', '299250', '326776', '331042', '411179', '451149', '454888'],
         '8': ['10758', '55461', '60605', '148586', '184847', '242156', '445607', '453513'],
         '1': ['88160', '118052', '161555', '244916', '346495', '444232', '447165', '500600'],
         '6': ['162248', '298989', '398542', '495077'],
         '16': ['128935', '148997', '422237'],
         '17': ['18919', '309780'],
         '18': ['41174', '45026', '168895'],
         '7': ['30028', '47672', '355935'],
         '19': ['203677'],
         '12': ['386032'],
         '5': ['173362', '305321', '407216', '489756']}

    graph = Graph(x)

    print("Vertices of graph:")
    print(graph.vertices())

    print("Edges of graph:")
    print(graph.edges())
</code></pre>
0
2016-09-22T06:32:22Z
39,632,234
<p>You are using a <code>set</code> when you use this notation:</p> <pre><code>{node,vertex} </code></pre> <p>This data structure is unordered. Try a tuple instead:</p> <pre><code>(node, vertex) </code></pre>
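<p>One caveat to add to this answer: the duplicate check in <code>__generate_edges</code> relied on sets being unordered, so that <code>{a, b}</code> and <code>{b, a}</code> compared equal. With plain tuples, <code>(a, b)</code> and <code>(b, a)</code> are distinct, so a common workaround (a sketch, not part of the original answer) is to normalize each edge into a sorted tuple before checking:</p> <pre><code>def __generate_edges(self):
    edges = []
    for vertex in self.__graph_dict:
        for node in self.__graph_dict[vertex]:
            edge = tuple(sorted((vertex, node)))  # canonical order, so (a, b) == (b, a)
            if edge not in edges:
                edges.append(edge)
    return edges
</code></pre>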
1
2016-09-22T06:56:20Z
[ "python", "python-3.5" ]
python+pyspark: error on inner join with multiple column comparison in pyspark
39,631,870
<p>Hi, I have 2 dataframes to join:</p> <pre><code>#df1
name   genre    count
satya  drama    1
satya  action   3
abc    drame    2
abc    comedy   2
def    romance  1

#df2
name   max_count
satya  3
abc    2
def    1
</code></pre> <p>Now I want to join the above 2 dfs on name and count == max_count, but I am getting an error:</p> <pre><code>import pyspark.sql.functions as F
from pyspark.sql.functions import count, col
from pyspark.sql.functions import struct

df = spark.read.csv('file', sep='###', header=True)
df1 = df.groupBy("name", "genre").count()
df2 = df1.groupby('name').agg(F.max("count").alias("max_count"))

#Now trying to join both dataframes
final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count))
final_df.show()

###Error
#py4j.protocol.Py4JJavaError: An error occurred while calling o207.showString.
#: org.apache.spark.SparkException: Exception thrown in awaitResult:
#    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
#Caused by: java.lang.UnsupportedOperationException: Cannot evaluate expression: count(1)
#    at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.doGenCode(Expression.scala:224)
</code></pre> <p>But it succeeds with a "left" join:</p> <pre><code>final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count), "left")
final_df.show()

###Success, but I don't want a left join, I want an inner join
</code></pre> <p>My question is: why does the above fail? Am I doing something wrong there?</p> <p>I referred to this link "<a href="http://stackoverflow.com/questions/35218882/find-maximum-row-per-group-in-spark-dataframe">Find maximum row per group in Spark DataFrame</a>" and used the first answer (the two-groupby method), but got the same error.</p> <p>I am on spark-2.0.0-bin-hadoop2.7 and python 2.7.</p> <p>Please suggest. Thanks.</p> <h1>Edit:</h1> <p>The above scenario works with spark 1.6, which is quite surprising: what's wrong with spark 2.0 (or with my installation? I will reinstall, check and update here).</p> <p>Has anybody tried this on spark 2.0 and succeeded, by following Yaron's answer below?</p>
0
2016-09-22T06:35:07Z
39,633,357
<p>Update: it seems like your code was also failing due to the use of "count" as a column name; count seems to be a protected keyword in the DataFrame API. Renaming count to "mycount" solved the problem. The working code below was modified to support Spark version 1.5.2, which I used to test your issue.</p> <pre><code>df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("/tmp/fac_cal.csv")
df1 = df.groupBy("name", "genre").count()
df1 = df1.select(col("name"), col("genre"), col("count").alias("mycount"))
df2 = df1.groupby('name').agg(F.max("mycount").alias("max_count"))
df2 = df2.select(col('name').alias('name2'), col("max_count"))

#Now trying to join both dataframes
final_df = df1.join(df2, [df1.name == df2.name2, df1.mycount == df2.max_count])
final_df.show()

+-----+---------+-------+-----+---------+
| name|    genre|mycount|name2|max_count|
+-----+---------+-------+-----+---------+
|brata|   comedy|      2|brata|        2|
|brata|    drama|      2|brata|        2|
|panda|adventure|      1|panda|        1|
|panda|  romance|      1|panda|        1|
|satya|   action|      3|satya|        3|
+-----+---------+-------+-----+---------+
</code></pre> <p>The example for a complex condition is in <a href="https://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html" rel="nofollow">https://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html</a>:</p> <pre><code>cond = [df.name == df3.name, df.age == df3.age]
&gt;&gt;&gt; df.join(df3, cond, 'outer').select(df.name, df3.age).collect()
[Row(name=u'Alice', age=2), Row(name=u'Bob', age=5)]
</code></pre> <hr> <p>Can you try: </p> <pre><code>final_df = df1.join(df2, [df1.name == df2.name, df1.mycount == df2.max_count])
</code></pre> <p>Note also that, according to the spec, "left" is not part of the valid join types: how – str, default ‘inner’. One of inner, outer, left_outer, right_outer, leftsemi.</p>
2
2016-09-22T07:55:21Z
[ "python", "apache-spark", "pyspark", "pyspark-sql" ]
python+pyspark: error on inner join with multiple column comparison in pyspark
39,631,870
<p>Hi, I have 2 dataframes to join:</p> <pre><code>#df1
name   genre    count
satya  drama    1
satya  action   3
abc    drame    2
abc    comedy   2
def    romance  1

#df2
name   max_count
satya  3
abc    2
def    1
</code></pre> <p>Now I want to join the above 2 dfs on name and count == max_count, but I am getting an error:</p> <pre><code>import pyspark.sql.functions as F
from pyspark.sql.functions import count, col
from pyspark.sql.functions import struct

df = spark.read.csv('file', sep='###', header=True)
df1 = df.groupBy("name", "genre").count()
df2 = df1.groupby('name').agg(F.max("count").alias("max_count"))

#Now trying to join both dataframes
final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count))
final_df.show()

###Error
#py4j.protocol.Py4JJavaError: An error occurred while calling o207.showString.
#: org.apache.spark.SparkException: Exception thrown in awaitResult:
#    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
#Caused by: java.lang.UnsupportedOperationException: Cannot evaluate expression: count(1)
#    at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.doGenCode(Expression.scala:224)
</code></pre> <p>But it succeeds with a "left" join:</p> <pre><code>final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count), "left")
final_df.show()

###Success, but I don't want a left join, I want an inner join
</code></pre> <p>My question is: why does the above fail? Am I doing something wrong there?</p> <p>I referred to this link "<a href="http://stackoverflow.com/questions/35218882/find-maximum-row-per-group-in-spark-dataframe">Find maximum row per group in Spark DataFrame</a>" and used the first answer (the two-groupby method), but got the same error.</p> <p>I am on spark-2.0.0-bin-hadoop2.7 and python 2.7.</p> <p>Please suggest. Thanks.</p> <h1>Edit:</h1> <p>The above scenario works with spark 1.6, which is quite surprising: what's wrong with spark 2.0 (or with my installation? I will reinstall, check and update here).</p> <p>Has anybody tried this on spark 2.0 and succeeded, by following Yaron's answer below?</p>
0
2016-09-22T06:35:07Z
39,696,803
<h1>My work-around in spark 2.0</h1> <p>I created a single column ('combined') from the columns used in the join comparison ('name', 'mycount') in the respective dfs, so now I have one column to compare. This does not create any issue, as I am comparing only one column.</p> <pre><code>from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def combine_func(*args):
    data = '_'.join([str(x) for x in args])  ###converting nonstring to str then concatenating
    return data

combine_func = udf(combine_func, StringType())  ##register the func as udf

df1 = df1.withColumn('combined_new_1', combine_func(df1['name'], df1['mycount']))  ###a col holding the concatenated value from the name and mycount columns, eg: 'satya_3'
df2 = df2.withColumn('combined_new_2', combine_func(df2['name2'], df2['max_count']))

#df1.columns == 'name', 'genre', 'mycount', 'combined_new_1'
#df2.columns == 'name2', 'max_count', 'combined_new_2'

#Now join
final_df = df1.join(df2, df1.combined_new_1 == df2.combined_new_2, 'inner')
#final_df = df1.join(df2, df1.combined_new_1 == df2.combined_new_2, 'inner').select('the columns you want')
final_df.show()  ####It is showing the result, Trust me.
</code></pre> <p>Please don't follow this unless you are in a hurry; better to search for a reliable solution.</p>
0
2016-09-26T07:15:58Z
[ "python", "apache-spark", "pyspark", "pyspark-sql" ]
python+pyspark: error on inner join with multiple column comparison in pyspark
39,631,870
<p>Hi, I have 2 dataframes to join:</p> <pre><code>#df1
name   genre    count
satya  drama    1
satya  action   3
abc    drame    2
abc    comedy   2
def    romance  1

#df2
name   max_count
satya  3
abc    2
def    1
</code></pre> <p>Now I want to join the above 2 dfs on name and count == max_count, but I am getting an error:</p> <pre><code>import pyspark.sql.functions as F
from pyspark.sql.functions import count, col
from pyspark.sql.functions import struct

df = spark.read.csv('file', sep='###', header=True)
df1 = df.groupBy("name", "genre").count()
df2 = df1.groupby('name').agg(F.max("count").alias("max_count"))

#Now trying to join both dataframes
final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count))
final_df.show()

###Error
#py4j.protocol.Py4JJavaError: An error occurred while calling o207.showString.
#: org.apache.spark.SparkException: Exception thrown in awaitResult:
#    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
#Caused by: java.lang.UnsupportedOperationException: Cannot evaluate expression: count(1)
#    at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.doGenCode(Expression.scala:224)
</code></pre> <p>But it succeeds with a "left" join:</p> <pre><code>final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count), "left")
final_df.show()

###Success, but I don't want a left join, I want an inner join
</code></pre> <p>My question is: why does the above fail? Am I doing something wrong there?</p> <p>I referred to this link "<a href="http://stackoverflow.com/questions/35218882/find-maximum-row-per-group-in-spark-dataframe">Find maximum row per group in Spark DataFrame</a>" and used the first answer (the two-groupby method), but got the same error.</p> <p>I am on spark-2.0.0-bin-hadoop2.7 and python 2.7.</p> <p>Please suggest. Thanks.</p> <h1>Edit:</h1> <p>The above scenario works with spark 1.6, which is quite surprising: what's wrong with spark 2.0 (or with my installation? I will reinstall, check and update here).</p> <p>Has anybody tried this on spark 2.0 and succeeded, by following Yaron's answer below?</p>
0
2016-09-22T06:35:07Z
39,844,319
<p>I ran into the same problem when I tried to join two DataFrames where one of them was GroupedData. It worked for me when I cached the GroupedData DataFrame before the inner join. For your code, try:</p> <pre><code>df1 = df.groupBy("name", "genre").count().cache()  # added cache()
df2 = df1.groupby('name').agg(F.max("count").alias("max_count")).cache()  # added cache()
final_df = df1.join(df2, (df1.name == df2.name) &amp; (df1.count == df2.max_count))  # no change
</code></pre>
1
2016-10-04T04:44:25Z
[ "python", "apache-spark", "pyspark", "pyspark-sql" ]
Where can I find all the tag definitions of POS tagging for ClassifierBasedPOSTagger in NLTK?
39,631,938
<p>I used the following code to train a <code>ClassifierBasedPOSTagger</code> for POS tagging:</p> <pre><code>from nltk.classify import MaxentClassifier
from nltk.tag.sequential import ClassifierBasedPOSTagger

me_tagger = ClassifierBasedPOSTagger(
    train=train_sents,
    classifier_builder=lambda train_feats: MaxentClassifier.train(train_feats, max_iter=15))

print(me_tagger.tag('My new watch is awesome...'.split()))
</code></pre> <p>Which prints out the following tags:</p> <pre><code>[('My', 'PP$'), ('new', 'JJ'), ('watch', 'NN'), ('is', 'BEZ'), ('awesome...', 'AT')]
</code></pre> <p>Where can I find the token tag definitions for this classifier? I am familiar with <a href="http://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html" rel="nofollow">these</a> tokens though, but I am unable to construe <code>BEZ</code> and <code>AT</code>.</p>
0
2016-09-22T06:39:55Z
39,636,569
<p>You can check - <a href="http://www.scs.leeds.ac.uk/amalgam/tagsets/brown.html" rel="nofollow">The Brown Corpus Tag-set</a>.</p> <pre><code>╔═════╦═════════════════════╦════════════════════╗ ║ Tag ║ Description ║ Examples ║ ╠═════╬═════════════════════╬════════════════════╣ ║ AT ║ article ║ the an no a every ║ ║ ║ ║ th' ever' ye ║ ╠═════╬═════════════════════╬════════════════════╣ ║ BEZ ║ verb "to be", ║ is ║ ║ ║ present tense, ║ ║ ║ ║ 3rd person singular ║ ║ ╠═════╬═════════════════════╬════════════════════╣ ║ ... ║ ... ║ ... ║ ╚═════╩═════════════════════╩════════════════════╝ </code></pre>
2
2016-09-22T10:27:05Z
[ "python", "nltk" ]
Where can I find all the tag definitions of POS tagging for ClassifierBasedPOSTagger in NLTK?
39,631,938
<p>I used the following code to train a <code>ClassifierBasedPOSTagger</code> for POS tagging:</p> <pre><code>from nltk.classify import MaxentClassifier
from nltk.tag.sequential import ClassifierBasedPOSTagger

me_tagger = ClassifierBasedPOSTagger(
    train=train_sents,
    classifier_builder=lambda train_feats: MaxentClassifier.train(train_feats, max_iter=15))

print(me_tagger.tag('My new watch is awesome...'.split()))
</code></pre> <p>Which prints out the following tags:</p> <pre><code>[('My', 'PP$'), ('new', 'JJ'), ('watch', 'NN'), ('is', 'BEZ'), ('awesome...', 'AT')]
</code></pre> <p>Where can I find the token tag definitions for this classifier? I am familiar with <a href="http://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html" rel="nofollow">these</a> tokens though, but I am unable to construe <code>BEZ</code> and <code>AT</code>.</p>
0
2016-09-22T06:39:55Z
39,639,348
<p>You should understand that the tagset has nothing to do with the classifier class you chose; the tagset comes from your training data. So your question should have been "where do I find the tag definitions for (this POS-tagged corpus)". You don't say where your <code>train_sents</code> came from, but indeed (as @RAVI already pointed out) these tags seem to come from the Brown corpus; you can read its tagset documentation <a href="http://www.scs.leeds.ac.uk/amalgam/tagsets/brown.html" rel="nofollow">online,</a> or fetch it from within the nltk like this:</p> <pre><code>&gt;&gt;&gt; nltk.help.brown_tagset("BEZ")
BEZ: verb 'to be', present tense, 3rd person singular
    is
&gt;&gt;&gt; nltk.help.brown_tagset()  # All tags
...
</code></pre>
1
2016-09-22T12:40:29Z
[ "python", "nltk" ]
Printing only desired values of keys in dictionary
39,631,958
<p>I have a dictionary in Python which looks like this:</p> <pre><code>{123: [u'6722000', u'6722001', u'6631999',
       u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4',
       u'7767815060', u'6631900', u'7767815062', u'18001029945', u'7767815063'],
 ...}
</code></pre> <p>When I want to print out the values of the dictionary (or store them in a txt file), I only want the values which have "P" in them, and not the ones consisting only of numbers.</p> <p>For example, the above dictionary should look like: </p> <pre><code>{123: [u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4'],
 ...}
</code></pre> <p>This is the code that I have written: </p> <pre><code>components = nx.connected_component_subgraphs(G)
for idx, comp in enumerate(components):
    if "P" in comp.nodes():
        comp_dict.update({idx: comp.nodes()})
</code></pre> <p>But it's giving me no output. If I execute the code without the if-statement, I'm getting the same output with all the values.</p>
-1
2016-09-22T06:41:03Z
39,632,327
<p><a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow">enumerate</a> will give you the iterator index and value of the iterable. In this case the key and index of the key. Not the values of a key. And as the value is a list of string you have to check for "P" in the strings inside the list. Try like this:</p> <pre><code>for idx, comp in components.items(): for s in comp: if "P" in s: print(s) </code></pre>
0
2016-09-22T07:01:31Z
[ "python", "dictionary" ]
Printing only desired values of keys in dictionary
39,631,958
<p>I have a dictionary in Python which looks like this:</p> <pre><code>{123: [u'6722000', u'6722001', u'6631999',
       u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4',
       u'7767815060', u'6631900', u'7767815062', u'18001029945', u'7767815063'],
 ...}
</code></pre> <p>When I want to print out the values of the dictionary (or store them in a txt file), I only want the values which have "P" in them, and not the ones consisting only of numbers.</p> <p>For example, the above dictionary should look like: </p> <pre><code>{123: [u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4'],
 ...}
</code></pre> <p>This is the code that I have written: </p> <pre><code>components = nx.connected_component_subgraphs(G)
for idx, comp in enumerate(components):
    if "P" in comp.nodes():
        comp_dict.update({idx: comp.nodes()})
</code></pre> <p>But it's giving me no output. If I execute the code without the if-statement, I'm getting the same output with all the values.</p>
-1
2016-09-22T06:41:03Z
39,632,329
<p><strong>Code with comments inline:</strong></p> <pre><code>a_dict = {123: [u'6722000', u'6722001', u'6631999', u'PX522.X522.091003143054.S4J2',
                u'PXX22.XX22.140311131347.A6D4', u'7767815060', u'6631900',
                u'7767815062', u'18001029945', u'7767815063'],
          456: [u'6722000', u'6722001', u'6631999', u'PX522.X522.091003143054.S4J2']}

print "Before...", a_dict

#Iterate over keys of dictionary
for k in a_dict.keys():
    #Check the values in the list for 'P'. Keep only those which have 'P'
    tmp_list = [i for i in a_dict[k] if "P" in i]
    #Update dictionary with new list
    a_dict[k] = tmp_list
    tmp_list = []

print "After...", a_dict
</code></pre> <p><strong>Output:</strong></p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python c.py
Before... {456: [u'6722000', u'6722001', u'6631999', u'PX522.X522.091003143054.S4J2'], 123: [u'6722000', u'6722001', u'6631999', u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4', u'7767815060', u'6631900', u'7767815062', u'18001029945', u'7767815063']}
After... {456: [u'PX522.X522.091003143054.S4J2'], 123: [u'PX522.X522.091003143054.S4J2', u'PXX22.XX22.140311131347.A6D4']}

C:\Users\dinesh_pundkar\Desktop&gt;
</code></pre> <p>Please let me know if you have any queries.</p>
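<p>For reference, the whole loop above can be condensed into a single dict comprehension (a sketch of the same filtering logic, not part of the original answer):</p> <pre><code># build a new dict keeping only the values that contain 'P'
filtered = {k: [v for v in vals if "P" in v] for k, vals in a_dict.items()}
</code></pre>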
0
2016-09-22T07:01:37Z
[ "python", "dictionary" ]
Check if a value comes before another in a list
39,632,023
<p>I would like to check whether a value comes before another in a list. I asked this question a while back: <a href="http://stackoverflow.com/questions/38603449/typeerror-githubiterator-object-does-not-support-indexing">TypeError: &#39;GitHubIterator&#39; object does not support indexing</a>, which allows me to access the last comment in a list. I would like to expand this to look through all the comments in a pull request and check whether the <code>#hold-off</code> comment comes after the <code>#sign-off</code> comment. I can print the comments using the print statement, but it errors out when checking the order of the values, with the error message: <code>AttributeError: 'IssueComment' object has no attribute 'index'</code>.</p> <p>I think I need to somehow get the body of the comments into a list, and then use index to determine the order, because the iterator does not support indexing. But I've been unsuccessful in getting that to work. </p> <pre><code>hold_off_regex_search_string = re.compile(r"\B#hold-off\b", re.IGNORECASE)
sign_off_regex_search_string = re.compile(r"\B#sign-off\b", re.IGNORECASE)

for comments in list(GitAuth.repo.issue(prs.number).comments()):
    print (comments.body)
    if comments.index(hold_off_regex_search_string.search(comments.body)) &gt; comments.index(sign_off_regex_search_string.search(comments.body)):
        print('True')
</code></pre>
0
2016-09-22T06:44:15Z
39,632,181
<p>It looks like you're confusing yourself. The <code>for</code> loop is already iterating over the comments in order. All you need to do is test each comment for the <code>#hold-off</code> and <code>#sign-off</code> patterns and report on which one you see first.</p> <pre><code>hold_off_regex_search_string = re.compile(r"\B#hold-off\b", re.IGNORECASE)
sign_off_regex_search_string = re.compile(r"\B#sign-off\b", re.IGNORECASE)

special_comments = []

for comments in list(GitAuth.repo.issue(prs.number).comments()):
    if hold_off_regex_search_string.search(comments.body):
        special_comments.append('HOLD OFF')
    elif sign_off_regex_search_string.search(comments.body):
        special_comments.append('SIGN OFF')

if special_comments == ['HOLD OFF', 'SIGN OFF']:
    # add label
elif special_comments == ['SIGN OFF', 'HOLD OFF']:
    # remove label
elif special_comments == ['HOLD OFF']:
    # handle it
elif special_comments == ['SIGN OFF']:
    # handle it
elif special_comments == []:
    # handle it
else:
    # maybe multiple sign offs or hold offs?
</code></pre>
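<p>To address the "maybe multiple sign offs or hold offs?" branch above, a small added sketch (add_label and remove_label are hypothetical placeholders for your own handling): since special_comments preserves the order in which the comments appeared, the last marker seen decides the current state:</p> <pre><code># if only the most recent marker matters, the last element decides the state
if special_comments and special_comments[-1] == 'SIGN OFF':
    add_label()     # hypothetical: the PR is currently signed off
else:
    remove_label()  # hypothetical: held off, or no markers at all
</code></pre>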
1
2016-09-22T06:53:51Z
[ "python", "indexing" ]
Python MagicMock on os.listdir does not raise error
39,632,177
<p>I would like to set os.listdir to raise OSError in a unit test, but it does not raise anything.</p> <p>My code:</p> <pre><code>def get_list_of_files(path):
    try:
        list_of_files = sorted([filename for filename in os.listdir(path)
                                if filename.startswith('FILE')])
    except OSError as error:
        raise Exception(error)
    return list_of_files

def setUp(self):
    self.listdir_patcher = patch('os.listdir')
    self.mock_listdir = self.listdir_patcher.start()
    self.mock_listdir_rv = MagicMock()
    self.mock_listdir.return_value = self.mock_listdir_rv

def tearDown(self):
    self.listdir_patcher.stop()

def test(self):
    e = OSError('abc')
    self.mock_listdir_rv.side_effect = e
    with self.assertRaises(OSError):
        get_list_of_files('path')
</code></pre> <p>What is the problem? (I cannot use a normal Mock for os.listdir.) </p>
0
2016-09-22T06:53:41Z
39,632,283
<p>You need to set the side effect for <code>self.mock_listdir</code>, not its return value:</p> <pre><code>def test(self):
    e = OSError('abc')
    self.mock_listdir.side_effect = e
    with self.assertRaises(OSError):
        get_list_of_files('path')
</code></pre> <p>After all, you want the call to <code>os.listdir()</code> to raise the exception, not the call to the return value of <code>os.listdir()</code> (you never use <code>os.listdir()()</code>).</p> <p>Demo (using <code>patch()</code> as a context manager, which has the same effects as using it as a decorator):</p> <pre><code>&gt;&gt;&gt; from unittest.mock import patch
&gt;&gt;&gt; import os
&gt;&gt;&gt; with patch('os.listdir') as mock_listdir:
...     mock_listdir.side_effect = OSError('abc')
...     os.listdir('path')
...
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 3, in &lt;module&gt;
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.6/unittest/mock.py", line 930, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.6/unittest/mock.py", line 986, in _mock_call
    raise effect
OSError: abc
</code></pre> <p>Note that setting the <code>side_effect</code> of the <code>self.mock_listdir</code> mock will persist to other tests! You should really use a new patch <em>per test</em>. You can use <code>patch</code> as a decorator on each test, using this instead of a per-testcase patcher:</p> <pre><code>@patch('os.listdir')
def test(self, mock_listdir):
    e = OSError('abc')
    mock_listdir.side_effect = e
    with self.assertRaises(OSError):
        get_list_of_files('path')
</code></pre> <p>If you do stick to using a patcher you start in the <code>setUp</code>, you'd have to clear the side effect afterwards (set it to <code>None</code>).</p> <p>Aside from all that, there is no need to explicitly create the <code>MagicMock</code> instance for a <code>return_value</code>; that's the default return value already. You could instead store that default:</p> <pre><code>self.mock_listdir = self.listdir_patcher.start()
self.mock_listdir_rv = self.mock_listdir.return_value
</code></pre>
1
2016-09-22T06:59:01Z
[ "python", "mocking" ]
How to import normalised json data from several files into a pandas dataframe?
39,632,248
<p>I have json datafiles in several directories that I want to import into Pandas to do some data analysis. The format of the json depends on the type defined in the directory name. For example:</p> <pre><code>dir1_typeA/
    file1
    file2
    ...
dir1_typeB/
    file1
    file2
    ...
dir2_typeB/
    file1
    ...
dir2_typeA/
    file1
    file2
</code></pre> <p>Each <code>file</code> contains a complex nested json string that will be a row of the DataFrame. I will have two data frames, one for typeA and one for typeB. Later on I will append them if needed. </p> <p>So far, I've got all the file paths I need with os.walk and am trying to go through them:</p> <pre><code>import os
from glob import glob

PATH = 'dir/filepath'
files = [y for x in os.walk(PATH) for y in glob(os.path.join(x[0], 'file*'))]

for file in files:
    with open(file, 'r') as f:
        data = f.read()
    data_json = json_normalize(json.loads(data))
    type = file.split('/')[3]
    data_json['type'] = type
    # append to data frame for typeA and typeB
    if 'typeA' in type:
        # append to typeA dataframe
    else:
        # append to typeB dataframe
</code></pre> <p>There is one added issue, which is that files inside a directory may have slightly different fields. For example, <code>file1</code> may have a few more fields than <code>file2</code> in <code>dir1_typeA</code>. So, I need to accommodate that dynamic nature in the data frame for each type as well.</p> <p>How do I create these two dataframes? </p>
0
2016-09-22T06:57:02Z
39,632,608
<p>I think you should concatenate the files together before you read them into pandas. Here is how you'd do it in bash (you could also do it in Python):</p> <pre><code>cat `find *typeA` &gt; typeA
cat `find *typeB` &gt; typeB
</code></pre> <p>Then you can import it into pandas using <code>io.json.json_normalize</code>:</p> <pre><code>import json

with open('typeA') as f:
    data = [json.loads(l) for l in f.readlines()]

dfA = pd.io.json.json_normalize(data)
dfA
#          that this.first this.second
# 0  otherthing      thing       thing
# 1  otherthing      thing       thing
# 2  otherthing      thing       thing
</code></pre>
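<p>Regarding the files having slightly different fields: a quick added sketch showing that json_normalize simply fills missing keys with NaN, so records with extra fields can be combined without special handling:</p> <pre><code>import pandas as pd

# two records with partially different fields
data = [{"a": 1, "b": 2}, {"a": 3, "c": 4}]
pd.io.json.json_normalize(data)
#    a    b    c
# 0  1  2.0  NaN
# 1  3  NaN  4.0
</code></pre>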
1
2016-09-22T07:16:53Z
[ "python", "json", "pandas", "dataframe" ]
Pandas NaN introduced by pivot_table
39,632,277
<p>I have a table containing some countries and their KPIs from the World Bank API. It looks like <a href="http://i.stack.imgur.com/ZL4bU.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/ZL4bU.jpg" alt="no nan values present"></a>. As you can see, no NaN values are present. </p> <p>However, I need to pivot this table to bring it into the right shape for analysis, via <code>pd.pivot_table(countryKPI, index=['germanCName'], columns=['indicator.id'])</code>. For some countries, e.g. <code>TUERKEI</code>, this works just fine:</p> <p><a href="http://i.stack.imgur.com/QiO6Z.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/QiO6Z.jpg" alt="for turkey it works"></a> But for most of the countries, strange NaN values are introduced. How can I prevent this?</p> <p><a href="http://i.stack.imgur.com/Ylu9t.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Ylu9t.jpg" alt="strange nan values"></a></p>
1
2016-09-22T06:58:46Z
39,632,450
<p>I think <code>pivoting</code> is best understood with a small sample:</p> <pre><code>import pandas as pd
import numpy as np

countryKPI = pd.DataFrame({'germanCName':['a','a','b','c','c'],
                           'indicator.id':['z','x','z','y','m'],
                           'value':[7,8,9,7,8]})

print (countryKPI)
  germanCName indicator.id  value
0           a            z      7
1           a            x      8
2           b            z      9
3           c            y      7
4           c            m      8

print (pd.pivot_table(countryKPI, index=['germanCName'], columns=['indicator.id']))
            value               
indicator.id    m    x    y    z
germanCName                     
a             NaN  8.0  NaN  7.0
b             NaN  NaN  NaN  9.0
c             8.0  NaN  7.0  NaN
</code></pre> <p>If need replace <code>NaN</code> to <code>0</code> add parameter <code>fill_value</code>:</p> <pre><code>print (countryKPI.pivot_table(index='germanCName', 
                              columns='indicator.id', 
                              values='value',
                              fill_value=0))
indicator.id  m  x  y  z
germanCName             
a             0  8  0  7
b             0  0  0  9
c             8  0  7  0
</code></pre> <p>You can also check <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1463/reshaping-and-pivoting/4770/simple-pivoting#t=20160922071448883903">simple pivoting</a> and <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1463/reshaping-and-pivoting/4771/pivoting-with-aggregating#t=20160922071448883903">pivoting with aggregating</a>.</p>
1
2016-09-22T07:09:15Z
[ "python", "pandas", "pivot", "pivot-table", null ]
Python - thread inheritance's method (func)
39,632,311
<p>I have class A which inherits from class B. I need to send main_b to a thread and continue with the program (main_a).</p> <pre><code>import threading
import time


class B(object):
    def main_b(self):
        i = 0
        while i &lt; 5:
            print "main_b: %s" %time.ctime(time.time())
            time.sleep(1)
            i += 1


class A(B):
    def main_a(self):
        b = threading.Thread(target=self.main_b())
        b.start()
        i = 0
        while i &lt; 5:
            print "main_a: %s" %time.ctime(time.time())
            time.sleep(1)
            i += 1
        b.join()

aa = A()
aa.main_a()
</code></pre> <p>Expected result: main_b and main_a print at the same time.</p> <p>Actual:</p> <p>main_b: Thu Sep 22 09:57:44 2016</p> <p>main_b: Thu Sep 22 09:57:45 2016</p> <p>main_b: Thu Sep 22 09:57:46 2016</p> <p>main_a: Thu Sep 22 09:57:47 2016</p> <p>main_a: Thu Sep 22 09:57:48 2016</p> <p>main_a: Thu Sep 22 09:57:49 2016</p>
1
2016-09-22T07:00:37Z
39,633,149
<p>Just pass the method itself as the target for the thread, without calling it:</p> <pre><code>b = threading.Thread(target=self.main_b)
</code></pre> <p>In your code, <code>target=self.main_b()</code> calls <code>main_b</code> immediately in the main thread and passes its return value (<code>None</code>) as the target, which is why the two loops run one after the other instead of concurrently.</p>
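<p>A quick way to see the difference (a minimal sketch, not from the original code):</p> <pre><code>from threading import Thread

def work():
    print "running in a thread"

Thread(target=work).start()    # passes the function object; it runs in the new thread
Thread(target=work()).start()  # runs work() in the main thread first, then starts a thread with target=None
</code></pre>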
0
2016-09-22T07:44:23Z
[ "python", "multithreading" ]
tastypie saving foreignkey field throws exception
39,632,403
<p>I'm trying to add the foreign key to my tastypie resources, but django throws out this error:</p> <pre><code>"error_message": "'BB' object is not iterable", </code></pre> <p>I've created a minimal working example:</p> <p><strong>models.py</strong></p> <pre><code>class AA(models.Model): n = models.IntegerField() class BB(models.Model): aa = models.ForeignKey(AA, related_name='xox') t = models.CharField(max_length=2) </code></pre> <p><strong>resources.py</strong></p> <pre><code>from ..models import AA, BB from tastypie.authorization import Authorization from tastypie.fields import ForeignKey class AResource(ModelResource): class Meta: queryset = AA.objects.all() authorization = Authorization() class BBResource(ModelResource): aa = ForeignKey(AResource, 'aa', related_name="xox", full=False, blank=True, null=True) class Meta: queryset = BB.objects.all() authorization = Authorization() </code></pre> <p>Now using curl to perform the post action:</p> <pre><code>$ curl -XPOST --dump-header - --header 'Content-Type: application/json' localhost:8000/api/v1/a/ --data '{"n": 18}' $ curl -XPOST --dump-header - --header 'Content-Type: application/json' localhost:8000/api/v1/bb/ --data '{"t": "di", "aa": "/api/v1/a/1/"}' </code></pre> <p> <strong>Traceback:</strong></p> <pre><code> { "error_message": "'BB' object is not iterable", "traceback": "Traceback (most recent call last):\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 219, in wrapper\n response = callback(request, *args, **kwargs)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 450, in dispatch_list\n return self.dispatch('list', request, **kwargs)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 482, in dispatch\n response = method(request, **kwargs)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 1384, in post_list\n updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 2175, in obj_create\n return self.save(bundle)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 2322, in save\n self.save_related(bundle)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/tastypie/resources.py\", line 2382, in save_related\n setattr(related_obj, field_object.related_name, bundle.obj)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/django/db/models/fields/related_descriptors.py\", line 481, in __set__\n manager.set(value)\n\n File \"/home/mee/.venvs/env_pro/lib/python2.7/site-packages/django/db/models/fields/related_descriptors.py\", line 639, in set\n objs = tuple(objs)\n\nTypeError: 'BB' object is not iterable\n" } </code></pre> <p><strong>Package versions:</strong></p> <pre><code>Django==1.9.7 django-tastypie==0.13.3 </code></pre> <p><strong>EDIT</strong></p> <p>By querying the bb objects I can see the association is being stored</p> <pre><code>$ curl localhost:8000/api/v1/bb/ { "meta": { "limit": 20, "next": null, "offset": 0, "previous": null, "total_count": 1 }, "objects": [ { "aa": "/api/v1/a/1/", "id": 1, "resource_uri": "/api/v1/bb/1/", "t": "di" } ] </code></pre> <p>And by inspecting the traceback, the error is raised in the django code.</p> <p>Any suggestion will be appreciated. Thanks in advance.</p>
0
2016-09-22T07:06:22Z
39,681,540
<p>It was a simple (and newbie) mistake that gave me a feeling of déjà vu. The issue is knowing when to use <code>related_name</code>: in this case the param isn't necessary. As the traceback shows, tastypie's <code>save_related</code> uses the field's <code>related_name</code> to do <code>setattr(related_obj, 'xox', bundle.obj)</code>, and Django expects an iterable when assigning to a reverse foreign-key manager, hence the <code>'BB' object is not iterable</code> error. Removing <code>related_name</code> from the resource field fixes it.</p> <p><a href="http://django-tastypie.readthedocs.io/en/latest/fields.html#related-name" rel="nofollow">Link to the docs</a> where the <code>related_name</code> param is explained.</p>
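<p>For reference, the resource from the question without that param would look like this (a sketch based on the code above):</p> <pre><code>class BBResource(ModelResource):
    aa = ForeignKey(AResource, 'aa', full=False, blank=True, null=True)

    class Meta:
        queryset = BB.objects.all()
        authorization = Authorization()
</code></pre>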
0
2016-09-24T22:31:06Z
[ "python", "django", "tastypie", "restful-architecture" ]
Mongoengine signals listens for all models
39,632,486
<p>I have setup <code>django</code> project with <code>mongoengine</code> for using mongodb with django. I have created 2 models and they work fine, but when I use signal listener for one model It also listens for another model, so how can I keep the signals bound to their models?</p> <p>Here's my code for model User:</p> <pre class="lang-py prettyprint-override"><code>from mongoengine import * from mongoengine import signals from datetime import datetime class User(Document): uid = StringField(max_length=60, required=True) platform = StringField(max_length=20, required=True) index = StringField(max_length=80) last_updated = DateTimeField(required=True, default=datetime.now()) meta = { 'collection': 'social_users' } def before_save(sender, document, **kwargs): if document.platform and document.uid: document.index = document.platform+'/'+document.uid signals.pre_save.connect(before_save) </code></pre> <p>Here's another model <code>Error</code></p> <pre class="lang-py prettyprint-override"><code>from mongoengine import * from datetime import datetime class Error(Document): call = DictField(required=True) response = DictField(required=True) date = DateTimeField(default=datetime.now(), required=True) meta = { 'collection': 'errors' } </code></pre> <p>Here's the file which I'm using to test the code:</p> <pre class="lang-py prettyprint-override"><code>from src.social.models.error import Error from src.social.models.user import User error = Error.objects.first() print(error.to_json()) </code></pre> <p>But it doesn't work, throws the following error:</p> <pre><code>AttributeError: 'Error' object has no attribute 'platform' </code></pre> <p>Please help me with this, thanks.</p>
3
2016-09-22T07:11:03Z
39,654,354
<p>I figured out a way to bind signals to particular models. The key is passing <code>sender=User</code> to <code>signals.pre_save.connect</code> so the handler only fires for <code>User</code> documents (the handler is also made a <code>classmethod</code> on the model). Here's how I did it:</p> <pre class="lang-py prettyprint-override"><code>from mongoengine import *
from mongoengine import signals
from datetime import datetime

class User(Document):
    uid = StringField(max_length=60, required=True)
    platform = StringField(max_length=20, required=True)
    index = StringField(max_length=80)
    last_updated = DateTimeField(required=True, default=datetime.now())

    meta = {
        'collection': 'social_users'
    }

    @classmethod
    def pre_save(cls, sender, document, **kwargs):
        if document.platform and document.uid:
            document.index = document.platform+'/'+document.uid

signals.pre_save.connect(User.pre_save, sender=User)
</code></pre> <p>Hope this helps the people who face the same problem.</p>
3
2016-09-23T06:50:04Z
[ "python", "django", "mongodb", "mongoengine" ]
Processing data with different number of features
39,632,573
<p>I have a classification/regression task, but the most interesting thing is that the number of features for each record is different. The features are already extracted and prepared, thus the context of the data is unknown, and the values of the features fluctuate from -10 to 10. There are records with more than 200 features, and likewise records with fewer than 20 features.</p> <p>The dataframe <code>df</code> has two columns: <code>ID</code> and <code>ATTRIBUTES</code>, and the output looks like this:</p> <pre><code>   ID                                          ATTRIBUTES
0   1  1.1 2.1 3.3 4.4 5.5 6.6 ... 99.9 100.0 101.1 102.2
1   2     1.1 2.1 3.3 4.4 5.5 6.6 ... 45.0 46.0 47.0 49.0
2   3      1.1 2.1 3.3 4.4 5.5 6.6 ... 9.0 10.0 11.0 12.0
3   4     1.1 2.1 3.3 4.4 5.5 6.6 ... 70.0 71.0 72.0 73.0
4   5  1.1 2.1 3.3 4.4 5.5 6.6 ... 131.0 132.0 134.0 135.0
</code></pre> <p>I have split column <code>ATTRIBUTES</code> into separate columns:</p> <p><code>df['ATTRIBUTES'].str.split(' ', expand=True).astype(float)</code></p> <p>Now <code>df</code> looks like this:</p> <pre><code>     0    1    2    3    4    5    6    7    8    9  ...    131    132    133    134    135
0  1.1  2.1  3.3  4.4  5.5  6.6  7.7  8.8  9.9  ...    NaN    NaN    NaN    NaN    NaN
1  1.1  2.1  3.3  4.4  5.5  6.6  7.7  8.8  9.9  ...    NaN    NaN    NaN    NaN    NaN
2  1.1  2.1  3.3  4.4  5.5  6.6  7.7  8.8  9.9  ...    NaN    NaN    NaN    NaN    NaN
3  1.1  2.1  3.3  4.4  5.5  6.6  7.7  8.8  9.9  ...    NaN    NaN    NaN    NaN    NaN
4  1.1  2.1  3.3  4.4  5.5  6.6  7.7  8.8  9.9  ...  131.0  132.0  133.0  134.0  135.0
</code></pre> <p>Let's say record1 has 102 features, rec2 - 49, rec3 - 12, rec4 - 73, rec5 - 135. After the split operation, records <code>rec1, rec2, rec3, rec4</code> were padded with <code>NaN</code> values to fill the dataframe.</p> <p>After some googling I came up with the following ideas: </p> <ol> <li>My first thought was to replace <code>NaN</code> values with meaningful features using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html" rel="nofollow">Imputer</a>; </li> <li>Discard records that have fewer than 20 (40, 60 etc.) features. </li> </ol> <p>For classification I chose <a href="http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow">RandomForest</a>. </p> <p>The baseline performance is approximately 0.4117 when validating on 10% of the training set (using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html" rel="nofollow">train_test_split</a>). </p> <p>Despite everything I tried:</p> <ol> <li><a href="http://sebastianraschka.com/Articles/2014_about_feature_scaling.html" rel="nofollow">Feature scaling - standardisation</a></li> <li>Dimensionality reduction via Principal Component Analysis (PCA)</li> <li>Tree-based feature selection using <a href="http://scikit-learn.org/stable/modules/feature_selection.html#tree-based-feature-selection" rel="nofollow">ExtraTreesClassifier</a></li> </ol> <p>the baseline performance has not risen above 0.4. So my question is: how should one proceed when particular records lack features? </p>
0
2016-09-22T07:15:22Z
39,633,064
<p>Data imputation <em>might</em> work when </p> <ul> <li>the datapoints with missing attributes represent a small percentage of your datapoints</li> <li>the number of missing attributes per datapoint is small.</li> </ul> <p>The reason you are getting poor results - at least as far as the data completeness is concerned - is that imputation distorts the variance of your data, and in effect you are disguising any underlying problem-specific meta-information hidden in your original data. The results of your work on a classification problem rely very much on the quality of your data and the 'correctness' of your model.</p> <p>I would start trying a couple of different ways to deal with this sort of data issue. These are by no means the only ways forward, and which one is best for you depends on your problem and the specifics of your data.</p> <ol> <li><p>See if some of the 'missing' or 'extra' attributes are possible to aggregate. Could it be the case that you have a fixed set of features per datapoint and then 'optional' features that could be aggregated in some way?</p></li> <li><p>See if you can segment your data in a way that, within each segment, you have datapoints that hold the same number of features. Then try to fit your classifier within these segments.</p></li> <li><p>Check whether there is some pattern between feature amount and datapoint class.</p></li> <li><p>Drop all features that have <code>NaN</code> values from your dataset.</p></li> </ol> <p>I hope this helps a bit!</p>
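<p>To make suggestions 2 and 4 concrete, here is a minimal sketch (assuming the split DataFrame <code>df</code> from the question, where features fill in from the left):</p> <pre><code># 4. drop every feature (column) that is missing for at least one record
df_complete = df.dropna(axis=1, how='any')

# 2. segment the records by how many features they actually have,
#    keeping only the fully populated columns within each segment
feature_counts = df.notnull().sum(axis=1)
segments = {n: sub.dropna(axis=1) for n, sub in df.groupby(feature_counts)}
</code></pre>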
0
2016-09-22T07:39:34Z
[ "python", "pandas", "machine-learning", "scikit-learn" ]
Convert a 1D list to 2D at every "\n" - Python
39,632,629
<p>I am trying to convert a 1D list to 2D whenever there is <code>\n</code> in it. </p> <pre><code>a = ['a', 'b', '\n', 'c', 'd'] </code></pre> <p>to</p> <pre><code>a = [['a', 'b']['c', 'd']] </code></pre> <p>Can anyone help me?</p> <p><strong>Edit</strong></p> <p>This is because I have a string</p> <pre><code>a = """\ f: 1000:000 000 000 000 000 000 000 000 000 000 000 000 f: 2000:000 000 000 000 000 000 000 000 000 000 000 000 f: 0000:000 000 000 000 000 000 000 000 000 000 000 000 f: 0000:000 000 000 000 000 000 000 000 000 000 000 000 f: 0000:000 000 000 000 000 000 000 666 000 000 000 000 f: 0000:000 000 000 000 000 000 000 000 000 000 000 000 f: 0000:000 000 000 000 000 333 000 000 000 000 000 000 f: 0000:000 000 000 000 000 000 000 000 000 000 000 000 """ </code></pre> <p>So when I convert it into listby doing <code>list(a)</code>, at every line break I get <code>\n</code>, so I just want to split it there.</p>
-1
2016-09-22T07:18:21Z
39,632,715
<p>Simplest way is to join the list to make a string. Then split it on '\n' and convert each string in the list to a sub list:</p> <pre><code>&gt;&gt;&gt; a_string = ''.join(a)
&gt;&gt;&gt; map(list, a_string.split('\n'))
[['a', 'b'], ['c', 'd']]
</code></pre> <p>However if you do not want to go with this approach, below is simple logic based on a <code>for</code> loop:</p> <pre><code>&gt;&gt;&gt; my_list = []
&gt;&gt;&gt; sub_list = []
&gt;&gt;&gt; for item in a:
...     if item != '\n':
...         sub_list.append(item)
...     else:
...         my_list.append(sub_list)
...         sub_list = []
... 
&gt;&gt;&gt; my_list.append(sub_list)
&gt;&gt;&gt; my_list
[['a', 'b'], ['c', 'd']]
</code></pre>
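<p>Another compact option, grouping consecutive non-newline items with <code>itertools.groupby</code> (a sketch, not part of the answer above):</p> <pre><code>&gt;&gt;&gt; from itertools import groupby
&gt;&gt;&gt; [list(g) for k, g in groupby(a, lambda item: item != '\n') if k]
[['a', 'b'], ['c', 'd']]
</code></pre>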
4
2016-09-22T07:22:42Z
[ "python" ]
sum based on radix in Python 2.7
39,632,834
<p>Working on a problem to calculate sum for any radix (radix between 2 to 10, inclusive), for example "10" + "20" based on 10 radix result in 30, and "10" + "20" based on radix 3 result in 100.</p> <p>I post my code and test cases to verify it works, and my question is if any performance improvement or any ideas to make code more elegant (I have some repetitive code in my below implementation)? Thanks.</p> <p>BTW, if code has any issues, please feel free to point out.</p> <pre><code>def radixSum(x, y, radix): if not x: return y if not y: return x # make x longer than y if (len(y) &gt; len(x)): tmp = x x = y y = tmp lenCommon = min(len(x), len(y)) count = 0 remaining = 0 i = -1 result='' # deal with common part while count &lt; lenCommon: value = int(x[i]) + int(y[i]) + remaining if value &gt;= radix: remaining = 1 else: remaining = 0 if value &gt;= radix: value = value - radix result = str(value) + result count += 1 i -= 1 # deal with longer string part while count &lt; len(x): value = int(x[i]) + remaining if value &gt;= radix: remaining = 1 else: remaining = 0 if value &gt;= radix: value = value - radix result = str(value) + result count += 1 i -= 1 if remaining &gt; 0: result = str(remaining) + result return result if __name__ == "__main__": print radixSum("10", "10", 2) #100 print radixSum("10", "20", 10) #30 print radixSum("10", "20", 3) #100 print radixSum("100", "20", 10) #120 </code></pre>
1
2016-09-22T07:28:47Z
39,634,065
<p>Your code is mostly ok, but you repeat some tests unnecessarily. Instead of </p> <pre><code>if value &gt;= radix: remaining = 1 else: remaining = 0 if value &gt;= radix: value = value - radix </code></pre> <p>you can do</p> <pre><code>if value &gt;= radix: remaining = 1 value = value - radix else: remaining = 0 </code></pre> <p>Another change you could make is to pad short numbers with zeroes so that you don't need to worry about handling extra digits in the longer number. </p> <p>FWIW, here's a more compact version. It uses <code>itertools.izip_longest</code> to simplify the process of adding corresponding digits and padding the shorter number with zeroes. I've also changed the function name so that it complies with the PEP-8 style guide.</p> <pre><code>from itertools import izip_longest def radix_sum(x, y, radix): #add corresponding digits a = [int(u) + int(v) for u, v in izip_longest(x[::-1], y[::-1], fillvalue=0)] # normalize result = [] carry = 0 for u in a: carry, u = divmod(u + carry, radix) result.append(str(u)) if carry: result.append(str(carry)) return ''.join(result)[::-1] if __name__ == "__main__": print radix_sum("10", "10", 2) #100 print radix_sum("10", "20", 10) #30 print radix_sum("10", "20", 3) #100 print radix_sum("100", "20", 10) #120 </code></pre> <p><strong>output</strong></p> <pre><code>100 30 100 120 </code></pre>
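<p>As a side note, for a radix between 2 and 10 you can sanity-check results against Python's built-in base parsing (a sketch, separate from the solution above):</p> <pre><code>def check(x, y, radix):
    # parse both operands, add as plain ints, then render the digits again
    total = int(x, radix) + int(y, radix)
    digits = ''
    while total:
        total, r = divmod(total, radix)
        digits = str(r) + digits
    return digits or '0'

print check('10', '10', 2)    # 100
print check('100', '20', 10)  # 120
</code></pre>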
1
2016-09-22T08:34:26Z
[ "python", "python-2.7", "radix" ]
Dot product of patches in tensorflow
39,632,849
<p>I have two square matrices of the same size and the dimensions of a square patch. I'd like to compute the dot product between every pair of patches. Essentially I would like to implement the following operation:</p> <pre><code>def patch_dot(A, B, patch_dim): res_dim = A.shape[0] - patch_dim + 1 res = np.zeros([res_dim, res_dim, res_dim, res_dim]) for i in xrange(res_dim): for j in xrange(res_dim): for k in xrange(res_dim): for l in xrange(res_dim): res[i, j, k, l] = (A[i:i + patch_dim, j:j + patch_dim] * B[k:k + patch_dim, l:l + patch_dim]).sum() return res </code></pre> <p>Obviously this would be an extremely inefficient implementation. Tensorflow's tf.nn.conv2d seems like a natural solution to this as I'm essentially doing a convolution, however my filter matrix isn't fixed. Is there a natural solution to this in Tensorflow, or should I start looking at implementing my own tf-op?</p>
1
2016-09-22T07:30:05Z
39,656,992
<p>The natural way to do this is to first extract overlapping image patches of matrix B using <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#extract_image_patches" rel="nofollow">tf.extract_image_patches</a>, then to apply the <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.html#conv2d" rel="nofollow">tf.nn.conv2D</a> function on A and each B sub-patch using <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/functional_ops.html" rel="nofollow">tf.map_fn</a>.</p> <p>Note that prior to using <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#extract_image_patches" rel="nofollow">tf.extract_image_patches</a> and <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.html#conv2d" rel="nofollow">tf.nn.conv2D</a> you need to reshape your matrices into 4D tensors of shape <code>[1, width, height, 1]</code> using <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#reshape" rel="nofollow">tf.reshape</a>.</p> <p>Also, prior to using <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/functional_ops.html" rel="nofollow">tf.map_fn</a>, you need to use the <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/array_ops.html#transpose" rel="nofollow">tf.transpose</a> op so that the B sub-patches are indexed by the first dimension of the tensor you use as the <code>elems</code> argument of <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/functional_ops.html" rel="nofollow">tf.map_fn</a>.</p>
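<p>An untested sketch of that pipeline, assuming the TF r0.x signatures linked above (the <code>ksizes</code> keyword was renamed in later releases; <code>n</code> and <code>patch_dim</code> come from the question):</p> <pre><code>import tensorflow as tf

A4 = tf.reshape(A, [1, n, n, 1])                      # conv input
patches = tf.extract_image_patches(
    tf.reshape(B, [1, n, n, 1]),
    ksizes=[1, patch_dim, patch_dim, 1],
    strides=[1, 1, 1, 1], rates=[1, 1, 1, 1],
    padding='VALID')                                  # one flattened B-patch per position
kernels = tf.reshape(patches, [-1, patch_dim, patch_dim, 1, 1])

def corr(k):
    # conv2d does cross-correlation, i.e. the sliding dot product we want
    return tf.nn.conv2d(A4, k, strides=[1, 1, 1, 1], padding='VALID')

res = tf.map_fn(corr, kernels)                        # one res_dim x res_dim map per B-patch
</code></pre>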
1
2016-09-23T09:12:45Z
[ "python", "machine-learning", "computer-vision", "tensorflow", "linear-algebra" ]
How to get a Python 2.7 package (Spynner) to work with Python 3?
39,632,870
<p>The Spynner docs says it supports Python >=26, but during installation I get the following error:</p> <pre><code>(spynner) spynner$ pip3 install spynner Requirement already satisfied (use --upgrade to upgrade): spynner in /Users/spynner/Envs/spynner/lib/python3.4/site-packages/spynner-2.19-py3.4.egg Collecting six (from spynner) Using cached six-1.10.0-py2.py3-none-any.whl Collecting beautifulsoup4 (from spynner) Using cached beautifulsoup4-4.5.1-py3-none-any.whl Collecting unittest2 (from spynner) Using cached unittest2-1.1.0-py2.py3-none-any.whl Collecting pyquery (from spynner) Using cached pyquery-1.2.13.tar.gz Collecting autopy (from spynner) Using cached autopy-0.51.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "/private/var/folders/zl/dpw1svbx2qjbl549qvzq2r640000gn/T/pip-build-3rvrid_c/autopy/setup.py", line 50 print 'Updating __init__.py' ^ SyntaxError: Missing parentheses in call to 'print' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/zl/dpw1svbx2qjbl549qvzq2r640000gn/T/pip-build-3rvrid_c/autopy/ </code></pre> <p>So it looks like one of the packages is written for 2.7.</p> <p>Is there some Python trick I can do to get this to work with Python 3, or do I have to go and manually correct the offending code?</p> <p>Cheers</p>
0
2016-09-22T07:30:55Z
39,633,021
<p>You will need to correct the offending code. There are automated tools that help you do that, such as <a href="https://docs.python.org/2/library/2to3.html" rel="nofollow">2to3</a>:</p> <blockquote> <p>2to3 is a Python program that reads Python 2.x source code and applies a series of fixers to transform it into valid Python 3.x code. The standard library contains a rich set of fixers that will handle almost all code. 2to3 supporting library lib2to3 is, however, a flexible and generic library, so it is possible to write your own fixers for 2to3. lib2to3 could also be adapted to custom applications in which Python code needs to be edited automatically.</p> </blockquote>
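<p>For example, after unpacking the autopy source you could run (a sketch; the rest of the package may need more fixes than this one file):</p> <pre><code>$ 2to3 -w setup.py   # rewrites setup.py in place, keeping a .bak backup
$ pip3 install .
</code></pre>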
0
2016-09-22T07:37:46Z
[ "python", "spynner" ]
Passing data into flask using Ajax
39,633,202
<p>I am trying to pass some data from HTML to Python using Ajax with a POST request, but the arguments of the request are empty.</p> <p>In a simple endpoint, I am trying to print all the arguments that come with the request.</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "GET":
        return render_template('index.html')
    else:
        print("post")
        print(request.args)
        return render_template('index.html')
</code></pre> <p>debug output:</p> <pre><code>post
ImmutableMultiDict([])
127.0.0.1 - - [22/Sep/2016 00:40:12] "POST /enter HTTP/1.1" 200 -
</code></pre> <p>the ajax function is as below:</p> <pre><code>$(function() {
  $('.button').click(function() {
    var user = $('#userinput').val();
    console.log(user);
    $.ajax({
        url: '/enter',
        data: JSON.stringify( {'user': $('#userinput').val()} ),
        type: 'POST',
        success: on_request_success,
        error: on_request_error,
    })
  });
});
</code></pre> <p>The value gets printed in the JS console, but it's not present on the Python side.</p> <p>html code:</p> <pre><code>&lt;body&gt;
    &lt;h3&gt;The Chef Bucket&lt;/h3&gt;
    Enter food you like to order.
    &lt;input type="text" name="user_data" id="userinput" /&gt;
    &lt;input type="button" value="Submit" class="button"&gt;
    &lt;div id="userinputdata"&gt;
        {% if not entered %}
            &lt;h4&gt;&lt;/h4&gt;
        {% else %}
            &lt;h4&gt;{{ entered }}&lt;/h4&gt;
        {% endif %}
    &lt;/div&gt;
&lt;/body&gt;
</code></pre>
-1
2016-09-22T07:47:00Z
39,633,407
<p>There is no <code>&lt;form&gt;</code> tag in your html code, so your <code>$('form').serialize()</code> is doing nothing.</p>
0
2016-09-22T07:58:06Z
[ "javascript", "jquery", "python", "ajax", "flask" ]
Passing data into flask using Ajax
39,633,202
<p>I am trying to pass some data from HTML to Python using Ajax with a POST request, but the arguments of the request are empty.</p> <p>In a simple endpoint, I am trying to print all the arguments that come with the request.</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "GET":
        return render_template('index.html')
    else:
        print("post")
        print(request.args)
        return render_template('index.html')
</code></pre> <p>debug output:</p> <pre><code>post
ImmutableMultiDict([])
127.0.0.1 - - [22/Sep/2016 00:40:12] "POST /enter HTTP/1.1" 200 -
</code></pre> <p>the ajax function is as below:</p> <pre><code>$(function() {
  $('.button').click(function() {
    var user = $('#userinput').val();
    console.log(user);
    $.ajax({
        url: '/enter',
        data: JSON.stringify( {'user': $('#userinput').val()} ),
        type: 'POST',
        success: on_request_success,
        error: on_request_error,
    })
  });
});
</code></pre> <p>The value gets printed in the JS console, but it's not present on the Python side.</p> <p>html code:</p> <pre><code>&lt;body&gt;
    &lt;h3&gt;The Chef Bucket&lt;/h3&gt;
    Enter food you like to order.
    &lt;input type="text" name="user_data" id="userinput" /&gt;
    &lt;input type="button" value="Submit" class="button"&gt;
    &lt;div id="userinputdata"&gt;
        {% if not entered %}
            &lt;h4&gt;&lt;/h4&gt;
        {% else %}
            &lt;h4&gt;{{ entered }}&lt;/h4&gt;
        {% endif %}
    &lt;/div&gt;
&lt;/body&gt;
</code></pre>
-1
2016-09-22T07:47:00Z
39,633,419
<p>When calling <code>render_template</code>, you need to pass in values for the elements to be filled in within your template.</p> <p>Instead of this:</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "GET":
        return render_template('index.html')
    else:
        print("post")
        print(request.args)
        return render_template('index.html')
</code></pre> <p>Try this:</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "GET":
        return render_template('index.html')
    elif request.method == "POST":
        entered = request.form['entered']
        return render_template('index.html', entered=entered)
    else:
        return render_template('index.html')
</code></pre>
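<p>For <code>request.form</code> to see the value, the Ajax call also has to post it form-encoded under a matching key; a sketch (jQuery form-encodes a plain object by default, which is what populates Flask's <code>request.form</code>):</p> <pre><code>$.ajax({
    url: '/enter',
    data: {entered: $('#userinput').val()},
    type: 'POST',
    success: on_request_success,
    error: on_request_error,
})
</code></pre>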
0
2016-09-22T07:58:45Z
[ "javascript", "jquery", "python", "ajax", "flask" ]
Passing data into flask using Ajax
39,633,202
<p>I am trying to pass some data from HTML to Python using Ajax with a POST request, but the arguments of the request are empty.</p> <p>In a simple endpoint, I am trying to print all the arguments that come with the request.</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "GET":
        return render_template('index.html')
    else:
        print("post")
        print(request.args)
        return render_template('index.html')
</code></pre> <p>debug output:</p> <pre><code>post
ImmutableMultiDict([])
127.0.0.1 - - [22/Sep/2016 00:40:12] "POST /enter HTTP/1.1" 200 -
</code></pre> <p>the ajax function is as below:</p> <pre><code>$(function() {
  $('.button').click(function() {
    var user = $('#userinput').val();
    console.log(user);
    $.ajax({
        url: '/enter',
        data: JSON.stringify( {'user': $('#userinput').val()} ),
        type: 'POST',
        success: on_request_success,
        error: on_request_error,
    })
  });
});
</code></pre> <p>The value gets printed in the JS console, but it's not present on the Python side.</p> <p>html code:</p> <pre><code>&lt;body&gt;
    &lt;h3&gt;The Chef Bucket&lt;/h3&gt;
    Enter food you like to order.
    &lt;input type="text" name="user_data" id="userinput" /&gt;
    &lt;input type="button" value="Submit" class="button"&gt;
    &lt;div id="userinputdata"&gt;
        {% if not entered %}
            &lt;h4&gt;&lt;/h4&gt;
        {% else %}
            &lt;h4&gt;{{ entered }}&lt;/h4&gt;
        {% endif %}
    &lt;/div&gt;
&lt;/body&gt;
</code></pre>
-1
2016-09-22T07:47:00Z
39,633,532
<p>There is no <code>&lt;form&gt;</code> tag in your html code. Therefore the <code>data</code> attribute of the ajax request is empty. Either put your <code>&lt;div&gt;</code> in a <code>&lt;form&gt;</code> environment in the html or set your <code>data</code> attribute in the ajax request by hand:</p> <pre><code>$(function() {
  $('.button').click(function() {
    var user = $('#userinput').val();
    console.log(user);
    $.ajax({
        url: '/enter',
        data: JSON.stringify( {'user': $('#userinput').val()} ),
        type: 'POST',
        success: on_request_success,
        error: on_request_error,
    })
  });
});
</code></pre> <p>Note that the <code>data</code> attribute is here sent as a JSON string which you have to deserialize on the server side. If you want to keep it simple, just define it as a dictionary.</p>
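<p>On the Flask side, deserializing that JSON body could look like this (a sketch; <code>force=True</code> makes Flask parse the body even without a JSON <code>Content-Type</code> header):</p> <pre><code>@app.route("/enter", methods=["POST", "GET"])
def enter_data():
    if request.method == "POST":
        payload = request.get_json(force=True)
        user = payload.get('user')
        ...
</code></pre>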
0
2016-09-22T08:05:04Z
[ "javascript", "jquery", "python", "ajax", "flask" ]
Rollback Datastore PUT Transaction in GAE Python Web
39,633,228
<p>There is one method; inside that method I'm inserting records of three different models into the database. But if some exception or error happens I need to roll back these insertions from the DB. The following is a showcase of the method.</p> <pre><code>def post():
  try:
    model1 = Model1()
    model1.key1 = 'key1'
    model1.key2 = 'key2'
    model1.put()

    #some logic1 code block goes here
    .
    .
    model2 = Model2()
    model2.key2 = 'key2'
    model2.key2 = 'key2'
    model2.put()

    #some logic2 code block goes here
    .
    .
    model3 = Model3()
    model3.key3 = 'key3'
    model3.key3 = 'key3'
    model3.put()

    #some logic3 block goes here
    .
    .
  except Exception:
    #all the database insertion transaction which happened should be rollback here.
</code></pre> <p><strong>Here Model1, Model2 and Model3 are classes extending google.appengine.ext.ndb.Model.</strong> Now suppose an exception occurs in <code>logic1 code block</code>; then <code>model1</code> should be rolled back because it was inserted before executing this <code>logic1 code block</code>. Similarly, if an exception occurs in <code>logic2 code block</code> then both <code>model1 &amp; model2</code> above should be rolled back, and so on. My problem is very common, but I did a lot of searching and was not able to find any solution. I'm totally new to Python and GAE. Please help.</p>
-2
2016-09-22T07:48:13Z
39,642,932
<p>As <a href="http://stackoverflow.com/users/104349/daniel-roseman">@Daniel Roseman</a> linked for you, the answer is on the <a href="https://cloud.google.com/appengine/docs/python/ndb/transactions" rel="nofollow">NDB Transaction page</a>. Just search for the word "rollback" and you'll see it.</p> <p>For your code, this means:</p> <pre><code>@ndb.transactional(xg=True)
def post():
  try:
    model1 = Model1()
    model1.key1 = 'key1'
    model1.key2 = 'key2'
    model1.put()

    ...

    #some logic3 block goes here
    .
    .
  except Exception:
    # all the database insertions which happened will be rolled back here
    raise ndb.Rollback
</code></pre> <p>Note that on the decorator I've specified <code>xg=True</code> since from your code snippet it looked like it was most likely a cross-entity group transaction. If this is not the case and all entities have the same root ancestor, you can omit this.</p> <p>Also note that ndb.Rollback is purely a convenience exception - any exception raised in a transaction-decorated function will cause a rollback.</p>
-1
2016-09-22T15:19:33Z
[ "python", "google-app-engine", "google-cloud-datastore", "rollback" ]
Replacing a specific index value in list of lists with corresponding dictionary value
39,633,231
<p>I am trying to replace a value in a list of lists (in all relevant sublists) with a VALUE from a dictionary and cannot quite get it to work. </p> <p>The content/details of the dictionary is as follows:</p> <pre><code>dictionary = dict(zip(gtincodes, currentstockvalues))
</code></pre> <p>The dictionary contains pairs of GTIN codes and currentstock values. So, 12121212 (GTIN) corresponding to value (currentstockvalue) 1, and 12345670 corresponding to value (currentstockvalue) 0. </p> <p>I now need to look up a list <strong>NEWDETAILS</strong> which has been read in from a file. The list is essentially a list of lists. When newdetails (the list) is printed the output is:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]
</code></pre> <p><strong>WHAT NEEDS TO BE DONE:</strong></p> <p>I would like to use the dictionary to update and replace values in the list (and all relevant sublists in the list). So, for each GTIN (key) in the dictionary, the 3rd index in each sublist (for the corresponding GTIN) needs to be updated with the VALUE (currentstockvalue) in the dictionary. </p> <p>In the above example, index[03] of the first sublist (which is currently 5) needs to be updated with, say, 2 (or whatever the value in the dictionary is for that GTIN). The same needs to happen for the second sublist. </p> <p>The code I have so far is: </p> <pre><code>for sub_list in newdetails:
    sub_list[3] = dictionary.get(sub_list[3], sub_list[3])
    print(sub_list)
</code></pre> <p>The above simply appears to produce two separate sublists and print them. It is not making the replacement.</p> <pre><code>['12345670', 'Iphone 9.0', '500', '5', '3', '5']
['12121212', 'Samsung Laptop', '900', '5', '3', '5']
</code></pre> <p>My question is: </p> <p>How do I amend the above code to LOOK UP THE DICTIONARY (to match the index[0]) of each sublist, and REPLACE the 4th element (index[03]) of each sublist with the VALUE in the dictionary for the corresponding GTIN?</p> <p>Thanks in advance. </p> <p><strong>UPDATE</strong></p> <p>Based on Alex P's suggestion (thank you) I made the following edit:</p> <pre><code> for sub_list in newdetails:
        sub_list[3] = dictionary.get(gtin), sub_list[3]
        print(sub_list)
</code></pre> <p>It comes up with a replacement (of a tuple) rather than just the individual value - as follows: </p> <pre><code>['12345670', 'Iphone 9.0', '500', (1, '5'), '3', '5']
['12121212', 'Samsung Laptop', '900', (1, '5'), '3', '5']
</code></pre> <p>The contents of the dictionary:</p> <pre><code>12121212 3
0 0
12345670 1
</code></pre> <p>12121212 is the gtin and '3' is the current stock. I want to look up the dictionary, and IF a corresponding GTIN is found in the newdetails list, I want to replace the third index (fourth element) in the newdetails list with the corresponding value in the dictionary. So - for 12121212, 5 to be replaced with 3. And for 12345670, 5 to be replaced with 1. </p> <p><strong>UPDATE based on suggestion by Moses K</strong></p> <p>I tried this - thank you - but ....</p> <pre><code>for sub_list in newdetails:
     sub_list[3] = dictionary.get(int(sub_list[0]), sub_list[3])
     print(sub_list)
</code></pre> <p>the output is still simply the two (unchanged) sublists. </p> <pre><code>['12345670', 'Iphone 9.0', '500', '5', '3', '5']
['12121212', 'Samsung Laptop', '900', '5', '3', '5']
</code></pre> <p>Update #2 - both converted to int.</p> <pre><code>for sub_list in newdetails:
    print("sublist3")
    print(sub_list[3])
    sub_list[3] = dictionary.get(int(sub_list[0]), (int(sub_list[3])))
    print(sub_list)
</code></pre> <p>Still producing an output of:</p> <pre><code>['12345670', 'Iphone 9.0', '500', 5, '3', '5']
['12121212', 'Samsung Laptop', '900', 5, '3', '5']
</code></pre> <p>instead of what I want, which is: ['12345670', 'Iphone 9.0', '500', <strong>2</strong>, '3', '5'] ['12121212', 'Samsung Laptop', '900', <strong>1</strong>, '3', '5']</p>
1
2016-09-22T07:48:31Z
39,633,389
<p>Your <code>GTIN</code> code at <em>index 0</em> of each sublist should be your dictionary key:</p> <pre><code>for sub_list in newdetails: sub_list[3] = dictionary.get(sub_list[0], sub_list[3]) # ^ </code></pre> <p>If the codes in the dictionary are integers and not strings then you'll need to cast them to <code>int</code>:</p> <pre><code>for sub_list in newdetails: sub_list[3] = dictionary.get(int(sub_list[0]), sub_list[3]) # ^ </code></pre>
1
2016-09-22T07:56:45Z
[ "python", "dictionary", "sublist" ]
Move folders older than 2 days via Ansible
39,633,566
<p>Given the following directories:</p> <pre><code> /tmp/testing/test_ansible ├── [Sep 20 8:53] 2014-05-10 ├── [Sep 20 8:53] 2014-05-11 ├── [Sep 20 8:53] 2014-05-12 └── [Sep 22 9:48] 2016-09-22 4 directories </code></pre> <p>I'm trying to <strong>move</strong> dirs older than <strong>2 days</strong>. In order to achieve that, I'm using Ansible's <code>find</code> module:</p> <pre><code> - name: Find the test dirs created in the past find: paths: /tmp/testing/test_ansible age: 2d file_type: directory register: gold_data - debug: var="{{ item }}" with_items: "{{ gold_data.files }}" </code></pre> <p>The above code is outputting <strong>3 results</strong> out of 4 folders, I'm showing only 1 result below:</p> <pre class="lang-sh prettyprint-override"><code> TASK [debian-linux-move : debug] *********************************************** ok: [localhost] =&gt; (item={u'uid': 1000, u'woth': False, u'mtime': 1474350802.827127, u'inode': 3937540, u'isgid': False, u'size': 4096, u'roth': True, u'isuid' : False, u'isreg': False, u'gid': 1000, u'ischr': False, u'wusr': True, u'xoth': True, u'rusr': True, u'nlink': 2, u'issock': False, u'rgrp': True, u'path': u' /tmp/testing/test_ansible/2014-05-11', u'xusr': True, u'atime': 1474529596.5034406, u'isdir': True, u'ctime': 1474350802.827127, u'isblk': False, u'xgrp': True , u'dev': 2055, u'wgrp': True, u'isfifo': False, u'mode': u'0775', u'islnk': False}) =&gt; { "&lt;type 'dict'&gt;": "VARIABLE IS NOT DEFINED!", "item": { "atime": 1474529596.5034406, "ctime": 1474350802.827127, "dev": 2055, "gid": 1000, "inode": 3937540, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0775", "mtime": 1474350802.827127, "nlink": 2, "path": "/tmp/testing/test_ansible/2014-05-11", "rgrp": true, "roth": true, "rusr": true, "size": 4096, "uid": 1000, "wgrp": true, "woth": false, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } </code></pre> <p>and 2 more results that are somewhat similar to this one.</p> <h3>What I'm trying to achieve</h3> <p>I figured that if I store all the <strong>path</strong> in a variable, then I could just <strong>move</strong> those dirs from the stored variable, and then <strong>make a symlink</strong> back to the dir where they were taken. So I have to loop the items and extract the <code>path</code>.</p> <p>That's why I need the <strong>path</strong>. But when I try to access it, I get error:</p> <pre><code> (debug) p list(vars['gold_data']['files']['path']) ***TypeError:TypeError('list indices must be integers, not str',) </code></pre> <p>What are other options? How could I achieve such operation?</p>
0
2016-09-22T08:07:21Z
39,634,836
<p>Thanks to the awesome #ansible IRC community, I managed to fix the error.</p> <p>I was printing the item wrong in the debug module.</p> <p>How I did it (bad):</p> <pre><code> - debug: var={{ item['path'] }}
   with_items: "{{ gold_data.files }}"
</code></pre> <p>How they suggested (good):</p> <pre><code> - debug: var=item.path
   with_items: "{{ gold_data.files }}"
</code></pre> <p>So, by removing the double braces it's now printing the path correctly:</p> <pre class="lang-sh prettyprint-override"><code> TASK [debian-linux-move : debug] ***********************************************
 ok: [localhost] =&gt; (item={u'uid': 1000, u'woth': False, u'mtime': 1474350802.827127, u'inode': 3937540, u'isgid': False, u'size': 4096, u'roth': True, u'isuid': False, u'isreg': False, u'gid': 1000, u'ischr': False, u'wusr': True, u'xoth': True, u'rusr': True, u'nlink': 2, u'issock': False, u'rgrp': True, u'path': u'/tmp/testing/test_ansible/2014-05-11', u'xusr': True, u'atime': 1474529596.5034406, u'isdir': True, u'ctime': 1474350802.827127, u'isblk': False, u'xgrp': True, u'dev': 2055, u'wgrp': True, u'isfifo': False, u'mode': u'0775', u'islnk': False}) =&gt; {
     "item": {
         "atime": 1474529596.5034406,
         "ctime": 1474350802.827127,
         "dev": 2055,
         "gid": 1000,
         "inode": 3937540,
         "isblk": false,
         "ischr": false,
         "isdir": true,
         "isfifo": false,
         "isgid": false,
         "islnk": false,
         "isreg": false,
         "issock": false,
         "isuid": false,
         "mode": "0775",
         "mtime": 1474350802.827127,
         "nlink": 2,
         "path": "/tmp/testing/test_ansible/2014-05-11",
         "rgrp": true,
         "roth": true,
         "rusr": true,
         "size": 4096,
         "uid": 1000,
         "wgrp": true,
         "woth": false,
         "wusr": true,
         "xgrp": true,
         "xoth": true,
         "xusr": true
     },
     "item.path": "/tmp/testing/test_ansible/2014-05-11"
 }
</code></pre>
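<p>With <code>item.path</code> accessible, the move-and-symlink part of the original goal could then look roughly like this (a sketch; the archive directory is an assumption):</p> <pre><code>- name: Move the old dirs away
  command: mv "{{ item.path }}" /tmp/testing/archive/
  with_items: "{{ gold_data.files }}"

- name: Symlink them back
  file:
    src: "/tmp/testing/archive/{{ item.path | basename }}"
    dest: "{{ item.path }}"
    state: link
  with_items: "{{ gold_data.files }}"
</code></pre>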
1
2016-09-22T09:10:30Z
[ "python", "bash", "directory", "ansible", "ansible-playbook" ]
More efficient way of computing distance matrix in Python
39,633,758
<p>Hi Everyone I am trying to write code (using python 2) that returns a matrix that contains the distance between all pairs of rows. Below is an implementation that I have written. It works as expected but can get very slow as the number of rows gets large. Hence I was wondering if anyone has any suggestions as to how the code can be made more efficient for large number of rows.</p> <p>Thanks in advance</p> <pre><code>def gendist(x,alpha=2): (n,p) = x.shape len = 0 for ii in range(1,n): len = len + ii d = np.empty((len,p)) ind = 0 for ii in range(0,n): for jj in range(1,n): if ii &lt; jj: d[ind,] = (x[ii,]-x[jj,])**alpha ind = ind + 1 return d </code></pre>
0
2016-09-22T08:18:20Z
39,633,960
<p>I see you use <code>x.shape</code>, so it is fine to assume that you are using <code>NumPy</code>.</p> <p>Code:</p> <pre><code>#!/usr/bin/env python3

import numpy as np
import scipy.spatial.distance as dist

a = np.random.randint(0, 10, (5, 3))
b = dist.pdist(a)
print('Matrix:')
print(a)
print('Pdist')
for d in b:
    print(d)
</code></pre> <p>Output:</p> <pre><code>Matrix:
[[4 7 6]
 [8 2 8]
 [8 3 5]
 [2 4 7]
 [0 7 5]]
Pdist
6.7082039325
5.74456264654
3.74165738677
4.12310562562
3.16227766017
6.40312423743
9.89949493661
6.40312423743
8.94427191
4.12310562562
</code></pre> <p>where the order of combinations is (0,1), (0,2), (0,3), (0,4), (1,2), (1,3), (1,4), (2,3), (2,4), ...</p> <p>The default metric is the Euclidean distance. See <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist" rel="nofollow"><code>pdist</code></a> to apply other metrics.</p>
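<p>For instance, if what you're after is a Minkowski distance with the question's <code>alpha</code> as the exponent, something like this should work (a sketch):</p> <pre><code>b3 = dist.pdist(a, metric='minkowski', p=3)   # p plays the role of alpha
</code></pre>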
0
2016-09-22T08:29:48Z
[ "python", "performance", "distance-matrix" ]
More efficient way of computing distance matrix in Python
39,633,758
<p>Hi Everyone I am trying to write code (using python 2) that returns a matrix that contains the distance between all pairs of rows. Below is an implementation that I have written. It works as expected but can get very slow as the number of rows gets large. Hence I was wondering if anyone has any suggestions as to how the code can be made more efficient for large number of rows.</p> <p>Thanks in advance</p> <pre><code>def gendist(x,alpha=2): (n,p) = x.shape len = 0 for ii in range(1,n): len = len + ii d = np.empty((len,p)) ind = 0 for ii in range(0,n): for jj in range(1,n): if ii &lt; jj: d[ind,] = (x[ii,]-x[jj,])**alpha ind = ind + 1 return d </code></pre>
0
2016-09-22T08:18:20Z
39,634,672
<p>Without scipy (it is possible to get numpy without scipy, for instance with an Abaqus install) it's a bit more difficult.</p> <pre><code>import numpy as np

def gendist(x, alpha=2):
    n = x.shape[0]
    # n x n x p matrix filled with copies of x
    xCopies = x.repeat(n, axis=0).reshape((n,) + x.shape)
    # matrix of difference vectors x[i] - x[j]
    xVecs = xCopies - xCopies.swapaxes(0, 1)
    # n x n matrix of distances; 1.0/alpha avoids integer division in Python 2
    xDists = np.sum(xVecs**alpha, axis=-1)**(1.0/alpha)
    return xDists
</code></pre> <p>That should be robust, at least it's what I had to use.</p>
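<p>If scipy does happen to be available, a quick sanity check of the function above (for <code>alpha=2</code> it should match the Euclidean distance matrix):</p> <pre><code>from scipy.spatial.distance import pdist, squareform

x = np.random.rand(5, 3)
print np.allclose(gendist(x), squareform(pdist(x)))   # True
</code></pre>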
0
2016-09-22T09:03:57Z
[ "python", "performance", "distance-matrix" ]
Can't execute some python commands for virtual env from Fish shell
39,633,838
<p>I'm on MacOS Sierra and have python3 and python installed via brew. Using the command <code>python3 -m venv my_venv</code>, I created a vitual environment for python3. I then tried to activate it with <code>. bin/activate.fish</code> from within <code>my_venv</code>. However I get the exception</p> <pre><code>$(...) is not supported. In fish, please use '(automate_stuff)'. bin/activate.fish (line 58): if test -n "$(automate_stuff) " ^ from sourcing file bin/activate.fish called on line 175 of file /usr/local/Cellar/fish/HEAD/share/fish/config.fish in function '.' called on standard input source: Error while reading file 'bin/activate.fish' </code></pre> <p>Also I tried to install flake8 for the <code>my_venv</code> with the command (from within my_venv) <code>. bin/pip -m pip install flake8</code>. That also failed with </p> <pre><code>Missing end to balance this if statement bin/pip (line 9): if __name__ == '__main__': ^ from sourcing file bin/pip called on line 175 of file /usr/local/Cellar/fish/HEAD/share/fish/config.fish in function '.' called on standard input source: Error while reading file 'bin/pip' </code></pre> <p>What is going on and how do I fix it? Just to repeat I run Fish Shell as my default shell.</p>
-1
2016-09-22T08:22:41Z
39,643,448
<blockquote> <p>. bin/pip -m pip install flake8.</p> </blockquote> <p>Here you are sourcing (the <code>.</code> command is an alias for <code>source</code>) the pip script with fish. Since <code>pip</code> is a python script, you'll get syntax errors.</p> <p>You want <code>./bin/pip</code>.</p> <blockquote> <p>$(...) is not supported. In fish, please use '(automate_stuff)'.</p> </blockquote> <p>This is a bug in virtualenv - <code>$()</code> isn't valid fish syntax.</p>
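<p>So the install command from the question becomes (a sketch):</p> <pre><code>$ ./bin/pip install flake8   # executes pip via its shebang instead of sourcing it into fish
</code></pre>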
0
2016-09-22T15:44:33Z
[ "python", "shell", "fish" ]
Is my interpretation of the Lagrange polynomial and construction of graph correct?
39,633,846
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return (1 + np.exp(-x)) / (1 + x ** 4)


def lagrange(x, y, x0):
    ret = []
    for j in range(len(y)):
        numerator = 1.0
        denominator = 1.0
        for k in range(len(x)):
            if k != j:
                numerator *= (x0 - x[k])
                denominator *= (x[j] - x[k])
        ret.append(y[j] * (numerator / denominator))
    return ret


plt.plot(x, lagrange(x, f(x), 5), label="Polynom")
plt.plot(x, f(x), label="Function")
plt.legend(loc='upper left')
plt.show()
</code></pre> <p>I need to build a graph with the original function and the Lagrange polynomial. I'm new to the matplotlib library (and to Python in general), so I want to confirm that I am right.</p>
0
2016-09-22T08:23:01Z
39,641,264
<p>SciPy has <code>lagrange</code>, here is an example:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import lagrange def f(x): return (1 + np.exp(-x)) / (1 + x ** 4) x = np.linspace(-1, 1, 10) x2 = np.linspace(-1, 1, 100) y = f(x) p = lagrange(x, y) plt.plot(x, f(x), "o", label="Function") plt.plot(x2, np.polyval(p, x2), label="Polynom") plt.legend(loc='upper right') </code></pre> <p>the output:</p> <p><a href="http://i.stack.imgur.com/pkjrT.png" rel="nofollow"><img src="http://i.stack.imgur.com/pkjrT.png" alt="enter image description here"></a></p> <p>If you want to calculate the interpolated curve by the lagrange formula:</p> <pre><code>den = x[:, None] - x[None, :] num = x2[:, None] - x[None, :] with np.errstate(divide='ignore', invalid='ignore'): div = num[:, None, :] / den[None, :, :] div[~np.isfinite(div)] = 1 y2 = (np.product(div, axis=-1) * y).sum(axis=1) plt.plot(x, y, "o") plt.plot(x2, y2) </code></pre> <p>The code uses numpy broadcast to speedup the calculation.</p>
0
2016-09-22T14:03:00Z
[ "python", "matplotlib" ]
How to get array after adding a number to specific data of that array?
39,633,889
<pre><code>import numpy as np
import xlrd
import xlwt

wb = xlrd.open_workbook('Scatter plot.xlsx')
workbook = xlwt.Workbook()
sheet = workbook.add_sheet("Sheet1")

sh1 = wb.sheet_by_name('T180')
sh2=wb.sheet_by_name("T181")
x= np.array([sh1.col_values(1, start_rowx=51, end_rowx=315)])
y= np.array([sh1.col_values(2, start_rowx=51, end_rowx=315)])
x1= np.array([sh2.col_values(1, start_rowx=50, end_rowx=298)])
y1= np.array([sh2.col_values(2, start_rowx=50, end_rowx=298)])
condition = [(x1&lt;=1000) &amp; (x1&gt;=0) ]
condition1 = [(y1&lt;=1000) &amp; (y1&gt;=0) ]
x_prime=x1[condition]-150
y_prime= y[condition1]+20

plt.plot(x,y, "ro", label="T180")
plt.plot(x_prime,y_prime,"gs")
plt.show()
</code></pre> <p>I want to subtract 150 from the values of the <code>x1</code> array that are less than 1000, and finally I need all the values (subtracted + remaining). With this code I get only the values that are less than 1000, but I need both (less than 1000 + greater than 1000), with the values greater than 1000 left unchanged. How can I do this? As you can see there are 248 elements in the <code>x1</code> array, so after the subtraction I will need 248 elements as <strong>x_prime</strong>. The same goes for <code>y</code>. Thanks in advance for your kind co-operation. </p>
1
2016-09-22T08:25:40Z
39,635,155
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.place.html" rel="nofollow"><code>numpy.place</code></a> to modify arrays where a logic expression holds. For complex logic expressions on the array there are the <a href="http://docs.scipy.org/doc/numpy/reference/routines.logic.html" rel="nofollow">logic functions</a> that combine boolean arrays.</p> <p>E.g.:</p> <pre><code>A = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
mask = np.logical_and(A &gt; 1, A &lt;= 8)
np.place(A, mask, A[mask] - 10)
</code></pre> <p>will subtract <code>10</code> from every element of <code>A</code> that is <code>&gt; 1</code> and <code>&lt;= 8</code>. After this <code>A</code> will be</p> <pre><code>array([ 1, -8, -7, -6, -5, -4, -3, -2,  9, 10])
</code></pre> <p>(Note that <code>A[mask] - 10</code> is used as the replacement values; passing <code>A - 10</code> directly would shift the values, because <code>np.place</code> just takes the first N elements of the value array.)</p>
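<p>Applied to the arrays from the question, the same idea would look roughly like this (a sketch):</p> <pre><code>mask = np.logical_and(x1 &gt;= 0, x1 &lt;= 1000)
np.place(x1, mask, x1[mask] - 150)   # x1 still holds all 248 values, shifted only where the mask was True
</code></pre>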
1
2016-09-22T09:25:39Z
[ "python", "arrays", "subtraction" ]
How to get array after adding a number to specific data of that array?
39,633,889
<pre><code>import numpy as np import xlrd import xlwt wb = xlrd.open_workbook('Scatter plot.xlsx') workbook = xlwt.Workbook() sheet = workbook.add_sheet("Sheet1") sh1 = wb.sheet_by_name('T180') sh2=wb.sheet_by_name("T181") x= np.array([sh1.col_values(1, start_rowx=51, end_rowx=315)]) y= np.array([sh1.col_values(2, start_rowx=51, end_rowx=315)]) x1= np.array([sh2.col_values(1, start_rowx=50, end_rowx=298)]) y1= np.array([sh2.col_values(2, start_rowx=50, end_rowx=298)]) condition = [(x1&lt;=1000) &amp; (x1&gt;=0) ] condition1 = [(y1&lt;=1000) &amp; (y1&gt;=0) ] x_prime=x1[condition]-150 y_prime= y[condition1]+20 plt.plot(x,y, "ro", label="T180") plt.plot(x_prime,y_prime,"gs") plt.show() </code></pre> <p>I want to subtract 150 from the values less than 1000 of x1 array and finally I need all values (subtracted+remaining). But with this code I got only the values that are less than 1000. But I need both (less than 1000+ greater than 1000). But greater than 1000 values will be unchanged. How can I will do this. As you can see there 248 elements in x1 array so after subtraction I will need 248 element as <strong>x_prime</strong>. Same as for y. Thanks in advance for your kind co-operation. </p>
1
2016-09-22T08:25:40Z
39,635,471
<pre><code>import numpy as np #random initialization x1=np.random.randint(1,high=3000, size=10) x_prime=x1.tolist() for i in range(len(x_prime)): if(x_prime[i]&lt;=1000 and x_prime[i]&gt;=0): x_prime[i]=x_prime[i]-150 x_prime=np.asarray(x_prime) </code></pre> <p><strong>Answer:</strong></p> <pre><code>x1 Out[151]: array([2285, 2243, 1716, 632, 2489, 2837, 2324, 2154, 562, 2508]) x_prime Out[152]: array([2285, 2243, 1716, 482, 2489, 2837, 2324, 2154, 412, 2508]) </code></pre>
0
2016-09-22T09:38:13Z
[ "python", "arrays", "subtraction" ]
How to get array after adding a number to specific data of that array?
39,633,889
<pre><code>import numpy as np import xlrd import xlwt wb = xlrd.open_workbook('Scatter plot.xlsx') workbook = xlwt.Workbook() sheet = workbook.add_sheet("Sheet1") sh1 = wb.sheet_by_name('T180') sh2=wb.sheet_by_name("T181") x= np.array([sh1.col_values(1, start_rowx=51, end_rowx=315)]) y= np.array([sh1.col_values(2, start_rowx=51, end_rowx=315)]) x1= np.array([sh2.col_values(1, start_rowx=50, end_rowx=298)]) y1= np.array([sh2.col_values(2, start_rowx=50, end_rowx=298)]) condition = [(x1&lt;=1000) &amp; (x1&gt;=0) ] condition1 = [(y1&lt;=1000) &amp; (y1&gt;=0) ] x_prime=x1[condition]-150 y_prime= y[condition1]+20 plt.plot(x,y, "ro", label="T180") plt.plot(x_prime,y_prime,"gs") plt.show() </code></pre> <p>I want to subtract 150 from the values less than 1000 of x1 array and finally I need all values (subtracted+remaining). But with this code I got only the values that are less than 1000. But I need both (less than 1000+ greater than 1000). But greater than 1000 values will be unchanged. How can I will do this. As you can see there 248 elements in x1 array so after subtraction I will need 248 element as <strong>x_prime</strong>. Same as for y. Thanks in advance for your kind co-operation. </p>
1
2016-09-22T08:25:40Z
39,645,577
<p>Here is a Pandas solution:</p> <pre><code>import matplotlib import matplotlib.pyplot as plt import pandas as pd matplotlib.style.use('ggplot') fn = r'/path/to/ExcelFile.xlsx' sheetname = 'T181' df = pd.read_excel(fn, sheetname=sheetname, skiprows=47, parse_cols='B:C').dropna(how='any') # customize X-values df.ix[df.eval('0 &lt;= GrvX &lt;= 1000'), 'GrvX'] -= 150 df.ix[df.eval('2500 &lt; GrvX &lt;= 3000'), 'GrvX'] += 50 df.ix[df.eval('3000 &lt; GrvX'), 'GrvX'] += 30 # customize Y-values df.ix[df.eval('0 &lt;= GrvY &lt;= 1000'), 'GrvX'] += 20 df.plot.scatter(x='GrvX', y='GrvY', marker='s', s=30, label=sheetname, figsize=(14,12)) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/YnEbM.png" rel="nofollow"><img src="http://i.stack.imgur.com/YnEbM.png" alt="enter image description here"></a></p>
0
2016-09-22T17:43:14Z
[ "python", "arrays", "subtraction" ]
Python multithreading doesn't use common memory when the parent object is created inside a process
39,633,936
<p>I'm trying to implement a thread which will run and do a task in the background. Here's my code which gives expected output.</p> <p><strong>Code 1:</strong></p> <pre><code>from time import sleep from threading import Thread class myTest( ): def myThread( self ): while True: sleep( 1 ) print self.myList.keys( ) if 'exit' in self.myList.keys( ): break return def __init__( self ): self.myList = { } self.thread = Thread( target = self.myThread, args = ( ) ) self.thread.start( ) return def myFun( self ): i = 0 while True: sleep( 0.5 ) self.myList[ i ] = i i += 1 if i &gt; 5 : self.myList[ 'exit' ] = '' break return x = myTest( ) x.myFun( ) </code></pre> <p><strong>Output 1:</strong></p> <pre><code>[0] [0, 1, 2] [0, 1, 2, 3, 4] [0, 1, 2, 3, 4, 5, 'exit'] </code></pre> <p>The moment I create a multi-process environment and create this object in the new sub-process, the thread is unable to access the common memory and the dictionary <code>myList</code> remains blank in the thread. Here is the code.</p> <p><strong>Code 2:</strong></p> <pre><code>from time import sleep from threading import Thread from multiprocessing import Process class myTest( ): def myThread( self ): while True: sleep( 1 ) print "myThread", self.myList.keys( ) if 'exit' in self.myList.keys( ): break return def __init__( self ): self.myList = { } self.thread = Thread( target = self.myThread, args = ( ) ) self.thread.start( ) self.i = 0 return def myFun( self ): self.myList[ self.i ] = self.i self.i += 1 if self.i &gt; 5 : self.myList[ 'exit' ] = '' return 'exit' print 'myFun', self.myList.keys( ) return class myProcess( Process ): def __init__( self ): super( myProcess, self ).__init__( ) self.myObject = myTest( ) return def run(self): while True: sleep( 0.5 ) x = self.myObject.myFun( ) if x == 'exit': break return x = myProcess( ) x.start( ) </code></pre> <p><strong>Output 2:</strong></p> <pre><code>myFun [0] myThread [] myFun [0, 1] myFun [0, 1, 2] myThread [] myFun [0, 1, 2, 3] myFun [0, 1, 2, 3, 4] myThread [] myThread [] myThread [] ... ... ... ... ... ... </code></pre> <p>In the 1st code, the object is created inside a Python process (Though main process). In the 2nd code, the object is created inside a sub-process of Python where everything takes place which should have behaved just like the first one.</p> <ol> <li>Can anybody explain why in the 2nd code, the thread is unable to use the shared memory of the parent object?</li> <li>I want the <code>output 1</code> in <code>code 2</code> i.e. force the thread to use the shared common memory of the parent object. What modification do I need to make?</li> </ol>
1
2016-09-22T08:28:26Z
39,635,639
<p>When you spawn the second process, roughly a <strong>copy</strong> of the first one is created, so <strong><em>the threads spawned in the 2nd process can only access that process's own memory</em></strong>.</p> <p>In <strong>code 2</strong> the <code>myTest</code> object is created in the constructor (<code>__init__</code>), which runs in the parent process, so it doesn't live in the same memory as the spawned process that executes <code>run</code>. Just by moving the object creation from the constructor into <code>run</code>, the object becomes part of the spawned process's memory.</p>
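<p>A minimal sketch of that change (everything else from the question's code kept as-is):</p> <pre><code>class myProcess( Process ):
    def __init__( self ):
        super( myProcess, self ).__init__( )
        # do NOT create myTest here: __init__ still runs in the parent process

    def run( self ):
        # run() executes in the child process, so the object (and the thread
        # it starts) now live in the child's memory
        self.myObject = myTest( )
        while True:
            sleep( 0.5 )
            x = self.myObject.myFun( )
            if x == 'exit':
                break
</code></pre>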
1
2016-09-22T09:45:47Z
[ "python", "multithreading", "multiprocessing" ]
Python multithreading doesn't use common memory when the parent object is created inside a process
39,633,936
<p>I'm trying to implement a thread which will run and do a task in the background. Here's my code which gives expected output.</p> <p><strong>Code 1:</strong></p> <pre><code>from time import sleep from threading import Thread class myTest( ): def myThread( self ): while True: sleep( 1 ) print self.myList.keys( ) if 'exit' in self.myList.keys( ): break return def __init__( self ): self.myList = { } self.thread = Thread( target = self.myThread, args = ( ) ) self.thread.start( ) return def myFun( self ): i = 0 while True: sleep( 0.5 ) self.myList[ i ] = i i += 1 if i &gt; 5 : self.myList[ 'exit' ] = '' break return x = myTest( ) x.myFun( ) </code></pre> <p><strong>Output 1:</strong></p> <pre><code>[0] [0, 1, 2] [0, 1, 2, 3, 4] [0, 1, 2, 3, 4, 5, 'exit'] </code></pre> <p>The moment I create a multi-process environment and create this object in the new sub-process, the thread is unable to access the common memory and the dictionary <code>myList</code> remains blank in the thread. Here is the code.</p> <p><strong>Code 2:</strong></p> <pre><code>from time import sleep from threading import Thread from multiprocessing import Process class myTest( ): def myThread( self ): while True: sleep( 1 ) print "myThread", self.myList.keys( ) if 'exit' in self.myList.keys( ): break return def __init__( self ): self.myList = { } self.thread = Thread( target = self.myThread, args = ( ) ) self.thread.start( ) self.i = 0 return def myFun( self ): self.myList[ self.i ] = self.i self.i += 1 if self.i &gt; 5 : self.myList[ 'exit' ] = '' return 'exit' print 'myFun', self.myList.keys( ) return class myProcess( Process ): def __init__( self ): super( myProcess, self ).__init__( ) self.myObject = myTest( ) return def run(self): while True: sleep( 0.5 ) x = self.myObject.myFun( ) if x == 'exit': break return x = myProcess( ) x.start( ) </code></pre> <p><strong>Output 2:</strong></p> <pre><code>myFun [0] myThread [] myFun [0, 1] myFun [0, 1, 2] myThread [] myFun [0, 1, 2, 3] myFun [0, 1, 2, 3, 4] myThread [] myThread [] myThread [] ... ... ... ... ... ... </code></pre> <p>In the 1st code, the object is created inside a Python process (Though main process). In the 2nd code, the object is created inside a sub-process of Python where everything takes place which should have behaved just like the first one.</p> <ol> <li>Can anybody explain why in the 2nd code, the thread is unable to use the shared memory of the parent object?</li> <li>I want the <code>output 1</code> in <code>code 2</code> i.e. force the thread to use the shared common memory of the parent object. What modification do I need to make?</li> </ol>
1
2016-09-22T08:28:26Z
39,636,139
<p>A forked process sees a snapshot of the parent's memory at the moment of <code>fork()</code>. When either parent or child changes a single bit, their memory mappings diverge: one sees what it changed, the other sees the old state. I've explained the theory in more detail in this answer: <a href="http://stackoverflow.com/a/29838782/73957">http://stackoverflow.com/a/29838782/73957</a></p> <p>If you want to share memory [changes], you have to do it explicitly. In Python, one of the simple ways is <code>multiprocessing.{Queue,Value,Array}</code>; see <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes" rel="nofollow">https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes</a></p>
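<p>As a sketch of one such explicit mechanism, here is the question's pattern rebuilt around a <code>Manager</code>-backed dict (names are illustrative, not the question's exact classes):</p> <pre><code>from time import sleep
from multiprocessing import Process, Manager

def watcher(shared):
    # runs in a child process but still sees the parent's updates,
    # because every access goes through the manager process
    while 'exit' not in shared:
        sleep(1)
        print(list(shared.keys()))

if __name__ == '__main__':
    shared = Manager().dict()
    p = Process(target=watcher, args=(shared,))
    p.start()
    for i in range(6):
        sleep(0.5)
        shared[i] = i
    shared['exit'] = ''
    p.join()
</code></pre>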
1
2016-09-22T10:06:53Z
[ "python", "multithreading", "multiprocessing" ]
Pandas: groupby with condition
39,634,175
<p>I have a dataframe:</p> <pre><code>ID,used_at,active_seconds,subdomain,visiting,category 123,2016-02-05 19:39:21,2,yandex.ru,2,Computers 123,2016-02-05 19:43:01,1,mail.yandex.ru,2,Computers 123,2016-02-05 19:43:13,6,mail.yandex.ru,2,Computers 234,2016-02-05 19:46:09,16,avito.ru,2,Automobiles 234,2016-02-05 19:48:36,21,avito.ru,2,Automobiles 345,2016-02-05 19:48:59,58,avito.ru,2,Automobiles 345,2016-02-05 19:51:21,4,avito.ru,2,Automobiles 345,2016-02-05 19:58:55,4,disk.yandex.ru,2,Computers 345,2016-02-05 19:59:21,2,mail.ru,2,Computers 456,2016-02-05 19:59:27,2,mail.ru,2,Computers 456,2016-02-05 20:02:15,18,avito.ru,2,Automobiles 456,2016-02-05 20:04:55,8,avito.ru,2,Automobiles 456,2016-02-05 20:07:21,24,avito.ru,2,Automobiles 567,2016-02-05 20:09:03,58,avito.ru,2,Automobiles 567,2016-02-05 20:10:01,26,avito.ru,2,Automobiles 567,2016-02-05 20:11:51,30,disk.yandex.ru,2,Computers </code></pre> <p>I need to do</p> <pre><code>group = df.groupby(['category']).agg({'active_seconds': sum}).rename(columns={'active_seconds': 'count_sec_target'}).reset_index() </code></pre> <p>but I want to add a condition connected with</p> <pre><code>df.groupby(['category'])['ID'].count() </code></pre> <p>and if the count for a <code>category</code> is less than <code>5</code>, I want to drop that category. I don't know how to write this condition in there.</p>
1
2016-09-22T08:40:00Z
39,634,269
<p>As <a href="http://stackoverflow.com/questions/39634175/pandas-groupby-with-condition/39634269#comment66572870_39634175">EdChum commented</a>, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#filtration" rel="nofollow"><code>filter</code></a>:</p> <p>Also you can simplify aggregation by <code>sum</code>:</p> <pre><code>df = df.groupby(['category']).filter(lambda x: len(x) &gt;= 5) group = df.groupby(['category'], as_index=False)['active_seconds'] .sum() .rename(columns={'active_seconds': 'count_sec_target'}) print (group) category count_sec_target 0 Automobiles 233 1 Computers 47 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>group = df.groupby(['category'])['active_seconds'].sum().reset_index(name='count_sec_target') print (group) category count_sec_target 0 Automobiles 233 1 Computers 47 </code></pre>
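<p>If you prefer to avoid the Python-level <code>lambda</code>, a vectorised sketch of the same pre-filtering step uses <code>transform</code> (same <code>df</code> as above):</p> <pre><code>df = df[df.groupby('category')['ID'].transform('count') &gt;= 5]
</code></pre>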
2
2016-09-22T08:45:18Z
[ "python", "pandas", "filter", "group-by", "condition" ]
How can I merge or concatenate data frames having non-equal column numbers in Spark
39,634,215
<p>I am doing a project using Spark. At some stage I need to merge or concatenate 3 data frames into a single data frame. These data frames come from Spark SQL tables. I have used the union function, which already merges two tables with the same number of columns, but I need to merge data frames with unequal column counts too. I am confused now: is there any way of merging or concatenating data frames with unequal columns in pyspark? Kindly guide me.</p>
0
2016-09-22T08:42:35Z
39,636,216
<p>You could add a column with a default value before merging.</p> <pre><code>from pyspark.sql.functions import lit updDf = df2.withColumn('zero_column', lit(0)) df1.union(updDf) </code></pre>
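<p>One caveat worth noting (a sketch; <code>union</code> exists on DataFrames from Spark 2.0, use <code>unionAll</code> on 1.x): <code>union</code> matches columns by <em>position</em>, not by name, so after padding the narrower frame you may want to align its column order with the other frame first. The column name <code>'extra'</code> below is illustrative:</p> <pre><code>from pyspark.sql.functions import lit

# suppose df1 has one column, here called 'extra', that df2 lacks
updDf = df2.withColumn('extra', lit(0)).select(df1.columns)  # align column order
merged = df1.union(updDf)
</code></pre>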
0
2016-09-22T10:10:41Z
[ "python", "apache-spark", "pyspark", "apache-spark-sql", "spark-dataframe" ]
Is there a way to remove proper nouns from a sentence using python?
39,634,222
<p>Is there any package which I can use to remove proper nouns from a sentence using Python?</p> <p>I know of a few packages like NLTK, Stanford and TextBlob which do the job (remove names), but they also remove a lot of words which start with a capital letter but are not proper nouns.</p> <p>Also, I cannot have a dictionary of names because it would be huge and would keep extending as the data keeps populating the DB.</p>
1
2016-09-22T08:42:58Z
39,635,503
<p>If you want to just remove single words that are proper nouns, you can use <code>nltk</code> and tag your sentence in question, then remove all words with the tags that are proper nouns.</p> <pre><code>&gt;&gt;&gt; import nltk &gt;&gt;&gt; nltk.tag.pos_tag("I am named John Doe".split()) [('I', 'PRP'), ('am', 'VBP'), ('named', 'VBN'), ('John', 'NNP'), ('Doe', 'NNP')] </code></pre> <p>The default tagger uses the <a href="https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html" rel="nofollow">Penn Treebank POS tagset</a> which has only two proper noun tags: <code>NNP</code> and <code>NNPS</code></p> <p>So you can just do the following:</p> <pre><code>&gt;&gt;&gt; sentence = "I am named John Doe" &gt;&gt;&gt; tagged_sentence = nltk.tag.pos_tag(sentence.split()) &gt;&gt;&gt; edited_sentence = [word for word,tag in tagged_sentence if tag != 'NNP' and tag != 'NNPS'] &gt;&gt;&gt; print(' '.join(edited_sentence)) I am named </code></pre> <p>Now, just as a warning, <a href="https://en.wikipedia.org/wiki/Part-of-speech_tagging" rel="nofollow">POS tagging</a> is not 100% accurate and may mistag some ambiguous words. Also, you will not capture <a href="https://en.wikipedia.org/wiki/Named-entity_recognition" rel="nofollow">Named Entities</a> in this way as they are multiword in nature.</p>
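<p>For the multiword Named Entities mentioned above, a rough sketch using NLTK's chunker (this needs the <code>maxent_ne_chunker</code> and <code>words</code> resources downloaded; treat it as illustrative rather than production-ready):</p> <pre><code>import nltk

sentence = "I am named John Doe"
tagged = nltk.tag.pos_tag(sentence.split())
tree = nltk.ne_chunk(tagged)
# top-level items are (word, tag) tuples, except named entities,
# which are grouped into subtrees -- keep only the plain tuples
edited = [leaf[0] for leaf in tree if isinstance(leaf, tuple)]
print(' '.join(edited))  # -&gt; "I am named"
</code></pre>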
1
2016-09-22T09:39:39Z
[ "python", "python-2.7" ]
ctypes c_char_p as an input to a function in python
39,634,225
<p>I am writing a C DLL and a python script for it as below:</p> <pre><code>//test.c __declspec(dllexport) HRESULT datafromfile(char * filename) { HRESULT errorCode = S_FALSE; if (pFileName == NULL) { return E_INVALIDARG; } FILE *fp = NULL; fopen_s(&amp;fp, pFileName, "rb"); if (fp == NULL) return E_FAIL; some more lines of code are there.... </code></pre> <p>The python script that i had written is as follows:</p> <pre><code>//test.py import sys import ctypes from ctypes import * def datafromfile(self,FileName): self.mydll=CDLL('test.dll') self.mydll.datafromfile.argtypes = [c_char_p] self.mydll.datafromfile.restypes = HRESULT self.mydll.datafromfile(FileName) </code></pre> <p>While calling in the main I'm assigning the filename as:</p> <pre><code>FileName = 'abcd.EDID' datafromfile(FileName) </code></pre> <p>But the code is giving an error, Windows <code>Error: Unspecified Error</code>. Can anybody help me in how to pass the <code>c_char_p</code> to the function as shown above?</p>
0
2016-09-22T08:43:06Z
39,640,169
<p>A very silly mistake on my part: I had kept the EDID file in one folder and was running the script from another. Instead of giving only the filename, I had to give the relative path.</p>
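<p>In other words, a relative path handed to the DLL's <code>fopen</code> is resolved against the process's current working directory, not against the script's location. A small illustrative snippet that makes the call independent of where the script is launched from:</p> <pre><code>import os

# build an absolute path next to the script itself (file name from the question)
file_name = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'abcd.EDID')
datafromfile(file_name)
</code></pre>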
0
2016-09-22T13:16:54Z
[ "python", "c", "dll", "ctypes" ]
Pandas append multiple columns for a single one
39,634,312
<p>How can I use pandas to append multiple KPI values per single customer efficiently?</p> <p>A join of the <code>pivoted</code> df with the <code>customers</code> df causes some problems because the country is the index of the pivoted data frame and the nationality is not in the index.</p> <pre><code>countryKPI = pd.DataFrame({'country':['Austria','Germany', 'Germany', 'Austria'], 'indicator':['z','x','z','x'], 'value':[7,8,9,7]}) customers = pd.DataFrame({'customer':['first','second'], 'nationality':['Germany','Austria'], 'value':[7,8]}) </code></pre> <p>See the desired result in pink: <a href="http://i.stack.imgur.com/VCbYn.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/VCbYn.jpg" alt="enter image description here"></a></p>
1
2016-09-22T08:46:57Z
39,634,397
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p> <pre><code>df_pivoted = countryKPI.pivot_table(index='country', columns='indicator', values='value', fill_value=0) print (df_pivoted) indicator x z country Austria 7 7 Germany 8 9 print (pd.concat([customers.set_index('nationality'), df_pivoted], axis=1)) customer value x z Austria second 8 7 7 Germany first 7 8 9 print (pd.concat([customers.set_index('nationality'), df_pivoted], axis=1) .reset_index() .rename(columns={'index':'nationality'}) [['customer','nationality','value','x','z']]) customer nationality value x z 0 second Austria 8 7 7 1 first Germany 7 8 9 </code></pre> <p>EDIT by comments:</p> <p>The problem is that the <code>dtypes</code> of the columns <code>customers.nationality</code> and <code>countryKPI.country</code> are <code>category</code>, and if some categories are missing, it raises an error:</p> <blockquote> <p>ValueError: incompatible categories in categorical concat </p> </blockquote> <p>The solution is to find the common categories by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.union.html" rel="nofollow"><code>union</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html#setting-categories" rel="nofollow">set_categories</a>:</p> <pre><code>import pandas as pd import numpy as np countryKPI = pd.DataFrame({'country':['Austria','Germany', 'Germany', 'Austria'], 'indicator':['z','x','z','x'], 'value':[7,8,9,7]}) customers = pd.DataFrame({'customer':['first','second'], 'nationality':['Slovakia','Austria'], 'value':[7,8]}) customers.nationality = customers.nationality.astype('category') countryKPI.country = countryKPI.country.astype('category') print (countryKPI.country.cat.categories) Index(['Austria', 'Germany'], dtype='object') print (customers.nationality.cat.categories) Index(['Austria', 'Slovakia'], dtype='object') all_categories = countryKPI.country.cat.categories.union(customers.nationality.cat.categories) print (all_categories) Index(['Austria', 'Germany', 'Slovakia'], dtype='object') customers.nationality = customers.nationality.cat.set_categories(all_categories) countryKPI.country = countryKPI.country.cat.set_categories(all_categories) </code></pre> <pre><code>df_pivoted = countryKPI.pivot_table(index='country', columns='indicator', values='value', fill_value=0) print (df_pivoted) indicator x z country Austria 7 7 Germany 8 9 Slovakia 0 0 print (pd.concat([customers.set_index('nationality'), df_pivoted], axis=1) .reset_index() .rename(columns={'index':'nationality'}) [['customer','nationality','value','x','z']]) customer nationality value x z 0 second Austria 8.0 7 7 1 NaN Germany NaN 8 9 2 first Slovakia 7.0 0 0 </code></pre> <p>If you need better performance, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> instead of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a>:</p> <pre><code>df_pivoted1 = countryKPI.groupby(['country','indicator']) .mean() .squeeze() .unstack() .fillna(0) print (df_pivoted1) indicator x z country Austria 7.0 7.0 Germany 8.0 9.0 Slovakia 0.0 0.0 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [177]: %timeit countryKPI.pivot_table(index='country', columns='indicator', values='value', fill_value=0) 100 loops, best of 3: 6.24 ms per loop In [178]: %timeit countryKPI.groupby(['country','indicator']).mean().squeeze().unstack().fillna(0) 100 loops, best of 3: 4.28 ms per loop </code></pre>
2
2016-09-22T08:50:59Z
[ "python", "pandas" ]
Pandas append multiple columns for a single one
39,634,312
<p>How can I use pandas to append multiple KPI values per single customer efficiently?</p> <p>A join of the <code>pivoted</code> df with the <code>customers</code> df causes some problems because the country is the index of the pivoted data frame and the nationality is not in the index.</p> <pre><code>countryKPI = pd.DataFrame({'country':['Austria','Germany', 'Germany', 'Austria'], 'indicator':['z','x','z','x'], 'value':[7,8,9,7]}) customers = pd.DataFrame({'customer':['first','second'], 'nationality':['Germany','Austria'], 'value':[7,8]}) </code></pre> <p>See the desired result in pink: <a href="http://i.stack.imgur.com/VCbYn.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/VCbYn.jpg" alt="enter image description here"></a></p>
1
2016-09-22T08:46:57Z
39,635,606
<p>You could counter the mismatch in the categories through <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow"><code>merge</code></a>:</p> <pre><code>df = pd.pivot_table(data=countryKPI, index=['country'], columns=['indicator']) df.index.name = 'nationality' customers.merge(df['value'].reset_index(), on='nationality', how='outer') </code></pre> <p><a href="http://i.stack.imgur.com/W5lbD.png" rel="nofollow"><img src="http://i.stack.imgur.com/W5lbD.png" alt="Image"></a></p> <p>Data:</p> <pre><code>countryKPI = pd.DataFrame({'country':['Austria','Germany', 'Germany', 'Austria'], 'indicator':['z','x','z','x'], 'value':[7,8,9,7]}) customers = pd.DataFrame({'customer':['first','second'], 'nationality':['Slovakia','Austria'], 'value':[7,8]}) </code></pre> <hr> <p>The problem appears to be that you have got a <code>CategoricalIndex</code> in your <code>DF</code> resulting from the <code>pivot</code> operation, and when you perform <code>reset_index</code> on it, pandas complains with that error.</p> <p>Simply reverse-engineer it: check the <code>dtypes</code> of the <code>countryKPI</code> and <code>customers</code> DataFrames, and wherever <code>category</code> is mentioned, convert those columns to their <code>string</code> representation via <code>astype(str)</code>.</p> <hr> <p><strong>Reproducing the Error and Countering it:</strong></p> <p>Assume the <code>DF</code> to be the one mentioned above:</p> <pre><code>countryKPI['indicator'] = countryKPI['indicator'].astype('category') countryKPI['country'] = countryKPI['country'].astype('category') customers['nationality'] = customers['nationality'].astype('category') countryKPI.dtypes country category indicator category value int64 dtype: object customers.dtypes customer object nationality category value int64 dtype: object </code></pre> <p>After the <code>pivot</code> operation:</p> <pre><code>df = pd.pivot_table(data=countryKPI, index=['country'], columns=['indicator']) df.index CategoricalIndex(['Austria', 'Germany'], categories=['Austria', 'Germany'], ordered=False, name='country', dtype='category') # ^^ See the categorical index </code></pre> <p>When you perform <code>reset_index</code> on that:</p> <pre><code>df.reset_index() </code></pre> <blockquote> <p>TypeError: cannot insert an item into a CategoricalIndex that is not already an existing category</p> </blockquote> <p>To counter that error, simply cast the categorical columns to <code>str</code> type:</p> <pre><code>countryKPI['indicator'] = countryKPI['indicator'].astype('str') countryKPI['country'] = countryKPI['country'].astype('str') customers['nationality'] = customers['nationality'].astype('str') </code></pre> <p>Now the <code>reset_index</code> part works, and so does the <code>merge</code>.</p>
1
2016-09-22T09:43:56Z
[ "python", "pandas" ]
Django Restful: return absolute URLs
39,634,342
<p>So I have a fairly straight forward serializer in <code>serializers.py</code></p> <pre><code>class ScheduleSerializer(serializers.ModelSerializer): class Meta: model = FrozenSchedule fields = ['startDate', 'endDate', 'client', 'url'] startDate = serializers.DateField(source='start_date') endDate = serializers.DateField(source='end_date') client = serializers.StringRelatedField(many=False) url = serializers.URLField(source='get_absolute_url') </code></pre> <p><code>get_absolute_url</code> in my <code>models.py</code></p> <pre><code>def get_absolute_url(self): return reverse('reports:frozenschedule-detail', kwargs={ 'slug': self.client.slug, 'pk': self.id }) </code></pre> <p>And it's related ViewSet in <code>viewsets.py</code></p> <pre><code>class ScheduleViewSet(viewsets.ReadOnlyModelViewSet): queryset = FrozenSchedule.objects.not_abandoned().future()\ .filter(signed=False).order_by('start_date') serializer_class = serializers.ScheduleSerializer </code></pre> <p>It returns JSON which looks like this:</p> <pre><code> [ { "startDate": "2016-10-01", "endDate": null, "client": "Abscissa.Com Limited", "url": "/clients/abscissac/frozenschedule/1", } ] </code></pre> <p>But I'd like it to return the complete URL, not just the relative path</p> <pre><code>[ { "startDate": "2016-10-01", "endDate": null, "client": "Abscissa.Com Limited", "url": "http://localhost:8000/clients/abscissac/frozenschedule/1", } ] </code></pre> <p>Could I serialize URL's this way inside my Serializer?</p> <p>The Restful documentation states that the rest_framework <code>reverse</code> function does exactly what I need. But it requires the request object to build the UR <a href="http://www.django-rest-framework.org/api-guide/reverse/" rel="nofollow">http://www.django-rest-framework.org/api-guide/reverse/</a></p>
1
2016-09-22T08:48:03Z
39,641,783
<p>Override <code>HyperlinkedIdentityField</code>.</p> <pre><code>from rest_framework.serializers import HyperlinkedIdentityField class UrlHyperlinkedIdentityField(HyperlinkedIdentityField): def get_url(self, obj, view_name, request, format): if obj.pk is None: return None return self.reverse(view_name, kwargs={ 'slug': obj.client.slug, 'pk': obj.id, }, request=request, format=format, ) </code></pre> <p>Then in serializers.py:</p> <pre><code>url = UrlHyperlinkedIdentityField(view_name='reports:frozenschedule-detail') </code></pre>
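<p>One caveat: <code>HyperlinkedIdentityField</code> builds the absolute URI from the request, so the serializer needs the request in its context. The <code>ReadOnlyModelViewSet</code> above passes it automatically; if you ever instantiate the serializer by hand, a sketch of what's needed:</p> <pre><code>serializer = ScheduleSerializer(schedule, context={'request': request})
</code></pre>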
6
2016-09-22T14:25:40Z
[ "python", "django", "rest", "django-rest-framework" ]
4 dimensional nested dictionary to pandas data frame
39,634,369
<p>I need your help with converting a multidimensional dict to a pandas data frame. I get the dict from a JSON file which I retrieve from a API call (Shopify).</p> <pre><code>response = requests.get("URL", auth=("ID","KEY")) data = json.loads(response.text) </code></pre> <p>The "data" dictionary looks as follows:</p> <pre><code> {'orders': [{'created_at': '2016-09-20T22:04:49+02:00', 'email': 'test@aol.com', 'id': 4314127108, 'line_items': [{'destination_location': {'address1': 'Teststreet 12', 'address2': '', 'city': 'Berlin', 'country_code': 'DE', 'id': 2383331012, 'name': 'Test Test', 'zip': '10117'}, 'gift_card': False, 'name': 'Blueberry Cup'}, {'destination_location': {'address1': 'Teststreet 12', 'address2': '', 'city': 'Berlin', 'country_code': 'DE', 'id': 2383331012, 'name': 'Test Test', 'zip': '10117'}, 'gift_card': False, 'name': 'Strawberry Cup'}] }]} </code></pre> <p>In this case the dictionary has 4 Dimensions and I would like to convert the dict into a pandas data frame. I tried everything ranging from json_normalize() to pandas.DataFrame.from_dict(), yet I did not manage to get anywhere. When I try to convert the dict to a df, I get columns which contain list of lists.</p> <p>My goal is to have an individual row per product. Thanks!</p> <p>Desired Output:</p> <pre><code>Created at Email id Name 9/20/2016 test@test.de 4314127108 Blueberry Cup 9/20/2016 test@test.de 4314127108 Strawberry Cup </code></pre>
0
2016-09-22T08:49:14Z
39,638,210
<p>I don't really understand how <code>json_normalize()</code> fails this so hard; I have similar data with twice the nesting depth and <code>json_normalize()</code> still manages to give me a much better result.</p> <p>I wrote this recursive function to replace the lists in your example with dictionaries:</p> <pre><code>def removeList(D): for k in D.keys(): if isinstance(D[k], list): T = {} for i in range(len(D[k])): T[str(i)] = D[k][i] D[k] = removeList(T) elif isinstance(D[k], dict): D[k] = removeList(D[k]) return D </code></pre> <p><code>json_normalize()</code> can work with the outcome of that a bit better at least.</p> <p>However I recommend doing it manually even if it's annoying. You can create your own dictionary with your own desired structure, write all data manually into it and then convert that into your dataframe. It's a good way to check the data for consistency and do all the flattening, preprocessing and normalizing you need.</p> <p>Since I have data that's structured similarly to yours, I'm using a two-step process. In the first step I create a flattened dictionary that consists of no other dictionaries but still has a list in one key (that would be <code>line_items</code> in your case). Each list entry is also flattened into a simple dictionary. Then I create a second dataframe from that list of dictionaries like this:</p> <pre><code>ListDF = pd.DataFrame.from_dict([iFr for sl in DF["List"] for iFr in sl]) </code></pre> <p>Since I did all the normalizing manually I was able to add the necessary keys to the list items, so now I can just merge the two dataframes into my final dataframe using those keys. Then I drop the <code>List</code> column and my final data structure is complete: I went from a horribly nested dictionary to a simple relational scheme that can be easily worked with.</p> <p>I figure this would also work best for you.</p>
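<p>For completeness, a small manual sketch for the exact example data in the question, building one row per line item as in the desired output (column names taken from the question):</p> <pre><code>import pandas as pd

rows = []
for order in data['orders']:
    for item in order['line_items']:
        rows.append({
            'Created at': order['created_at'],
            'Email': order['email'],
            'id': order['id'],
            'Name': item['name'],
        })
df = pd.DataFrame(rows, columns=['Created at', 'Email', 'id', 'Name'])
</code></pre>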
2
2016-09-22T11:49:08Z
[ "python", "json", "pandas", "dictionary", "dataframe" ]
Support required with displaying a slideshow of Images in Python w/ TKinter
39,634,374
<p>I am trying to make a set of code that will open a window and display 6 images in sequence over and over again very quickly for 10 seconds. This is my code; however, the program simply opens a blank screen. What do I do?</p> <pre><code>import time import tkinter as tk root = tk.Tk() root.overrideredirect(True) width = root.winfo_screenwidth() height = root.winfo_screenwidth() root.geometry('%dx%d' % (width*1, height*1)) def SS_Part1(): image_file_ssp1 = "goat1.gif" image = tk.PhotoImage(file=image_file_ssp1) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() def SS_Part2(): image_file_ssp2 = "goat2.gif" image = tk.PhotoImage(file=image_file_ssp2) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() def SS_Part3(): image_file_ssp3 = "goat3.gif" image = tk.PhotoImage(file=image_file_ssp3) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() def SS_Part4(): image_file_ssp4 = "goat4.gif" image = tk.PhotoImage(file=image_file_ssp4) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() def SS_Part5(): image_file_ssp5 = "goat5.gif" image = tk.PhotoImage(file=image_file_ssp5) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() def SS_Part6(): image_file_ssp6 = "goat6.gif" image = tk.PhotoImage(file=image_file_ssp6) canvas = tk.Canvas(root, height=height*1, width=width*1, bg="black") canvas.create_image(width*1/2, height*1/2, image=image) canvas.pack() t_end = time.time() + 10 while time.time() &lt; t_end: SS_Part1() time.sleep(0.05) SS_Part2() time.sleep(0.05) SS_Part3() time.sleep(0.05) SS_Part4() time.sleep(0.05) SS_Part5() time.sleep(0.05) SS_Part6() root.mainloop() </code></pre>
0
2016-09-22T08:49:36Z
39,634,732
<p>Here are some changes to your code; it should work properly.</p> <pre><code>import tkinter as tk from itertools import cycle # Pillow is a foreign library that needs to be installed (pip install Pillow) from PIL.ImageTk import PhotoImage images = ["first1.jpg", "first2.jpg", "first3.jpg", "first4.jpg"] photos = cycle(PhotoImage(file=image) for image in images) def slideShow(): img = next(photos) displayCanvas.config(image=img) root.after(50, slideShow) # 0.05 seconds root = tk.Tk() root.overrideredirect(True) width = root.winfo_screenwidth() height = root.winfo_screenwidth() root.geometry('%dx%d' % (640, 480)) displayCanvas = tk.Label(root) displayCanvas.pack() root.after(10, lambda: slideShow()) root.mainloop() </code></pre> <p>This is the object-oriented version of the above code, <strong><em>recommended</em></strong>. The code below will work for a full-screen slideshow.</p> <pre><code>from itertools import cycle import tkinter as tk # Pillow is a foreign library that needs to be installed (pip install Pillow) from PIL.ImageTk import PhotoImage images = [ "first1.jpg", "first2.jpg", "first3.jpg", "first4.jpg"] class Imagewindow(tk.Tk): def __init__(self): tk.Tk.__init__(self) self.photos = cycle( PhotoImage(file=image) for image in images ) self.displayCanvas = tk.Label(self) self.displayCanvas.pack() def slideShow(self): img = next(self.photos) self.displayCanvas.config(image=img) self.after(50, self.slideShow) # 0.05 seconds def run(self): self.mainloop() root = Imagewindow() width = root.winfo_screenwidth() height = root.winfo_screenwidth() root.overrideredirect(True) root.geometry('%dx%d' % (width*1, height*1)) root.slideShow() root.run() </code></pre>
1
2016-09-22T09:06:18Z
[ "python", "tkinter", "gif" ]
value error: setting an array element with a sequence
39,634,375
<p>I'm fitting a scikit-learn model (an ExtraTreesRegressor) with the aim of doing supervised feature selection.</p> <p>I've made a toy example in order to be as clear as possible. This is the toy code:</p> <pre><code>import pandas as pd import numpy as np from sklearn.ensemble import ExtraTreesRegressor from itertools import chain # Original Dataframe df = pd.DataFrame({"A": [[10,15,12,14],[20,30,10,43]], "R":[2,2] ,"C":[2,2] , "CLASS":[1,0]}) X = np.array([np.array(df.A).reshape(1,4) , df.C , df.R]) Y = np.array(df.CLASS) # prints X = np.array([np.array(df.A), df.C , df.R]) Y = np.array(df.CLASS) print("X",X) print("Y",Y) print(df) df['A'].apply(lambda x: print("ORIGINAL SHAPE",np.array(x).shape,"field:",x)) df['A'] = df['A'].apply(lambda x: np.array(x).reshape(4,1),"field:",x) df['A'].apply(lambda x: print("RESHAPED SHAPE",np.array(x).shape,"field:",x)) model = ExtraTreesRegressor() model.fit(X,Y) model.feature_importances_ </code></pre> <pre><code>X [[[10, 15, 12, 14] [20, 30, 10, 43]] [2 2] [2 2]] Y [1 0] A C CLASS R 0 [10, 15, 12, 14] 2 1 2 1 [20, 30, 10, 43] 2 0 2 ORIGINAL SHAPE (4,) field: [10, 15, 12, 14] ORIGINAL SHAPE (4,) field: [20, 30, 10, 43] --------------------------- </code></pre> <p>This is the raised exception:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-37-5a36c4c17ea0&gt; in &lt;module&gt;() 7 print(df) 8 model = ExtraTreesRegressor() ----&gt; 9 model.fit(X,Y) /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/ensemble/forest.py in fit(self, X, y, sample_weight) 210 """ 211 # Validate or convert input data --&gt; 212 X = check_array(X, dtype=DTYPE, accept_sparse="csc") 213 if issparse(X): 214 # Pre-sort indices to avoid that each individual tree of the /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 371 force_all_finite) 372 else: --&gt; 373 array = np.array(array, dtype=dtype, order=order, copy=copy) 374 375 if ensure_2d: ValueError: setting an array element with a sequence. </code></pre> <p>I've noticed that it involves np.arrays. So I've tried to fit another toy dataframe, the most basic one, with only scalars, and no errors were raised. I've kept the same code and just modified the toy dataframe by adding another field that contains one-dimensional arrays, and now the same exception was raised.</p> <p>I've looked around but so far I haven't found a solution, even after trying some reshapes, conversions into lists, np.array etc., and matrices in my real problem. I'm still trying along this direction.</p> <p>I've also seen that this kind of problem usually arises when there are arrays with different lengths between samples, but that is not the case in the toy example.</p> <p>Does anyone know how to deal with these structures/this exception? Thanks in advance for any help.</p>
0
2016-09-22T08:49:39Z
39,634,485
<p>To convert a Pandas DataFrame to a NumPy matrix:</p> <pre><code>import numpy as np import pandas as pd def df2mat(df): a = df.as_matrix() n = a.shape[0] m = len(a[0]) b = np.zeros((n,m)) for i in range(n): for j in range(m): b[i, j] = a[i][j] return b df = pd.DataFrame({"A":[[1,2],[3,4]]}) b = df2mat(df.A) </code></pre> <p>Then concatenate.</p>
1
2016-09-22T08:55:18Z
[ "python", "arrays", "numpy", "scikit-learn", "data-fitting" ]
value error: setting an array element with a sequence
39,634,375
<p>I'm fitting a scikit-learn model (an ExtraTreesRegressor) with the aim of doing supervised feature selection.</p> <p>I've made a toy example in order to be as clear as possible. This is the toy code:</p> <pre><code>import pandas as pd import numpy as np from sklearn.ensemble import ExtraTreesRegressor from itertools import chain # Original Dataframe df = pd.DataFrame({"A": [[10,15,12,14],[20,30,10,43]], "R":[2,2] ,"C":[2,2] , "CLASS":[1,0]}) X = np.array([np.array(df.A).reshape(1,4) , df.C , df.R]) Y = np.array(df.CLASS) # prints X = np.array([np.array(df.A), df.C , df.R]) Y = np.array(df.CLASS) print("X",X) print("Y",Y) print(df) df['A'].apply(lambda x: print("ORIGINAL SHAPE",np.array(x).shape,"field:",x)) df['A'] = df['A'].apply(lambda x: np.array(x).reshape(4,1),"field:",x) df['A'].apply(lambda x: print("RESHAPED SHAPE",np.array(x).shape,"field:",x)) model = ExtraTreesRegressor() model.fit(X,Y) model.feature_importances_ </code></pre> <pre><code>X [[[10, 15, 12, 14] [20, 30, 10, 43]] [2 2] [2 2]] Y [1 0] A C CLASS R 0 [10, 15, 12, 14] 2 1 2 1 [20, 30, 10, 43] 2 0 2 ORIGINAL SHAPE (4,) field: [10, 15, 12, 14] ORIGINAL SHAPE (4,) field: [20, 30, 10, 43] --------------------------- </code></pre> <p>This is the raised exception:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-37-5a36c4c17ea0&gt; in &lt;module&gt;() 7 print(df) 8 model = ExtraTreesRegressor() ----&gt; 9 model.fit(X,Y) /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/ensemble/forest.py in fit(self, X, y, sample_weight) 210 """ 211 # Validate or convert input data --&gt; 212 X = check_array(X, dtype=DTYPE, accept_sparse="csc") 213 if issparse(X): 214 # Pre-sort indices to avoid that each individual tree of the /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 371 force_all_finite) 372 else: --&gt; 373 array = np.array(array, dtype=dtype, order=order, copy=copy) 374 375 if ensure_2d: ValueError: setting an array element with a sequence. </code></pre> <p>I've noticed that it involves np.arrays. So I've tried to fit another toy dataframe, the most basic one, with only scalars, and no errors were raised. I've kept the same code and just modified the toy dataframe by adding another field that contains one-dimensional arrays, and now the same exception was raised.</p> <p>I've looked around but so far I haven't found a solution, even after trying some reshapes, conversions into lists, np.array etc., and matrices in my real problem. I'm still trying along this direction.</p> <p>I've also seen that this kind of problem usually arises when there are arrays with different lengths between samples, but that is not the case in the toy example.</p> <p>Does anyone know how to deal with these structures/this exception? Thanks in advance for any help.</p>
0
2016-09-22T08:49:39Z
39,637,144
<p>Have a closer look at your X:</p> <pre><code>&gt;&gt;&gt; X array([[[10, 15, 12, 14], [20, 30, 10, 43]], [2, 2], [2, 2]], dtype=object) &gt;&gt;&gt; type(X[0,0]) &lt;class 'list'&gt; </code></pre> <p>Notice that it's <code>dtype=object</code>, and one of these objects is a <code>list</code>, hence "setting an array element with a sequence". Part of the problem is that <code>np.array(df.A)</code> does not correctly create a 2D array:</p> <pre><code>&gt;&gt;&gt; np.array(df.A) array([[10, 15, 12, 14], [20, 30, 10, 43]], dtype=object) &gt;&gt;&gt; _.shape (2,) # oops! </code></pre> <p>But using <code>np.stack(df.A)</code> fixes the problem.</p> <p>Are you looking for:</p> <pre><code>&gt;&gt;&gt; X = np.concatenate([ np.stack(df.A), # condense A to (N, 4) np.expand_dims(df.C, axis=-1), # expand C to (N, 1) np.expand_dims(df.R, axis=-1), # expand R to (N, 1) ], axis=-1 ) &gt;&gt;&gt; X array([[10, 15, 12, 14, 2, 2], [20, 30, 10, 43, 2, 2]], dtype=int64) </code></pre>
1
2016-09-22T10:57:03Z
[ "python", "arrays", "numpy", "scikit-learn", "data-fitting" ]
Are methods instantiated with classes, consuming lots of memory (in Scala)?
39,634,577
<h2>Situation</h2> <p>I am going to build a program (in Scala or Python - not yet decided) that is intensive on data manipulation. I see two major approaches:</p> <ol> <li><strong>Approach:</strong> Define a collection of the data. Write my function. Send the entire dataset through the function.</li> <li><strong>Approach:</strong> Define a data class that represents a single data entity and code the method (class member) into the data class. Parts of the method that should be flexible are sent to the method via a Scala Function or a Python lambda.</li> </ol> <h2>Side question</h2> <p>I am not sure, but the first approach might be more like functional programming and the second more like OOP - is that right? By the way, I love both Functional Programming and OOP (some say they are opposites of each other, but Odersky tried his best to disprove that with Scala).</p> <h2>Main question</h2> <p>I prefer the second approach, because</p> <ol> <li>It seems more concise to me.</li> <li>It makes it easier to distribute the program in a shared-nothing architecture Big Data setting, since it brings functionality to the data and not data to functionality, following the principle of data locality.</li> </ol> <p>However, <em>I worry that if I have a lot of data (and I do), I will have a lot of memory consumption because the method might have to be instantiated so many times.</em></p> <ol> <li><strong>Question:</strong> Is that true for Scala/JVM - if not, how is it solved?</li> <li><strong>Question:</strong> Is that true for Python - if not, how is it solved?</li> </ol> <h2>Follow-up question</h2> <p>Leading me up to: Which approach should I choose?</p> <h2>More Context</h2> <ul> <li>I have a lot of data (millions, potentially billions of data objects)</li> <li>I have not that many functions to implement. To give a ballpark figure, let's say about 10.</li> <li>I expect a lot of calls to the methods though.</li> <li>Let's say I have 100 calls per data entity, then I would have 100 * 1 million calls for the entire program.</li> <li>My data class represents a single entity, not the entire dataset. </li> <li>My worry is that with each instantiation of my DataObject class, the code of the method needs to be duplicated, which would cost a lot of memory and processing power. I have no idea how the internals of the JVM and Python work in this regard and whether that is true - that is what I am asking.</li> </ul> <p>Here is a crude DataObject class:</p> <pre><code>class DataObject { List datavalues def mymethod(){ ... } } </code></pre>
4
2016-09-22T08:59:23Z
39,635,598
<p>Which approach is best depends entirely on your problem. If you have only a few operations, functions are simpler. If you have many operations which depend on the type/features of the data, classes are efficient.</p> <p>Personally, I prefer having classes for the same type of data to improve abstraction and modularity. Basically, using classes requires you to think about what your data is like, what is allowed on it and what is appropriate. It enforces that you separate, compartmentalize and <em>understand</em> what you are doing. Once you've done that, you can treat them like black boxes that just work.</p> <p>I've seen many data-analysis programs fail because they just had functions working on arbitrary data. At first, it was simple computations. Then state needed to be preserved/cached, so data got appended or modified directly. Then someone realized that if you did x before, you shouldn't do y later, so all sorts of flags, fields and other things got tacked on, which only functions a, b and d understood. Then someone added function f which extended on that, while someone added function k which extended it differently. That creates a cluster-foo that's impossible to understand, maintain, or <em>trust</em> in creating results.</p> <p>So if you are unsure, do classes. You'll be happier in the end.</p> <hr> <p>Concerning your second question, I can only answer that for Python. However, <em>many</em> languages do it similarly.</p> <p>Regular methods in Python are defined on the class and created with it. That means the actual function represented by a method is shared by all instances, without memory overhead. Basically, a bare instance is just a wrapped reference to the class, from which methods are fetched. Only things <em>exclusive</em> to an instance, like data, add to memory notably.</p> <p><em>Calling</em> a method does add some overhead, because the method gets bound to the instance - basically, the function is fetched from the class and the first parameter <code>self</code> gets bound. This technically incurs some overhead.</p> <pre><code># Method Call $ python -m timeit -s 'class Foo():' -s ' def p(self):' -s ' pass' -s 'foo = Foo()' 'foo.p()' 10000000 loops, best of 3: 0.158 usec per loop # Method Call of cached method $ python -m timeit -s 'class Foo():' -s ' def p(self):' -s ' pass' -s 'foo = Foo()' -s 'p=foo.p' 'p()' 10000000 loops, best of 3: 0.0984 usec per loop # Function Call $ python -m timeit -s 'def p():' -s ' pass' 'p()' 10000000 loops, best of 3: 0.0846 usec per loop </code></pre> <p>However, practically <em>any</em> operation does this; you'll only notice the added overhead if your application does nothing but call your method, and the method also does nothing.</p> <p>I've also seen people write data-analysis applications with so many levels of abstraction that in fact they mostly just called methods/functions. That is a smell of how the code is written in general, not of whether you use methods <em>or</em> functions.</p> <p>So if you are unsure, do classes. You'll be happier in the end.</p>
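<p>To see that instances share the underlying function object rather than copying it, a quick illustrative check:</p> <pre><code>class Foo(object):
    def p(self):
        pass

a, b = Foo(), Foo()
# bound methods are created on access, but both wrap the very same
# function object, which lives once on the class -- so instances add
# no per-method memory
print(a.p.__func__ is b.p.__func__ is Foo.__dict__['p'])  # True
</code></pre>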
1
2016-09-22T09:43:14Z
[ "python", "scala", "oop", "design", "jvm" ]
Can't I run a file in a temporary folder using subprocess?
39,634,736
<p>I was trying to use the code below...</p> <pre><code>subprocess.Popen('%USERPROFILE%\\AppData\\Local\\Temp\\AdobeARM - Copy.log').communicate() </code></pre> <p>but I got an error message. Can anyone help with this?</p>
1
2016-09-22T09:06:30Z
39,634,806
<p>Since there's an environment variable in the path you can add <code>shell=True</code> to force running a batch process which will evaluate env. vars:</p> <pre><code>subprocess.Popen('"%USERPROFILE%\\AppData\\Local\\Temp\\AdobeARM - Copy.log"',shell=True).communicate() </code></pre> <p>Note the protection with quotes since there are spaces. You can also drop the quotes if you pass a list containing one element to <code>Popen</code>, which is cleaner:</p> <pre><code>subprocess.Popen(['%USERPROFILE%\\AppData\\Local\\Temp\\AdobeARM - Copy.log'],shell=True).communicate() </code></pre> <p>Alternatively, if you just want to activate the default editor for your logfile, there's a simpler way (which does not block the executing script, so it's slightly different; note that <code>os.startfile</code> is Windows-only):</p> <pre><code>p = os.path.join(os.getenv('USERPROFILE'),r"AppData\Local\Temp\AdobeARM - Copy.log") os.startfile(p) </code></pre> <p>Maybe it can be even simpler, since that may be the temporary directory you're trying to reach:</p> <pre><code>p = os.path.join(os.getenv('TEMP'),r"AdobeARM - Copy.log") os.startfile(p) </code></pre>
2
2016-09-22T09:09:23Z
[ "python", "subprocess" ]
Python Convert Datetime from csv for matplotlib
39,634,777
<p>I am trying to plot from a csv file with column 1 as a datetime value, as below</p> <pre><code> 27-08-2016 08:43 21.38329164 </code></pre> <p>using this code:</p> <pre><code>from matplotlib import pyplot as plt from matplotlib import style import numpy as np import datetime as dt from datetime import datetime import matplotlib.dates as mdates style.use('ggplot') x,y = np.genfromtxt('I112-1a.csv', unpack=True,dtype=None, delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) plt.title('Panel Charge') plt.ylabel('Y axis') plt.xlabel('X axis') plt.show() </code></pre> <p>I am getting this error:</p> <pre><code> x,y = np.genfromtxt('I112-1a.csv', unpack=True,dtype=None, delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) ValueError: too many values to unpack </code></pre> <p>Please help! Thanks</p>
0
2016-09-22T09:08:02Z
39,637,096
<p>Source: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">numpy.genfromtxt</a></p> <pre><code>numpy.genfromtxt Returns: ndarray Data read from the text file. If usemask is True, this is a masked array. </code></pre> <p>@Ev. Kounis pointed out in the comment section: numpy.genfromtxt returns only one array, and you are assigning it to two parameters, x and y.</p> <p>Please update your code as follows:</p> <pre><code>data = np.genfromtxt('I112-1a.csv', unpack=True,dtype=None, delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) </code></pre> <p>For example,</p> <p>If your data file is structured like this</p> <pre><code>col1, col2, col3 1, 2, 3 10, 20, 30 100, 200, 300 </code></pre> <p>then <code>numpy.genfromtxt</code> can interpret the first line as column headers using the <code>names=True</code> option. With this you can access the data very conveniently by providing the column header:</p> <pre><code>data = np.genfromtxt('data.txt', delimiter=',', names=True) print data['col1'] # Output: array([ 1., 10., 100.]) print data['col2'] # Output: array([ 2., 20., 200.]) print data['col3'] # Output: array([ 3., 30., 300.]) </code></pre> <p>If instead the data is formed like this</p> <pre><code>row1, 1, 10, 100 row2, 2, 20, 200 row3, 3, 30, 300 </code></pre> <p>you can achieve something similar using the following code snippet:</p> <pre><code>labels = np.genfromtxt('data.txt', delimiter=',', usecols=0, dtype=str) raw_data = np.genfromtxt('data.txt', delimiter=',')[:,1:] </code></pre> <p>The first line reads the first column (the labels) into an array of strings. The second line reads all data from the file but discards the first column. This is how you can extract columns into different variables.</p> <p>If you want to plot col1 and col2, you can simply define:</p> <pre><code>x=data['col1'] y=data['col2'] </code></pre>
0
2016-09-22T10:54:59Z
[ "python", "csv", "pandas", "matplotlib", "genfromtxt" ]
Python Convert Datetime from csv for matplotlib
39,634,777
<p>I am trying to plot from a csv file with column 1 as a datetime value, as below</p> <pre><code> 27-08-2016 08:43 21.38329164 </code></pre> <p>using this code:</p> <pre><code>from matplotlib import pyplot as plt from matplotlib import style import numpy as np import datetime as dt from datetime import datetime import matplotlib.dates as mdates style.use('ggplot') x,y = np.genfromtxt('I112-1a.csv', unpack=True,dtype=None, delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) plt.title('Panel Charge') plt.ylabel('Y axis') plt.xlabel('X axis') plt.show() </code></pre> <p>I am getting this error:</p> <pre><code> x,y = np.genfromtxt('I112-1a.csv', unpack=True,dtype=None, delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) ValueError: too many values to unpack </code></pre> <p>Please help! Thanks</p>
0
2016-09-22T09:08:02Z
39,646,628
<pre><code>from matplotlib import pyplot as plt from matplotlib import style import numpy as np import datetime as dt from datetime import datetime import matplotlib.dates as mdates style.use('ggplot') data = np.genfromtxt('I112-1.csv', unpack=True,dtype=None, names=True,delimiter = ',', converters={0: lambda x: datetime.strptime(x, '%d-%m-%Y %H:%M')}) x = data['Time'] y = data['Charge'] plt.plot (x,y) plt.title('Panel Charge') plt.ylabel('Y axis') plt.xlabel('X axis') plt.show() </code></pre>
0
2016-09-22T18:42:52Z
[ "python", "csv", "pandas", "matplotlib", "genfromtxt" ]
Can someone please explain how to use threading in my script?
39,634,883
<p>I have been playing around with Python for about three months, and a few days ago I needed to do a menial task (folder creation), so I actually made a script to do it.</p> <p>Out of pure perfectionism I wanted to add a progress bar, but when I run the script, it hangs, then says complete once it's finished. I would like to see the progress bar going to 100% and not hanging. I have read that I need to use Python threading to do this, but I am really confused about how to implement it and how it actually works.</p> <p>I have copied the section of the code that I would theoretically need to thread. As you can see I am using a QT Designer UI and Progress Bar.</p> <p>Also the spacing is not an issue. This is my first time posting on Stack Overflow so forgive me if I messed up the code spacing here.</p> <p>Thank you very much for any light you can shed on this for me.</p> <pre><code>class ShotCreator(QtGui.QDialog): def __init__(self, *args): ui = 'ShotCreator_UI.ui' QtGui.QWidget.__init__(self, *args) loadUi(ui, self) self.show() #Slots self.connect(self.browseLocation_btn, QtCore.SIGNAL("clicked()"), self.openBrowse) self.connect(self.create_btn, QtCore.SIGNAL("clicked()"), self.makeDir) self.progressBar.setValue(0) def makeDir(self): #Get data departments = self.queryValues() shots = self.fullShotAmount() proj = self.projName() #Converting proj to string proj = str(proj) #Converting dirLoc to string dirLoc = self.browseLocation.text() dirLoc = str(dirLoc) if dirLoc == "": msgBox = QtGui.QMessageBox() msgBox.setIcon(QMessageBox.Warning) msgBox.setWindowTitle("Oops - Directory Location") msgBox.setText("Please give a directory location") msgBox.exec_() #Creating shot numbers shot = 0 for s in shots: shot = shot + 5 sShot = ("00" + str(shot)) if not os.path.exists(sShot): #Create shot folders os.mkdir(sShot) self.progressBar.setValue((int(s) / int(len(shots))* 100)) self.progressBar.setValue(100) </code></pre>
1
2016-09-22T09:12:31Z
39,647,363
<p>It's usually best to move your business logic out of the GUI code and place it in a separate module that isn't dependent on Qt. That makes executing it in another thread easier.</p> <p>For functions that I want progress feedback on, I usually turn them into generators.</p> <pre><code>from __future__ import division import os def makeDir(dirloc, shots): cnt = len(shots) #Creating shot numbers shot = 0 for i, s in enumerate(shots): shot = shot + 5 sShot = ("00" + str(shot)) if not os.path.exists(sShot): #Create shot folders os.mkdir(sShot) yield int((i + 1) / cnt * 100) </code></pre> <p>You create a worker object in another thread that will execute this business logic and report progress back to the main GUI thread using signals, so that the progress bar can update.</p> <pre><code># assuming PyQt4, as used in the question from PyQt4.QtCore import QObject, QThread, pyqtSignal, pyqtSlot class Worker(QObject): progress_updated = pyqtSignal(int) finished = pyqtSignal() @pyqtSlot(object, object) def makeDir(self, dirloc, shots): for progress in makeDir(dirloc, shots): self.progress_updated.emit(progress) self.finished.emit() </code></pre> <p>Then add this worker to your GUI class and connect to the signals:</p> <pre><code>class ShotCreator(QtGui.QDialog): start_makedirs = pyqtSignal(object, object) def __init__(self, *args): ui = 'ShotCreator_UI.ui' QtGui.QWidget.__init__(self, *args) loadUi(ui, self) self.show() #Slots self.connect(self.browseLocation_btn, QtCore.SIGNAL("clicked()"), self.openBrowse) self.connect(self.create_btn, QtCore.SIGNAL("clicked()"), self.makeDir) self.progressBar.setValue(0) self.thread = QThread(self) self.worker = Worker() # no parent: a QObject with a parent cannot be moved to another thread self.worker.progress_updated.connect(self.update_progress) self.worker.finished.connect(self.worker_finished) self.start_makedirs.connect(self.worker.makeDir) self.worker.moveToThread(self.thread) self.thread.start() def makeDir(self): # gather information from gui dirloc = '' shots = [] ... self.start_makedirs.emit(dirloc, shots) @pyqtSlot(int) def update_progress(self, progress): self.progressBar.setValue(progress) @pyqtSlot() def worker_finished(self): self.progressBar.setValue(100) </code></pre>
0
2016-09-22T19:26:08Z
[ "python", "multithreading", "for-loop", "pyqt" ]
model field not rendering on html page
39,634,969
<blockquote> <p>I want to show the title or description on an HTML page but it is not showing up. How can I do that?</p> </blockquote> <p><strong>models.py</strong></p> <pre><code>class Event(models.Model): title = models.CharField(max_length=100) description = models.TextField() </code></pre> <p><strong>views.py</strong></p> <pre><code>def get_event_list(request): model = Event.objects.all() return render_to_response('events.html', {'model':model}) </code></pre> <p><strong>urls.py</strong></p> <pre><code>url(r'^events/get_event_list/$','events.views.get_event_list', name ='get_event_list' ), </code></pre> <p><strong>events.html</strong></p> <pre><code>&lt;h4&gt;{{ model.title }}&lt;/h4&gt; &lt;p&gt;{{ model.description }}&lt;/p&gt; </code></pre>
0
2016-09-22T09:16:55Z
39,635,026
<p>This is because you are returning a list of objects, not just one.</p> <p>Try looping over it:</p> <pre><code>{% for item in model %} &lt;h4&gt;{{ item.title }}&lt;/h4&gt; &lt;p&gt;{{ item.description }}&lt;/p&gt; {% endfor %} </code></pre> <p>Or update your view like:</p> <pre><code>def get_event_list(request): model = Event.objects.all() if model: model = model[0] return render_to_response('events.html', {'model':model}) </code></pre>
1
2016-09-22T09:19:15Z
[ "python", "html", "django" ]
re and binary file, python strange behavior
39,635,011
<p>I have some strange behavior of re.search on a binary file. Here is my Python screenshot:</p> <p><a href="http://i.stack.imgur.com/OlLnw.png" rel="nofollow"><img src="http://i.stack.imgur.com/OlLnw.png" alt="enter image description here"></a></p> <p>As you can see, I have two problems:</p> <ul> <li>re can't find a pattern which is indeed in the file</li> <li>including "\x5b" in an expression results in an error</li> </ul> <p>Any idea?</p>
-4
2016-09-22T09:18:36Z
39,635,128
<p><code>\x5b</code> is the ASCII <code>[</code> character, the left square bracket. That's a <em>regex meta character</em> forming the start of a <code>[...]</code> character class specification and needs to be escaped if you want to match a literal <code>[</code> character:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.search('[', '') Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 146, in search return _compile(pattern, flags).search(string) File "/Users/mjpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 251, in _compile raise error, v # invalid expression sre_constants.error: unexpected end of regular expression &gt;&gt;&gt; re.search('\[', '') </code></pre> <p>The same applies to <code>\x5e</code>, which is the <code>^</code> character; in a regex context it matches the <em>start</em> of the string only, not the literal character <code>^</code>. Since you tried to match data <em>before</em> the <code>^</code> point, the regex can't match anything, simply because nothing can come before the start of the string.</p> <p>If you are only searching for literal text matches, don't use a regex. You could just use <a href="https://docs.python.org/2/library/stdtypes.html#str.find" rel="nofollow"><code>str.find()</code></a> or <a href="https://docs.python.org/2/library/stdtypes.html#str.index" rel="nofollow"><code>str.index()</code></a> to get the index of matched text.</p> <p>If you are using this in a larger expression and <em>generate</em> the expression from data, then use <a href="https://docs.python.org/2/library/re.html#re.escape" rel="nofollow"><code>re.escape()</code></a> to ensure all metacharacters are properly escaped first.</p>
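<p>As a short illustrative sketch of that last point (Python 2, where binary file contents are a plain <code>str</code>; the file name and byte values are made up):</p> <pre><code>import re

with open('dump.bin', 'rb') as f:
    data = f.read()

needle = '\x5b\x5e\x00\xff'  # arbitrary literal bytes, including [ and ^
match = re.search(re.escape(needle), data)
print(match.start() if match else 'no match')
</code></pre>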
2
2016-09-22T09:24:27Z
[ "python", "regex", "binaryfiles" ]
Configure Sentry for different environments (staging, production)
39,635,027
<p>I want to configure Sentry in a Django app to report errors using different environments, like staging and production. This way I can configure alerting per environment.</p> <p>How can I configure different environments for Raven using different Django settings? The <code>environment</code> variable is not listed at the <a href="https://docs.sentry.io/hosted/clients/python/advanced/#client-arguments" rel="nofollow">Raven Python client arguments docs</a>, however I can find the variable in the <a href="https://github.com/getsentry/raven-python/blob/master/raven/base.py#L188" rel="nofollow">raven-python code</a>.</p>
1
2016-09-22T09:19:19Z
39,635,130
<p>You can use different settings files for different branches. You have your main one with all the shared settings, a dev.py for the develop branch, and a prod.py for production; while deploying your app you just specify which settings module is meant to be used. Alternatively, you can use the <a href="https://github.com/gitpython-developers/GitPython" rel="nofollow">GitPython package</a> to detect the current branch and switch settings accordingly:</p> <pre><code>if branch in ['develop']:
    DEBUG = True
    RAVEN_CONFIG = {
        'dsn': 'your_link_to_raven',
    }
else:
    #some other settings
</code></pre>
0
2016-09-22T09:24:31Z
[ "python", "django", "sentry", "raven" ]
Configure Sentry for different environments (staging, production)
39,635,027
<p>I want to configure Sentry in a Django app to report errors using different environments, like staging and production. This way I can configure alerting per environment.</p> <p>How can I configure different environments for Raven using different Django settings? The <code>environment</code> variable is not listed at the <a href="https://docs.sentry.io/hosted/clients/python/advanced/#client-arguments" rel="nofollow">Raven Python client arguments docs</a>, however I can find the variable in the <a href="https://github.com/getsentry/raven-python/blob/master/raven/base.py#L188" rel="nofollow">raven-python code</a>.</p>
1
2016-09-22T09:19:19Z
39,648,090
<p>If you are setting environment as a constant within <a href="https://docs.djangoproject.com/en/dev/topics/settings/" rel="nofollow">Django settings</a>, you can set the <code>environment</code> argument when initializing the <code>raven-python</code> client.</p> <p>You're correct—our docs didn't include the environment argument. I've <a href="https://github.com/getsentry/raven-python/pull/871" rel="nofollow">updated them</a> to <a href="https://docs.sentry.io/hosted/clients/python/advanced/#client-arguments" rel="nofollow">include it</a>. Thanks for raising the issue.</p>
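<p>For illustration, a minimal settings sketch. The DSN is a placeholder, and this assumes (as the raven-python source linked in the question suggests) that the Django integration passes <code>RAVEN_CONFIG</code> entries through to the <code>Client</code> constructor:</p> <pre><code># e.g. in the staging settings module
RAVEN_CONFIG = {
    'dsn': 'https://public:secret@sentry.example.com/1',  # placeholder
    'environment': 'staging',  # 'production' in the production settings
}
</code></pre>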
2
2016-09-22T20:12:48Z
[ "python", "django", "sentry", "raven" ]
Replace value(s) in sublists, using a dictionary
39,635,117
<p>I have a dictionary, called <code>dictionary</code>, and a list of lists (<code>newdetails</code>). I want to look up the key (which is the gtin) in the dictionary and, if it matches <code>index[0]</code> of a sublist in <code>newdetails</code>, replace <code>index[3]</code> of that sublist with the corresponding value (<code>currentstock</code>) in the dictionary.</p> <p>The dictionary, on running the program, is below: <code>12121212</code> is the <code>gtin</code> and <code>1</code> refers to the current stock.</p> <pre><code>0 0
12121212 1
12345670 0
</code></pre> <p>and this is <code>newdetails</code> - in this particular iteration it contains two sublists:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]
</code></pre> <p>And the code I have so far is:</p> <pre><code>#newdetails is the list of lists
for sub_list in newdetails:
    sub_list[3] = dictionary.get(gtin), (currentstock)
    print(sub_list)

#print items in the dictionary for reference
for gtin,currentstock in dictionary.items():
    print(gtin, currentstock)
</code></pre> <p>As mentioned, I want the output to be:</p> <pre><code>['12345670', 'Iphone 9.0', '500', 0, '3', '5']
['12121212', 'Samsung Laptop', '900', 1, '3', '5']
</code></pre> <p>But instead it is:</p> <pre><code>['12345670', 'Iphone 9.0', '500', (0, 0), '3', '5']
['12121212', 'Samsung Laptop', '900', (0, 0), '3', '5']
</code></pre> <p>UPDATE based on suggestions. The code is now:</p> <pre><code>for sub_list in newdetails:
    for gtin,currentstock in dictionary.items():
        #sub_list[3] = dictionary.get(gtin), (currentstock)
        sub_list[3] = dictionary.get(int(sub_list[0]))
        #print(sub_list)
print("After - ",newdetails)

#print items in the dictionary for reference
for gtin,currentstock in dictionary.items():
    print(gtin, currentstock)
</code></pre> <p>The output however is:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]
After - [['12345670', 'Iphone 9.0', '500', None, '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]
After - [['12345670', 'Iphone 9.0', '500', None, '3', '5'], ['12121212', 'Samsung Laptop', '900', None, '3', '5']]

#Contents of dictionary here
0 0
12345670 2
12121212 4
</code></pre> <p>In the above example, it should be replacing the 5 in each sublist (that is, <code>index[3]</code> in each sublist) with 2 and 4 respectively for the corresponding gtin. But it doesn't quite do that yet?</p>
-1
2016-09-22T09:24:02Z
39,635,526
<p><em>Thanks <strong>@MartijnPieters</strong> for all your comments.</em></p> <p><em>I will post all three versions of my code, which I converted from less efficient to efficient with your help.</em></p> <p><strong>Version 1:</strong> (<em>Refer to comment no 1 from Martijn</em>)</p> <pre><code>for k,v in dictionary.iteritems():
    for sub_list in newdetails:
        if k == sub_list[0]:
            sub_list[3] = v
</code></pre> <p><strong>Version 2:</strong> (<em>Refer to comment no 2 from Martijn</em>)</p> <pre><code>for sub_list in newdetails:
    if sub_list[0] in dictionary:   # the keys are strings, so no int() conversion
        sub_list[3] = dictionary.get(sub_list[0])
</code></pre> <p><strong>Version 3 (final and good one):</strong></p> <pre><code>dictionary = {'0': 0, '12121212': 1, '12345670': 0}
newdetails = [['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]

print "Before - ", newdetails
for sub_list in newdetails:
    sub_list[3] = dictionary.get(sub_list[0])
print "After - ", newdetails
</code></pre> <p><strong>Output:</strong></p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python c.py
Before -  [['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']]
After -  [['12345670', 'Iphone 9.0', '500', 0, '3', '5'], ['12121212', 'Samsung Laptop', '900', 1, '3', '5']]

C:\Users\dinesh_pundkar\Desktop&gt;
</code></pre>
0
2016-09-22T09:40:31Z
[ "python", "dictionary", "replace" ]
TypeError : string indices must be integers
39,635,198
<p>I've returned to Python after a few years doing C code and I'm a little confused while training myself to get my Python coding habits back.</p> <p>I've tried to run this little, very simple, piece of code but I keep getting a TypeError as described in the title. I've searched a lot but cannot figure out what the problem is with this:</p> <pre><code>def toLower(pStr):
    i = 0
    for i in pStr:
        if ord(pStr[i]) &gt;= 65 and ord(pStr[i]) &lt;= 90:
            pStr[i] = chr(ord(pStr[i])+28)
    return pStr

testStr = "TEST STRING"
print(toLower(testStr))
</code></pre> <p>Considering that <code>i</code> is an integer, I don't understand why I get this error. Maybe I think too much like I'm doing C, IDK.</p>
0
2016-09-22T09:27:26Z
39,635,277
<p>You are iterating over the string, so each <code>i</code> is bound to a single character, <strong>not</strong> an integer. That's because Python <code>for</code> loops are <a href="https://en.wikipedia.org/wiki/Foreach_loop" rel="nofollow">Foreach constructs</a>, unlike C.</p> <p>Just use that character directly, no need to index back into the string. Python strings are also <em>immutable</em>, so you can't replace characters in the string object. Build a new object:</p> <pre><code>def toLower(pStr):
    output = []
    for char in pStr:
        if ord(char) &gt;= 65 and ord(char) &lt;= 90:
            # NB: the ASCII distance from 'A' to 'a' is actually 32, not 28
            char = chr(ord(char)+28)
        output.append(char)
    return ''.join(output)
</code></pre> <p>If you <em>must</em> generate an index for something, you'd generally use either the <a href="https://docs.python.org/3/library/stdtypes.html#ranges" rel="nofollow"><code>range()</code> type</a> to produce those for you, or use <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow"><code>enumerate()</code></a> to produce both an index <em>and</em> the value itself in a loop.</p> <p>Also, note you don't need to set the <code>for</code> loop target name to a default before the loop unless you need to handle the case where the loop iterable is empty and you expect to use the target name after the loop. In other words, your <code>i = 0</code> is entirely redundant.</p>
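<p>For instance, <code>enumerate()</code> pairs each position with its character:</p> <pre><code>&gt;&gt;&gt; list(enumerate('AbC'))
[(0, 'A'), (1, 'b'), (2, 'C')]
</code></pre>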
1
2016-09-22T09:30:30Z
[ "python" ]
Overloading and wrapping method of field of parent class in python
39,635,202
<p>For example, I have something like this:</p> <pre><code>class A(object):
    def __init__(self):
        pass

    def foo(self, a, b, c):
        return a + b + c

class B(object):
    def __init__(self):
        self.b = A()

def wrapper_func(func):
    def wrapper(self, *args, **kwargs):
        return func(self, a=3, *args, **kwargs)
    return wrapper

class C(B):
    def __init__(self):
        pass

    @wrapper_func
    def ???
</code></pre> <p>Is it possible to somehow overload and then wrap the method <code>foo</code> of the field of parent class <code>B</code> in Python without inheriting from class <code>A</code>? I do need the wrapper, because I have different methods with the same arguments, but at the same time I have to keep the original class B methods native (besides overloading).</p>
0
2016-09-22T09:27:34Z
39,635,884
<p>Initialize <code>C</code>'s parent class using <code>super</code> and then pass all the parameters to the <code>foo</code> method of the composed class instance <code>A()</code> via the <em>inherited</em> attribute <code>b</code> of the class <code>C</code>:</p> <pre><code>def wrapper_func(func):
    def wrapper(self, *args, **kwargs):
        kwargs['a'] = 3
        return func(self, *args, **kwargs)
    return wrapper

class C(B):
    def __init__(self):
        super(C, self).__init__()

    @wrapper_func
    def bar(self, *args, **kwargs):
        return self.b.foo(*args, **kwargs) # access foo via attribute b
</code></pre> <p><em>Trial:</em></p> <pre><code>c = C()
print(c.bar(a=1, b=2, c=3))
# 8 -&gt; 3+2+3
</code></pre> <hr> <p>To make the call to the decorated function via <code>c.b.foo</code>, <em>patch</em> the <code>c.b.foo</code> method with the new <code>bar</code> method:</p> <pre><code>class C(B):
    def __init__(self):
        super(C, self).__init__()
        self._b_foo = self.b.foo
        self.b.foo = self.bar

    @wrapper_func
    def bar(self, *args, **kwargs):
        return self._b_foo(*args, **kwargs)
</code></pre> <p><em>Trial:</em></p> <pre><code>c = C()
print(c.b.foo(a=1, b=2, c=3))
# 8 -&gt; 3+2+3
</code></pre>
1
2016-09-22T09:55:36Z
[ "python", "overloading", "wrapper" ]
Sorting dictionary both descending and ascending in Python
39,635,256
<p>I want to sort the dictionary by values. If the values are the same, then I want to sort it by keys.</p> <p>For example, if I have the string "bitter butter a butter baggy", the output has to be [(butter,2),(a,1),(baggy,1),(bitter,1)].</p> <p>My code below sorts the dictionary by values in descending order. But I am not able to do the second part, i.e. if the values are the same, then I have to sort the keys in ascending order.</p> <p>Any help would be appreciated.</p> <pre><code>def count_words(s,n):
    words = s.split(" ")
    wordcount = {}
    for word in words:
        if word not in wordcount:
            wordcount[word] = 1
        else:
            wordcount[word] += 1
    sorted_x = sorted(wordcount.items(), key=operator.itemgetter(1), reverse=True)
    sorted_asc = sorted(wordcount.items(), key=operator.itemgetter(0))
    return sorted_x
</code></pre>
-4
2016-09-22T09:29:39Z
39,636,136
<p>For this, what you need is to write a comparator which sorts the values by count first and if they are equal then it sorts the values by the keys.</p> <pre><code>from collections import defaultdict

def count_words(s):
    def comparator(first, second):
        if first[1] &gt; second[1]:
            return 1
        elif first[1] &lt; second[1]:
            return -1
        if first[0] &gt; second[0]:
            return -1
        elif first[0] == second[0]:
            return 0
        return 1

    words = s.split(" ")
    wordcount = defaultdict(int)
    for word in words:
        wordcount[word] += 1
    return sorted(wordcount.items(), cmp=comparator, reverse=True)

print count_words("bitter butter a c batter butter baggy")
</code></pre> <blockquote> <p>[('butter', 2), ('a', 1), ('baggy', 1), ('batter', 1), ('bitter', 1), ('c', 1)]</p> </blockquote>
2
2016-09-22T10:06:46Z
[ "python" ]
Sorting dictionary both descending and ascending in Python
39,635,256
<p>I want to sort the dictionary by values. If the values are the same, then I want to sort it by keys.</p> <p>For example, if I have the string "bitter butter a butter baggy", the output has to be [(butter,2),(a,1),(baggy,1),(bitter,1)].</p> <p>My code below sorts the dictionary by values in descending order. But I am not able to do the second part, i.e. if the values are the same, then I have to sort the keys in ascending order.</p> <p>Any help would be appreciated.</p> <pre><code>def count_words(s,n):
    words = s.split(" ")
    wordcount = {}
    for word in words:
        if word not in wordcount:
            wordcount[word] = 1
        else:
            wordcount[word] += 1
    sorted_x = sorted(wordcount.items(), key=operator.itemgetter(1), reverse=True)
    sorted_asc = sorted(wordcount.items(), key=operator.itemgetter(0))
    return sorted_x
</code></pre>
-4
2016-09-22T09:29:39Z
39,636,191
<p>I suppose this would help</p> <pre><code>s = {
    "A": 1,
    "B": 1,
    "C": 1,
    "D": 0,
    "E": 2,
    "F": 0
}

rez = sorted(s.items(), key = lambda x: (x[1],x[0]))
</code></pre> <p>Result is</p> <pre><code>[('D', 0), ('F', 0), ('A', 1), ('B', 1), ('C', 1), ('E', 2)]
</code></pre> <p>If you need a reversed order, just use -x[1] instead of x[1]</p>
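<p>For instance, to get counts descending and keys ascending in one pass (matching the requirement in the question):</p> <pre><code>rez = sorted(s.items(), key=lambda x: (-x[1], x[0]))
# [('E', 2), ('A', 1), ('B', 1), ('C', 1), ('D', 0), ('F', 0)]
</code></pre>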
1
2016-09-22T10:09:37Z
[ "python" ]
Scapy verbose mode documentation
39,635,324
<p>I understand that:</p> <pre><code>conf.verb = 0 </code></pre> <p>Disables Scapy verbose mode but where is the documentation to confirm this?</p> <p>My googling has failed me.</p>
1
2016-09-22T09:32:54Z
39,638,193
<p>Actually, <code>conf</code>'s docstring specifies that configuring <code>conf.verb = 0</code> sets the level of verbosity to <em>almost mute</em>, which implies that it doesn't disable it altogether.</p> <p>The relevant excerpt from the docstring is as follows:</p> <pre><code>verb : level of verbosity, from 0 (almost mute) to 3 (verbose)
</code></pre> <p>Here is the entire docstring:</p> <pre><code>In [1]: from scapy.all import conf
WARNING: No route found for IPv6 destination :: (no default route?)

In [2]: conf?
Type:        Conf
String Form: ASN1_default_codec = &lt;ASN1Codec BER[1]&gt;
             AS_resolver = &lt;scapy.as_resolvers.AS_resolver_multi insta
             &lt;...&gt;
             alse
             use_pcap = False
             verb = 2
             version = '2.3.2'
             warning_threshold = 5
             wepkey = ''
File:        /usr/local/lib/python2.7/dist-packages/scapy/config.py
Docstring:
This object contains the configuration of scapy.
session  : filename where the session will be saved
interactive_shell : If set to "ipython", use IPython as shell. Default: Python
stealth  : if 1, prevents any unwanted packet to go out (ARP, DNS, ...)
checkIPID: if 0, doesn't check that IPID matches between IP sent and ICMP IP citation received
           if 1, checks that they either are equal or byte swapped equals (bug in some IP stacks)
           if 2, strictly checks that they are equals
checkIPsrc: if 1, checks IP src in IP and ICMP IP citation match (bug in some NAT stacks)
check_TCPerror_seqack: if 1, also check that TCP seq and ack match the ones in ICMP citation
iff      : selects the default output interface for srp() and sendp(). default:"eth0")
verb     : level of verbosity, from 0 (almost mute) to 3 (verbose)
promisc  : default mode for listening socket (to get answers if you spoof on a lan)
sniff_promisc : default mode for sniff()
filter   : bpf filter added to every sniffing socket to exclude traffic from analysis
histfile : history file
padding  : includes padding in desassembled packets
except_filter : BPF filter for packets to ignore
debug_match : when 1, store received packet that are not matched into debug.recv
route    : holds the Scapy routing table and provides methods to manipulate it
warning_threshold : how much time between warnings from the same place
ASN1_default_codec: Codec used by default for ASN1 objects
mib      : holds MIB direct access dictionnary
resolve  : holds list of fields for which resolution should be done
noenum   : holds list of enum fields for which conversion to string should NOT be done
AS_resolver: choose the AS resolver class to use
extensions_paths: path or list of paths where extensions are to be looked for

In [3]:
</code></pre>
1
2016-09-22T11:48:26Z
[ "python", "documentation", "scapy", "verbose" ]
How to construct data structure with variable size elements
39,635,372
<p>I am trying to construct the structure with a variable size of the element <code>value</code> from class <code>val</code>:</p> <pre><code>from construct import *

TEST = Struct("test",
    UInt8("class"),
    Embed(switch(lambda ctx: ctx.class)
        { 1: UInt8("value"),
          2: UInt16("value"),
          3: UInt32("value")}
    ))
)
</code></pre> <p>The above code is incorrect.</p> <p>I need to achieve something like this: If the class is 1 then one byte will be received from the packet.</p>
0
2016-09-22T09:34:45Z
39,637,876
<p>You can use different <a href="https://docs.python.org/3/library/struct.html#format-characters" rel="nofollow">format strings</a> to change <code>struct</code> behaviour, and <code>struct.unpack_from</code> with <code>struct.calcsize</code> to continue parsing the bytearray from the end of the last found byte sequence:</p> <pre><code>import struct

v1 = bytearray([0x00,0xFF])
v2 = bytearray([0x01,0xFF,0xFF])
v3 = bytearray([0x02,0xFF,0xFF,0xFF,0xFF])

v = v2 # change this

header_format = 'B'
body_format_variants = {0:'B',1:'H',2:'L'}

header = struct.unpack_from(header_format,v,0)
body = struct.unpack_from(body_format_variants[header[0]],v,struct.calcsize(header_format))

print (header, body, "size=",struct.calcsize('&gt;'+header_format+body_format_variants[header[0]]))
</code></pre>
0
2016-09-22T11:33:37Z
[ "python", "construct" ]
Pass-through/export whole third party module (using __all__?)
39,635,448
<p>I have a module that wraps another module to insert some shim logic in some functions. The wrapped module uses a settings module <code>mod.settings</code> which I want to expose, but I don't want the users to import it from there, in case I would like to shim something there as well in the future. I want them to import <code>wrapmod.settings</code>.</p> <p>Importing the module and exporting it works, but is a bit verbose on the client side. It results in having to write <code>settings.thing</code> instead of just <code>thing</code>.</p> <p>I want the users to be able to do <code>from wrapmod.settings import *</code> and get the same results as if they did <code>from mod.settings import *</code>, but right now, only <code>from wrapmod import settings</code> is available. How do I work around this?</p>
0
2016-09-22T09:37:29Z
39,636,745
<p>If I understand the situation correctly, you're writing a module <code>wrapmod</code> that is intended to transform parts of an existing package <code>mod</code>. The specific part you're transforming is the submodule <code>mod.settings</code>. You've imported the <code>settings</code> module and made your changes to it, but even though it is available as <code>wrapmod.settings</code>, you can't use that name in a <code>from ... import ...</code> statement.</p> <p>I think the best way to fix that is to insert the modified module into <code>sys.modules</code> under the new dotted name. This makes Python accept that name as valid even though <code>wrapmod</code> isn't really a package.</p> <p>So <code>wrapmod</code> would look something like:</p> <pre><code>import sys
from mod import settings

# modify settings here

sys.modules['wrapmod.settings'] = settings # add this line!
</code></pre>
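<p>For illustration, client code should then be able to do the following (a sketch, assuming <code>wrapmod</code> itself is importable):</p> <pre><code>import wrapmod                  # runs the sys.modules registration above
from wrapmod.settings import *  # now resolves like mod.settings
</code></pre>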
1
2016-09-22T10:36:36Z
[ "python", "python-3.x", "python-3.4" ]
Pass-through/export whole third party module (using __all__?)
39,635,448
<p>I have a module that wraps another module to insert some shim logic in some functions. The wrapped module uses a settings module <code>mod.settings</code> which I want to expose, but I don't want the users to import it from there, in case I would like to shim something there as well in the future. I want them to import <code>wrapmod.settings</code>.</p> <p>Importing the module and exporting it works, but is a bit verbose on the client side. It results in having to write <code>settings.thing</code> instead of just <code>thing</code>.</p> <p>I want the users to be able to do <code>from wrapmod.settings import *</code> and get the same results as if they did <code>from mod.settings import *</code>, but right now, only <code>from wrapmod import settings</code> is available. How do I work around this?</p>
0
2016-09-22T09:37:29Z
39,637,558
<p>I ended up making a code-generator for a thin wrapper module instead, since the <code>sys.modules</code> hacking broke all IDE integration.</p> <pre><code>from ... import mod

# this is just a pass-through wrapper around mod.settings
__all__ = mod.__all__

# generate pass-through wrapper around mod.settings; doesn't break IDE
# integration, unlike manual sys.modules editing.
if __name__ == "__main__":
    for thing in mod.__all__:
        print(thing + " = mod." + thing)
</code></pre> <p>which, when run as a script, outputs code that can then be appended to the end of this file.</p>
0
2016-09-22T11:17:51Z
[ "python", "python-3.x", "python-3.4" ]
Updating already existing json file python
39,635,480
<p>I have a json file and I want to update the 'filename' field with the filenames I scp from a remote server. I am pretty new to Python but learning as I go.</p> <p>JSON file:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"", "md5":"", "timestamp":""},
         {"filename":"", "md5":"", "timestamp":""},
         {"filename":"", "md5":"", "timestamp":""}
]}
</code></pre> <p>My Python code so far:</p> <pre><code>def filemd5():
    try:
        config = json.load(open(config_file))
        #print(str(json.dumps(config, indent=4)))
        for server in config['servers']:
            ssh = SSHClient()
            ssh.load_system_host_keys()
            ssh.connect(server['ip'], username=server['username'], password=server['password'])
            #print(str(server))
            print('Connecting to servers')
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('ls /tmp/')
            error = str(ssh_stderr.read())
            if len(error) ==0:
                for files in config['servers']:
                    filename = file_location + server['file']
                    scp = SCPClient(ssh.get_transport())
                    scp.get(filename)
                    if os.path.isfile(server['file']):
                        updateJsonFile(filename)
                        print(filename)
            else:
                print('KO')

def updateJsonFile(filename):
    with open('md5.json', 'r') as f:
        data = json.load(f)
    subdata = data['md5s']
    for check in subdata:
        check["filename"] = filename
    with open('md5.json', 'w') as f:
        f.write(json.dumps(data))

filemd5()
</code></pre> <p>The formatting has not really come out well here, but I am nearly sure it is good in my Python script. What is happening now is that it populates all 'filename' fields with the same file when I scp three files from different servers.</p> <p>Any help would be great. Thanks.</p> <p>EDIT (updated question, as adding to the file works but it fills all values with the same filename).</p> <p>Expected result:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text2.txt", "md5":"", "timestamp":""},
         {"filename":"text3.txt", "md5":"", "timestamp":""}
]}
</code></pre> <p>Actual:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text1.txt", "md5":"", "timestamp":""}
]}}
</code></pre>
-1
2016-09-22T09:38:29Z
39,636,809
<p>This is as close as I can come to sorting the code for you. I don't know why you get <code>KeyError</code> but you didn't implement a counter as suggested in the comments. Since I don't have access to <code>config['servers']</code>, the counter might be in the wrong place, in which case put it in the inner <code>for</code> loop. I tested this on your json string and it does work as you intended so the principle is correct, you just have to make sure you pass the desired values for <code>counter</code>.</p> <pre><code>def filemd5():
    try:
        config = json.load(open(config_file))
        counter = 0 # Add a counter here
        for server in config['servers']:
            ssh = SSHClient()
            ssh.load_system_host_keys()
            ssh.connect(server['ip'], username=server['username'],
                        password=server['password'])
            print('Connecting to servers')
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('ls /tmp/')
            error = str(ssh_stderr.read())
            if not error:
                for files in config['servers']:
                    filename = file_location + server['file']
                    scp = SCPClient(ssh.get_transport())
                    scp.get(filename)
                    if os.path.isfile(server['file']):
                        updateJsonFile(filename, counter)
                        counter += 1 # increment the counter
                        print(filename)
            else:
                print('KO')
    except: # I don't understand why you don't get an error for missing except?
        pass

def updateJsonFile(filename, counter):
    with open('md5.json', 'r') as f:
        data = json.load(f)
    subdata = data['md5s']
    # The code below would update every value since you loop through whole list
    #for check in subdata:
    #    check["filename"] = filename
    subdata[counter]['filename'] = filename
    with open('md5.json', 'w') as f:
        f.write(json.dumps(data))
</code></pre>
1
2016-09-22T10:39:26Z
[ "python", "json" ]
Updating already existing json file python
39,635,480
<p>I have a json file and I want to update the 'filename' field with the filenames I scp from a remote server. I am pretty new to Python but learning as I go.</p> <p>JSON file:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"", "md5":"", "timestamp":""},
         {"filename":"", "md5":"", "timestamp":""},
         {"filename":"", "md5":"", "timestamp":""}
]}
</code></pre> <p>My Python code so far:</p> <pre><code>def filemd5():
    try:
        config = json.load(open(config_file))
        #print(str(json.dumps(config, indent=4)))
        for server in config['servers']:
            ssh = SSHClient()
            ssh.load_system_host_keys()
            ssh.connect(server['ip'], username=server['username'], password=server['password'])
            #print(str(server))
            print('Connecting to servers')
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('ls /tmp/')
            error = str(ssh_stderr.read())
            if len(error) ==0:
                for files in config['servers']:
                    filename = file_location + server['file']
                    scp = SCPClient(ssh.get_transport())
                    scp.get(filename)
                    if os.path.isfile(server['file']):
                        updateJsonFile(filename)
                        print(filename)
            else:
                print('KO')

def updateJsonFile(filename):
    with open('md5.json', 'r') as f:
        data = json.load(f)
    subdata = data['md5s']
    for check in subdata:
        check["filename"] = filename
    with open('md5.json', 'w') as f:
        f.write(json.dumps(data))

filemd5()
</code></pre> <p>The formatting has not really come out well here, but I am nearly sure it is good in my Python script. What is happening now is that it populates all 'filename' fields with the same file when I scp three files from different servers.</p> <p>Any help would be great. Thanks.</p> <p>EDIT (updated question, as adding to the file works but it fills all values with the same filename).</p> <p>Expected result:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text2.txt", "md5":"", "timestamp":""},
         {"filename":"text3.txt", "md5":"", "timestamp":""}
]}
</code></pre> <p>Actual:</p> <pre><code>{"path":"/home/Document/Python",
 "md5s":[{"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text1.txt", "md5":"", "timestamp":""},
         {"filename":"text1.txt", "md5":"", "timestamp":""}
]}}
</code></pre>
-1
2016-09-22T09:38:29Z
39,637,697
<p>If I have correctly understood, you start with a json file containing a list of references to files, and you want to update the next element of the list.</p> <p>You could browse <code>data['md5s']</code> searching for the first element where the <code>filename</code> field is empty, and add a new dictionary to the list if all are already completed:</p> <pre><code>def updateJsonFile(filename):
    jsonFile = open("md5.json", "r") # load data from disk
    data = json.load(jsonFile)
    jsonFile.close()

    for tmp in data["md5s"]: # browse the list
        if len(tmp['filename']) == 0: # found one empty slot, use it and exit loop
            tmp['filename'] = filename
            break
    else: # no empty slot found: add a new dict
        data["md5s"].append({'md5': '', 'timestamp': '', 'filename': filename})

    jsonFile = open("md5.json", "w") # write the json back to file
    json.dump(data, jsonFile)
    jsonFile.close()
</code></pre>
1
2016-09-22T11:24:28Z
[ "python", "json" ]
Assigning to global var in Python?
39,635,483
<p>I am trying out Python for a new project and, at this stage, need some global variables. To test without changing the hardcoded value, I've tried to make a function that reassigns those variables:</p> <pre><code>foo = None;

def setFoo(bar):
    global foo = bar;

setFoo(1);
</code></pre> <p>However, I get:</p> <pre><code>  File ".\test.py", line 4
    global foo = bar;
               ^
SyntaxError: invalid syntax
</code></pre> <p>I've already read <a href="http://stackoverflow.com/questions/10588317/python-function-global-variables">two</a> <a href="http://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them">questions</a> highly related to my problem, but obviously missed some gotcha of Python.</p> <p>Could anyone enlighten me?</p>
0
2016-09-22T09:38:34Z
39,635,512
<p><code>global</code> and assignment are both statements. You can't mix them on the same line.</p>
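<p>A minimal illustration of the two statements on separate lines:</p> <pre><code>def setFoo(bar):
    global foo   # statement 1: declare foo as global
    foo = bar    # statement 2: assign to it
</code></pre>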
3
2016-09-22T09:40:01Z
[ "python" ]
Assigning to global var in Python?
39,635,483
<p>I am trying out Python for a new project and, at this stage, need some global variables. To test without changing the hardcoded value, I've tried to make a function that reassigns those variables:</p> <pre><code>foo = None;

def setFoo(bar):
    global foo = bar;

setFoo(1);
</code></pre> <p>However, I get:</p> <pre><code>  File ".\test.py", line 4
    global foo = bar;
               ^
SyntaxError: invalid syntax
</code></pre> <p>I've already read <a href="http://stackoverflow.com/questions/10588317/python-function-global-variables">two</a> <a href="http://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them">questions</a> highly related to my problem, but obviously missed some gotcha of Python.</p> <p>Could anyone enlighten me?</p>
0
2016-09-22T09:38:34Z
39,635,529
<pre><code>foo = None;

def setFoo(bar):
    global foo
    foo = bar

setFoo(1);
</code></pre>
3
2016-09-22T09:40:36Z
[ "python" ]
Assigning to global var in Python?
39,635,483
<p>I am trying out Python for a new project and, at this stage, need some global variables. To test without changing the hardcoded value, I've tried to make a function that reassigns those variables:</p> <pre><code>foo = None;

def setFoo(bar):
    global foo = bar;

setFoo(1);
</code></pre> <p>However, I get:</p> <pre><code>  File ".\test.py", line 4
    global foo = bar;
               ^
SyntaxError: invalid syntax
</code></pre> <p>I've already read <a href="http://stackoverflow.com/questions/10588317/python-function-global-variables">two</a> <a href="http://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them">questions</a> highly related to my problem, but obviously missed some gotcha of Python.</p> <p>Could anyone enlighten me?</p>
0
2016-09-22T09:38:34Z
39,635,572
<p>As saurabh baid pointed out, the correct syntax is</p> <pre><code>foo = None

def set_foo(bar):
    global foo
    foo = bar

set_foo(1)
</code></pre> <p>Using the <code>global</code> keyword allows the function to access the global scope (i.e. the scope of the module).</p> <p>Also notice how I renamed the variables to be <code>snake_case</code> and removed the semi-colons. Semi-colons are unnecessary in Python, and functions are typically <code>snake_case</code> :)</p>
1
2016-09-22T09:42:24Z
[ "python" ]
Assigning to global var in Python?
39,635,483
<p>I am trying out Python for a new project and, at this stage, need some global variables. To test without changing the hardcoded value, I've tried to make a function that reassigns those variables:</p> <pre><code>foo = None;

def setFoo(bar):
    global foo = bar;

setFoo(1);
</code></pre> <p>However, I get:</p> <pre><code>  File ".\test.py", line 4
    global foo = bar;
               ^
SyntaxError: invalid syntax
</code></pre> <p>I've already read <a href="http://stackoverflow.com/questions/10588317/python-function-global-variables">two</a> <a href="http://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them">questions</a> highly related to my problem, but obviously missed some gotcha of Python.</p> <p>Could anyone enlighten me?</p>
0
2016-09-22T09:38:34Z
39,635,632
<p>Put <code>global</code> and <code>foo = bar</code> on separate lines:</p> <pre><code>foo = None

def setFoo(bar):
    global foo
    foo = bar

setFoo(1)
</code></pre> <p>And don't use semicolons: <a href="http://stackoverflow.com/questions/12335358/python-what-does-a-semi-colon-do">Python: What Does a Semi Colon Do?</a></p>
1
2016-09-22T09:45:21Z
[ "python" ]
Assigning to global var in Python?
39,635,483
<p>I am trying out Python for a new project and, at this stage, need some global variables. To test without changing the hardcoded value, I've tried to make a function that reassigns those variables:</p> <pre><code>foo = None;

def setFoo(bar):
    global foo = bar;

setFoo(1);
</code></pre> <p>However, I get:</p> <pre><code>  File ".\test.py", line 4
    global foo = bar;
               ^
SyntaxError: invalid syntax
</code></pre> <p>I've already read <a href="http://stackoverflow.com/questions/10588317/python-function-global-variables">two</a> <a href="http://stackoverflow.com/questions/423379/using-global-variables-in-a-function-other-than-the-one-that-created-them">questions</a> highly related to my problem, but obviously missed some gotcha of Python.</p> <p>Could anyone enlighten me?</p>
0
2016-09-22T09:38:34Z
39,635,754
<p>Python doesn't require you to explicitly declare variables and automatically assumes that a variable that you assign has function scope unless you tell it otherwise. The <code>global</code> keyword is the means that is provided to tell it otherwise. And you can't assign a value in the same statement that declares a name as global. The correct way is</p> <pre><code>foo = None

def setFoo(bar):
    global foo
    foo = bar

setFoo(1)
</code></pre>
2
2016-09-22T09:50:20Z
[ "python" ]
Unable to run AppEngine django application locally
39,635,544
<p>I am on <strong>Windows 10</strong> with <strong>Python 2.7.11</strong> installed. I have an App Engine application. I am able to run it locally with Django and it works perfectly. But when I try to run it via <strong>dev_appserver.py</strong> I get this error.</p> <p><strong><em><a href="http://i.stack.imgur.com/YMZK1.png" rel="nofollow"><img src="http://i.stack.imgur.com/YMZK1.png" alt="importerror-no-module-named-msvcrt"></a></em></strong></p> <p>I have already tried <a href="http://stackoverflow.com/questions/25915164/django-1-7-on-app-engine-importerror-no-module-named-msvcrt">this</a>.</p>
0
2016-09-22T09:41:22Z
39,673,958
<p>Please include this code in your appengine_config.py file:</p> <pre><code>import os

on_appengine = os.environ.get('SERVER_SOFTWARE','').startswith('Development')
if on_appengine and os.name == 'nt':
    os.name = None
</code></pre>
1
2016-09-24T07:54:45Z
[ "python", "django", "python-2.7", "google-app-engine" ]
Python - Removing \n when using default split()?
39,635,566
<p>I'm working on strings where I'm taking input from the command line. For example, with this input:</p> <pre><code>format driveName "datahere"
</code></pre> <p>when I go string.split(), it comes out as:</p> <pre><code>&gt;&gt;&gt; input.split()
['format', 'driveName', '"datahere"']
</code></pre> <p>which is what I want.</p> <p>However, when I specify it to be string.split(" ", 2), I get:</p> <pre><code>&gt;&gt;&gt; input.split(' ', 2)
['format\n', 'driveName\n', '"datahere"']
</code></pre> <p>Does anyone know why and how I can resolve this? I thought it could be because I'm creating it on Windows and running on Unix, but the same problem occurs when I use nano in Unix.</p> <p>The third argument (data) could contain newlines, so I'm cautious not to use a sweeping newline remover.</p>
0
2016-09-22T09:42:06Z
39,635,616
<p>Use <code>None</code> to get the default whitespace splitting behaviour with a limit:</p> <pre><code>input.split(None, 2) </code></pre> <p>This leaves the whitespace at the <em>end</em> of <code>input()</code> untouched.</p> <p>Or you could strip the values afterwards; this removes whitespace from the start and end, not the middle, of each resulting string, just like <code>input.split()</code> would:</p> <pre><code>[v.strip() for v in input.split(' ', 2)] </code></pre>
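<p>For illustration, assuming the input has a newline before each space separator (which would produce the output shown in the question):</p> <pre><code>&gt;&gt;&gt; s = 'format\n driveName\n "datahere"'
&gt;&gt;&gt; s.split(' ', 2)
['format\n', 'driveName\n', '"datahere"']
&gt;&gt;&gt; s.split(None, 2)
['format', 'driveName', '"datahere"']
</code></pre>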
0
2016-09-22T09:44:40Z
[ "python" ]
Python - Removing \n when using default split()?
39,635,566
<p>I'm working on strings where I'm taking input from the command line. For example, with this input:</p> <pre><code>format driveName "datahere"
</code></pre> <p>when I go string.split(), it comes out as:</p> <pre><code>&gt;&gt;&gt; input.split()
['format', 'driveName', '"datahere"']
</code></pre> <p>which is what I want.</p> <p>However, when I specify it to be string.split(" ", 2), I get:</p> <pre><code>&gt;&gt;&gt; input.split(' ', 2)
['format\n', 'driveName\n', '"datahere"']
</code></pre> <p>Does anyone know why and how I can resolve this? I thought it could be because I'm creating it on Windows and running on Unix, but the same problem occurs when I use nano in Unix.</p> <p>The third argument (data) could contain newlines, so I'm cautious not to use a sweeping newline remover.</p>
0
2016-09-22T09:42:06Z
39,635,699
<p>Default separator in <code>split()</code> is all whitespace, which includes newlines <code>\n</code> and spaces.</p> <p>Here is what the <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow">docs on split</a> say:</p> <pre><code>str.split([sep[, maxsplit]])

If sep is not specified or is None, a different splitting algorithm is
applied: runs of consecutive whitespace are regarded as a single separator,
and the result will contain no empty strings at the start or end if the
string has leading or trailing whitespace.
</code></pre> <p>When you define a new <code>sep</code> it only uses that separator to <code>split</code> the strings.</p>
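<p>For example:</p> <pre><code>&gt;&gt;&gt; 'a\tb\nc d'.split()      # default: any whitespace run
['a', 'b', 'c', 'd']
&gt;&gt;&gt; 'a\tb\nc d'.split(' ')   # explicit sep: spaces only
['a\tb\nc', 'd']
</code></pre>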
2
2016-09-22T09:48:03Z
[ "python" ]
Python - Removing \n when using default split()?
39,635,566
<p>I'm working on strings where I'm taking input from the command line. For example, with this input:</p> <pre><code>format driveName "datahere" </code></pre> <p>when I go string.split(), it comes out as:</p> <pre><code>&gt;&gt;&gt; input.split() ['format, 'driveName', '"datahere"'] </code></pre> <p>which is what I want.</p> <p>However, when I specify it to be string.split(" ", 2), I get:</p> <pre><code>&gt;&gt;&gt; input.split(' ', 2) ['format\n, 'driveName\n', '"datahere"'] </code></pre> <p>Does anyone know why and how I can resolve this? I thought it could be because I'm creating it on Windows and running on Unix, but the same problem occurs when I use nano in unix.</p> <p>The third argument (data) could contain newlines, so I'm cautious not to use a sweeping newline remover. </p>
0
2016-09-22T09:42:06Z
39,635,718
<p>The default <code>str.split</code> targets a number of "whitespace characters", including also tabs and others. If you do <code>str.split(' ')</code>, you tell it to split <em>only</em> on <code>' '</code> (a space). You can get the default behavior by specifying <code>None</code>, as in <code>str.split(None, 2)</code>.</p> <p>There may be a better way of doing this, depending on what your actual use-case is (your example does not replicate the problem...). As your example output implies newlines as separators, you should consider splitting on them explicitly.</p> <pre><code>inp = """
format
driveName
datahere
datathere
"""

inp.strip().split('\n', 2)
# ['format', 'driveName', 'datahere\ndatathere']
</code></pre> <p>This allows you to have spaces (and tabs etc) in the first and second item as well.</p>
1
2016-09-22T09:48:39Z
[ "python" ]
Logging exceptions in Django commands
39,635,579
<p>I have implemented custom commands in Django, and their exceptions aren't logged in my log file.</p> <p>I created an application <code>my_app_with_commands</code> which contains a directory <code>management/commands</code> in which I implemented some commands.</p> <p>A sample command could be like this one, which crashes due to an exception:</p> <pre><code>import logging

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Do something useful'
    log = logging.getLogger(__name__)

    def handle(self, *args, **options):
        self.log.info('Starting...')
        raise RuntimeError('Something bad happened')
        self.log.info('Done.')
</code></pre> <p>And my logging configuration is like this:</p> <pre><code>LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'normal': {
            'format': '%(asctime)s %(module)s %(levelname)s %(message)s',
        }
    },
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': os.path.join(BASE_DIR, '..', 'logs', 'my_log.log'),
            'formatter': 'normal',
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
            'propagate': True,
        },
        'my_app_with_commands': {
            'handlers': ['file', 'mail_admins'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
</code></pre> <p>When I run the command, the calls to the logger are successfully saved to the <code>my_log.log</code> file:</p> <pre><code>2016-09-22 11:37:01,514 test INFO Starting...
</code></pre> <p>But the exception, with its traceback, is displayed in <code>stderr</code> where the command had been called:</p> <pre><code>[mgarcia@localhost src]$ ./manage.py test
Traceback (most recent call last):
  File "./manage.py", line 10, in &lt;module&gt;
    execute_from_command_line(sys.argv)
  File "/home/mgarcia/anaconda3/envs/my_env/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
    utility.execute()
  File "/home/mgarcia/anaconda3/envs/my_env/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/mgarcia/anaconda3/envs/my_env/lib/python3.5/site-packages/django/core/management/base.py", line 348, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/mgarcia/anaconda3/envs/my_env/lib/python3.5/site-packages/django/core/management/base.py", line 399, in execute
    output = self.handle(*args, **options)
  File "/home/mgarcia/my_project/src/my_app_with_commands/management/commands/test_command.py", line 11, in handle
    raise RuntimeError('Something bad happened')
RuntimeError: Something bad happened
</code></pre> <p>I can implement a <code>try/except</code> block in each of my commands and manually log the exception. But can I just capture the exception and save it to my log file instead of <code>stderr</code> using Django settings?</p>
0
2016-09-22T09:42:35Z
39,639,468
<p>One way is to edit the <strong>manage.py</strong> file and add a try/except block to it:</p> <pre><code>log = logging.getLogger('my_app_with_commands')

# ...

try:
    execute_from_command_line(sys.argv)
except Exception as e:
    log.error('your exception log')
    raise e
</code></pre>
0
2016-09-22T12:46:24Z
[ "python", "django", "logging" ]
What is a good way to open configs or files during unit test in python?
39,635,850
<p>I am wondering what is a good way to read a config or local file during unit testing.</p> <p>I think one option is to write the test config file at test time. For example:</p> <pre><code>def setUp(self):
    self.config = ConfigParser.RawConfigParser()
    self.config.add_section('TestingSection')
    self.config.set('TestingSection', 'x', '1')
    with open('local_file.txt', 'w') as f:
        f.write('testing_value')
</code></pre> <p>or the files could be prepared before testing, and we just open them during testing, for example:</p> <pre><code>def setUp(self):
    self.config = ConfigParser.RawConfigParser()
    self.config('local_config_file_path')
    with open('local_file.txt', 'r') as f:
        self.testing_value = f.read()
</code></pre> <p>I am not sure which is the better way to read files during unit testing, and hope some experts can help me.</p> <p>If you have a better approach, please share it with me.</p> <p>Thank you.</p>
1
2016-09-22T09:54:03Z
39,638,185
<p>A good way is to not have to open them at all.</p> <p>For your functions relying on a config file, you could create a fake object that implements the required methods that your particular function relies on. It might just be a <code>get</code> method, that supports getting a "section".</p> <p>This is exploiting <a href="https://en.wikipedia.org/wiki/Duck_typing" rel="nofollow">duck typing</a>. Your python functions don't care what the actual object they are getting, AS LONG as it implements the config parser methods it expects.</p> <p>At some point you have to test the "edge" of your application, the entry point. I'm guessing that the entry point function is executed and it loads and parses the config file from the file system. A single test should be able to test this, as config parser is already tested, in python core. </p> <p>In this test I would create a <a href="https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile" rel="nofollow">named temporary file</a>, and use that files path as input into your main function, to make sure that it can at least execute without error. This may technically be an integration test as it interacts with the filesystem. </p>
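<p>To illustrate the fake-object idea, a minimal sketch (the section, option and function names are all made up):</p> <pre><code>class FakeConfig(object):
    """Implements only the config-parser surface the code under test uses."""
    def get(self, section, option):
        return {'TestingSection': {'x': '1'}}[section][option]

def reads_config(config):        # stands in for your real function
    return int(config.get('TestingSection', 'x'))

def test_reads_config():
    assert reads_config(FakeConfig()) == 1
</code></pre>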
1
2016-09-22T11:48:01Z
[ "python", "unit-testing", "python-unittest" ]
Concatenate tuples in list of tuples
39,635,919
<p>I have a list of tuples that looks something like this:</p> <pre><code>tuples = [('a', 10, 11), ('b', 13, 14), ('a', 1, 2)]
</code></pre> <p>Is there a way that I can join them together based on the first element of every tuple to make each tuple contain 5 elements? I know for a fact there aren't more than 2 of each letter in the tuples, i.e. no more than 2 'a's or 'b's in the entire list. The other requirement is to use Python 2.6. I can't figure out the logic for it. Any help is greatly appreciated.</p> <p>Desired output:</p> <pre><code>tuples = [('a', 10, 11, 1, 2), ('b', 13, 14, 0, 0)]
</code></pre> <p>I have tried creating a new list of first elements and adding the other elements to it, but then I only have a list and not a list of tuples.</p> <p>EDIT to provide previously tried code.</p> <p>Created a new list: <code>templist, resultList = [], []</code></p> <p>Populate templist with the first element in every tuple:</p> <pre><code>for i in tuples:
    templist.append(i[0])
elemlist = list(set(templist))
for i in elemlist:
    for j in tuples:
        if i == j[0]:
            resultlist.append((i, j[1], j[2]))
</code></pre> <p>This just returns the same list of tuples. How can I hold onto it and append every <code>j[1] j[2]</code> that corresponds to the correct <code>j[0]</code>?</p>
-1
2016-09-22T09:57:06Z
39,636,306
<p>Assuming there are only one or two of every letter in the list as stated:</p> <pre><code>import itertools

tuples = [('a', 10, 11), ('b', 13, 14), ('a', 1, 2)]

result = []
key = lambda t: t[0]
for letter,items in itertools.groupby(sorted(tuples,key=key),key):
    items = list(items)
    if len(items) == 1:
        result.append(items[0]+(0,0))
    else:
        result.append(items[0]+items[1][1:])

print(result)
</code></pre> <p>Output:</p> <pre><code>[('a', 10, 11, 1, 2), ('b', 13, 14, 0, 0)]
</code></pre>
2
2016-09-22T10:14:30Z
[ "python", "list", "tuples" ]
travis-CI error after pull request
39,635,938
<p>I've done my first pull request on GitHub recently.<br> The project I'm trying to contribute to is written in Python, and it uses tox and Travis CI.<br> When I look at github.com/author/project/pulls, I see an "Error: The Travis CI build could not complete due to an error" message near my request.<br> I've never worked with CI tools before, but apparently all builds have failed (as I understand it, Travis tries to build for Python versions 2.6, 2.7 and 3.4).<br> So I've looked up the Travis logs (travis-ci.org/author/project/builds/my_build_number). Here are the configs for one of the builds:</p> <pre><code>{
  "language": "python",
  "python": 2.7,
  "env": "TOXENV=py34",
  "install": "pip install --quiet --use-mirrors tox",
  "script": "tox",
  "after_script": [
    "if [ $TOXENV == \"cov\" ]; then pip install --quiet --use-mirrors coveralls; coveralls; fi"
  ],
  "group": "stable",
  "dist": "precise",
  "os": "linux"
}
</code></pre> <p>and this is what the logs look like:</p> <pre><code>$ export DEBIAN_FRONTEND=noninteractive
$ git clone --depth=50 https://github.com/author/project.git author/project
Setting environment variables from .travis.yml
$ export TOXENV=py34
$ source ~/virtualenv/python2.7/bin/activate
$ python --version
Python 2.7.12
$ pip --version
pip 8.1.2 from /home/travis/virtualenv/python2.7.12/lib/python2.7/site-packages (python 2.7)
$ pip install --quiet --use-mirrors tox
no such option: --use-mirrors

The command "pip install --quiet --use-mirrors tox" failed and exited with 2 during .

Your build has been stopped.
</code></pre> <p>As I see it, the build fails because it tries to launch pip with the "--use-mirrors" option (which was indeed deprecated and later completely removed from pip).</p> <p>So, the question is: could this be an error on my side, or does it happen because the author uses incorrect configs?</p>
2
2016-09-22T09:58:03Z
39,699,023
<p>Yes, you should remove <code>--use-mirrors</code> from the config file, since it's not used anymore and makes the build fail.</p> <p>The author probably didn't update the repository for a while (or only the config).</p> <p>Best ;-)</p>
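<p>For reference, after dropping the removed flag the install line in <code>.travis.yml</code> would simply be:</p> <pre><code>install: pip install --quiet tox
</code></pre>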
2
2016-09-26T09:21:12Z
[ "python", "github", "travis-ci", "pull-request", "tox" ]
How to set time limit in real time video capturing?
39,635,970
<p>I have code that captures video from the camera. The captured frames are appended to a list. But how do I set a time limit on this capture? I only want to capture the first two minutes; after that the recording must stop. My code is:</p> <pre><code>import cv2
import numpy

#creating video capture object
capture=cv2.VideoCapture(0)
#Set the resolution of capturing to 640W*480H
capture.set(3,640)
capture.set(4,480)
frame_set=[]
while(True):
    # Capture frame-by-frame
    ret, frame = capture.read()
    # Converting to Gray Scale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_set.append(gray)
    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) &amp; 0xFF == ord('q'):
        break
# When everything done, release the capture
capture.release()
cv2.destroyAllWindows()
</code></pre>
2
2016-09-22T09:59:37Z
39,636,782
<p>Use the <code>time</code> package:</p> <pre><code>import cv2
import numpy
import time

capture=cv2.VideoCapture(0)
capture.set(3,640)
capture.set(4,480)
frame_set=[]
start_time=time.time()
while(True):
    ret, frame = capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_set.append(gray)
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) &amp; 0xFF == ord('q'):
        break
    end_time=time.time()
    elapsed = end_time - start_time
    if elapsed &gt; 120:
        break
capture.release()
cv2.destroyAllWindows()
</code></pre>
1
2016-09-22T10:38:20Z
[ "python", "python-2.7", "video", "image-processing" ]
How to convert pandas dataframe rows into columns, based on category?
39,635,993
<p>I have a pandas data frame with a category variable and some number variables. Something like this:</p> <pre><code>ls = [{'count':5, 'module':'payroll', 'id':2},
      {'count': 53, 'module': 'general','id':2},
      {'id': 5,'count': 35, 'module': 'tax'},
     ]
df = pd.DataFrame.from_dict(ls)
</code></pre> <p>The df looks like this:</p> <pre><code> df
Out[15]:
   count  id   module
0      5   2  payroll
1     53   2  general
2     35   5      tax
</code></pre> <p>I want to convert (transpose is the right word?) the module variables into columns and group by the id. So something like:</p> <pre><code>   general_count  id  payroll_count  tax_count
0           53.0   2            5.0        NaN
1            NaN   5            NaN       35.0
</code></pre> <p>One approach to this would be to use apply:</p> <pre><code>df['payroll_count'] = df.id.apply(lambda x: df[df.id==x][df.module=='payroll'])
</code></pre> <p>However, this suffers from multiple drawbacks:</p> <ol> <li><p>Costly, and takes too much time</p></li> <li><p>Creates artifacts and empty dataframes that need to be cleaned up.</p></li> </ol> <p>I sense there's a better way to achieve this with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">pandas groupby</a>, but can't find a way to do this same operation more efficiently. Please help.</p>
2
2016-09-22T10:00:36Z
39,636,105
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by both columns, which creates a <code>MultiIndex</code> from <code>id</code> and <code>module</code>. Then you need to aggregate some way - I use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.mean.html" rel="nofollow"><code>mean</code></a> - then convert the one-column <code>DataFrame</code> to a <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.squeeze.html" rel="nofollow"><code>DataFrame.squeeze</code></a> (then it is not necessary to remove the top level of the MultiIndex in columns) and reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow"><code>unstack</code></a>. Last, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_suffix.html" rel="nofollow"><code>add_suffix</code></a> to the column names:</p> <pre><code>df = df.groupby(['id','module']).mean().squeeze().unstack().add_suffix('_count')
print (df)
module  general_count  payroll_count  tax_count
id
2                53.0            5.0        NaN
5                 NaN            NaN       35.0
</code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>pivot</code></a>; then you need to remove the <code>MultiIndex</code> from the columns by <code>list comprehension</code>:</p> <pre><code>df = df.pivot(index='id', columns='module')
df.columns = ['_'.join((col[1], col[0])) for col in df.columns]
print (df)
    general_count  payroll_count  tax_count
id
2            53.0            5.0        NaN
5             NaN            NaN       35.0
</code></pre>
2
2016-09-22T10:05:08Z
[ "python", "pandas" ]
Markdown fails combining fenced_code and attr_list
39,636,154
<p>I'm trying to write markdown files for mkdocs and want an id attribute on the pre tag generated by fenced_code. If I use both extensions in combination, there is no pre tag but a p (paragraph) tag:</p> <pre><code>import markdown

text = """# Welcome

This is *true* markdown text.

````python
a=5
print "Hello World"
````{: #hello }
"""

html = markdown.markdown(text, extensions= ['markdown.extensions.fenced_code',
                                            'markdown.extensions.attr_list'])
print html
</code></pre> <p>print returns</p> <pre><code>&lt;h1&gt;Welcome&lt;/h1&gt;
&lt;p&gt;This is &lt;em&gt;true&lt;/em&gt; markdown text.&lt;/p&gt;
&lt;p&gt;&lt;code id="hello"&gt;python
a=5
print "Hello World"&lt;/code&gt;&lt;/p&gt;
</code></pre> <p>but I expected</p> <pre><code>&lt;pre id="hello"&gt;&lt;code&gt;...
</code></pre> <p>It's the same under mkdocs, which is what I actually use. I need the id to access the element through JavaScript and run the embedded Python code with Skulpt. Is there a solution to achieve this?</p>
0
2016-09-22T10:07:21Z
39,787,459
<p>I posted an issue to mkdocs on GitHub and they say it is not possible at the moment. So I tried something else. Because I needed the id of the pre element in a JavaScript function which reacts to an onclick, I figured out how to access the pre content from there. I was lucky to find that parentNode.previousElementSibling does what I want. The event's target is the element with the onclick event.</p> <pre><code>elem = event.target.parentNode.previousElementSibling
</code></pre> <p>Hope anyone in a comparable situation understands what I mean :-)</p>
0
2016-09-30T08:53:40Z
[ "javascript", "python", "html", "markdown", "mkdocs" ]
I got an error while using git commit -m "Added a Procfile" on heroku
39,636,164
<p>I'm using Heroku for deploying a Python/Django project. I used the <code>git add .</code> command and after that I ran <code>git commit -m "Added a Procfile"</code>. I got this error:</p> <pre><code>error: pathspec 'a' did not match any file(s) known to git.
error: pathspec 'Procfile”' did not match any file(s) known to git.
</code></pre> <p>My Procfile is like this:</p> <pre><code>web: gunicorn resume.wsgi --log-file -
</code></pre> <p>How can I solve this error?</p>
0
2016-09-22T10:08:04Z
39,636,207
<p>Your error shows that you are somehow using curly quotes, not straight quotes. Type your command directly into the command window rather than copying and pasting from a doc.</p>
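<p>For example, the command retyped with straight quotes:</p> <pre><code>git commit -m "Added a Procfile"
</code></pre>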
2
2016-09-22T10:10:17Z
[ "python", "django", "git", "heroku" ]
Get pandas subseries by values when each value is a ndarray
39,636,224
<p>I want to make a subseries by values when the Series consists of ndarrays.</p> <p>This one works:</p> <pre><code>sa = pd.Series([1,2,35,2],index=list('abcd'))
sa[sa==2]
</code></pre> <p>Results:</p> <pre><code>b    2
d    2
dtype: int64
</code></pre> <p>Why does the code below not work? What should I change? It gives a ValueError: Lengths must match to compare.</p> <pre><code>sa2 = pd.Series([np.array(['out']), np.array(['2f-right', '2f']),
                 np.array(['out', '2f']), np.array(['out'])], index=list('abcd'))
ar = np.array(['out'])
sa2[sa2 == ar]
</code></pre>
1
2016-09-22T10:11:02Z
39,636,326
<p>The comparison operator doesn't understand how to compare np arrays for equality here, so you can use <code>apply</code> with a <code>lambda</code>:</p> <pre><code>In [211]: sa2[sa2.apply(lambda x: (x == ar).all())]
Out[211]:
a    [out]
d    [out]
dtype: object
</code></pre> <p>Here we compare each element against the array and use <code>all</code> to generate a boolean mask.</p>
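<p>If you prefer not to rely on broadcasting, <code>np.array_equal</code> should build the same mask here (assuming <code>numpy</code> is imported as <code>np</code>); it simply returns <code>False</code> when the shapes don't match:</p> <pre><code>import numpy as np

# True only where the element is an array with the same shape and contents as ar
sa2[sa2.apply(lambda x: np.array_equal(x, ar))]
</code></pre>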
2
2016-09-22T10:15:06Z
[ "python", "pandas", "indexing", "series" ]
Selenium webdriverwait: __init__() takes exactly 2 arguments (3 given)
39,636,236
<p>My script gives this error:</p> <pre><code>Traceback (most recent call last):
  File "p3.py", line 21, in &lt;module&gt;
    WebDriverWait(driver, timex).until(EC.presence_of_element_located(by, element))
TypeError: __init__() takes exactly 2 arguments (3 given)
</code></pre> <p>I have not used <code>__init__()</code> myself, so why is this error there?</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time

chrome_path = r"C:\Users\Bhanwar\Desktop\New folder (2)\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("https://priceraja.com/mobile/pricelist/samsung-mobile-price-list-in-india")
#driver.implicitly_wait(10)

i = 0
timex = 5
by = By.ID
hook = "product-itmes-"  # The id of one item; ids seem to be this plus
                         # the item number
button = '.loadmore'

while i&lt;3:
    element_number = 25*i
    element = hook+str(element_number)  # It looks like 25 items are added each time, starting at 25
    WebDriverWait(driver, timex).until(EC.presence_of_element_located(by, element))
    driver.find_element_by_css_selector(button).click()
    time.sleep(5)  # Makes the page wait for the element to change
    i += 1
</code></pre>
0
2016-09-22T10:11:31Z
39,636,404
<p><code>presence_of_element_located()</code> only takes one argument, a locator, which is a tuple. You forgot to add the <code>(...)</code> parentheses needed for a tuple in a call:</p> <pre><code>WebDriverWait(driver, timex).until( EC.presence_of_element_located((by, element))) # these make this a tuple ^ and ^ </code></pre>
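<p>Plugged back into the loop from the question, a corrected call would look something like this (reusing the <code>by</code>, <code>element</code> and <code>timex</code> variables already defined there):</p> <pre><code>locator = (by, element)  # a single tuple, e.g. (By.ID, "product-itmes-25")
WebDriverWait(driver, timex).until(EC.presence_of_element_located(locator))
</code></pre>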
1
2016-09-22T10:19:08Z
[ "python", "selenium", "selenium-chromedriver" ]
Can I create cloudshell shell without driver?
39,636,277
<p>I would like to build a Quali CloudShell shell without a driver, i.e. just a data model with no Python driver. Can I do that when working with shellfoundry?</p>
1
2016-09-22T10:13:14Z
39,637,107
<p>Yes, you can create a CloudShell Shell without a driver, whether you are using ShellFoundry or not.</p> <p>In order to remove the driver from being attached to the Model of the Shell, open the <strong>shellconfig.xml</strong> file located under the <strong>datamodel</strong> directory for editing.</p> <p>Then remove the <strong>Driver</strong> attribute from the <strong>ResourceTemplate</strong> XML node:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;ShellsConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.qualisystems.com/ResourceManagement/ShellsConfigurationSchema.xsd"&gt;
  &lt;ResourceTemplates&gt;
    &lt;ResourceTemplate Name="ShellWithoutDriver" Model="ShellWithoutDriver"&gt;
      &lt;Description&gt;&lt;/Description&gt;
      &lt;AutoLoad Enable="false"&gt;
        &lt;Description&gt;Description for autoload
        &lt;/Description&gt;
      &lt;/AutoLoad&gt;
      &lt;Attributes&gt;
        &lt;Attribute Name="User" Value="" /&gt;
        &lt;Attribute Name="Password" Value="" /&gt;
      &lt;/Attributes&gt;
    &lt;/ResourceTemplate&gt;
  &lt;/ResourceTemplates&gt;
&lt;/ShellsConfiguration&gt;
</code></pre>
0
2016-09-22T10:55:41Z
[ "python", "continuous-integration", "continuous-deployment", "devops" ]
Shopify + Python library: how to create new shipping address
39,636,421
<p>I'm having trouble adding a new shipping address using the Python API.</p> <p>Let's put aside the auth part and assume I have the customer_id.</p> <p>I couldn't find the right lines of code to achieve this goal. I've searched the shopify tests folder but couldn't find such an example there.</p> <p>Can someone point me in the right direction?</p>
2
2016-09-22T10:19:43Z
39,719,756
<p>While there is currently no <code>CustomerAddress</code> resource implemented in the official Python Shopify API client, you can append the address information directly onto the <code>addresses</code> attribute of the <code>Customer</code> resource as a workaround:</p> <pre><code>customer_id = 1234567890

new_address = {
    "address1": "123 Main Street",
    "address2": "#5",
    "city": "New York",
    "company": "",
    "country": "United States",
    "name": "John Smith",
    "phone": "",
    "province": "New York",
    "zip": "10001"
}

customer = shopify.Customer.find(customer_id)
customer.addresses.append(new_address)
customer.save()
</code></pre>
0
2016-09-27T08:28:24Z
[ "python", "shopify" ]