Dataset columns:
title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
Find a HTML tag that contains certain text
39,529,983
<p>So I am trying to find a particular string in website html source file.</p> <p>Ex) If I have following html tag</p> <pre><code>&lt;div class="rev" data="123456789adfdfdfdfadf"&gt;&lt;/div&gt; </code></pre> <p>I want to be able to find this particular line that contain <code>div class = "rev"</code> and data that are inside and print out <code>"123456789adfdfdfdfadf"</code></p> <p>But before I do that, I am just trying to make sure its finding the right tag but I kept getting <code>[]</code> as output</p> <p>This is my code</p> <pre><code>import urllib2 from BeautifulSoup import BeautifulSoup import re request = urllib2.Request("http://www.adidas.co.uk/nmd_r1-shoes/BB1970.html") request.add_header("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; es-ES; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5") f = urllib2.urlopen(request) soup = BeautifulSoup(f) d = soup.findAll('div', text = re.compile('123456789adfdfdfdfadf'), attrs = {'class' : 'data'}) print d </code></pre>
2
2016-09-16T11:06:49Z
39,532,152
<p>You are mixing up your data (an attribute) and the text you're looking for.<br> With the <code>div</code> given, you should find it with:</p> <pre><code>print [item["data"] for item in soup.find_all('div', {'class': 'rev'}) if "data" in item.attrs] </code></pre> <p>Or, a bit more precise:</p> <pre><code>[item['data'] for item in soup.find_all('div', attrs={'class': 'rev', 'data': True})] </code></pre>
1
2016-09-16T12:59:18Z
[ "python", "html", "python-2.7" ]
Emitting event from socketIo python
39,529,993
<p>I'm stuck in development of socket.io with Python. I'm using this <a href="http://python-socketio.readthedocs.io/en/latest/" rel="nofollow">lib</a> </p> <p>I got a chat app running by using <a href="https://github.com/sreejesh79/android-socket.io-demo" rel="nofollow">this</a> android part and with the lib sample. I want to trigger an event from the server side from a separate file. Here's my code.</p> <pre><code>import socketio import eventlet from flask import Flask, render_template sio = socketio.Server(logger=True, async_handlers= True) app = Flask(__name__) eventlet.monkey_patch() @sio.on('connect', namespace='/d') def connect(sid, environ): print('connect ', sid) pass @sio.on('messaget', namespace='/d') def messaget(sid, data): print('message ', data) # sio.emit('messaget', data, namespace='/d') # sendmsg("YO YO") @sio.on('disconnect', namespace='/d') def disconnect(sid): print('disconnect ', sid) def start_socket(app): # wrap Flask application with socketio's middleware app = socketio.Middleware(sio, app) eventlet.wsgi.server(eventlet.listen(('', 8000)), app) def sendmsg(data): my_data= { 'text': data }; sio.emit('messaget', my_data, namespace='/d') start_socket(app) </code></pre> <p>I'm calling <code>sendmsg("dipen")</code> from my another python file. I'm getting a log <strong>emitting event "messaget" to all [/d]</strong> but android app is not getting any messages. And it work's if event is emitting from the Android app. I tried with the NodeJs code and it worked for the NodeJs code so I'm pretty sure that something is wrong in my Python code. Hope that someone could save me from this. </p>
0
2016-09-16T11:07:20Z
39,532,878
<p>Send a message from Android client. Does your Python code get it? If not, check if you have connected to the same namespace.</p>
0
2016-09-16T13:35:38Z
[ "python", "node.js", "socket.io", "chat", "real-time" ]
Is it possible to create a polynomial through Numpy's C API?
39,530,054
<p>I'm using SWIG to wrap a C++ library with its own polynomial type. I'd like to create a typemap to automatically convert that to a numpy polynomial. However, browsing the docs for the numpy C API, I'm not seeing anything that would allow me to do this, only numpy arrays. Is it possible to typemap to a polynomial?</p>
0
2016-09-16T11:10:01Z
39,576,294
<p>Numpy's polynomial package is largely a collection of functions that can accept array-like objects as the polynomial. Therefore, it is sufficient to convert to a normal ndarray, where the value at index n is the coefficient for the term with exponent n.</p>
0
2016-09-19T14:59:44Z
[ "python", "c++", "numpy", "swig" ]
How to chain a queryset together in Django correctly
39,530,081
<p>Using the example below, I'm trying to use a queryset and append/chain filters together. To my understanding last <code>queryset.count()</code> should have just 1 instance, but it always had the original 10 in it. </p> <p>Expected output of last <code>queryset.count()</code> is 1:</p> <pre><code># Set a default queryset. def get_queryset(self, *args, **kwargs): queryset = super(UserMixin, self).get_queryset(*args, **kwargs) queryset.count() # 10 instacnes queryset.filter(id=1) queryset.count() # 10 instacnes excpeted 1 </code></pre> <p>I can solve this problem I think by:</p> <pre><code>queryset = queryset.filter(id=1) </code></pre> <p>Is this the correct way or there a way to chain them correctly where I can add the queryset object around?</p>
1
2016-09-16T11:11:30Z
39,530,465
<p>You never assign the result of <code>filter()</code> to anything, so the original queryset is not updated:</p> <pre><code> queryset = queryset.filter(id=1) </code></pre> <p>Yes, this is the correct way, because <code>filter()</code> creates a new queryset; otherwise you would need to call <code>count()</code> on the end of the previous <code>filter()</code> call.</p>
3
2016-09-16T11:33:26Z
[ "python", "django" ]
Py2.7: Tkinter have code for pages in separate files
39,530,107
<p>I have code that opens up a window and has three buttons for page 1, page 2, and page 3. However I also have Four .py files (Dashboard, PageOne, PageTwo. PageThree). Dashboard will be the file that used to run the application. I want it so that the code from each file is only runs/displayed when the user clicks on the corresponding page button. </p> <p>Dashboard.py:</p> <pre><code>import Tkinter as tk import PageOne import PageTwo import PageThree class Page(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) def show(self): self.lift() class MainView(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) p1 = Page1(self) p2 = Page2(self) p3 = Page3(self) buttonframe = tk.Frame(self) container = tk.Frame(self) buttonframe.pack(side="top", fill="x", expand=False) container.pack(side="top", fill="both", expand=True) p1.place(in_=container, x=0, y=0, relwidth=1, relheight=1) p2.place(in_=container, x=0, y=0, relwidth=1, relheight=1) p3.place(in_=container, x=0, y=0, relwidth=1, relheight=1) b1 = tk.Button(buttonframe, text="Page 1", command=p1.lift) b2 = tk.Button(buttonframe, text="Page 2", command=p2.lift) b3 = tk.Button(buttonframe, text="Page 3", command=p3.lift) b1.pack(side="left") b2.pack(side="left") b3.pack(side="left") p1.show() if __name__ == "__main__": root = tk.Tk() main = MainView(root) main.pack(side="top", fill="both", expand=True) root.wm_geometry("400x400") root.mainloop() </code></pre> <p>PageOne.py:</p> <pre><code>import Tkinter as tk import Dashboard class Page1(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 1") label.pack(side="top", fill="both", expand=True) </code></pre> <p>PageTwo.py</p> <pre><code>import Tkinter as tk import Dashboard class Page2(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 2") label.pack(side="top", fill="both", expand=True) </code></pre> <p>PageThree.py:</p> <pre><code>import Tkinter as tk import Dashboard class Page3(Page): def __init__(self, *args, **kwargs): Page.__init__(self, *args, **kwargs) label = tk.Label(self, text="This is page 3") label.pack(side="top", fill="both", expand=True) </code></pre> <p>EDIT:</p> <p>Error I'm getting:</p> <pre><code>Traceback (most recent call last): File "C:\Users\ross.watson\Google Drive\Smart Mirror\Test\Dashboard.py", line 2, in &lt;module&gt; import PageOne File "C:\Users\ross.watson\Google Drive\Smart Mirror\Test\PageOne.py", line 2, in &lt;module&gt; import Dashboard File "C:\Users\ross.watson\Google Drive\Smart Mirror\Test\Dashboard.py", line 3, in &lt;module&gt; import PageTwo File "C:\Users\ross.watson\Google Drive\Smart Mirror\Test\PageTwo.py", line 4, in &lt;module&gt; class Page2(Page): NameError: name 'Page' is not defined </code></pre>
-1
2016-09-16T11:12:38Z
39,534,704
<p>The error messages tell you everything you need to know in order to fix the problem. This is a simple problem that has nothing to do with tkinter, and everything to do with properly organizing and using your code. </p> <p>For example, what is <code>Page is not defined</code> telling you? It's telling you, quite literally, that <code>Page</code> is not defined. You defined it in your main file, but you're using it in another file. The solution is to move the definition of <code>Page</code> to a separate file that can be imported into the files that use it.</p> <p>Modify your files and imports to look like this:</p> <h2>page.py</h2> <pre><code>class Page(tk.Frame): ... </code></pre> <h2>pageOne.py</h2> <pre><code>from page import Page class PageOne(Page): ... </code></pre> <h2>pageTwo.py</h2> <pre><code>from page import Page class PageTwo(Page): ... </code></pre> <h2>pageThree.py</h2> <pre><code>from page import Page class PageThree(Page): ... </code></pre> <h2>Dashboard.py</h2> <pre><code>from pageOne import PageOne from pageTwo import PageTwo from pageThree import PageThree ... </code></pre>
0
2016-09-16T15:04:51Z
[ "python", "python-2.7", "tkinter" ]
Camera calibration for Structure from Motion with OpenCV (Python)
39,530,110
<p>I want to calibrate a car video recorder and use it for 3D reconstruction with Structure from Motion (SfM). The original size of the pictures I have took with this camera is 1920x1080. Basically, I have been using the source code from the <a href="http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html" rel="nofollow">OpenCV tutorial</a> for the calibration. </p> <p>But there are some problems and I would really appreciate any help. </p> <p>So, as usual (at least in the above source code), here is the pipeline:</p> <ol> <li>Find the chessboard corner with <code>findChessboardCorners</code></li> <li>Get its subpixel value with <code>cornerSubPix</code></li> <li>Draw it for visualisation with <code>drawhessboardCorners</code></li> <li>Then, we calibrate the camera with a call to <code>calibrateCamera</code> </li> <li>Call the <code>getOptimalNewCameraMatrix</code> and the <code>undistort</code> function to undistort the image</li> </ol> <p>In my case, since the image is too big (1920x1080), I have resized it to 640x320 (during SfM, I will also use this size of image, so, I don't think it would be any problem). And also, I have used a 9x6 chessboard corners for the calibration.</p> <p>Here, the problem arose. After a call to the <code>getOptimalNewCameraMatrix</code>, the distortion come out totally wrong. Even the returned ROI is <code>[0,0,0,0]</code>. Below is the original image and its undistorted version: </p> <p><a href="http://i.stack.imgur.com/elJx1.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/elJx1.jpg" alt="Original image"></a> <a href="http://i.stack.imgur.com/nEbeN.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/nEbeN.jpg" alt="Undistorted image"></a> You can see the image in the undistorted image is at the bottom left.</p> <p>But, if I didn't call the <code>getOptimalNewCameraMatrix</code> and just straight <code>undistort</code> it, I got a quite good image. <a href="http://i.stack.imgur.com/L08QS.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/L08QS.jpg" alt="Undistorted image"></a></p> <p>So, I have three questions. </p> <ol> <li><p>Why is this? I have tried with another dataset taken with the same camera, and also with my iPhone 6 Plus, but the results are same as above. </p></li> <li><p>Another question is, what is the <code>getOptimalNewCameraMatrix</code> does? I have read the documentations several times but still cannot understand it. From what I have observed, if I didn't call the <code>getOptimalNewCameraMatrix</code>, my image will retain its size but it would be zoomed and blurred. Can anybody explain this function in more detail for me?</p></li> <li><p>For SfM, I guess the call to <code>getOptimalNewCameraMatrix</code> is important? Because if not, the undistorted image would be zoomed and blurred, making the keypoint detection harder (in my case, I will be using the optical flow)?</p></li> </ol> <p>I have tested the code with the opencv sample pictures and the results are just fine.</p> <p>Below is my source code:</p> <pre><code>from sys import argv import numpy as np import imutils # To use the imutils.resize function. # Resizing while preserving the image's ratio. # In this case, resizing 1920x1080 into 640x360. 
import cv2 import glob # termination criteria criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0) objp = np.zeros((9*6,3), np.float32) objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d point in real world space imgpoints = [] # 2d points in image plane. images = glob.glob(argv[1] + '*.jpg') width = 640 for fname in images: img = cv2.imread(fname) if width: img = imutils.resize(img, width=width) gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Find the chess board corners ret, corners = cv2.findChessboardCorners(gray, (9,6),None) # If found, add object points, image points (after refining them) if ret == True: objpoints.append(objp) corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria) imgpoints.append(corners2) # Draw and display the corners img = cv2.drawChessboardCorners(img, (9,6), corners2,ret) cv2.imshow('img',img) cv2.waitKey(500) cv2.destroyAllWindows() ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None) for fname in images: img = cv2.imread(fname) if width: img = imutils.resize(img, width=width) h, w = img.shape[:2] newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h)) # undistort dst = cv2.undistort(img, mtx, dist, None, newcameramtx) # crop the image x,y,w,h = roi dst = dst[y:y+h, x:x+w] cv2.imshow("undistorted", dst) cv2.waitKey(500) mean_error = 0 for i in xrange(len(objpoints)): imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist) error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2) mean_error += error print "total error: ", mean_error/len(objpoints) </code></pre> <p>Already ask someone in answers.opencv.org and he tried my code and my dataset with success. I wonder what is actually wrong.</p>
2
2016-09-16T11:12:43Z
39,533,639
<p><strong>Question #2:</strong></p> <p>With <code>cv::getOptimalNewCameraMatrix(...)</code> you can compute a new camera matrix according to the free scaling parameter <code>alpha</code>.</p> <p>If <code>alpha</code> is set to <code>1</code>, all the source image pixels are retained in the undistorted image, so you'll see a black, curved border along the undistorted image (like a pincushion). This scenario is unfortunate for several computer vision algorithms, because, for example, new edges appear in the undistorted image. </p> <p>By default <code>cv::undistort(...)</code> regulates the subset of the source image that will be visible in the corrected image, which is why only the sensible pixels are shown there - no pincushion around the corrected image, but data loss.</p> <p>Either way, you can control the subset of the source image that will be visible in the corrected image:</p> <pre><code>cv::Mat image, cameraMatrix, distCoeffs; // ... cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, image.size(), 1.0); cv::Mat correctedImage; cv::undistort(image, correctedImage, cameraMatrix, distCoeffs, newCameraMatrix); </code></pre> <p><strong>Question #1:</strong></p> <p>It is just my feeling, but you should also take care that if you resize your image after the calibration, then the camera matrix must be "scaled" as well, for example:</p> <pre><code>cv::Mat cameraMatrix; cv::Size calibSize; // Image size during the calibration, e.g. 1920x1080 cv::Size imageSize; // Your current image size, e.g. 640x320 // ... cv::Matx31d t(0.0, 0.0, 1.0); t(0) = (double)imageSize.width / (double)calibSize.width; t(1) = (double)imageSize.height / (double)calibSize.height; cameraMatrixScaled = cv::Mat::diag(cv::Mat(t)) * cameraMatrix; </code></pre> <p>This must be done only for the camera matrix, because the distortion coefficients do not depend on the resolution.</p> <p><strong>Question #3:</strong></p> <p>In any case, I think <code>cv::getOptimalNewCameraMatrix(...)</code> is not important in your case: the undistorted image can be zoomed and blurred because you remove the effect of a non-linear transformation. If I were you, I would try the optical flow without calling <code>cv::undistort(...)</code>. I think that even a distorted image can contain a lot of good features for tracking.</p>
0
2016-09-16T14:11:09Z
[ "python", "opencv", "camera-calibration", "3d-reconstruction", "structure-from-motion" ]
Python numpy nonzero cumsum
39,530,157
<p>I want to do nonzero <code>cumsum</code> with <code>numpy</code> array. Simply skip zeros in array and apply <code>cumsum</code>. Suppose I have a np. array </p> <pre><code>a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) </code></pre> <p>my result should be</p> <pre><code>[1,3,4,6,11,0,20,26,0,28,31,0] </code></pre> <p>I have tried this</p> <pre><code>a = np.cumsum(a[a!=0]) </code></pre> <p>but result is</p> <pre><code>[1,3,4,6,11,20,26,28,31] </code></pre> <p>Any ideas? </p>
1
2016-09-16T11:16:24Z
39,530,202
<p>You need to mask the original array so only the non-zero elements are overwritten:</p> <pre><code>In [9]: a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) a[a!=0] = np.cumsum(a[a!=0]) a Out[9]: array([ 1, 3, 4, 6, 11, 0, 20, 26, 0, 28, 31, 0]) </code></pre> <p>Another method is to use <code>np.where</code>:</p> <pre><code>In [93]: a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) a = np.where(a!=0,np.cumsum(a),a) a Out[93]: array([ 1, 3, 4, 6, 11, 0, 20, 26, 0, 28, 31, 0]) </code></pre> <p><strong>timings</strong></p> <pre><code>In [91]: %%timeit a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) a[a!=0] = np.cumsum(a[a!=0]) a The slowest run took 4.93 times longer than the fastest. This could mean that an intermediate result is being cached 100000 loops, best of 3: 12.6 µs per loop In [94]: %%timeit a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) a = np.where(a!=0,np.cumsum(a),a) a The slowest run took 6.00 times longer than the fastest. This could mean that an intermediate result is being cached 100000 loops, best of 3: 10.5 µs per loop </code></pre> <p>the above shows that <code>np.where</code> is marginally quicker than the first method</p>
2
2016-09-16T11:19:05Z
[ "python", "numpy" ]
Python numpy nonzero cumsum
39,530,157
<p>I want to do nonzero <code>cumsum</code> with <code>numpy</code> array. Simply skip zeros in array and apply <code>cumsum</code>. Suppose I have a np. array </p> <pre><code>a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) </code></pre> <p>my result should be</p> <pre><code>[1,3,4,6,11,0,20,26,0,28,31,0] </code></pre> <p>I have tried this</p> <pre><code>a = np.cumsum(a[a!=0]) </code></pre> <p>but result is</p> <pre><code>[1,3,4,6,11,20,26,28,31] </code></pre> <p>Any ideas? </p>
1
2016-09-16T11:16:24Z
39,530,363
<p>Just trying to simplify it:)</p> <pre><code>b=np.cumsum(a) [b[i] if ((i &gt; 0 and b[i] != b[i-1]) or i==0) else 0 for i in range(len(b))] </code></pre>
0
2016-09-16T11:27:47Z
[ "python", "numpy" ]
Python numpy nonzero cumsum
39,530,157
<p>I want to do nonzero <code>cumsum</code> with <code>numpy</code> array. Simply skip zeros in array and apply <code>cumsum</code>. Suppose I have a np. array </p> <pre><code>a = np.array([1,2,1,2,5,0,9,6,0,2,3,0]) </code></pre> <p>my result should be</p> <pre><code>[1,3,4,6,11,0,20,26,0,28,31,0] </code></pre> <p>I have tried this</p> <pre><code>a = np.cumsum(a[a!=0]) </code></pre> <p>but result is</p> <pre><code>[1,3,4,6,11,20,26,28,31] </code></pre> <p>Any ideas? </p>
1
2016-09-16T11:16:24Z
39,530,868
<p>To my mind, jotasi's suggestion in a comment to the OP is the most idiomatic. Here are some timings, though note that Shawn. L's answer returns a Python list, not a NumPy array, so they are not strictly comparable.</p> <pre><code>import numpy as np def jotasi(a): b = np.cumsum(a) b[a==0] = 0 return b def EdChum(a): a[a!=0] = np.cumsum(a[a!=0]) return a def ShawnL(a): b=np.cumsum(a) b = [b[i] if ((i &gt; 0 and b[i] != b[i-1]) or i==0) else 0 for i in range(len(b))] return b def Ed2(a): return np.where(a!=0,np.cumsum(a),a) </code></pre> <p>To test, I generated a NumPy array of 1E5 integers in [0,100]. Therefore about 1% are 0. These results are from NumPy 1.9.2, Python 2.7.12, and are presented from slowest to fastest:</p> <pre><code>import timeit a = np.random.random_integers(0,100,100000) len(a[a==0]) #verify there are some 0's 1003 timeit.timeit("ShawnL(a)", "from __main__ import a,EdChum,ShawnL,jotasi,Ed2", number=250) 11.743098020553589 timeit.timeit("EdChum(a)", "from __main__ import a,EdChum,ShawnL,jotasi,Ed2", number=250) 0.1794271469116211 timeit.timeit("Ed2(a)", "from __main__ import a,EdChum,ShawnL,jotasi,Ed2", number=250) 0.1282949447631836 timeit.timeit("jotasi(a)", "from __main__ import a,EdChum,ShawnL,jotasi,Ed2", number=250) 0.09286999702453613 </code></pre> <p>I'm a little surprised there's such a big difference between jotasi's and Ed Chum's answers - minimizing boolean operations is noticeable I guess. No surprise that a list comprehension is slow.</p>
1
2016-09-16T11:54:20Z
[ "python", "numpy" ]
python bs4 requests: ValueError: unconverted data remains:
39,530,159
<p>i did go through the existing questions, but for my code there is no hint given after the <strong>ValueError</strong> as to where exactly the data remained unconverted. code below. please help:</p> <pre><code>str_time = 'Fri, 16 Sep 2016 14:28:14 +0530' obj_time = datetime.datetime.strptime(str_time[:25],'%a, %d %b %Y %H:%M:%S') obj_time_rounded = obj_time.replace(hour=0, minute=0, second=0, microseconds =0) today = datetime.datetime.today() today_rounded = today.replace(hour=0, minute=0, second=0, microsecond=0) delta = (today_rounded - obj_time_rounded) if delta.days == 0: .... .... error: File "C:\Users\dell\AppData\Local\Programs\Python\Python35\lib\_strptime.py", line 340, in _strptime data_string[found.end():]) ValueError: unconverted data remains: </code></pre>
0
2016-09-16T11:16:29Z
39,530,344
<p>What do you mean, no hint? It is in line 340.</p> <p>This is the same problem:</p> <p><a href="http://stackoverflow.com/questions/20327937/valueerror-unconverted-data-remains-0205">ValueError: unconverted data remains: 02:05</a></p>
0
2016-09-16T11:26:35Z
[ "python", "datetime" ]
Plotting linesegments with Pysal in a map
39,530,189
<p>I am using Pysal to visualize geospatial data. I want to plot segments between individuals (network) but I don't know how to plot my list of LineSegment shapes (lc). So how can I display those LineSegment in my map ? (Below, the plotting code)</p> <pre><code>data_table = ps.pdio.read_files("cartes/departements/DEPARTEMENT.shp") dt = data_table[data_table.apply(lambda x:x["CODE_DEPT"][0:2] in ["75","92","93","94"],axis=1)]["geometry"] bbox = [min([i.bbox[0] for i in dt]),min([i.bbox[1] for i in dt]),max([i.bbox[2] for i in dt]),max([i.bbox[3] for i in dt])] fig = figure(figsize=(15,12)) base = maps.map_poly_shp(dt,bbox=bbox) base.set_facecolor("#e1e1e1") base.set_linewidth(0.5) base.set_edgecolor('black') pts_r = scatter([i[0] for i in data],[i[0] for i in data],s=10,alpha=0.5) pts_r.set_color('red') lc = [LineSegment(i[0],i[1]) for i in lines] ax = maps.setup_ax([base,pts_i], [bbox,bbox]) fig.add_axes(ax) </code></pre> <p>Thanks !</p>
0
2016-09-16T11:18:27Z
39,592,720
<h2>Solved</h2> <p>I wasn't using the pysal plotting framework correctly. The only thing needed was to use the <code>LineCollection(lines, alpha=0.1)</code> method and then add it to <code>setup_ax</code>.</p> <pre><code>from pysal.contrib.viz import mapping as maps data_table = ps.pdio.read_files("cartes/departements/DEPARTEMENT.shp") dt = data_table[data_table.apply(lambda x:x["CODE_DEPT"][0:2] in ["75","92","93","94"],axis=1)]["geometry"] bbox = [min([i.bbox[0] for i in dt]),min([i.bbox[1] for i in dt]),max([i.bbox[2] for i in dt]),max([i.bbox[3] for i in dt])] fig = figure(figsize=(15,12)) base = maps.map_poly_shp(dt,bbox=bbox) base.set_facecolor("#e1e1e1") base.set_linewidth(0.5) base.set_edgecolor('black') pts_r = scatter([i[0] for i in data],[i[1] for i in data],s=10,alpha=0.5) pts_r.set_color('red') lc = maps.LineCollection(lines,alpha=0.1) ax = maps.setup_ax([base,pts_r,lc], [bbox,bbox,bbox]) fig.add_axes(ax) </code></pre>
0
2016-09-20T11:20:09Z
[ "python", "networking", "gis", "pysal" ]
Everything seems correct but I get "ValueError: time data" when parsing a timestamp
39,530,273
<p>It seems that the format is correct but I still get this error. What could be the problem here?</p> <pre><code>ValueError: time data '2016-09-16 11:36:28' does not match format '%y-%m-%d %H:%M:%S' </code></pre>
0
2016-09-16T11:22:40Z
39,530,338
<p>You need to use <code>%Y</code> for 4-digit years:</p> <pre><code>In [10]: d='2016-09-16 11:36:28' dt.datetime.strptime(d, '%Y-%m-%d %H:%M:%S') Out[10]: datetime.datetime(2016, 9, 16, 11, 36, 28) </code></pre> <p>see the docs: <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior</a></p>
1
2016-09-16T11:26:13Z
[ "python", "timestamp" ]
Transform JSON key:value pairs in flat structure into key:value tree structure
39,530,279
<p>It is possible to transform list of key:value pairs of data in the same "level" (flat structure) into a tree structure key:value of that data?</p> <p>Example:</p> <p>From:</p> <pre><code>[{"COD": "20000", "VAL": "Fanerozoico"}, {"COD": "23000", "VAL": "Cenozoico"}, {"COD": "23300", "VAL": "Quaternario"}, {"COD": "23310", "VAL": "Pleistocenico"}, {"COD": "23314", "VAL": "Pleistocenico Superior"}, {"COD": "23200", "VAL": "Neogénico"}, {"COD": "23220", "VAL": "Pliocénico"}, {"COD": "23222", "VAL": "Piacenziano"}] </code></pre> <p>Into:</p> <pre><code>{ "Fanerozoico": { "COD": "20000", "Cenozoico": { "COD": "23000", "Quaternario": { "COD": "23300", "Pleistocenico": { "COD": "23310", "Pleistocenico Superior": { "COD": "23314" } } }, "Neogenico": { "COD": "23200", "Pliocenico": { "COD": "23220", "Piacenziano": { "COD": "23222" } } } } } } </code></pre>
0
2016-09-16T11:22:58Z
39,532,950
<p>My attempt (that works but might not be the best solution):</p> <pre><code>mylist = [ {"COD": "20000", "VAL": "Fanerozoico"}, {"COD": "23000", "VAL": "Cenozoico"}, {"COD": "23300", "VAL": "Quaternario"}, {"COD": "23310", "VAL": "Pleistocenico"}, {"COD": "23314", "VAL": "Pleistocenico Superior"}, {"COD": "23200", "VAL": "Neogénico"}, {"COD": "23220", "VAL": "Pliocénico"}, {"COD": "23222", "VAL": "Piacenziano"} ] COD_LEN = 5 class CODMapper(object): def __init__(self, kvlist): self._map = {item["COD"]: item["VAL"] for item in kvlist} def getchildren(self, key): ' return a list of all direct children of a given key ' if len(key) == COD_LEN: return [] suffix = "0"*(COD_LEN-len(key)-1) return [(cod, val) for cod, val in self._map.items() if cod.startswith(key) and cod.endswith(suffix) and cod[len(key)] != "0" ] def todict(self, key=""): ' Creates the dictionnary structured as you want ' children = self.getchildren(key) result = {} for cod, val in children: inner_dict = {"COD": cod} new_key = cod[:len(key)+1] inner_dict.update(self.todict(new_key)) result[val] = inner_dict return result from pprint import pprint pprint(CODMapper(mylist).todict()) </code></pre> <p>Will output:</p> <pre><code>{'Fanerozoico': {'COD': '20000', 'Cenozoico': {'COD': '23000', 'Neogénico': {'COD': '23200', 'Pliocénico': {'COD': '23220', 'Piacenziano': {'COD': '23222'}}}, 'Quaternario': {'COD': '23300', 'Pleistocenico': {'COD': '23310', 'Pleistocenico Superior': {'COD': '23314'}}}}}} </code></pre>
0
2016-09-16T13:38:29Z
[ "python", "json", "data-structures" ]
Kivy POPUP and Button text
39,530,341
<p>First I'm new to kivy. myApp is based on <code>kivy-example/demo/kivycatlog</code> and I was modifying <code>PopupContainer.kv</code>, but my code doesn't work.</p> <p><strong>PopupContainer.kv</strong></p> <pre><code>BoxLayout: id: bl orientation: "vertical" popup: popup.__self__ canvas: Color: rgba: .18, .18, .18, .91 Rectangle: size: self.size pos: self.pos Button: id: showPopup1 text: 'press to show popup' on_release: root.popup.open() Button: id: showPopup2 text: 'press to show popup' on_release: root.popup.open() Popup: id: popup on_parent: if self.parent == bl: bl.remove_widget(self) title: "An example popup" BoxLayout: orientation: 'vertical' BoxLayout: orientation: 'vertical' Button: id: accept text: "yes" on_release: status.text = self.text Button: id: cancel text: "no" on_release: status.text = self.text Label: id: status text: "yes or no?" Button: text: "press to dismiss" on_release: popup.dismiss() </code></pre> <p>I want to change the <code>text(showPopup)</code> when I click on "yes" or "no" on the <code>showPopup</code>'s text.</p>
0
2016-09-16T11:26:22Z
39,544,334
<p>You could set the text of your ShowPopup buttons from your code. I mean, declare a variable in your code named <code>popups_text</code> and in your .kv file try accessing it using <code>root.popups_text</code>. Then create a method in your code that changes this text every time the button is pressed.</p>
0
2016-09-17T07:52:32Z
[ "python", "kivy" ]
How to call scala from python?
39,530,375
<p>I would like to build my project in Scala and then use it in a script in Python for my data hacking (as a module or something like that). I have seen that there are ways to integrate python code into JVM languages with Jython (only Python 2 projects though). What I want to do is the other way around though. I found no information on the net how to do this, but it seems strange that this should not be possible.</p>
0
2016-09-16T11:28:17Z
39,531,868
<p>General solution -- use some RPC/IPC mechanism (sockets, protobuf, whatever).</p> <p>However, you might want to look at Spark's solution -- it bridges Python code to its Scala/JVM APIs using Py4J (<a href="https://www.py4j.org/" rel="nofollow">https://www.py4j.org/</a>).</p>
0
2016-09-16T12:44:49Z
[ "python", "scala" ]
django form wizard, ValidationError: ['ManagementForm data is missing or has been tampered.']
39,530,458
<p>I'm using django wizard for a multiple form signup which in the end will be 4 or 5 pages of forms. However I'm getting validation errors that may relate to the form action somehow, which I'm not sure how to solve.</p> <p>The error seems to stem from line 282 here: <a href="https://github.com/django/django-formtools/blob/master/formtools/wizard/views.py" rel="nofollow">https://github.com/django/django-formtools/blob/master/formtools/wizard/views.py</a> but I'm not clear on what's causing it?</p> <p>(note I'm using django crispy forms but may not be relevant)</p> <p>views.py</p> <pre><code>class SignupWizard(SessionWizardView): def get_template_names(self): return [TEMPLATES[self.steps.current]] def done(self, form_list, **kwargs): for form in form_list: if isinstance(form, SignupForm): user = form.save(self.request) complete_signup(self.request, user, settings.ACCOUNT_EMAIL_VERIFICATION, settings.LOGIN_REDIRECT_URL) else: other_signup_form = form.save(commit=False) user = self.request.user other_signup_form.user = user other_signup_form.save() return HttpResponseRedirect(settings.LOGIN_REDIRECT_URL) signup_view = SignupWizard.as_view(SIGNUP_FORMS) </code></pre> <p>forms.py</p> <pre><code>class SignupForm(allauthforms.SignupForm): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.helper = FormHelper(self) self.helper.label_class = 'sr-only' self.helper.layout = Layout( Field('name', placeholder='Your Name'), PrependedText('email', '&lt;i class="fa fa-envelope-o"&gt;&lt;/i&gt;', placeholder="Your Email", autofocus=""), PrependedText('password1', '&lt;i class="fa fa-key"&gt;&lt;/i&gt;', placeholder="Enter Password"), Submit('sign_up', 'Sign up', css_class="btn btn-block btn-cta-primary"), ) class SignupForm2(forms.Form): first_name = forms.CharField(max_length=30) last_name = forms.CharField(max_length=30) </code></pre> <p>template:</p> <pre><code>{% block inner %} &lt;h2 class="title text-center"&gt;Sign up now&lt;/h2&gt; &lt;p class="intro text-center"&gt;It only takes 2 minutes.&lt;/p&gt; &lt;div class="row"&gt; {% crispy form %} &lt;/div&gt; {% endblock %} </code></pre>
0
2016-09-16T11:33:07Z
39,530,576
<p>The form wizard requires that you include the management form in the form tag in your template:</p> <pre><code>{{ wizard.management_form }} </code></pre> <p>See <a href="http://django-formtools.readthedocs.io/en/latest/wizard.html?highlight=management_form#creating-templates-for-the-forms" rel="nofollow">the docs</a> for more info.</p>
3
2016-09-16T11:37:58Z
[ "python", "django", "forms" ]
wxpython: detect whether a dialog has been closed
39,530,471
<p>I am using wxpython to create a GUI and I have the following customized dialog class:</p> <pre><code>class GetDataDlg(wx.Dialog): def __init__(self, *args, **kwargs): self.parameters = kwargs.pop('parameters', None) request = kwargs.pop('request', None) assert self.parameters is not None assert request is not None strings = re.findall('[A-Z][a-z]*', request) info = "" for string in strings: if len(string) == 1: info = info + string elif not info: info = string.lower() else: info = info + " " + string.lower() wx.Dialog.__init__(self, *args, **kwargs) self.data = {} main_sizer = wx.BoxSizer(wx.VERTICAL) input_text = wx.StaticText(self, label="Please type the new {}".format(info)) main_sizer.Add(input_text, 1, wx.ALL, 10) input_sizer = wx.BoxSizer(wx.HORIZONTAL) main_sizer.Add(input_sizer, 1, wx.ALIGN_CENTER | wx.LEFT | wx.RIGHT, 10) text_sizer = wx.BoxSizer(wx.VERTICAL) input_sizer.Add(text_sizer, 1, wx.ALIGN_LEFT | wx.RIGHT, 10) ctrl_sizer = wx.BoxSizer(wx.VERTICAL) input_sizer.Add(ctrl_sizer, 1, wx.ALIGN_RIGHT) self.controls = controls = {} for key in self.parameters: text = wx.StaticText(self, label=key) text_sizer.Add(text, 0, wx.BOTTOM, 17) ctrl = controls[key] = wx.TextCtrl(self) ctrl_sizer.Add(ctrl, 0, wx.BOTTOM, 10) ok_button = wx.Button(self, id=wx.ID_OK) main_sizer.Add(ok_button, 1, wx.ALIGN_CENTER | wx.LEFT | wx.RIGHT | wx.BOTTOM, 10) self.SetSizer(main_sizer) self.Fit() self.Layout() ok_button.Bind(wx.EVT_BUTTON, self.save_data) def save_data(self, event): for item in self.parameters: self.data[item] = self.controls[item].GetValue() event.Skip() </code></pre> <p>In my main frame I call the dialog like this:</p> <pre><code>dlg = GetDataDlg(self, parameters=parameter, request=item) result = dlg.ShowModal() </code></pre> <p>Now I need to detect whether the user has pushed the ok button provided by my code or the close button in the right upper part of the dialog provided by the class itself. <code>result</code> does not seem to change in the two cases, neither do other attributes of <code>dlg</code>. Besides I cannot check the existence of <code>dlg.data</code>, because the dialog appears to save the values even when pushing the close button.</p> <p>Does anyone have any ideas?</p>
1
2016-09-16T11:33:35Z
39,536,150
<p>You are not binding the close event.<br> Have you tried inserting a <code>self.Bind(wx.EVT_CLOSE, self.OnQuit)</code>, where <code>OnQuit</code> ends the dialog with <code>self.EndModal(wx.ID_CANCEL)</code>?</p>
0
2016-09-16T16:24:36Z
[ "python", "user-interface", "dialog", "modal-dialog", "wxpython" ]
results are not showing in taiga front end
39,530,493
<p>I am working on customization of taiga project management system. I have code downloaded from <a href="https://github.com/taigaio" rel="nofollow">github of taiga.io</a>. </p> <p>I am making changes in below code:</p> <pre><code>&lt;pre&gt;{{ user}}&lt;/pre&gt; &lt;div class="create-options"&gt; &lt;a href="#" ng-click="vm.newProject()" title="title" class="create-project-btn button-green"&gt;&lt;/a&gt; &lt;span tg-import-project-button="tg-import-project-button"&gt; &lt;a href="" title="title" class="button-blackish import-project-button"&gt; &lt;tg-svg svg-icon="icon-upload"&gt;&lt;/tg-svg&gt; &lt;/a&gt; &lt;input type="file" class="import-file hidden"/&gt; &lt;/span&gt; &lt;/div&gt; </code></pre> <p>Data in <code>{{user}}</code> is:</p> <pre><code>{"_attrs":{"full_name_display":"Administrator","id":4,"full_name":"Administrator","email":"admin@admin.com","is_active":true,} </code></pre> <p>When I put condition <code>ng-if="user.id==4"</code> on div then <strong>it works fine</strong> </p> <p>but when I put condition <code>ng-if="user.full_name=="Administrator""</code> on div <strong>it does not work.</strong> but value of <code>{{user.full_name == "Administrator"}}</code> is <code>true</code>.</p> <p>When I change condition to <code>ng-if="user.full_name=='Administrator'"</code> on div then <strong>it gives error</strong>:</p> <pre><code>Uncaught Error: [$injector:modulerr] Failed to instantiate module taiga due to: Error: [$injector:modulerr] Failed to instantiate module templates due to: Error: [$injector:nomod] Module 'templates' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument. http://errors.angularjs.org/1.4.7/$injector/nomod?p0=templates at http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:6745 at http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:16302 at e (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:15775) at http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:16087 at http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:24768 at o (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:7152) at p (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:24616) at http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:24785 at o (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:7152) at p (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:24616) at Qt (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:26306) at s (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:13829) at Object.at [as bootstrap] (http://127.0.0.1:8000/v-1468840853868/js/libs.js:8:14139) at Object.&lt;anonymous&gt; (http://127.0.0.1:8000/v-1468840853868/js/app-loader.js:1:1516) at Object.load (http://127.0.0.1:8000/v-1468840853868/js/libs.js:20:21097) at http://127.0.0.1:8000/v-1468840853868/js/libs.js:20:21149 
[remainder of the console output omitted: the same modulerr error repeated in URL-encoded form] </code></pre> <p>Backend code of taiga is in <code>Django</code> and frontend is in <code>angularjs</code>.</p> <p>Is there any other condition on which I can change GUI for Admin and members.</p>
0
2016-09-16T11:34:28Z
39,653,606
<p>I resolved the issue by escaping the quotes with <code>\</code> in <code>ng-if</code>; that removed the error.</p> <p>Like:</p> <pre><code>ng-if="user.full_name==\'Administrator\'" </code></pre>
0
2016-09-23T06:02:04Z
[ "python", "angularjs", "django", "taiga" ]
Python - Supervisor how to log the standard output -
39,530,567
<p>I'm not expert about python, could someone explain where is the problem?</p> <p>I'd like to collect the stdout through supervisor <a href="http://supervisord.org/" rel="nofollow">http://supervisord.org/</a></p> <p>I've made 3 different scripts that print output, for bash and PHP I can collect the output, without problems, python doesn't work.</p> <p>php_test.sh</p> <pre><code>#!/usr/bin/php &lt;?php for($i=0;$i&lt;100000;$i++){ sleep(5); echo 'test'; } ?&gt; </code></pre> <p>bash_test.sh</p> <pre><code>#!/bin/bash for i in `seq 1 100000`; do sleep 5 echo item: $i done </code></pre> <p>python_test.sh ( with differents tests to print output )</p> <pre><code>#!/usr/bin/python3 import time import sys from contextlib import redirect_stdout for num in range(0,100000): time.sleep(5) print("test!") sys.stdout.write("test") with redirect_stdout(sys.stderr): print("test") </code></pre> <p>My supervisor config files</p> <p>bash</p> <pre><code>[program:bash_test] process_name=bash_test command=/home/user/bash_test.sh stdout_logfile=/home/user/bash_test_output.log stdout_logfile_maxbytes=0 </code></pre> <p>php</p> <pre><code>[program:php_test] process_name=php_test command=/home/user/php_test.sh stdout_logfile=/home/user/php_test_output.log stdout_logfile_maxbytes=0 </code></pre> <p>python</p> <pre><code>[program:python_test] process_name=python_test command=/home/user/python_test.sh stdout_logfile=/home/user/python_test_output.log stdout_logfile_maxbytes=0 </code></pre> <p>Thank so much for the help. It's driving me crazy ;[</p>
0
2016-09-16T11:37:36Z
39,532,147
<p>OK... Python's output is buffered. Flush after each print:</p> <pre><code>sys.stdout.flush() </code></pre> <p>or (Python 3):</p> <pre><code>print(something, flush=True) </code></pre> <p>or, better still, use the logging module:</p> <pre><code>import logging logging.warning('Watch out!') </code></pre> <p><a href="https://docs.python.org/3/howto/logging.html" rel="nofollow">https://docs.python.org/3/howto/logging.html</a></p>
0
2016-09-16T12:59:00Z
[ "python", "linux", "python-3.x", "unix", "supervisord" ]
Selecting certain features from a dataset according to a list.
39,530,617
<p>I have a <strong>dataset "h_train"</strong> which has about 26 features and I have a <strong>list</strong> <strong>H</strong> which has some selected features from the dataset "h_train". I would only like to keep those features in the "h_train" which are present in list H. </p> <pre><code>h_train #Dataset with 26 features [A - Z] H = ["A", "B", "C", "D"] </code></pre> <p>So I would like h_train to be reduced to only those features in H. How do I do it ?</p>
0
2016-09-16T11:40:10Z
39,530,805
<p>You can transform both into the <code>set</code> data structure and then use its intersection operator to get the features to keep:</p> <pre><code>a = set(h_train) b = set(H) c = a &amp; b only_H_features_left = list(c) </code></pre>
0
2016-09-16T11:51:12Z
[ "python", "pandas", "scikit-learn" ]
Selecting certain features from a dataset according to a list.
39,530,617
<p>I have a <strong>dataset "h_train"</strong> which has about 26 features and I have a <strong>list</strong> <strong>H</strong> which has some selected features from the dataset "h_train". I would only like to keep those features in the "h_train" which are present in list H. </p> <pre><code>h_train #Dataset with 26 features [A - Z] H = ["A", "B", "C", "D"] </code></pre> <p>So I would like h_train to be reduced to only those features in H. How do I do it ?</p>
0
2016-09-16T11:40:10Z
39,545,236
<p>Assuming <code>h_train</code> is a <code>pd.DataFrame</code> you can just do</p> <pre><code>h_train = h_train[H] </code></pre>
0
2016-09-17T09:37:14Z
[ "python", "pandas", "scikit-learn" ]
Firebase Streaming using Python, connection times out when there is huge amount of data under the child.
39,530,707
<p>I am listening to data from firebase, under a path that contains an enormous amount of data, using this library <a href="https://github.com/andrewsosa001/firebase-python-streaming" rel="nofollow">firebase-python-streaming</a>. However, every time I start streaming, it gets hung on the call and nothing is returned, leading to an eventual timeout. However, if I try it on a shorter path it works: the first call returns all the data inside the child, followed by any changes made in the database. </p> <pre><code>def p(x): print x # Firebase object fb = Firebase('https://myfirebase.firebaseio.com/') custom_callback = fb.child("views").listener(p) # Start and stop the stream using the following custom_callback.start() raw_input("ENTER to stop...") custom_callback.stop() </code></pre> <p>On running the streamer, the command line stays stuck here in the case of large data:</p> <pre><code>ENTER to stop... </code></pre> <p>It stays stuck there and eventually times out. I have used other libraries as well, such as pyrebase and pyfirebase, with the same result. I think it has to do with the large amount of data I am trying to stream in the first iteration. Is there any hack or solution to fix this?</p>
0
2016-09-16T11:44:56Z
39,533,243
<p>If you get problems because of the large amount of data you're trying to retrieve, the solution will be to retrieve less data.</p> <p>One way to do this would be to listen to a point <strong>lower</strong> in your database tree. E.g.</p> <pre><code>custom_callback = fb.child("views/201609").listener(p) </code></pre> <p>This requires you to modify your data structure. You'll find that this is a common theme when using NoSQL databases: you need to model your data for the way your application wants to consume it.</p> <p>Alternatively, you can request only the most recent items from your database. I'm not exactly sure how your Python library handles these, but on the REST API this would be something like:</p> <pre><code>https://myfirebase.firebaseio.com/views.json?orderBy=$key&amp;limitToLast=100 </code></pre>
0
2016-09-16T13:53:08Z
[ "python", "firebase", "listener", "firebase-database" ]
Tox running command based on env variable
39,530,802
<p>My current workflow is github PRs and Builds tested on Travis CI, with tox testing pytest and reporting coverage to codeclimate. </p> <p><strong>travis.yml</strong></p> <pre><code>os: - linux sudo: false language: python python: - "3.3" - "3.4" - "3.5" - "pypy3" - "pypy3.3-5.2-alpha1" - "nightly" install: pip install tox-travis script: tox </code></pre> <p><strong>tox.ini</strong></p> <pre><code>[tox] envlist = py33, py34, py35, pypy3, docs, flake8, nightly, pypy3.3-5.2-alpha1 [tox:travis] 3.5 = py35, docs, flake8 [testenv] deps = -rrequirements.txt platform = win: windows linux: linux commands = py.test --cov=pyCardDeck --durations=10 tests [testenv:py35] commands = py.test --cov=pyCardDeck --durations=10 tests codeclimate-test-reporter --file .coverage passenv = CODECLIMATE_REPO_TOKEN TRAVIS_BRANCH TRAVIS_JOB_ID TRAVIS_PULL_REQUEST CI_NAME </code></pre> <p>However, Travis isn't passing my environmental variables for pull requests, which makes my coverage reporting fail. Travis documentation shows this as solution: </p> <pre><code>script: - 'if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then bash ./travis/run_on_pull_requests; fi' - 'if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then bash ./travis/run_on_non_pull_requests; fi' </code></pre> <p>However, in tox this doesn't work as tox is using subprocess python module and doesn't recognize if as a command (naturally). </p> <p>How do I run codeclimate-test-reporter only for builds and not for pull requests based on the TRAVIS_PULL_REQUEST variable? Do I have to create my own script and call that? Is there a smarter solution?</p>
3
2016-09-16T11:51:02Z
39,562,141
<p>You can have two <code>tox.ini</code> files and choose between them from <code>travis.yml</code>:</p> <p><code>script: if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then tox -c tox_nocodeclimate.ini; else tox -c tox.ini; fi</code></p>
0
2016-09-18T20:03:56Z
[ "python", "travis-ci", "tox", "code-climate" ]
Tox running command based on env variable
39,530,802
<p>My current workflow is github PRs and Builds tested on Travis CI, with tox testing pytest and reporting coverage to codeclimate. </p> <p><strong>travis.yml</strong></p> <pre><code>os: - linux sudo: false language: python python: - "3.3" - "3.4" - "3.5" - "pypy3" - "pypy3.3-5.2-alpha1" - "nightly" install: pip install tox-travis script: tox </code></pre> <p><strong>tox.ini</strong></p> <pre><code>[tox] envlist = py33, py34, py35, pypy3, docs, flake8, nightly, pypy3.3-5.2-alpha1 [tox:travis] 3.5 = py35, docs, flake8 [testenv] deps = -rrequirements.txt platform = win: windows linux: linux commands = py.test --cov=pyCardDeck --durations=10 tests [testenv:py35] commands = py.test --cov=pyCardDeck --durations=10 tests codeclimate-test-reporter --file .coverage passenv = CODECLIMATE_REPO_TOKEN TRAVIS_BRANCH TRAVIS_JOB_ID TRAVIS_PULL_REQUEST CI_NAME </code></pre> <p>However, Travis isn't passing my environmental variables for pull requests, which makes my coverage reporting fail. Travis documentation shows this as solution: </p> <pre><code>script: - 'if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then bash ./travis/run_on_pull_requests; fi' - 'if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then bash ./travis/run_on_non_pull_requests; fi' </code></pre> <p>However, in tox this doesn't work as tox is using subprocess python module and doesn't recognize if as a command (naturally). </p> <p>How do I run codeclimate-test-reporter only for builds and not for pull requests based on the TRAVIS_PULL_REQUEST variable? Do I have to create my own script and call that? Is there a smarter solution?</p>
3
2016-09-16T11:51:02Z
39,663,483
<p>My solution goes through a custom setup.py command which takes care of everything.</p> <p>tox.ini</p> <pre><code>[testenv:py35]
commands = python setup.py testcov
passenv = ...
</code></pre> <p>setup.py</p> <pre><code>import os
from subprocess import call
from setuptools import Command


class PyTestCov(Command):
    description = "run tests and report them to codeclimate"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        errno = call(["py.test --cov=pyCardDeck --durations=10 tests"], shell=True)
        if os.getenv("TRAVIS_PULL_REQUEST") == "false":
            call(["python -m codeclimate_test_reporter --file .coverage"], shell=True)
        raise SystemExit(errno)

...
cmdclass={'testcov': PyTestCov},
</code></pre>
0
2016-09-23T14:38:15Z
[ "python", "travis-ci", "tox", "code-climate" ]
Sublime text change text color
39,530,825
<p>I am currently trying to extract information (manually) from a text file. The text file has a decent, parsable format, but it contains something like 'random chars'. These random chars are not completely random: by running an algorithm on them I am able to collect information. I am giving each char a positive integer score.</p> <p>The question is whether or not I can write a Sublime Text 3 plugin that will help me see those chars.</p> <p>I would like to change the colour of those chars.</p> <p>Note: the same char can appear in the same string with 2 different colours. The colour depends on the position.</p> <p>Can such a plugin be written for Sublime Text 3? If not, what can I use instead? The algorithm that gives each char a score is written in Python.</p>
0
2016-09-16T11:52:18Z
39,533,921
<p>While I don't have any experience working with the Sublime API directly, I've had plugins that highlight things in my files with a rectangle/outline. It appears from the <a href="https://www.sublimetext.com/docs/3/api_reference.html" rel="nofollow">api reference</a> that Regions are used to handle this. There is an add_regions() function that you may want to look into. The reference lists Packages/Default/mark.py as an example plugin that uses this function.</p> <p>As for changing foreground specifically, the last post <a href="https://forum.sublimetext.com/t/add-regions-swaps-background-and-foreground/3587/5" rel="nofollow">in this thread</a> suggests that it is not possible.</p>
0
2016-09-16T14:24:52Z
[ "python", "sublimetext3", "sublime-text-plugin" ]
Sublime text change text color
39,530,825
<p>I am currently trying to extract information (manually) from a text file. The text file has a decent, parsable format, but it contains something like 'random chars'. These random chars are not completely random: by running an algorithm on them I am able to collect information. I am giving each char a positive integer score.</p> <p>The question is whether or not I can write a Sublime Text 3 plugin that will help me see those chars.</p> <p>I would like to change the colour of those chars.</p> <p>Note: the same char can appear in the same string with 2 different colours. The colour depends on the position.</p> <p>Can such a plugin be written for Sublime Text 3? If not, what can I use instead? The algorithm that gives each char a score is written in Python.</p>
0
2016-09-16T11:52:18Z
39,537,299
<p>In order to set the color of a <a href="http://www.sublimetext.com/docs/3/api_reference.html#sublime.Region" rel="nofollow"><code>Region</code></a> of text, you'll need to have a customized <a href="http://docs.sublimetext.info/en/latest/extensibility/syntaxdefs.html#scopes" rel="nofollow">scope selector</a> in your color scheme. Then, once you've chosen the text you want to highlight and converted it to a Region, you can use the <code>add_regions()</code> method of <a href="http://www.sublimetext.com/docs/3/api_reference.html#sublime.View" rel="nofollow"><code>sublime.View</code></a> (accessible within your <a href="http://www.sublimetext.com/docs/3/api_reference.html#sublime_plugin.TextCommand" rel="nofollow"><code>sublime_plugin.TextCommand</code></a> subclass as <code>self.view</code>) to add the regions to the view and assign a scope to them. The regions should then be highlighted according to your color scheme.</p>
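<p>A minimal sketch of such a plugin (the character offsets and the <code>markup.inserted</code> scope are just placeholders; you would compute the positions from your scoring algorithm and use your own customized scope from the color scheme):</p> <pre><code>import sublime
import sublime_plugin

class HighlightScoredCharsCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # Hypothetical character offsets produced by the scoring algorithm
        offsets = [10, 25, 42]
        regions = [sublime.Region(o, o + 1) for o in offsets]
        # Highlight the one-character regions with the given scope
        self.view.add_regions("scored_chars", regions,
                              "markup.inserted", "", sublime.DRAW_NO_FILL)
</code></pre> <p>You could then trigger it with <code>view.run_command("highlight_scored_chars")</code> or a key binding.</p>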
1
2016-09-16T17:40:16Z
[ "python", "sublimetext3", "sublime-text-plugin" ]
Sublime text change text color
39,530,825
<p>I am currently trying to extract information (manually) from a text file. The text file has a decent, parsable format, but it contains something like 'random chars'. These random chars are not completely random: by running an algorithm on them I am able to collect information. I am giving each char a positive integer score.</p> <p>The question is whether or not I can write a Sublime Text 3 plugin that will help me see those chars.</p> <p>I would like to change the colour of those chars.</p> <p>Note: the same char can appear in the same string with 2 different colours. The colour depends on the position.</p> <p>Can such a plugin be written for Sublime Text 3? If not, what can I use instead? The algorithm that gives each char a score is written in Python.</p>
0
2016-09-16T11:52:18Z
39,626,287
<p>It's possible in <strong>CudaText</strong> app, with Python API. API has <code>on_change_slow</code> event to run your code. And your code must find text x/y position, then call <code>ed.attr()</code> to highlight substring at x/y with any color. It's simple.</p>
0
2016-09-21T20:45:40Z
[ "python", "sublimetext3", "sublime-text-plugin" ]
python code to create a merged file
39,530,937
<p>I want to create a a file from two input files. Input1 is a single line file which contains 18 words separated by space. Input2 is multiple lines file which holds different size strings separated by space. The Output shall contains the presence (1) and absence (0) the 18 words in the Input2. Here is how it shall look like.</p> <h2>Input1</h2> <pre><code>word1 Word2 word3 word4 word5 word6 word7 word8 word9 word10 word11 word12 word13 word14 word15 word16 word17 word18 </code></pre> <h2>Input2</h2> <pre><code>word1 Word2 word3 word4 word6 word7 word8 word9 word15 word16 word17 word1 word5 word7 word8 word11 word16 word18 word1 word18 word1 Word2 word3 word4 word5 word6 word7 word8 word9 word10 word11 word12 word13 word14 word15 word16 word17 word18 word5 word8 word12 word15 </code></pre> <h2>Output</h2> <pre><code>word1 Word2 word3 word4 word5 word6 word7 word8 word9 word10 word11 word12 word13 word14 word15 word16 word17 word18 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 0 1 0 0 0 1 0 1 1 0 0 1 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 </code></pre>
-6
2016-09-16T11:58:07Z
39,531,298
<p>Assuming you can read the data from the files and hold it in two strings, say <code>input1</code> and <code>input2</code>, the code below should do what you are looking for:</p> <pre><code>import sys

word_list_1 = input1.split(" ")
for line in input2.split("\n"):
    word_list_2 = line.split(" ")
    for word in word_list_1:
        if word in word_list_2:
            sys.stdout.write("1")
        else:
            sys.stdout.write("0")
        sys.stdout.write(" ")
    sys.stdout.write("\n")
</code></pre>
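<p>For completeness, the two strings could be read like this (the file names <code>input1.txt</code> and <code>input2.txt</code> are just placeholders):</p> <pre><code>with open("input1.txt") as f1, open("input2.txt") as f2:
    input1 = f1.read().strip()
    input2 = f2.read().strip()
</code></pre>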
0
2016-09-16T12:15:33Z
[ "python" ]
Find minimum value of edge property for all edges connected to a given node
39,531,007
<p>I have a network where each of my edges is labelled with a date. I want to now also label my vertices so that each vertex has a date assigned to it corresponding to the minimum date of all edges incident and emanating from it. Is there an inbuilt function to find this which would be faster than me looping over all vertices and then all edges for each vertex manually? In other words: I am after a function to find the minimum value of a given edge property for a given subset of edges.</p> <p>My current code idea is:</p> <pre><code>lowest_year = 2016 for v in g.vertices(): for e in v.in_edges(): year = g.ep.year[e] lowest_year = min(year,lowest_year) for e in v.out_edges(): year = g.ep.year[e] lowest_year = min(year,lowest_year) g.vp.year[v]=lowest_year lowest_year = 2016 </code></pre>
2
2016-09-16T12:02:06Z
39,531,342
<p>There is hardly any solution that would not need to check all the edges to find the minimum.</p> <p>You could however optimize your calls to <code>min</code> by making a single call on the entire data instead of multiple calls, and you also wouldn't need <code>lowest_year</code> any longer:</p> <pre><code>from itertools import chain

for v in g.vertices():
    g.vp.year[v] = min(map(g.ep.year.__getitem__, chain(v.in_edges(), v.out_edges())))
</code></pre> <p>Methods <a href="https://networkx.github.io/documentation/development/reference/generated/networkx.MultiDiGraph.in_edges.html#in-edges" rel="nofollow"><code>in_edges</code></a> and <a href="https://networkx.github.io/documentation/development/reference/generated/networkx.MultiDiGraph.out_edges.html#networkx.MultiDiGraph.out_edges" rel="nofollow"><code>out_edges</code></a> both return lists, which you can easily merge with the <code>+</code> operator.</p> <p>In a more general case, you would use <a href="https://docs.python.org/2/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools.chain</code></a> when you don't know the types you're merging, but <code>+</code> is better in this case since we know the items are lists.</p>
2
2016-09-16T12:18:13Z
[ "python", "graph-tool" ]
Find minimum value of edge property for all edges connected to a given node
39,531,007
<p>I have a network where each of my edges is labelled with a date. I want to now also label my vertices so that each vertex has a date assigned to it corresponding to the minimum date of all edges incident and emanating from it. Is there an inbuilt function to find this which would be faster than me looping over all vertices and then all edges for each vertex manually? In other words: I am after a function to find the minimum value of a given edge property for a given subset of edges.</p> <p>My current code idea is:</p> <pre><code>lowest_year = 2016 for v in g.vertices(): for e in v.in_edges(): year = g.ep.year[e] lowest_year = min(year,lowest_year) for e in v.out_edges(): year = g.ep.year[e] lowest_year = min(year,lowest_year) g.vp.year[v]=lowest_year lowest_year = 2016 </code></pre>
2
2016-09-16T12:02:06Z
39,728,831
<p>This discussion (<a href="http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com/Find-minimum-value-of-edge-property-for-all-edges-connected-to-a-given-node-td4026722.html" rel="nofollow">http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com/Find-minimum-value-of-edge-property-for-all-edges-connected-to-a-given-node-td4026722.html</a> ) contains some useful suggestions for this topic too. It also highlights that there is actually an inbuilt method in graph-tool for finding the lowest value across all outgoing, say, edges (<a href="https://graph-tool.skewed.de/static/doc/graph_tool.html#graph_tool.incident_edges_op" rel="nofollow">https://graph-tool.skewed.de/static/doc/graph_tool.html#graph_tool.incident_edges_op</a> ):</p> <pre><code>g.vp.year = incident_edges_op(g, "out", "min", g.ep.year) </code></pre> <p>This would need repeating for the ingoing edges too and the minimum value between the two would then have to be found.</p>
0
2016-09-27T15:37:05Z
[ "python", "graph-tool" ]
Python calling and using windows dll’s
39,531,243
<p>Hello, I have a question about Python and DLL functions which I am a bit frustrated about. Can I load DLL functions from Windows using Python? I have heard of ctypes for doing that, but I can't find good tutorials for it. Is there another way to use Windows DLL files to get extra functionality? I want to call some DLL to work with mouse events; I used pyautogui but it is not that useful for me. I also wonder whether Python is good for Windows applications. I know it runs on Windows, but there are good DLL functions that can provide better functionality on Windows than Python's standard libraries; at least that's my opinion. Is it worth working with DLLs from Python after all, or should I study C# for that instead? I love Python for its simplicity and don't want to move to C# yet.</p>
-2
2016-09-16T12:12:50Z
39,531,506
<p>Yes you can. The <code>ctypes</code> library is indeed what you need. The official doc is here: <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow">https://docs.python.org/3/library/ctypes.html</a>. Loading DLLs is pretty straightforward, but calling the functions inside can be a pain depending on the argument types. Handling old C-style error return codes is also cumbersome compared to the exception handling and generally low-overhead code style in Python.</p> <p>99% of the time it is way easier and better to use an appropriate existing module that either implements what you need or wraps the appropriate DLL for you. For example, search in PyPI, which is the central repository of external Python modules. That's my advice.</p>
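<p>For the mouse events you mention, here is a small sketch using <code>ctypes</code> with <code>user32.dll</code> (Windows only; the coordinates are arbitrary placeholders):</p> <pre><code>import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

# Read the current cursor position
pt = wintypes.POINT()
user32.GetCursorPos(ctypes.byref(pt))
print(pt.x, pt.y)

# Move the cursor to an arbitrary position
user32.SetCursorPos(200, 300)
</code></pre>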
0
2016-09-16T12:26:42Z
[ "python", "windows", "dll", "dllimport" ]
How to use tf.cond in combination with batching operations / queue runners
39,531,367
<h1>Situation</h1> <p>I want to train a specific network architecture (a <a href="https://arxiv.org/abs/1406.2661" rel="nofollow">GAN</a>) that needs inputs from different sources during training.</p> <p>One input source is examples loaded from disk. The other source is a generator sub-network creating examples.</p> <p>To choose which kind of input to feed to the network I use <code>tf.cond</code>. There is one caveat though that <a href="http://stackoverflow.com/a/34934543/850264">has already been explained</a>: <code>tf.cond</code> evaluates the inputs to both conditional branches even though only one of those will ultimately be used.</p> <p>Enough setup, here is a minimal working example:</p> <pre><code>import numpy as np import tensorflow as tf BATCH_SIZE = 32 def load_input_data(): # Normally this data would be read from disk data = tf.reshape(np.arange(10 * BATCH_SIZE, dtype=np.float32), shape=(10 * BATCH_SIZE, 1)) return tf.train.batch([data], BATCH_SIZE, enqueue_many=True) def generate_input_data(): # Normally this data would be generated by a much bigger sub-network return tf.random_uniform(shape=[BATCH_SIZE, 1]) def main(): # A bool to choose between loaded or generated inputs load_inputs_pred = tf.placeholder(dtype=tf.bool, shape=[]) # Variant 1: Call "load_input_data" inside tf.cond data_batch = tf.cond(load_inputs_pred, load_input_data, generate_input_data) # Variant 2: Call "load_input_data" outside tf.cond #loaded_data = load_input_data() #data_batch = tf.cond(load_inputs_pred, lambda: loaded_data, generate_input_data) init_op = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init_op) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) print(threads) # Get generated input data data_batch_values = sess.run(data_batch, feed_dict={load_inputs_pred: False}) print(data_batch_values) # Get input data loaded from disk data_batch_values = sess.run(data_batch, feed_dict={load_inputs_pred: True}) print(data_batch_values) if __name__ == '__main__': main() </code></pre> <h1>Problem</h1> <p>Variant 1 does not work at all since the queue runner threads don't seem to run. <code>print(threads)</code> outputs something like <code>[&lt;Thread(Thread-1, stopped daemon 140165838264064)&gt;, ...]</code>.</p> <p>Variant 2 does work and <code>print(threads)</code> outputs something like <code>[&lt;Thread(Thread-1, started daemon 140361854863104)&gt;, ...]</code>. But since <code>load_input_data()</code> has been called outside of <code>tf.cond</code>, batches of data will be loaded from disk even when <code>load_inputs_pred</code> is <code>False</code>.</p> <p>Is it possible to make Variant 1 work, so that input data is only loaded when <code>load_inputs_pred</code> is <code>True</code> and not for every call to <code>session.run()</code>?</p>
0
2016-09-16T12:19:51Z
39,533,583
<p>If you're using a queue when loading your data and follow it up with a batch input, then this shouldn't be a problem, as you can specify the max amount to have loaded or stored in the queue.</p> <pre><code>input = tf.WholeFileReader(somefilelist) # or another way to load data

return tf.train.batch(input, batch_size=10, capacity=100)
</code></pre> <p>See here for more details: <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html#batch" rel="nofollow">https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html#batch</a></p> <hr> <p>Also, there's an alternative approach that skips the tf.cond completely. Just define two losses: one that follows the data through the autoencoder and discriminator, and the other that follows the data through just the discriminator.</p> <p>Then it just becomes a matter of calling</p> <pre><code>sess.run(auto_loss, feed_dict)
</code></pre> <p>or</p> <pre><code>sess.run(real_img_loss, feed_dict)
</code></pre> <p>In this way the graph will only run through whichever loss was called upon. Let me know if this needs more explanation.</p>
0
2016-09-16T14:08:42Z
[ "python", "tensorflow" ]
Using regex to detect a sku
39,531,548
<p>I'm new to regex and I have some trouble detecting the SKU (unique ID) of a product in a column. My SKUs can take any form; basically, all they have in common is:</p> <ol> <li>they are words made of a combination of letters and numbers</li> <li>they have 6 characters</li> </ol> <p>Here is an example of what I have in my column:</p> <pre><code>LX0051
N41554
shoes handbag
1B1F25
1V1F8L
store near me
M90947
M90844
</code></pre> <p>How can I identify the rows that contain a SKU using regex?</p>
0
2016-09-16T12:28:51Z
39,531,709
<p>If I understand correctly, you mean that it must have at least one digit, at least one letter, and be exactly 6 characters... Try</p> <pre><code>^(?=.*\d)(?=.*[a-z])[a-z\d]{6}$
</code></pre> <p>It uses two look-aheads to ensure there's at least one digit and one letter in the string, then it simply matches 6 characters. (Remember the <code>i</code> flag if both lowercase and capital letters should be allowed.)</p> <p><a href="https://regex101.com/r/pO2aX9/2" rel="nofollow">See it here at regex101</a>.</p>
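<p>In Python that could look like this (the sample rows are taken from the question):</p> <pre><code>import re

sku_pattern = re.compile(r'^(?=.*\d)(?=.*[a-z])[a-z\d]{6}$', re.IGNORECASE)

rows = ['LX0051', 'N41554', 'shoes handbag', '1B1F25', 'store near me', 'M90947']
skus = [row for row in rows if sku_pattern.match(row)]
print(skus)  # ['LX0051', 'N41554', '1B1F25', 'M90947']
</code></pre>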
1
2016-09-16T12:36:07Z
[ "python", "regex" ]
gzip file with splitting the record into columns when the one of the column value in double quote
39,531,607
<p>I have a gzip file which contains columns separated by commas, but when a column value is within double quotes the commas inside it should be kept as they are. I wrote the following code:</p> <pre><code>input = gzip.open(file, "rb")
reader = codecs.getreader("utf-8")
input_file = reader(input)
try:
    count = 0
    for line in input_file:
        try:
            # print 'count='
            # print count
            if len(line) != 0:
                col = line.split(',')
</code></pre> <p>My data in the file looks like:</p> <pre><code>4798151,1137351,nam_p0,2762913,nam_r000,"NAM_Rack, Power &amp; Cooling",3
4798151,1135623,nam_s0,2762914,nam_a0,"NAM_Advise, Transform &amp; Manage",3
</code></pre> <p>When I split the data on commas, a comma within double quotes should be ignored so that the quoted text ends up in a single column. I am not sure how to add the condition that treats the text enclosed in double quotes as one value. A quick response would be a great help. Thanks.</p>
0
2016-09-16T12:31:33Z
39,531,806
<p>Use <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">csv</a>.</p> <p><strong>Demo</strong></p> <pre><code>&gt;&gt;&gt; import StringIO &gt;&gt;&gt; import csv &gt;&gt;&gt; line = '4798151,1137351,nam_p0,2762913,nam_r000,"NAM_Rack, Power &amp; Cooling",3' &gt;&gt;&gt; handler = StringIO.StringIO(line) &gt;&gt;&gt; [row for row in csv.reader(handler, delimiter=',')] [['4798151', '1137351', 'nam_p0', '2762913', 'nam_r000', 'NAM_Rack, Power &amp; Cooling', '3']] </code></pre> <p>In this case you can use this <em>direct approach</em>:</p> <pre><code>with gzip.open(file, 'rb') as handler: for row in csv.reader(handler, delimiter=","): # row processing HERE </code></pre>
0
2016-09-16T12:41:35Z
[ "python", "python-2.7" ]
Running a program using tensorflow
39,531,719
<p>I was trying to run a program using a TensorFlow installation on a virtual machine, but it gives me the following error and I couldn't find a solution on Google :) Can anyone help me with this? Thanks.</p> <p>Here is the error:</p> <pre><code>ImportError: /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so: invalid ELF header
</code></pre> <p>Thanks in advance.</p>
0
2016-09-16T12:36:33Z
39,535,210
<p>You most likely installed the wrong version of tensorflow. Check <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html" rel="nofollow">the download page</a> and make sure that you installed the version that matches your device.</p>
0
2016-09-16T15:31:24Z
[ "python", "tensorflow" ]
Can't install pycurl==7.19.0 on ubuntu 16.04 python 2.7.12
39,531,891
<p>Ubuntu - 16.04 Python - 2.7.12</p> <p>Hi guys, I'm trying install pycurl==7.19.0 from setup.py, but catch this stack trace:</p> <pre><code>Downloading https://pypi.python.org/packages/11/73/abcfbbb6e1dd7087fa53042c301c056c11264e8a737a4688f834162d731e/pycurl-7.19.0.tar.gz#md5=074cd44079bb68697f5d8751102b384b Best match: pycurl 7.19.0 Processing pycurl-7.19.0.tar.gz Writing /tmp/easy_install-F8gcvD/pycurl-7.19.0/setup.cfg Running pycurl-7.19.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-F8gcvD/pycurl-7.19.0/egg-dist-tmp-8sbXeG Using curl-config (libcurl 7.47.0) src/pycurl.c: In function ‘multi_socket_callback’: src/pycurl.c:2351:9: warning: variable ‘ret’ set but not used [-Wunused-but-set-variable] int ret; ^ src/pycurl.c: In function ‘initpycurl’: src/pycurl.c:3453:31: warning: macro "__DATE__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ src/pycurl.c:3453:44: warning: macro "__TIME__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ /usr/bin/ld: cannot find -lidn /usr/bin/ld: cannot find -lrtmp /usr/bin/ld: cannot find -lgssapi_krb5 /usr/bin/ld: cannot find -lkrb5 /usr/bin/ld: cannot find -lk5crypto /usr/bin/ld: cannot find -lcom_err /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -lldap collect2: error: ld returned 1 exit status error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 </code></pre> <p>Does anyone known what's wrong?</p>
0
2016-09-16T12:46:21Z
39,532,274
<p>Try: <code>sudo apt-get install python-dev</code></p>
-1
2016-09-16T13:05:59Z
[ "python", "pycurl" ]
Can't install pycurl==7.19.0 on ubuntu 16.04 python 2.7.12
39,531,891
<p>Ubuntu - 16.04 Python - 2.7.12</p> <p>Hi guys, I'm trying install pycurl==7.19.0 from setup.py, but catch this stack trace:</p> <pre><code>Downloading https://pypi.python.org/packages/11/73/abcfbbb6e1dd7087fa53042c301c056c11264e8a737a4688f834162d731e/pycurl-7.19.0.tar.gz#md5=074cd44079bb68697f5d8751102b384b Best match: pycurl 7.19.0 Processing pycurl-7.19.0.tar.gz Writing /tmp/easy_install-F8gcvD/pycurl-7.19.0/setup.cfg Running pycurl-7.19.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-F8gcvD/pycurl-7.19.0/egg-dist-tmp-8sbXeG Using curl-config (libcurl 7.47.0) src/pycurl.c: In function ‘multi_socket_callback’: src/pycurl.c:2351:9: warning: variable ‘ret’ set but not used [-Wunused-but-set-variable] int ret; ^ src/pycurl.c: In function ‘initpycurl’: src/pycurl.c:3453:31: warning: macro "__DATE__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ src/pycurl.c:3453:44: warning: macro "__TIME__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ /usr/bin/ld: cannot find -lidn /usr/bin/ld: cannot find -lrtmp /usr/bin/ld: cannot find -lgssapi_krb5 /usr/bin/ld: cannot find -lkrb5 /usr/bin/ld: cannot find -lk5crypto /usr/bin/ld: cannot find -lcom_err /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -lldap collect2: error: ld returned 1 exit status error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 </code></pre> <p>Does anyone known what's wrong?</p>
0
2016-09-16T12:46:21Z
39,533,224
<p>These lines:</p> <pre><code>/usr/bin/ld: cannot find -lidn /usr/bin/ld: cannot find -lrtmp /usr/bin/ld: cannot find -lgssapi_krb5 /usr/bin/ld: cannot find -lkrb5 /usr/bin/ld: cannot find -lk5crypto /usr/bin/ld: cannot find -lcom_err /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -lldap </code></pre> <p>Mean the libraries <code>idn</code>, <code>rtmp</code>, <code>gssapi_krb5</code>, <code>krb5</code>, <code>k5crypto</code>, <code>com_err</code>, <code>lber</code>, and <code>ldap</code> could not be found which most likely means they are not installed. I checked the dependencies of <code>libcurl3</code> and it directly depends on <code>gssapi-krb5-2</code> (which depends on <code>krb5-3</code> and <code>k5crypto3</code>), <code>idn11</code>, <code>ldap</code>, <code>rtmp1</code>, <code>ssl1</code>.</p> <p>So, installing <code>libcurl3</code> should solve your problem:</p> <pre><code>sudo apt-get install libcurl3 </code></pre> <p>After that's installed, try installing <em>pycurl</em> again.</p>
1
2016-09-16T13:52:08Z
[ "python", "pycurl" ]
Can't install pycurl==7.19.0 on ubuntu 16.04 python 2.7.12
39,531,891
<p>Ubuntu - 16.04 Python - 2.7.12</p> <p>Hi guys, I'm trying install pycurl==7.19.0 from setup.py, but catch this stack trace:</p> <pre><code>Downloading https://pypi.python.org/packages/11/73/abcfbbb6e1dd7087fa53042c301c056c11264e8a737a4688f834162d731e/pycurl-7.19.0.tar.gz#md5=074cd44079bb68697f5d8751102b384b Best match: pycurl 7.19.0 Processing pycurl-7.19.0.tar.gz Writing /tmp/easy_install-F8gcvD/pycurl-7.19.0/setup.cfg Running pycurl-7.19.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-F8gcvD/pycurl-7.19.0/egg-dist-tmp-8sbXeG Using curl-config (libcurl 7.47.0) src/pycurl.c: In function ‘multi_socket_callback’: src/pycurl.c:2351:9: warning: variable ‘ret’ set but not used [-Wunused-but-set-variable] int ret; ^ src/pycurl.c: In function ‘initpycurl’: src/pycurl.c:3453:31: warning: macro "__DATE__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ src/pycurl.c:3453:44: warning: macro "__TIME__" might prevent reproducible builds [-Wdate-time] insstr(d, "COMPILE_DATE", __DATE__ " " __TIME__); ^ /usr/bin/ld: cannot find -lidn /usr/bin/ld: cannot find -lrtmp /usr/bin/ld: cannot find -lgssapi_krb5 /usr/bin/ld: cannot find -lkrb5 /usr/bin/ld: cannot find -lk5crypto /usr/bin/ld: cannot find -lcom_err /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -llber /usr/bin/ld: cannot find -lldap collect2: error: ld returned 1 exit status error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 </code></pre> <p>Does anyone known what's wrong?</p>
0
2016-09-16T12:46:21Z
39,611,274
<p>I'm not sure exactly what was wrong; I tried everything. I think the problem was with my pip installs: I usually use sudo pip install ..., so maybe some libraries ended up without read permissions. I restored a snapshot with a clean Ubuntu and installed all the libraries again, without sudo pip install. Thanks a lot for all the answers.</p>
0
2016-09-21T08:29:23Z
[ "python", "pycurl" ]
Upgrade python 3.4 to 3.5 (latest) in ubuntu 14.04
39,531,917
<p>How to upgrade Python 3.4 to the latest version available on ubuntu 14.04</p>
0
2016-09-16T12:48:04Z
39,531,955
<p>The best way to do it, in my opinion, is using <a href="https://www.continuum.io/downloads" rel="nofollow">ANACONDA</a>. Very, very convenient.</p>
-1
2016-09-16T12:49:57Z
[ "python", "python-2.7", "python-3.x", "ubuntu-14.04" ]
Upgrade python 3.4 to 3.5 (latest) in ubuntu 14.04
39,531,917
<p>How to upgrade Python 3.4 to the latest version available on ubuntu 14.04</p>
0
2016-09-16T12:48:04Z
39,532,081
<p>I would recommend adding an additional installation and not actually upgrading the <code>python3</code> binary past where Ubuntu expects it. Any system use of Python that depends on python3.4 but breaks under 3.5 will make your life tough.</p> <p><a href="https://github.com/yyuu/pyenv" rel="nofollow">pyenv</a> is a nice tool for maintaining multiple local versions of Python, but it'll almost certainly replace virtualenv in your workflow.</p>
1
2016-09-16T12:55:37Z
[ "python", "python-2.7", "python-3.x", "ubuntu-14.04" ]
Can't run kivy file on Komodo
39,532,034
<p>I'm trying to run a Kivy Python file in Komodo IDE (for Mac) but it's giving me this error: <code>import kivy ImportError: No module named kivy</code>. However, if I drag and drop the file onto the Kivy app it runs normally.</p> <p>Any ideas? Thanks.</p>
1
2016-09-16T12:53:45Z
39,532,351
<p>Try going to <strong>Edit > Preferences > Languages > Python > Additional Python Import Directories</strong>, and adding the directory where Kivy is installed. I am not sure on a Mac where this would be. To find out, in a python interpreter (or elsewhere - wherever you are able to successfully import <code>kivy</code>) do:</p> <pre><code>&gt;&gt;&gt; import kivy &gt;&gt;&gt; kivy.__file__ </code></pre> <p>and add the directory one up from <code>kivy/</code> in that path through the Komodo settings as described above.</p>
0
2016-09-16T13:10:08Z
[ "python", "python-3.x", "kivy", "komodo", "kivy-language" ]
Can't run kivy file on Komodo
39,532,034
<p>I'm trying to run a Kivy Python file in Komodo IDE (for Mac) but it's giving me this error: <code>import kivy ImportError: No module named kivy</code>. However, if I drag and drop the file onto the Kivy app it runs normally.</p> <p>Any ideas? Thanks.</p>
1
2016-09-16T12:53:45Z
39,537,143
<p>This only works for Komodo code intelligence. Run time is still limited to PYTHONPATH. To run a script that's using Kivy, even in your command line, the Kivy source has to be on your PYTHONPATH.</p> <p>You can add items to your PYTHONPATH in Komodo using <strong>Edit > Preferences > Environment</strong>, then create a <em>New</em> environment variable to append the Kivy installation location to your <code>$PYTHONPATH</code>, i.e. <code>$PYTHONPATH:install/location/kivy</code>.</p> <p>If you don't mind having it in your system though, I'd just do what @tuan333 suggests above: install it using pip, then make sure you're using THAT Python interpreter in Komodo.</p>
0
2016-09-16T17:29:02Z
[ "python", "python-3.x", "kivy", "komodo", "kivy-language" ]
Scrapy: Rules set inside __init__ are ignored by CrawlSpider
39,532,119
<p>I've been stuck on this for a few days, and it's making me go crazy.</p> <p>I call my scrapy spider like this:</p> <pre><code>scrapy crawl example -a follow_links="True" </code></pre> <p>I pass in the "follow_links" flag to determine whether the entire website should be scraped, or just the index page I have defined in the spider.</p> <p>This flag is checked in the spider's constructor to see which rule should be set:</p> <pre><code>def __init__(self, *args, **kwargs): super(ExampleSpider, self).__init__(*args, **kwargs) self.follow_links = kwargs.get('follow_links') if self.follow_links == "True": self.rules = ( Rule(LinkExtractor(allow=()), callback="parse_pages", follow=True), ) else: self.rules = ( Rule(LinkExtractor(deny=(r'[a-zA-Z0-9]*')), callback="parse_pages", follow=False), ) </code></pre> <p>If it's "True", all links are allowed; if it's "False", all links are denied.</p> <p>So far, so good, however these rules are ignored. The only way I can get rules to be followed is if I define them outside of the constructor. That means, something like this would work correctly:</p> <pre><code>class ExampleSpider(CrawlSpider): rules = ( Rule(LinkExtractor(deny=(r'[a-zA-Z0-9]*')), callback="parse_pages", follow=False), ) def __init__(self, *args, **kwargs): ... </code></pre> <p>So basically, defining the rules within the <code>__init__</code> constructor causes the rules to be ignored, whereas defining the rules outside of the constructor works as expected.</p> <p>I cannot understand this. My code is below.</p> <pre><code>import re import scrapy from scrapy.linkextractors import LinkExtractor from scrapy.spiders import CrawlSpider, Rule from w3lib.html import remove_tags, remove_comments, replace_escape_chars, replace_entities, remove_tags_with_content class ExampleSpider(CrawlSpider): name = "example" allowed_domains = ['example.com'] start_urls = ['http://www.example.com'] # if the rule below is uncommented, it works as expected (i.e. follow links and call parse_pages) # rules = ( # Rule(LinkExtractor(allow=()), callback="parse_pages", follow=True), # ) def __init__(self, *args, **kwargs): super(ExampleSpider, self).__init__(*args, **kwargs) # single page or follow links self.follow_links = kwargs.get('follow_links') if self.follow_links == "True": # the rule below will always be ignored (why?!) self.rules = ( Rule(LinkExtractor(allow=()), callback="parse_pages", follow=True), ) else: # the rule below will always be ignored (why?!) self.rules = ( Rule(LinkExtractor(deny=(r'[a-zA-Z0-9]*')), callback="parse_pages", follow=False), ) def parse_pages(self, response): print("In parse_pages") print(response.xpath('/html/body').extract()) return None def parse_start_url(self, response): print("In parse_start_url") print(response.xpath('/html/body').extract()) return None </code></pre> <p>Thank you for taking the time to help me on this matter.</p>
0
2016-09-16T12:57:47Z
39,553,044
<p>The problem here is that the <code>CrawlSpider</code> constructor (<code>__init__</code>) also processes the <code>rules</code> attribute, so if you need to assign your own rules, you'll have to do it before calling the default constructor.</p> <p>In other words, do everything you need before calling <code>super(ExampleSpider, self).__init__(*args, **kwargs)</code>:</p> <pre><code>def __init__(self, *args, **kwargs):
    # set self.rules here first
    super(ExampleSpider, self).__init__(*args, **kwargs)
</code></pre>
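<p>Applied to the spider from the question, a sketch would look like this (only the constructor changes, reusing the question's imports):</p> <pre><code>class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    def __init__(self, *args, **kwargs):
        # Set the rules first ...
        if kwargs.get('follow_links') == "True":
            self.rules = (
                Rule(LinkExtractor(allow=()), callback="parse_pages", follow=True),
            )
        else:
            self.rules = (
                Rule(LinkExtractor(deny=(r'[a-zA-Z0-9]*')), callback="parse_pages", follow=False),
            )
        # ... and only then call the parent constructor, which compiles self.rules
        super(ExampleSpider, self).__init__(*args, **kwargs)
</code></pre>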
1
2016-09-18T00:41:27Z
[ "python", "scrapy", "scrapy-spider" ]
Guess Random Number Why i m not able to enter input - python
39,532,171
<p>Below is my code to generate random number between 0 - 9 and checking with user input whether it is higher lower or equal. When I run the code, it is not taking input and showing</p> <blockquote> <p>error in 'guessNumber = int(input("Guess a Random number between 0-9")) File "", line 1 '</p> </blockquote> <p>Can somebody please tell me where I'm making mistake</p> <pre><code>#Guess Random Number #Generate a Random number between 0 to 9 import random turn = 0 def guessRandom(): secretNumber = random.randint(0,9) guessNumber = int(input("Guess a Random number between 0-9")) while secretNumber != guessNumber: if(secretNumber &gt; guessNumber): input("You have Guessed the number higher than secretNumber. Guess Again!") turn = turn + 1 elif (secretNumber &lt; guessNumber): input("You have guessed the number lower than secretNumber. Guess Again! ") turn = turn + 1 if(secretNumber == guessNumber): print("you Have Guessed it Right!") guessRandom() </code></pre>
-6
2016-09-16T13:00:27Z
39,532,587
<p>You didn't put the new input into a variable - <code>input(...)</code> needs to be assigned to a variable so you can work with the new guess.</p> <p>Use</p> <pre><code>guessNumber = int(input("You have Guessed the number higher than secretNumber. Guess Again!"))
</code></pre> <p>when the user guesses a higher/lower number.</p>
0
2016-09-16T13:22:17Z
[ "python" ]
Guess Random Number Why i m not able to enter input - python
39,532,171
<p>Below is my code to generate random number between 0 - 9 and checking with user input whether it is higher lower or equal. When I run the code, it is not taking input and showing</p> <blockquote> <p>error in 'guessNumber = int(input("Guess a Random number between 0-9")) File "", line 1 '</p> </blockquote> <p>Can somebody please tell me where I'm making mistake</p> <pre><code>#Guess Random Number #Generate a Random number between 0 to 9 import random turn = 0 def guessRandom(): secretNumber = random.randint(0,9) guessNumber = int(input("Guess a Random number between 0-9")) while secretNumber != guessNumber: if(secretNumber &gt; guessNumber): input("You have Guessed the number higher than secretNumber. Guess Again!") turn = turn + 1 elif (secretNumber &lt; guessNumber): input("You have guessed the number lower than secretNumber. Guess Again! ") turn = turn + 1 if(secretNumber == guessNumber): print("you Have Guessed it Right!") guessRandom() </code></pre>
-6
2016-09-16T13:00:27Z
39,532,833
<p>I think <code>guessRandom()</code> was meant to be outside of the method definition, in order to call the method. The <code>guessNumber</code> variable never changes since the inputs are not assigned back to <code>guessNumber</code>, thus it will continuously check the initial guess. Also, the less than / greater than signs seem to conflict with the intended message. Additionally, <code>turn</code> is outside of the scope of the method.</p> <pre><code>#Generate a Random number between 0 to 9
import random

def guessRandom():
    secretNumber = random.randint(0, 9)
    guessNumber = int(input("Guess a Random number between 0-9: "))
    i = 0
    while secretNumber != guessNumber:
        if secretNumber &lt; guessNumber:
            print("You have guessed a number higher than secretNumber.")
        elif secretNumber &gt; guessNumber:
            print("You have guessed a number lower than secretNumber.")
        i += 1
        guessNumber = int(input("Guess Again! "))
    print("you Have Guessed it Right!")
    return i

turn = 0
turn += guessRandom()
</code></pre> <p>EDIT: Assuming you're using <a href="http://stackoverflow.com/questions/4915361/whats-the-difference-between-raw-input-and-input-in-python3-x"><code>input</code> in Python3</a> (or using <code>raw_input</code> in older versions of Python), you may want to except for <code>ValueError</code> in case someone enters a string. For instance,</p> <pre><code>#Generate a Random number between 0 to 9
import random

def guessRandom():
    secretNumber = random.randint(0, 9)
    guessNumber = input("Guess a Random number between 0-9: ")
    i = 0
    while True:
        try:
            guessNumber = int(guessNumber)
        except ValueError:
            pass
        else:
            if secretNumber &lt; guessNumber:
                print("You have guessed a number higher than secretNumber.")
                i += 1
            elif secretNumber &gt; guessNumber:
                print("You have guessed a number lower than secretNumber.")
                i += 1
            else:
                print("you Have Guessed it Right!")
                break
        guessNumber = input("Guess Again! ")
    return i

turn = 0
turn += guessRandom()
</code></pre> <p>I changed the <code>while</code> loop condition to <code>True</code> and added a break because otherwise it would loop indefinitely (comparing an integer to a string input value). In the first version I also print the success message after the loop, since inside a <code>while secretNumber != guessNumber</code> loop it could never be reached.</p>
0
2016-09-16T13:33:32Z
[ "python" ]
How can i route my URL to a common method before passing to actual view in Django
39,532,173
<p>Here is what my code looks like:</p> <p>url.py file:</p> <pre><code>from rest_framework import routers
from view_user import user_signup, user_login

router = routers.DefaultRouter()

urlpatterns = [
    url(r'^api/v1/user_signup', csrf_exempt(user_signup)),
    url(r'^api/v1/user_login', csrf_exempt(user_login))
]
</code></pre> <p>view_user.py file:</p> <pre><code>def user_signup(request):
    try:
        if request.method == 'POST':
            json_data = json.loads(request.body)
            return JsonResponse(result, safe=False)
    except Exception as e:
        logger.error("at method user : %s", e)
</code></pre> <p>So, when I call the URL <a href="http://myserver/api/v1/user_signup" rel="nofollow">http://myserver/api/v1/user_signup</a>, it goes to the "user_signup" method of the view_user.py file.</p> <p>But what I want is to be able to validate my request before it reaches the user_signup method.</p> <p>I want this validation for all the requests that come to my server, for all methods (e.g. user_signup, user_login, ...), before they go to their respective methods.</p>
0
2016-09-16T13:00:29Z
39,532,250
<p>Annotate the concerned views with a <a href="https://wiki.python.org/moin/PythonDecorators" rel="nofollow">decorator</a> that contains the logic you want to execute before the views are called.</p> <p>See <a href="http://stackoverflow.com/questions/20945366/python-decorators">Python - Decorators</a> for a head start.</p> <p>And <a href="http://stackoverflow.com/questions/5469159/how-to-write-a-custom-decorator-in-django">How to write a custom decorator in django?</a></p> <p>If you want to do this on all requests, regardless of the associated view, then you should consider writing a <a href="https://docs.djangoproject.com/en/1.10/topics/http/middleware/" rel="nofollow">middleware</a>. See <a href="http://stackoverflow.com/questions/18322262/how-to-setup-custom-middleware-in-django">how to setup custom middleware in django</a></p>
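<p>A minimal sketch of such a decorator, applied to the view from the question (the validation logic itself is just a placeholder):</p> <pre><code>from functools import wraps
from django.http import JsonResponse

def validate_request(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        # Placeholder validation: reject empty POST bodies
        if request.method == 'POST' and not request.body:
            return JsonResponse({'error': 'invalid request'}, status=400)
        return view_func(request, *args, **kwargs)
    return wrapper

@validate_request
def user_signup(request):
    # original user_signup logic from the question goes here
    pass
</code></pre>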
2
2016-09-16T13:04:46Z
[ "python", "django", "django-models", "django-views", "django-rest-framework" ]
any middleware in django to perform the abilities of security.yml in symfony2?
39,532,334
<p>Can I declare the auth for all the APIs in Django, group-wise, from one file, just like <a href="http://symfony.com/doc/current/security/access_control.html" rel="nofollow">security.yml</a> allows us to do in Symfony2?</p>
0
2016-09-16T13:08:27Z
39,534,886
<p>Have a look at the package <a href="https://github.com/mgrouchy/django-stronghold" rel="nofollow">django-stronghold</a>. It allows you to require login for all views.</p>
1
2016-09-16T15:13:53Z
[ "python", "django", "symfony2", "django-middleware" ]
Python 3 changing variable in function from another function
39,532,350
<p>I would like to access the testing variable in main from testadder, such that it will add 1 to testing after testadder has been called in main.</p> <p>For some reason I can add 1 to a list this way, but not to plain variables. The nonlocal declaration doesn't work since the functions aren't nested.</p> <p>Is there a way to work around this?</p> <pre><code>def testadder(test, testing):
    test.append(1)
    testing += 1

def main():
    test = []
    testing = 1
    testadder(test, testing)
    print(test, testing)

main()
</code></pre>
1
2016-09-16T13:10:08Z
39,532,489
<p>Lists are mutable, but integers are not. Return the modified variable and reassign it. </p> <pre><code>def testadder(test, testing): test.append(1) return testing + 1 def main(): test = [] testing = 1 testing = testadder(test, testing) print(test, testing) main() </code></pre>
1
2016-09-16T13:16:44Z
[ "python", "python-3.x", "python-nonlocal" ]
Find index position of characters in string
39,532,420
<p>I am trawling through a storage area and the paths look a lot like this: storagearea/storage1/ABC/ABCDEF1/raw/2013/05/ABCFGM1</p> <p>I won't always know what year it is. I need to find the starting index position of the year.</p> <p>Therefore I am looking for where one of the following appears in the file name (2010, 2011, 2012, 2013, 2014, etc.).</p> <p>I have set up a list as follows:</p> <pre><code>list_ = ['2010', '2011','2012','2013','2014', '2015', '2016']
</code></pre> <p>and I can find out whether one of them is in the file name:</p> <pre><code>if any(word in file for word in list_):
    print 'Yahooo'
</code></pre> <p>But how do I find the character index of the year in the absolute path?</p>
-3
2016-09-16T13:12:57Z
39,532,482
<p><a href="https://docs.python.org/2/library/string.html#string.index" rel="nofollow">Python string.index</a></p> <pre><code>string.index(s, sub[, start[, end]])¶ Like find() but raise ValueError when the substring is not found. </code></pre>
0
2016-09-16T13:16:24Z
[ "python" ]
Find index position of characters in string
39,532,420
<p>I am trawling through a storage area and the paths look a lot like this: storagearea/storage1/ABC/ABCDEF1/raw/2013/05/ABCFGM1</p> <p>I won't always know what year it is. I need to find the starting index position of the year.</p> <p>Therefore I am looking for where one of the following appears in the file name (2010, 2011, 2012, 2013, 2014, etc.).</p> <p>I have set up a list as follows:</p> <pre><code>list_ = ['2010', '2011','2012','2013','2014', '2015', '2016']
</code></pre> <p>and I can find out whether one of them is in the file name:</p> <pre><code>if any(word in file for word in list_):
    print 'Yahooo'
</code></pre> <p>But how do I find the character index of the year in the absolute path?</p>
-3
2016-09-16T13:12:57Z
39,532,795
<p>Instead of using a generator expression (which has its own scope), use a traditional loop and then print the found word's index and <code>break</code> when you find a match:</p> <pre><code>list_ = ['2010', '2011','2012','2013','2014', '2015', '2016'] for word in list_: if word in file: print file.index(word) break </code></pre>
1
2016-09-16T13:32:04Z
[ "python" ]
Find index position of characters in string
39,532,420
<p>I am trawling through a storage area and the paths look a lot like this: storagearea/storage1/ABC/ABCDEF1/raw/2013/05/ABCFGM1</p> <p>I won't always know what year it is. I need to find the starting index position of the year.</p> <p>Therefore I am looking for where one of the following appears in the file name (2010, 2011, 2012, 2013, 2014, etc.).</p> <p>I have set up a list as follows:</p> <pre><code>list_ = ['2010', '2011','2012','2013','2014', '2015', '2016']
</code></pre> <p>and I can find out whether one of them is in the file name:</p> <pre><code>if any(word in file for word in list_):
    print 'Yahooo'
</code></pre> <p>But how do I find the character index of the year in the absolute path?</p>
-3
2016-09-16T13:12:57Z
39,532,800
<p>I'd suggest <code>join</code>ing those years to a <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a> using <code>'|'</code> as a delimiter...</p> <pre><code>&gt;&gt;&gt; list_ = ['2010', '2011','2012','2013','2014', '2015', '2016'] &gt;&gt;&gt; p = "|".join(list_) &gt;&gt;&gt; p '2010|2011|2012|2013|2014|2015|2016' </code></pre> <p>... and then using <a href="https://docs.python.org/3/library/re.html#re.search" rel="nofollow"><code>re.search</code></a> to find a match and <code>span()</code> and <code>group()</code> to find the position of that match and the matched year itself:</p> <pre><code>&gt;&gt;&gt; filename = "storagearea/storage1/ABC/ABCDEF1/raw/2013/05/ABCFGM1" &gt;&gt;&gt; m = re.search(p, filename) &gt;&gt;&gt; m.group() '2013' &gt;&gt;&gt; m.span() (37, 41) </code></pre>
1
2016-09-16T13:32:17Z
[ "python" ]
Why does Python call both functions?
39,532,534
<p>I have some problems to understand what happens here.</p> <p>This is my source:</p> <pre><code>class Calc(): def __init__(self,Ideal,Limit,Value,Debug=None): self.Ideal = Ideal self.Limit = Limit self.Value = Value self.Debug = Debug self.Grade = self.GetGrade() self.LenGrade = self.GetLenGrade() def GetGrade(self): if self.Debug: print('calling GetGrade') return Grade def GetLenGrade(self): if self.Debug: print('calling GetLenGrade') return Grade </code></pre> <p>When calling it with</p> <pre><code>GradeMinLen += Calc(TargetLen, LimitMinLen, Length ,Debug=1).LenGrade </code></pre> <p>I get always the output</p> <pre><code>calling GetGrade calling GetLenGrade </code></pre> <p>Why is python calling <code>GetGrade</code>?</p>
-7
2016-09-16T13:18:58Z
39,532,578
<p>You create an instance of <code>Calc()</code>, and whenever you do that, <code>Calc.__init__()</code> is called for that new instance.</p> <p>Your <code>Calc.__init__()</code> method calls both <code>self.GetGrade()</code> and <code>self.GetLenGrade()</code>:</p> <pre><code>self.Grade = self.GetGrade() self.LenGrade = self.GetLenGrade() </code></pre> <p>It doesn't matter here that <em>after</em> you created the instance you only access the <code>LenGrade</code> attribute; the above two lines in <code>__init__</code> do not store method references, they store the <em>results</em> of the method calls. The <code>Calc(...).LenGrade</code> then returns one of those results; the other result is <em>also</em> there however.</p>
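<p>A small illustration of the difference between storing the result of a call and storing the method itself (a simplified class, not the one from the question):</p> <pre><code>class Demo(object):
    def __init__(self):
        self.eager = self.compute()   # compute() runs here, during __init__
        self.lazy = self.compute      # only a reference is stored, nothing runs yet

    def compute(self):
        print('calling compute')
        return 42

d = Demo()        # prints 'calling compute' once (the eager line)
print(d.eager)    # 42
print(d.lazy())   # prints 'calling compute' again, then 42
</code></pre>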
2
2016-09-16T13:21:44Z
[ "python" ]
Why does Python call both functions?
39,532,534
<p>I have some problems to understand what happens here.</p> <p>This is my source:</p> <pre><code>class Calc(): def __init__(self,Ideal,Limit,Value,Debug=None): self.Ideal = Ideal self.Limit = Limit self.Value = Value self.Debug = Debug self.Grade = self.GetGrade() self.LenGrade = self.GetLenGrade() def GetGrade(self): if self.Debug: print('calling GetGrade') return Grade def GetLenGrade(self): if self.Debug: print('calling GetLenGrade') return Grade </code></pre> <p>When calling it with</p> <pre><code>GradeMinLen += Calc(TargetLen, LimitMinLen, Length ,Debug=1).LenGrade </code></pre> <p>I get always the output</p> <pre><code>calling GetGrade calling GetLenGrade </code></pre> <p>Why is python calling <code>GetGrade</code>?</p>
-7
2016-09-16T13:18:58Z
39,532,604
<p>In your object initialisation code, you have the following:</p> <pre><code>self.Grade = self.GetGrade() self.LenGrade = self.GetLenGrade() </code></pre> <p>This means "set the value of the data member Grade to the value obtained by calling the method GetGrade" and the same for LenGrade.</p> <p>It should not be surprising that they're called, it would be more surprising if they were not.</p>
1
2016-09-16T13:23:04Z
[ "python" ]
raise exception wave python
39,532,555
<p>I use the wave module, for example:</p> <pre><code>import wave

origAudio = wave.open("son.wav", 'r')
</code></pre> <p>and get this output:</p> <pre><code>raise Error, 'file does not start with RIFF id'
wave.Error: file does not start with RIFF id
</code></pre> <p>I know the file is not valid, but I want to catch/handle this exception or error.</p>
0
2016-09-16T13:20:22Z
39,533,124
<p>If you wish to continue after an exception has been raised you must catch it:</p> <pre><code>import wave

try:
    origAudio = wave.open("son.wav", 'r')
except wave.Error as e:
    # if you get here it means an error happened; maybe you should warn the user,
    # but doing pass will silently ignore it
    pass
</code></pre>
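<p>If you only want to detect the bad file rather than let <code>wave</code> raise, a quick header check works too (WAV files start with the bytes <code>RIFF</code>, which is exactly what the error message refers to):</p> <pre><code>with open("son.wav", "rb") as f:
    if f.read(4) != b"RIFF":
        print("son.wav is not a RIFF/WAVE file")
</code></pre>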
1
2016-09-16T13:48:23Z
[ "python", "exception", "wave" ]
How can I save and read variable size images in TensorFlow's protobuf format
39,532,568
<p>I'm trying to write variable size images in TensorFlow's protobuf format with the following code:</p> <pre><code>img_feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[ img.flatten().tostring()])) # Define how the sequence length is stored seq_len_feature = tf.train.Feature( int64_list=tf.train.Int64List(value=[seq_len])) # Define how the label list is stored label_list_feature = tf.train.Feature( int64_list=tf.train.Int64List(value=label_list)) # Define the feature dictionary that defines how the data is stored feature = { IMG_FEATURE_NAME: img_feature, SEQ_LEN_FEATURE_NAME: seq_len_feature, LABEL_LIST_FEATURE_NAME: label_list_feature} # Create an example object to store example = tf.train.Example( features=tf.train.Features(feature=feature)) </code></pre> <p>Where the images <code>img</code> that I save has a fixed height but variable length.</p> <p>Now if I want to parse this image with the following code:</p> <pre><code># Define how the features are read from the example features_dict = { IMG_FEATURE_NAME: tf.FixedLenFeature([], tf.string), SEQ_LEN_FEATURE_NAME: tf.FixedLenFeature([1], tf.int64), LABEL_LIST_FEATURE_NAME: tf.VarLenFeature(tf.int64), } features = tf.parse_single_example( serialized_example, features=features_dict) # Decode string to uint8 and reshape to image shape img = tf.decode_raw(features[IMG_FEATURE_NAME], tf.uint8) img = tf.reshape(img, (self.img_shape, -1)) seq_len = tf.cast(features[SEQ_LEN_FEATURE_NAME], tf.int32) # Convert list of labels label_list = tf.cast(features[LABEL_LIST_FEATURE_NAME], tf.int32) </code></pre> <p>I get the following error: <code>ValueError: All shapes must be fully defined: [TensorShape([Dimension(28), Dimension(None)]), TensorShape([Dimension(1)]), TensorShape([Dimension(3)])]</code></p> <p>Is there a way to store images with variable size (more specifically variable width in my case) and read them with <code>TFRecordReader</code>?</p>
0
2016-09-16T13:21:02Z
39,579,574
<p>First, I was not able to reproduce the error. The following code works just fine:</p> <pre><code>import tensorflow as tf import numpy as np image_height = 100 img = np.random.randint(low=0, high=255, size=(image_height,200), dtype='uint8') IMG_FEATURE_NAME = 'image/raw' with tf.Graph().as_default(): img_feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[ img.flatten().tostring()])) feature = {IMG_FEATURE_NAME: img_feature} example = tf.train.Example(features=tf.train.Features(feature=feature)) serialized_example = example.SerializeToString() features_dict = {IMG_FEATURE_NAME: tf.FixedLenFeature([], tf.string)} features = tf.parse_single_example(serialized_example, features=features_dict) img_tf = tf.decode_raw(features[IMG_FEATURE_NAME], tf.uint8) img_tf = tf.reshape(img_tf, (image_height, -1)) with tf.Session() as sess: img_np = sess.run(img_tf) print(img_np) print('Images are identical: %s' % (img == img_np).all()) </code></pre> <p>It outputs:</p> <blockquote> <p>Images are identical: True</p> </blockquote> <p>Second, I'd recommend to store images encoded as PNG instead of RAW and read them using tf.VarLenFeature+tf.image.decode_png. It will save you a lot of space and naturally supports variable size images.</p>
0
2016-09-19T18:11:28Z
[ "python", "tensorflow", "protocol-buffers" ]
insert key-pair value into nested dict without overwriting after delimiter of key which produce duplicate key
39,532,609
<p>Assume that I have a nested dict like:</p> <pre><code>D={'Germany': {'1972-05-23': 'test1', '1969-12-27': 'test2'}, 'Morocco|Germany': {'1978-01-14':'test3'}}
</code></pre> <p>I want to get a new dict like:</p> <pre><code>{'Germany': {'1972-05-23': 'test1', '1969-12-27': 'test2', '1978-01-14':'test3'},
 'Morocco': {'1978-01-14':'test3'}}
</code></pre> <p>which means I have to handle the possibly duplicate keys after the <code>str.split(key)</code>, and this is my code:</p> <pre><code>D={'Germany': {'1972-05-23': 'test1', '1969-12-27': 'test2'}, 'Morocco|Germany': {'1978-01-14':'test3'}}
new_dict={}
for item in D:
    for index in str.split(item,'|'):
        new_dict[index]=D[item]
print new_dict
</code></pre> <p>However, the key-value pairs generated after the splitting operation overwrite the original ones, which results in:</p> <pre><code>{'Morocco': {'1978-01-14': 'test3'}, 'Germany': {'1978-01-14': 'test3'}}
</code></pre> <p>I wonder how I can modify my code to get a satisfying dict for further processing, or whether there is a better solution for this requirement.</p> <p>PS: My Python version is 2.7.12 with Anaconda 4.0.0 via the PyCharm IDE.</p> <p>Any help will be appreciated, thank you.</p>
1
2016-09-16T13:23:12Z
39,532,658
<p>You could create an empty inner dict per key and merge into it with <code>update()</code> instead of assigning:</p> <pre><code>if index not in new_dict: new_dict[index] = {} new_dict[index].update(D[item]) </code></pre>
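<p>Folded back into the loop from the question, the same idea can be written with <code>setdefault</code> (equivalent to the check above):</p> <pre><code>new_dict = {}
for item in D:
    for index in item.split('|'):
        new_dict.setdefault(index, {}).update(D[item])
print new_dict
</code></pre> <p>Because every key gets its own fresh inner dict that is then updated, the entries from <code>'Morocco|Germany'</code> are merged into <code>'Germany'</code> instead of replacing it.</p>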
2
2016-09-16T13:25:28Z
[ "python", "dictionary", "split", "overwrite" ]
Changing PYTHONHOME causes ImportError: No module named site
39,532,705
<p>I am trying to deploy a Flask web app via mod_wsgi on Apache. I cannot use the default Python environment because it was compiled with UCS-2 Unicode instead of UCS-4, and I cannot recompile it for this one case. Thus, a virtual environment. Virtual environments would have been used anyway, but that error means that I can't get away with using the default Python install and just adding the virtual environment's modules to the <code>PYTHONPATH</code>, which otherwise would have let me avoid the current problem entirely by accident.</p> <p>I found the <a href="http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIPythonHome.html" rel="nofollow">documentation for <code>mod_wsgi</code> to change which Python executable to use</a>. However, when attempting to do so, the server fails to work properly. <code>/var/log/httpd/error_log</code> rapidly floods with the line <code>ImportError: No module named site</code>.</p> <p>I have checked every similar question I can find here and elsewhere, and not yet had success. Experimentation has shown that as far as I can tell, the problem occurs when changing PYTHONHOME without activating a virtual environment - and the way the automated deployment works (via Fabric), as far as I can tell I can't activate a virtual environment.</p> <hr> <h2>Apache config</h2> <p>My current <code>httpd.conf</code> for the app:</p> <pre><code>WSGIPythonPath /path/to/dir/containing/wsgi/file/and/app:/path/to/virtualenv/lib:/path/to/virtualenv/lib/site-packages WSGIPythonHome /path/to/virtualenv WSGISocketPrefix /var/run/wsgi User user Group group &lt;VirtualHost *&gt; ServerName servername.generic.com DocumentRoot /path/to/dir/containing/wsgi/file/and/app/static_dev/ WSGIDaemonProcess appname user=user group=group threads=2 WSGIScriptAlias / /path/to/dir/containing/wsgi/file/and/app/app.wsgi &lt;Directory /path/to/dir/containing/wsgi/file/and/app&gt; WSGIProcessGroup appname WSGIApplicationGroup %{GLOBAL} Require all granted &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre> <hr> <h2>Data I've found from failed attempts</h2> <p>I know that the error is not in my <code>app.wsgi</code>, because when I added the line <code>raise Exception('tried to open the file')</code> at the very top to check that, the existing <code>ImportError</code> kept happening instead of that new <code>Exception</code>.</p> <p>I have confirmed via <code>ldd</code> that my version of <code>mod_wsgi</code> is for Python 2.7.</p> <p>I have tried setting <code>WSGIPythonHome /path/to/virtualenv/bin/</code> and <code>WSGIPythonHome /path/to/virtualenv/bin/python</code>, with the same result as the current state.</p> <p>I have tried omitting the <code>WSGIPythonHome</code> directive, in which case it loads the <code>app.wsgi</code> as it should, but breaks on a later import as described at the top (the reason I can't just do that).</p> <p>I have tried omitting the <code>WSGIPythonPath</code> directive and leaving it up to <code>app.wsgi</code> to add things to the PYTHONPATH, with the same result as the current state.</p> <p>I have tried putting the path-setting as an argument to <code>WSGIDaemonProcess</code> instead of as the <code>WSGIPythonPath</code> directive, with the same result as the current state.</p> <p>I have confirmed that there is a <code>site.py</code> in <code>/path/to/virtualenv/lib</code>.</p> <p>I have confirmed that no other non-app-specific Apache <code>.conf</code> files being used (default settings, automatic module loads, etc) contain the string "WSGI", so I don't 
think there's any conflicts here.</p> <p>If I activate the virtual environment from the command line I can import <code>site</code> without an error, just for the sake of testing that it does in fact exist in the environment. However, this is insufficient because it needs to start smoothly from a single call to <code>sudo systemctl start httpd.service</code> due to the deployment tools in use, and that seems to not care about the venv of the current shell session.</p> <p>If, from a default state, I <code>export PYTHONHOME=/path/to/virtualenv</code>, attempting to open the Python REPL exits immediately with <code>ImportError: No module named site</code>.</p> <p>If I activate the virtual environment and then set <code>PYTHONHOME</code>, I get the same import error.</p> <p>If I activate the virtual environment and don't touch <code>PYTHONHOME</code>, <code>echo $PYTHONHOME</code> outputs a blank line, and the Python REPL works fine. In the Python REPL while in the virtualenv:</p> <pre><code>(virtualenv)-bash-4.2$ python Python 2.7.8 (default, Aug 14 2014, 13:26:38) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.prefix '/path/to/virtualenv' &gt;&gt;&gt; sys.exec_prefix '/path/to/virtualenv' </code></pre> <p>Even though setting <code>PYTHONHOME</code> to the same value didn't work.</p> <p>If I try <code>export PYTHONHOME=:/path/to/virtualenv</code> or <code>export PYTHONHOME=/path/to/virtualenv:</code>, explicitly setting only one of <code>prefix</code> and <code>exec_prefix</code>, it fails with the same import error in either case.</p> <p>If I activate the virtual environment and set <code>PYTHONHOME</code> in one of those latter two ways, the unset one appears to default to <code>/</code> rather than to the usual default value, but the Python REPL runs fine:</p> <pre><code># Setting only exec_prefix (virtualenv)-bash-4.2$ export PYTHONHOME=:/path/to/virtualenv (virtualenv)-bash-4.2$ python Python 2.7.8 (default, Aug 14 2014, 13:26:38) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.prefix '/' &gt;&gt;&gt; sys.exec_prefix '/path/to/virtualenv' &gt;&gt;&gt; quit() # Setting only prefix (.virtualenv)-bash-4.2$ export PYTHONHOME=/path/to/virtualenv: (.virtualenv)-bash-4.2$ python Python 2.7.8 (default, Aug 14 2014, 13:26:38) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.prefix '/path/to/virtualenv' &gt;&gt;&gt; sys.exec_prefix '/' </code></pre> <p>Unfortunately, since the deployment script doesn't care what environment is activated, that doesn't solve it. Trying to set <code>WSGIPythonHome</code> in such a fashion makes no difference whatsoever.</p> <p>I have noticed one further thing: The Python in the virtualenv is <strong>2.7.8</strong>. The Python run outside the virtualenv (<code>usr/bin/python</code>) is <strong>2.7.5</strong>. I do not know - would this affect the setting of <code>PYTHONHOME</code> somehow? 
I would hope not - since that seems to defeat the entire purpose of using <code>WSGIPythonHome</code> to run a virtualenv as compared to just setting <code>sys.path</code> inside the <code>app.wsgi</code> file, the ability to start from a different executable - but I cannot rule it out, clueless as I am.</p> <p>The 2.7.8 Python in <code>/path/to/virtualenv/bin/python</code> has a <code>sys.real_prefix</code> of <code>/network-mounted-drive/sw/python/python-2.7.8</code>.</p> <p>I changed the deployment to build from <code>/network-mounted-drive/sw/python/python-2.7.5</code>, then did more tests. Results as follows:</p> <p>Attempting to start httpd gives the same import error as before.</p> <p>Setting PYTHONHOME to the location of the virtual environment, then running python:</p> <pre><code>-bash-4.2$ echo $PYTHONHOME /path/to/virtualenv -bash-4.2$ python ImportError: No module named site </code></pre> <p>Setting PYTHONHOME to the location of the virtual environment, then explicitly running the virtual environment's python binary (activating the virtual environment and then running <code>python</code> gives the same result):</p> <pre><code># In the directory just above the virtualenv -bash-4.2$ ./virtualenv/bin/python Python 2.7.5 (default, Mar 14 2016, 14:13:09) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.prefix '/path/to/virtualenv' &gt;&gt;&gt; sys.exec_prefix '/path/to/virtualenv' &gt;&gt;&gt; sys.real_prefix '/network-mounted-drive/sw/python/python-2.7.5 </code></pre> <hr> <p>Does anyone have any idea?</p>
0
2016-09-16T13:27:55Z
39,580,230
<p>Resolution found: The issue seems to have been in trying to use a virtual environment built from something other than the local python install on the system.</p> <p>Solved by pushing the problem of "local python install on the deployment VM doesn't have pip installed" up the chain to someone with the permissions required to install pip, since no attempted workarounds via networked python installs worked.</p> <p>The issue of actually using a virtual environment chained from a Python install on a network drive for mod_wsgi may be insoluble, or at least I couldn't figure it out in a reasonable amount of time relative to the bureaucratic solution.</p>
0
2016-09-19T18:57:28Z
[ "python", "centos", "mod-wsgi", "fabric", "importerror" ]
Using python to send email through TLS
39,532,708
<p>I am trying to automate sending email through a python script. I am presently using an expect script to use openssl s_client to connect to the server. Presently we only use a certificate file along with the username password and it allows me to send the email. I found another question in which it was mentioned that in python you either need a hack to or a wrapper around the smtp class to use only the CA cert file and not the key file(which i don't have). </p> <pre><code>&gt;&gt;&gt; smtpobj = smtplib.SMTP("mymailserver.com",465) Traceback (innermost last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 251, in __init__ (code, msg) = self.connect(host, port) File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 337, in connect (code, msg) = self.getreply() File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 390, in getreply + str(e)) smtplib.SMTPServerDisconnected: Connection unexpectedly closed: [WinError 10054] An existing connection was forcibly closed by the remote host </code></pre> <p>The problem I am facing right now is that I am unable to connect to the server through python.</p> <p>If I use the certificate file to connect through </p> <pre><code>smtplib.SMTP_SSL(myserver, port, certfile="mycert.cert") </code></pre> <p>then it throws the following error.</p> <pre><code>ssl.SSLError: [Errno 336265225] _ssl.c:339: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib </code></pre> <p>Please note, I am able to connect to the server using thunderbird, without a cert file. Any ideas, on how I can use python smtp(tls) to send the emails? </p>
0
2016-09-16T13:28:14Z
39,532,826
<p>You need to remove the key pass phrase first using -</p> <pre><code>openssl rsa -in key.pem -out tempkey.pem </code></pre> <p>And then type passphrase once more - </p> <pre><code>openssl rsa -in mycert.pem -out tempkey.pem openssl x509 -in mycert.pem &gt;&gt;tempkey.pem </code></pre> <p>Refer <a href="https://www.madboa.com/geek/openssl/#key-removepass" rel="nofollow">this</a> for more info.</p>
0
2016-09-16T13:33:16Z
[ "python", "email", "ssl" ]
Using python to send email through TLS
39,532,708
<p>I am trying to automate sending email through a python script. I am presently using an expect script to use openssl s_client to connect to the server. Presently we only use a certificate file along with the username password and it allows me to send the email. I found another question in which it was mentioned that in python you either need a hack to or a wrapper around the smtp class to use only the CA cert file and not the key file(which i don't have). </p> <pre><code>&gt;&gt;&gt; smtpobj = smtplib.SMTP("mymailserver.com",465) Traceback (innermost last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 251, in __init__ (code, msg) = self.connect(host, port) File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 337, in connect (code, msg) = self.getreply() File "C:\Program Files (x86)\Python35-32\lib\smtplib.py", line 390, in getreply + str(e)) smtplib.SMTPServerDisconnected: Connection unexpectedly closed: [WinError 10054] An existing connection was forcibly closed by the remote host </code></pre> <p>The problem I am facing right now is that I am unable to connect to the server through python.</p> <p>If I use the certificate file to connect through </p> <pre><code>smtplib.SMTP_SSL(myserver, port, certfile="mycert.cert") </code></pre> <p>then it throws the following error.</p> <pre><code>ssl.SSLError: [Errno 336265225] _ssl.c:339: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib </code></pre> <p>Please note, I am able to connect to the server using thunderbird, without a cert file. Any ideas, on how I can use python smtp(tls) to send the emails? </p>
0
2016-09-16T13:28:14Z
39,533,435
<blockquote> <pre><code>smtpobj = smtplib.SMTP("mymailserver.com",465) smtplib.SMTPServerDisconnected: Connection unexpectedly closed: [WinError 10054] An existing connection was forcibly closed by the remote host </code></pre> </blockquote> <p>This error has nothing to do with validation of the certificate. It is simply that you are using explicit TLS (i.e. STARTTLS command on plain connection) on a port which requires implicit TLS (TLS from start). Try this instead:</p> <pre><code>smtpobj = smtplib.SMTP_SSL("mymailserver.com",465) </code></pre> <p>Apart from that:</p> <blockquote> <p>... you either need a hack to or a wrapper around the smtp class to use only the CA cert file and not the key file(which i don't have).</p> </blockquote> <p>I think you are mixing up some concepts:</p> <ul> <li>CA cert: this contains the trusted root which is needed to verify the certificate of the server. You don't have a key for this certificate and you don't need one.</li> <li>local cert, local key: these are used if the server requires authentication with a client certificate. In this case both cert and key are needed.</li> </ul> <p>What you probably want to specify is a CA cert in order to verify the server's certificate. Unfortunately smtplib's <code>certfile</code>/<code>keyfile</code> arguments do not give you a way to specify this CA certificate. You've tried certfile but this is used for specifying the local cert for client certificate authentication and it requires a key file.</p> <p>The good news is: <strong>it works without specifying a CA cert</strong> because smtplib simply does not verify the server's certificate at all. The bad news is: because there is no verification of the server certificate <strong>a man in the middle attack against the encrypted connection is easy</strong>.</p>
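<p>On Python 3.3+ (the traceback in the question shows Python 3.5), <code>smtplib.SMTP_SSL</code> does accept an <code>ssl.SSLContext</code> through its <code>context</code> argument, which is one way to have the client verify the server against a CA file. A minimal sketch, assuming <code>mycert.cert</code> actually contains the CA certificate and treating the credentials as placeholders:</p> <pre><code>import smtplib
import ssl

context = ssl.create_default_context(cafile="mycert.cert")
server = smtplib.SMTP_SSL("mymailserver.com", 465, context=context)
server.login("username", "password")
server.sendmail("me@example.com", ["you@example.com"],
                "Subject: test\r\n\r\nhello")
server.quit()
</code></pre>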
0
2016-09-16T14:01:50Z
[ "python", "email", "ssl" ]
Alternative data model to using sorted set in redis (for a Django/Python project)
39,532,732
<p>I have a web application where users post text messages for others to read (kind of like Twitter).</p> <p>I need to save the 50 latest <code>message_id</code> and the poster's <code>user_id</code> pairs (for processing later). I use redis backend and realized I can save these 50 latest pairs in a sorted set: <code>user_id</code> as a value and <code>message_id</code> as a score. </p> <p>Now since <code>user_id</code> can be repeated, I would need to set the <code>NX</code> flag to <code>true</code>. This, according to <a href="http://redis.io/commands/zadd" rel="nofollow">the docs</a>, ensures that new members are added to the sorted set instead of updating existing ones. This helps because if the same user posts messages multiple times, new entries will be added to the sorted set, instead of overwriting existing ones. That keeps the data sane.</p> <p>Here's the problem: my application uses python, and the NX flag wasn't introduced in redis 2.8.4 (the version I'm using). </p> <p>So what alternatives do I have for efficiently saving the 50 latest <code>message_id</code> and <code>user_id</code> pairs using redis? Please advise.</p> <hr> <p>Switching <code>message_id</code> and <code>user_id</code> in the sorted set as value and score doesn't work for me. Why? Because to ensure the sorted set only saves the latest 50 entries, I need to <code>zremrangebyrank</code> if the set cardinality exceeds 50. And that works only if <code>message_id</code> is the <code>score</code>, instead of the repeatable <code>user_id</code>. I hope that makes sense.</p>
0
2016-09-16T13:29:18Z
39,533,735
<p>First, I think that you misunderstood the meaning of the "NX" flag. A sorted <strong>set</strong> is still a set - the same member cannot be stored twice with different scores. The "NX" flag only ensures that if you try to add an existing member again with a different score, the score of the existing element will not be modified.</p> <p>You need to use redis lists. You just need to push values like:</p> <p>user_id:message_id (or serialized),</p> <p>with: <a href="http://redis.io/commands/lpush" rel="nofollow">http://redis.io/commands/lpush</a> (LSET only overwrites an existing index, it cannot add new elements)</p> <p>then use: <a href="http://redis.io/commands/lrange" rel="nofollow">http://redis.io/commands/lrange</a> to read the last 50 elements,</p> <p>and occasionally use: <a href="http://redis.io/commands/ltrim" rel="nofollow">http://redis.io/commands/ltrim</a> to trim the list.</p>
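<p>A rough sketch of that pattern with the <code>redis-py</code> client; the key name, the serialization format and the client construction are placeholders to adapt:</p> <pre><code>import json
import redis

r = redis.StrictRedis()

def record_message(user_id, message_id, key='latest_messages', limit=50):
    r.lpush(key, json.dumps({'user_id': user_id, 'message_id': message_id}))
    r.ltrim(key, 0, limit - 1)   # keep only the newest `limit` entries

def latest_messages(key='latest_messages', limit=50):
    return [json.loads(raw.decode('utf-8')) for raw in r.lrange(key, 0, limit - 1)]
</code></pre> <p>If needed, the <code>lpush</code> and <code>ltrim</code> calls can be sent together in a pipeline so the trim always follows the push.</p>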
1
2016-09-16T14:15:29Z
[ "python", "redis" ]
Python BS4 with SDMX
39,532,776
<p>I would like to retrieve data given in a SDMX file (like <a href="https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&amp;its_fileFormat=sdmx&amp;mode=its" rel="nofollow">https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&amp;its_fileFormat=sdmx&amp;mode=its</a>). I tried to use BeautifulSoup, but it seems, it does not see the tags. In the following the code</p> <pre><code>import urllib2 from bs4 import BeautifulSoup url = "https://www.bundesbank.de/cae/servlet/StatisticDownload?tsId=BBK01.ST0304&amp;its_fileFormat=sdmx" html_source = urllib2.urlopen(url).read() soup = BeautifulSoup(html_source, 'lxml') ts_series = soup.findAll("bbk:Series") </code></pre> <p>which gives me an empty object.</p> <p>Is BS4 the wrong tool, or (more likely) what am I doing wrong? Thanks in advance</p>
0
2016-09-16T13:31:21Z
39,533,371
<p><code>soup.findAll("bbk:series")</code> would return the result.</p> <p>In fact, in this case, even you use <code>lxml</code> as the parser, BeautifulSoup still parse it as html, since html tags are case insensetive, BeautifulSoup downcases all the tags, thus <code>soup.findAll("bbk:series")</code> works. See <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#other-parser-problems" rel="nofollow" title="official doc">Other parser problems</a> from the official doc.</p> <p>If you want to parse it as <code>xml</code>, use <code>soup = BeautifulSoup(html_source, 'xml')</code> instead. It also uses <code>lxml</code> since <code>lxml</code> is the only <code>xml</code> parser BeautifulSoup has. Now you can use <code>ts_series = soup.findAll("Series")</code> to get the result as beautifulSoup will strip the namespace part <code>bbk</code>.</p>
0
2016-09-16T13:58:47Z
[ "python", "python-2.7", "xml-parsing", "bs4" ]
Remove final characters from string recursively - What's the best way to do this?
39,532,974
<p>I am reading lines from a file one at a time and before i store each, i wanna modify them according to the following simple rule:</p> <ul> <li>if the last character is not any of, e.g., <code>{'a', 'b', 'c'}</code> store the line.</li> <li>if that is not the case, remove the character (pop-like) and check again.</li> </ul> <p>What i currently have (felt like the obvious thing to do) is this:</p> <pre><code>bad_chars = {'a', 'b', 'c'} def remove_end_del(line_string, chars_to_remove): while any(line_string[-1] == x for x in chars_to_remove): line_string = line_string[:-1] return line_string example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' modified_line = remove_end_del(example_line, bad_chars) print(modified_line) # prints -&gt; jkhasdkjashdasjkd| </code></pre> <p>Which of course works, but the string slicing\reconstruction seems a bit too excessive to my untrained eyes. So i was wondering a couple of things:</p> <ol> <li>is there a better way to do this? like a <code>pop</code> type of function for strings?</li> <li>how is <code>rstrip()</code> or <code>strip()</code> in general implemented? is it also with a <strong>while</strong>?</li> <li>would it be worthwhile making <code>rstrip()</code> recursive for this example?</li> <li>Finally, how much better is the following:</li> </ol> <hr> <pre><code>def remove_end_del_2(line_string, chars_to_remove): i = 1 while line_string[-i] in chars_to_remove: i += 1 return line_string[:-i+1] </code></pre> <p>Any comment on any of the points made above would be appreciated ☺.</p> <p><strong>Note: the separator ("|") is only there for visualization.</strong></p>
3
2016-09-16T13:40:23Z
39,533,040
<p>Not the direct answer to the question, but one alternative option would be to use <em>regular expressions</em> to remove the bad characters at the end of the string:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; &gt;&gt;&gt; example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' &gt;&gt;&gt; bad_chars = {'a', 'b', 'c'} &gt;&gt;&gt; &gt;&gt;&gt; re.sub(r'[%s]+$' % ''.join(bad_chars), '', example_line) 'jkhasdkjashdasjkd|' </code></pre> <p>The regular expression here is dynamically constructed from the set of "bad" characters. It would (or "could", since sets have no order) be <code>[abc]+$</code> in this case:</p> <ul> <li><code>[abc]</code> defines a "character class" - any of "a", "b" or "c" would be matched</li> <li><code>+</code> means 1 or more </li> <li><code>$</code> defines the end of the string</li> </ul> <p>(Note that, if "bad" characters can contain a character that may have a special meaning in the character class (like, for example, <code>[</code> or <code>]</code>), it should be escaped with <a href="https://docs.python.org/2/library/re.html#re.escape" rel="nofollow"><code>re.escape()</code></a>).</p> <p><sup>The last statement though may prove that <a href="http://programmers.stackexchange.com/questions/223634/what-is-meant-by-now-you-have-two-problems">old saying</a> about having more problems than initially.</sup></p>
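<p>If the set of "bad" characters might ever include a regex metacharacter (for example <code>]</code> or <code>^</code>), the same pattern can be built safely by escaping each character:</p> <pre><code>pattern = '[%s]+$' % ''.join(re.escape(c) for c in bad_chars)
re.sub(pattern, '', example_line)
</code></pre>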
2
2016-09-16T13:44:01Z
[ "python" ]
Remove final characters from string recursively - What's the best way to do this?
39,532,974
<p>I am reading lines from a file one at a time and before i store each, i wanna modify them according to the following simple rule:</p> <ul> <li>if the last character is not any of, e.g., <code>{'a', 'b', 'c'}</code> store the line.</li> <li>if that is not the case, remove the character (pop-like) and check again.</li> </ul> <p>What i currently have (felt like the obvious thing to do) is this:</p> <pre><code>bad_chars = {'a', 'b', 'c'} def remove_end_del(line_string, chars_to_remove): while any(line_string[-1] == x for x in chars_to_remove): line_string = line_string[:-1] return line_string example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' modified_line = remove_end_del(example_line, bad_chars) print(modified_line) # prints -&gt; jkhasdkjashdasjkd| </code></pre> <p>Which of course works, but the string slicing\reconstruction seems a bit too excessive to my untrained eyes. So i was wondering a couple of things:</p> <ol> <li>is there a better way to do this? like a <code>pop</code> type of function for strings?</li> <li>how is <code>rstrip()</code> or <code>strip()</code> in general implemented? is it also with a <strong>while</strong>?</li> <li>would it be worthwhile making <code>rstrip()</code> recursive for this example?</li> <li>Finally, how much better is the following:</li> </ol> <hr> <pre><code>def remove_end_del_2(line_string, chars_to_remove): i = 1 while line_string[-i] in chars_to_remove: i += 1 return line_string[:-i+1] </code></pre> <p>Any comment on any of the points made above would be appreciated ☺.</p> <p><strong>Note: the separator ("|") is only there for visualization.</strong></p>
3
2016-09-16T13:40:23Z
39,533,254
<p>I understand what you mean by <em>excessive</em>, but I think in general, it looks good. The alternative would be to work with indices, which is not very readable. (I also happen to think that regexes are not very readable either...)</p> <p>You could, however, use a <code>memoryview</code> if you have a <code>bytes</code> object which may or may not be relevant: <a href="https://docs.python.org/3/library/stdtypes.html#memoryview" rel="nofollow">https://docs.python.org/3/library/stdtypes.html#memoryview</a></p> <p><strong>1. <code>pop</code> function for strings</strong></p> <p>No there are no <code>.pop</code> method for <code>str</code>. You'd have to use <code>list(line_string).pop()</code>, where <code>list(s)</code> creates a list with each character of the string as an element.</p> <p><strong>2. <code>(r)strip</code> implementation</strong></p> <p>This is probably implemented with a while, yes. It should be all C code though.</p> <p><strong>3. recursive <code>rstrip</code></strong></p> <p>First of all, why would you need to make it <em>recursive</em>? Second, I think that (recursiveness) would make the mental load unnecessarily high -- so, no.</p> <p><strong>4. Finally, how much better is the following:</strong></p> <p>Measure it! Surely it would be faster.</p>
1
2016-09-16T13:53:44Z
[ "python" ]
Remove final characters from string recursively - What's the best way to do this?
39,532,974
<p>I am reading lines from a file one at a time and before i store each, i wanna modify them according to the following simple rule:</p> <ul> <li>if the last character is not any of, e.g., <code>{'a', 'b', 'c'}</code> store the line.</li> <li>if that is not the case, remove the character (pop-like) and check again.</li> </ul> <p>What i currently have (felt like the obvious thing to do) is this:</p> <pre><code>bad_chars = {'a', 'b', 'c'} def remove_end_del(line_string, chars_to_remove): while any(line_string[-1] == x for x in chars_to_remove): line_string = line_string[:-1] return line_string example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' modified_line = remove_end_del(example_line, bad_chars) print(modified_line) # prints -&gt; jkhasdkjashdasjkd| </code></pre> <p>Which of course works, but the string slicing\reconstruction seems a bit too excessive to my untrained eyes. So i was wondering a couple of things:</p> <ol> <li>is there a better way to do this? like a <code>pop</code> type of function for strings?</li> <li>how is <code>rstrip()</code> or <code>strip()</code> in general implemented? is it also with a <strong>while</strong>?</li> <li>would it be worthwhile making <code>rstrip()</code> recursive for this example?</li> <li>Finally, how much better is the following:</li> </ol> <hr> <pre><code>def remove_end_del_2(line_string, chars_to_remove): i = 1 while line_string[-i] in chars_to_remove: i += 1 return line_string[:-i+1] </code></pre> <p>Any comment on any of the points made above would be appreciated ☺.</p> <p><strong>Note: the separator ("|") is only there for visualization.</strong></p>
3
2016-09-16T13:40:23Z
39,533,264
<p>Slicing creates lots of unnecessary temporary copies of string. Recursion would be even worse - copies would still be made, and on top of it function call overhead would be intruduced. Both approaches are not that great.</p> <p>You may find an <code>rstrip</code> implementation in <a href="https://github.com/python/cpython/blob/master/Objects/bytesobject.c#L2030" rel="nofollow">CPython source code</a>. An iterative approach (similar to your last code snippet) is used there.</p> <pre><code>Py_LOCAL_INLINE(PyObject *) do_xstrip(PyBytesObject *self, int striptype, PyObject *sepobj) { Py_buffer vsep; char *s = PyBytes_AS_STRING(self); Py_ssize_t len = PyBytes_GET_SIZE(self); char *sep; Py_ssize_t seplen; Py_ssize_t i, j; if (PyObject_GetBuffer(sepobj, &amp;vsep, PyBUF_SIMPLE) != 0) return NULL; sep = vsep.buf; seplen = vsep.len; i = 0; if (striptype != RIGHTSTRIP) { while (i &lt; len &amp;&amp; memchr(sep, Py_CHARMASK(s[i]), seplen)) { i++; } } j = len; if (striptype != LEFTSTRIP) { do { j--; } while (j &gt;= i &amp;&amp; memchr(sep, Py_CHARMASK(s[j]), seplen)); j++; } PyBuffer_Release(&amp;vsep); if (i == 0 &amp;&amp; j == len &amp;&amp; PyBytes_CheckExact(self)) { Py_INCREF(self); return (PyObject*)self; } else return PyBytes_FromStringAndSize(s+i, j-i); } </code></pre> <p>So to sum up, your intuition to use index-based parsing is correct. Main advantage is that no temporary strings are created and copying things in memory is significantly reduced.</p>
2
2016-09-16T13:54:12Z
[ "python" ]
Remove final characters from string recursively - What's the best way to do this?
39,532,974
<p>I am reading lines from a file one at a time and before i store each, i wanna modify them according to the following simple rule:</p> <ul> <li>if the last character is not any of, e.g., <code>{'a', 'b', 'c'}</code> store the line.</li> <li>if that is not the case, remove the character (pop-like) and check again.</li> </ul> <p>What i currently have (felt like the obvious thing to do) is this:</p> <pre><code>bad_chars = {'a', 'b', 'c'} def remove_end_del(line_string, chars_to_remove): while any(line_string[-1] == x for x in chars_to_remove): line_string = line_string[:-1] return line_string example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' modified_line = remove_end_del(example_line, bad_chars) print(modified_line) # prints -&gt; jkhasdkjashdasjkd| </code></pre> <p>Which of course works, but the string slicing\reconstruction seems a bit too excessive to my untrained eyes. So i was wondering a couple of things:</p> <ol> <li>is there a better way to do this? like a <code>pop</code> type of function for strings?</li> <li>how is <code>rstrip()</code> or <code>strip()</code> in general implemented? is it also with a <strong>while</strong>?</li> <li>would it be worthwhile making <code>rstrip()</code> recursive for this example?</li> <li>Finally, how much better is the following:</li> </ol> <hr> <pre><code>def remove_end_del_2(line_string, chars_to_remove): i = 1 while line_string[-i] in chars_to_remove: i += 1 return line_string[:-i+1] </code></pre> <p>Any comment on any of the points made above would be appreciated ☺.</p> <p><strong>Note: the separator ("|") is only there for visualization.</strong></p>
3
2016-09-16T13:40:23Z
39,533,365
<p>Another nearly fast approach to <code>re.sub</code> albeit more intuitive (<em>it sounds like the <code>pop</code> you're asking for</em>) is <a href="https://docs.python.org/2/library/itertools.html#itertools.dropwhile" rel="nofollow"><code>itertools.dropwhile</code></a>:</p> <blockquote> <p>Make an iterator that drops elements from the iterable as long as the predicate is true;</p> </blockquote> <pre><code>&gt;&gt;&gt; ''.join(dropwhile(lambda x: x in bad_chars, example_line[::-1]))[::-1] 'jkhasdkjashdasjkd|' </code></pre> <p>However, it appears <code>rstrip</code> was made and more suited for a task as this.</p> <hr> <p>Some timings:</p> <pre><code>In [4]: example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' In [5]: bad_chars = {'a', 'b', 'c'} </code></pre> <hr> <pre><code>In [6]: %%timeit ...: re.sub(r'[%s]+$' % ''.join(bad_chars), '', example_line) ...: 100000 loops, best of 3: 5.24 µs per loop </code></pre> <hr> <pre><code>In [7]: %%timeit ...: ''.join(dropwhile(lambda x: x in bad_chars, example_line[::-1]))[::-1] ...: 100000 loops, best of 3: 5.72 µs per loop </code></pre> <hr> <pre><code>In [10]: %%timeit ....: remove_end_del(example_line, bad_chars) ....: 10000 loops, best of 3: 24.1 µs per loop </code></pre> <hr> <pre><code>In [11]: %%timeit ....: example_line.rstrip('abc') ....: 1000000 loops, best of 3: 579 ns per loop </code></pre> <hr> <pre><code>In [14]: %%timeit ....: remove_end_del_2(example_line, bad_chars) ....: 100000 loops, best of 3: 4.22 µs per loop </code></pre> <p><strong><code>rstrip</code> wins!</strong></p>
4
2016-09-16T13:58:32Z
[ "python" ]
Python function redefines variables that are not passed
39,532,978
<p>I made a function that operates on a numpy array of latitude/longitude values. </p> <pre><code>from __future__ import division, print_function import pandas as pd, numpy as np def regrid2(lats, lons, lat_res=0.25, lon_res=0.25): # round lat/lon values to nearest decimal degree according to specified # resolution and reshape the array lats[lats&lt;=0] = lat_res*(np.round(lats[lats&lt;=0]/lat_res)) - lat_res/2 lats[lats&gt;0] = lat_res*(np.round(lats[lats&gt;0]/lat_res)) + lat_res/2 lons[lons&lt;=0] = lon_res*(np.round(lons[lons&lt;=0]/lon_res)) + lon_res/2 lons[lons&gt;0] = lon_res*(np.round(lons[lons&gt;0]/lon_res)) - lon_res/2 lats = np.reshape(lats, (lats.size,1), order='F') lons = np.reshape(lons, (lons.size,1), order='F') lats = 0 df = pd.DataFrame() return df lat = np.arange(80.111, 90, 5) lon = np.arange(170.11, 180,0.33) df = regrid2(lat,lon) </code></pre> <p>When I call regrid2 my lat/lon arrays change, even though the function is not returning new arrays for lat/lon. </p> <p>E.g., before calling regrid: </p> <pre><code>&gt;&gt;&gt; lat.min() 80.111000000000004 &gt;&gt;&gt; lon.min() 170.11000000000001 </code></pre> <p>AFTER calling regrid: </p> <pre><code>&gt;&gt;&gt; lat.min() 80.125 &gt;&gt;&gt; lon.min() 169.875 </code></pre> <p>I do not recall having similar problems before. What makes this particularly strange is that I ran the same script last night without problems, but this morning the function is redefining my lat/lon variables. I have re-started my IDE, but I have not been able to determine why this is happening. </p> <p>That being said, if I copy lat/lon into regrid then I have no problems. e.g.</p> <pre><code>df = regrid2(lat.copy(),lon.copy()) &gt;&gt;&gt; lat.min() 80.111000000000004 &gt;&gt;&gt; lon.min() 170.11000000000001 </code></pre> <p>I would like to identify the sudden change in behavior. I typically work in Pandas these days, not so much with numpy, so maybe there has been no change in behavior, rather I am just noticing this.</p> <p>Python 2.7, Numpy 1.10.4</p>
0
2016-09-16T13:40:56Z
39,533,339
<p>You're passing mutable data to the function, then mutating it. </p> <p>You could use </p> <pre><code>def regrid2(lats, lons, lat_res=0.25, lon_res=0.25): lats, lons = np.copy(lats), np.copy(lons) </code></pre> <p>to work with copies instead?</p>
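<p>A tiny illustration of the mechanism (boolean-mask assignment writes into the very buffer the caller passed in):</p> <pre><code>import numpy as np

def clip_negatives(a):
    a[a &lt; 0] = 0        # in-place modification of the caller's array
    return a

x = np.array([-1.0, 2.0, -3.0])
clip_negatives(x)
print(x)                 # [ 0.  2.  0.]  -- x itself changed

clip_negatives(x.copy()) # operate on a copy and x stays untouched
</code></pre>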
1
2016-09-16T13:57:34Z
[ "python", "arrays", "pandas", "numpy" ]
Issues in Plotting Intraday OHLC Chart with Matplotlib
39,533,004
<p>I am trying to plot OHLC candlestick chart (1Min) for complete one day and want to show 'Hours' as Major locator and Min as minor locator. Hour locator should be displayed as till end of data Major Locator 09:00 10:00 11:00 and so on.</p> <p><a href="http://i.stack.imgur.com/3K6A9.png" rel="nofollow"><img src="http://i.stack.imgur.com/3K6A9.png" alt="enter image description here"></a></p> <p>I am not able to understand what error I am doing and why time is starting from 22:00 and OHLC candles are not visible.</p> <p>If you can also help with volume overlay on ohlc chart it would be a great help.<a href="https://drive.google.com/file/d/0B4iD4lT8-7J2ckhNcFIxR0E3YTA/view?usp=sharing" rel="nofollow">link to data file</a></p> <pre><code>from datetime import datetime, date, timedelta import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.gridspec as grd from matplotlib.transforms import Bbox from matplotlib.finance import candlestick_ohlc, volume_overlay3, volume_overlay #from matplotlib.finance import candlestick from matplotlib.backends.backend_pdf import PdfPages from matplotlib.dates import DateFormatter, WeekdayLocator, DayLocator, MONDAY, HourLocator, MinuteLocator import numpy as np import pandas as pd def plot_underlying_hft_data(filename): #Read the data and filtered out the required rows and columns print("Reading File.. ", filename) tempdata = pd.read_csv(filename, index_col = ['Date']) tempdata = tempdata.loc[(tempdata.index == '2016-09-16')] tempdata['Datetime'] = pd.to_datetime(tempdata['Datetime'], format='%Y-%m-%d %H:%M:%S') print(tempdata) HourLocator hour = HourLocator() minute = MinuteLocator() hourformatter = DateFormatter('%H:%M') #tempdata['Datetime'] = tempdata['Datetime'].apply(lambda datetimevar : datetime) tempdata['DatetimeNum'] = mdates.date2num(tempdata['Datetime'].dt.to_pydatetime()) quotes = [tuple(x) for x in tempdata[['DatetimeNum', 'Open', 'High', 'Low', 'Close', 'Volume']].to_records(index=False)] #print(quotes) title_name_ohlc = 'OHLC Intraday Chart' #print(title_name_ohlc) plt.figure(figsize = (12,6)) #plt.title(title_name_ohlc) ax1 = plt.subplot2grid((1,1), (0,0), axisbg='w') ax1.set_ylabel('Price', fontsize=12, fontweight = 'bold') ax1.set_title(title_name_ohlc, fontsize=14, fontweight = 'bold') ax1.set_ylabel('Price', fontsize=12, fontweight = 'bold') ax1.set_title(title_name_ohlc, fontsize=14, fontweight = 'bold') print(tempdata['DatetimeNum'].min(), tempdata['DatetimeNum'].max()) ax1.set_ylim(bottom = tempdata['DatetimeNum'].min(), top = tempdata['DatetimeNum'].max()) ax1.xaxis.set_major_locator(hour) ax1.xaxis.set_minor_locator(minute) ax1.xaxis.set_major_formatter(hourformatter) #ax1.grid(True) candlestick_ohlc(ax1, quotes, width=1, colorup='g', colordown='r', alpha = 1.0) plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right') plt.show() plot_underlying_hft_data("data.csv") #print(tempdata.head(5)) </code></pre>
0
2016-09-16T13:42:05Z
39,536,333
<p>I was doing mistake in defining the xlimits and width in the plotting of graph. I fixed after reading documentation and some hit and trial and got the output as desired. <a href="http://i.stack.imgur.com/nZkdy.png" rel="nofollow"><img src="http://i.stack.imgur.com/nZkdy.png" alt="enter image description here"></a></p> <pre><code>def plot_underlying_hft_data(filename): #Read the data and filtered out the required rows and columns print("Reading File.. ", filename) tempdata = pd.read_csv(filename, index_col = ['Date']) tempdata = tempdata.loc[(tempdata.index == '2016-09-16')].tail(751) print(tempdata.head(5)) tempdata.set_index(['Datetime'], inplace = True) print(tempdata.head(5)) #tempdata['Datetime'] = pd.to_datetime(tempdata['Datetime'], format='%Y-%m-%d %H:%M:%S') #print(tempdata) #hour = HourLocator(interval = 1) minute = MinuteLocator(interval = 30) hourformatter = DateFormatter('%H:%M') #tempdata['Datetime'] = tempdata['Datetime'].apply(lambda datetimevar : datetime) tempdata["Datetime"] = pd.to_datetime(tempdata.index) tempdata.Datetime = mdates.date2num(tempdata.Datetime.dt.to_pydatetime()) #print(tempdata.head(5)) quotes = [tuple(x) for x in tempdata[['Datetime', 'Open', 'High', 'Low', 'Close', 'Volume']].to_records(index=False)] #print(quotes) title_name_ohlc = 'OHLC Intraday Chart' #print(title_name_ohlc) plt.figure(figsize = (18,10)) #plt.title(title_name_ohlc) ax1 = plt.subplot2grid((1,1), (0,0), axisbg='w') ax1.set_ylabel('Price', fontsize=12, fontweight = 'bold') ax1.set_title(title_name_ohlc, fontsize=14, fontweight = 'bold') ax1.set_ylabel('Price', fontsize=12, fontweight = 'bold') ax1.set_xlabel('Time', fontsize=12, fontweight = 'bold') ax1.set_title(title_name_ohlc, fontsize=14, fontweight = 'bold') #print(tempdata['DatetimeNum'].min(), tempdata['DatetimeNum'].max()) ax1.set_xlim(tempdata['Datetime'].min(), tempdata['Datetime'].max()) ax1.xaxis.set_major_locator(minute) #ax1.xaxis.set_minor_locator(minute) ax1.xaxis.set_major_formatter(hourformatter) ax1.axhline(y=262.32, linewidth=1.5, color='g', alpha = 0.7, linestyle = "dashed") ax1.axhline(y=260.33, linewidth=2, color='g', alpha = 0.7, linestyle = "dashed") ax1.axhline(y=258.17, linewidth=2.5, color='g', alpha = 0.7, linestyle = "dashed") ax1.axhline(y=256.18, linewidth=3, color='b', alpha = 1, linestyle = "dashed") ax1.axhline(y=254.02, linewidth=2.5, color='r', alpha = 0.7, linestyle = "dashed") ax1.axhline(y=252.03, linewidth=2, color='r', alpha = 0.7, linestyle = "dashed") ax1.axhline(y=249.87, linewidth=1.5, color='r', alpha = 0.7, linestyle = "dashed") #['256.18', '254.02', '252.03', '249.87', '258.17', '260.33', '262.32'] ax1.grid(True) #ax1.grid(True) candlestick_ohlc(ax1, quotes, width = 1/(24*60*2.5), alpha = 1.0, colorup = 'g', colordown ='r') plt.setp(plt.gca().get_xticklabels(), horizontalalignment='center') pad = 0.25 yl = ax1.get_ylim() print(yl) ax1.set_ylim(yl[0]-(yl[1]-yl[0])*pad,yl[1]*1.005) Datetime = [x[0] for x in quotes] Datetime = np.asarray(Datetime) Volume = [x[5] for x in quotes] Volume = np.asarray(Volume) ax2 = ax1.twinx() ax2.set_position(matplotlib.transforms.Bbox([[0.125,0.125],[0.9,0.27]])) width = 1/(24*60*4) ax2.bar(Datetime, Volume, color='blue', width = width, alpha = 0.75) ax2.set_ylim([0, ax2.get_ylim()[1] * 1]) ax2.set_ylabel('Volume', fontsize=12, fontweight = 'bold') yticks = ax2.get_yticks() ax2.set_yticks(yticks[::1]) #ax2.grid(True) #report_pdf.savefig(pad_inches=0.5, bbox_inches= 'tight') #plt.close() plt.show() </code></pre>
0
2016-09-16T16:35:33Z
[ "python", "pandas", "numpy", "matplotlib", "plot" ]
Handling a timeout exception in Python
39,533,022
<p>I'm looking for a way to handle timeout exceptions for my Reddit bot which uses PRAW (Python). It times out at least once every day, and it has a variable coded in so I have to update the variable and then manually run the bot again. I am looking for a way to automatically handle these exceptions. I looked into try: and except:, but I am afraid that adding a break point after time.sleep(10) would stop the loop completely. I want it to keep running the loop regardless if it times out or not. There is a sample of the code below.</p> <pre><code>def run_bot(): # Arbitrary Bot Code Here # This is at the bottom of the code, and it runs the above arbitrary code every 10 seconds while True: try: run_bot() time.sleep(10) except: # Don't know what goes here </code></pre>
1
2016-09-16T13:43:05Z
39,533,139
<p>It depends on what you want to do when a timeout occurs.</p> <p>You can use <code>pass</code> to do nothing and continue with the loop.</p> <pre><code>try: run_bot() except: pass </code></pre> <p>In your case it would be better to write it explicitly as </p> <pre><code>try: run_bot() except: continue </code></pre> <p>But you can also add some logging to the except clause (note the syntax is <code>except Exception as e</code>, not <code>except e</code>):</p> <pre><code>try: run_bot() except Exception as e: print 'Loading failed due to Timeout' print e </code></pre> <p>To make sure the loop always sleeps you can do the following:</p> <pre><code>nr_of_comments = 0 def run_bot(): # do stuff nr_of_comments += 1 while True: sleep(10) try: run_bot() except Exception: continue </code></pre>
1
2016-09-16T13:48:58Z
[ "python", "exception", "timeout", "handle", "praw" ]
Handling a timeout exception in Python
39,533,022
<p>I'm looking for a way to handle timeout exceptions for my Reddit bot which uses PRAW (Python). It times out at least once every day, and it has a variable coded in so I have to update the variable and then manually run the bot again. I am looking for a way to automatically handle these exceptions. I looked into try: and except:, but I am afraid that adding a break point after time.sleep(10) would stop the loop completely. I want it to keep running the loop regardless if it times out or not. There is a sample of the code below.</p> <pre><code>def run_bot(): # Arbitrary Bot Code Here # This is at the bottom of the code, and it runs the above arbitrary code every 10 seconds while True: try: run_bot() time.sleep(10) except: # Don't know what goes here </code></pre>
1
2016-09-16T13:43:05Z
39,534,278
<p>Moving the sleep to a <code>finally</code> clause will solve your issue, I guess. The <code>finally</code> block will run irrespective of whether an exception happened or not.</p> <pre><code>def run_bot(): # Arbitrary Bot Code Here # This is at the bottom of the code, and it runs the above arbitrary code every 10 seconds while True: try: run_bot() except: from traceback import format_exc print "Exception happened:\n%s" % (format_exc()) finally: time.sleep(10) </code></pre>
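<p>If the timeouts tend to come in bursts, a simple backoff on top of the same pattern can help. This is only a generic sketch, nothing PRAW-specific is assumed:</p> <pre><code>import time

def main_loop():
    delay = 10
    while True:
        try:
            run_bot()
            delay = 10                     # reset after a successful pass
        except Exception as exc:
            print "run_bot failed: %s" % exc
            delay = min(delay * 2, 600)    # double the wait, cap at 10 minutes
        time.sleep(delay)
</code></pre>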
0
2016-09-16T14:43:57Z
[ "python", "exception", "timeout", "handle", "praw" ]
Difference between chain(*iter) vs chain.from_iterable(iter)
39,533,052
<p>I have been really fascinated by all the interesting iterators in itertools, and one confusion I have had is the difference between these two functions and why chain.from_iterable exists.</p> <pre><code>from itertools import chain def foo(n): for i in range(n): yield [i, i**2] chain(*foo(5)) chain.from_iterable(foo(5)) </code></pre> <p>What is the difference between the two functions?</p>
4
2016-09-16T13:44:41Z
39,533,104
<p><code>chain(*foo(5))</code> unpacks the whole generator, packs it into a tuple and only then processes it.</p> <p><code>chain.from_iterable(foo(5))</code> queries the generator created from <code>foo(5)</code> value by value.</p> <p>Try <code>foo(1000000)</code> and watch the memory usage go up and up.</p>
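<p>A small way to see the difference; the <code>print</code> marks when the generator is actually advanced:</p> <pre><code>from itertools import chain

def noisy(n):
    for i in range(n):
        print('producing %d' % i)
        yield [i, i**2]

lazy = chain.from_iterable(noisy(3))
next(lazy)                # prints "producing 0" only, on demand

eager = chain(*noisy(3))  # prints all three "producing" lines immediately,
                          # because * exhausts the generator before chain runs
</code></pre>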
4
2016-09-16T13:47:37Z
[ "python" ]
Difference between chain(*iter) vs chain.from_iterable(iter)
39,533,052
<p>I have been really fascinated by all the interesting iterators in itertools, and one confusion I have had is the difference between these two functions and why chain.from_iterable exists.</p> <pre><code>from itertools import chain def foo(n): for i in range(n): yield [i, i**2] chain(*foo(5)) chain.from_iterable(foo(5)) </code></pre> <p>What is the difference between the two functions?</p>
4
2016-09-16T13:44:41Z
39,533,108
<p><code>*</code> unpacks the iterator, meaning it iterates the iterator in order to pass its values to the function. <code>chain.from_iterable</code> iterates the iterator one by one lazily.</p>
1
2016-09-16T13:47:53Z
[ "python" ]
Difference between chain(*iter) vs chain.from_iterable(iter)
39,533,052
<p>I have been really fascinated by all the interesting iterators in itertools, and one confusion I have had is the difference between these two functions and why chain.from_iterable exists.</p> <pre><code>from itertools import chain def foo(n): for i in range(n): yield [i, i**2] chain(*foo(5)) chain.from_iterable(foo(5)) </code></pre> <p>What is the difference between the two functions?</p>
4
2016-09-16T13:44:41Z
39,533,120
<p>The former can only handle unpackable iterables. The latter can handle iterables that cannot be fully unpacked, such as infinite generators.</p> <p>Consider</p> <pre><code>&gt;&gt;&gt; from itertools import chain &gt;&gt;&gt; def inf(): ... i=0 ... while True: ... i += 1 ... yield i ... &gt;&gt;&gt; x=inf() &gt;&gt;&gt; y=chain.from_iterable(x) &gt;&gt;&gt; z=chain(*x) &lt;hangs forever&gt; </code></pre> <p>Furthermore, just the act of unpacking is an eager, up-front-cost activity, so if your iterable has effects you want to evaluate lazily, <code>from_iterable</code> is your best option.</p>
4
2016-09-16T13:48:13Z
[ "python" ]
Python/CSV unique rows with unique values per row in a column
39,533,144
<p>Have this dataset, dataset is fictional:</p> <pre><code>cat sample.csv id,fname,lname,education,gradyear,attributes "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,mit,2003,qa "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,harvard,2007,"test|admin,test" "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,harvard,2007,"test|admin,test" "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,ft,2012,NULL "6F9619FF-8B86-D011-B42D-00C04FC964F1",john,doe,htw,2000,dev </code></pre> <p>When I run this script, which parses the csv and finds unique rows, concating rows in on column when more are found:</p> <p>parse-csv.py</p> <pre><code>import itertools from itertools import groupby import csv import pprint import argparse if __name__ == '__main__': parser = argparse.ArgumentParser(description='sql dump parser') parser.add_argument('-i','--input', help='input file', required=True) parser.add_argument('-o','--output', help='output file', required=True) args = parser.parse_args() inputf = args.input outputf = args.output t = csv.reader(open(inputf, 'rb')) t = list(t) def join_rows(rows): return [(e[0] if i &lt; 1 else '|'.join(e)) for (i, e) in enumerate(zip(*rows))] myfile = open(outputf, 'wb') wr = csv.writer(myfile, quoting=csv.QUOTE_ALL, lineterminator='\n') for name, rows in groupby(t, lambda x:x[0]): wr.writerow(join_rows(rows)) #print join_rows(rows) </code></pre> <p>And than another script, which makes sure each colum has only unique values separated by "|"</p> <p>unique.py</p> <pre><code>import csv import sys from collections import OrderedDict import argparse csv.field_size_limit(sys.maxsize) import argparse if __name__ == '__main__': parser = argparse.ArgumentParser(description='sql dump parser - unique') parser.add_argument('-i','--input', help='input file', required=True) parser.add_argument('-o','--output', help='output file', required=True) args = parser.parse_args() inputf = args.input outputf = args.output with open(inputf) as fin, open(outputf, 'wb') as fout: csvin = csv.DictReader(fin) csvout = csv.DictWriter(fout, fieldnames=csvin.fieldnames, quoting=csv.QUOTE_ALL,lineterminator='\n') csvout.writeheader() for row in csvin: for k, v in row.items(): row[k] = '|'.join(OrderedDict.fromkeys(v.split('|'))) csvout.writerow(row) </code></pre> <p>It works for the sample.csv</p> <p>Output:</p> <pre><code>$ python parse-csv.py -i sample.csv -o sample-out.csv $ python unique.py -i sample-out.csv -o sample-final.csv $ cat sample-final.csv "id","fname","lname","education","gradyear","attributes" "6F9619FF-8B86-D011-B42D-00C04FC964FF","john","smith","mit|harvard|ft","2003|2007|2012","qa|test|admin,test|NULL" "6F9619FF-8B86-D011-B42D-00C04FC964F1","john","doe","htw","2000","dev" </code></pre> <p>But when I do the same for this:</p> <p>(dataset is fictional)</p> <p>sample2.csv</p> <pre><code>id,lastname,firstname,middlename,address1,address2,city,zipcode,city2,zipcode2,emailaddress,website "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support",NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J",NULL,NULL,NULL,NULL,NULL,NULL,"",NULL "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.",NULL,NULL,NULL,NULL,NULL,NULL,NULL,"mait@yahoo.com",NULL "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support",NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J",NULL,NULL,NULL,NULL,NULL,NULL,"",NULL 
"A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.",NULL,NULL,NULL,NULL,NULL,NULL,NULL,"mait@yahoo.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL </code></pre> <p>The output is:</p> <pre><code>$ python parse-csv.py -i sample2.csv -o sample2-out.csv $ python unique.py -i sample2-out.csv -o sample2-final.csv $ cat sample2-final.csv "id","lastname","firstname","middlename","address1","address2","city","zipcode","city2","zipcode2","emailaddress","website" "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL" "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J","NULL","NULL","NULL","NULL","NULL","NULL","","NULL" "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.","NULL","NULL","NULL","NULL","NULL","NULL","NULL","mait@yahoo.com","NULL" "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL" "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J","NULL","NULL","NULL","NULL","NULL","NULL","","NULL" "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.","NULL","NULL","NULL","NULL","NULL","NULL","NULL","mait@yahoo.com","NULL" "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul","NULL","","","","","NULL","NULL","psd@gmail.com","NULL" </code></pre> <p>Why it doesn't properly get unique rows and columns like it did for sample.csv???? </p> <p>Anybody has any ideas?</p> <p>Thanks in advance! Chewing on this for a long time now ....</p>
1
2016-09-16T13:49:10Z
39,534,032
<p>Your first file is sorted, while the second is not. Please see <a href="http://stackoverflow.com/questions/8116666/itertools-groupby">this discussion</a></p> <p>All you need is this:</p> <pre><code>t = list(t) t[1:] = sorted(t[1:]) </code></pre>
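<p><code>groupby</code> only merges <em>adjacent</em> rows with equal keys, which is why the unsorted second file breaks. Spelling the sort key out explicitly (and keeping the header row in place) looks like:</p> <pre><code>t = list(t)
header, rows = t[0], t[1:]
rows.sort(key=lambda row: row[0])   # group rows by the id column
t = [header] + rows
</code></pre>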
1
2016-09-16T14:31:21Z
[ "python", "csv" ]
Python/CSV unique rows with unique values per row in a column
39,533,144
<p>Have this dataset, dataset is fictional:</p> <pre><code>cat sample.csv id,fname,lname,education,gradyear,attributes "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,mit,2003,qa "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,harvard,2007,"test|admin,test" "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,harvard,2007,"test|admin,test" "6F9619FF-8B86-D011-B42D-00C04FC964FF",john,smith,ft,2012,NULL "6F9619FF-8B86-D011-B42D-00C04FC964F1",john,doe,htw,2000,dev </code></pre> <p>When I run this script, which parses the csv and finds unique rows, concating rows in on column when more are found:</p> <p>parse-csv.py</p> <pre><code>import itertools from itertools import groupby import csv import pprint import argparse if __name__ == '__main__': parser = argparse.ArgumentParser(description='sql dump parser') parser.add_argument('-i','--input', help='input file', required=True) parser.add_argument('-o','--output', help='output file', required=True) args = parser.parse_args() inputf = args.input outputf = args.output t = csv.reader(open(inputf, 'rb')) t = list(t) def join_rows(rows): return [(e[0] if i &lt; 1 else '|'.join(e)) for (i, e) in enumerate(zip(*rows))] myfile = open(outputf, 'wb') wr = csv.writer(myfile, quoting=csv.QUOTE_ALL, lineterminator='\n') for name, rows in groupby(t, lambda x:x[0]): wr.writerow(join_rows(rows)) #print join_rows(rows) </code></pre> <p>And than another script, which makes sure each colum has only unique values separated by "|"</p> <p>unique.py</p> <pre><code>import csv import sys from collections import OrderedDict import argparse csv.field_size_limit(sys.maxsize) import argparse if __name__ == '__main__': parser = argparse.ArgumentParser(description='sql dump parser - unique') parser.add_argument('-i','--input', help='input file', required=True) parser.add_argument('-o','--output', help='output file', required=True) args = parser.parse_args() inputf = args.input outputf = args.output with open(inputf) as fin, open(outputf, 'wb') as fout: csvin = csv.DictReader(fin) csvout = csv.DictWriter(fout, fieldnames=csvin.fieldnames, quoting=csv.QUOTE_ALL,lineterminator='\n') csvout.writeheader() for row in csvin: for k, v in row.items(): row[k] = '|'.join(OrderedDict.fromkeys(v.split('|'))) csvout.writerow(row) </code></pre> <p>It works for the sample.csv</p> <p>Output:</p> <pre><code>$ python parse-csv.py -i sample.csv -o sample-out.csv $ python unique.py -i sample-out.csv -o sample-final.csv $ cat sample-final.csv "id","fname","lname","education","gradyear","attributes" "6F9619FF-8B86-D011-B42D-00C04FC964FF","john","smith","mit|harvard|ft","2003|2007|2012","qa|test|admin,test|NULL" "6F9619FF-8B86-D011-B42D-00C04FC964F1","john","doe","htw","2000","dev" </code></pre> <p>But when I do the same for this:</p> <p>(dataset is fictional)</p> <p>sample2.csv</p> <pre><code>id,lastname,firstname,middlename,address1,address2,city,zipcode,city2,zipcode2,emailaddress,website "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support",NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J",NULL,NULL,NULL,NULL,NULL,NULL,"",NULL "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.",NULL,NULL,NULL,NULL,NULL,NULL,NULL,"mait@yahoo.com",NULL "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support",NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J",NULL,NULL,NULL,NULL,NULL,NULL,"",NULL 
"A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.",NULL,NULL,NULL,NULL,NULL,NULL,NULL,"mait@yahoo.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul",NULL,"","","","",NULL,NULL,"psd@gmail.com",NULL </code></pre> <p>The output is:</p> <pre><code>$ python parse-csv.py -i sample2.csv -o sample2-out.csv $ python unique.py -i sample2-out.csv -o sample2-final.csv $ cat sample2-final.csv "id","lastname","firstname","middlename","address1","address2","city","zipcode","city2","zipcode2","emailaddress","website" "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL" "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J","NULL","NULL","NULL","NULL","NULL","NULL","","NULL" "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.","NULL","NULL","NULL","NULL","NULL","NULL","NULL","mait@yahoo.com","NULL" "E387F3C1-F6E9-40DD-86AB-A7149C67F61C","Technical Support","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL" "648EEB5D-0586-444A-B86F-4EB2446BBC93","Palm","Samuel","J","NULL","NULL","NULL","NULL","NULL","NULL","","NULL" "A94FAD4E-27DB-48FE-B89E-C37B408C5DD5","Mait","A.V.","NULL","NULL","NULL","NULL","NULL","NULL","NULL","mait@yahoo.com","NULL" "FDFCA22A-EE19-4997-B892-90B2006FE328","Drago","Paul","NULL","","","","","NULL","NULL","psd@gmail.com","NULL" </code></pre> <p>Why it doesn't properly get unique rows and columns like it did for sample.csv???? </p> <p>Anybody has any ideas?</p> <p>Thanks in advance! Chewing on this for a long time now ....</p>
1
2016-09-16T13:49:10Z
39,534,237
<p>Here's my simple minded solution to your problem (as I understand it), using a dictionary:</p> <pre><code>import csv t = csv.reader(open("sample2.csv", 'rb')) t = list(t) def parsecsv(data): # Assumes that the first column is the unique id and that the first # row contains the column titles and that all rows have same # of columns L = len(data[0]) csvDict = {} for entry in data: # build a dict csvDict to represent data if entry[0] in csvDict: # already have entry so add to it... for i in range(L - 1): # loop through columns if csvDict[entry[0]][i] != 'NULL': #check if data exists in column if (entry[i] not in csvDict[entry[0]][i]) and (entry[i] != 'NULL'): csvDict[entry[0]][i] += '|' + entry[i] else: csvDict[entry[0]][i] = entry[i] else: csvDict[entry[0]] = [None]*(L - 1) for i in range(L - 1): # loop through columns csvDict[entry[0]][i] = entry[i] return csvDict out = parsecsv(t) for entry in out: print entry + ' = ' + str(out[entry]) </code></pre> <p>This should be independent of sorted data sets, etc...</p> <p>Let me know if it helps!</p>
0
2016-09-16T14:42:01Z
[ "python", "csv" ]
Python and RSS feeds - parse feeder
39,533,226
<p>I'm a beginner in Python and I'm using the feedparser library.</p> <pre><code>for post in d.entries: if test_word3 and test_word2 in post.title: print(post.title) </code></pre> <p>What I'm trying to do is make feedparser find a couple of words inside the titles of the RSS feed entries.</p>
0
2016-09-16T13:52:11Z
39,579,777
<p>Note that <strong>and</strong> does not distribute across the <strong>in</strong> operator.</p> <pre><code>if test_word3 in post.title and test_word2 in post.title: </code></pre> <p>should solve your problem. What you wrote is evaluated as</p> <pre><code>if test_word3 and (test_word2 in post.title): </code></pre> <p>... and this turns into ...</p> <pre><code>if (test_word3 != "") and (test_word2 in post.title): </code></pre> <p>Simplifying a little ... The Boolean value of a string is whether it's non-empty. The Boolean value of an integer is whether it's non-zero.</p>
0
2016-09-19T18:26:12Z
[ "python", "parsing", "rss", "feed", "feedparser" ]
How to export regression tree with two features as matrix?
39,533,229
<p>I have a problem which is suitable for the application of a regression tree. The final result needs to be in matrix, though, not a tree. Is there an easy way in R or Python to export a regression tree which has only two features (in my case weight and distance) to a matrix where rows would be the weight classes, columns the distances and cell values the estimated prices? So far I could only find tree visualiziations as output. </p> <p>I presume, that on each dimension, the tree distinguishes about 15 classes, so there are 200 to 300 cell values, in case this is important.</p>
-1
2016-09-16T13:52:22Z
39,548,993
<p>I don't really know what you mean by "exporting the tree to a matrix"; I have no idea how a regression tree could be represented as a matrix. I guess you want to show the prediction of the tree for multiple combinations of your (two) features/input variables?</p> <p>If so, you need first need to create all combinations of interest, e.g. by utilizing <code>grid.expand</code>. Then you put the resulting data.frame as <code>newdata</code> argument of the <code>predict</code> method of the tree to get the predicted outcome. Finally you <code>cast</code> the result into matrix form.</p>
0
2016-09-17T16:16:06Z
[ "python", "matrix", "tree", "regression" ]
Django - Get data from a form that is not a django.form
39,533,387
<p>I am working with Django in order to do a quite simple web application. </p> <p>I am currently trying to have a succession of forms so that the user completes the information bits by bits. I first tried to do it only in HTML because I wanted to really have the hand on the presentation. Here is my code. </p> <p>I have two templates, create_site.html and create_cpe.html. I need to get informations from the first page in order to know what to ask in the second page.</p> <p>Here is my create_site.html</p> <pre><code>&lt;body&gt; &lt;form action="{% url 'confWeb:cpe_creation' %}" method="post" class="form-creation"&gt; {% csrf_token %} &lt;div class = "form-group row"&gt; &lt;label for="site_name" class="col-xs-3 col-form-label"&gt;Name of theSite&lt;/label&gt; &lt;div class="col-xs-9"&gt; &lt;input id="site_name" class="form-control" placeholder="Name" required="" autofocus="" type="text"&gt; &lt;/div&gt; &lt;/div&gt; &lt;button class="btn btn-lg btn-primary btn-block" type="submit"&gt;Créer&lt;/button&gt; &lt;/form&gt; &lt;/body&gt; </code></pre> <p>And here is the views.py that I'm using to do all of this : </p> <pre><code>def site_creation(request): template = loader.get_template('confWeb/create_site.html') return HttpResponse(template.render(request)) def cpe_creation(request): if request.method == "POST" : print(request.POST) </code></pre> <p>What I would like to do is to get the value of the input of my form, inside my view "cpe_creation". I tried getting informations from the "request" object but I didn't manage to do that. Maybe I'm lacking very basic HTML skills but I thought the form information would be in the request body, but it wasn't (or didn't look like it).</p> <p>I also tried using Django forms, but the problem is, I can't control very precisely what the form will look like. If I create an input, I can't tell it the size it's supposed to take or anything like that. </p> <p>Hence, my questions are the following : - How can I get the data from the submitted form ? - Do I have to do it with Django's Forms ? If I do, how can I control the css class of an input ? </p>
0
2016-09-16T13:59:36Z
39,533,446
<p>The field in the form in your template has to have a <code>name</code> attribute in order for its value to be sent with the request.</p> <p>Then you will be able to get it in your view like this:</p> <p><code>site_name = request.POST.get('site_name')</code></p>
2
2016-09-16T14:02:11Z
[ "python", "html", "django", "forms" ]
Django - Get data from a form that is not a django.form
39,533,387
<p>I am working with Django in order to do a quite simple web application. </p> <p>I am currently trying to have a succession of forms so that the user completes the information bits by bits. I first tried to do it only in HTML because I wanted to really have the hand on the presentation. Here is my code. </p> <p>I have two templates, create_site.html and create_cpe.html. I need to get informations from the first page in order to know what to ask in the second page.</p> <p>Here is my create_site.html</p> <pre><code>&lt;body&gt; &lt;form action="{% url 'confWeb:cpe_creation' %}" method="post" class="form-creation"&gt; {% csrf_token %} &lt;div class = "form-group row"&gt; &lt;label for="site_name" class="col-xs-3 col-form-label"&gt;Name of theSite&lt;/label&gt; &lt;div class="col-xs-9"&gt; &lt;input id="site_name" class="form-control" placeholder="Name" required="" autofocus="" type="text"&gt; &lt;/div&gt; &lt;/div&gt; &lt;button class="btn btn-lg btn-primary btn-block" type="submit"&gt;Créer&lt;/button&gt; &lt;/form&gt; &lt;/body&gt; </code></pre> <p>And here is the views.py that I'm using to do all of this : </p> <pre><code>def site_creation(request): template = loader.get_template('confWeb/create_site.html') return HttpResponse(template.render(request)) def cpe_creation(request): if request.method == "POST" : print(request.POST) </code></pre> <p>What I would like to do is to get the value of the input of my form, inside my view "cpe_creation". I tried getting informations from the "request" object but I didn't manage to do that. Maybe I'm lacking very basic HTML skills but I thought the form information would be in the request body, but it wasn't (or didn't look like it).</p> <p>I also tried using Django forms, but the problem is, I can't control very precisely what the form will look like. If I create an input, I can't tell it the size it's supposed to take or anything like that. </p> <p>Hence, my questions are the following : - How can I get the data from the submitted form ? - Do I have to do it with Django's Forms ? If I do, how can I control the css class of an input ? </p>
0
2016-09-16T13:59:36Z
39,533,575
<p>Your form field needs a name attribute.</p> <pre><code> &lt;input id="site_name" name="site_name" class="form-control" placeholder="Name" required="" autofocus="" type="text"&gt; </code></pre> <p>Alternatively, if you want to use Django forms, you can set the CSS class of a form field by explicitly defining the widget.</p> <pre><code>class SiteForm(forms.Form): site_name = forms.CharField(widget=forms.TextInput(attrs={'class': 'form-control'})) </code></pre> <p>Judging from the class <code>form-control</code>, I assume you want to use Bootstrap. Check out <a href="http://django-crispy-forms.readthedocs.io/en/latest/" rel="nofollow">django-crispy-forms</a>, which makes it really easy style Django forms with Bootstrap.</p>
1
2016-09-16T14:08:12Z
[ "python", "html", "django", "forms" ]
Access Common Block variables from ctypes
39,533,409
<p>I am trying to access the data stored in a Fortran 77 common block from a Python script. The question is that I don't know where this data is stored. The Python application that I am developing makes use of different libraries. These libraries contain functions with the following directives:</p> <pre><code>#include &lt;tcsisc_common.inc&gt; </code></pre> <p>The common block contains:</p> <pre><code>C INTEGER*4 IDEBUG C C.... ARRAY DIMENSIONS DIMENSION IDEBUG(10) C C.... COMMON BLOCK COMMON /TCSD/ IDEBUG C </code></pre> <p>On the Python part (on the example I have used iPython), I load the library: </p> <pre><code>In [1]: import ctypes In [2]: _libtcsisc= /home/jfreixa/project/bin/libtcsisc.so In [3]: _tcsisc = ctypes.CDLL(_libtcsisc, ctypes.RTLD_GLOBAL) </code></pre> <p>The problem is that I don't know how to get IDEBUG. I have tried the following, but I just get tcsd as a c_long initialized to 0.</p> <pre><code>In [4]: tcsd = ctypes.c_int.in_dll(_tcsisc, "TCSD_") In [5]: tcsd Out[5]: c_long(0) In [6]: idebug = ctypes.c_int.in_dll(_tcsisc, "IDEBUG_") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-6-ee5018286275&gt; in &lt;module&gt;() ----&gt; 1 idebug = ctypes.c_int.in_dll(_tcsisc,'IDEBUG_') ValueError: ld.so.1: python2.7: fatal: IDEBUG_: can't find symbol </code></pre> <p>Any idea to correctly get the variable?</p>
0
2016-09-16T14:00:31Z
39,540,375
<p>According to <a href="http://www.yolinux.com/TUTORIALS/LinuxTutorialMixingFortranAndC.html" rel="nofollow">this page</a> (particularly how to access Fortran common blocks from C) and some <a href="http://stackoverflow.com/questions/35346835/how-to-access-to-c-global-variable-structure-by-python-and-ctype">Q/A page</a> about how to access C struct from Python, it seems that we could access common blocks as follows (though this may be not very portable, see below):</p> <p>mylib.f90</p> <pre><code>subroutine fortsub() implicit none integer n common /mycom/ n print *, "fortsub&gt; current /mycom/ n = ", n end </code></pre> <p>compile:</p> <pre><code>$ gfortran -shared -fPIC -o mylib.so mylib.f90 </code></pre> <p>test.py</p> <pre><code>from __future__ import print_function import ctypes class Mycom( ctypes.Structure ): _fields_ = [ ( "n", ctypes.c_int ) ] mylib = ctypes.CDLL( "./mylib.so" ) mycom = Mycom.in_dll( mylib, "mycom_" ) print( " python&gt; modifying /mycom/ n to 777" ) mycom.n = 777 fortsub = mylib.fortsub_ fortsub() </code></pre> <p>Test:</p> <pre><code> $ python test.py python&gt; modifying /mycom/ n to 777 fortsub&gt; current /mycom/ n = 777 </code></pre> <p>Here, please note that the name of the common block (here, <code>mycom</code>) is made lowercase and one underscore attached (by assuming gfortran). Because this convention is compiler-dependent, it may be more robust/portable to write new Fortran routines for setting/getting values in common blocks (particularly with the help of <code>iso_c_binding</code>) and call those routines from Python (as suggested by @innoSPG in the first comment).</p> <hr> <p>Another example including different types and arrays may look like this:</p> <p>mylib.f90</p> <pre><code>subroutine initcom() implicit none integer n( 2 ), w !! assumed to be compatible with c_int real f( 2 ) !! ... with c_float double precision d( 2 ) !! ... with c_double common /mycom/ n, f, d, w print *, "(fort) initializing /mycom/" n(:) = [ 1, 2 ] f(:) = [ 3.0, 4.0 ] d(:) = [ 5.0d0, 6.0d0 ] w = 7 call printcom() end subroutine printcom() implicit none integer n( 2 ), w real f( 2 ) double precision d( 2 ) common /mycom/ n, f, d, w print *, "(fort) current /mycom/" print *, " n = ", n print *, " f = ", f print *, " d = ", d print *, " w = ", w end </code></pre> <p>test.py</p> <pre><code>from __future__ import print_function import ctypes N = 2 class Mycom( ctypes.Structure ): _fields_ = [ ( "x", ctypes.c_int * N ), ( "y", ctypes.c_float * N ), ( "z", ctypes.c_double * N ), ( "w", ctypes.c_int ) ] mylib = ctypes.CDLL( "./mylib.so" ) mycom = Mycom.in_dll( mylib, "mycom_" ) initcom = mylib.initcom_ initcom() print( " (python) current /mycom/" ) print( " x = ", mycom.x[:] ) print( " y = ", mycom.y[:] ) print( " z = ", mycom.z[:] ) print( " w = ", mycom.w ) print( " (python) modifying /mycom/ ..." ) for i in range( N ): mycom.x[ i ] = (i + 1) * 10 mycom.y[ i ] = (i + 1) * 100 mycom.z[ i ] = (i + 1) * 0.1 mycom.w = 777 printcom = mylib.printcom_ printcom() </code></pre> <p>Test:</p> <pre><code> $ python test.py (fort) initializing /mycom/ (fort) current /mycom/ n = 1 2 f = 3.0000000 4.0000000 d = 5.0000000000000000 6.0000000000000000 w = 7 (python) current /mycom/ x = [1, 2] y = [3.0, 4.0] z = [5.0, 6.0] w = 7 (python) modifying /mycom/ ... (fort) current /mycom/ n = 10 20 f = 100.00000 200.00000 d = 0.10000000000000001 0.20000000000000001 w = 777 </code></pre>
2
2016-09-16T21:32:12Z
[ "python", "fortran", "ctypes" ]
Bokeh server and on.click (on.change) doing nothing
39,533,452
<p>I am trying to get Bokeh server to print out, however all I get is an instance running on <a href="http://localhost:5006/?bokeh-session-id=default" rel="nofollow">http://localhost:5006/?bokeh-session-id=default</a> with the radio buttons. When I click the buttons nothing happens. Is there something I am missing?</p> <pre><code>from bokeh.models.widgets import RadioButtonGroup from bokeh.plotting import figure, show, output_server def my_radio_handler(): print 'Radio button option selected.' radio_button_group = RadioButtonGroup( labels=["Option 1", "Option 2", "Option 3"], active=0) radio_button_group.on_click(my_radio_handler) output_server() show(radio_button_group) </code></pre>
0
2016-09-16T14:02:21Z
39,533,971
<p>You need to sync the server and the current session to get information back.</p> <pre><code>from bokeh.client import push_session from bokeh.models.widgets import RadioButtonGroup from bokeh.plotting import curdoc, figure, show, output_server def my_radio_handler(new): print 'Radio button ' + str(new) + ' option selected.' radio_button_group = RadioButtonGroup( labels=['Option 1', 'Option 2', 'Option 3'], active=0) radio_button_group.on_click(my_radio_handler) # Open a session to keep our local document in sync with the server session = push_session(curdoc()) # Open the document in a browser session.show(radio_button_group) # Run forever (this is crucial to retrieve the callback's data) session.loop_until_closed() </code></pre> <p>You can get more control over the click handler using <code>on_change()</code>:</p> <pre><code>def my_radio_handler_complete(attr, old, new): print(attr, old, new) print('Radio button ' + str(new) + ' option selected.') radio_button_group.on_change('active', my_radio_handler_complete) </code></pre> <p>FYI, in the source code, <code>on_click</code> is just a proxy for <code>on_change()</code> and defined as:</p> <pre><code>def on_click(self, handler): self.on_change('active', lambda attr, old, new: handler(new)) </code></pre>
1
2016-09-16T14:27:28Z
[ "python", "bokeh" ]
Bokeh server and on.click (on.change) doing nothing
39,533,452
<p>I am trying to get Bokeh server to print out, however all I get is an instance running on <a href="http://localhost:5006/?bokeh-session-id=default" rel="nofollow">http://localhost:5006/?bokeh-session-id=default</a> with the radio buttons. When I click the buttons nothing happens. Is there something I am missing?</p> <pre><code>from bokeh.models.widgets import RadioButtonGroup from bokeh.plotting import figure, show, output_server def my_radio_handler(): print 'Radio button option selected.' radio_button_group = RadioButtonGroup( labels=["Option 1", "Option 2", "Option 3"], active=0) radio_button_group.on_click(my_radio_handler) output_server() show(radio_button_group) </code></pre>
0
2016-09-16T14:02:21Z
39,535,545
<p>The answer above shows how this can properly be accomplished with <code>bokeh.client</code> and <code>push_session</code> and <code>session.loop_until_closed</code>.</p> <p>I'd like to add a few more comments for context. (I am a core developer of the Bokeh project)</p> <p>There are two general ways that Bokeh applications can run:</p> <h2>Directly In the Server</h2> <p>These are scripts that are run something like:</p> <pre><code>bokeh serve --show my_app.py </code></pre> <p>In this case, the code, the objects, all the callbacks, etc. are running <em>in the bokeh server itself.</em> The situation looks like this:</p> <pre><code> browser &lt;---&gt; Bokeh Server </code></pre> <p>This is the method I would always recommend first. It takes the least amount of code, is the simplest to deploy, can be scaled out, uses less network traffic, and is more robust. </p> <h2>In a Separate Python Process</h2> <p>It's also possible to create the app and run callbacks in a separate process, using <code>bokeh.client</code>. Then the situation looks like this:</p> <pre><code> browser &lt;---&gt; Bokeh Server &lt;---&gt; another python process </code></pre> <p>Then the Bokeh server really just becomes a middleman, relaying messages between the browser and your python process. This has disadvantages: </p> <ul> <li>doubles the network traffic (for the new python process, unavoidable by definition)</li> <li>takes more code to write (all the <code>session</code> stuff not needed in "Server Apps")</li> <li>requires the python process to block indefinitely to service the callbacks (have to call <code>session.loop_until_closed</code>)</li> </ul> <p>In the past, there were specific use-cases where using <code>bokeh.client</code> was necessary, for example: being able to customize app sessions per-use. But now that HTML request arguments are available to "Server Apps", that can be accomplished without <code>bokeh.client</code>. I would say there are now fewer reasons to reach for the <code>bokeh.client</code> approach. For this reason <strong><em>I always recommend the <code>bokeh serve my_app.py</code> method as the first, best way to use the Bokeh server</em></strong>.</p> <hr> <p>But, getting back to the question at hand: So what happens in the <em>Separate Python Process</em> scenario, if you forget to call <code>session.loop_until_closed</code>? Well, the python process (the one that has your callbacks!) finishes, and the process terminates. Then there is nothing left to actually run the callbacks. </p> <p>Well, this is essentially exactly the situation with <code>output_server</code>. It does the "first half" of the session setup, loading the objects into the server, but then your python script that calls <code>output_server</code> finishes, and nothing is left to execute any callbacks. <code>output_server</code> is basically still around due to accidents of history, but I would say it is not very useful at all. At best, it can load apps with no callbacks into a Bokeh server, but then why would you need a Bokeh server except to connect web apps to real python callbacks?</p> <p>There is currently (as of release <code>0.12.2</code>) an open issue discussing deprecating <code>output_server</code> for this reason:</p> <p><a href="https://github.com/bokeh/bokeh/issues/5154" rel="nofollow">https://github.com/bokeh/bokeh/issues/5154</a></p> <p>TLDR; <strong>I would not recommend using <code>output_server</code> for any reason at this point.</strong></p>
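<p>For concreteness, here is a minimal sketch of the question's example rewritten for the "Directly In the Server" route described above (save it as e.g. <code>my_app.py</code> — the file name is just a placeholder — and run <code>bokeh serve --show my_app.py</code>):</p> <pre><code># my_app.py -- run with: bokeh serve --show my_app.py
from bokeh.models.widgets import RadioButtonGroup
from bokeh.plotting import curdoc

def my_radio_handler(new):
    print('Radio button ' + str(new) + ' option selected.')

radio_button_group = RadioButtonGroup(
    labels=['Option 1', 'Option 2', 'Option 3'], active=0)
radio_button_group.on_click(my_radio_handler)

# the server runs the callbacks for you; no session bookkeeping needed
curdoc().add_root(radio_button_group)
</code></pre>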
1
2016-09-16T15:50:37Z
[ "python", "bokeh" ]
Python Web Scraping - Table with Dynamic Data
39,533,543
<p>I want to scrape data from a dynamic changing table.</p> <p>The table is empty when you first open the website, but gets updated every 1-2 seconds with new values.</p> <p>I tried doing that with the requests and lxml python package (Hitchiker's Guide to Python), but I only get the empty table. </p> <p>Then I did it with Selenium, but it's a bit too slow to always start up a new browser (I need to get the value every 20-30 seconds).</p> <p>The table uses a Messaging service called <a href="https://www.lightstreamer.com/" rel="nofollow">Lightstreamer</a>.</p>
0
2016-09-16T14:06:52Z
39,533,658
<p>Instead of starting up a new browser every time, why not use something like <a href="http://phantomjs.org/" rel="nofollow">PhantomJS</a>? As a headless browser it would speed up your Selenium code. Or try <a href="https://github.com/scrapinghub/splash" rel="nofollow">Splash</a> with Scrapy instead of Selenium. At the end of the day, it's hard to help you without seeing what you've done or tried. Also, there are plenty of guides on how to use these tools on this site and on Google. </p>
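<p>For illustration, a minimal sketch of keeping a single headless PhantomJS browser alive and polling the table (the URL and the CSS selector are placeholders — adjust them to the real page — and <code>phantomjs</code> is assumed to be installed and on your PATH):</p> <pre><code>import time
from selenium import webdriver

# one headless browser, reused for every poll instead of restarting
driver = webdriver.PhantomJS()
driver.get("http://example.com/page-with-table")   # placeholder URL

try:
    while True:
        cells = driver.find_elements_by_css_selector("table td")
        print([c.text for c in cells])
        time.sleep(20)   # poll every 20-30 seconds
finally:
    driver.quit()
</code></pre>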
0
2016-09-16T14:11:41Z
[ "python", "selenium" ]
PyopenCL 3D RGBA image from numpy array
39,533,635
<p>I want to construct an OpenCL 3D RGBA image from a numpy array, using pyopencl. I know about the <code>cl.image_from_array()</code> function, that basically does exactly that, but doesn't give any control about command queues or events, that is exposed by <code>cl.enqueue_copy()</code>. So I really would like to use the latter function, to transfer a 3D RGBA image from host to device, but I seem to not being able getting the syntax of the image constructor right.</p> <p>So in this environment</p> <pre><code>import pyopencl as cl import numpy as np platform = cl.get_platforms()[0] devs = platform.get_devices() device1 = devs[1] mf = cl.mem_flags ctx = cl.Context([device1]) Queue1=cl.CommandQueue(ctx,properties=cl.command_queue_properties.PROFILING_ENABLE) </code></pre> <p>I would like to do something analog to </p> <pre><code> d_colortest = cl.image_from_array(ctx,np.zeros((256,256,256,4)).astype(np.float32),num_channels=4,mode='w') </code></pre> <p>Using the functions</p> <pre><code>d_image = cl.Image(...) event = cl.enqueue_copy(...) </code></pre>
1
2016-09-16T14:10:57Z
39,637,325
<p>I adapted the <code>cl.image_from_array()</code> function to be able to return an event, which was basically straightforward:</p> <pre><code>def p_Array(queue_s, name, ary, num_channels=4, mode="w", norm_int=False,copy=True): q = eval(queue_s) if not ary.flags.c_contiguous: raise ValueError("array must be C-contiguous") dtype = ary.dtype if num_channels is None: from pyopencl.array import vec try: dtype, num_channels = vec.type_to_scalar_and_count[dtype] except KeyError: # It must be a scalar type then. num_channels = 1 shape = ary.shape strides = ary.strides elif num_channels == 1: shape = ary.shape strides = ary.strides else: if ary.shape[-1] != num_channels: raise RuntimeError("last dimension must be equal to number of channels") shape = ary.shape[:-1] strides = ary.strides[:-1] if mode == "r": mode_flags = cl.mem_flags.READ_ONLY elif mode == "w": mode_flags = cl.mem_flags.WRITE_ONLY else: raise ValueError("invalid value '%s' for 'mode'" % mode) img_format = { 1: cl.channel_order.R, 2: cl.channel_order.RG, 3: cl.channel_order.RGB, 4: cl.channel_order.RGBA, }[num_channels] assert ary.strides[-1] == ary.dtype.itemsize if norm_int: channel_type = cl.DTYPE_TO_CHANNEL_TYPE_NORM[dtype] else: channel_type = cl.DTYPE_TO_CHANNEL_TYPE[dtype] d_image = cl.Image(ctx, mode_flags, cl.ImageFormat(img_format, channel_type), shape=shape[::-1]) if copy: event = cl.enqueue_copy(q,d_image,ary,origin=(0,0,0),region=shape[::-1]) event_list.append((event,queue_s,name)) return d_image, event </code></pre>
0
2016-09-22T11:06:46Z
[ "python", "image", "numpy", "opencl", "pyopencl" ]
Unable to Install Python Package
39,533,880
<p>In trying to install a python package via pip I get the error:</p> <pre><code> Failed building wheel for atari-py Running setup.py clean for atari-py Failed to build atari-py Installing collected packages: atari-py, PyOpenGL Running setup.py install for atari-py ... error Complete output from command C:\Users\xxxxxx\AppData\Local\Continuum\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\xxxxxx\\appdata\\local\\temp\\pip-build-qhuh1q\\atari-py\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\xxxxxx\appdata\local\temp\pip-z8wnzs-record\install-record.txt --single-version-externally-managed --compile: running install running build Unable to execute 'make build -C atari_py/ale_interface -j 3'. HINT: are you sure `make` is installed? error: [Error 2] The system cannot find the file specified </code></pre> <p>In my system when I type make:</p> <pre><code>C:\Users\xxxxxx&gt;make 'make' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>So, clearly make is missing. But I installed make using conda:</p> <pre><code>C:\Users\xxxxxx&gt;conda install mingw Fetching package metadata ......... Solving package specifications: .......... # All requested packages already installed. # packages in environment at C:\Users\xxxxxx\AppData\Local\Continuum\Anaconda2: # mingw 4.7 </code></pre> <p>So I have mingw 4.7 already installed.</p> <p>How could I remove the error and get the package?</p> <p>Many thanks for the help.</p>
0
2016-09-16T14:22:34Z
39,533,991
<p><code>make</code> is not in your PATH. Do <code>echo %PATH%</code> and check if the path to your msys utilities is in there. Otherwise you can edit this variable by following the instructions here: <a href="https://stackoverflow.com/questions/9546324/adding-directory-to-path-environment-variable-in-windows">Adding directory to PATH Environment Variable in Windows</a></p>
1
2016-09-16T14:28:30Z
[ "python", "windows", "pip", "mingw", "anaconda" ]
How to dump YAML with explicit references?
39,534,101
<p>Recursive references work great in <code>ruamel.yaml</code> or <code>pyyaml</code>: </p> <pre><code>$ ruamel.yaml.dump(ruamel.yaml.load('&amp;A [ *A ]')) '&amp;id001 - *id001' </code></pre> <p>However it (obviously) does not work on normal references: </p> <pre><code>$ ruamel.yaml.dump(ruamel.yaml.load("foo: &amp;foo { a: 42 }\nbar: { &lt;&lt;: *foo }")) bar: {a: 42} foo: {a: 42} </code></pre> <p>I would like is to explicitly create a reference:</p> <pre><code>data = {} data['foo'] = {'foo': {'a': 42}} data['bar'] = { '&lt;&lt;': data['foo'], 'b': 43 } $ ruamel.yaml.dump(data, magic=True) foo: &amp;foo a: 42 bar: &lt;&lt;: *foo b: 43 </code></pre> <p>This will be very useful to generate YAML output of large data structures that have lots of common keys</p> <p>How is it possible without disputable re.replace on the output?</p> <p>Actually the result of <code>ruamel.yaml.dump(data)</code> is</p> <pre><code>bar: '&lt;&lt;': &amp;id001 foo: a: 42 b: 43 foo: *id001 </code></pre> <p>So I need to replace <code>'&lt;&lt;'</code> with <code>&lt;&lt;</code> and maybe replace <code>id001</code> with <code>foo</code>.</p>
2
2016-09-16T14:34:47Z
39,535,871
<p>If you want to create something like that, at least in ruamel.yaml ¹, you should use round-trip mode, which also preserves the merges. The following doesn't throw an assertion error:</p> <pre><code>import ruamel.yaml yaml_str = """\ foo: &amp;xyz a: 42 bar: &lt;&lt;: *xyz """ data = ruamel.yaml.round_trip_load(yaml_str) assert ruamel.yaml.round_trip_dump(data) == yaml_str </code></pre> <p>What this means is that <code>data</code> has enough information to recreate the merge as it was in the output. In practise however, in round-trip mode, the merge never takes place. Instead retrieving a value <code>data['foo']['bar']['a']</code> means that there is no real key <code>'bar'</code> in <code>data['foo']</code>, but that that key is subsequently looked up in the attached "merge mappings".</p> <p>There is no public interface for this (so things might change), but by analyzing <code>data</code> and looking at <code>ruamel.yaml.comments.CommentedMap()</code> you can find that there is a <code>merge_attrib</code> (currently being the string <code>_yaml_merge</code>) and more useful that there is a method <code>add_yaml_merge()</code>. The latter takes a list of (int, CommentedMap()) tuples.</p> <pre><code>baz = ruamel.yaml.comments.CommentedMap() baz['b'] = 196 baz.yaml_set_anchor('klm') data.insert(1, 'baz', baz) </code></pre> <p>you need to insert the <code>'baz'</code> key before the <code>'bar'</code> key of data, otherwise the mapping will reverse. After insert the new structure in the merge for <code>data['bar']</code>:</p> <pre><code>data['bar'].add_yaml_merge([(0, baz)]) ruamel.yaml.round_trip_dump(data, sys.stdout) </code></pre> <p>which gives:</p> <pre><code>foo: &amp;xyz a: 42 baz: &amp;klm b: 196 bar: &lt;&lt;: [*xyz, *klm] </code></pre> <p>( if you like to see what <code>add_yaml_merge</code> does insert</p> <pre><code>print(getattr(data['bar'], ruamel.yaml.comments.merge_attrib)) </code></pre> <p>before and after the call)</p> <p>If you want to start from scratch completely you can do:</p> <pre><code>data = ruamel.yaml.comments.CommentedMap([ ('foo', ruamel.yaml.comments.CommentedMap([('a', 42)])), ]) data['foo'].yaml_set_anchor('xyz') data['bar'] = bar = ruamel.yaml.comments.CommentedMap() bar.add_yaml_merge([(0, data['foo'])]) </code></pre> <p>instead of the <code>data = ruamel.yaml.round_trip_load(yaml_str)</code>.</p> <hr> <p>¹ <sub>Disclaimer: I am the author of that package.</sub></p>
2
2016-09-16T16:06:53Z
[ "python", "yaml", "pyyaml", "ruamel.yaml" ]
change index value in pandas dataframe
39,534,129
<p>How can I change the value of the index in a pandas dataframe?</p> <p>for example, from this:</p> <pre><code>In[1]:plasmaLipo Out[1]: Chyloquantity ChyloSize VLDLquantity VLDLSize LDLquantity \ Sample 1010107 1.07 87.0 115.0 38.7 661.0 1020107 1.13 88.0 119.0 38.8 670.0 1030107 0.66 87.0 105.0 37.7 677.0 1040107 0.74 88.0 100.0 38.1 687.0 ... </code></pre> <p>to this:</p> <pre><code>In[1]:plasmaLipo Out[1]: Chyloquantity ChyloSize VLDLquantity VLDLSize LDLquantity \ Sample 1010 1.07 87.0 115.0 38.7 661.0 1020 1.13 88.0 119.0 38.8 670.0 1030 0.66 87.0 105.0 37.7 677.0 1040 0.74 88.0 100.0 38.1 687.0 ... </code></pre> <p>i.e. use only the first 4 digits of the index.</p> <p>I tried to convert the index in a list and then modify it:</p> <pre><code>indx = list(plasmaLipo.index.values) newindx = [] for items in indx: newindx.append(indx[:3]) print newindx </code></pre> <p>but gives me back the first 3 elements of the list:</p> <pre><code>Out[2] [[1010101, 1020101, 1030101], [1010101, 1020101, 1030101], ..... </code></pre>
1
2016-09-16T14:36:21Z
39,534,260
<p>You can convert the index to <code>str</code> dtype using <code>astype</code> and then slice the string values, cast back again using <code>astype</code> and overwrite the index attribute:</p> <pre><code>In [30]: df.index = df.index.astype(str).str[:4].astype(int) df Out[30]: Chyloquantity ChyloSize VLDLquantity VLDLSize LDLquantity Sample 1010 1.07 87.0 115.0 38.7 661.0 1020 1.13 88.0 119.0 38.8 670.0 1030 0.66 87.0 105.0 37.7 677.0 1040 0.74 88.0 100.0 38.1 687.0 </code></pre> <p>what you tried:</p> <pre><code>for items in indx: newindx.append(indx[:3]) </code></pre> <p>looped over each index value in <code>for items in indx</code> but you then appended a slice of the entire index on the next line <code>newindx.append(indx[:3])</code> which is why it repeated the first 3 index values for each index item</p> <p>it would've worked if you did:</p> <pre><code>for items in indx: newindx.append(items[:3]) </code></pre>
0
2016-09-16T14:43:17Z
[ "python", "pandas" ]
Adding data to an existing apache spark dataframe from a csv file
39,534,214
<p>i have a spark dataframe that has two columns: name, age as follows:</p> <pre><code>[Row(name=u'Alice', age=2), Row(name=u'Bob', age=5)] </code></pre> <p>The dataframe was created using </p> <pre><code>sqlContext.createDataFrame() </code></pre> <p>What i need to do next is to add a third column 'UserId' from an external 'csv' file. The external file has several columns but i need to include the first column only, which is the 'UserId':</p> <p><a href="http://i.stack.imgur.com/udp5W.png" rel="nofollow"><img src="http://i.stack.imgur.com/udp5W.png" alt="enter image description here"></a></p> <p>The number of records in both data sources is the same. I am using a standalone pyspark version on windows os. The final result should be a new dataframe with three columns: UserId, Name, Age. </p> <p>Any suggestion? </p>
-1
2016-09-16T14:40:36Z
39,534,990
<p>You can do this with a join of the two data frames, but for that you need to have ids or other keys in both tables. If the rows only correspond by position, I would suggest just copying the column over in an Excel file; otherwise you do not have enough information to merge them.</p>
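<p>As a sketch of the join route, assuming you can give both frames a shared key column (the <code>name</code> key and the user ids below are purely illustrative, and joining on a column name this way assumes Spark 1.4+):</p> <pre><code>from pyspark.sql import Row

# hypothetical second frame carrying the UserId column plus the join key
users = sqlContext.createDataFrame([Row(name=u'Alice', UserId=u'user-1'),
                                    Row(name=u'Bob', UserId=u'user-2')])

# df is the existing name/age frame from the question
combined = df.join(users, 'name')
combined.select('UserId', 'name', 'age').show()
</code></pre>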
0
2016-09-16T15:19:27Z
[ "python", "apache-spark", "pyspark", "spark-dataframe" ]
Adding data to an existing apache spark dataframe from a csv file
39,534,214
<p>i have a spark dataframe that has two columns: name, age as follows:</p> <pre><code>[Row(name=u'Alice', age=2), Row(name=u'Bob', age=5)] </code></pre> <p>The dataframe was created using </p> <pre><code>sqlContext.createDataFrame() </code></pre> <p>What i need to do next is to add a third column 'UserId' from an external 'csv' file. The external file has several columns but i need to include the first column only, which is the 'UserId':</p> <p><a href="http://i.stack.imgur.com/udp5W.png" rel="nofollow"><img src="http://i.stack.imgur.com/udp5W.png" alt="enter image description here"></a></p> <p>The number of records in both data sources is the same. I am using a standalone pyspark version on windows os. The final result should be a new dataframe with three columns: UserId, Name, Age. </p> <p>Any suggestion? </p>
-1
2016-09-16T14:40:36Z
39,537,578
<p>I used pandas to make this work. It allows to join dataframes in many different ways.</p> <p>1) We need first to import only that extra column (after we remove headers, although this can also be done after the import) and convert it into an RDD</p> <pre><code>from pyspark.sql.types import StringType from pyspark import SQLContext sqlContext = SQLContext(sc) userid_rdd = sc.textFile("C:……/userid.csv").map(lambda line: line.split(",")) </code></pre> <p>2) Convert the 'userid' RDD into a spark dataframe</p> <pre><code>userid_df = userid_rdd.toDF(['userid']) userid_df.show() </code></pre> <p>3) Convert the 'userid' dataframe into a pandas dataframe</p> <pre><code>userid_toPandas = userid_df.toPandas() userid_toPandas </code></pre> <p>4) Convert the ‘predictions’ dataframe (an existing dataframe) into a pandas dataframe</p> <pre><code>predictions_toPandas = predictions.toPandas() predictions_toPandas </code></pre> <p>5) Combine the two pandas dataframes into one new dataframe using ‘concat’</p> <pre><code>import pandas as pd result = pd.concat([userid_toPandas, predictions_toPandas], axis = 1, ignore_index = True) result </code></pre>
0
2016-09-16T18:01:03Z
[ "python", "apache-spark", "pyspark", "spark-dataframe" ]
how can I save super large array into many small files?
39,534,246
<p>In linux 64bit environment, I have very big float64 array (single one will be 500GB to 1TB). I would like to access these arrays in numpy with uniform way: a[x:y]. So I do not want to access the array as segments file by file. Is there any tools that I can create memmap over many different files? Can hdf5 or pytables store a single CArray into many small files? Maybe something similar to the fileInput? Or Can I do something with the file system to simulate a single file? </p> <p>In matlab I've been using H5P.set_external to do this. Then I can create a raw dataset and access it as a big raw file. But I do not know if I can create numpy.ndarray over these dataset in python. Or can I spread a single dataset over many small hdf5 files?</p> <p>and unfortunately the H5P.set_chunk does not work with H5P.set_external, because set_external only work with continuous data type not chunked data type.</p> <p>some related topics: <a href="http://stackoverflow.com/questions/35534294/chain-datasets-from-multiple-hdf5-files-datasets">Chain datasets from multiple HDF5 files/datasets</a></p>
1
2016-09-16T14:42:22Z
39,534,448
<p>I would use hdf5. In h5py, you can specify a chunk size which makes retrieving small pieces of the array efficient:</p> <p><a href="http://docs.h5py.org/en/latest/high/dataset.html?#chunked-storage" rel="nofollow">http://docs.h5py.org/en/latest/high/dataset.html?#chunked-storage</a></p>
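<p>A minimal sketch of what that looks like (the file name, dataset name and chunk size are just examples — tune the chunk shape to your access pattern):</p> <pre><code>import h5py
import numpy as np

# create one big chunked float64 dataset; chunks are allocated as you write
with h5py.File('big_array.h5', 'w') as f:
    dset = f.create_dataset('data', shape=(10**9,), dtype='float64',
                            chunks=(10**6,))
    dset[0:10**6] = np.random.rand(10**6)

# later: slicing reads only the chunks that overlap the requested range
with h5py.File('big_array.h5', 'r') as f:
    piece = f['data'][500000:600000]
</code></pre>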
1
2016-09-16T14:52:57Z
[ "python", "numpy", "filesystems", "hdf5", "pytables" ]
how can I save super large array into many small files?
39,534,246
<p>In linux 64bit environment, I have very big float64 array (single one will be 500GB to 1TB). I would like to access these arrays in numpy with uniform way: a[x:y]. So I do not want to access the array as segments file by file. Is there any tools that I can create memmap over many different files? Can hdf5 or pytables store a single CArray into many small files? Maybe something similar to the fileInput? Or Can I do something with the file system to simulate a single file? </p> <p>In matlab I've been using H5P.set_external to do this. Then I can create a raw dataset and access it as a big raw file. But I do not know if I can create numpy.ndarray over these dataset in python. Or can I spread a single dataset over many small hdf5 files?</p> <p>and unfortunately the H5P.set_chunk does not work with H5P.set_external, because set_external only work with continuous data type not chunked data type.</p> <p>some related topics: <a href="http://stackoverflow.com/questions/35534294/chain-datasets-from-multiple-hdf5-files-datasets">Chain datasets from multiple HDF5 files/datasets</a></p>
1
2016-09-16T14:42:22Z
39,539,165
<p>You can use <a href="http://dask.pydata.org/en/latest/index.html" rel="nofollow"><code>dask</code></a>. <code>dask</code> <a href="http://dask.pydata.org/en/latest/array.html" rel="nofollow">arrays</a> allow you to create an object that behaves like a single big numpy array but represents the data stored in <a href="http://dask.pydata.org/en/latest/examples/array-hdf5.html" rel="nofollow">many small HDF5 files</a>. <code>dask</code> will take care of figuring out how any operations you carry out relate to the underlying on-disk data for you.</p>
1
2016-09-16T19:51:35Z
[ "python", "numpy", "filesystems", "hdf5", "pytables" ]
Storing data from a tag in Python with BeautifulSoup4
39,534,371
<p>Using BeautifulSoup4, I can isolate: </p> <pre><code>&lt;a href="#" data-nutrition="{ &amp;quot;serving-name&amp;quot;:&amp;quot;Milk, 2%&amp;quot;, &amp;quot;serving-size&amp;quot;:&amp;quot;16 FL OZ&amp;quot;, &amp;quot;calories&amp;quot;:&amp;quot;267&amp;quot;}"&gt; Milk, 2% &lt;i class="icon-leaf icon-hidden-text"&gt;Meatless&lt;/i&gt; &lt;/a&gt; </code></pre> <p>By running: </p> <pre><code>for i in soup('a', attrs={'data-nutrition' : True}): sample = i break print(sample) </code></pre> <p>I need create the dictionary: </p> <pre><code>my_dict = { 'serving-name': 'Milk, 2%', 'serving-size': '16 FL OZ', 'calories': '267' } </code></pre> <p>How can I do this using BeautifulSoup4 in Python?</p>
1
2016-09-16T14:48:43Z
39,534,409
<p>Locate the element and use <a href="https://docs.python.org/2/library/json.html#json.loads" rel="nofollow"><code>json.loads()</code></a> to load the <code>data-nutrition</code> attribute value into the Python dictionary:</p> <pre><code>import json from bs4 import BeautifulSoup data = """ &lt;a href="#" data-nutrition="{ &amp;quot;serving-name&amp;quot;:&amp;quot;Milk, 2%&amp;quot;, &amp;quot;serving-size&amp;quot;:&amp;quot;16 FL OZ&amp;quot;, &amp;quot;calories&amp;quot;:&amp;quot;267&amp;quot;}"&gt; Milk, 2% &lt;i class="icon-leaf icon-hidden-text"&gt;Meatless&lt;/i&gt; &lt;/a&gt;""" soup = BeautifulSoup(data, "html.parser") a = soup.select_one("a[data-nutrition]") nutrition = json.loads(a["data-nutrition"]) print(nutrition) </code></pre> <p>Prints:</p> <pre><code>{'serving-name': 'Milk, 2%', 'serving-size': '16 FL OZ', 'calories': '267'} </code></pre>
1
2016-09-16T14:50:59Z
[ "python", "beautifulsoup" ]
Python Numpy - Reading elements of a Vector Numpy Array as numerical values rather than as Numpy Arrays
39,534,395
<p>Suppose I have a Numpy Array that has dimensions of nx1 (n rows, 1 column). My usage of this is for implementing 3D vectors as 3x1 Matrices using Numpy, but the application can be extended for nx1 Vector Matrices:</p> <pre><code>In [0]: import numpy as np In [1]: foo = np.array([ ['a11'], ['a21'], ['a31'], ..., ['an1'] ]) </code></pre> <p>I want to be able to access the values of the array by dereferencing one value.</p> <pre><code>In [2]: foo[0] Out[2]: 'a11' In [3]: foo[n] Out[3]: 'an1' </code></pre> <p>However, by the general formatting of Numpy Arrays, a Vector array would be considered a 2D Array and would require 2 values to dereference it: I would have to use <code>foo[0][0]</code> or <code>foo[0][n]</code> to get the same values. I could use <code>np.transpose</code> to Transpose the Vector into one row, but the syntax continues to produce a 2D Numpy Array that requires 2 values to dereference: hence</p> <pre><code>In [4]: np.transpose(foo)[0] == foo[0][0] Out[4]: array([ True, False, False], dtype=bool) In [5]: np.transpose(foo)[0][0] == foo[0][0] Out[5]: True </code></pre> <p>This would nullify any advantage that transposing would provide. How can I access the elements of a Vector Numpy Array using only one Dereferencing Value?</p>
0
2016-09-16T14:50:11Z
39,534,644
<p>You could use the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html" rel="nofollow">numpy.ndarray.tolist()</a> function:</p> <pre><code>In [1]: foo = np.array([ ['a11'], ['a21'], ['a31'] ]) In [2]: foo.tolist() Out[2]: [['an1'], ['a2n'], ['a3n']] In [3]: foo.tolist()[0] Out[3]: ['an1'] </code></pre>
0
2016-09-16T15:01:51Z
[ "python", "arrays", "numpy", "vector", "dereference" ]
Python Numpy - Reading elements of a Vector Numpy Array as numerical values rather than as Numpy Arrays
39,534,395
<p>Suppose I have a Numpy Array that has dimensions of nx1 (n rows, 1 column). My usage of this is for implementing 3D vectors as 3x1 Matrices using Numpy, but the application can be extended for nx1 Vector Matrices:</p> <pre><code>In [0]: import numpy as np In [1]: foo = np.array([ ['a11'], ['a21'], ['a31'], ..., ['an1'] ]) </code></pre> <p>I want to be able to access the values of the array by dereferencing one value.</p> <pre><code>In [2]: foo[0] Out[2]: 'a11' In [3]: foo[n] Out[3]: 'an1' </code></pre> <p>However, by the general formatting of Numpy Arrays, a Vector array would be considered a 2D Array and would require 2 values to dereference it: I would have to use <code>foo[0][0]</code> or <code>foo[0][n]</code> to get the same values. I could use <code>np.transpose</code> to Transpose the Vector into one row, but the syntax continues to produce a 2D Numpy Array that requires 2 values to dereference: hence</p> <pre><code>In [4]: np.transpose(foo)[0] == foo[0][0] Out[4]: array([ True, False, False], dtype=bool) In [5]: np.transpose(foo)[0][0] == foo[0][0] Out[5]: True </code></pre> <p>This would nullify any advantage that transposing would provide. How can I access the elements of a Vector Numpy Array using only one Dereferencing Value?</p>
0
2016-09-16T14:50:11Z
39,535,125
<p>Your <code>foo</code> is a 2d array of strings:</p> <pre><code>In [354]: foo = np.array([ ['a11'], ['a21'], ['a31'], ['an1'] ]) In [355]: foo.shape Out[355]: (4, 1) In [356]: foo[0] # selects a 'row' Out[356]: array(['a11'], dtype='&lt;U3') In [357]: foo[0,:] # or more explicitly with the column : Out[357]: array(['a11'], dtype='&lt;U3') In [358]: foo[:,0] # selects a column, results in a 1d array Out[358]: array(['a11', 'a21', 'a31', 'an1'], dtype='&lt;U3') In [359]: foo[0,0] # select an element Out[359]: 'a11' </code></pre> <p>Transpose is still 2d, just switching rows and columns:</p> <pre><code>In [360]: foo.T Out[360]: array([['a11', 'a21', 'a31', 'an1']], dtype='&lt;U3') In [361]: _.shape Out[361]: (1, 4) </code></pre> <p>Ravel (or flatten) turns it into a 1d array, which can be accessed with just one index.</p> <pre><code>In [362]: foo.ravel()[1] Out[362]: 'a21' </code></pre> <p>Your talk about matrix multiplication and such suggests that your array really isn't of strings, but that 'a21' represents an array, or is number. So is it really a 2d numeric array, or maybe a 3d array?</p>
0
2016-09-16T15:26:26Z
[ "python", "arrays", "numpy", "vector", "dereference" ]
scrollbar does not work in recent chaco releases
39,534,402
<p>This program was running fine in chaco 3.2, but with chaco 4, scrollbar does not show at all.</p> <p>I would like either to find the problem or find a workaround.</p> <p>PanTool may be a workaround, but this will conflict with some linecursors used with mouse.</p> <pre><code>#!/usr/bin/env python # Major library imports from numpy import linspace from scipy.special import jn # Enthought library imports from enthought.enable.api import Component, ComponentEditor from enthought.traits.api import HasTraits, Instance from enthought.traits.ui.api import Item, Group, View # Chaco imports from enthought.chaco.api import ArrayPlotData, VPlotContainer, \ Plot, OverlayPlotContainer, add_default_axes, add_default_grids from enthought.chaco.plotscrollbar import PlotScrollBar from enthought.chaco.tools.api import PanTool, ZoomTool #=============================================================================== # # Create the Chaco plot. #=============================================================================== def _create_plot_component(): # Create some x-y data series to plot x = linspace(-2.0, 10.0, 100) pd = ArrayPlotData(index = x) for i in range(5): pd.set_data("y" + str(i), jn(i,x)) # Create some line plots of some of the data plot1 = Plot(pd) plot1.plot(("index", "y0", "y1", "y2"), name="j_n, n&lt;3", color="red")[0] p = plot1.plot(("index", "y3"), name="j_3", color="blue")[0] # Add the scrollbar plot1.padding_top = 0 p.index_range.high_setting = 1 # Create a container and add our plots container = OverlayPlotContainer(padding = 5,fill_padding = True, bgcolor = "lightgray", use_backbuffer=True) hscrollbar = PlotScrollBar(component=p, mapper=p.index_mapper,axis="index", resizable="",use_backbuffer = False, height=15,position=(0,0)) hscrollbar.force_data_update() plot1.overlays.append(hscrollbar) hgrid,vgrid = add_default_grids(plot1) add_default_axes(plot1) container.add(plot1) container.invalidate_and_redraw() return container #=============================================================================== # Attributes to use for the plot view. size=(900,500) title="Scrollbar example" #=============================================================================== # # Demo class that is used by the demo.py application. #=============================================================================== class Demo(HasTraits): plot = Instance(Component) traits_view = View( Group( Item('plot', editor=ComponentEditor(size=size), show_label=False), orientation = "vertical"), resizable=True, title=title ) def _plot_default(self): return _create_plot_component() demo = Demo() if __name__ == "__main__": demo.configure_traits() #--EOF--- </code></pre>
0
2016-09-16T14:50:33Z
39,992,297
<p>We investigated and found some problems in the code of the enable API (<a href="https://github.com/enthought/enable" rel="nofollow">https://github.com/enthought/enable</a>) that prevented the scrollbar from displaying in the wx backend.</p> <p>The following patch solves the problem.</p> <p>There are other issues, such as the height setting not working; we will continue to investigate.</p> <pre><code>diff --git a/enable/wx/scrollbar.py b/enable/wx/scrollbar.py index 02d0da0..003cc90 100644 --- a/enable/wx/scrollbar.py +++ b/enable/wx/scrollbar.py @@ -136,7 +136,7 - window = getattr(gc, "window", None) + window = getattr(self, "window", None) if window is None: return wx_ypos = window._flip_y(wx_ypos) </code></pre>
0
2016-10-12T06:56:26Z
[ "python", "plot", "enthought", "chaco" ]
Simple Radius server on Python
39,534,494
<p>I am new to python and I am trying to use the python library pyrad ( <a href="https://github.com/wichert/pyrad" rel="nofollow">https://github.com/wichert/pyrad</a> ) to implement a very simple Radius server to test one application. The only thing it has to do is to check if the password is equals to 123. I am able to get the password, but it is obfuscate. I need to Unobfuscate it. There is a method called PwDecrypt inside of pyrad -> packet -> AuthPacket. That is used to do this task. My issue, is that I don't know how to call this method on my code, as I said, I am new to Python.</p> <p>This is the code I am using to test and get the obfuscated password:</p> <pre><code>#!/usr/bin/python from __future__ import print_function from pyrad import dictionary, packet, server import logging logging.basicConfig(filename="pyrad.log", level="DEBUG", format="%(asctime)s [%(levelname)-8s] %(message)s") class FakeServer(server.Server): def _HandleAuthPacket(self, pkt): server.Server._HandleAuthPacket(self, pkt) print("") print("Received an authentication request") print("Attributes: ") for attr in pkt.keys(): print("%s: %s" % (attr, pkt[attr])) ########################################### ########################################### ########################################### ########################################### #HERE I GET THE OBFUSCATED PASSWORD print("%s" % pkt['Password']) ########################################### ########################################### ########################################### ########################################### reply = self.CreateReplyPacket(pkt, **{ "Service-Type": "Framed-User", "Framed-IP-Address": '10.10.10.10', "Framed-IPv6-Prefix": "fc66::1/64" }) #reply.code = packet.AccessAccept reply.code = packet.AccessChallenge #reply.code = packet.AccessReject self.SendReplyPacket(pkt.fd, reply) if __name__ == '__main__': # create server and read dictionary srv = FakeServer(dict=dictionary.Dictionary("dictionary")) # add clients (address, secret, name) srv.hosts["192.168.0.110"] = server.RemoteHost("192.168.0.110", b"secret", "192.168.0.110") srv.BindToAddress("") # start server srv.Run() </code></pre> <p>Thanks</p>
1
2016-09-16T14:55:15Z
39,559,619
<p>A friend of mine helped me solve this issue.</p> <p>These two approaches do what I need:</p> <pre><code> pwd = map(pkt.PwDecrypt, pkt['Password']) print('User: %s Pass: %s' % (pkt['User-Name'], pwd)) pwd = pkt.PwDecrypt(pkt['Password'][0]) print('User: %s Pass: %s' % (pkt['User-Name'], pwd)) </code></pre>
0
2016-09-18T15:54:34Z
[ "python", "radius" ]
ImportError: No module named cv2.cv
39,534,496
<p><strong>python 3.5 and windows 10</strong></p> <p>I installed open cv using this command :</p> <pre><code>pip install opencv_python-3.1.0-cp35-cp35m-win_amd64.whl </code></pre> <p>This command in python works fine :</p> <pre><code>import cv2 </code></pre> <p>But when i want to import cv2.cv :</p> <pre><code>import cv2.cv as cv </code></pre> <p>This error comes up :</p> <pre><code>import cv2.cv as cv ImportError: No module named 'cv2.cv'; 'cv2' is not a package </code></pre> <p>So what is the problem and how can i fix it?</p>
1
2016-09-16T14:55:18Z
39,534,936
<p>As @Miki said:</p> <p><code>cv2.cv</code> has been removed in OpenCV 3 and the functions have changed.</p> <p>And this is the OpenCV documentation: <a href="http://opencv.org/documentation.html" rel="nofollow">http://opencv.org/documentation.html</a></p>
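<p>In practice that means you import <code>cv2</code> directly and use the renamed constants and functions that used to live under <code>cv2.cv</code>. A small illustrative sketch (the image path is a placeholder):</p> <pre><code>import cv2

# the old cv2.cv names were renamed in OpenCV 3, e.g.
#   cv2.cv.CV_HAAR_SCALE_IMAGE  is now  cv2.CASCADE_SCALE_IMAGE
#   cv2.cv.CV_BGR2GRAY          is now  cv2.COLOR_BGR2GRAY

img = cv2.imread('image.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
</code></pre>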
1
2016-09-16T15:16:19Z
[ "python", "python-3.x", "opencv", "windows-10" ]
Modify Django settings programmatically
39,534,505
<p>I have Django project. I need to add app to <code>INSTALLED_APPS</code> or change template config using another python script. What is the best way to do that?</p> <p>I have several ideas.</p> <ul> <li>Open <code>settings.py</code> as text file from Python and modify it. Seems to be wheel reinvention and opens box with many errors (escaping and so on).</li> <li>Use Python modules like <code>ast</code> but it is pretty low level and more for read access (and I need to write data back).</li> <li>Use some Django tools (I am not sure if such tool exists).</li> </ul> <p>What is the best way to do that?</p> <p>PS: <a href="http://stackoverflow.com/questions/768634/parse-a-py-file-read-the-ast-modify-it-then-write-back-the-modified-source-c">Parse a .py file, read the AST, modify it, then write back the modified source code</a> is related, but it is not Django specific and pretty old. </p>
1
2016-09-16T14:55:47Z
39,534,970
<p>A better (meaning safer and cleaner) approach would be to generate a Python module and then use it in settings.py. For example, have your script create a file with these contents:</p> <pre><code>ADDITIONAL_APPS = [ 'foo', 'bar', ] </code></pre> <p>Save it as, say, additional_settings.py, put it in the same directory as settings.py, and then in settings.py add these lines:</p> <pre><code>from .additional_settings import ADDITIONAL_APPS ... INSTALLED_APPS += ADDITIONAL_APPS </code></pre> <p>This way you don't have to parse anything at all and you don't risk ruining your existing settings.</p> <p>Let me know if it works.</p>
0
2016-09-16T15:18:15Z
[ "python", "django" ]
ASCII error with my Django API
39,534,564
<p>I'm following a tutorial to learn Django, so I'm really new to it. I started a project named <code>etat_civil</code> and created an application named <code>blog</code>.</p> <p>I get this kind of thing:</p> <p><a href="http://i.stack.imgur.com/mdpIw.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/mdpIw.jpg" alt="enter image description here"></a></p> <p>My <code>blog/views.py</code> file looks like:</p> <pre><code>#-*- coding : utf-8 -*-

from django.shortcuts import render
from django.http import HttpResponse

# Create your views here.

def home (request) :
    text = u"""&lt;h1&gt;Bienvenue sur mon blog ! &lt;/h1&gt;
    &lt;p&gt;Les crêpes bretonnes ça tue des mouettes en plein vol ! &lt;/p&gt;"""
    return HttpResponse(text)
</code></pre> <p>and my <code>etat_civil/urls.py</code> file looks like:</p> <pre><code>#-*- coding : utf-8 -*-
from django.conf.urls import url
from django.contrib import admin
from blog import views

urlpatterns = [
    url(r'^accueil$', views.home),
]
</code></pre> <p>Then, in the terminal, I executed <code>python manage.py runserver</code></p> <p>But I get this error and I don't know where it comes from:</p> <pre><code>macbook-pro-de-valentin:etat_civil valentinjungbluth$ python manage.py runserver
Performing system checks...
Unhandled exception in thread started by &lt;function wrapper at 0x1109e1050&gt;
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
    self.check(display_num_errors=True)
  File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 374, in check
    include_deployment_checks=include_deployment_checks,
  File "/usr/local/lib/python2.7/site-packages/django/core/management/base.py", line 361, in _run_checks
    return checks.run_checks(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/usr/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 14, in check_url_config
    return check_resolver(resolver)
  File "/usr/local/lib/python2.7/site-packages/django/core/checks/urls.py", line 24, in check_resolver
    for pattern in resolver.url_patterns:
  File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/local/lib/python2.7/site-packages/django/urls/resolvers.py", line 313, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/local/lib/python2.7/site-packages/django/urls/resolvers.py", line 306, in urlconf_module
    return import_module(self.urlconf_name)
  File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Users/valentinjungbluth/etat_civil/etat_civil/urls.py", line 22, in &lt;module&gt;
    from blog import views
  File "/Users/valentinjungbluth/etat_civil/blog/views.py", line 9
SyntaxError: Non-ASCII character '\xc3' in file /Users/valentinjungbluth/etat_civil/blog/views.py on line 10, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
</code></pre> <p>Thank you so much if you could help me ;)</p>
0
2016-09-16T14:58:39Z
39,534,747
<p>There is an extra space in your source code encoding declaration. PEP 263 only recognizes <code>coding:</code> (or <code>coding=</code>) with no space before the colon, so with the space Python ignores the declaration and treats the file as ASCII. The same declaration appears in your <code>urls.py</code>, so fix it there too.</p> <p>Change this:</p> <pre><code>#-*- coding : utf-8 -*- </code></pre> <p>To this:</p> <pre><code>#-*- coding: utf-8 -*- </code></pre>
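<p>For reference, a minimal sketch of the corrected <code>views.py</code> header (assuming the rest of the asker's file stays as posted) would look like this; any declaration matching PEP 263 on line 1 or 2 of the file works:</p> <pre><code># -*- coding: utf-8 -*-
from django.shortcuts import render
from django.http import HttpResponse


def home(request):
    # Non-ASCII characters in this literal are fine once the encoding is declared
    text = u"""&lt;h1&gt;Bienvenue sur mon blog !&lt;/h1&gt;
&lt;p&gt;Les crêpes bretonnes ça tue des mouettes en plein vol !&lt;/p&gt;"""
    return HttpResponse(text)
</code></pre>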
2
2016-09-16T15:06:44Z
[ "python", "django" ]
Navigating pagination with selenium
39,534,584
<p>I am getting stuck on a weird case of pagination. I am scraping search results from <a href="https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx" rel="nofollow">https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx</a></p> <p>I have search results that fall into 4 categories.</p> <p>1) There are no search results</p> <p>2) There is one results page</p> <p>3) There is more than one results page but fewer than 12 results pages</p> <p>4) There are more than 12 results pages.</p> <p>For case 1, that is easy, I am just passing.</p> <pre><code>results = driver.find_elements_by_class_name('GridView')
if len(results) == 0:
    pass
</code></pre> <p>For cases 2 and 3, I am checking whether the list of links in the containing element has at least one entry and then clicking it.</p> <pre><code>else:
    results_table = bsObj.find('table', {'class':'GridView'})
    sub_tables = results_table.find_all('table')
    next_page_links = sub_tables[1].find_all('a')
    if len(next_page_links) == 0:
        scrapeResults()
    else:
        scrapeResults()
        ####GO TO NEXT PAGE UNTIL THERE IS NO NEXT PAGE
</code></pre> <p>Question for cases 2 and 3: what could I possibly check for here as my control?</p> <p>The links are hrefs to pages 2, 3, etc. But the tricky part is: if I am on a current page, say page 1, how do I make sure I am going to page 2, and when I am on page 2 how do I make sure I am going to page 3? The HTML for page 1 of the results list is as follows:</p> <pre><code>&lt;table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;"&gt;
  &lt;tr&gt;
    &lt;td&gt;Page: &lt;span&gt;1&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$2&amp;#39;)"&gt;2&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$3&amp;#39;)"&gt;3&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;
</code></pre> <p>I can zero in on this table specifically using <code>sub_tables[1]</code>; see the bs4 code in case 2 above.</p> <p>The problem is there is no next button that I could utilize. Nothing changes across the results pages in the HTML. There is nothing to identify the current page besides the number in the <code>span</code> right before the links. And I would like it to stop when it reaches the last page.</p> <p>For case 4, the HTML looks like this:</p> <pre><code>&lt;table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;"&gt;
  &lt;tr&gt;
    &lt;td&gt;Page: &lt;span&gt;1&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$2&amp;#39;)"&gt;2&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$3&amp;#39;)"&gt;3&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$4&amp;#39;)"&gt;4&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$5&amp;#39;)"&gt;5&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$6&amp;#39;)"&gt;6&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$7&amp;#39;)"&gt;7&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$8&amp;#39;)"&gt;8&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$9&amp;#39;)"&gt;9&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$10&amp;#39;)"&gt;10&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$11&amp;#39;)"&gt;...&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$Last&amp;#39;)"&gt;Last&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;
</code></pre> <p>The last two links are <code>...</code>, to show that there are more results pages, and <code>Last</code>, to signify the last page. However, the <code>Last</code> link exists on every page, and it is only on the last page itself that it is not an active link.</p> <p>Question for case 4: how could I check whether the <code>Last</code> link is clickable and use this as my stopping point?</p> <p>Bigger question for case 4: how do I maneuver the <code>...</code> to go through the other results pages? The results page list shows 12 values at most, i.e. the nearest ten pages to the current page, the <code>...</code> link to more pages and the <code>Last</code> link. So I don't know what to do if my results have, say, 88 pages.</p> <p>I am linking a dump of a full sample page: <a href="https://ghostbin.com/paste/nrb27" rel="nofollow">https://ghostbin.com/paste/nrb27</a></p>
1
2016-09-16T14:59:24Z
39,535,606
<p>First of all you have to know which page you are on. To achieve that:</p> <p>Find the element that holds the current page number, using XPath:</p> <pre><code>from selenium.webdriver.common.by import By

currentPageElement = driver.find_element(By.XPATH, '//table[./tbody/tr/td[text()="Page: "]]//span')
</code></pre> <p>Then extract the number:</p> <pre><code>currentPageNumber = int(currentPageElement.text)
</code></pre> <p>And then you can do anything: go to the next page by adding 1 to the current page number, go to the last page and read its number, etc.</p>
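<p>Building on that, here is a minimal sketch of a paging loop for the asker's cases 2, 3 and 4. It is a sketch under assumptions rather than a drop-in solution: <code>scrapeResults()</code> and the <code>GridView</code> class name come from the question, <code>driver</code> is an already-initialised WebDriver, and the <code>time.sleep()</code> is a crude placeholder for a proper explicit wait on the postback. The key idea is to match pager links by the <code>Page$&lt;n&gt;</code> target in their <code>href</code>, which covers both the numeric links and the <code>...</code> link, so the loop keeps advancing until nothing points at the next page (i.e. the last page has been scraped).</p> <pre><code>import time

from selenium.webdriver.common.by import By


def go_to_page(driver, number):
    """Click the pager link that posts back to page ``number``, if present.

    Returns True when such a link exists and was clicked, False otherwise
    (which is what happens on the last results page).
    """
    target = "Page$%d'" % number
    for link in driver.find_elements(By.XPATH, '//table[@class="GridView"]//a'):
        if target in (link.get_attribute('href') or ''):
            link.click()
            return True
    return False


page = 1
while True:
    scrapeResults()                 # the asker's own scraping routine
    page += 1
    if not go_to_page(driver, page):
        break                       # nothing links to the next page: this was the last one
    time.sleep(2)                   # placeholder; prefer WebDriverWait on the pager span
</code></pre>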
1
2016-09-16T15:53:49Z
[ "python", "loops", "selenium", "selenium-webdriver", "pagination" ]
Navigating pagination with selenium
39,534,584
<p>I am getting stuck on a weird case of pagination. I am scraping search results from <a href="https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx" rel="nofollow">https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx</a></p> <p>I have search results that fall into 4 categories.</p> <p>1) There are no search results</p> <p>2) There is one results page</p> <p>3) There is more than one results page but fewer than 12 results pages</p> <p>4) There are more than 12 results pages.</p> <p>For case 1, that is easy, I am just passing.</p> <pre><code>results = driver.find_elements_by_class_name('GridView')
if len(results) == 0:
    pass
</code></pre> <p>For cases 2 and 3, I am checking whether the list of links in the containing element has at least one entry and then clicking it.</p> <pre><code>else:
    results_table = bsObj.find('table', {'class':'GridView'})
    sub_tables = results_table.find_all('table')
    next_page_links = sub_tables[1].find_all('a')
    if len(next_page_links) == 0:
        scrapeResults()
    else:
        scrapeResults()
        ####GO TO NEXT PAGE UNTIL THERE IS NO NEXT PAGE
</code></pre> <p>Question for cases 2 and 3: what could I possibly check for here as my control?</p> <p>The links are hrefs to pages 2, 3, etc. But the tricky part is: if I am on a current page, say page 1, how do I make sure I am going to page 2, and when I am on page 2 how do I make sure I am going to page 3? The HTML for page 1 of the results list is as follows:</p> <pre><code>&lt;table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;"&gt;
  &lt;tr&gt;
    &lt;td&gt;Page: &lt;span&gt;1&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$2&amp;#39;)"&gt;2&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$3&amp;#39;)"&gt;3&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;
</code></pre> <p>I can zero in on this table specifically using <code>sub_tables[1]</code>; see the bs4 code in case 2 above.</p> <p>The problem is there is no next button that I could utilize. Nothing changes across the results pages in the HTML. There is nothing to identify the current page besides the number in the <code>span</code> right before the links. And I would like it to stop when it reaches the last page.</p> <p>For case 4, the HTML looks like this:</p> <pre><code>&lt;table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;"&gt;
  &lt;tr&gt;
    &lt;td&gt;Page: &lt;span&gt;1&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$2&amp;#39;)"&gt;2&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$3&amp;#39;)"&gt;3&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$4&amp;#39;)"&gt;4&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$5&amp;#39;)"&gt;5&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$6&amp;#39;)"&gt;6&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$7&amp;#39;)"&gt;7&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$8&amp;#39;)"&gt;8&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$9&amp;#39;)"&gt;9&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$10&amp;#39;)"&gt;10&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$11&amp;#39;)"&gt;...&lt;/a&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="javascript:__doPostBack(&amp;#39;ctl00$cphMain$lrrgResults$cgvNamesDir&amp;#39;,&amp;#39;Page$Last&amp;#39;)"&gt;Last&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;
</code></pre> <p>The last two links are <code>...</code>, to show that there are more results pages, and <code>Last</code>, to signify the last page. However, the <code>Last</code> link exists on every page, and it is only on the last page itself that it is not an active link.</p> <p>Question for case 4: how could I check whether the <code>Last</code> link is clickable and use this as my stopping point?</p> <p>Bigger question for case 4: how do I maneuver the <code>...</code> to go through the other results pages? The results page list shows 12 values at most, i.e. the nearest ten pages to the current page, the <code>...</code> link to more pages and the <code>Last</code> link. So I don't know what to do if my results have, say, 88 pages.</p> <p>I am linking a dump of a full sample page: <a href="https://ghostbin.com/paste/nrb27" rel="nofollow">https://ghostbin.com/paste/nrb27</a></p>
1
2016-09-16T14:59:24Z
39,535,665
<p>What you should do is count the number of results shown per page and divide the total number of results by it to get the total number of pages.</p> <p>If you inspect the page you will see:</p> <p><code>Displaying records <strong>1 - 500</strong> of <strong>32563</strong> at <strong>10:08 AM ET</strong> on <strong>9/16/2016</strong></code></p> <p>Knowing the total number of pages, start navigating (checking that each page has loaded if needed). Knowing the current page, you can build a dynamic selector for the pagination number, which gives two cases:</p> <ul> <li>if the pagination number is not a link, you are on that page</li> <li>if the pagination number is a link, you can use it to click through</li> </ul> <p>You shouldn't need 4 categories, since:</p> <ul> <li>you can count the number of results and how many are displayed on a page</li> <li>from those two values you know the number of pages</li> </ul> <ol> <li>Create a method that navigates where needed, driven by a <code>for</code> loop or another control structure</li> <li>On each page, do whatever scraping you need to do</li> </ol> <p>Or go to the last page and work backwards until page 1 is no longer a link.</p>
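<p>As a rough sketch of that calculation (the "Displaying records ..." locator is an assumption based on the snippet above, so adapt it to the real markup), one way to derive the page count could be:</p> <pre><code>import math
import re

# Status line looks like: "Displaying records 1 - 500 of 32563 at 10:08 AM ET on 9/16/2016"
status = driver.find_element_by_xpath('//*[contains(text(), "Displaying records")]').text
first, last, total = (int(g) for g in
                      re.search(r'records\s+(\d+)\s*-\s*(\d+)\s+of\s+(\d+)', status).groups())

per_page = last - first + 1                             # e.g. 500 on the first page
total_pages = int(math.ceil(total / float(per_page)))   # float() keeps Python 2 happy

for page in range(2, total_pages + 1):
    # navigate to `page` (for example by clicking the pager link whose href
    # targets 'Page$&lt;page&gt;') and scrape it here
    pass
</code></pre>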
1
2016-09-16T15:56:42Z
[ "python", "loops", "selenium", "selenium-webdriver", "pagination" ]