Python: Calculating Additions, Deletions, By Date
39,879,857
<p>I have two columns in a dataframe, date and name. For each day, I would like to count the number of records that were added and the number that were deleted.</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Name': ['A', 'B', 'C', 'D', 'E']})
df1['Date'] = '2016-01-01'
df2 = pd.DataFrame({'Name': ['C', 'D', 'E', 'F']})
df2['Date'] = '2016-01-02'
df3 = pd.DataFrame({'Name': ['B', 'C', 'D', 'E', 'F']})
df3['Date'] = '2016-01-03'
df4 = pd.DataFrame({'Name': ['A', 'D', 'E', 'H']})
df4['Date'] = '2016-01-04'

df = pd.concat([df1, df2, df3, df4])
df = df.reset_index(drop=True)
df
</code></pre>
<p>I would like an output that, for each date, counts the number of additions and deletions. For example, on 2016-01-02, A and B are gone, F is new, and 3 stayed the same. I would like the output to look like the following:</p>
<pre><code>Date        add  del  same
2016-01-02  1    2    3
</code></pre>
<p>I have tried a full outer join and then counting the blanks, but that is so inefficient!</p>
<p>Does anyone have any ideas of a more efficient way of doing this? Thank you so much!</p>
0
2016-10-05T17:02:01Z
39,882,691
<p>I have an attempt, and I cannot say whether it will be as fast or as stable as your full outer join, but it works with the example above.</p>
<p>Carrying on from where you left off:</p>
<pre><code>df['Value'] = 1
df = df.set_index(['Date', 'Name']).unstack('Name').fillna(0)
df = (df - df.shift(1))
df = pd.DataFrame({i: j.value_counts() for i, j in df.iterrows()}).T.fillna(0)
df.columns = ['del', 'same', 'add']
print(df)

            del  same  add
2016-01-01  0.0   0.0  0.0
2016-01-02  2.0   4.0  1.0
2016-01-03  0.0   6.0  1.0
2016-01-04  3.0   2.0  2.0
</code></pre>
<p>As you can see, it is not beautiful, pythonic code. It depends on the shift, which means the dates must be in order; you could convert them with <code>pd.to_datetime()</code> and sort to make sure of that.</p>
<p>It also uses a dict comprehension that is then turned back into a DataFrame. Finally, it relies on the <code>del</code>, <code>same</code> and <code>add</code> columns being in that order. You could do an actual mapping as opposed to an overwrite.</p>
<p>I will be interested to know how this compares with the join in terms of speed. Do let us know and we can all learn something!</p>
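Editorial note: if speed is the concern, a set-based pass over the names grouped by date may also be worth timing against both approaches in this thread. This is a sketch using the question's sample data, not code from the thread:

```python
import pandas as pd

# Rebuild part of the question's sample frame (two of the four days).
df1 = pd.DataFrame({'Name': ['A', 'B', 'C', 'D', 'E']}); df1['Date'] = '2016-01-01'
df2 = pd.DataFrame({'Name': ['C', 'D', 'E', 'F']});      df2['Date'] = '2016-01-02'
df = pd.concat([df1, df2]).reset_index(drop=True)

# One set of names per date, then compare consecutive dates with set algebra.
sets = df.groupby('Date')['Name'].apply(set)
rows = []
for prev, cur in zip(sets.index[:-1], sets.index[1:]):
    a, b = sets[prev], sets[cur]
    rows.append({'Date': cur, 'add': len(b - a), 'del': len(a - b), 'same': len(a & b)})

result = pd.DataFrame(rows, columns=['Date', 'add', 'del', 'same'])
print(result)
```

For 2016-01-02 this reports add=1 (F), del=2 (A and B), same=3, matching the desired output in the question.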
1
2016-10-05T20:00:46Z
[ "python", "pandas", "dataframe" ]
Rotate a rectangle consisting of 4 tuples to left or right
39,879,924
<p>I'm working on a program to manipulate GIS data, but for this precise problem I'm trying to rotate a rectangle of 4 points around its bottom-left corner. I've got one tuple describing the bottom-left corner, <code>x, y = 40000, 40000</code>, and two side lengths, <code>x_displacement</code> and <code>y_displacement</code>. I've got an angle, <code>theta</code>, in degrees. I want to rotate the rectangle by up to 90 degrees left or right, so theta can be -89 to 89 degrees. Negative angles should rotate the corners to the left; positive angles to the right. I've represented the rectangle as such: <a href="http://i.imgur.com/pp3hFyA.jpg" rel="nofollow">http://i.imgur.com/pp3hFyA.jpg</a></p>
<pre><code>x_displacement = 100
y_displacement = 100
x = 40000
y = 40000
x1 = x
y1 = y + a.y_displacement
x2 = x + a.x_displacement
y2 = y + a.y_displacement
x3 = x + a.x_displacement
y3 = y
# describes the other corners of the rectangle
</code></pre>
<p><code>Coord</code> is a class that holds an x and a y value. <code>coords</code> is a list of <code>Coord</code> class items.</p>
<pre><code>c = Coord(x, y)
coords.append(c)
c = Coord(x1, y1)
coords.append(c)
c = Coord(x2, y2)
coords.append(c)
c = Coord(x3, y3)
coords.append(c)
# Adds each corner to the list of coordinates

theta = math.radians(a.angle)
newcoords = []
for c in coords:
    newcoords.append(Coord((c.x * math.cos(theta) - c.y * math.sin(theta)),
                           (c.x * math.sin(theta) + c.y * math.cos(theta))))
coords = newcoords
</code></pre>
<p>I suspect that there's something relatively trivial I'm doing wrong, but I've been stuck on this problem for quite some time. This code produces a new rectangle that is either misshapen or has negative corners, rather than the slightly left-rotated corners I want. I've seen many posts on here about rotating rectangles, but none seem to be a direct duplicate, because they do not handle negative angles. I'd appreciate any pointers!</p>
0
2016-10-05T17:05:46Z
39,880,778
<p>As a few commenters mentioned, you are rotating around the (0, 0) point rather than the lower-left point. Since we are constructing the coordinates ourselves, we can:</p> <ul> <li>First construct the shape at the (0, 0) point</li> <li>Rotate it</li> <li>Translate it out to where it needs to be</li> </ul> <p>The example below uses plain tuples rather than your <code>Coord</code> object, but I'm sure it makes the point.</p>
<pre><code>import math

def rotate(xy, theta):
    # https://en.wikipedia.org/wiki/Rotation_matrix#In_two_dimensions
    cos_theta, sin_theta = math.cos(theta), math.sin(theta)
    return (
        xy[0] * cos_theta - xy[1] * sin_theta,
        xy[0] * sin_theta + xy[1] * cos_theta
    )

def translate(xy, offset):
    return xy[0] + offset[0], xy[1] + offset[1]

if __name__ == '__main__':
    # Create the square relative to (0, 0)
    w, h = 100, 100
    points = [
        (0, 0),
        (0, h),
        (w, h),
        (w, 0)
    ]

    offset = (40000, 50000)
    degrees = 90
    theta = math.radians(degrees)

    # Apply rotation, then translation to each point
    print [translate(rotate(xy, theta), offset) for xy in points]
</code></pre>
<p>As a bonus, this should work with any set of points defined relative to (0, 0), regardless of whether they form any sensible polygon.</p>
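Editorial note: when the corners are already absolute coordinates, as in the question, the answer's two helpers combine naturally into a rotate-about-a-pivot function. The helper name `rotate_about` is my own, not from the answer; theta here follows the usual mathematical convention (positive = counter-clockwise), so negative angles rotate clockwise:

```python
import math

def rotate(xy, theta):
    # Standard 2-D rotation about the origin.
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (xy[0] * cos_t - xy[1] * sin_t,
            xy[0] * sin_t + xy[1] * cos_t)

def translate(xy, offset):
    return (xy[0] + offset[0], xy[1] + offset[1])

def rotate_about(xy, pivot, theta):
    # Shift the pivot to the origin, rotate there, then shift back.
    shifted = (xy[0] - pivot[0], xy[1] - pivot[1])
    return translate(rotate(shifted, theta), pivot)

pivot = (40000, 40000)
corner = (40100, 40000)  # bottom-right corner of the question's rectangle
print(rotate_about(corner, pivot, math.radians(90)))
```

The pivot itself maps to itself for any angle, which is a handy sanity check that the rectangle stays anchored at its bottom-left corner.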
1
2016-10-05T17:59:11Z
[ "python", "python-3.x", "rotation", "gis" ]
django migrate failing after switching from sqlite3 to postgres
39,879,939
<p>I have been developing a Django project using sqlite3 as the backend and it has been working well. I am now attempting to switch the project over to use postgres as the backend but running into some issues.</p> <p>After modifying my settings file, setting up postgres, creating the database and user I get the error below when running <code>manage.py migrate</code></p> <p><code>django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist</code></p> <p><code>financemgr</code> is an app within the project. <code>rate</code> is a table within the app.</p> <p>If I run this same command but specify sqlite3 as my backend it works fine.</p> <p>For clarity I will repeat:</p> <p><strong>Environment Config1</strong></p> <ul> <li>Ubuntu 14.04, Django 1.10</li> <li>Settings file has <code>'ENGINE': 'django.db.backends.sqlite3'</code> <ol> <li>Run <code>manage.py migrate</code></li> <li>Migration runs and processes all the migrations successfully</li> </ol></li> </ul> <p><strong>Environment Config2</strong></p> <ul> <li>Ubuntu 14.04, Django 1.10</li> <li>Settings file has <code>'ENGINE': 'django.db.backends.postgresql_psycopg2'</code> <ol> <li>Run <code>manage.py migrate</code></li> <li>Migration runs and gives the error <code>django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist</code></li> </ol></li> </ul> <p>Everything else is identical. I am not trying to migrate data, just populate the schema etc.</p> <p>Any ideas?</p>
0
2016-10-05T17:06:54Z
40,100,350
<p>This may help you:</p> <p>I think you have pre-existing migration files that were generated while you were using the sqlite database. You have now changed the database configuration, but Django is still looking for the existing tables according to those migration files. Delete all the migration files in your app's migrations folder (keeping <code>__init__.py</code>), then migrate again by running <code>python manage.py makemigrations</code> and <code>python manage.py migrate</code>. It may work fine.</p>
0
2016-10-18T05:40:05Z
[ "python", "django", "postgresql", "django-models", "sqlite3" ]
Chaining functions for cleaner code
39,880,043
<p>I have the following functions that I would like to be able to chain together for usage to have cleaner code:</p>
<pre><code>def label_encoder(dataframe, column):
    """ Encodes categorical variables """
    le = preprocessing.LabelEncoder()
    le.fit(dataframe[column])
    dataframe[column] = le.transform(dataframe[column])
    return dataframe

def remove_na_and_inf(dataframe):
    """ Removes rows containing NaNs, inf or -inf from dataframes """
    dataframe.replace([np.inf, -np.inf], np.nan, inplace=True).dropna(how="all", inplace=True)
    return dataframe

def create_share_reate_vars(dataframe):
    """ Generate share rate to use as interaction var """
    for interval in range(300, 3900, 300):
        interval = str(interval)
        dataframe[interval + '_share_rate'] = dataframe[interval + '_shares'] / dataframe[interval + '_video_views']
    return dataframe

def generate_logged_values(dataframe):
    """ Generate logged values for all features which can be logged """
    columns = list(dataframe.columns)
    for feature in columns:
        try:
            dataframe[str(feature + '_log')] = np.log(dataframe[feature])
        except AttributeError:
            continue
    return dataframe
</code></pre>
<p>I would like to do something like this:</p>
<pre><code>new_df = reduce(lambda x, y: y(x),
                reversed([label_encoder, remove_na_and_inf,
                          create_share_reate_vars, generate_logged_values]),
                df)
</code></pre>
<p>but since the first function takes two arguments this will not work. Any solutions to this, or maybe a completely different paradigm?</p>
0
2016-10-05T17:13:22Z
39,880,738
<p>You could partially evaluate <code>label_encoder</code> first using <code>functools.partial</code>, and then pass that version to your lambda. E.g.</p>
<pre><code>from functools import partial

fixed_col_bound_encoder = partial(label_encoder, column=2)
new_df = reduce(lambda x, y: y(x),
                reversed([fixed_col_bound_encoder, remove_na_and_inf,
                          create_share_reate_vars, generate_logged_values]),
                df)
</code></pre>
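Editorial note: the same pattern can be seen in miniature with toy stand-in functions (these are purely illustrative, not the asker's dataframe steps). `partial` fixes the extra argument up front, and `reduce` then threads the running value through each one-argument step:

```python
from functools import partial, reduce

# Toy stand-ins for the dataframe-transforming functions:
def add(x, n):
    return x + n

def double(x):
    return 2 * x

# Fix the second argument up front, as with label_encoder above.
add_three = partial(add, n=3)

# reduce passes the accumulated value to each step in turn.
pipeline = [add_three, double]
result = reduce(lambda value, step: step(value), pipeline, 2)
print(result)  # (2 + 3) * 2 = 10
```

Note that listing the steps in application order (and dropping `reversed()`) makes the pipeline read left to right, which is arguably clearer than the reversed form in the question.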
3
2016-10-05T17:56:45Z
[ "python" ]
Efficient way to read non-empty cells in a column in CSV file
39,880,069
<p>I have a large CSV file (>500,000 rows), and would like to read the non-empty cells in a column of the dataframe (pandas). Right now, I am doing this:</p>
<pre><code>df = pd.read_csv(filename)
myiter = []
for xiter, x in enumerate(df['Column_name']):
    if (np.isnan(x) == False):
        myiter.append(xiter)
</code></pre>
<p>Is there a more efficient way to do the same?</p>
0
2016-10-05T17:14:34Z
39,880,572
<p>Are they tagged as <code>NaN</code> in your <code>df</code>?</p> <p>If yes, then do:</p>
<pre><code>df.dropna()
</code></pre>
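Editorial note: the asker actually wants the positions of the non-empty cells, not a cleaned frame, so a vectorised boolean mask is closer to the goal and avoids the Python loop. A sketch with made-up data (the column name follows the question; the values are invented):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Column_name': [1.0, np.nan, 3.0, np.nan, 5.0]})

# Boolean mask of non-empty cells; notnull() handles NaN directly.
mask = df['Column_name'].notnull()

non_empty_positions = np.flatnonzero(mask.values)   # same content as myiter
non_empty_values = df.loc[mask, 'Column_name']      # the cells themselves
print(list(non_empty_positions))  # [0, 2, 4]
```

Both operations are computed in one vectorised pass, which matters at 500,000+ rows.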
0
2016-10-05T17:46:48Z
[ "python", "pandas", "dataframe", "large-files", "import-from-csv" ]
Move multiple elements (based on criteria) to end of list
39,880,110
<p>I have the following 2 Python lists:</p>
<pre><code>main_l = ['Temp_Farh', 'Surface', 'Heater_back', 'Front_Press', 'Lateral_Cels',
          'Gauge_Finl', 'Gauge_Relay', 'Temp_Throw', 'Front_JL']

hlig = ['Temp', 'Lateral', 'Heater', 'Front']
</code></pre>
<p>I need to move elements from <code>main_l</code> to the end of the list if they contain strings listed in <code>hlig</code>.</p>
<p>Final version of <code>main_l</code> should look like this:</p>
<pre><code>main_l = ['Surface', 'Gauge_Finl', 'Gauge_Relay', 'Temp_Farh', 'Heater_back',
          'Front_Press', 'Lateral_Cels', 'Temp_Throw', 'Front_JL']
</code></pre>
<p><strong>My attempt:</strong></p>
<p>I first try to find out whether the list <code>main_l</code> contains elements with a substring listed in the 2nd list <code>hlig</code>. Here is the way I am doing this:</p>
<pre><code>found = [i for e in hlig for i in main_l if e in i]
</code></pre>
<p><code>found</code> is a sublist of <code>main_l</code>. The problem is: now that I have this list, I do not know how to select the elements that do NOT contain the substrings in <code>hlig</code>. If I could do this, then I could add them to a list <code>not_found</code> and concatenate them like this: <code>not_found + found</code> - and this would give me what I want.</p>
<p><strong>Question:</strong></p>
<p>Is there a way to move matching elements to the end of the list <code>main_l</code>?</p>
1
2016-10-05T17:16:29Z
39,880,202
<p>You could sort <code>main_l</code> using whether each element contains a string from hlig as a key:</p> <pre><code>main_l.sort(key=lambda x: any(term in x for term in hlig)) </code></pre>
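Editorial note: this works because `False` sorts before `True` and Python's sort is stable, so elements keep their relative order within each group. Run on the question's data it reproduces the desired result exactly:

```python
main_l = ['Temp_Farh', 'Surface', 'Heater_back', 'Front_Press', 'Lateral_Cels',
          'Gauge_Finl', 'Gauge_Relay', 'Temp_Throw', 'Front_JL']
hlig = ['Temp', 'Lateral', 'Heater', 'Front']

# Non-matching elements (key False) stay at the front in their original
# order; matching elements (key True) move to the back in theirs.
main_l.sort(key=lambda x: any(term in x for term in hlig))
print(main_l)
```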
4
2016-10-05T17:23:28Z
[ "python", "list", "python-2.7" ]
Move multiple elements (based on criteria) to end of list
39,880,110
<p>I have the following 2 Python lists:</p>
<pre><code>main_l = ['Temp_Farh', 'Surface', 'Heater_back', 'Front_Press', 'Lateral_Cels',
          'Gauge_Finl', 'Gauge_Relay', 'Temp_Throw', 'Front_JL']

hlig = ['Temp', 'Lateral', 'Heater', 'Front']
</code></pre>
<p>I need to move elements from <code>main_l</code> to the end of the list if they contain strings listed in <code>hlig</code>.</p>
<p>Final version of <code>main_l</code> should look like this:</p>
<pre><code>main_l = ['Surface', 'Gauge_Finl', 'Gauge_Relay', 'Temp_Farh', 'Heater_back',
          'Front_Press', 'Lateral_Cels', 'Temp_Throw', 'Front_JL']
</code></pre>
<p><strong>My attempt:</strong></p>
<p>I first try to find out whether the list <code>main_l</code> contains elements with a substring listed in the 2nd list <code>hlig</code>. Here is the way I am doing this:</p>
<pre><code>found = [i for e in hlig for i in main_l if e in i]
</code></pre>
<p><code>found</code> is a sublist of <code>main_l</code>. The problem is: now that I have this list, I do not know how to select the elements that do NOT contain the substrings in <code>hlig</code>. If I could do this, then I could add them to a list <code>not_found</code> and concatenate them like this: <code>not_found + found</code> - and this would give me what I want.</p>
<p><strong>Question:</strong></p>
<p>Is there a way to move matching elements to the end of the list <code>main_l</code>?</p>
1
2016-10-05T17:16:29Z
39,880,406
<p>I would rewrite what you have to:</p>
<pre><code>main_l = ['Temp_Farh', 'Surface', 'Heater_back', 'Front_Press', 'Lateral_Cels',
          'Gauge_Finl', 'Gauge_Relay', 'Temp_Throw', 'Front_JL']

hlig = ['Temp', 'Lateral', 'Heater', 'Front']

found = [i for i in main_l if any(e in i for e in hlig)]
</code></pre>
<p>Then the solution is obvious:</p>
<pre><code>not_found = [i for i in main_l if not any(e in i for e in hlig)]

answer = not_found + found
</code></pre>
<p>EDIT: Removed square brackets around list comprehension based on comments by Sven Marnach (to aviraldg's solution)</p>
1
2016-10-05T17:36:19Z
[ "python", "list", "python-2.7" ]
numpy.fromfile differences between python 2.7.3 and 2.7.6
39,880,198
<p>I'm having an issue running code between two consoles, and I've gotten it down to a difference between the versions of Python installed on these computers (2.7.3 and 2.7.6 respectively).</p>
<p>Here is the input file, found on GitHub (<a href="https://github.com/tkkanno/PhD_work/blob/master/1r" rel="nofollow">https://github.com/tkkanno/PhD_work/blob/master/1r</a>).</p>
<p>On Python 2.7.3 with numpy version 1.11.1, the following code works as expected:</p>
<pre><code>import numpy as np
s = 'directory/to/file'
f = open(s, 'rb')
y = np.fromfile(f, '&lt;l')
y.shape
</code></pre>
<p>This gives a numpy array of shape (16384,). However, when it is run on Python 2.7.6/numpy 1.11.1 it gives an array half the size (8192,). This isn't acceptable for me.</p>
<p>I can't understand why numpy is acting this way with different versions of Python. I would be grateful for any suggestions.</p>
0
2016-10-05T17:23:23Z
39,900,392
<p>Converted from <a href="https://stackoverflow.com/questions/39880198/numpy-fromfile-differences-between-python-2-7-3-and-2-7-6/39900392#comment67047204_39880198">my comment</a>:</p> <p>You're likely running on different Python/OS builds with different notions of how big a <code>long</code> is. C doesn't require a specific <code>long</code> size, and in practice, Windows always treats it as 32 bits, while other common desktop OSes treat it as 32 bits if the OS &amp; Python are built for 32 bit CPUs (ILP32), and 64 bits if built for 64 bit CPUs (LP64).</p> <p>If you want a fixed width type on all OSes, don't use the system-dependent-width types. <a href="https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#arrays-dtypes-constructing" rel="nofollow">Use fixed width types instead</a>. From your comments, the expected behavior is to load 32 bit/4 byte values. If you were just using native endianness, you could just pass <code>numpy.int32</code> (<code>numpy</code> recognizes the raw <code>class</code> as datatypes). Since you want to specify endianness explicitly (perhaps this might run on a big endian system), you can instead pass <code>'&lt;i4'</code> which explicitly states it's a little endian (<code>&lt;</code>) signed integer (<code>i</code>) four bytes in size (<code>4</code>):</p>
<pre><code>import numpy as np

s = 'directory/to/file'
with open(s, 'rb') as f:       # Use with statements when opening files
    y = np.fromfile(f, '&lt;i4')  # Use an explicit fixed-width type
y.shape
</code></pre>
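Editorial note: the fixed-width claim is easy to verify with a self-contained round-trip; the file below is a scratch file written by the example itself, not the `1r` data from the question:

```python
import os
import tempfile

import numpy as np

# '<i4' is 4 bytes on every platform; '<l' follows the platform's C long.
assert np.dtype('<i4').itemsize == 4

# Write 16384 little-endian int32 values, then read them back with the
# same explicit dtype so the element count cannot vary across platforms.
path = os.path.join(tempfile.gettempdir(), 'fromfile_demo.bin')
np.arange(16384, dtype='<i4').tofile(path)

with open(path, 'rb') as f:
    y = np.fromfile(f, '<i4')
print(y.shape)  # (16384,)
```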
0
2016-10-06T15:46:00Z
[ "python", "arrays", "numpy" ]
Zeroconf not found any service
39,880,204
<p>I started to test zeroconf to implement the discovery feature in a plugin I'm developing. At the beginning it was working well, but a few weeks ago it stopped showing any available services.</p>
<p>I thought it was a problem with my device, but the Arduino IDE is showing the mDNS service (I'm using a few NodeMCU devices).</p>
<p>So now I don't know where the problem is. On the zeroconf GitHub they recommended using Wireshark to see what happens in the traffic; however, I don't see anything unusual in it. <a href="http://txt.do/dantz" rel="nofollow">Here</a> is the full log.</p>
<p>I've tested in different environments (Windows and Linux) and both show me the same results (no services).</p>
<p>So now I'm thinking it could be a problem with zeroconf. Can someone point me to the next steps to debug this problem?</p>
0
2016-10-05T17:23:41Z
39,899,706
<p>As you can see in <a href="https://github.com/jstasiak/python-zeroconf/issues/85" rel="nofollow">this</a> GitHub issue, the problem was related to <strong>netifaces</strong>. The solution is:</p> <p>uninstall netifaces: <code>pip uninstall netifaces</code></p> <p>and install version 0.10.4:</p> <p><code>pip install netifaces==0.10.4</code></p> <p>After that you should see your mDNS services again.</p>
1
2016-10-06T15:13:03Z
[ "python", "iot", "nodemcu", "zeroconf", "mdns" ]
pandas dataseries - How to address difference in days
39,880,209
<p>I have a pandas dataframe <code>navTable</code> whose index is a series of dates.</p>
<p>I need to find the difference between consecutive dates in the index:</p>
<pre><code>            Delta
2016-08-10  0.006619
2016-08-12  0.006595
2016-08-14  0.006595
2016-08-17  0.006595
2016-08-18  0.006595
</code></pre>
<p>I want a new column <code>Delta_Days</code> which would give me the difference between subsequent dates (in the index). My output should therefore look like this:</p>
<pre><code>           Delta     Delta_Days
8/10/2016  0.006619  None
8/12/2016  0.006595  2
8/14/2016  0.006595  2
8/17/2016  0.006595  3
8/18/2016  0.006595  1
</code></pre>
<p>I tried this first:</p>
<pre><code>navTable['Index'] = navTable.index
navTable['Days_Diff'] = navTable['Index'] - navTable['Index'].shift(1)
navTable['Delta_Days'] = navTable['Days_Diff'].days
</code></pre>
<p>This was rejected outright: it complains that "days cannot be applied on Series".</p>
<p>So I tried this:</p>
<pre><code>navTable['Index'] = navTable.index
navTable['Days_Diff'] = navTable['Index'] - navTable['Index'].shift(1)
navTable['Delta_Days'] = [eachDayDiff.days for eachDayDiff in list(navTable['Days_Diff'])]
</code></pre>
<p>Understandably, it complains about the first element, which is <code>Null</code>:</p>
<blockquote> <p>'NaTType' object has no attribute 'days'</p> </blockquote>
<p>Question 1) Am I handling this scenario efficiently? Question 2) How do I address</p>
<blockquote> <p>'NaTType' object has no attribute 'days'</p> </blockquote>
<p>For the record, the first element is of type <code>pandas.tslib.NaTType</code>; the rest are of type <code>pandas.tslib.Timedelta</code>.</p>
3
2016-10-05T17:24:15Z
39,881,020
<p>Normally you would use the <code>diff()</code> function to calculate the difference between adjacent elements. Convert the index to a normal Series and then call <code>diff()</code>, which gives a Series of <code>timedelta</code> dtype:</p>
<pre><code>df.index.to_series().diff()

# 2016-08-10      NaT
# 2016-08-12   2 days
# 2016-08-14   2 days
# 2016-08-17   3 days
# 2016-08-18   1 days
# dtype: timedelta64[ns]
</code></pre>
<p>To convert the timedelta data type to a numeric type:</p>
<pre><code>import numpy as np
df['Delta_Days'] = (df.index.to_series().diff() / np.timedelta64(1, 'D')).astype(float)

df
#             Delta     Delta_Days
# 2016-08-10  0.006619  NaN
# 2016-08-12  0.006595  2.0
# 2016-08-14  0.006595  2.0
# 2016-08-17  0.006595  3.0
# 2016-08-18  0.006595  1.0
</code></pre>
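Editorial note: a closely related variant sidesteps the asker's `'NaTType' object has no attribute 'days'` error directly, because the `.dt` accessor works element-wise on a timedelta Series and maps `NaT` to `NaN`. A sketch rebuilding the question's data:

```python
import pandas as pd

idx = pd.to_datetime(['2016-08-10', '2016-08-12', '2016-08-14',
                      '2016-08-17', '2016-08-18'])
df = pd.DataFrame({'Delta': [0.006619, 0.006595, 0.006595,
                             0.006595, 0.006595]}, index=idx)

# .dt.days on a timedelta64 Series: the leading NaT becomes NaN
# instead of raising, and the rest become floats.
df['Delta_Days'] = df.index.to_series().diff().dt.days
print(df)
```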
2
2016-10-05T18:13:10Z
[ "python", "pandas", "time-series" ]
Kivy App build with Buildozer. APK crash
39,880,264
<p>I am using Oracle VirtualBox running Ubuntu 16. I have been able to build APK files for a while, until my latest build. My program runs and keeps its functionality when run with Python 2.7 on the same virtual machine. When I install the .apk file on my Samsung S3 it shows the standard Kivy loading screen, then crashes after around 20 seconds. Please help.</p>
<p>I ran the latest build with verbose output; below is the log file.</p>
<p><a href="https://drive.google.com/open?id=0B1XW1ekAndYiT2NrUTRNeHZhVGc" rel="nofollow">https://drive.google.com/open?id=0B1XW1ekAndYiT2NrUTRNeHZhVGc</a></p>
<p><strong>EDIT</strong></p>
<p>After researching <code>adb logcat</code> I have been able to find this error. It occurs when <code>adb logcat</code> is run against a USB-connected device.</p>
<pre><code>I/python  (29113): Traceback (most recent call last):
I/python  (29113):   File "/home/paul/Desktop/10/.buildozer/android/app/main.py", line 11, in &lt;module&gt;
I/python  (29113):   File "/home/paul/Desktop/10/.buildozer/android/app/_applibs/bs4/__init__.py", line 35, in &lt;module&gt;
I/python  (29113):   File "/home/paul/Desktop/10/.buildozer/android/app/_applibs/bs4/builder/__init__.py", line 315, in &lt;module&gt;
I/python  (29113): ImportError: cannot import name _htmlparser
I/python  (29113): Python for android ended.
</code></pre>
<p><strong>EDIT</strong></p>
<p>Line 11 in main.py is:</p>
<pre><code>from bs4 import BeautifulSoup as bs
</code></pre>
<p>Is there something obvious I'm missing?</p>
-4
2016-10-05T17:26:48Z
39,880,864
<p>Turn USB debugging mode on in your device, connect it to your PC, and then run <code>adb logcat</code>. Run the application on your device and see what is going on in your application and what the reason for the crash is. You could also show us the <code>adb logcat</code> result if you can't figure out the reason.</p>
1
2016-10-05T18:04:26Z
[ "python", "kivy", "android-logcat", "ubuntu-16.04", "buildozer" ]
App Engine development server fails to execute queued task - with no stack trace whatsoever
39,880,267
<p>I'm currently trying to queue tasks in App Engine using the Flask framework, but I'm having some difficulty. I run my code, and it seems like the tasks are queued properly when I check the admin server at localhost:8000/taskqueue. However, the console repeatedly prints the following error:</p>
<pre><code>WARNING  2016-10-05 17:08:09,560 taskqueue_stub.py:1981] Task task1 failed to execute. This task will retry in 0.100 seconds
</code></pre>
<p>Furthermore, it doesn't seem like the desired code is being executed.</p>
<p>My question is, why isn't my code working? I apologize for the very broad question, but there's no stack trace to guide me to something more specific. I've simplified my code to make the error reproducible. The code below should print the phrase "sample task" onto the console 5 times. However, this does not occur.</p>
<pre><code># main.py
from google.appengine.api.taskqueue import taskqueue
from flask import Flask, Response

app = Flask(__name__)

@app.route("/get")
def get():
    for i in range(5):
        # attempt to execute the desired function 5 times
        # the message "sample task" should be printed to the console five times
        task = taskqueue.add(
            queue_name='my-queue',
            url='/sample_task',
        )
        message += 'Task {} enqueued, ETA {}.&lt;br&gt;'.format(task.name, task.eta)
    response = Response(message)
    return response

@app.route("/sample_task")
def sample_task():
    message = "sample task"
    print (message)
    return Response(message)

if __name__ == "__main__":
    app.run()
</code></pre>
<p>app.yaml</p>
<pre><code># app.yaml
runtime: python27
api_version: 1
threadsafe: true

# [START handlers]
handlers:
- url: /sample_task
  script: main.app
  login: admin
- url: /get
  script: main.app
  login: admin
</code></pre>
<p>queue.yaml</p>
<pre><code># queue.yaml
queue:
- name: my-queue
  rate: 1/s
  bucket_size: 40
  max_concurrent_requests: 1
</code></pre>
0
2016-10-05T17:26:58Z
39,880,600
<p>I've found the answer here: <a href="https://stackoverflow.com/a/13552794">https://stackoverflow.com/a/13552794</a></p>
<p>Apparently, all I have to do is add the POST method to whichever handler is called for queuing.</p>
<p>So</p>
<pre><code>@app.route("/sample_task")
def sample_task():
    ...
</code></pre>
<p>should instead be:</p>
<pre><code>@app.route("/sample_task", methods=['POST'])
def sample_task():
    ...
</code></pre>
0
2016-10-05T17:48:05Z
[ "python", "google-app-engine", "flask", "google-app-engine-python" ]
pyspark: TypeError: condition should be string or Column
39,880,269
<p>I am trying to filter a Spark DataFrame like below:</p>
<pre><code>spark_df = sc.createDataFrame(pandas_df)
spark_df.filter(lambda r: str(r['target']).startswith('good'))
spark_df.take(5)
</code></pre>
<p>But I got the following errors:</p>
<pre><code>TypeErrorTraceback (most recent call last)
&lt;ipython-input-8-86cfb363dd8b&gt; in &lt;module&gt;()
      1 spark_df = sc.createDataFrame(pandas_df)
----&gt; 2 spark_df.filter(lambda r: str(r['target']).startswith('good'))
      3 spark_df.take(5)

/usr/local/spark-latest/python/pyspark/sql/dataframe.py in filter(self, condition)
    904             jdf = self._jdf.filter(condition._jc)
    905         else:
--&gt; 906             raise TypeError("condition should be string or Column")
    907         return DataFrame(jdf, self.sql_ctx)
    908

TypeError: condition should be string or Column
</code></pre>
<p>Any idea what I missed? Thank you!</p>
0
2016-10-05T17:27:11Z
39,884,079
<p><code>DataFrame.filter</code>, which is an alias for <code>DataFrame.where</code>, expects a SQL expression expressed either as a <code>Column</code>:</p>
<pre><code>from pyspark.sql.functions import col

spark_df.filter(col("target").like("good%"))
</code></pre>
<p>or an equivalent SQL string:</p>
<pre><code>spark_df.filter("target LIKE 'good%'")
</code></pre>
<p>I believe you're trying here to use <code>RDD.filter</code>, which is a completely different method:</p>
<pre><code>spark_df.rdd.filter(lambda r: r['target'].startswith('good'))
</code></pre>
<p>and does not benefit from SQL optimizations.</p>
2
2016-10-05T21:33:57Z
[ "python", "pandas", "pyspark", "spark-dataframe", "rdd" ]
pyspark: TypeError: condition should be string or Column
39,880,269
<p>I am trying to filter a Spark DataFrame like below:</p>
<pre><code>spark_df = sc.createDataFrame(pandas_df)
spark_df.filter(lambda r: str(r['target']).startswith('good'))
spark_df.take(5)
</code></pre>
<p>But I got the following errors:</p>
<pre><code>TypeErrorTraceback (most recent call last)
&lt;ipython-input-8-86cfb363dd8b&gt; in &lt;module&gt;()
      1 spark_df = sc.createDataFrame(pandas_df)
----&gt; 2 spark_df.filter(lambda r: str(r['target']).startswith('good'))
      3 spark_df.take(5)

/usr/local/spark-latest/python/pyspark/sql/dataframe.py in filter(self, condition)
    904             jdf = self._jdf.filter(condition._jc)
    905         else:
--&gt; 906             raise TypeError("condition should be string or Column")
    907         return DataFrame(jdf, self.sql_ctx)
    908

TypeError: condition should be string or Column
</code></pre>
<p>Any idea what I missed? Thank you!</p>
0
2016-10-05T17:27:11Z
39,899,755
<p>I have been through this and settled on using a UDF:</p>
<pre><code>from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

filtered_df = spark_df.filter(udf(lambda target: target.startswith('good'),
                                  BooleanType())(spark_df.target))
</code></pre>
<p>More readable would be to use a normal function definition instead of the lambda.</p>
0
2016-10-06T15:15:17Z
[ "python", "pandas", "pyspark", "spark-dataframe", "rdd" ]
Pybot - how to specify an xpath for an "a href" link
39,880,398
<p>I am trying to run a very simple test with pybot using an XPath, but for some reason it keeps saying that my XPath is not valid, even though I am following the documentation straight from <a href="http://robotframework.org/Selenium2Library/Selenium2Library.html" rel="nofollow">http://robotframework.org/Selenium2Library/Selenium2Library.html</a></p>
<p>This is all I have in my test:</p>
<pre><code>Temp test
    Set Selenium Timeout    60s
    Set Selenium Speed    1s
    Open Browser    http://google.com    chrome
    Click Link    xpath=//a[@href=‘https://mail.google.com/mail/?tab=wm&amp;authuser=0']
    Sleep    3s
    Close All Browsers
</code></pre>
<p>But for whatever reason it keeps complaining:</p>
<p>Message: invalid selector: Unable to locate an element with the xpath expression //a[@href=‘/‘] because of the following error: SyntaxError: Failed to execute 'evaluate' on 'Document': The string '//a[@href=‘/‘]' is not a valid XPath expression.</p>
<p>I have seen other people follow the same format with no issues. I have also never had issues with XPath expressions for Selenium with Java in the past.</p>
-2
2016-10-05T17:35:38Z
39,880,497
<p>Notice how your opening quote is different from the closing one?</p> <p>Change the <code>‘</code> in your syntax to a proper <code>'</code> -- and check that the close-quote in your real code is also a genuine ASCII-character-39 single-quote, not some fancy Unicode curly created by being copied/pasted through Microsoft applications or such.</p>
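Editorial note: this class of bug (smart quotes smuggled in by copy/paste) is easy to detect mechanically. The helper below is hypothetical, not part of Selenium2Library; it scans a locator string for any character outside ASCII:

```python
def non_ascii_chars(s):
    """Return (index, char, codepoint) for each non-ASCII character in s."""
    return [(i, ch, 'U+%04X' % ord(ch)) for i, ch in enumerate(s) if ord(ch) > 127]

# The failing locator from the question: both quotes are U+2018,
# a curly opening single quote, not the ASCII apostrophe U+0027.
locator = u"//a[@href=\u2018https://mail.google.com/\u2018]"
print(non_ascii_chars(locator))
```

Running this on a locator before handing it to Selenium makes the offending characters and their positions obvious at a glance.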
2
2016-10-05T17:42:28Z
[ "python", "selenium", "xpath", "robotframework" ]
Testing students' code in Jupyter with a unittest
39,880,411
<p>I'd like my students to be able to check their code as they write it in a Jupyter Notebook by calling a function from an imported module which runs a unittest. This works fine unless the function needs to be checked against objects which are to be picked up in the global scope of the Notebook.</p>
<p>Here's my <code>check_test</code> module:</p>
<pre><code>import unittest
from IPython.display import Markdown, display

def printmd(string):
    display(Markdown(string))

class Tests(unittest.TestCase):

    def check_add_2(self, add_2):
        val = 5
        self.assertAlmostEqual(add_2(val), 7)

    def check_add_n(self, add_n):
        n = 6
        val = 5
        self.assertAlmostEqual(add_n(val), 11)

check = Tests()

def run_check(check_name, func, hint=False):
    try:
        getattr(check, check_name)(func)
    except check.failureException as e:
        printmd('**&lt;span style="color: red;"&gt;FAILED&lt;/span&gt;**')
        if hint:
            print('Hint:', e)
        return
    printmd('**&lt;span style="color: green;"&gt;PASSED&lt;/span&gt;**')
</code></pre>
<p>If the Notebook is:</p>
<pre><code>In [1]: def add_2(val):
            return val + 2

In [2]: def add_n(val):
            return val + n

In [3]: import test_checks

In [4]: test_checks.run_check('check_add_2', add_2)
PASSED

In [5]: test_checks.run_check('check_add_n', add_n)
!!! ERROR !!!
</code></pre>
<p>The error here is not surprising: <code>add_n</code> doesn't know about the <code>n</code> I defined in <code>check_add_n</code>.</p>
<p>So I got to thinking I could do something like:</p>
<pre><code>In [6]: def add_n(val, default_n=None):
            if default_n:
                n = default_n
            return val + n
</code></pre>
<p>in the Notebook, and then pass <code>n</code> in the test:</p>
<pre><code>def check_add_n(self, add_n):
    val = 5
    self.assertAlmostEqual(add_n(val, 6), 11)
</code></pre>
<p>But this is causing me <code>UnboundLocalError</code> headaches down the line because of the assignment to <code>n</code>, even within an <code>if</code> clause: this apparently stops the Notebook from picking up <code>n</code> in the global scope when it's needed.</p>
<p>For the avoidance of doubt, I don't want to insist that <code>n</code> is passed as an argument to <code>add_n</code>: there could be many such objects used but not changed by the function being tested, and I want them resolved in the outer scope.</p>
<p>Any ideas how to go about this?</p>
3
2016-10-05T17:36:56Z
39,880,566
<p>You can <code>import __main__</code> to access the notebook scope:</p> <pre><code>import unittest from IPython.display import Markdown, display import __main__ def printmd(string): display(Markdown(string)) class Tests(unittest.TestCase): def check_add_2(self, add_2): val = 5 self.assertAlmostEqual(add_2(val), 7) def check_add_n(self, add_n): __main__.n = 6 val = 5 self.assertAlmostEqual(add_n(val), 11) check = Tests() def run_check(check_name, func, hint=False): try: getattr(check, check_name)(func) except check.failureException as e: printmd('**&lt;span style="color: red;"&gt;FAILED&lt;/span&gt;**') if hint: print('Hint:', e) return printmd('**&lt;span style="color: green;"&gt;PASSED&lt;/span&gt;**') </code></pre> <p>This gives me a <code>PASSED</code> output.</p> <hr> <p>This works because when you execute a python file that file is stored in <code>sys.modules</code> as the <code>__main__</code> module. This is precisely why the <code>if __name__ == '__main__':</code> idiom is used. It is possible to import such module and since it is already in the module cache it will not re-execute it or anything.</p>
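Why this works: a function resolves free variables like <code>n</code> in the globals of the module where it was defined. Here is a self-contained sketch of that mechanism, using a throwaway module object to stand in for the notebook's <code>__main__</code>:

```python
import types

# A throwaway module object standing in for the notebook's __main__.
notebook = types.ModuleType('notebook')

# Define add_n "inside" that module: its free variable n will be looked
# up in notebook's global namespace at call time.
exec("def add_n(val): return val + n", notebook.__dict__)

def check_add_n(mod):
    mod.n = 6                 # inject n into the defining module's globals
    assert mod.add_n(5) == 11

check_add_n(notebook)
print("PASSED")  # prints PASSED
```

Replacing <code>notebook</code> with the real <code>__main__</code> module gives exactly the pattern in the answer above.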
2
2016-10-05T17:46:30Z
[ "python", "unit-testing", "python-3.x", "jupyter-notebook" ]
some python exercises and need some input
39,880,495
<p>This is the exercise.</p> <blockquote> <p>Write a program that repeatedly prompts a user for integer numbers until the user enters 'done'. Once 'done' is entered, print out the largest and smallest of the numbers. If the user enters anything other than a valid number catch it with a try/except and put out an appropriate message and ignore the number. Enter the numbers from the book for problem 5.1 and Match the desired output as shown.</p> </blockquote> <p>The result should be: </p> <pre><code>Invalid input Maximum is 7 Minimum is 4 </code></pre> <p>My code:</p> <pre><code>largest = None smallest = None while True: num = raw_input("Enter a number: ") if num == "done" : break if len(num) &lt; 1 : break try : num = int(num) except : print "Invalid input" continue print "Maximum", largest print "Minimum", smallest </code></pre> <p>Why is the program not printing out the largest and smallest?<br> What am I doing wrong?</p>
-3
2016-10-05T17:42:22Z
39,880,646
<p>You never update <code>largest</code> and <code>smallest</code> inside the loop, so they keep their initial value of <code>None</code>.</p> <pre><code>largest = float('-inf') # Always smaller than any number smallest = float('inf') # Always larger than any number while True: num = raw_input("Enter a number: ") if num == "done" : break if len(num) &lt; 1 : break try : num = int(num) except : print "Invalid input" continue # update largest and smallest; the initial +/-inf # guarantees the first valid entry replaces them largest = max(largest, num) smallest = min(smallest, num) print "Maximum", largest print "Minimum", smallest </code></pre>
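For completeness, the same accept/skip pattern in Python 3 syntax (the answer above is Python 2), driven by a fixed token list instead of <code>raw_input</code> so the result can be checked; this is a sketch, not the book's exact program:

```python
def min_max(tokens):
    largest = float('-inf')   # smaller than any number
    smallest = float('inf')   # larger than any number
    for token in tokens:
        if token == "done":
            break
        try:
            num = int(token)
        except ValueError:
            print("Invalid input")
            continue
        largest = max(largest, num)
        smallest = min(smallest, num)
    return smallest, largest

# prints "Invalid input" for bob, then (2, 10)
print(min_max(["7", "2", "bob", "10", "4", "done"]))
```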
0
2016-10-05T17:50:58Z
[ "python", "python-2.7" ]
str.endswith() returns False in valid check
39,880,531
<p><strong>I don't need an alternate solution.</strong> </p> <p>I'm using Python 2.5.4 and want to know why this happens.</p> <p>I am writing a source parser for makefiles.</p> <pre><code>ff = open("module.mk") f = ff.readlines() ff.close() for i in f: if ".o \\" in i[-5:]: print "Is %s for str: %s" %(i.endswith('.o \\'), i) </code></pre> <p>I got:</p> <pre class="lang-none prettyprint-override"><code>Is False for str: bitmap.o \ </code></pre> <p>And so for every check.</p> <p>You can get module.mk from <a href="https://github.com/scummvm/scummvm/blob/master/engines/access/module.mk" rel="nofollow">github</a></p>
-2
2016-10-05T17:44:05Z
39,880,662
<p>When you use <code>.readlines()</code> it includes the newline character in the line, CR-LF in this case.</p> <p>You need to remove that newline before checking <code>.endswith()</code> as such:</p> <pre><code>with open("module.mk") as data: for i in data.readlines(): if ".o \\" in i[-5:]: print "Is %s for str: %s" %(i.strip().endswith('.o \\'), i) </code></pre> <p>Note: The <code>.readlines()</code> call isn't needed here, I'm just keeping it in so that the behaviour remains the same as OP's code.</p>
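One caveat worth noting: <code>.strip()</code> also removes leading whitespace (the tabs that makefiles rely on). If that matters, <code>rstrip('\r\n')</code> removes only the line ending. A small illustration with a hand-made line (not the real module.mk):

```python
line = "\tbitmap.o \\\r\n"   # a makefile line as readlines() returns it

assert not line.endswith(".o \\")             # the trailing CR-LF gets in the way
assert line.rstrip("\r\n").endswith(".o \\")  # passes once the line ending is gone
assert line.rstrip("\r\n").startswith("\t")   # the leading tab survives
```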
1
2016-10-05T17:51:39Z
[ "python", "parsing", "windows-xp" ]
Using SoftLayer Python API placeOrder to add disks to vm
39,880,557
<p>I've created a softlayer VM using a custom image template. I am able to add SAN disks to my vm using curl but I'm unsuccessful trying to do this with the Python SoftLayer library. I receive the following error:</p> <pre><code>SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SoftLayer_Exception_Order_InvalidContainer): Invalid container specified: SoftLayer_Container_Product_Order. Ordering a server or service requires a specific container type, not the generic base order container. </code></pre> <p>Here is my code:</p> <pre><code>self.client = SoftLayer.Client(username='myusername@email.com', api_key='key') console_id = 11111111 order = { "parameters": [{ "virtualGuests": [{"id": console_id}], "prices": [{ "id": 113031, "categories": [{ "categoryCode": "guest_disk1", "complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" }, { "id": 112707, "categories": [{ "categoryCode": "guest_disk2", "complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" } ], "properties": [ {"name": "NOTE_GENERAL", "value": "adding disks"}, {"name": "MAINTENANCE_WINDOW", "value": "now"} ], "complexType": "SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade" }] } # response = self.client['Product_Order'].placeOrder(order, False) response = self.client.call('Product_Order', 'placeOrder', order) print response </code></pre> <p>If I run the following curl command however my vm updates are successful:</p> <pre><code>curl -X POST --data @updatefile https://myusername%40email%2Ecom:apikey@api.softlayer.com/rest/v3.1/SoftLayer_Product_Order/placeOrder </code></pre> <p>Contents of updatefile:</p> <pre><code>{ "parameters": [{ "virtualGuests":[{"id":11111111}], "prices": [{ "id": 113031, "categories": [{ "categoryCode": "guest_disk1", "complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" }, { "id": 112707, "categories": [{ "categoryCode": "guest_disk2", 
"complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" } ], "properties": [ {"name": "NOTE_GENERAL","value": "adding disks"}, {"name": "MAINTENANCE_WINDOW","value": "now"} ], "complexType": "SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade" }] } </code></pre> <p>Any idea what I'm doing wrong?</p>
0
2016-10-05T17:45:57Z
39,880,699
<p>Yep, that's because when you are using the SoftLayer Python client you do not have to specify the "parameters" property in your order; that wrapper is only for RESTful requests. Remove it and it should work.</p> <p>Try this:</p> <pre><code>self.client = SoftLayer.Client(username='myusername@email.com', api_key='key') console_id = 11111111 order = { "virtualGuests": [{"id": console_id}], "prices": [{ "id": 113031, "categories": [{ "categoryCode": "guest_disk1", "complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" }, { "id": 112707, "categories": [{ "categoryCode": "guest_disk2", "complexType": "SoftLayer_Product_Item_Category" }], "complexType": "SoftLayer_Product_Item_Price" } ], "properties": [ {"name": "NOTE_GENERAL", "value": "adding disks"}, {"name": "MAINTENANCE_WINDOW", "value": "now"} ], "complexType": "SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade" } response = self.client['Product_Order'].placeOrder(order) print response </code></pre> <p>Regards</p>
0
2016-10-05T17:53:59Z
[ "python", "curl", "softlayer" ]
In Pandas, how to delete rows from a Data Frame based on another Data Frame?
39,880,627
<p>I have 2 Data Frames, one named USERS and another named EXCLUDE. Both of them have a field named "email".</p> <p>Basically, I want to remove every row in USERS that has an email contained in EXCLUDE.</p> <p>How can I do it?</p>
2
2016-10-05T17:50:01Z
39,881,230
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> and condition with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>, inverting boolean <code>Series</code> is by <code>~</code>:</p> <pre><code>import pandas as pd USERS = pd.DataFrame({'email':['a@g.com','b@g.com','b@g.com','c@g.com','d@g.com']}) print (USERS) email 0 a@g.com 1 b@g.com 2 b@g.com 3 c@g.com 4 d@g.com EXCLUDE = pd.DataFrame({'email':['a@g.com','d@g.com']}) print (EXCLUDE) email 0 a@g.com 1 d@g.com </code></pre> <pre><code>print (USERS.email.isin(EXCLUDE.email)) 0 True 1 False 2 False 3 False 4 True Name: email, dtype: bool print (~USERS.email.isin(EXCLUDE.email)) 0 False 1 True 2 True 3 True 4 False Name: email, dtype: bool print (USERS[~USERS.email.isin(EXCLUDE.email)]) email 1 b@g.com 2 b@g.com 3 c@g.com </code></pre> <hr> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p> <pre><code>df = pd.merge(USERS, EXCLUDE, how='outer', indicator=True) print (df) email _merge 0 a@g.com both 1 b@g.com left_only 2 b@g.com left_only 3 c@g.com left_only 4 d@g.com both print (df.ix[df._merge == 'left_only', ['email']]) email 1 b@g.com 2 b@g.com 3 c@g.com </code></pre>
2
2016-10-05T18:26:49Z
[ "python", "pandas" ]
pinging websites from array in a loop
39,880,655
<pre><code>import subproccess import sys mylist= ['google.com','bbc.com','yahoo.com','gmail.com','hotmail.com', 'amazon.com'] for ping in mylist(0,5): result = os.system("ping %s" % ping) result.stdout=open("test.txt","w") result.stdout.close() </code></pre> <p>Can someone please find the mistake in my code? I want to call this script from cmd terminal. The aim of this code is to ping each of the website every time and then write the result in a text file. I'm quite new with python and I don't know how to build a proper code.</p>
-1
2016-10-05T17:51:28Z
39,880,991
<ul> <li><code>os</code> is not imported</li> <li>mylist = ['google.com','bbc.com','yahoo.com','gmail.com','hotmail.com','amazon.com']</li> <li>for ping in mylist: (a list is not callable, so <code>mylist(0,5)</code> raises a TypeError; to take a slice use <code>mylist[0:5]</code>)</li> <li>result = os.system("ping -c 1 %s" % ping) (note that <code>os.system</code> returns the exit status, 0 on success, not the ping output)</li> <li><p>f = open("test.txt","w") (open the file before entering the loop, else it would rewrite the file every time the loop iterates)</p> <pre><code>import os mylist= ['google.com','bbc.com','yahoo.com','gmail.com','hotmail.com', 'amazon.com'] f = open("test.txt", 'w') for ping in mylist: result = os.system("ping -c 1 %s" % ping) f.write(ping + " : " + str(result) + "\n") f.close() </code></pre></li> </ul>
0
2016-10-05T18:11:33Z
[ "python", "arrays", "loops", "cmd", "ping" ]
How to iterate over ResultProxy many times?
39,880,685
<p>Suppose I execute the following query:</p> <pre><code>results = db.engine.execute(sql_query) </code></pre> <p>where it returns rows as expected: </p> <pre><code>&gt;&gt;&gt; for record in results: ... print(record['path']) ... Top.Collections.Pictures.Astronomy.Stars Top.Collections.Pictures.Astronomy.Galaxies Top.Collections.Pictures.Astronomy.Astronauts </code></pre> <p>When I try to iterate over it a second or third time the object is empty:</p> <pre><code>&gt;&gt;&gt; for record in results: ... print(record['path']) ... &gt;&gt;&gt; </code></pre> <p><strong>How can I save and reuse the ResultProxy to iterate over it many times?</strong></p>
0
2016-10-05T17:53:17Z
39,880,718
<p>You probably have an <em>iterator</em> object being returned, so once it's exhausted after iterating over it, you can't go over it again. Assuming your results are not extremely large, you can do:</p> <pre><code>results = db.engine.execute(sql_query) results = list(results) </code></pre> <p>That turns the iterator object into a list and you can iterate over it as many times as you would.</p>
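The exhaustion is plain Python iterator behaviour, not something SQLAlchemy-specific; <code>list()</code> materialises the rows so they can be traversed repeatedly. A generic illustration (dicts standing in for the row objects):

```python
rows = iter([{"path": "Top.A"}, {"path": "Top.B"}])  # stands in for a ResultProxy

results = list(rows)   # materialise once

first_pass = [r["path"] for r in results]
second_pass = [r["path"] for r in results]
assert first_pass == second_pass == ["Top.A", "Top.B"]

# Without list(), a second pass over the raw iterator yields nothing:
rows = iter([1, 2, 3])
assert list(rows) == [1, 2, 3]
assert list(rows) == []
```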
1
2016-10-05T17:55:15Z
[ "python", "sqlalchemy", "resultproxy" ]
Python Socket - Networking
39,880,806
<p>I am unable to figure out, whats wrong with my code here. I used pycharm debugger and found out that there is something wrong in my server code - with the command <code>clients_name.append(name)</code>.</p> <p>My server Code: </p> <h1>Server.py</h1> <pre><code>import socket import time import numpy as np import array host = '127.0.0.1' port = 5000 clients_name = ['NONE'] clients_addr = [] s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # Creating Socket OBJECT s.bind((host, port)) s.setblocking(0) quitting = False print "Server Started" while not quitting: try: data, addr = s.recvfrom(1024) print data for client in clients_addr: print "hello" s.sendto(clients_name, client) if int(data[0]) == 1: name = str(data[1]) print name, type(name) if name not in clients_name: clients_name.append(name) print clients_name if addr not in clients_addr: clients_addr.append(addr) print data[1] print time.ctime(time.time()) + str(addr) + ": :" + str(data[1]) elif int(data[0]) ==0: name = str(data[1]) clients_name.remove(name) clients_addr.remove(addr) except: pass s.close() </code></pre> <p>My Client Code: </p> <h1>Client.py</h1> <pre><code>import socket, threading, time, wx from wx.lib.pubsub import setupkwargs from wx.lib.pubsub import pub from Communication import ReceiveData, SendData import numpy as np class windowClass(wx.Frame): def __init__(self, parent, title): global appSize_x global appSize_y appSize_x = 1100 appSize_y = 800 super(windowClass, self).__init__(parent, title = title, style = wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CLOSE_BOX |wx.CAPTION, size = (appSize_x, appSize_y)) self.basicGUI() self.Centre() self.Show() def basicGUI(self): # Font for all the text in the panel font = wx.Font(12, wx.ROMAN, wx.NORMAL, wx.BOLD) font2 = wx.Font(36, wx.MODERN, wx.ITALIC, wx.LIGHT) # Main Panel panel = wx.Panel(self) panel.SetBackgroundColour('white') self.title = wx.StaticText(panel, -1, "Work Transfer Application", pos=(150, 20)) self.title.SetFont(font2) 
self.title.SetForegroundColour("RED") self.name_text_ctrl = wx.TextCtrl(panel, -1, size=(150, 40), pos=(100, 200)) self.name_text_ctrl.SetFont(font) self.refresh = wx.Button(panel, -1, 'REFRESH', size = (225,30), pos = (100, 300)) self.refresh.SetFont(font) self.refresh.SetBackgroundColour('YELLOW') self.refresh.Bind(wx.EVT_BUTTON, self.OnRefresh) self.free_button = wx.Button(panel, -1, 'I AM FREE', size = (225,30), pos = (100, 400)) self.free_button.SetFont(font) self.free_button.SetBackgroundColour('GREEN') self.free_button.Bind(wx.EVT_BUTTON, self.OnFree) self.got_work_button = wx.Button(panel, -1, ' GOT WORK', size = (225, 30), pos = (100, 600)) self.got_work_button.SetFont(font) self.got_work_button.SetBackgroundColour('RED') self.got_work_button.Bind(wx.EVT_BUTTON, self.OnGotWork) self.got_work_button.Disable() self.listbox = wx.ListBox(panel, -1, size=(300, 250), pos=(500, 200)) self.listbox.SetFont(font) def OnRefresh(self, event): self.host = '127.0.0.1' self.port = 0 self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.s.bind((self.host, self.port)) self.s.setblocking(0) self.rT = threading.Thread(target=ReceiveData, name = "Receive Thread", args=(self.s,)) self.rT.daemon = True self.rT.start() pub.subscribe(self.ReadEvent, "READ EVENT") def ReadEvent(self,arg1): self.people_list = arg1 # This goes to list box list_len = len(self.people_list) for i in range(list_len): self.listbox.Append(self.people_list[i]) def OnFree(self, event): self.server = ('127.0.0.1', 5000) # This you UPDATE ONCE YOUR IP IS FIXED self.flag = np.array([ '1', str(self.name_text_ctrl.GetValue())]) self.rS1 = threading.Thread(target = SendData, name = "Send Thread", args = (self.s,self.server, self.flag)) self.rS1.start() self.got_work_button.Enable() self.rS1.join() print "Hello" def OnGotWork(self, event): self.flag = np.array([ '0', str(self.name_text_ctrl.GetValue())]) self.rS2 = threading.Thread(target=SendData, name="Send Thread", args=(self.s, self.server, 
self.flag)) self.rS2.start() self.rS2.join() self.s.close() self.Close(True) def main(): app = wx.App() windowClass(None, title = 'Wagner SprayTech V2.0') app.MainLoop() if __name__ == '__main__': # if we're running file directly and not importing it main() # run the main function </code></pre> <p>Please dont mind the Graphics of the GUI, I am mainly looking for a functioning application. </p> <p>My idea is to have a client code - when I click Refresh - it gets all the <code>clients_name</code> from the server. When I click Free, it sends the name I enter in the textbox to the server. </p> <pre><code>import threading, wx import socket, time from wx.lib.pubsub import setupkwargs from wx.lib.pubsub import pub import numpy as np shutdown = False lock = threading.Lock() def ReceiveData(sock): while not shutdown: try: lock.acquire() #while True: data, addr = sock.recvfrom(1024) wx.CallAfter(pub.sendMessage, "READ EVENT", arg1 = data) print str(data) + "hehehe" # Thisd data will be posted on a list box. except: pass finally: lock.release() time.sleep(1) def SendData(s, serv, data_send): lock.acquire() s.sendto(data_send, serv) lock.release() time.sleep(0.2) </code></pre> <p>I would like the code to behave as following: </p> <ol> <li>Run the server and client (opens the GUI)</li> <li>The user hits Refresh - it get data from the server - which is <code>clients_name</code>. This should be printed on the listbox, I have on the GUI</li> <li>Then the user enters a name in the text Box and presses, button " I am free" - this should take the name from the textBox and send it to the server. </li> <li>The server then reads the name, adds to <code>clients_name</code> variable. Then sends it back (regularly - as it is a while loop). This information as said in step 2 - will be displayed on listbox. </li> </ol>
1
2016-10-05T18:01:13Z
39,905,391
<p>This program starts a TCP server that handles each connection in a separate thread. The client connects to the server, gets the client list, and prints it. The client list is transferred as JSON. The receive buffers are set to 1024 bytes, so a client list longer than this will not be completely received.</p> <p><strong>client_list.py</strong></p> <pre><code>import argparse import json import socket import threading def handle_client(client_list, conn, address): name = conn.recv(1024) entry = dict(zip(['name', 'address', 'port'], [name, address[0], address[1]])) client_list[name] = entry conn.sendall(json.dumps(client_list)) conn.shutdown(socket.SHUT_RDWR) conn.close() def server(client_list): print "Starting server..." s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(('127.0.0.1', 5000)) s.listen(5) while True: (conn, address) = s.accept() t = threading.Thread(target=handle_client, args=(client_list, conn, address)) t.daemon = True t.start() def client(name): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('127.0.0.1', 5000)) s.send(name) data = s.recv(1024) result = json.loads(data) print json.dumps(result, indent=4) def parse_arguments(): parser = argparse.ArgumentParser() parser.add_argument('-c', dest='client', action='store_true') parser.add_argument('-n', dest='name', type=str, default='name') result = parser.parse_args() return result def main(): client_list = dict() args = parse_arguments() if args.client: client(args.name) else: try: server(client_list) except KeyboardInterrupt: print "Keyboard interrupt" if __name__ == '__main__': main() </code></pre> <p><strong>Server Output</strong></p> <pre><code>$ python client_list.py Starting server... 
</code></pre> <p><strong>Client Output</strong></p> <pre><code>$ python client_list.py -c -n name1 { "name1": { "address": "127.0.0.1", "port": 62210, "name": "name1" } } </code></pre> <p>This code is a proof-of-concept and should not be used as-is.</p>
1
2016-10-06T20:50:25Z
[ "python", "multithreading", "sockets" ]
how to count output and make newline every 7 characters?
39,880,880
<p>I just learned the basics of Python. While searching Google for cool things to do with Python I found this pdf: <a href="http://usingpython.com/dl/Binary_Images.pdf" rel="nofollow">Binary_Image</a> (converting 1/0 to * and spaces). This pdf has a Challenge section which says:</p> <p>"Modify your program so that it has a display width of 6 characters. You could create a new variable called “position”, and add 1 to it for every character the user enters, printing a “newline” whenever the position reaches 6."</p> <p>What I understood of the challenge is: make a variable to count <code>img_out</code>, then every 6 characters add a newline.</p> <p>What I don't understand is how to use the "position" variable, so I tried this code:</p> <pre><code>#get a binary number from the user img_in = input("Enter your b&amp;w bitmap image :") #initially, there is no output img_out = "" #loop through each character in the binary input for character in img_in: #add a star(*) to the output if a 1 is found if character == "1": img_out = img_out + "*" #otherwise, add a space else: img_out = img_out + " " #count the img_out if len(img_out) &gt;= "7": img_out = img_out + "\n" else: img_out = img_out #print the image to the screen print(img_out) </code></pre> <p>When I run the code with <code>cmd /k</code> using <code>python path/to/file.py</code>:</p> <pre><code>Enter your b&amp;w bitmap image :11111101101101 Traceback (most recent call last): File "C:\Users\saber\Desktop\testing.py", l if len(img_out) &gt;= "7": TypeError: unorderable types: int() &gt;= str() </code></pre> <p>If anyone can help me with how to solve this challenge that would be great, thanks in advance.</p> <p>P.S : I use Python 3.5.2 on Windows</p>
0
2016-10-05T18:05:18Z
39,880,910
<p>Change:</p> <pre><code>if len(img_out) &gt;= "7": img_out = img_out + "\n" </code></pre> <p>To:</p> <pre><code>if len(img_out) &gt;= 7: img_out = img_out + "\n" </code></pre> <p><strong>UPDATE</strong>:</p> <pre><code>#get a binary number from the user img_in = input("Enter your b&amp;w bitmap image :") #initially, there is no output img_out = "" #loop through each character in the binary input for character in img_in: #add a star(*) to the output if a 1 is found if character == "1": img_out = img_out + "*" #otherwise, add a space elif character == '0': img_out = img_out + " " if len(img_out.replace('\n', '')) % 7 == 0: img_out = img_out + "\n" #print the image to the screen print(img_out) </code></pre> <p>In your code, you were just testing whether the string was longer than 7, so once the length passed 7 every remaining character would add a line break. In my code, I'm testing whether the number of characters written so far is divisible by 7, so a line break is added exactly after every seventh character. I'm also using <code>replace</code> because <code>'\n'</code> is itself a character and shouldn't be counted when doing this.</p> <p>You can check more about modulus <a href="http://www.tutorialspoint.com/python/python_basic_operators.htm" rel="nofollow">here</a> and about replace <a href="https://www.tutorialspoint.com/python/string_replace.htm" rel="nofollow">here</a>.</p>
0
2016-10-05T18:07:12Z
[ "python", "io", "binary" ]
In-house made package and environmental variables link
39,880,906
<p>I created a Python package for in-house use which relies upon some environment variables (namely, the user and password to enter an online database). For my company, the convenience of installing a package rather than having the code in every project is significant, as the functions inside are used in completely separate projects and maintainability is a primary issue.</p> <p>So, how do I "link" the package with the environment variables?</p>
0
2016-10-05T18:06:56Z
39,918,729
<p>I think I'll just detail in the readme file what to insert and where. I was looking for a complicated solution when it was really simple and straightforward.</p>
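If it helps future readers: besides documenting the variables in the README, the package can read them at runtime with <code>os.environ</code> and fail loudly when they are missing. A minimal sketch; the names <code>DB_USER</code> and <code>DB_PASSWORD</code> are invented for illustration:

```python
import os

def db_credentials():
    """Read the database user/password, failing loudly when unset."""
    try:
        return os.environ["DB_USER"], os.environ["DB_PASSWORD"]
    except KeyError as missing:
        raise RuntimeError(
            "Set the %s environment variable as described in the README" % missing
        )

# Simulating what a user would export in their shell before running:
os.environ["DB_USER"] = "alice"
os.environ["DB_PASSWORD"] = "secret"
assert db_credentials() == ("alice", "secret")
```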
0
2016-10-07T13:41:49Z
[ "python", "package", "environment-variables" ]
Compute the median of dynamic time series
39,880,920
<p>If I have a pandas series [a1,a2,a3,a4,...] with length = T. Each value corresponds to one day. For each day, I would like to compute the historical median. For example, the first day compute the median of [a1]; the second day compute the median of [a1,a2]; the nth day compute the median of [a1,a2,...,an]. Finally I would like to get a series with length = T as well. Do we have an efficient way to do this in pandas? Thanks!</p>
0
2016-10-05T18:07:46Z
39,881,038
<p>For a Series, <code>ser</code>:</p> <pre><code>ser = pd.Series(np.random.randint(0, 100, 10)) </code></pre> <p>If your pandas version is 0.18.0 or above, use:</p> <pre><code>ser.expanding().median() Out: 0 0.0 1 25.0 2 50.0 3 36.5 4 33.0 5 36.0 6 33.0 7 36.0 8 33.0 9 36.0 dtype: float64 </code></pre> <p>The following is for earlier versions and deprecated:</p> <pre><code>pd.expanding_median(ser) C:\Anaconda3\envs\p3\lib\site-packages\spyder\utils\ipython\start_kernel.py:1: FutureWarning: pd.expanding_median is deprecated for Series and will be removed in a future version, replace with Series.expanding(min_periods=1).median() # -*- coding: utf-8 -*- Out: 0 0.0 1 25.0 2 50.0 3 36.5 4 33.0 5 36.0 6 33.0 7 36.0 8 33.0 9 36.0 dtype: float64 </code></pre>
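To see exactly what <code>expanding().median()</code> computes, here is the same thing in plain Python: the median of every prefix of the data (useful as a cross-check, not as a replacement):

```python
from statistics import median

data = [3, 1, 4, 1, 5]
# prefix medians: data[:1], data[:2], ..., data[:5]
expanding_medians = [median(data[:i + 1]) for i in range(len(data))]
print(expanding_medians)  # [3, 2.0, 3, 2.0, 3]
```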
0
2016-10-05T18:14:11Z
[ "python", "pandas" ]
Append to Numpy Using a For Loop
39,880,938
<p>I am working on a Python script that takes live streaming data and appends it to a numpy array. However I noticed that if I append to four different arrays one by one it works. For example:</p> <pre><code>openBidArray = np.append(openBidArray, bidPrice) highBidArray = np.append(highBidArray, bidPrice) lowBidArray = np.append(lowBidArray, bidPrice) closeBidArray = np.append(closeBidArray, bidPrice) </code></pre> <p>However If I do the following it does not work:</p> <pre><code>arrays = ["openBidArray", "highBidArray", "lowBidArray", "closeBidArray"] for array in arrays: array = np.append(array, bidPrice) </code></pre> <p>Any idea on why that is?</p>
0
2016-10-05T18:08:56Z
39,880,968
<p>In your second example, you have strings, not np.array objects. You are trying to append a number(?) to a string.</p> <p>The string "openBidArray" doesn't hold any link to an array called <code>openBidArray</code>.</p>
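The name/object distinction can be seen without NumPy at all; plain lists are used here so the point stands on its own:

```python
openBid = [1.0, 2.0]
names = ["openBid"]   # a list of strings: just text
objects = [openBid]   # a list holding the actual object

objects[0].append(3.0)            # mutates openBid itself
assert openBid == [1.0, 2.0, 3.0]

# The string offers no route back to the variable it happens to name:
assert names[0] == "openBid"
assert names[0] != openBid
```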
1
2016-10-05T18:10:39Z
[ "python", "arrays", "numpy" ]
Append to Numpy Using a For Loop
39,880,938
<p>I am working on a Python script that takes live streaming data and appends it to a numpy array. However I noticed that if I append to four different arrays one by one it works. For example:</p> <pre><code>openBidArray = np.append(openBidArray, bidPrice) highBidArray = np.append(highBidArray, bidPrice) lowBidArray = np.append(lowBidArray, bidPrice) closeBidArray = np.append(closeBidArray, bidPrice) </code></pre> <p>However If I do the following it does not work:</p> <pre><code>arrays = ["openBidArray", "highBidArray", "lowBidArray", "closeBidArray"] for array in arrays: array = np.append(array, bidPrice) </code></pre> <p>Any idea on why that is?</p>
0
2016-10-05T18:08:56Z
39,881,071
<p>Do this instead:</p> <pre><code>arrays = [openBidArray, highBidArray, lowBidArray, closeBidArray] </code></pre> <p>In other words, your list should be a list of arrays, not a list of strings that coincidentally contain the names of arrays you happen to have defined.</p> <p>Your next problem is that <code>np.append()</code> returns a copy of the array with the item appended, rather than appending in place. You store this result in <code>array</code>, but <code>array</code> will be assigned the next item from the list on the next iteration, and the modified array will be lost (except for the last one, of course, which will be in <code>array</code> at the end of the loop). So you will want to store each modified array back into the list. To do that, you need to know what slot it came from, which you can get using <code>enumerate()</code>.</p> <pre><code>for i, array in enumerate(arrays): arrays[i] = np.append(array, bidPrice) </code></pre> <p>Now of course this doesn't update your original variables, <code>openBidArray</code> and so on. You could do this after the loop using unpacking:</p> <pre><code>openBidArray, highBidArray, lowBidArray, closeBidArray = arrays </code></pre> <p>But at some point it just makes more sense to store the arrays in a list (or a dictionary if you need to access them by name) to begin with and not use the separate variables.</p> <p>N.B. if you used regular Python lists here instead of NumPy arrays, some of these issues would go away. <code>append()</code> on lists is an in-place operation, so you wouldn't have to store the modified array back into the list or unpack to the individual variables. It might be feasible to do all the appending with lists and then convert them to arrays afterward, if you really need NumPy functionality on them.</p>
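The N.B. in executable form: accumulating with plain lists (whose <code>append</code> is in-place) kept in a dict, converting to NumPy arrays only when needed. The tick values are invented for the sketch:

```python
series = {"openBid": [], "highBid": [], "lowBid": [], "closeBid": []}

for bid_price in [1.10, 1.12, 1.08]:    # pretend streaming ticks
    for name in series:
        series[name].append(bid_price)  # list.append is in-place: no rebinding

assert series["openBid"] == [1.10, 1.12, 1.08]

# Only when NumPy math is actually needed:
# arrays = {name: np.array(values) for name, values in series.items()}
```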
2
2016-10-05T18:16:16Z
[ "python", "arrays", "numpy" ]
Can't convert column with pandas.to_numeric
39,880,953
<p>I have a column of data, here is a snip of it:</p> <pre><code>a = data["hs_directory"]["lat"][:5] 0 40.67029890700047 1 40.8276026690005 2 40.842414068000494 3 40.71067947100045 4 40.718810094000446 Name: lat, dtype: object </code></pre> <p>I try to convert it to numerical with python, but fail:</p> <pre><code>pandas.to_numeric(a, errors='coerce') </code></pre> <p>This line does nothing, the dtype is still "object" and I can't do mathematical operations with the column. What am I doing wrong?</p>
0
2016-10-05T18:09:55Z
39,881,329
<p>As discussed in the comments, let me post the answer for future readers.</p> <p><code>pd.to_numeric</code> does not convert the Series in place; it returns a new, numeric Series, which is why your original line appears to do nothing. Assign the result back:</p> <pre><code>a = pd.to_numeric(a, errors='coerce') </code></pre>
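A runnable sketch showing why assigning the result back matters: <code>pd.to_numeric</code> returns a new Series rather than converting in place (the sample values are invented):

```python
import pandas as pd

a = pd.Series(["40.670", "40.827", "not-a-number"])
assert a.dtype == object

converted = pd.to_numeric(a, errors="coerce")  # returns a NEW Series
assert a.dtype == object                       # the original is untouched
assert converted.dtype == "float64"
assert bool(converted.isna().iloc[2])          # the bad value became NaN

a = converted                                  # the assignment is the fix
```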
0
2016-10-05T18:32:42Z
[ "python", "pandas", "converter" ]
Parse XML with using ElementTree library
39,880,964
<p>I am trying to parse an xml data to print its content with ElementTree library in python 3. but when I print lst it returns me an empty list.</p> <blockquote> <p>Error: Returning an empty array of list.</p> </blockquote> <p>My attempt:</p> <pre><code>import xml.etree.ElementTree as ET data = ''' &lt;data&gt; &lt;country name="Liechtenstein"&gt; &lt;rank&gt;1&lt;/rank&gt; &lt;year&gt;2008&lt;/year&gt; &lt;gdppc&gt;141100&lt;/gdppc&gt; &lt;neighbor name="Austria" direction="E"/&gt; &lt;neighbor name="Switzerland" direction="W"/&gt; &lt;/country&gt; &lt;country name="Singapore"&gt; &lt;rank&gt;4&lt;/rank&gt; &lt;year&gt;2011&lt;/year&gt; &lt;gdppc&gt;59900&lt;/gdppc&gt; &lt;neighbor name="Malaysia" direction="N"/&gt; &lt;/country&gt; &lt;/data&gt;''' tree = ET.fromstring(data) lst = tree.findall('data/country') print(lst) #for item in lst: # print('Country: ', item.get('name')) </code></pre> <p>Thanks in advance.</p>
0
2016-10-05T18:10:21Z
39,881,231
<p>Tree references the root item <code>&lt;data&gt;</code> already so it shouldn't be in your path statement. Just find "country".</p> <pre><code>import xml.etree.ElementTree as ET data = ''' &lt;data&gt; &lt;country name="Liechtenstein"&gt; &lt;rank&gt;1&lt;/rank&gt; &lt;year&gt;2008&lt;/year&gt; &lt;gdppc&gt;141100&lt;/gdppc&gt; &lt;neighbor name="Austria" direction="E"/&gt; &lt;neighbor name="Switzerland" direction="W"/&gt; &lt;/country&gt; &lt;country name="Singapore"&gt; &lt;rank&gt;4&lt;/rank&gt; &lt;year&gt;2011&lt;/year&gt; &lt;gdppc&gt;59900&lt;/gdppc&gt; &lt;neighbor name="Malaysia" direction="N"/&gt; &lt;/country&gt; &lt;/data&gt;''' tree = ET.fromstring(data) lst = tree.findall('data/country') print(lst) # check position in tree print(tree.tag) print(tree.findall('country')) </code></pre>
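A compact demonstration of why the path must not repeat the root tag (paths passed to <code>findall</code> are relative to the element it is called on):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<data><country name="A"/><country name="B"/></data>')

# fromstring() hands back the root element itself, so a path that
# starts with 'data' looks for a <data> child *inside* <data>:
assert root.tag == "data"
assert root.findall("data/country") == []

# Relative to the root element, 'country' finds the children directly:
names = [c.get("name") for c in root.findall("country")]
assert names == ["A", "B"]
```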
1
2016-10-05T18:26:54Z
[ "python", "xml", "python-3.x" ]
Parquet Files from Spark detected as directory in Linux
39,881,051
<p>I am trying to use Python's parquet module to read in some Parquet files written from a local MapR instance.</p> <p>The command I used to output these parquet files is: </p> <pre><code>df.sqlContext.sql("SQL HERE").write.format("parquet").option("mergeSchema", "true").save("/path/to/parquet/test.parquet") </code></pre> <p>This is what the file looks like on my Linux host:</p> <pre><code>drwxr-xr-x 2 mapr mapr 403 Oct 5 13:56 igayfvpwrs.parquet </code></pre> <p>Unfortunately, when I use the Python here (<a href="https://pypi.python.org/pypi/parquet" rel="nofollow">https://pypi.python.org/pypi/parquet</a>) - I receive the following exception:</p> <pre><code>IOError: [Errno 21] Is a directory: '/mnt/mapr/saw/user/mapr/igayfvpwrs.parquet' </code></pre> <p>Any idea? These files work great in MapR.</p> <p>EDIT 2:</p> <p>I was able to figure it out. Since the original .parquet "file" is a directory, just loop through the directory with glob for all the inner .parquet files - the original code for Python-parquet works in there. 
</p> <pre><code>for filename in glob.glob("/mnt/mapr/saw/user/mapr/{0}.parquet/*.parquet".format(tempTableID)): with open(filename) as foo: for row in parquet.DictReader(foo, columns=["column"]): print(json.dumps(row)) </code></pre> <p>EDIT: Here is what is inside the parquet file:</p> <pre><code>-rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 _common_metadata -rwxr-xr-x 1 mapr mapr 2.4K Oct 5 13:58 _metadata -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00000-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00001-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00002-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00003-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00004-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00005-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00006-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00007-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00008-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00009-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00010-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00011-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00012-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00013-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00014-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 
part-r-00015-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00016-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00017-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00018-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00019-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00020-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00021-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00022-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00023-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00024-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00025-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00026-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00027-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00028-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00029-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00030-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00031-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00032-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00033-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00034-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 
part-r-00035-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00036-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00037-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00038-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00039-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00040-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00041-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00042-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00043-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00044-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00045-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00046-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00047-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00048-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00049-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00050-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00051-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00052-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00053-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00054-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00055-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00056-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00057-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00058-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00059-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00060-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00061-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00062-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00063-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00064-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00065-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00066-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00067-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00068-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00069-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00070-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00071-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00072-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00073-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00074-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00075-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00076-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00077-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00078-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00079-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00080-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00081-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00082-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00083-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00084-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00085-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00086-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00087-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00088-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00089-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00090-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00091-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00092-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00093-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00094-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00095-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00096-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00097-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00098-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00099-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00100-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00101-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00102-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00103-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00104-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00105-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00106-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00107-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00108-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00109-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00110-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00111-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00112-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00113-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00114-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 
5 13:58 part-r-00115-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00116-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00117-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00118-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00119-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00120-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00121-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00122-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00123-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00124-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00125-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00126-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00127-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00128-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00129-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00130-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00131-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00132-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00133-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00134-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00135-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00136-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00137-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00138-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00139-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00140-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00141-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00142-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00143-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00144-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00145-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00146-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00147-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00148-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00149-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00150-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00151-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00152-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00153-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00154-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00155-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00156-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00157-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00158-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00159-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00160-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00161-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00162-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00163-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00164-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00165-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00166-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00167-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00168-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00169-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00170-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00171-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00172-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00173-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00174-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00175-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00176-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00177-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00178-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00179-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00180-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00181-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00182-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00183-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00184-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00185-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00186-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00187-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00188-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00189-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00190-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00191-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00192-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00193-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00194-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00195-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00196-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00197-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00198-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00199-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00200-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00201-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00202-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00203-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00204-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00205-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00206-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00207-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00208-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00209-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00210-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00211-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00212-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00213-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00214-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00215-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00216-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00217-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00218-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00219-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00220-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00221-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00222-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00223-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00224-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00225-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00226-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00227-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00228-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00229-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00230-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00231-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00232-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00233-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00234-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00235-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00236-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00237-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00238-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00239-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00240-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00241-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00242-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00243-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00244-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00245-02f0b80b-2993-46f8- [truncated] -rwxr-xr-x 1 mapr mapr 0 Oct 5 13:58 _SUCCESS </code></pre>
0
2016-10-05T18:14:53Z
39,881,137
<p>Based on the output of your <code>ls</code> command, it looks like igayfvpwrs.parquet is actually a directory. Can you check for the data inside? </p>
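For reference, a minimal stdlib-only sketch of such a check, using a hypothetical directory layout mimicking the Spark output in the question: test whether the path is a directory and, if so, glob the inner part files (markers like <code>_SUCCESS</code> are skipped by the <code>*.parquet</code> pattern). Actually decoding each part file with the pypi <code>parquet</code> module is omitted here to keep the sketch self-contained.

```python
import glob
import os
import tempfile

def list_parquet_parts(path):
    """Return the data files for `path`: the inner part files when it is a
    Spark-style .parquet directory, or the path itself when it is a file."""
    if os.path.isdir(path):
        return sorted(glob.glob(os.path.join(path, "*.parquet")))
    return [path]

# Build a hypothetical layout like the one shown in the question.
root = tempfile.mkdtemp()
d = os.path.join(root, "igayfvpwrs.parquet")
os.mkdir(d)
for name in ("part-r-00000.gz.parquet", "part-r-00001.gz.parquet", "_SUCCESS"):
    open(os.path.join(d, name), "w").close()

parts = list_parquet_parts(d)
print(len(parts))  # -> 2 (the _SUCCESS marker is not a part file)
```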
0
2016-10-05T18:20:36Z
[ "python", "hadoop", "apache-spark", "parquet" ]
Parquet Files from Spark detected as directory in Linux
39,881,051
<p>I am trying to use Python's parquet module to read in some Parquet files written from a local MapR instance.</p> <p>The command I used to output these parquet files is: </p> <pre><code>df.sqlContext.sql("SQL HERE").write.format("parquet").option("mergeSchema", "true").save("/path/to/parquet/test.parquet") </code></pre> <p>This is what the file looks like on my Linux host:</p> <pre><code>drwxr-xr-x 2 mapr mapr 403 Oct 5 13:56 igayfvpwrs.parquet </code></pre> <p>Unfortunately, when I use the Python parquet module here (<a href="https://pypi.python.org/pypi/parquet" rel="nofollow">https://pypi.python.org/pypi/parquet</a>) - I receive the following exception:</p> <pre><code>IOError: [Errno 21] Is a directory: '/mnt/mapr/saw/user/mapr/igayfvpwrs.parquet' </code></pre> <p>Any idea? These files work great in MapR.</p> <p>EDIT 2:</p> <p>I was able to figure it out. Since the original .parquet "file" is actually a directory, loop through it with glob to collect the inner .parquet files - the original Python-parquet code works on each of them. 
</p> <pre><code>for filename in glob.glob("/mnt/mapr/saw/user/mapr/{0}.parquet/*.parquet".format(tempTableID)): with open(filename) as foo: for row in parquet.DictReader(foo, columns=["column"]): print(json.dumps(row)) </code></pre> <p>EDIT: Here is what is inside the parquet file:</p> <pre><code>-rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 _common_metadata -rwxr-xr-x 1 mapr mapr 2.4K Oct 5 13:58 _metadata -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00000-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00001-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00002-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00003-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00004-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00005-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00006-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00007-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00008-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00009-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00010-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00011-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00012-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00013-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00014-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 
part-r-00015-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00016-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00017-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00018-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00019-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00020-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00021-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00022-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00023-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00024-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00025-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00026-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00027-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00028-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00029-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00030-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00031-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00032-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00033-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00034-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 
part-r-00035-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00036-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00037-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00038-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00039-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00040-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00041-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00042-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00043-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00044-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00045-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00046-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00047-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00048-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00049-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00050-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00051-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00052-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00053-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00054-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00055-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00056-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00057-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00058-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00059-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00060-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00061-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00062-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00063-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00064-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00065-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00066-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00067-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00068-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00069-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00070-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00071-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00072-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00073-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00074-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00075-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00076-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00077-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00078-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00079-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00080-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00081-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00082-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00083-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00084-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00085-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00086-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00087-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00088-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00089-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00090-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00091-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00092-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00093-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00094-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 
13:58 part-r-00095-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00096-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00097-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00098-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00099-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00100-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00101-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00102-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00103-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00104-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00105-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00106-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00107-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00108-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00109-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00110-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00111-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00112-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00113-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00114-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 
5 13:58 part-r-00115-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00116-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00117-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00118-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00119-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00120-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00121-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00122-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00123-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00124-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 1.2K Oct 5 13:58 part-r-00125-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00126-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00127-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00128-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00129-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00130-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00131-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00132-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00133-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00134-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00135-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00136-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00137-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00138-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00139-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00140-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00141-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00142-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00143-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00144-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00145-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00146-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00147-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00148-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00149-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00150-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00151-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00152-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00153-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00154-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00155-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00156-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00157-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00158-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00159-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00160-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00161-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00162-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00163-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00164-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00165-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00166-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00167-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00168-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00169-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00170-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00171-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00172-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00173-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00174-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00175-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00176-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00177-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00178-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00179-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00180-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00181-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00182-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00183-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00184-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00185-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00186-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00187-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00188-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00189-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00190-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00191-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00192-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00193-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00194-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00195-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00196-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00197-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00198-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00199-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00200-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00201-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00202-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00203-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00204-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00205-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00206-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00207-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00208-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00209-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00210-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00211-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00212-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00213-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00214-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00215-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00216-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00217-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00218-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00219-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00220-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00221-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00222-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00223-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00224-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00225-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00226-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00227-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00228-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00229-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00230-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00231-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00232-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00233-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00234-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 
Oct 5 13:58 part-r-00235-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00236-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00237-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00238-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00239-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00240-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00241-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00242-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00243-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00244-02f0b80b-2993-46f8-a191-f111c6db1dad.gz.parquet -rwxr-xr-x 1 mapr mapr 514 Oct 5 13:58 part-r-00245-02f0b80b-2993-46f8- [truncated] -rwxr-xr-x 1 mapr mapr 0 Oct 5 13:58 _SUCCESS </code></pre>
0
2016-10-05T18:14:53Z
39,881,494
<p>There is no issue with Parquet on Spark here. The DataFrameWriter writes the Parquet output into a directory and splits it into as many part files as the DataFrame it is writing has partitions. </p> <p>What you are getting is absolutely normal. If you want fewer files, reduce the number of partitions of the DataFrame (for example with <code>coalesce</code>) before writing. </p>
0
2016-10-05T18:43:45Z
[ "python", "hadoop", "apache-spark", "parquet" ]
Counting number of characters in a key in a dictionary
39,881,062
<p>For homework I have been set the following:</p> <p>Build a dictionary with the names from myEmployees list as keys and assign each employee a salary of 10 000 (as value). Loop over the dictionary and increase the salary of employees which names have more than four letters with 1000 * length of their name. Print the dictionary contents before and after the increase.</p> <p>I can't figure out how to do it.</p> <p>This is what I've come up with so far.</p> <pre><code>employeeDict = {"John":'10,000', "Daren":"10,000", "Graham":"10,000", "Steve":"10,000", "Adren":"10,000"} say = 'Before increase' print say print employeeDict say1 = 'After increase' print say1 for x in employeeDict: x = len(employeeDict) if x &gt; 5: print employeeDict[x] </code></pre>
1
2016-10-05T18:15:51Z
39,881,134
<p>You have some indentation problems, clearly, but the main problem is that you are taking the length of the dictionary (the number of keys) instead of the length of the key. You also have some bad logic.</p> <pre><code>employeeDict = {"John":'10,000', "Daren":"10,000", "Graham":"10,000", "Steve":"10,000", "Adren":"10,000"} say = 'Before increase' print say print employeeDict say1 = 'After increase' print say1 for x in employeeDict: length = len(x) # &lt;---- the key's length, indented inside the loop if length &gt;= 5: # &lt;--- greater than 4 # strip the comma, convert to a number, add money, convert back to string employeeDict[x] = str(int(employeeDict[x].replace(',', '')) + 1000 * length) print employeeDict[x] </code></pre>
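To make the corrected per-key logic concrete, here is a minimal self-contained sketch. It uses plain integer salaries (sidestepping the comma-formatted strings entirely) with the names from the question:

```python
# Integer salaries avoid the string-to-number round trip.
employees = {"John": 10000, "Daren": 10000, "Graham": 10000,
             "Steve": 10000, "Adren": 10000}

for name in employees:
    if len(name) > 4:                        # length of the key, not the dict
        employees[name] += 1000 * len(name)  # 1000 * length of the name

print(employees)
```

After the loop, "John" (4 letters) is untouched while "Graham" (6 letters) ends up at 16000.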
0
2016-10-05T18:20:19Z
[ "python", "dictionary", "count" ]
Counting number of characters in a key in a dictionary
39,881,062
<p>For homework I have been set the following:</p> <p>Build a dictionary with the names from myEmployees list as keys and assign each employee a salary of 10 000 (as value). Loop over the dictionary and increase the salary of employees which names have more than four letters with 1000 * length of their name. Print the dictionary contents before and after the increase.</p> <p>I can't figure out how to do it.</p> <p>This is what I've come up with so far.</p> <pre><code>employeeDict = {"John":'10,000', "Daren":"10,000", "Graham":"10,000", "Steve":"10,000", "Adren":"10,000"} say = 'Before increase' print say print employeeDict say1 = 'After increase' print say1 for x in employeeDict: x = len(employeeDict) if x &gt; 5: print employeeDict[x] </code></pre>
1
2016-10-05T18:15:51Z
39,881,162
<p>First, change the values to integers/floats.</p> <pre><code>employeeDict = {"John":10000, "Daren":10000, "Graham":10000, "Steve":10000, "Adren":10000} </code></pre> <p>After doing this, as you know, you need to loop over the items in the dict.</p> <pre><code>for x in employeeDict: x = len(employeeDict) if x &gt; 5: print employeeDict[x] </code></pre> <p>In the code above, your "x" starts out as the employee name (note that your next line overwrites it). And as you know, to assign a value to a key in a dict, you have to use <code>dict[key] = value</code>, so try to do it in the <code>if x &gt; 5:</code> block statement. I'm not trying to give you the full answer, but to push you in the right direction.</p>
0
2016-10-05T18:22:25Z
[ "python", "dictionary", "count" ]
Counting number of characters in a key in a dictionary
39,881,062
<p>For homework I have been set the following:</p> <p>Build a dictionary with the names from myEmployees list as keys and assign each employee a salary of 10 000 (as value). Loop over the dictionary and increase the salary of employees which names have more than four letters with 1000 * length of their name. Print the dictionary contents before and after the increase.</p> <p>I can't figure out how to do it.</p> <p>This is what I've come up with so far.</p> <pre><code>employeeDict = {"John":'10,000', "Daren":"10,000", "Graham":"10,000", "Steve":"10,000", "Adren":"10,000"} say = 'Before increase' print say print employeeDict say1 = 'After increase' print say1 for x in employeeDict: x = len(employeeDict) if x &gt; 5: print employeeDict[x] </code></pre>
1
2016-10-05T18:15:51Z
39,881,166
<p>Give this a try and analyse it:</p> <pre><code>employees = {"John":10000, "Daren":10000, "Graham":10000} for name in employees: if len(name) &gt; 4: employees[name] += 1000 * len(name) </code></pre> <p>If you have to stick with the string values you can do this:</p> <pre><code>employees = {"John":"10000", "Daren":"10000", "Graham":"10000"} for name in employees: if len(name) &gt; 4: employees[name] = str(int(employees[name]) + 1000 * len(name)) </code></pre>
0
2016-10-05T18:22:41Z
[ "python", "dictionary", "count" ]
Counting number of characters in a key in a dictionary
39,881,062
<p>For homework I have been set the following:</p> <p>Build a dictionary with the names from myEmployees list as keys and assign each employee a salary of 10 000 (as value). Loop over the dictionary and increase the salary of employees which names have more than four letters with 1000 * length of their name. Print the dictionary contents before and after the increase.</p> <p>I can't figure out how to do it.</p> <p>This is what I've come up with so far.</p> <pre><code>employeeDict = {"John":'10,000', "Daren":"10,000", "Graham":"10,000", "Steve":"10,000", "Adren":"10,000"} say = 'Before increase' print say print employeeDict say1 = 'After increase' print say1 for x in employeeDict: x = len(employeeDict) if x &gt; 5: print employeeDict[x] </code></pre>
1
2016-10-05T18:15:51Z
39,881,186
<p>This should give you what you're looking for.</p> <pre><code>employeeDict = {"John":10000, "Daren":10000, "Graham":10000, "Steve":10000, "Adren":10000} print "Before increase" print employeeDict for name, salary in employeeDict.items(): if len(name) &gt; 4: employeeDict[name] = salary + len(name) * 1000 print "After increase" print employeeDict </code></pre> <p>You had a few problems in your version.</p> <ul> <li>Your indentation for your for-loop was not correct</li> <li>You were getting the length of the dictionary and not the length of the keys <strong>in</strong> the dictionary.</li> <li>You should make the values in your dictionary floats/integers.</li> </ul> <p>Also note, I believe your homework said if the name was <strong>longer than four characters</strong>. So I used 4 instead of 5.</p>
0
2016-10-05T18:23:40Z
[ "python", "dictionary", "count" ]
Results for python and MATLAB caffe are different for the same network
39,881,081
<p>I am trying to port the MATLAB implementation of <a href="https://github.com/kpzhang93/MTCNN_face_detection_alignment" rel="nofollow">MTCNN_face_detection_alignment</a> to Python. I use the same version of the caffe bindings for MATLAB and for Python.</p> <p>Minimal runnable code to reproduce the issue:</p> <p>MATLAB:</p> <pre><code>addpath('f:/Documents/Visual Studio 2013/Projects/caffe/matlab'); warning off all caffe.reset_all(); %caffe.set_mode_cpu(); caffe.set_mode_gpu(); caffe.set_device(0); prototxt_dir = './model/det1.prototxt'; model_dir = './model/det1.caffemodel'; PNet=caffe.Net(prototxt_dir,model_dir,'test'); img=imread('F:/ImagesForTest/test1.jpg'); [hs ws c]=size(img) im_data=(single(img)-127.5)*0.0078125; PNet.blobs('data').reshape([hs ws 3 1]); out=PNet.forward({im_data}); imshow(out{2}(:,:,2)) </code></pre> <p>Python:</p> <pre><code>import numpy as np import caffe import cv2 caffe.set_mode_gpu() caffe.set_device(0) model = './model/det1.prototxt' weights = './model/det1.caffemodel' PNet = caffe.Net(model, weights, caffe.TEST) # create net and load weights print ("\n\n----------------------------------------") print ("------------- Network loaded -----------") print ("----------------------------------------\n") img = np.float32(cv2.imread( 'F:/ImagesForTest/test1.jpg' )) img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB) avg = np.array([127.5,127.5,127.5]) img = img - avg img = img*0.0078125; img = img.transpose((2,0,1)) img = img[None,:] # add singleton dimension PNet.blobs['data'].reshape(1,3,img.shape[2],img.shape[3]) out = PNet.forward_all( data = img ) cv2.imshow('out',out['prob1'][0][1]) cv2.waitKey() </code></pre> <p>The model I use is located <a href="https://github.com/kpzhang93/MTCNN_face_detection_alignment/tree/master/code/codes/MTCNNv2/model" rel="nofollow">here</a> (det1.prototxt and det1.caffemodel)</p> <p>The image I used to get these results:</p> <p><a href="http://i.stack.imgur.com/tpGCm.jpg" rel="nofollow"><img
src="http://i.stack.imgur.com/tpGCm.jpg" alt="enter image description here"></a></p> <p>The results I get from both cases:</p> <p><a href="http://i.stack.imgur.com/lOrS0.png" rel="nofollow"><img src="http://i.stack.imgur.com/lOrS0.png" alt="enter image description here"></a></p> <p>The results are similar, but not the same.</p> <p>UPD: it does not look like a type conversion problem (corrected, but nothing changed). I saved the result of the convolution after the conv1 layer (first channel) in MATLAB and extracted the same data in Python; both images are now displayed by Python's cv2.imshow.</p> <p>The data on the input layer (data) are absolutely the same; I did the check using the same method.</p> <p>And as you can see, the difference is visible even on the first (conv1) layer. It looks like the kernels are transformed somehow.</p> <p><a href="http://i.stack.imgur.com/zEAxI.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/zEAxI.jpg" alt="enter image description here"></a> </p> <p>Can anyone say where the difference is hidden?</p>
2
2016-10-05T18:16:55Z
39,892,959
<p>I found the source of the issue: MATLAB sees the image transposed (MATLAB stores arrays column-major). This Python code shows the same result as the MATLAB one:</p> <pre><code>import numpy as np import caffe import cv2 caffe.set_mode_gpu() caffe.set_device(0) model = './model/det1.prototxt' weights = './model/det1.caffemodel' PNet = caffe.Net(model, weights, caffe.TEST) # create net and load weights print ("\n\n----------------------------------------") print ("------------- Network loaded -----------") print ("----------------------------------------\n") img = np.float32(cv2.imread( 'F:/ImagesForTest/test1.jpg' )) img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB) img=cv2.transpose(img) # &lt;----- THIS line ! avg = np.float32(np.array([127.5,127.5,127.5])) img = img - avg img = np.float32(img*0.0078125); img = img.transpose((2,0,1)) img = img[None,:] # add singleton dimension PNet.blobs['data'].reshape(1,3,img.shape[2],img.shape[3]) out = PNet.forward_all( data = img ) # transpose it back and show the result cv2.imshow('out',cv2.transpose(out['prob1'][0][1])) cv2.waitKey() </code></pre> <p>Thanks to all who advised me in the comments!</p>
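The transposition makes sense once you recall the storage conventions: MATLAB arrays are column-major while NumPy/OpenCV arrays are row-major, so the same pixel buffer shows up with its two spatial axes swapped. A tiny pure-Python sketch of what the extra transpose does, and why a second transpose undoes it:

```python
# A 2x3 "image": 2 rows, 3 columns (row-major, as OpenCV/NumPy see it).
img = [[1, 2, 3],
       [4, 5, 6]]

# Transposing swaps rows and columns -- the 3x2 view that the
# column-major layout effectively hands you.
transposed = [list(col) for col in zip(*img)]
print(transposed)   # [[1, 4], [2, 5], [3, 6]]

# Transposing again restores the original layout.
restored = [list(col) for col in zip(*transposed)]
```

This is why the code above transposes the input once on the way in and transposes the network output back before showing it.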
2
2016-10-06T09:57:57Z
[ "python", "matlab", "caffe", "pycaffe", "matcaffe" ]
How to run SublimeText with visual studio environment enabled
39,881,091
<p><strong>OVERVIEW</strong></p> <p>Right now I have these 2 programs on my Windows taskbar:</p> <ul> <li><p>SublimeText3 target: </p> <pre><code>"D:\software\SublimeText 3_x64\sublime_text.exe" </code></pre></li> <li><p>VS2015 x64 Native Tools Command Prompt target: </p> <pre><code>%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64 </code></pre></li> </ul> <p>The goal here is to run Sublime Text with the VS2015 environment enabled.</p> <ol> <li>One option would be to open the VS command prompt and then run Sublime Text from there, <code>&gt; sublime_text</code> (this is not a good one, I want it to be a <strong>non-interactive process</strong>)</li> <li>Another option would be somehow modifying the Sublime Text shortcut target on the taskbar so I could open Sublime with the VS2015 environment enabled just by clicking the icon</li> </ol> <p><strong>QUESTION</strong></p> <p>How could I accomplish option 2?</p> <p>NB: I want to get Sublime Text 3 to run vcvarsall.bat properly only <strong>once</strong> at startup (not at build time on any build system)</p> <p><strong>ATTEMPTS</strong></p> <p>My first attempt was trying to understand how bat files are executed, so I tested some basic batch files:</p> <ul> <li><p>bat1.bat: It opens Sublime Text successfully</p> <pre><code>sublime_text </code></pre></li> <li><p>bat2.bat: It opens the VS command prompt successfully and waits for the user to type commands</p> <pre><code>cmd /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64 </code></pre></li> <li><p>bat3.bat: Opens the VS command prompt but it won't open ST, unless you type exit once the command prompt is shown</p> <pre><code>%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64 sublime_text </code></pre></li> <li><p>bat4.bat: Opens the VS command prompt but it doesn't open ST, same case as bat3.bat</p> <pre><code>%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64
&amp;&amp; sublime_text </code></pre></li> <li><p>bat5.bat: Open vs command prompt but it doesn't open ST, same case than bat{4,3}.bat</p> <pre><code>%comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64 &amp; sublime_text </code></pre></li> </ul> <p>After these attempts I've decided to read some docs trying to find some hints about <a href="http://ss64.com/nt/cmd.html" rel="nofollow">cmd</a> but it didn't make any difference.</p> <p>Another idea was using conemu customized tasks, something like:</p> <pre><code>{vs2015}: cmd.exe /k "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64 &amp; sublime_text </code></pre> <p>and then having a bat file calling conemu like this:</p> <pre><code>D:\software\ConEmuPack.151205\ConEmu64.exe /single -cmd {vs2015} </code></pre> <p>the result was +/- what I wanted, but 2 terminals and a new sublime session would be spawned. What I'm looking for is just opening a SublimeText session with the right environment, so I consider this not a good solution, plus, it requires to have conemu installed.</p> <p>After all those attempts I thought maybe using python to open a command prompt and typing&amp;running "magically" some commands could be helpful but I don't know how to do it.</p> <p>Conclusion: Till the date the problem remains unsolved... </p>
0
2016-10-05T18:17:49Z
39,883,233
<ol start="2"> <li>Option 2 exactly as stated is not possible directly, but Sublime Text has options for running other programs.</li> </ol> <p>Make a batch file that goes to the directory that vcvarsall.bat wants (it looks at the directory it was run from, so it needs to be started from the proper dir) and runs vcvarsall.bat:</p> <pre><code>@ECHO OFF
REM vcdir is the directory where you should be
set "vcdir=c:\program files (x86)\Microsoft Visual Studio 14.0\VC"
REM go into that directory
pushd "%vcdir%"
REM Run vcvarsall.bat to set the env vars
call vcvarsall.bat amd64
REM Get back to the directory we were in initially
popd
</code></pre> <p>If you're ok running this batch file during build (if you only need it for the build process and you don't mind rerunning it every time) then follow these instructions to <a href="http://stackoverflow.com/questions/23789410/how-to-edit-sublime-text-build-settings">add that batch file to your build settings in sublime text 3</a></p> <p>A more elegant approach would be to find out how to get Sublime Text 3 to run a program just once on startup (rather than at build time). I'd bet money that's possible, but I need to get back to work now...</p>
0
2016-10-05T20:37:36Z
[ "python", "windows", "visual-studio-2015", "environment-variables", "sublimetext3" ]
Fast way to compress audio data?
39,881,122
<p>I'm trying to build (inspired by TeamSpeak) a voip program that communicates via UDP.</p> <p>Here is my source (Server):</p> <pre><code>import pyaudio
import socket

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
                channels = CHANNELS,
                rate = RATE,
                output = True,
                frames_per_buffer = CHUNK,
                )

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("0.0.0.0", 4444))

while True:
    soundData, addr = udp.recvfrom(CHUNK * CHANNELS * 2)
    stream.write(soundData, CHUNK)
    print len(soundData)

udp.close()
</code></pre> <p>Client:</p> <pre><code>import pyaudio
import socket

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = CHUNK,
                )

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    udp.sendto(stream.read(CHUNK), ("127.0.0.1", 4444))

udp.close()
</code></pre> <p>It works great on LAN, but over WAN the audio is very bad.</p> <p>I assumed that raw audio is not good for voip, and I'm looking for a way to compress the audio using a lossy algorithm or an encoder (mp3, AAC, ogg).</p> <p>I tried LZMA, but I don't need lossless compression; lossy is better in my case.</p> <p>I have two rules to follow:</p> <ol> <li><p>The program must be cross platform, so I need a way to compress/decompress in a "cross platform" way (inside python)</p></li> <li><p>Audio quality should be good (not below 50% of original)</p></li> </ol>
0
2016-10-05T18:19:44Z
39,882,543
<p>FFmpeg works on pipe protocols, and the same functionality has been ported to ffmpy so data can be written to stdin and read from stdout. You would likely have to provide some timing constructs as well to handle synchronization and appropriate buffer management, but I see no reason this couldn't be made to work. </p> <p>ffmpy: <a href="http://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol" rel="nofollow">using the <code>pipe</code> protocol</a></p> <p>live audio streaming with <a href="http://raspberrypi.stackexchange.com/a/4660">FFmpeg on a Raspberry Pi</a></p>
1
2016-10-05T19:51:39Z
[ "python", "audio", "encoding", "compression" ]
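The ffmpy approach in the answer above boils down to the classic subprocess pipe pattern: feed raw bytes to an encoder's stdin and read the (compressed) result back from its stdout. Here is a minimal, runnable sketch of just that pattern — note the "encoder" is a stand-in Python one-liner of my own, since a real setup would invoke the actual ffmpeg binary with appropriate codec flags instead:

```python
import subprocess
import sys

def pipe_through(cmd, data):
    """Write raw bytes to a child process's stdin and return its stdout.

    This is the same pipe pattern ffmpy wraps around ffmpeg; here a tiny
    Python one-liner stands in for the encoder so the sketch runs without
    ffmpeg installed.
    """
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(data)
    return out

if __name__ == "__main__":
    # Stand-in "encoder": uppercases whatever bytes arrive on stdin.
    cmd = [sys.executable, "-c",
           "import sys; sys.stdout.write(sys.stdin.read().upper())"]
    print(pipe_through(cmd, b"chunk of audio").decode())  # CHUNK OF AUDIO
```

For streaming voice, each UDP-sized chunk would go through a long-running child process rather than a fresh `communicate()` per chunk, which is where the buffering and timing concerns mentioned in the answer come in.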
Python access all entries from one dimension in a multidimensional numpy array
39,881,164
<p>I would like to manipulate the data of all entries of a complicated 3d numpy array. I want all entries of all subarrays at the X-Y position. I know Matlab can do something like that (with the variable indicator : for everything); I indicated that below with DARK[:][1][1], which basically means I want the second entry from the second column in all sub arrays. Is there a way to do this in python?</p> <pre><code>import numpy

# Creating a dummy variable of the type I deal with (If this looks crappy
# sorry, the variable actually comes from the output of d = pyfits.getdata()):
a = []
for i in range(3):
    d = numpy.array([[i, 2*i], [3*i, 4*i]])
    a.append(d)

print a

# Pseudo code:
print 'Second row, second column: ', a[:][1][1]
</code></pre> <p>I expect a result like this:</p> <pre><code>[array([[ 0, 0],[ 0, 0]]), array([[ 1, 2],[ 3, 4]]), array([[ 2, 4],[ 6, 8]])]
Second row, second column: [0, 4, 8]
</code></pre>
-1
2016-10-05T18:22:28Z
39,881,262
<p>You can do this using slightly different syntax.</p> <pre><code>import numpy as np

a = np.arange(27).reshape(3,3,3) # Create a 3x3x3 3d array
print("3d Array:")
print(a)
print("Second Row, Second Column: ", a[:,1,1])
</code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; 3d Array:
[[[ 0  1  2]
  [ 3  4  5]
  [ 6  7  8]]

 [[ 9 10 11]
  [12 13 14]
  [15 16 17]]

 [[18 19 20]
  [21 22 23]
  [24 25 26]]]
&gt;&gt;&gt; Second Row, Second Column:  [ 4 13 22]
</code></pre>
1
2016-10-05T18:28:43Z
[ "python", "arrays", "numpy" ]
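For readers without numpy at hand, the `a[:, 1, 1]` index in the answer above has a plain-list analogue; this small sketch shows exactly which elements the fancy index selects:

```python
# Pure-Python analogue of numpy's a[:, 1, 1] for a list of 2x2 "matrices".
# Useful for seeing what the numpy fancy index actually selects.
a = [[[i, 2 * i], [3 * i, 4 * i]] for i in range(3)]

second_row_second_col = [mat[1][1] for mat in a]
print(second_row_second_col)  # [0, 4, 8]
```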
Python access all entries from one dimension in a multidimensional numpy array
39,881,164
<p>I would like to manipulate the data of all entries of a complicated 3d numpy array. I want all entries of all subarrays at the X-Y position. I know Matlab can do something like that (with the variable indicator : for everything); I indicated that below with DARK[:][1][1], which basically means I want the second entry from the second column in all sub arrays. Is there a way to do this in python?</p> <pre><code>import numpy

# Creating a dummy variable of the type I deal with (If this looks crappy
# sorry, the variable actually comes from the output of d = pyfits.getdata()):
a = []
for i in range(3):
    d = numpy.array([[i, 2*i], [3*i, 4*i]])
    a.append(d)

print a

# Pseudo code:
print 'Second row, second column: ', a[:][1][1]
</code></pre> <p>I expect a result like this:</p> <pre><code>[array([[ 0, 0],[ 0, 0]]), array([[ 1, 2],[ 3, 4]]), array([[ 2, 4],[ 6, 8]])]
Second row, second column: [0, 4, 8]
</code></pre>
-1
2016-10-05T18:22:28Z
39,881,535
<p>Found the solution, thanks <strong>Divakar</strong> and <strong>eeScott</strong>:</p> <pre><code>import numpy as np

# Creating a dummy variable of the type I deal with (If this looks crappy
# sorry, the variable actually comes from the output of d = pyfits.getdata()):
a = []
for i in range(3):
    d = np.array([[i, 2*i], [3*i, 4*i]])
    a.append(d)

# print variable
print np.array(a)
print 'Second row, second column: ', np.array(a)[:, 1, 1]

# Alternative solution:
a = np.asarray(a)
print a
print 'Second row, second column: ', a[:,1,1]
</code></pre> <p>Result:</p> <pre><code>[[[0 0][0 0]] [[1 2][3 4]] [[2 4][6 8]]]
Second row, second column: [0 4 8]
</code></pre>
1
2016-10-05T18:46:35Z
[ "python", "arrays", "numpy" ]
Python access all entries from one dimension in a multidimensional numpy array
39,881,164
<p>I would like to manipulate the data of all entries of a complicated 3d numpy array. I want all entries of all subarrays at the X-Y position. I know Matlab can do something like that (with the variable indicator : for everything); I indicated that below with DARK[:][1][1], which basically means I want the second entry from the second column in all sub arrays. Is there a way to do this in python?</p> <pre><code>import numpy

# Creating a dummy variable of the type I deal with (If this looks crappy
# sorry, the variable actually comes from the output of d = pyfits.getdata()):
a = []
for i in range(3):
    d = numpy.array([[i, 2*i], [3*i, 4*i]])
    a.append(d)

print a

# Pseudo code:
print 'Second row, second column: ', a[:][1][1]
</code></pre> <p>I expect a result like this:</p> <pre><code>[array([[ 0, 0],[ 0, 0]]), array([[ 1, 2],[ 3, 4]]), array([[ 2, 4],[ 6, 8]])]
Second row, second column: [0, 4, 8]
</code></pre>
-1
2016-10-05T18:22:28Z
39,881,572
<p>The point is you are creating a list of arrays, not a multidimensional array. For this specific example you can use <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehensions</a> to build directly the structure you need and convert it to an array. Then you can access the elements you need using a somewhat matlab-like syntax:</p> <pre><code>import numpy as np

d = np.array([[[i, 2*i], [3*i, 4*i]] for i in range(3)])
print("Second Row, Second Column: ", d[:,1,1])
</code></pre>
0
2016-10-05T18:48:54Z
[ "python", "arrays", "numpy" ]
How do you run an OR query in Peewee ORM?
39,881,232
<p>I'd like to run a query for a user based on username or email address.</p> <p>I must be missing it, but I can't find how to run an <code>OR</code> query in the peewee documentation. How do you do that?</p>
-1
2016-10-05T18:27:00Z
39,881,259
<p>From the <a href="http://docs.peewee-orm.com/en/latest/peewee/querying.html#filtering-records">documentation</a></p> <blockquote> <p>If you want to express a complex query, use parentheses and python’s bitwise or and and operators:</p> </blockquote> <pre><code>&gt;&gt;&gt; Tweet.select().join(User).where(
...     (User.username == 'Charlie') |
...     (User.username == 'Peewee Herman')
... )
</code></pre>
5
2016-10-05T18:28:21Z
[ "python", "peewee" ]
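The reason peewee leans on the bitwise `|` operator is that field comparisons return expression objects which overload those operators. A toy sketch of the idea follows — this is not peewee's actual internals, and the SQL-string rendering is purely illustrative:

```python
class Expr(object):
    """Toy expression node mimicking how an ORM can overload `|` and `&`.

    Real peewee expressions are much richer; this only builds a WHERE-style
    string so the operator-overloading trick is visible.
    """
    def __init__(self, sql):
        self.sql = sql

    def __or__(self, other):
        return Expr("(%s OR %s)" % (self.sql, other.sql))

    def __and__(self, other):
        return Expr("(%s AND %s)" % (self.sql, other.sql))

class Field(object):
    def __init__(self, name):
        self.name = name

    def __eq__(self, value):
        # `field == x` builds an expression instead of returning a bool,
        # which is why peewee needs `|`/`&` rather than `or`/`and`.
        return Expr("%s = %r" % (self.name, value))

username = Field("username")
email = Field("email")
clause = (username == "Charlie") | (email == "c@example.com")
print(clause.sql)  # (username = 'Charlie' OR email = 'c@example.com')
```

Python's `or`/`and` cannot be overloaded, which is why the parentheses around each comparison are mandatory: `|` binds tighter than `==`.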
Python Flask Sqlalchemy Subst Query
39,881,240
<p>I am working on an internal search engine at my company written in python utilizing flask and sqlalchemy(sqlite). My current problem is that I would like to:</p> <p>A.) Query only a certain amount of information from the description field B.) Preferably query 50 characters before and after the match.</p> <p>Very similar to Google under the link field. If you search for something, it returns the links with 100 characters of words below that.</p> <p>I was reading the documentation and found that there is no mid() function in sqlalchemy. I also noticed from this post that sqlalchemy only supports max, min, and avg <a href="http://stackoverflow.com/questions/7133007/sqlalchemy-get-max-min-avg-values-from-a-table">sqlalchemy: get max/min/avg values from a table</a></p> <p>SQL Documentation of functions <a href="http://docs.sqlalchemy.org/en/latest/core/functions.html" rel="nofollow">http://docs.sqlalchemy.org/en/latest/core/functions.html</a></p> <p>I was trying to implement a query such as</p> <pre><code>links = Item.query(func.mid(Item.description, 0, 200).like('%helloworld%'))
</code></pre> <p>I realized sqlite has the syntax substr and have tried Item.query.filter(func.substr(Item.description,0, 200) == '%helloworld%')</p> <p>Is there a way in sqlalchemy to navigate around this issue?</p> <p>My code:</p> <pre><code>from sqlalchemy.sql.functions import func

def mainSearch(searchterm):
    links = Item.query(func.mid(Item.title, 1, 3).Item.title.like('%e%'))
    return links
</code></pre> <p>HTML/Jinja code:</p> <pre><code>{% for link in links.items %}
&lt;div id="resultbox"&gt;
    &lt;div id="linkTitle"&gt;&lt;h4&gt;&lt;a href="{{ link.link }}"&gt;{{ link.title }}&lt;/a&gt;&lt;/h4&gt;
    &lt;/div&gt;
    &lt;div id="lastUpdated"&gt;Last Updated: {{ link.last_updated }}
    &lt;/div&gt;
    &lt;div id="linkLink"&gt;{{ link.link }}&lt;/div&gt;
    &lt;div id="linkDescription"&gt;{{ link.description | safe }}&lt;/div&gt;
&lt;/div&gt;
</code></pre> <p>Error</p> <p>TypeError: 'BaseQuery' object is not callable</p> <p>My database: Sqlite</p> <p>I wanted a query in SQL similar to:</p> <pre><code>SELECT MID(column_name,start,length) AS some_name FROM table_name;
</code></pre> <p><strong>Overall I am trying to do this to the data we query in Column Description:</strong></p> <p><strong>Example text:</strong></p> <p>An article (abbreviated to ART) is a word (prefix or suffix) that is used alongside a noun to indicate the type of reference being made by the noun. Articles specify grammatical definiteness of the noun, in some languages extending to volume or numerical scope. The articles in the English language are the and a/an, and (in certain contexts) some. "An" and "a" are modern forms of the Old English "an", which in Anglian dialects was the number "one" (compare "on", in Saxon dialects) and survived into Modern Scots as the number "owan". Both "on" (respelled "one" by the Normans) and "an" survived into Modern English, with "one" used as the number and "an" ("a", before nouns that begin with a consonant sound) as an indefinite article.</p> <p>In many languages, articles are a special part of speech, which cannot easily be combined with other parts of speech. In English, articles are frequently considered a part of a broader speech category called determiners, which combines articles and demonstratives (such as "this" and "that").</p> <p>In languages that employ articles, every common noun, with some exceptions, is expressed with a certain definiteness (e.g., definite or indefinite), just as many languages express every noun with a certain grammatical number (e.g., singular or plural). Every noun must be accompanied by the article, if any, corresponding to its definiteness, and the lack of an article (considered a zero article) itself specifies a certain definiteness. This is in contrast to other adjectives and determiners, which are typically optional. This obligatory nature of articles makes them among the most common words in many languages—in English, for example, the most frequent word is the.[1]</p> <p>Articles are usually characterized as either definite or indefinite.[2] A few languages with well-developed systems of articles may distinguish additional subtypes. Within each type, languages may have various forms of each article, according to grammatical attributes such as gender, number, or case, or according to adjacent sounds.</p> <p><strong>To this</strong></p> <p>An article (abbreviated to ART) is a word (prefix or suffix) that is used alongside a noun to indicate the type of reference being made by the noun. Articles specify grammatical definiteness of the noun,</p> <p>So it doesn't crash the database by grabbing text 100,000 words long. I only need the first 100.</p>
0
2016-10-05T18:27:18Z
39,882,043
<p>This has nothing to do with the <code>mid</code> function. The error message says <code>'BaseQuery' object is not callable.</code> Where are you calling <code>BaseQuery</code>? Here:</p> <pre><code>Item.query(...) </code></pre> <p>The correct incantation is:</p> <pre><code>db.session.query(func.mid(...)) </code></pre> <p>or</p> <pre><code>Item.query.with_entities(func.mid(...)) </code></pre>
2
2016-10-05T19:19:54Z
[ "python", "sql", "sqlalchemy", "flask-sqlalchemy" ]
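As a side note on the `func.substr` attempts in the question: SQLite's `substr()` is 1-based, which is easy to verify with the stdlib driver alone (no SQLAlchemy needed; the table and text below are made up for the check). Starting at 0 silently shrinks the window, so `substr(col, 0, 200)` is likely part of the problem:

```python
import sqlite3

# Quick check of SQLite's substr() semantics with the stdlib driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (description TEXT)")
conn.execute("INSERT INTO item VALUES ('helloworld and much more text')")

# substr(description, 1, 10) takes the first 10 characters (1-based start);
# the WHERE clause mirrors the truncated-LIKE query from the question.
row = conn.execute(
    "SELECT substr(description, 1, 10) FROM item "
    "WHERE substr(description, 1, 200) LIKE '%helloworld%'"
).fetchone()
print(row[0])  # helloworld
```

The same 1-based start applies when the call is issued through `func.substr(...)`, since SQLAlchemy's `func` just forwards the arguments to the database function.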
Sympy on PyPy - Sometimes 6x faster, sometimes 4x slower
39,881,276
<p>Here, pypy is slower in calculating whether a given number is prime:</p> <pre><code>C:\Users\User&gt;python -m timeit -n10 -s"from sympy import isprime" "isprime(2**521-1)"
10 loops, best of 3: 25.9 msec per loop

C:\Users\User&gt;pypy -m timeit -n10 -s"from sympy import isprime" "isprime(2**521-1)"
10 loops, best of 3: 97.9 msec per loop
</code></pre> <p>Here, pypy is faster in creating a list of primes (from 1 to 1000000):</p> <pre><code>C:\Users\User&gt;pypy -m timeit -n10 -s"from sympy import sieve" "primes = list(sieve.primerange(1, 10**6))"
10 loops, best of 3: 2.12 msec per loop

C:\Users\User&gt;python -m timeit -n10 -s"from sympy import sieve" "primes = list(sieve.primerange(1, 10**6))"
10 loops, best of 3: 11.9 msec per loop
</code></pre> <p>Very surprising, hard to understand.</p> <blockquote> <p>“If you want your code to run faster, you should probably just use PyPy.” — Guido van Rossum (creator of Python)</p> </blockquote> <p>Am I missing something?</p>
1
2016-10-05T18:29:48Z
39,884,187
<p><code>isprime</code> has a fast path for when <code>gmpy</code> is installed. <code>gmpy</code> has bindings to GMP, a highly optimized C library for arbitrary-precision arithmetic, and is probably only installed on CPython.</p>
2
2016-10-05T21:42:28Z
[ "python", "performance", "sympy", "pypy" ]
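To see the kind of work gmpy accelerates, here is a pure-Python Miller-Rabin test of the sort primality checks are built on (an illustration of the idea, not sympy's actual implementation): the dominant cost is the repeated modular exponentiation on 500-plus-bit integers, which is exactly what a C bigint library speeds up.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test of the kind isprime relies on.

    Pure Python; the heavy part is pow(a, d, n) on huge integers, the
    operation a gmpy-backed build hands off to optimized C code.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # trial division by small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                     # write n-1 as d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)    # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a proved n composite
    return True

print(miller_rabin(2 ** 521 - 1))  # True (2**521 - 1 is a Mersenne prime)
```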
GAE python: success message or add HTML class after redirect
39,881,280
<p>I've a website with a contact form running on a google App Engine. After submitting I'd like to redirect and show a message to the user to let him know the message was sent; this can either be an alert message or adding a class to an html tag. How can I do that?</p> <p>my python file looks like this:</p> <pre><code>import webapp2
import jinja2
import os
from google.appengine.api import mail

jinja_environment = jinja2.Environment(autoescape=True,loader=jinja2.FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates')))

class index(webapp2.RequestHandler):
    def get(self):
        template = jinja_environment.get_template('index.html')
        self.response.write(template.render())

    def post(self):
        vorname=self.request.get("vorname")
        ...
        message=mail.EmailMessage(sender="...",subject="...")
        if not mail.is_email_valid(email):
            self.response.out.write("Wrong email! Check again!")
        message.to="..."
        message.body="""Neue Nachricht erhalten:
        Vorname: %s
        ...""" % (vorname, ...)
        self.redirect('/#Kontakt')

app = webapp2.WSGIApplication([('/', index)], debug=True)
</code></pre> <p>I already tried this in my html file:</p> <pre><code>&lt;script&gt;
function sentAlert() {
    alert("Nachricht wurde gesendet");
}
&lt;/script&gt;

&lt;div class="submit"&gt;
    &lt;input type="submit" value="Senden" onsubmit="return sentAlert()" id="button-blue"/&gt;
&lt;/div&gt;
</code></pre> <p>but it does it before the redirect and therefore doesn't work. Does someone have an idea how to do that?</p>
1
2016-10-05T18:30:01Z
39,881,767
<p>After the redirect a request <strong>different</strong> than the POST one for which the email was sent will be served.</p> <p>So you need to <strong>persist</strong> the information about the email being sent across requests, saving it in the POST request handler code and retrieving it in the subsequent GET request handler code (be it the redirected one or any other one for that matter).</p> <p>To persist the info you can, for example, use the user's session (if you already have one, see <a href="http://stackoverflow.com/questions/37134540/passing-data-between-pages-in-a-redirect-function-in-google-app-engine?noredirect=1">Passing data between pages in a redirect() function in Google App Engine</a>), or GAE's memcache/datastore/GCS. </p> <p>Once the info is retrieved you can use it any way you wish.</p>
0
2016-10-05T19:02:16Z
[ "javascript", "python", "html", "google-app-engine" ]
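The persist-then-retrieve idea in the answer above is essentially a one-shot "flash message". A framework-agnostic sketch of the mechanism (the function names and in-memory dict are mine, not webapp2 API; a real App Engine app would back the store with the session, memcache, or the datastore and carry the token in the redirect URL):

```python
import uuid

# Minimal one-shot "flash message" store.
# The POST handler calls set_flash() before redirecting and appends the
# returned token to the redirect URL; the GET handler calls pop_flash()
# and, if it gets a message back, renders the alert / extra CSS class.
_flash = {}

def set_flash(message):
    token = uuid.uuid4().hex
    _flash[token] = message
    return token

def pop_flash(token):
    # Returned once, then forgotten, so a page refresh doesn't re-show it.
    return _flash.pop(token, None)

token = set_flash("Nachricht wurde gesendet")
print(pop_flash(token))  # Nachricht wurde gesendet
print(pop_flash(token))  # None
```

The pop-once behavior is the important part: without it, the confirmation would reappear on every reload of the redirected page.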
Threading with subprocess
39,881,356
<p>I am using python 2.7 and am new to threading. I have a class with a run method, but I don't see the run method invoked when I create instances of the thread. I am also planning to use <code>subprocess.Popen</code> inside the <code>run</code> method and get <code>stdout</code> of the process for each filename and print the output. </p> <p>Please tell me what I am missing here for the <code>run</code> method to be called.</p> <pre><code>class FileScanThread(threading.Thread):
    def __init__(self, myFileName):
        print("In File Scan Thread")
        self.mapFile = myFileName #myjar=myFileName
        self.start()

    def run(self):
        print self.mapFile

x= FileScanThread("myfile.txt")
</code></pre>
1
2016-10-05T18:34:18Z
39,881,398
<p>you're forgetting to call the mother class constructor, so the thread object is never initialized and <code>start()</code> fails with <code>RuntimeError: thread.__init__() not called</code>. It's not java: you also don't need to pass a <code>target</code>, because your <code>run</code> method overrides the default <code>Thread.run</code> (whose only job is to call <code>target</code>).</p> <pre><code>import threading

class FileScanThread(threading.Thread):
    def __init__(self, myFileName):
        threading.Thread.__init__(self)
        # another syntax uses "super", which is simpler in python 3:
        # super().__init__()
        print("In File Scan Thread")
        self.mapFile = myFileName #myjar=myFileName
        self.start()

    def run(self):
        print(self.mapFile)

x = FileScanThread("myfile.txt")
x.join() # when you're done
</code></pre>
4
2016-10-05T18:37:17Z
[ "python", "multithreading" ]
Threading with subprocess
39,881,356
<p>I am using python 2.7 and am new to threading. I have a class with a run method, but I don't see the run method invoked when I create instances of the thread. I am also planning to use <code>subprocess.Popen</code> inside the <code>run</code> method and get <code>stdout</code> of the process for each filename and print the output. </p> <p>Please tell me what I am missing here for the <code>run</code> method to be called.</p> <pre><code>class FileScanThread(threading.Thread):
    def __init__(self, myFileName):
        print("In File Scan Thread")
        self.mapFile = myFileName #myjar=myFileName
        self.start()

    def run(self):
        print self.mapFile

x= FileScanThread("myfile.txt")
</code></pre>
1
2016-10-05T18:34:18Z
39,881,492
<p>This will do what you want. You aren't calling the <code>__init__</code> of the parent <code>Thread</code> class.</p> <pre><code>class FileScanThread(threading.Thread):
    def __init__(self, myFileName):
        threading.Thread.__init__(self)
        print("In File Scan Thread")
        self.mapFile = myFileName #myjar=myFileName
        self.start()

    def run(self):
        print self.mapFile

x = FileScanThread("myfile.txt")
</code></pre> <p>I don't think you have to pass a <code>target</code> argument to it. At least that's not usually how I do it. </p> <p>Output:</p> <pre><code>In File Scan Thread
myfile.txt
</code></pre>
1
2016-10-05T18:43:36Z
[ "python", "multithreading" ]
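Both answers above hinge on calling the parent constructor. This small stdlib demo shows the failure mode when it is skipped and the fixed pattern side by side (in Python 3; Python 2.7 raises the same RuntimeError with the same message):

```python
import threading

class BadThread(threading.Thread):
    def __init__(self):
        # Deliberately *not* calling threading.Thread.__init__(self)
        pass

    def run(self):
        print("never reached")

class GoodThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)  # the fix from the answers above
        self.ran = False

    def run(self):
        self.ran = True

try:
    BadThread().start()
except RuntimeError as exc:
    print("BadThread:", exc)  # thread.__init__() not called

t = GoodThread()
t.start()
t.join()
print("GoodThread ran:", t.ran)  # True
```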
BeautifulSoup unable to parse Goa University site
39,881,393
<p>I am working on a parsing project which requires me to parse educational websites. While doing so, my code is unable to parse the <a href="http://unigoa.ac.in" rel="nofollow"><strong>University of Goa</strong></a> site. It does not return what I expect. My code:</p> <pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
import requests

url = "http://unigoa.ac.in"
hdrs = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}

r = requests.get(url, verify=True, headers=hdrs)
result = BeautifulSoup(r.content)
print(result)
</code></pre> <p>It prints:</p> <pre><code>&lt;html&gt;&lt;head&gt;&lt;script type="text/javascript"&gt;
document.location="https://www.unigoa.ac.in/result_redirect.php";
&lt;/script&gt;
&lt;/head&gt;&lt;/html&gt;
</code></pre> <p>instead of the raw parsed html tree. I tried explicitly passing the parsers <code>lxml</code> and <code>html5lib</code> to BeautifulSoup, but that also does not work as expected. Kindly help me. Thanks in advance.</p>
1
2016-10-05T18:36:51Z
39,881,531
<p>You need to create a session then parse and use the redirect url:</p> <pre><code>with requests.Session() as s:
    s.headers.update(hdrs)
    r = s.get("https://www.unigoa.ac.in")
    result = BeautifulSoup(r.content)
    redirect = result.find("script").text.split("=")[1].strip('";\r\n')
    r2 = s.get(redirect)
    print(r2.text)
</code></pre> <p><em>r2.text</em> will give you the html you see on the home page.</p>
1
2016-10-05T18:46:22Z
[ "python", "python-2.7", "python-3.x", "parsing", "beautifulsoup" ]
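One fragility worth noting in the answer's `split("=")` trick: it breaks if the redirect URL itself ever contains an `=` (e.g. a query string). A regex on the same kind of interstitial page is sturdier. This demo works on a literal HTML string, so no network or BeautifulSoup is needed (the `?page=1` query string is invented to show the failure case):

```python
import re

# The interstitial page embeds the real URL in a JS assignment:
#   document.location="https://www.unigoa.ac.in/result_redirect.php";
html = '''<html><head><script type="text/javascript">
document.location="https://www.unigoa.ac.in/result_redirect.php?page=1";
</script>
</head></html>'''

# Capture everything between the quotes of the assignment; unlike
# split("=")[1], this survives "=" characters inside the URL itself.
match = re.search(r'document\.location\s*=\s*"([^"]+)"', html)
redirect = match.group(1)
print(redirect)  # https://www.unigoa.ac.in/result_redirect.php?page=1
```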
tensorflow import error. With /Library/Python/2.7/site-packages not included
39,881,437
<p>Trying to install tensorflow. Tried anaconda, it worked but affected my other program. Then decided to use pip install.</p> <p>But after I installed it, I just can't import it within ipython.</p> <p>Here are the messages: I tried uninstall and reinstall.</p> <pre><code>pip install tensorflow Requirement already satisfied (use --upgrade to upgrade): tensorflow in /Library/Python/2.7/site-packages Cleaning up... </code></pre> <p>First I try to import in ipython:</p> <pre><code>In [1]: import tensorflow --------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-a649b509054f&gt; in &lt;module&gt;() ----&gt; 1 import tensorflow ImportError: No module named tensorflow </code></pre> <p>And then I copied the folder of tensorflow from /Library/Python/2.7/site-packages to /usr/local/lib/python2.7/site-packages/. I try to import:</p> <pre><code>In [1]: import tensorflow --------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-a649b509054f&gt; in &lt;module&gt;() ----&gt; 1 import tensorflow /Library/Python/2.7/site-packages/tensorflow/__init__.py in &lt;module&gt;() 21 from __future__ import print_function 22 ---&gt; 23 from tensorflow.python import * 24 25 /Library/Python/2.7/site-packages/tensorflow/python/__init__.py in &lt;module&gt;() 57 please exit the tensorflow source tree, and relaunch your python interpreter 58 from there.""" % traceback.format_exc() ---&gt; 59 raise ImportError(msg) 60 61 from tensorflow.core.framework.node_def_pb2 import * ImportError: Traceback (most recent call last): File "tensorflow/python/__init__.py", line 53, in &lt;module&gt; from tensorflow.core.framework.graph_pb2 import * File "tensorflow/core/framework/graph_pb2.py", line 6, in &lt;module&gt; from google.protobuf import descriptor as _descriptor ImportError: No module named google.protobuf Error importing tensorflow. 
Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. </code></pre> <p>However, I have protobuf installed under /Library/Python/2.7/site-packages as well.</p> <p>I find out this version of ipython is using macports python package directory: /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/</p> <p>I am not sure how to let ipython search for the pip installed directory.</p> <p>I reinstalled the tensorflow again as it is described on the website:</p> <pre><code>sudo pip install --upgrade $TF_BINARY_URL Password: Downloading/unpacking https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl Downloading tensorflow-0.11.0rc0-py2-none-any.whl (35.5MB): 35.5MB downloaded Downloading/unpacking protobuf==3.0.0 (from tensorflow==0.11.0rc0) Downloading protobuf-3.0.0-py2.py3-none-any.whl (342kB): 342kB downloaded Requirement already up-to-date: wheel in /Library/Python/2.7/site-packages (from tensorflow==0.11.0rc0) Requirement already up-to-date: mock&gt;=2.0.0 in /Library/Python/2.7/site-packages (from tensorflow==0.11.0rc0) Requirement already up-to-date: numpy&gt;=1.11.0 in /Library/Python/2.7/site-packages (from tensorflow==0.11.0rc0) Requirement already up-to-date: six&gt;=1.10.0 in /Library/Python/2.7/site-packages (from tensorflow==0.11.0rc0) Downloading/unpacking setuptools from https://pypi.python.org/packages/be/20/3f4d2fb59ddeed35532bd4e11e900abcf8019d29f4558d38169639303536/setuptools-28.2.0-py2.py3-none-any.whl#md5=02e79b1127c5a131a2dace6d30cf7f25 (from protobuf==3.0.0-&gt;tensorflow==0.11.0rc0) Downloading setuptools-28.2.0-py2.py3-none-any.whl (467kB): 467kB downloaded Installing collected packages: tensorflow, protobuf, setuptools Found existing installation: protobuf 3.1.0.post1 Uninstalling protobuf: Successfully uninstalled protobuf Found existing installation: 
setuptools 18.5 Uninstalling setuptools: Successfully uninstalled setuptools Successfully installed tensorflow protobuf setuptools Cleaning up... </code></pre> <p>And still no luck:</p> <pre><code>In [1]: import tensorflow --------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-a649b509054f&gt; in &lt;module&gt;() ----&gt; 1 import tensorflow </code></pre>
2
2016-10-05T18:40:20Z
39,900,144
<p>OK, I got the problem fixed.</p> <p>I am not sure what exactly is causing it. Because I installed all this on my RMBP, nothing wrong. But desktop just won't take it.</p> <p>Solution is:</p> <p>Uninstall all previously installed or half installed tensorflow, and my other packages (just two main packages). And use anaconda to install tensorflow, activate it. And use anaconda to install my other packages.</p> <p>Then problem solved.</p>
0
2016-10-06T15:34:01Z
[ "python", "ipython", "tensorflow" ]
best way to transfer data from flask to d3
39,881,571
<p>I have csv tables, processed in Pandas, that I would like to serve from Flask to the browser so that I can use d3 to display the information. How do I transfer the data from Flask to the browser? </p>
0
2016-10-05T18:48:53Z
39,882,115
<p>Use the to_csv method of the Pandas dataframe with <code>sep="\t"</code> (so the d3.tsv loader below can parse it), and return that from your flask server:</p> <pre><code># app.py
@app.route('/my/data/endpoint')
def get_d3_data():
    df = pandas.DataFrame(...) # Constructed however you need it
    return df.to_csv(sep="\t")
</code></pre> <p>Then on the front end, direct d3.tsv to your endpoint above:</p> <pre><code>&lt;!-- page.html --&gt;
&lt;script&gt;
d3.tsv("/my/data/endpoint", function(data) {
    console.log(data); // Do something with data
});
&lt;/script&gt;
</code></pre>
0
2016-10-05T19:26:13Z
[ "javascript", "python", "d3.js", "flask", "routing" ]
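Whatever builds the frame, the endpoint ultimately returns plain delimited text that d3 parses back into row objects in the browser. A stdlib-only sketch of that serialization step, with hypothetical rows standing in for the pandas data:

```python
import csv
import io

def rows_to_csv(fieldnames, rows):
    """Render rows as CSV text using only the stdlib.

    This mirrors what DataFrame.to_csv produces: a header line followed
    by one delimited line per row, which d3's loaders turn back into
    objects keyed by the header names.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

body = rows_to_csv(["x", "y"], [{"x": 1, "y": 2}, {"x": 3, "y": 4}])
print(body)
```

Passing `delimiter="\t"` to `DictWriter` would produce the tab-separated variant instead.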
How to clear pending analysis in Cuckoo sandbox platform
39,881,589
<p>I am testing malware detection on guest VM using Cuckoo sandbox platform. To speed up the analysis, I want to remove pending analysis but keep completed analysis. </p> <p>Cuckoo have --clean option but it will clean all tasks and samples. Can you think of a way to only remove pending analysis? </p> <p>Thanks</p>
0
2016-10-05T18:50:27Z
39,904,650
<p>You can do it by deleting the entries for the still-pending analyses from the MySQL database, leaving the completed ones in place.</p>
0
2016-10-06T19:57:19Z
[ "python", "virtual-machine", "sandbox", "antivirus", "malware-detection" ]
Parse one or more expressions with helpful errors
39,881,655
<p>I'm using grako (a PEG parser generator library for python) to parse a simple declarative language where a document can contain one or more protocols.</p> <p>Originally, I had the root rule for document written as:</p> <p><code>document = {protocol}+ ;</code></p> <p>This appropriately returns a list of protocols, but only gives helpful errors if a syntax error is in the first protocol. Otherwise, it silently discards the invalid protocol and everything after it.</p> <p>I have also tried a few variations on:</p> <pre><code>document = protocol document | $ ; </code></pre> <p>But this doesn't result in a list if there's only one protocol, and doesn't give helpful error messages either, saying only <code>no available options: (...) document</code> if any of the protocols contains an error.</p> <p>How do I write a rule that does both of the following?:</p> <ol> <li>Always returns a list, even if there's only one protocol</li> <li>Displays helpful error messages about the unsuccessful match, instead of just saying it's an invalid document or silently dropping the damaged protocol</li> </ol>
2
2016-10-05T18:55:03Z
39,886,192
<p>This is the solution:</p> <pre><code>document = {protocol ~ }+ $ ; </code></pre> <p>If you don't add the <code>$</code> for the parser to see the end of file, the parse will succeed with one or more <em>protocol</em>, even if there are more to parse.</p> <p>Adding the <em>cut</em> expression (<code>~</code>) makes the parser commit to what was parsed in the closest option/choice in the parse (a closure is an option of <code>X = a X|();</code>). Additional <em>cut</em> expressions within what's parsed by <code>protocol</code> will make the error messages be closer to the expected points of failure in the input.</p>
1
2016-10-06T01:39:52Z
[ "python", "parsing", "ebnf", "peg", "grako" ]
Terminate loop at any given time
39,881,660
<p>I have the following code which turns an outlet on/off every 3 seconds.</p> <pre><code> start_time = time.time() counter = 0 agent = snmpy4.Agent("192.168.9.50") while True: if (counter % 2 == 0): agent.set("1.3.6.1.4.1.13742.6.4.1.2.1.2.1.1",1) else: agent.set("1.3.6.1.4.1.13742.6.4.1.2.1.2.1.1", 0) time.sleep(3- ((time.time()-start_time) % 3)) counter = counter + 1 </code></pre> <p>Is there a way I can have the loop terminate at any given point if something is entered, (space) for example... while letting the code above run in the mean time</p>
0
2016-10-05T18:55:22Z
39,881,978
<p>You can put the loop in a thread and use the main thread to wait on the keyboard. If it's okay for "something to be entered" to be a line ending with a line feed (e.g., type a command and press enter), then this will do:</p> <pre><code>import time import threading import sys def agent_setter(event): start_time = time.time() counter = 0 #agent = snmpy4.Agent("192.168.9.50") while True: if (counter % 2 == 0): print('agent.set("1.3.6.1.4.1.13742.6.4.1.2.1.2.1.1",1)') else: print('agent.set("1.3.6.1.4.1.13742.6.4.1.2.1.2.1.1", 0)') if event.wait(3- ((time.time()-start_time) % 3)): print('got keyboard') event.clear() counter = counter + 1 agent_event = threading.Event() agent_thread = threading.Thread(target=agent_setter, args=(agent_event,)) agent_thread.start() for line in sys.stdin: agent_event.set() </code></pre>
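The key primitive in the answer above is `Event.wait(timeout)`, which sleeps for at most `timeout` seconds and returns `True` only if the event was set in the meantime (or already was set). A tiny self-contained illustration:

```python
import threading

ev = threading.Event()

# Not set yet: wait() blocks for the full timeout and returns False
print(ev.wait(0.01))  # False

ev.set()
# Already set: wait() returns True immediately, without sleeping
print(ev.wait(0.01))  # True
```

This is why the loop in the answer can both keep its 3-second cadence and react promptly to keyboard input: the same call serves as the sleep and as the interruption check.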
0
2016-10-05T19:16:12Z
[ "python", "loops", "break" ]
Jinja2 Environment ImportError in Django 1.10 with Python 3.5
39,881,708
<p>I have a Django 1.10 project (with Python 3.5, and Jinja2 2.8 for templates) setup like this:</p> <p><a href="http://i.stack.imgur.com/QSdtj.png" rel="nofollow"><img src="http://i.stack.imgur.com/QSdtj.png" alt="Directory structure"></a></p> <p>*Consider the red marks as <code>mysite</code></p> <p>File <code>jinja2.py</code> defines this:</p> <pre><code>from __future__ import absolute_import from django.contrib.staticfiles.storage import staticfiles_storage from django.urls import reverse from jinja2 import Environment def environment(**options): env = Environment(**options) env.globals.update({ 'static': staticfiles_storage.url, 'url': reverse, }) return env </code></pre> <p>And this file is called in <code>settings.py</code> like this:</p> <pre><code>{ 'BACKEND': 'django.template.backends.jinja2.Jinja2', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'environment': 'mysite.jinja2.environment', }, }, </code></pre> <p>But when I run the server and visit a URL with view that calls a Jinja2 template, I get the following error:</p> <p><a href="http://i.stack.imgur.com/WksbB.png" rel="nofollow"><img src="http://i.stack.imgur.com/WksbB.png" alt="ImportError"></a></p> <p>I spent several hours searching the Internet to find a solution, but unable to resolve this.</p> <p>The entire setup was done following the official Django documentation: <a href="https://docs.djangoproject.com/en/1.10/topics/templates/#django.template.backends.jinja2.Jinja2" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/templates/#django.template.backends.jinja2.Jinja2</a></p> <p>Thanks for your response(s)</p>
0
2016-10-05T18:58:27Z
39,883,270
<p>You've called your own file <code>jinja2</code>. So Python finds that instead of the actual Jinja2 library when you do <code>import jinja2</code>.</p> <p>Call your file something else.</p>
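The shadowing described above is easy to reproduce with any module. A minimal sketch, using the stdlib `json` module as a hypothetical stand-in for `jinja2` (the file name and attribute are invented for the demo):

```python
import os
import sys
import tempfile

# Create a directory containing a file named json.py that shadows the stdlib module
shadow_dir = tempfile.mkdtemp()
with open(os.path.join(shadow_dir, "json.py"), "w") as f:
    f.write("IM_THE_SHADOW = True\n")

sys.path.insert(0, shadow_dir)   # mimics the project dir sitting first on sys.path
sys.modules.pop("json", None)    # forget any cached import, as in a fresh process

import json

print(hasattr(json, "IM_THE_SHADOW"))  # True: the local file won the import
```

Renaming the offending file (and deleting any stale `.pyc` next to it) makes the import resolve to the real library again.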
0
2016-10-05T20:40:36Z
[ "python", "django" ]
Jinja2 Environment ImportError in Django 1.10 with Python 3.5
39,881,708
<p>I have a Django 1.10 project (with Python 3.5, and Jinja2 2.8 for templates) setup like this:</p> <p><a href="http://i.stack.imgur.com/QSdtj.png" rel="nofollow"><img src="http://i.stack.imgur.com/QSdtj.png" alt="Directory structure"></a></p> <p>*Consider the red marks as <code>mysite</code></p> <p>File <code>jinja2.py</code> defines this:</p> <pre><code>from __future__ import absolute_import from django.contrib.staticfiles.storage import staticfiles_storage from django.urls import reverse from jinja2 import Environment def environment(**options): env = Environment(**options) env.globals.update({ 'static': staticfiles_storage.url, 'url': reverse, }) return env </code></pre> <p>And this file is called in <code>settings.py</code> like this:</p> <pre><code>{ 'BACKEND': 'django.template.backends.jinja2.Jinja2', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'environment': 'mysite.jinja2.environment', }, }, </code></pre> <p>But when I run the server and visit a URL with view that calls a Jinja2 template, I get the following error:</p> <p><a href="http://i.stack.imgur.com/WksbB.png" rel="nofollow"><img src="http://i.stack.imgur.com/WksbB.png" alt="ImportError"></a></p> <p>I spent several hours searching the Internet to find a solution, but unable to resolve this.</p> <p>The entire setup was done following the official Django documentation: <a href="https://docs.djangoproject.com/en/1.10/topics/templates/#django.template.backends.jinja2.Jinja2" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/templates/#django.template.backends.jinja2.Jinja2</a></p> <p>Thanks for your response(s)</p>
0
2016-10-05T18:58:27Z
39,884,017
<p>I think you put the <code>jinja2.py</code> in the wrong position. It should be in the dir <code>mysite/</code>.</p> <p>This is the directory structure of my django project <code>mysite</code> (jin2 is an app dir), which runs normally:</p> <pre><code>➜ /tmp/jinja/mysite $ tree . -I '*.pyc' . ├── db.sqlite3 ├── jin2 │   ├── admin.py │   ├── apps.py │   ├── __init__.py │   ├── jinja2 │   │   └── index.html │   ├── migrations │   │   ├── __init__.py │   │   └── __pycache__ │   ├── models.py │   ├── __pycache__ │   ├── tests.py │   ├── urls.py │   └── views.py ├── manage.py ├── mysite │   ├── __init__.py │   ├── jinja2.py │   ├── __pycache__ │   ├── settings.py │   ├── urls.py │   └── wsgi.py └── __pycache__ </code></pre> <p>My <code>settings.py</code> and <code>jinja2.py</code> are the same as yours. The only difference is that I put the <code>jinja2.py</code> in the mysite dir.</p> <p>And you should delete the <code>from __future__ import absolute_import</code> in the <code>jinja2.py</code>; it is only needed in Python 2.</p>
0
2016-10-05T21:30:28Z
[ "python", "django" ]
how would i use php to display multiple buttons on a web page (they are going to control a raspberry pi)
39,881,738
<p>i am creating a home automation system. i am using my raspberry pi as a web server to control the gpio ports, and i have used some code to turn on the gpio ports. what i want to do is have different php buttons that run different python scripts, which in turn turn on different ports. i know lots about python but very little about php and html.</p> <p>this is all the code i have so far for turning on a gpio port through php</p> <pre><code> &lt;html&gt; &lt;head&gt; &lt;meta name="viewport" content="width=device-width" /&gt; &lt;title&gt;LED Control&lt;/title&gt; &lt;/head&gt; &lt;body&gt; LED Control: &lt;form method="get" action="gpio.php"&gt; &lt;input type="submit" value="ON" name="on"&gt; &lt;input type="submit" value="OFF" name="off"&gt; &lt;/form&gt; &lt;?php $setmode17 = shell_exec("/usr/local/bin/gpio -g mode 17 out"); if(isset($_GET['on'])){ $gpio_on = shell_exec("/usr/local/bin/gpio -g write 17 1"); echo "LED is on"; } else if(isset($_GET['off'])){ $gpio_off = shell_exec("/usr/local/bin/gpio -g write 17 0"); echo "LED is off"; } ?&gt; &lt;/body&gt; </code></pre> <p></p> <p>i also want to control an 8 channel relay, meaning i will need 8 buttons. i have made up some basic code for running the raspberry pi scripts, but i don't know how to put it all together.</p> <pre><code>&lt;?php if (isset($_POST['LightON'])) { exec("sudo python /home/pi/lighton.py"); } if (isset($_POST['LightOFF'])) { exec("sudo python /home/pi/lightoff.py"); } ?&gt; </code></pre> <p>Thanks, Sam. i am aware the variable names are different</p>
0
2016-10-05T19:00:22Z
39,906,578
<p>One solution would be to add all your html into a php file, add all your php code on top of that, and run it in your browser. This assumes that the www-data user has the correct permissions on your file system to execute the python scripts.</p> <p>Example:</p> <pre><code>&lt;?php if (isset($_POST['on'])) { exec("sudo python /home/pi/lighton.py"); } if (isset($_POST['off'])) { exec("sudo python /home/pi/lightoff.py"); } ?&gt; &lt;html&gt; &lt;head&gt; &lt;meta name="viewport" content="width=device-width" /&gt; &lt;title&gt;LED Control&lt;/title&gt; &lt;/head&gt; &lt;body&gt; LED Control: &lt;form method="POST" &gt; &lt;input type="submit" value="ON" name="on"&gt; &lt;input type="submit" value="OFF" name="off"&gt; &lt;/form&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Alternatively, if you know Python well, configure ModPython on Apache and use a py file as your server side script, with JQuery on the client side to pass the button presses.</p>
0
2016-10-06T22:27:23Z
[ "python", "linux", "raspberry-pi", "composer-php" ]
Correct usage of Decimal objects in this expression
39,881,740
<p>I need to calculate the area of any regular polygon. I acquired this formula from a good ol' google:</p> <pre><code>(n * l**2) / (4 * math.tan((180/n) * (math.pi/180))) </code></pre> <p>No matter how I use <code>decimal.Decimal</code> with these values I can never get it to work as intended. <code>n</code> and <code>l</code> are given as floats.</p> <p>Can someone apply decimal object to this please? When <code>n</code> is 4 and <code>l</code> is 2, it should return <code>4</code>, but currently it returns <code>4.00000000000000156234</code> (or something along those lines).</p>
-1
2016-10-05T19:00:29Z
39,881,858
<p>You need to set the precision for the decimal object. Pi has an infinite decimal expansion, so the tangent term in the denominator is never exact and the result won't come out as an exact whole number.</p> <p>You can set the context precision (the number of significant digits used for Decimal arithmetic):</p> <pre><code>from decimal import * getcontext().prec = 1 </code></pre> <p>or round an individual Decimal object by quantizing it:</p> <pre><code>&gt;&gt;&gt; Decimal('7.325').quantize(Decimal('.01'), rounding=ROUND_DOWN) Decimal('7.32') </code></pre> <p><a href="https://docs.python.org/3.5/library/decimal.html" rel="nofollow">taken from the docs</a></p>
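Applying the quantize approach above to the polygon formula from the question might look like this (rounding to 4 decimal places is an arbitrary choice for the sketch):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

def polygon_area(n, l):
    raw = (n * l**2) / (4 * math.tan(math.pi / n))
    # quantize trims the float-noise digits left over from math.tan
    return Decimal(str(raw)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)

print(polygon_area(4, 2))  # 4.0000
```

For n=4, l=2 the raw float is a hair above 4 because `math.tan(math.pi/4)` is not exactly 1; the quantize step rounds that noise away.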
1
2016-10-05T19:08:10Z
[ "python", "python-3.x", "decimal" ]
Aws passing credentials to ansible s3 module
39,881,807
<p>In ansible, when I am passing the access key, secret key and token to my aws credentials, it passes the access key to my ansible s3 module, but when I am passing the secret key and token it shows me <code>"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"</code> instead. How can I solve this?</p> <p>In the access key id there are no special symbols at all, only plain alphanumeric text, but the secret key and session token contain / and other special symbols; maybe that is why it comes through as <code>"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"</code>???</p> <p>Thanks guys</p>
0
2016-10-05T19:05:02Z
39,882,855
<p><code>aws_secret_key</code> and <code>security_token</code> are marked as <code>no_log</code> arguments. See <a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/ec2.py#L125" rel="nofollow">source</a>.<br> It is sensitive data, so not printed to stdout and you see <code>VALUE_SPECIFIED_IN_NO_LOG_PARAMETER</code>.</p> <p>If you absolutely need to get secret values printed to stdout, use <code>ANSIBLE_DISPLAY_ARGS_TO_STDOUT=1</code> environment variable.</p>
1
2016-10-05T20:10:48Z
[ "python", "amazon-web-services", "amazon-s3", "ansible", "ansible-playbook" ]
Hide upper/right tick markers using rcparams/mplstyle
39,882,004
<p>I am currently using a variant of <a href="http://stackoverflow.com/a/11417222/2552873">this answer</a> to eliminate the top/right edges on my plots. However, I would like to be able to add this to my .mplstyle file instead of calling this function for every plot I create. Is there a way to achieve this functionality using the style parameters, or even by calling something once at the beginning of my code?</p>
1
2016-10-05T19:17:58Z
39,884,034
<p>You can use:</p> <pre><code>axes.spines.top : False axes.spines.right : False </code></pre> <p>in an mplstyle file to turn off spines. Unfortunately, <a href="http://stackoverflow.com/a/38750147/812786">this recent answer</a> indicates that ticks cannot currently be controlled from an rc or style file like this, and I haven't yet found a way either. However, in matplotlib 2.0 you should be able to write:</p> <pre><code>xtick.top : False ytick.right : False </code></pre> <p>(In fact, this appears to be the default style for 2.0, according to <a href="https://github.com/matplotlib/matplotlib/blob/v2.x/matplotlibrc.template#L362" rel="nofollow">the template file</a>.)</p>
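Putting the two halves of the answer above together, a complete style file might look like this (the file name is arbitrary, and the two tick keys assume matplotlib >= 2.0 as noted):

```ini
# clean.mplstyle -- hide the top/right spines and their tick marks
axes.spines.top : False
axes.spines.right : False
xtick.top : False
ytick.right : False
```

Loading it once at the top of a script with `plt.style.use('./clean.mplstyle')` then applies the settings to every figure, with no per-plot function call.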
1
2016-10-05T21:31:10Z
[ "python", "matplotlib" ]
pip error after upgrading pip & scrapy by "pip install --upgrade"
39,882,200
<p>Using debian 8(jessie) amd64 with python 2.7.9. I tried following commands:</p> <pre><code>pip install --upgrade pip pip install --upgrade scrapy </code></pre> <p>after that, I am getting following pip error</p> <pre><code>root@debian:~# pip Traceback (most recent call last): File "/usr/local/bin/pip", line 11, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2604, in load_entry_point return ep.load() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2264, in load return self.resolve() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2270, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 16, in &lt;module&gt; from pip.vcs import git, mercurial, subversion, bazaar # noqa File "/usr/local/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in &lt;module&gt; from pip.download import path_to_url File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 39, in &lt;module&gt; from pip._vendor import requests, six File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 53, in &lt;module&gt; from .packages.urllib3.contrib import pyopenssl File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in &lt;module&gt; import OpenSSL.SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in &lt;module&gt; from OpenSSL import rand, crypto, SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in &lt;module&gt; from OpenSSL._util import ( File "/usr/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in &lt;module&gt; 
binding = Binding() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 89, in __init__ self._ensure_ffi_initialized() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 113, in _ensure_ffi_initialized libraries=libraries, File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 80, in build_ffi extra_link_args=extra_link_args, File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 437, in verify lib = self.verifier.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 114, in load_library return self._load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 225, in _load_library return self._vengine.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 174, in load_library lst = list(map(self.ffi._get_cached_btype, lst)) File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 409, in _get_cached_btype BType = type.get_cached_btype(self, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 61, in get_cached_btype BType = self.build_backend_type(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 507, in build_backend_type base_btype = self.build_baseinttype(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 525, in build_baseinttype % self._get_c_name()) cffi.api.CDefError: 'point_conversion_form_t' has no values explicitly defined: refusing to guess which integer type it is meant to be (unsigned/signed, int/long) </code></pre> <p>googled for several similar problem, cffi or cryptography may cause this problem, but i can't find any clear way to fix it.</p>
2
2016-10-05T19:31:23Z
39,882,444
<p>Got the exact same error today, but in a different situation. I suspect this is related to <code>cryptography</code> module.</p> <p>What helped me was to install a specific version of <code>cffi</code> package:</p> <pre><code>pip install cffi==1.7.0 </code></pre>
0
2016-10-05T19:46:34Z
[ "python", "scrapy", "pip" ]
pip error after upgrading pip & scrapy by "pip install --upgrade"
39,882,200
<p>Using debian 8(jessie) amd64 with python 2.7.9. I tried following commands:</p> <pre><code>pip install --upgrade pip pip install --upgrade scrapy </code></pre> <p>after that, I am getting following pip error</p> <pre><code>root@debian:~# pip Traceback (most recent call last): File "/usr/local/bin/pip", line 11, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2604, in load_entry_point return ep.load() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2264, in load return self.resolve() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2270, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 16, in &lt;module&gt; from pip.vcs import git, mercurial, subversion, bazaar # noqa File "/usr/local/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in &lt;module&gt; from pip.download import path_to_url File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 39, in &lt;module&gt; from pip._vendor import requests, six File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 53, in &lt;module&gt; from .packages.urllib3.contrib import pyopenssl File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in &lt;module&gt; import OpenSSL.SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in &lt;module&gt; from OpenSSL import rand, crypto, SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in &lt;module&gt; from OpenSSL._util import ( File "/usr/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in &lt;module&gt; 
binding = Binding() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 89, in __init__ self._ensure_ffi_initialized() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 113, in _ensure_ffi_initialized libraries=libraries, File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 80, in build_ffi extra_link_args=extra_link_args, File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 437, in verify lib = self.verifier.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 114, in load_library return self._load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 225, in _load_library return self._vengine.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 174, in load_library lst = list(map(self.ffi._get_cached_btype, lst)) File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 409, in _get_cached_btype BType = type.get_cached_btype(self, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 61, in get_cached_btype BType = self.build_backend_type(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 507, in build_backend_type base_btype = self.build_baseinttype(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 525, in build_baseinttype % self._get_c_name()) cffi.api.CDefError: 'point_conversion_form_t' has no values explicitly defined: refusing to guess which integer type it is meant to be (unsigned/signed, int/long) </code></pre> <p>googled for several similar problem, cffi or cryptography may cause this problem, but i can't find any clear way to fix it.</p>
2
2016-10-05T19:31:23Z
39,901,990
<p>i removed cffi and tried this command to install cffi 1.7.0:</p> <pre><code>pip install cffi==1.7.0 </code></pre> <p>it worked, thank you, alecxe and moeseth :)</p>
0
2016-10-06T17:13:03Z
[ "python", "scrapy", "pip" ]
pip error after upgrading pip & scrapy by "pip install --upgrade"
39,882,200
<p>Using debian 8(jessie) amd64 with python 2.7.9. I tried following commands:</p> <pre><code>pip install --upgrade pip pip install --upgrade scrapy </code></pre> <p>after that, I am getting following pip error</p> <pre><code>root@debian:~# pip Traceback (most recent call last): File "/usr/local/bin/pip", line 11, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2604, in load_entry_point return ep.load() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2264, in load return self.resolve() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2270, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 16, in &lt;module&gt; from pip.vcs import git, mercurial, subversion, bazaar # noqa File "/usr/local/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in &lt;module&gt; from pip.download import path_to_url File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 39, in &lt;module&gt; from pip._vendor import requests, six File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 53, in &lt;module&gt; from .packages.urllib3.contrib import pyopenssl File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in &lt;module&gt; import OpenSSL.SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in &lt;module&gt; from OpenSSL import rand, crypto, SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in &lt;module&gt; from OpenSSL._util import ( File "/usr/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in &lt;module&gt; 
binding = Binding() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 89, in __init__ self._ensure_ffi_initialized() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 113, in _ensure_ffi_initialized libraries=libraries, File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 80, in build_ffi extra_link_args=extra_link_args, File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 437, in verify lib = self.verifier.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 114, in load_library return self._load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 225, in _load_library return self._vengine.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 174, in load_library lst = list(map(self.ffi._get_cached_btype, lst)) File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 409, in _get_cached_btype BType = type.get_cached_btype(self, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 61, in get_cached_btype BType = self.build_backend_type(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 507, in build_backend_type base_btype = self.build_baseinttype(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 525, in build_baseinttype % self._get_c_name()) cffi.api.CDefError: 'point_conversion_form_t' has no values explicitly defined: refusing to guess which integer type it is meant to be (unsigned/signed, int/long) </code></pre> <p>googled for several similar problem, cffi or cryptography may cause this problem, but i can't find any clear way to fix it.</p>
2
2016-10-05T19:31:23Z
40,011,723
<p>My situation was like @alecxe</p> <p>This works:</p> <pre><code>pip install cffi==1.7.0 </code></pre>
0
2016-10-13T03:18:38Z
[ "python", "scrapy", "pip" ]
pip error after upgrading pip & scrapy by "pip install --upgrade"
39,882,200
<p>Using debian 8(jessie) amd64 with python 2.7.9. I tried following commands:</p> <pre><code>pip install --upgrade pip pip install --upgrade scrapy </code></pre> <p>after that, I am getting following pip error</p> <pre><code>root@debian:~# pip Traceback (most recent call last): File "/usr/local/bin/pip", line 11, in &lt;module&gt; load_entry_point('pip==8.1.2', 'console_scripts', 'pip')() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2604, in load_entry_point return ep.load() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2264, in load return self.resolve() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2270, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 16, in &lt;module&gt; from pip.vcs import git, mercurial, subversion, bazaar # noqa File "/usr/local/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in &lt;module&gt; from pip.download import path_to_url File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 39, in &lt;module&gt; from pip._vendor import requests, six File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 53, in &lt;module&gt; from .packages.urllib3.contrib import pyopenssl File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in &lt;module&gt; import OpenSSL.SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in &lt;module&gt; from OpenSSL import rand, crypto, SSL File "/usr/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in &lt;module&gt; from OpenSSL._util import ( File "/usr/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in &lt;module&gt; 
binding = Binding() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 89, in __init__ self._ensure_ffi_initialized() File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 113, in _ensure_ffi_initialized libraries=libraries, File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 80, in build_ffi extra_link_args=extra_link_args, File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 437, in verify lib = self.verifier.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 114, in load_library return self._load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 225, in _load_library return self._vengine.load_library() File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 174, in load_library lst = list(map(self.ffi._get_cached_btype, lst)) File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 409, in _get_cached_btype BType = type.get_cached_btype(self, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 61, in get_cached_btype BType = self.build_backend_type(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 507, in build_backend_type base_btype = self.build_baseinttype(ffi, finishlist) File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 525, in build_baseinttype % self._get_c_name()) cffi.api.CDefError: 'point_conversion_form_t' has no values explicitly defined: refusing to guess which integer type it is meant to be (unsigned/signed, int/long) </code></pre> <p>googled for several similar problem, cffi or cryptography may cause this problem, but i can't find any clear way to fix it.</p>
2
2016-10-05T19:31:23Z
40,056,431
<p>Had the same problem as <em>moeseth</em>: the <code>pip install something</code> answers are pretty useless when all pip commands throw the original exception. Installing cffi v. 1.7.0 also solved the problem and this is how I managed to do it in Debian Jessie without relying on pip:</p> <ol> <li><p>Temporarily add testing repos to <code>/etc/apt/sources.list</code>, e.g.,</p> <pre><code>deb http://ftp.fi.debian.org/debian/ testing main contrib non-free deb-src http://ftp.fi.debian.org/debian/ testing main contrib non-free </code></pre></li> <li><p>Run <code>sudo apt-get update</code></p></li> <li>Use aptitude or apt-get to upgrade <code>python-cffi</code> and <code>python-cffi-backend</code> to v. 1.7.0</li> <li>Remove the lines added in step 1. from <code>/etc/apt/sources.list</code> and run <code>sudo apt-get update</code></li> </ol>
0
2016-10-15T07:38:47Z
[ "python", "scrapy", "pip" ]
POST data from modelform_factory generated form
39,882,222
<p>What I'm trying to accomplish is to save the data to the relevant model. Very much looking to mimic the Django admin interface.</p> <p>urls.py </p> <pre><code>url(r'^admin/add/(\w+)', views.admin_add_object, name='admin_add_object'), </code></pre> <p>view.py: </p> <pre><code>def admin_add_object(request, model): if request.method == "POST": # The code I need to save the posted data else: model_name = apps.get_model("product", model) ModelFormSet = modelform_factory(model_name, fields=("__all__")) return render(request, 'product/admin/add.html', {'formset': ModelFormSet}) </code></pre> <p>add.html</p> <p>Here I simply access <code>{{ formset }}</code> to get the relevant input fields.</p>
0
2016-10-05T19:33:19Z
39,882,389
<pre><code>def admin_add_object(request, model): model_name = apps.get_model("product", model) ModelFormSet = modelformset_factory(model_name) if request.method == 'POST': formset = ModelFormSet(request.POST, request.FILES) if formset.is_valid(): formset.save() # do something fancy. Redirect or so... else: ModelFormSet = modelform_factory(model_name, fields=("__all__")) return render(request, 'product/admin/add.html', {'formset': ModelFormSet}) </code></pre> <p>Something like that. You can read more about this in the docs: <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#using-a-model-formset-in-a-view" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#using-a-model-formset-in-a-view</a> </p>
0
2016-10-05T19:43:55Z
[ "python", "django", "python-2.7", "django-models", "django-forms" ]
ImportError: No module named caffe while running spark-submit
39,882,281
<p>While running a <code>spark-submit</code> on a spark standalone cluster comprised of one master and 1 worker, the <code>caffe</code> python module does not get imported due to error <code>ImportError: No module named caffe</code></p> <p>This doesn't seem to be an issue whenever I run a job locally by <code>spark-submit --master local script.py</code> the <code>caffe</code> module gets imported just fine.</p> <p>The environmental variables are currently set under <code>~/.profile</code> for spark and caffe and they are pointing to the <code>PYTHONPATH</code>.</p> <p>Is <code>~/.profile</code> the correct location to set these variables or perhaps a system wide configuration is needed such as adding the variables under <code>/etc/profile.d/</code></p>
1
2016-10-05T19:37:06Z
40,077,765
<p>Please note that the CaffeOnSpark team ported Caffe to a distributed environment backed by Hadoop and Spark. You cannot, I am 99.99% sure, use Caffe alone <em>(without any modifications)</em> in a Spark cluster or any distributed environment per se. (The Caffe team is known to be working on this.)</p> <p>If you need distributed deep-learning using Caffe, please follow the building method mentioned in <a href="https://github.com/yahoo/CaffeOnSpark/wiki/build" rel="nofollow">https://github.com/yahoo/CaffeOnSpark/wiki/build</a> to build CaffeOnSpark, and use CaffeOnSpark instead of Caffe. </p> <p>But your best bet will be to follow either the <a href="https://github.com/yahoo/CaffeOnSpark/wiki/GetStarted_standalone" rel="nofollow">GetStarted_standalone wiki</a> or the <a href="https://github.com/yahoo/CaffeOnSpark/wiki/GetStarted_yarn" rel="nofollow">GetStarted_yarn wiki</a> to set up a distributed environment to carry out deep-learning.</p> <p>Further, to add Python support, please go through the <a href="https://github.com/yahoo/CaffeOnSpark/wiki/GetStarted_python" rel="nofollow">GetStarted_python wiki</a>.</p> <p>Also, since you mentioned <a href="https://github.com/yahoo/CaffeOnSpark/issues/164" rel="nofollow">here</a> that you are using Ubuntu, please use <code>~/.bashrc</code> to update your environment variables. You will have to source the file after the changes: <code>source ~/.bashrc</code></p>
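For example, lines like the following could go in <code>~/.bashrc</code> — the paths are assumptions and must be adjusted to where Caffe actually lives on your machines; on a standalone cluster every worker node needs the same setup, since the <code>ImportError</code> is raised on the executors:

```shell
# Hypothetical install locations -- adjust to your setup.
export CAFFE_ROOT="$HOME/caffe"
export PYTHONPATH="$CAFFE_ROOT/python:$PYTHONPATH"
echo "$PYTHONPATH"
```

After appending lines like these, run <code>source ~/.bashrc</code> on each node and re-submit the job.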
0
2016-10-17T02:43:10Z
[ "python", "ubuntu", "apache-spark", "caffe", "pycaffe" ]
remove quotes from csv file data in python
39,882,301
<p>I have a csv that is being imported from a URL and placed into a database; however, it imports with quotes around the names and I'd like to remove them. The original format of the csv file is</p> <pre><code>"Apple Inc.",113.08,113.07 "Alphabet Inc.",777.61,777.30 "Microsoft Corporation",57.730,57.720 </code></pre> <p>The code I currently have is as follows: </p> <pre><code>def csv_new(conn, cursor, filename): with open(filename, 'rt') as csv_file: csv_data = csv.reader(csv_file) for row in csv_data: if(not row[0][0].isdigit()): continue split = [int(x) for x in row[0].split('/')] row[0] = datetime.datetime(split[2], split[0], split[1]).date().isoformat() print(row); cursor.execute('INSERT INTO `trade_data`.`import_data`' '(date, name, price) VALUES(%s, "%s", %s)', row) conn.commit() </code></pre> <p>The final database looks like this: </p> <pre><code> Name | Price1| Price 2| 'Apple Inc.' 113.08 113.07 'Alphabet Inc.' 777.61 777.30 'Microsoft Corporation' 57.730 57.720 </code></pre> <p>and I would like it to look like </p> <pre><code>Name | Price1| Price 2| Apple Inc. 113.08 113.07 Alphabet Inc. 777.61 777.30 Microsoft Corporation 57.730 57.720 </code></pre> <p>I tried using <code>for row in csv.reader(new_data.splitlines(), delimiter=', skipinitialspace=True):</code> but it threw errors</p>
-1
2016-10-05T19:38:33Z
39,882,598
<p><code>csv.reader</code> removes the quotes properly. You may be viewing a quoted string representation of the text instead of the actual text.</p> <pre><code>&gt;&gt;&gt; new_data = '''"Apple Inc.",113.08,113.07 ... "Alphabet Inc.",777.61,777.30 ... "Microsoft Corporation",57.730,57.720''' &gt;&gt;&gt; &gt;&gt;&gt; import csv &gt;&gt;&gt; &gt;&gt;&gt; for row in csv.reader(new_data.splitlines()): ... print(','.join(row)) ... Apple Inc.,113.08,113.07 Alphabet Inc.,777.61,777.30 Microsoft Corporation,57.730,57.720 &gt;&gt;&gt; </code></pre>
1
2016-10-05T19:54:53Z
[ "python", "mysql", "csv" ]
remove quotes from csv file data in python
39,882,301
<p>I have a csv that is being imported from a URL and placed into a database; however, it imports with quotes around the names and I'd like to remove them. The original format of the csv file is</p> <pre><code>"Apple Inc.",113.08,113.07 "Alphabet Inc.",777.61,777.30 "Microsoft Corporation",57.730,57.720 </code></pre> <p>The code I currently have is as follows: </p> <pre><code>def csv_new(conn, cursor, filename): with open(filename, 'rt') as csv_file: csv_data = csv.reader(csv_file) for row in csv_data: if(not row[0][0].isdigit()): continue split = [int(x) for x in row[0].split('/')] row[0] = datetime.datetime(split[2], split[0], split[1]).date().isoformat() print(row); cursor.execute('INSERT INTO `trade_data`.`import_data`' '(date, name, price) VALUES(%s, "%s", %s)', row) conn.commit() </code></pre> <p>The final database looks like this: </p> <pre><code> Name | Price1| Price 2| 'Apple Inc.' 113.08 113.07 'Alphabet Inc.' 777.61 777.30 'Microsoft Corporation' 57.730 57.720 </code></pre> <p>and I would like it to look like </p> <pre><code>Name | Price1| Price 2| Apple Inc. 113.08 113.07 Alphabet Inc. 777.61 777.30 Microsoft Corporation 57.730 57.720 </code></pre> <p>I tried using <code>for row in csv.reader(new_data.splitlines(), delimiter=', skipinitialspace=True):</code> but it threw errors</p>
-1
2016-10-05T19:38:33Z
39,938,223
<p>Figured it out. The problem, as tdelaney mentioned, was that the quotes were not actually in the string itself. So by changing the placeholder in </p> <pre><code>cursor.execute('INSERT INTO `trade_data`.`import_data`' '(date, name, price) VALUES(%s, "%s", %s)', row) </code></pre> <p>from <code>"%s"</code> to <code>%s</code>, the problem was fixed and the extra quotes were removed. </p>
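A small illustration of the repr-vs-value distinction behind this — printing a Python list shows quoted representations even though the strings themselves contain no quote characters:

```python
row = ['Apple Inc.', '113.08', '113.07']
print(row)     # ['Apple Inc.', '113.08', '113.07'] -- quotes come from repr()
print(row[0])  # Apple Inc. -- the string itself has no quotes
```

So the only quotes that ended up in the table were the literal ones around the <code>%s</code> placeholder in the SQL.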
0
2016-10-08T22:56:17Z
[ "python", "mysql", "csv" ]
Django REST framework foreign key with generics.ListCreateAPIView
39,882,314
<p>How is it possible to get the foreign key assigned in the URL with Django REST Framework?</p> <pre><code>class CommentList(generics.ListCreateAPIView): serializer_class = CommentSerializer pagination_class = StandardResultsSetPagination queryset = Comment.objects.all() def get(self, *args, **kwargs): serializer = CommentSerializer(comment, many=True) return super(CommentList, self).get(*args, **kwargs) </code></pre> <p>My goal is to use this URL (urls.py):</p> <pre><code>url(r'^event/(?P&lt;pk&gt;[0-9]+)/comments', views.CommentList.as_view()) </code></pre> <p>Somehow I managed to get the foreign key this way</p> <pre><code>class CommentLikeList(APIView): def get(self, request, *args, **kwargs): key = self.kwargs['pk'] commentLikes = CommentLike.objects.filter(pk=key) serializer = CommentLikeSerializer(commentLikes, many=True) return Response(serializer.data) def post(self): pass </code></pre> <p>But I don't know how to get the foreign key with such a URL using <code>generics.ListCreateAPIView</code></p> <pre><code>http://127.0.0.1:8000/event/&lt;eventnumber&gt;/comments </code></pre>
1
2016-10-05T19:39:42Z
39,885,914
<p>If you want to get the pk, you can use the <code>lookup_url_kwarg</code> attribute from the <code>ListCreateAPIView</code> class.</p> <pre><code>class CommentLikeList(ListCreateAPIView): def get(self, request, *args, **kwargs): key = self.kwargs[self.lookup_url_kwarg or self.lookup_field] commentLikes = CommentLike.objects.filter(pk=key) serializer = CommentLikeSerializer(commentLikes, many=True) return Response(serializer.data) </code></pre> <blockquote> <p>lookup_url_kwarg - The URL keyword argument that should be used for object lookup. The URL conf should include a keyword argument corresponding to this value. If unset this defaults to using the same value as lookup_field.</p> </blockquote> <p>The default value of the <code>lookup_field</code> attribute is <code>'pk'</code>. So, if you change your URL keyword argument to something different from <code>pk</code>, you should define <code>lookup_url_kwarg</code>. </p> <pre><code>class CommentLikeList(ListCreateAPIView): lookup_url_kwarg = 'eventnumber' </code></pre> <p>You can inspect all DRF classes' methods and attributes over here: <a href="http://www.cdrf.co/" rel="nofollow">http://www.cdrf.co/</a></p>
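To see why the URL keyword name matters, here is a plain-<code>re</code> sketch of the capture Django performs — the pattern is an assumption modeled on the question's urls.py:

```python
import re

# The named group must match lookup_url_kwarg ('eventnumber' here);
# the captured groups become self.kwargs in the view.
pattern = r'^event/(?P<eventnumber>[0-9]+)/comments$'
match = re.match(pattern, 'event/42/comments')
print(match.groupdict())  # {'eventnumber': '42'}
```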
1
2016-10-06T01:01:33Z
[ "python", "django", "generics", "foreign-keys", "django-rest-framework" ]
Python connect to Teradata in AWS EC2
39,882,352
<p>I want to pull data from a Teradata instance. Client code runs Python2.7+ on AWS EC2 instance.</p> <p>I installed unixODBC driver and <code>sudo pip install teradata</code> but I am still getting the following exception:</p> <pre><code> File "/usr/local/lib/python2.7/site-packages/teradata/tdodbc.py", line 369, in determineDriver "Available drivers: {}".format(dbType, ",".join(drivers))) teradata.api.InterfaceError: ('DRIVER_NOT_FOUND', "No driver found for 'Teradata'. Available drivers: PostgreSQL,MySQL") </code></pre> <p>The code is as follows:</p> <pre><code>import sys import teradata # my own imports td = TeradataClient(DEFAULT_HOSTNAME, DEFAULT_USERNAME, DEFAULT_PASSWORD) td.select(query, outfile) </code></pre> <p>The <code>TeradataClient</code> class I created which calls Teradata is as follows:</p> <pre><code>class TeradataClient: def __init__(self, hostname, username, password): self._hostname = hostname self._username = username self._password = password self._udaExec = teradata.UdaExec(appName="MyApp", version="1.0", logConsole=False) def select(self, query, outfile, sep=DEFAULT_SEPARATOR, nullstr=DEFAULT_NULL_STR): with self._udaExec.connect(method="odbc", system=self._hostname, username=self._username, password=self._password) as session: print 'Connection to Teradata established' with open(outfile,'w') as fp: with session.cursor() as cursor: for row in cursor.execute(query): lineparts = [str(x if x!=None else nullstr) for x in row] fp.write('%s\n' %sep.join(lineparts)) </code></pre> <p>How can I fix this? Is there another ODBC driver that needs to be installed?</p>
0
2016-10-05T19:41:56Z
39,894,883
<p>It sounds like you need to install the Teradata ODBC driver.</p> <p>Available on the Teradata Developer Exchange. <a href="http://downloads.teradata.com/download/connectivity" rel="nofollow">http://downloads.teradata.com/download/connectivity</a></p>
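The driver also has to be registered with unixODBC under the exact name the error message mentions ('Teradata'); otherwise you keep getting "Available drivers: PostgreSQL,MySQL". Roughly, an <code>odbcinst.ini</code> entry like the following — the driver path and library name are assumptions, so check where your installer put the shared library:

```ini
[Teradata]
Description = Teradata ODBC driver
Driver      = /opt/teradata/client/ODBC_64/lib/tdata.so
```

After adding it, <code>odbcinst -q -d</code> should list <code>[Teradata]</code> alongside PostgreSQL and MySQL.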
0
2016-10-06T11:32:36Z
[ "python", "python-2.7", "amazon-ec2", "odbc", "teradata" ]
image downloading gets stuck on a link that doesn't exists anymore
39,882,388
<p>So I am not sure how to handle this situation. It almost works on many other broken links but not this one:</p> <pre><code>import datetime import praw import re import urllib import requests from bs4 import BeautifulSoup sub = 'dog' imgurUrlPattern = re.compile(r'(http://i.imgur.com/(.*))(\?.*)?') r = praw.Reddit(user_agent = "download all images from a subreddit", user_site = "lamiastella") already_done = [] #checkWords = ['i.imgur.com', 'jpg', 'png',] check_words = ['jpg', 'jpeg', 'png'] subreddit = r.get_subreddit(sub) for submission in subreddit.get_top_from_all(limit=10000): #for submission in subreddit.get_hot(limit=10000): is_image = any(string in submission.url for string in check_words) print '[LOG] Getting url: ' + submission.url if submission.id not in already_done and is_image: if submission.url.endswith('/'): modified_url = submission.url[:len(submission.url)-1] try: urllib.urlretrieve(modified_url, '/home/jalal/computer_vision/image_retrieval/images/' + datetime.datetime.now().strftime('%y-%m-%d-%s') + modified_url[-5:]) except Exception as e: print(e) #pass continue else: try: urllib.urlretrieve(submission.url, '/home/jalal/computer_vision/image_retrieval/images/' + datetime.datetime.now().strftime('%y-%m-%d-%s') + submission.url[-5:]) except Exception as e: print(e) #pass continue already_done.append(submission.id) print '[LOG] Done Getting ' + submission.url print('{0}: {1}'.format('submission id is', submission.id)) elif 'imgur.com' in submission.url and not (submission.url.endswith('gif') or submission.url.endswith('webm') or submission.url.endswith('mp4') or submission.url.endswith('all') or '#' in submission.url or '/a/' in submission.url): # This is an Imgur page with a single image. html_source = requests.get(submission.url).text # download the image's page soup = BeautifulSoup(html_source, "lxml") image_url = soup.select('img')[0]['src'] if image_url.startswith('//'): # if no schema is supplied in the url, prepend 'http:' to it image_url = 'http:' + image_url image_id = image_url[image_url.rfind('/') + 1:image_url.rfind('.')] urllib.urlretrieve(image_url, '/home/jalal/computer_vision/image_retrieval/images/' + 'imgur_'+ datetime.datetime.now().strftime('%y-%m-%d-%s') + submission.url[-9:0]) elif 'instagram.com' in submission.url: html_source = requests.get(submission.url).text soup = BeautifulSoup(html_source, "lxml") instagram_url = soup.find('meta', {"property":"og:image"})['content'] urllib.urlretrieve(instagram_url, '/home/jalal/computer_vision/image_retrieval/images/' + 'instagram_'+ datetime.datetime.now().strftime('%y-%m-%d-%s') + '.jpg') else: continue </code></pre> <p>I get stuck at a link <a href="http://cutearoo.com/wp-content/uploads/2011/04/Pomsky.png" rel="nofollow">http://cutearoo.com/wp-content/uploads/2011/04/Pomsky.png</a> and have to CTL+C it:</p> <pre><code>[LOG] Done Getting http://i.imgur.com/Vc9P9QC.jpg submission id is: 1fv70j [LOG] Getting url: http://i.imgur.com/iOBi0qx.jpg [LOG] Done Getting http://i.imgur.com/iOBi0qx.jpg submission id is: 1dof3o [LOG] Getting url: http://cutearoo.com/wp-content/uploads/2011/04/Pomsky.png ^CTraceback (most recent call last): File "download_images.py", line 35, in &lt;module&gt; urllib.urlretrieve(submission.url, '/home/jalal/computer_vision/image_retrieval/images/' + datetime.datetime.now().strftime('%y-%m-%d-%s') + submission.url[-5:]) File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve return opener.retrieve(url, filename, reporthook, data) File "/usr/lib/python2.7/urllib.py", line 245, in retrieve fp = self.open(url, data) File "/usr/lib/python2.7/urllib.py", line 213, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 350, in open_http h.endheaders(data) File "/usr/lib/python2.7/httplib.py", line 1053, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 897, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 859, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 836, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 566, in create_connection sock.connect(sa) File "/usr/lib/python2.7/socket.py", line 228, in meth return getattr(self._sock,name)(*args) KeyboardInterrupt </code></pre> <p>Please suggest fixes for this.</p> <p>UPDATE: I used something like:</p> <pre><code>image_file = urllib2.urlopen(modified_url) with open('/home/jalal/computer_vision/image_retrieval/images/' + datetime.datetime.now().strftime('%y-%m-%d-%s') + modified_url[-5:], 'wb') as output_image: output_image.write(image_file.read()) </code></pre> <p>and still get stuck for the particular link.</p>
0
2016-10-05T19:43:54Z
39,882,797
<p>Use <code>urlopen</code> with the <code>timeout</code> argument:</p> <pre><code>&gt;&gt;&gt; import urllib2 &gt;&gt;&gt; modified_url = 'http://cutearoo.com/wp-content/uploads/2011/04/Pomsky.png' &gt;&gt;&gt; try: ... image_file = urllib2.urlopen(modified_url, timeout=5) ... except urllib2.URLError: ... print 'could not download :(' ... could not download :( &gt;&gt;&gt; </code></pre> <p>The answer above is correct :) just adding what I have based on your answer as well; note that <code>timeout</code> belongs on <code>urlopen</code>, not on <code>write</code>, and the <code>except</code> needs a matching <code>try</code>:</p> <pre><code>try: image_file = urllib2.urlopen(modified_url, timeout=5) with open('/home/jalal/computer_vision/image_retrieval/'+category+'/' + datetime.datetime.now().strftime('%y-%m-%d-%s') + modified_url[-5:], 'wb') as output_image: output_image.write(image_file.read()) except urllib2.URLError as e: print(e) continue </code></pre>
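One extra caveat: <code>urllib.urlretrieve</code>, which the question's main loop uses, accepts no <code>timeout</code> argument at all, so the usual workaround there is a process-wide socket default (the 5 seconds below is an arbitrary choice):

```python
import socket

# urlretrieve has no timeout parameter; a process-wide default covers it,
# since it applies to every new socket the process opens.
socket.setdefaulttimeout(5)
print(socket.getdefaulttimeout())  # 5.0
```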
1
2016-10-05T20:06:36Z
[ "python", "http", "urllib", "information-retrieval" ]
python request payload without sorting
39,882,412
<p>I am trying to use python request module to pass few values to server using the following code, but in the http request its changing the order of the payload values. </p> <pre><code>payload3 = {'OSILA_LinkTo_Site': OSILA_LinkTo_Site, 'OSILA_LinkTo_Service': OSILA_LinkTo_Service, 'OSILA_Locale': OSILA_Locale, 'OSILA_Locale_display': OSILA_Locale, 'OSILA_TimeZone': OSILA_TimeZone, 'OSILA_TimeZone_display': OSILA_TimeZone, 'selectedUser': '', 'OSILA_LinkTo_User_display': '', 'OSILA_LinkTo_User': OSILA_LinkTo_User, 'OSILA_DisplayName': OSILA_DisplayName, 'OSILA_CountryCode': OSILA_CountryCode, 'OSILA_AreaCode': OSILA_AreaCode, 'OSILA_LocalOfficeCode': OSILA_LocalOfficeCode, 'OSILA_LinkTo_Extension': OSILA_LinkTo_Extension, 'OSILA_ProductPackage': OSILA_ProductPackage, 'OSILA_ProductPackage_display': OSILA_ProductPackage, 'OSILA_DeviceType': OSILA_DeviceType, 'OSILA_DeviceType_display': OSILA_DeviceType, 'OSILA_Parameter_4': OSILA_Parameter_4, 'OSILA_Parameter_4_display': OSILA_Parameter_4, 'OSILA_Parameter_3': OSILA_Parameter_3, 'OSILA_Parameter_3_display': OSILA_Parameter_3, 'OSILA_Parameter_2': OSILA_Parameter_2, 'OSILA_Parameter_2_display': OSILA_Parameter_2, 'OSILA_Parameter_5': OSILA_Parameter_5, 'OSILA_Parameter_5_display': OSILA_Parameter_5, 'OSILA_Parameter_1': OSILA_Parameter_1, 'OSILA_Parameter_1_display': OSILA_Parameter_1, 'OSILA_CreateNewFax': OSILA_CreateNewFax, 'OSILA_LinkTo_ServiceFax': '', 'OSILA_CountryCodeFax': '', 'OSILA_AreaCodeFax': '', 'OSILA_LocalOfficeCodeFax': '', 'OSILA_LinkTo_ExtensionFax': '', 'OSILA_ProvisioningDate': OSILA_ProvisioningDate, 'OSILA_DateMail_1': OSILA_ProvisioningDate, 'OSILA_DateMail_2': '', 'OSILA_DateMail_3': '', 'OSILA_DateMail_4': '', 'OSILA_DateMail_5': '', 'OSILA_DateMail_6': '', 'OSILA_DateMail_7': '', 'OSILA_DateMail_8': '', 'OSILA_DateMail_9': '', 'attachEventListeners': '', 'webMgrRequestId': webMgrRequestId, 'WT': WT2, 'enterConfirm': 'true', 'users_R_0_C__select': 'on', 'selectedUser': '0'} </code></pre> 
<p><code>subcr = session.post("http://192.168.0.10:8080/OSILAManager/createSubscriber2.do", data=payload3)</code></p> <p>Using a browser I can see the values are passed in the following order, but when I use Python it changes the order. Is there any way to avoid this sorting? </p> <pre><code>OSILA_LinkTo_Site OSILA_LinkTo_Service OSILA_Locale OSILA_Locale_display OSILA_TimeZone OSILA_TimeZone_display users_R_0_C__select selectedUser OSILA_LinkTo_User_display OSILA_LinkTo_User OSILA_DisplayName OSILA_CountryCode OSILA_AreaCode OSILA_LocalOfficeCode OSILA_LinkTo_Extension OSILA_ProductPackage OSILA_ProductPackage_display OSILA_DeviceType OSILA_DeviceType_display OSILA_Parameter_4 OSILA_Parameter_4_display OSILA_Parameter_3 OSILA_Parameter_3_display OSILA_Parameter_2 OSILA_Parameter_2_display OSILA_Parameter_5 OSILA_Parameter_5_display OSILA_Parameter_1 OSILA_Parameter_1_display OSILA_CreateNewFax OSILA_LinkTo_ServiceFax OSILA_CountryCodeFax OSILA_AreaCodeFax OSILA_LocalOfficeCodeFax OSILA_LinkTo_ExtensionFax OSILA_ProvisioningDate OSILA_DateMail_1 OSILA_DateMail_2 OSILA_DateMail_3 OSILA_DateMail_4 OSILA_DateMail_5 OSILA_DateMail_6 OSILA_DateMail_7 OSILA_DateMail_8 OSILA_DateMail_9 attachEventListeners webMgrRequestId WT enterConfirm </code></pre>
2
2016-10-05T19:44:49Z
39,882,607
<p>You can pass tuples in place of the dict:</p> <pre><code>In [7]: ...: import requests ...: d = {"foo":"bar","bar":"foo","123":"456"} ...: t = (("foo","bar"),("bar","foo"),("123","456")) ...: r1 = requests.post("http://httpbin.org", data=d) ...: r2 = requests.post("http://httpbin.org", data=t) ...: print(r1.request.body) ...: print(r2.request.body) ...: bar=foo&amp;123=456&amp;foo=bar foo=bar&amp;bar=foo&amp;123=456 </code></pre> <p>Also you have <code>selectedUser</code> twice in your <em>dict</em> so that won't work as you will only have the last key/value pair you assign, not sure what the correct format is but again tuples would allow you to repeat a key:</p> <pre><code>In [8]: import requests ...: d = {"foo":"bar","bar":"foo","123":"456","selectedUser":"0", "selectedUs ...: er":"1"} ...: t = (("foo","bar"),("bar","foo"),("123","456"), ("selectedUser","0"),("s ...: electedUser","1")) ...: r1 = requests.post("http://httpbin.org", data=d) ...: r2 = requests.post("http://httpbin.org", data=t) ...: print(r1.request.body) ...: print(r2.request.body) ...: bar=foo&amp;selectedUser=1&amp;123=456&amp;foo=bar foo=bar&amp;bar=foo&amp;123=456&amp;selectedUser=0&amp;selectedUser=1 </code></pre> <p>You can see using a dict leaves you with just <code>selectedUser=1</code> as that is the last of the pair assigned. That may actually be your problem more than the order of the post body keys. If you only meant to use <code>selectedUser</code> once then remove whichever you did not mean to post.</p>
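Since requests form-encodes dict-like data in iteration order, a <code>collections.OrderedDict</code> also preserves your order — illustrated here with the stdlib encoder rather than a live request (Python 3 spelling; the only assumed change is swapping the payload container):

```python
from collections import OrderedDict
from urllib.parse import urlencode  # stdlib form encoder

payload = OrderedDict([('foo', 'bar'), ('bar', 'foo'), ('123', '456')])
print(urlencode(payload))  # foo=bar&bar=foo&123=456
```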
1
2016-10-05T19:55:30Z
[ "python", "python-requests", "unify" ]
django-tables2 flooding database with queries
39,882,504
<p>I'm using django-tables2 in order to show values from a database query, and everything works fine. I'm now using django-debug-toolbar and was looking through my pages with it, more out of curiosity than performance needs. When I looked at the page with the table, I saw that the debug toolbar registered over 300 queries for a table with a little over 300 entries. I don't think flooding the DB with so many queries is a good idea even if there is no performance impact (at least not now). All the data should be coming from only one query.</p> <p>Why is this happening and how can I reduce the number of queries?</p>
0
2016-10-05T19:49:37Z
39,882,505
<p>I'm posting this as a future reference for myself and others who might have the same problem. </p> <p>After searching for a bit I found out that django-tables2 was sending a single query for each row. The query was something like <code>SELECT * FROM "table" LIMIT 1 OFFSET 1</code> with an increasing offset. </p> <p>I reduced the number of SQL calls by calling <code>query = list(query)</code> before I create the table and pass in the query. By evaluating the query in the Python view code, the table now works with the evaluated data instead, and there is only one database call instead of hundreds.</p>
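A toy illustration of the effect — this is not django-tables2's real internals, just a stand-in object where every indexing counts as one "query", mimicking the <code>LIMIT 1 OFFSET n</code> pattern:

```python
class FakeQuerySet:
    """Stand-in for a lazy queryset: each indexing is one 'query'."""
    def __init__(self, data):
        self._data = data
        self.queries = 0

    def __getitem__(self, i):
        self.queries += 1  # like SELECT ... LIMIT 1 OFFSET i
        return self._data[i]

    def __len__(self):
        return len(self._data)

qs = FakeQuerySet(['r0', 'r1', 'r2'])
for i in range(len(qs)):
    _ = qs[i]          # one hit per row
print(qs.queries)      # 3

rows = list(qs._data)  # evaluated once, up front; no per-row hits afterwards
```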
0
2016-10-05T19:49:37Z
[ "python", "django", "django-tables2" ]
Finding duplicate rows python
39,882,522
<p>I have <code>timestamp</code> and <code>id</code> variables in my dataframe (<code>df</code>)</p> <pre><code>timestamp id 2016-06-09 8:33:37 a1 2016-06-09 8:33:37 a1 2016-06-09 8:33:38 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:37 b1 2016-06-09 8:33:38 b1 </code></pre> <p>Each <code>id</code> can't have two timestamps. I have to print these duplicate timestamps for each <code>id</code>. In my above case, the output should be for rows 1,2,4,5</p> <p>The following code will give the duplicate <code>timestamp</code></p> <pre><code>set([x for x in df['timestamp'] if df['timestamp'].count(x) &gt; 1]) </code></pre> <p>How to consider <code>id</code> along with <code>timestamp</code> to have the duplicate rows?</p>
1
2016-10-05T19:50:14Z
39,882,642
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> and get a mask of all duplicate values per group with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html" rel="nofollow"><code>Series.duplicated</code></a>. Then use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>print (df.groupby(['id'])['timestamp'].apply(lambda x: x.duplicated(keep=False))) 0 True 1 True 2 False 3 True 4 True 5 False 6 False Name: timestamp, dtype: bool print (df[df.groupby(['id'])['timestamp'].apply(lambda x: x.duplicated(keep=False))]) timestamp id 0 2016-06-09 08:33:37 a1 1 2016-06-09 08:33:37 a1 3 2016-06-09 08:33:39 a1 4 2016-06-09 08:33:39 a1 </code></pre>
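For reference, <code>DataFrame.duplicated</code> also accepts a subset of columns directly, which flags the same rows without a <code>groupby</code> — a sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    'timestamp': ['8:33:37', '8:33:37', '8:33:38', '8:33:39',
                  '8:33:39', '8:33:37', '8:33:38'],
    'id': ['a1', 'a1', 'a1', 'a1', 'a1', 'b1', 'b1'],
})
# keep=False marks every member of each (id, timestamp) duplicate group.
mask = df.duplicated(['id', 'timestamp'], keep=False)
print(df[mask])  # rows 0, 1, 3, 4
```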
1
2016-10-05T19:57:15Z
[ "python", "pandas", "timestamp", "duplicates" ]
Finding duplicate rows python
39,882,522
<p>I have <code>timestamp</code> and <code>id</code> variables in my dataframe (<code>df</code>)</p> <pre><code>timestamp id 2016-06-09 8:33:37 a1 2016-06-09 8:33:37 a1 2016-06-09 8:33:38 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:37 b1 2016-06-09 8:33:38 b1 </code></pre> <p>Each <code>id</code> can't have two timestamps. I have to print these duplicate timestamps for each <code>id</code>. In my above case, the output should be for rows 1,2,4,5</p> <p>The following code will give the duplicate <code>timestamp</code></p> <pre><code>set([x for x in df['timestamp'] if df['timestamp'].count(x) &gt; 1]) </code></pre> <p>How to consider <code>id</code> along with <code>timestamp</code> to have the duplicate rows?</p>
1
2016-10-05T19:50:14Z
39,883,411
<p>If you wish to find all duplicates then use the <code>duplicated</code> method. It only works on the columns. On the other hand, <code>df.index.duplicated</code> works on the index. Therefore we do a quick <code>reset_index</code> to bring the index into the columns.</p> <pre><code>df = df.reset_index() df.ix[df.duplicated(keep=False)] index id 0 2016-06-09 8:33:37 a1 1 2016-06-09 8:33:37 a1 3 2016-06-09 8:33:39 a1 4 2016-06-09 8:33:39 a1 </code></pre> <p>If you just wish to remove duplicates, there is the DataFrame method <code>drop_duplicates</code>.</p> <pre><code>df = df.reset_index() df = df.drop_duplicates() # keep='first' by default. index id 0 2016-06-09 8:33:37 a1 2 2016-06-09 8:33:38 a1 3 2016-06-09 8:33:39 a1 5 2016-06-09 8:33:37 b1 6 2016-06-09 8:33:38 b1 </code></pre> <p>If you wish to get back your old index after either of the above, simply call <code>set_index</code> with the default column name 'index':</p> <pre><code>df.set_index('index') id index 2016-06-09 8:33:37 a1 2016-06-09 8:33:38 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:37 b1 2016-06-09 8:33:38 b1 </code></pre> <p>The above methods allow you to choose whether to keep the first, last or none of the duplicates by setting the <code>keep</code> attribute to <code>'first'</code>, <code>'last'</code> or <code>False</code>. So to remove all duplicates in <code>df</code>, use <code>keep=False</code>.</p>
0
2016-10-05T20:50:23Z
[ "python", "pandas", "timestamp", "duplicates" ]
Add custom items to QListWidget
39,882,602
<p>How can I add customized items to a QListWidget with a background color that I choose, and add a bottom border to each item, like this draft example in the picture below.</p> <p>This is the code that I wrote:</p> <pre><code>from PyQt5 import QtWidgets, QtGui import sys class CustomListHead(QtWidgets.QWidget): def __init__(self): super(CustomListHead, self).__init__() self.project_title = QtWidgets.QLabel("Today") self.set_ui() def set_ui(self): grid_box = QtWidgets.QGridLayout() grid_box.addWidget(self.project_title, 0, 0) self.setLayout(grid_box) self.show() class CustomListItem(QtWidgets.QWidget): def __init__(self): super(CustomListItem, self).__init__() self.project_title = QtWidgets.QLabel("Learn Python") self.task_title = QtWidgets.QLabel("Learn more about forms, models and include") self.set_ui() def set_ui(self): grid_box = QtWidgets.QGridLayout() grid_box.addWidget(self.project_title, 0, 0) grid_box.addWidget(self.task_title, 1, 0) self.setLayout(grid_box) self.show() class MainWindowUI(QtWidgets.QMainWindow): def __init__(self): super(MainWindowUI, self).__init__() self.list_widget = QtWidgets.QListWidget() self.set_ui() def set_ui(self): custom_head_item = CustomListHead() item = QtWidgets.QListWidgetItem(self.list_widget) item.setSizeHint(custom_head_item.sizeHint()) self.list_widget.setItemWidget(item, custom_head_item) self.list_widget.addItem(item) custom_item = CustomListItem() item = QtWidgets.QListWidgetItem(self.list_widget) item.setSizeHint(custom_item.sizeHint()) self.list_widget.addItem(item) self.list_widget.setItemWidget(item, custom_item) vertical_layout = QtWidgets.QVBoxLayout() vertical_layout.addWidget(self.list_widget) widget = QtWidgets.QWidget() widget.setLayout(vertical_layout) self.setCentralWidget(widget) self.show() app = QtWidgets.QApplication(sys.argv) ui = MainWindowUI() sys.exit(app.exec_()) </code></pre> <p><img src="http://i.stack.imgur.com/dLkvl.png" alt="example"></p>
0
2016-10-05T19:55:15Z
39,883,248
<p>I see you have QListWidgetItem with you.</p> <p>From the documentation, you can customize each widget item and add it to your list widget:</p> <p>The appearance of the text can be customized with setFont(), setForeground(), and setBackground(). Text in list items can be aligned using the setTextAlignment() function. Tooltips, status tips and "What's This?" help can be added to list items with setToolTip(), setStatusTip(), and setWhatsThis().</p> <p><a href="http://doc.qt.io/qt-5/qlistwidgetitem.html#details" rel="nofollow">http://doc.qt.io/qt-5/qlistwidgetitem.html#details</a></p>
0
2016-10-05T20:39:04Z
[ "python", "qt", "pyqt", "qlistwidget" ]
Managing zipping large number of files in Python using 7zip and subprocess
39,882,604
<p>I have a large amount of files (~100k) moved from a server to a designated folder on a daily basis. I'm running a Python script against these files. At the end of the process I want to create a number of .zip files, each of approximately 2GB.</p> <p>The script will get a list of all files in the folder, get the size one a time using <code>os.path.getsize</code> , keep track of the total accrued file size until it gets above 2GB, it will take the list of those files and zip them in one zip archive, then reset the size accumulator and move on to the next chunk of files.</p> <p>If there wasn't a 2GB requirement I'd just run the following and it would be super speedy.</p> <pre><code>txt = '"C:\\Program Files\\7-Zip\\7z.exe" a -tzip %s *.txt -pSECRET'%(zipFileName) out = subprocess.check_output(txt, shell = True) </code></pre> <p>But given the 2GB requirement I've tried two routes. One is to add one file at a time to the archive, but that seems to be orders of magnitude slower than just using a wildcard (I'm hypothesizing that is due to the constant re-opening of the archive).</p> <pre><code>for file in FilesListBelow2GB: txt = '"C:\\Program Files\\7-Zip\\7z.exe" a -tzip %s %s -pSECRET'%(zipFileName,file) out = subprocess.check_output(txt, shell = True) </code></pre> <p>I also tried to supply a space delimited list of files to process but it errors <code>WindowsError: [Error 206] The filename or extension is too long</code>.</p> <pre><code>txt = '"C:\\Program Files\\7-Zip\\7z.exe" a -tzip %s %s -pSECRET'%(zipFileName,' '.join(FilesListBelow2GB)) out = subprocess.check_output(txt, shell = True) </code></pre> <p>How can I manage this situation so that I can achive better zipping performance given the constraints I'm given (all files are in the same folder, can't delete files, zip files need to be approximately ~2GB in size)?</p>
0
2016-10-05T19:55:17Z
39,905,145
<p>I ended up finding a workaround, where essentially I write all file names from FilesListBelow2GB to a .txt file, and then ask 7zip to use that to know which files to zip.</p> <pre><code>outf = open('list.txt','w') for fileToZip in FilesListBelow2GB : outf.write(fileToZip+'\n') outf.close() txt = '"C:\\Program Files\\7-Zip\\7z.exe" a -tzip compress.zip -i@list.txt' out = subprocess.check_output(txt, shell = True) </code></pre>
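For the ~2 GB grouping itself, a hypothetical helper (the names and byte limit are mine, not from the question) that batches files by cumulative size before each batch's <code>list.txt</code> is written:

```python
def chunk_by_size(paths_and_sizes, limit=2 * 1024 ** 3):
    """Group (path, size) pairs into batches whose sizes sum to <= limit.

    A single file larger than the limit still gets its own batch.
    """
    chunks, current, total = [], [], 0
    for path, size in paths_and_sizes:
        if current and total + size > limit:
            chunks.append(current)
            current, total = [], 0
        current.append(path)
        total += size
    if current:
        chunks.append(current)
    return chunks

print(chunk_by_size([('a.txt', 3), ('b.txt', 3), ('c.txt', 3)], limit=7))
# [['a.txt', 'b.txt'], ['c.txt']]
```

Each returned batch can then be written to its own list file and handed to 7-Zip with <code>-i@</code>.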
0
2016-10-06T20:32:16Z
[ "python", "python-2.7", "zip", "subprocess" ]
How to resolve "ValueError: invalid literal for int() with base 10" in Python?
39,882,621
<p>I'm having a little trouble with a calculator that I'm making. I need it to add, multiply, divide, or subtract: if you enter one of those as the operator, it should perform that operation. My code looks like this:</p> <pre><code> a= int( input("First Number: ")) int( input("First Operator: ")) b= int( input("Second Number: ")) if (operator == "+"): c=a+b elif(operator == "-"): c=a-b elif(operator == "*"): c=a*b elif(operator == "-"): c=a/b print(c) </code></pre> <p>Every time I enter <code>+</code>, <code>-</code>, <code>*</code>, or <code>/</code> as the operator it gives me this:</p> <pre><code> operator=int( input("Operator: ")) ValueError: invalid literal for int() with base 10: '/' </code></pre> <p>I know this means that it expected an integer, but how can I fix it?</p>
-1
2016-10-05T19:56:04Z
39,882,658
<p>The operator is being cast to an int, and the result is never assigned to a variable. This should not be the case:</p> <p><code>int( input("First Operator: "))</code></p> <p>Try it without the <code>int(...)</code> and assign the result: <code>operator = input("First Operator: ")</code>.</p>
0
2016-10-05T19:58:27Z
[ "python" ]
How to resolve "ValueError: invalid literal for int() with base 10" in Python?
39,882,621
<p>I'm having a little trouble with a calculator that I'm making. I need to make it so it can add, subtract, multiply, and divide: whichever operator you enter, it should perform that operation. My code looks like this:</p> <pre><code> a= int( input("First Number: ")) int( input("First Operator: ")) b= int( input("Second Number: ")) if (operator == "+"): c=a+b elif(operator == "-"): c=a-b elif(operator == "*"): c=a*b elif(operator == "-"): c=a/b print(c) </code></pre> <p>Every time I enter either a <code>+,-,*,or/</code> as the operator it gives me this:</p> <pre><code> operator=int( input("Operator: ")) ValueError: invalid literal for int() with base 10: '/' </code></pre> <p>I know this means that I need an integer, but how can I fix it?</p>
-1
2016-10-05T19:56:04Z
39,882,915
<p>You are trying to turn a character value, in this case the operator, into an int. Since the interpreter can't convert the operator to an int, it raises this error. If you just read the operator as a plain string it will work; your code will look like this (note the division branch must test for <code>"/"</code>, not a second <code>"-"</code>):</p> <pre><code>a= float( input("First Number: ")) operator =input("First Operator: ") b= float( input("Second Number: ")) if (operator == "+"): c=a+b elif(operator == "-"): c=a-b elif(operator == "*"): c=a*b elif(operator == "/"): c=a/b print(c) </code></pre>
0
2016-10-05T20:14:53Z
[ "python" ]
In Tensorflow, how can you check if gradients are correct for your custom operation?
39,882,623
<p>My question is, how are backpropagation paths determined, e.g., when using <code>tf.slice</code>? </p> <p>Let me take an example. Let's say I have a K-classification problem. I can do this in a standard way like</p> <pre><code>conv1 = # conv1+relu1+lrm1+pool1 conv2 = # from conv1 fc1 = # from conv2 to 128D fully connected + relu fc2 = # from fc2 to K-D fully connected batch_loss = tf.softmax_cross_entropy(fc2, labels) loss = tf.reduce_mean(batch_loss) ... minimize(loss) </code></pre> <p>In this case, the gradient of <code>loss</code> will backpropagate to each of the weights.</p> <p>Let's say I calculated a custom loss value by slicing <code>labels</code> and the <code>fc2</code> output (maybe since I think a certain class is more important?)</p> <pre><code>label_sub = tf.slice(labels, ..) output_sub = tf.slice(fc2, ..) batch_loss_sub = tf.softmax_cross_entropy(output_sub, label_sub) loss = tf.reduce_mean(batch_loss + batch_loss_sub) ... minimize(loss) </code></pre> <p>In this case, I am not getting how back-propagation would work. From the "slicing", didn't we lose the backprop paths?</p> <p>This might be weird pseudo-code, but my question is "when using tf.slice, how does backpropagation work?" </p>
0
2016-10-05T19:56:05Z
39,882,694
<p>There is nothing magical about it. You extract a part of the tensor and use it for computations, so all the partial derivatives that "flow through" this slice are well defined. From a math perspective it is something along the lines of having</p> <pre><code>f([x1,x2,x3,x4]) = f(x) = 2 * sum(slice(x, 2, 2)) + 1 = 2 * (x2 + x3) + 1 </code></pre> <p>and you can compute the gradient directly</p> <pre><code>grad f(x) = [df / dx1, df / dx2, df / dx3, df / dx4] = [0, 2, 2, 0] </code></pre> <p>And now when you <strong>add</strong> this f to your original loss g, from the properties of gradients, it is added as well. So </p> <pre><code>grad (f + g)(x) = grad f(x) + grad g(x) = [0, 2, 2, 0] + grad g(x) </code></pre> <p>Everything works fine.</p> <p>In particular, you can always <strong>visualize your graph</strong> in TensorBoard, double click on the "gradient" node and you will see exactly each operation used to compute your gradients.</p>
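The gradient claimed above can be checked numerically with plain NumPy (a central finite difference; this mirrors the f above but is not TensorFlow code):

```python
import numpy as np

def f(x):
    # 2 * sum(slice(x, 2, 2)) + 1, i.e. only x2 and x3 contribute
    return 2 * np.sum(x[1:3]) + 1

def numeric_grad(func, x, eps=1e-6):
    """Central finite-difference approximation of the gradient."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (func(x + step) - func(x - step)) / (2 * eps)
    return grad

x = np.array([1.0, 2.0, 3.0, 4.0])
print(numeric_grad(f, x))  # ~[0. 2. 2. 0.]
```

The entries outside the slice come out as 0 and those inside as 2, matching the hand-computed gradient.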
1
2016-10-05T20:00:50Z
[ "python", "tensorflow", "deep-learning" ]
How would you store a pyramidal image representation in Python?
39,882,632
<p>Suppose I have N images which are a multiresolution representation of a single image (the Nth image being the coarsest one). If my finest scale is a 16x16 image, the next scale is an 8x8 image and so on. How should I store such data so that, for a given pixel at a given scale, I can quickly access its unique parent at the next coarser scale and its children at each finer scale?</p>
0
2016-10-05T19:56:45Z
39,883,453
<p>You could just use a list of numpy arrays. Assuming a scale factor of two, for the <em>i,j</em><sup>th</sup> pixel at scale <em>n</em>:</p> <ul> <li>The indices of its "parent" pixel at scale <em>n-1</em> will be <code>(i//2, j//2)</code></li> <li>Its "child" pixels at scale <em>n+1</em> can be indexed by <code>(slice(2*i, 2*(i+1)), slice(2*j, 2*(j+1)))</code></li> </ul>
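A small sketch of that layout (the 2x2 averaging used to build the coarser levels is just an illustrative choice):

```python
import numpy as np

# Three-level pyramid: 16x16 (finest) down to 4x4 (coarsest); each
# coarser level averages 2x2 blocks of the level below it.
pyramid = [np.arange(256, dtype=float).reshape(16, 16)]
for _ in range(2):
    fine = pyramid[-1]
    h, w = fine.shape
    pyramid.append(fine.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))

def parent(i, j):
    """Indices of the parent pixel one level coarser."""
    return (i // 2, j // 2)

def children(i, j):
    """Slices selecting the 2x2 child block one level finer."""
    return (slice(2 * i, 2 * (i + 1)), slice(2 * j, 2 * (j + 1)))

print(parent(5, 6))                      # (2, 3)
print(pyramid[0][children(2, 3)].shape)  # (2, 2)
```

With this construction, a coarse pixel's value is exactly the mean of the child block it indexes at the finer level.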
2
2016-10-05T20:53:09Z
[ "python", "numpy", "image-processing" ]
Python Requests - add text at the beginning of query string
39,882,644
<p>When sending data through a python-requests GET request, I have a need to specifically add something at the beginning of the query string. I have tried passing the data in through dicts and json strings with no luck.</p> <p>The request as it appears when produced by requests:</p> <pre><code>/apply/.../explain?%7B%22...... </code></pre> <p>The request as it appears when produced by their interactive API documentation (Swagger):</p> <pre><code>/apply/.../explain?record=%7B%22.... </code></pre> <p>Where the key-value pairs of my data follow the excerpt above.</p> <p>Ultimately, I think the missing piece is the <strong>record=</strong> that gets produced by their documentation. It is the only piece that is different from what is produced by Requests.</p> <p>At the moment I've got it set up something like this:</p> <pre><code>import requests s = requests.Session() s.auth = requests.auth.HTTPBasicAuth(username,password) s.verify = certificate_path # with data below being a dictionary of the values I need to pass. r = s.get(url,data=data) </code></pre> <p>I am trying to include an image of the documentation below, but don't yet have enough reputation to do so:</p> <p><a href="http://i.stack.imgur.com/r7Z5x.png" rel="nofollow">apply/model/explain documentation</a></p>
0
2016-10-05T19:57:18Z
39,882,741
<p>'GET' requests don't have data, that's for 'POST' and friends. </p> <p>You can send the query string arguments using <code>params</code> kwarg instead:</p> <pre><code>&gt;&gt;&gt; params = {'record': '{"'} &gt;&gt;&gt; response = requests.get('http://www.example.com/explain', params=params) &gt;&gt;&gt; response.request.url 'http://www.example.com/explain?record=%7B%22' </code></pre>
0
2016-10-05T20:03:33Z
[ "python", "python-requests" ]
Python Requests - add text at the beginning of query string
39,882,644
<p>When sending data through a python-requests GET request, I have a need to specifically add something at the beginning of the query string. I have tried passing the data in through dicts and json strings with no luck.</p> <p>The request as it appears when produced by requests:</p> <pre><code>/apply/.../explain?%7B%22...... </code></pre> <p>The request as it appears when produced by their interactive API documentation (Swagger):</p> <pre><code>/apply/.../explain?record=%7B%22.... </code></pre> <p>Where the key-value pairs of my data follow the excerpt above.</p> <p>Ultimately, I think the missing piece is the <strong>record=</strong> that gets produced by their documentation. It is the only piece that is different from what is produced by Requests.</p> <p>At the moment I've got it set up something like this:</p> <pre><code>import requests s = requests.Session() s.auth = requests.auth.HTTPBasicAuth(username,password) s.verify = certificate_path # with data below being a dictionary of the values I need to pass. r = s.get(url,data=data) </code></pre> <p>I am trying to include an image of the documentation below, but don't yet have enough reputation to do so:</p> <p><a href="http://i.stack.imgur.com/r7Z5x.png" rel="nofollow">apply/model/explain documentation</a></p>
0
2016-10-05T19:57:18Z
39,883,003
<p>From the comments I felt the need to explain this. </p> <p><code>http://example.com/sth?key=value&amp;anotherkey=anothervalue</code></p> <p>Let's assume you have a url like the one above; in order to call it with python requests you only have to write </p> <pre><code> response = requests.get('http://example.com/sth', params={ 'key':'value', 'anotherkey':'anothervalue' }) </code></pre> <p>Keep in mind that if your values or your keys have any special characters in them they will be escaped; that's the reason for the <code>%7B%2</code> part of the url in your question. </p>
-1
2016-10-05T20:21:33Z
[ "python", "python-requests" ]
How to grab headers in python selenium-webdriver
39,882,645
<p>I am trying to grab the headers in selenium webdriver. Something similar to the following:</p> <pre><code>&gt;&gt;&gt; import requests &gt;&gt;&gt; res=requests.get('http://google.com') &gt;&gt;&gt; print res.headers </code></pre> <p>I need to use the <code>Chrome</code> webdriver because it supports flash and some other things that I need to test a web page. Here is what I have so far in Selenium:</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome() driver.get('https://login.comcast.net/login?r=comcast.net&amp;s=oauth&amp;continue=https%3A%2F%2Flogin.comcast.net%2Foauth%2Fauthorize%3Fclient_id%3Dxtv-account-selector%26redirect_uri%3Dhttps%3A%2F%2Fxtv-pil.xfinity.com%2Fxtv-authn%2Fxfinity-cb%26response_type%3Dcode%26scope%3Dopenid%2520https%3A%2F%2Flogin.comcast.net%2Fapi%2Flogin%26state%3Dhttps%3A%2F%2Ftv.xfinity.com%2Fpartner-success.html%26prompt%3Dlogin%26response%3D1&amp;reqId=18737431-624b-44cb-adf0-2a85d91bd662&amp;forceAuthn=1&amp;client_id=xtv-account-selector') driver.find_element_by_css_selector('#user').send_keys('XY@comcast.net') driver.find_element_by_css_selector('#passwd').send_keys('XXY') driver.find_element_by_css_selector('#passwd').submit() print driver.headers ### How to do this? </code></pre> <p>I have seen some other answers that recommend running an entire selenium server to get this information (<a href="https://github.com/derekargueta/selenium-profiler" rel="nofollow">https://github.com/derekargueta/selenium-profiler</a>). How would I get it using something similar to the above with Webdriver?</p>
4
2016-10-05T19:57:26Z
39,882,959
<p>Unfortunately, you <strong>cannot</strong> get this information from the Selenium webdriver, nor will you be able to any time in the near future it seems. An excerpt from <a href="https://github.com/seleniumhq/selenium-google-code-issue-archive/issues/141" rel="nofollow">a very long conversation on the subject</a>:</p> <blockquote> <p>This feature isn't going to happen.</p> </blockquote> <p>The gist of the main reason being, from what I gather from the discussion, that the webdriver is meant for "driving the browser", and extending the API beyond that primary goal will, in the opinion of the developers, cause the overall quality and reliability of the API to suffer. </p> <p>One potential workaround that I have seen suggested in a number of places, including the conversation linked above, is to use <a href="https://github.com/lightbody/browsermob-proxy" rel="nofollow">BrowserMob Proxy</a>, which can be used to capture HTTP content, and <a href="https://github.com/lightbody/browsermob-proxy#using-with-selenium" rel="nofollow">can be used with selenium</a> - though the linked example does not use the Python selenium API. It does seem that there is <a href="https://github.com/AutomatedTester/browsermob-proxy-py" rel="nofollow">a Python wrapper for BrowserMob Proxy</a>, but I cannot vouch for its efficacy since I have never used it. </p>
2
2016-10-05T20:17:39Z
[ "python", "selenium" ]
How to grab headers in python selenium-webdriver
39,882,645
<p>I am trying to grab the headers in selenium webdriver. Something similar to the following:</p> <pre><code>&gt;&gt;&gt; import requests &gt;&gt;&gt; res=requests.get('http://google.com') &gt;&gt;&gt; print res.headers </code></pre> <p>I need to use the <code>Chrome</code> webdriver because it supports flash and some other things that I need to test a web page. Here is what I have so far in Selenium:</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome() driver.get('https://login.comcast.net/login?r=comcast.net&amp;s=oauth&amp;continue=https%3A%2F%2Flogin.comcast.net%2Foauth%2Fauthorize%3Fclient_id%3Dxtv-account-selector%26redirect_uri%3Dhttps%3A%2F%2Fxtv-pil.xfinity.com%2Fxtv-authn%2Fxfinity-cb%26response_type%3Dcode%26scope%3Dopenid%2520https%3A%2F%2Flogin.comcast.net%2Fapi%2Flogin%26state%3Dhttps%3A%2F%2Ftv.xfinity.com%2Fpartner-success.html%26prompt%3Dlogin%26response%3D1&amp;reqId=18737431-624b-44cb-adf0-2a85d91bd662&amp;forceAuthn=1&amp;client_id=xtv-account-selector') driver.find_element_by_css_selector('#user').send_keys('XY@comcast.net') driver.find_element_by_css_selector('#passwd').send_keys('XXY') driver.find_element_by_css_selector('#passwd').submit() print driver.headers ### How to do this? </code></pre> <p>I have seen some other answers that recommend running an entire selenium server to get this information (<a href="https://github.com/derekargueta/selenium-profiler" rel="nofollow">https://github.com/derekargueta/selenium-profiler</a>). How would I get it using something similar to the above with Webdriver?</p>
4
2016-10-05T19:57:26Z
39,883,010
<p>You mean HTTP header data, right? This is not really the scope of Selenium: <a href="http://www.seleniumhq.org/" rel="nofollow">Selenium automates browsers. That's it!</a> So if you cannot do it with your browser (and I don't know of any way), Selenium is the wrong tool to use. However, if you can do it with JavaScript you could use <code>driver.execute_script(script, *args)</code> as explained <a href="http://selenium-python.readthedocs.io/api.html" rel="nofollow">here</a>.</p>
-1
2016-10-05T20:21:49Z
[ "python", "selenium" ]
Split a pandas dataframe into two by columns
39,882,767
<p>I have a dataframe and I want to split it into two dataframes, one that has all the columns beginning with <code>foo</code> and one with the rest of the columns. Is there a quick way of doing this?</p>
1
2016-10-05T20:05:39Z
39,882,854
<p>You can use <code>list comprehensions</code> to select all column names:</p> <pre><code>df = pd.DataFrame({'fooA':[1,2,3], 'fooB':[4,5,6], 'fooC':[7,8,9], 'D':[1,3,5], 'E':[5,3,6], 'F':[7,4,3]}) print (df) D E F fooA fooB fooC 0 1 5 7 1 4 7 1 3 3 4 2 5 8 2 5 6 3 3 6 9 foo = [col for col in df.columns if col.startswith('foo')] print (foo) ['fooA', 'fooB', 'fooC'] other = [col for col in df.columns if not col.startswith('foo')] print (other) ['D', 'E', 'F'] print (df[foo]) fooA fooB fooC 0 1 4 7 1 2 5 8 2 3 6 9 print (df[other]) D E F 0 1 5 7 1 3 3 4 2 5 6 3 </code></pre> <hr> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow"><code>filter</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow"><code>difference</code></a>:</p> <pre><code>df1 = df.filter(regex='^foo') print (df1) fooA fooB fooC 0 1 4 7 1 2 5 8 2 3 6 9 print (df.columns.difference(df1.columns)) Index(['D', 'E', 'F'], dtype='object') print (df[df.columns.difference(df1.columns)]) D E F 0 1 5 7 1 3 3 4 2 5 6 3 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [123]: %timeit a(df) 1000 loops, best of 3: 1.06 ms per loop In [124]: %timeit b(df3) 1000 loops, best of 3: 1.04 ms per loop In [125]: %timeit c(df4) 1000 loops, best of 3: 1.41 ms per loop </code></pre> <pre><code>df3 = df.copy() df4 = df.copy() def a(df): df1 = df.filter(regex='^foo') df2 = df[df.columns.difference(df1.columns)] return df1, df2 def b(df): df1 = df[[col for col in df.columns if col.startswith('foo')]] df2 = df[[col for col in df.columns if not col.startswith('foo')]] return df1, df2 def c(df): df1 = df[df.columns[df.columns.str.startswith('foo')]] df2 = df[df.columns[~df.columns.str.startswith('foo')]] return df1, df2 df1, df2 = a(df) print (df1) print (df2) df1, df2 = b(df3) print (df1) print (df2) df1, df2 = c(df4) print (df1) print (df2) </code></pre>
2
2016-10-05T20:10:47Z
[ "python", "pandas" ]
how to suppress further processing of matplotlib event
39,882,773
<p>I'm building an interactive graphical application with matplotlib. I want it to play nicely with the existing "pan" and "zoom" functions of the default matplotlib GUI, but I also want to override some keystrokes. My problem is that I don't know how to suppress the GUI's default response to those keystrokes. For example, the GUI responds to the backspace key by stepping back one view in the history: I want to replace this behaviour with my own custom behaviour:</p> <pre><code>def KeyUp(event): if event.key.lower() == 'backspace': RespondToBackspace() PreventAnybodyElseRespondingToBackspace() # HOW?? import matplotlib.pyplot as plt cid = plt.gcf().canvas.mpl_connect( 'key_release_event', KeyUp ) </code></pre> <p>I can find no documentation on cancelling or suppressing event processing and have run out of creativity on my search terms. Speculatively, thinking that this mechanism will work as it does in some other toolkits, I tried returning either <code>True</code> or <code>False</code> from the callback, without effect (i.e. the "step back one view" behaviour still happens, along with my custom response, when I press backspace).</p> <p>Is this possible?</p>
1
2016-10-05T20:05:47Z
39,901,886
<p>From tacaswell's comments above:</p> <p>Callbacks are stored in a dictionary so the order in which they get called cannot be guaranteed. Callback return values are ignored.</p> <p>The solution is to change <code>matplotlib.rcParams['keymap.back']</code> so that it does not include <code>'backspace'</code>.</p>
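A minimal sketch of that fix (the exact default key list varies by matplotlib version, so filtering is safer than assigning a hard-coded list):

```python
import matplotlib

# Remove 'backspace' from the navigation keymap so the toolbar's
# "back one view" action no longer fires on that key.
matplotlib.rcParams['keymap.back'] = [
    k for k in matplotlib.rcParams['keymap.back'] if k != 'backspace'
]

print('backspace' in matplotlib.rcParams['keymap.back'])  # False
```

After this, the custom `key_release_event` handler is the only thing responding to backspace.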
0
2016-10-06T17:06:52Z
[ "python", "matplotlib" ]
How can I get a list of the days in a year and follow this form?
39,882,809
<p>This code does not work at all, and I'm still confused by the pandas datetime method...</p> <pre><code>def date_list(): list = [] for i in pd.date_range(datetime(2016, 1, 1), datetime(2016, 12, 31)): list.append(datetime[1], datetime[2]) return list </code></pre> <p>This is an example of the list; should I use the zfill method for making this list (if it's a leap year)? [0101, 0102, 0103, ..., 1230, 1231]</p>
0
2016-10-05T20:07:39Z
39,883,081
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.strftime.html" rel="nofollow"><code>strftime</code></a> to directly do the formatting:</p> <pre><code>dt_list = pd.date_range('2016-01-01', '2016-12-31').strftime('%m%d') </code></pre> <p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html" rel="nofollow"><code>tolist</code></a> if you want a Python list instead of a numpy array:</p> <pre><code>dt_list = pd.date_range('2016-01-01', '2016-12-31').strftime('%m%d').tolist() </code></pre>
1
2016-10-05T20:26:22Z
[ "python", "python-3.x", "pandas", "anaconda", "miniconda" ]
After override the field as compute it's not working
39,882,822
<p>I made some changes in product.template and overrode the field like this</p> <pre><code>list_price = fields.Float(copy=False, string='Sale Price', store=True,compute='_calculate_sale_price', help='compute cost as per gun_dealer', default=0.00) </code></pre> <p>and the compute method looks like </p> <pre><code>@api.multi @api.depends('profit_per_unit', 'estimated_shipping_charges' , 'flat_rate_charges', 'list_price') def _calculate_sale_price(self): if self.category_parent_id == 33: self.list_price = self.standard_price+self.profit_per_unit+ \ self.estimated_shipping_charges else: self.list_price = self.standard_price+self.profit_per_unit-self.collected_shipping_charges + \ self.flat_rate_charges </code></pre> <p>But after doing this, the Unit Price on the sale order's order line is not picking up the value of the Sales Price from product.template</p>
0
2016-10-05T20:08:38Z
39,883,798
<p>THIS IS NOT A SOLUTION. But I can't really put this suggestion in a comment.</p> <p>I am not sure if you should use list_price in your depends decorator arguments. Try placing some logging in and see what is happening when you load the page.</p> <pre><code>import logging _logger = logging.getLogger(__name__) @api.multi @api.depends('profit_per_unit', 'estimated_shipping_charges', 'flat_rate_charges') def _calculate_sale_price(self): _logger.info("EXECUTING _calculate_sale_price() function") if self.category_parent_id == 33: _logger.info("CATEGORY PARENT ID IS 33!") _logger.info("list_price = {} + {} + {}".format(self.standard_price, self.profit_per_unit, self.estimated_shipping_charges)) self.list_price = self.standard_price+self.profit_per_unit + self.estimated_shipping_charges else: _logger.info("CATEGORY PARENT ID IS NOT 33! REPEAT NOT") _logger.info("list_price = {} + {} - {} + {}".format(self.standard_price, self.profit_per_unit, self.collected_shipping_charges, self.flat_rate_charges)) self.list_price = self.standard_price + self.profit_per_unit -self.collected_shipping_charges + self.flat_rate_charges </code></pre>
0
2016-10-05T21:15:55Z
[ "python", "odoo-9" ]
Replacing a node in graph with custom op having variable dependency in tensorflow
39,882,956
<p>I am trying to replace the computation done in the graph with a custom op that does the same. </p> <p>Let's say the graph has a constant <code>A</code> and a weight variable <code>W</code>. I create the custom op to take these two inputs and do the entire computation (except the last step of the weight update):</p> <pre><code>custom_op_tensor = custom_module.custom_op([A,W]) g_def = tf.get_default_graph().as_graph_def() input_map = { tensor.name : custom_op_tensor } train_op, = tf.import_graph_def(g_def, input_map=input_map, return_elements=[train_op]) </code></pre> <p>After the import graph def, there are two <code>W</code>'s, one from the original graph def and one in the imported graph. When we run the train op, the custom op ends up reading the old <code>W</code> while the new <code>W</code> is updated. As a result, the gradient descent ends up failing to do the right thing.</p> <p>The problem is that instantiation of the custom_op requires the input weight tensor <code>W</code>. The new <code>W</code> is known only after the import. And the import requires the custom op. How does one get around this problem?</p>
2
2016-10-05T20:17:30Z
40,136,131
<p>Could you specify which version of Tensorflow you use: r0.08, r0.09, r0.10, r0.11?</p> <p>It is impossible to replace an op in the graph with another op. But if you can access W, you can still make a backup copy of W (using deepcopy() <a href="https://docs.python.org/2/library/copy.html" rel="nofollow">from the copy module</a>) before running the train op which updates it.</p> <p>Regards</p>
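The backup idea can be sketched with plain Python objects — a hypothetical stand-in for the weight variable, whose value you would first read out of the TensorFlow session:

```python
from copy import deepcopy

# Hypothetical stand-in for a weight variable: a mutable nested structure.
W = {'weights': [[0.1, 0.2], [0.3, 0.4]]}

backup = deepcopy(W)        # snapshot taken before the training step

W['weights'][0][0] = 999.0  # ... the train op mutates W in place ...

print(backup['weights'][0][0])  # 0.1 -- the snapshot is unaffected
```

A shallow copy would not be enough here, since the inner lists would still be shared with the live object.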
0
2016-10-19T15:47:06Z
[ "python", "graph", "tensorflow", "custom-operator" ]
How can I pause a long running loop?
39,882,967
<p>I have a question about pausing / resuming a long running program. I'll use Python in this example, but it could be for any programming language really.</p> <p>Let's say I want to sum up all the numbers to a billion for example</p> <pre><code>results = [] for i in range(1000000000): results.append(i + i*2) with open('./sum.txt', 'w') as outfile: for r in results: outfile.write(str(r)+'\n') </code></pre> <p>That's a very simple program and in the end I would like to write the values out to a file.</p> <p>Let's say I start this script and it's taking hours and I want to shut down my computer. Now I could wait but what are some ways that I could stop this process while keeping the data intact?</p> <p>I've thought about opening the file before the loop and inside the loop, instead of appending to the results list, I could just write [append] to the opened file but that seems like still if I quit the program, the file never closes and the data could get corrupted. Even if I just write to the file in the loop then directly close the file after the write operation, couldn't that still cause data corruption or something?</p> <p>What are some ways that I could tackle a problem like this, where I'd like to be able to stop a program and later restart it and have it continue from where it left off?</p> <p>If you have a long running script that does most of the work on the inner part of a loop.</p> <blockquote> <p>@MisterMiyagi Can you please specify what your actual question is? Creating safe-points inside the loop is the way to go if you want to preserve the loop's state. Whether file I/O is safe against corruption doesn't depend on your application so much as operating/file system. In Linux, it's safe to write to a temporary file, then rename it; files are also closed automatically when a program exits.</p> </blockquote> <p>I was more interested in a safe place inside the loop.
What are some recommended ways to do that in a program that's structured like the one above?</p>
0
2016-10-05T20:19:09Z
39,883,237
<p>Ctrl+C won't corrupt the file. I'm not sure how sophisticated you need to be, but a simple solution would be to be able to give the program an input of where to resume from (note the file is opened in append mode so a resumed run doesn't wipe earlier output, and the parsed start value is actually passed to main):</p> <pre><code>def main(start=0): with open('./sum.txt', 'a') as outfile: for i in range(start, 10000000): outfile.write(str(i) +'\n') if __name__ == "__main__": import sys try: start = int(sys.argv[1]) except IndexError: start = 0 main(start) </code></pre> <p>Then run it like this to resume from 1000:</p> <pre><code>python foo.py 1000 </code></pre>
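If the progress record itself needs to survive a crash, this approach can be extended with periodic checkpoints plus the write-to-temp-then-rename trick quoted in the question. A sketch (file names are placeholders; lines written after the last checkpoint may be duplicated on resume):

```python
import os

def run(total=10000000, checkpoint='progress.txt', output='sum.txt'):
    start = 0
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            start = int(f.read())   # resume where the last run checkpointed
    with open(output, 'a') as out:
        for i in range(start, total):
            out.write('%d\n' % (i + i * 2))
            if i % 100000 == 0:
                out.flush()
                # write progress to a temp file, then atomically rename it,
                # so the checkpoint file is never half-written
                with open(checkpoint + '.tmp', 'w') as cp:
                    cp.write(str(i + 1))
                os.replace(checkpoint + '.tmp', checkpoint)
```

The rename makes the checkpoint update atomic on POSIX systems, so killing the process at any point leaves either the old or the new checkpoint, never a corrupt one.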
1
2016-10-05T20:38:00Z
[ "python", "loops", "for-loop", "long-running-processes", "pause" ]
How can I pause a long running loop?
39,882,967
<p>I have a question about pausing / resuming a long running program. I'll use Python in this example, but it could be for any programming language really.</p> <p>Let's say I want to sum up all the numbers to a billion for example</p> <pre><code>results = [] for i in range(1000000000): results.append(i + i*2) with open('./sum.txt', 'w') as outfile: for r in results: outfile.write(str(r)+'\n') </code></pre> <p>That's a very simple program and in the end I would like to write the values out to a file.</p> <p>Let's say I start this script and it's taking hours and I want to shut down my computer. Now I could wait but what are some ways that I could stop this process while keeping the data intact?</p> <p>I've thought about opening the file before the loop and inside the loop, instead of appending to the results list, I could just write [append] to the opened file but that seems like still if I quit the program, the file never closes and the data could get corrupted. Even if I just write to the file in the loop then directly close the file after the write operation, couldn't that still cause data corruption or something?</p> <p>What are some ways that I could tackle a problem like this, where I'd like to be able to stop a program and later restart it and have it continue from where it left off?</p> <p>If you have a long running script that does most of the work on the inner part of a loop.</p> <blockquote> <p>@MisterMiyagi Can you please specify what your actual question is? Creating safe-points inside the loop is the way to go if you want to preserve the loop's state. Whether file I/O is safe against corruption doesn't depend on your application so much as operating/file system. In Linux, it's safe to write to a temporary file, then rename it; files are also closed automatically when a program exits.</p> </blockquote> <p>I was more interested in a safe place inside the loop.
What are some recommended ways to do that in a program that's structured like the one above?</p>
0
2016-10-05T20:19:09Z
39,889,147
<h3>Create a condition for the loop to either continue or hold</h3> <p>A possible solution would be in the script, periodically checking for a "trigger" file to exist. In the example below, the loop checks once in hundred cycles, but you could make it 500 for example.</p> <p><strong><em>If</em></strong> and as long as the file exists, the loop will hold, checking again every two seconds. <br> While this works fine, you will have to accept the fact that it slightly slows down the loop, depending on <code>n</code>; larger n will decrease the effect. <em>ANY</em> solution inside the loop will however have at least <em>some</em> effect on the loop.</p> <p>Although this is on <code>python</code>, the concept should be possible in any language, and on any os.</p> <p>The <code>print</code> functions are only to show that it works of course:)</p> <h3>The edited loop:</h3> <pre><code>import os import time results = [] n = 100 for i in range(1000000000): results.append(i + i*2) print(i) if i%n == 0: while os.path.exists("test123"): time.sleep(2) print("wait") else: print("go on") with open('./sum.txt', 'w') as outfile: for r in results: output.write(r+'\n') outfile.close() </code></pre> <p>The result, if I create the trigger file and remove it again:</p> <pre><code>4096 4097 4098 4099 wait wait wait wait wait wait go on 4100 4101 4102 </code></pre>
1
2016-10-06T06:38:55Z
[ "python", "loops", "for-loop", "long-running-processes", "pause" ]
tearDown not called after timeout in twisted trial?
39,883,058
<p>I'm seeing an issue in my test suite in trial where everything works fine until I get a timeout. If a test fails due to a timeout, the tearDown function never gets called, leaving the reactor unclean which in turn causes the rest of the tests to fail. I believe tearDown should be called after a timeout, does anyone know why this might happen?</p>
0
2016-10-05T20:24:44Z
39,883,194
<p>You are correct that <code>tearDown()</code> should be called regardless of what happens in your test. From <a href="https://docs.python.org/2/library/unittest.html#unittest.TestCase.tearDown" rel="nofollow">the documentation</a> for <code>tearDown()</code>:</p> <blockquote> <p>This is called even if the test method raised an exception</p> </blockquote> <p>However, there is a catch. From the same documentation:</p> <blockquote> <p>This method will only be called if the setUp() succeeds, regardless of the outcome of the test method.</p> </blockquote> <p>So it sounds like you perhaps start the reactor in <code>setUp()</code> and when it times out, this is preventing your <code>tearDown()</code> from running - the idea being that whatever you were trying to "set up" in <code>setUp()</code> was <em>not</em> successfully set up, so you do not want to try to tear it down. However, it would be hard to diagnose with certainty unless you provide the code of your <code>setUp</code> and <code>tearDown</code> methods, along with the code of any relevant tests. </p>
1
2016-10-05T20:34:48Z
[ "python", "testing", "twisted", "trial" ]
tearDown not called after timeout in twisted trial?
39,883,058
<p>I'm seeing an issue in my test suite in trial where everything works fine until I get a timeout. If a test fails due to a timeout, the tearDown function never gets called, leaving the reactor unclean which in turn causes the rest of the tests to fail. I believe tearDown should be called after a timeout, does anyone know why this might happen?</p>
0
2016-10-05T20:24:44Z
39,888,137
<p>It's rather strange because on my box, the teardown executes even if a timeout occurs. The tests should stop running if the reactor is not in a clean state, unless you use the <code>--unclean-warnings</code> flag. Does the test runner stop after the timeout for you? What version of Python and Twisted are you running?</p> <p>As a side note, if you need to run a unique teardown for a specific test function, there's a very convenient <a href="https://twistedmatrix.com/documents/current/core/howto/trial.html#cleaning-up-after-tests" rel="nofollow"><code>addCleanup()</code></a> callback. It comes in handy if you need to cancel callback, LoopingCall, or callLater functions so that the reactor isn't in a dirty state. <code>addCleanup</code> returns a <code>Deferred</code> so you can just chain callbacks that perform an adhoc teardown. It might be a good option to try if the class teardown isn't working for you.</p> <p>PS</p> <p>I've been so used to writing "well behaved" Twisted code, I don't even recall how to get into an unclean reactor state :D I swear I'm not bragging. Could you provide me a brief summary of what you're doing so that I could test it out on my end?</p>
1
2016-10-06T05:26:44Z
[ "python", "testing", "twisted", "trial" ]
How to create and set variables as classes from a list of tuples?
39,883,083
<p>I am trying to figure out how to create variables from a list of tuples and assign them to a class.</p>

<p>I have data organized like this</p>

<pre><code> Name   Age   Status
 John   30    Employed
</code></pre>

<p>From it I have created a list of tuples like this</p>

<pre><code> employeelist = [('John', 30, 'Employed'), ('Steve', 25, 'Part-Time')]
</code></pre>

<p>And a class set up like this</p>

<pre><code>class Employee():
    ecount = 0
    elist = []

    def __init__(self, name, age, emp_status):
        self.name = name
        self.age = age
        self.emp_status = emp_status
        self.lookup = {}
        Employee.ecount = Employee.ecount+1
        Employee.elist.append(self.name)
</code></pre>

<p>Using this code I am able to turn the tuples into instances of the class</p>

<pre><code>for i in range(0, len(employeelist), 1):
    sublist = [str(n) for n in employeelist[i]]
    Employee(sublist[0], sublist[1], sublist[2])
</code></pre>

<p>But I am not able to access them. Is there a way to set up the for loop so that it creates a variable from sublist[0] and then assigns the instance to it (e.g. sublist[0] = Employee(sublist[0], sublist[1], sublist[2]))?</p>
2
2016-10-05T20:26:31Z
39,883,127
<p>You just need</p>

<pre><code>employees = [Employee(*v) for v in employeelist]
</code></pre>

<p>Note that <code>employees</code> holds the instances themselves, while <code>Employee.elist</code> only collects the names, once each <code>Employee</code> object has been created.</p>
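<p>Putting it together with the question's data (string literals quoted, since the tuples must hold actual values) gives a runnable sketch:</p>

```python
class Employee:
    ecount = 0
    elist = []

    def __init__(self, name, age, emp_status):
        self.name = name
        self.age = age
        self.emp_status = emp_status
        Employee.ecount += 1
        Employee.elist.append(self.name)

employeelist = [('John', 30, 'Employed'), ('Steve', 25, 'Part-Time')]

# *v unpacks each tuple into the constructor's positional arguments
employees = [Employee(*v) for v in employeelist]

print([e.name for e in employees])  # ['John', 'Steve']
print(Employee.ecount)              # 2
```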
4
2016-10-05T20:30:11Z
[ "python", "class", "loops", "tuples" ]
scipy minimization/optimization of simple function
39,883,173
<p>I'm having trouble optimizing a very simple function I'm using as a test case before moving on to something more complex. I've tried different optimization methods, giving the method a bound and even giving the exact solution as the initial guess.</p>

<p>Function I'm trying to optimize: <code>f(x) = 1 / x - x</code></p>

<p>Here is my code:</p>

<pre><code>import scipy.optimize

def testfun(x):
    return (1 / x - x)

sol = scipy.optimize.minimize(testfun, 1).x
</code></pre>

<p>It returns large numbers (3.2e+08) as the solution.</p>

<p>Am I using the optimization function incorrectly?</p>
0
2016-10-05T20:33:31Z
39,883,720
<p>As Victor mentioned, the optimization function is working correctly.</p>

<p>I was looking to solve <code>f(x) --&gt; 0</code>, which requires a root-finding method rather than an optimization routine.</p>

<p>For example:</p>

<p><code>scipy.optimize.root(testfun, 1)</code> or <code>scipy.optimize.newton(testfun, 1)</code></p>
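<p>For intuition, here is what a root finder does for this function. This is a hand-rolled Newton iteration in plain Python rather than the SciPy call, so it is only a sketch of the idea (the tolerance and starting point are arbitrary choices):</p>

```python
def f(x):
    return 1 / x - x

def fprime(x):
    # analytic derivative of 1/x - x
    return -1 / x**2 - 1

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(f, fprime, 0.5)
print(root)  # converges to the root at x = 1, not to a minimizer
```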
0
2016-10-05T21:10:06Z
[ "python", "optimization" ]
Game timer in pygame
39,883,175
<p>I'm trying to implement a game timer for a word memory game. The game has a start screen, so the time should be calculated from when the user presses "play", paused when the user presses "pause" mid game and continued when the user presses "continue" on the pause screen. I understand that the correct implementation of this should be independent(ish) of the ticks as I'm using that to control fps. Also it is suggested to use pygame.time.set_timer(), which in most examples takes (USEREVENT+1, 1000) as arguments. What is USEREVENT? I can't grasp the documentation for this function.</p>

<p>Below is my implementation. It starts measuring time at pygame.init(), not when I call my game loop although the function call for timer() is in the game loop, and can't be paused. How do I rewrite it in order to be able to control the start/stop state of the timer?</p>

<p>Thanks!</p>

<pre><code>def timer():
    #isOn =True
    #while isOn:
    second=round((pygame.time.get_ticks())/1000)
    minute=0
    hour=0
    minute, second=divmod(second, 60)
    hour, minute=divmod(minute, 60)
    #text=smallfont.render("Elapsed time: "+str(getSeconds), True, WHITE)
    text=smallfont.render("Elapsed time: "+str("%d" %hour + " : "+"%d" %minute + " : "+"%d" %second), True, WHITE)
    gameDisplay.blit(text, [0+MARGIN,350])
    return time
</code></pre>
0
2016-10-05T20:33:34Z
39,883,405
<p>Create your own timer class. Use <code>time.time()</code> (from the regular python <code>time</code> module). </p> <pre><code>import time class MyTimer: def __init__(self): self.elapsed = 0.0 self.running = False self.last_start_time = None def start(self): if not self.running: self.running = True self.last_start_time = time.time() def pause(self): if self.running: self.running = False self.elapsed += time.time() - self.last_start_time def get_elapsed(self): elapsed = self.elapsed if self.running: elapsed += time.time() - self.last_start_time return elapsed </code></pre>
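<p>Usage might look like the following sketch (the class is repeated so the snippet runs standalone; the sleep lengths stand in for gameplay and are arbitrary):</p>

```python
import time

class MyTimer:
    # same class as above, repeated so this snippet is self-contained
    def __init__(self):
        self.elapsed = 0.0
        self.running = False
        self.last_start_time = None

    def start(self):
        if not self.running:
            self.running = True
            self.last_start_time = time.time()

    def pause(self):
        if self.running:
            self.running = False
            self.elapsed += time.time() - self.last_start_time

    def get_elapsed(self):
        elapsed = self.elapsed
        if self.running:
            elapsed += time.time() - self.last_start_time
        return elapsed

timer = MyTimer()
timer.start()                 # user pressed "play"
time.sleep(0.05)
timer.pause()                 # user pressed "pause"
frozen = timer.get_elapsed()
time.sleep(0.05)              # time spent paused is not counted
assert timer.get_elapsed() == frozen
timer.start()                 # user pressed "continue"
time.sleep(0.05)
timer.pause()
print(timer.get_elapsed())    # roughly 0.1 seconds of "running" time
```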
2
2016-10-05T20:50:09Z
[ "python", "timer", "pygame" ]
Append a column to a dataframe in Pandas
39,883,182
<p>I am trying to append a numpy.ndarray to a dataframe with little success. The dataframe is called user2 and the numpy.ndarray is called CallTime.</p>

<p>I tried:</p>

<pre><code>user2["CallTime"] = CallTime.values
</code></pre>

<p>but I get an error message:</p>

<pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-53-fa327550a3e0&gt;", line 1, in &lt;module&gt;
    user2["CallTime"] = CallTime.values
AttributeError: 'numpy.ndarray' object has no attribute 'values'
</code></pre>

<p>Then I tried:</p>

<pre><code>user2["CallTime"] = user2.assign(CallTime = CallTime.values)
</code></pre>

<p>but I get again the same error message as above.</p>

<p>I also tried to use the merge command but for some reason it was not recognized by Python although I have imported pandas. In the example below CallTime is a dataframe:</p>

<pre><code>user3 = merge(user2, CallTime)
</code></pre>

<p>Error message:</p>

<pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-56-0ebf65759df3&gt;", line 1, in &lt;module&gt;
    user3 = merge(user2, CallTime)
NameError: name 'merge' is not defined
</code></pre>

<p>Any ideas?</p>

<p>Thank you!</p>
-2
2016-10-05T20:33:45Z
39,883,467
<p>A <code>pandas DataFrame</code> is a 2-dimensional data structure, and each column of a <code>DataFrame</code> is a 1-dimensional <code>Series</code>. So if you want to add one column to a DataFrame, you must first convert it into a <code>Series</code>. An np.ndarray is a multi-dimensional data structure. From your code, I believe the shape of the np.ndarray <code>CallTime</code> should be <code>nx1</code> (<code>n</code> rows and <code>1</code> column), and it's easy to convert it to a Series. Here is an example:</p>

<pre><code>df = pd.DataFrame(np.random.rand(5,2), columns=['A', 'B'])
</code></pre>

<p>This creates a dataframe <code>df</code> with two columns 'A' and 'B', and <code>5</code> rows.</p>

<pre><code>CallTime = np.random.rand(5,1)
</code></pre>

<p>Assume this is your <code>np.ndarray</code> data <code>CallTime</code>.</p>

<pre><code>df['C'] = pd.Series(CallTime[:, 0])
</code></pre>

<p>This will add a new column to <code>df</code>. Here <code>CallTime[:,0]</code> is used to select the first column of <code>CallTime</code>, so if you want to use a different column from the <code>np.ndarray</code>, change the index.</p>

<p>Please make sure that the number of rows for <code>df</code> and <code>CallTime</code> are equal.</p>

<p>Hope this would be helpful.</p>
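<p>A concrete run of that idea with fixed numbers (the tiny frame and the 3x1 array are made up so the result is easy to check):</p>

```python
import numpy as np
import pandas as pd

user2 = pd.DataFrame({'A': [1, 2, 3]})
CallTime = np.array([[0.1], [0.2], [0.3]])   # shape (3, 1), an n x 1 ndarray

# select the first (only) column of the ndarray and wrap it in a Series
user2['CallTime'] = pd.Series(CallTime[:, 0])

print(user2)  # 'CallTime' column now holds 0.1, 0.2, 0.3
```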
0
2016-10-05T20:53:48Z
[ "python", "numpy" ]
Append a column to a dataframe in Pandas
39,883,182
<p>I am trying to append a numpy.ndarray to a dataframe with little success. The dataframe is called user2 and the numpy.ndarray is called CallTime.</p>

<p>I tried:</p>

<pre><code>user2["CallTime"] = CallTime.values
</code></pre>

<p>but I get an error message:</p>

<pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-53-fa327550a3e0&gt;", line 1, in &lt;module&gt;
    user2["CallTime"] = CallTime.values
AttributeError: 'numpy.ndarray' object has no attribute 'values'
</code></pre>

<p>Then I tried:</p>

<pre><code>user2["CallTime"] = user2.assign(CallTime = CallTime.values)
</code></pre>

<p>but I get again the same error message as above.</p>

<p>I also tried to use the merge command but for some reason it was not recognized by Python although I have imported pandas. In the example below CallTime is a dataframe:</p>

<pre><code>user3 = merge(user2, CallTime)
</code></pre>

<p>Error message:</p>

<pre><code>Traceback (most recent call last):
  File "&lt;ipython-input-56-0ebf65759df3&gt;", line 1, in &lt;module&gt;
    user3 = merge(user2, CallTime)
NameError: name 'merge' is not defined
</code></pre>

<p>Any ideas?</p>

<p>Thank you!</p>
-2
2016-10-05T20:33:45Z
39,889,564
<p>Instead of providing only documentation, I will try to provide a sample:</p>

<pre><code>import numpy as np
import pandas as pd

data = {'A': [2010, 2011, 2012],
        'B': ['Bears', 'Bears', 'Bears'],
        'C': [11, 8, 10],
        'D': [5, 8, 6]}
user2 = pd.DataFrame(data, columns=['A', 'B', 'C', 'D'])

# creating the array that will be appended to the pandas dataframe user2
CallTime = np.array([1, 2, 3])

# convert the ndarray CallTime to a list; if your CallTime is a matrix, then
# after converting to a list you can iterate, or you can convert it into a
# dataframe and just append the required column, or simply join the dataframes
user2.loc[:,'CallTime'] = CallTime.tolist()

print(user2)
</code></pre>

<p><a href="http://i.stack.imgur.com/GuchR.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/GuchR.jpg" alt="result of dataframe user2"></a></p>

<p>I think this one will help. Also check the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html" rel="nofollow">numpy.ndarray.tolist</a> documentation if you need to find out why we need the list and how to do it, and here is an example of how to create a dataframe from numpy in case of need: <a href="http://stackoverflow.com/a/35245297/2027457">http://stackoverflow.com/a/35245297/2027457</a></p>
0
2016-10-06T07:03:26Z
[ "python", "numpy" ]
Pandas - Self reference of instances in column
39,883,232
<p>I have the following DF</p> <pre><code> SampleID ParentID 0 S10 S20 1 S10 S30 2 S20 S40 3 S30 4 S40 </code></pre> <p>How can I put the id of the other row in the column 'ParentID' instead of the string?</p> <p>Expected result:</p> <pre><code> SampleID ParentID 0 S10 2 1 S10 3 2 S20 4 3 S30 4 S40 </code></pre> <p>The closest result I found for this problem was: <a href="http://stackoverflow.com/questions/28101317/how-to-self-reference-column-in-pandas-data-frame">How to self-reference column in pandas Data Frame?</a></p>
0
2016-10-05T20:37:35Z
39,883,603
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> and then assign the column <code>index</code>:</p>

<pre><code>df1 = pd.merge(df[['SampleID']].reset_index(), df[['ParentID']], left_on='SampleID', right_on='ParentID')
print (df1)
   index SampleID ParentID
0      2      S20      S20
1      3      S30      S30
2      4      S40      S40

df['ParentID'] = df1['index']
df.fillna('', inplace=True)
print (df)
  SampleID ParentID
0      S10        2
1      S10        3
2      S20        4
3      S30         
4      S40         
</code></pre>

<hr>

<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> and a <code>dict</code> where keys are swapped with values:</p>

<pre><code>d = dict((v,k) for k,v in df.SampleID.items())
print (d)
{'S10': 1, 'S40': 4, 'S20': 2, 'S30': 3}

df.ParentID = df.ParentID.map(d)
df.ParentID.fillna('', inplace=True)
print (df)
  SampleID ParentID
0      S10        2
1      S10        3
2      S20        4
3      S30         
4      S40         
</code></pre>
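<p>A self-contained version of the <code>map</code> approach, rebuilt from the question's frame (the blanks are modeled here as empty strings, which is an assumption about the original data):</p>

```python
import pandas as pd

df = pd.DataFrame({'SampleID': ['S10', 'S10', 'S20', 'S30', 'S40'],
                   'ParentID': ['S20', 'S30', 'S40', '', '']})

# reverse lookup: SampleID value -> row index (for duplicates, the later row wins)
d = {v: k for k, v in df['SampleID'].items()}

df['ParentID'] = df['ParentID'].map(d)   # values absent from d become NaN
df['ParentID'] = df['ParentID'].fillna('')

print(df)  # ParentID now holds 2, 3, 4 and two blanks
```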
2
2016-10-05T21:01:48Z
[ "python", "pandas" ]