Dataset columns: title, question_id (int64), question_body, question_score (int64), question_date, answer_id (int64), answer_body, answer_score (int64), answer_date, tags (list).
How to drop columns which have same values in all rows via pandas or spark dataframe?
39,658,574
<p>Suppose I have data similar to the following:</p> <pre><code> index id name value value2 value3 data1 val5 0 345 name1 1 99 23 3 66 1 12 name2 1 99 23 2 66 5 2 name6 1 99 23 7 66 </code></pre> <p>How can we drop all those columns like (<code>value</code>, <code>value2</code>, <code>value3</code>) where all rows have the same values, in one command or a couple of commands using <strong>python</strong>? </p> <p>Consider that we have many columns similar to <code>value</code>,<code>value2</code>,<code>value3</code>...<code>value200</code>.</p> <p>Output:</p> <pre><code> index id name data1 0 345 name1 3 1 12 name2 2 5 2 name6 7 </code></pre>
2
2016-09-23T10:30:25Z
39,658,853
<p>Another solution is to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> on the columns which are not being compared, then compare the first row (selected with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a>) against the whole <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow"><code>eq</code></a>, and finally use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>df1 = df.set_index(['index','id','name']) print (~df1.eq(df1.iloc[0]).all()) value False value2 False value3 False data1 True val5 False dtype: bool print (df1.loc[:, (~df1.eq(df1.iloc[0]).all())].reset_index()) index id name data1 0 0 345 name1 3 1 1 12 name2 2 2 5 2 name6 7 </code></pre>
2
2016-09-23T10:45:33Z
[ "python", "pandas", "duplicates", "multiple-columns", "spark-dataframe" ]
How can I search for specific part of strings in a list and list them? Python 2.7
39,658,589
<p>With some help I'm hoping to build a list of strings from a bunch of text files whereby the date stamp of the filename is older than three days and the output list only contains part of the strings i.e. filename = 2016_08_18_23_10_00 - playlist, string in file is E:\media\filename.mxf or D:\media2\filename.mxf. I wish for the list to contain only the filename.mxf for example. So far I have the following:</p> <pre><code>## imports modules ## from datetime import datetime, timedelta import os import re ## directory ## path = r"C:\Users\michael.lawton\Desktop\Housekeeper\Test" ## days to subtract variable ## days_to_subtract = 3 ## re-import datetime module ## import datetime ## finds all files with date and time stamp. If statement is true adds them to list ## lines = [] for filename in os.listdir(path): date_filename = datetime.datetime.strptime(filename.split(" ")[0], '%Y_%m_%d_%H_%M_%S') if date_filename &lt; datetime.datetime.now()- datetime.timedelta(days=days_to_subtract): with open(os.path.join(path, filename), 'r') as f: lines.extend(f.readlines()) # put all lines into array ## opens files in array ## print filename # debug file = open(os.path.join(path,filename), 'r') print file.read() # debug rasp = file.read() ## search for all strings containing .mxf from array ## import fnmatch import os.path pattern = "*.mxf" matching = [os.path.basename(s) for s in file if fnmatch.fnmatch(s, pattern)] print matching # currently the output is empty i.e. [] </code></pre>
0
2016-09-23T10:30:53Z
39,667,274
<p>If I understood you correctly, you want to put all the files of type "mxf" that are older than three days into an array. You are doing things more complicated than you have to. Simply check if the filename matches the pattern you have defined and add those files to an array (e.g. <code>matchingfiles</code>):</p> <pre><code>## imports modules ## from datetime import datetime, timedelta import os import re import fnmatch import os.path ## directory ## path = r"C:\Users\michael.lawton\Desktop\Housekeeper\Test" ## days to subtract variable ## days_to_subtract = 3 pattern = "*.mxf" ## finds all files with date and time stamp. If statement is true adds them to list ## matchingfiles = [] for filename in os.listdir(path): date_filename = datetime.strptime(filename.split(" ")[0], '%Y_%m_%d_%H_%M_%S') if date_filename &lt; datetime.now() - timedelta(days=days_to_subtract): if fnmatch.fnmatch(filename, pattern): matchingfiles.append(os.path.join(path, filename)) # put all matching filenames in array # if you just want to include the filename, you can do this instead: # matchingfiles.append(filename) print matchingfiles </code></pre> <p>No need to reimport datetime, you can simply use <code>datetime.strptime</code> instead of <code>datetime.datetime.strptime</code>. And put all imports at the top of the file, it makes it MUCH easier to read and maintain the code.</p>
0
2016-09-23T18:22:46Z
[ "python", "arrays", "list" ]
Caching in python using *args and lambda functions
39,658,710
<p>I recently attempted Google's <a href="http://www.ibtimes.co.uk/google-foobar-how-searching-web-earned-software-graduate-job-google-1517284" rel="nofollow"> foo.bar challenge</a>. After my time was up I decided to try to find a solution to the problem I couldn't do and found a solution <a href="https://github.com/rtheunissen/foobar/blob/master/line_up_the_captives.py" rel="nofollow">here </a> (includes the problem statement if you're interested). I'd previously been making a dictionary for every function I wanted to cache, but it looks like in this solution any function/input can be cached using the same syntax. </p> <p>Firstly, I'm confused about how the code is even working; the *args variable isn't inputted as an argument (and prints to nothing). Here's a modified minimal example to illustrate my confusion:</p> <pre><code>mem = {} def memoize(key, func, *args): """ Helper to memoize the output of a function """ print(args) if key not in mem: # store the output of the function in memory mem[key] = func(*args) return mem[key] def example(n): return memoize( n, lambda: longrun(n), ) def example2(n): return memoize( n, longrun(n), ) def longrun(n): for i in range(10000): for j in range(100000): 2**10 return n </code></pre> <p>Here I use the same <strong>memoize</strong> function but with a print. The function <strong>example</strong> returns <strong>memoize(n, a lambda function,)</strong>. The function <strong>longrun</strong> is just an identity function with lots of useless computation so it's easy to see if the cache is working (<strong>example(2)</strong> will take ~5 seconds the first time and be almost instant after). </p> <p>Here are my confusions:</p> <ul> <li>Why is the third argument of memoize empty? When args is printed in <strong>memoize</strong> it prints (). Yet somehow mem[key] stores func(*args) as func(key)?</li> <li>Why does this behavior only work when using the lambda function (<strong>example</strong> will cache but <strong>example2</strong> won't)? 
I thought lambda: longrun(n) is just a short way of giving as input a function which returns longrun(n).</li> </ul> <p>As a bonus, does anyone know how you could memoize functions using a decorator?</p> <p>Also I couldn't think of a more descriptive title, edits welcome. Thanks.</p>
0
2016-09-23T10:37:35Z
39,659,174
<p>The notation <a href="https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists" rel="nofollow"><code>*args</code></a> stands for a variable number of positional arguments. For example, <code>print</code> can be used as <code>print(1)</code>, <code>print(1, 2)</code>, <code>print(1, 2, 3)</code> and so on. Similarly, <code>**kwargs</code> stands for a variable number of keyword arguments.</p> <p>Note that the names <code>args</code> and <code>kwargs</code> are just a convention - it's the <code>*</code> and <code>**</code> symbols that make them variadic.</p> <p>Anyways, <code>memoize</code> uses this to accept basically <em>any</em> input to <em>func</em>. If the result of <em>func</em> isn't cached, it's called with the arguments. In a function call, <code>*args</code> is basically the reverse of <code>*args</code> in a function definition. For example, the following are equivalent:</p> <pre><code># provide *args explicitly print(1, 2, 3) # unpack iterable to *args arguments = 1, 2, 3 print(*arguments) </code></pre> <p>If <code>args</code> is empty, then calling <code>print(*args)</code> is the same as calling <code>print()</code> - no arguments are passed to it.</p> <hr> <p>Functions and lambda functions are <em>the same</em> in python. It's simply a different notation for creating a function object.</p> <p>The problem is that in <code>example2</code>, you are not passing a function. You <em>call</em> a function, then pass on its result. Instead, you have to pass on the function and its argument separately.</p> <pre><code>def example2(n): return memoize( n, longrun, # no () means no call, just the function object # all following parameters are put into *args n ) </code></pre> <hr> <p>Now, some implementation details: why is <code>args</code> empty and why is there a separate key?</p> <ul> <li><p>The empty <code>args</code> comes from your definition of the lambda. 
Let's write that as a function for clarity:</p> <pre><code>def example3(n): def nonlambda(): return longrun(n) return memoize(n, nonlambda) </code></pre> <p>Note how <code>nonlambda</code> takes <em>no arguments</em>. The parameter <code>n</code> is <a href="https://docs.python.org/3/tutorial/controlflow.html#defining-functions" rel="nofollow">bound from the containing scope</a> as a closure. As such, you don't have to pass it to memoize - it is already bound inside the <code>nonlambda</code>. Thus, <code>args</code> is empty in memoize, even though <code>longrun</code> does receive a parameter, because the two don't interact directly.</p></li> <li><p>Now, why is it <code>mem[key] = f(*args)</code>, not <code>mem[key] = f(key)</code>? That's actually slightly the wrong question; the right question is "why isn't it <code>mem[f, args] = f(*args)</code>?".</p> <p>Memoization works because the same input to the same function leads to the same output. That is, <code>f, args</code> <em>identifies</em> your output. Ideally, your <code>key</code> would be <code>f, args</code> as that's the only relevant information.</p> <p>The problem is you need a way to look up <code>f</code> and <code>args</code> inside <code>mem</code>. If you have ever tried putting a <code>list</code> inside a <code>dict</code>, you know there are some types which don't work in mappings (or any other suitable lookup structure, for that matter). So if you define <code>key = f, args</code>, you cannot memoize functions taking mutable/unhashable types. Python's <code>functools.lru_cache</code> actually has this limitation.</p> <p>Defining an explicit <code>key</code> is one way of solving this problem. It has the advantage that the caller can select an appropriate key, for example taking <code>n</code> without any modifications. This offers the best optimization potential. However, it breaks easily - using just <code>n</code> misses out the actual function called. 
Memoizing a second function with the same input would break your cache.</p> <p>There are alternative approaches, each with pros and cons. Common is the explicit conversion of types: <code>list</code> to <code>tuple</code>, <code>set</code> to <code>frozenset</code>, and so on. This is slow, but the most precise. Another approach is to just call <code>str</code> or <code>repr</code> as in <code>key = repr((f, args, sorted(kwargs.items())))</code>, but it relies on every value having a proper <code>repr</code>.</p></li> </ul>
2
2016-09-23T11:02:20Z
[ "python", "caching", "lambda" ]
Plot dynamically changing graph using matplotlib in Jupyter Notebook
39,658,717
<p>I have an M x N 2D array: the ith row represents the values of N points at time i. </p> <p>I want to visualize the points [1 row of the array] in the form of a graph where the values get updated after a small interval. Thus the graph shows 1 row at a time, then updates the values to the next row, and so on. </p> <p>I want to do this in a Jupyter notebook. Looking for reference code. </p> <p>I tried the following things but with no success:</p> <p><a href="http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812" rel="nofollow">http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812</a></p> <p><a href="https://pythonprogramming.net/live-graphs-matplotlib-tutorial/" rel="nofollow">https://pythonprogramming.net/live-graphs-matplotlib-tutorial/</a></p> <p><a href="http://stackoverflow.com/questions/5618620/create-dynamic-updated-graph-with-python">Create dynamic updated graph with Python</a></p> <p><a href="http://stackoverflow.com/questions/11371255/update-lines-in-matplotlib">Update Lines in matplotlib</a></p>
3
2016-09-23T10:37:51Z
39,793,063
<p>I don't know much about matplotlib or jupyter. However, graphs interest me. I just did some googling and came across this <a href="http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-notebooks/" rel="nofollow">post</a>. Seems like you have to render the graph as an HTML video to see a dynamic graph. </p> <p>I tried that post. <a href="http://www.filedropper.com/displayanimationasvideoavconv" rel="nofollow">This</a> is the notebook, if you wish to try. Note that the kernel (python 2) takes some time to build the video. You can read more about it <a href="https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/" rel="nofollow">here</a>.</p> <p>Now you want to display a graph row by row. I tried <a href="http://www.filedropper.com/showdownload.php/randomgraphvideo" rel="nofollow">this</a>. In that notebook, I have a <code>dump_data</code> with 10 rows. I randomly take one, plot it, and display it as a video. </p> <p>It was interesting to learn about jupyter. Hope this helps.</p>
-1
2016-09-30T13:47:06Z
[ "python", "matplotlib", "plot", "graph", "jupyter-notebook" ]
Plot dynamically changing graph using matplotlib in Jupyter Notebook
39,658,717
<p>I have an M x N 2D array: the ith row represents the values of N points at time i. </p> <p>I want to visualize the points [1 row of the array] in the form of a graph where the values get updated after a small interval. Thus the graph shows 1 row at a time, then updates the values to the next row, and so on. </p> <p>I want to do this in a Jupyter notebook. Looking for reference code. </p> <p>I tried the following things but with no success:</p> <p><a href="http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812" rel="nofollow">http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812</a></p> <p><a href="https://pythonprogramming.net/live-graphs-matplotlib-tutorial/" rel="nofollow">https://pythonprogramming.net/live-graphs-matplotlib-tutorial/</a></p> <p><a href="http://stackoverflow.com/questions/5618620/create-dynamic-updated-graph-with-python">Create dynamic updated graph with Python</a></p> <p><a href="http://stackoverflow.com/questions/11371255/update-lines-in-matplotlib">Update Lines in matplotlib</a></p>
3
2016-09-23T10:37:51Z
39,809,509
<p>In addition to @0aslam0 I used code from <a href="http://jakevdp.github.io/blog/2013/05/12/embedding-matplotlib-animations/" rel="nofollow">here</a>. I've just changed the animate function to get the next row each time. It draws the animated evolution (M steps) of all N points.</p> <pre><code>from IPython.display import HTML import numpy as np import matplotlib.pyplot as plt from matplotlib import animation N = 5 M = 100 points_evo_array = np.random.rand(M,N) # First set up the figure, the axis, and the plot element we want to animate fig = plt.figure() ax = plt.axes(xlim=(0, M), ylim=(0, np.max(points_evo_array))) lines = [ax.plot([], [])[0] for _ in range(N)] def init(): for line in lines: line.set_data([], []) return lines def animate(i): for j,line in enumerate(lines): line.set_data(range(i), points_evo_array[:i,j]) return lines # call the animator. blit=True means only re-draw the parts that have changed. anim = animation.FuncAnimation(fig, animate,np.arange(1, M), init_func=init, interval=10, blit=True) HTML(anim.to_html5_video()) </code></pre> <p>Hope it will be useful</p>
0
2016-10-01T17:24:54Z
[ "python", "matplotlib", "plot", "graph", "jupyter-notebook" ]
Plot dynamically changing graph using matplotlib in Jupyter Notebook
39,658,717
<p>I have an M x N 2D array: the ith row represents the values of N points at time i. </p> <p>I want to visualize the points [1 row of the array] in the form of a graph where the values get updated after a small interval. Thus the graph shows 1 row at a time, then updates the values to the next row, and so on. </p> <p>I want to do this in a Jupyter notebook. Looking for reference code. </p> <p>I tried the following things but with no success:</p> <p><a href="http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812" rel="nofollow">http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812</a></p> <p><a href="https://pythonprogramming.net/live-graphs-matplotlib-tutorial/" rel="nofollow">https://pythonprogramming.net/live-graphs-matplotlib-tutorial/</a></p> <p><a href="http://stackoverflow.com/questions/5618620/create-dynamic-updated-graph-with-python">Create dynamic updated graph with Python</a></p> <p><a href="http://stackoverflow.com/questions/11371255/update-lines-in-matplotlib">Update Lines in matplotlib</a></p>
3
2016-09-23T10:37:51Z
39,853,938
<p>Here's an alternative, possibly simpler solution:</p> <pre><code>%matplotlib notebook import numpy as np import matplotlib.pyplot as plt m = 100 n = 100 matrix = np.random.normal(0,1,m*n).reshape(m,n) fig = plt.figure() ax = fig.add_subplot(111) plt.ion() fig.show() fig.canvas.draw() for i in range(0,100): ax.clear() ax.plot(matrix[i,:]) fig.canvas.draw() </code></pre>
1
2016-10-04T13:43:17Z
[ "python", "matplotlib", "plot", "graph", "jupyter-notebook" ]
Plot dynamically changing graph using matplotlib in Jupyter Notebook
39,658,717
<p>I have an M x N 2D array: the ith row represents the values of N points at time i. </p> <p>I want to visualize the points [1 row of the array] in the form of a graph where the values get updated after a small interval. Thus the graph shows 1 row at a time, then updates the values to the next row, and so on. </p> <p>I want to do this in a Jupyter notebook. Looking for reference code. </p> <p>I tried the following things but with no success:</p> <p><a href="http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812" rel="nofollow">http://community.plot.ly/t/updating-graph-with-new-data-every-100-ms-or-so/812</a></p> <p><a href="https://pythonprogramming.net/live-graphs-matplotlib-tutorial/" rel="nofollow">https://pythonprogramming.net/live-graphs-matplotlib-tutorial/</a></p> <p><a href="http://stackoverflow.com/questions/5618620/create-dynamic-updated-graph-with-python">Create dynamic updated graph with Python</a></p> <p><a href="http://stackoverflow.com/questions/11371255/update-lines-in-matplotlib">Update Lines in matplotlib</a></p>
3
2016-09-23T10:37:51Z
39,857,366
<p>Here is a library that deals with real-time plotting/logging data (<a href="https://pypi.python.org/pypi/joystick/" rel="nofollow">joystick</a>), although I am not sure it is working with jupyter. You can install it using the usual <code>pip install joystick</code>.</p> <p>Hard to make a working solution without more details on your data. Here is an option:</p> <pre><code>import joystick as jk import numpy as np class test(jk.Joystick): # initialize the infinite loop decorator _infinite_loop = jk.deco_infinite_loop() def _init(self, *args, **kwargs): """ Function called at initialization, see the docs """ # INIT DATA HERE self.shape = (10, 4) # M, N self.data = np.random.random(self.shape) self.xaxis = range(self.shape[1]) ############ # create a graph frame self.mygraph = self.add_frame( jk.Graph(name="TheName", size=(500, 500), pos=(50, 50), fmt="go-", xnpts=self.shape[1], freq_up=5, bgcol="w", xylim=(0, self.shape[1]-1, None, None))) @_infinite_loop(wait_time=0.5) def _generate_fake_data(self): # function looped every 0.5 second """ Loop starting with the simulation start, getting data and pushing it to the graph every 0.5 seconds """ # NEW (RANDOM) DATA new_data = np.random.random(self.shape[1]) # concatenate data self.data = np.vstack((self.data, new_data)) # push new data to the graph self.mygraph.set_xydata(self.xaxis, self.data[-1]) t = test() t.start() t.stop() t.exit() </code></pre> <p>This code will create a graph that is auto-updating 5 times a second (freq_up=5), while new data is (randomly) generated every 0.5 seconds (wait_time=0.5) and pushed to the graph for display.</p> <p>If you don't want the Y-axis to wiggle around, type <code>t.mygraph.xylim = (0, t.shape[1]-1, 0, 1)</code>.</p>
0
2016-10-04T16:24:21Z
[ "python", "matplotlib", "plot", "graph", "jupyter-notebook" ]
Conflict between PyQt5 and datetime.datetime.strptime
39,658,719
<p>So I was writing a tool that would read time from a file using a graphical user interface based on Python 3.5.2 and Qt5. The minimal operation</p> <pre><code>datetime.datetime.strptime('Tue', '%a') </code></pre> <p>works in an isolated environment, giving the output "1900-01-01 00:00:00". However, when I run the following minimal example</p> <pre><code>import sys import datetime as datetime from PyQt5 import QtWidgets if __name__ == '__main__' : print(datetime.datetime.strptime('Tue', '%a')) app = QtWidgets.QApplication(sys.argv) print(datetime.datetime.strptime('Tue', '%a')) #sys.exit(app.exec_()) </code></pre> <p>I get the output</p> <pre><code>1900-01-01 00:00:00 Traceback (most recent call last): File "/home/user/gui/testfile.py", line 11, in &lt;module&gt; print(datetime.datetime.strptime('Tue', '%a')) File "/usr/lib/python3.5/_strptime.py", line 510, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "/usr/lib/python3.5/_strptime.py", line 343, in _strptime (data_string, format)) ValueError: time data 'Tue' does not match format '%a' </code></pre> <p>So, the first call to the <i>strptime</i> routine works fine, but after the <i>QApplication</i> object is created, it does not work any more. Note that further using <i>QApplication</i> to construct the GUI and do a lot of complicated things with it works fine. The only thing that does not work currently is <i>strptime</i>.</p> <p>Any idea what goes wrong?</p>
2
2016-09-23T10:37:56Z
39,660,949
<p>I can reproduce your problem: after creating the QtWidget, the </p> <p><code>print(datetime.datetime.strptime('Tue', '%a'))</code></p> <p>results in an error.</p> <p>If, after creating the QtWidget, I execute</p> <p><code>print(datetime.datetime.strptime('Die', '%a'))</code></p> <p>this works.</p> <p>I am located in Switzerland, so <em>Die</em> in German is equivalent to <em>Tue</em>.</p> <p>It seems that Qt somehow has an influence on the region settings, since %A and %a evaluate to the locale's weekday names (<a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">Datetime</a>). Maybe a Qt expert can explain in more detail what is going on.</p>
1
2016-09-23T12:37:11Z
[ "python", "datetime", "pyqt5" ]
Conflict between PyQt5 and datetime.datetime.strptime
39,658,719
<p>So I was writing a tool that would read time from a file using a graphical user interface based on Python 3.5.2 and Qt5. The minimal operation</p> <pre><code>datetime.datetime.strptime('Tue', '%a') </code></pre> <p>works in an isolated environment, giving the output "1900-01-01 00:00:00". However, when I run the following minimal example</p> <pre><code>import sys import datetime as datetime from PyQt5 import QtWidgets if __name__ == '__main__' : print(datetime.datetime.strptime('Tue', '%a')) app = QtWidgets.QApplication(sys.argv) print(datetime.datetime.strptime('Tue', '%a')) #sys.exit(app.exec_()) </code></pre> <p>I get the output</p> <pre><code>1900-01-01 00:00:00 Traceback (most recent call last): File "/home/user/gui/testfile.py", line 11, in &lt;module&gt; print(datetime.datetime.strptime('Tue', '%a')) File "/usr/lib/python3.5/_strptime.py", line 510, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "/usr/lib/python3.5/_strptime.py", line 343, in _strptime (data_string, format)) ValueError: time data 'Tue' does not match format '%a' </code></pre> <p>So, the first call to the <i>strptime</i> routine works fine, but after the <i>QApplication</i> object is created, it does not work any more. Note that further using <i>QApplication</i> to construct the GUI and do a lot of complicated things with it works fine. The only thing that does not work currently is <i>strptime</i>.</p> <p>Any idea what goes wrong?</p>
2
2016-09-23T10:37:56Z
39,661,572
<p>To elaborate on the nice answer by Patrick, I have now found a way to undo the localization enforced by QT</p> <pre><code>import sys import datetime as datetime import locale from PyQt5 import QtWidgets ## Start the QT window print(datetime.datetime.strptime('Tue', '%a')) app = QtWidgets.QApplication(sys.argv) locale.setlocale(locale.LC_TIME, "en_GB.utf8") print(datetime.datetime.strptime('Tue', '%a')) #sys.exit(app.exec_()) </code></pre>
0
2016-09-23T13:06:58Z
[ "python", "datetime", "pyqt5" ]
Ending infinite loop for a bot
39,658,736
<p>I have created a chat bot for <strong>Twitch IRC</strong>. I can connect and create commands etc., however I cannot use keyboard-interrupt in the command prompt. I suspect it is because it's stuck in this infinite loop, and I don't know how to fix this. I am new to programming, btw!</p> <p>Here is the code I have in my <code>Run.py</code>; <code>openSocket()</code> is defined in another file and is basically the connection to the server (<code>s = socket.socket</code>). The first part of the while-loop basically just reads the server messages; I think it's pretty straightforward for you guys!</p> <pre><code>s = openSocket() joinRoom(s) readbuffer = "" while True: readbuffer = readbuffer + s.recv(1024).decode("utf-8") temp = str.split(readbuffer, "\n") readbuffer = temp.pop() for line in temp: if "PING" in line: s.send("PONG :tmi.twitch.tv\r\n".encode("utf-8")) print("---SENT PONG---") printMessage(getUser, getMessage, line) message = getMessage(line) for key in commands: command = key if command in message: sendMessage(s, commands[command]) </code></pre> <p>(Edit: I also have this problem where the connection to the server seems to time out for whatever reason. I managed to get it to keep the connection with ping/pong for about 40-45min, but then it disconnected again.)</p> <p>EDIT:</p> <p>Sorry the original post was super messy. I have created this pastebin with the least amount of code I could use to recreate the problem. If the IRC chat is inactive it will disconnect, and I can't get it to send 2 pings in a row without any messages in between; not sure if that's because it disconnects before the 2nd ping or because of the 2nd ping.</p> <p>On at least one occasion it has disconnected even before I got the first ping from the server.</p> <p>Pastebin: pastebin.com/sXUW50sS</p>
0
2016-09-23T10:38:41Z
39,659,091
<p>The part of the code that you posted doesn't have much to do with the problem you described.</p> <p>This is a guess (although an educated one). In your socket connection you are probably using <code>try: except:</code> with the <code>Pokemon</code> approach (gotta catch 'em all).</p> <p>The thing to do here would be to find a line where you are doing something like this:</p> <pre><code>except: pass </code></pre> <p>and change it to:</p> <pre><code>except (KeyboardInterrupt, SystemExit): raise except: pass </code></pre> <p>Obviously I'm not trying to say here that your program should catch all exceptions and just pass as if nothing happened. The main point is that you are probably already doing that (for i-have-no-idea-why reasons) and you should have special treatment for system errors.</p>
0
2016-09-23T10:57:57Z
[ "python", "loops", "infinite" ]
Parsing JSON feed with Ruby for use in Dashing Dashboard
39,658,759
<p>First post here, so ya know, be nice?</p> <p>I'm setting up a dashboard in Dashing (<a href="http://dashing.io/" rel="nofollow"><a href="http://dashing.io/" rel="nofollow">http://dashing.io/</a></a>) using a JSON feed on a server, which looks like:</p> <pre><code>{ "error":0, "message_of_the_day":"Welcome!", "message_of_the_day_hash":"a1234567890123456789012345678901", "metrics":{ "daily":{ "metric1":"1m 30s", "metric2":160 }, "monthly":{ "metric1":"10m 30s", "metric2":"3803" } }, </code></pre> <p>I have been experimenting with grabbing the data from the feed, and have managed to do so by Python with no issues:</p> <pre><code>import json import urllib2 data = { 'region': "Europe" } req = urllib2.Request('http://192.168.1.2/info/handlers/handler.php') req.add_header('Content-Type', 'application/json') response = urllib2.urlopen(req, json.dumps(data)) print response.read() </code></pre> <p>However I haven't yet been successful, and get numerous errors in Ruby. Would anyone be able to point me in the right direction in parsing this in Ruby? My attempts to write a basic script, (keeping it simple and outside of Dashing) don't pull through any data.</p> <pre><code>#!/usr/bin/ruby require 'httparty' require 'json' response = HTTParty.get("http://192.168.1.2/info/handlers/handler.php?region=Europe") json = JSON.parse(response.body) puts json </code></pre>
3
2016-09-23T10:39:43Z
39,659,297
<p>In the Python code you are sending JSON and adding a header; it makes sense to do the same in Ruby. The code below is untested, since I can’t test it, but it should lead you in the right direction:</p> <pre><code>#!/usr/bin/ruby require 'httparty' require 'json' response = HTTParty.post( "http://192.168.1.2/info/handlers/handler.php", headers: { 'Content-Type' =&gt; 'application/json' }, body: { 'region' =&gt; 'Europe' }.to_json # or maybe query: { 'region' =&gt; 'Europe' } ) puts response.inspect </code></pre>
1
2016-09-23T11:09:22Z
[ "python", "json", "ruby", "dashing" ]
python bad operand type for unary -: 'NoneType'
39,659,023
<p>I send you a question because I have a problem on python and I don't understand why. I created a function "mut1" to change the number inside a list (with a probability to 1/2) either in adding 1 or subtracting 1, except for 0 and 9:</p> <pre><code>def mut1 (m): i=np.random.randint(1,3) j=np.random.randint(1,3) if i==1: if 0&lt;m&lt;9: if j==1: m=m+1 elif j==2: m=m-1 elif m==0: if j==1: m=1 if j==2: m=9 elif m==9: if j==1: m=0 if j==2: m=8 print m </code></pre> <p>mut1 function well, for example, if I create a list P1:</p> <pre><code>&gt;&gt;&gt;p1=np.array(range(8),int).reshape((4, 2)) </code></pre> <p>After that, I apply "mut1" at a number (here 3) in the list p1</p> <pre><code>&gt;&gt;&gt;mut1(p1[1,1]) </code></pre> <p>Hovewer if I write:</p> <pre><code>&gt;&gt;&gt; p1[1,1]=mut1(p1[1,1]) </code></pre> <p>I have a message error:</p> <blockquote> <p>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: long() argument must be a string or a number, not 'NoneType'</p> </blockquote>
0
2016-09-23T10:54:04Z
39,660,497
<p>That happens because you have to make your <code>mut1</code> return a <code>numpy.int64</code> result. I tried the following modified version of your code and it worked.</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import random &gt;&gt;&gt; &gt;&gt;&gt; def mut1 (m): ... i=np.random.randint(1,3) ... j=np.random.randint(1,3) ... if i==1: ... if 0&lt;m&lt;9: ... if j==1: ... m=m+1 ... elif j==2: ... m=m-1 ... elif m==0: ... if j==1: ... m=1 ... if j==2: ... m=9 ... elif m==9: ... if j==1: ... m=0 ... if j==2: ... m=8 ... return np.int64(m) ... &gt;&gt;&gt; p1=np.array(range(8),int).reshape((4, 2)) &gt;&gt;&gt; mut1(p1[1,1]) 2 &gt;&gt;&gt; p1[1,1]=mut1(p1[1,1]) &gt;&gt;&gt; </code></pre> <p>So the only thing you need to change is to replace <code>print m</code> with <code>return np.int64(m)</code> and then it should work! (A plain <code>return m</code> also works, since assigning a Python int into an integer array is fine.)</p> <p>You will easily understand why this happened with the following kind of debugging code:</p> <pre><code>&gt;&gt;&gt; type(p1[1,1]) &lt;type 'numpy.int64'&gt; &gt;&gt;&gt; type(mut1(p1[1,1])) &lt;type 'NoneType'&gt; </code></pre>
1
2016-09-23T12:12:27Z
[ "python", "nonetype" ]
packet-sniffer can't sniff SIP(voip) packet
39,659,096
<p>I want to build a "SIP sniffer" for my project to alert on incoming calls in VoIP communication. I tried calling from my smartphone to my notebook and checked the incoming packets with Wireshark. I see all the SIP messages ( INVITE , BYE , TRYING ). I know the basics of SIP; it uses UDP port 5060.</p> <p>Next, I used the code from <a href="http://www.binarytides.com/python-packet-sniffer-code-linux/" rel="nofollow">http://www.binarytides.com/python-packet-sniffer-code-linux/</a> &lt;&lt;--- the last, longest code listing ( I tried to paste it but I can't paste code in the box ), running on a Raspberry Pi connected to the notebook by a LAN cable.</p> <p>This program can sniff UDP packets. I checked in Wireshark and it is about 90% correct ( the source and destination IP addresses are not correct ); the ports and payload are correct. I checked the headers, ethernet header ===> ip header ===> udp header, and they are no different from SIP INVITE packets; they differ only in the payload ( checked with Wireshark ).</p> <p>But when I make a VoIP call to my notebook, it doesn't work: it never sniffs port 5060 or SIP packets ( one time I saw outgoing call data : "sip:xxxx@linphone.org" )</p> <p>Why can I sniff everything else but not VoIP?</p> <p>Sorry for my poor english. thank you for your advice.</p>
0
2016-09-23T10:58:01Z
39,659,276
<p>From a quick look, I see that your packets are UDP, but the python code only sniffs for TCP:</p> <pre><code>#create an INET, raw socket
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
</code></pre> <p>Change socket.IPPROTO_TCP to socket.IPPROTO_UDP.</p>
1
2016-09-23T11:08:27Z
[ "python", "linux", "sockets", "udp", "packet-sniffers" ]
packet-sniffer can't sniff SIP(voip) packet
39,659,096
<p>I want to build a "SIP sniffer" for my project to alert on incoming calls in VoIP communication. I tried calling from my smartphone to my notebook and checked the incoming packets with Wireshark. I see all the SIP messages ( INVITE , BYE , TRYING ). I know the basics of SIP; it uses UDP port 5060.</p> <p>Next, I used the code from <a href="http://www.binarytides.com/python-packet-sniffer-code-linux/" rel="nofollow">http://www.binarytides.com/python-packet-sniffer-code-linux/</a> &lt;&lt;--- the last, longest code listing ( I tried to paste it but I can't paste code in the box ), running on a Raspberry Pi connected to the notebook by a LAN cable.</p> <p>This program can sniff UDP packets. I checked in Wireshark and it is about 90% correct ( the source and destination IP addresses are not correct ); the ports and payload are correct. I checked the headers, ethernet header ===> ip header ===> udp header, and they are no different from SIP INVITE packets; they differ only in the payload ( checked with Wireshark ).</p> <p>But when I make a VoIP call to my notebook, it doesn't work: it never sniffs port 5060 or SIP packets ( one time I saw outgoing call data : "sip:xxxx@linphone.org" )</p> <p>Why can I sniff everything else but not VoIP?</p> <p>Sorry for my poor english. thank you for your advice.</p>
0
2016-09-23T10:58:01Z
39,660,157
<p>Then add a UDP branch (protocol 17) to the same parser, so the packets get decoded:</p> <pre><code>#UDP packets
elif protocol == 17 :
    u = iph_length + eth_length
    udph_length = 8
    udp_header = packet[u:u+8]

    #now unpack them :)
    udph = unpack('!HHHH' , udp_header)

    source_port = udph[0]
    dest_port = udph[1]
    length = udph[2]
    checksum = udph[3]

    print 'Source Port : ' + str(source_port) + ' Dest Port : ' + str(dest_port) + ' Length : ' + str(length) + ' Checksum : ' + str(checksum)

    h_size = eth_length + iph_length + udph_length
    data_size = len(packet) - h_size

    #get data from the packet
    data = packet[h_size:]

    print 'Data : ' + data

#some other IP packet like IGMP
else :
    print 'Protocol other than TCP/UDP/ICMP'
    print
</code></pre>
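A self-contained illustration of that same unpacking logic (synthetic bytes rather than a live capture, so no raw sockets or root privileges are needed; the SIP payload is made up): the four <code>!HHHH</code> fields are source port, destination port, length and checksum, and SIP traffic can be spotted by checking for port 5060:

```python
from struct import pack, unpack

# build a fake UDP datagram: src port 5060, dst port 5060, checksum 0
payload = b'INVITE sip:bob@example.org SIP/2.0'
udp_header = pack('!HHHH', 5060, 5060, 8 + len(payload), 0)
packet = udp_header + payload

# same unpacking as the sniffer's UDP branch
source_port, dest_port, length, checksum = unpack('!HHHH', packet[:8])
print(source_port, dest_port, length, checksum)  # -> 5060 5060 42 0

if source_port == 5060 or dest_port == 5060:
    print('SIP packet:', packet[8:])
```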
1
2016-09-23T11:54:35Z
[ "python", "linux", "sockets", "udp", "packet-sniffers" ]
Pandas DataFrame with date and time from JSON format
39,659,146
<p>I'm importing data from a <code>.json</code> file into a pandas <code>DataFrame</code> and the result is a bit broken:</p> <pre><code>&gt;&gt; print df
summary  response_date
8.0      {u'$date': u'2009-02-19T10:54:00.000+0000'}
11.0     {u'$date': u'2009-02-24T11:23:45.000+0000'}
14.0     {u'$date': u'2009-03-03T17:55:07.000+0000'}
16.0     {u'$date': u'2009-03-10T12:23:04.000+0000'}
19.0     {u'$date': u'2009-03-17T17:19:55.000+0000'}
13.0     {u'$date': u'2009-03-25T15:10:52.000+0000'}
22.0     {u'$date': u'2009-04-02T16:57:31.000+0100'}
15.0     {u'$date': u'2009-04-08T22:29:09.000+0100'}
20.0     {u'$date': u'2009-04-16T18:14:20.000+0100'}
13.0     {u'$date': u'2009-04-29T10:47:06.000+0100'}
15.0     {u'$date': u'2009-05-06T13:45:45.000+0100'}
20.0     {u'$date': u'2009-05-26T10:41:52.000+0100'}
</code></pre> <p>How do I get rid of 'date' and the other mess to create a normal column with date and time? To convert from ISO8601 format I normally use:</p> <pre><code>df.response_date = pd.to_datetime(df.response_date)
</code></pre> <p><strong>UPDATE 1</strong></p> <pre><code>summary  response_date                 closed_date                                  open_date
24.0     2011-10-15T00:00:00.000+0100  NaN                                          NaN
24.0     2011-11-24T09:00:00.000+0000  NaN                                          NaN
19.0     2011-10-01T09:00:00.000+0100  NaN                                          NaN
25.0     2011-10-29T09:00:00.000+0100  NaN                                          NaN
19.0     2011-10-08T09:00:00.000+0100  NaN                                          NaN
-1.0     2011-11-09T17:20:00.000+0000  {u'$date': u'2011-11-16T15:20:00.000+0000'}  {u'$date': u'2011-11-09T15:20:00.000+0000'}
-1.0     2011-11-16T17:20:00.000+0000  {u'$date': u'2011-11-23T15:20:00.000+0000'}  {u'$date': u'2011-11-16T15:20:00.000+0000'}
-1.0     2011-11-23T17:20:00.000+0000  {u'$date': u'2011-11-30T15:20:00.000+0000'}  {u'$date': u'2011-11-23T15:20:00.000+0000'}
-1.0     2011-11-30T17:20:00.000+0000  {u'$date': u'2011-12-07T15:20:00.000+0000'}  {u'$date': u'2011-11-30T15:20:00.000+0000'}
</code></pre> <p>So the</p> <pre><code>&gt;&gt; df.response_date = pd.DataFrame(df.response_date.values.tolist())
</code></pre> <p>worked perfectly, but the other columns contain NaN values, and imputing with "-1" doesn't help.</p> <pre><code>&gt;&gt; print type(df.ix[0,'scheduleClosedAt'])
&lt;type 'int'&gt;
</code></pre> <p><strong>UPDATE 2</strong></p> <p>Why does this (masking) method not work?</p> <pre><code>&gt;&gt; df.reset_index(inplace=True)
&gt;&gt; indx_nan_closed = df.closed_date.isnull()
&gt;&gt; df[~indx_nan_closed].closed_date = pd.DataFrame(df[~indx_nan_closed].closed_date.values.tolist())
</code></pre> <p>This line is equivalent to the one above, but with a masking array, because I want to apply the method only to non-NaN values. But the result is that my data frame "df" remains unchanged, which is quite strange.</p> <p>Any thoughts?</p>
1
2016-09-23T11:00:59Z
39,659,312
<p>You can use <code>DataFrame</code> constructor with converting column <code>response_date</code> to <code>list</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> if <code>type</code> is <code>dict</code>:</p> <pre><code>print (type(df.ix[0,'response_date'])) &lt;class 'dict'&gt; df.response_date = pd.DataFrame(df.response_date.values.tolist()) df.response_date = pd.to_datetime(df.response_date) print (df) summary response_date 0 8.0 2009-02-19 10:54:00 1 11.0 2009-02-24 11:23:45 2 14.0 2009-03-03 17:55:07 </code></pre> <p>If <code>type</code> is <code>string</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>strip</code></a>:</p> <pre><code>print (type(df.ix[0,'response_date'])) &lt;class 'str'&gt; df.response_date = df.response_date.str.split().str[1].str.strip("'u}") df.response_date = pd.to_datetime(df.response_date) print (df) summary response_date 0 8.0 2009-02-19 10:54:00 1 11.0 2009-02-24 11:23:45 2 14.0 2009-03-03 17:55:07 </code></pre> <p>EDIT by comment:</p> <p>2 possible solutions:</p> <p>First is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> by empty <code>dict</code>:</p> <pre><code>df.closed_date = df.closed_date.fillna(pd.Series([{}])) </code></pre> <p>another is <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'summary':[19.0, -1.0,-1.0], 'response_date':['2011-10-08T09:00:00.000+0100','2011-11-09T17:20:00.000+0000','2011-11-16T17:20:00.000+0000'], 'closed_date':[np.nan, {u'$date': u'2011-11-16T15:20:00.000+0000'}, 
{u'$date': u'2011-11-23T15:20:00.000+0000'}]}, columns=['summary','response_date','closed_date']) print (df) summary response_date \ 0 19.0 2011-10-08T09:00:00.000+0100 1 -1.0 2011-11-09T17:20:00.000+0000 2 -1.0 2011-11-16T17:20:00.000+0000 closed_date 0 NaN 1 {'$date': '2011-11-16T15:20:00.000+0000'} 2 {'$date': '2011-11-23T15:20:00.000+0000'} </code></pre> <pre><code>a = df.ix[df.closed_date.notnull(), 'closed_date'] print (a) 1 {'$date': '2011-11-16T15:20:00.000+0000'} 2 {'$date': '2011-11-23T15:20:00.000+0000'} Name: closed_date, dtype: object df['closed_date'] = pd.DataFrame(a.values.tolist(), index=a.index) df.closed_date = pd.to_datetime(df.closed_date) print (df) summary response_date closed_date 0 19.0 2011-10-08T09:00:00.000+0100 NaT 1 -1.0 2011-11-09T17:20:00.000+0000 2011-11-16 15:20:00 2 -1.0 2011-11-16T17:20:00.000+0000 2011-11-23 15:20:00 </code></pre>
2
2016-09-23T11:10:05Z
[ "python", "json", "pandas", "dictionary", "dataframe" ]
I am trying to upgrade pip from 8.1.1 to 8.1.2, but it shows 'PermissionError: [WinError 5] Access is denied'. How do I upgrade pip?
39,659,180
<blockquote> <p>C:>python -m pip install --upgrade pip Collecting pip Using cached pip-8.1.2-py2.py3-none-any.whl Installing collected packages: pip Found existing installation: pip 8.1.1 Uninstalling pip-8.1.1: Exception: Traceback (most recent call last): File "C:\Program Files\Python35\lib\shutil.py", line 538, in move os.rename(src, real_dst) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' -> 'C:\Users\user\AppD ata\Local\Temp\pip-nen4ldwg-uninstall\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Program Files\Python35\lib\site-packages\pip\basecommand.py", line 209, in main status = self.run(options, args) File "C:\Program Files\Python35\lib\site-packages\pip\commands\install.py", line 317, in run prefix=options.prefix_path, File "C:\Program Files\Python35\lib\site-packages\pip\req\req_set.py", line 726, in install requirement.uninstall(auto_confirm=True) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_install.py", line 746, in uninstall paths_to_remove.remove(auto_confirm) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_uninstall.py", line 115, in remove renames(path, new_path) File "C:\Program Files\Python35\lib\site-packages\pip\utils__init__.py", line 267, in renames shutil.move(old, new) File "C:\Program Files\Python35\lib\shutil.py", line 553, in move os.unlink(src) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' You are using pip version 8.1.1, however version 8.1.2 is available. 
You should consider upgrading via the 'python -m pip install --upgrade pip' command.</p> </blockquote> <p>and</p> <blockquote> <p>C:>python -m pip -qqq install -U pip Exception: Traceback (most recent call last): File "C:\Program Files\Python35\lib\shutil.py", line 538, in move os.rename(src, real_dst) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' -> 'C:\Users\user\AppD ata\Local\Temp\pip-3i5xeu8u-uninstall\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Program Files\Python35\lib\site-packages\pip\basecommand.py", line 209, in main status = self.run(options, args) File "C:\Program Files\Python35\lib\site-packages\pip\commands\install.py", line 317, in run prefix=options.prefix_path, File "C:\Program Files\Python35\lib\site-packages\pip\req\req_set.py", line 726, in install requirement.uninstall(auto_confirm=True) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_install.py", line 746, in uninstall paths_to_remove.remove(auto_confirm) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_uninstall.py", line 115, in remove renames(path, new_path) File "C:\Program Files\Python35\lib\site-packages\pip\utils__init__.py", line 267, in renames shutil.move(old, new) File "C:\Program Files\Python35\lib\shutil.py", line 553, in move os.unlink(src) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote>
0
2016-09-23T11:02:35Z
39,659,240
<p>Open your cmd with admin privileges: right-click the icon and select "Run as administrator", then run the upgrade command again.</p>
0
2016-09-23T11:06:08Z
[ "python", "django", "pip", "pycharm", "python-3.5.2" ]
I am trying to upgrade pip from 8.1.1 to 8.1.2, but it shows 'PermissionError: [WinError 5] Access is denied'. How do I upgrade pip?
39,659,180
<blockquote> <p>C:>python -m pip install --upgrade pip Collecting pip Using cached pip-8.1.2-py2.py3-none-any.whl Installing collected packages: pip Found existing installation: pip 8.1.1 Uninstalling pip-8.1.1: Exception: Traceback (most recent call last): File "C:\Program Files\Python35\lib\shutil.py", line 538, in move os.rename(src, real_dst) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' -> 'C:\Users\user\AppD ata\Local\Temp\pip-nen4ldwg-uninstall\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Program Files\Python35\lib\site-packages\pip\basecommand.py", line 209, in main status = self.run(options, args) File "C:\Program Files\Python35\lib\site-packages\pip\commands\install.py", line 317, in run prefix=options.prefix_path, File "C:\Program Files\Python35\lib\site-packages\pip\req\req_set.py", line 726, in install requirement.uninstall(auto_confirm=True) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_install.py", line 746, in uninstall paths_to_remove.remove(auto_confirm) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_uninstall.py", line 115, in remove renames(path, new_path) File "C:\Program Files\Python35\lib\site-packages\pip\utils__init__.py", line 267, in renames shutil.move(old, new) File "C:\Program Files\Python35\lib\shutil.py", line 553, in move os.unlink(src) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' You are using pip version 8.1.1, however version 8.1.2 is available. 
You should consider upgrading via the 'python -m pip install --upgrade pip' command.</p> </blockquote> <p>and</p> <blockquote> <p>C:>python -m pip -qqq install -U pip Exception: Traceback (most recent call last): File "C:\Program Files\Python35\lib\shutil.py", line 538, in move os.rename(src, real_dst) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst' -> 'C:\Users\user\AppD ata\Local\Temp\pip-3i5xeu8u-uninstall\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Program Files\Python35\lib\site-packages\pip\basecommand.py", line 209, in main status = self.run(options, args) File "C:\Program Files\Python35\lib\site-packages\pip\commands\install.py", line 317, in run prefix=options.prefix_path, File "C:\Program Files\Python35\lib\site-packages\pip\req\req_set.py", line 726, in install requirement.uninstall(auto_confirm=True) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_install.py", line 746, in uninstall paths_to_remove.remove(auto_confirm) File "C:\Program Files\Python35\lib\site-packages\pip\req\req_uninstall.py", line 115, in remove renames(path, new_path) File "C:\Program Files\Python35\lib\site-packages\pip\utils__init__.py", line 267, in renames shutil.move(old, new) File "C:\Program Files\Python35\lib\shutil.py", line 553, in move os.unlink(src) PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\lib\site-packages\pip-8.1.1.dist-info\description.rst'</p> </blockquote>
0
2016-09-23T11:02:35Z
39,659,449
<p>Upgrading pip in your default Python environment requires <code>sudo</code>; in Windows terms, you must start your command prompt in administrator mode. In fact, this is not recommended.</p> <p>You don't need <code>sudo</code>/Windows administrator rights if you set up a virtualenv. It is a Python best practice to set up a <a href="https://virtualenv.pypa.io/en/stable/installation/" rel="nofollow">Python virtualenv</a>, so you can keep the pip packages required for each project separate. For Windows, it is a <a href="https://virtualenv.pypa.io/en/stable/userguide/" rel="nofollow">similar setup</a>.</p> <p>Nevertheless, if you have a decent PC with enough RAM, just install VirtualBox and deploy Ubuntu. This way, you don't need to deal with Windows Python package installation/upgrade issues overlooked by developers (although all Python packages are supposed to work out of the box).</p>
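A minimal sketch of the virtualenv route, here using the stdlib <code>venv</code> module which serves the same purpose (shown with Unix-style paths; on Windows the interpreter lives under <code>myenv\Scripts\</code> instead):

```shell
# create an isolated environment in ./myenv (no administrator rights needed)
python3 -m venv myenv

# this pip belongs to myenv, not the system Python, so upgrading packages
# here never touches the protected C:\Program Files install
myenv/bin/python -m pip --version
```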
0
2016-09-23T11:16:46Z
[ "python", "django", "pip", "pycharm", "python-3.5.2" ]
How to parse XML by using python
39,659,219
<p>I want to parse this url to get the text of the <code>Roman</code> elements:</p> <p><a href="http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=%E7%A7%81%E3%81%AF%E5%AD%A6%E7%94%9F%E3%81%A7%E3%81%99" rel="nofollow">http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です</a></p> <p><a href="http://i.stack.imgur.com/9CHnt.png" rel="nofollow"><img src="http://i.stack.imgur.com/9CHnt.png" alt="enter image description here"></a></p> <pre><code>import urllib
import xml.etree.ElementTree as ET

url = 'http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です'
uh = urllib.urlopen(url)
data = uh.read()
tree = ET.fromstring(data)
counts = tree.findall('.//Word')
for count in counts:
    print count.get('Roman')
</code></pre> <p>But it didn't work.</p>
0
2016-09-23T11:04:54Z
39,659,623
<p>I recently ran into a similar issue. It was because I was using an older version of the xml.etree package, and to work around that issue I had to create a loop for each level of the XML structure. For example:</p> <pre><code>import urllib
import xml.etree.ElementTree as ET

url = 'http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です'
uh = urllib.urlopen(url)
data = uh.read()
tree = ET.fromstring(data)
counts = tree.findall('.//Word')

for result in tree.findall('Result'):
    for wordlist in result.findall('WordList'):
        for word in wordlist.findall('Word'):
            print(word.get('Roman'))
</code></pre> <p>Edit:</p> <p>With the suggestion from @omu_negru I was able to get this working. There was another issue: when getting the text for "Roman" you were using the "get" method, which is used to get attributes of a tag. Using the "text" attribute of the element you can get the text between the opening and closing tags. Also, if there is no 'Roman' tag, you'll get a None object and won't be able to get an attribute on None.</p> <pre><code># encoding: utf-8
import urllib
import xml.etree.ElementTree as ET

url = 'http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です'
uh = urllib.urlopen(url)
data = uh.read()
tree = ET.fromstring(data)

ns = '{urn:yahoo:jp:jlp:FuriganaService}'
counts = tree.findall('.//%sWord' % ns)
for count in counts:
    roman = count.find('%sRoman' % ns)
    if roman is None:
        print 'Not found'
    else:
        print roman.text
</code></pre>
0
2016-09-23T11:25:47Z
[ "python", "xml", "parsing" ]
How to parse XML by using python
39,659,219
<p>I want to parse this url to get the text of the <code>Roman</code> elements:</p> <p><a href="http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=%E7%A7%81%E3%81%AF%E5%AD%A6%E7%94%9F%E3%81%A7%E3%81%99" rel="nofollow">http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です</a></p> <p><a href="http://i.stack.imgur.com/9CHnt.png" rel="nofollow"><img src="http://i.stack.imgur.com/9CHnt.png" alt="enter image description here"></a></p> <pre><code>import urllib
import xml.etree.ElementTree as ET

url = 'http://jlp.yahooapis.jp/FuriganaService/V1/furigana?appid=dj0zaiZpPU5TV0Zwcm1vaFpIcCZzPWNvbnN1bWVyc2VjcmV0Jng9YTk-&amp;grade=1&amp;sentence=私は学生です'
uh = urllib.urlopen(url)
data = uh.read()
tree = ET.fromstring(data)
counts = tree.findall('.//Word')
for count in counts:
    print count.get('Roman')
</code></pre> <p>But it didn't work.</p>
0
2016-09-23T11:04:54Z
39,660,296
<p>Try <code>tree.findall('.//{urn:yahoo:jp:jlp:FuriganaService}Word')</code>. It seems you need to specify the namespace too.</p>
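To illustrate why, with a toy document rather than the live Yahoo response: ElementTree expands namespaced tags to <code>{uri}localname</code>, so a plain <code>Word</code> never matches once the document declares a default namespace:

```python
import xml.etree.ElementTree as ET

xml_data = '''<ResultSet xmlns="urn:yahoo:jp:jlp:FuriganaService">
  <Result><WordList>
    <Word><Roman>watashi</Roman></Word>
    <Word><Roman>gakusei</Roman></Word>
  </WordList></Result>
</ResultSet>'''

tree = ET.fromstring(xml_data)

print(tree.findall('.//Word'))   # -> [] : no namespace prefix, no match

ns = '{urn:yahoo:jp:jlp:FuriganaService}'
for w in tree.findall('.//%sWord' % ns):
    print(w.find('%sRoman' % ns).text)   # watashi, then gakusei
```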
0
2016-09-23T12:01:02Z
[ "python", "xml", "parsing" ]
Send PDF file path to client to download after conversion in WeasyPrint
39,659,311
<p>In my Django app, I'm using WeasyPrint to convert an HTML report to PDF. I need to send the converted file back to the client so they can download it. But I don't see any code on the WeasyPrint site showing how to get the path of the saved file, or any way to know where the file was saved.</p> <p>If I hard-code the path, like <code>D:/Python/Workspace/report.pdf</code>, and try to open it via javascript, it simply says that the address was not understood.</p> <p>What is a better way to approach this issue?</p> <p>My code:</p> <pre><code>HTML(string=htmlContent).write_pdf(fileName, stylesheets=[CSS(filename='css/bootstrap.min.css')])
</code></pre> <p>This is all the code related to WeasyPrint that generates the PDF file.</p>
-1
2016-09-23T11:09:56Z
39,659,750
<p>You didn't even bother to post the relevant code, but anyway:</p> <p>If you're using the Python API, you either specify the output file path when calling <code>weasyprint.HTML().write_pdf()</code> or get the PDF back as a bytestring, <a href="http://weasyprint.readthedocs.io/en/latest/api.html#weasyprint.HTML.write_pdf" rel="nofollow">as documented here</a> - and then you can either manually save it to a file somewhere you can redirect your user to, or just pass the bytestring to django's <code>HttpResponse</code>. </p> <p>If you're using the command line (which would be quite surprising from a Django app...), you have to specify the output path too... </p> <p>IOW: I don't really understand your problem. FWIW, the whole documentation is here: <a href="http://weasyprint.readthedocs.io/en/latest/" rel="nofollow">http://weasyprint.readthedocs.io/en/latest/</a> - and there's a quite obvious link on the project's homepage (which is how I found it FWIW).</p> <p><em>EDIT</em>: now that you have posted your actual code, the answer is spelled out in the <a href="http://weasyprint.readthedocs.io/en/latest/api.html#weasyprint.HTML.write_pdf" rel="nofollow">FineManual(tm)</a>:</p> <blockquote> <p>Parameters: target – A filename, file-like object, or None Returns:<br> The PDF as byte string if target is not provided or None, otherwise None (the PDF is written to target.)</p> </blockquote> <p>IOW, either you pass the filename for the file to be generated and serve this file to the user, or you can just pass your Django <code>HttpResponse</code> as target, cf <a href="https://docs.djangoproject.com/en/1.10/howto/outputting-pdf/" rel="nofollow">this example</a> in Django's doc.</p>
0
2016-09-23T11:32:27Z
[ "python", "django", "weasyprint" ]
How to append several data frame into one
39,659,316
<p>I have written code to append several dummy DataFrames into one. After appending, the expected "DataFrame.shape" would be (9x3), but my code produces an unexpected (6x3) output. How can I rectify the error in my code?</p> <pre><code>import pandas as pd

a = [[1,2,4],[1,3,4],[2,3,4]]
b = [[1,1,1],[1,6,4],[2,9,4]]
c = [[1,3,4],[1,1,4],[2,0,4]]
d = [[1,1,4],[1,3,4],[2,0,4]]

df1 = pd.DataFrame(a,columns=["a","b","c"])
df2 = pd.DataFrame(b,columns=["a","b","c"])
df3 = pd.DataFrame(c,columns=["a","b","c"])

for df in (df1, df2, df3):
    df = df.append(df, ignore_index=True)

print df
</code></pre> <p>I don't want to use "pd.concat" because in that case I have to store all the data frames in memory, and my real data set contains hundreds of data frames with huge shapes. I just want code that can open one CSV file at a time in a loop and update the final DataFrame as the loop progresses.</p> <p>thanks </p>
1
2016-09-23T11:10:15Z
39,659,404
<p>Firstly, use <code>concat</code> to concatenate a bunch of dfs; it's quicker:</p> <pre><code>In [308]: df = pd.concat([df1,df2,df3], ignore_index=True)

df
Out[308]:
   a  b  c
0  1  2  4
1  1  3  4
2  2  3  4
3  1  1  1
4  1  6  4
5  2  9  4
6  1  3  4
7  1  1  4
8  2  0  4
</code></pre> <p>Secondly, you're reusing the name <code>df</code> as the loop variable in your loop, which is why it keeps overwriting itself. If you did this it would work:</p> <pre><code>In [307]: a = [[1,2,4],[1,3,4],[2,3,4]]
b = [[1,1,1],[1,6,4],[2,9,4]]
c = [[1,3,4],[1,1,4],[2,0,4]]
d = [[1,1,4],[1,3,4],[2,0,4]]

df1 = pd.DataFrame(a,columns=["a","b","c"])
df2 = pd.DataFrame(b,columns=["a","b","c"])
df3 = pd.DataFrame(c,columns=["a","b","c"])

df = pd.DataFrame()

for d in (df1, df2, df3):
    df = df.append(d, ignore_index=True)

df
Out[307]:
   a  b  c
0  1  2  4
1  1  3  4
2  2  3  4
3  1  1  1
4  1  6  4
5  2  9  4
6  1  3  4
7  1  1  4
8  2  0  4
</code></pre> <p>Here I changed the iterable to be <code>d</code> and declared an empty <code>df</code> outside the loop:</p> <pre><code>df = pd.DataFrame()

for d in (df1, df2, df3):
    df = df.append(d, ignore_index=True)
</code></pre>
1
2016-09-23T11:14:34Z
[ "python", "pandas" ]
Binding outputs of transformers in FeatureUnion
39,659,370
<p>New to python and sklearn so apologies in advance. I have two transformers and I would like to gather the results in a <code>FeatureUnion</code> (for a final modelling step at the end). This should be quite simple but <code>FeatureUnion</code> is stacking the outputs rather than providing an nx2 array or DataFrame. In the example below I will generate some data that is 10 rows by 2 columns. This will then generate two features that are 10 rows by 1 column each. I would like the final feature union to have 10 rows and 2 columns, but what I get is 20 rows by 1 column.</p> <p>I will try to demonstrate with my example below:</p> <p>some imports</p> <pre><code>import numpy as np
import pandas as pd
from sklearn import pipeline
from sklearn.base import TransformerMixin
</code></pre> <p>some random data</p> <pre><code>df = pd.DataFrame(np.random.rand(10, 2), columns=['a', 'b'])
</code></pre> <p>a custom transformer that selects a column</p> <pre><code>class Trans(TransformerMixin):
    def __init__(self, col_name):
        self.col_name = col_name
    def fit(self, X):
        return self
    def transform(self, X):
        return X[self.col_name]
</code></pre> <p>a pipeline that uses the transformer twice (in my real case I have two different transformers but this reproduces the problem)</p> <pre><code>pipe = pipeline.FeatureUnion([
    ('select_a', Trans('a')),
    ('select_b', Trans('b'))
])
</code></pre> <p>now I use the pipeline but it returns an array of twice the length</p> <pre><code>pipe.fit_transform(df).shape
(20,)
</code></pre> <p>However I would like an array with dimensions (10, 2).</p> <p>Quick fix?</p>
1
2016-09-23T11:12:40Z
39,693,444
<p>The transformers in the <code>FeatureUnion</code> need to return 2-dimensional matrices, however in your code by selecting a column, you are returning a 1-dimensional vector. You could fix this by selecting the column with <code>X[[self.col_name]]</code>.</p>
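The difference is easy to check directly (a toy frame, assuming pandas is installed): single brackets give a 1-D Series, double brackets a 2-D one-column DataFrame, which is the shape FeatureUnion needs to place transformer outputs side by side:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 2), columns=['a', 'b'])

# single brackets: 1-D, so stacking two of these end to end gives (20,)
print(df['a'].shape)     # -> (10,)

# double brackets: 2-D, so stacking two of these side by side gives (10, 2)
print(df[['a']].shape)   # -> (10, 1)
```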
1
2016-09-26T01:17:26Z
[ "python", "scikit-learn", "pipeline" ]
Building a python interface for a *.so
39,659,419
<p>I want to make use of a <code>C</code> library, for which a shared object and the header files are available. </p> <p>As the documentation of <code>ctypes</code> and <code>Cython</code> is very scarce and the tutorials I found cover different use cases, I need some help. </p> <p>So, I don't know where to start here or which tool would be the easiest solution for a Python beginner like me. </p>
1
2016-09-23T11:15:07Z
39,697,803
<p>I finally managed to import the library with <code>ctypes</code>. <code>Cython</code> didn't work out for me and seemed too complex, with the different files needed. </p> <p>After getting an error like <code>undefined symbol: inflate</code>, accessing the library only worked once I also imported the pcap library it needs from the system libs. I just didn't know that it was needed. I found where it is with: <code>find /usr/lib/ -name libpcap*</code></p> <pre><code>from ctypes import cdll

def main():
    libpcap = cdll.LoadLibrary('path/to/libpcap.so')
    lib = cdll.LoadLibrary('path/to/lib.so')
    lib.function_from_lib

if __name__ == "__main__":
    main()
</code></pre> <p>So I hope, if anyone has this problem and comes from google, here is a solution which might help.</p>
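For anyone who wants a sketch of the same pattern that runs without the project-specific <code>.so</code> above, the C standard library works as a stand-in on a Unix-like system (the <code>abs</code>/<code>strlen</code> calls are just illustrative examples):

```python
import ctypes
import ctypes.util

# locate and load a shared object, just like loading path/to/lib.so
libc = ctypes.CDLL(ctypes.util.find_library('c'))

# call an exported C function; int arguments/results work out of the box
print(libc.abs(-42))          # -> 42

# declare argument types for anything that is not a plain int
libc.strlen.argtypes = [ctypes.c_char_p]
print(libc.strlen(b'hello'))  # -> 5
```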
0
2016-09-26T08:16:33Z
[ "python", "shared-libraries", "cython", "ctypes" ]
Animated text function only working for certain strings
39,659,470
<p>I am attempting to make a function that displays animated text in Python:</p> <pre><code>import sys

def anitext(str):
    for char in str:
        sys.stdout.write(char)
        time.sleep(textspeed)
    print ("")
</code></pre> <p>This function works for strings such as</p> <pre><code>anitext ("String")
</code></pre> <p>And for sole variables such as</p> <pre><code>name = ("Stack")
anitext (name)
</code></pre> <p>But it will not work for input statements, or conjoined statements like</p> <pre><code>anitext (name, "This is a string")
</code></pre> <p>Is there any way for this "Anitext" function to work on statements that are not just plain strings?</p> <p><em>- Olli E</em></p>
1
2016-09-23T11:17:42Z
39,659,641
<p>You just need to use argument unpacking. See <a href="https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists" rel="nofollow">Arbitrary Argument Lists</a> in the official Python tutorial.</p> <pre><code>import sys
import time

textspeed = 0.2

def anitext(*args):
    for s in args:
        for char in s:
            sys.stdout.write(char)
            sys.stdout.flush()
            time.sleep(textspeed)
        print("")

anitext("String", "Another string", "More stuff")
</code></pre> <p>I've made a couple of other changes to your script. The <code>sys.stdout.flush()</code> call ensures that the characters are actually printed one at a time; most terminals will buffer whole lines of text, so you wouldn't actually see the animation happening.</p> <p>Also, I use <code>s</code> for the name of the current string being animated. You should not use <code>str</code> as a variable name as that shadows the built-in <code>str</code> type. That makes your code confusing to read and can also lead to subtle bugs. </p>
1
2016-09-23T11:26:51Z
[ "python", "function", "variables" ]
How to get key and value in well format and n/a values at the end in pandas
39,659,512
<p>Sort the data in ascending order and the keys which are not present need to be printed in the last.</p> <p>Please suggest a solution and also suggest if any modifications is required.</p> <p><strong>Input.txt-</strong></p> <pre><code>3=1388|4=1388|5=M|8=157.75|9=88929|1021=1500|854=n|388=157.75|394=157.75|474=157.75|1584=88929|444=20160713|459=93000546718000|461=7|55=93000552181000|22=89020|400=157.75|361=0.73|981=0|16=1468416600.6006|18=1468416600.6006|362=0.46 3=1388|4=1388|5=M|8=157.73|9=100|1021=0|854=p|394=157.73|474=157.749977558|1584=89029|444=20160713|459=93001362639104|461=26142|55=93001362849000|22=89120|361=0.71|981=0|16=1468416601.372|18=1468416601.372|362=0.45 </code></pre> <p><strong>Program-Code-</strong></p> <pre><code>import pandas as pd import numpy as np from operator import itemgetter df = pd.read_csv("C:\",index_col=None, names=['text']) s = df.text.str.split('|') ds =[dict(w.split('=',1 ) for w in x) for x in s] p = pd.DataFrame.from_records(ds) p1 = p.replace(np.nan,'n/a', regex=True) st = p1.stack(level=0,dropna=False) dfs = [g for i, g in st.groupby(level=0)] dfs_length = len(dfs) i = 0 while i &lt; len(dfs): print '\nindex[%d]'%i for (_,k),v in dfs[i].iteritems(): print k,'\t',v i = i + 1 </code></pre> <p><strong>OUTPUT (I got):</strong></p> <pre><code>index[0] 1021 1500 1584 88929 16 1468416600.6006 18 1468416600.6006 22 89020 3 1388 361 0.73 362 0.46 388 157.75 394 157.75 4 1388 400 157.75 444 20160713 459 93000546718000 461 7 474 157.75 5 M 55 93000552181000 8 157.75 854 n 9 88929 981 0 index[1] 1021 0 1584 89029 16 1468416601.372 18 1468416601.372 22 89120 3 1388 361 0.71 362 0.45 388 n/a 394 157.73 4 1388 400 n/a 444 20160713 459 93001362639104 461 26142 474 157.749977558 5 IBM 55 93001362849000 8 157.73 854 p 9 100 981 0 </code></pre> <p><strong>EXPECTED OUTPUT</strong></p> <pre><code>index[0] 3 1388 4 1388 5 M 8 157.75 9 88929 16 1468416600.6006 18 1468416600.6006 22 89020 55 93000552181000 361 0.73 388 157.75 394 157.75 400 
157.75 444 20160714 459 93000546718000 461 7 474 157.75 854 n 981 0 1021 1500 1584 88929 index[1] 3 1388 4 1388 5 M 8 157.73 9 100 16 1468416601.372 18 1468416601.372 22 89120 55 9300136284900 361 0.71 362 0.45 394 157.73 444 20160713 459 93001362639104 461 26142 474 157.749977558 854 p 981 0 1021 0 1584 89029 388 n/a 400 n/a </code></pre>
0
2016-09-23T11:19:31Z
39,660,297
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> to create a <code>Series</code>, which is split on <code>=</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> and converted to a <code>DataFrame</code>. Then the first column is cast to <code>int</code>, set as the index with <code>set_index</code>, and sorted with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>:</p> <pre><code>import io import pandas as pd temp=u"""3=1388|4=1388|5=M|8=157.75|9=88929|1021=1500|854=n|388=157.75|394=157.75|474=157.75|1584=88929|444=20160713|459=93000546718000|461=7|55=93000552181000|22=89020|400=157.75|361=0.73|981=0|16=1468416600.6006|18=1468416600.6006|""" # after testing, replace io.StringIO(temp) with the filename df = pd.read_csv(io.StringIO(temp), sep='|',index_col=None, header=None) </code></pre> <pre><code>df1 = df.stack().str.split('=', expand=True) df1.iloc[:,0] = df1.iloc[:,0].astype(int) df1 = df1.set_index(0).sort_index() print (df1) 1 0 3 1388 4 1388 5 M 8 157.75 9 88929 16 1468416600.6006 18 1468416600.6006 22 89020 55 93000552181000 361 0.73 388 157.75 394 157.75 400 157.75 444 20160713 459 93000546718000 461 7 474 157.75 854 n 981 0 1021 1500 1584 88929 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a>:</p> <pre><code>df1= df.stack().str.split('=', expand=True) df1.columns = ['a','b'] df1['a'] = df1['a'].astype(int) df1 = df1.reset_index(drop=True).sort_values('a') print (df1) a b 0 3 1388 1 4 1388 2 5 M 3 8 157.75 4 9 88929 19 16 1468416600.6006 20 18 1468416600.6006 15 22 89020 14 55 93000552181000 17 361 0.73 7 388 157.75 8 394 157.75 16 400 157.75 11 444 20160713 12 459 93000546718000 13 461 7 9 474 
157.75 6 854 n 18 981 0 5 1021 1500 10 1584 88929 </code></pre>
0
2016-09-23T12:01:06Z
[ "python", "pandas" ]
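The pandas approach in the answer above can also be sanity-checked with a minimal pure-Python sketch (the helper name and sample keys below are illustrative, not from the original post): split each record on `|`, sort the `key=value` pairs by the numeric value of the key, and append keys missing from the record as `n/a` at the end.

```python
# Pure-Python sketch of the numeric-key sort done with pandas above:
# parse "key=value" fields, order them by int(key) so that "55" comes
# before "361" (a plain string sort would put "1021" before "16"),
# and list any keys missing from this record last, as "n/a".
def sorted_pairs(record, all_keys=None):
    pairs = dict(f.split("=", 1) for f in record.strip("|").split("|") if f)
    missing = sorted(set(all_keys or []) - set(pairs), key=int)
    ordered = [(k, pairs[k]) for k in sorted(pairs, key=int)]
    return ordered + [(k, "n/a") for k in missing]

row = "3=1388|9=100|16=1.5|361=0.71"
print(sorted_pairs(row, all_keys={"3", "9", "16", "361", "400"}))
# [('3', '1388'), ('9', '100'), ('16', '1.5'), ('361', '0.71'), ('400', 'n/a')]
```

The same idea is what `df1['a'].astype(int)` plus `sort_values('a')` expresses in pandas.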
Caesar Cipher Code Python printing on separate lines
39,659,538
<p>The code below works fine, however the message prints onto separate lines once it has been encrypted. For example if I type: abc with the shift of 1 it encrypts it but prints it back as:</p> <pre><code>b c d </code></pre> <p>And I don't understand why. I want it to print as:</p> <pre><code> bcd </code></pre> <p>Here is the code:</p> <pre><code>print("Welcome to the Code-Breaking/Making Software") print("This program uses something called a Caesar Cipher.") Message = (input("Please enter the message you wish to Encrypt &gt;&gt; ")) Shift = int(input("Please enter the shift for your message &gt;&gt; ")) for x in Message: OrdMessage = ord(x) ShiftedMessage = OrdMessage + Shift NewMessage = chr(ShiftedMessage) NewMessageList = list(NewMessage) print("".join(NewMessageList)) </code></pre>
-2
2016-09-23T11:21:19Z
39,659,613
<p>Indentation matters, and you shouldn't create a new list from <code>NewMessage</code> on every iteration. Build the list across the loop and join it once at the end:</p> <pre><code>print("Welcome to the Code-Breaking/Making Software") print("This program uses something called a Caesar Cipher.") Message = (input("Please enter the message you wish to Encrypt &gt;&gt; ")) Shift = int(input("Please enter the shift for your message &gt;&gt; ")) NewMessageList = [] for x in Message: OrdMessage = ord(x) ShiftedMessage = OrdMessage + Shift NewMessage = chr(ShiftedMessage) NewMessageList.append(NewMessage) print("".join(NewMessageList)) </code></pre>
1
2016-09-23T11:25:03Z
[ "python", "encryption", "caesar-cipher" ]
Caesar Cipher Code Python printing on separate lines
39,659,538
<p>The code below works fine, however the message prints onto separate lines once it has been encrypted. For example if I type: abc with the shift of 1 it encrypts it but prints it back as:</p> <pre><code>b c d </code></pre> <p>And I don't understand why. I want it to print as:</p> <pre><code> bcd </code></pre> <p>Here is the code:</p> <pre><code>print("Welcome to the Code-Breaking/Making Software") print("This program uses something called a Caesar Cipher.") Message = (input("Please enter the message you wish to Encrypt &gt;&gt; ")) Shift = int(input("Please enter the shift for your message &gt;&gt; ")) for x in Message: OrdMessage = ord(x) ShiftedMessage = OrdMessage + Shift NewMessage = chr(ShiftedMessage) NewMessageList = list(NewMessage) print("".join(NewMessageList)) </code></pre>
-2
2016-09-23T11:21:19Z
39,659,682
<p>You should change the following part, so that <code>print</code> does not append a newline after each character:</p> <pre><code>print("".join(NewMessageList), end="") </code></pre>
0
2016-09-23T11:28:41Z
[ "python", "encryption", "caesar-cipher" ]
Caesar Cipher Code Python printing on separate lines
39,659,538
<p>The code below works fine, however the message prints onto separate lines once it has been encrypted. For example if I type: abc with the shift of 1 it encrypts it but prints it back as:</p> <pre><code>b c d </code></pre> <p>And I don't understand why. I want it to print as:</p> <pre><code> bcd </code></pre> <p>Here is the code:</p> <pre><code>print("Welcome to the Code-Breaking/Making Software") print("This program uses something called a Caesar Cipher.") Message = (input("Please enter the message you wish to Encrypt &gt;&gt; ")) Shift = int(input("Please enter the shift for your message &gt;&gt; ")) for x in Message: OrdMessage = ord(x) ShiftedMessage = OrdMessage + Shift NewMessage = chr(ShiftedMessage) NewMessageList = list(NewMessage) print("".join(NewMessageList)) </code></pre>
-2
2016-09-23T11:21:19Z
39,659,688
<p>What was happening is that the loop printed the result for each character separately. Instead, collect all the encrypted letters and print them together once at the end.</p> <p>The code below first initializes an empty list with <code>NewMessage = []</code>, then appends each encrypted letter to it using <code>.append()</code>, and finally prints them all with <code>''.join(NewMessage)</code>.</p> <pre><code>print("Welcome to the Code-Breaking/Making Software") print("This program uses something called a Caesar Cipher.") Message = (input("Please enter the message you wish to Encrypt &gt;&gt; ")) Shift = int(input("Please enter the shift for your message &gt;&gt; ")) NewMessage = [] for x in Message: OrdMessage = ord(x) ShiftedMessage = OrdMessage + Shift NewMessage.append(chr(ShiftedMessage)) print(''.join(NewMessage)) </code></pre>
0
2016-09-23T11:29:02Z
[ "python", "encryption", "caesar-cipher" ]
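One detail none of the answers above addresses: `chr(ord(x) + Shift)` walks off the end of the alphabet, so shifting `'z'` by 1 yields `'{'`. A sketch of a shift with wraparound (modular arithmetic), in case that behaviour is wanted:

```python
# Sketch of a Caesar shift that wraps around the alphabet, so "z" + 1
# becomes "a" instead of "{". The answers above keep the OP's simple
# chr(ord(x) + shift), which does not wrap.
def caesar(message, shift):
    out = []
    for ch in message:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("abc", 1))   # bcd
print(caesar("xyz", 3))   # abc
```

The `% 26` keeps the shifted offset inside the 26-letter range before adding the base back.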
How can I convert a dict_keys list to integers
39,659,619
<p>I am trying to find a way of converting a list within dict_keys() to an integer so I can use it as a trigger to send to another system. My code (below) imports a list of 100 words (a txt file with words each on a new line) which belong to 10 categories (e.g. the first 10 words belong to category 1, second 10 words belong to category 2 etc...). </p> <p><strong>Code:</strong></p> <pre><code>from numpy.random import choice from collections import defaultdict number_of_elements = 10 Words = open('file_location').read().split() categories = defaultdict(list) for i in range(len(words)): categories[i/number_of_elements].append(words[i]) category_labels = categories.keys() category_labels </code></pre> <p><strong>Output</strong></p> <pre><code>dict_keys([0.0, 1.1, 2.0, 3.0, 4.9, 5.0, 0.5, 1.9, 8.0, 9.0, 1.3, 2.7, 3.9, 9.2, 9.4, 7.2, 4.2, 8.6, 5.1, 5.4, 3.3, 1.0, 6.6, 7.4, 7.7, 8.4, 5.8, 9.8, 0.7, 8.8, 2.1, 7.0, 6.4, 4.3, 0.1, 2.5, 3.8, 1.2, 6.9, 7.1, 5.6, 0.4, 5.3, 2.9, 7.3, 3.5, 9.5, 8.2, 2.8, 3.1, 0.9, 2.3, 8.1, 4.0, 6.3, 6.7, 4.5, 0.2, 1.7, 2.2, 8.9, 1.4, 7.6, 9.1, 7.8, 5.5, 4.8, 0.6, 3.2, 2.4, 6.5, 9.9, 9.6, 1.5, 6.0, 3.7, 4.7, 3.4, 5.9, 4.1, 1.6, 6.8, 9.3, 3.6, 8.5, 8.7, 0.3, 0.8, 7.5, 5.2, 2.6, 4.6, 5.7, 7.9, 6.1, 1.8, 8.3, 6.2, 9.7, 4.4]) </code></pre> <p><strong>What I need:</strong></p> <p>I would like the first number before the point (e.g. if it was 6.7, I just want the 6 as an int). </p> <p>Thank you in advance for any help and/or advice!</p>
-1
2016-09-23T11:25:42Z
39,659,644
<p>Just convert your keys to integers using a list comprehension; note that there is no need to call <code>.keys()</code> here as iteration over the dictionary directly suffices:</p> <pre><code>[int(k) for k in categories] </code></pre> <p>You may want to bucket your values directly into integer categories rather than by floating point values:</p> <pre><code>categories = defaultdict(list) for i, word in enumerate(words): categories[int(i / number_of_elements)].append(word) </code></pre> <p>I used <code>enumerate()</code> to pair words up with their index, rather than use <code>range()</code> plus indexing back into <code>words</code>.</p>
4
2016-09-23T11:27:02Z
[ "python", "dictionary", "defaultdict" ]
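A runnable sketch of the integer-bucketing idea from the accepted answer (the word list here is a small stand-in for the 100-word file): using `i // number_of_elements` as the key keeps the categories as plain ints, so float keys like `6.7` never appear in the first place.

```python
# Bucket words into integer categories directly, as suggested above.
from collections import defaultdict

words = ["w%d" % i for i in range(25)]   # stand-in for the word file
number_of_elements = 10

categories = defaultdict(list)
for i, word in enumerate(words):
    categories[i // number_of_elements].append(word)

print(sorted(categories))   # [0, 1, 2]
print(categories[2])        # ['w20', 'w21', 'w22', 'w23', 'w24']
```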
force eclipse to use Python 3.5 autocompletion
39,659,748
<p>I changed the interpreter for my python projects from 2.x to 3.5 recently. The code runs correctly with the 3.5 version.</p> <p>I noticed that the autocompletion function of Eclipse still autocompletes as if I were using a 2.x Python version. For example, <code>print</code> gets autocompleted without parentheses, as a statement and not as a function. Any idea how to tell Eclipse that it needs to use 3.5 autocompletion?</p>
0
2016-09-23T11:32:25Z
39,773,761
<p>If you are using PyDev, make sure that interpreter grammar is set to 3.0 (<code>right click project -&gt; Properties -&gt; PyDev - Interpreter/Grammar</code>) </p>
1
2016-09-29T15:00:15Z
[ "python", "eclipse", "python-3.x", "autocomplete" ]
Using regex to match a specific pattern in Python
39,659,865
<p>I am trying to create <strong>regex</strong> that matches the following pattern:</p> <p><em>Note</em> : <strong><code>x</code> is a number e.g. 2</strong></p> <p><strong><em>Pattern:</em></strong></p> <pre><code>u'id': u'x' # x = Any Number e.g: u'id': u'2' </code></pre> <p>So far I have tried the following:</p> <pre><code>Regex = re.findall(r"'(u'id':u\d)'", Data) </code></pre> <p>However, no matches are being found.</p>
1
2016-09-23T11:38:12Z
39,659,923
<p>This regex will match your patterns:</p> <p><code>u'id': u'(\d+)'</code></p> <p>The important bits of the regex here are:</p> <ul> <li>the parentheses <code>()</code>, which make a capture group (so you can extract the matched number)</li> <li>the digit marker <code>\d</code> which specifies any digit 0 - 9</li> <li>the multiple marker <code>+</code> which means "at least 1"</li> </ul> <p>Tested on the following patterns:</p> <pre><code>u'id': u'3' u'id': u'20' u'id': u'250' u'id': u'6132838' </code></pre>
2
2016-09-23T11:41:29Z
[ "python", "regex", "python-2.7", "nsregularexpression" ]
Using regex to match a specific pattern in Python
39,659,865
<p>I am trying to create <strong>regex</strong> that matches the following pattern:</p> <p><em>Note</em> : <strong><code>x</code> is a number e.g. 2</strong></p> <p><strong><em>Pattern:</em></strong></p> <pre><code>u'id': u'x' # x = Any Number e.g: u'id': u'2' </code></pre> <p>So far I have tried the following:</p> <pre><code>Regex = re.findall(r"'(u'id':u\d)'", Data) </code></pre> <p>However, no matches are being found.</p>
1
2016-09-23T11:38:12Z
39,659,924
<p>You have misplaced single quotes and you should use <code>\d+</code> instead of just <code>\d</code>:</p> <pre><code>&gt;&gt;&gt; s = "u'id': u'2'" &gt;&gt;&gt; re.findall(r"u'id'\s*:\s*u'\d+'", s) ["u'id': u'2'"] </code></pre>
2
2016-09-23T11:41:37Z
[ "python", "regex", "python-2.7", "nsregularexpression" ]
Using regex to match a specific pattern in Python
39,659,865
<p>I am trying to create <strong>regex</strong> that matches the following pattern:</p> <p><em>Note</em> : <strong><code>x</code> is a number e.g. 2</strong></p> <p><strong><em>Pattern:</em></strong></p> <pre><code>u'id': u'x' # x = Any Number e.g: u'id': u'2' </code></pre> <p>So far I have tried the following:</p> <pre><code>Regex = re.findall(r"'(u'id':u\d)'", Data) </code></pre> <p>However, no matches are being found.</p>
1
2016-09-23T11:38:12Z
39,660,024
<p>Try this:</p> <pre><code>str1 = "u'id': u'2'" re.findall(r'u\'id\': u\'\d+\'', str1) </code></pre> <p>You need to escape the single quotes (<code>'</code>) with a backslash because the pattern string itself is delimited by single quotes.</p>
1
2016-09-23T11:47:53Z
[ "python", "regex", "python-2.7", "nsregularexpression" ]
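For completeness, a runnable check of the two pattern variants suggested in these answers (the sample data string below is made up for illustration):

```python
# Check both variants: capturing just the number vs. matching the whole
# fragment with optional spaces around the colon.
import re

data = "{u'id': u'2', u'name': u'foo'} {u'id': u'41', u'name': u'bar'}"

# With a capture group, findall returns only the captured numbers:
print(re.findall(r"u'id': u'(\d+)'", data))      # ['2', '41']

# Without a group, findall returns the full matched fragments:
print(re.findall(r"u'id'\s*:\s*u'\d+'", data))   # ["u'id': u'2'", "u'id': u'41'"]
```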
Python, scipy.optimize.curve_fit do not fit to a linear equation where the slope is known
39,659,900
<p>I think I have a relatively simple problem, but I have been trying for a few hours now without luck. I am trying to fit a linear function (linearf) or power-law function (plaw) where I already know the slope of these functions (b, which I have to keep constant in this study). The results should give an intercept around 1.8, something I have not managed to get. I must be doing something wrong, but I cannot put my finger on it. Does somebody have an idea how to get around this problem?</p> <p>Thank you in advance!</p> <pre><code>import numpy as np from scipy import optimize p2 = np.array([ 8.08543600e-06, 1.61708700e-06, 1.61708700e-05, 4.04271800e-07, 4.04271800e-06, 8.08543600e-07]) pD = np.array([ 12.86156, 16.79658, 11.52103, 21.092 , 14.47469, 18.87318]) # Power-law function def plaw(a,x): b=-0.1677 # known slope y = a*(x**b) return y # linear function def linearf(a,x): b=-0.1677 # known slope y = b*x + a return y ## First way, via power-law function ## popt, pcov = optimize.curve_fit(plaw,p2,pD,p0=1.8) # array([ 7.12248200e-37]) wrong popt, pcov = optimize.curve_fit(plaw,p2,pD) # &gt;&gt;&gt; return 0.9, it is wrong too (the results should be around 1.8) ## Second way, via log10 and linear function ## x = np.log10(p2) y = np.log10(pD) popt, pcov = optimize.curve_fit(linearf,x,y,p0=0.3) K = 10**popt[0] ## &gt;&gt;&gt;&gt; return 3.4712954470408948e-41, it is wrong </code></pre>
1
2016-09-23T11:40:07Z
39,662,134
<p>I just discovered an error in the functions: <code>scipy.optimize.curve_fit</code> expects the independent variable to be the <em>first</em> argument of the model function.</p> <p>It should be:</p> <pre><code>def plaw(x,a): b=-0.1677 # known slope y = a*(x**b) return y </code></pre> <p>and not</p> <pre><code>def plaw(a,x): b=-0.1677 # known slope y = a*(x**b) return y </code></pre> <p>Stupid mistake.</p>
0
2016-09-23T13:33:05Z
[ "python", "python-2.7" ]
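A quick way to verify the corrected fit without scipy: because the slope `b` is held fixed, minimizing `sum((y - a*x**b)**2)` over the single parameter `a` has a closed-form solution, `a = sum(y*x**b) / sum(x**(2*b))`. Applied to the data from the question, it lands near the expected 1.8:

```python
# Closed-form least-squares intercept for y = a * x**b with b known.
# (Independent sanity check of the fit; not the OP's curve_fit code.)
xs = [8.085436e-06, 1.617087e-06, 1.617087e-05,
      4.042718e-07, 4.042718e-06, 8.085436e-07]
ys = [12.86156, 16.79658, 11.52103, 21.092, 14.47469, 18.87318]
b = -0.1677

a = sum(y * x**b for x, y in zip(xs, ys)) / sum(x**(2 * b) for x in xs)
print(round(a, 2))  # ~1.8
```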
Using pyplot to create grids of plots
39,659,998
<p>I am new to python and having some difficulties with plotting using <code>pyplot</code>. My goal is to plot a grid of plots in-line (<code>%pylab inline</code>) in Juypter Notebook.</p> <p>I programmed a function <code>plot_CV</code> which plots cross-validation erorr over the degree of polynomial of some x where across plots the degree of penalization (lambda) is supposed to vary. Ultimately there are 10 elements in lambda and they are controlled by the first argument in <code>plot_CV</code>. So </p> <pre><code>fig = plt.figure() ax1 = fig.add_subplot(1,1,1) ax1 = plot_CV(1,CV_ve=CV_ve) </code></pre> <p>Gives</p> <p><a href="http://i.stack.imgur.com/DV1tk.png" rel="nofollow"><img src="http://i.stack.imgur.com/DV1tk.png" alt="enter image description here"></a></p> <p>Now I think I have to use <code>add_subplot</code> to create a grid of plots as in</p> <pre><code>fig = plt.figure() ax1 = fig.add_subplot(2,2,1) ax1 = plot_CV(1,CV_ve=CV_ve) ax2 = fig.add_subplot(2,2,2) ax2 = plot_CV(2,CV_ve=CV_ve) ax3 = fig.add_subplot(2,2,3) ax3 = plot_CV(3,CV_ve=CV_ve) ax4 = fig.add_subplot(2,2,4) ax4 = plot_CV(4,CV_ve=CV_ve) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/9j1PZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/9j1PZ.png" alt="enter image description here"></a></p> <p>If I continue this, however, then the plots get smaller and smaller and start to overlap on the x and y labels. Here a picture with a 3 by 3 plot.</p> <p><a href="http://i.stack.imgur.com/mR2mv.png" rel="nofollow"><img src="http://i.stack.imgur.com/mR2mv.png" alt="enter image description here"></a></p> <p>Is there a way to space the plots evenly, so that they do not overlap and make better use of the horizontal and vertical in-line space in Jupyter Notebook? 
To illustrate this point here a screenshot from jupyter:</p> <p><a href="http://i.stack.imgur.com/Cb1t3.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Cb1t3.jpg" alt="enter image description here"></a></p> <p>Final note: I still need to add a title or annotation with the current level of lambda used in <code>plot_CV</code>.</p> <hr> <p><strong>Edit:</strong> Using the tight layout as suggested, gives:</p> <p><a href="http://i.stack.imgur.com/6dNtV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/6dNtV.jpg" alt="enter image description here"></a></p> <hr> <p><strong>Edit 2</strong>: Using the <code>fig.set_figheight</code> and <code>fig.set_figwidth</code> I could finally use the full length and heigth available.</p> <p><a href="http://i.stack.imgur.com/Jh8ll.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Jh8ll.jpg" alt="enter image description here"></a></p>
0
2016-09-23T11:46:26Z
39,660,225
<p>A first suggestion for your problem would be to take a look at the "<a href="http://matplotlib.org/users/tight_layout_guide.html" rel="nofollow">Tight Layout guide</a>" for matplotlib.</p> <p>They have an example that looks visually very similar to your situation, as well as examples and suggestions for taking axis labels and plot titles into consideration.</p> <p>Furthermore, you can control the overall figure size by using Figure from the <a href="http://matplotlib.org/api/figure_api.html" rel="nofollow">matplotlib.figure</a> class:</p> <pre><code>Figure(figsize=(x, y)) # figsize: (x, y) in inches </code></pre> <p><strong>EDIT:</strong></p> <p>Here is an example that I pulled from the matplotlib website, with these lines added:</p> <pre><code>fig.set_figheight(15) fig.set_figwidth(15) </code></pre> <pre><code>import matplotlib.pyplot as plt plt.rcParams['savefig.facecolor'] = "0.8" def example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.locator_params(nbins=3) ax.set_xlabel('x-label', fontsize=fontsize) ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) plt.close('all') fig = plt.figure() fig.set_figheight(15) fig.set_figwidth(15) ax1 = plt.subplot2grid((3, 3), (0, 0)) ax2 = plt.subplot2grid((3, 3), (0, 1), colspan=2) ax3 = plt.subplot2grid((3, 3), (1, 0), colspan=2, rowspan=2) ax4 = plt.subplot2grid((3, 3), (1, 2), rowspan=2) example_plot(ax1) example_plot(ax2) example_plot(ax3) example_plot(ax4) plt.tight_layout() </code></pre> <p>You can achieve padding of your subplots by using tight_layout this way:</p> <pre><code>plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0) </code></pre> <p>That way you can keep your subplots from crowding each other even further.</p> <p>Have a good one!</p>
2
2016-09-23T11:57:25Z
[ "python", "matplotlib", "plot" ]
How do I return a JsonResponse if the post data is bad in tastypie?
39,660,005
<p>For example, the client posts a card number in the data and I use it to fetch the card record from the database. If it does not exist, I return a JsonResponse such as:</p> <p><code> return JsonResponse({ 'success': False, 'msg': 'The card does not exist! Please check the card number.' }) </code></p> <p>If the card does exist, I will use it to filter another record from the database and use them together to create the object, such as a consumption record.</p> <p>I read the tastypie docs but have no idea how to control the HttpResponse that tastypie finally returns.</p>
0
2016-09-23T11:46:54Z
39,677,892
<p>Assuming you're returning django's <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#jsonresponse-objects" rel="nofollow">JsonResponse</a>, you can simply return it with appropriate <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#django.http.HttpResponse.status_code" rel="nofollow">status</a>, just like with <code>HttpResponse</code>.</p>
0
2016-09-24T15:14:13Z
[ "python", "django", "tastypie" ]
How to test faster GUI applications?
39,660,019
<p>I'd like to know how to test my GUI applications faster.</p> <p>For the backend I have a good set of unit tests, so I think that's quite OK and I can iterate quite fast.</p> <p>But to test the frontend logic I find myself running over and over the same sequence of events to test a certain part of the logic... and that feels like I'm doing something clearly wrong here, because my iteration cycle is not as fast as I'd like it to be.</p> <p>So, could you suggest a good way to test GUI applications? In particular, I'm interested in how to speed up the testing cycle of my PyQt apps.</p>
1
2016-09-23T11:47:25Z
39,665,686
<p>If you're just trying to validate that certain GUI actions do the correct thing, you can use <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtest.html" rel="nofollow"><code>QTest</code></a> to simulate button clicks and other GUI interaction.</p> <p>Ideally, most of your business logic is in non-GUI modules to make it easier to test. That way, GUI testing is limited mostly to testing if the results display correctly and that key presses trigger events.</p>
1
2016-09-23T16:39:37Z
[ "python", "qt", "user-interface", "pyqt", "gui-testing" ]
How to test faster GUI applications?
39,660,019
<p>I'd like to know how to test my GUI applications faster.</p> <p>For the backend I have a good set of unit tests, so I think that's quite OK and I can iterate quite fast.</p> <p>But to test the frontend logic I find myself running over and over the same sequence of events to test a certain part of the logic... and that feels like I'm doing something clearly wrong here, because my iteration cycle is not as fast as I'd like it to be.</p> <p>So, could you suggest a good way to test GUI applications? In particular, I'm interested in how to speed up the testing cycle of my PyQt apps.</p>
1
2016-09-23T11:47:25Z
39,668,071
<p>There are also UI blackbox testing tools such as <a href="https://wiki.ubuntu.com/Touch/Testing/Autopilot" rel="nofollow">AutoPilot</a> and <a href="https://www.froglogic.com/squish/gui-testing/" rel="nofollow">Squish</a>, which allow you to record your interactions with the application and replay them later.</p>
1
2016-09-23T19:17:07Z
[ "python", "qt", "user-interface", "pyqt", "gui-testing" ]
Cannot implement recursion with Python Scrapy
39,660,025
<p>Please pardon my knowledge in Scrapy, I have been doing Data Scraping for past 3 years or so using PHP and Python BeautifulSoup, but I am new to Scrapy.</p> <p>I have Python 2.7 and latest Scrapy.</p> <p>I have a requirement where I need to scrape <a href="http://www.dos.ny.gov/corps/bus_entity_search.html" rel="nofollow">http://www.dos.ny.gov/corps/bus_entity_search.html</a> it shows results in paginations.</p> <p>My requiement is that if a search returns more than 500 results, for example "AME" returns more than 500 results, then code should search for "AMEA" to "AMEZ", and for "AMEA" if it still returns more than 500 results then search "AMEAA" and so on recursively</p> <p>But it is giving me unexpected results. Here is crawler code.</p> <pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.selector import Selector from scrapy.http import FormRequest from scrapy.http.request import Request import urllib from appext20.items import Appext20Item from scrapy.selector import HtmlXPathSelector class Appext20Spider(CrawlSpider): name = "appext20" allowed_domains = ["appext20.dos.ny.gov"] # p_entity_name means Keyword to search payload = {"p_entity_name": '', "p_name_type": 'A', 'p_search_type':'BEGINS'} url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY' search_characters = ["A","B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y","Z"," "] construction_keywords = ['Carpenters','Carpentry','Plastering','Roofers','Roofing','plumbing','remodelling','remodeling','Tiling','Painting','Rendering','Electrical','Plumber','contracting ','contractor','construction','Waterproofing','Landscaping','Bricklaying','Cabinet Maker','Flooring','carpenters','electricians','restoration','drywall','renovation','renovating ','remodels 
','framing','Masonry','builders','Woodwork','Cabinetry','Millwork','Electric','plastering','painters','painting','HVAC','Labouring','Fencing','Concreting','Glass','AC','Heating','glazier ','air duct','tiles','deck','Guttering','Concrete','Demolition','Debris','Dumpster','Cabinet','Junk','stucco','general contract','home improvement','home repair','home build','homes','building maintenance','masons','siding','kitchens','paving','landscapers','landscapes','design &amp; build','design build','design and build'] search_keywords = [''] def start_requests(self): # create keywords combo for char in self.search_characters: for char2 in self.search_characters: for char3 in self.search_characters: self.search_keywords.extend([char+char2+char3]) # now start requests for keyword in self.search_keywords: self.payload['p_entity_name'] = keyword print ('this is keyword '+ keyword) # parse_data() is my callback func yield FormRequest(self.url, formdata= self.payload, callback=self.parse_data) def parse_data(self, response): ads_on_page = Selector(response).xpath("//td[@headers='c1']") # get that message to see how many results this keyword returned. # if it returns more than 500, then page shows "More than 500 entities were found. Only the first 500 entities will be displayed." 
try: results = Selector(response).xpath("//center/p/text()").extract()[0] except Exception,e: results = '' all_links = [] for tr in ads_on_page: temp_dict = {} temp_dict['title'] = tr.xpath('a/text()').extract() temp_dict['link'] = tr.xpath('a/@href').extract() temp_dict['p_entity_name'] = self.payload['p_entity_name'] temp_dict['test'] = results yield temp_dict # check if has next page try: next_page = Selector(response).xpath("//a[text()='Next Page']/@href").extract() next_page = 'https://appext20.dos.ny.gov/corp_public/' + next_page[0] next_page_text = Selector(response).xpath("//a[text()='Next Page']/@href/text()").extract() # if it has more than 1 page, then do recursive calls to search # I.E: "AME" returns more than 500 resutls, then code should search for "AMEA" to "AMEZ" # and for "AMEA" if it still returns more than 500 results then search "AMEAA" and so on recursively if next_page_text == 2: if "More than 500 entities were found" in results: # search through "A" to "Z" for char3 in self.search_characters: self.payload['p_entity_name'] = self.payload['p_entity_name'] + char3 print ('THIS is keyword '+ self.payload['p_entity_name']) yield FormRequest(self.url, formdata= self.payload, callback=self.parse_data) # scrape that next page. yield Request(url=next_page, callback=self.parse_data) except Exception,e: # no next page. return </code></pre> <p><a href="https://www.dropbox.com/s/102r34hubedp1z3/appext20.zip?dl=0" rel="nofollow">Here is</a> full copy of my project </p> <p>I am running my code using <code>scrapy crawl appext20 -t csv -o app.csv --loglevel=INFO</code> command.</p>
0
2016-09-23T11:47:55Z
39,661,780
<p>Well, without having had a deeper look at <code>scrapy</code>, I had to have a look at the recursion thing.</p> <p>First, you may want to simplify your keyword generation.</p> <pre><code>import itertools import random URL = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY' ALPHABET = [chr(i) for i in range(65, 65+26)] def keyword_initial_set(n=2): '''Generates a list of all n-length combinations of the entire alphabet E.g. n=2: ['AA', 'AB', 'AC', ..., 'ZY', 'ZZ'] E.g. n=5: ['AAAAA', 'AAAAB', 'AAAAC', ..., 'ZZZZY', 'ZZZZZ'] ''' cartesian = list(itertools.product(*[ALPHABET for i in range(n)])) return map((lambda x: ''.join(x)), cartesian) def keyword_generator(base): '''Generates keywords for an additional level for the given keyword base E.g. base='BEZ': ['BEZA', 'BEZB', 'BEZC', ..., 'BEZZ'] ''' for c in ALPHABET: yield base + c </code></pre> <p>With these little helpers, it is a lot easier to generate your keyword combinatorics and to generate subsequent keywords for a recursive descent (see their docstrings).</p> <p>Then, for your recursion, it is handy -- as you did in your own code -- to have two separate functions: One for the HTTP request, the other for handling the responses.</p> <pre><code>def keyword_request(kw): '''Issues an online search using a keyword WARNING: MONKEY-PATCHED CODE INCLUDED ''' payload = { 'p_entity_name': kw, 'p_name_type': 'A', 'p_search_type': 'BEGINS' } print('R {}'.format(kw)) FormRequest(URL, formdata=payload, callback=keyword_parse) def keyword_parse(response): '''Parses the response to seek for the number of results and performs a recursive descent if necessary WARNING: MONKEY-PATCHED CODE INCLUDED ''' try: n_res = Selector(response).xpath('//center/p/text()').extract()[0] except Exception: # Please put specific exception type here. Don't be so generic! n_res = '' if n_res.startswith('More than 500'): print('Recursive descent.') for kw in keyword_generator(response['p_entity_name']): # Hacked. 
If not feasible, get current kw form s/e else keyword_request(kw) else: # Parse paginated results here. pass </code></pre> <p>With these functions, your main method (or call to the crawler wherever it is issued) becomes:</p> <pre><code>if __name__ == '__main__': kwords = keyword_initial_set(n=2) for kw in kwords: keyword_request(kw) </code></pre> <h2>What happens here?</h2> <p>The <code>keyword_initial_set</code> generates a list of all <code>n</code>-length combinations of the entire alphabet. This serves as a starting point: Each of these keywords is requested from the website search and the results are parsed.</p> <p>In case the website yields more than 500 results, a recursive descent is performed. The current keyword is extended by all letters <code>A-Z</code> and for each new keyword (of length <code>n+1</code>) a new request is issued and parsed upon completion.</p> <p>Hope to help.</p> <h2>Monkey Patches</h2> <p>For my local and offline testing, I monkeypatched the original <code>scrapy</code> classes with these ones:</p> <pre><code>class FormRequest(object): '''Monkey-patch for original implementation ''' def __init__(self, url, formdata, callback): self.url = url self.formdata = formdata self.callback = callback self.callback(formdata) class Selector(object): '''Monkey-patch for original implementation ''' def __init__(self, response): self.response = response def xpath(self, xpattern): return self def extract(self): n_res = random.randint(0, 510) if n_res &gt; 500: return ['More than 500 results found'] else: return [''] </code></pre> <p>Thus, you may have to adapt the code at those spots where my patches do not hit the original behavior. But you'll surely manage that.</p>
1
2016-09-23T13:17:14Z
[ "python", "recursion", "web-scraping", "scrapy" ]
Looping multiple values into dictionary
39,660,069
<p>I would like to expand on a previously asked question:</p> <p><a href="http://stackoverflow.com/questions/39578130/nested-for-loop-with-unequal-entities">Nested For Loop with Unequal Entities</a></p> <p>In that question, I requested a method to extract the location's type (Hospital, Urgent Care, etc) in addition to the location's name (WELLSTAR ATLANTA MEDICAL CENTER, WELLSTAR ATLANTA MEDICAL CENTER SOUTH, etc).</p> <p>The answer suggested utilizing a for loop and dictionary to collect the values and keys. The code snippet appears below:</p> <pre><code>from pprint import pprint import requests from bs4 import BeautifulSoup url = "https://www.wellstar.org/locations/pages/default.aspx" response = requests.get(url) soup = BeautifulSoup(response.content, "html.parser") d = {} for row in soup.select(".WS_Content &gt; .WS_LeftContent &gt; table &gt; tr"): title = row.h3.get_text(strip=True) d[title] = [item.get_text(strip=True) for item in row.select(".PurpleBackgroundHeading a")] pprint(d) </code></pre> <p>I would like to extend the existing solution to include the entity's address matched with the appropriate key-value combination. If the best solution is to utilize something other than a dictionary, I'm open to that suggestion as well.</p>
-1
2016-09-23T11:50:26Z
39,660,117
<p>Let's say you have a dict <code>my_dict</code> and you want to add <code>2</code> with <code>my_key</code> as key. Simply do:</p> <pre><code>my_dict['my_key'] = 2 </code></pre>
1
2016-09-23T11:52:38Z
[ "python", "loops", "dictionary", "beautifulsoup", "nested-loops" ]
Looping multiple values into dictionary
39,660,069
<p>I would like to expand on a previously asked question:</p> <p><a href="http://stackoverflow.com/questions/39578130/nested-for-loop-with-unequal-entities">Nested For Loop with Unequal Entities</a></p> <p>In that question, I requested a method to extract the location's type (Hospital, Urgent Care, etc) in addition to the location's name (WELLSTAR ATLANTA MEDICAL CENTER, WELLSTAR ATLANTA MEDICAL CENTER SOUTH, etc).</p> <p>The answer suggested utilizing a for loop and dictionary to collect the values and keys. The code snippet appears below:</p> <pre><code>from pprint import pprint import requests from bs4 import BeautifulSoup url = "https://www.wellstar.org/locations/pages/default.aspx" response = requests.get(url) soup = BeautifulSoup(response.content, "html.parser") d = {} for row in soup.select(".WS_Content &gt; .WS_LeftContent &gt; table &gt; tr"): title = row.h3.get_text(strip=True) d[title] = [item.get_text(strip=True) for item in row.select(".PurpleBackgroundHeading a")] pprint(d) </code></pre> <p>I would like to extend the existing solution to include the entity's address matched with the appropriate key-value combination. If the best solution is to utilize something other than a dictionary, I'm open to that suggestion as well.</p>
-1
2016-09-23T11:50:26Z
39,660,295
<p>Let's say you have a dict <code>d = {'Name': 'Zara', 'Age': 7}</code> and now you want to add another value </p> <blockquote> <p>'Sex'= 'female'</p> </blockquote> <p>You can use the built-in <code>update</code> method.</p> <pre><code>d.update({'Sex': 'female' }) print "Value : %s" % d Value : {'Age': 7, 'Name': 'Zara', 'Sex': 'female'} </code></pre> <p>Reference: <a href="https://www.tutorialspoint.com/python/dictionary_update.htm" rel="nofollow">https://www.tutorialspoint.com/python/dictionary_update.htm</a></p>
0
2016-09-23T12:01:01Z
[ "python", "loops", "dictionary", "beautifulsoup", "nested-loops" ]
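The single-key assignment and `update()` shown in the answers extend naturally to the question's actual goal — grouping each location's name and address under its type — by making the dict values lists of tuples. A minimal runnable sketch (the rows below are invented stand-ins for the real BeautifulSoup selections):

```python
# Group (name, address) pairs under each location type.
# The rows are made up; in the real scraper they would come from
# BeautifulSoup selections on each table row.
rows = [
    ("Hospital", "WELLSTAR ATLANTA MEDICAL CENTER", "Address A"),
    ("Hospital", "WELLSTAR ATLANTA MEDICAL CENTER SOUTH", "Address B"),
    ("Urgent Care", "WELLSTAR ACWORTH URGENT CARE", "Address C"),
]

d = {}
for kind, name, address in rows:
    # setdefault creates the list the first time a key is seen,
    # so later pairs are appended instead of overwriting the value
    d.setdefault(kind, []).append((name, address))

print(len(d["Hospital"]))  # two hospitals grouped under one key
```

Because each value is a list, a later lookup such as `d["Hospital"]` returns every (name, address) pair for that type in insertion order.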
Doubts with Keras RNN formats and layers
39,660,160
<p>Ok, I know this has been asked before but I'm afraid I have not fully grasped the comments/solutions, so let me write here my own problem:</p> <p>It's a very basic problem. I have an array <code>X_train</code> of shape <code>(35584,)</code> that represents measures each hour for several years. I also have the corresponding <code>Y_train</code> with shape <code>(35584,)</code> as the expected values. As you see, a simple regression problem. But the values of an hour h are affected by values of, say, the previous 6 hours, so I want to use an RNN.</p> <p>In Keras, I understand that for my case: <code>timesteps = 6</code> and <code>nb_samples = 35584</code>. In my case, <code>nb_features = 1</code>. </p> <p>How can I program this in Keras? Should I use an <code>Embedding</code> layer? And how to do it?</p> <p>Any help will be valuable, but I would appreciate not only code, but also explanations. IMHO, a pedagogical tutorial/book on the use of Keras is missing.</p>
0
2016-09-23T11:54:42Z
40,059,477
<p>Ok, I'm going to self-answer this question in case it is useful for someone. How to do regression with an RNN in Keras is explained very well here. The blog also has a lot of resources for machine learning, and the explanations are superb. Strongly recommended. Link with the explanations of formats, layers, stateful states and so on: </p> <p><a href="http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/" rel="nofollow">http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/</a></p>
0
2016-10-15T13:16:37Z
[ "python", "neural-network", "keras", "recurrent-neural-network" ]
designing the predicate for the filter function
39,660,329
<p>I have this in-built filter function for my homework.</p> <pre><code>def filter(pred, seq): if seq == (): return () elif pred(seq[0]): return (seq[0],) + filter(pred, seq[1:]) else: return filter(pred, seq[1:]) </code></pre> <p>We are supposed to convert a given function to one that has only one return line using the designed filter function (which overwrites the more powerful python version). </p> <p>The code that we are supposed to convert is this: </p> <pre><code>def lookup_bus_stop_by_road(stops, road): matched = () for bus_stop in stops: if get_road_name(bus_stop) == road: matched = matched + (bus_stop, ) return matched </code></pre> <p>My question is: how am I supposed to construct the predicate for my filter function by adapting the given lookup_bus_stop_by_road function? I always get a TypeError: 'bool' object is not callable. </p> <p>This is the return line I have put in: </p> <pre><code>def lookup_bus_stop_by_road(stops, road): return filter(get_road_name(bus_stops) == road, stops) </code></pre> <p>What is wrong here?</p>
1
2016-09-23T12:02:58Z
39,661,203
<pre><code>get_road_name(bus_stops) == road </code></pre> <p>is a boolean value, not a function. What you want to do is create a function that calls <code>get_road_name</code> and checks if the result is equal to <code>road</code></p> <pre><code>filter(lambda x: get_road_name(x) == road, stops) </code></pre> <p>For more reading on this topic see here: <a href="https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions" rel="nofollow">https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions</a></p>
1
2016-09-23T12:49:03Z
[ "python", "recursion", "filter", "higher-order-functions" ]
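Putting the answer's lambda together with the homework-style filter gives the full picture. A runnable sketch — the function is renamed `my_filter` here so it does not shadow the builtin, and `get_road_name` plus the stop tuples are hypothetical stand-ins:

```python
# The homework-style recursive filter, renamed to avoid shadowing the builtin.
def my_filter(pred, seq):
    if seq == ():
        return ()
    elif pred(seq[0]):
        return (seq[0],) + my_filter(pred, seq[1:])
    else:
        return my_filter(pred, seq[1:])

# Hypothetical accessor: each bus stop is modelled as (stop_id, road_name).
def get_road_name(bus_stop):
    return bus_stop[1]

stops = ((1, "Clementi Rd"), (2, "Dover Rd"), (3, "Clementi Rd"))

def lookup_bus_stop_by_road(stops, road):
    # The lambda is a callable evaluated once per element; passing
    # get_road_name(...) == road instead would hand over a single bool,
    # which is exactly what triggers "'bool' object is not callable".
    return my_filter(lambda s: get_road_name(s) == road, stops)

print(lookup_bus_stop_by_road(stops, "Clementi Rd"))
# -> ((1, 'Clementi Rd'), (3, 'Clementi Rd'))
```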
Django: Setting the from address on an email
39,660,409
<p>I'd like for my users to be able to enter their <code>email-address</code> and <code>message</code> and then send the email with the 'from address' being their own email-address. Currently the <code>EMAIL_HOST</code> is set on our own domain and the emails arrives when it is sent with a "from address" equal to our <code>HOST_USER</code>, but not if it is anything else. Is this possible?</p> <p>Our settings: </p> <pre><code>EMAIL_HOST = 'smtp02.hostnet.nl' EMAIL_PORT = 587 EMAIL_USE_TLS = True EMAIL_HOST_USER = "xxx" EMAIL_HOST_PASSWORD = "xxx" EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' </code></pre>
0
2016-09-23T12:07:19Z
39,661,013
<p>If you allow your users to set the from addresss, you may find that your emails are blocked by anti-spam measures.</p> <p>A better approach would be to use an email address you control as the from address, and set the <code>reply_to</code> header on your email. Then, when the recipients click 'reply', the reply will go to the user's from address.</p> <pre><code>email = EmailMessage( 'Hello', 'Body goes here', 'your-email-address@example.com', # from address ['to1@example.com', 'to2@example.com'], # recipients reply_to=[user_from_address], # reply to address set by your user ) email.send() </code></pre>
1
2016-09-23T12:40:04Z
[ "python", "django", "smtp" ]
Python Pandas .Apply Function to Vectorized Form
39,660,466
<p>I'm trying to convert the following .apply transformation to a vectorized form which will run faster. I've tried .where, and I've tried normal boolean indexing, however my solutions are not working. Please send me in the right direction</p> <pre><code>oneDayDelta = datetime.timedelta(days=1) def correct_gps_datetimestamp(row): new_dts = row['GPS_DateTime'] if row['Created'].hour == 0 and row['GPS_DateTime'].hour &gt; 10: new_dts = row['GPS_DateTime'] - oneDayDelta return(new_dts) allData['GPS_DateTime'] = allData.apply(correct_gps_datetimestamp,axis=1) </code></pre> <p>Non working solution:</p> <pre><code>allData['GPS_DateTime'] = allData.where(allData['Created'].hour == 0 &amp; allData['GPS_DateTime'].hour &gt; 10,allData['GPS_DateTime'] - datetime.timedelta(days=1)) </code></pre>
1
2016-09-23T12:10:22Z
39,660,517
<p>I think you just need to add <code>()</code> around the conditions:</p> <pre><code>(allData['Created'].hour == 0) &amp; (allData['GPS_DateTime'].hour &gt; 10) </code></pre> <hr> <pre><code>allData['GPS_DateTime'] = allData.where((allData['Created'].hour == 0) &amp; (allData['GPS_DateTime'].hour &gt; 10), allData['GPS_DateTime'] - datetime.timedelta(days=1)) </code></pre>
2
2016-09-23T12:13:27Z
[ "python", "pandas" ]
Python Pandas .Apply Function to Vectorized Form
39,660,466
<p>I'm trying to convert the following .apply transformation to a vectorized form which will run faster. I've tried .where, and I've tried normal boolean indexing, however my solutions are not working. Please send me in the right direction</p> <pre><code>oneDayDelta = datetime.timedelta(days=1) def correct_gps_datetimestamp(row): new_dts = row['GPS_DateTime'] if row['Created'].hour == 0 and row['GPS_DateTime'].hour &gt; 10: new_dts = row['GPS_DateTime'] - oneDayDelta return(new_dts) allData['GPS_DateTime'] = allData.apply(correct_gps_datetimestamp,axis=1) </code></pre> <p>Non working solution:</p> <pre><code>allData['GPS_DateTime'] = allData.where(allData['Created'].hour == 0 &amp; allData['GPS_DateTime'].hour &gt; 10,allData['GPS_DateTime'] - datetime.timedelta(days=1)) </code></pre>
1
2016-09-23T12:10:22Z
39,660,525
<p>You can do this in a single line using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a>:</p> <pre><code>allData['GPS_DateTime'] = np.where((allData['Created'].dt.hour == 0) &amp; (allData['GPS_DateTime'].dt.hour &gt; 10), allData['GPS_DateTime'] - oneDayDelta, allData['GPS_DateTime']) </code></pre> <p>Note that the datetimes have the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html" rel="nofollow"><code>dt.hour</code></a> accessor to get the hours as int values; this allows you to compare the entire df. Note that we use <code>&amp;</code> here instead of <code>and</code> as we're comparing arrays. Additionally we have to use parentheses around the conditions due to operator precedence.</p> <pre><code>(allData['Created'].dt.hour == 0) &amp; (allData['GPS_DateTime'].dt.hour &gt; 10) </code></pre> <p>So where this condition is met it returns your datetime column minus the one-day timedelta, otherwise it just returns your column.</p>
2
2016-09-23T12:13:54Z
[ "python", "pandas" ]
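A runnable sketch of the `np.where` approach from the answer above, on a tiny invented frame; the `.dt` accessor is what lets the hour comparison run over the whole column at once:

```python
import datetime
import numpy as np
import pandas as pd

one_day = datetime.timedelta(days=1)

# Made-up timestamps: only the first row satisfies both conditions
df = pd.DataFrame({
    "Created":      pd.to_datetime(["2016-09-23 00:05:00", "2016-09-23 09:00:00"]),
    "GPS_DateTime": pd.to_datetime(["2016-09-23 23:50:00", "2016-09-23 23:50:00"]),
})

# .dt.hour exposes every row's hour in one vectorized step; the parentheses
# are required because & binds tighter than == and >
mask = (df["Created"].dt.hour == 0) & (df["GPS_DateTime"].dt.hour > 10)
df["GPS_DateTime"] = np.where(mask, df["GPS_DateTime"] - one_day, df["GPS_DateTime"])

print(df["GPS_DateTime"].tolist())
```

Only the first row matches the mask, so only its timestamp is shifted back by one day; the second row comes through unchanged.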
numpy polyfit yields nonsense
39,660,663
<p>I am trying to fit these values: <a href="http://i.stack.imgur.com/KpLUj.png" rel="nofollow"><img src="http://i.stack.imgur.com/KpLUj.png" alt="Values to fit"></a></p> <p>This is my code:</p> <pre><code> for i in range(-area,area): stDev1= [] for j in range(-area,area): stDev0 = stDev[i+i0][j+j0] stDev1.append(stDev0) slices[i] = stDev1 fitV = [] xV = [] for l in range(-area,area): y = np.asarray(slices[l]) x = np.arange(0,2*area,1) for m in range(-area,area): fitV.append(slices[m][l]) xV.append(l) fit = np.polyfit(xV,fitV,4) yfit = function(fit,area) x100 = np.arange(0,100,1) plt.plot(xV,fitV,'.') plt.savefig("fits1.png") def function(fit,area): yfit = [] for x in range(-area,area): yfit.append(fit[0]+fit[1]*x+fit[2]*x**2+fit[3]*x**3+fit[4]*x**4) return(yfit) i0 = 400 j0 = 400 area = 50 stdev = 2d np.array([1300][800]) #just an image of "noise" feel free to add any image // 2d np array you like. </code></pre> <p>This yields: <a href="http://i.stack.imgur.com/eFXmo.png" rel="nofollow"><img src="http://i.stack.imgur.com/eFXmo.png" alt="Wrong fit, in green are the same values I plotted in image 1"></a></p> <p>Obviously this is completely wrong? I assume I misunderstand the concept of polyfit? From the doc the requirement is that I feed it with two arrays of shape x[i] y[i]? My values in </p> <pre><code> xV = [ x_1_-50,x_1_-49,...,x_1_49,x_2_-50,...,x_49_49] </code></pre> <p>and my ys are: </p> <pre><code> fitV = [y_1_-50,y_1_-49,...,y_1_49,...y_2_-50,...,y_2_49] </code></pre>
0
2016-09-23T12:21:42Z
39,667,945
<p>I do not completely understand your program. In the future, it would be helpful if you were to distill your issue to an <a href="http://stackoverflow.com/help/mcve">MCVE</a>. But here are some thoughts:</p> <ol> <li><p>It seems, in your data, that for a given value of <em>x</em> there are multiple values of <em>y</em>. Given (<em>x</em>,<em>y</em>) data, <code>polyfit</code> returns a tuple that represents a polynomial <em>function</em>, but no function can map a single value of <em>x</em> onto multiple values of <em>y</em>. As a first step, consider collapsing each set of <em>y</em> values into a single representative value using, for example, the mean, median, or mode. Or perhaps, in your domain, there's a more natural way to do this. </p></li> <li><p>Second, there is an idiomatic way to use the pair of functions <code>np.polyfit</code> and <code>np.polyval</code>, and you're not using them in the standard way. Of course, numerous useful departures from this pattern exist, but first make sure you understand the basic pattern of these two functions.</p> <p>a. Given your measurements <em>y_data</em>, taken at times or locations <em>x_data</em>, plot them and make a guess as to the order of the fit. That is, does it look like a line? Like a parabola? Let's assume you believe your data to be parabolic, and that you'll use a second order polynomial fit.</p> <p>b. Make sure that your arrays are sorted in order of increasing <code>x</code>. There are many ways to do this, but <code>np.argsort</code> is an easy one.</p> <p>c. Run <code>polyfit</code>: <code>p = polyfit(x_data,y_data,2)</code>, which returns a tuple containing the 2nd, 1st, and 0th order coefficients in <code>p</code>, <code>(c2,c1,c0)</code>.</p> <p>d. In the idiomatic use of <code>polyfit</code> and <code>polyval</code>, next you would generate your fit: <code>polyval(p,x_data)</code>. 
Or perhaps you want the fit to be sampled more coarsely or finely, in which case you might take a subset of <code>x_data</code> or interpolate more values in <code>x_data</code>.</p></li> </ol> <p>A complete example is below.</p> <pre><code>import numpy as np from matplotlib import pyplot as plt # these are your measurements, unsorted x_data = np.array([18, 6, 9, 12 , 3, 0, 15]) y_data = np.array([583.26347805, 63.16059915, 100.94286909, 183.72581827, 62.24497418, 134.99558191, 368.78421529]) # first, sort both vectors in increasing-x order: sorted_indices = np.argsort(x_data) x_data = x_data[sorted_indices] y_data = y_data[sorted_indices] # now, plot and observe the parabolic shape: plt.plot(x_data,y_data,'ks') plt.show() # generate the 2nd order fitting polynomial: p = np.polyfit(x_data,y_data,2) # make a more finely sampled x_fit vector with, for example # 1024 equally spaced points between the first and last # values of x_data x_fit = np.linspace(x_data[0],x_data[-1],1024) # now, compute the fit using your polynomial: y_fit = np.polyval(p,x_fit) # and plot them together: plt.plot(x_data,y_data,'ks') plt.plot(x_fit,y_fit,'b--') plt.show() </code></pre> <p>Hope that helps.</p>
1
2016-09-23T19:08:30Z
[ "python", "numpy", "data-fitting" ]
Deciding if a Point is Inside a Polygon python
39,660,851
<p>I am trying to detect if a given point (x,y) is in a polygon given as an n*2 array. But it seems that some points on the borders of the polygon are reported as not included.</p> <pre><code>def point_inside_polygon(x,y,poly): n = len(poly) inside = False p1x,p1y = poly[0] for i in range(n+1): p2x,p2y = poly[i % n] if y &gt; min(p1y,p2y): if y &lt;= max(p1y,p2y): if x &lt;= max(p1x,p2x): if p1y != p2y: xinters = (y-p1y)*(p2x-p1x)/float((p2y-p1y))+p1x if p1x == p2x or x &lt;= xinters: inside = not inside p1x,p1y = p2x,p2y return inside </code></pre>
1
2016-09-23T12:31:52Z
39,663,561
<p>You may use the <code>contains_point</code> function from <code>matplotlib.path</code> with a small negative and positive radius (a small trick). Something like this:</p> <pre><code>import matplotlib.path as mplPath import numpy as np crd = np.array([[0,0], [0,1], [1,1], [1,0]])# poly bbPath = mplPath.Path(crd) pnts = [[0.0, 0.0],[1,1],[0.0,0.5],[0.5,0.0]] # points on edges r = 0.001 # accuracy isIn = [ bbPath.contains_point(pnt,radius=r) or bbPath.contains_point(pnt,radius=-r) for pnt in pnts] </code></pre> <p>The result is</p> <pre><code>[True, True, True, True] </code></pre> <p>By default (or <code>r=0</code>) all the points on borders are not included, and the result is</p> <pre><code>[False, False, False, False] </code></pre>
0
2016-09-23T14:42:24Z
[ "python", "polygon", "polygons", "point-in-polygon" ]
What method is equivalent to_internal_value in DRF 2?
39,660,927
<p>I can not find this in the documentation. </p> <p>I remember that I used such a method in DRF 2 but can not recall the name.</p>
0
2016-09-23T12:35:53Z
39,694,640
<p>This was <code>from_native</code> as described in the <a href="http://www.django-rest-framework.org/topics/3.0-announcement/#changes-to-the-custom-field-api" rel="nofollow">release notes</a></p>
0
2016-09-26T04:14:28Z
[ "python", "django", "django-rest-framework" ]
Error when using importlib.util to check for library
39,660,934
<p>I'm trying to use the importlib library to verify whether the nmap library is installed on the computer executing the script in Python 3.5.2</p> <p>I'm trying to use <code>importlib.util.find_spec("nmap")</code> but receive the following error.</p> <pre><code>&gt;&gt;&gt; import importlib &gt;&gt;&gt; importlib.util.find_spec("nmap") Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: module 'importlib' has no attribute 'util' </code></pre> <p>Can someone tell me where I'm going wrong?</p> <p><strong>EDIT</strong></p> <p>I was able to get the function to work using the following code.</p> <pre><code>#!/usr/bin/pythonw import importlib from importlib import util #check to see if nmap module is installed find_nmap = util.find_spec("nmap") if find_nmap is None: print("Error") </code></pre>
2
2016-09-23T12:36:21Z
39,661,116
<p>Try this:</p> <pre><code>from importlib import util util.find_spec("nmap") </code></pre> <p>I intend to investigate, but honestly I don't know why one works and the other doesn't. Also, observe the following interactive session:</p> <pre><code>&gt;&gt;&gt; import importlib &gt;&gt;&gt; importlib.util Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: module 'importlib' has no attribute 'util' &gt;&gt;&gt; from importlib import util &gt;&gt;&gt; util &lt;module 'importlib.util' from '/usr/lib/python3.5/importlib/util.py'&gt; &gt;&gt;&gt; importlib.util &lt;module 'importlib.util' from '/usr/lib/python3.5/importlib/util.py'&gt; </code></pre> <p>So...yeah. I am sure this makes perfect sense to someone, but not to me. I will update once I figure it out.</p> <h1>Update:</h1> <p>Comparing this to something like:</p> <pre><code>&gt;&gt;&gt; import datetime &gt;&gt;&gt; datetime &lt;module 'datetime' from '/usr/lib/python3.5/datetime.py'&gt; &gt;&gt;&gt; datetime.datetime &lt;class 'datetime.datetime'&gt; </code></pre> <p>I think the difference is that in this case the first <code>datetime</code> is a module and the second is a class, while in the <code>importlib.util</code> case both are modules. So perhaps <code>module.module</code> is not OK unless the code from both modules has been loaded, while <code>module.class</code> is OK, because the class code is loaded when the module is imported.</p> <h1>Update #2</h1> <p>Nope, it seems like in many cases <code>module.module</code> is fine. 
For example:</p> <pre><code>&gt;&gt;&gt; import urllib &gt;&gt;&gt; urllib &lt;module 'urllib' from '/usr/lib/python3.5/urllib/__init__.py'&gt; &gt;&gt;&gt; urllib.error &lt;module 'urllib.error' from '/usr/lib/python3.5/urllib/error.py'&gt; </code></pre> <p>So perhaps it is something specific to <code>importlib</code>.</p> <h1>Update #3</h1> <p>As <strong>@kfb</strong> pointed out in the comments, it does seem to be related to <code>importlib</code> specifically. See the following comment from the <a href="https://hg.python.org/cpython/file/tip/Lib/importlib/__init__.py" rel="nofollow"><code>__init__.py</code> for <code>importlib</code></a>:</p> <pre><code># Until bootstrapping is complete, DO NOT import any modules that attempt # to import importlib._bootstrap (directly or indirectly). Since this # partially initialised package would be present in sys.modules, those # modules would get an uninitialised copy of the source version, instead # of a fully initialised version (either the frozen one or the one # initialised below if the frozen one is not available). </code></pre> <p><code>importlib/util.py</code> <em>does</em> import <code>importlib._bootstrap</code> so I would assume that this is related. If my understanding is correct, when you do <code>import importlib</code> the submodules will be initialized, but are not initialized for the <code>importlib</code> module object that you have imported. At this point, if you do <code>dir(importlib)</code> you will not see <code>util</code>. 
Interestingly, <em>after</em> you have tried to access <code>importlib.util</code> and gotten an <code>AttributeError</code>, <code>util</code> (along with the other submodules) gets loaded/initialized, and now you <em>can</em> access <code>importlib.util</code>!</p> <pre><code>&gt;&gt;&gt; import importlib &gt;&gt;&gt; dir(importlib) ['_RELOADING', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__import__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_bootstrap', '_bootstrap_external', '_imp', '_r_long', '_w_long', 'find_loader', 'import_module', 'invalidate_caches', 'reload', 'sys', 'types', 'warnings'] &gt;&gt;&gt; importlib.util Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: module 'importlib' has no attribute 'util' &gt;&gt;&gt; importlib.util &lt;module 'importlib.util' from '/usr/lib/python3.5/importlib/util.py'&gt; &gt;&gt;&gt; dir(importlib) ['_RELOADING', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__import__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_bootstrap', '_bootstrap_external', '_imp', '_r_long', '_w_long', 'abc', 'find_loader', 'import_module', 'invalidate_caches', 'machinery', 'reload', 'sys', 'types', 'util', 'warnings'] </code></pre>
3
2016-09-23T12:44:59Z
[ "python", "python-3.x", "python-importlib" ]
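The `from importlib import util` pattern from the answer can be wrapped in a small helper for the availability check the question is after; a stdlib-only sketch (the helper name is made up, not part of importlib):

```python
from importlib import util

def module_available(name):
    # find_spec returns None when the module cannot be located; unlike an
    # import attempt, it does not execute the module, so there are no side effects
    return util.find_spec(name) is not None

print(module_available("json"))                   # stdlib module -> True
print(module_available("nmap_surely_not_there"))  # -> False
```

In the question's scenario, `if not module_available("nmap"): print("Error")` reproduces the intended check without triggering the `importlib.util` attribute surprise discussed above.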
Extracting ICCID from a string using regex
39,660,961
<p>I'm trying to return and print the ICCID of a SIM card in a device; the SIM cards are from various suppliers and therefore of differing lengths (either 19 or 20 digits). As a result, I'm looking for a regular expression that will extract the ICCID (in a way that's agnostic to non-word characters immediately surrounding it).</p> <p>Given that an ICCID is specified as a 19-20 digit string starting with "89", I've simply gone for:</p> <pre><code>(89\d{17,18}) </code></pre> <p>This was the most successful pattern that I'd tested (along with some patterns rejected for reasons below).</p> <p>In the string that I'm extracting it from, the ICCID is immediately followed by a carriage return and then a line feed, but some testing against terminating it with <code>\r</code>, <code>\n</code>, or even <code>\b</code> failed to work (the program that I'm using is an in-house one built on python, so I suspect that's what it's using for regex). Also, simply using <code>(\d{19,20})</code> ended up extracting the last 19 digits of a 20-digit ICCID (as the third and last valid match). Along the same lines, I ruled out <code>(\d{19,20})?</code> in principle, as I expect that to finish when it finds the first 19 digits.</p> <p><strong>So my question is:</strong> Should I use the pattern I've chosen, or is there a better expression (not using non-word characters to frame the string) that will return the longest substring of a variable-length string of digits?</p>
1
2016-09-23T12:37:42Z
39,661,185
<p>I'd go for</p> <pre><code>89\d{17,18}[^\d] </code></pre> <p>This should prefer 18 digits, but 17 would also suffice. After that, no more other numeric characters would be allowed.</p> <p>Only limitation: there must be at least one more character after the ICCID (which should be okay from what you described).</p> <p>Be aware that any longer number sequence carrying "89" followed by 17 or 18 numerical characters would also match.</p>
1
2016-09-23T12:48:14Z
[ "python", "regex", "string", "iccid" ]
Extracting ICCID from a string using regex
39,660,961
<p>I'm trying to return and print the ICCID of a SIM card in a device; the SIM cards are from various suppliers and therefore of differing lengths (either 19 or 20 digits). As a result, I'm looking for a regular expression that will extract the ICCID (in a way that's agnostic to non-word characters immediately surrounding it).</p> <p>Given that an ICCID is specified as a 19-20 digit string starting with "89", I've simply gone for:</p> <pre><code>(89\d{17,18}) </code></pre> <p>This was the most successful pattern that I'd tested (along with some patterns rejected for reasons below).</p> <p>In the string that I'm extracting it from, the ICCID is immediately followed by a carriage return and then a line feed, but some testing against terminating it with <code>\r</code>, <code>\n</code>, or even <code>\b</code> failed to work (the program that I'm using is an in-house one built on python, so I suspect that's what it's using for regex). Also, simply using <code>(\d{19,20})</code> ended up extracting the last 19 digits of a 20-digit ICCID (as the third and last valid match). Along the same lines, I ruled out <code>(\d{19,20})?</code> in principle, as I expect that to finish when it finds the first 19 digits.</p> <p><strong>So my question is:</strong> Should I use the pattern I've chosen, or is there a better expression (not using non-word characters to frame the string) that will return the longest substring of a variable-length string of digits?</p>
1
2016-09-23T12:37:42Z
39,661,484
<p>If the engine behind the scenes is really Python, and there can be any non-digit chars around the value you need to extract, use lookarounds to restrict the context around the values:</p> <pre><code>(?&lt;!\d)89\d{17,18}(?!\d) ^^^^^^^ ^^^^^^ </code></pre> <p>The <code>(?&lt;!\d)</code> negative lookbehind will require the absence of a digit before the match and the <code>(?!\d)</code> negative lookahead will require the absence of a digit after that value.</p> <p>See <a href="https://regex101.com/r/xU6bN5/1" rel="nofollow">this regex demo</a></p>
1
2016-09-23T13:02:55Z
[ "python", "regex", "string", "iccid" ]
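A quick self-contained check of the lookaround pattern from the answer above (the ICCID-like digit strings are invented):

```python
import re

# Lookarounds from the answer: the match may not be preceded or followed by a digit
pattern = re.compile(r"(?<!\d)89\d{17,18}(?!\d)")

sample = ("ICCID: 8901234567890123456\r\n"     # 19 digits -> matches
          "ICCID: 89012345678901234567\r\n"    # 20 digits -> matches
          "S/N: 123489012345678901234567")     # embedded digits -> rejected

print(pattern.findall(sample))
```

Both standalone ICCIDs are returned in full, while the last line is skipped because every `89` inside it is preceded by another digit, which fails the lookbehind.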
Extracting ICCID from a string using regex
39,660,961
<p>I'm trying to return and print the ICCID of a SIM card in a device; the SIM cards are from various suppliers and therefore of differing lengths (either 19 or 20 digits). As a result, I'm looking for a regular expression that will extract the ICCID (in a way that's agnostic to non-word characters immediately surrounding it).</p> <p>Given that an ICCID is specified as a 19-20 digit string starting with "89", I've simply gone for:</p> <pre><code>(89\d{17,18}) </code></pre> <p>This was the most successful pattern that I'd tested (along with some patterns rejected for reasons below).</p> <p>In the string that I'm extracting it from, the ICCID is immediately followed by a carriage return and then a line feed, but some testing against terminating it with <code>\r</code>, <code>\n</code>, or even <code>\b</code> failed to work (the program that I'm using is an in-house one built on python, so I suspect that's what it's using for regex). Also, simply using <code>(\d{19,20})</code> ended up extracting the last 19 digits of a 20-digit ICCID (as the third and last valid match). Along the same lines, I ruled out <code>(\d{19,20})?</code> in principle, as I expect that to finish when it finds the first 19 digits.</p> <p><strong>So my question is:</strong> Should I use the pattern I've chosen, or is there a better expression (not using non-word characters to frame the string) that will return the longest substring of a variable-length string of digits?</p>
1
2016-09-23T12:37:42Z
39,661,691
<pre><code>(\d+)\D+ </code></pre> <p>seems like it would do the trick readily. <code>(\d+)</code> would capture the 20 numbers. <code>\D+</code> would match anything else afterwards. </p>
0
2016-09-23T13:12:44Z
[ "python", "regex", "string", "iccid" ]
How can I speed up nearest neighbor search with python?
39,660,968
<p>I have a code, which calculates the nearest voxel (which is unassigned) to a voxel ( which is assigned). That is i have an array of voxels, few voxels already have a scalar (1,2,3,4....etc) values assigned, and few voxels are empty (lets say a value of '0'). This code below finds the nearest assigned voxel to an unassigned voxel and assigns that voxel the same scalar. So, a voxel with a scalar '0' will be assigned a value (1 or 2 or 3,...) based on the nearest voxel. This code below works, but it takes too much time. Is there an alternative to this ? or if you have any feedback on how to improve it further?</p> <p>""" #self.voxels is a 3D numpy array""" </p> <pre><code>def fill_empty_voxel1(self,argx, argy, argz): """ where # argx, argy, argz are the voxel location where the voxel is zero""" argx1, argy1, argz1 = np.where(self.voxels!=0) # find the non zero voxels a = np.column_stack((argx1, argy1, argz1)) b = np.column_stack((argx, argy, argz)) tree = cKDTree(a, leafsize=a.shape[0]+1) distances, ndx = tree.query(b, k=1, distance_upper_bound= self.mean) # self.mean is a mean radius search value argx2, argy2, argz2 = a[ndx][:][:,0],a[ndx][:][:,1],a[ndx][:][:,2] self.voxels[argx,argy,argz] = self.voxels[argx2,argy2,argz2] # update the voxel array </code></pre> <h1>Example</h1> <p>""" Here is a small example with small dataset:"""</p> <pre><code>import numpy as np from scipy.spatial import cKDTree import timeit voxels = np.zeros((10,10,5), dtype=np.uint8) voxels[1:2,:,:] = 5. voxels[5:6,:,:] = 2. voxels[:,3:4,:] = 1. voxels[:,8:9,:] = 4. argx, argy, argz = np.where(voxels==0) tic=timeit.default_timer() argx1, argy1, argz1 = np.where(voxels!=0) # non zero voxels a = np.column_stack((argx1, argy1, argz1)) b = np.column_stack((argx, argy, argz)) tree = cKDTree(a, leafsize=a.shape[0]+1) distances, ndx = tree.query(b, k=1, distance_upper_bound= 5.) 
argx2, argy2, argz2 = a[ndx][:][:,0],a[ndx][:][:,1],a[ndx][:][:,2] voxels[argx,argy,argz] = voxels[argx2,argy2,argz2] toc=timeit.default_timer() timetaken = toc - tic #elapsed time in seconds print '\nTime to fill empty voxels', timetaken </code></pre> <h1>for visualization:</h1> <pre><code>from mayavi import mlab data = voxels.astype('float') scalar_field = mlab.pipeline.scalar_field(data) iso_surf = mlab.pipeline.iso_surface(scalar_field) surf = mlab.pipeline.surface(scalar_field) vol = mlab.pipeline.volume(scalar_field,vmin=0,vmax=data.max()) mlab.outline() mlab.show() </code></pre> <p>Now, if I have the dimension of the voxels array as something like (500,500,500), then the time it takes to compute the nearest search is no longer efficient. How can I overcome this? Could parallel computation reduce the time (I have no idea whether I can parallelize the code, if you do, please let me know)?</p> <h1>A potential fix:</h1> <p>I could substantially improve the computation time by adding the n_jobs = -1 parameter in the cKDTree query.</p> <pre><code>distances, ndx = tree.query(b, k=1, distance_upper_bound= 5., n_jobs=-1) </code></pre> <p>I was able to compute the distances in less than an hour for an array of (400,100,100) on a 13 core CPU. I tried with 1 processor and it took around 18 hours to complete the same array. Thanks to @gsamaras for the answer!</p>
3
2016-09-23T12:38:06Z
39,680,380
<p>It would be interesting to try <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html" rel="nofollow">sklearn.neighbors.NearestNeighbors</a>, which offers an <code>n_jobs</code> parameter:</p> <blockquote> <p>The number of <strong>parallel jobs</strong> to run for neighbors search.</p> </blockquote> <p>This package also provides the Ball Tree algorithm, which you can test versus the kd-tree one; however, my hunch is that the kd-tree will be better (but that again does depend on your data, so research that!).</p> <hr> <p>You might also want to use <em>dimensionality reduction</em>, which is easy. The idea is that you reduce your dimensions, thus your data contain less info, so that tackling the Nearest Neighbour Problem can be done much faster. Of course, there is a trade-off here: accuracy!</p> <p>You might/will get less accuracy with dimensionality reduction, but it might be worth a try. However, this usually applies in a high dimensional space, and <em>you are just in 3D</em>. So I don't know if <em>for your specific case</em> it would make sense to use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html" rel="nofollow">sklearn.decomposition.PCA</a>.</p> <hr> <p>A remark:</p> <p>If you really want high performance though, you won't get it with <a href="/questions/tagged/python" class="post-tag" title="show questions tagged &#39;python&#39;" rel="tag">python</a>, you could switch to <a href="/questions/tagged/c%2b%2b" class="post-tag" title="show questions tagged &#39;c++&#39;" rel="tag">c++</a>, and use <a href="http://doc.cgal.org/latest/Spatial_searching/index.html" rel="nofollow">CGAL</a> for example.</p>
2
2016-09-24T19:54:47Z
[ "python", "performance", "parallel-processing", "nearest-neighbor", "kdtree" ]
Spyder logging output only in file not IPython Console
39,661,084
<p>I'm having a problem with logging in Spyder.</p> <p>My code has some important output like a progressbar and some logging. I don't want the logging to be in the IPython console output, just in the log file.</p> <p>There is a logging.conf file because I need the TimedRotatingFileHandler and a formatter.</p> <p>The code looks like this</p> <pre><code>print('sth important') logger.info('first print worked') print('just sth') </code></pre> <p>I want the output to be like</p> <pre><code>sth important just sth </code></pre> <p>and the logfile "output.log"</p> <pre><code>date - INFO - first print worked </code></pre> <p>The problem is: when I set the logger &amp; handler level in .conf to INFO, the output in the IPython console is</p> <pre><code>sth important date - INFO - first print worked just sth </code></pre> <p>logger level WARNING, handler level INFO: no output in either the console or the file</p> <p>logger level INFO, handler level WARNING: output in console, "output.log" empty</p> <p>In the python.org logging tutorial this works with logging.basicConfig, but how can I combine this with handlers and formatters?</p>
0
2016-09-23T12:43:32Z
39,787,835
<p>There seems to be a problem with the <code>TimedRotatingFileHandler</code>. It works fine with the plain <code>FileHandler</code>. Just build a date-stamped logfile name yourself with the <code>datetime</code> and <code>os</code> packages. This may not be the best solution, but at least it is working.</p>
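<p>A minimal sketch of that workaround (names are invented; note the extra <code>propagate = False</code>, which stops records from bubbling up to IPython's root/console handler and is often the missing piece in Spyder):</p>

```python
import datetime
import logging
import os
import tempfile

def make_file_logger(logdir):
    """Build a logger that writes only to a date-stamped file,
    leaving the console free for print() output."""
    logname = os.path.join(
        logdir, "output_%s.log" % datetime.date.today().isoformat())
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.propagate = False  # don't hand records to the root (console) handler
    handler = logging.FileHandler(logname)
    handler.setFormatter(
        logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
    logger.addHandler(handler)
    return logger, logname

logdir = tempfile.mkdtemp()
logger, logname = make_file_logger(logdir)
print('sth important')             # goes to the console only
logger.info('first print worked')  # goes to the file only
```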
0
2016-09-30T09:11:08Z
[ "python", "logging", "console", "spyder" ]
Optimal way to add small lists to Pandas DataFrame
39,661,198
<p>I'm parsing some logs that contain HTTP transactions into a Pandas DataFrame. Each row is one transaction, so one column has the IP address, the other has the hostname, etc. For each row (log entry) I'd like to extract the header parameters into a list, and store that list with the rest of the info for that row.</p> <p>The question is: how to store the list of parameters so that it can be easily cross-referenced with the rest of the data from the log?</p> <p>For the sake of example, imagine I have this dataframe, where a user's list of pets is stored as a string, and we want to parse out the individual animals and store it as a list. The idea is to store the result of parsing the string so that the actual parse routine only has to run once.</p> <pre><code># Original Dataframe User | PetsString ---------------------- Mary | 'dog/cat/rat' John | 'dog/lizard' </code></pre> <p><strong>Method 1)</strong> I can add a column to the dataframe and store the list in this column. </p> <pre><code>User | PetsString | PetsList -------------------------------------------- Mary | 'dog/cat/rat' | ['dog','cat','rat'] John | 'dog/lizard' | ['dog','lizard'] </code></pre> <p><strong>Method 2)</strong> I can create another dataframe that has the list entries, with a column indicating the position of the log in the original dataframe for cross-referencing. I'd like to avoid this since I assume that iterating over two dataframes simultaneously is slower than iterating over a single large list. E.g.</p> <pre><code>User | PetsString ----------------------- Mary | 'dog/cat/rat' John | 'dog/lizard' #Separate DataFrame for cross-reference User | Pet ----------------- Mary | 'dog' Mary | 'cat' Mary | 'rat' John | 'dog' John | 'lizard' </code></pre> <p><strong>Method 3)</strong> Someone suggested adding, say, 50 columns to my existing dataframe and storing each list item in one of the columns. I don't expect to have more than 50 header parameters.
This seems speed-optimal but has the nasty limitation in the number of columns. E.g.</p> <pre><code>User | PetsString | Pet0 | Pet1 | Pet2 ------------------------------------------------------ Mary | 'dog/cat/rat' | 'dog' | 'cat' | 'rat' John | 'dog/lizard' | 'dog' | 'lizard' | </code></pre> <p>I have two questions:</p> <p><strong>(i)</strong> Assuming I need to compute a function that will read a row and all parameters from the corresponding list, which of the three layouts is speed-optimal?</p> <p><strong>(ii)</strong> Which of these is space-optimal? I'm not sure how Pandas works with objects, but I believe that if I use method 1, Pandas will create a column that is as wide as the longest list. Similarly, Method 3 will have to allocate space for a full 'Pet2' column, even if John doesn't have one</p> <p>I know a lot of these things may be specific to my particular processor, cache size, use-case, etc. but even a general understanding of the tradeoffs would be very useful to me</p> <p>thanks in advance for your help!</p>
1
2016-09-23T12:48:54Z
39,661,899
<p>The values in the columns of a Pandas DataFrame are stored in homogeneous numpy arrays. Consider the following:</p> <pre><code>In [95]: pd.DataFrame({'a': ['foo', 'bar/baz']}).a.dtype Out[95]: dtype('O') In [96]: pd.DataFrame({'a': [['foo'], ['bar', 'baz']]}).a.dtype Out[96]: dtype('O') </code></pre> <p>This shows that:</p> <ol> <li><p>When you store strings of different lengths, Pandas uses a numpy array of objects (numpy also has string arrays for equally fixed-size strings, but Pandas doesn't use them).</p></li> <li><p>When you store lists, Pandas also uses a numpy object array.</p></li> </ol> <p>Based on this, I think that your first option will have good memory and speed performance. Pandas and numpy have an advantage over regular Python data structures in things like huge numerical sequences, where the Python overhead of a single number object is huge. A Python <code>list</code> of strings is pretty efficient, and a numpy array of (non fixed-size) strings doesn't really have an advantage over it.</p> <p>In fact, you might consider whether Pandas offers any advantages here at all over plain vanilla Python. Why not a <code>dict</code> mapping strings to <code>list</code>s of strings, for example?</p>
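<p>As a concrete sketch of option 1 (using the question's toy data), the parse can be vectorized with <code>str.split</code>, which stores plain Python lists in an object column:</p>

```python
import pandas as pd

df = pd.DataFrame({'User': ['Mary', 'John'],
                   'PetsString': ['dog/cat/rat', 'dog/lizard']})
# one pass over the column; each resulting cell is an ordinary Python list
df['PetsList'] = df['PetsString'].str.split('/')
```

<p>No padding to the longest list occurs; rows simply hold lists of different lengths.</p>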
2
2016-09-23T13:22:45Z
[ "python", "pandas" ]
Python 2.7 BeautifulSoup , email scraping
39,661,367
<p>Hope you are all well. I'm new to Python and using Python 2.7.</p> <p>I'm trying to extract only the mailto links from this public website business directory: <a href="http://www.tecomdirectory.com/companies.php?segment=&amp;activity=&amp;search=category&amp;submit=Search" rel="nofollow">http://www.tecomdirectory.com/companies.php?segment=&amp;activity=&amp;search=category&amp;submit=Search</a><br> The emails I'm looking for are the ones mentioned in every widget from a-z in the full directory. This directory does not have an API, unfortunately. I'm using BeautifulSoup, but with no success so far.<br> Here is my code:</p> <pre><code>import urllib from bs4 import BeautifulSoup website = raw_input("Type website here:&gt;\n") html = urllib.urlopen('http://'+ website).read() soup = BeautifulSoup(html) tags = soup('a') for tag in tags: print tag.get('href', None) </code></pre> <p>What I get is just the site's own links, like <a href="http://www.tecomdirectory.com" rel="nofollow">http://www.tecomdirectory.com</a>, with other hrefs rather than the mailto links or the websites in the widgets. I also tried replacing soup('a') with soup ('target'), but no luck! Can anybody help me please?</p>
0
2016-09-23T12:56:35Z
39,662,163
<p>You cannot just find every anchor; you need to specifically look for "mailto:" in the href. You can use the CSS selector <code>a[href^="mailto:"]</code>, which finds <em>anchor</em> tags whose <em>href</em> starts with <code>mailto:</code>:</p> <pre><code>import requests from bs4 import BeautifulSoup soup = BeautifulSoup(requests.get("http://www.tecomdirectory.com/companies.php?segment=&amp;activity=&amp;search=category&amp;submit=Search").content, "html.parser") print([a["href"] for a in soup.select('a[href^="mailto:"]')]) </code></pre> <p>Or extract the text:</p> <pre><code>print([a.text for a in soup.select('a[href^="mailto:"]')]) </code></pre> <p>Using <code>find_all("a")</code> you would need a regex to achieve the same:</p> <pre><code>import re soup.find_all("a", href=re.compile(r"^mailto:")) </code></pre>
1
2016-09-23T13:34:31Z
[ "python", "python-2.7", "web-scraping", "beautifulsoup" ]
Trying to loop over colums in DataFrame and strip Dollar Sign
39,661,428
<p>My DataFrame has way too many columns to manually type in all of the columns separately. Therefore I am trying to loop through them quickly and get rid of the dollar signs and the commas in the large numbers. This is the code that I have so far: </p> <pre><code>for column in df1: df1[column] = df1[column].str.lstrip('$') </code></pre> <p>and I am getting the error: </p> <p>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</p>
1
2016-09-23T12:59:16Z
39,661,467
<p>You can filter just the string columns using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html" rel="nofollow"><code>select_dtypes</code></a> (using the <code>'object'</code> dtype string, which also works on numpy versions where <code>np.object</code> has been removed):</p> <pre><code>for col in df.select_dtypes(['object']): df[col] = df[col].str.lstrip('$') </code></pre> <p>Example:</p> <pre><code>In [309]: df = pd.DataFrame({'int':np.arange(3), 'float':[0.1,2.3,4.0], 'str':['$d', 'a$', 'asd']}) df Out[309]: float int str 0 0.1 0 $d 1 2.3 1 a$ 2 4.0 2 asd In [310]: for col in df.select_dtypes(['object']): df[col] = df[col].str.lstrip('$') df Out[310]: float int str 0 0.1 0 d 1 2.3 1 a$ 2 4.0 2 asd </code></pre>
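<p>Since the question also mentions commas in large numbers, a hedged extension of the same loop (a sketch with made-up data): strip both characters with <code>str.replace</code> and convert the result with <code>pd.to_numeric</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['$1,200', '$350', '$4,000'], 'b': [1, 2, 3]})
for col in df.select_dtypes(['object']):
    cleaned = df[col].str.replace('[$,]', '', regex=True)  # drop $ and ,
    df[col] = pd.to_numeric(cleaned, errors='coerce')      # non-numbers -> NaN
```
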
3
2016-09-23T13:01:43Z
[ "python", "pandas", "dataframe" ]
Get the corresponding XML nodes with xpath
39,661,533
<p>I have an XML file (actually an xliff file) where a node has two child nodes with an identical substructure (which is not known a priori, can be very complex and changes for each <code>&lt;trans-unit&gt;</code>). I'm working with Python and the lxml library... Example:</p> <pre><code>&lt;trans-unit id="tu4" xml:space="preserve"&gt; &lt;seg-source&gt; &lt;mrk mid="0" mtype="seg"&gt; &lt;g id="1"&gt;...&lt;/g&gt; &lt;g id="2"&gt;...&lt;/g&gt; &lt;g id="3"&gt;...&lt;/g&gt; &lt;bx id="7"/&gt;... &lt;/mrk&gt; &lt;mrk mid="1" mtype="seg"&gt;...&lt;/mrk&gt; &lt;mrk mid="2" mtype="seg"&gt;... &lt;ex id="7"/&gt; &lt;g id="8"&gt; FROM HERE &lt;/g&gt; &lt;/mrk&gt; &lt;/seg-source&gt; &lt;target xml:lang="en"&gt; &lt;mrk mid="0" mtype="seg"&gt; &lt;g id="1"&gt;...&lt;/g&gt; &lt;g id="2"&gt;...&lt;/g&gt; &lt;g id="3"&gt;...&lt;/g&gt; &lt;bx id="7"/&gt;... &lt;/mrk&gt; &lt;mrk mid="1" mtype="seg"&gt;...&lt;/mrk&gt; &lt;mrk mid="2" mtype="seg"&gt;... &lt;ex id="7"/&gt; &lt;g id="8"&gt; TO HERE &lt;/g&gt; &lt;/mrk&gt; &lt;/target&gt; &lt;/trans-unit&gt; </code></pre> <p>As you can see, the two nodes <code>&lt;seg-source&gt;</code> and <code>&lt;target&gt;</code> have exactly the same sub-structure. My goal is to navigate to each node of <code>&lt;seg-source&gt;</code>, get the text and the tail of that node (and I know how to do that with xpath), translate them and finally (and THIS IS what I don't know how to do) assign the translation to the corresponding node in the <code>&lt;target&gt;</code>...</p> <p>In other words... suppose I get the node "FROM HERE"... how can I get the node "TO HERE"?</p>
1
2016-09-23T13:05:13Z
39,661,906
<p>If you want to pair them all you could just zip the nodes together so you can access the matching nodes from each:</p> <pre><code>from lxml import etree tree = etree.fromstring(x) nodes = iter(tree.xpath("//*[self::seg-source or self::target]")) for seq, tar in zip(nodes, nodes): # each node will be the matching nodes from each seg-source and target print(seq.xpath(".//*")) print(tar.xpath(".//*")) </code></pre> <p>Since there are only two in any/each <code>trans-unit</code> you can just use <code>nodes = iter(tree.xpath("//trans-unit/*"))</code> so the names of the nodes inside don't matter.</p> <pre><code>nodes = iter(tree.xpath("//trans-unit/*")) for seq, tar in zip(nodes, nodes): print(seq.xpath(".//*")) print(tar.xpath(".//*")) </code></pre> <p>If we run the code on your sample and print each id node you can see the output gets one from each:</p> <pre><code>In [2]: from lxml import etree In [3]: tree = etree.fromstring(x) In [4]: nodes = iter(tree.xpath("//trans-unit/*")) In [5]: for seq, tar in zip(nodes, nodes): ...: print(seq.xpath(".//g[@id='8']/text()")) ...: print(tar.xpath(".//g[@id='8']/text()")) ...: [' FROM HERE '] [' TO HERE '] </code></pre> <p>Each node is the corresponding node from each child of trans-unit:</p> <pre><code>In [7]: for seq, tar in zip(nodes, nodes): ...: print(seq.tag, tar.tag) ...: for n1, n2 in zip(seq.xpath(".//*"),tar.xpath(".//*")): ...: print(n1.tag, n2.tag) ...: ('seg-source', 'target') ('mrk', 'mrk') ('g', 'g') ('g', 'g') ('g', 'g') ('bx', 'bx') ('mrk', 'mrk') ('mrk', 'mrk') ('ex', 'ex') ('g', 'g') </code></pre>
0
2016-09-23T13:23:03Z
[ "python", "xml", "xpath", "lxml", "xliff" ]
PyGobject error
39,661,560
<pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- from gi.repository import Gtk class ourwindow(Gtk.Window): def __init__(self): Gtk.Window.__init__(self, title="My Hello World Program") Gtk.Window.set_default_size(self, 400,325) Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER) button1 = Gtk.Button("Hello, World!") button1.connect("clicked", self.whenbutton1_clicked) self.add(button1) def whenbutton1_clicked(self, button): print "Hello, World!" window = ourwindow() window.connect("delete-event", Gtk.main_quit) window.show_all() Gtk.main() </code></pre> <p>This Python+GTK code is giving me the following error:</p> <pre><code>./pygtk.py ./pygtk.py:3: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded. from gi.repository import Gtk Traceback (most recent call last): File "./pygtk.py", line 4, in &lt;module&gt; class ourwindow(Gtk.Window): File "./pygtk.py", line 10, in ourwindow button1.connect("clicked", self.whenbutton1_clicked) NameError: name 'self' is not defined </code></pre> <p>It also gives me an indentation error. I am new to Python and GTK. Thanks in advance.</p>
0
2016-09-23T13:06:20Z
39,661,925
<p>This is most likely how it should be formatted:</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- from gi.repository import Gtk class ourwindow(Gtk.Window): def __init__(self): Gtk.Window.__init__(self, title="My Hello World Program") Gtk.Window.set_default_size(self, 400,325) Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER) button1 = Gtk.Button("Hello, World!") button1.connect("clicked", self.whenbutton1_clicked) self.add(button1) def whenbutton1_clicked(self, button): print "Hello, World!" window = ourwindow() window.connect("delete-event", Gtk.main_quit) window.show_all() Gtk.main() </code></pre> <p>The errors come from indentation: the <code>button1</code> lines must be indented inside <code>__init__</code>, and the methods must be indented inside the class body. I would definitely recommend reading a basic Python tutorial to at least understand the syntax. It is easier to do GUI stuff when you know the basics of the language.</p>
2
2016-09-23T13:23:52Z
[ "python", "gtk", "pygobject" ]
How to create permanent MS Access Query by Python 3.5.1?
39,661,582
<p>I have about 40 MS Access databases and have some trouble when I need to create or transfer an MS Access Query (as an object) from one db to the other dbs. So I tried to solve this problem with <code>pyodbc</code>, but as far as I can see, <code>pyodbc</code> doesn't support creating a new, permanent MS Access Query (object). I can connect to a db and create or delete tables/rows, but I can't create and save a new query.</p> <pre><code>import pyodbc odbc_driver = r"{Microsoft Access Driver (*.mdb, *.accdb)}" db_test1 = r'''..\Test #1.accdb''' db_test2 = r'''..\Test #2.accdb''' db_test3 = r'''..\Test #3.accdb''' db_test4 = r'''..\Test #4.accdb''' db_test_objects = [db_test1, db_test2, db_test3, db_test4] odbc_conn_str = "Driver=%s;DBQ=%s;" % (odbc_driver, db_file) print (odbc_conn_str) conn = pyodbc.connect(odbc_conn_str) odbc_cursor = conn.cursor() NewQuery = "CREATE TABLE TestTable(symbol varchar(15), leverage double)" odbc_cursor.execute(NewQuery) conn.commit() conn.close() </code></pre> <p>So, how can I create and save MS Access Queries <strong>as objects</strong> from Python? I tried to search for info in Google, but the answers were all related to <strong>running SQL code</strong>.</p> <p>In VBA this code looks like:</p> <pre><code>Public Sub CreateQueryDefX() Dim base(1 To 4) As String base(1) = "..\Test #1.accdb" base(2) = "..\Test #2.accdb" base(3) = "..\Test #3.accdb" base(4) = "..\Test #4.accdb" For i = LBound(base) To UBound(base) CurrentBase = base(i) Set dbo = OpenDatabase(CurrentBase) With dbo Set QueryNew = .CreateQueryDef("TestQuery", _ "SELECT * FROM TestTable") RefreshDatabaseWindow .Close End With Next i RefreshDatabaseWindow End Sub </code></pre> <p>Sorry for my English, it's not my native language :)</p> <p>By the way, I know how to solve this with VBA, but I'm interested in solving it with Python.</p> <p>Thank you.</p>
3
2016-09-23T13:07:23Z
39,662,166
<p>You can use a <a href="https://msdn.microsoft.com/en-us/library/bb177895(v=office.12).aspx" rel="nofollow">CREATE VIEW</a> statement to create a saved Select Query in Access. The pyodbc equivalent to your VBA example would be</p> <pre class="lang-python prettyprint-override"><code>crsr = conn.cursor() sql = """\ CREATE VIEW TestQuery AS SELECT * FROM TestTable """ crsr.execute(sql) </code></pre> <p>To delete that saved query you could simply execute a <a href="https://msdn.microsoft.com/en-us/library/bb177897(v=office.12).aspx" rel="nofollow">DROP VIEW</a> statement.</p> <p>For more information on DDL in Access see</p> <p><a href="https://msdn.microsoft.com/en-us/library/bb267262(v=office.12).aspx" rel="nofollow">Data Definition Language</a></p>
5
2016-09-23T13:34:37Z
[ "python", "vba", "ms-access", "pyodbc" ]
How to create permanent MS Access Query by Python 3.5.1?
39,661,582
<p>I have about 40 MS Access databases and have some trouble when I need to create or transfer an MS Access Query (as an object) from one db to the other dbs. So I tried to solve this problem with <code>pyodbc</code>, but as far as I can see, <code>pyodbc</code> doesn't support creating a new, permanent MS Access Query (object). I can connect to a db and create or delete tables/rows, but I can't create and save a new query.</p> <pre><code>import pyodbc odbc_driver = r"{Microsoft Access Driver (*.mdb, *.accdb)}" db_test1 = r'''..\Test #1.accdb''' db_test2 = r'''..\Test #2.accdb''' db_test3 = r'''..\Test #3.accdb''' db_test4 = r'''..\Test #4.accdb''' db_test_objects = [db_test1, db_test2, db_test3, db_test4] odbc_conn_str = "Driver=%s;DBQ=%s;" % (odbc_driver, db_file) print (odbc_conn_str) conn = pyodbc.connect(odbc_conn_str) odbc_cursor = conn.cursor() NewQuery = "CREATE TABLE TestTable(symbol varchar(15), leverage double)" odbc_cursor.execute(NewQuery) conn.commit() conn.close() </code></pre> <p>So, how can I create and save MS Access Queries <strong>as objects</strong> from Python? I tried to search for info in Google, but the answers were all related to <strong>running SQL code</strong>.</p> <p>In VBA this code looks like:</p> <pre><code>Public Sub CreateQueryDefX() Dim base(1 To 4) As String base(1) = "..\Test #1.accdb" base(2) = "..\Test #2.accdb" base(3) = "..\Test #3.accdb" base(4) = "..\Test #4.accdb" For i = LBound(base) To UBound(base) CurrentBase = base(i) Set dbo = OpenDatabase(CurrentBase) With dbo Set QueryNew = .CreateQueryDef("TestQuery", _ "SELECT * FROM TestTable") RefreshDatabaseWindow .Close End With Next i RefreshDatabaseWindow End Sub </code></pre> <p>Sorry for my English, it's not my native language :)</p> <p>By the way, I know how to solve this with VBA, but I'm interested in solving it with Python.</p> <p>Thank you.</p>
3
2016-09-23T13:07:23Z
39,663,974
<p>Consider the Python equivalent of the VBA approach, running exactly what VBA uses: a COM interface to the Access Object library. With Python's <code>win32com</code> third-party module, you can call the <a href="https://msdn.microsoft.com/en-us/library/office/ff195966.aspx" rel="nofollow">CreateQueryDef</a> method. Do note: this COM interfacing can be applied in other languages such as PHP and R! Also note that, unlike VBA, Python requires parentheses to actually invoke COM methods such as <code>Quit()</code>.</p> <p>Below uses a <code>try/except/finally</code> block to ensure the Access application process closes regardless of error or success of code (similar to VBA's <code>On Error</code> handling):</p> <pre><code>import win32com.client # OPEN ACCESS APP AND DATABASE dbases = ["..\Test #1.accdb", "..\Test #2.accdb", "..\Test #3.accdb", "..\Test #4.accdb"] try: oApp = win32com.client.Dispatch("Access.Application") # CREATE QUERYDEF for db in dbases: oApp.OpenCurrentDatabase(db) currentdb = oApp.CurrentDb() currentdb.CreateQueryDef("TestQuery", "SELECT * FROM TestTable") currentdb = None oApp.DoCmd.CloseDatabase() except Exception as e: print(e) finally: currentdb = None oApp.Quit() oApp = None </code></pre> <hr> <p>Also, if you need to run DML statements via pyodbc and not a COM interface, consider distributed queries as Access can query other databases directly in SQL. Below should work in Python (be sure to escape the backslash):</p> <pre class="lang-sql prettyprint-override"><code>SELECT t.* FROM [C:\Path\To\Other\Database.accdb].TestTable t </code></pre>
1
2016-09-23T15:03:39Z
[ "python", "vba", "ms-access", "pyodbc" ]
Show progress in Python from underlying C++ process
39,661,708
<p>I have a C++ program that runs for a long time and performs lots (e.g. 1,000,000) of iterations. Typically I run it from Python (usually Jupyter Notebook). I would like to see the progress of the C++ program. Is there a convenient way to do it? Perhaps by linking it to a Pythonic progress bar library, e.g. tqdm?</p>
1
2016-09-23T13:13:28Z
39,793,233
<p>Disclaimer: I'm a co-developer of tqdm.</p> <p>I see 3 solutions:</p> <ul> <li><p>Either the cpp lib regularly calls back to Python, e.g. after processing each row of a matrix (as pandas does), and then you can use a Python progress bar like tqdm, just like for any other common Python loop. The loop won't be updated at each iteration but at each callback, so it's not really real-time, but if the cpp lib is fast, you won't notice anything. See the submodule tqdm_pandas for example; it works exactly like that. </p></li> <li><p>Or the cpp lib does all the work without any callback until the end (this maximizes performance; callbacks to Python are huge slowdowns), in which case you need to use a cpp progress bar inside your cpp lib, as you cannot use a Python one (since it will never be called until the end). There is an <a href="https://github.com/tqdm/tqdm.cpp" rel="nofollow">official cpp port of tqdm</a> in development; this might fit your needs.</p></li> <li><p>The last case is if your cpp program is not a linked lib but rather a standalone program that can be run from the command line. In this case, tqdm has facilities to interface with such programs as long as your cpp program can output something. See the readme about it; it already works well for gzipping and other common Unix commands. </p></li> </ul>
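<p>A rough sketch of the first option (all names here are invented for illustration; <code>DummyBar</code> stands in for a real <code>tqdm</code> bar, and the plain-Python function stands in for the C++ extension): the library processes one chunk per call and invokes a Python callback, which the driver wires to the bar's <code>update</code> method:</p>

```python
import sys

class DummyBar:
    """Stand-in for tqdm.tqdm(total=n); swap in the real thing if installed."""
    def __init__(self, total):
        self.total, self.n = total, 0
    def update(self, k=1):
        self.n += k
        sys.stderr.write("\rprogress: %d/%d" % (self.n, self.total))
    def close(self):
        sys.stderr.write("\n")

def heavy_routine(n_chunks, progress_cb=None):
    """Pretend C++ worker: crunches one chunk, then reports back to Python."""
    total = 0
    for _ in range(n_chunks):
        total += sum(range(10000))      # the 'expensive' chunk of work
        if progress_cb is not None:
            progress_cb(1)              # one cheap callback per chunk
    return total

bar = DummyBar(total=100)
result = heavy_routine(100, progress_cb=bar.update)
bar.close()
```

<p>The callback granularity (per chunk, not per inner iteration) is what keeps the Python-call overhead negligible.</p>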
0
2016-09-30T13:56:05Z
[ "python", "subprocess", "progress-bar", "jupyter-notebook", "tqdm" ]
Merging multiple dataframe bassed on matching values from three column into single dataframe
39,661,759
<p>I have multiple data frames (25 dataframes), and I am looking for recurrently occurring row values from three columns of all dataframes. The following are examples of my dataframes:</p> <pre><code>df1 chr start end name 1 12334 12334 AAA 1 2342 2342 SAP 2 3456 3456 SOS 3 4537 4537 ABR df2 chr start end name 1 12334 12334 DSF 1 3421 3421 KSF 2 7689 7689 LUF df3 chr start end name 1 12334 12334 DSF 1 3421 3421 KSF 2 4537 4537 LUF 3 8976 8976 BAR 4 6789 6789 AIN </code></pre> <p>What I am aiming for is to look at the first three columns of these dataframes and extract a new dataframe based on rows matching on those 3 column values, along with the name of the source dataframe as the last column. So the final data frame should look like this,</p> <pre><code>chr start end name Sample 1 12334 12334 AAA df1 1 12334 12334 AAA df2 1 12334 12334 AAA df3 </code></pre> <p>I know the following lines of Python script will create the above output without Sample as a column.</p> <pre><code>s1 = pd.merge(df1, df2, how='left', on=['chr', 'start', 'end']) df_final = pd.merge(s1, df3[['chr', 'start', 'end']], how='left', on=['chr', 'start','end']) </code></pre> <p>but I have more than 25 dataframes which I need to merge based on matching values. Any robust and better solution would be really appreciated</p>
1
2016-09-23T13:16:05Z
39,662,541
<p>Say you have a dictionary mapping sample names to DataFrames:</p> <pre><code>dfs = {'df1': df1, 'df2': df2} </code></pre> <p>(and so on).</p> <p>The common relevant keys (in hashable form) are</p> <pre><code>common_tups = set.intersection(*[set(df[['chr', 'start', 'end']].drop_duplicates().apply(tuple, axis=1).values) for df in dfs.values()]) </code></pre> <p>Now you just need, for each DataFrame, to find the relevant rows, add the name of the DataFrame as the sample, and concatenate the results:</p> <pre><code>pd.concat([df[df[['chr', 'start', 'end']].apply(tuple, axis=1).isin(common_tups)].assign(Sample=name) for (name, df) in dfs.items()]) </code></pre>
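<p>To make that concrete, a toy run of the same two steps (the data below is invented and much smaller than the question's):</p>

```python
import pandas as pd

dfs = {'df1': pd.DataFrame({'chr': [1, 1], 'start': [10, 20],
                            'end': [10, 20], 'name': ['AAA', 'SAP']}),
       'df2': pd.DataFrame({'chr': [1, 2], 'start': [10, 30],
                            'end': [10, 30], 'name': ['DSF', 'LUF']})}

# keys present in every dataframe
common_tups = set.intersection(
    *[set(df[['chr', 'start', 'end']].drop_duplicates()
          .apply(tuple, axis=1).values) for df in dfs.values()])

# keep matching rows, tagging each with its source dataframe's name
result = pd.concat(
    [df[df[['chr', 'start', 'end']].apply(tuple, axis=1).isin(common_tups)]
       .assign(Sample=name) for (name, df) in dfs.items()])
```
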
1
2016-09-23T13:53:04Z
[ "python", "pandas", "numpy", "dataframe" ]
PyGame Space Invaders Game - Making aliens move together
39,661,888
<p>I've created a Space Invaders clone in Python using the PyGame modules but I'm running into some difficulty getting the aliens to move down together when reaching the edge of the game screen. </p> <p>How would I make it so when the aliens reach the edge of the game screen they all simultaneously change direction and drop down a level?</p> <pre><code>import pygame import random class spaceInvader(pygame.sprite.Sprite): def __init__(self): self.image = pygame.image.load("spaceInvader.png") self.x = 200 self.y = 390 self.shots = [] def handle_keys(self): key = pygame.key.get_pressed() dist = 5 if key[pygame.K_RIGHT]: self.x+=dist elif key[pygame.K_LEFT]: self.x-=dist def draw(self, surface): surface.blit(self.image,(self.x,self.y)) for s in self.shots: s.draw(screen) class Alien(pygame.sprite.Sprite): def __init__(self,x,y,direction,alienType): pygame.sprite.Sprite.__init__(self) self.AlienType = alienType self.Direction = direction if alienType == 1: alienImage = pygame.image.load("alien1.png") self.Speed = 1 self.Score = 5 if alienType == 2: alienImage = pygame.image.load("alien2.png") self.Score = 15 self.Speed = 1 if alienType == 3: alienImage = pygame.image.load("alien3.png") self.Score = 10 self.Speed = 1 if alienType == 4: alienImage = pygame.image.load("alien4.png") self.Score = 20 self.Speed = 1 if alienType == 5: alienImage = pygame.image.load("alien5.png") self.Score = 25 self.Speed = 1 self.image = pygame.Surface([26, 50]) self.image.set_colorkey(black) self.image.blit(alienImage,(0,0)) self.rect = self.image.get_rect() self.rect.x = x self.rect.y = y def moveAliens(self): if self.Direction == "right": self.rect.x += self.Speed if self.Direction == "left": self.rect.x -= self.Speed pygame.init() pygame.mouse.set_visible(False) screen = pygame.display.set_mode([400,400]) allAliens = pygame.sprite.Group() spaceInvader = spaceInvader() pygame.display.set_caption("Space Attack") background_image = pygame.image.load("Galaxy.png").convert()
pygame.mouse.set_visible(True) done = False clock = pygame.time.Clock() black =( 0, 0, 0) white =( 255,255,255) red = (255, 0, 0) score = 0 enemies = [] #For X coords spawnPositions = [50,100,150,200,250,300,350] yCoord = 10 for n in range(5): for i in range(len(spawnPositions)): xCoord = spawnPositions[i] alienType = random.randint(1,5) alien = Alien(xCoord, yCoord,"right", alienType) allAliens.add(alien) yCoord = yCoord + 15 loop = 0 while done == False: for event in pygame.event.get(): if event.type == pygame.QUIT: done = True for alien in (allAliens.sprites()): if alien.rect.x &lt; 0: alien.rect.y = alien.rect.y + 15 alien.Direction = "right" if alien.rect.x &gt; 395: alien.rect.y = alien.rect.y + 15 alien.Direction = "left" loop =+1 for alien in (allAliens.sprites()): alien.moveAliens() spaceInvader.handle_keys() screen.blit(background_image,[0,0]) spaceInvader.draw(screen) allAliens.draw(screen) pygame.display.flip() clock.tick(20) pygame.quit() </code></pre> <p>Thanks.</p>
1
2016-09-23T13:22:17Z
39,661,985
<p>Your problem lies here:</p> <pre><code>for alien in (allAliens.sprites()): if alien.rect.x &lt; 0: alien.rect.y = alien.rect.y + 15 alien.Direction = "right" if alien.rect.x &gt; 395: alien.rect.y = alien.rect.y + 15 alien.Direction = "left" loop =+1 </code></pre> <p>I assume the aliens are currently dropping down individually? </p> <p>You need to change this so that when one alien triggers these if statements, every alien's <code>y</code> and <code>direction</code> are appropriately set, not just the one hitting the side.</p>
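<p>One hedged way to restructure that loop (a logic-only sketch with plain Python objects standing in for the pygame sprites and rects): first check whether <em>any</em> alien is at an edge, then apply the drop and the direction flip to the whole group before moving:</p>

```python
class Alien:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.direction = "right"
        self.speed = 1

def step(aliens, left=0, right=395, drop=15):
    """Advance the whole formation one tick; flip and drop together at an edge."""
    hit_left = any(a.x <= left for a in aliens)
    hit_right = any(a.x >= right for a in aliens)
    if hit_left or hit_right:
        new_dir = "right" if hit_left else "left"
        for a in aliens:              # every alien drops and turns, not just one
            a.y += drop
            a.direction = new_dir
    for a in aliens:
        a.x += a.speed if a.direction == "right" else -a.speed
```

<p>In the pygame version, the same shape applies: run the edge test over <code>allAliens.sprites()</code> first, and only then update every sprite's <code>rect.y</code> and <code>Direction</code>.</p>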
4
2016-09-23T13:26:35Z
[ "python", "python-2.7", "pygame" ]
2d density plot using Python
39,661,929
<p>I have been trying to make a 2D density plot from a data file using the code below in Python.</p> <h1>2d.py</h1> <pre><code>import matplotlib.pyplot as plt import matplotlib.cm as cm from matplotlib.colors import LogNorm import numpy as np fig1 = plt.figure(1) x, y, z1, z2 = np.loadtxt('fort.101',unpack=True) N = int(len(z1)**.5) z1 = z1.reshape(N, N) plt.imshow(z1, extent=(np.amin(x), np.amax(x), np.amax(y), np.amin(y)), cmap=cm.hot) plt.colorbar() fig1.savefig('2d.eps') </code></pre> <p>The figure that it creates is rotated 90 degrees from what is created by GNUPlot. That is to say, the two figures are 90 degrees out of phase. Can anyone help me resolve this issue?</p>
0
2016-09-23T13:23:57Z
39,675,164
<p><a href="http://i.stack.imgur.com/sD1Pe.png" rel="nofollow">enter image description here</a></p> <p><a href="http://i.stack.imgur.com/Uri8R.png" rel="nofollow">enter image description here</a></p> <p>This is what I am getting...</p>
0
2016-09-24T10:11:03Z
[ "python", "plot", "2d" ]
How can I use NodeJS for front-end in large Python application?
39,661,982
<p>I am currently writing the front-end for a large python application and have realized that I would benefit a lot from various Node packages (I am currently trying to make it into a single page ReactJS application).</p> <p>From my understanding though, in most tutorials, Node seems to be used for the entire application. In my case I just want the front-end. So my question is how can I do that? Do I simply do an npm init in the folder where my front-end JS files are and install whatever packages I need? If not, how do I do this? Is this even an appropriate use of Node?</p> <p>Thanks</p>
0
2016-09-23T13:26:22Z
39,663,885
<p>Node.js doesn't run on the frontend but on the <strong>backend</strong> - you can use Node <strong>instead</strong> of Python.</p> <p>You can use Node to prepare/compile/build/minify your frontend code.</p> <p>You can use <code>npm</code> <strong>modules</strong> on the frontend - see <a href="http://browserify.org/" rel="nofollow">Browserify</a>.</p> <p>But you will not be able to <strong>run Node on the frontend</strong> - unless you want something like <a href="https://repl.it/languages/nodejs" rel="nofollow">repl.it</a>.</p>
1
2016-09-23T14:59:37Z
[ "python", "node.js", "npm", "node-modules" ]
Python: creating a group column in a Pandas dataframe based on an integer range of values
39,662,016
<p>For each range <code>[0, 150]</code> in the <code>diff</code> column, I want to create a group column that increases by 1 each time the range resets. When <code>diff</code> is negative, the range resets.</p> <pre><code>import pandas as pd df = pd.DataFrame({'year': [2016, 2016, 2016, 2016, 2016, 2016, 2016], 'month' : [1, 1, 2, 3, 3, 3, 3], 'day': [23, 25, 1, 1, 7, 20, 30]}) df = pd.to_datetime(df) df = pd.concat([df, pd.Series(data=[15, 35, 80, 5, 20, 45, 90])], axis=1) df.columns = ['date', 'percentworn'] col_shift = ['percentworn'] df_shift = df.shift(1).loc[:, col_shift] df_combined = df.join(df_shift, how='inner', rsuffix='_2') df_combined.fillna(value=0,inplace=True) df_combined['diff'] = df_combined['percentworn'] - df_combined['percentworn_2'] </code></pre> <p><a href="http://i.stack.imgur.com/YjiQf.png" rel="nofollow"><img src="http://i.stack.imgur.com/YjiQf.png" alt="enter image description here"></a></p> <p>The <code>grp</code> column should contain <code>0, 0, 0, 1, 1, 1, 1</code>. The code I tried was</p> <pre><code>def grping(df): df_ = df.copy(deep=True) i = 0 if df_['diff'] &gt;= 0: df_['grp'] = i else: i += 1 df_['grp'] = i return df_ df_combined.apply(grping,axis=1) </code></pre> <p>I need the <code>i += 1</code> increment to persist between rows. How can I achieve this? Or is there a better way to get the desired results?</p> <p><a href="http://i.stack.imgur.com/9PKgI.png" rel="nofollow"><img src="http://i.stack.imgur.com/9PKgI.png" alt="enter image description here"></a></p>
1
2016-09-23T13:28:21Z
39,662,141
<p>IIUC you can test whether the <code>'diff'</code> column is negative, which produces a boolean array, then cast this to <code>int</code> and then call <code>cumsum</code>:</p> <pre><code>In [313]: df_combined['group'] = (df_combined['diff'] &lt; 0).astype(int).cumsum() df_combined Out[313]: date percentworn percentworn_2 diff group 0 2016-01-23 15 0.0 15.0 0 1 2016-01-25 35 15.0 20.0 0 2 2016-02-01 80 35.0 45.0 0 3 2016-03-01 5 80.0 -75.0 1 4 2016-03-07 20 5.0 15.0 1 5 2016-03-20 45 20.0 25.0 1 6 2016-03-30 90 45.0 45.0 1 </code></pre> <p>breaking the above down:</p> <pre><code>In [314]: df_combined['diff'] &lt; 0 Out[314]: 0 False 1 False 2 False 3 True 4 False 5 False 6 False Name: diff, dtype: bool In [316]: (df_combined['diff'] &lt; 0).astype(int) Out[316]: 0 0 1 0 2 0 3 1 4 0 5 0 6 0 Name: diff, dtype: int32 </code></pre>
2
2016-09-23T13:33:20Z
[ "python", "pandas" ]
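The core trick in the answer above — flag each reset with 1 and take a running sum — works the same outside pandas. A minimal stdlib-only sketch (the diff values are hypothetical, mirroring the question's data):

```python
from itertools import accumulate

# Hypothetical diff values mirroring the question's data; a negative
# diff marks the start of a new group.
diffs = [15.0, 20.0, 45.0, -75.0, 15.0, 25.0, 45.0]

# Flag each reset as 1, then take a running sum: the group id
# increases by one at every reset, exactly like (diff < 0).cumsum().
flags = [1 if d < 0 else 0 for d in diffs]
groups = list(accumulate(flags))
print(groups)  # [0, 0, 0, 1, 1, 1, 1]
```

In pandas, `(df['diff'] < 0).cumsum()` performs the same flag-and-accumulate in one vectorized step.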
How to split a file in python
39,662,094
<p>I am trying to split 2 lists, compare them and make a new list without the succesfully compared items in 2 lists.</p> <p>So lets say List_1.txt =</p> <pre><code>Failed = abc Failed = hfi Failed = kdi </code></pre> <p>and List_2.txt = </p> <pre><code>1:1:1 - jdsfjdf 2:2:2 - iidf 3:3:3 - abc 6:3:1 - hfi 8:2:1 - kdi 3:1:5 - dua 3:1:2 - dfh </code></pre> <p>I want to compare those lists and make a new_list2 without the list_1 entries.</p> <p>what I tried was:</p> <pre><code>treinrit = open('List_1', 'r') lijna = treinrit.readlines() treinrit.close() annuleer_treinrit = open('List_2', 'r') lijnb = annuleer_treinrit.readline() annuleer_treinrit.close() lijsta = [] lijstb = [] for a in lijna: clean = a.split(' - ') print(lijsta) for b in lijnb: lijstb.append(lijnb.split(": ")) </code></pre> <p>I just cant get the list to split properly. I only need the last bit of each file to compare but I don't know how.</p>
0
2016-09-23T13:31:06Z
39,662,242
<pre><code>with open('File1', 'r') as f1: f1_stored = [] for line in f1: f1_stored.append(line.split('=')[1].strip()) with open('File2', 'r') as f2: output = [] for line in f2: if not any(failed in line for failed in f1_stored): output.append(line) </code></pre> <p>Then do what you want with <code>output</code>.</p>
1
2016-09-23T13:37:49Z
[ "python", "list", "file", "split" ]
How to split a file in python
39,662,094
<p>I am trying to split 2 lists, compare them and make a new list without the succesfully compared items in 2 lists.</p> <p>So lets say List_1.txt =</p> <pre><code>Failed = abc Failed = hfi Failed = kdi </code></pre> <p>and List_2.txt = </p> <pre><code>1:1:1 - jdsfjdf 2:2:2 - iidf 3:3:3 - abc 6:3:1 - hfi 8:2:1 - kdi 3:1:5 - dua 3:1:2 - dfh </code></pre> <p>I want to compare those lists and make a new_list2 without the list_1 entries.</p> <p>what I tried was:</p> <pre><code>treinrit = open('List_1', 'r') lijna = treinrit.readlines() treinrit.close() annuleer_treinrit = open('List_2', 'r') lijnb = annuleer_treinrit.readline() annuleer_treinrit.close() lijsta = [] lijstb = [] for a in lijna: clean = a.split(' - ') print(lijsta) for b in lijnb: lijstb.append(lijnb.split(": ")) </code></pre> <p>I just cant get the list to split properly. I only need the last bit of each file to compare but I don't know how.</p>
0
2016-09-23T13:31:06Z
39,662,355
<p>Something like this</p> <pre><code>bad_stuff = [] with open('List_1', 'r') as fn: for line in fn: bad_stuff.append(line.split('=')[1].strip()) with open('List_2', 'r') as fn: for line in fn: if line.split(':')[1].strip() not in bad_stuff: print(line) </code></pre> <p>The list <code>bad_stuff</code> will have all the elements from the first file after the <code>=</code> sign (like <code>abc</code>, <code>hfi</code> and <code>kdi</code>)</p> <p>Then check the second file, and only print if the part after the <code>:</code> sign is not in the list <code>bad_stuff</code></p>
1
2016-09-23T13:43:28Z
[ "python", "list", "file", "split" ]
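Both answers above share the same shape: build a set of failed names from the first file, then filter the second. A sketch using in-memory strings in place of the two files (contents copied from the question):

```python
list_1 = """Failed = abc
Failed = hfi
Failed = kdi"""

list_2 = """1:1:1 - jdsfjdf
2:2:2 - iidf
3:3:3 - abc
6:3:1 - hfi
8:2:1 - kdi
3:1:5 - dua
3:1:2 - dfh"""

# Set of names after the '=' sign, for O(1) membership tests.
failed = {line.split('=')[1].strip() for line in list_1.splitlines()}

# Keep only lines whose trailing name is not in the failed set.
new_list2 = [line for line in list_2.splitlines()
             if line.split(' - ')[1].strip() not in failed]
print(new_list2)  # ['1:1:1 - jdsfjdf', '2:2:2 - iidf', '3:1:5 - dua', '3:1:2 - dfh']
```

Filtering on the exact name after `' - '` (rather than a substring test like `failed in line`, as in the first answer) avoids false positives when a failed name happens to appear inside a longer token.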
Data responses in IMAP using Python
39,662,099
<p>Dear users of Stack Overflow,</p> <p>My question is about how to use the returned data of an imap command in python (in this case printing the amount of messages in your inbox). What I was able to find on the internet is limited to the following two descriptions:</p> <p><a href="http://i.stack.imgur.com/P9V0D.png" rel="nofollow">Discription 1</a></p> <p><a href="http://i.stack.imgur.com/oOzmR.png" rel="nofollow">Discription 2</a></p> <p>Reading these explanations, I still have no idea how to use the EXISTS response as I’ve just started programming (one semester of C programming at Uni). So, if someone could help me understand how the responses of imap commands can be used in python, that would be awesome. I do prefer to understand the principle of the responses instead of solving just this one-time issue so I’ll be able to use responses in different situations (and other people might be able to apply it too then). </p> <p>The (basic) code I’ve written so far on my Raspberry Pi including the point where I'm stuck with EXISTS (between the two question marks):</p> <pre><code>import imaplib server = imaplib.IMAP4_SSL(‘imap.gmail.com’) server.login(‘USERNAME’, ‘PASSWORD’) server.list() server.select(‘inbox’) print ‘%d messages in the inbox’ %d ??EXISTS?? </code></pre> <p>Hopefully I’m not the only one who would like to know this!</p> <p>Kind regards,</p> <p>I. Van Dijck</p> <p>P.S. My updated code is as follow (the error is: TypeError: not all arguments converted during string formatting):</p> <pre><code>import imaplib server = imaplib.IMAP4_SSL('imap.gmail.com') server.login('USERNAME', 'PASSWORD') server.list() server.select('inbox') number_of_messages = server.select("inbox") print '%s messages' % number_of_messages </code></pre>
0
2016-09-23T13:31:15Z
39,726,876
<p>The select function returns the number from the EXISTS response. Almost all imaplib commands return two values, <code>type</code> and <code>data</code>.</p> <p>From the <a href="https://docs.python.org/2/library/imaplib.html" rel="nofollow">documentation</a>:</p> <blockquote> <p>Each command returns a tuple: (type, [data, ...]) where type is usually 'OK' or 'NO', and data is either the text from the command response, or mandated results from the command. Each data is either a string, or a tuple. If a tuple, then the first part is the header of the response, and the second part contains the data (ie: ‘literal’ value).</p> </blockquote> <p><code>select</code>, in its list of data, returns the number from the EXISTS response, apparently as a string. You will need to extract the item from the list and convert it to an integer:</p> <pre><code>typ, dat = server.select('inbox') number_of_messages = int(dat[0]) </code></pre>
0
2016-09-27T14:06:57Z
[ "python", "imap", "gmail-imap", "imaplib" ]
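The (type, data) unpacking described in the answer above can be tried without a live server by simulating the tuple that `select()` returns (the values below are hypothetical):

```python
# Simulated imaplib-style response: select() returns a (type, data)
# tuple such as ('OK', [b'7']), where b'7' is the message count from
# the EXISTS response.
typ, dat = 'OK', [b'7']

number_of_messages = int(dat[0])  # int() accepts the bytes item directly
print('%d messages in the inbox' % number_of_messages)
```

Against a real server the first line would be `typ, dat = server.select('inbox')`; only the unpacking and conversion are shown here.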
Checking If Word Is English Python
39,662,212
<p>So, I am doing a project where I have a list with english words and want it to check the word i write is in the list and tell me if it's in english or not, I have no idea how to do this but It's what i'm supposed to do so I am asking for your help</p> <pre><code>text = open("dict.txt","r") #Opens the dict.txt file I have and reads it word = input("Type a word to check if it's english.") #Type in a word to check if is english or not if(word in text == true): print(word, "Is in English.") elif(word in text == false): print(word, "Is not in English.") #Check if word is true or false in the dict.txt and print if it is english or not. </code></pre>
0
2016-09-23T13:36:56Z
39,662,299
<p>In your code, <code>text</code> is a file object, which you first need to read from somehow. You could, for example, read them into a set (because of O(1) lookup times):</p> <pre><code>with open("dict.txt", "r") as f: text = {line.strip() for line in f} # set comprehension word = input("Type a word to check if it's english.") if word in text: print(word, "Is in English.") else: print(word, "Is not in English.") </code></pre> <p>As someone with a background in NLP: Trying to <em>actually</em> test whether a word is valid English is more complicated than you might think. With a big enough dictionary (that also contains inflected forms) you should have a high accuracy though.</p>
2
2016-09-23T13:40:32Z
[ "python", "list" ]
Transforming an Abaqus macro into a python script
39,662,277
<p>I am using Abaqus (6.13) to run FEM thermal simulations. I need to get the total external heat flux aplied on that model. My searches indicated that the only way to get it was to sum de RFLE history output on the whole model and it works fine. The problem is that I have a ~300 000 elements model and that the simple opening of the Report/XY window takes a couple of hours.</p> <p>In order to simplify my exportations, I made an exporting macro with the macro manager of Abaqus. The recording starts before importing the odb in abaqus and ends after exporting the report containing the X/Y datas. This generated macro is big (~900 000 lines) so I give you here a cropped version of it:</p> <pre><code># -*- coding: mbcs -*- # Do not delete the following import lines from abaqus import * from abaqusConstants import * import __main__ def OdbMacro1(): import section import regionToolset import displayGroupMdbToolset as dgm import part import material import assembly import step import interaction import load import mesh import optimization import job import sketch import visualization import xyPlot import displayGroupOdbToolset as dgo import connectorBehavior import os os.chdir(r"C:\FolderPath") session.mdbData.summary() o1 = session.openOdb(name='C:\FolderPath\odb.odb') session.viewports['Viewport: 1'].setValues(displayedObject=o1) odb = session.odbs['C:\FolderPath\odb.odb'] xy0 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Nodal temperature: NT11 PI: PAD-1 Node 10 in NSET PADSURF_BACK', steps=('Step-2', ), suppressQuery=True) xy1 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Nodal temperature: NT11 PI: PAD-1 Node 10053 in NSET PADSURF_BACK', steps=('Step-2', ), suppressQuery=True) xy2 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Nodal temperature: NT11 PI: PAD-1 Node 10054 in NSET PADSURF_BACK', steps=('Step-2', ), suppressQuery=True) xy3 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Nodal temperature: NT11 PI: PAD-1 Node 10055 in 
NSET PADSURF_BACK', steps=('Step-2', ), suppressQuery=True) xy4 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Nodal temperature: NT11 PI: PAD-1 Node 10056 in NSET PADSURF_BACK', steps=('Step-2', ), suppressQuery=True) xy5 = avg((xy0, xy1, xy2, xy3, xy4, ), ) session.XYData(name='x0.nt11', objectToCopy=xy5, sourceDescription='avg((Nodal temperature: NT11 PI: PAD-1 Node 10 in NSET PADSURF_BACK, Nodal temperature: NT11 PI: PAD-1 Node 10053 in NSET PADSURF_BACK, Nodal temperature: NT11 PI: PAD-1 Node 10054 in NSET PADSURF_BACK, Nodal temperature: NT11 PI: PAD-1 Node 10055 in NSET PADSURF_BACK, Nodal temperature: NT11 PI: PAD-1 Node 10056 in NSET PADSURF_BACK, ),)') odb = session.odbs['C:\FolderPath\odb.odb'] xy0 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='RFLE11: RFLE11 PI: PAD-1 Node 1', steps=('Step-2', ), suppressQuery=True) xy1 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='RFLE11: RFLE11 PI: PAD-1 Node 2', steps=('Step-2', ), suppressQuery=True) xy2 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='RFLE11: RFLE11 PI: PAD-1 Node 3', steps=('Step-2', ), suppressQuery=True) [...] xy280068 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='RFLE11: RFLE11 PI: SLIDER-1 Node 210034', steps=( 'Step-2', ), suppressQuery=True) xy280069 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='RFLE11: RFLE11 PI: SLIDER-1 Node 210035', steps=( 'Step-2', ), suppressQuery=True) xy280070 = sum((xy0, xy1, xy2, xy3, xy4, xy5, xy6, xy7, xy8, xy9, xy10, xy11, xy12, xy13, xy14, xy15, xy16, xy17, xy18, xy19, xy20, xy21, xy22, xy23, xy24, xy25, xy26, xy27, xy28, xy29, xy30, xy31, xy32, xy33, xy34, xy35, [...] xy280057, xy280058, xy280059, xy280060, xy280061, xy280062, xy280063, xy280064, xy280065, xy280066, xy280067, xy280068, xy280069, ), ) session.XYData(name='model.RFLE', objectToCopy=xy280070, sourceDescription='sum((RFLE11: RFLE11 PI: PAD-1 Node 1, RFLE11: RFLE11 PI: PAD-1 Node 2, RFLE11: RFLE11 PI: PAD-1 Node 3, [...] 
RFLE11: RFLE11 PI: SLIDER-1 Node 210033, RFLE11: RFLE11 PI: SLIDER-1 Node 210034, RFLE11: RFLE11 PI: SLIDER-1 Node 210035, ),)') odb = session.odbs['C:\FolderPath\odb.odb'] xy0 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 5', steps=('Step-2', ), suppressQuery=True) xy1 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 6', steps=('Step-2', ), suppressQuery=True) xy2 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12 [................................................................] =True) xy6000 = xyPlot.XYDataFromHistory(odb=odb, outputVariableName='Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18048', steps=('Step-2', ), suppressQuery=True) xy6001 = sum((xy0, xy1, xy2, xy3, xy4, xy5, xy6, xy7, xy8, xy9, xy10, xy11, xy12, xy13, xy14, xy15, xy16, xy17, xy18, xy19, xy20, xy21, xy22, xy23, [................................................................] 
xy5991, xy5992, xy5993, xy5994, xy5995, xy5996, xy5997, xy5998, xy5999, xy6000, ), ) session.XYData(name='surf.hfla', objectToCopy=xy6001, sourceDescription='sum((Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 5, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 6, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12050, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12051, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12052, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12053, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12054, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12055, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12056, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12057, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12058, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12059, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12060, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12061, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 12062, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ [................................................................] 
37, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18038, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18039, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18040, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18041, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18042, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18043, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18044, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18045, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18046, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18047, Heat flux: HFLA ASSEMBLY_SLIDERSURF/ASSEMBLY_PADSURF PI: SLIDER-1 Node 18048, ),)') x0 = session.xyDataObjects['surf.hfla'] x1 = session.xyDataObjects['model.RFLE'] x2 = session.xyDataObjects['x0.nt11'] session.xyReportOptions.setValues(interpolation=ON) session.writeXYReport(fileName='C:\FolderPath\report.rpt', appendMode=OFF, xyData=(x0, x1, x2)) OdbMacro1() </code></pre> <p>I added the call to OdbMacro1 at the end, following the search results I got here and there.</p> <p>I want to run that macro (or at least the useful part) outside the GUI from a python file. When I do (with "C:\Path\to\python\file\folder>abaqus python macro.py"), I get an error: </p> <pre><code>Traceback (most recent call last): File "macro.py", line 3, in &lt;module&gt; from abaqus import * File "SMAPyaModules\SMAPyaAbqPy.m\src\abaqus.py", line 15, in &lt;module&gt; ImportError: abaqus module may only be imported in the Abaqus kernel process </code></pre> <p>. I don't understand what the problem is. I tried adding "import odbAccess" at the beginning of the file but I get the same error. I think I should be adding some code at the beginning but I can't get around which. 
Could you help me?</p> <p>By the way, it is secondary, but I feel like I could simplify the function: </p> <ul> <li>Among the bunch of import at the beginning of the function I am not sure all are needed may I delete some?</li> <li>All the nodes whose history output are summed (or averaged) in the same operation form a set. Isn't there a way to use that to avoid using each individual history output in the macro?</li> </ul> <p>Thank you for any lead to the answer. :)</p>
1
2016-09-23T13:39:28Z
39,708,953
<p>Here is a script that does essentially what you want (and you can see we only need the three imports):</p> <pre><code>from abaqus import * from abaqusConstants import * import visualization odb=session.openOdb(name='/full/path/Job-1.odb') session.viewports['Viewport: 1'].setValues(displayedObject=odb) session.xyDataListFromField(odb=odb, outputPosition=NODAL, variable=(('NT11', NODAL), ), nodeSets=('PART-1-1.SETNAME', )) keyname='From Field Data: NT11 at part instance PART-1-1' # run this to see what the keys look like: # [ o.description for o in session.xyDataObjects.values() ] temp=[o for o in session.xyDataObjects.values() if o.description.find(keyname)==0] #note if you only have requested one data type you could just do: #temp=session.xyDataObjects.values() session.writeXYReport(fileName='test.rpt', xyData=temp) #alternate way to directly write data instead of using writexyreport: f=open('test.dat','w') for o in temp: f.write('%i %g\n'% (int(o.description.split()[-1]),o.data[-1][-1])) f.close() </code></pre> <p>Run with <code>abaqus cae -noGUI script.py</code> or <code>abaqus cae noGUI=script.py</code>.</p>
0
2016-09-26T17:31:08Z
[ "python", "macros", "abaqus" ]
How do I run parallel tasks with Celery?
39,662,381
<p>I am using Celery to run some tasks that take a long time to complete. There is an initial task that needs to complete before two sub-tasks can run. The tasks that I created are file system operations and don't return a result. </p> <p>I would like the subtasks to run at the same time, but when I use a group for these tasks they run sequentially and not in parallel.</p> <p>I have tried: </p> <pre><code>g = group([secondary_task(), secondary_tasks2()]) chain(initial_task(),g) </code></pre> <p>I've also tried running the group directly in the first task, but that doesn't seem to work either. </p> <p>Is what I'm trying to accomplish doable with Celery? </p> <pre><code> First Task / \ Second Task Third Task </code></pre> <p>Not:</p> <pre><code>First Task | Second Task | Third Task </code></pre>
0
2016-09-23T13:44:22Z
39,662,548
<p>The chain is definitely the right approach.</p> <p>I would expect this to work: <code>chain(initial_task.s(), g)()</code></p> <p>Do you have more than one celery worker running to be able to run more than one task at the same time?</p>
0
2016-09-23T13:53:22Z
[ "python", "celery" ]
scikit learn output metrics.classification_report into CSV/tab-delimited format
39,662,398
<p>I'm doing a multiclass text classification in Scikit-Learn. The dataset is being trained using the Multinomial Naive Bayes classifier having hundreds of labels. Here's an extract from the Scikit Learn script for fitting the MNB model</p> <pre><code>from __future__ import print_function # Read **`file.csv`** into a pandas DataFrame import pandas as pd path = 'data/file.csv' merged = pd.read_csv(path, error_bad_lines=False, low_memory=False) # define X and y using the original DataFrame X = merged.text y = merged.grid # split X and y into training and testing sets; from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # import and instantiate CountVectorizer from sklearn.feature_extraction.text import CountVectorizer vect = CountVectorizer() # create document-term matrices using CountVectorizer X_train_dtm = vect.fit_transform(X_train) X_test_dtm = vect.transform(X_test) # import and instantiate MultinomialNB from sklearn.naive_bayes import MultinomialNB nb = MultinomialNB() # fit a Multinomial Naive Bayes model nb.fit(X_train_dtm, y_train) # make class predictions y_pred_class = nb.predict(X_test_dtm) # generate classification report from sklearn import metrics print(metrics.classification_report(y_test, y_pred_class)) </code></pre> <p>And a simplified output of the metrics.classification_report on command line screen looks like this:</p> <pre><code> precision recall f1-score support 12 0.84 0.48 0.61 2843 13 0.00 0.00 0.00 69 15 1.00 0.19 0.32 232 16 0.75 0.02 0.05 965 33 1.00 0.04 0.07 155 4 0.59 0.34 0.43 5600 41 0.63 0.49 0.55 6218 42 0.00 0.00 0.00 102 49 0.00 0.00 0.00 11 5 0.90 0.06 0.12 2010 50 0.00 0.00 0.00 5 51 0.96 0.07 0.13 1267 58 1.00 0.01 0.02 180 59 0.37 0.80 0.51 8127 7 0.91 0.05 0.10 579 8 0.50 0.56 0.53 7555 avg/total 0.59 0.48 0.45 35919 </code></pre> <p>I was wondering if there was any way to get the report output into a standard csv file with regular column 
headers</p> <p>When I send the command line output into a csv file or try to copy/paste the screen output into a spreadsheet - Openoffice Calc or Excel, It lumps the results in one column. Looking like this:</p> <p><a href="http://i.stack.imgur.com/LjVby.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/LjVby.jpg" alt="enter image description here"></a></p> <p>Help appreciated. Thanks!</p>
1
2016-09-23T13:45:19Z
39,664,921
<p>The way I have always solved output problems is, as mentioned in my previous comment, by converting my output to a DataFrame. Not only is it incredibly easy to send to files (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">see here</a>), but <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a> also makes it really easy to manipulate the data structure. The other way I have solved this is by writing the output line-by-line using <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">CSV</a>, specifically using <code>writerow</code>.</p> <p>If you manage to get the output into a DataFrame, it would be</p> <pre><code>dataframe_name_here.to_csv() </code></pre> <p>or, if using CSV, it would be something like the example they provide in the CSV link.</p>
-1
2016-09-23T15:53:00Z
[ "python", "text", "machine-learning", "scikit-learn", "classification" ]
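If pandas is not an option, the printed report can also be re-split into columns with the stdlib alone. A sketch parsing a hypothetical report excerpt (whitespace-separated rows with five fields, in the format shown in the question):

```python
import csv
import io

# Hypothetical excerpt of the printed classification_report string.
report = """             precision    recall  f1-score   support

         12       0.84      0.48      0.61      2843
         13       0.00      0.00      0.00        69

avg/total        0.59      0.48      0.45     35919"""

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(['label', 'precision', 'recall', 'f1-score', 'support'])
for line in report.splitlines()[1:]:
    parts = line.split()
    if len(parts) == 5:  # label + four metric columns
        writer.writerow(parts)
print(out.getvalue())
```

Real reports may print the summary row as `avg / total`, which splits into seven fields; adjust the length filter accordingly. Writing `out.getvalue()` to a file yields a regular CSV with column headers.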
Sending data from html to python flask server "GET / HTTP/1.1" 405 error
39,662,447
<p>very new to python/AJAX and have been piecing everything from examples on the internet/flask documentation and I've gotten so far. Basically what I am trying to do is send latitude and longitude coordinates on a click (from mapbox API) to flask, and have that data print to console (to prove it has successfully gone Flask so I can work with it later).</p> <p>data I am trying to send is formatted as: </p> <pre><code>LngLat {lng: 151.0164794921875, lat: -33.79572045486767} </code></pre> <p>HTML:</p> <pre><code>&lt;button onclick=submit() type="button"&gt;POST&lt;/button&gt; &lt;script&gt; map.on('click', function (e) { console.log(e.lngLat) }); function submit() { var myData = e.lngLat $.post( "/", $( "myData" ).serialize() ); } &lt;/script&gt; </code></pre> <p>PY:</p> <pre><code>from flask import Flask from flask import request from flask import render_template app = Flask(__name__) @app.route('/', methods=['POST']) def home(): return render_template('index.html') print(request.form['myData']) if __name__ == '__main__': app.run(debug=True) </code></pre> <p>when I try to run from console to localhost:5000 I get the error</p> <pre><code>127.0.0.1 - - [23/Sept/2016 23:21:15] "GET / HTTP/1.1" 405 - </code></pre> <p>I'm sorry if this is a silly question but I'm stumped for now! Thank you for your input </p>
-1
2016-09-23T13:47:57Z
39,662,650
<p>The reason you are getting a 405 error is that you only have the <code>home()</code> controller, which accepts only the <code>POST</code> method, while you are trying to get a response with the <code>GET</code> method.</p> <p>So you need to change the <code>methods</code> argument in the <code>@app.route()</code> decorator:</p> <pre><code>@app.route('/', methods=['GET', 'POST']) def home(): return render_template('index.html') </code></pre> <p>But still, you don't have any code that would handle your AJAX request.</p>
0
2016-09-23T13:58:10Z
[ "javascript", "python", "html", "ajax", "flask" ]
Sending data from html to python flask server "GET / HTTP/1.1" 405 error
39,662,447
<p>very new to python/AJAX and have been piecing everything from examples on the internet/flask documentation and I've gotten so far. Basically what I am trying to do is send latitude and longitude coordinates on a click (from mapbox API) to flask, and have that data print to console (to prove it has successfully gone Flask so I can work with it later).</p> <p>data I am trying to send is formatted as: </p> <pre><code>LngLat {lng: 151.0164794921875, lat: -33.79572045486767} </code></pre> <p>HTML:</p> <pre><code>&lt;button onclick=submit() type="button"&gt;POST&lt;/button&gt; &lt;script&gt; map.on('click', function (e) { console.log(e.lngLat) }); function submit() { var myData = e.lngLat $.post( "/", $( "myData" ).serialize() ); } &lt;/script&gt; </code></pre> <p>PY:</p> <pre><code>from flask import Flask from flask import request from flask import render_template app = Flask(__name__) @app.route('/', methods=['POST']) def home(): return render_template('index.html') print(request.form['myData']) if __name__ == '__main__': app.run(debug=True) </code></pre> <p>when I try to run from console to localhost:5000 I get the error</p> <pre><code>127.0.0.1 - - [23/Sept/2016 23:21:15] "GET / HTTP/1.1" 405 - </code></pre> <p>I'm sorry if this is a silly question but I'm stumped for now! Thank you for your input </p>
-1
2016-09-23T13:47:57Z
39,662,824
<p>In your route, allow the <code>GET</code> method, otherwise the HTML will never render.</p> <pre><code>@app.route('/', methods=['POST', 'GET']) </code></pre> <p>To print lat/lng to the console, first check if the method is <code>POST</code>, then print it:</p> <pre><code>if request.method == 'POST': print(request.form.get('lng'), request.form.get('lat')) </code></pre> <p>This is the resulting code for the route:</p> <pre><code>@app.route('/', methods=['POST', 'GET']) def home(): if request.method == 'POST': print(request.form.get('lng'), request.form.get('lat')) return render_template('index.html') </code></pre>
0
2016-09-23T14:06:43Z
[ "javascript", "python", "html", "ajax", "flask" ]
Unable to resolve EOL error while using chdir along with path in python
39,662,538
<p>Code</p> <pre><code>os.chdir(os.path.normpath(r"D:\mystuff\Ear\APP-INF\lib\extract-dto")) </code></pre> <p>I have tried other code sample :</p> <pre><code>os.chdir("D:\\mystuff\\ekaEar\\APP-INF\\lib\\extract-dto") </code></pre> <p>Output:</p> <pre><code>File "CreateJar.py", line 13 os.chdir(os.path.normpath('D:\mystuff\Ear\APP-INF\lib\')) ^ SyntaxError: EOL while scanning string literal </code></pre> <p>I think \e is escape sequence character not sure.</p> <p>How to prevent this error?</p>
0
2016-09-23T13:52:59Z
39,664,036
<p>It looks like the backslash at the end of the path is causing the single quote to be escaped. If possible, try removing the last backslash from the path.</p> <pre><code>os.chdir(os.path.normpath('D:\mystuff\Ear\APP-INF\lib')) </code></pre> <p>If your path is stored in a variable, you can do this in code with something like this:</p> <pre><code>my_path.rstrip('\\') </code></pre>
-1
2016-09-23T15:07:09Z
[ "python", "cmd" ]
cross products with einsums
39,662,540
<p>I'm trying to compute the cross-products of many 3x1 vector pairs as fast as possible. This</p> <pre><code>n = 10000 a = np.random.rand(n, 3) b = np.random.rand(n, 3) numpy.cross(a, b) </code></pre> <p>gives the correct answer, but motivated by <a href="http://stackoverflow.com/a/20910319/353337">this answer to a similar question</a>, I thought that <code>einsum</code> would get me somewhere. I found that both</p> <pre><code>eijk = np.zeros((3, 3, 3)) eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1 eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1 np.einsum('ijk,aj,ak-&gt;ai', eijk, a, b) np.einsum('iak,ak-&gt;ai', np.einsum('ijk,aj-&gt;iak', eijk, a), b) </code></pre> <p>compute the cross product, but their performance is disappointing: Both methods perform much worse than <code>np.cross</code>:</p> <pre><code>%timeit np.cross(a, b) 1000 loops, best of 3: 628 µs per loop </code></pre> <pre><code>%timeit np.einsum('ijk,aj,ak-&gt;ai', eijk, a, b) 100 loops, best of 3: 9.02 ms per loop </code></pre> <pre><code>%timeit np.einsum('iak,ak-&gt;ai', np.einsum('ijk,aj-&gt;iak', eijk, a), b) 100 loops, best of 3: 10.6 ms per loop </code></pre> <p>Any ideas of how to improve the <code>einsum</code>s?</p>
1
2016-09-23T13:53:03Z
39,663,181
<p>You can bring in matrix-multiplication using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html" rel="nofollow"><code>np.tensordot</code></a> to lose one of the dimensions at the first level and then use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> to lose the other dimension, like so -</p> <pre><code>np.einsum('aik,ak-&gt;ai',np.tensordot(a,eijk,axes=([1],[1])),b) </code></pre> <p>Alternatively, we can perform broadcasted elementwise multiplications between <code>a</code> and <code>b</code> using <code>np.einsum</code> and then lose the two dimensions in one go with <code>np.tensordot</code>, like so -</p> <pre><code>np.tensordot(np.einsum('aj,ak-&gt;ajk', a, b),eijk,axes=([1,2],[1,2])) </code></pre> <p>We could have performed the elementwise multiplications by introducing new axes too with something like <code>a[...,None]*b[:,None]</code>, but it seems to slow it down.</p> <hr> <p>Though, these show good improvement over the proposed <code>np.einsum</code> only based approaches, but fail to beat <code>np.cross</code>.</p> <p>Runtime test -</p> <pre><code>In [26]: # Setup input arrays ...: n = 10000 ...: a = np.random.rand(n, 3) ...: b = np.random.rand(n, 3) ...: In [27]: # Time already posted approaches ...: %timeit np.cross(a, b) ...: %timeit np.einsum('ijk,aj,ak-&gt;ai', eijk, a, b) ...: %timeit np.einsum('iak,ak-&gt;ai', np.einsum('ijk,aj-&gt;iak', eijk, a), b) ...: 1000 loops, best of 3: 298 µs per loop 100 loops, best of 3: 5.29 ms per loop 100 loops, best of 3: 9 ms per loop In [28]: %timeit np.einsum('aik,ak-&gt;ai',np.tensordot(a,eijk,axes=([1],[1])),b) 1000 loops, best of 3: 838 µs per loop In [30]: %timeit np.tensordot(np.einsum('aj,ak-&gt;ajk',a,b),eijk,axes=([1,2],[1,2])) 1000 loops, best of 3: 882 µs per loop </code></pre>
2
2016-09-23T14:23:50Z
[ "python", "performance", "numpy", "cross-product", "numpy-einsum" ]
cross products with einsums
39,662,540
<p>I'm trying to compute the cross-products of many 3x1 vector pairs as fast as possible. This</p> <pre><code>n = 10000 a = np.random.rand(n, 3) b = np.random.rand(n, 3) numpy.cross(a, b) </code></pre> <p>gives the correct answer, but motivated by <a href="http://stackoverflow.com/a/20910319/353337">this answer to a similar question</a>, I thought that <code>einsum</code> would get me somewhere. I found that both</p> <pre><code>eijk = np.zeros((3, 3, 3)) eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1 eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1 np.einsum('ijk,aj,ak-&gt;ai', eijk, a, b) np.einsum('iak,ak-&gt;ai', np.einsum('ijk,aj-&gt;iak', eijk, a), b) </code></pre> <p>compute the cross product, but their performance is disappointing: Both methods perform much worse than <code>np.cross</code>:</p> <pre><code>%timeit np.cross(a, b) 1000 loops, best of 3: 628 µs per loop </code></pre> <pre><code>%timeit np.einsum('ijk,aj,ak-&gt;ai', eijk, a, b) 100 loops, best of 3: 9.02 ms per loop </code></pre> <pre><code>%timeit np.einsum('iak,ak-&gt;ai', np.einsum('ijk,aj-&gt;iak', eijk, a), b) 100 loops, best of 3: 10.6 ms per loop </code></pre> <p>Any ideas of how to improve the <code>einsum</code>s?</p>
1
2016-09-23T13:53:03Z
39,663,686
<p><code>einsum()</code> performs more multiply operations than <code>cross()</code>, and in the newest NumPy version <code>cross()</code> doesn't create many temporary arrays. So <code>einsum()</code> can't be faster than <code>cross()</code>.</p> <p>Here is the old code of cross:</p> <pre><code>x = a[1]*b[2] - a[2]*b[1] y = a[2]*b[0] - a[0]*b[2] z = a[0]*b[1] - a[1]*b[0] </code></pre> <p>Here is the new code of cross:</p> <pre><code>multiply(a1, b2, out=cp0) tmp = array(a2 * b1) cp0 -= tmp multiply(a2, b0, out=cp1) multiply(a0, b2, out=tmp) cp1 -= tmp multiply(a0, b1, out=cp2) multiply(a1, b0, out=tmp) cp2 -= tmp </code></pre> <p>To speed it up, you need Cython or Numba.</p>
2
2016-09-23T14:48:00Z
[ "python", "performance", "numpy", "cross-product", "numpy-einsum" ]
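The three component formulas quoted above can be sketched in plain Python and checked against a couple of hand-computed cases (an illustration, not the NumPy source):

```python
def cross3(a, b):
    """Cross product of two 3-vectors, using the componentwise formulas."""
    x = a[1] * b[2] - a[2] * b[1]
    y = a[2] * b[0] - a[0] * b[2]
    z = a[0] * b[1] - a[1] * b[0]
    return [x, y, z]

print(cross3([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]  (x cross y = z)
print(cross3([0, 1, 0], [0, 0, 1]))  # [1, 0, 0]  (y cross z = x)
```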
python xlsxwriter - wrong category axis in column chart in Excel 2013
39,662,555
<p>I'm using <code>xlsxwriter</code> to generate a chart. I'm trying to use texts for the category axis, it works in almost every Excel version (2007, 2010). But not in Excel 2013, and not in Microsoft Excel Online (which seems to be like Excel 2013). </p> <p>The problem: the category axis is displayed as sequence numbers (1,2,3..), instead of the actual text in the cells. This is the relevant part of my code, which writes the <code>data</code> (a list of 2-sized tuples), and inserts a column chart based on that data.</p> <pre><code>workbook = xlsxwriter.Workbook('a.xlsx', {'default_date_format': 'dd/mm/yyyy'}) sheet = workbook.add_worksheet(sheet_name) sheet.write_row(0, 0, headers) # Write header row sheet.set_column(0, 1, 25) # Column width rowCount = 1 # Write the data for text, total in data: sheet.write_row(rowCount, 0, (text, total)) rowCount += 1 column_chart = workbook.add_chart({'type': 'column'}) column_chart.set_size({'width': 850, 'height': 600}) column_chart.set_x_axis({'text_axis': True}) column_chart.add_series({ 'name': sheet_name, 'categories': [sheet_name, 1, 0, rowCount, 0], # row, col row, col 'values': [sheet_name, 1, 1, rowCount, 1], 'data_labels': {'value': True} }) sheet.insert_chart('D10', column_chart) workbook.close() </code></pre> <p>As I've said, the code outputs good xlsx, that works in every Excel besides 2013. The category axis shows the line number in the sheet (1,2,3..) instead of the text value which is assigned to the chart in the <code>categories</code> option.</p> <p>Thanks in advance</p>
2
2016-09-23T13:53:41Z
39,664,936
<p>There isn't any reason that the XlsxWriter output would work in Excel 2007 and not in Excel 2013. The file format is the default 2007 file format and Excel is very good at backward compatibility.</p> <p>Also, I don't see the issue you describe. I modified your example to add some sample input data:</p> <pre><code>import xlsxwriter workbook = xlsxwriter.Workbook('a.xlsx', {'default_date_format': 'dd/mm/yyyy'}) sheet_name = 'Data' sheet = workbook.add_worksheet(sheet_name) sheet.set_column(0, 1, 25) rowCount = 1 data = [ ['Foo', 4], ['Bar', 5], ['Baz', 6], ] # Write the data for text, total in data: sheet.write_row(rowCount, 0, (text, total)) rowCount += 1 column_chart = workbook.add_chart({'type': 'column'}) column_chart.set_size({'width': 850, 'height': 600}) column_chart.set_x_axis({'text_axis': True}) column_chart.add_series({ 'name': sheet_name, 'categories': [sheet_name, 1, 0, rowCount, 0], # row, col row, col 'values': [sheet_name, 1, 1, rowCount, 1], 'data_labels': {'value': True} }) sheet.insert_chart('D10', column_chart) workbook.close() </code></pre> <p>And the output looks correct in Excel 2013:</p> <p><a href="http://i.stack.imgur.com/r4JI8.png" rel="nofollow"><img src="http://i.stack.imgur.com/r4JI8.png" alt="enter image description here"></a></p>
2
2016-09-23T15:54:09Z
[ "python", "axis", "excel-2013", "xlsxwriter" ]
python xlsxwriter - wrong category axis in column chart in Excel 2013
39,662,555
<p>I'm using <code>xlsxwriter</code> to generate a chart. I'm trying to use texts for the category axis, it works in almost every Excel version (2007, 2010). But not in Excel 2013, and not in Microsoft Excel Online (which seems to be like Excel 2013). </p> <p>The problem: the category axis is displayed as sequence numbers (1,2,3..), instead of the actual text in the cells. This is the relevant part of my code, which writes the <code>data</code> (a list of 2-sized tuples), and inserts a column chart based on that data.</p> <pre><code>workbook = xlsxwriter.Workbook('a.xlsx', {'default_date_format': 'dd/mm/yyyy'}) sheet = workbook.add_worksheet(sheet_name) sheet.write_row(0, 0, headers) # Write header row sheet.set_column(0, 1, 25) # Column width rowCount = 1 # Write the data for text, total in data: sheet.write_row(rowCount, 0, (text, total)) rowCount += 1 column_chart = workbook.add_chart({'type': 'column'}) column_chart.set_size({'width': 850, 'height': 600}) column_chart.set_x_axis({'text_axis': True}) column_chart.add_series({ 'name': sheet_name, 'categories': [sheet_name, 1, 0, rowCount, 0], # row, col row, col 'values': [sheet_name, 1, 1, rowCount, 1], 'data_labels': {'value': True} }) sheet.insert_chart('D10', column_chart) workbook.close() </code></pre> <p>As I've said, the code outputs good xlsx, that works in every Excel besides 2013. The category axis shows the line number in the sheet (1,2,3..) instead of the text value which is assigned to the chart in the <code>categories</code> option.</p> <p>Thanks in advance</p>
2
2016-09-23T13:53:41Z
39,673,192
<p>Problem actually solved by upgrading my version of the library. My version was 0.5.1 and now it is 0.9. I wonder if it was a bug in the earlier version. It works great now; too bad it took me some time to figure out.</p>
-1
2016-09-24T06:17:13Z
[ "python", "axis", "excel-2013", "xlsxwriter" ]
Python: requests.get ignores the last record
39,662,596
<p>This is my code:</p> <pre><code>response = requests.get(apiurl+'api/v1/watch/services', auth=(apiuser,apipass), verify=False, stream=True) for line in response.iter_lines(): try: data = json.loads(line.decode('utf-8')) pprint.pprint(data) except Exception as e: pprint.pprint(e) pass </code></pre> <p>Please note the <code>stream=True</code>.</p> <p>The problem is, when I have <code>a</code> <code>b</code> <code>c</code> <code>d</code> on the input, the script just outputs <code>a</code> <code>b</code> and <code>c</code>. Then, when <code>e</code> comes on input, the script outputs <code>d</code>.</p> <p>What am I doing wrong?</p>
0
2016-09-23T13:56:02Z
39,755,743
<p>OK so the answer is a bit unexpected for me.</p> <p>Updating python from 3.4 to 3.5 helped, nothing else changed.</p> <p>Hope this answer helps someone else fighting this problem.</p>
0
2016-09-28T19:07:09Z
[ "python", "python-requests" ]
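One plausible explanation for the one-line lag (an illustration only — the actual behavior of <code>requests</code> depends on the version and on <code>chunk_size</code>) is line buffering: a streaming line iterator has to hold back the last, possibly incomplete line of a chunk until the next chunk shows where it ends:

```python
# Simplified sketch (not the requests source) of a buffering line iterator.
def iter_lines_sim(chunks):
    pending = ''
    for chunk in chunks:
        pending += chunk
        lines = pending.split('\n')
        pending = lines.pop()   # possibly incomplete -> keep it buffered
        for line in lines:
            yield line
    if pending:
        yield pending

# 'd' is only emitted once the next chunk proves the line is complete:
out = list(iter_lines_sim(['a\nb\nc\nd', '\ne\n']))
print(out)  # ['a', 'b', 'c', 'd', 'e']
```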
getting the ascii code of a sentence
39,662,665
<p>I am trying to make a program that will give me the ASCII codes of a sentence; right now I can only convert one letter.</p> <p>For example:</p> <pre><code>a = input("c") b = ord(a) print (b) </code></pre> <p>It prints 99; my goal is to type abc with the outcome of 97 98 99.</p>
0
2016-09-23T13:59:03Z
39,662,707
<p>Loop through <code>a</code>:</p> <pre><code>print([ord(c) for c in a]) </code></pre>
2
2016-09-23T14:00:38Z
[ "python", "ascii" ]
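Applied to the example from the question:

```python
sentence = "abc"
codes = [ord(c) for c in sentence]

print(codes)                            # [97, 98, 99]
print(" ".join(str(n) for n in codes))  # 97 98 99
```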
numpy.meshgrid explanation
39,662,699
<p>Could someone explain the <code>meshgrid</code> method? I cannot wrap my mind around it. The example is from the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html#numpy.meshgrid" rel="nofollow">SciPy</a> site:</p> <pre><code>import numpy as np nx, ny = (3, 2) x = np.linspace(0, 1, nx) print ("x =", x) y = np.linspace(0, 1, ny) print ("y =", y) xv, yv = np.meshgrid(x, y) print ("xv_1 =", xv) print ("yv_1 =",yv) xv, yv = np.meshgrid(x, y, sparse=True) # make sparse output arrays print ("xv_2 =", xv) print ("yv_2 =", yv) </code></pre> <p>Printout is:</p> <pre><code>x = [ 0. 0.5 1. ] y = [ 0. 1.] xv_1 = [[ 0. 0.5 1. ] [ 0. 0.5 1. ]] yv_1 = [[ 0. 0. 0.] [ 1. 1. 1.]] xv_2 = [[ 0. 0.5 1. ]] yv_2 = [[ 0.] [ 1.]] </code></pre> <p>Why are the arrays xv_1 and yv_1 formed like this? Ty :)</p>
0
2016-09-23T14:00:23Z
39,663,187
<p>Your linearly spaced vectors <code>x</code> and <code>y</code> defined by <code>linspace</code> use 3 and 2 points, respectively.</p> <p>These vectors are then used by the meshgrid function to create a 2D grid of points, one point for each combination of an <code>x</code> and a <code>y</code> coordinate: 3 x 2 = 6 points in total, so the returned arrays have shape (2, 3).</p> <p>The function <code>meshgrid</code> returns a pair of coordinate matrices that hold, in each cell, the <code>x</code> and <code>y</code> coordinates of the corresponding point of your space.</p> <p>Conceptually, it is created as follows:</p> <pre><code>import numpy as np # dummy re-implementation def meshgrid_custom(x, y): xv = np.zeros((len(x), len(y))) yv = np.zeros((len(x), len(y))) for i, ix in enumerate(x): for j, jy in enumerate(y): xv[i, j] = ix yv[i, j] = jy return xv.T, yv.T </code></pre> <p>So, for example, the point at location (1,1) has the coordinates:</p> <p><code>x = xv_1[1,1] = 0.5</code><br> <code>y = yv_1[1,1] = 1.0</code></p>
0
2016-09-23T14:23:56Z
[ "python", "numpy" ]
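The dummy implementation from the answer above can be checked directly against <code>np.meshgrid</code> (a sanity check, not part of the original answer; the loop is slightly tidied with <code>enumerate</code>):

```python
import numpy as np

# Dummy re-implementation of meshgrid (default 'xy' indexing)
def meshgrid_custom(x, y):
    xv = np.zeros((len(x), len(y)))
    yv = np.zeros((len(x), len(y)))
    for i, ix in enumerate(x):
        for j, jy in enumerate(y):
            xv[i, j] = ix
            yv[i, j] = jy
    return xv.T, yv.T

x = np.linspace(0, 1, 3)
y = np.linspace(0, 1, 2)

xv1, yv1 = meshgrid_custom(x, y)
xv2, yv2 = np.meshgrid(x, y)

print(np.array_equal(xv1, xv2), np.array_equal(yv1, yv2))  # True True
```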
numpy.meshgrid explanation
39,662,699
<p>Could someone explain the <code>meshgrid</code> method? I cannot wrap my mind around it. The example is from the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html#numpy.meshgrid" rel="nofollow">SciPy</a> site:</p> <pre><code>import numpy as np nx, ny = (3, 2) x = np.linspace(0, 1, nx) print ("x =", x) y = np.linspace(0, 1, ny) print ("y =", y) xv, yv = np.meshgrid(x, y) print ("xv_1 =", xv) print ("yv_1 =",yv) xv, yv = np.meshgrid(x, y, sparse=True) # make sparse output arrays print ("xv_2 =", xv) print ("yv_2 =", yv) </code></pre> <p>Printout is:</p> <pre><code>x = [ 0. 0.5 1. ] y = [ 0. 1.] xv_1 = [[ 0. 0.5 1. ] [ 0. 0.5 1. ]] yv_1 = [[ 0. 0. 0.] [ 1. 1. 1.]] xv_2 = [[ 0. 0.5 1. ]] yv_2 = [[ 0.] [ 1.]] </code></pre> <p>Why are the arrays xv_1 and yv_1 formed like this? Ty :)</p>
0
2016-09-23T14:00:23Z
39,665,354
<pre><code>In [214]: nx, ny = (3, 2) In [215]: x = np.linspace(0, 1, nx) In [216]: x Out[216]: array([ 0. , 0.5, 1. ]) In [217]: y = np.linspace(0, 1, ny) In [218]: y Out[218]: array([ 0., 1.]) </code></pre> <p>Using unpacking to better see the 2 arrays produced by <code>meshgrid</code>:</p> <pre><code>In [225]: X,Y = np.meshgrid(x, y) In [226]: X Out[226]: array([[ 0. , 0.5, 1. ], [ 0. , 0.5, 1. ]]) In [227]: Y Out[227]: array([[ 0., 0., 0.], [ 1., 1., 1.]]) </code></pre> <p>and for the sparse version. Notice that <code>X1</code> looks like one row of <code>X</code> (but 2d). and <code>Y1</code> like one column of <code>Y</code>.</p> <pre><code>In [228]: X1,Y1 = np.meshgrid(x, y, sparse=True) In [229]: X1 Out[229]: array([[ 0. , 0.5, 1. ]]) In [230]: Y1 Out[230]: array([[ 0.], [ 1.]]) </code></pre> <p>When used in calculations like plus and times, both forms behave the same. That's because of <code>numpy's</code> broadcasting.</p> <pre><code>In [231]: X+Y Out[231]: array([[ 0. , 0.5, 1. ], [ 1. , 1.5, 2. ]]) In [232]: X1+Y1 Out[232]: array([[ 0. , 0.5, 1. ], [ 1. , 1.5, 2. ]]) </code></pre> <p>The shapes might also help:</p> <pre><code>In [235]: X.shape, Y.shape Out[235]: ((2, 3), (2, 3)) In [236]: X1.shape, Y1.shape Out[236]: ((1, 3), (2, 1)) </code></pre> <p>The <code>X</code> and <code>Y</code> have more values than are actually needed for most uses. But usually there isn't much of penalty for using them instead the sparse versions.</p>
1
2016-09-23T16:18:37Z
[ "python", "numpy" ]
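A compact check of the broadcasting equivalence described above — the sparse pair behaves exactly like the dense one in elementwise arithmetic:

```python
import numpy as np

x = np.linspace(0, 1, 3)
y = np.linspace(0, 1, 2)

X, Y = np.meshgrid(x, y)                  # dense: both have shape (2, 3)
X1, Y1 = np.meshgrid(x, y, sparse=True)   # sparse: shapes (1, 3) and (2, 1)

# Broadcasting expands the sparse arrays on the fly:
print(np.array_equal(X + Y, X1 + Y1))  # True
print(np.array_equal(X * Y, X1 * Y1))  # True
```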
plot categorical variable compared to another categorical variable in Python
39,662,817
<p>What is the best way to plot a categorical variable against another categorical variable in Python? Imagine we have "males" and "females" and, on the other side, "paid" and "unpaid". How can I plot a meaningful and easy-to-interpret figure in Python describing males and females and whether or not they paid the loan? </p>
1
2016-09-23T14:06:25Z
39,690,919
<p>This type of stacked bar chart can be used: <a href="http://i.stack.imgur.com/q9QE5.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/q9QE5.jpg" alt="enter image description here"></a></p> <hr> <p>The code for the above stacked bar chart:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt raw_data = {'genders': ['Male', 'Female'], 'Paid': [40, 60], 'Unpaid': [60, 40]} df = pd.DataFrame(raw_data, columns = ['genders', 'Paid', 'Unpaid']) # Create the figure and the "subplots" i.e. the bars f, ax1 = plt.subplots(1, figsize=(12,8)) # Set the bar width bar_width = 0.75 # positions of the left bar-boundaries bar_l = [i+1 for i in range(len(df['Paid']))] # positions of the x-axis ticks (center of the bars as bar labels) tick_pos = [i+(bar_width/2) for i in bar_l] # Create a bar plot, at positions bar_l ax1.bar(bar_l, # using the Paid data df['Paid'], # set the width width=bar_width, # with the label Paid label='Paid', # with alpha 0.5 alpha=0.5, # with color color='#F4561D') # Create a bar plot, at positions bar_l ax1.bar(bar_l, # using the Unpaid data df['Unpaid'], # set the width width=bar_width, # with Paid on the bottom bottom=df['Paid'], # with the label Unpaid label='Unpaid', # with alpha 0.5 alpha=0.5, # with color color='#F1911E') # set the x ticks with names plt.xticks(tick_pos, df['genders']) # Set the label and legends ax1.set_ylabel("Proportion") ax1.set_xlabel("Genders") plt.legend(loc='upper left') # Set a buffer around the edge plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width]) </code></pre>
0
2016-09-25T19:27:05Z
[ "python", "matplotlib", "plot", "categorical-data" ]
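The only arithmetic behind the stacking (independent of matplotlib) is that each "Unpaid" segment starts where the "Paid" segment ends, via <code>bottom=df['Paid']</code>; with the sample numbers above every bar ends at 100, so the chart reads as proportions:

```python
paid = [40, 60]     # Male, Female
unpaid = [60, 40]

bottoms = paid                                  # where the Unpaid segments start
tops = [p + u for p, u in zip(paid, unpaid)]    # where they end

print(bottoms)  # [40, 60]
print(tops)     # [100, 100]
```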
Tracking down implicit unicode conversions in Python 2
39,662,847
<p>I have a large project where at various places problematic implicit Unicode conversions (coercions) were used in the form of e.g.: </p> <pre class="lang-python prettyprint-override"><code>someDynamicStr = "bar" # could come from various sources # works u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) someDynamicStr = "\xff" # uh-oh # raises UnicodeDecodeError u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) </code></pre> <p>(Possibly other forms as well.)</p> <p><strong>Now I would like to track down those usages, especially those in actively used code.</strong></p> <p>It would be great if I could easily replace the <code>unicode</code> constructor with a wrapper which checks whether the input is of type <code>str</code> and the <code>encoding</code>/<code>errors</code> parameters are set to the default values and then notifies me (prints traceback or such).</p> <p><em>/edit:</em></p> <p>While not directly related to what I am looking for I came across this gloriously horrible hack for how to make the decode exception go away altogether (the decode one only, i.e.
<code>str</code> to <code>unicode</code>, but not the other way around, see <a href="https://mail.python.org/pipermail/python-list/2012-July/627506.html" rel="nofollow">https://mail.python.org/pipermail/python-list/2012-July/627506.html</a>).</p> <p>I don't plan on using it but it might be interesting for those battling problems with invalid Unicode input and looking for a quick fix (but please think about the side effects):</p> <pre class="lang-python prettyprint-override"><code>import codecs codecs.register_error("strict", codecs.ignore_errors) codecs.register_error("strict", lambda x: (u"", x.end)) # alternatively </code></pre> <p>(An internet search for <code>codecs.register_error("strict"</code> revealed that apparently it's used in some real projects.)</p> <p><em>/edit #2:</em></p> <p>For explicit conversions I made a snippet with the help of <a href="http://stackoverflow.com/a/4025310">a SO post on monkeypatching</a>:</p> <pre class="lang-python prettyprint-override"><code>class PatchedUnicode(unicode): def __init__(self, obj=None, encoding=None, *args, **kwargs): if encoding in (None, "ascii", "646", "us-ascii"): print("Problematic unicode() usage detected!") super(PatchedUnicode, self).__init__(obj, encoding, *args, **kwargs) import __builtin__ __builtin__.unicode = PatchedUnicode </code></pre> <p>This only affects explicit conversions using the <code>unicode()</code> constructor directly so it's not something I need.</p> <p><em>/edit #3:</em></p> <p>The thread "<a href="https://stackoverflow.com/questions/6738987/extension-method-for-python-built-in-types">Extension method for python built-in types!</a>" makes me think that it might actually not be easily possible (in CPython at least).</p> <p><em>/edit #4:</em></p> <p>It's nice to see many good answers here, too bad I can only give out the bounty once.</p> <p>In the meantime I came across a somewhat similar question, at least in the sense of what the person tried to achieve: <a 
href="http://stackoverflow.com/q/2851481">Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?</a> Please note though that throwing an exception would <strong>not</strong> have been OK in my case. Here I was looking for something which might point me to the different locations of problematic code (e.g. by printing smth.) but not something which might exit the program or change its behavior (because this way I can prioritize what to fix).</p> <p>On another note, the people working on the Mypy project (which include Guido van Rossum) might also come up with something similar helpful in the future, see the discussions at <a href="https://github.com/python/mypy/issues/1141" rel="nofollow">https://github.com/python/mypy/issues/1141</a> and more recently <a href="https://github.com/python/typing/issues/208" rel="nofollow">https://github.com/python/typing/issues/208</a>.</p> <p><em>/edit #5</em></p> <p>I also came across the following but didn't have yet the time to test it: <a href="https://pypi.python.org/pypi/unicode-nazi" rel="nofollow">https://pypi.python.org/pypi/unicode-nazi</a></p>
7
2016-09-23T14:08:02Z
39,688,614
<p>I see you have a lot of edits relating to solutions you may have encountered. I'm just going to address your original post which I believe to be: "I want to create a wrapper around the unicode constructor that checks input".</p> <p>The <a href="https://docs.python.org/2/library/functions.html#unicode" rel="nofollow"><code>unicode</code></a> method is part of Python's standard library. You will <em>decorate</em> the <code>unicode</code> method to add checks to the method.</p> <pre><code>def add_checks(fxn): def resulting_fxn(*args, **kwargs): # check whether the input is of type str if type(args[0]) is str: # do something # this is where the encoding/errors parameters are set to the default values encoding = 'utf-8' # Set default error behavior error = 'ignore' # Print any information (i.e. traceback) # print 'blah' # TODO: for a traceback, use the traceback module return fxn(args[0], encoding, error) return resulting_fxn </code></pre> <p>Using it looks like this:</p> <pre><code>unicode = add_checks(unicode) </code></pre> <p>We overwrite the existing function name so that you don't have to change all the calls in the large project. You want to do this very early on in the runtime so that subsequent calls have the new behavior.</p>
-3
2016-09-25T15:31:51Z
[ "python", "python-2.7", "debugging", "unicode", "monkeypatching" ]
Tracking down implicit unicode conversions in Python 2
39,662,847
<p>I have a large project where at various places problematic implicit Unicode conversions (coercions) were used in the form of e.g.: </p> <pre class="lang-python prettyprint-override"><code>someDynamicStr = "bar" # could come from various sources # works u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) someDynamicStr = "\xff" # uh-oh # raises UnicodeDecodeError u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) </code></pre> <p>(Possibly other forms as well.)</p> <p><strong>Now I would like to track down those usages, especially those in actively used code.</strong></p> <p>It would be great if I could easily replace the <code>unicode</code> constructor with a wrapper which checks whether the input is of type <code>str</code> and the <code>encoding</code>/<code>errors</code> parameters are set to the default values and then notifies me (prints traceback or such).</p> <p><em>/edit:</em></p> <p>While not directly related to what I am looking for I came across this gloriously horrible hack for how to make the decode exception go away altogether (the decode one only, i.e.
<code>str</code> to <code>unicode</code>, but not the other way around, see <a href="https://mail.python.org/pipermail/python-list/2012-July/627506.html" rel="nofollow">https://mail.python.org/pipermail/python-list/2012-July/627506.html</a>).</p> <p>I don't plan on using it but it might be interesting for those battling problems with invalid Unicode input and looking for a quick fix (but please think about the side effects):</p> <pre class="lang-python prettyprint-override"><code>import codecs codecs.register_error("strict", codecs.ignore_errors) codecs.register_error("strict", lambda x: (u"", x.end)) # alternatively </code></pre> <p>(An internet search for <code>codecs.register_error("strict"</code> revealed that apparently it's used in some real projects.)</p> <p><em>/edit #2:</em></p> <p>For explicit conversions I made a snippet with the help of <a href="http://stackoverflow.com/a/4025310">a SO post on monkeypatching</a>:</p> <pre class="lang-python prettyprint-override"><code>class PatchedUnicode(unicode): def __init__(self, obj=None, encoding=None, *args, **kwargs): if encoding in (None, "ascii", "646", "us-ascii"): print("Problematic unicode() usage detected!") super(PatchedUnicode, self).__init__(obj, encoding, *args, **kwargs) import __builtin__ __builtin__.unicode = PatchedUnicode </code></pre> <p>This only affects explicit conversions using the <code>unicode()</code> constructor directly so it's not something I need.</p> <p><em>/edit #3:</em></p> <p>The thread "<a href="https://stackoverflow.com/questions/6738987/extension-method-for-python-built-in-types">Extension method for python built-in types!</a>" makes me think that it might actually not be easily possible (in CPython at least).</p> <p><em>/edit #4:</em></p> <p>It's nice to see many good answers here, too bad I can only give out the bounty once.</p> <p>In the meantime I came across a somewhat similar question, at least in the sense of what the person tried to achieve: <a 
href="http://stackoverflow.com/q/2851481">Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?</a> Please note though that throwing an exception would <strong>not</strong> have been OK in my case. Here I was looking for something which might point me to the different locations of problematic code (e.g. by printing smth.) but not something which might exit the program or change its behavior (because this way I can prioritize what to fix).</p> <p>On another note, the people working on the Mypy project (which include Guido van Rossum) might also come up with something similar helpful in the future, see the discussions at <a href="https://github.com/python/mypy/issues/1141" rel="nofollow">https://github.com/python/mypy/issues/1141</a> and more recently <a href="https://github.com/python/typing/issues/208" rel="nofollow">https://github.com/python/typing/issues/208</a>.</p> <p><em>/edit #5</em></p> <p>I also came across the following but didn't have yet the time to test it: <a href="https://pypi.python.org/pypi/unicode-nazi" rel="nofollow">https://pypi.python.org/pypi/unicode-nazi</a></p>
7
2016-09-23T14:08:02Z
39,736,968
<p>You can register a custom encoding which prints a message whenever it's used:</p> <p>Code in <code>ourencoding.py</code>:</p> <pre><code>import sys import codecs import traceback # Define a function to print out a stack frame and a message: def printWarning(s): sys.stderr.write(s) sys.stderr.write("\n") l = traceback.extract_stack() # cut off the frames pointing to printWarning and our_encode l = traceback.format_list(l[:-2]) sys.stderr.write("".join(l)) # Define our encoding: originalencoding = sys.getdefaultencoding() def our_encode(s, errors='strict'): printWarning("Default encoding used"); return (codecs.encode(s, originalencoding, errors), len(s)) def our_decode(s, errors='strict'): printWarning("Default encoding used"); return (codecs.decode(s, originalencoding, errors), len(s)) def our_search(name): if name == 'our_encoding': return codecs.CodecInfo( name='our_encoding', encode=our_encode, decode=our_decode); return None # register our search and set the default encoding: codecs.register(our_search) reload(sys) sys.setdefaultencoding('our_encoding') </code></pre> <p>If you import this file at the start of our script, then you'll see warnings for implicit conversions:</p> <pre><code>#!python2 # coding: utf-8 import ourencoding print("test 1") a = "hello " + u"world" print("test 2") a = "hello ☺ " + u"world" print("test 3") b = u" ".join(["hello", u"☺"]) print("test 4") c = unicode("hello ☺") </code></pre> <p>output:</p> <pre><code>test 1 test 2 Default encoding used File "test.py", line 10, in &lt;module&gt; a = "hello ☺ " + u"world" test 3 Default encoding used File "test.py", line 13, in &lt;module&gt; b = u" ".join(["hello", u"☺"]) test 4 Default encoding used File "test.py", line 16, in &lt;module&gt; c = unicode("hello ☺") </code></pre> <p>It's not perfect as test 1 shows, if the converted string only contain ASCII characters, sometimes you won't see a warning.</p>
4
2016-09-28T02:12:07Z
[ "python", "python-2.7", "debugging", "unicode", "monkeypatching" ]
Tracking down implicit unicode conversions in Python 2
39,662,847
<p>I have a large project where at various places problematic implicit Unicode conversions (coercions) were used in the form of e.g.: </p> <pre class="lang-python prettyprint-override"><code>someDynamicStr = "bar" # could come from various sources # works u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) someDynamicStr = "\xff" # uh-oh # raises UnicodeDecodeError u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) </code></pre> <p>(Possibly other forms as well.)</p> <p><strong>Now I would like to track down those usages, especially those in actively used code.</strong></p> <p>It would be great if I could easily replace the <code>unicode</code> constructor with a wrapper which checks whether the input is of type <code>str</code> and the <code>encoding</code>/<code>errors</code> parameters are set to the default values and then notifies me (prints traceback or such).</p> <p><em>/edit:</em></p> <p>While not directly related to what I am looking for I came across this gloriously horrible hack for how to make the decode exception go away altogether (the decode one only, i.e.
<code>str</code> to <code>unicode</code>, but not the other way around, see <a href="https://mail.python.org/pipermail/python-list/2012-July/627506.html" rel="nofollow">https://mail.python.org/pipermail/python-list/2012-July/627506.html</a>).</p> <p>I don't plan on using it but it might be interesting for those battling problems with invalid Unicode input and looking for a quick fix (but please think about the side effects):</p> <pre class="lang-python prettyprint-override"><code>import codecs codecs.register_error("strict", codecs.ignore_errors) codecs.register_error("strict", lambda x: (u"", x.end)) # alternatively </code></pre> <p>(An internet search for <code>codecs.register_error("strict"</code> revealed that apparently it's used in some real projects.)</p> <p><em>/edit #2:</em></p> <p>For explicit conversions I made a snippet with the help of <a href="http://stackoverflow.com/a/4025310">a SO post on monkeypatching</a>:</p> <pre class="lang-python prettyprint-override"><code>class PatchedUnicode(unicode): def __init__(self, obj=None, encoding=None, *args, **kwargs): if encoding in (None, "ascii", "646", "us-ascii"): print("Problematic unicode() usage detected!") super(PatchedUnicode, self).__init__(obj, encoding, *args, **kwargs) import __builtin__ __builtin__.unicode = PatchedUnicode </code></pre> <p>This only affects explicit conversions using the <code>unicode()</code> constructor directly so it's not something I need.</p> <p><em>/edit #3:</em></p> <p>The thread "<a href="https://stackoverflow.com/questions/6738987/extension-method-for-python-built-in-types">Extension method for python built-in types!</a>" makes me think that it might actually not be easily possible (in CPython at least).</p> <p><em>/edit #4:</em></p> <p>It's nice to see many good answers here, too bad I can only give out the bounty once.</p> <p>In the meantime I came across a somewhat similar question, at least in the sense of what the person tried to achieve: <a 
href="http://stackoverflow.com/q/2851481">Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?</a> Please note though that throwing an exception would <strong>not</strong> have been OK in my case. Here I was looking for something which might point me to the different locations of problematic code (e.g. by printing smth.) but not something which might exit the program or change its behavior (because this way I can prioritize what to fix).</p> <p>On another note, the people working on the Mypy project (which include Guido van Rossum) might also come up with something similar helpful in the future, see the discussions at <a href="https://github.com/python/mypy/issues/1141" rel="nofollow">https://github.com/python/mypy/issues/1141</a> and more recently <a href="https://github.com/python/typing/issues/208" rel="nofollow">https://github.com/python/typing/issues/208</a>.</p> <p><em>/edit #5</em></p> <p>I also came across the following but didn't have yet the time to test it: <a href="https://pypi.python.org/pypi/unicode-nazi" rel="nofollow">https://pypi.python.org/pypi/unicode-nazi</a></p>
7
2016-09-23T14:08:02Z
39,748,135
<p>What you can do is the following:</p> <p>First create a custom encoding. I will call it "lascii" for "logging ASCII":</p> <pre><code>import codecs import traceback def lascii_encode(input,errors='strict'): print("ENCODED:") traceback.print_stack() return codecs.ascii_encode(input) def lascii_decode(input,errors='strict'): print("DECODED:") traceback.print_stack() return codecs.ascii_decode(input) class Codec(codecs.Codec): def encode(self, input,errors='strict'): return lascii_encode(input,errors) def decode(self, input,errors='strict'): return lascii_decode(input,errors) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): print("Incremental ENCODED:") traceback.print_stack() return codecs.ascii_encode(input) class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): print("Incremental DECODED:") traceback.print_stack() return codecs.ascii_decode(input) class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): pass def getregentry(): return codecs.CodecInfo( name='lascii', encode=lascii_encode, decode=lascii_decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, ) </code></pre> <p>What this does is basically the same as the ASCII-codec, just that it prints a message and the current stack trace every time it encodes or decodes from unicode to lascii.</p> <p>Now you need to make it available to the codecs module so that it can be found by the name "lascii". For this you need to create a search function that returns the lascii-codec when it's fed with the string "lascii". 
This is then registered to the codecs module:</p> <pre><code>def searchFunc(name): if name=="lascii": return getregentry() else: return None codecs.register(searchFunc) </code></pre> <p>The last thing that is now left to do is to tell the sys module to use 'lascii' as default encoding:</p> <pre><code>import sys reload(sys) # necessary, because sys.setdefaultencoding is deleted on start of Python sys.setdefaultencoding('lascii') </code></pre> <p><strong>Warning:</strong> This uses some deprecated or otherwise unrecommended features. It might not be efficient or bug-free. Do not use in production, only for testing and/or debugging.</p>
2
2016-09-28T12:52:15Z
[ "python", "python-2.7", "debugging", "unicode", "monkeypatching" ]
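For readers who want to see the codec-registration machinery from the answer above in isolation, a minimal sketch (written in Python 3 syntax for brevity; the name `lascii` and the `calls` log are illustrative, and the original answer's Python 2 `sys.setdefaultencoding` step is omitted here):

```python
import codecs

calls = []  # records every encode/decode routed through the custom codec

def _encode(input, errors='strict'):
    calls.append(('encode', str(input)))
    return codecs.ascii_encode(input, errors)

def _decode(input, errors='strict'):
    calls.append(('decode', bytes(input)))
    return codecs.ascii_decode(input, errors)

def _search(name):
    # codecs.register feeds every unknown codec name to this function
    if name == 'lascii':
        return codecs.CodecInfo(name='lascii', encode=_encode, decode=_decode)
    return None

codecs.register(_search)

encoded = 'foo'.encode('lascii')   # routed through _encode
decoded = b'bar'.decode('lascii')  # routed through _decode
```

After registration, `'lascii'` works anywhere a codec name is accepted, which is what lets the full answer swap it in as the process-wide default.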
Tracking down implicit unicode conversions in Python 2
39,662,847
<p>I have a large project where at various places problematic implicit Unicode conversions (coercions) were used in the form of e.g.: </p> <pre class="lang-python prettyprint-override"><code>someDynamicStr = "bar" # could come from various sources # works u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) someDynamicStr = "\xff" # uh-oh # raises UnicodeDecodeError u"foo" + someDynamicStr u"foo{}".format(someDynamicStr) </code></pre> <p>(Possibly other forms as well.)</p> <p><strong>Now I would like to track down those usages, especially those in actively used code.</strong></p> <p>It would be great if I could easily replace the <code>unicode</code> constructor with a wrapper which checks whether the input is of type <code>str</code> and the <code>encoding</code>/<code>errors</code> parameters are set to the default values and then notifies me (prints traceback or such).</p> <p><em>/edit:</em></p> <p>While not directly related to what I am looking for I came across this gloriously horrible hack for how to make the decode exception go away altogether (the decode one only, i.e. 
<code>str</code> to <code>unicode</code>, but not the other way around, see <a href="https://mail.python.org/pipermail/python-list/2012-July/627506.html" rel="nofollow">https://mail.python.org/pipermail/python-list/2012-July/627506.html</a>).</p> <p>I don't plan on using it but it might be interesting for those battling problems with invalid Unicode input and looking for a quick fix (but please think about the side effects):</p> <pre class="lang-python prettyprint-override"><code>import codecs codecs.register_error("strict", codecs.ignore_errors) codecs.register_error("strict", lambda x: (u"", x.end)) # alternatively </code></pre> <p>(An internet search for <code>codecs.register_error("strict"</code> revealed that apparently it's used in some real projects.)</p> <p><em>/edit #2:</em></p> <p>For explicit conversions I made a snippet with the help of <a href="http://stackoverflow.com/a/4025310">a SO post on monkeypatching</a>:</p> <pre class="lang-python prettyprint-override"><code>class PatchedUnicode(unicode): def __init__(self, obj=None, encoding=None, *args, **kwargs): if encoding in (None, "ascii", "646", "us-ascii"): print("Problematic unicode() usage detected!") super(PatchedUnicode, self).__init__(obj, encoding, *args, **kwargs) import __builtin__ __builtin__.unicode = PatchedUnicode </code></pre> <p>This only affects explicit conversions using the <code>unicode()</code> constructor directly so it's not something I need.</p> <p><em>/edit #3:</em></p> <p>The thread "<a href="https://stackoverflow.com/questions/6738987/extension-method-for-python-built-in-types">Extension method for python built-in types!</a>" makes me think that it might actually not be easily possible (in CPython at least).</p> <p><em>/edit #4:</em></p> <p>It's nice to see many good answers here, too bad I can only give out the bounty once.</p> <p>In the meantime I came across a somewhat similar question, at least in the sense of what the person tried to achieve: <a 
href="http://stackoverflow.com/q/2851481">Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?</a> Please note though that throwing an exception would <strong>not</strong> have been OK in my case. Here I was looking for something which might point me to the different locations of problematic code (e.g. by printing smth.) but not something which might exit the program or change its behavior (because this way I can prioritize what to fix).</p> <p>On another note, the people working on the Mypy project (which include Guido van Rossum) might also come up with something similar helpful in the future, see the discussions at <a href="https://github.com/python/mypy/issues/1141" rel="nofollow">https://github.com/python/mypy/issues/1141</a> and more recently <a href="https://github.com/python/typing/issues/208" rel="nofollow">https://github.com/python/typing/issues/208</a>.</p> <p><em>/edit #5</em></p> <p>I also came across the following but didn't have yet the time to test it: <a href="https://pypi.python.org/pypi/unicode-nazi" rel="nofollow">https://pypi.python.org/pypi/unicode-nazi</a></p>
7
2016-09-23T14:08:02Z
39,805,488
<p>Just add:</p> <pre><code>from __future__ import unicode_literals </code></pre> <p>at the beginning of your source code files - it has to be the first import and it has to be in all source code files affected and the headache of using unicode in Python-2.7 goes away. If you didn't do anything super weird with strings then it should get rid of the problem out of the box.<br> Check out the following Copy&amp;Paste from my console - I tried with the sample from your question:</p> <pre><code>user@linux2:~$ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; someDynamicStr = "bar" # could come from various sources &gt;&gt;&gt; &gt;&gt;&gt; # works ... u"foo" + someDynamicStr u'foobar' &gt;&gt;&gt; u"foo{}".format(someDynamicStr) u'foobar' &gt;&gt;&gt; &gt;&gt;&gt; someDynamicStr = "\xff" # uh-oh &gt;&gt;&gt; &gt;&gt;&gt; # raises UnicodeDecodeError ... u"foo" + someDynamicStr Traceback (most recent call last): File "&lt;stdin&gt;", line 2, in &lt;module&gt; UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128) &gt;&gt;&gt; u"foo{}".format(someDynamicStr) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128) &gt;&gt;&gt; </code></pre> <p>And now with <code>__future__</code> magic:</p> <pre><code>user@linux2:~$ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; from __future__ import unicode_literals &gt;&gt;&gt; someDynamicStr = "bar" # could come from various sources &gt;&gt;&gt; &gt;&gt;&gt; # works ... 
u"foo" + someDynamicStr u'foobar' &gt;&gt;&gt; u"foo{}".format(someDynamicStr) u'foobar' &gt;&gt;&gt; &gt;&gt;&gt; someDynamicStr = "\xff" # uh-oh &gt;&gt;&gt; &gt;&gt;&gt; # raises UnicodeDecodeError ... u"foo" + someDynamicStr u'foo\xff' &gt;&gt;&gt; u"foo{}".format(someDynamicStr) u'foo\xff' &gt;&gt;&gt; </code></pre>
2
2016-10-01T10:24:48Z
[ "python", "python-2.7", "debugging", "unicode", "monkeypatching" ]
Read in the first column of a CSV in python
39,662,891
<p>I have a CSV (mylist.csv) with 2 columns that look similar to this:</p> <blockquote> <pre><code>jfj840398jgg item-2f hd883hb2kjsd item-9k jie9hgtrbu43 item-12 fjoi439jgnso item-3i </code></pre> </blockquote> <p>I need to read the first column into a variable so I just get:</p> <blockquote> <pre><code>jfj840398jgg hd883hb2kjsd jie9hgtrbu43 fjoi439jgnso </code></pre> </blockquote> <p>I tried the following, but it is only giving me the first letter of each column:</p> <pre><code>import csv list2 = [] with open("mylist.csv") as f: for row in f: list2.append(row[0]) </code></pre> <p>so the results of the above code are giving me list2 as:</p> <blockquote> <p>['j', 'h', 'j', 'f']</p> </blockquote>
1
2016-09-23T14:10:07Z
39,662,946
<p>You should <code>split</code> the row and then append the first item </p> <pre><code>list2 = [] with open("mylist.csv") as f: for row in f: list2.append(row.split()[0]) </code></pre> <p>You could also use a <em>list comprehension</em> which are pretty standard for creating lists:</p> <pre><code>with open("mylist.csv") as f: list2 = [row.split()[0] for row in f] </code></pre>
3
2016-09-23T14:12:41Z
[ "python", "csv" ]
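To see why the original loop returned single letters, compare indexing a raw line with splitting it first (the sample line below is taken from the question's file):

```python
row = "jfj840398jgg item-2f\n"  # one raw line as the file iterator yields it

first_char = row[0]           # indexing a string gives one character: 'j'
first_field = row.split()[0]  # split() breaks on whitespace: 'jfj840398jgg'
```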
Read in the first column of a CSV in python
39,662,891
<p>I have a CSV (mylist.csv) with 2 columns that look similar to this:</p> <blockquote> <pre><code>jfj840398jgg item-2f hd883hb2kjsd item-9k jie9hgtrbu43 item-12 fjoi439jgnso item-3i </code></pre> </blockquote> <p>I need to read the first column into a variable so I just get:</p> <blockquote> <pre><code>jfj840398jgg hd883hb2kjsd jie9hgtrbu43 fjoi439jgnso </code></pre> </blockquote> <p>I tried the following, but it is only giving me the first letter of each column:</p> <pre><code>import csv list2 = [] with open("mylist.csv") as f: for row in f: list2.append(row[0]) </code></pre> <p>so the results of the above code are giving me list2 as:</p> <blockquote> <p>['j', 'h', 'j', 'f']</p> </blockquote>
1
2016-09-23T14:10:07Z
39,662,990
<p>You import <code>csv</code>, but then never use it to actually read the CSV. Then you open <code>mylist.csv</code> as a normal file, so when you declare:</p> <pre><code> for row in f: list2.append(row[0]) </code></pre> <p>What you're actually telling Python to do is "iterate through the lines, and append the first element of the lines (which would be the first letter) to <code>list2</code>". What you need to do, if you want to use the CSV module, is:</p> <pre><code>import csv list2 = [] with open('mylist.csv', 'r') as f: csv_reader = csv.reader(f, delimiter=' ') for row in csv_reader: list2.append(row[0]) </code></pre>
2
2016-09-23T14:14:48Z
[ "python", "csv" ]
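The reader from the answer above can be exercised without a file on disk by feeding it an in-memory buffer (the two sample rows come from the question; `io.StringIO` stands in for the open file):

```python
import csv
import io

# io.StringIO plays the role of open("mylist.csv")
f = io.StringIO("jfj840398jgg item-2f\nhd883hb2kjsd item-9k\n")

list2 = []
csv_reader = csv.reader(f, delimiter=' ')
for row in csv_reader:
    list2.append(row[0])
# list2 == ['jfj840398jgg', 'hd883hb2kjsd']
```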
Read in the first column of a CSV in python
39,662,891
<p>I have a CSV (mylist.csv) with 2 columns that look similar to this:</p> <blockquote> <pre><code>jfj840398jgg item-2f hd883hb2kjsd item-9k jie9hgtrbu43 item-12 fjoi439jgnso item-3i </code></pre> </blockquote> <p>I need to read the first column into a variable so I just get:</p> <blockquote> <pre><code>jfj840398jgg hd883hb2kjsd jie9hgtrbu43 fjoi439jgnso </code></pre> </blockquote> <p>I tried the following, but it is only giving me the first letter of each column:</p> <pre><code>import csv list2 = [] with open("mylist.csv") as f: for row in f: list2.append(row[0]) </code></pre> <p>so the results of the above code are giving me list2 as:</p> <blockquote> <p>['j', 'h', 'j', 'f']</p> </blockquote>
1
2016-09-23T14:10:07Z
39,842,693
<p>You can also use <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code></a> here:</p> <pre><code>import pandas as pd df = pd.read_csv("mylist.csv", sep=" ", header=None) </code></pre> <p>(The file is space-separated and has no header row, hence <code>sep=" "</code> and <code>header=None</code>.) Then, getting the first column is as easy as:</p> <pre><code>matrix2 = df[df.columns[0]].as_matrix() list2 = matrix2.tolist() </code></pre> <p>This will return only the first column in a <code>list</code>. You might want to consider leaving the data in <code>numpy</code>, if you're conducting further data operations on the result you get.</p>
0
2016-10-04T00:59:43Z
[ "python", "csv" ]
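A self-contained version of the pandas route, assuming the space-separated, headerless layout shown in the question (`io.StringIO` stands in for the file on disk, and `.tolist()` is used directly since `as_matrix` is deprecated in later pandas versions):

```python
import io

import pandas as pd

data = io.StringIO("jfj840398jgg item-2f\nhd883hb2kjsd item-9k\n")

# header=None because the file has no header row; sep=" " matches the sample
df = pd.read_csv(data, sep=" ", header=None)
list2 = df[0].tolist()  # column 0 is the first column
# list2 == ['jfj840398jgg', 'hd883hb2kjsd']
```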
flask + mysql beginner issue
39,662,909
<p>This is my first ever SO post, so goes without saying I am only just starting with web programming. Did a lot of Pascal programming in high school, but that was long ago. Please bear with me.</p> <p>I am trying to create a simple web app and the first thing I am building is the register feature. Using flask and mysql as my database.</p> <p>After googling I couldn't find a solution to my problem, so had to ask here, apologies if it was asked before.</p> <h2>Here is my code:</h2> <pre><code>from flask import Flask, request from flaskext.mysql import MySQL app = Flask(__name__) mysql = MySQL() app.config['MYSQL_DATABASE_USER'] = 'root' app.config['MYSQL_DATABASE_PASSWORD'] = 'mmoosakam93#' app.config['MYSQL_DATABASE_DB'] = 'profApp' app.config['MYSQL_DATABASE_HOST'] = 'localhost' mysql.init_app(app) @app.route('/') def hello(): return "Welcome to my world!" @app.route('/Authenticate') def authenticate(): username = request.args.get('userName') password = request.args.get('password') email = request.args.get('email') cursor.execute("Select * from user where userName='" + "'userName + and Password='" + password + "'") data = cursor.fetchone() if data is None: return "Usernamee or Password is wrong" else: return "Logged in successfully" if __name__ == "__main__": app.run(host="0.0.0.0", port=5000) </code></pre> <hr> <p>When i run it from my terminal, after manually inputting parameters into my browser (<a href="http://192.168.33.33:5000/Authenticate?userName=jay&amp;password=jay&amp;email=aoisf" rel="nofollow">http://192.168.33.33:5000/Authenticate?userName=jay&amp;password=jay&amp;email=aoisf</a>) I get the following error:</p> <p>" File "app.py", line 25, in authenticate cursor.execute("Select * from user where userName='" + "'userName + and Password='" + password + "'") NameError: global name 'cursor' is not defined "</p> <p>I understand that 'cursor' is a variable which I haven't defined properly, but I don't know how to do this.</p> <p>Thank you in advance!</p>
-1
2016-09-23T14:10:51Z
39,662,963
<p>You need to define the <code>cursor</code> variable in the <code>authenticate()</code> controller like this</p> <pre><code>cursor = mysql.connection.cursor() # for Flask-MySQLdb package (not relevant here) </code></pre> <p>Also you need to define it before its first use.</p> <p><strong>UPDATE</strong> since you are using <a href="http://flask-mysql.readthedocs.io/" rel="nofollow">Flask-MySQL</a> to fetch the cursor you need to run </p> <pre><code>cursor = mysql.get_db().cursor() </code></pre> <p>By defining <code>cursor</code> before its first use I meant that you need to place the definition line before this line</p> <pre><code>cursor.execute("Select * from user where userName='" + "'userName + and Password='" + password + "'") </code></pre>
1
2016-09-23T14:13:47Z
[ "python", "mysql", "flask" ]
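As a side note, the concatenated query in the question is also an SQL injection risk; DB-API drivers accept placeholders instead. A runnable sketch using the stdlib `sqlite3` as a stand-in (Flask-MySQL cursors use `%s` placeholders rather than `?`, but the idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (userName TEXT, Password TEXT)")
conn.execute("INSERT INTO user VALUES (?, ?)", ("jay", "jay"))

# The driver quotes the values itself -- no string concatenation needed
cursor = conn.execute(
    "SELECT * FROM user WHERE userName = ? AND Password = ?", ("jay", "jay"))
data = cursor.fetchone()
# a matching row is returned; a wrong password would make data None
```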
Python pandas: Select columns where a specific row satisfies a condition
39,662,941
<p>I have a dataframe dfall where there is a row labeled 'row1' with values 'foo' and 'bar'. I want to select only columns of dfall where 'row1' has the value 'foo'.</p> <p>In other words:</p> <pre><code>dfall= pd.DataFrame([['bar','foo'],['bla','bli']], columns=['col1','col2'], index=['row1','row2']) </code></pre> <p>I want as result the column 'col2'containing: <code>['foo','bli']</code></p> <p>I tried:</p> <pre><code>dfall[dfall.loc['row1'].isin(['foo'])] </code></pre> <p>I get the error </p> <pre><code>IndexingError: Unalignable boolean Series key provided </code></pre> <p>Can anybody help me with the command? Thanks in advance!</p>
1
2016-09-23T14:12:37Z
39,663,001
<p>You can compare your df against the scalar value, and then use <code>any</code> with <code>axis=0</code> and pass this boolean mask to <code>ix</code>:</p> <pre><code>In [324]: df.ix[:,(df == 'foo').any(axis=0)] Out[324]: col2 row1 foo row2 bli </code></pre> <p>breaking the above down:</p> <pre><code>In [325]: df == 'foo' Out[325]: col1 col2 row1 False True row2 False False In [326]: (df == 'foo').any(axis=0) Out[326]: col1 False col2 True dtype: bool </code></pre>
0
2016-09-23T14:15:09Z
[ "python", "pandas", "select" ]
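The row-specific variant of the idea above: build the boolean mask from `'row1'` only, then slice the columns with it (using the question's frame; `.loc` is the modern spelling of the answer's `.ix`):

```python
import pandas as pd

dfall = pd.DataFrame([['bar', 'foo'], ['bla', 'bli']],
                     columns=['col1', 'col2'], index=['row1', 'row2'])

mask = dfall.loc['row1'] == 'foo'   # boolean Series indexed by column name
result = dfall.loc[:, mask]         # keep only columns where row1 == 'foo'
# result has the single column 'col2' with values ['foo', 'bli']
```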
Python pandas: Select columns where a specific row satisfies a condition
39,662,941
<p>I have a dataframe dfall where there is a row labeled 'row1' with values 'foo' and 'bar'. I want to select only columns of dfall where 'row1' has the value 'foo'.</p> <p>In other words:</p> <pre><code>dfall= pd.DataFrame([['bar','foo'],['bla','bli']], columns=['col1','col2'], index=['row1','row2']) </code></pre> <p>I want as result the column 'col2'containing: <code>['foo','bli']</code></p> <p>I tried:</p> <pre><code>dfall[dfall.loc['row1'].isin(['foo'])] </code></pre> <p>I get the error </p> <pre><code>IndexingError: Unalignable boolean Series key provided </code></pre> <p>Can anybody help me with the command? Thanks in advance!</p>
1
2016-09-23T14:12:37Z
39,663,380
<p>Using EdChum's answer, to make it row-specific I did: <code>df.ix[:, (df.loc['row1'] == 'foo')]</code></p>
0
2016-09-23T14:33:08Z
[ "python", "pandas", "select" ]
Django 1.9 Can't leave formfield blank issue
39,662,975
<p>I modified my forms to show a new label for the fields in my model. Now I can't have a user leave the form field blank. I've tried to add blank=True and null=True, but without luck. How can I allow fields to be blank?</p> <p>My Model:</p> <pre><code>class UserProfile(models.Model): # Page 1 user = models.OneToOneField(User) first_name = models.CharField(max_length=100, blank=True) last_name = models.CharField(max_length=100, blank=True) email = models.EmailField(max_length=100, blank=True, unique=True) # Page 2 Address line1 = models.CharField(max_length=255, blank=True, null=True) line2 = models.CharField(max_length=255, blank=True, null=True) line3 = models.CharField(max_length=255, blank=True, null=True) city = models.CharField(max_length=255, blank=True, null=True) state = models.CharField(max_length=2, blank=True, null=True) zip_code = models.CharField(max_length=15, blank=True, null=True) def __str__(self): return self.user.username </code></pre> <p>forms.py:</p> <pre><code>class UserProfileForm3(forms.ModelForm): line1 = forms.CharField(label="Address") line2 = forms.CharField(label="") line3 = forms.CharField(label="") city = forms.CharField(label="City / Town") state = forms.CharField(label="State / Province / Region") zip_code = forms.IntegerField(label="Zip / Postal Code") def __init__(self, *args, **kwargs): super(UserProfileForm3, self).__init__(*args, **kwargs) self.fields['line1'].widget.attrs = { 'class': 'form-control', 'placeholder': 'Address Line 1' } self.fields['line2'].widget.attrs = { 'class': 'form-control', 'placeholder': 'Address Line 2' } self.fields['line3'].widget.attrs= { 'class': 'form-control', 'placeholder': 'Address Line 3' } self.fields['city'].widget.attrs = { 'class': 'form-control', 'placeholder': 'City' } self.fields['state'].widget.attrs = { 'id': 'state_id', 'class': 'form-control', 'placeholder': 'State' } self.fields['zip_code'].widget.attrs = { 'id': 'zip_code_id', 'class': 'form-control', 'placeholder': 'Zip' } 
self.fields['country'].widget.attrs = { 'class': 'form-control', 'placeholder': 'US', 'value': 'US' } class Meta: model = UserProfile fields = ('line1', 'line2', 'line3', 'city', 'state', 'zip_code','country') </code></pre>
0
2016-09-23T14:14:11Z
39,663,307
<p>You overrode the fields, so they won't preserve any of the attributes; you need to set them explicitly.</p> <pre><code>line1 = forms.CharField(label="Address", required=False) </code></pre> <p>Note also that your model field definitions shouldn't have <code>null=True</code> for CharFields; this is bad practice.</p>
2
2016-09-23T14:29:28Z
[ "python", "django", "django-models", "django-forms", "django-1.9" ]
Django 1.9 Can't leave formfield blank issue
39,662,975
<p>I modified my forms to show a new label for the fields in my model. Now I can't have a user leave the form field blank. I've tried to add blank=True and null=True, but without luck. How can I allow fields to be blank?</p> <p>My Model:</p> <pre><code>class UserProfile(models.Model): # Page 1 user = models.OneToOneField(User) first_name = models.CharField(max_length=100, blank=True) last_name = models.CharField(max_length=100, blank=True) email = models.EmailField(max_length=100, blank=True, unique=True) # Page 2 Address line1 = models.CharField(max_length=255, blank=True, null=True) line2 = models.CharField(max_length=255, blank=True, null=True) line3 = models.CharField(max_length=255, blank=True, null=True) city = models.CharField(max_length=255, blank=True, null=True) state = models.CharField(max_length=2, blank=True, null=True) zip_code = models.CharField(max_length=15, blank=True, null=True) def __str__(self): return self.user.username </code></pre> <p>forms.py:</p> <pre><code>class UserProfileForm3(forms.ModelForm): line1 = forms.CharField(label="Address") line2 = forms.CharField(label="") line3 = forms.CharField(label="") city = forms.CharField(label="City / Town") state = forms.CharField(label="State / Province / Region") zip_code = forms.IntegerField(label="Zip / Postal Code") def __init__(self, *args, **kwargs): super(UserProfileForm3, self).__init__(*args, **kwargs) self.fields['line1'].widget.attrs = { 'class': 'form-control', 'placeholder': 'Address Line 1' } self.fields['line2'].widget.attrs = { 'class': 'form-control', 'placeholder': 'Address Line 2' } self.fields['line3'].widget.attrs= { 'class': 'form-control', 'placeholder': 'Address Line 3' } self.fields['city'].widget.attrs = { 'class': 'form-control', 'placeholder': 'City' } self.fields['state'].widget.attrs = { 'id': 'state_id', 'class': 'form-control', 'placeholder': 'State' } self.fields['zip_code'].widget.attrs = { 'id': 'zip_code_id', 'class': 'form-control', 'placeholder': 'Zip' } 
self.fields['country'].widget.attrs = { 'class': 'form-control', 'placeholder': 'US', 'value': 'US' } class Meta: model = UserProfile fields = ('line1', 'line2', 'line3', 'city', 'state', 'zip_code','country') </code></pre>
0
2016-09-23T14:14:11Z
39,663,355
<p>Try to add this in forms.py:</p> <pre><code>self.fields['line1'].required = False </code></pre> <p>You will have to do this for every field.</p>
1
2016-09-23T14:31:42Z
[ "python", "django", "django-models", "django-forms", "django-1.9" ]
Find next lower value in list of (float) numbers?
39,662,977
<p>How should I write the <code>find_nearest_lower</code> function?</p> <pre><code>&gt;&gt;&gt; values = [10.1, 10.11, 10.20] &gt;&gt;&gt; my_value = 10.12 &gt;&gt;&gt; nearest_lower = find_nearest_lower(values, my_value) &gt;&gt;&gt; nearest_lower 10.11 </code></pre> <p>This needs to work in Python 2.6 without access to numpy.</p>
-2
2016-09-23T14:14:14Z
39,663,039
<pre><code>&gt;&gt;&gt; def find_nearest_lower(seq, x): ... return max(item for item in seq if item &lt; x) ... &gt;&gt;&gt; values = [10.1, 10.11, 10.20] &gt;&gt;&gt; my_value = 10.12 &gt;&gt;&gt; nearest_lower = find_nearest_lower(values, my_value) &gt;&gt;&gt; nearest_lower 10.11 </code></pre> <p>This approach will raise an exception if there aren't any values in <code>seq</code> that are smaller than <code>x</code>. If this is undesirable behavior, you could instead return a sentinel value, such as None:</p> <pre><code>def find_nearest_lower(seq, x): candidates = [item for item in seq if item &lt; x] if not candidates: return None return max(candidates) </code></pre> <p>Or</p> <pre><code>def find_nearest_lower(seq, x): try: return max(item for item in seq if item &lt; x) except ValueError: return None </code></pre> <p>... If you're more of an "ask forgiveness" person than a "look before you leap" person.</p>
4
2016-09-23T14:16:57Z
[ "python", "python-2.6" ]
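A quick check of the sentinel-returning variant against the question's data, including the edge case where nothing in the sequence is smaller:

```python
def find_nearest_lower(seq, x):
    candidates = [item for item in seq if item < x]
    if not candidates:
        return None  # sentinel instead of raising on an empty candidate list
    return max(candidates)

values = [10.1, 10.11, 10.20]
nearest = find_nearest_lower(values, 10.12)  # 10.11
nothing = find_nearest_lower(values, 10.1)   # None -- no value is lower
```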