Dataset schema — one row per (question, answer) pair; for string columns the min/max are lengths:

column            dtype     min    max
title             string    10     172
question_id       int64     469    40.1M
question_body     string    22     48.2k
question_score    int64     -44    5.52k
question_date     string    20     20
answer_id         int64     497    40.1M
answer_body       string    18     33.9k
answer_score      int64     -38    8.38k
answer_date       string    20     20
tags              list      -      -
How to send the test cases results in tabular format in html using Python
39,586,613
<p>I am sending a daily result mail of passed and failed tests in text format.</p> <p>I want it to be sent in tabular HTML format using Python.</p> <pre><code>============Text I am sending daily===========================
423 EIR DIAMETER IMEI Software Version handling 5 5 0 100.0
424 EIR DIAMETER eirDualImsiUpdateTimestamp 2 2 0 100.0
EIR-Provisioning 47 41 6 87.23
-------------------------------------------------------------------------------------------------------------------------------------------------
Total Summary 839 828 11 98.68
---------------------------------------------------------------------------------------------------------------------------------------------
</code></pre>
0
2016-09-20T05:51:30Z
39,587,102
<p>Based on my understanding of the question, check this out:</p> <p><a href="http://stackoverflow.com/questions/8356501/python-format-tabular-output">Python format tabular output</a></p> <p>If you are looking for a module, one that pretty-prints tabular data might come in handy.</p> <p>If you want an HTML table, try generating the HTML directly in Python.</p> <p>If you are looking for something other than this, please do mention it.</p>
0
2016-09-20T06:26:48Z
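<p>For reference, a minimal sketch of the module route hinted at above, using the third-party <code>tabulate</code> package (installed separately with pip; the row data below is a placeholder taken from the question):</p> <pre><code>from tabulate import tabulate  # pip install tabulate

rows = [[423, 'EIR DIAMETER IMEI Software Version handling', 5, 5, 0, 100.0],
        [424, 'EIR DIAMETER eirDualImsiUpdateTimestamp', 2, 2, 0, 100.0]]
headers = ['ID', 'Test case', 'Total', 'Passed', 'Failed', '%']

# tablefmt='html' emits a ready-to-mail &lt;table&gt; element
html_table = tabulate(rows, headers=headers, tablefmt='html')
print(html_table)
</code></pre>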
[ "python", "python-2.7", "python-3.x" ]
How to send the test cases results in tabular format in html using Python
39,586,613
<p>I am sending daily result mail of passed and fail in Text format.</p> <p>I want it to be send in Tabular format in HTML format using Python.</p> <pre><code>============Text I am sending daily=========================== 423 EIR DIAMETER IMEI Software Version handling 5 5 0 100.0 424 EIR DIAMETER eirDualImsiUpdateTimestamp 2 2 0 100.0 EIR-Provisioning 47 41 6 87.23 ------------------------------------------------------------------------------------------------------------------------------------------------- Total Summary 839 828 11 98.68 --------------------------------------------------------------------------------------------------------------------------------------------- </code></pre>
0
2016-09-20T05:51:30Z
39,587,763
<p>I'd recommend using <a href="http://jinja.pocoo.org/docs/dev/api/#basics" rel="nofollow">jinja2</a>. It's simple enough that little explanation is needed. Say you have a <code>tests</code> folder with an empty <code>__init__.py</code> file inside (your folder is now a Python package); you'll need to create a folder called <code>templates</code> inside it and place your HTML file there.</p> <p>Your HTML file (say <code>report.html</code>) content would be something like this:</p> <pre><code>&lt;table&gt;
  &lt;tbody&gt;
  {% for test in test_results %}
    &lt;tr&gt;
      &lt;td&gt;{{test.name}}&lt;/td&gt;
      &lt;td&gt;{{test.version}}&lt;/td&gt;
    &lt;/tr&gt;
  {% endfor %}
  &lt;/tbody&gt;
&lt;/table&gt;
</code></pre> <p>Now in your Python code you'd do something like this when your tests are run:</p> <pre><code>from jinja2 import Environment, PackageLoader

env = Environment(loader=PackageLoader('tests', 'templates'))
html_report = env.get_template('report.html')
send_by_email(html_report.render(test_results=list_of_test_results))
</code></pre> <p>Of course the <code>send_by_email</code> function does not exist and is yours to write. Note that the variable name passed to <code>render()</code> must match the one the template iterates over (<code>test_results</code> here). Also, your HTML file is real HTML, so you can do anything you'd usually do, inline styles and all. <code>jinja2</code> just makes it possible to customize the content on the fly with the help of template tags and to render an HTML string for you to use as you wish.</p>
0
2016-09-20T07:07:20Z
[ "python", "python-2.7", "python-3.x" ]
Not able to clear file Contents
39,586,648
<p>I have a file which has the data below.</p> <pre><code>edit 48
set dst 192.168.4.0 255.255.255.0
set device "Tague-VPN"
set comment "Yeshtel"
edit 180
set dst 64.219.107.45 255.255.255.255
set device "Austin-Backup"
set comment "images.gsmc.org"
</code></pre> <p>I want to copy the commands under <code>edit</code> only if the set device is Austin-Backup.</p> <pre><code>string = 'set device'
word = '"Austin-Backup"'
with open('test.txt') as oldfile, open('script.txt', 'w') as newfile:
    for line in oldfile:
        newfile.write(line)
        newfile.write('\n')
        if string not in line:
            pass
        elif string in line:
            if word not in line:
                a = open('script.txt', 'w')
                a.close()
            else:
                pass
</code></pre> <p>I am trying to write the test file's content to a new file (script), and if the command <code>set comment "Yeshtel"</code> is found I want to delete the contents of the new file. I tried to delete it, but it's not happening. I am new to Python; can you please tell me what the problem is?</p> <p>I got to know that reopening the same file in write mode will clear the contents.</p>
-1
2016-09-20T05:54:17Z
39,586,761
<p>I suspect the issue is that you have the same file open twice, once as <code>newfile</code> and a second time as <code>a</code>. While it should be truncated when you open it as <code>a</code> and then close it, the writes you made on <code>newfile</code> may still appear if the filesystem cached them until after the truncated version was written.</p> <p>I suggest only opening the file once. When you need to clear it, seek back to the beginning and then call the <code>truncate</code> method. Note that <code>truncate()</code> with no argument cuts the file at the <em>current</em> position, which right after a write is the end of the file, so seeking first is what actually empties it.</p> <pre><code>if word not in line:
    newfile.seek(0)
    newfile.truncate()
</code></pre> <p>Any further writes will then start from the beginning of the now-empty file.</p>
1
2016-09-20T06:02:56Z
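<p>A quick, self-contained way to convince yourself of the seek/truncate behaviour described above:</p> <pre><code>with open('demo.txt', 'w') as f:
    f.write('hello world\n')
    f.truncate()   # position is at the end of the file: nothing is removed
    f.seek(0)
    f.truncate()   # now the file really is emptied
</code></pre>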
[ "python", "python-2.7" ]
Not able to clear file Contents
39,586,648
<p>I have a file which has below data.</p> <pre><code>edit 48 set dst 192.168.4.0 255.255.255.0 set device "Tague-VPN" set comment "Yeshtel" edit 180 set dst 64.219.107.45 255.255.255.255 set device "Austin-Backup" set comment "images.gsmc.org" </code></pre> <p>I want to copy the commands under edit only if Set device is Austin-Backup.</p> <pre><code>string = 'set device' word = '"Austin-Backup"' with open('test.txt') as oldfile, open('script.txt', 'w') as newfile: for line in oldfile: newfile.write(line) newfile.write('\n') if string not in line: pass elif string in line: if word not in line: a = open('script.txt', 'w') a.close() else: pass </code></pre> <p>I am trying to write test file content to new file(script) and if command "set comment "Yeshtel"" is found i want to delete contents in new file. I tried to delete but its not happening. I am new to Python, Can you please tell what is the Prob??</p> <p>I got to know that reopening the same file in Write mode will clear the contents..</p>
-1
2016-09-20T05:54:17Z
39,586,920
<p>It should be something like this:</p> <pre><code>block_lines = []
found_keyword = False
keyword = "Austin-Backup"

with open('test.txt') as oldfile, open('script.txt', 'w') as newfile:
    for line in oldfile:
        if line.startswith("edit"):
            # a new block starts: flush the previous one if it matched
            if found_keyword:
                newfile.writelines(block_lines)
            block_lines = []
            found_keyword = False
        elif keyword in line:
            found_keyword = True
        block_lines.append(line)
    # don't forget the final block once the loop ends
    if found_keyword:
        newfile.writelines(block_lines)
</code></pre> <p>Please note that you should not open the output file twice. Just buffer each <code>edit</code> block in a temporary list and write it out only once the block turns out to contain the keyword.</p>
0
2016-09-20T06:14:53Z
[ "python", "python-2.7" ]
Vectorized Evaluation of a Function, Broadcasting and Element Wise Operations
39,586,803
<p>Given this...</p> <p><a href="http://i.stack.imgur.com/FBePB.png" rel="nofollow"><img src="http://i.stack.imgur.com/FBePB.png" alt="enter image description here"></a></p> <p>I have to explain what this code does, knowing that it performs the vectorized evaluation of F, using broadcasting and element wise operations concepts...</p> <pre><code>def F(x_pos, alpha):
    D = x_pos.reshape(1,-1) - x_pos.reshape(-1,1)
    return (1./alpha) * (alpha.reshape(1,-1) * R(D)).sum(axis=1)
</code></pre> <p><strong>My explanation is:</strong></p> <p>In the first line, the function F receives x_pos and alpha as parameters (both numpy arrays). In the second line, the matrix D is calculated by means of broadcasting (basic operations such as addition on numpy arrays are performed elementwise, i.e. element by element, but this is also possible with arrays of different sizes if numpy can transform them into arrays of the same size; this conversion is called broadcasting): subtracting an array of shape 1xN from another of shape Nx1 results in the NxN matrix D containing x_j - x_1, x_j - x_2, etc. as elements. Finally, in the last line, the reciprocal of alpha is calculated (which is itself an array), and each of its elements is multiplied by the sum, taken horizontally (due to axis=1 in the argument), of the evaluation of R on each cell of the matrix D multiplied by alpha_j.</p> <p><strong>Questions:</strong></p> <ol> <li>Considering I'm new to Python, is my explanation OK?</li> <li>Does the code have an error or not? I don't see that the "j must be different from 1, 2, ..., n" condition in each sum is taken into account in the code... and if it is in fact wrong, how can I fix the code so it does exactly what is stated in the image?</li> </ol>
1
2016-09-20T06:06:22Z
39,587,345
<p>A few comments/improvements/fixes could be suggested here.</p> <p>1] The first step could alternatively be done by just introducing a new axis and subtracting with itself, like so -</p> <pre><code>D = x_pos[:,None] - x_pos
</code></pre> <p>In my opinion, this is a cleaner option. The performance benefit might be just marginal.</p> <p>2] In the second line, I think it needs a fix as we need to avoid computations for the diagonal elements of <code>R(D)</code>. So, if I got that correctly, the corrected code would be -</p> <pre><code>vals = R(D)
np.fill_diagonal(vals,0)
out = (1./alpha) * (alpha.reshape(1,-1) * vals).sum(axis=1)
</code></pre> <p>Now, let's make the code a bit more idiomatic/cleaner.</p> <p>At that line, we could write : <code>(alpha * vals)</code> instead of <code>alpha.reshape(1,-1) * vals</code>. This is because the shapes are already aligned for <code>broadcasting</code> as shown in a schematic diagram below -</p> <pre><code>alpha :      n
vals  :  n x n
</code></pre> <p>Thus, <code>alpha</code> would be automatically extended to <code>2D</code> with its elements broadcasted along the first axis for the length of <code>vals</code> and then elementwise multiplications being generated with it. Again, this is meant as cleaner code.</p> <p>There's a further performance improvement possible here, with <code>(alpha.reshape(1,-1) * vals).sum(axis=1)</code> being replaceable by a <code>matrix-multiplication</code> using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow"><code>np.dot</code></a> as <code>vals.dot(alpha)</code> (the row-wise sum over j of <code>alpha_j * vals[i,j]</code> is exactly <code>vals</code> matrix-multiplied by <code>alpha</code>). The benefit on performance should be noticeable with this step.</p> <p>So, the second step reduces to -</p> <pre><code>out = (1./alpha) * vals.dot(alpha)
</code></pre>
0
2016-09-20T06:42:30Z
[ "python", "numpy", "vectorization", "elementwise-operations" ]
Replace values in a dataframes using other dataframe with strings as keys with Pandas
39,586,809
<p>I have been trying this for a while and I am stuck. Here is the problem:</p> <p>I am working with some metadata about texts that I have in CSV files. It looks like this:</p> <p><a href="http://i.stack.imgur.com/OIvRL.png" rel="nofollow"><img src="http://i.stack.imgur.com/OIvRL.png" alt="enter image description here"></a></p> <p>The real table is longer and more complex, but it follows the same logic: every row is a text and every column is a different aspect of the text. I have too much variation in some of the columns and I want to remodel it into something simpler. For example, in narrative-perspective, changing the values homodiegetic and autodiegetic to non-heterodiegetic. I define this new model in another CSV file called keywords that looks like this:</p> <p><a href="http://i.stack.imgur.com/hchVz.png" rel="nofollow"><img src="http://i.stack.imgur.com/hchVz.png" alt="enter image description here"></a></p> <p>As you can see, every column of the metadata becomes a row in the new model-keywords, where the old value is in the term_value column and the new value is in the new_model column.</p> <p>So I need to map or replace these values using Pandas. This is what I have for now:</p> <pre><code>import re
import pandas as pd

df_metadata = pd.read_csv("/metadata.csv", encoding="utf-8", sep=",")
df_keywords = pd.read_csv("/keywords.csv", encoding="utf-8", sep="\t")

for column_metadata,value_metadata in df_metadata.iteritems():
    if str(column_metadata) in list(df_keywords.loc[:,"term_type"]):
        df_metadata.loc[df_metadata[column_metadata] == value_metadata, column_metadata] = df_keywords.loc[df_keywords["term_value"] == value_metadata, ["new_model"]]
</code></pre> <p>And Python always gives this error back:</p> <blockquote> <p>"ValueError: Series lengths must match to compare"</p> </blockquote> <p>I think the problem is in the value_metadata of the second part of the replace with loc, I mean here:</p> <pre><code>df_keywords.loc[df_keywords["term_value"] == value_metadata, ["new_model"]]
</code></pre> <p>The thing I don't understand is why value_metadata works in the first part of this command but doesn't in the second one...</p> <p>Please, I would appreciate any help. Maybe there is a simpler way than iterating through the dataframes... I am very open to any suggestion. Best regards, José</p>
1
2016-09-20T06:06:41Z
39,587,275
<p>You can first create a <code>MultiIndex</code> in <code>df_keywords</code> for faster selecting by <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow">slicers</a> and then, in a loop, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> each new value by the old one:</p> <pre><code>df_keywords.set_index(['term_type','term_value'], inplace=True)
idx = pd.IndexSlice

#first mapping in column narrative-perspective
print (df_keywords.loc[idx['narrative-perspective',:]].to_dict()['new_model'])
{'heterodiegetic': 'heterodiegetic', 'other/mixed': 'n-heterodiegetic',
 'homodiegetic': 'n-heterodiegetic', 'autodiegetic': 'n-heterodiegetic'}

#column names for replacing
L = ['narrative-perspective','narrator','protagonist-gender']
for col in L:
    df_metadata[col] = df_metadata[col].map(df_keywords.loc[idx[col,:]].to_dict()['new_model'])

print (df_metadata)
     idno author-name narrative-perspective        narrator protagonist-gender
0  ne0001      Baroja      n-heterodiegetic    third-person               male
1  ne0002      Galdos        heterodiegetic    third-person             n-male
2  ne0003      Galdos      n-heterodiegetic    third-person               male
3  ne0004      Galdos      n-heterodiegetic    third-person             n-male
4  ne0005      Galdos        heterodiegetic    third-person             n-male
5  ne0006      Galdos        heterodiegetic    third-person               male
6  ne0007        Sawa        heterodiegetic    third-person             n-male
7  ne0008    Zamacois        heterodiegetic    third-person             n-male
8  ne0009      Galdos        heterodiegetic    third-person             n-male
9  ne0011      Galdos      n-heterodiegetic  n-third-person               male
</code></pre> <p>Also, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html" rel="nofollow"><code>to_dict</code></a> can be omitted, mapping by <code>Series</code> instead:</p> <pre><code>df_keywords.set_index(['term_type','term_value'], inplace=True)
idx = pd.IndexSlice

#first mapping in column narrative-perspective
print (df_keywords.loc[idx['narrative-perspective',:]]['new_model'])
term_value
autodiegetic      n-heterodiegetic
heterodiegetic      heterodiegetic
homodiegetic      n-heterodiegetic
other/mixed       n-heterodiegetic
Name: new_model, dtype: object

L = ['narrative-perspective','narrator','protagonist-gender']
for col in L:
    df_metadata[col] = df_metadata[col].map(df_keywords.loc[idx[col,:]]['new_model'])

print (df_metadata)
     idno author-name narrative-perspective        narrator protagonist-gender
0  ne0001      Baroja      n-heterodiegetic    third-person               male
1  ne0002      Galdos        heterodiegetic    third-person             n-male
2  ne0003      Galdos      n-heterodiegetic    third-person               male
3  ne0004      Galdos      n-heterodiegetic    third-person             n-male
4  ne0005      Galdos        heterodiegetic    third-person             n-male
5  ne0006      Galdos        heterodiegetic    third-person               male
6  ne0007        Sawa        heterodiegetic    third-person             n-male
7  ne0008    Zamacois        heterodiegetic    third-person             n-male
8  ne0009      Galdos        heterodiegetic    third-person             n-male
9  ne0011      Galdos      n-heterodiegetic  n-third-person               male
</code></pre>
1
2016-09-20T06:38:50Z
[ "python", "csv", "pandas", "dictionary", "replace" ]
Use bootstrap modal in django class based views, (create, update, search, etc)
39,586,816
<p>Bootstrap modal implementation on a standalone page is quite easy, and can be triggered using a button or <code>&lt;a&gt;</code> tag, but let's assume that I have the below classes</p> <ol> <li><code>class CustomerMixin(object)</code></li> <li><code>class CustomerFormMixin(CustomerMixin, CreateView)</code></li> <li><code>CustomerListView(CustomerMixin, ListView)</code> </li> <li><code>CustomerCreateView(CustomerMixin, CreateView)</code></li> <li><code>CustomerUpdateView(CustomerFormMixin, UpdateView)</code></li> </ol> <p>URL:</p> <pre><code>urlpatterns = [
    url(r'^$', CustomerListView.as_view(), name='index'),
    url(r'^create$', CustomerCreateView.as_view(), name='create'),
    url(r'^detail/(?P&lt;pk&gt;[0-9]+)$', CustomerDetailView.as_view(), name='detail'),
    url(r'^delete/(?P&lt;pk&gt;[0-9]+)$', CustomerDeleteView.as_view(), name='delete'),
    url(r'edit/(?P&lt;pk&gt;[0-9]+)$', CustomerUpdateView.as_view(), name='edit'),
]
</code></pre> <p>On the index page I have 3 links, each referring to its own class:</p> <pre><code>&lt;a href="{% url 'myapp:create' %}"&gt;Create&lt;/a&gt;
</code></pre> <p>and in the list on the index page, for each row in the table I have 2 links, Edit and Delete, for that row.</p> <p>So now I want to have a bootstrap modal for both Create and Update: once the user clicks on the Create <code>&lt;a&gt;</code> tag on the index page, I want to trigger the modal and at the same time trigger the Create class.</p> <p>I have checked the below blogs but no luck:</p> <ul> <li><a href="https://libraries.io/pypi/django-fm" rel="nofollow">https://libraries.io/pypi/django-fm</a> (This one looks good, but I couldn't implement it)</li> <li><a href="https://dmorgan.info/posts/django-views-bootstrap-modals/" rel="nofollow">https://dmorgan.info/posts/django-views-bootstrap-modals/</a></li> </ul>
0
2016-09-20T06:07:31Z
39,605,370
<p>So, to clarify: </p> <p>You have a page to manage a list of items (create, edit and remove). You want a modal to be able to show the forms for creating and editing these objects. </p> <p>You already have the pages set up for these two actions, which you have set up as class-based views (CustomerCreateView, CustomerUpdateView).</p> <p>What you need to do is trigger an AJAX call to these views via a 'GET' request when the modal is opened. Then, return the template of these views as HTML and populate your modal with this content.</p> <p>E.g. for the CreateView (note the template tag must be quoted so the rendered URL is a valid JavaScript string):</p> <pre><code>$.ajax({
    url: "{% url 'create' %}",
    dataType: "html",
    method: "GET",
    success: function(data){
        $("#createModal").html(data);
    }
});
</code></pre> <p>Similarly, you can do this for the UpdateView, but of course you must pass the id of the object as a parameter in the Ajax call.</p> <p>Hope this helps!</p>
1
2016-09-20T23:42:43Z
[ "python", "django", "bootstrap-modal" ]
Python: How to get some characters in the menu bar underlined?
39,586,861
<p>I built a tkinter menu with Python and I would like to underline some characters in it. I used the "underline" option on some lines, but strangely the characters are not appearing underlined. What should I do so that "underline" finally works and underlines the character at the given index? Have I forgotten something?</p> <pre><code>from tkinter import Tk, Frame, Menu

class Window():
    def __init__(self):
        self.__window = Tk()
        self.__set_window()
        self.__set_menu()

    def __set_window(self):
        self.__window.geometry("700x500")
        self.__window.minsize(500, 200)
        self.__window.title("Some Text")
        self.__window.iconbitmap("MyIcon")

    def start_window(self):
        self.__window.mainloop()

    def __set_menu(self):
        self.__menubar = Menu(self.__window)
        self.__file= Menu(self.__menubar, tearoff=0)
        self.__file.add_command(label = "Exit", underline=1, accelerator="Strg + C")
        self.__menubar.add_cascade(label="File", underline=0, menu=self.__datei)
        self.__menubar.add_cascade(label="Edit", underline=1)
        self.__menubar.add_cascade(label="Help", underline=0)
        self.__window["menu"] = self.__menubar
</code></pre>
0
2016-09-20T06:10:57Z
39,587,096
<p>Underlines in tkinter menus are indeed controlled by the <code>underline</code> option. In your example, the 'F' in File and the 'x' in Exit are both underlined. After fixing the typo below (and adding code to instantiate the class and call <code>start_window</code>), the underlines showed up correctly for me.</p> <p>from:</p> <pre><code>self.__menubar.add_cascade(label="File", underline=0, menu=self.__datei)
</code></pre> <p>to:</p> <pre><code>self.__menubar.add_cascade(label="File", underline=0, menu=self.__file)
</code></pre> <p>Do you still have the error when you run the code snippet as opposed to your complete file?</p>
0
2016-09-20T06:26:32Z
[ "python", "windows", "tkinter", "menu", "underline" ]
Replace specific text with a block of text
39,586,903
<p>I am trying to replace a small section of text with a block of text. I am hoping that this block of text will be replaced using Python's replace function. I would really have liked to use <code>sed</code> for this job, but I am on a Windows machine. Anyway, here is my script; I'm not sure why the block of text isn't being replaced. </p> <pre><code># Change the current working directory to IDX location
import os
print os.getcwd()
os.chdir(##location of my text files##)
print os.getcwd()

#Read file names
com_line=raw_input("File name please:")
contents=[]
strs = '#DREFIELD IDOL_SOURCE="idol source"\n#DREFIELD IDOL_TAXONOMY="taxonomy data"\n #DREFIELD IDOL_CATEGORY="IDOL_CATEGORY"\n #DREFIELD IDOL_PROMOTION="idol_promotion"\n #DREFIELD IDOL_PROMOTION_TERM="IDOL_PROMOTION_TERM"\n #DREFIELD IDOL_WHATSNEW="IDOL_WHATSNEW"\n #DREFIELD IDOL_SUMMARY="IDOL_SUMMARY"\n #DREFIELD IDOL_URL="IDOL_URL DATA"\n #DREFIELD IDOL_FORMNUMBER="IDOL_FORMNUMBER"\n #DREFIELD IDOL_DESCRIPTION="IDOL_DESCRIPTION DATA"\n#DRESECTION 0\n'

with open(com_line) as f:
    contents.append(f.readlines())
    f.close

#Writing to file
f2 = open("test.idx", 'w')
for content in contents:
    for s in content:
        if s == '#DRESECTION 0\n':
            s.replace("#DRESECTION 0\n", strs)
        f2.write("%s" % s)
f2.close
</code></pre> <p>I tested this code, and it is entering the conditional statement as well. I'm just wondering why the needed text <code>#DRESECTION 0\n</code> is not getting replaced with <code>strs</code>. Also, I am looking to do this operation on large text files worth a few gigs. Is there a better way of reading large text files in Python?</p>
0
2016-09-20T06:13:51Z
39,587,072
<p>Strings in Python are immutable. Once a string object has been created, it can't be modified. You can, however, create a new string based on the old one.</p> <p>When you are calling <code>s.replace</code>, it doesn't modify <code>s</code> in place, because that's not possible. Instead, the <code>str.replace</code> method returns a new string. You could do <code>s = s.replace(...)</code> and it should work. However, there's no point in replacing the whole contents of one string with another string when you could just rebind the name to the new string.</p> <p>I suggest using:</p> <pre><code>if s == '#DRESECTION 0\n':
    s = strs
f2.write(s)
</code></pre> <p>I also changed the <code>write</code> call. There's no need to use string substitution here, since you want to write the whole string (and nothing extra). Just pass <code>s</code> as the argument.</p> <p>There are a few other minor issues with your code:</p> <p>For instance, you're not calling the <code>close</code> methods on your files, only naming them. You need <code>f2.close()</code> if you want to actually close the file. But you don't actually need to do that for <code>f</code>, since the <code>with</code> statement you're using will close the file automatically (you should probably use a <code>with</code> for <code>f2</code> and not bother with the manual <code>close</code> call).</p> <p>There also doesn't seem to be much need for <code>contents</code> to be a nested list (with just one item in the outer list). You probably want to do <code>contents = f.readlines()</code> rather than <code>append</code>ing the list of lines to an initially empty list. That will let you get rid of one of the loops in the later code.</p>
3
2016-09-20T06:24:33Z
[ "python", "replace", "readline" ]
pytest monkeypatch: it is possible to return different values each time when patched method called?
39,586,925
<p>In <code>unittest</code> I can assign an iterable of values to <a href="https://docs.python.org/3.5/library/unittest.mock.html?highlight=side_effect#unittest.mock.Mock.side_effect" rel="nofollow">side_effect</a> - each of them will be returned one by one when the patched method is called. Moreover, I <a href="http://stackoverflow.com/a/7665754/1879101">found</a> that in <code>unittest</code> my patched method can return different results according to the input arguments. Can I do something like that in pytest? The <a href="http://doc.pytest.org/en/latest/monkeypatch.html#example-preventing-requests-from-remote-operations" rel="nofollow">documentation</a> does not mention this.</p>
0
2016-09-20T06:15:19Z
39,588,982
<p>You could certainly monkeypatch something with a class defining a <code>__call__</code> method which does whatever you want - however, nothing keeps you from using <code>unittest.mock</code> with pytest - there's even the <a href="https://github.com/pytest-dev/pytest-mock/" rel="nofollow"><code>pytest-mock</code> plugin</a> to make this easier.</p>
1
2016-09-20T08:18:49Z
[ "python", "unit-testing", "py.test", "python-unittest" ]
How to Remove single character from a Mixed string In Python
39,587,253
<p><a href="http://i.stack.imgur.com/S55ee.png" rel="nofollow"><img src="http://i.stack.imgur.com/S55ee.png" alt="Table "></a></p> <p>I have one table (Please Ref Image) In this Table I want to remove "A" char from each Row How can I do in Python.</p> <p>Below is my code using <code>regexe_replace</code> but code is not optimised I want optimised code</p> <pre><code> def re(s): return regexp_replace(s, "A", "").cast("Integer") finalDF = finalD.select(re(col("C0")).alias("C0"),col("C1"), re(col("C2")).alias("C2"), re(col("C3")).alias("C3"),col("C4"), re(col("C5")).alias("C5"), re(col("C6")).alias("C6"),col("C7"), re(col("C8")).alias("C8"), re(col("C9")).alias("C9"),col("C10"), re(col("C11")).alias("C11"),col("C12"), re(col("C13")).alias("C13"), re(col("C14")).alias("C14"),col("C15"), re(col("C16")).alias("16"),col("C17"), re(col("C18")).alias("18"), re(col("C19")).alias("C19"),col("Label")) finalDF.show(2) </code></pre> <p>Thank you in Advance.</p>
-3
2016-09-20T06:37:29Z
39,587,375
<p>Why regex? Regex would be overkill.</p> <p>If you have data in the format you have given, then use the <strong>replace function</strong> as below:</p> <p><strong>Content of master.csv:</strong></p> <pre><code>A11| 6|A34|A43|
A11| 6|A35|A44|
</code></pre> <p><strong>Code:</strong></p> <pre><code>with open('master.csv','r') as fh:
    for line in fh.readlines():
        print "Before - ",line
        line = line.replace('A','')
        print "After - ", line
        print "---------------------------"
</code></pre> <p>Output:</p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python c.py
Before -  A11| 6|A34|A43|
After -  11| 6|34|43|
---------------------------
Before -  A11| 6|A35|A44|
After -  11| 6|35|44|
---------------------------
</code></pre> <p><strong>Code replacing 'A' in the complete data in one shot (without going line by line):</strong></p> <pre><code>with open("master.csv",'r') as fh:
    data = fh.read()

data_after_remove = data.replace('A','')
print "Before remove ..."
print data
print "After remove ..."
print data_after_remove
</code></pre> <p><strong>Output:</strong></p> <pre><code>C:\Users\dinesh_pundkar\Desktop&gt;python c.py
Before remove ...
A11| 6|A34|A43|
A11| 6|A35|A44|

After remove ...
11| 6|34|43|
11| 6|35|44|

C:\Users\dinesh_pundkar\Desktop&gt;
</code></pre>
2
2016-09-20T06:44:41Z
[ "python", "optimization", "replace" ]
How to send JSON data created by Python to JavaScript?
39,587,355
<p>I am using Python <code>cherrypy</code> and <code>Jinja</code> to serve my web pages. I have two Python files: <em>Main.py</em> (handles web pages) and <em>search.py</em> (server-side functions).<br> I create a dynamic dropdown list (using <code>JavaScript</code>) based on a local <code>JSON</code> file called <em>component.json</em> (created by the function <em>componentSelectBar</em> inside <em>search.py</em>). </p> <p>I want to ask how my JavaScript can retrieve the JSON data <strong>without physically storing the JSON data in my local website root's folder</strong> and still fulfil the function of the dynamic dropdown list.</p> <p>The <em>componentSelectBar</em> function inside <em>search.py</em>:</p> <pre><code>def componentSelectBar(self, brand, category):
    args = [brand, category]
    self.myCursor.callproc('findComponent', args)
    for result in self.myCursor.stored_results():
        component = result.fetchall()

    if (len(component) == 0):
        print "component not found"
        return "no"

    components = []
    for com in component:
        t = unicodedata.normalize('NFKD', com[0]).encode('ascii', 'ignore')
        components.append(t)

    j = json.dumps(components)
    rowarraysFile = 'public/json/component.json'
    f = open(rowarraysFile, 'w')
    print &gt;&gt; f, j
    print "finish component bar"
    return "ok"
</code></pre> <p>The <em>selectBar.js</em>:</p> <pre><code>$.getJSON("static/json/component.json", function (result) {
    console.log("retrieve component list");
    console.log("where am i");
    $.each(result, function (i, word) {
        $("#component").append("&lt;option&gt;"+word+"&lt;/option&gt;");
    });
});
</code></pre>
1
2016-09-20T06:43:07Z
39,587,433
<p>To use that JSON data directly in JavaScript you can use:</p> <pre><code>var response = JSON.parse(component);
console.log(response); // prints the parsed data
</code></pre> <p>OR</p> <p>You already created a JSON file. If that file is in the right format, then you can read the JSON data from it using jQuery's <code>jQuery.getJSON()</code>. For more: <a href="http://api.jquery.com/jQuery.getJSON/" rel="nofollow">http://api.jquery.com/jQuery.getJSON/</a></p>
-1
2016-09-20T06:48:29Z
[ "javascript", "python", "html", "json", "cherrypy" ]
How to send JSON data created by Python to JavaScript?
39,587,355
<p>I am using Python <code>cherrypy</code> and <code>Jinja</code> to serve my web pages. I have two Python files: <em>Main.py</em> (handle web pages) and <em>search.py</em> (server-side functions).<br> I create a dynamic dropdown list (using <code>JavaScript</code>) based on a local <code>JSON</code> file called <em>component.json</em>(created by function <em>componentSelectBar</em> inside <em>search.py</em>). </p> <p>I want to ask how can my JavaScript retrieve JSON data <strong>without physically storing the JSON data into my local website root's folder</strong> and still fulfil the function of dynamic dropdown list.</p> <p>The <em>componentSelectBar</em> function inside <em>search.py</em>:</p> <pre><code>def componentSelectBar(self, brand, category): args = [brand, category] self.myCursor.callproc('findComponent', args) for result in self.myCursor.stored_results(): component = result.fetchall() if (len(component) == 0): print "component not found" return "no" components = [] for com in component: t = unicodedata.normalize('NFKD', com[0]).encode('ascii', 'ignore') components.append(t) j = json.dumps(components) rowarraysFile = 'public/json/component.json' f = open(rowarraysFile, 'w') print &gt;&gt; f, j print "finish component bar" return "ok" </code></pre> <p>The <em>selectBar.js</em>:</p> <pre><code> $.getJSON("static/json/component.json", function (result) { console.log("retrieve component list"); console.log("where am i"); $.each(result, function (i, word) { $("#component").append("&lt;option&gt;"+word+"&lt;/option&gt;"); }); }); </code></pre>
1
2016-09-20T06:43:07Z
39,587,684
<ol> <li>Store the results from componentSelectBar in a database.</li> <li>Expose a new API that reads the results from the database and returns JSON to the browser.</li> </ol> <p>Demo here:</p> <pre><code>@cherrypy.expose
def codeSearch(self, modelNumber, category, brand):
    ...
    result = self.search.componentSelectBar(cherrypy.session['brand'], cherrypy.session['category'])
    # here store result into a database, for example, brand_category_search_result
    ...

@cherrypy.expose
@cherrypy.tools.json_out()
def getSearchResult(self, category, brand):
    # load json from that database, here is brand_category_search_result
    a_json = loadSearchResult(category, brand)
    return a_json
</code></pre> <p>Document on CherryPy, hope it helps: <a href="http://docs.cherrypy.org/en/latest/basics.html#encoding-response" rel="nofollow">Encoding response</a></p> <p>In your browser, you need to GET /getSearchResult for the JSON:</p> <pre><code>$.getJSON("/getSearchResult/&lt;arguments here&gt;", function (result) {
    console.log("retrieve component list");
    console.log("where am i");
    $.each(result, function (i, word) {
        $("#component").append("&lt;option&gt;"+word+"&lt;/option&gt;");
    });
});
</code></pre>
0
2016-09-20T07:02:39Z
[ "javascript", "python", "html", "json", "cherrypy" ]
How to send JSON data created by Python to JavaScript?
39,587,355
<p>I am using Python <code>cherrypy</code> and <code>Jinja</code> to serve my web pages. I have two Python files: <em>Main.py</em> (handle web pages) and <em>search.py</em> (server-side functions).<br> I create a dynamic dropdown list (using <code>JavaScript</code>) based on a local <code>JSON</code> file called <em>component.json</em>(created by function <em>componentSelectBar</em> inside <em>search.py</em>). </p> <p>I want to ask how can my JavaScript retrieve JSON data <strong>without physically storing the JSON data into my local website root's folder</strong> and still fulfil the function of dynamic dropdown list.</p> <p>The <em>componentSelectBar</em> function inside <em>search.py</em>:</p> <pre><code>def componentSelectBar(self, brand, category): args = [brand, category] self.myCursor.callproc('findComponent', args) for result in self.myCursor.stored_results(): component = result.fetchall() if (len(component) == 0): print "component not found" return "no" components = [] for com in component: t = unicodedata.normalize('NFKD', com[0]).encode('ascii', 'ignore') components.append(t) j = json.dumps(components) rowarraysFile = 'public/json/component.json' f = open(rowarraysFile, 'w') print &gt;&gt; f, j print "finish component bar" return "ok" </code></pre> <p>The <em>selectBar.js</em>:</p> <pre><code> $.getJSON("static/json/component.json", function (result) { console.log("retrieve component list"); console.log("where am i"); $.each(result, function (i, word) { $("#component").append("&lt;option&gt;"+word+"&lt;/option&gt;"); }); }); </code></pre>
1
2016-09-20T06:43:07Z
39,587,922
<p>You are rendering HTML and sending it as the response. If you wish to work with JSON, this has to change. You should return JSON from your main.py, and the JavaScript side will send requests (GET or POST) and render the result.</p> <pre><code>def componentSelectBar(self, brand, category):
    /* Your code goes here */
    j = json.dumps(components)
    # Code to add a persistent store here
    rowarraysFile = 'public/json/component.json'
    f = open(rowarraysFile, 'w')
    print &gt;&gt; f, j
    # Better to use append mode and append the contents to the file in python
    return j  # Instead of the string "ok"

@cherrypy.expose
def codeSearch(self):
    json_request = cherrypy.request.body.read()
    import json  # This should go to the top of the file
    input_dict = json.loads(json_request)
    modelNumber = input_dict.get("modelNumber", "")
    category = input_dict.get("category", "")
    brand = input_dict.get("brand", "")
    /* Your code goes here */
    json_response = self.search.componentSelectBar(cherrypy.session['brand'], cherrypy.session['category'])
    return json_response
</code></pre> <p>Here, I added only the successful scenario. However, you should manage the failure scenarios (a JSON error response that gives as much detail as possible) in the <code>componentSelectBar</code> function. That will help you keep the <code>codeSearch</code> function as plain as possible and help in the long run (read: maintaining the code). </p> <p>And I would suggest you read <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a> and apply it to the code, as it is the norm for all Python programmers and will help anyone else who touches your code.</p> <p>EDIT: This is a sample JavaScript function that will make a POST request and get the JSON response:</p> <pre><code>searchResponse: function(){
    $.ajax({
        url: 'http://localhost:8080/codeSearch',  // Add your URL here
        data: {"brand" : "Levis", "category" : "pants"},
        async: false,
        success: function(search_response) {
            response_json = JSON.parse(search_response)
            alert(response_json)
            // Do what you have to do here;
            // In this specific case, you have to generate a table
            // or other structure based on the response received
        }
    })
}
</code></pre>
-1
2016-09-20T07:16:41Z
[ "javascript", "python", "html", "json", "cherrypy" ]
How to extract year and week number in a dataframe and put it in a new column, python
39,587,451
<p>I have the following dataframe:</p> <pre><code>sale_id    created_at
1          2016-05-28T05:53:31.042Z
2          2016-05-30T12:50:58.184Z
3          2016-05-23T10:22:18.858Z
4          2016-05-27T09:20:15.158Z
5          2016-05-21T08:30:17.337Z
6          2016-05-28T07:41:14.361Z
</code></pre> <p>I need to add a year_week column that contains the year and week number of each row in the created_at column:</p> <pre><code>sale_id    created_at                  year_week
1          2016-05-28T05:53:31.042Z    2016-21
2          2016-05-30T12:50:58.184Z    2016-22
3          2016-05-23T10:22:18.858Z    2016-21
4          2016-05-27T09:20:15.158Z    2016-21
5          2016-05-21T08:30:17.337Z    2016-20
6          2016-05-28T07:41:14.361Z    2016-21
</code></pre> <p>I'd prefer a solution that can be easily transferred to pyspark as well.</p>
1
2016-09-20T06:50:05Z
39,587,480
<p><strong>UPDATE:</strong> PySpark DF solution:</p> <pre><code>from pyspark.sql.functions import *

df = df.withColumn('year_week', date_format('created_at', 'yyyy-w'))
</code></pre> <p><strong>Old Pandas solution:</strong></p> <p>try this:</p> <pre><code>df['year_week'] = df.created_at.dt.year.astype(str) + '-' + df.created_at.dt.weekofyear.astype(str)

In [29]: df
Out[29]:
   sale_id              created_at year_week
0        1 2016-05-28 05:53:31.042   2016-21
1        2 2016-05-30 12:50:58.184   2016-22
2        3 2016-05-23 10:22:18.858   2016-21
3        4 2016-05-27 09:20:15.158   2016-21
4        5 2016-05-21 08:30:17.337   2016-20
5        6 2016-05-28 07:41:14.361   2016-21
</code></pre> <p>Note that the two timed variants below use different week-numbering conventions: <code>strftime('%Y-%U')</code> counts Sunday-started weeks from 00, while <code>weekofyear</code>/<code>week</code> use ISO week numbers, so results can differ near year boundaries.</p> <p><strong>Timing against 600K rows DF:</strong></p> <pre><code>In [33]: df = pd.concat([df] * 10**5, ignore_index=True)

In [34]: %timeit df.created_at.dt.strftime('%Y-%U')
1 loop, best of 3: 16.1 s per loop

In [35]: %timeit df.created_at.dt.year.astype(str) + '-' + df.created_at.dt.weekofyear.astype(str)
1 loop, best of 3: 7.43 s per loop

In [43]: %timeit df.created_at.dt.year.astype(str) + '-' + df.created_at.dt.week.astype(str)
1 loop, best of 3: 7.45 s per loop

In [36]: df.shape
Out[36]: (600000, 2)
</code></pre>
2
2016-09-20T06:52:26Z
[ "python", "pandas", "apache-spark", "pyspark", "spark-dataframe" ]
How to extract year and week number in a dataframe and put it in a new column, python
39,587,451
<p>I have the following dataframe :</p> <pre><code>sale_id created_at 1 2016-05-28T05:53:31.042Z 2 2016-05-30T12:50:58.184Z 3 2016-05-23T10:22:18.858Z 4 2016-05-27T09:20:15.158Z 5 2016-05-21T08:30:17.337Z 6 2016-05-28T07:41:14.361Z </code></pre> <p>i need t add a year-week columns where it contains year and week number of each row in created_at column:</p> <pre><code>sale_id created_at year_week 1 2016-05-28T05:53:31.042Z 2016-21 2 2016-05-30T12:50:58.184Z 2016-22 3 2016-05-23T10:22:18.858Z 2016-21 4 2016-05-27T09:20:15.158Z 2016-21 5 2016-05-21T08:30:17.337Z 2016-20 6 2016-05-28T07:41:14.361Z 2016-21 </code></pre> <p>I'd prefer a solution that can be easily transferred to pyspark as well.</p>
1
2016-09-20T06:50:05Z
39,587,484
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow"><code>strftime</code></a>:</p> <p><a href="http://strftime.org/" rel="nofollow">Python's strftime directives</a>.</p> <pre><code>#if dtype is not datetime
df.created_at = pd.to_datetime(df.created_at)

df['year_week'] = df.created_at.dt.strftime('%Y-%U')
print (df)
   sale_id              created_at year_week
0        1 2016-05-28 05:53:31.042   2016-21
1        2 2016-05-30 12:50:58.184   2016-22
2        3 2016-05-23 10:22:18.858   2016-21
3        4 2016-05-27 09:20:15.158   2016-21
4        5 2016-05-21 08:30:17.337   2016-20
5        6 2016-05-28 07:41:14.361   2016-21
</code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.year.html" rel="nofollow"><code>dt.year</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.week.html" rel="nofollow"><code>dt.week</code></a>:</p> <pre><code>df['year_week'] = df.created_at.dt.year.astype(str) + '-' + df.created_at.dt.week.astype(str)
print (df)
   sale_id              created_at year_week
0        1 2016-05-28 05:53:31.042   2016-21
1        2 2016-05-30 12:50:58.184   2016-22
2        3 2016-05-23 10:22:18.858   2016-21
3        4 2016-05-27 09:20:15.158   2016-21
4        5 2016-05-21 08:30:17.337   2016-20
5        6 2016-05-28 07:41:14.361   2016-21
</code></pre>
1
2016-09-20T06:52:35Z
[ "python", "pandas", "apache-spark", "pyspark", "spark-dataframe" ]
Random walk's weird outcome in python 3?
39,587,461
<p>I just started learning Python and have a problem printing the new location of a random walk in 3 dimensions. No error pops up, but it's obvious that the output (x, y, z) being printed is unreasonable! When simulating a random walk step by step, I assume only one value in (x, y, z) should change each time. But that is not what the output shows. I'm trying to debug it but am still confused about what the real problem is.</p> <p>The output's head lines:</p> <pre><code>(0,-1,1)
(-1,0,1)
(-2,0,1)
(-2,1,2)
(-2,2,2)
(-2,2,3)
(-1,2,3)
(0,1,3)
</code></pre> <p>My motivation:</p> <blockquote> <p>The purpose of this code is to simulate N steps of a random walk in 3 dimensions. At each step, a random direction is chosen (north, south, east, west, up, down) and a step of size 1 is taken in that direction. The new location is then printed. The starting location is the origin (0, 0, 0). </p> </blockquote> <p>My code:</p> <pre><code>import pdb
import random # this helps us generate random numbers

N = 30 # number of steps
n = random.random() # generate a random number
x = 0
y = 0
z = 0
count = 0

while count &lt;= N:
    if n &lt; 1/6:
        x = x + 1  # move east
        n = random.random() # generate a new random number
    if n &gt;= 1/6 and n &lt; 2/6:
        y = y + 1 # move north
        n = random.random() # generate a new random number
    if n &gt;= 2/6 and n &lt; 3/6:
        z = z + 1 # move up
        n = random.random() # generate a new random number
    if n &gt;= 3/6 and n &lt; 4/6:
        x = x - 1 # move west
        n = random.random() # generate a new random number
    if n &gt;= 4/6 and n &lt; 5/6:
        y = y - 1 # move south
        n = random.random() # generate a new random number
    if n &gt;= 5/6:
        z = z - 1 # move down
        n = random.random() # generate a new random number

    print("(%d,%d,%d)" % (x,y,z))
    count = count + 1

print("squared distance = %d" % (x*x + y*y + z*z))
</code></pre> <p>Environment:</p> <p>Python 3.5 in Jupyter Notebook, Windows 10, Dell XPS 13</p>
-2
2016-09-20T06:50:42Z
39,587,708
<p>You set <code>n</code> to a new random number in <em>every <code>if</code> test</em>:</p> <pre><code>if n &lt; 1/6:
    x = x + 1  # move east
    n = random.random() # generate a new random number
</code></pre> <p>This means the <em>next</em> <code>if</code> test can then also match on the new <code>n</code>, giving you more than one change per step.</p> <p>Move the <code>n = random.random()</code> step to the <em>top</em> of the loop, generating it only once per step. You probably want to use <code>elif</code> as well to avoid making too many tests:</p> <pre><code>N = 30 # number of steps

x = 0
y = 0
z = 0

for count in range(N):
    n = random.random() # generate a new random number
    if n &lt; 1/6:
        x = x + 1  # move east
    elif n &lt; 2/6:
        y = y + 1 # move north
    elif n &lt; 3/6:
        z = z + 1 # move up
    elif n &lt; 4/6:
        x = x - 1 # move west
    elif n &lt; 5/6:
        y = y - 1 # move south
    else:
        z = z - 1 # move down

    print("(%d,%d,%d)" % (x,y,z))
</code></pre> <p>I also switched to using a <code>for</code> loop over <code>range()</code> so you don't have to manually increment and test <code>count</code>.</p> <p>This can be further simplified by using a <em>list</em> to store the position, <code>random.randrange()</code> to pick an index in that list at random, and <code>random.choice()</code> to pick which direction to step in:</p> <pre><code>N = 30 # number of steps

pos = [0, 0, 0]

for count in range(N):
    index = random.randrange(3)      # generate a new random index
    change = random.choice([-1, 1])
    pos[index] += change
    print(tuple(pos))
</code></pre>
2
2016-09-20T07:04:09Z
[ "python", "random" ]
Find out selected features in transformed output in Feature Selection
39,587,526
<p>I was using the L1-based feature selection from the <a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow">feature selection documentation</a>. The transformed result gives a numpy array. Is there a way I could figure out which features got selected in the transformed output <code>X_new</code>?</p> <pre><code>from sklearn.svm import LinearSVC
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
import pandas as pd

iris = load_iris()
y = iris.target
X = pd.DataFrame(iris.data,columns=['sepal_length','sepal_width','petal_length','petal_width'])
print X.shape   #(150,4)

lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(X, y)
model = SelectFromModel(lsvc, prefit=True)
X_new = model.transform(X)
print X_new.shape   #(150, 3)
</code></pre>
0
2016-09-20T06:54:30Z
39,588,347
<p>I figured out from this question, <a href="http://stackoverflow.com/questions/27622175/which-features-selects-fit-transform?rq=1">Which features selects fit_transform?</a>, that <code>lsvc.coef_</code> returns the coefficients for all features, with the dropped features having all <code>0</code>'s.</p> <pre><code>df = pd.DataFrame(lsvc.coef_,columns=['sepal_length','sepal_width','petal_length','petal_width'])
print df.columns[(df == 0).all()]
</code></pre>
0
2016-09-20T07:40:06Z
[ "python", "scikit-learn" ]
Column transform in Spark MLlib
39,587,529
<p>I have read the <a href="https://spark.apache.org/docs/latest/ml-features.html#binarizer" rel="nofollow">Spark MLlib doc</a> for feature transforms, but I am still confused about two simple cases:</p> <p>1. How do I transform a single column flexibly? For example, I have one column named "date" in "YYYY-MM-DD" format, and I want to generate a new column called "week" based on "date". With a pandas.DataFrame it could be done with Series.apply; my question is how to do that in Spark MLlib.</p> <p>2. How do I generate a new column based on multiple columns? For example, I want to calculate roi based on spend and income, which is simple with a pandas.DataFrame:</p> <pre><code>df['roi'] = (df['income'] - df['spend'])/df['spend']
</code></pre> <p>For Spark MLlib, I find that SQLTransformer may be usable for the same work, but I am not sure.</p> <p>Could anyone tell me how to deal with this in Spark MLlib? Thanks a lot.</p>
0
2016-09-20T06:54:32Z
39,588,292
<p>A clean option is to define your own function and apply it to your <code>DataFrame</code> using <code>withColumn()</code>. Note that this has nothing to do with <code>MLlib</code>, as that refers to the machine learning module of <code>Spark</code>.</p> <pre><code>from pyspark.sql.types import FloatType
from pyspark.sql.functions import udf

def roiCalc(income, spend): # Define function
    return (income - spend)/spend

roiCalculator = udf(roiCalc, FloatType()) # Convert to udf
df.withColumn("roi", roiCalculator(df["income"], df["spend"])) # Apply to df
</code></pre>
2
2016-09-20T07:36:58Z
[ "python", "apache-spark", "spark-dataframe", "apache-spark-mllib" ]
make a secure connection between squid and python
39,587,677
<p>Hi folks,<br /> I'd like to configure a squid proxy server on my server and make a secure (SSL/TLS) connection from the client to the server. On the other hand, I want to set my browser's proxy to route requests to a local proxy client, which is intended to be written in Python; the Python script then makes a secure connection with the squid proxy server, so all data between the client and the proxy server is encrypted. <br /> How does the Python script have to be written? Does it need a public key or something like that?</p>
0
2016-09-20T07:02:20Z
39,588,301
<p>You can consider the <code>paramiko</code> library for making SSH connections from Python. There are plenty of examples on the Internet of how to use it.</p>
1
2016-09-20T07:37:30Z
[ "python", "sockets", "ssl", "proxy", "squid" ]
Flask JSON post request not working
39,587,695
<p>I am using React to submit a form to a Flask backend. The data is submitted as JSON; this is an example:</p> <pre><code>add_new_user(e){
    e.preventDefault()
    var user_details = {}
    user_details['fname'] = this.state.first_name
    user_details['last_name'] = this.state.last_name
    var post_request = new Request('http://127.0.0.1:5000/add_new_user', {
        method: 'post',
        body: JSON.stringify(user_details)
    })
    fetch(post_request).then(function(response){
        console.log(response)
    })
}
</code></pre> <p>On my backend the code looks like this:</p> <pre><code>@app.route('/add_new_user', methods=['POST'])
def add_user():
    content = request.json()
    print content
    return 'user added'
</code></pre> <p>However the content variable is null and therefore the printed data on the screen is None. How can I fix this? What am I doing wrong? Thanks.</p>
2
2016-09-20T07:03:16Z
39,587,913
<p>You might need to set the content type of the Request to <code>application/json</code> (via its <code>headers</code> option); otherwise Flask's <code>request.json</code> will be <code>None</code>, because it only parses bodies declared as JSON.</p>
0
2016-09-20T07:16:15Z
[ "javascript", "python", "json", "node.js", "reactjs" ]
Flask JSON post request not working
39,587,695
<p>I am using react to submit a form to a flask backend. The data is submitted in json this is an example.</p> <pre><code>add_new_user(e){ e.preventDefault() var user_details = {} user_details['fname'] = this.state.first_name user_details['last_name'] = this.state.last_name var post_request = new Request('http://127.0.0.1:5000/add_new_user', { method: 'post', body: JSON.stringify(user_details) }) fetch(post_request).then(function(response){ console.log(response) }) </code></pre> <p>On my backend the code looks like this,</p> <pre><code>@app.route('/add_new_user', methods=['POST']) def add_user(): content = request.json() print content return 'user added' </code></pre> <p>However the content variable is null and therefore the printed data on the screen is None. How can I fix this? What am I doing wrong. Thanks</p>
2
2016-09-20T07:03:16Z
39,587,947
<p>Try <code>request.get_json(force=True)</code>. It's recommended to use the <code>get_json()</code> method instead of <code>request.json</code> (which is an attribute, not a callable, so <code>request.json()</code> as in your code fails). <code>force=True</code> makes Flask ignore the content type requirement. </p>
0
2016-09-20T07:18:14Z
[ "javascript", "python", "json", "node.js", "reactjs" ]
How to check values of column in one dataframe available or not in column of another dataframe?
39,587,730
<p>I have two dataframes:</p> <pre><code>df1_data = {'sym1' :{0:'abc a01',1:'pqr q02',2:'xyz y03',3:'mno o12',4:'lmn l45'}}
df1 = pd.DataFrame(df1_data)
print df1

df2_data = {'sym2' :{0:'abc a01',1:'xxx p0',2:'xyz y03',3:'mno o12',4:'lmn l45',5:'rrr r1',6:'kkk k3'}}
df2 = pd.DataFrame(df2_data)
print df2
</code></pre> <p><strong>output-</strong></p> <pre><code>      sym1
0  abc a01
1  pqr q02
2  xyz y03
3  mno o12
4  lmn l45

      sym2
0  abc a01
1   xxx p0
2  xyz y03
3  mno o12
4  lmn l45
5   rrr r1
6   kkk k3
</code></pre> <p>I want to check whether the values in df2's sym2 column are available in df1's sym1 column. If some symbols in the sym2 column are not available, then I want a list of the symbols that are missing from sym1. If all symbols are available, the list must be empty.</p> <p><strong>Expected Result-</strong></p> <pre><code>list -&gt; ['xxx p0','rrr r1','kkk k3']
</code></pre>
2
2016-09-20T07:05:20Z
39,587,779
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>, then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> and convert to a <code>list</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow"><code>tolist</code></a>:</p> <pre><code>print (~df2.sym2.isin(df1.sym1))
0    False
1     True
2    False
3    False
4    False
5     True
6     True
Name: sym2, dtype: bool

print (df2.ix[~df2.sym2.isin(df1.sym1), 'sym2'])
1    xxx p0
5    rrr r1
6    kkk k3
Name: sym2, dtype: object

print (df2.ix[~df2.sym2.isin(df1.sym1), 'sym2'].tolist())
['xxx p0', 'rrr r1', 'kkk k3']
</code></pre>
4
2016-09-20T07:08:28Z
[ "python", "pandas", "dataframe" ]
How to check values of column in one dataframe available or not in column of another dataframe?
39,587,730
<p>I have two dataframes-</p> <pre><code>df1_data = {'sym1' :{0:'abc a01',1:'pqr q02',2:'xyz y03',3:'mno o12',4:'lmn l45'}} df1 = pd.DataFrame(df1_data) print df1 df2_data = {'sym2' :{0:'abc a01',1:'xxx p0',2:'xyz y03',3:'mno o12',4:'lmn l45',5:'rrr r1',6:'kkk k3'}} df2 = pd.DataFrame(df2_data) print df2 </code></pre> <p><strong>output-</strong></p> <pre><code> sym1 0 abc a01 1 pqr q02 2 xyz y03 3 mno o12 4 lmn l45 sym2 0 abc a01 1 xxx p0 2 xyz y03 3 mno o12 4 lmn l45 5 rrr r1 6 kkk k3 </code></pre> <p>I want to check sym2 column values available or not in df2 dataframes sym1 column. If symbols in sym2 column are not available then I want list of that symbols which are not available in sym1 column. If all symbols are available then list must be empty.</p> <p><strong>Expected Result-</strong></p> <pre><code>list -&gt; ['xxx p0','rrr r1','kkk k3'] </code></pre>
2
2016-09-20T07:05:20Z
39,587,987
<p>Here is another, a bit faster, solution:</p> <pre><code>In [54]: df2.set_index('sym2').index.difference(df1.set_index('sym1').index).values
Out[54]: array(['kkk k3', 'rrr r1', 'xxx p0'], dtype=object)
</code></pre> <p>or as a vanilla Python list:</p> <pre><code>In [74]: df2.set_index('sym2').index.difference(df1.set_index('sym1').index).values.tolist()
Out[74]: ['kkk k3', 'rrr r1', 'xxx p0']
</code></pre> <p>Timings for 700K and 500K row DFs:</p> <pre><code>In [55]: df1 = pd.concat([df1] * 10**5, ignore_index=True)

In [57]: df2 = pd.concat([df2] * 10**5, ignore_index=True)

In [58]: df1.shape
Out[58]: (500000, 1)

In [59]: df2.shape
Out[59]: (700000, 1)

In [67]: %timeit df2.set_index('sym2').index.difference(df1.set_index('sym1').index).values
10 loops, best of 3: 123 ms per loop

In [68]: %timeit df2.ix[~df2.sym2.isin(df1.sym1), 'sym2']
1 loop, best of 3: 216 ms per loop

In [72]: %timeit df2.set_index('sym2').index.difference(df1.set_index('sym1').index).values.tolist()
10 loops, best of 3: 123 ms per loop
</code></pre>
1
2016-09-20T07:20:32Z
[ "python", "pandas", "dataframe" ]
How do I calculate the percent difference between the 1st and 6th key in a list of dictionaries?
39,587,863
<p>I have a pandas df column with a list of dictionaries for each company name, like below:</p> <pre><code>company  | growth_scores
comp xyz | [{u'score': u'198', u'recorded_at': u'2016-09'},{u'score': u'190', u'recorded_at': u'2016-08'}]
</code></pre> <p>I understand how to extract the keys and I'm familiar with the pd.apply method, but I can't seem to piece together anything that will go row by row and perform the calculation. Ultimately, I need to perform a calculation and store the result in a new column, for each company. </p> <p>The output should look like this:</p> <pre><code>company  | growth_score_diff
comp xyz | 10%
</code></pre> <p>Would love some guidance here! </p>
0
2016-09-20T07:13:19Z
39,589,437
<p>Say you have the following DataFrame:</p> <pre><code>df = pd.DataFrame.from_dict({'company': 'Pandology',
                             'metrics': [[{'score': 10}, {'score': 20}, {'score': 35}]]})
</code></pre> <p>which looks like this:</p> <p><a href="http://i.stack.imgur.com/BUvUA.png" rel="nofollow"><img src="http://i.stack.imgur.com/BUvUA.png" alt="enter image description here"></a></p> <p>To compute a total score, you can <code>map</code> the <code>metrics</code> column to a new column called <code>score_total</code>. To perform the actual calculation, you define a function <code>calculate_score</code> which takes a row of <code>metrics</code> data as input and outputs a total score value. (<em>In this case it's just a trivial sum calculation</em>)</p> <pre><code>def calculate_score(metrics):
    total_score = 0
    for metric in metrics:
        total_score += metric['score']
    return total_score

df['score_total'] = df['metrics'].map(calculate_score)
</code></pre> <p>Now you have a new column containing the result:</p> <p><a href="http://i.stack.imgur.com/o06hj.png" rel="nofollow"><img src="http://i.stack.imgur.com/o06hj.png" alt="enter image description here"></a></p>
1
2016-09-20T08:42:11Z
[ "python", "list", "pandas", "dictionary" ]
How does this TensorFlow sample actually update the weights to find the solution
39,588,008
<p>New to tensorflow, python and numpy (I guess that's everything in this sample)</p> <p>In the code below I (almost) understand that the update_weights.run() call in the loop is calculating the loss and developing new weights. What I don't see is how this actually causes the weights to be changed. </p> <p>The point I'm stuck on is commented # THIS IS WHAT I DONT UNDERSTAND</p> <p>What is the relationship between the update_weights.run() and the new values being placed in weights? - Or perhaps; how come when weights.eval is called after the loop that the values have changed?</p> <p>Thanks for any help</p> <pre><code>#@test {"output": "ignore"} # Import tf import tensorflow as tf # Numpy is Num-Pie n dimensional arrays # https://en.wikipedia.org/wiki/NumPy import numpy as np # Plotting library # http://matplotlib.org/users/pyplot_tutorial.html import matplotlib.pyplot as plt # %matplotlib magic # http://ipython.readthedocs.io/en/stable/interactive/tutorial.html#magics-explained %matplotlib inline # Set up the data with a noisy linear relationship between X and Y. # Variable? num_examples = 5 noise_factor = 1.5 line_x_range = (-10,10) #Just variables in Python # np.linspace - Return evenly spaced numbers over a specified interval. X = np.array([ np.linspace(line_x_range[0], line_x_range[1], num_examples), np.linspace(line_x_range[0], line_x_range[1], num_examples) ]) # Plot out the starting data # plt.figure(figsize=(4,4)) # plt.scatter(X[0], X[1]) # plt.show() # npm.random.randn - Return a sample (or samples) from the “standard normal” distribution. # Generate noise for x and y (2) noise = np.random.randn(2, num_examples) * noise_factor # plt.figure(figsize=(4,4)) # plt.scatter(noise[0],noise[1]) # plt.show() # += on an np.array X += noise # The 'Answer' polyfit to the noisy data answer_m, answer_b = np.polyfit(X[0], X[1], 1) # Destructuring Assignment - http://codeschool.org/python-additional-miscellany/ x, y = X # plt.figure(figsize=(4,4)) # plt.scatter(x, y) # plt.show() # np.array # for a in x # [(1., a) for a in [1,2,3]] =&gt; [(1.0, 1), (1.0, 2), (1.0, 3)] # numpy.ndarray.astype - http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html # Copy of the array, cast to a specified type. x_with_bias = np.array([(1., a) for a in x]).astype(np.float32) #Just variables in Python # The difference between our current outputs and the training outputs over time # Starts high and decreases losses = [] history = [] training_steps = 50 learning_rate = 0.002 # Start the session and give it a variable name sess with tf.Session() as sess: # Set up all the tensors, variables, and operations. # Creates a constant tensor input = tf.constant(x_with_bias) # Transpose the ndarray y of random float numbers target = tf.constant(np.transpose([y]).astype(np.float32)) # Start with random weights weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1)) # Initialize variables ...?obscure? tf.initialize_all_variables().run() print('Initialization complete') # tf.matmul - Matrix Multiplication # What are yhat? Why this name? yhat = tf.matmul(input, weights) # tf.sub - Matrix Subtraction yerror = tf.sub(yhat, target) # tf.nn.l2_loss - Computes half the L2 norm of a tensor without the sqrt # loss function? loss = tf.nn.l2_loss(yerror) # tf.train.GradientDescentOptimizer - Not sure how this is updating the weights tensor? # What is it operating on? 
update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # _ in Python is conventionally used for a throwaway variable for step in range(training_steps): # Repeatedly run the operations, updating the TensorFlow variable. # THIS IS WHAT I DONT UNDERSTAND update_weights.run() losses.append(loss.eval()) b, m = weights.eval() history.append((b,m,step)) # Training is done, get the final values for the graphs betas = weights.eval() yhat = yhat.eval() # Show the fit and the loss over time. # destructuring assignment fig, (ax1, ax2, ax3) = plt.subplots(1, 3) # Adjust whitespace between plots plt.subplots_adjust(wspace=.2) # Output size of the figure fig.set_size_inches(12, 4) ax1.set_title("Final Data Fit") ax1.axis('equal') ax1.axis([-15, 15, -15, 15]) # Scatter plot data x and y (pairs?) set with 60% opacity ax1.scatter(x, y, alpha=.6) # Scatter plot x and np.transpose(yhat)[0] (must be same length), in red, 50% transparency # these appear to be the x values mapped onto the ax1.scatter(x, np.transpose(yhat)[0], c="r", alpha=.5) # Add the line along the slope defined by betas (whatever that is) ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6) # This polyfit coefficients are reversed in order vs the betas ax1.plot(line_x_range, [answer_m * a + answer_b for a in line_x_range], "r", alpha=0.3) ax2.set_title("Loss over Time") # Create a range of intefers from 0 to training_steps and plot the losses as a curve ax2.plot(range(0, training_steps), losses) ax2.set_ylabel("Loss") ax2.set_xlabel("Training steps") ax3.set_title("Slope over Time") ax3.axis('equal') ax3.axis([-15, 15, -15, 15]) for b, m, step in history: ax3.plot(line_x_range, [b + a * m for a in line_x_range], "g", alpha=0.2) # This line seems to be superfluous removing it doesn't change the behaviour plt.show() </code></pre>
1
2016-09-20T07:21:30Z
39,590,065
<p>Ok, so update_weights() applies a minimizer to the loss you defined, i.e. the error between your prediction and the target. </p>

<p>What it does is nudge each weight by a small amount (how small is controlled by the learning_rate parameter) in the direction that makes your loss decrease, and hence makes your predictions "truer".</p>

<p>This is what happens when you call update_weights(): after the call your weights have changed by a small amount, and if everything went according to plan your loss value has decreased.</p>

<p>What you want is to follow the evolution of your loss and the weights: to see, for example, whether the loss is really decreasing (and your algorithm works), whether the weights are changing a lot, or simply to visualize them.</p>

<p>You can gain a lot of insight by visualizing how the loss changes. This is why you have to keep the full history of the parameters and the loss; that is why you eval them at each step. </p>

<p>The eval or run operation does different things on the minimizer and on the parameters: run on the minimizer <strong>applies</strong> one update step to the weights, while eval on the weights simply <strong>evaluates</strong> them. I strongly advise you to read <a href="http://cs231n.github.io/" rel="nofollow">this website</a>, where the author explains what is going on far better and in more detail than I can.</p>
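<p>For intuition: each <code>update_weights.run()</code> call performs one gradient descent step, roughly <code>weights = weights - learning_rate * d(loss)/d(weights)</code>. You could build an equivalent update op by hand; this is just a sketch of the idea, not what the optimizer literally does internally:</p>

<pre><code>gradients = tf.gradients(loss, [weights])[0]
manual_update = weights.assign_sub(learning_rate * gradients)
# manual_update.run() would then nudge `weights` one step, just like update_weights.run()
</code></pre>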
2
2016-09-20T09:12:30Z
[ "python", "numpy", "tensorflow", "gradient-descent" ]
Django + Apache - Can't find directory of my website
39,588,104
<p>I have built a website using the Django framework. Now I'm trying to deploy it using Apache + WSGI. Here is my setup: the server's IP is 59.120.185.12, and here is my .conf:</p>

<pre><code>&lt;VirtualHost *:80&gt;
ServerName www.vbgetech.com
ServerAlias vbgetech.com
ServerAdmin webmaster@localhost

Alias /media/ /home/max/v_bridge/media/
Alias /static/ /home/max/v_bridge/static/

&lt;Directory /home/max/v_bridge/media&gt;
Require all granted
&lt;/Directory&gt;

&lt;Directory /home/max/v_bridge/static&gt;
Require all granted
&lt;/Directory&gt;

WSGIScriptAlias / /home/max/v_bridge/v_bridge/wsgi.py

&lt;Directory /home/max/v_bridge/v_bridge/&gt;
&lt;Files wsgi.py&gt;
Require all granted
&lt;/Files&gt;
&lt;/Directory&gt;

&lt;/VirtualHost&gt;
</code></pre>

<p>And this is my wsgi.py:</p>

<pre><code>"""
WSGI config for website project.

It exposes the WSGI callable as a module-level variable named ``application``.

For more information on this file, see
https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/
"""

import os

from django.core.wsgi import get_wsgi_application
from os.path import join,dirname,abspath

PROJECT_DIR = dirname(dirname(abspath(__file__)))#3
import sys # 4
sys.path.insert(0,PROJECT_DIR) # 5

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "v_bridge.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
</code></pre>

<p>I think I followed all the instructions in the official documentation, but if I type the IP into a browser it still returns the Apache default page. I guess that is because my server can't map the IP to the .conf file. Does anyone have an idea about it?</p>
0
2016-09-20T07:26:03Z
39,588,808
<p>You should access the site using the URL <code>http://www.vbgetech.com</code>. You can't necessarily use the IP address. If you use the IP it will not match by host name lookup against <code>VirtualHost</code> and so the request will instead be handled by the first <code>VirtualHost</code> in the Apache configuration file, or in other words the default <code>VirtualHost</code>. That default <code>VirtualHost</code> is going to give you the default page if you never changed it.</p> <p>In other words, if you added a new <code>VirtualHost</code> and didn't modify the default one, then you are going to have to use a proper host name in the URL for it to match your <code>VirtualHost</code>.</p>
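<p>To verify this without setting up DNS, you can add a hosts-file entry mapping the domain to the server, or send the expected <code>Host</code> header explicitly while targeting the raw IP. A quick sketch using the requests library (with the IP from the question):</p>

<pre><code>import requests

# the Host header makes Apache's name-based VirtualHost matching kick in
r = requests.get('http://59.120.185.12/', headers={'Host': 'www.vbgetech.com'})
print(r.status_code)
</code></pre>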
1
2016-09-20T08:06:50Z
[ "python", "django", "apache", "mod-wsgi" ]
How shall I draw a "K" plot with matplotlib.finance for some special format data as below?
39,588,268
<p>The code is as below:</p>

<pre><code>import tushare as ts
import matplotlib.pyplot as plt
from matplotlib.finance import candlestick_ohlc as candle
stock=ts.get_hist_data('000581',ktype='w')
</code></pre>

<p>The data form of "stock" is as in the picture below: <a href="http://i.stack.imgur.com/Bo7QU.png" rel="nofollow">enter image description here</a></p>

<p>Then the code below:</p>

<pre><code>vals=stock.iloc[:,0:4]
fig=plt.figure()
ax=fig.add_subplot(111)
candle(ax,vals)
</code></pre>

<p>gives the following error:</p>

<pre><code>Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/anaconda3/lib/python3.5/site-packages/matplotlib/finance.py", line 735, in candlestick_ohlc
    alpha=alpha, ochl=False)
  File "/usr/local/anaconda3/lib/python3.5/site-packages/matplotlib/finance.py", line 783, in _candlestick
    t, open, high, low, close = q[:5]
ValueError: not enough values to unpack (expected 5, got 4)
</code></pre>

<p>How shall I resolve it?</p>
0
2016-09-20T07:35:17Z
39,588,414
<p><code>candlestick</code> needs a very specific format <em>and</em> order to work. For example, if you use <code>_ohlc</code>, then each row must start with a time value followed by open-high-low-close. The array for the candlestick graph can be prepared as follows:</p>

<pre><code>candleArray = []
i = 0
while i &lt; len(datep):
    # each row: time first, then the price fields (extra columns after close are ignored)
    newLine = datep[i], openp[i], highp[i], lowp[i], closep[i], volumep[i], pricechange[i], pchange[i]
    candleArray.append(newLine)
    i += 1
</code></pre>

<p>Then, you can call candlestick with the array <code>candleArray</code>.</p>
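<p>For the tushare frame in the question specifically, the missing fifth value is the time column: <code>candlestick_ohlc</code> wants each row as <code>(t, open, high, low, close)</code>, with <code>t</code> as a matplotlib date number. A rough sketch of preparing it, assuming the frame's index holds the dates and that the first four columns really are in open-high-low-close order (as the question's slice implies):</p>

<pre><code>import pandas as pd
from matplotlib.dates import date2num

vals = stock.iloc[:, 0:4].copy()
# prepend the dates, converted to matplotlib's float date representation
vals.insert(0, 'date', [date2num(pd.to_datetime(d)) for d in stock.index])
candle(ax, vals.values)
</code></pre>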
0
2016-09-20T07:44:03Z
[ "python", "matplotlib" ]
Python Sqlite3 Database table isn't being updated
39,588,293
<p>I'm creating a change-password page for a website, which requests the new password and the current password. The old password is hashed and salted using the scrypt library then compared to the password stored in the sqlite3 database, and if these are a match, the new password is hashed and the database is updated. However I am having difficulty executing the update command, as it throws a sqlite3.OperationalError: unrecognised token: "\" error. The execute statement currently has the following code:</p> <pre><code>c.execute("UPDATE users SET password = \'{0}\' WHERE memberID = \'{1}\'".format(newPas, memID)) </code></pre> <p>Initially we believed this error to have been caused by the use of ' in the string formatting due to the presence of ' within the new password itself, so this was run again as:</p> <pre><code>c.execute("UPDATE users SET password = \"{0}\" WHERE memberID = \"{1}\"".format(newPas, memID)) </code></pre> <p>This successfully runs, but doesn't actually change anything in the database. We also attempted to create a query string and then execute the string.</p> <pre><code>query = "UPDATE users SET password = {0} WHERE memberID = {1}".format(newPas, memID) c.execute(query) </code></pre> <p>This caused a sqlite3.OperationalError: near "'\xa1\x91\x9f\x88\xfb\x81\x12\xd4\xc2\xf9\xce\x91y\xf0/\xe1*#\x8aj\xc7\x1d\xd3\x91\x14\xcb\xa4\xabaP[\x02\x1d\x1b\xabr\xc7\xe4\xee\x19\x80c\x8e|\xc0S\xaaX\xc6\x04\xab\x08\x9b\x8e\xd7zB\xc6\x84[\xfb\xbc\x8d\xfc'": syntax error. I believe that this is caused by the presence of ' and " characters within the password, but I am unsure how to get around this issue as these are added by the hashing process and thus removing them would change the password. The password I would like to add is:</p> <pre><code>b'\xa1\x91\x9f\x88\xfb\x81\x12\xd4\xc2\xf9\xce\x91y\xf0/\xe1*#\x8aj\xc7\x1d\xd3\x91\x14\xcb\xa4\xabaP[\x02\x1d\x1b\xabr\xc7\xe4\xee\x19\x80c\x8e|\xc0S\xaaX\xc6\x04\xab\x08\x9b\x8e\xd7zB\xc6\x84[\xfb\xbc\x8d\xfc' </code></pre> <p>I was wondering if anyone could share some insights into why it isn't liking the "\" character or why it isn't updating the database, and point me in the right direction to making it work. If you need more information or code snippets or just want to yell at me, please don't hesitate to! Thank you in advance :)</p>
0
2016-09-20T07:36:58Z
39,588,580
<p>You should use parametrized queries, something like this:</p>

<pre><code>c.execute("""UPDATE users SET password = ? WHERE memberID = ?;""", (newPas, memID))
</code></pre>

<p>This also lets you avoid nasty things like SQL injection.</p>
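<p>A slightly fuller sketch (the database filename is hypothetical). Note that the parameter style also takes the raw hashed bytes directly, with no quoting or escaping needed, and that the change must be committed to persist:</p>

<pre><code>import sqlite3

conn = sqlite3.connect('site.db')  # hypothetical database file
c = conn.cursor()
newPas = b'\xa1\x91...'  # the scrypt output, passed through as-is
c.execute("UPDATE users SET password = ? WHERE memberID = ?;", (newPas, memID))
conn.commit()  # without commit() the update is never written to disk
</code></pre>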
1
2016-09-20T07:54:41Z
[ "python", "sqlite3" ]
Python Sqlite3 Database table isn't being updated
39,588,293
<p>I'm creating a change-password page for a website, which requests the new password and the current password. The old password is hashed and salted using the scrypt library then compared to the password stored in the sqlite3 database, and if these are a match, the new password is hashed and the database is updated. However I am having difficulty executing the update command, as it throws a sqlite3.OperationalError: unrecognised token: "\" error. The execute statement currently has the following code:</p> <pre><code>c.execute("UPDATE users SET password = \'{0}\' WHERE memberID = \'{1}\'".format(newPas, memID)) </code></pre> <p>Initially we believed this error to have been caused by the use of ' in the string formatting due to the presence of ' within the new password itself, so this was run again as:</p> <pre><code>c.execute("UPDATE users SET password = \"{0}\" WHERE memberID = \"{1}\"".format(newPas, memID)) </code></pre> <p>This successfully runs, but doesn't actually change anything in the database. We also attempted to create a query string and then execute the string.</p> <pre><code>query = "UPDATE users SET password = {0} WHERE memberID = {1}".format(newPas, memID) c.execute(query) </code></pre> <p>This caused a sqlite3.OperationalError: near "'\xa1\x91\x9f\x88\xfb\x81\x12\xd4\xc2\xf9\xce\x91y\xf0/\xe1*#\x8aj\xc7\x1d\xd3\x91\x14\xcb\xa4\xabaP[\x02\x1d\x1b\xabr\xc7\xe4\xee\x19\x80c\x8e|\xc0S\xaaX\xc6\x04\xab\x08\x9b\x8e\xd7zB\xc6\x84[\xfb\xbc\x8d\xfc'": syntax error. I believe that this is caused by the presence of ' and " characters within the password, but I am unsure how to get around this issue as these are added by the hashing process and thus removing them would change the password. The password I would like to add is:</p> <pre><code>b'\xa1\x91\x9f\x88\xfb\x81\x12\xd4\xc2\xf9\xce\x91y\xf0/\xe1*#\x8aj\xc7\x1d\xd3\x91\x14\xcb\xa4\xabaP[\x02\x1d\x1b\xabr\xc7\xe4\xee\x19\x80c\x8e|\xc0S\xaaX\xc6\x04\xab\x08\x9b\x8e\xd7zB\xc6\x84[\xfb\xbc\x8d\xfc' </code></pre> <p>I was wondering if anyone could share some insights into why it isn't liking the "\" character or why it isn't updating the database, and point me in the right direction to making it work. If you need more information or code snippets or just want to yell at me, please don't hesitate to! Thank you in advance :)</p>
0
2016-09-20T07:36:58Z
39,589,436
<p>A couple of things with your code:</p> <ol> <li>You should not use <code>format</code> to build your queries like this. This leaves you liable to SQL injection and, whilst you might sanitise your inputs in this case, it's a bad habit that will bite you.</li> <li>All changes need to be committed to the database to actually take effect. This is why your second query did not throw an error but equally did not make any changes to the database. </li> </ol> <p>The correct formatting of this query would be:</p> <pre><code>conn = sqlite3.connect('my_db.db') c = conn.cursor() query = "UPDATE users SET password = ? WHERE memberID = ?" c.execute(query, (newPas, memID)) conn.commit() # To finalise the alteration </code></pre> <p>As a side note, the cursor expects a tuple in this case, so a common stumbling block comes when passing single values:</p> <pre><code>query = "UPDATE users SET password = ? WHERE memberID = 'abc'" c.execute(query, (newPas)) # Throws "incorrect number of bindings" error # Use this instead i.e. pass single value as a tuple c.execute(query, (newPas,)) </code></pre> <p>You could use <code>format</code> to create variable field names in a query, since placeholders are not allowed in this case:</p> <pre><code>fields = ['a', 'b', 'c'] query = "UPDATE users SET {} = ?".format(random.choice(fields)) </code></pre> <p>in addition to using it to help you build big queries where it would be tedious to manually type all the placeholders, and difficult to ensure that you had the correct number if your code changed:</p> <pre><code>my_list = ['a', 'b',...., n] placeholders = ', '.join(['?' for item in my_list]) query = "INSERT .... VALUES = ({})".format(placeholders) </code></pre>
0
2016-09-20T08:42:09Z
[ "python", "sqlite3" ]
C++ to communicate to Python function
39,588,385
<p>I am new to C++. </p>

<p>I created a DLL which contains a class and functions, and the return type of each function is PyObject (Python object). Now I want to write a C++ application which loads the DLL dynamically using the LoadLibrary function.</p>

<p>I was able to execute it by adding the project to the same solution and adding a reference to the DLL.</p>

<p>I am able to load the DLL, but when I call a function it returns a PyObject. How do I store a return value of type PyObject in C++?</p>
2
2016-09-20T07:42:36Z
39,588,661
<p>You should take a look at Python's documentation on the <a href="https://docs.python.org/3.5/c-api/concrete.html" rel="nofollow">Concrete Objects Layer</a>. Basically you have to convert a <code>PyObject</code> into a C++ type using a function of the form <code>Py*T*_As*T*(PyObject* obj)</code>, where <code>T</code> is the concrete type you want to retrieve.</p>

<p>The API assumes you know which function you should call. But, as stated in the <a href="https://docs.python.org/3.5/c-api/concrete.html" rel="nofollow">doc</a>, you can check the type before use:</p>

<blockquote>
  <p>...if you receive an object from a Python program and you are not sure that it has the right type, you must perform a type check first; for example, to check that an object is a dictionary, use <a href="https://docs.python.org/3.5/c-api/dict.html#c.PyDict_Check" rel="nofollow"><code>PyDict_Check()</code></a>.</p>
</blockquote>

<p>Here is an example to convert a <code>PyObject</code> into <code>long</code>:</p>

<pre><code>PyObject* some_py_object = /* ... */;
long as_long( PyLong_AsLong(some_py_object) );
Py_DECREF(some_py_object);  // assuming we own this reference
</code></pre>

<p>Here is another, more complicated example converting a Python <a href="https://docs.python.org/3.5/c-api/list.html" rel="nofollow"><code>list</code></a> into a <code>std::vector</code>:</p>

<pre><code>PyObject* some_py_list = /* ... */;

// assuming the list contains long
std::vector&lt;long&gt; as_vector(PyList_Size(some_py_list));
for(size_t i = 0; i &lt; as_vector.size(); ++i)
{
    // PyList_GetItem returns a borrowed reference: do not Py_DECREF it
    PyObject* item = PyList_GetItem(some_py_list, i);
    as_vector[i] = PyLong_AsLong(item);
}
Py_DECREF(some_py_list);
</code></pre>

<p>A last, more complicated example, parsing a Python <a href="https://docs.python.org/3.5/c-api/dict.html" rel="nofollow"><code>dict</code></a> into a <code>std::map</code>:</p>

<pre><code>PyObject* some_py_dict = /* ... */;

// assuming the dict uses long as keys, and contains string as values
std::map&lt;long, std::string&gt; as_map;

// first get the keys (PyDict_Keys returns a new reference, so it must be released)
PyObject* keys = PyDict_Keys(some_py_dict);
size_t key_count = PyList_Size(keys);

// loop on the keys and get the values
for(size_t i = 0; i &lt; key_count; ++i)
{
    // both of these are borrowed references: no Py_DECREF needed
    PyObject* key = PyList_GetItem(keys, i);
    PyObject* item = PyDict_GetItem(some_py_dict, key);

    // add to the map (PyUnicode_AsUTF8 is the Python 3 string accessor)
    as_map.emplace(PyLong_AsLong(key), PyUnicode_AsUTF8(item));
}
Py_DECREF(keys);
Py_DECREF(some_py_dict);
</code></pre>
1
2016-09-20T07:59:12Z
[ "python", "c++", "dll" ]
Mocking ConfigObj instances
39,588,617
<p>Using <a href="https://configobj.readthedocs.io" rel="nofollow">ConfigObj</a>, I want to test some <a href="https://configobj.readthedocs.io/en/latest/configobj.html#sections" rel="nofollow">section</a> creation code:</p> <pre><code>def create_section(config, section): config.reload() if section not in config: config[session] = {} logging.info("Created new section %s.", section) else: logging.debug("Section %s already exists.", section) </code></pre> <p>I would like to write a few unit tests but I am hitting a problem. For example, </p> <pre><code>def test_create_section_created(): config = Mock(spec=ConfigObj) # ← This is not right… create_section(config, 'ook') assert 'ook' in config config.reload.assert_called_once_with() </code></pre> <p>Clearly, the test method will fail because of a <code>TypeError</code> as the argument of type 'Mock' is not iterable.</p> <p>How can I define the <code>config</code> object as a mock?</p>
0
2016-09-20T07:57:04Z
39,588,786
<p>And this is why you should never, <em>ever</em>, post before you are wide awake:</p>

<pre><code>def test_create_section_created():
    logging.info = Mock()  # the code under test calls logging.info directly
    config = MagicMock(spec=ConfigObj)  # ← This IS right…
    config.__contains__.return_value = False  # Negates the next assert.
    create_section(config, 'ook')
    # assert 'ook' in config ← This is now a pointless assert!
    config.reload.assert_called_once_with()
    logging.info.assert_called_once_with("Created new section %s.", 'ook')
</code></pre>

<p>I shall leave that answer/question here for posterity in case someone else has a brain failure…</p>
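<p>For completeness: the module-level <code>logging</code> calls can also be checked without rebinding attributes, by patching them where they are used. A sketch, where <code>mymodule</code> is a hypothetical stand-in for whatever module defines <code>create_section</code>:</p>

<pre><code>from mock import patch

def test_create_section_created():
    config = MagicMock(spec=ConfigObj)
    config.__contains__.return_value = False
    with patch('mymodule.logging') as mock_logging:  # 'mymodule' is hypothetical
        create_section(config, 'ook')
    config.reload.assert_called_once_with()
    mock_logging.info.assert_called_once_with("Created new section %s.", 'ook')
</code></pre>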
0
2016-09-20T08:05:32Z
[ "python", "unit-testing", "mocking", "configobj" ]
Python search exact word from list in string?
39,588,726
<p>I need to find exact words from a list in a string.</p>

<p>I tried the code below. I am getting an exact match for single words from the list, but how do I match two-word entries from the list?</p>

<pre><code>categories_to_retain = ['SOLID', 'GEOMETRIC', 'FLORAL', 'BOTANICAL', 'STRIPES', 'ABSTRACT', 'ANIMAL', 'GRAPHIC PRINT', 'ORIENTAL', 'DAMASK', 'TEXT', 'CHEVRON', 'PLAID', 'PAISLEY', 'SPORTS']

x = " Beautiful Art By Design Studio **graphic print** Creates A **TEXT** Design For This Art Driven Duvet. Printed In Remarkable Detail On A Woven Duvet, This Is An Instant Focal Point Of Any Bedroom. The Fabric Is Woven Of Easy Care Polyester And Backed With A Soft Poly/Cotton Blend Fabric. The Texture In The Fabric Gives Dimension And A Unique Look And Feel To The Duvet."
x = x.upper()
print x

#x = "GRAPHIC"
#x = "GRAPHIC PRINTS"

matches = [cat for cat in categories_to_retain if cat in x.split()]
matches

Output: ['TEXT']
</code></pre>

<p>Here you can see there is an entry in my list called 'GRAPHIC PRINT'. I want to find this word in my string.</p>

<p>Also, I need to find a word even if it's present in plural or past-tense form. For example, STRIPED, STRIPE, GRAPHIC PRINTS etc.</p>

<p>Thanks , Niranjan</p>
-1
2016-09-20T08:02:19Z
39,588,878
<p>Here you are splitting the string using the default split(), which means it is split at each space: x.split() will contain the strings "GRAPHIC" and "PRINT", but not "GRAPHIC PRINT". You may want to use "if cat in x", which I believe would return what you need in this case.</p>

<p>This should work:</p>

<pre><code>matches = [cat for cat in categories_to_retain if cat in x]
</code></pre>

<p>Be aware, though, that plain substring matching can also match inside longer words (for example 'TEXT' inside 'TEXTURE' in your sample), so for true exact-word matching you would still need word boundaries, as in the regex answers.</p>
-1
2016-09-20T08:10:52Z
[ "python", "regex", "list", "search", "find" ]
Python search exact word from list in string?
39,588,726
<p>I need to find exact words from a list in a string.</p>

<p>I tried the code below. I am getting an exact match for single words from the list, but how do I match two-word entries from the list?</p>

<pre><code>categories_to_retain = ['SOLID', 'GEOMETRIC', 'FLORAL', 'BOTANICAL', 'STRIPES', 'ABSTRACT', 'ANIMAL', 'GRAPHIC PRINT', 'ORIENTAL', 'DAMASK', 'TEXT', 'CHEVRON', 'PLAID', 'PAISLEY', 'SPORTS']

x = " Beautiful Art By Design Studio **graphic print** Creates A **TEXT** Design For This Art Driven Duvet. Printed In Remarkable Detail On A Woven Duvet, This Is An Instant Focal Point Of Any Bedroom. The Fabric Is Woven Of Easy Care Polyester And Backed With A Soft Poly/Cotton Blend Fabric. The Texture In The Fabric Gives Dimension And A Unique Look And Feel To The Duvet."
x = x.upper()
print x

#x = "GRAPHIC"
#x = "GRAPHIC PRINTS"

matches = [cat for cat in categories_to_retain if cat in x.split()]
matches

Output: ['TEXT']
</code></pre>

<p>Here you can see there is an entry in my list called 'GRAPHIC PRINT'. I want to find this word in my string.</p>

<p>Also, I need to find a word even if it's present in plural or past-tense form. For example, STRIPED, STRIPE, GRAPHIC PRINTS etc.</p>

<p>Thanks , Niranjan</p>
-1
2016-09-20T08:02:19Z
39,588,999
<p>You can use regular expressions; the word-boundary pattern avoids matching character sequences inside longer words, and the exact input word is reported.</p>

<pre><code>import re
matches = []
categories_to_retain = ['SOLID', 'GEOMETRIC', 'FLORAL', 'BOTANICAL', 'STRIPES', 'ABSTRACT', 'ANIMAL', 'GRAPHIC PRINT', 'ORIENTAL', 'DAMASK', 'TEXT', 'CHEVRON', 'PLAID', 'PAISLEY', 'SPORTS']
x = " Beautiful Art By Design Studio **graphic print** Creates A **TEXT** Design For This Art Driven Duvet. Printed In Remarkable Detail On A Woven Duvet, This Is An Instant Focal Point Of Any Bedroom. The Fabric Is Woven Of Easy Care Polyester And Backed With A Soft Poly/Cotton Blend Fabric. The Texture In The Fabric Gives Dimension And A Unique Look And Feel To The Duvet."
x = x.upper()
print(x)

def searchWholeWord(w):
    return re.compile(r'\b({0})\b'.format(w), flags=re.IGNORECASE).search

for cat in categories_to_retain:
    return_value = searchWholeWord(cat)(x)
    if return_value:
        matches.append(cat)
print(matches)
</code></pre>

<p>Output:</p>

<pre><code>['GRAPHIC PRINT', 'TEXT']
</code></pre>
-1
2016-09-20T08:19:32Z
[ "python", "regex", "list", "search", "find" ]
Python search exact word from list in string?
39,588,726
<p>I need to find exact words from a list in a string.</p>

<p>I tried the code below. I am getting an exact match for single words from the list, but how do I match two-word entries from the list?</p>

<pre><code>categories_to_retain = ['SOLID', 'GEOMETRIC', 'FLORAL', 'BOTANICAL', 'STRIPES', 'ABSTRACT', 'ANIMAL', 'GRAPHIC PRINT', 'ORIENTAL', 'DAMASK', 'TEXT', 'CHEVRON', 'PLAID', 'PAISLEY', 'SPORTS']

x = " Beautiful Art By Design Studio **graphic print** Creates A **TEXT** Design For This Art Driven Duvet. Printed In Remarkable Detail On A Woven Duvet, This Is An Instant Focal Point Of Any Bedroom. The Fabric Is Woven Of Easy Care Polyester And Backed With A Soft Poly/Cotton Blend Fabric. The Texture In The Fabric Gives Dimension And A Unique Look And Feel To The Duvet."
x = x.upper()
print x

#x = "GRAPHIC"
#x = "GRAPHIC PRINTS"

matches = [cat for cat in categories_to_retain if cat in x.split()]
matches

Output: ['TEXT']
</code></pre>

<p>Here you can see there is an entry in my list called 'GRAPHIC PRINT'. I want to find this word in my string.</p>

<p>Also, I need to find a word even if it's present in plural or past-tense form. For example, STRIPED, STRIPE, GRAPHIC PRINTS etc.</p>

<p>Thanks , Niranjan</p>
-1
2016-09-20T08:02:19Z
39,589,263
<p>Use a regex with boundaries to get exact matches, even if you only had single words your logic would not work if you are trying to ignore any punctuation:</p> <pre><code>import re patts = re.compile("|".join(r"\b{}\b".format(s) for s in categories_to_retain), re.I) x = " Beautiful Art By Design Studio **graphic print** Creates A **TEXT** Design For This Art Driven Duvet. Printed In Remarkable Detail On A Woven Duvet, This Is An Instant Focal Point Of Any Bedroom. The Fabric Is Woven Of Easy Care Polyester And Backed With A Soft Poly/Cotton Blend Fabric. The Texture In The Fabric Gives Dimension And A Unique Look And Feel To The Duvet." print(patts.findall(x)) </code></pre> <p>Which would give you:</p> <pre><code>['graphic print', 'TEXT'] </code></pre>
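<p>None of the answers yet cover the plural/past-tense requirement (STRIPE/STRIPES/STRIPED, GRAPHIC PRINTS). A rough sketch is to allow optional suffixes in the pattern; this only handles simple <code>-S</code>/<code>-D</code>/<code>-ED</code> endings, and for real inflection handling you would want a stemmer (e.g. from NLTK):</p>

<pre><code>stems = (s.rstrip('S') for s in categories_to_retain)
patts = re.compile("|".join(r"\b{}(?:S|D|ED)?\b".format(s) for s in stems), re.I)
print(patts.findall(x))
</code></pre>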
0
2016-09-20T08:33:20Z
[ "python", "regex", "list", "search", "find" ]
Numpy delete multiple rows matching criteria
39,588,733
<p>I have a numpy array of the following structure:</p>

<pre><code>sb = np.genfromtxt(open('HomePage/TodayList.txt', 'rb'), delimiter=',', skiprows=0, dtype=[('DataBase', np.str_, 16), ('Mode', np.str_, 16), ('SMB', np.str_, 16),('Desc', np.str_, 128), ('Res', np.str_, 16), ('RightCnt', np.float64), ('PercentCnt', np.float64), ('ModelType', np.float64)])
</code></pre>

<p>The 6th column, accessible by the name <code>'PercentCnt'</code>, contains numbers from 0 to 50; the 7th column, <code>'ModelType'</code>, contains numbers from 0 to 5. I need to remove or delete the array rows which match the criteria <code>'PercentCnt'&lt;50</code> and <code>'ModelType'&lt;2</code>.</p>
1
2016-09-20T08:02:33Z
39,588,978
<p>You can find all rows matching your criteria by using a column-wise comparison for <code>PercentCnt</code> and <code>ModelType</code>, connected with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="nofollow"><code>np.logical_and</code></a>. Doing that, you actually copy all the other rows rather than deleting the ones you wanted to get rid of, but the effect is the same.</p>

<pre><code>sb = sb[np.logical_and(sb["PercentCnt"]&gt;=50, sb["ModelType"]&gt;=2)]
</code></pre>
1
2016-09-20T08:18:29Z
[ "python", "numpy" ]
Numpy delete multiple rows matching criteria
39,588,733
<p>I have a numpy array of the following structure:</p>

<pre><code>sb = np.genfromtxt(open('HomePage/TodayList.txt', 'rb'), delimiter=',', skiprows=0, dtype=[('DataBase', np.str_, 16), ('Mode', np.str_, 16), ('SMB', np.str_, 16),('Desc', np.str_, 128), ('Res', np.str_, 16), ('RightCnt', np.float64), ('PercentCnt', np.float64), ('ModelType', np.float64)])
</code></pre>

<p>The 6th column, accessible by the name <code>'PercentCnt'</code>, contains numbers from 0 to 50; the 7th column, <code>'ModelType'</code>, contains numbers from 0 to 5. I need to remove or delete the array rows which match the criteria <code>'PercentCnt'&lt;50</code> and <code>'ModelType'&lt;2</code>.</p>
1
2016-09-20T08:02:33Z
39,588,995
<p>The condition</p> <pre><code>sb['PercentCnt'] &gt;= 50 </code></pre> <p>is the condition for <em>keeping</em> things on this column, and the condition</p> <pre><code>sb['ModelType'] &gt;= 2 </code></pre> <p>is the same for the other column.</p> <p>You can combine these with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="nofollow"><code>np.logical_and</code></a>:</p> <pre><code>keep = np.logical_and(sb['PercentCnt'] &gt;= 50, sb['ModelType'] &gt;= 2) </code></pre> <p>Finally, just keep the rows you wish to keep:</p> <pre><code>sb[keep] </code></pre>
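<p>Two side notes, sketched below: the same combination can be written more tersely with the <code>&amp;</code> operator, and if the intent was to delete only the rows where <em>both</em> <code>PercentCnt &lt; 50</code> <em>and</em> <code>ModelType &lt; 2</code> hold, then by De Morgan the rows to keep satisfy an <em>or</em>, not an <em>and</em>:</p>

<pre><code>keep = (sb['PercentCnt'] &gt;= 50) &amp; (sb['ModelType'] &gt;= 2)  # drops rows failing either condition
keep = (sb['PercentCnt'] &gt;= 50) | (sb['ModelType'] &gt;= 2)  # drops rows failing both conditions
sb = sb[keep]
</code></pre>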
1
2016-09-20T08:19:24Z
[ "python", "numpy" ]
Display value in a button by linking two modals
39,588,781
<p>I have two modals each of it having different content. My first modal has a button which opens the second one. The second modal has Key Id's which should be displayed in the button of the first modal upon selection. I'm able to display the value in my second modal by just using label(Used this just to check if my selected is working). I'm not able to display the selected value in the button of my first modal. Any idea how it can be done?</p> <p>Here's the code which I used,</p> <p>In the template of the the first Modal,</p> <pre><code>&lt;button class="btn btn-default btn-lg" ng-click="Modal()" ng-model="cp.se"&gt;&lt;/button&gt; </code></pre> <p>Function to Open Modal and link to Controller,</p> <pre><code>$scope.Modal = function() { // function to open modal and link to Modal Controller var modalInstance = $modal.open({ backdrop: 'static', templateUrl: '{% url 'critical_process' %}', controller: 'Controller', }); modalInstance.result.then(function (msg) { $log.info('Modal success'); //console.log(msg); }); }; // End:function to open modal and link to Modal Controller </code></pre> <p>In the controller,</p> <pre><code>{{ngapp}}.controller( "Controller", function($scope, $http, $modalInstance){ $scope.selected=[]; $scope.items=[ { value:'Impact to Safety, regulatory compliance, or /n environment', key:10, color:'red'}, { value:'Reliability Impact',key:9, color:'brown'}]; }); </code></pre> <p>In the template of the second modal, </p> <pre><code>&lt;div class="well form-group" &gt; &lt;table class="table"&gt; &lt;tr ng-repeat="item in items" ng-click="selected.item= item"&gt; &lt;td ng-style="{ 'background-color': item.color }"&gt;{{item.key}}&lt;/td&gt; &lt;td&gt; {{item.value}} &lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;label&gt;{{selected.item}}&lt;/label&gt; {% endverbatim %} &lt;/div&gt; &lt;/div&gt; </code></pre>
1
2016-09-20T08:05:18Z
39,589,391
<p>In the template of second modal</p> <pre><code>&lt;div class="well form-group" &gt; &lt;table class="table"&gt; &lt;!-- bind event handler on row click and pass the item to handler function --&gt; &lt;tr ng-repeat="item in items" ng-click="ok(item)"&gt; &lt;td ng-style="{ 'background-color': item.color }"&gt;{{item.key}}&lt;/td&gt; &lt;td&gt; {{item.value}} &lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;label&gt;{{selected}}&lt;/label&gt; {% endverbatim %} &lt;/div&gt; </code></pre> <p>In the controller for second modal</p> <pre><code>{{ngapp}}.controller("Controller", function($scope, $http, $modalInstance){ $scope.selected=[]; $scope.items = [ { value:'Impact to Safety, regulatory compliance, or /n environment', key:10, color:'red'}, { value:'Reliability Impact',key:9, color:'brown'} ]; $scope.ok = function(item) { &lt;!-- Pass the selected value to first modal --&gt; &lt;!-- Closing the modal resolves the promise --&gt; $scope.selected = item; $modalInstance.close(item); } }); </code></pre> <p>And in your first modal function</p> <pre><code> $scope.Modal = function() { // function to open modal and link to Modal Controller var modalInstance = $modal.open({ backdrop: 'static', templateUrl: '{% url 'critical_process' %}', controller: 'Controller', }); // On promise resolved passed callback function gets executed modalInstance.result.then(function (selectedItem) { // Assign selected item value to your modal cp.se = selectedItem.value; }); }; </code></pre> <p>Html</p> <pre><code>&lt;button class="btn btn-default btn-lg" ng-click="Modal()"&gt;{{cp.se}}&lt;/button&gt; </code></pre>
0
2016-09-20T08:39:51Z
[ "python", "html", "angularjs", "django" ]
Matplotlib and networkx merging Graphs when drawing
39,588,977
<p>I'm using <a href="https://networkx.readthedocs.io/en/stable/reference/drawing.html" rel="nofollow">Networkx</a> to visualise some graphs using the following code:</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt def drawgraph(g, filename): #plt.figure() I had to comment this line because it gives me an 'alloc: invalid block` error nx.draw(g) plt.draw() # I added this hoping it might solve the problem (outlined in the text below the code) plt.savefig(filename) #plt.show() this solves the problem, however it's blocking call and I'm drawing hundreds of graphs </code></pre> <p>Now the problem is that subsequent calls to <code>drawgraph</code>, will cause the drawn graphs to be merged with the previous ones e.g: if I call it twice, the first one is drawn correctly but the second picture contains the the first Graph in addition to the second Graph. Putting a <code>plt.show()</code> at the end of the function solves the Problem, however it's a blocking call and I can't have that. So how do I solve this Problem?</p>
0
2016-09-20T08:18:20Z
39,589,189
<p>So after some looking around I found out the answer, I uncommented the <code>plt.figure()</code> line and added a <code>plt.close()</code> call at the end: </p> <pre><code>import networkx as nx import matplotlib.pyplot as plt def drawgraph(g, filename): plt.figure() nx.draw(g) plt.draw() plt.savefig(filename) plt.close() </code></pre>
0
2016-09-20T08:30:04Z
[ "python", "matplotlib", "networkx" ]
Matplotlib and networkx merging Graphs when drawing
39,588,977
<p>I'm using <a href="https://networkx.readthedocs.io/en/stable/reference/drawing.html" rel="nofollow">Networkx</a> to visualise some graphs using the following code:</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt def drawgraph(g, filename): #plt.figure() I had to comment this line because it gives me an 'alloc: invalid block` error nx.draw(g) plt.draw() # I added this hoping it might solve the problem (outlined in the text below the code) plt.savefig(filename) #plt.show() this solves the problem, however it's blocking call and I'm drawing hundreds of graphs </code></pre> <p>Now the problem is that subsequent calls to <code>drawgraph</code>, will cause the drawn graphs to be merged with the previous ones e.g: if I call it twice, the first one is drawn correctly but the second picture contains the the first Graph in addition to the second Graph. Putting a <code>plt.show()</code> at the end of the function solves the Problem, however it's a blocking call and I can't have that. So how do I solve this Problem?</p>
0
2016-09-20T08:18:20Z
39,599,384
<p>Any time you plot something and then plot something again, with <code>matplotlib.pyplot</code>, both things will appear. You need to close or clear the figure.</p> <p>Adding <code>plt.clf()</code> to clear the figure will solve the problem. I don't believe <code>plt.draw()</code> does anything for you.</p>
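<p>Concretely, a non-blocking version of the original function would be (a sketch):</p>

<pre><code>def drawgraph(g, filename):
    nx.draw(g)
    plt.savefig(filename)
    plt.clf()  # clear the current figure so the next graph starts on a blank canvas
</code></pre>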
0
2016-09-20T16:30:06Z
[ "python", "matplotlib", "networkx" ]
On Bluemix Message Hub, how do I exchange messages between a REST and an MQLight client?
39,589,031
<p>according to the documentation <a href="https://console.ng.bluemix.net/docs/services/MessageHub/index.html#messagehub" rel="nofollow">https://console.ng.bluemix.net/docs/services/MessageHub/index.html#messagehub</a> it should be possible to submit a message to MessageHub via REST and receive it via a MQLight client. However the documentation is lacking an example and is somewhat ... opaque.</p> <p>So, if I create the MQLight topic, and have a python client listening,</p> <pre><code> import json import logging import mqlight import time amqps = 'amqps://xxxxxxxxxxxxx.messagehub.services.us-south.bluemix.net:5671' options = { 'user' : 'xxxxxxxxxxxxxxxx', 'password' : 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' } def on_message(message_type, data, delivery): d = json.loads(data) print str(d) def on_started(err): client.subscribe('test', on_message = on_message) def on_stopped(err): logging.info('stopped') client = mqlight.Client(amqps, security_options = options, client_id = 'client', on_started=on_started) while True: logging.info(str(client.get_state())) time.sleep(5) </code></pre> <p>how would I post a message via curl. I have tried, where the value string is base64 encoded,</p> <pre><code> curl -i \ -X POST \ -H "X-Auth-Token:${APIKEY}" \ -H "Content-Type: application/vnd.kafka.binary.v1+json" \ --data '{"records":[{"value":"S2Fma2E="}]}' \ "https://kafka-rest-prod01.messagehub.services.us-south.bluemix.net:443/topics/MQLight/test" </code></pre> <p>but that returns,</p> <pre><code> {"error_code":404,"message":"HTTP 404 Not Found"} </code></pre>
0
2016-09-20T08:21:09Z
39,596,947
<p>You're correct that the documentation here isn't particularly fleshed out. The only detail on this is in the small section <a href="https://new-console.ng.bluemix.net/docs/services/MessageHub/index.html#messagehub081" rel="nofollow">here</a> which is trying to explain that in order to inter-operate with an MQLight client from some other Kafka or REST client, you'd need to be able to encode/decode the AMQP 1.0 message format (see section 3 of the <a href="http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-complete-v1.0.pdf" rel="nofollow">spec</a>).</p> <p>You'd struggle to achieve this in curl scripts as you need access to an AMQP 1.0 library, and even Python isn't ideal as currently your only real option there is to pull in <a href="https://pypi.python.org/pypi/python-qpid-proton/0.14.0" rel="nofollow">python-qpid-proton</a> which is quite heavyweight as it wraps the proton-c native library and hence requires install-time compilation.</p> <p>For example, in Java you could use a combination of the official Java <a href="https://kafka.apache.org" rel="nofollow">kafka-clients</a> and qpid <a href="https://qpid.apache.org/releases/qpid-proton-0.14.0/proton/java/api/index.html" rel="nofollow">proton-j</a> to provide the AMQP message encoding + decoding. Or if you <em>must</em> use the REST api, then pull in something like <a href="https://github.com/OpenFeign/feign" rel="nofollow">feign</a>.</p>
1
2016-09-20T14:35:21Z
[ "python", "rest", "ibm-bluemix", "message-hub" ]
How to close a blocking socket listening in a thread in while loop?
39,589,070
<p>I have a server and a client which need to talk bidirectionally, but the problem is that while the client is waiting for server data, it cannot be closed. I have called <code>socket.shutdown()</code> in the client's <code>closeEvent</code>, but the application doesn't quit, it just hangs there. What is the correct way of doing it? Thanks!</p>

<p>See the problem demo <a href="https://gfycat.com/GregariousFriendlyJerboa" rel="nofollow">screencast here</a>: when I close the client.py window, the process doesn't terminate. Is it because the <code>recv</code> call in client.py is blocking?</p>

<p>I have tried the suggestion here <a href="http://stackoverflow.com/a/11375908/2052889">how to close a blocking socket while it is waiting to receive data?</a>, but it doesn't work.</p>

<p>server.py</p>

<pre><code>import os
import sys

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4 import uic

# enable ctrl-c to kill the app
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)

import mysocket
import random


class MyWindow(QDialog):

    def __init__(self, parent=None):
        super(MyWindow, self).__init__(parent)
        layout = QVBoxLayout(self)
        button = QPushButton('start')
        button2 = QPushButton('send rand int')
        layout.addWidget(button)
        layout.addWidget(button2)
        self.setLayout(layout)
        self.resize(200, 40)

        button.clicked.connect(self.start_server)
        button2.clicked.connect(self.send_num)
        self.start_server()

    def send_num(self, *args):
        rand_num = random.randint(1, 10)
        self.socket.send(str(rand_num))

    def start_server(self):
        self.socket = mysocket.SocketServer(port=5000)
        self.socket.start()
        print 'socket server started'


def main():
    app = QApplication(sys.argv)
    win = MyWindow()
    win.show()
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
</code></pre>

<p>mysocket.py</p>

<pre><code># enable ctrl-c to kill the app
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)

import socket
import threading

# enable ctrl-c to kill the app
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)


class SocketServer(object):

    def __init__(self, port=0):
        self.host = 'localhost'
        self.port = port
        self.bufsize = 4096
        self.backlog = 5
        self.separator = '&lt;&gt;'
        self.clients = []

    def listen(self):
        while True:
            client, address = self.socket.accept()
            # client.settimeout(60)
            self.clients.append(client)
            threading.Thread(
                target=self.server, args=(client, address)).start()

    def start(self):
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind((self.host, self.port))
        self.socket.listen(self.backlog)
        threading.Thread(target=self.listen).start()

    def server(self, client, address):
        data = client.recv(self.bufsize)
        while True:
            if self.separator in data:
                data_split = data.split(self.separator)
                cmds = data_split[:-1]
                # execute cmds in threads
                self.process_cmds(cmds)
                data = data_split[-1]

            data += client.recv(self.bufsize)

    def process_cmds(self, cmds):
        for cmd in cmds:
            print 'executing: %s' % cmd

    def send(self, data):
        for client in self.clients:
            try:
                client.send(data)
            except:
                # self.clients.pop(client)
                pass
</code></pre>

<p>client.py</p>

<pre><code>import os
import sys

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4 import uic

import socket
import threading

# enable ctrl-c to kill the app
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)


class MyWindow(QDialog):

    def __init__(self, parent=None):
        super(MyWindow, self).__init__(parent)
        layout = QVBoxLayout(self)
        button = QPushButton('connect')

        for i in range(5):
            cmd_button = QPushButton('cmd - %s' % i)
            layout.addWidget(cmd_button)
            cmd_button.clicked.connect(lambda _, i=i: self.send_cmd(i))

        layout.addWidget(button)
        self.setLayout(layout)
        self.resize(200, 40)

        button.clicked.connect(self.connect_server)
        self.connect_server()

    def listen(self):
        while True:
            data = self.socket.recv(1024)
            if data:
                print 'received:', data
        print 'executed while'

    def connect_server(self):
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.connect(('localhost', 5000))
        print 'socket connected'
        threading.Thread(target=self.listen).start()

    def send_cmd(self, i):
        cmd = 'cmd - %s&lt;&gt;' % i
        print 'sending : %s' % cmd
        self.socket.send(cmd)

    def closeEvent(self, e):
        # s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # s.connect(('localhost', 5000))
        # self.socket.close()
        self.socket.shutdown(socket.SHUT_WR)
        super(MyWindow, self).closeEvent(e)


def main():
    app = QApplication(sys.argv)
    win = MyWindow()
    win.show()
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
</code></pre>
1
2016-09-20T08:22:47Z
39,589,779
<p>You should use the <code>select</code> function before reading data from the socket.</p>

<p>At the top of the program:</p>

<pre><code>import select
</code></pre>

<p>And the modified <code>listen</code> function:</p>

<pre><code>def listen(self):
    while self.isVisible():
        readable, _, _ = select.select([self.socket], [], [], 5)
        if readable:
            data = self.socket.recv(1024)
            print 'received:', data
        else:
            print 'client sent nothing in 5 seconds, or socket has been closed'
    print 'executed while'
</code></pre>

<p>See also: <a href="http://stackoverflow.com/a/38520949/1212012">http://stackoverflow.com/a/38520949/1212012</a> (the same problem, but in a non-GUI program)</p>
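<p>If you would rather unblock the thread immediately instead of waiting out the remainder of the 5-second timeout, shutting the socket down for both directions in <code>closeEvent</code> should make the pending <code>select</code>/<code>recv</code> return right away on most platforms. A sketch building on the client code above:</p>

<pre><code>def closeEvent(self, e):
    self.socket.shutdown(socket.SHUT_RDWR)  # wakes a blocked recv/select with EOF
    self.socket.close()
    super(MyWindow, self).closeEvent(e)
</code></pre>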
0
2016-09-20T08:58:39Z
[ "python", "multithreading", "sockets", "pyqt", "blocking" ]
DataFrame value startswith
39,589,126
<p>I have the following dataframe in pandas:</p> <pre><code> Datum Zeit Event 0 14.11.2016 13:00 Veröffentlichung des 9-Monats-Berichtes 1 14.03.2017 13:00 Telefonkonferenz für Analysten 2 14.03.2017 13:00 Telefonkonferenz für Analysten 3 27.04.2017 14:00 Ordentliche Hauptversammlung 4 03.05.2017 14:00 Dividendenzahlung 5 15.05.2017 14:00 Bericht zum 1. Quartal 6 14.08.2017 14:00 Telefonkonferenz für Investoren 7 14.08.2017 14:00 Telefonkonferenz für Analysten 8 14.08.2017 14:00 Veröffentlichung des Halbjahresberichtes </code></pre> <p>I am looking for the dates of quarterly reports here ("Bericht" in good old German).<br> I can select the row via</p> <pre><code>df.loc[df["Event"].str.startswith("Bericht"), "Datum"] </code></pre> <p>which returns a <code>Series</code> object like</p> <pre><code>5 15.05.2017 Name: Datum, dtype: object </code></pre> <p>However, I only want to have the date - am I overcomplicating things here?</p>
1
2016-09-20T08:25:59Z
39,589,207
<p>By default a <code>Series</code> is returned when accessing a specific column and row in a <code>DataFrame</code>. If you want a scalar value, you can access the underlying array using <code>.values</code>, which returns an <code>np</code> array, and then index into it:</p>

<pre><code>In [101]: df.loc[df["Event"].str.startswith("Bericht"), "Datum"].values[0]
Out[101]: '15.05.2017'
</code></pre>

<p>For safety you should check whether your selection yields any results prior to indexing into it, otherwise you get an <code>IndexError</code>:</p>

<pre><code>if len(df.loc[df["Event"].str.startswith("Bericht"), "Datum"]) &gt; 0:
    return df.loc[df["Event"].str.startswith("Bericht"), "Datum"].values[0]
</code></pre>
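<p>Equivalently, you can index positionally on the <code>Series</code> itself and use <code>empty</code> for the guard (a sketch):</p>

<pre><code>matches = df.loc[df["Event"].str.startswith("Bericht"), "Datum"]
if not matches.empty:
    print(matches.iloc[0])
</code></pre>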
1
2016-09-20T08:30:35Z
[ "python", "pandas", "dataframe" ]
DataFrame value startswith
39,589,126
<p>I have the following dataframe in pandas:</p> <pre><code> Datum Zeit Event 0 14.11.2016 13:00 Veröffentlichung des 9-Monats-Berichtes 1 14.03.2017 13:00 Telefonkonferenz für Analysten 2 14.03.2017 13:00 Telefonkonferenz für Analysten 3 27.04.2017 14:00 Ordentliche Hauptversammlung 4 03.05.2017 14:00 Dividendenzahlung 5 15.05.2017 14:00 Bericht zum 1. Quartal 6 14.08.2017 14:00 Telefonkonferenz für Investoren 7 14.08.2017 14:00 Telefonkonferenz für Analysten 8 14.08.2017 14:00 Veröffentlichung des Halbjahresberichtes </code></pre> <p>I am looking for the dates of quarterly reports here ("Bericht" in good old German).<br> I can select the row via</p> <pre><code>df.loc[df["Event"].str.startswith("Bericht"), "Datum"] </code></pre> <p>which returns a <code>Series</code> object like</p> <pre><code>5 15.05.2017 Name: Datum, dtype: object </code></pre> <p>However, I only want to have the date - am I overcomplicating things here?</p>
1
2016-09-20T08:25:59Z
39,610,874
<p>You are doing well. If you only want the dates, you can do:</p>

<pre><code>df.loc[df["Event"].str.startswith("Bericht"), "Datum"].values
</code></pre>

<p>This returns an array of the matching dates.</p>
0
2016-09-21T08:08:38Z
[ "python", "pandas", "dataframe" ]
Django detecting not needed changes
39,589,239
<p>django 1.10, py 3.5</p>

<p>Here's my Enum-like class:</p>

<pre><code>@deconstructible
class EnumType(object):
    @classmethod
    def choices(cls):
        attrs = [i for i in cls.__dict__.keys() if i[:1] != '_' and i.isupper()]
        return tuple((cls.__dict__[attr], cls.__dict__[attr]) for attr in attrs)

    def __eq__(self, other):
        return self.choices() == other.choices()
</code></pre>

<p>Here's an example of the class:</p>

<pre><code>class TransmissionType(EnumType):
    TRANSMISSION_PROGRAM = 'TRANSMISSION_PROGRAM'
    INFO_PROGRAM = 'INFO_PROGRAM'
    SPORT_PROGRAM = 'SPORT_PROGRAM'
</code></pre>

<p>Here's how I use it on a model:</p>

<pre><code>type = models.TextField(choices=TransmissionType.choices(), db_index=True, default=None)
</code></pre>

<p>I think I did everything right according to the current <a href="https://docs.djangoproject.com/en/1.10/topics/migrations/#adding-a-deconstruct-method" rel="nofollow">django deconstruct docs</a>, but apparently the makemigrations script still creates a migration like this every time:</p>

<pre><code>operations = [
    migrations.AlterField(
        model_name='transmission',
        name='type',
        field=models.TextField(choices=[('TRANSMISSION_PROGRAM', 'TRANSMISSION_PROGRAM'), ('INFO_PROGRAM', 'INFO_PROGRAM'), ('SPORT_PROGRAM', 'SPORT_PROGRAM')], db_index=True, default=None),
    ),
]
</code></pre>

<p>Edit1: expected behaviour - when the class members do not change, the generated migration should not include AlterField</p>
1
2016-09-20T08:32:03Z
39,590,640
<p>Dictionaries have an arbitrary order, so your tuple has an arbitrary order as well. Especially on Python 3.3+ the order is likely to change because it uses a <a href="http://stackoverflow.com/a/27522708/2615075">random hash seed</a>. As such, the order of the tuple is different as well, and tuples with the same items but a different order don't compare equal. Django detects this change and creates a new migration.</p> <p>To fix this, simply sort the keys before constructing the tuple:</p> <pre><code>@deconstructible class EnumType(object): @classmethod def choices(cls): attrs = [i for i in cls.__dict__.keys() if i[:1] != '_' and i.isupper()] return tuple((cls.__dict__[attr], cls.__dict__[attr]) for attr in sorted(attrs)) def __eq__(self, other): return self.choices() == other.choices() </code></pre>
2
2016-09-20T09:36:27Z
[ "python", "django", "enums", "migration" ]
wxPython + weakref proxy = closing wx.Frame does not yield None
39,589,257
<p>I'm currently making a Python application with wxWidgets that has two windows. The first one is the main "controller" window, and the second one is intended to be a data display window.</p>

<p>I want to have some mechanism in place for the first window to know whether the second window was already spawned, and if so, whether it was closed by the user. I thought about using Python's weakref.proxy(), since based on my little understanding of the language, it seemed that if an object is closed/destroyed/deallocated/GC'ed, any attempts to call my proxy would return a <code>None</code> value, which could be conveniently checked with Python's <code>is None</code> / <code>is not None</code> operators.</p>

<p>As long as the window is spawned once, the proxy works as intended, and returns <code>None</code> if the window is not yet spawned, or a reference to the object otherwise. But as soon as I close the secondary window, the proxy object won't revert to <code>None</code> as expected, and my application will crash with a <code>ReferenceError: weakly-referenced object no longer exists</code>.</p>

<p>I remember trying to solve this previously and the most functional solution I found was checking the object's class name against an internal wx class, like:</p>

<pre><code>if windowObject.__class__.__name__ != "_wxPyDeadObject":
    #do stuff
</code></pre>

<p>This, however, seems like a very hackish solution to me, and I'd like to know if there's any better way other than the above. Below is some basic code which reproduces this issue of mine.</p>

<pre><code>import wx
import weakref

class SillyWindow(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, parent=None, title="Spawned Window")
        self.Show()

class ExWindow(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, parent=None, title="Main Window")
        self.panel = wx.Panel(self)
        self.button = wx.Button(self.panel, label="Spawn window!")
        self.Bind(wx.EVT_BUTTON, self.spawn, self.button)
        self.txt = wx.TextCtrl(self.panel, pos=(0,100))
        self.wind = None
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.update, self.timer)
        self.timer.Start(50)
        self.Show()

    def spawn(self,event):
        if self.wind is None: # Preventing multiple spawning windows
            self.wind = weakref.proxy(SillyWindow())

    def update(self,event):
        if self.wind is not None:
            self.txt.SetValue(str(self.wind))
        else:
            self.txt.SetValue("None")

app = wx.App(False)
frame = ExWindow()
app.MainLoop()
</code></pre>
0
2016-09-20T08:33:04Z
39,606,555
<p>As you've seen, when a wx widget object has been destroyed the Python proxy object's class is swapped with one that changes it to raise an exception when you try to use it. It also has a <code>__nonzero__</code> method so you can do things like this instead of digging into the guts of the object to find it's class name:</p> <pre><code>if not windowObject: # it has been destroyed already return </code></pre> <p>Another thing to keep in mind is that top-level windows are not destroyed at the time they are closed or their <code>Destroy</code> method has been called. Instead they are added to a pending delete list which is processed as soon as there are no more pending events. You can test for that case (closed but not yet destroyed) by calling the frame's <code>IsBeingDeleted</code> method. Also, the C++ parts of the UI objects hold their own reference to the Python object too, although that will be decref'd when the C++ object is destroyed. So some or all of these things could be interfering with your weafref approach. Personally I would just use an <code>if</code> statement like the above. Short. Sweet. Simple.</p> <p>P.S. Some of the details I've mentioned here are specific to wxPython Classic, and are not handled the same in Phoenix. However using an <code>if</code> statement like the above still works.</p>
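<p>Putting those pieces together for the original example, the <code>update</code> check could be written as follows (a sketch, assuming <code>self.wind</code> holds a plain reference rather than a weakref proxy, as suggested above):</p>

<pre><code>def update(self, event):
    if self.wind and not self.wind.IsBeingDeleted():
        self.txt.SetValue(str(self.wind))
    else:
        self.txt.SetValue("None")
</code></pre>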
2
2016-09-21T02:26:09Z
[ "python", "wxpython", "weak-references" ]
How to identify the occurrence of items in a list against another list
39,589,347
<p>I have a file with a column of text which I have loaded. I would like to check the occurrence of country names in the loaded text. I have loaded the Wikipedia countries CSV file and I am using the following code to count the number of occurrences of country names in the loaded text.</p>

<p>My code is not working.</p>

<p>Here is my code:</p>

<pre><code>text = pd.read_sql(select_string, con)
text['tokenized_text'] = mail_text.apply(lambda col: nltk.word_tokenize(col['SomeText']), axis=1)
country_codes = pd.read_csv('wikipedia-iso-country-codes.csv')
ccs = set(country_codes['English short name lower case'])
count_occurrences = Counter(country for country in text['tokenized_text'] if country in ccs)
</code></pre>
-1
2016-09-20T08:37:24Z
39,589,528
<p>In your original code the line</p> <pre><code>dic[country] = dic[country] + 1
</code></pre> <p>should raise a <code>KeyError</code>, because the key is not yet present in the dictionary when a country is met for the first time. Instead you should check if the key is present, and if not, initialize the value to 1.</p> <p>On the other hand, it will not, because the check</p> <pre><code>if country in country_codes['English short name lower case']:
</code></pre> <p>yields <code>False</code> for all values: a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html" rel="nofollow"><code>Series</code></a> object's <code>__contains__</code> works with <a href="http://stackoverflow.com/questions/24841768/python-pandas-why-does-the-in-operator-work-with-indices-and-not-with-the-d">indices instead of values</a>. You should for example check</p> <pre><code>if country in country_codes['English short name lower case'].values:
</code></pre> <p>if your <a href="http://stackoverflow.com/questions/21319929/how-to-determine-whether-a-pandas-column-contains-a-particular-value">list of values is short</a>.</p> <p>For general counting tasks Python provides <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">collections.Counter</a>, which acts a bit like a <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict(int)</code></a>, but with added benefits. It removes the need for manual checking of keys etc.</p> <p>As you already have <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow"><code>DataFrame</code></a> objects, you could use the tools <a href="http://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow">pandas</a> provides:</p> <pre><code>In [12]: country_codes = pd.read_csv('wikipedia-iso-country-codes.csv')

In [13]: text = pd.DataFrame({'SomeText': """Finland , Finland , Finland
    ...: The country where I want to be
    ...: Pony trekking or camping or just watch T.V.
    ...: Finland , Finland , Finland
    ...: It's the country for me
    ...:
    ...: You're so near to Russia
    ...: so far away from Japan
    ...: Quite a long way from Cairo
    ...: lots of miles from Vietnam
    ...:
    ...: Finland , Finland , Finland
    ...: The country where I want to be
    ...: Eating breakfast or dinner
    ...: or snack lunch in the hall
    ...: Finland , Finland , Finland
    ...: Finland has it all
    ...:
    ...: Read more: Monty Python - Finland Lyrics | MetroLyrics
    ...: """.split()})

In [14]: text[text['SomeText'].isin(
    ...:     country_codes['English short name lower case']
    ...: )]['SomeText'].value_counts().to_dict()
Out[14]: {'Finland': 14, 'Japan': 1}
</code></pre> <p>This finds the rows of <code>text</code> where the <em>SomeText</em> column's value is in the <em>English short name lower case</em> column of <code>country_codes</code>, counts unique values of <em>SomeText</em>, and converts to a dictionary.</p> <p>The same with descriptive intermediate variables:</p> <pre><code>In [49]: where_sometext_isin_country_codes = text['SomeText'].isin(
    ...:     country_codes['English short name lower case'])

In [50]: filtered_text = text[where_sometext_isin_country_codes]

In [51]: value_counts = filtered_text['SomeText'].value_counts()

In [52]: value_counts.to_dict()
Out[52]: {'Finland': 14, 'Japan': 1}
</code></pre> <p>The same with <code>Counter</code>:</p> <pre><code>In [23]: from collections import Counter

In [24]: dic = Counter()
    ...: ccs = set(country_codes['English short name lower case'])
    ...: for country in text['SomeText']:
    ...:     if country in ccs:
    ...:         dic[country] += 1
    ...:

In [25]: dic
Out[25]: Counter({'Finland': 14, 'Japan': 1})
</code></pre> <p>or simply:</p> <pre><code>In [30]: ccs = set(country_codes['English short name lower case'])

In [31]: Counter(country for country in text['SomeText'] if country in ccs)
Out[31]: Counter({'Finland': 14, 'Japan': 1})
</code></pre>
1
2016-09-20T08:45:57Z
[ "python", "python-3.x" ]
is there a loop in my logic-python
39,589,421
<p>I was solving the question from the website <a href="https://www.codechef.com/" rel="nofollow">CodeChef</a>. I found this question:</p> <blockquote> <p>Some programming contest problems are really tricky: not only do they require a different output format from what you might have expected, but also the sample output does not show the difference. For an example, let us look at permutations. A permutation of the integers 1 to n is an ordering of these integers. So the natural way to represent a permutation is to list the integers in this order. With n = 5, a permutation might look like 2, 3, 4, 5, 1. However, there is another possibility of representing a permutation: You create a list of numbers where the i-th number is the position of the integer i in the permutation. Let us call this second possibility an inverse permutation. The inverse permutation for the sequence above is 5, 1, 2, 3, 4. An ambiguous permutation is a permutation which cannot be distinguished from its inverse permutation. The permutation 1, 4, 3, 2 for example is ambiguous, because its inverse permutation is the same. To get rid of such annoying sample test cases, you have to write a program which detects if a given permutation is ambiguous or not.</p> <p>Input Specification</p> <p>The input contains several test cases. The first line of each test case contains an integer n (1 ≤ n ≤ 100000). Then a permutation of the integers 1 to n follows in the next line. There is exactly one space character between consecutive integers. You can assume that every integer between 1 and n appears exactly once in the permutation. The last test case is followed by a zero.</p> <p>Output Specification</p> <p>For each test case output whether the permutation is ambiguous or not. Adhere to the format shown in the sample output.</p> <p>Sample Input</p> <pre><code>4
1 4 3 2
5
2 3 4 5 1
1
1
0
</code></pre> <p>Sample Output</p> <pre><code>ambiguous
not ambiguous
ambiguous
</code></pre> </blockquote> <p>I posted the following Python code, but they said my answer is wrong. Can someone help me find the mistake in my logic?</p> <p><strong>My code goes here:</strong></p> <pre><code>def main():
    T=int(input())
    result=[]
    while(T!=0):
        list=[]
        list1=[]
        y=0
        value=raw_input().split(' ')
        for x in value:
            list.append(int(x))
        for x in list:
            y+=1
            x=list.index(y)+1
            list1.append(x)
        if(list==list1):
            result.append("ambiguous")
        else:
            result.append("non-ambiguous")
        T=int(input())
    for a in result:
        print a
main()
</code></pre>
-4
2016-09-20T08:41:27Z
39,590,296
<p>For this kind of thing, before doubting your code it's better to double-check that the way you handle input and output matches what's expected.</p> <p>I know how silly you feel when you realize you fail the tests despite a correct algorithm, for something like using the string <code>non-ambiguous</code> instead of the expected <code>not ambiguous</code>.</p>
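<p>As a minimal sketch of I/O handling that follows the problem statement (keeping the question's inverse-permutation idea, with the output strings copied exactly from the sample output):</p> <pre><code>while True:
    n = int(raw_input())
    if n == 0:
        break
    perm = [int(x) for x in raw_input().split()]
    # inverse[i-1] is the 1-based position of the integer i in perm
    inverse = [0] * n
    for pos, val in enumerate(perm):
        inverse[val - 1] = pos + 1
    print "ambiguous" if perm == inverse else "not ambiguous"
</code></pre>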
0
2016-09-20T09:22:20Z
[ "python", "algorithm" ]
is there a loop in my logic-python
39,589,421
<p>I was solving the question from the website <a href="https://www.codechef.com/" rel="nofollow">CodeChef</a>. I found this question:</p> <blockquote> <p>Some programming contest problems are really tricky: not only do they require a different output format from what you might have expected, but also the sample output does not show the difference. For an example, let us look at permutations. A permutation of the integers 1 to n is an ordering of these integers. So the natural way to represent a permutation is to list the integers in this order. With n = 5, a permutation might look like 2, 3, 4, 5, 1. However, there is another possibility of representing a permutation: You create a list of numbers where the i-th number is the position of the integer i in the permutation. Let us call this second possibility an inverse permutation. The inverse permutation for the sequence above is 5, 1, 2, 3, 4. An ambiguous permutation is a permutation which cannot be distinguished from its inverse permutation. The permutation 1, 4, 3, 2 for example is ambiguous, because its inverse permutation is the same. To get rid of such annoying sample test cases, you have to write a program which detects if a given permutation is ambiguous or not.</p> <p>Input Specification</p> <p>The input contains several test cases. The first line of each test case contains an integer n (1 ≤ n ≤ 100000). Then a permutation of the integers 1 to n follows in the next line. There is exactly one space character between consecutive integers. You can assume that every integer between 1 and n appears exactly once in the permutation. The last test case is followed by a zero.</p> <p>Output Specification</p> <p>For each test case output whether the permutation is ambiguous or not. Adhere to the format shown in the sample output.</p> <p>Sample Input</p> <pre><code>4
1 4 3 2
5
2 3 4 5 1
1
1
0
</code></pre> <p>Sample Output</p> <pre><code>ambiguous
not ambiguous
ambiguous
</code></pre> </blockquote> <p>I posted the following Python code, but they said my answer is wrong. Can someone help me find the mistake in my logic?</p> <p><strong>My code goes here:</strong></p> <pre><code>def main():
    T=int(input())
    result=[]
    while(T!=0):
        list=[]
        list1=[]
        y=0
        value=raw_input().split(' ')
        for x in value:
            list.append(int(x))
        for x in list:
            y+=1
            x=list.index(y)+1
            list1.append(x)
        if(list==list1):
            result.append("ambiguous")
        else:
            result.append("non-ambiguous")
        T=int(input())
    for a in result:
        print a
main()
</code></pre>
-4
2016-09-20T08:41:27Z
39,592,153
<pre><code>arr = [int(i) for i in raw_input().split()]
# build the inverse permutation: inverse[i-1] is the 1-based position of i in arr
inverse = [0] * len(arr)
for pos, val in enumerate(arr):
    inverse[val - 1] = pos + 1
if arr == inverse:
    print 'ambiguous'
else:
    print 'not ambiguous'
</code></pre> <p><a href="https://code.hackerearth.com/e29c82h" rel="nofollow">https://code.hackerearth.com/e29c82h</a></p>
0
2016-09-20T10:50:09Z
[ "python", "algorithm" ]
how to customize html report file generated using py.test?
39,589,426
<p>I am trying to customize the html report generated with pytest. For example, if I've got a directory structure like:</p> <pre><code>tests
    temp1
        test_temp1.py
    conftest.py
</code></pre> <p>A conftest.py file is also in the tests directory, and it should be common to all the sub-directories in the tests directory. What fixtures and hookwrappers can I use in the conftest.py to change the contents of the html file generated using the following command:</p> <blockquote> <p>py.test tests/temp1/test_temp1.py --html=report.html</p> </blockquote>
0
2016-09-20T08:41:51Z
39,609,952
<p>Looks like you are using a plugin like pytest-html. If that is the case, check that plugin's documentation for which hooks are provided.</p> <p>For pytest-html the following hooks are provided. You can change the Environment section of the report by modifying <code>request.config._environment</code> from a fixture:</p> <pre><code>@pytest.fixture(autouse=True)
def _environment(request):
    request.config._environment.append(('foo', 'bar'))
</code></pre> <p>You can add details to the HTML reports by creating an 'extra' list on the report object. The following example adds the various types of extras using a <code>pytest_runtest_makereport</code> hook, which can be implemented in a plugin or <code>conftest.py</code> file:</p> <pre><code>import pytest


@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        # always add url to report
        extra.append(pytest_html.extras.url('http://www.example.com/'))
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            # only add additional html on failure
            extra.append(pytest_html.extras.html('&lt;div&gt;Additional HTML&lt;/div&gt;'))
        report.extra = extra
</code></pre>
1
2016-09-21T07:23:08Z
[ "python", "html", "py.test" ]
Assign Unique Values according Distinct Columns Values
39,589,558
<p>I know the question name is a little ambiguous.</p> <p>My goal is to assign a global key column based on 2 columns + a unique value in my data frame.</p> <p>For example</p> <pre><code>CountryCode | Accident
AFG           Car
AFG           Bike
AFG           Car
AFG           Plane
USA           Car
USA           Bike
UK            Car
</code></pre> <p>Let Car = 01, Bike = 02, Plane = 03</p> <p>My desired global key format is [Accident][CountryCode][UniqueValue]</p> <p>The unique value is a count of similar [Accident][CountryCode] pairs,</p> <p>so if Accident = Car and CountryCode = AFG and it is the first occurrence, the global key would be 01AFG01.</p> <p>The desired dataframe would look like this:</p> <pre><code>CountryCode | Accident | GlobalKey
AFG           Car        01AFG01
AFG           Bike       02AFG01
AFG           Car        01AFG02
AFG           Plane      01AFG03
USA           Car        01USA01
USA           Bike       01USA02
UK            Car        01UK01
</code></pre> <p>I have tried running a for loop to append the accident number and CountryCode together,</p> <p>for example:</p> <pre><code>globalKey = []
for x in range(0,6):
    string = df.iloc[x, 1]
    string2 = df.iloc[x, 2]
    if string2 == 'Car':
        number = '01'
    elif string2 == 'Bike':
        number = '02'
    elif string2 == 'Plane':
        number = '03'
    # Concat the number of accident and Country Code
    subKey = number + string
    # Append to the list
    globalKey.append(subKey)
</code></pre> <p>This code will provide me with something like <code>01AFG</code>, <code>02AFG</code> based on the value I assign, but I want to assign a unique value by counting the occurrences of when <code>CountryCode</code> and <code>Accident</code> are similar.</p> <p>I am stuck with the code above. I think there should be a better way to do it using the map function in Pandas.</p> <p>Thanks for helping, guys! Much appreciated!</p>
1
2016-09-20T08:47:09Z
39,589,840
<p>I don't have any experience with Pandas so this answer may not be what you are looking for. That being said, if the data you have is really that simple (few countries, few accident types), have you considered storing each country|accident combination in its own counter?</p> <p>So as you traverse your input, just increment the counter for that country|accident combination, and then read through those counters at the end to produce the <code>GlobalKeys</code> (see the sketch below).</p> <p>If you have other data to store besides the Global Key, then store the country|accident combinations as lists, and read through them at the end one at a time to produce the <code>GlobalKeys</code>.</p>
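<p>A minimal sketch of that idea in plain Python (the accident-to-code mapping is taken from the question; the sample rows are illustrative):</p> <pre><code>from collections import defaultdict

codes = {'Car': '01', 'Bike': '02', 'Plane': '03'}
counters = defaultdict(int)
global_keys = []

for country, accident in [('AFG', 'Car'), ('AFG', 'Bike'), ('AFG', 'Car')]:
    counters[(country, accident)] += 1  # running count per combination
    global_keys.append('%s%s%02d' % (codes[accident], country,
                                     counters[(country, accident)]))

print(global_keys)  # ['01AFG01', '02AFG01', '01AFG02']
</code></pre>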
0
2016-09-20T09:00:54Z
[ "python", "pandas", "dataframe", "group-by" ]
Assign Unique Values according Distinct Columns Values
39,589,558
<p>I know the question name is a little ambiguous.</p> <p>My goal is to assign global key column based on 2 columns + unique value in my data frame.</p> <p>For example</p> <pre><code>CountryCode | Accident AFG Car AFG Bike AFG Car AFG Plane USA Car USA Bike UK Car </code></pre> <p>Let Car = 01, Bike = 02, Plane = 03</p> <p>My desire global key format is [Accident][CountryCode][UniqueValue]</p> <p>Unique value is a count of similar [Accident][CountryCode]</p> <p>so if Accident = Car and CountryCode = AFG and it is the first occurrence, the global key would be 01AFG01</p> <p>The desired dataframe would look like this:</p> <pre><code>CountryCode | Accident | GlobalKey AFG Car 01AFG01 AFG Bike 02AFG01 AFG Car 01AFG02 AFG Plane 01AFG03 USA Car 01USA01 USA Bike 01USA02 UK Car 01UK01 </code></pre> <p>I have tried running a for loop to append Accident Number and CountryCode together</p> <p>for example:</p> <pre><code>globalKey = [] for x in range(0,6): string = df.iloc[x, 1] string2 = df.iloc[x, 2] if string2 == 'Car': number = '01' elif string2 == 'Bike': number = '02' elif string2 == 'Plane': number = '03' #Concat the number of accident and Country Code subKey = number + string #Append to the list globalKey.append(subKey) </code></pre> <p>This code will provide me with something like <code>01AFG</code>, <code>02AFG</code> based on the value I assign. but I want to assign a unique value by counting the occurrence of when <code>CountryCode</code> and <code>Accident</code> is similar.</p> <p>I am stuck with the code above. I think there should be a better way to do it using map function in Pandas.</p> <p>Thanks for helping guys! much appreciate!</p>
1
2016-09-20T08:47:09Z
39,589,991
<p>You can try with <code>cumcount</code> to achieve this in a number of steps, like this:</p> <pre><code>In [1]: df = pd.DataFrame({'Country':['AFG','AFG','AFG','AFG','USA','USA','UK'],
                           'Accident':['Car','Bike','Car','Plane','Car','Bike','Car']})

In [2]: df
Out[2]:
  Accident Country
0      Car     AFG
1     Bike     AFG
2      Car     AFG
3    Plane     AFG
4      Car     USA
5     Bike     USA
6      Car      UK

## Create a column to keep incremental values for `Country`
In [3]: df['cumcount'] = df.groupby('Country').cumcount()

In [4]: df
Out[4]:
  Accident Country  cumcount
0      Car     AFG         0
1     Bike     AFG         1
2      Car     AFG         2
3    Plane     AFG         3
4      Car     USA         0
5     Bike     USA         1
6      Car      UK         0

## Create a column to keep incremental values for combination of `Country`,`Accident`
In [5]: df['cumcount_type'] = df.groupby(['Country','Accident']).cumcount()

In [6]: df
Out[6]:
  Accident Country  cumcount  cumcount_type
0      Car     AFG         0              0
1     Bike     AFG         1              0
2      Car     AFG         2              1
3    Plane     AFG         3              0
4      Car     USA         0              0
5     Bike     USA         1              0
6      Car      UK         0              0
</code></pre> <p>And from that point on you can concatenate the values of <code>cumcount</code>, <code>cumcount_type</code> and <code>Country</code> to achieve what you're after.</p> <p>Maybe you want to add <code>1</code> to each of the values you have under the different counts, depending on whether you want to start counting from 0 or 1.</p> <p>I hope this helps.</p>
5
2016-09-20T09:08:30Z
[ "python", "pandas", "dataframe", "group-by" ]
Assign Unique Values according Distinct Columns Values
39,589,558
<p>I know the question name is a little ambiguous.</p> <p>My goal is to assign global key column based on 2 columns + unique value in my data frame.</p> <p>For example</p> <pre><code>CountryCode | Accident AFG Car AFG Bike AFG Car AFG Plane USA Car USA Bike UK Car </code></pre> <p>Let Car = 01, Bike = 02, Plane = 03</p> <p>My desire global key format is [Accident][CountryCode][UniqueValue]</p> <p>Unique value is a count of similar [Accident][CountryCode]</p> <p>so if Accident = Car and CountryCode = AFG and it is the first occurrence, the global key would be 01AFG01</p> <p>The desired dataframe would look like this:</p> <pre><code>CountryCode | Accident | GlobalKey AFG Car 01AFG01 AFG Bike 02AFG01 AFG Car 01AFG02 AFG Plane 01AFG03 USA Car 01USA01 USA Bike 01USA02 UK Car 01UK01 </code></pre> <p>I have tried running a for loop to append Accident Number and CountryCode together</p> <p>for example:</p> <pre><code>globalKey = [] for x in range(0,6): string = df.iloc[x, 1] string2 = df.iloc[x, 2] if string2 == 'Car': number = '01' elif string2 == 'Bike': number = '02' elif string2 == 'Plane': number = '03' #Concat the number of accident and Country Code subKey = number + string #Append to the list globalKey.append(subKey) </code></pre> <p>This code will provide me with something like <code>01AFG</code>, <code>02AFG</code> based on the value I assign. but I want to assign a unique value by counting the occurrence of when <code>CountryCode</code> and <code>Accident</code> is similar.</p> <p>I am stuck with the code above. I think there should be a better way to do it using map function in Pandas.</p> <p>Thanks for helping guys! much appreciate!</p>
1
2016-09-20T08:47:09Z
39,590,029
<p>After you create your <code>subKey</code> we can sort the dataframe and count the occurrences of the pairs. First let's reset the index (to store the original order):</p> <pre><code>df = df.reset_index()
</code></pre> <p>then sort by the <code>subKey</code> and count:</p> <pre><code>df = df.sort_values(by='subKey').reset_index(drop=True)  # reset again so positions are consecutive
df['newnumber'] = 1
for ind in range(1, len(df)):  # start at 1 because the first row is always 1
    if df.loc[ind, 'subKey'] == df.loc[ind - 1, 'subKey']:
        df.loc[ind, 'newnumber'] = df.loc[ind - 1, 'newnumber'] + 1
</code></pre> <p>Finally create the <code>GlobalKey</code> with the help of the <code>zfill</code> function, then restore the original order by <code>index</code>:</p> <pre><code>df['GlobalKey'] = df.apply(lambda x: x['subKey'] + str(x['newnumber']).zfill(2), 1)
df = df.sort_values(by='index').drop('index', 1).reset_index(drop=True)
</code></pre>
1
2016-09-20T09:10:41Z
[ "python", "pandas", "dataframe", "group-by" ]
Assign Unique Values according Distinct Columns Values
39,589,558
<p>I know the question name is a little ambiguous.</p> <p>My goal is to assign a global key column based on 2 columns + a unique value in my data frame.</p> <p>For example</p> <pre><code>CountryCode | Accident
AFG           Car
AFG           Bike
AFG           Car
AFG           Plane
USA           Car
USA           Bike
UK            Car
</code></pre> <p>Let Car = 01, Bike = 02, Plane = 03</p> <p>My desired global key format is [Accident][CountryCode][UniqueValue]</p> <p>The unique value is a count of similar [Accident][CountryCode] pairs,</p> <p>so if Accident = Car and CountryCode = AFG and it is the first occurrence, the global key would be 01AFG01.</p> <p>The desired dataframe would look like this:</p> <pre><code>CountryCode | Accident | GlobalKey
AFG           Car        01AFG01
AFG           Bike       02AFG01
AFG           Car        01AFG02
AFG           Plane      01AFG03
USA           Car        01USA01
USA           Bike       01USA02
UK            Car        01UK01
</code></pre> <p>I have tried running a for loop to append the accident number and CountryCode together,</p> <p>for example:</p> <pre><code>globalKey = []
for x in range(0,6):
    string = df.iloc[x, 1]
    string2 = df.iloc[x, 2]
    if string2 == 'Car':
        number = '01'
    elif string2 == 'Bike':
        number = '02'
    elif string2 == 'Plane':
        number = '03'
    # Concat the number of accident and Country Code
    subKey = number + string
    # Append to the list
    globalKey.append(subKey)
</code></pre> <p>This code will provide me with something like <code>01AFG</code>, <code>02AFG</code> based on the value I assign, but I want to assign a unique value by counting the occurrences of when <code>CountryCode</code> and <code>Accident</code> are similar.</p> <p>I am stuck with the code above. I think there should be a better way to do it using the map function in Pandas.</p> <p>Thanks for helping, guys! Much appreciated!</p>
1
2016-09-20T08:47:09Z
39,591,082
<p>First of all, don't use for loops if you can help it. For example, you can do your Accident to code mapping with:</p> <pre><code>df['AccidentCode'] = df['Accident'].map({'Car': '01', 'Bike': '02', 'Plane': '03'})
</code></pre> <p>To get the unique code, <a href="http://stackoverflow.com/a/39589991/12663">Thanos has shown how to do</a> that using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow">GroupBy.cumcount</a>:</p> <pre><code>df['CA_ID'] = df.groupby(['CountryCode', 'Accident']).cumcount() + 1
</code></pre> <p>And then to put them all together into a unique key:</p> <pre><code>df['NewKey'] = df['AccidentCode'] + df['CountryCode'] + df['CA_ID'].map('{:0&gt;2}'.format)
</code></pre> <p>which gives:</p> <pre><code>  CountryCode Accident GlobalKey AccidentCode  CA_ID   NewKey
0         AFG      Car   01AFG01           01      1  01AFG01
1         AFG     Bike   02AFG01           02      1  02AFG01
2         AFG      Car   01AFG02           01      2  01AFG02
3         AFG    Plane   01AFG03           03      1  03AFG01
4         USA      Car   01USA01           01      1  01USA01
5         USA     Bike   01USA02           02      1  02USA01
6          UK      Car    01UK01           01      1   01UK01
</code></pre>
1
2016-09-20T09:58:16Z
[ "python", "pandas", "dataframe", "group-by" ]
File 'tesseract.log' is Missing (Python 2.7, Windows)
39,589,601
<p>I'm trying to write an OCR script with Python (2.7, Windows OS) to get text from images. First I downloaded <a href="https://pypi.python.org/pypi/PyTesser/" rel="nofollow">PyTesser</a> and extracted it to Python27/Lib/site-packages as 'pytesser', and I installed tesseract with <code>pip install tesseract</code>. Then I wrote the following script as self.py:</p> <pre><code>from PIL import Image
from pytesser.pytesser import *

image_file = 'C:/Users/blabla/test.png'
im = Image.open(image_file)
text = image_to_string(im)
text = image_file_to_string(image_file)
text = image_file_to_string(image_file, graceful_errors=True)
print text
</code></pre> <p>But I'm getting the following error:</p> <pre><code>Traceback (most recent call last):
  File "C:/Users/blabla/self.py", line 7, in &lt;module&gt;
    text = image_file_to_string(image_file)
  File "C:\Python27\lib\site-packages\pytesser\pytesser.py", line 44, in image_file_to_string
    call_tesseract(filename, scratch_text_name_root)
  File "C:\Python27\lib\site-packages\pytesser\pytesser.py", line 24, in call_tesseract
    errors.check_for_errors()
  File "C:\Python27\lib\site-packages\pytesser\errors.py", line 10, in check_for_errors
    inf = file(logfile)
IOError: [Errno 2] No such file or directory: 'tesseract.log'
</code></pre> <p>And yes, there's no 'tesseract.log' file anywhere. What should I do? How should I solve this problem?</p> <p>Thank you in advance.</p> <p>Note: I've changed the line <code>tesseract_exe_name</code> in <em>pytesser.py</em> from <strong>tesseract</strong> to <strong>C:/Python27/Lib/site-packages/pytesser/tesseract</strong> but it doesn't work.</p> <p><strong>Edit:</strong> Alright, I've just run <em>tesseract.exe</em> from 'pytesser' and it created the 'tesseract.log' file, but I'm still getting the same error.</p>
0
2016-09-20T08:49:30Z
39,592,212
<p>I've changed the line from <code>def check_for_errors(logfile = "tesseract.log"):</code> to <code>def check_for_errors(logfile = "C:/Python27/Lib/site-packages/pytesser/tesseract.log"):</code> in <strong>../pytesser/errors.py</strong> and it worked.</p>
0
2016-09-20T10:53:50Z
[ "python", "ocr", "tesseract" ]
Element is no longer attached to the DOM
39,589,704
<p>I am writing a test where I create an ip pool for the application, and then in the next step, delete it.</p> <p>The function is as follows:</p> <pre><code>def remove_passed_ip(self, ip):
    """
    Find an ip and delete it

    Args:
        ip (str): the name of ip to click and delete

    Returns:
        webelement
    """
    index = -1
    try:
        ipDelBtnList = self.wait_for_presence_of_all_elements(self.onboarding_select_address_pools_delete_btn_list)
        ipList = self.wait_for_presence_of_all_elements(self.onboarding_select_address_pools_list)
        time.sleep(5)
    except:
        self.log.info("dasdda")
    else:
        for ips in ipList:
            index += 1
            temp = ips.text.split('/')
            if str(ip) == str(temp[0]):
                ipHandle = ipDelBtnList[index]
                time.sleep(5)
                ipHandle.click()
                time.sleep(15)
</code></pre> <p>The delete action works fine and the created ip is deleted, but after this, when the test ends, it gives the error:</p> <pre><code>Message: Element is no longer attached to the DOM
</code></pre> <p>Please provide any pointers to resolve this issue. If any other clarification is required regarding the question, please let me know.</p>
0
2016-09-20T08:54:15Z
39,590,139
<p>This is because after the delete the DOM changes, and all 'references' that Selenium holds to elements in the browser are lost.</p> <p>To solve this, you need to get the element(s) from the page again after the previous one is deleted (or after any other action that causes the DOM to change).</p>
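<p>As a minimal sketch (reusing the question's own helper and locator names, which are specific to that code base, not a Selenium API): re-fetch the lists after each click instead of iterating over the now-stale references:</p> <pre><code>ipHandle.click()
time.sleep(15)
# the DOM changed, so the old element references are now stale;
# locate the remaining rows/buttons again before touching them
ipDelBtnList = self.wait_for_presence_of_all_elements(self.onboarding_select_address_pools_delete_btn_list)
ipList = self.wait_for_presence_of_all_elements(self.onboarding_select_address_pools_list)
</code></pre>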
1
2016-09-20T09:15:51Z
[ "python", "selenium-webdriver" ]
Flask response should display mongodb last-modified date in ISO format
39,589,746
<p>I'm using flask to create an API. My database is mongodb.</p> <p>When the flask response comes back, the last-modified date is in the following format:</p> <pre><code>"lastModified": {"$date": 1473929954742}
</code></pre> <p>I'm using Pycharm as my IDE, and the Pycharm Run terminal shows the following format:</p> <pre><code>"lastModified": {"$date": 1473929954742}
</code></pre> <p>The Mongodb shell shows the following format:</p> <pre><code>"lastModified" : ISODate("2016-09-15T08:59:14.742Z")
</code></pre> <p>How can I show the date in the flask response in the same format as it appears in Mongodb?</p> <p>I have used the following line inside flask to return the response:</p> <pre><code>return json.dumps(alldata, default=json_util.default)
</code></pre> <p>Please help me.</p> <p>Thanks</p>
0
2016-09-20T08:56:45Z
39,590,072
<p>If you want to convert the timestamp to an ISO-formatted date, you can use <code>utcfromtimestamp</code>. Note that the MongoDB <code>$date</code> value is in milliseconds, hence the division by 1000.</p> <pre><code>import datetime as dt

timestamp = 1473929954742 / 1000
print dt.datetime.utcfromtimestamp(timestamp).isoformat()
# '2016-09-15T08:59:14'
</code></pre>
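<p>One caveat worth noting (an addition, not part of the original answer): in Python 2, <code>1473929954742 / 1000</code> is integer division, which silently drops the milliseconds. Divide by <code>1000.0</code> if you want to keep them:</p> <pre><code>import datetime as dt

print dt.datetime.utcfromtimestamp(1473929954742 / 1000.0).isoformat()
# '2016-09-15T08:59:14.742000'
</code></pre>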
0
2016-09-20T09:12:54Z
[ "python", "mongodb", "python-2.7", "flask" ]
TensorFlow logs not showing up in Console and File
39,589,808
<p>This is probably just a stupid mistake on my end, but I can't seem to find it.</p> <p>I would like to see the logs from TensorFlow on my console and in the log file, which works for all of my code except the TensorFlow part.</p> <p>I have configured logging like this:</p> <pre><code>from logging.config import dictConfig
...
# Setup Logging
LOGGING_CONFIG = dict(
    version=1,
    formatters={
        # For files
        'detailed': {'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'},
        # For the console
        'console': {'format': '[%(levelname)s] %(message)s'}
    },
    handlers={
        'console': {
            'class': 'logging.StreamHandler',
            'level': logging.DEBUG,
            'formatter': 'console',
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': logging.DEBUG,
            'formatter': 'detailed',
            'filename': LOG_FILE,
            'mode': 'a',
            'maxBytes': 10485760,  # 10 MB
            'backupCount': 5
        }
    },
    root={
        'handlers': ['console', 'file'],
        'level': logging.DEBUG,
    },
)
dictConfig(LOGGING_CONFIG)
</code></pre> <p>I researched this problem and learned that I had to enable logging in TensorFlow with something like this:</p> <pre><code>import tensorflow as tf
...
tf.logging.set_verbosity(tf.logging.INFO)
</code></pre> <p>Unfortunately this doesn't seem to work - the logs are not showing up. If I use <code>logging.basicConfig()</code> instead of my own configuration, the logs are displayed as expected. In this case the logs are printed to my terminal.</p> <p>My conclusion is that my logging configuration is somehow faulty - please help me.</p>
0
2016-09-20T08:59:44Z
39,590,369
<p>After reading <a href="https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/" rel="nofollow" title="Good logging practice in Python">Good logging practice in Python</a> I found my mistake. The call to <code>dictConfig</code> disables existing loggers by default - I had to add another key to the config to resolve this (<code>disable_existing_loggers</code>):</p> <pre><code># Setup Logging
LOGGING_CONFIG = dict(
    version=1,
    formatters={
        # For files
        'detailed': {'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'},
        # For the console
        'console': {'format': '[%(levelname)s] %(message)s'}
    },
    handlers={
        'console': {
            'class': 'logging.StreamHandler',
            'level': logging.DEBUG,
            'formatter': 'console',
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': logging.DEBUG,
            'formatter': 'detailed',
            'filename': LOG_FILE,
            'mode': 'a',
            'maxBytes': 10485760,  # 10 MB
            'backupCount': 5
        }
    },
    root={
        'handlers': ['console', 'file'],
        'level': logging.DEBUG,
    },
    disable_existing_loggers=False
)
</code></pre> <p>Now it works :)</p>
1
2016-09-20T09:25:37Z
[ "python", "python-3.x", "logging", "tensorflow" ]
Python can not start Process with Process.start() on Windows. PySide signals
39,589,818
<p>method header:</p> <pre><code>@engine.async
@guiLoop
def for_every_client(self):
</code></pre> <p>connect PySide signal with method:</p> <pre><code>self.ui.pushButton.clicked.connect(lambda: self.for_every_client(self))
</code></pre> <p>stacktrace - <a href="http://pastebin.com/R1ZKcdPy" rel="nofollow">http://pastebin.com/R1ZKcdPy</a></p> <p>or here:</p> <pre><code>Traceback (most recent call last):
  File "app2.py", line 437, in &lt;lambda&gt;
    self.ui.pushButton.clicked.connect(lambda: self.for_every_client(self))
  File "c:\Python27\lib\site-packages\async_gui\engine.py", line 79, in wrapper
    gen = func(*args, **kwargs)
  File "C:\visa\visa_production\libs\guiLoop.py", line 70, in __call__
    _loop_in_the_gui(gui_element, generator, self.start_in_gui)
  File "C:\visa\visa_production\libs\guiLoop.py", line 44, in _loop_in_the_gui
    wait_time = next(generator)
  File "app2.py", line 530, in for_every_client
    p.start()
  File "c:\Python27\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "c:\Python27\lib\multiprocessing\forking.py", line 277, in __init__
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "c:\Python27\lib\multiprocessing\forking.py", line 199, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "c:\Python27\lib\pickle.py", line 224, in dump
    self.save(obj)
  File "c:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "c:\Python27\lib\pickle.py", line 425, in save_reduce
    save(state)
  File "c:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "c:\Python27\lib\pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "c:\Python27\lib\pickle.py", line 687, in _batch_setitems
    save(v)
  File "c:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "c:\Python27\lib\pickle.py", line 554, in save_tuple
    save(element)
  File "c:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "c:\Python27\lib\pickle.py", line 425, in save_reduce
    save(state)
  File "c:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "c:\Python27\lib\pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "c:\Python27\lib\pickle.py", line 687, in _batch_setitems
    save(v)
  File "c:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "c:\Python27\lib\pickle.py", line 425, in save_reduce
    save(state)
  File "c:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "c:\Python27\lib\pickle.py", line 655, in save_dict
    self._batch_setitems(obj.iteritems())
  File "c:\Python27\lib\pickle.py", line 687, in _batch_setitems
    save(v)
  File "c:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "c:\Python27\lib\pickle.py", line 396, in save_reduce
    save(cls)
  File "c:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "c:\Python27\lib\pickle.py", line 754, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle &lt;type 'PySide.QtCore.SignalInstance'&gt;: it's not found as PySide.QtCore.SignalInstance
Traceback (most recent call last):
  File "&lt;string&gt;", line 1, in &lt;module&gt;
  File "c:\Python27\lib\multiprocessing\forking.py", line 381, in main
    self = load(from_parent)
  File "c:\Python27\lib\pickle.py", line 1384, in load
    return Unpickler(file).load()
  File "c:\Python27\lib\pickle.py", line 864, in load
    dispatch[key](self)
  File "c:\Python27\lib\pickle.py", line 886, in load_eof
    raise EOFError
EOFError
</code></pre>
0
2016-09-19T11:53:32Z
39,589,819
<pre><code>class Window(QtCore.QObject):
    SignalProgressBar = QtCore.Signal()  # PySide uses Signal, not PyQt's pyqtSignal

    def __init__(self):
        QtCore.QObject.__init__(self)
        self.SignalProgressBar.connect(lambda: self.for_every_client(self))
</code></pre> <p>Then emit the signal connected to the method:</p> <pre><code>w = Window()
w.SignalProgressBar.emit()
</code></pre>
0
2016-09-19T13:52:04Z
[ "python", "windows", "multiprocessing", "pyside" ]
Python/ Numpy How to use arctan or arctan2 on the interval 0, pi/2?
39,589,834
<p>Is it possible to use the arctan or arctan2 numpy functions with the result always in the 1st quadrant (0, pi/2)? Thanks</p>
0
2016-09-20T09:00:26Z
39,590,108
<p>If you want to calculate the minimum "deviation" angle (in the interval <code>[0, pi/2]</code>) between the <code>x</code>-axis and the segment connecting points (0, 0) and (x, y), you could use:</p> <pre><code>phi = numpy.arctan2(y, x)
phi = min(abs(phi), math.pi - abs(phi))
</code></pre> <p>or:</p> <pre><code>phi = numpy.arctan2(abs(y), abs(x))
</code></pre>
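<p>A quick check that both variants agree (the sample point is made up for illustration):</p> <pre><code>import math
import numpy

x, y = -3.0, 4.0                         # a point in the second quadrant

phi = numpy.arctan2(y, x)                # ~2.214, outside [0, pi/2]
print min(abs(phi), math.pi - abs(phi))  # ~0.927
print numpy.arctan2(abs(y), abs(x))      # ~0.927, same angle
</code></pre>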
3
2016-09-20T09:14:36Z
[ "python", "numpy", "trigonometry" ]
Python Selenium: How do I select a value from drop down menu
39,590,028
<p>I am writing a python selenium script. I wanted to select the element below:</p> <pre><code>&lt;select style="position: absolute; left: 0px; top: 0px;" class="gwt-ListBox project-list"&gt;
    &lt;option value="10"&gt;Case Notes&lt;/option&gt;
</code></pre> <p>but it failed:</p> <pre><code>elem = self.driver.find_element_by_xpath("//contains[@class=gwt-ListBox.project-list']")
</code></pre> <p>Can anyone help, please?</p>
0
2016-09-20T09:10:41Z
39,591,835
<p>Actually you missed a quote <code>'</code> after the <code>=</code>. Also, <code>contains</code> is an XPath function, not an element name, so it needs a node test such as <code>select</code>. The following will work:</p> <pre><code>elem = self.driver.find_element_by_xpath("//select[contains(@class, 'gwt-ListBox') and contains(@class, 'project-list')]")
</code></pre> <p>Hope it helps.</p>
0
2016-09-20T10:31:27Z
[ "python", "selenium" ]
Python Selenium: How do I select a value from drop down menu
39,590,028
<p>I am writing a python selenium script. I wanted to select below class:</p> <pre><code> &lt;select style="position: absolute; left: 0px; top: 0px;" class="gwt-ListBox project-list"&gt; &lt;option value="10"&gt;Case Notes&lt;/option&gt; </code></pre> <p>but got failed:</p> <pre><code>elem = self.driver.find_element_by_xpath("//contains[@class=gwt-ListBox.project-list']") </code></pre> <p>Can any one explore please.</p>
0
2016-09-20T09:10:41Z
39,597,382
<p>Try this Xpath:</p> <pre><code>"//select[@class='gwt-ListBox project-list']"
</code></pre>
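<p>Since the title asks about picking a value from the drop-down, here is a minimal sketch of how that is usually done with selenium's <code>Select</code> helper once the element is located (the locator is taken from the answer above; <code>driver</code> is assumed to be your WebDriver instance):</p> <pre><code>from selenium.webdriver.support.ui import Select

dropdown = Select(driver.find_element_by_xpath("//select[@class='gwt-ListBox project-list']"))
dropdown.select_by_value("10")  # or: dropdown.select_by_visible_text("Case Notes")
</code></pre>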
0
2016-09-20T14:55:21Z
[ "python", "selenium" ]
How to send email without mail template odoo 9
39,590,054
<p>I want to send email without any email template in odoo 9. I've successfully set up the outgoing &amp; incoming mail configuration. My scenario is to use the admin mail alias as the sender of the email. This is my model:</p> <pre><code>class cashadvance(osv.osv):
    _name = 'comben.cashadvance'
    _order = 'create_date DESC'
    _columns = {
        'user_target' : fields.many2one('res.users', string='User Target'),
        'mail_target' : fields.char(string='Email Target'),
    }
</code></pre> <p>The question is, how do I send email to the <code>mail_target</code> field using a function? Help me please.</p>
0
2016-09-20T09:12:04Z
39,590,866
<p>You can send a mail directly using the <code>mail.thread</code> model, by using the following code, attached to a button.</p> <pre><code>@api.multi
def send_mail(self):
    # message_post expects res.partner ids, so use the user's related partner
    partner_id = self.user_target.partner_id.id
    body = self.message_target
    mail_details = {'subject': "Message subject",
                    'body': body,
                    'partner_ids': [partner_id]
                    }
    mail = self.env['mail.thread']
    mail.message_post(type="notification", subtype="mt_comment", **mail_details)
</code></pre> <p>Or you can just take the bull by the horns and use plain ol' <code>smtplib</code> from the standard library (but then your odoo configuration won't matter any more). This example assumes that you're using gmail. If you're not, then you have to fill in your mail server's details for this to work properly.</p> <pre><code>@api.multi
def send_mail(self):
    import smtplib
    receivers_email = self.user_target.login
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login("your_email_address", "password")
    message = self.message_target
    server.sendmail("your_email_address", receivers_email, message)
    server.quit()
</code></pre> <p>Since <code>user_target</code> is related to the <code>res.users</code> model, you can get the email from the <code>login</code> field.</p>
1
2016-09-20T09:47:44Z
[ "python", "postgresql", "openerp" ]
Python pandas dataframe sort_values does not work
39,590,055
<p>I have the following pandas data frame which I want to sort by 'test_type':</p> <pre><code>  test_type         tps          mtt        mem        cpu       90th
0  sso_1000  205.263559  4139.031090  24.175933  34.817701  4897.4766
1  sso_1500  201.127133  5740.741266  24.599400  34.634209  6864.9820
2  sso_2000  203.204082  6610.437558  24.466267  34.831947  8005.9054
3   sso_500  189.566836  2431.867002  23.559557  35.787484  2869.7670
</code></pre> <p>My code to load the dataframe and sort it is below; the first print line prints the data frame above.</p> <pre><code>df = pd.read_csv(file)  # reads from a csv file
print df
df = df.sort_values(by=['test_type'], ascending=True)
print '\nAfter sort...'
print df
</code></pre> <p>After doing the sort and printing the dataframe content, the data frame still looks like below.</p> <p>Program output:</p> <pre><code>After sort...
  test_type         tps          mtt        mem        cpu       90th
0  sso_1000  205.263559  4139.031090  24.175933  34.817701  4897.4766
1  sso_1500  201.127133  5740.741266  24.599400  34.634209  6864.9820
2  sso_2000  203.204082  6610.437558  24.466267  34.831947  8005.9054
3   sso_500  189.566836  2431.867002  23.559557  35.787484  2869.7670
</code></pre> <p>I expect row 3 (test type: sso_500) to be on top after sorting. Can someone help me figure out why it's not working as it should?</p>
0
2016-09-20T09:12:06Z
39,590,240
<p>Presumably, what you're trying to do is sort by the numerical value after <code>sso_</code>. You can do this as follows:</p> <pre><code>import numpy as np

df.ix[np.argsort(df.test_type.str.split('_').str[-1].astype(int).values)]
</code></pre> <p>This</p> <ol> <li><p>splits the strings at <code>_</code></p></li> <li><p>converts what's after this character to the numerical value</p></li> <li><p>finds the indices sorted according to the numerical values</p></li> <li><p>reorders the DataFrame according to these indices</p></li> </ol> <p><strong>Example</strong></p> <pre><code>In [15]: df = pd.DataFrame({'test_type': ['sso_1000', 'sso_500']})

In [16]: df.sort_values(by=['test_type'], ascending=True)
Out[16]:
  test_type
0  sso_1000
1   sso_500

In [17]: df.ix[np.argsort(df.test_type.str.split('_').str[-1].astype(int).values)]
Out[17]:
  test_type
1   sso_500
0  sso_1000
</code></pre>
4
2016-09-20T09:20:16Z
[ "python", "pandas" ]
Python pandas dataframe sort_values does not work
39,590,055
<p>I have the following pandas data frame which I want to sort by 'test_type'</p> <pre><code> test_type tps mtt mem cpu 90th 0 sso_1000 205.263559 4139.031090 24.175933 34.817701 4897.4766 1 sso_1500 201.127133 5740.741266 24.599400 34.634209 6864.9820 2 sso_2000 203.204082 6610.437558 24.466267 34.831947 8005.9054 3 sso_500 189.566836 2431.867002 23.559557 35.787484 2869.7670 </code></pre> <p>My code to load the dataframe and sort it is, the first print line prints the data frame above.</p> <pre><code> df = pd.read_csv(file) #reads from a csv file print df df = df.sort_values(by=['test_type'], ascending=True) print '\nAfter sort...' print df </code></pre> <p>After doing the sort and printing the dataframe content, the data frame still looks like below.</p> <p>Program output:</p> <pre><code>After sort... test_type tps mtt mem cpu 90th 0 sso_1000 205.263559 4139.031090 24.175933 34.817701 4897.4766 1 sso_1500 201.127133 5740.741266 24.599400 34.634209 6864.9820 2 sso_2000 203.204082 6610.437558 24.466267 34.831947 8005.9054 3 sso_500 189.566836 2431.867002 23.559557 35.787484 2869.7670 </code></pre> <p>I expect row 3 (test type: sso_500 row) to be on top after sorting. Can someone help me figure why it's not working as it should?</p>
0
2016-09-20T09:12:06Z
39,590,660
<p>Alternatively, you could also extract the numbers from <code>test_type</code>, sort them, and then reindex the <code>DataFrame</code> according to those indices:</p> <pre><code>df.reindex(df['test_type'].str.extract('(\d+)', expand=False) \
           .astype(int).sort_values().index).reset_index(drop=True)
</code></pre> <p><a href="http://i.stack.imgur.com/7Xu95.png" rel="nofollow"><img src="http://i.stack.imgur.com/7Xu95.png" alt="Image"></a></p>
1
2016-09-20T09:37:24Z
[ "python", "pandas" ]
Print comma separated items without newline
39,590,098
<p>I thought I was copying word for word from a <a href="http://stackoverflow.com/questions/493386/how-to-print-in-python-without-newline-or-space">previous question/answer</a>, but I don't get what I wanted.</p> <p>I was trying to answer a question from an online list of python questions for beginners. I wrote the following in a file called <code>q.py</code> and then executed it with <code>python3 q.py</code> in my shell.</p> <p>The file <code>q.py</code> looks like:</p> <pre><code>from math import sqrt

C=50
H=30
inp = input('input values: ')
vals = inp.split(',')
print(vals)
for x in vals:
    D=int(x)
    print(int(sqrt(2*C*D/H)),sep=',',end='')
print()
</code></pre> <p>I input <code>100,150,180</code> as the question suggested; the required answer is:</p> <pre><code>18,22,24
</code></pre> <p>but the answer I get is:</p> <pre><code>182224
</code></pre> <p>When I omit the <code>"end"</code> argument to <code>print</code>, I get <code>18</code>, <code>22</code>, and <code>24</code> separated by newlines.</p>
0
2016-09-20T09:14:14Z
39,590,190
<p><code>sep</code> is used to separate values <em>if multiple values are actually provided</em>; you provide <em>one value</em> on every loop iteration, so the separator isn't used.</p> <p>Instead of using <code>sep=','</code>, use <code>end=','</code> to get the desired effect:</p> <pre><code>for x in vals:
    D = int(x)
    print(int(sqrt(2*C*D/H)), end=',')
print()
</code></pre> <p>Using similar input, the result printed out is:</p> <pre><code>['100', '150', '180']
18,22,24,
</code></pre> <p>If, for example, you provided <em>multiple</em> values, say, by unpacking a list comprehension in your <code>print</code> call, <code>sep=','</code> would work like a charm:</p> <pre><code>print(*[int(sqrt(2*C*int(x)/H)) for x in vals], sep=',')
</code></pre> <p>Prints:</p> <pre><code>18,22,24
</code></pre>
2
2016-09-20T09:18:14Z
[ "python", "python-3.x", "printing" ]
openpyxl overwrites all data instead of update the excel sheet
39,590,102
<p>I'm trying to read a string from a text file and write it into an excel sheet without overwriting. I found somewhere that to update excel sheets, openpyxl is used. But my script just overwrites the entire sheet. I want the other data to stay the same.</p> <p>python script:</p> <pre><code>from openpyxl import Workbook

file_name="D:\\a.txt"
content={}
with open(file_name) as f:
    for line in f:
        (key,value)=line.split(":")
        content[key]=value

wb=Workbook()
ws=wb.active
r = 2
for item in content:
    ws.cell(row=r, column=3).value = item
    ws.cell(row=r, column=4).value = content[item]
    r += 1
wb.save("D:\\Reports.xlsx")
</code></pre> <p>Excel sheet before script:</p> <p><a href="http://i.stack.imgur.com/KL3AF.png" rel="nofollow"><img src="http://i.stack.imgur.com/KL3AF.png" alt="enter image description here"></a></p> <p>Excel sheet after script :</p> <p><a href="http://i.stack.imgur.com/IJVuv.png" rel="nofollow"><img src="http://i.stack.imgur.com/IJVuv.png" alt="enter image description here"></a></p> <p>How do I write the data to excel without overwriting other things? Help.</p>
0
2016-09-20T09:14:20Z
39,594,464
<p>Overwriting happens because you create a fresh workbook with <code>Workbook()</code> and save it over the old file; loading the existing file with <code>load_workbook</code> fixes that. Your hard-coded starting row number <code>r = 2</code> also matters, as explained below.<br />
1) If you don't mind overwriting the same rows each time you execute your script, you could use something like this:</p> <pre><code>from openpyxl import Workbook
from openpyxl import load_workbook

path = 'P:\Desktop\\'
file_name = "input.txt"

content= {}
with open(path + file_name) as f:
    for line in f:
        (key,value)=line.split(":")
        content[key]=value

wb = load_workbook(path + 'Reports.xlsx')
ws = wb.active

r = 2
for item in content:
    ws.cell(row=r, column=3).value = item
    ws.cell(row=r, column=4).value = content[item]
    r += 1
wb.save(path + "Reports.xlsx")
</code></pre> <p>2) If you don't want to overwrite rows (keeping the column layout of 3 &amp; 4), you could append instead, like this:</p> <pre><code>from openpyxl import Workbook
from openpyxl import load_workbook

path = 'P:\Desktop\\'
file_name = "input.txt"

content= []
with open(path + file_name) as f:
    for line in f:
        key, value = line.split(":")
        content.append(['', '', key, value])  # adding empty cells in col 1 + 2

wb = load_workbook(path + 'Reports.xlsx')
ws = wb.active

for row in content:
    ws.append(row)
wb.save(path + "Reports.xlsx")
</code></pre>
1
2016-09-20T12:44:15Z
[ "python", "excel", "openpyxl" ]
Determining EC2 instance public IP from SNS in lambda
39,590,123
<p>I have a lambda function which SSHes into an EC2 instance and runs some commands. This lambda function is triggered from an SNS topic. The SNS topic is integrated with a CloudWatch alarm. I am using python 2.7 in the lambda function and followed this thread: <a href="https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/" rel="nofollow">https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/</a>. Is it possible to get the public IP address of the EC2 instance which actually triggered the alarm?</p>
0
2016-09-20T09:15:11Z
39,687,531
<p>It depends on the CloudWatch Alarm you are using to trigger the SNS publish. My suggestion is to print out the entire <strong>event</strong> dictionary in your function and check if there is any mention of the EC2 instance ID.</p> <p>In the case of a CloudWatch EC2 alarm (e.g. CPU usage) you'll find the instance ID in the metric Dimensions.</p> <pre><code># Python example
import json

message = json.loads(event['Records'][0]['Sns']['Message'])
# Dimensions is a list of {'name': ..., 'value': ...} pairs;
# the instance id is the value of the 'InstanceId' dimension
instance_id = message['Trigger']['Dimensions'][0]['value']
</code></pre> <p>If you have the instance ID you can easily retrieve the instance public IP using boto3 as follows:</p> <pre><code># Python example
import boto3

instance_id = 'xxxx'  # This is the instance ID from the event
ec2 = boto3.client('ec2')
response = ec2.describe_instances(InstanceIds=[instance_id])
public_ip = response['Reservations'][0]['Instances'][0]['PublicIpAddress']
</code></pre> <p>Finally, as you are performing SSH from a Lambda function to your EC2 instance, keep in mind that Lambda functions outside a VPC get a dynamic public IP, therefore it is impossible to restrict your EC2 instance security group for SSH. Leaving SSH open to the entire world is not a good practice from a security perspective.</p> <p>I suggest running both EC2 and the Lambda function in a VPC, restricting SSH access to your EC2 instances from the Lambda VPC security group only. In that case you'll need to retrieve the private IP address rather than the public one to be able to ssh into your instance (the python logic is the same as above, the only difference is that you use <em>'PrivateIpAddress'</em> instead of <em>'PublicIpAddress'</em>). This is way more secure than using the public internet.</p> <p>I hope this helps.</p> <p>G</p>
0
2016-09-25T13:47:26Z
[ "python", "amazon-ec2", "aws-lambda", "amazon-sns", "amazon-cloudwatch" ]
TypeError: can't multiply sequence by non-int of type 'float' even after using np.multiply
39,590,152
<p>I have the following function, which works along the rows of a dataframe. I have tried many things, like <code>np.multiply</code> or <code>np.array</code>, to make that multiplication work, but so far to no avail. I have also converted factor to an <code>int</code> by multiplying and then dividing it by 10, but that didn't help either. I have wrapped all my operands individually into <code>np.array()</code>, and also have tried <code>np.multiply</code> around every single operation, but I still get that error.</p> <pre><code>factor = somefloatnumber  # i.e. 0.4

def func(row):
    return row * factor / np.sum(row)

df2 = df1.apply(func, axis=1)
</code></pre> <blockquote> <p>TypeError: can't multiply sequence by non-int of type 'float'</p> </blockquote> <p>Here is a sample of the data I am using:</p> <pre><code>                A         B         C          D         E
2006-04-28 A    69.62     69.62     3.518      65.09     69.62
           B    27.78     7.7       27.78      27.7      27
           C    0.23      0.22      0.02       0.23      0.21
2006-05-01 A    71.5      71.5      6.522      65.16     71.5
           B    28.4828   28.4828   28.4828    28.4828   28.4828
           C    0.249841  0.249841  0.0227897  0.227687  0.249841
</code></pre>
0
2016-09-20T09:16:14Z
39,600,691
<p>It seems that <code>convert_objects(convert_numeric=True)</code> has done the trick!<br> It attempts to infer a better dtype for object columns, see: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.convert_objects.html</a></p> <p>In the case of my issue above, you would use it this way:</p> <pre><code>df1.convert_objects(convert_numeric=True).apply(func, axis=1)
</code></pre> <p>However, I get a FutureWarning:</p> <blockquote> <p>FutureWarning: convert_objects is deprecated. Use the data-type specific converters pd.to_datetime, pd.to_timedelta and pd.to_numeric.</p> </blockquote> <p>But it works.</p>
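<p>For reference, a sketch of the non-deprecated equivalent using <code>pd.to_numeric</code>, as the warning suggests (untested against the exact data above; <code>errors='coerce'</code> turns unconvertible values into NaN):</p> <pre><code>df2 = df1.apply(pd.to_numeric, errors='coerce').apply(func, axis=1)
</code></pre>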
0
2016-09-20T17:53:01Z
[ "python", "arrays", "pandas", "numpy", "casting" ]
In requirements.txt, what does tilde equals (~=) mean?
39,590,187
<p>In the <code>requirements.txt</code> for a Python library I am using, one of the requirements is specified like:</p> <pre><code>mock-django~=0.6.10 </code></pre> <p>What does <code>~=</code> mean?</p>
4
2016-09-20T09:18:05Z
39,590,286
<p>It means it will select the latest version of the package, greater than or equal to 0.6.10, but still in the 0.6.* series, so it won't download 0.7.0 for example. It ensures you will get security fixes while keeping backward compatibility, if the package maintainer respects semantic versioning (which states that breaking changes should occur only in major versions).</p> <p>Or, as said by the PEP 440:</p> <blockquote> <p>For a given release identifier V.N , the compatible release clause is approximately equivalent to the pair of comparison clauses:</p> <p><code>&gt;= V.N, == V.*</code></p> </blockquote> <ul> <li><a href="https://www.python.org/dev/peps/pep-0440/#compatible-release">Definition in the PEP 440</a></li> <li><a href="https://pip.pypa.io/en/stable/reference/pip_install/#example-requirements-file">Complete example here in the documentation</a></li> </ul>
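<p>Concretely, for the requirement from the question, the two <code>requirements.txt</code> lines below are (approximately) equivalent, per PEP 440:</p> <pre><code>mock-django~=0.6.10
mock-django&gt;=0.6.10,==0.6.*
</code></pre>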
5
2016-09-20T09:21:53Z
[ "python", "requirements.txt" ]
In requirements.txt, what does tilde equals (~=) mean?
39,590,187
<p>In the <code>requirements.txt</code> for a Python library I am using, one of the requirements is specified like:</p> <pre><code>mock-django~=0.6.10 </code></pre> <p>What does <code>~=</code> mean?</p>
4
2016-09-20T09:18:05Z
39,590,317
<blockquote> <p>A compatible release clause consists of the compatible release operator ~= and a version identifier. It matches any candidate version that is expected to be compatible with the specified version.</p> </blockquote> <p>You can read more here: <a href="https://www.python.org/dev/peps/pep-0440/#compatible-release" rel="nofollow">https://www.python.org/dev/peps/pep-0440/#compatible-release</a></p>
1
2016-09-20T09:23:10Z
[ "python", "requirements.txt" ]
In requirements.txt, what does tilde equals (~=) mean?
39,590,187
<p>In the <code>requirements.txt</code> for a Python library I am using, one of the requirements is specified like:</p> <pre><code>mock-django~=0.6.10 </code></pre> <p>What does <code>~=</code> mean?</p>
4
2016-09-20T09:18:05Z
39,590,393
<p>That's the 'compatible release' <a href="https://www.python.org/dev/peps/pep-0440/#version-specifiers" rel="nofollow">version specifier</a>.</p> <p>It's equivalent to: <code>mock-django &gt;= 0.6.10, == 0.6.*</code>, and is a tidy way of matching a version which is expected to be compatible. In plain English, it's a bit like saying: "I need a version of mock-django which is at least as new as 0.6.10, but not so new that it isn't compatible with it."</p> <p>If you're not sure about all this version number stuff, a quick look at the PEP440 <a href="https://www.python.org/dev/peps/pep-0440/#version-scheme" rel="nofollow">version scheme</a> should sort you out!</p>
2
2016-09-20T09:26:57Z
[ "python", "requirements.txt" ]
In requirements.txt, what does tilde equals (~=) mean?
39,590,187
<p>In the <code>requirements.txt</code> for a Python library I am using, one of the requirements is specified like:</p> <pre><code>mock-django~=0.6.10 </code></pre> <p>What does <code>~=</code> mean?</p>
4
2016-09-20T09:18:05Z
39,590,412
<p><code>~=</code> means a compatible release: at least version 0.6.10, but still within the 0.6.* series.</p>
1
2016-09-20T09:27:48Z
[ "python", "requirements.txt" ]
AWS Lambda function using test trigger or schedule (Timeout)
39,590,227
<p>So I have created a lambda function (the purpose of which doesn't matter anymore) and have tested that it works when run from my laptop. However, the bigger problem is that I cannot get it to run off of a test event or on a schedule in AWS.</p> <p>When I try to run it from AWS I get a 300s timeout error.</p> <p>The following is included for your consideration:</p> <ul> <li>Function</li> <li>Logs</li> <li>Trigger Event</li> <li>Policy</li> <li>VPC Related Configuration</li> </ul> <p>If anyone can tell me what the issue might be, I would appreciate it, as I have been searching for the solution for about 3 days.</p> <p>FUNCTION:</p> <pre><code>from __future__ import print_function

def lambda_handler(event, context):
    if event["account"] == "123456789012":
        print("Passed!!")
        return event['time']

import boto3
import datetime

def find_auto_scaling_instances():
    """
    Find Auto-scaling Instances
    """
    client=boto3.client("autoscaling")
    auto_scaling=client.describe_auto_scaling_groups()
    dict={}
    for group in auto_scaling["AutoScalingGroups"]:
        print("hello")
        auto_scaling_group_name=group["AutoScalingGroupName"]
        number_of_instances=len(group["Instances"])
        if number_of_instances == 0:
            continue
        else:
            for i in range(number_of_instances):
                if group["Instances"][i]["LifecycleState"] == "InService":
                    instance_id=group["Instances"][i]["InstanceId"]
                    dict[auto_scaling_group_name]=instance_id
                    break
    return dict

def find_staging_instances():
    """
    Find Static Instances
    """
    stg_list=[]
    tag_list=["Y-qunie-stepsrv-1a","S-StepSvr"]
    for i in range(num_of_instances):
        print("hello2")
        target=instances[i]["Instances"][0]
        number_of_tags=len(target["Tags"])
        for tag in range(number_of_tags):
            if target["Tags"][tag]["Value"] in tag_list:
                stg_list+=[target["InstanceId"]]
    return stg_list

def volumes_per_instance():
    """
    Find the EBS associated with the Instances
    """
    instance_disk={}
    for i in range(num_of_instances):
        print("hello3")
        target=instances[i]["Instances"][0]
        if target["InstanceId"] in instance_list:
            instance_disk[target["InstanceId"]]=[]
            for disk in range(len(target["BlockDeviceMappings"])):
                print("hello4")
                instance_disk[target["InstanceId"]] += \
                    [target["BlockDeviceMappings"][disk]["Ebs"]["VolumeId"]]
    return instance_disk

#Group instances together and prepare to process
instance_in_asgroup_dict=find_auto_scaling_instances()
as_instance_list=[]
for group in instance_in_asgroup_dict:
    print("hello5")
    as_instance_list+=[instance_in_asgroup_dict[group]]

client=boto3.client("ec2")
instances=client.describe_instances()["Reservations"]
num_of_instances=len(instances)

staging_instances=find_staging_instances()

instance_list=[]
instance_list+=as_instance_list
instance_list+=staging_instances

#Gather Disk Information
inst_disk_set=volumes_per_instance()

date=str(datetime.datetime.now()).replace(" ","_").replace(":","").replace(".","")

#Create Disk Images
as_image={}
image=[]
for instance in instance_list:
    print("hello6")
    if instance in as_instance_list:
        as_image[instance]=client.create_image(
            InstanceId=instance,
            Name=instance+"-"+date+"-AMI",
            Description="AMI for instance "+instance+".",
            NoReboot=True
        )
    else:
        image+=[client.create_image(
            InstanceId=instance,
            Name=instance+"-"+date+"-AMI",
            Description="AMI for instance "+instance+".",
            NoReboot=True
        )]
</code></pre> <p>LOGS:</p> <pre><code>18:03:30 START RequestId: 0ca9e0a3-7f11-11e6-be11-6974d9213102 Version: $LATEST
18:08:30 END RequestId: 0ca9e0a3-7f11-11e6-be11-6974d9213102
18:08:30 REPORT RequestId: 0ca9e0a3-7f11-11e6-be11-6974d9213102 Duration: 300001.99 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 24 MB
18:08:30 2016-09-20T09:08:30.544Z 0ca9e0a3-7f11-11e6-be11-6974d9213102 Task timed out after 300.00 seconds
</code></pre> <p>TRIGGER_EVENT:</p> <pre><code>{
  "account": "123456789012",
  "region": "us-east-1",
  "detail": {},
  "detail-type": "Scheduled Event",
  "source": "aws.events",
  "time": "1970-01-01T00:00:00Z",
  "id": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
  "resources": [
    "arn:aws:events:us-east-1:123456789012:rule/my-schedule"
  ]
}
</code></pre> <p><strong>EDIT-1</strong></p> <p>IAM POLICY:</p> <p>From my understanding, all I need to allow VPC access for my function is to add the following privileges to the lambda function's assigned policy.</p> <ul> <li>ec2:CreateNetworkInterface</li> <li>ec2:DeleteNetworkInterface</li> <li>ec2:DescribeNetworkInterfaces</li> </ul> <blockquote> <pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DeleteNetworkInterface"
            ],
            "Resource": "*"
        },
        {
            "Action": [
                "ec2:DescribeInstances",
                "ec2:CreateImage",
                "autoscaling:DescribeAutoScalingGroups"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}
</code></pre> </blockquote> <p><strong>EDIT 2</strong></p> <p>CONFIGURATION:</p> <p>The subnets, security groups and VPC attached to the function.</p> <p><a href="http://i.stack.imgur.com/HwQfN.png" rel="nofollow"><img src="http://i.stack.imgur.com/HwQfN.png" alt="Configurations"></a></p> <p><strong>EDIT 3 (CONCLUSION)</strong></p> <p>Mark gave an excellent answer, informing me that I had set my function up to run inside a VPC yet I was not accessing resources within the VPC. Rather, I was accessing the Amazon API endpoint, which required that I have access to the internet or the transaction would time out.</p> <p>As such, there were two options available to fix this situation:</p> <ul> <li>Remove my VPC settings</li> <li>Create a NAT Gateway inside my VPC</li> </ul> <p>I chose the one that costs the least money.</p>
0
2016-09-20T09:19:52Z
39,605,751
<p>You enabled VPC access for your Lambda function. So now it only has access to resources inside your VPC. Note that the AWS API exists outside your VPC. Any attempt to access something outside your VPC is going to result in a network timeout. That's why you are seeing the timeout issues.</p> <p>To fix this, you can move the Lambda function outside the VPC, or you can add a NAT Gateway to your VPC. I'm not seeing anything in your code that is accessing anything inside your VPC, so it's probably cheapest and easiest to just remove the VPC settings from the Lambda function.</p>
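<p>For completeness, if you prefer to script the change rather than click through the console, the same boto3 library the function already uses can detach it from the VPC. This is a minimal sketch, not a verified fix; the function name is a hypothetical placeholder:</p> <pre><code>import boto3

client = boto3.client("lambda")

# Passing empty lists for VpcConfig removes the VPC attachment
client.update_function_configuration(
    FunctionName="my-ami-backup-function",  # hypothetical name
    VpcConfig={"SubnetIds": [], "SecurityGroupIds": []}
)
</code></pre>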
2
2016-09-21T00:34:43Z
[ "python", "amazon-web-services", "aws-lambda", "boto3" ]
multiple text replacements in string by markers
39,590,326
<p>I have an xml file that is used as a template. I have multiple markers inside this xml that will be replaced with actual data. This is what I did:<br/></p> <pre><code>def populate_template(self, value1, value2, value3): with open('my_template.xml', 'rb') as xml_template: template_string = xml_template.read() template_string = template_string.replace('{{MARKER_1}}', value1) template_string = template_string.replace('{{MARKER_2}}', value2) template_string = template_string.replace('{{MARKER_3}}', value3) return template_string </code></pre> <p>Note that <code>str.replace</code> returns a new string, so the result has to be assigned back. Each marker can appear multiple times inside the template.<br/> I was wondering if there is a more efficient way of doing this?<br/> <strong>Tech stuff:</strong></p> <ul> <li>Python 2.7</li> </ul>
0
2016-09-20T09:23:32Z
39,590,551
<p>Yes. Use the <code>jinja2</code> templating module. To use your existing template you could do something like this:</p> <pre><code>def populate_template(self, value1, value2, value3): from jinja2 import Template t = Template(open('my_template.xml', 'r').read()) output = t.render(MARKER_1=value1, MARKER_2=value2, MARKER_3=value3) return output </code></pre> <p>It's also well worth studying the different ways you can pass arguments to the template. For example the same code could have been written as...</p> <pre><code> ... context = {'MARKER_1': value1, 'MARKER_2': value2, 'MARKER_3': value3} output = t.render(**context) </code></pre> <p>and you can use this trick with any old dicts you happen to have lying around. It's a great way of extracting readable information from dicts selectively.</p> <p>The designers of jinja2, being smart cookies sympathetic to the Python cause, have in fact helped you by allowing you to provide the context in any of the ways you can create a dict (keyword arguments, a list of <code>(key, value)</code> tuples or a dict - including other dict-like things such as <code>collections.OrderedDict</code>). So you could also write the second line as</p> <pre><code> output = t.render(context) </code></pre> <p>which is both more readable and more efficient (I'm guessing, but it's an informed guess).</p>
1
2016-09-20T09:33:11Z
[ "python", "xml", "string", "python-2.7", "replace" ]
Why don't Python lists on my computer hold more than 693 numbers?
39,590,384
<p>I was solving <a href="https://www.hackerrank.com/challenges/s10-basic-statistics" rel="nofollow">Day 0</a> of the 10 Days of Statistics problems on HackerRank the other day. My solution was pretty straightforward and it worked with almost all the test cases. The one my code fails on has 2500 numbers as input. I placed a few prints to see what was happening and discovered that my list holds only 693 of the values.</p> <p>Here is the part that finds the median:</p> <pre><code>number_of_items = int(input().strip()) inputs = list(map(int, input().split(' '))) inputs.sort() if (number_of_items % 2) == 0: n = number_of_items // 2 - 1 median = (inputs[n] + inputs[n+1])/2 else: n = number_of_items // 2 median = inputs[n-1] print(median) </code></pre> <p>Here is the full code: <a href="https://gist.github.com/mnzr/5a6f6c1c49d4dc0dbb940ed3ecba79ff" rel="nofollow">https://gist.github.com/mnzr/5a6f6c1c49d4dc0dbb940ed3ecba79ff</a></p> <p>I have tried this code in an online editor and it worked, with one exception: the mode was way off from the actual number.</p> <p>I thought Python lists could hold large amounts of numbers! Why doesn't it work on my PC?</p> <p>Edit: I had a friend solve the problem on his own and relay the results to me. Then I told him to run my code and tell me what he sees. Here is his code: <a href="https://gist.github.com/sz-ashik440/192bc22b18da0292832e65997a6787a7" rel="nofollow">https://gist.github.com/sz-ashik440/192bc22b18da0292832e65997a6787a7</a> Here is what happened: 1: His code worked, he said. I ran it and it worked on my PC too (for the first time, that is). 2: I gave him my code and ran it myself again. I saw there were only 693 elements; he reported the same. 3: And perhaps the most surprising thing: his own version gave the same out-of-index error and an array of size 693! My friend's own code on his own PC now gives wrong answers.</p> <p>Here is my system config:</p> <ul> <li>Intel Core i5, 5th gen </li> <li>8GB of RAM, 1600 MHz bus speed </li> <li>Ubuntu 16.04.1</li> <li>Python 3.5.2</li> </ul> <p>My friend is using Python 3.4.3 on Ubuntu 14.04.</p>
-1
2016-09-20T09:26:22Z
39,591,914
<p>I don't know why you are experiencing this limitation when running your code on your local machine; however, I strongly suspect that it is due to your input data - perhaps a stray newline character, as suggested by <a href="http://stackoverflow.com/questions/39590384/why-do-python-lists-in-my-computer-dont-hold-more-than-693-numbers#comment66489821_39590384">Martijn Pieters</a>.</p> <p>The problem that you have with HackerRank is due to your mode calculation:</p> <pre><code># mode occurances = {n: inputs.count(n) for n in inputs} mode = min(occurances) print(mode) </code></pre> <p>This always selects the lowest value from the input, not the lowest of the most frequently occurring values. Here is one way to do that:</p> <pre><code>from collections import Counter c = Counter(inputs) max_count = c.most_common(1)[0][1] print(sorted([x for x in c if c[x] == max_count])[0]) </code></pre> <p>Note that there is also another problem: you need to round the mean and median values to 1 decimal place, but this error is not exposed by HackerRank's test data.</p>
1
2016-09-20T10:35:54Z
[ "python", "list", "python-3.x", "ubuntu" ]
What am I doing wrong when using seasonal decompose in Python?
39,590,648
<p>I have a small time series with monthly intervals. I wanted to plot it and then decompose it into seasonality, trend, and residuals. I start by importing the csv into pandas and then plotting just the time series, which works fine. I followed <a href="http://www.cbcity.de/timeseries-decomposition-in-python-with-statsmodels-and-pandas" rel="nofollow">this</a> tutorial and my code goes like this:</p> <pre><code>%matplotlib inline import matplotlib.pyplot as plt import matplotlib.dates as mdates import pandas as pd ali3 = pd.read_csv('C:\\Users\\ALI\\Desktop\\CSV\\index\\ZIAM\\ME\\ME_DATA_7_MONTH_AVG_PROFIT\\data.csv', names=['Date', 'Month','AverageProfit'], index_col=['Date'], parse_dates=True) # Delete the month column, which is a string del ali3['Month'] ali3 plt.plot(ali3) </code></pre> <p><a href="http://i.stack.imgur.com/aqSiu.jpg" rel="nofollow">Data Frame</a></p> <p>At this stage I try to do the seasonal decompose like this:</p> <pre><code>import statsmodels.api as sm res = sm.tsa.seasonal_decompose(ali3.AverageProfit) fig = res.plot() </code></pre> <p>which results in the following error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-41-afeab639d13b&gt; in &lt;module&gt;() 1 import statsmodels.api as sm ----&gt; 2 res = sm.tsa.seasonal_decompose(ali3.AverageProfit) 3 fig = res.plot() C:\Users\D063375\AppData\Local\Continuum\Anaconda2\lib\site-packages\statsmodels\tsa\seasonal.py in seasonal_decompose(x, model, filt, freq) 86 filt = np.repeat(1./freq, freq) 87 ---&gt; 88 trend = convolution_filter(x, filt) 89 90 # nan pad for conformability - convolve doesn't do it C:\Users\D063375\AppData\Local\Continuum\Anaconda2\lib\site-packages\statsmodels\tsa\filters\filtertools.py in convolution_filter(x, filt, nsides) 287 288 if filt.ndim == 1 or min(filt.shape) == 1: --&gt; 289 result = signal.convolve(x, filt, mode='valid') 290 elif filt.ndim == 2: 291 nlags = filt.shape[0] C:\Users\D063375\AppData\Local\Continuum\Anaconda2\lib\site-packages\scipy\signal\signaltools.py in convolve(in1, in2, mode) 468 return correlate(volume, kernel[slice_obj].conj(), mode) 469 else: --&gt; 470 return correlate(volume, kernel[slice_obj], mode) 471 472 C:\Users\D063375\AppData\Local\Continuum\Anaconda2\lib\site-packages\scipy\signal\signaltools.py in correlate(in1, in2, mode) 158 159 if mode == 'valid': --&gt; 160 _check_valid_mode_shapes(in1.shape, in2.shape) 161 # numpy is significantly faster for 1d 162 if in1.ndim == 1 and in2.ndim == 1: C:\Users\D063375\AppData\Local\Continuum\Anaconda2\lib\site-packages\scipy\signal\signaltools.py in _check_valid_mode_shapes(shape1, shape2) 70 if not d1 &gt;= d2: 71 raise ValueError( ---&gt; 72 "in1 should have at least as many items as in2 in " 73 "every dimension for 'valid' mode.") 74 ValueError: in1 should have at least as many items as in2 in every dimension for 'valid' mode. </code></pre> <p>Can anyone shed some light on what I'm doing wrong and how I may fix it? Much obliged.</p> <p><strong>Edit:</strong> This is how the data frame looks:</p> <pre><code>Date AverageProfit 2015-06-01 29.990231 2015-07-01 26.080038 2015-08-01 25.640862 2015-09-01 25.346447 2015-10-01 27.386001 2015-11-01 26.357709 2015-12-01 25.260644 </code></pre>
1
2016-09-20T09:36:59Z
39,591,005
<p>You have 7 data points, which is usually a very small number for performing stationarity analysis.</p> <p>You don't have enough points to use seasonal decomposition. To see this, you can concatenate your data to create an extended time series (just repeating your data for the following months). Let <code>extendedData</code> be this extended dataframe and <code>data</code> your original data.</p> <pre><code>data.plot() </code></pre> <p><a href="http://i.stack.imgur.com/TNi9J.png" rel="nofollow"><img src="http://i.stack.imgur.com/TNi9J.png" alt="enter image description here"></a></p> <pre><code>extendedData.plot() </code></pre> <p><a href="http://i.stack.imgur.com/m9Pdt.png" rel="nofollow"><img src="http://i.stack.imgur.com/m9Pdt.png" alt="enter image description here"></a></p> <pre><code>res = sm.tsa.seasonal_decompose(extendedData.interpolate()) res.plot() </code></pre> <p><a href="http://i.stack.imgur.com/fl6lN.png" rel="nofollow"><img src="http://i.stack.imgur.com/fl6lN.png" alt="enter image description here"></a></p> <p>The frequency (<code>freq</code>) for the seasonal estimate is automatically estimated from the data, and can be manually specified.</p> <hr> <p>You can try to take a first difference: generate a new time series by subtracting each data value from the previous one. In your case it looks like this:</p> <p><a href="http://i.stack.imgur.com/p8bBQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/p8bBQ.png" alt="enter image description here"></a></p> <p>A stationarity test can be applied next, as explained <a href="http://stats.stackexchange.com/questions/120270/how-do-i-detrend-time-series">here</a>.</p>
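<p>For reference, the first difference mentioned above is a one-liner in pandas. A minimal sketch, assuming the <code>ali3</code> DataFrame from the question:</p> <pre><code>import pandas as pd

# First difference: each value minus the previous one;
# the first row becomes NaN, so drop it before plotting or testing
diffed = ali3['AverageProfit'].diff().dropna()
diffed.plot()
</code></pre>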
1
2016-09-20T09:54:14Z
[ "python", "pandas", "jupyter" ]
expected_conditions with Selenium in Python
39,590,654
<p>I want to make my driver wait until the text of an element becomes numeric.</p> <p>Here is the element:</p> <pre><code>&lt;span id="viewed"&gt;-&lt;/span&gt; or &lt;span id="viewed"&gt;&lt;/span&gt; </code></pre> <p>It will become:</p> <pre><code>&lt;span id="viewed"&gt;12345&lt;/span&gt; </code></pre> <p>Any solution?</p>
0
2016-09-20T09:37:17Z
39,607,088
<p>This is untested:</p> <pre><code>from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC ints = "0123456789" def custom_ec(driver): ''' Custom expected condition function to feed wait.until ''' elem_text = driver.find_element_by_id("viewed").text # Test that the text is non-empty and every character is a digit if elem_text and all(map(lambda x: x in ints, elem_text)): # If so, return the value as an int (truthy, so the wait stops) return int(elem_text) else: return False driver = webdriver.Chrome() try: driver.get("http://your.url/here") int_value = WebDriverWait(driver, 30).until(custom_ec) finally: driver.quit() </code></pre>
1
2016-09-21T03:33:05Z
[ "python", "selenium" ]
Import only the first column of multiple csvs, dummy-code repeats and calculate conditional probability
39,590,712
<p>I am actually an R-savvy person, but for this task I am using Python and am overwhelmed by little things. I really hope you guys may help me.</p> <p>I have three large csv files that contain ids in the first column. Each csv contains presence data for one day, and each unique id stands for a person. csv1 is day 1, csv2 is day 2 and csv3 is day 3.</p> <p>What I want to do now is to import only the first column of each csv, because I don't need the rest and the files are quite big. Then I'd like to compare the appearance of each person (so the appearance of each id) and dummy-code the information into one data frame where the first column contains the unique ids of all three csv files, the second a binary code for the presence of the id in the first csv (0 = id doesn't appear in csv1, 1 = id appears in csv1), and then in the next column do the same for csv2, and so on.</p> <p>Last but not least, I'd then like to calculate the conditional probability, for example for the case that a person who didn't come on the first two days would show up on the third.</p> <p>Of course a whole solution would be great, but I would be pretty happy with baby steps and some Python-for-dummies advice here, too. The thing is that I know how to do this with R but not with Python, and I would really appreciate your help.</p> <p>Thank you very much and excuse my English.</p> <p>If you need sample data, maybe this helps (the real data is huge):</p> <pre><code> csv1 id col2 col3 1 x x 2 x x 7 x x csv2 id col2 col3 1 x x 3 x x 7 x x 4 x x csv3 id col2 col3 1 x x 2 x x 3 x x 4 x x 5 x x 6 x x </code></pre> <p>The dummy-coded df would be:</p> <pre><code> df id csv1 csv2 csv3 1 1 1 1 2 1 0 1 3 0 1 1 4 0 1 1 5 0 0 1 6 0 0 1 7 1 1 0 </code></pre>
0
2016-09-20T09:39:49Z
39,592,989
<p>You can simply open your <code>csv</code> files and store all the first columns (ids) in a list:</p> <pre><code>with open(csv1_name) as f1, open(csv2_name) as f2,open(csv3_name) as f3: column1 = next(zip(*csv.reader(f1))) # in python 2 use zip(*csv.reader(f1))[0] column2 = next(zip(*csv.reader(f2))) column3 = next(zip(*csv.reader(f3))) </code></pre> <p>Now you have the indices of the items in the new matrix that should be 1. So you can use a list comprehension to create your columns:</p> <pre><code>a = [[1, 2, 7], [1, 3, 7, 4], [1, 2, 3, 4, 5, 6]] [[1 if j in a[i] else 0 for j in range(1, 8)] for i in range(3)] [[1, 1, 0, 0, 0, 0, 1], [1, 0, 1, 1, 0, 0, 1], [1, 1, 1, 1, 1, 1, 0]] </code></pre> <p>Or use <code>zip</code> again to create the rows:</p> <pre><code>In [138]: zip(*[[1 if j in a[i] else 0 for j in range(1, 8)] for i in range(3)]) Out[138]: [(1, 1, 1), (1, 0, 1), (0, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 1), (1, 1, 0)] </code></pre> <p>Note: if you don't know the maximum number of rows, you can get that by flattening the columns list.</p> <p>So here is the general code:</p> <pre><code>from itertools import chain import csv with open(csv1_name) as f1, open(csv2_name) as f2,open(csv3_name) as f3: column1 = next(zip(*csv.reader(f1))) # in python 2 use zip(*csv.reader(f1))[0] column2 = next(zip(*csv.reader(f2))) column3 = next(zip(*csv.reader(f3))) # skip the 'id' header of each column and convert the ids to ints columns = [[int(v) for v in col[1:]] for col in (column1, column2, column3)] m_row = max(chain.from_iterable(columns)) f_number = 3 new_columns = [[1 if j in columns[i] else 0 for j in range(1, m_row + 1)] for i in range(f_number)] # Creating new csv file with open('new_file.csv', 'w') as f: csvwriter = csv.writer(f) for id_, row in zip(range(1, m_row + 1), zip(*new_columns)): csvwriter.writerow((id_,) + row) </code></pre>
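<p>Since you mention coming from R, a pandas version of the same dummy-coding step may feel more familiar. This is only a sketch; the file names are hypothetical and it assumes each csv has an <code>id</code> header as in your sample data:</p> <pre><code>import pandas as pd

# Read only the id column of each file
frames = [pd.read_csv(name, usecols=['id'])
          for name in ('csv1.csv', 'csv2.csv', 'csv3.csv')]

# One row per unique id across all three files
df = pd.DataFrame({'id': sorted(set.union(*(set(f['id']) for f in frames)))})

# One 0/1 presence column per day
for day, f in enumerate(frames, start=1):
    df['csv%d' % day] = df['id'].isin(f['id']).astype(int)
</code></pre>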
0
2016-09-20T11:34:45Z
[ "python", "csv", "statistics" ]
fatal error: 'QTKit/QTKit.h' file not found when I build OpenCV on mac
39,590,741
<p>I have followed this <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/</a> to install OpenCV on my Mac. When I run the step $ make -j4, a problem happens:</p> <pre><code>fatal error: 'QTKit/QTKit.h' file not found #import &lt;QTKit/QTKit.h&gt; ^ 1 error generated. make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_qtkit.mm.o] Error 1 make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2 make: *** [all] Error 2 </code></pre>
3
2016-09-20T09:41:33Z
39,674,434
<p>Try building it like this instead:</p> <pre><code>cmake -DWITH_QUICKTIME=OFF -DWITH_GSTREAMER=OFF -DWITH_FFMPEG=OFF -DCMAKE_C_COMPILER=/usr/bin/clang -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_BUILD_TYPE=Release .. ; make -j4 </code></pre>
2
2016-09-24T08:52:03Z
[ "python", "osx", "opencv" ]
fatal error: 'QTKit/QTKit.h' file not found when I build OpenCV on mac
39,590,741
<p>I have followed this <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/</a> to install OpenCV on my Mac. When I run the step $ make -j4, a problem happens:</p> <pre><code>fatal error: 'QTKit/QTKit.h' file not found #import &lt;QTKit/QTKit.h&gt; ^ 1 error generated. make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_qtkit.mm.o] Error 1 make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2 make: *** [all] Error 2 </code></pre>
3
2016-09-20T09:41:33Z
39,802,235
<p>Can you try installing OpenCV on your Mac using brew?</p> <pre><code>brew reinstall opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib </code></pre> <p>This worked for me on macOS Sierra.</p>
0
2016-10-01T01:47:21Z
[ "python", "osx", "opencv" ]
fatal error: 'QTKit/QTKit.h' file not found when I build OpenCV on mac
39,590,741
<p>I have followed this <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/</a> to install OpenCV on my Mac. When I run the step $ make -j4, a problem happens:</p> <pre><code>fatal error: 'QTKit/QTKit.h' file not found #import &lt;QTKit/QTKit.h&gt; ^ 1 error generated. make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_qtkit.mm.o] Error 1 make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2 make: *** [all] Error 2 </code></pre>
3
2016-09-20T09:41:33Z
39,837,081
<p>I have the same error and none of the solutions proposed worked for me.</p>
0
2016-10-03T17:24:19Z
[ "python", "osx", "opencv" ]
Odoo 9 Empty Tree View
39,590,816
<p>I have created a new model and added a tree view to edit/view it on the res.partner view.</p> <p>I can create entries with no problem and they appear in the database. However, I cannot get the tree view to display any data at all, even with the filters removed.</p> <p>Here is the view xml:</p> <pre><code>&lt;page string="Projects/Training"&gt; &lt;field name="training_logs" context="{'default_tl_partner_id': active_id}" domain="[('tl_partner_id','=',active_id)]"&gt; &lt;tree string="Training Logs" editable="top"&gt; &lt;field name="tl_partner_id" invisible="1"/&gt; &lt;field name="tl_date"/&gt; &lt;field name="tl_trainer"/&gt; &lt;field name="tl_present"/&gt; &lt;field name="tl_summary"/&gt; &lt;/tree&gt; &lt;/field&gt; &lt;/page&gt; </code></pre> <p>Model structure as requested:</p> <pre><code>class training_log(osv.Model): _name='training.log' _description='Log of past training events' _order='tl_date' _columns = { 'tl_partner_id': fields.many2one('res.partner', 'Customer'), 'tl_date': fields.date("Training Date"), 'tl_trainer': fields.many2one('res.users', 'Trainer'), 'tl_present': fields.char('People Present'), 'tl_summary': fields.text('Training Summary') } </code></pre> <p>res.partner - I have left out all my non-related fields.</p> <pre><code>class res_partner(osv.osv): _inherit = "res.partner" _columns = { 'training_logs': fields.one2many('training.log','id','Training Logs'), } </code></pre>
1
2016-09-20T09:45:26Z
39,592,380
<p>When we define a <em>one2many</em> field, we need to give it the name of the proper inverse <em>many2one</em> field.</p> <p>In your case, you have given <em>id</em>, which refers to the record's own id column, not to the <em>many2one</em> field (<em>tl_partner_id</em>) that you created on the new <em>training.log</em> object.</p> <p>Try the following change:</p> <p>Replace</p> <pre><code>'training_logs': fields.one2many('training.log','id','Training Logs'), </code></pre> <p>with</p> <pre><code>'training_logs': fields.one2many('training.log','tl_partner_id','Training Logs'), </code></pre> <p>Afterwards, restart the Odoo server and upgrade the module.</p>
1
2016-09-20T11:02:49Z
[ "python", "xml", "openerp", "odoo-9" ]
mod_wsgi runtime using old python version
39,590,818
<p>I'm running a Django server on the httpd service. I had to upgrade my Python version (2.7.12). After installing the new Python I rebuilt mod_wsgi with the new Python (using the with-python argument). I also rebuilt mod_python with the new Python version. My new Python path is /usr/local/bin/python2.7. In /etc/httpd/conf.d/django.conf I added the following line: WSGIPythonHome /usr/local.</p> <p>However, I see this error in my error_log file (httpd error log):</p> <pre><code>[Tue Sep 20 12:32:12.743338 2016] [:warn] [pid 8567:tid 139972130834496] mod_wsgi: Compiled for Python/2.7.12. [Tue Sep 20 12:32:12.743376 2016] [:warn] [pid 8567:tid 139972130834496] mod_wsgi: Runtime using Python/2.7.5. </code></pre> <p>What am I missing?</p> <p>FYI: I cannot change or redirect the default Python that exists in /usr/bin/python because this affects CentOS package management.</p>
0
2016-09-20T09:45:28Z
39,592,334
<p>When you install a Python version of the same X.Y version as system Python, but different patch level, you need to force the runtime linker to use the shared Python library from the alternate location of your newer Python version.</p> <p>To do this, go back and rebuild mod_wsgi but set the <code>LD_RUN_PATH</code> environment variable when building mod_wsgi, to the directory containing the Python library for the alternate Python version.</p> <pre><code>make distclean ./configure --with-python=/usr/local/bin/python2.7 LD_RUN_PATH=/usr/local/lib make sudo make install </code></pre> <p>If this works correctly, you should be able to run:</p> <pre><code>ldd mod_wsgi.so </code></pre> <p>on the <code>mod_wsgi.so</code> file that was installed and it should be using the Python library from <code>/usr/local/lib</code> and not <code>/usr/lib</code>.</p> <p>You will also need to still set:</p> <pre><code>WSGIPythonHome /usr/local </code></pre>
0
2016-09-20T11:00:06Z
[ "python", "django", "apache", "mod-wsgi" ]
Plot RGB satellite image in Cassini-Soldner projection on a map in Python
39,590,887
<p>I have a geo-referenced RGB satellite image from the MODIS instrument in geotiff format. <strong>What is the correct way to plot it on a map using cartopy and preserve the RGB colours?</strong></p> <p>The main obstacle I guess is the projection of the image which is Cassini-Soldner:</p> <pre><code>import numpy as np from osgeo import gdal, osr ds = gdal.Open('modis_201303261252_rgb.tif') print(ds.GetGeoTransform()) (-1669791.8857914428, 250.0, 0.0, 1669792.327327792, 0.0, -250.0) proj = ds.GetProjection() inproj = osr.SpatialReference() inproj.ImportFromWkt(proj) print(inproj) PROJCS["unnamed", GEOGCS["unnamed ellipse", DATUM["unknown", SPHEROID["unnamed",6378137,0]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Cassini_Soldner"], PARAMETER["latitude_of_origin",72], PARAMETER["central_meridian",-4], PARAMETER["false_easting",0], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]]] </code></pre> <p>I tried to follow this example <a href="https://ocefpaf.github.io/python4oceanographers/blog/2015/03/02/geotiff/" rel="nofollow">https://ocefpaf.github.io/python4oceanographers/blog/2015/03/02/geotiff/</a> and use cartopy to define a projection from an EPSG code. So I googled for Cassini-Soldner EPSG code (9806), but cartopy's <code>ccrs.epsg()</code> doesn't recognise it.</p> <p>I want to use <code>plt.imshow()</code> method, but I'm a bit confused what to use as a projection keyword when the axis is created and what to pass as a <code>transform=</code> argument in <code>imshow</code>.</p>
0
2016-09-20T09:48:35Z
39,612,777
<p>(I would add this as a comment but I need 50 rep.)</p> <p>Out of interest, how did you come by a Cassini projection MODIS RGB?</p> <p>An alternative is to use a Cartopy pull request I've made, which allows for the time dimension to be used with GeoAxes.add_wmts(), so that you can pull MODIS RGB imagery tiles for your time from NASA GIBS straight into a Cartopy GeoAxes: <a href="https://github.com/SciTools/cartopy/pull/788" rel="nofollow">https://github.com/SciTools/cartopy/pull/788</a></p>
0
2016-09-21T09:35:58Z
[ "python", "matplotlib", "gdal", "imshow", "cartopy" ]
flycheck: undefined name 'xrange'
39,590,902
<p>I am running <code>emacs24</code> and I'm new to Emacs. I have some code in Python 2.7 that I am checking with <code>flycheck</code>. When I check the syntax, I get:</p> <pre><code>error F821 undefined name 'xrange' (python-flake8) </code></pre> <p>I understand that <code>xrange</code> is not in Python 3, but here I'm on Python 2.7. I guess the checker is configured to run on Python 3, since <code>raw_input</code> also yields the same error.</p> <p>How do I fix this?</p>
0
2016-09-20T09:49:10Z
39,591,842
<p>Flycheck does not care about the difference between Python 2 and Python 3. It runs the first <code>flake8</code> executable it finds in <code>exec-path</code>, and in your case, that's apparently a flake8 installed for Python 3.</p> <p>You need to install flake8 for Python 2 and point Flycheck to that executable, either by putting the target directory in <code>exec-path</code> or with <code>M-x flycheck-set-checker-executable</code>. </p> <p>I recommend using a dedicated Python 2 virtualenv for your project and having Emacs set <code>python-shell-virtualenv-root</code> to that directory in Python buffers (for instance with <a href="https://www.gnu.org/software/emacs/manual/html_node/emacs/Directory-Variables.html" rel="nofollow">Directory Variables</a>). Then you can point <code>exec-path</code> to that virtualenv. With a little <a href="https://github.com/lunaryorn/.emacs.d/blob/master/lisp/flycheck-virtualenv.el" rel="nofollow">custom Emacs Lisp</a> you can even automate that.</p>
0
2016-09-20T10:31:44Z
[ "python", "emacs", "flake8", "flycheck" ]
flycheck: undefined name 'xrange'
39,590,902
<p>I am running <code>emacs24</code> and I'm new to Emacs. I have some code in Python 2.7 that I am checking with <code>flycheck</code>. When I check the syntax, I get:</p> <pre><code>error F821 undefined name 'xrange' (python-flake8) </code></pre> <p>I understand that <code>xrange</code> is not in Python 3, but here I'm on Python 2.7. I guess the checker is configured to run on Python 3, since <code>raw_input</code> also yields the same error.</p> <p>How do I fix this?</p>
0
2016-09-20T09:49:10Z
39,600,290
<p>I was struggling with the same problem, and can recommend my solution: <a href="https://github.com/rmuslimov/flycheck-local-flake8" rel="nofollow">https://github.com/rmuslimov/flycheck-local-flake8</a>. It's trivial, and it will force <strong>flycheck</strong> to use the proper <strong>flake8</strong> executable from your virtualenv.</p> <p>I'd also recommend adding a <code>setup.cfg</code> to each Python project you work on. Flake8 has a number of options which may be defined there. For example, here is mine: </p> <pre><code>[metadata] name=fastttrace version=release.5.9.0 [flake8] exclude = tests/*, migrations/* ignore = D100,D101,D102,D103,D205,D400,E731 import-order-style = google max-complexity = 15 </code></pre> <p>It allows you to have separate flake8 rules per project and store them in the repo, which is a convenient way to share them with other devs.</p>
0
2016-09-20T17:26:51Z
[ "python", "emacs", "flake8", "flycheck" ]
Is there any performance reason to use ndim 1 or 2 vectors in numpy?
39,590,942
<p>This seems like a pretty basic question, but I didn't find anything related to it on stack. Apologies if I missed an existing question.</p> <p>I've seen some mathematical/linear algebraic reasons why one might want to use numpy vectors "proper" (i.e. ndim 1), as opposed to row/column vectors (i.e. ndim 2).</p> <p>But now I'm wondering: are there any (significant) <strong>efficiency</strong> reasons why one might pick one over the other? Or is the choice pretty much arbitrary in that respect? </p> <p>(edit) To clarify: By "ndim 1 vs ndim 2 vectors" I mean representing a vector that contains, say, numbers 3 and 4 as either:</p> <ul> <li><p>np.array([3, 4]) # ndim 1</p></li> <li><p>np.array([[3, 4]]) # ndim 2</p></li> </ul> <p>The numpy documentation seems to lean towards the first case as the default, but like I said, I'm wondering if there's any <em>performance</em> difference.</p>
3
2016-09-20T09:50:48Z
39,598,657
<p>If you use numpy properly, then no - it is not a consideration.</p> <p>If you look at the <a href="https://docs.scipy.org/doc/numpy/reference/internals.html" rel="nofollow">numpy internals documentation</a>, you can see that</p> <blockquote> <p>Numpy arrays consist of two major components, the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a contiguous (and fixed) block of memory containing fixed sized data items. Numpy also contains a significant set of data that describes how to interpret the data in the data buffer.</p> </blockquote> <p>So, irrespective of the dimensions of the array, all data is stored in a contiguous buffer. Now consider</p> <pre><code>a = np.array([1, 2, 3, 4]) </code></pre> <p>and</p> <pre><code>b = np.array([[1, 2], [3, 4]]) </code></pre> <p>It is true that accessing <code>a[1]</code> requires (slightly) fewer operations than <code>b[1, 1]</code> (as the translation of <code>1, 1</code> to the flat index requires some calculations), but, for high performance, <a href="https://www.safaribooksonline.com/library/view/python-for-data/9781449323592/ch04.html" rel="nofollow">vectorized operations</a> are required anyway. </p> <p>If you want to sum all elements in the arrays, then, in both cases you would use the same thing: <code>a.sum()</code> and <code>b.sum()</code>, and the sum would be over elements in contiguous memory anyway. Conversely, if the data is inherently 2d, then you could do things like <code>b.sum(axis=1)</code> to sum each row. Doing this yourself in a 1d array would be error prone, and not more efficient.</p> <p>So, basically, a 2d array, if it is natural for the problem, just gives greater functionality, with zero or negligible overhead.</p>
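<p>If you want to convince yourself, a quick micro-benchmark is easy to write. This is only a sketch (the exact numbers depend on your machine), but since both vectorized sums run over the same contiguous buffer, the timings should come out essentially identical:</p> <pre><code>import numpy as np
import timeit

a = np.arange(1000000)          # ndim 1
b = a.reshape(1000, 1000)       # ndim 2, same underlying data buffer

# Vectorized reductions over the same contiguous memory
print(timeit.timeit(a.sum, number=100))
print(timeit.timeit(b.sum, number=100))
</code></pre>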
2
2016-09-20T15:52:30Z
[ "python", "numpy", "vector" ]
Formatting datetime for plotting time-series with pandas
39,590,948
<p>I am plotting some time-series with a pandas dataframe and I ran into the problem of gaps on weekends. What can I do to remove the gaps in the time-series plot? </p> <pre><code>date_concat = pd.to_datetime(pd.Series(df.index),infer_datetime_format=True) pca_factors.index = date_concat pca_colnames = ['Outright', 'Curve', 'Convexity'] pca_factors.columns = pca_colnames fig,axes = plt.subplots(2) pca_factors.Curve.plot(ax=axes[0]); axes[0].set_title('Curve') pca_factors.Convexity.plot(ax=axes[1]); axes[1].set_title('Convexity'); plt.axhline(linewidth=2, color = 'g') fig.tight_layout() fig.savefig('convexity.png') </code></pre> <p>Partial plot below: <a href="http://i.stack.imgur.com/q2WIl.png" rel="nofollow"><img src="http://i.stack.imgur.com/q2WIl.png" alt="enter image description here"></a></p> <p>Ideally, I would like the time-series to only show the weekdays and ignore weekends.</p>
0
2016-09-20T09:50:53Z
39,600,937
<p>To make MaxU's suggestion more explicit:</p> <ul> <li>convert to datetime as you have done, but drop the weekends</li> <li>reset the index and plot the data via this default Int64Index </li> <li>change the x tick labels </li> </ul> <p>Code:</p> <pre><code>date_concat = date_concat[date_concat.dt.weekday &lt; 5] # drop weekends pca_factors = pca_factors.reset_index() # from MaxU's comment pca_factors['Convexity'].plot() # ^^^ plt.xticks(pca_factors.index, date_concat) # change the x tick labels </code></pre>
0
2016-09-20T18:08:41Z
[ "python", "pandas", "matplotlib", "dataframe" ]
How to write general get methods for a class?
39,591,045
<p>Recently I started using OOP in Python. I would like to write a general get or set method for the initialized attributes. For example, there is the class Song with several attributes and a corresponding get method. I would like to avoid the multiple if statements. With just two attributes it does not matter, but if there are &gt;5 attributes the code would be difficult to read. Is it possible to use a string in args to get the value from <code>__init__</code> without defining all possible cases?</p> <pre><code>class Song(object): def __init__(self,title,prod): self.title = title self.prod = prod def getParam(self,*args): retPar = dict() if 'title' in args: print(self.title) retPar['title'] = self.title if 'prod' in args: print(self.prod) retPar['prod'] = self.prod return(retPar) </code></pre> <p>I am not sure if this is possible, because I could not find anything. How can I do this? </p>
1
2016-09-20T09:56:36Z
39,591,542
<blockquote> <p>provide a function for colleagues who are not familiar with the python syntax, such that they do not have to access the attributes directly. For plotting and such things</p> <p>because python is not necessarily taught and the dot syntax is confusing for people who have basic knowledge in matlab</p> </blockquote> <p>I'd argue that this can easily be taught and you shouldn't bend over backwards for something so simple… but assuming this is really what you want, this looks like a much better idea:</p> <pre><code>class Song(object): def __init__(self, title, prod): self.title = title self.prod = prod def __getitem__(self, key): return getattr(self, key) song = Song('Foo', 'Bar') print song['title'] </code></pre> <p>See <a href="https://docs.python.org/2/reference/datamodel.html#object.__getitem__" rel="nofollow">https://docs.python.org/2/reference/datamodel.html#object.__getitem__</a> and <a href="https://docs.python.org/2/library/functions.html#getattr" rel="nofollow">https://docs.python.org/2/library/functions.html#getattr</a>.</p>
0
2016-09-20T10:18:37Z
[ "python", "oop", "methods", "args" ]
Apache Lucene: How to use TokenStream to manually accept or reject a token when indexing
39,591,094
<p>I am looking for a way to write a custom index with Apache Lucene (PyLucene to be precise, but a Java answer is fine).</p> <p>What I would like to do is the following: When adding a document to the index, Lucene will tokenize it, remove stop words, etc. This is usually done with the <code>Analyzer</code>, if I am not mistaken.</p> <p>What I would like to implement is the following: Before Lucene stores a given term, I would like to perform a lookup (say, in a dictionary) to check whether to keep the term or discard it (if the term is present in my dictionary, I keep it; otherwise I discard it). </p> <p>How should I proceed? </p> <p>Here is (in Python) my custom implementation of the <code>Analyzer</code>:</p> <pre><code>class CustomAnalyzer(PythonAnalyzer): def createComponents(self, fieldName, reader): source = StandardTokenizer(Version.LUCENE_4_10_1, reader) filter = StandardFilter(Version.LUCENE_4_10_1, source) filter = LowerCaseFilter(Version.LUCENE_4_10_1, filter) filter = StopFilter(Version.LUCENE_4_10_1, filter, StopAnalyzer.ENGLISH_STOP_WORDS_SET) ts = tokenStream.getTokenStream() token = ts.addAttribute(CharTermAttribute.class_) offset = ts.addAttribute(OffsetAttribute.class_) ts.reset() while ts.incrementToken(): startOffset = offset.startOffset() endOffset = offset.endOffset() term = token.toString() # accept or reject term ts.end() ts.close() # How to store the terms in the index now? return ???? </code></pre> <p>Thank you in advance for your guidance!</p> <p><strong>EDIT 1</strong>: After digging into Lucene's documentation, I figured it had something to do with the <code>TokenStreamComponents</code>. It returns a TokenStream with which you can iterate through the Token list of the field you are indexing.</p> <p>Now there is something to do with the <code>Attributes</code> that I do not understand. Or more precisely, I can read the tokens, but have no idea how I should proceed afterward.</p> <p><strong>EDIT 2</strong>: I found this <a href="http://stackoverflow.com/questions/2638200/how-to-get-a-token-from-a-lucene-tokenstream">post</a> where they mention the use of <code>CharTermAttribute</code>. However (in Python, though) I cannot access or get a <code>CharTermAttribute</code>. Any thoughts?</p> <p><strong>EDIT 3</strong>: I can now access each term, see the updated code snippet. Now what is left to be done is actually <em>storing the desired terms</em>...</p>
2
2016-09-20T09:58:37Z
39,612,844
<p>The way I was trying to solve the problem was wrong. This <a href="http://stackoverflow.com/questions/24145688/how-to-tokenize-only-certain-words-in-lucene/24152554?noredirect=1#comment66511266_24152554">post</a> and <em>femtoRgon</em>'s answer were the solution.</p> <p>By defining a filter extending <code>PythonFilteringTokenFilter</code>, I can make use of the function <code>accept()</code> (like the one used in the <code>StopFilter</code>, for instance).</p> <p>Here is the corresponding code snippet:</p> <pre><code>class MyFilter(PythonFilteringTokenFilter): def __init__(self, version, tokenStream): super(MyFilter, self).__init__(version, tokenStream) self.termAtt = self.addAttribute(CharTermAttribute.class_) def accept(self): term = self.termAtt.toString() accepted = False # Do whatever is needed with the term # accepted = ... (True/False) return accepted </code></pre> <p>Then just append the filter to the other filters (as in the code snippet of the question):</p> <pre><code>filter = MyFilter(Version.LUCENE_4_10_1, filter) </code></pre>
0
2016-09-21T09:38:45Z
[ "java", "python", "apache", "indexing", "lucene" ]
What will happen if I delete a input file while some program is reading data from that file?
39,591,109
<p>If there is a Python script doing this:</p> <pre><code>with open('large_input_file.log', 'rb') as f : for each_line in f : do something ..... </code></pre> <p>Let's call this script <code>a.py</code>.</p> <p><code>large_input_file.log</code> is about 16GB. <code>a.py</code> will take hours to process this file.</p> <p>What will happen if I do this (under Linux):</p> <ol> <li><p>keep <code>a.py</code> running</p></li> <li><p>delete <code>large_input_file.log</code></p></li> <li><p>replace <code>large_input_file.log</code> with different content but the same name</p></li> </ol> <p>Is <code>a.py</code> able to get the correct data from <code>large_input_file.log</code> before I delete it? (I guess this is what will happen.)</p> <p>Or will <code>a.py</code> get new data starting at the same offset in the new <code>large_input_file.log</code>?</p> <p>Can you explain it at the kernel or filesystem level? (How does Linux accomplish this?)</p> <p>-----------------Below is added after some answers------------------------</p> <p>What if my disk size is 16GB, so it can store only one <code>large_input_file.log</code>?</p> <p>What will happen if I delete <code>large_input_file.log</code> and create another 16GB <code>large_input_file.log</code> file?</p>
2
2016-09-20T09:59:14Z
39,591,698
<p>While a file is still open, its data is still there on the disk, and your Python program will read the whole original file, because it reads via the inode number, not the name. The new file has the same name but a new inode. That's also why, when you delete a logfile from /var/log while a process still holds it open, df shows the same usage as before the delete. While it is open, the file is physically on the disk. When it is closed, the kernel frees up the space.</p>
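<p>A minimal sketch you can run on Linux to see this behaviour yourself (note that this also means the disk must hold both the old and the new file until the old one is closed, so on a 16GB disk a second 16GB file would fail with a "no space left on device" error):</p> <pre><code>import os

with open('demo.log', 'w') as f:
    f.write('old contents\n')

f = open('demo.log', 'rb')
os.remove('demo.log')              # unlinks the name; the inode lives on

with open('demo.log', 'w') as g:   # same name, brand new inode
    g.write('new contents\n')

print(f.read())                    # still prints b'old contents\n'
f.close()                          # now the kernel can reclaim the old blocks
</code></pre>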
1
2016-09-20T10:25:12Z
[ "python", "linux" ]
What will happen if I delete a input file while some program is reading data from that file?
39,591,109
<p>If there is a Python script doing this:</p> <pre><code>with open('large_input_file.log', 'rb') as f : for each_line in f : do something ..... </code></pre> <p>Let's call this script <code>a.py</code>.</p> <p><code>large_input_file.log</code> is about 16GB. <code>a.py</code> will take hours to process this file.</p> <p>What will happen if I do this (under Linux):</p> <ol> <li><p>keep <code>a.py</code> running</p></li> <li><p>delete <code>large_input_file.log</code></p></li> <li><p>replace <code>large_input_file.log</code> with different content but the same name</p></li> </ol> <p>Is <code>a.py</code> able to get the correct data from <code>large_input_file.log</code> before I delete it? (I guess this is what will happen.)</p> <p>Or will <code>a.py</code> get new data starting at the same offset in the new <code>large_input_file.log</code>?</p> <p>Can you explain it at the kernel or filesystem level? (How does Linux accomplish this?)</p> <p>-----------------Below is added after some answers------------------------</p> <p>What if my disk size is 16GB, so it can store only one <code>large_input_file.log</code>?</p> <p>What will happen if I delete <code>large_input_file.log</code> and create another 16GB <code>large_input_file.log</code> file?</p>
2
2016-09-20T09:59:14Z
39,591,920
<p>Let's create a file:</p> <pre><code># echo foo &gt; test.txt </code></pre> <p>Now we'll use <code>tail</code> to monitor it for changes:</p> <pre><code># tail -f test.txt foo </code></pre> <p>Let's open another tab on our terminal, and check the pid of our <code>tail</code> process:</p> <pre><code># ps aux | grep -i tail root 5458 0.0 0.0 7484 724 ? S Sep15 0:13 tail -f -n 0 /var/log/syslog root 5919 0.0 0.0 7484 784 ? S Sep15 0:13 tail -f -n 0 /var/log/syslog root 6381 0.0 0.0 7484 840 ? S Sep15 0:14 tail -f -n 0 /var/log/syslog emil 27789 0.0 0.0 8852 784 pts/8 S+ 12:26 0:00 tail -f test.txt emil 27826 0.0 0.0 15752 1016 pts/9 S+ 12:26 0:00 grep -i tail </code></pre> <p>So, in my case the pid is 27789. We can look at the open files of the process by checking the <code>/proc/27789/fd</code> directory:</p> <pre><code># ls -lah /proc/27789/fd/ total 0 dr-x------ 2 emil emil 0 Sep 20 12:26 . dr-xr-xr-x 9 emil emil 0 Sep 20 12:26 .. lrwx------ 1 emil emil 64 Sep 20 12:26 0 -&gt; /dev/pts/8 lrwx------ 1 emil emil 64 Sep 20 12:26 1 -&gt; /dev/pts/8 lrwx------ 1 emil emil 64 Sep 20 12:26 2 -&gt; /dev/pts/8 lr-x------ 1 emil emil 64 Sep 20 12:26 3 -&gt; /home/emil/test.txt lr-x------ 1 emil emil 64 Sep 20 12:26 4 -&gt; anon_inode:inotify </code></pre> <p>Here we see that <code>tail</code> has a file descriptor called 3 to <code>test.txt</code>. What if we delete the file?</p> <pre><code># rm test.txt # ls -lah /proc/27789/fd total 0 dr-x------ 2 emil emil 0 Sep 20 12:26 . dr-xr-xr-x 9 emil emil 0 Sep 20 12:26 .. lrwx------ 1 emil emil 64 Sep 20 12:26 0 -&gt; /dev/pts/8 lrwx------ 1 emil emil 64 Sep 20 12:26 1 -&gt; /dev/pts/8 lrwx------ 1 emil emil 64 Sep 20 12:26 2 -&gt; /dev/pts/8 lr-x------ 1 emil emil 64 Sep 20 12:26 3 -&gt; /home/emil/test.txt (deleted) lr-x------ 1 emil emil 64 Sep 20 12:26 4 -&gt; anon_inode:inotify </code></pre> <p>The file descriptor still exists, but <code>ls</code> will helpfully let us know that the file has been deleted.</p> <p>As Igor says, each file has a physical location on disk where the raw data exists. In order to find files, the filesystem maintains directory entries that map file names to inodes, which in turn point to the actual data. Removing a file doesn't wipe the data from disk; it simply removes the directory entry (and decrements the inode's link count). The data will still exist, until it's explicitly overwritten by something else. In this specific case, though, the kernel contains extra code to make sure that the file continues to exist - and won't be overwritten - until it's no longer open by any process.</p>
5
2016-09-20T10:36:18Z
[ "python", "linux" ]
Binary search for a string in a list of strings
39,591,238
<p>I have a list of permutations of a string, and a list full of words from a lexicon. For each permutation, I want to find out if it's in the list of words. I tried a while loop and just brute-forced through, and that gave me a bunch of words from the wordlist. But when I tried this binary search:</p> <pre><code>def binärSökning(word, wordList): first = 0 last = len(wordList) - 1 found = False while first &lt;= last and not found: middle = (first + last)//2 if wordList[middle] == word: found = True else: if word &lt; wordList[middle]: last = middle - 1 else: first = middle + 1 return found </code></pre> <p>I got nothing in return, just an empty list (just False; if it returns True, the word is added to another list). Can anyone please tell me why it's not returning True when it hits a good word?</p> <p>Edit: What's calling the function is just a for-loop:</p> <pre><code>foundWords = set() for word in listOfWords: if binärSökning(word, NewWordList): foundWords.add(word) return foundWords </code></pre> <p>Where NewWordList is a narrower list of possible words it could be; nothing wrong with it, since it worked when I tried brute force.</p> <p>What I would like as a result is that whenever the searching function returns True, the for-loop adds that word to a set that is then presented to the user once the program finishes.</p>
-2
2016-09-20T10:04:59Z
39,591,430
<p>It was working fine for me. The following was my code:</p> <pre><code>def binärSökning(word, wordList): first = 0 last = len(wordList) - 1 found = False while first &lt;= last and not found: middle = (first + last)//2 if wordList[middle] == word: found = True else: if word &lt; wordList[middle]: last = middle - 1 else: first = middle + 1 return found </code></pre> <p>The following was my output:</p> <pre><code>&gt;&gt;&gt; binärSökning('hi', ['hello', 'hi', 'how']) True &gt;&gt;&gt; binärSökning('tim', ['hello', 'hi', 'how']) False </code></pre> <p>The following worked fine for me:</p> <pre><code>&gt;&gt;&gt; NewWordList = ['hello', 'hi', 'how'] &gt;&gt;&gt; listOfWords = ['hi', 'how', 'bye'] &gt;&gt;&gt; foundWords = set() &gt;&gt;&gt; for word in listOfWords: if binärSökning(word, NewWordList): foundWords.add(word) &gt;&gt;&gt; foundWords set(['how', 'hi']) </code></pre>
0
2016-09-20T10:14:01Z
[ "python", "python-3.x", "binary-search" ]
Binary search for a string in a list of strings
39,591,238
<p>I have a list of permutations of a string, and a list full of words from a lexicon. For each permutation, I want to find out if it's in the list of words. I tried a while loop and just brute-forced through, and that gave me a bunch of words from the wordlist. But when I tried this binary search:</p> <pre><code>def binärSökning(word, wordList): first = 0 last = len(wordList) - 1 found = False while first &lt;= last and not found: middle = (first + last)//2 if wordList[middle] == word: found = True else: if word &lt; wordList[middle]: last = middle - 1 else: first = middle + 1 return found </code></pre> <p>I got nothing in return, just an empty list (just False; if it returns True, the word is added to another list). Can anyone please tell me why it's not returning True when it hits a good word?</p> <p>Edit: What's calling the function is just a for-loop:</p> <pre><code>foundWords = set() for word in listOfWords: if binärSökning(word, NewWordList): foundWords.add(word) return foundWords </code></pre> <p>Where NewWordList is a narrower list of possible words it could be; nothing wrong with it, since it worked when I tried brute force.</p> <p>What I would like as a result is that whenever the searching function returns True, the for-loop adds that word to a set that is then presented to the user once the program finishes.</p>
-2
2016-09-20T10:04:59Z
39,591,527
<p>If you have a list of words, it is as simple as making a single if statement as follows: </p> <pre><code>def binärSökning(word, wordList): found = False if word in wordList: found = True return found </code></pre>
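<p>One caveat worth adding: binary search (including the function in the question) only works on a <em>sorted</em> list, and silently returning False on an unsorted wordList is the likely cause of the symptom described. If the list is sorted, the standard library's <code>bisect</code> module gives the same O(log n) lookup without hand-rolling the loop. A minimal sketch:</p> <pre><code>from bisect import bisect_left

def binary_search(word, wordList):
    # wordList must be sorted for this (or any binary search) to work
    i = bisect_left(wordList, word)
    return i &lt; len(wordList) and wordList[i] == word
</code></pre>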
0
2016-09-20T10:18:03Z
[ "python", "python-3.x", "binary-search" ]
Finding efficiently pandas (part of) rows with unique values
39,591,334
<p>Given a pandas dataframe with a row per individual/record, where a row includes a property value and its evolution across time (0 to N). </p> <p>A schedule includes the estimated values of a variable 'property' for a number of entities from day 1 to day 10 in the following example. </p> <p>I want to filter entities with unique values for a given period and get those values.</p> <pre><code>csv=',property,1,2,3,4,5,6,7,8,9,10\n0,100011,0,0,0,0,3,3,3,3,3,0\n1,100012,0,0,0,0,2,2,2,8,8,0\n2, \ 100012,0,0,0,0,2,2,2,2,2,0\n3,100012,0,0,0,0,0,0,0,0,0,0\n4,100011,0,0,0,0,2,2,2,2,2,0\n5, \ 180011,0,0,0,0,2,2,2,2,2,0\n6,110012,0,0,0,0,0,0,0,0,0,0\n7,110011,0,0,0,0,3,3,3,3,3,0\n8, \ 110012,0,0,0,0,3,3,3,3,3,0\n9,110013,0,0,0,0,0,0,0,0,0,0\n10,100011,0,0,0,0,3,3,3,3,4,0' from StringIO import StringIO import numpy as np import pandas as pd schedule = pd.read_csv(StringIO(csv), index_col=0) print schedule property 1 2 3 4 5 6 7 8 9 10 0 100011 0 0 0 0 3 3 3 3 3 0 1 100012 0 0 0 0 2 2 2 8 8 0 2 100012 0 0 0 0 2 2 2 2 2 0 3 100012 0 0 0 0 0 0 0 0 0 0 4 100011 0 0 0 0 2 2 2 2 2 0 5 180011 0 0 0 0 2 2 2 2 2 0 6 110012 0 0 0 0 0 0 0 0 0 0 7 110011 0 0 0 0 3 3 3 3 3 0 8 110012 0 0 0 0 3 3 3 3 3 0 9 110013 0 0 0 0 0 0 0 0 0 0 10 100011 0 0 0 0 3 3 3 3 4 0 </code></pre> <p>I want to find records/individuals for whom the property has not changed during a given period, and the corresponding unique values.</p> <p>Here is what I came up with: I want to locate individuals with property in [100011, 100012, 1100012] between days 7 and 10.</p> <pre><code>props = [100011, 100012, 1100012] begin = 7 end = 10 res = schedule['property'].isin(props) df = schedule.ix[res, begin:end] print "df \n%s " %df </code></pre> <p>We have:</p> <pre><code>df 7 8 9 0 3 3 3 1 2 8 8 2 2 2 2 3 0 0 0 4 2 2 2 10 3 3 4 res = df.apply(lambda x: np.unique(x).size == 1, axis=1) print "res : %s\n" %res df_f = df.ix[res,] print "df filtered %s \n" % df_f res = pd.Series(df_f.values.ravel()).unique().tolist() print "unique values : %s " %res </code></pre> <p>Giving:</p> <pre><code>res : 0 True 1 False 2 True 3 True 4 True 10 False dtype: bool df filtered 7 8 9 0 3 3 3 2 2 2 2 3 0 0 0 4 2 2 2 unique values : [3, 2, 0] </code></pre> <p>As those operations need to be run many times (millions of times) on a million-row dataframe, I need to be able to run them as quickly as possible.</p> <p>(@MaxU): schedule can be seen as a database/repository that is updated many times. The repository is then also queried many times for unique values.</p> <p>Would you have some ideas for improvements / alternate ways?</p>
1
2016-09-20T10:09:29Z
39,591,809
<p>Given your df</p> <pre><code> 7 8 9 0 3 3 3 1 2 8 8 2 2 2 2 3 0 0 0 4 2 2 2 10 3 3 4 </code></pre> <p>You can simplify your code to:</p> <pre><code>df_f = df[df.apply(pd.Series.nunique, axis=1) == 1] print(df_f) 7 8 9 0 3 3 3 2 2 2 2 3 0 0 0 4 2 2 2 </code></pre> <p>And the final step to:</p> <pre><code>res = df_f.iloc[:,0].unique().tolist() print(res) [3, 2, 0] </code></pre> <p>It's not fully vectorised, but maybe this clarifies things a bit?</p>
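<p>Following up on the vectorisation point, one fully vectorised variant (a sketch) is to compare every column against the first one and keep the rows where they all match:</p> <pre><code># Rows where every column equals the first column,
# i.e. the row holds a single unique value
df_f = df[df.eq(df.iloc[:, 0], axis=0).all(axis=1)]
res = df_f.iloc[:, 0].unique().tolist()
</code></pre>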
0
2016-09-20T10:30:06Z
[ "python", "performance", "pandas" ]
Dictionary multiprocessing
39,591,335
<p>I want to parallelize the processing of a dictionary using the multiprocessing library.</p> <p>My problem can be reduced to this code:</p> <pre><code>from multiprocessing import Manager,Pool def modify_dictionary(dictionary): if((3,3) not in dictionary): dictionary[(3,3)]=0. for i in range(100): dictionary[(3,3)] = dictionary[(3,3)]+1 return 0 if __name__ == "__main__": manager = Manager() dictionary = manager.dict(lock=True) jobargs = [(dictionary) for i in range(5)] p = Pool(5) t = p.map(modify_dictionary,jobargs) p.close() p.join() print dictionary[(3,3)] </code></pre> <p>I create a pool of 5 workers, and each worker should increment dictionary[(3,3)] 100 times. So, if the locking process works correctly, I expect dictionary[(3,3)] to be 500 at the end of the script. </p> <p>However, something in my code must be wrong, because this is not what I get: the locking process does not seem to be "activated", and dictionary[(3,3)] always has a value &lt; 500 at the end of the script.</p> <p>Could you help me?</p>
2
2016-09-20T10:09:29Z
39,595,657
<p>The problem is with this line:</p> <pre><code>dictionary[(3,3)] = dictionary[(3,3)]+1 </code></pre> <p>Three things happen on that line:</p> <ul> <li>Read the value of the dictionary key (3,3)</li> <li>Increment the value by 1</li> <li>Write the value back again</li> </ul> <p>But the increment part is happening outside of any locking.</p> <p>The whole sequence must be atomic, and must be synchronized across all processes. Otherwise the processes will interleave, giving you a lower-than-expected total. </p> <p>Holding a lock whilst incrementing the value ensures that you get the total of 500 you expect:</p> <pre><code>from multiprocessing import Manager,Pool,Lock lock = Lock() def modify_array(dictionary): if((3,3) not in dictionary): dictionary[(3,3)]=0. for i in range(100): with lock: dictionary[(3,3)] = dictionary[(3,3)]+1 return 0 if __name__ == "__main__": manager = Manager() dictionary = manager.dict(lock=True) jobargs = [(dictionary) for i in range(5)] p = Pool(5) t = p.map(modify_array,jobargs) p.close() p.join() print dictionary[(3,3)] </code></pre>
0
2016-09-20T13:36:54Z
[ "python", "dictionary", "python-multiprocessing" ]
Reading text file read line by line not working
39,591,364
<p>I know too many questions have been asked on this topic, but I'm still not able to find the reason for my failure to read a text file line by line in Python.</p> <p>I'm using Python 3.4.3 and I want to read a text file line by line.</p> <pre><code>with open('D:\filename.txt') as fp: for line in fp: print (line) </code></pre> <p>I copy-pasted the above lines into the command prompt, but nothing is printed.</p> <p>I have a file with Sathiya as its text.</p> <p>I just want to print this text in my command prompt. What am I doing wrong here?</p> <p><a href="http://i.stack.imgur.com/gJ0yO.png" rel="nofollow"><img src="http://i.stack.imgur.com/gJ0yO.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/INaXz.png" rel="nofollow"><img src="http://i.stack.imgur.com/INaXz.png" alt="enter image description here"></a></p>
0
2016-09-20T10:10:47Z
39,591,683
<p>The backslash in D:<strong>\f</strong>ilename.txt combines with the following <code>f</code> to form the escape sequence <code>\f</code> (form feed). That's why open could not find the file. To handle this situation you can do one of the following.</p>

<p>You can escape the <code>\</code> char in the path:</p>

<pre><code>with open('D:\\filename.txt') as fp:
    for line in fp:
        print (line)
</code></pre>

<p>There are other ways as well. For example, you could use forward slashes:</p>

<pre><code>with open('D:/filename.txt') as fp:
    ...
</code></pre>

<p>Or you could use the path helper methods:</p>

<pre><code>import os

file_path = os.path.join('d:\\', 'filename.txt')
with open(file_path) as fp:
    ...
</code></pre>

<p>(Note that <code>os.path.join('d:', 'filename.txt')</code> would produce the drive-relative path <code>d:filename.txt</code>, which is resolved against the current directory on that drive rather than its root.)</p>

<p>You can also use a raw string:</p>

<pre><code>with open(r'D:\filename.txt') as fp:
    ...
</code></pre>
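<p>Since you are on Python 3.4, <code>pathlib</code> is another option that sidesteps separator issues entirely. A small sketch (illustrative only):</p>

<pre><code>from pathlib import Path

file_path = Path('D:/') / 'filename.txt'
with file_path.open() as fp:  # Path.open(), since the builtin open() only accepts Path objects from 3.6 on
    for line in fp:
        print(line)
</code></pre>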
1
2016-09-20T10:24:21Z
[ "python", "python-3.x" ]
Should I notify while holding the lock on a condition or after releasing it?
39,591,390
<p>The <a href="https://docs.python.org/3/library/threading.html" rel="nofollow">Python <code>threading</code> documentation</a> lists the following example of a producer:</p>

<pre><code>from threading import Condition

cv = Condition()

# Produce one item
with cv:
    make_an_item_available()
    cv.notify()
</code></pre>

<p>I had to review threading and I looked at <a href="http://en.cppreference.com/w/cpp/thread/condition_variable/notify_all" rel="nofollow">the C++ documentation, which states</a>:</p>

<blockquote>
<p>The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s); in fact doing so is a pessimization, since the notified thread would immediately block again, waiting for the notifying thread to release the lock.</p>
</blockquote>

<p>That would suggest doing something like this instead, with the notify moved outside the lock:</p>

<pre><code># Produce one item
with cv:
    make_an_item_available()
cv.notify()
</code></pre>
0
2016-09-20T10:12:03Z
39,591,529
<p>Don't read C++ documentation to understand Python APIs. Per <a href="https://docs.python.org/3/library/threading.html#threading.Condition.notify" rel="nofollow">the actual Python docs</a>:</p>

<blockquote>
<p>If the calling thread has not acquired the lock when this method is called, a <code>RuntimeError</code> is raised.</p>
</blockquote>

<p>Python explicitly requires that the lock be held while <code>notify</code>ing.</p>
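<p>To make the required pattern concrete, here is a minimal producer/consumer sketch (illustrative only; the item handling is a placeholder) with both <code>wait()</code> and <code>notify()</code> called while the lock is held, as the Python docs demand:</p>

<pre><code>from threading import Condition

cv = Condition()
items = []

def producer():
    with cv:
        items.append("item")  # make an item available
        cv.notify()           # must hold cv here, or RuntimeError is raised

def consumer():
    with cv:
        while not items:      # guard against spurious wakeups
            cv.wait()         # releases cv while blocked, reacquires on wakeup
        return items.pop()
</code></pre>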
2
2016-09-20T10:18:14Z
[ "python", "multithreading", "python-multithreading" ]
Retrieve Network Logs using Python
39,591,396
<p>Generally, whenever a page is loaded, it sends several requests, which can be recorded in the <strong>Network</strong> tab in Chrome developer tools.</p>

<p>My prime motive is to log all the network requests whenever a page is loaded, using a Python script. A sample screenshot is attached to illustrate which requests I am trying to collect.</p>

<p><a href="http://i.stack.imgur.com/Bh0sR.png" rel="nofollow">Image for Network Hit Logs</a></p>

<p>I am trying to achieve this using the <em>urllib</em> library in Python; however, I am not exactly sure of the usage.</p>

<p>Looking forward to your responses. Thanks in advance.</p>
0
2016-09-20T10:12:24Z
39,591,622
<p>You can't do this with the <code>urllib</code> family of libraries. To capture AJAX requests you need to use something that has JavaScript support... like a browser.</p>

<p>So your best option in this case is to use <a href="http://docs.seleniumhq.org/" rel="nofollow">Selenium</a> to write a script that uses the <a href="http://docs.seleniumhq.org/projects/webdriver/" rel="nofollow">Selenium WebDriver</a> to drive whatever browser you're using and then capture/log the AJAX requests being pushed out.</p>
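<p>As a rough illustration — a sketch only, since the <code>loggingPrefs</code> capability name is assumed here and varies between chromedriver versions — Chrome's performance log can expose the same network events shown in the Network tab:</p>

<pre><code>import json
from selenium import webdriver

caps = webdriver.DesiredCapabilities.CHROME.copy()
caps['loggingPrefs'] = {'performance': 'ALL'}  # assumed capability name; enables the performance log

driver = webdriver.Chrome(desired_capabilities=caps)
driver.get('http://example.com')

# each log entry wraps a DevTools protocol message as JSON
for entry in driver.get_log('performance'):
    message = json.loads(entry['message'])['message']
    if message['method'] == 'Network.requestWillBeSent':
        print(message['params']['request']['url'])

driver.quit()
</code></pre>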
0
2016-09-20T10:21:57Z
[ "python" ]