Columns: title (string), question_id (int64), question_body (string), question_score (int64), question_date (string), answer_id (int64), answer_body (string), answer_score (int64), answer_date (string), tags (list)
Django: jinja2 code not working when iterated over the same list twice
39,456,156
<p>I have an html file in my django project as follows:</p> <pre><code>&lt;body&gt; &lt;div class="row"&gt; &lt;div class="col-sm-3"&gt; &lt;ul class="list-group"&gt; {% for image in images %} &lt;li class="list-group-item"&gt;{{ image.name}} &lt;img class="img-responsive" src='http://krishna.com{{ image.path }}' /&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; &lt;div class="col-sm-9"&gt; &lt;ul class="list-group"&gt; {% for image in images %} &lt;li class="list-group-item"&gt;{{ image.name}} &lt;img class="img-responsive" src='http://krishna.com{{ image.path }}' /&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/body&gt; </code></pre> <p>The content is showing for the <code>&lt;div class="col-sm-3"&gt;</code></p> <p>but no content is being shown for <code>&lt;div class="col-sm-9"&gt;</code></p> <p>In the image below, even though the jinja code is the same for both the divs, it shows only on one side.</p> <p>One may ask why I am doing this; I am just testing.</p> <p><a href="http://i.stack.imgur.com/UpTXz.png" rel="nofollow"><img src="http://i.stack.imgur.com/UpTXz.png" alt="enter image description here"></a></p> <p>Edit:</p> <p>I am generating the list in the following way:</p> <pre><code>def gallery(request): import os, sys img_list = os.scandir('/home/shared/pictures') return render(request,'blog/gallery.html', {'images': img_list}) </code></pre>
-1
2016-09-12T18:02:12Z
39,464,111
<p>You might be passing a <a href="http://stackoverflow.com/a/231855/1268926">generator object</a> instead of a list / tuple.</p> <p>The difference is that you can iterate over a generator object only once.</p> <p>For example, if the last line in your view that renders the template is equivalent to</p> <pre><code>return render(request, "template.html", {'images': (i for i in range(10))}) </code></pre> <p>then <code>images</code> will be iterated over only once (in your example, the <code>col-sm-9</code> block will be empty).</p> <p>If you need to iterate over something more than once, you must pass it as a list or tuple.</p> <pre><code>return render(request, "template.html", {'images': list(i for i in range(10))}) </code></pre>
0
2016-09-13T07:09:31Z
[ "python", "django" ]
How to search for a different string in a different file using Python 3.x
39,456,225
<p>I am trying to search a large group of text files (160K) for a specific string that changes for each file. I have a text file that has every file in the directory with the string value I want to search. Basically I want to use python to create a new text file that gives the file name, the string, and a 1 if the string is present and a 0 if it is not.</p> <p>The approach I am using so far is to create a dictionary from a text file. From there I am stuck. Here is what I figure in pseudo-code:</p> <pre><code>**assign dictionary** d = {} with open('file.txt') as f: d = dict(x.rstrip().split(None, 1) for x in f) **loop through directory** for filename in os.listdir(os.getcwd()): ***here is where I get lost*** match file name to dictionary look for string write filename, string, 1 if found write filename, string, 0 if not found </code></pre> <p>Thank you. It needs to be somewhat efficient since its a large amount of text to go through. </p> <p>Here is what I ended up with</p> <pre><code>d = {} with open('ibes.txt') as f: d = dict(x.rstrip().split(None, 1) for x in f) import os for filename in os.listdir(os.getcwd()): string = d.get(filename, "!@#$%^&amp;*") if string in open(filename, 'r').read(): with open("ibes_in.txt", 'a') as out: out.write("{} {} {}\n".format(filename, string, 1)) else: with open("ibes_in.txt", 'a') as out: out.write("{} {} {}\n".format(filename, string, 0)) </code></pre>
1
2016-09-12T18:07:04Z
39,456,940
<p>As I understand your question, the dictionary relates file names to strings</p> <pre><code>d = { "file1.txt": "widget", "file2.txt": "sprocket", #etc } </code></pre> <p>If each file is not too large you can read each file into memory: </p> <pre><code>for filename in os.listdir(os.getcwd()): string = d[filename] if string in open(filename, 'r').read(): print(filename, string, "1") else: print(filename, string, "0") </code></pre> <p>This example uses print, but you could write to a file instead. Open the output file before the loop <code>outfile = open("outfile.txt", 'w')</code> and instead of printing use </p> <pre><code>outfile.write("{} {} {}\n".format(filename, string, 1)) </code></pre> <p>On the other hand, if each file is too large to fit easily into memory, you could use a <code>mmap</code> as described in <a href="http://stackoverflow.com/questions/4940032/search-for-string-in-txt-file-python">Search for string in txt file Python</a></p>
0
2016-09-12T18:57:03Z
[ "python", "python-3.x" ]
Non-orthogonal to orthogonal coordinate system conversion in Python
39,456,232
<p>I have a vector in a non-orthogonal coordinate system spanned by axes a,b,c and their Euler angles alpha(between b&amp;c),beta(between c&amp;a),gamma(between a&amp;b). I want to convert this vector to an orthogonal coordinate system spanned by x,y,z. I assume that axes a and x coincide while conversion. I can do it mathematically by solving equations for coordinates, but I was wondering if there exists any library in Python to do it efficiently. (I found a library - transforms3d. But I couldn't understand its documentation well.)</p>
0
2016-09-12T18:07:31Z
39,458,107
<p>The best coordinate system transformation functions I've known in Python are in this module: <a href="http://www.lfd.uci.edu/~gohlke/code/transformations.py.html" rel="nofollow">http://www.lfd.uci.edu/~gohlke/code/transformations.py.html</a></p> <p>Take a look at the functions euler_matrix and euler_from_matrix. They come with self-explanatory comments in the code.</p> <p>Other libraries that might be relevant are:</p> <ol> <li>Shapely - <a href="http://toblerity.org/shapely/manual.html" rel="nofollow">http://toblerity.org/shapely/manual.html</a></li> <li>Blender - <a href="https://www.blender.org/api/blender_python_api_2_62_0/mathutils.html" rel="nofollow">https://www.blender.org/api/blender_python_api_2_62_0/mathutils.html</a></li> </ol>
0
2016-09-12T20:13:45Z
[ "python", "computational-geometry", "coordinate-systems", "coordinate-transformation" ]
Python 3 nested ordered dictionary key access
39,456,283
<p>I have a nested, ordered dictionary built in Python 3. It looks like this:</p> <pre><code>coefFieldDict = OrderedDict([(('AC_Type',), OrderedDict([('BADA_code', 0), ('n_engine', 1), ('eng_type', 2), ('wake_cate', 3)])), (('Mass',), OrderedDict([('m_ref', 4), ('m_min', 5), ('m_max', 6), ('m_pyld', 7), ('G_w', 8), ('unused', 9)])), (('Flight_Env',), OrderedDict([('V_MO', 10), ('M_MO', 11), ('h_MO', 12), ('h_max', 13), ('G_t', 14), ('unused', 15)]))], ...) </code></pre> <p>Now, I want the list of keys at the top level, which I obtain with:</p> <pre><code>outerKeys = list(coefFieldDict.keys()) </code></pre> <p>which gives me:</p> <pre><code>[('AC_Type',), ('Mass',), ('Flight_Env',), ('Aero',), ('Thrust',), ('Fuel',), ('Ground',)] </code></pre> <p>and for an example of one of the keys, I have:</p> <pre><code>list(coefFieldDict.keys())[1][0] Out[104]: 'Mass' </code></pre> <p>Now, on using this valid key with the OrderedDict ('coefFieldDict'), I receive this error:</p> <pre><code>Traceback (most recent call last): File "&lt;ipython-input-107-61bab1ad7886&gt;", line 1, in &lt;module&gt; coefFieldDict['Mass'] KeyError: 'Mass' </code></pre> <p>What am I doing wrong?</p>
0
2016-09-12T18:11:27Z
39,456,422
<p>Try <code>coefFieldDict[('Mass',)]</code></p> <p>...since you are using tuples (why?) instead of strings as keys.</p>
0
2016-09-12T18:20:32Z
[ "python", "key", "ordereddictionary" ]
Unbound name showing up in stack frame using inspect module
39,456,297
<p>I recently ran into a bug that was quite difficult to track down. I had accidentally re-used a class name as a variable (see code below), so when I tried to call the class I (understandably) got an error. The reason it was so hard to track down is that my debugger (Wing IDE 5.1.10) would execute the line successfully in the debug probe, but when I tried to run the same line in the interpreter it errored out. On further investigation, I found that when I examined the frame data using the inspect module, the name was still shown as a global variable bound to my class. So, I was mystified at receiving a UnboundLocalError on a name that was clearly defined and bound in my frame.</p> <p>This reproduces the issue:</p> <pre><code>import inspect class MyClass(object): def __init__(self): print "MyClass init() method called successfully" def newscope(): #MyClass is not in the current frame's locals: assert 'MyClass' not in inspect.currentframe().f_locals.keys() #MyClass is in the current frame's globals and can be called successfully: class_object = inspect.currentframe().f_globals['MyClass'] print class_object class_object() #But, calling MyClass by name results in UnboundLocalError: local #variable 'MyClass' referenced before assignment: print MyClass #Strangely, if at this point I go into the debug probe and run the same #line (print MyClass) it executes successfully, printing #"&lt;class '__main__.MyClass'&gt;" #Re-assigning the name MyClass is what causes the UnboundLocalError: MyClass = 5 if __name__ == '__main__': newscope() </code></pre> <p>Results:</p> <pre><code>&lt;class '__main__.MyClass'&gt; MyClass init() method called successfully Traceback (most recent call last): Python Shell, prompt 1, line 29 Python Shell, prompt 1, line 19 UnboundLocalError: local variable 'MyClass' referenced before assignment </code></pre> <p>Again, I understand why I am getting the UnboundLocalError. What I don't understand is why the inspect module is still showing the name as being bound to the class object when clearly that isn't the case. Am I missing something, or is this a bug in the inspect module?</p> <p>I'm running python 2.7.11.</p>
0
2016-09-12T18:12:30Z
39,456,778
<p>First, about the exception, I think your IDE doesn't respect the Python specs:</p> <blockquote> <p>A scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block.</p> </blockquote> <p>[...]</p> <blockquote> <p>If a name is bound in a block, it is a local variable of that block. If a name is bound at the module level, it is a global variable. (The variables of the module code block are local and global.) If a variable is used in a code block but not defined there, it is a <strong>free variable</strong>.</p> </blockquote> <p>[...]</p> <blockquote> <p>When a name is not found at all, a NameError exception is raised. If the name refers to a local variable that has not been bound, a UnboundLocalError exception is raised. UnboundLocalError is a subclass of NameError.</p> </blockquote> <p><a href="https://docs.python.org/2.7/reference/executionmodel.html#naming-and-binding" rel="nofollow">https://docs.python.org/2.7/reference/executionmodel.html#naming-and-binding</a></p> <p>Thus, I understand the whole block is parsed, it finds your variable, and it is added to the local scope, but before its assignment, it's considered as a <strong>free variable</strong>.</p> <p><strong>EDIT</strong></p> <p>About <code>inspect</code>, I think it lists the bound variables in the local namespace, thus, you don't see your variable. It's pretty logical: what value would you give to the key 'MyClass' if it is not bound yet?</p> <p>Actually, you should use <code>inspect.currentframe().f_code.co_varnames</code> to get what you want ;)</p> <pre><code>import inspect from pprint import pprint class MyClass(object): def __init__(self): print("MyClass init() method called successfully") def newscope(): pprint(inspect.currentframe().f_code.co_varnames) print("----------") pprint(inspect.currentframe().f_locals) print("----------") pprint(inspect.currentframe().f_globals) print("----------") try: pprint(MyClass) except Exception as e: print(e) MyClass = 5 pprint(inspect.currentframe().f_locals) print("----------") pprint(inspect.currentframe().f_globals) print("----------") if __name__ == '__main__': newscope() </code></pre> <p>and you get:</p> <pre><code>('MyClass', 'e') ---------- {} ---------- {'MyClass': &lt;class '__main__.MyClass'&gt;, '__builtins__': &lt;module 'builtins' (built-in)&gt;, '__cached__': None, '__doc__': None, '__file__': 'test.py', '__loader__': &lt;_frozen_importlib_external.SourceFileLoader object at 0x7f2fa3901160&gt;, '__name__': '__main__', '__package__': None, '__spec__': None, 'inspect': &lt;module 'inspect' from '/usr/lib/python3.5/inspect.py'&gt;, 'newscope': &lt;function newscope at 0x7f2fa39b8f28&gt;, 'pprint': &lt;function pprint at 0x7f2fa1fe66a8&gt;} ---------- local variable 'MyClass' referenced before assignment {'MyClass': 5} ---------- {'MyClass': &lt;class '__main__.MyClass'&gt;, '__builtins__': &lt;module 'builtins' (built-in)&gt;, '__cached__': None, '__doc__': None, '__file__': 'test.py', '__loader__': &lt;_frozen_importlib_external.SourceFileLoader object at 0x7f2fa3901160&gt;, '__name__': '__main__', '__package__': None, '__spec__': None, 'inspect': &lt;module 'inspect' from '/usr/lib/python3.5/inspect.py'&gt;, 'newscope': &lt;function newscope at 0x7f2fa39b8f28&gt;, 'pprint': &lt;function pprint at 0x7f2fa1fe66a8&gt;} ---------- </code></pre> <p>Remove your variable:</p> <pre><code>import inspect from pprint import pprint class MyClass(object): def __init__(self): print("MyClass init() method called successfully") def newscope(): pprint(inspect.currentframe().f_code.co_varnames) print("----------") pprint(inspect.currentframe().f_locals) print("----------") pprint(inspect.currentframe().f_globals) print("----------") try: pprint(MyClass) except Exception as e: print(e) # MyClass = 5 pprint(inspect.currentframe().f_locals) print("----------") pprint(inspect.currentframe().f_globals) print("----------") if __name__ == '__main__': newscope() </code></pre> <p>and you get:</p> <pre><code>('e',) ---------- {} ---------- {'MyClass': &lt;class '__main__.MyClass'&gt;, '__builtins__': &lt;module 'builtins' (built-in)&gt;, '__cached__': None, '__doc__': None, '__file__': 'test.py', '__loader__': &lt;_frozen_importlib_external.SourceFileLoader object at 0x7fc6d3fcb160&gt;, '__name__': '__main__', '__package__': None, '__spec__': None, 'inspect': &lt;module 'inspect' from '/usr/lib/python3.5/inspect.py'&gt;, 'newscope': &lt;function newscope at 0x7fc6d4082f28&gt;, 'pprint': &lt;function pprint at 0x7fc6d26b06a8&gt;} ---------- &lt;class '__main__.MyClass'&gt; {} ---------- {'MyClass': &lt;class '__main__.MyClass'&gt;, '__builtins__': &lt;module 'builtins' (built-in)&gt;, '__cached__': None, '__doc__': None, '__file__': 'test.py', '__loader__': &lt;_frozen_importlib_external.SourceFileLoader object at 0x7fc6d3fcb160&gt;, '__name__': '__main__', '__package__': None, '__spec__': None, 'inspect': &lt;module 'inspect' from '/usr/lib/python3.5/inspect.py'&gt;, 'newscope': &lt;function newscope at 0x7fc6d4082f28&gt;, 'pprint': &lt;function pprint at 0x7fc6d26b06a8&gt;} ---------- </code></pre>
1
2016-09-12T18:45:32Z
[ "python", "stack-trace" ]
Unbound name showing up in stack frame using inspect module
39,456,297
<p>I recently ran into a bug that was quite difficult to track down. I had accidentally re-used a class name as a variable (see code below), so when I tried to call the class I (understandably) got an error. The reason it was so hard to track down is that my debugger (Wing IDE 5.1.10) would execute the line successfully in the debug probe, but when I tried to run the same line in the interpreter it errored out. On further investigation, I found that when I examined the frame data using the inspect module, the name was still shown as a global variable bound to my class. So, I was mystified at receiving a UnboundLocalError on a name that was clearly defined and bound in my frame.</p> <p>This reproduces the issue:</p> <pre><code>import inspect class MyClass(object): def __init__(self): print "MyClass init() method called successfully" def newscope(): #MyClass is not in the current frame's locals: assert 'MyClass' not in inspect.currentframe().f_locals.keys() #MyClass is in the current frame's globals and can be called successfully: class_object = inspect.currentframe().f_globals['MyClass'] print class_object class_object() #But, calling MyClass by name results in UnboundLocalError: local #variable 'MyClass' referenced before assignment: print MyClass #Strangely, if at this point I go into the debug probe and run the same #line (print MyClass) it executes successfully, printing #"&lt;class '__main__.MyClass'&gt;" #Re-assigning the name MyClass is what causes the UnboundLocalError: MyClass = 5 if __name__ == '__main__': newscope() </code></pre> <p>Results:</p> <pre><code>&lt;class '__main__.MyClass'&gt; MyClass init() method called successfully Traceback (most recent call last): Python Shell, prompt 1, line 29 Python Shell, prompt 1, line 19 UnboundLocalError: local variable 'MyClass' referenced before assignment </code></pre> <p>Again, I understand why I am getting the UnboundLocalError. What I don't understand is why the inspect module is still showing the name as being bound to the class object when clearly that isn't the case. Am I missing something, or is this a bug in the inspect module?</p> <p>I'm running python 2.7.11.</p>
0
2016-09-12T18:12:30Z
39,457,035
<p>If a value is assigned to a variable within a function, that variable becomes a local variable within that function.</p> <p>That variable is treated as local from the moment the function is created, i.e. before it is called for the first time. Python actually optimizes access to local variables and does not make a lookup into the locals() dictionary, but "knows" exactly where to find each local variable (see <a href="http://stackoverflow.com/a/11242447/389289">this answer about performance within a function</a>).</p> <p>So, the fact that this assignemt is done at the end of the function does not make a difference. Within your function <code>newscope</code>, variable <code>MyClass</code> is a local variable. Assigning to the <code>MyClass</code> variable after using it is actually what causes the <code>UnboundLocalError</code> in this example.</p> <p>Take a simpler example:</p> <pre><code>a = 4 def global_a_example(): print a # a is a global variable: prints 4 def local_a_example(): a = 5 print a # a is a local variable: prints 5 def unbound_local_a_example(): print a # a is a local variable, but not initialized: raises UnboundLocalError a = 5 </code></pre> <p><strong>EDIT:</strong> explanation why it looks like the variable is bound</p> <p>Note that the unbound locals do not end up in the locals dict. That is not because they are not locals. It is because they are unbound. See the following example:</p> <pre><code>a = 1 b = 2 def f(): assert 'a' in globals() # of course assert 'a' not in locals() # local 'a' has not been initialized and has no value assert 'a' in f.__code__.co_varnames # 'a' is local nevertheless! assert 'b' not in f.__code__.co_varnames # 'b' is not local... # a != 1 test would raise and exception here because 'a' is local and uninitialized a = 10 # initialize local 'a' (and store it in locals) assert 'a' in globals() # it is still in globals, yes assert 'a' in locals() # it is also in locals assert globals()['a'] == 1 # global 'a' has value 1 assert locals()['a'] == 2 # local 'a' has value 2 assert a == 10 # a is local 'a'! assert b == 2 # b is global 'b' # but you don't even have to call f()!!! # 'a' is already defined to be a local variable in f, see: print f.__code__.co_varnames # prints ('a',) </code></pre> <p>So, 'a' is not bound until written to. It is a key in the globals dict, but that is irrelevant. It is not used from that dict, because it is defined to be local.</p>
1
2016-09-12T19:03:07Z
[ "python", "stack-trace" ]
Using boto to invoke lambda functions, how do I do so asynchronously?
39,456,309
<p>So I'm using boto to invoke my lambda functions and test my backend. I want to invoke them asynchronously. I have noted that "invoke_async" is deprecated and should not be used. Instead you should use "invoke" with an InvocationType of "Event" to do the function asynchronously. </p> <p>I can't seem to figure out how to get the responses from the functions when they return though. I have tried the following: </p> <pre><code>payload3=b"""{ "latitude": 39.5732160891, "longitude": -119.672918997, "radius": 100 }""" client = boto3.client('lambda') for x in range (0, 5): response = client.invoke( FunctionName="loadSpotsAroundPoint", InvocationType='Event', Payload=payload3 ) time.sleep(15) print(json.loads(response['Payload'].read())) print("\n") </code></pre> <p>Even though I tell the code to sleep for 15 seconds, the response variable is still empty when I try and print it. If I change the invocation InvocationType to "RequestResponse" it all works fine and the response variable prints, but this is synchronous. Am I missing something easy? How do I execute some code, for example print out the result, when the async invocation returns?</p> <p>Thanks.</p>
0
2016-09-12T18:13:02Z
39,456,752
<p>An asynchronously executed AWS Lambda function doesn't return the result of execution. If an asynchronous invocation request is successful (i.e. there were no errors due to permissions, etc), AWS Lambda immediately returns the HTTP status code <a href="https://httpstatuses.com/202" rel="nofollow">202 ACCEPTED</a> and bears no further responsibility for communicating any information about the outcome of this asynchronous invocation.</p> <p>From the documentation of <a href="http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_ResponseSyntax" rel="nofollow">AWS Lambda Invoke action</a>:</p> <blockquote> <h1>Response Syntax</h1> <pre><code>HTTP/1.1 StatusCode X-Amz-Function-Error: FunctionError X-Amz-Log-Result: LogResult Payload </code></pre> <h2>Response Elements</h2> <p>If the action is successful, the service sends back the following HTTP response.</p> <h3>StatusCode</h3> <p>The HTTP status code will be in the 200 range for successful request. For the <code>RequestResonse</code> invocation type this status code will be 200. <strong>For the <code>Event</code> invocation type this status code will be 202</strong>. For the <code>DryRun</code> invocation type the status code will be 204.</p> <p>[...]</p> <p>The response returns the following as the HTTP body.</p> <h3>Payload</h3> <p>It is the JSON representation of the object returned by the Lambda function. <strong>This is present only if the invocation type is <code>RequestResponse</code>.</strong></p> </blockquote>
2
2016-09-12T18:43:25Z
[ "python", "amazon-web-services", "boto", "aws-lambda" ]
Using boto to invoke lambda functions, how do I do so asynchronously?
39,456,309
<p>So I'm using boto to invoke my lambda functions and test my backend. I want to invoke them asynchronously. I have noted that "invoke_async" is deprecated and should not be used. Instead you should use "invoke" with an InvocationType of "Event" to do the function asynchronously. </p> <p>I can't seem to figure out how to get the responses from the functions when they return though. I have tried the following: </p> <pre><code>payload3=b"""{ "latitude": 39.5732160891, "longitude": -119.672918997, "radius": 100 }""" client = boto3.client('lambda') for x in range (0, 5): response = client.invoke( FunctionName="loadSpotsAroundPoint", InvocationType='Event', Payload=payload3 ) time.sleep(15) print(json.loads(response['Payload'].read())) print("\n") </code></pre> <p>Even though I tell the code to sleep for 15 seconds, the response variable is still empty when I try and print it. If I change the invocation InvocationType to "RequestResponse" it all works fine and the response variable prints, but this is synchronous. Am I missing something easy? How do I execute some code, for example print out the result, when the async invocation returns?</p> <p>Thanks.</p>
0
2016-09-12T18:13:02Z
39,457,165
<p>There is a difference between an <em>'async AWS lambda invocation'</em> and <em>'async python code'</em>. When you set the <code>InvocationType</code> to <code>'Event'</code>, <a href="http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax" rel="nofollow">by definition</a>, it does not ever send back a response.</p> <p>In your example, <code>invoke()</code> immediately returns <code>None</code>, and does not implicitly start up anything in the background to change that value at a later time (thank goodness!). So, when you look at the value of <code>response</code> 15 seconds later, it's still <code>None</code>.</p> <p>It seems what you really want is the <code>RequestResponse</code> invocation type, with asynchronous Python code. You have a bunch of options to choose from, but my favorite is <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow"><code>concurrent.futures</code></a>. Another is <a href="https://docs.python.org/3/library/threading.html" rel="nofollow"><code>threading</code></a>.</p> <p>Here's an example using <code>concurrent.futures</code>:</p> <p>(If you're using Python2 you'll need to <code>pip install futures</code>)</p> <pre><code>from concurrent.futures import ThreadPoolExecutor import json payload = {...} with ThreadPoolExecutor(max_workers=5) as executor: futs = [] for x in xrange(0, 5): futs.append( executor.submit(client.invoke, FunctionName = "loadSpotsAroundPoint", InvocationType = "RequestResponse", Payload = bytes(json.dumps(payload)) ) ) results = [ fut.result() for fut in futs ] print results </code></pre> <p>Another pattern you might want to look into is to use the <code>Event</code> invocation type, and have your Lambda function push messages to SNS, which are then consumed by another Lambda function. You can check out a tutorial for SNS-triggered lambda functions <a href="http://docs.aws.amazon.com/lambda/latest/dg/with-sns-example.html" rel="nofollow">here</a>.</p>
0
2016-09-12T19:12:01Z
[ "python", "amazon-web-services", "boto", "aws-lambda" ]
Delete directories older than X days?
39,456,318
<p>I decided to go with Python for this because I am in the process of learning Python, so I use it over Powershell whenever I can.</p> <p>I have the theory down for this, but it seems <code>os.stat</code> cannot take a list, but only a string or <code>int</code>. Right now I'm just printing before I go and delete things.</p> <pre><code>import os import time path = "\\\\path\\to\\videoroot\\" now = time.time() old = now - 1296000 for root, dirs, files in os.walk(path, topdown=False): if time.ctime(os.path.getmtime(dirs)) &lt; old: print (dirs) </code></pre> <p>Output/error message:</p> <pre class="lang-none prettyprint-override"><code>return os.stat(filename).st_mtime TypeError: argument should be string, bytes or integer, not list </code></pre>
-1
2016-09-12T18:13:33Z
39,456,407
<p>The problem in your code is that you are passing <code>dirs</code> to <code>os.path.getmtime()</code>, and <code>dirs</code> is a <code>list</code>, as specified in the <a href="https://docs.python.org/2/library/os.html">documentation</a> for <code>os.walk</code>.</p> <p>So you can address this by:</p> <pre><code>import os import time path = "\\\\path\\to\\videoroot\\" now = time.time() old = now - 1296000 for root, dirs, files in os.walk(path, topdown=False): for _dir in dirs: if time.ctime(os.path.getmtime(_dir)) &lt; old: print (_dir) </code></pre>
5
2016-09-12T18:19:31Z
[ "python" ]
Referencing named groups in look-around (Python 2.x)
39,456,607
<p>I have a pattern that matches for <em>multiple</em> key/value pairs, and the key/value strings can be delimited by any characters, then the groups of key/value can also be delimited, just <strong>not by the same character</strong>.</p> <p>I figured out how to allow dynamic delimiters, and restrict the same delimiter from being used twice. EG:</p> <pre><code>\w+(?P&lt;kv_delim&gt;[:;|])\d+(?P&lt;g_delim&gt;(?!(?P=kv_delim))[:;|])\w(?P=kv_delim)\d(?P=g_delim)? </code></pre> <p><a href="https://regex101.com/r/qC7hQ2/3" rel="nofollow">You can view the regex101.com example here</a>. And it works great, the problem comes when using either of the two named groups in a <em>positive look-behind</em>.</p> <p>Lets say the string is</p> <blockquote> <p><code>foo:1;r:2</code></p> </blockquote> <p>The "key/value delimiter" (named group: <code>kv_delim</code>) is the <code>:</code>, then the "group delimiter" (named group: <code>grp_delim</code>) is the <code>;</code></p> <p>What im trying to do is dynamically match the <code>:</code> and <code>;</code>, then in a look-around statement, look for <code>foo&lt;kv_delim&gt;</code>, or <code>bar&lt;kv_delim&gt;</code>.</p> <p>If I hard-code the delimiters (in the look-around), <a href="https://regex101.com/r/uM6bL8/3" rel="nofollow">you can see it works</a>. But if I try to reference the named-group <code>kv_delim</code> within the look-around statement, <a href="https://regex101.com/r/bA6dM6/2" rel="nofollow">you can see it throws errors</a>. I get the error:</p> <blockquote> <p>Subpattern references are not allowed within a lookbehind assertion</p> </blockquote> <p>Which is whats kickin my butt</p> <p>Anybody have a way to make this work?</p> <p>Thanks!</p>
3
2016-09-12T18:33:27Z
39,758,474
<p>Summing up what has already been said: the point is that the length of the pattern is unknown when you put backreferences into a lookbehind that <a href="https://docs.python.org/3/library/re.html#regular-expression-syntax" rel="nofollow">must be fixed-width</a> at design time. The newer <a href="https://pypi.python.org/pypi/regex" rel="nofollow">PyPi <code>regex</code> module</a> has no limitations regarding the lookbehind length, so, the current workaround is to use this module with your regex:</p> <pre><code>&gt;&gt;&gt; import regex &gt;&gt;&gt; s = "foo:1;r:2" &gt;&gt;&gt; rx = r"\w+(?P&lt;kv_delim&gt;[:;|])\d+(?P&lt;g_delim&gt;(?!(?P=kv_delim))[:;|])\w(?P=kv_delim)\d(?P=g_delim)?" &gt;&gt;&gt; print(regex.findall(rx, s)) [(':', ';')] &gt;&gt;&gt; print([m.group() for m in regex.finditer(rx, s)]) ['foo:1;r:2'] &gt;&gt;&gt; </code></pre>
0
2016-09-28T22:10:19Z
[ "python", "regex", "regex-lookarounds", "regex-greedy" ]
Python Serial Writes to Arduino is different from Arduino's Serial Monitor's Serial Writes
39,456,630
<p>I have a Python script that writes a string <code>test</code> to the Arduino serial port. If the Arduino receives the <code>test</code> string, it should reply with a string <code>ok</code> and LED 13 should light up.</p> <p><strong>Problem:</strong> When the Arduino Serial Monitor is used to write <code>test</code> to the serial port, the Arduino replies with <code>ok</code> as expected and the LED #13 lights up.</p> <p>However when the Python script writes <code>test</code> to the same serial port, nothing happens. The Arduino does not reply on the serial port and the LED #13 does not light up.</p> <p>Any ideas how the Python script can be fixed to get the <code>ok</code> response from the Arduino and LED 13 to light up?</p> <p><strong>Arduino Sketch</strong></p> <pre><code>int ledPin = 13; void setup() { Serial.begin(9600); pinMode(ledPin, OUTPUT); } void loop() { while(Serial.available() == 0) { } if(Serial.readString() == "test\r\n") { Serial.print("ok\r\n"); digitalWrite(ledPin, HIGH); } readString = ""; // Clear received buffer delay(100); } </code></pre> <p><strong>Python Script</strong></p> <pre><code>port = 'COM5' ser = serial.Serial( port=port, baudrate=9600, timeout=5 ) serial.write("test\r\n") response = serial.readline() print response </code></pre>
1
2016-09-12T18:35:10Z
39,456,662
<pre><code>port = 'COM5' ser = serial.Serial( port=port, baudrate=9600, timeout=5 ) # you need to sleep after opening the port for a few seconds time.sleep(5) # arduino takes a few seconds to be ready ... #also you should write to your instance ser.write("test\r\n") # and give arduino time to respond time.sleep(0.5) response = ser.readline() print response </code></pre> <p>If you don't want to wait a fixed number of seconds you probably need to wait for <code>ser.cts</code> (clear to send).</p>
2
2016-09-12T18:37:26Z
[ "python", "arduino", "serial-port", "pyserial" ]
How to reload a configuration file on each request for Flask?
39,456,672
<p>Is there an idiomatic way to have Flask reload my configuration file on every request? The purpose of this would be so that I could change passwords or other configuration related items without having to shut down and restart the server in production.</p> <p>Edit: <code>app.run(debug=True)</code> is not acceptable as it restarts the server and shouldn't be used in production.</p> <p>Perhaps a decorator like the following:</p> <pre><code>def reload_configuration(func): @wraps(func) def _reload_configuration(*args, **kwargs): #even better, only reload if the file has changed reload(settings) app.config.from_object(settings.Config) return func(*args, **kwargs) return _reload_configuration @app.route('/') @reload_configuration def home(): return render_template('home.html') </code></pre> <p>If it is relevant, here is how I am loading the configuration now:</p> <p>My <code>app/app/__init__.py</code> file:</p> <pre><code>from flask import Flask from settings import Config app = Flask(__name__) app.config.from_object(Config) # ... </code></pre> <p>My <code>app/app/settings.py</code> file:</p> <pre><code>class Config(object): SQLALCHEMY_TRACK_MODIFICATIONS = False SECRET_KEY = os.urandom(32) # ... try: from app.local_settings import Config except ImportError: pass </code></pre>
0
2016-09-12T18:38:11Z
39,457,189
<p>You cannot safely / correctly reload the config after the application begins handling requests. Config is <em>only</em> meant to be read during application setup. The main reason is because a production server will be running using multiple processes (or even distributed across servers), and the worker that handles the config change request does not have access to other workers to tell them to change. Additionally, some config is not designed to be reloaded, so even if you could notify all the other workers and get them to reload properly, it might not have any effect.</p> <p>Production WSGI servers can gracefully reload, that is they won't kill running workers until they've completed their responses, so downtime shouldn't actually be an issue. If it is (and it really isn't), then you're at such a large scale that you're beyond the scope of this answer.</p> <p>Graceful reload in:</p> <ul> <li><a href="http://docs.gunicorn.org/en/stable/signals.html#master-process" rel="nofollow">Gunicorn</a></li> <li><a href="http://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#standard-default-boring-graceful-reload-aka-sighup" rel="nofollow">uWSGI</a></li> <li><a href="http://stackoverflow.com/a/3682085/400617">mod_wsgi</a></li> </ul> <p>If you need to use config that can be updated dynamically, you'll have to write all the code you use to expect that. You could use a <code>before_request</code> handler to load the config fresh each request. However, keep in mind that anything you didn't write that uses config may not expect the config to change.</p>
2
2016-09-12T19:13:42Z
[ "python", "flask" ]
Evaluating the next line in a For Loop while in the current iteration
39,456,802
<p>Here is what I am trying to do: I am trying to solve an issue that has to do with wrapping in a text file. </p> <p>I want to open a txt file, read a line, and if the line contains what I want it to contain, check the next line to see if it does not contain what is in the first line. If it does not, add the line to the first line.</p> <pre><code> import re stuff = open("my file") for line in stuff: if re.search("From ", line): first = line print first if re.search('From ', handle.next()): continue else: first = first + handle.next() else: continue </code></pre> <p>I have looked at quite a few things and cannot seem to find an answer. Please help!</p>
0
2016-09-12T18:47:00Z
39,457,281
<p>I would try to do something like this, but it does not handle runs of three or more "From " lines and is not elegant at all.</p> <pre><code>lines = open("file", 'r').readlines() lines2 = open("file2", 'w') counter_list=[] last_from = 0 for counter, line in enumerate(lines): if "From " in line and counter != last_from +1: last_from = counter current_count = counter if current_count+1 == counter: if "From " in line: counter_list.append(current_count+1) for counter, line in enumerate(lines): if counter in counter_list: lines2.write(line) else: lines2.write(line + '\n') </code></pre> <p>Then you can check lines2 to see if it helped.</p> <p>You could also reverse the order of the lines and then check against the previous line instead of the next one. That would solve your problem in one loop.</p>
0
2016-09-12T19:20:00Z
[ "python", "python-2.7" ]
Evaluating the next line in a For Loop while in the current iteration
39,456,802
<p>Here is what I am trying to do: I am trying to solve an issue that has to do with wrapping in a text file. </p> <p>I want to open a txt file, read a line, and if the line contains what I want it to contain, check the next line to see if it does not contain what is in the first line. If it does not, add the line to the first line.</p> <pre><code> import re stuff = open("my file") for line in stuff: if re.search("From ", line): first = line print first if re.search('From ', handle.next()): continue else: first = first + handle.next() else: continue </code></pre> <p>I have looked at quite a few things and cannot seem to find an answer. Please help!</p>
0
2016-09-12T18:47:00Z
39,458,558
<p>Thank you Martjin for helping me reset my mind frame! This is what I came up with:</p> <pre><code> handle = open("my file") first = "" second = "" sent = "" for line in handle: line = line.rstrip() if len(first) &gt; 0: if line.startswith("From "): if len(sent) &gt; 0: print sent else: continue first = line second = "" else: second = second + line else: if line.startswith("From "): first = line sent = first + second </code></pre> <p>It is probably crude, but it definitely got the job done!</p>
0
2016-09-12T20:47:27Z
[ "python", "python-2.7" ]
Using conditional expressions and incrementing/decrementing a variable
39,456,952
<p>How do I put the if statement into a conditional expression, and how do I increment / decrement a variable?</p> <pre><code>num_users = 8 update_direction = 3 num_users = if update_direction ==3: num_users= num_users + 1 else: num_users= num_users - 1 print('New value is:', num_users) </code></pre>
-1
2016-09-12T18:57:49Z
39,457,135
<p>I might be way off the mark and my Python is a bit rusty, but the code looks alright for the problem provided besides the formatting issues pointed out by James K. The if statement you have forms a part of the conditional expression (it is a condition). </p> <p>Essentially, a conditional expression follows this pattern of: </p> <p>if(something)-->Do something</p> <p>The incrementation looks fine. Like James K said, fix the formatting and you should be fine. </p>
1
2016-09-12T19:09:26Z
[ "python" ]
Using conditional expressions and incrementing/decrementing a variable
39,456,952
<p>How do I put the if statement into a conditional expression, and how do I increment / decrement a variable?</p> <pre><code>num_users = 8 update_direction = 3 num_users = if update_direction ==3: num_users= num_users + 1 else: num_users= num_users - 1 print('New value is:', num_users) </code></pre>
-1
2016-09-12T18:57:49Z
39,457,464
<p>The correct statement would be:</p> <pre><code>num_users = num_users + 1 if update_direction == 3 else num_users - 1 </code></pre> <p>For reference, see <a href="https://docs.python.org/2/reference/expressions.html#conditional-expressions" rel="nofollow">Conditional Expressions</a>.</p>
1
2016-09-12T19:30:34Z
[ "python" ]
How to mock nested / multiple layers of return objects in python
39,457,108
<p>I'm currently struggling to find a good way of mocking multiple layers / nested return values. In other words, I want to return a magic mock that in turn returns a magic mock with its own set return values. I'm finding this relatively cumbersome and am looking for a more elegant and maintainable solution.</p> <p>I'm trying to test the following code efficiently. The URL returns a json string that needs further processing:</p> <pre><code>import json from urllib.request import urlopen def load_json(): # first return value response = urlopen("http://someurl.com/api/getjson") # in turn, contains two nested return values for read and decode response_dict = json.loads(response.read().decode('utf-8')) </code></pre> <p>This is how I've mocked this so far, which is extremely inelegant and makes maintenance complicated:</p> <pre><code>class MyTestCase(TestCase): @patch('load_json_path.urlopen') def test_load_json(self, mock_urlopen): ### trying to simplify all of this # third nested return mock_decode = MagicMock(return_value='["myjsondata"]') # second nested return value mock_response = MagicMock() mock_response.read.return_value=mock_decode # first nested return value mock_urlopen.return_value = mock_response ### trying to simplify all of this load_json() </code></pre> <p>In the end, all I'm trying to mock is the returned data from the decode function, which originates from the urlopen call. This should be possible in one line or in a simpler way, using perhaps the <code>__enter__</code> methods. Ideally the mock would look something like this in the test_load_json function:</p> <pre><code>mock_urlopen.__enter__.loads.__enter__.decode.return_value = '["myjsondata"]' </code></pre> <p>Unfortunately, I can't seem to find anything useful in the mock documentation. Any help appreciated.</p>
2
2016-09-12T19:07:41Z
39,457,691
<p>Turns out this is easily possible and documented. However, the naming is not straightforward and one needs to know what to look for. The feature in question is mocking chained calls, which is in fact documented in the mock library.</p> <p>In this example, the mock_urlopen setup should look like this:</p> <pre><code> mock_urlopen.return_value.read.return_value.decode.return_value = '["myjsondata"]' </code></pre> <p>This works beautifully. For more details check out the Python docs: <a href="https://docs.python.org/3/library/unittest.mock-examples.html#mocking-chained-calls" rel="nofollow">https://docs.python.org/3/library/unittest.mock-examples.html#mocking-chained-calls</a></p>
1
2016-09-12T19:44:29Z
[ "python", "nested", "mocking" ]
How to mock nested / multiple layers of return objects in python
39,457,108
<p>I'm currently struggling to find a good way of mocking multiple layers / nested return values. In other words, I want to return a magic mock that in turn returns a magic mock with its own set return values. I'm finding this relatively cumbersome and am looking for a more elegant and maintainable solution.</p> <p>I'm trying to test the following code efficiently. The URL returns a json string that needs further processing:</p> <pre><code>import json from urllib.request import urlopen def load_json(): # first return value response = urlopen("http://someurl.com/api/getjson") # in turn, contains two nested return values for read and decode response_dict = json.loads(response.read().decode('utf-8')) </code></pre> <p>This is how I've mocked this so far, which is extremely inelegant and makes maintenance complicated:</p> <pre><code>class MyTestCase(TestCase): @patch('load_json_path.urlopen') def test_load_json(self, mock_urlopen): ### trying to simplify all of this # third nested return mock_decode = MagicMock(return_value='["myjsondata"]') # second nested return value mock_response = MagicMock() mock_response.read.return_value=mock_decode # first nested return value mock_urlopen.return_value = mock_response ### trying to simplify all of this load_json() </code></pre> <p>In the end, all I'm trying to mock is the returned data from the decode function, which originates from the urlopen call. This should be possible in one line or in a simpler way, using perhaps the <code>__enter__</code> methods. Ideally the mock would look something like this in the test_load_json function:</p> <pre><code>mock_urlopen.__enter__.loads.__enter__.decode.return_value = '["myjsondata"]' </code></pre> <p>Unfortunately, I can't seem to find anything useful in the mock documentation. Any help appreciated.</p>
2
2016-09-12T19:07:41Z
39,458,013
<p>I have made this for you as a helper class:</p> <pre><code>from unittest.mock import Mock class ProxyMock: """Put me for easy referral""" def __init__(self, mock, _first=True): self._mock_ = mock self._first_ = _first def __getattr__(self, name): if self._first_: new_mock = getattr(self._mock_, name) else: new_mock = getattr(self._mock_.return_value, name) return ProxyMock(new_mock, _first=False) def __setattr__(self, name, value): if name in ("_mock_", "_first_"): return super().__setattr__(name, value) setattr(self._mock_, name, value) a = Mock() ProxyMock(a).b.c.return_value = 123 assert a.b().c() == 123 </code></pre>
0
2016-09-12T20:07:02Z
[ "python", "nested", "mocking" ]
Pandas groupby object filtering
39,457,130
<p>I have a pandas dataframe</p> <pre><code>df.columns Index([u'car_id',u'color',u'make',u'year']) </code></pre> <p>I would like to create a new FILTERABLE object that has the count of each group (color,make,year);</p> <pre><code>grp = df[['color','make','year']].groupby(['color','make','year']).size() </code></pre> <p>which will return something like this</p> <pre><code>color make year count black honda 2011 416 </code></pre> <p>I would like to be able to filter it, however when I try this:</p> <pre><code>grp.filter(lambda x: x['color']=='black') </code></pre> <p>I receive this error</p> <blockquote> <p>TypeError: 'function' object is not iterable</p> </blockquote> <p>How do I leverage a 'groupby' object in order to filter the rows out?</p>
5
2016-09-12T19:09:07Z
39,457,171
<p>I think you need to add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> and then the output is a <code>DataFrame</code>. Last, use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>df = df[['color','make','year']].groupby(['color','make','year']) .size() .reset_index(name='count') df1 = df[df.color == 'black'] </code></pre>
3
2016-09-12T19:12:34Z
[ "python", "pandas", "indexing", "group-by", "condition" ]
Pandas groupby object filtering
39,457,130
<p>I have a pandas dataframe</p> <pre><code>df.columns Index([u'car_id',u'color',u'make',u'year']) </code></pre> <p>I would like to create a new FILTERABLE object that has the count of each group (color,make,year);</p> <pre><code>grp = df[['color','make','year']].groupby(['color','make','year']).size() </code></pre> <p>which will return something like this</p> <pre><code>color make year count black honda 2011 416 </code></pre> <p>I would like to be able to filter it, however when I try this:</p> <pre><code>grp.filter(lambda x: x['color']=='black') </code></pre> <p>I receive this error</p> <blockquote> <p>TypeError: 'function' object is not iterable</p> </blockquote> <p>How do I leverage a 'groupby' object in order to filter the rows out?</p>
5
2016-09-12T19:09:07Z
39,457,574
<p><strong><em>Option 1</em></strong><br> Filter ahead of time</p> <pre><code>cols = ['color','make','year'] df.loc[df.color == 'black', cols].groupby(cols).size() </code></pre> <p><strong><em>Option 2</em></strong> Use <code>xs</code> for index cross sections</p> <pre><code>cols = ['color','make','year'] grp = df[cols].groupby(cols).size() grp.xs('black', level='color', drop_level=False) </code></pre> <p>or</p> <pre><code>grp.xs('honda', level='make', drop_level=False) </code></pre> <p>or</p> <pre><code>grp.xs(2011, level='year', drop_level=False) </code></pre>
2
2016-09-12T19:37:28Z
[ "python", "pandas", "indexing", "group-by", "condition" ]
Running tests in a single Python file with nose
39,457,196
<p>I'm using nose 1.3.7 with Anaconda 4.1.1 (Python 3.5.2). I want to run unit tests in a single file, e.g. <code>foo.py</code>. According to the <a href="http://nose.readthedocs.io/en/latest/usage.html" rel="nofollow">documentation</a> I should be able to simply run:</p> <pre><code>nosetests foo.py </code></pre> <p>But when I do this, nose runs all the tests in all the files in the directory!</p> <p>And if I do <code>nose --help</code>, the usage documentation doesn't even indicate that there is a parameter. It only shows [options].</p> <p>So can I run tests in a single file using nose?</p>
0
2016-09-12T19:14:16Z
39,457,947
<p>I have a standalone Python 3.4 version and <code>nosetests foo.py</code> runs tests only in <code>foo.py</code> and <code>nosetests spam.py</code> runs test only in <code>spam.py</code>. </p> <p>A plain <code>nosetests</code> command without any option specified, runs tests in all files with names starting with the word <code>test_</code> in the directory.</p> <p>Here's quoting from their <a href="https://nose.readthedocs.io/en/latest/finding_tests.html" rel="nofollow">test discovery</a> documentation which specifies rules for test discovery.The last line of the documentation clarifies what could be the cause for your anomaly.</p> <blockquote> <p>Be aware that plugins and command line options can change any of those rules.</p> </blockquote> <p>I suspect (and I may be wrong) that it has to do with how anaconda configures nose for your install.</p>
1
2016-09-12T20:02:28Z
[ "python", "unit-testing", "nose" ]
How would I use a while loop so that if they enter a number it would ask them again?
39,457,203
<pre><code>fn = input("Hello, what is your first name?") firstname = (fn[0].upper()) ln = input("Hello, what is your last name?") lastname = (ln.lower()) </code></pre> <p>I want fn to be on a loop so that if they enter a number instead of letters, it would repeat the question.</p>
2
2016-09-12T19:14:38Z
39,457,239
<pre><code>if result.isalpha(): print "the string entered contains only letters !" </code></pre> <p>I guess ?</p> <pre><code>a="6" while not a.isalpha(): a = raw_input("Enter your name:") print "You entered:",a </code></pre> <p>if you just wanted to eliminate only words that contained numbers you could do</p> <pre><code>while any(ltr.isdigit() for ltr in a): </code></pre>
1
2016-09-12T19:16:56Z
[ "python", "python-3.x" ]
How would I use a while loop so that if they enter a number it would ask them again?
39,457,203
<pre><code>fn = input("Hello, what is your first name?") firstname = (fn[0].upper()) ln = input("Hello, what is your last name?") lastname = (ln.lower()) </code></pre> <p>I want fn to be on a loop so that if they enter a number instead of letters, it would repeat the question.</p>
2
2016-09-12T19:14:38Z
39,457,270
<p>I guess you need something like this</p> <pre><code>final_fn = "" while True: fn = input("Hello, what is your first name?") if valid(fn): final_fn = fn break </code></pre> <p>Define you validation method before it. An example would be as Joran mentioned</p> <pre><code>def valid(fn): return fn.isalpha() </code></pre>
3
2016-09-12T19:18:56Z
[ "python", "python-3.x" ]
Blob detection using OpenCV
39,457,209
<p>I am trying to do some white blob detection using OpenCV. But my script failed to detect the big white block which is my goal while some small blobs are detected. I am new to OpenCV, and am i doing something wrong when using simpleblobdetection in OpenCV? [Solved partially, please read below]</p> <p>And here is the script:</p> <pre><code>#!/usr/bin/python # Standard imports import cv2 import numpy as np; from matplotlib import pyplot as plt # Read image im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE) imfiltered = cv2.inRange(im,255,255) #OPENING kernel = np.ones((5,5)) opening = cv2.morphologyEx(imfiltered,cv2.MORPH_OPEN,kernel) #write out the filtered image cv2.imwrite('colorfiltered.jpg',opening) # Setup SimpleBlobDetector parameters. params = cv2.SimpleBlobDetector_Params() params.blobColor= 255 params.filterByColor = True # Create a detector with the parameters ver = (cv2.__version__).split('.') if int(ver[0]) &lt; 3 : detector = cv2.SimpleBlobDetector(params) else : detector = cv2.SimpleBlobDetector_create(params) # Detect blobs. keypoints = detector.detect(opening) # Draw detected blobs as green circles. # cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures # the size of the circle corresponds to the size of blob print str(keypoints) im_with_keypoints = cv2.drawKeypoints(opening, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) # Show blobs ##cv2.imshow("Keypoints", im_with_keypoints) cv2.imwrite('Keypoints.jpg',im_with_keypoints) cv2.waitKey(0) </code></pre> <p><strong>EDIT</strong>:</p> <p>By adding a bigger value of area maximum value, i am able to identify a big blob but my end goal is to identify the big white rectangle exist or not. And the white blob detection i did returns not only the rectangle but also the surrounding areas as well. [This part solved]</p> <p><strong>EDIT 2:</strong></p> <p>Based on the answer from @PSchn, i update my code to apply the logic, first set the color filter to only get the white pixels and then remove the noise point using opening. It works for the sample data and i can successfully get the keypoint after blob detection. <a href="http://i.stack.imgur.com/U2TRP.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/U2TRP.jpg" alt="enter image description here"></a></p>
0
2016-09-12T19:15:10Z
39,462,615
<p>You could try setting params.maxArea to something obnoxiously large (somewhere in the tens of thousands): the default may be something lower than the area of the rectangle you're trying to detect. Also, I don't know how true this is or not, but I've heard that detection by color is bugged with a logic error, so it may be worth a try disabling it just in case that is causing problems (this has probably been fixed in later versions, but it could still be worth a try)</p>
1
2016-09-13T05:16:41Z
[ "python", "opencv" ]
Blob detection using OpenCV
39,457,209
<p>I am trying to do some white blob detection using OpenCV. But my script failed to detect the big white block which is my goal while some small blobs are detected. I am new to OpenCV, and am i doing something wrong when using simpleblobdetection in OpenCV? [Solved partially, please read below]</p> <p>And here is the script:</p> <pre><code>#!/usr/bin/python # Standard imports import cv2 import numpy as np; from matplotlib import pyplot as plt # Read image im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE) imfiltered = cv2.inRange(im,255,255) #OPENING kernel = np.ones((5,5)) opening = cv2.morphologyEx(imfiltered,cv2.MORPH_OPEN,kernel) #write out the filtered image cv2.imwrite('colorfiltered.jpg',opening) # Setup SimpleBlobDetector parameters. params = cv2.SimpleBlobDetector_Params() params.blobColor= 255 params.filterByColor = True # Create a detector with the parameters ver = (cv2.__version__).split('.') if int(ver[0]) &lt; 3 : detector = cv2.SimpleBlobDetector(params) else : detector = cv2.SimpleBlobDetector_create(params) # Detect blobs. keypoints = detector.detect(opening) # Draw detected blobs as green circles. # cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures # the size of the circle corresponds to the size of blob print str(keypoints) im_with_keypoints = cv2.drawKeypoints(opening, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) # Show blobs ##cv2.imshow("Keypoints", im_with_keypoints) cv2.imwrite('Keypoints.jpg',im_with_keypoints) cv2.waitKey(0) </code></pre> <p><strong>EDIT</strong>:</p> <p>By adding a bigger value of area maximum value, i am able to identify a big blob but my end goal is to identify the big white rectangle exist or not. And the white blob detection i did returns not only the rectangle but also the surrounding areas as well. [This part solved]</p> <p><strong>EDIT 2:</strong></p> <p>Based on the answer from @PSchn, i update my code to apply the logic, first set the color filter to only get the white pixels and then remove the noise point using opening. It works for the sample data and i can successfully get the keypoint after blob detection. <a href="http://i.stack.imgur.com/U2TRP.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/U2TRP.jpg" alt="enter image description here"></a></p>
0
2016-09-12T19:15:10Z
39,474,485
<p>If you just want to detect the white rectangle you can try to set a higher threshold, e.g. 253, erase small objects with an opening and take the biggest blob. I first smoothed your image, then thresholded it:</p> <p><a href="http://i.stack.imgur.com/UrrBT.png" rel="nofollow"><img src="http://i.stack.imgur.com/UrrBT.png" alt="enter image description here"></a></p> <p>and the opening:</p> <p><a href="http://i.stack.imgur.com/LNEBf.png" rel="nofollow"><img src="http://i.stack.imgur.com/LNEBf.png" alt="enter image description here"></a></p> <p>Now you just have to use <code>findContours</code> and take the <code>boundingRect</code>. If your rectangle is always that white it should work. If you go lower than 251 with your threshold the other small blobs will appear and your region merges with them, like here:</p> <p><a href="http://i.stack.imgur.com/GB6iM.png" rel="nofollow"><img src="http://i.stack.imgur.com/GB6iM.png" alt="enter image description here"></a></p> <p>Then you could still do an opening several times and you get this: <a href="http://i.stack.imgur.com/oKWJT.png" rel="nofollow"><img src="http://i.stack.imgur.com/oKWJT.png" alt="enter image description here"></a></p> <p>But I don't think that it is the fastest idea ;)</p>
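<p>A rough sketch of that pipeline (smooth, threshold at 253, open a few times, then take the bounding box of the largest contour). The file name, blur kernel and iteration count are assumptions for illustration, and the <code>[-2]</code> indexing is only there so the contour list is picked up whether <code>findContours</code> returns two or three values:</p> <pre><code>import cv2
import numpy as np

img = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, thresh = cv2.threshold(blur, 253, 255, cv2.THRESH_BINARY)

kernel = np.ones((5, 5), np.uint8)
opened = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)

contours = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)
    cv2.imwrite('biggest_blob.jpg', img)
</code></pre>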
1
2016-09-13T16:04:26Z
[ "python", "opencv" ]
Django - request.session not being saved
39,457,321
<p>I have a pretty simple utility function that gets an open web order if there is a session key called 'orderId', and will create one if there is no session key and the parameter 'createIfNotFound' is equal to true in the function. Stepping through it with my debugger I can see that the piece of code that sets the session key after an order has been created does get hit with no exceptions, but when I check the Http request object's session field, it does not have that attribute?</p> <p>Utility</p> <pre><code>def get_open_web_order(request, createIfNotFound=False):
    # Check for orderId in session
    order_id = request.session.get('orderId')
    web_order = None
    if None != order_id:
        try:
            web_order = WebOrder.objects.get(id=order_id, status='O')
            logging.info('Found open web order')
        except WebOrder.DoesNotExist:
            logging.info('Web order not found')

    if (None == web_order) and (createIfNotFound == True):
        logging.info('Creating new web order')
        web_order = WebOrder()
        web_order.status = 'O'
        web_order.save()
        request.session['orderId'] = web_order.id

        # Assign logged in user and default billing and shipping
        if request.user.is_authenticated() and hasattr(request.user, 'customer'):
            customer = request.user.customer
            web_order.customer = customer
            web_order.set_defaults_from_customer()
            web_order.save()

    return web_order
</code></pre>
0
2016-09-12T19:22:26Z
39,457,680
<p>In some cases you need to explicitly tell the session that it has been modified.</p> <p>You can do this by adding <code>request.session.modified = True</code> to your view, after changing something in <code>session</code></p> <p>You can read more on this here - <a href="https://docs.djangoproject.com/en/1.10/topics/http/sessions/#when-sessions-are-saved" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/http/sessions/#when-sessions-are-saved</a></p>
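<p>Applied to the utility function in the question, that would look roughly like this (a sketch, not a drop-in fix):</p> <pre><code>web_order = WebOrder()
web_order.status = 'O'
web_order.save()
request.session['orderId'] = web_order.id
# Explicitly mark the session as changed so it is persisted
# when the response is returned
request.session.modified = True
</code></pre>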
2
2016-09-12T19:43:52Z
[ "python", "django", "session", "view" ]
Pandas: how to compute the rolling sum of a variable over the last few days but only at a given hour?
39,457,435
<p>I have a dataframe as follows</p> <pre><code>df = pd.DataFrame({ 'X' : np.random.randn(50000)}, index=pd.date_range('1/1/2000', periods=50000, freq='T')) df.head(10) Out[37]: X 2000-01-01 00:00:00 -0.699565 2000-01-01 00:01:00 -0.646129 2000-01-01 00:02:00 1.339314 2000-01-01 00:03:00 0.559563 2000-01-01 00:04:00 1.529063 2000-01-01 00:05:00 0.131740 2000-01-01 00:06:00 1.282263 2000-01-01 00:07:00 -1.003991 2000-01-01 00:08:00 -1.594918 2000-01-01 00:09:00 -0.775230 </code></pre> <p>I would like to create a variable that contains the <code>sum</code> of X </p> <ul> <li>over the last 5 days (<strong>not including the current observation</strong>)</li> <li>only considering observations that fall at the exact same hour as the current observation.</li> </ul> <p>In other words:</p> <ol> <li>At index <code>2000-01-01 00:00:00</code>, <code>df['rolling_sum_same_hour']</code> contains the sum the values of X observed at <code>00:00:00</code> during the last 5 days in the data (not including <code>2000-01-01</code> of course). </li> <li>At index <code>2000-01-01 00:01:00</code>, <code>df['rolling_sum_same_hour']</code> contains the sum of of X observed at <code>00:00:01</code> during the last 5 days and so on. </li> </ol> <p>The intuitive idea is that intraday prices have intraday seasonality, and I want to get rid of it that way.</p> <p>I tried to use <code>df['rolling_sum_same_hour']=df.at_time(df.index.minute).rolling(window=5).sum()</code></p> <p>with no success. Any ideas?</p> <p>Many thanks!</p>
4
2016-09-12T19:28:51Z
39,457,986
<p>Behold the power of <code>groupby</code>!</p> <pre><code>df = # as you defined above df['rolling_sum_by_time'] = df.groupby(df.index.time)['X'].apply(lambda x: x.shift(1).rolling(10).sum()) </code></pre> <p>It's a big pill to swallow there, but we are grouping by time (as in python datetime.time), then getting the column we care about (else apply will work on columns - it now works on the time-groups), and then applying the function you want! </p>
2
2016-09-12T20:05:14Z
[ "python", "pandas" ]
Pandas: how to compute the rolling sum of a variable over the last few days but only at a given hour?
39,457,435
<p>I have a dataframe as follows</p> <pre><code>df = pd.DataFrame({ 'X' : np.random.randn(50000)}, index=pd.date_range('1/1/2000', periods=50000, freq='T')) df.head(10) Out[37]: X 2000-01-01 00:00:00 -0.699565 2000-01-01 00:01:00 -0.646129 2000-01-01 00:02:00 1.339314 2000-01-01 00:03:00 0.559563 2000-01-01 00:04:00 1.529063 2000-01-01 00:05:00 0.131740 2000-01-01 00:06:00 1.282263 2000-01-01 00:07:00 -1.003991 2000-01-01 00:08:00 -1.594918 2000-01-01 00:09:00 -0.775230 </code></pre> <p>I would like to create a variable that contains the <code>sum</code> of X </p> <ul> <li>over the last 5 days (<strong>not including the current observation</strong>)</li> <li>only considering observations that fall at the exact same hour as the current observation.</li> </ul> <p>In other words:</p> <ol> <li>At index <code>2000-01-01 00:00:00</code>, <code>df['rolling_sum_same_hour']</code> contains the sum the values of X observed at <code>00:00:00</code> during the last 5 days in the data (not including <code>2000-01-01</code> of course). </li> <li>At index <code>2000-01-01 00:01:00</code>, <code>df['rolling_sum_same_hour']</code> contains the sum of of X observed at <code>00:00:01</code> during the last 5 days and so on. </li> </ol> <p>The intuitive idea is that intraday prices have intraday seasonality, and I want to get rid of it that way.</p> <p>I tried to use <code>df['rolling_sum_same_hour']=df.at_time(df.index.minute).rolling(window=5).sum()</code></p> <p>with no success. Any ideas?</p> <p>Many thanks!</p>
4
2016-09-12T19:28:51Z
39,457,992
<p>IIUC, what you want is to perform a rolling sum, but only on the observations grouped by the exact same time of day. This can be done by</p> <pre><code>df.X.groupby([df.index.hour, df.index.minute]).apply(lambda g: g.rolling(window=5).sum()) </code></pre> <p>(Note that your question alternates between 5 and 10 periods.) For example:</p> <pre><code>In [43]: df.X.groupby([df.index.hour, df.index.minute]).apply(lambda g: g.rolling(window=5).sum()).tail() Out[43]: 2000-02-04 17:15:00 -2.135887 2000-02-04 17:16:00 -3.056707 2000-02-04 17:17:00 0.813798 2000-02-04 17:18:00 -1.092548 2000-02-04 17:19:00 -0.997104 Freq: T, Name: X, dtype: float64 </code></pre>
2
2016-09-12T20:05:36Z
[ "python", "pandas" ]
Not able to get the resources attached with route table
39,457,457
<p>I am using Python for AWS infrastructure automation. I need to get the resources attached to the Route Table, for which the API given is:</p> <pre><code>ec2 = boto3.resource('ec2')
route_table_association = ec2.RouteTableAssociation('rtb-**********')
response=route_table_association.get_available_subresources()
</code></pre> <p>Here the return value of <code>response</code> is an empty list all the time, and <code>response=route_table_association.delete()</code> gives the exception:</p> <pre><code>An error occurred (InvalidAssociationID.NotFound) when calling the `DisassociateRouteTable operation: The association ID 'rtb-*********' does not exist.`
</code></pre> <p>But the route table exists and is attached to a subnet explicitly.</p>
0
2016-09-12T19:30:13Z
39,465,382
<p>The id required is the <a href="http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.RouteTableAssociation.id" rel="nofollow">RouteTableAssociationId, i.e. rtbassoc-xxxxxx</a>, <strong>NOT the route table id</strong>.</p> <p>The RouteTableAssociationId is inside the <a href="http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_route_tables" rel="nofollow">describe_route_tables</a> response JSON, in the 'Associations' element.</p> <pre><code>{
    'RouteTables': [
        {
            'RouteTableId': 'string',
            'VpcId': 'string',
            'Routes': [
                {....}
            ],
            'Associations': [
                {
                    'RouteTableAssociationId': 'string',
                    'RouteTableId': 'string',
                    'SubnetId': 'string',
                    'Main': True|False
                },
            ],
            .....
}
</code></pre>
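<p>As an illustration (not the original answer's code), one possible way to go from a route table id to its association ids and then disassociate them with boto3; the 'rtb-xxxxxxxx' value is a placeholder:</p> <pre><code>import boto3

ec2_client = boto3.client('ec2')
ec2 = boto3.resource('ec2')

resp = ec2_client.describe_route_tables(RouteTableIds=['rtb-xxxxxxxx'])

for table in resp['RouteTables']:
    for assoc in table.get('Associations', []):
        assoc_id = assoc['RouteTableAssociationId']  # rtbassoc-...
        route_table_association = ec2.RouteTableAssociation(assoc_id)
        # route_table_association.delete() would now disassociate the subnet
</code></pre>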
0
2016-09-13T08:23:52Z
[ "python", "amazon-web-services", "amazon-ec2", "boto3", "botocore" ]
Not able to get the resources attached with route table
39,457,457
<p>I am using python for AWS infrastructure automation. I need to get the resources attached with the Route Table for which API given is </p> <pre><code>ec2 = boto3.resource('ec2') route_table_association = ec2.RouteTableAssociation('rtb-**********') response=route_table_association.get_available_subresources() </code></pre> <p>Here the return type of response is giving me the empty list all the time. and <code>response=route_table_association.delete()</code> gives the exception</p> <pre><code>An error occurred (InvalidAssociationID.NotFound) when calling the `DisassociateRouteTable operation: The association ID 'rtb-*********' does not exist.` </code></pre> <p>But the route tebale exist and is attached to a subnet explicitly</p>
0
2016-09-12T19:30:13Z
39,470,406
<p>Thanks this worked for me.</p> <pre><code>response = client.describe_route_tables( RouteTableIds=[ routetable, ], Filters=[ { 'Name': 'route-table-id', 'Values': [ routetable ] } ] ) </code></pre>
0
2016-09-13T12:42:10Z
[ "python", "amazon-web-services", "amazon-ec2", "boto3", "botocore" ]
Convolution without any padding-opencv Python
39,457,468
<p>Is there any function in OpenCV-Python that can convolve an image with a kernel without any padding? Basically, I want an image in which convolution takes place only in the regions where the kernel and the portion of the image fully overlap.</p>
1
2016-09-12T19:30:45Z
39,463,454
<p>OpenCV only supports convolving an image where the output returned is the same size as the input image. As such, you can still use OpenCV's filter functions, but simply ignore those pixels along the edges where the kernel didn't fully encapsulate itself inside the image. Assuming that your image kernel is odd, you can simply divide each of the dimensions by half, take the floor (or round down) and use these to cut away the information that isn't valid and return what is left. As Divakar mentioned, this is the same method as using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html" rel="nofollow"><code>scipy</code>'s 2D convolution method</a> with the <code>'valid'</code> option. </p> <p>As such, assuming that your image is stored in <code>A</code> and your kernel is stored in <code>B</code>, you would simply do the following to get the filtered image where the kernel was fully encapsulated inside the image. Take note that we're going to assume that the kernel is odd and the output is stored in <code>C</code>.</p> <pre><code>import cv2 import numpy as np A = cv2.imread('...') # Load in image here B = (1.0/25.0)*np.ones((5,5)) # Specify kernel here C = cv2.filter2D(A, -1, B) # Convolve H = np.floor(np.array(B.shape)/2).astype(np.int) # Find half dims of kernel C = C[H[0]:-H[0],H[1]:-H[1]] # Cut away unwanted information </code></pre> <p>Take note that <code>cv2.filter2D</code> performs <strong>correlation</strong>, not convolution. However if the kernel is symmetric (that is if you take the transpose and it equals itself), correlation and convolution are equivalent. If this is not the case, you will need to perform a 180 degree rotation of the kernel before using <code>cv2.filter2D</code>. You can do that by simply doing:</p> <pre><code>B = B[::-1,::-1] </code></pre> <p>In order to compare, we can show that the above code is equivalent to use <code>scipy</code>'s <code>convolve2D</code> function. Here's a reproducible IPython session that shows us this:</p> <pre><code>In [41]: import cv2 In [42]: import numpy as np In [43]: from scipy.signal import convolve2d In [44]: A = np.reshape(np.arange(49), (7,7)).astype(np.float32) In [45]: A Out[45]: array([[ 0., 1., 2., 3., 4., 5., 6.], [ 7., 8., 9., 10., 11., 12., 13.], [ 14., 15., 16., 17., 18., 19., 20.], [ 21., 22., 23., 24., 25., 26., 27.], [ 28., 29., 30., 31., 32., 33., 34.], [ 35., 36., 37., 38., 39., 40., 41.], [ 42., 43., 44., 45., 46., 47., 48.]], dtype=float32) In [46]: B = (1.0/25.0)*np.ones((5,5), dtype=np.float32) In [47]: B Out[47]: array([[ 0.04, 0.04, 0.04, 0.04, 0.04], [ 0.04, 0.04, 0.04, 0.04, 0.04], [ 0.04, 0.04, 0.04, 0.04, 0.04], [ 0.04, 0.04, 0.04, 0.04, 0.04], [ 0.04, 0.04, 0.04, 0.04, 0.04]], dtype=float32) In [48]: C = cv2.filter2D(A, -1, B) In [49]: H = np.floor(np.array(B.shape)/2).astype(np.int) In [50]: C = C[H[0]:-H[0],H[1]:-H[1]] In [51]: C Out[51]: array([[ 15.99999809, 16.99999809, 18. ], [ 22.99999809, 24. , 24.99999809], [ 29.99999809, 30.99999809, 31.99999809]], dtype=float32) In [52]: C2 = convolve2d(A, B, mode='valid') In [53]: C2 Out[53]: array([[ 15.99999905, 17.00000191, 18.00000191], [ 22.99999809, 23.99999809, 24.99999809], [ 29.99999809, 30.99999809, 31.99999809]], dtype=float32) </code></pre> <p>The example is quite simple to understand. I declare a dummy matrix of 7 x 7 where the values increase from 0 to 48 row-wise. I also declare a 5 x 5 kernel of <code>(1/25)</code> for each element so this would essentially implement a 5 x 5 average filter. 
We thus use <code>cv2.filter2D</code> and <code>scipy.signal.convolve2d</code> to extract out only the valid portions of the convolution result. As far as precision goes, <code>C</code> which is the output of <code>cv2.filter2D</code> and <code>C2</code> which is the output of <code>convolve2d</code> are both equivalent. Take special note of not only the actual contents but the shape of both output arrays.</p> <p>However, if you wish to keep the size of the original image and replace the affected pixels by the filtered results, simply make a copy of the original image and use the same indexing logic that was used to cut away the information that was invalid with replacing those pixels in the copy with the convolved result:</p> <pre><code>C_copy = A.copy() C_copy[H[0]:-H[0],H[1]:-H[1]] = C </code></pre>
2
2016-09-13T06:25:29Z
[ "python", "opencv", "numpy", "image-processing" ]
Trying to interpolate linearly in python
39,457,469
<p>I have 3 arrays: a, b, c all with length 15. </p> <pre><code>a=[950, 850, 750, 675, 600, 525, 460, 400, 350, 300, 250, 225, 200, 175, 150] b = [16, 12, 9, -35, -40, -40, -40, -45, -50, -55, -60, -65, -70, -75, -80] c=[32.0, 22.2, 12.399999999999999, 2.599999999999998, -7.200000000000003, -17.0, -26.800000000000004, -36.60000000000001, -46.400000000000006, -56.2, -66.0, -75.80000000000001, -85.60000000000001, -95.4, -105.20000000000002] </code></pre> <p>I am trying to find the value of a at the index where b=c. T</p> <p>The problem is that there is no place where b=c exactly so I need to linearly interpolate between values in the array to find the value of a where b=c. Does that make sense? </p> <p>I was thinking about using <strong>scipy.interpolate</strong> to do the interpolation. </p> <p>I am having a hard time wrappying my mind around how to solve this problem. Any ideas on this would be great!</p>
2
2016-09-12T19:30:46Z
39,458,926
<p>Here's simpler variation of a function from <a href="http://stackoverflow.com/questions/15112964/digitizing-an-analog-signal/15114952#15114952">another answer of mine</a>:</p> <pre><code>from __future__ import division import numpy as np def find_roots(t, y): """ Given the input signal `y` with samples at times `t`, find the times where `y` is 0. `t` and `y` must be 1-D numpy arrays. Linear interpolation is used to estimate the time `t` between samples at which sign changes in `y` occur. """ # Find where y crosses 0. transition_indices = np.where(np.sign(y[1:]) != np.sign(y[:-1]))[0] # Linearly interpolate the time values where the transition occurs. t0 = t[transition_indices] t1 = t[transition_indices + 1] y0 = y[transition_indices] y1 = y[transition_indices + 1] slope = (y1 - y0) / (t1 - t0) transition_times = t0 - y0/slope return transition_times </code></pre> <p>That function can be used with <code>t = a</code> and <code>y = b - c</code>. For example, here is your data, entered as numpy arrays:</p> <pre><code>In [354]: a = np.array([950, 850, 750, 675, 600, 525, 460, 400, 350, 300, 250, 225, 200, 175, 150]) In [355]: b = np.array([16, 12, 9, -35, -40, -40, -40, -45, -50, -55, -60, -65, -70, -75, -80]) In [356]: c = np.array([32.0, 22.2, 12.399999999999999, 2.599999999999998, -7.200000000000003, -17.0, -26.800000000000004, -3 ...: 6.60000000000001, -46.400000000000006, -56.2, -66.0, -75.80000000000001, -85.60000000000001, -95.4, -105.2000000000 ...: 0002]) </code></pre> <p>The place where "b = c" is the place where "b - c = 0", so we pass <code>b - c</code> for <code>y</code>:</p> <pre><code>In [357]: find_roots(a, b - c) Out[357]: array([ 312.5]) </code></pre> <p>So the linearly interpolated value of <code>a</code> is 312.5.</p> <p>With the following matplotlib commands:</p> <pre><code>In [391]: plot(a, b, label="b") Out[391]: [&lt;matplotlib.lines.Line2D at 0x11eac8780&gt;] In [392]: plot(a, c, label="c") Out[392]: [&lt;matplotlib.lines.Line2D at 0x11f23aef0&gt;] In [393]: roots = find_roots(a, b - c) In [394]: [axvline(root, color='k', alpha=0.2) for root in roots] Out[394]: [&lt;matplotlib.lines.Line2D at 0x11f258208&gt;] In [395]: grid() In [396]: legend(loc="best") Out[396]: &lt;matplotlib.legend.Legend at 0x11f260ba8&gt; In [397]: xlabel("a") Out[397]: &lt;matplotlib.text.Text at 0x11e71c470&gt; </code></pre> <p>I get the plot</p> <p><a href="http://i.stack.imgur.com/JQzo3.png" rel="nofollow"><img src="http://i.stack.imgur.com/JQzo3.png" alt="plot"></a></p>
2
2016-09-12T21:17:08Z
[ "python", "scipy", "interpolation" ]
Trying to interpolate linearly in python
39,457,469
<p>I have 3 arrays: a, b, c all with length 15. </p> <pre><code>a=[950, 850, 750, 675, 600, 525, 460, 400, 350, 300, 250, 225, 200, 175, 150] b = [16, 12, 9, -35, -40, -40, -40, -45, -50, -55, -60, -65, -70, -75, -80] c=[32.0, 22.2, 12.399999999999999, 2.599999999999998, -7.200000000000003, -17.0, -26.800000000000004, -36.60000000000001, -46.400000000000006, -56.2, -66.0, -75.80000000000001, -85.60000000000001, -95.4, -105.20000000000002] </code></pre> <p>I am trying to find the value of a at the index where b=c. T</p> <p>The problem is that there is no place where b=c exactly so I need to linearly interpolate between values in the array to find the value of a where b=c. Does that make sense? </p> <p>I was thinking about using <strong>scipy.interpolate</strong> to do the interpolation. </p> <p>I am having a hard time wrappying my mind around how to solve this problem. Any ideas on this would be great!</p>
2
2016-09-12T19:30:46Z
39,459,095
<p>Another simple solution using:</p> <ul> <li>one linear-regressor for each vector (done with scikit-learn as scipy-docs were down for me; easy to switch to numpy/scipy-based linear-regression)</li> <li>general-purpose minimization using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow">scipy.optimize.minimize</a></li> </ul> <h3>Code</h3> <pre><code>a=[950, 850, 750, 675, 600, 525, 460, 400, 350, 300, 250, 225, 200, 175, 150] b = [16, 12, 9, -35, -40, -40, -40, -45, -50, -55, -60, -65, -70, -75, -80] c=[32.0, 22.2, 12.399999999999999, 2.599999999999998, -7.200000000000003, -17.0, -26.800000000000004, -36.60000000000001, -46.400000000000006, -56.2, -66.0, -75.80000000000001, -85.60000000000001, -95.4, -105.20000000000002] from sklearn.linear_model import LinearRegression from scipy.optimize import minimize import numpy as np reg_a = LinearRegression().fit(np.arange(len(a)).reshape(-1,1), a) reg_b = LinearRegression().fit(np.arange(len(b)).reshape(-1,1), b) reg_c = LinearRegression().fit(np.arange(len(c)).reshape(-1,1), c) funA = lambda x: reg_a.predict(x.reshape(-1,1)) funB = lambda x: reg_b.predict(x.reshape(-1,1)) funC = lambda x: reg_c.predict(x.reshape(-1,1)) opt_crossing = lambda x: (funB(x) - funC(x))**2 x0 = 1 res = minimize(opt_crossing, x0, method='SLSQP', tol=1e-6) print(res) print('Solution: ', funA(res.x)) import matplotlib.pyplot as plt x = np.linspace(0, 15, 100) a_ = reg_a.predict(x.reshape(-1,1)) b_ = reg_b.predict(x.reshape(-1,1)) c_ = reg_c.predict(x.reshape(-1,1)) plt.plot(x, a_, color='blue') plt.plot(x, b_, color='green') plt.plot(x, c_, color='cyan') plt.scatter(np.arange(15), a, color='blue') plt.scatter(np.arange(15), b, color='green') plt.scatter(np.arange(15), c, color='cyan') plt.axvline(res.x, color='red', linestyle='solid') plt.axhline(funA(res.x), color='red', linestyle='solid') plt.show() </code></pre> <h3>Output</h3> <pre><code>fun: array([ 7.17320622e-15]) jac: array([ -3.99479864e-07, 0.00000000e+00]) message: 'Optimization terminated successfully.' nfev: 8 nit: 2 njev: 2 status: 0 success: True x: array([ 8.37754008]) Solution: [ 379.55151658] </code></pre> <h3>Plot</h3> <p><a href="http://i.stack.imgur.com/rW4gM.png" rel="nofollow"><img src="http://i.stack.imgur.com/rW4gM.png" alt="enter image description here"></a></p>
0
2016-09-12T21:30:47Z
[ "python", "scipy", "interpolation" ]
Trying to interpolate linearly in python
39,457,469
<p>I have 3 arrays: a, b, c all with length 15. </p> <pre><code>a=[950, 850, 750, 675, 600, 525, 460, 400, 350, 300, 250, 225, 200, 175, 150] b = [16, 12, 9, -35, -40, -40, -40, -45, -50, -55, -60, -65, -70, -75, -80] c=[32.0, 22.2, 12.399999999999999, 2.599999999999998, -7.200000000000003, -17.0, -26.800000000000004, -36.60000000000001, -46.400000000000006, -56.2, -66.0, -75.80000000000001, -85.60000000000001, -95.4, -105.20000000000002] </code></pre> <p>I am trying to find the value of a at the index where b=c. T</p> <p>The problem is that there is no place where b=c exactly so I need to linearly interpolate between values in the array to find the value of a where b=c. Does that make sense? </p> <p>I was thinking about using <strong>scipy.interpolate</strong> to do the interpolation. </p> <p>I am having a hard time wrappying my mind around how to solve this problem. Any ideas on this would be great!</p>
2
2016-09-12T19:30:46Z
39,459,127
<p>This is not necessarily a solution to your problem, since your data does not appear to be linear, but it might give you some ideas. If you assume that your lines a, b, and c are linear, then the following idea works:</p> <p>Perform a linear regression of lines a, b and c to get their respective slopes (m_a, m_b, m_c) and y-intercepts (b_a, b_b, b_c). Then solve the equation 'y_b = y_c' for x, and find y = m_a * x + b_a to get your result.</p> <p>Since the linear regression approximately solves y = m * x + b, equation y_b = y_c can be solved by hand giving: x = (b_b-b_c) / (m_c-m_b).</p> <p>Using python, you get:</p> <pre><code>&gt;&gt; m_a, b_a, r_a, p_a, err_a = stats.linregress(range(15), a) &gt;&gt; m_b, b_b, r_b, p_b, err_b = stats.linregress(range(15), b) &gt;&gt; m_c, b_c, r_c, p_c, err_c = stats.linregress(range(15), c) &gt;&gt; x = (b_b-b_c) / (m_c-m_b) &gt;&gt; m_a * x + b_a 379.55151515151516 </code></pre> <p>Since your data is not linear, you probably need to go through your vectors one by one and search for overlapping y intervals. Then you can apply the above method but using only the endpoints of your two intervals to construct your b and c inputs to the linear regression. In this case, you should get an exact result, since the least-squares method will interpolate perfectly with only two points (although there are more efficient ways to do this since the intersection can be solved exactly in this simple case where there are two straight lines).</p> <p>Cheers.</p>
1
2016-09-12T21:33:11Z
[ "python", "scipy", "interpolation" ]
Can I use a trigger to add combinations of foreign keys?
39,457,487
<p>I'm in the situation where I want to do multiples inserts on a table with a trigger after insert.</p> <p>Here the python code to understand the objective first:</p> <pre><code>d = dict() d["Table1"] = ["1", "2"] d["Table2"] = ["A","B"] from itertools import product d["Table12"] = [ (t1,t2,-1) for t1, t2 in product(d["Table1"], d["Table2"])] </code></pre> <p>Here the result: Table12 is the product of values of d. </p> <pre><code>{'Table1': ['1', '2'], 'Table2': ['A', 'B'], 'Table12': [('1', 'A', -1), ('1', 'B', -1), ('2', 'A', -1), ('2', 'B', -1)]} </code></pre> <p>Using a database I'm trying to have the same behavior with a trigger, and fully complete the association with all combinations of primary keys.</p> <pre><code>Table1: pk name1 VARCHAR Table 2: pk name2 VARCHAR Table 12: pk (name1, name2) val INTEGER fk (t1_name) reference Table1 (name1) fk (t2_name) reference Table2 (name2) CREATE TRIGGER table1_insert AFTER INSERT ON Table1 FOR EACH ROW BEGIN INSERT INTO Table12 VALUES(new.name1, #?, -1) END </code></pre> <p>Is there a way to get the product #? with something like</p> <blockquote> <p>INSERT INTO Table12 VALUES(new.name2, table2.name2, -1) FROM select * in table2;</p> </blockquote> <p>If "3" is inserted in Table1: Table12 must be completed with <strong>(3, 'A', -1), (3, 'B', -1)</strong>.</p>
0
2016-09-12T19:31:43Z
39,457,794
<p>Use a <code>SELECT</code> query in the <code>INSERT</code> query.</p> <pre><code>CREATE TRIGGER table1_inisert AFTER INSERT ON Table1 FOR EACH ROW BEGIN INSERT INTO Table12 (name1, name2, val) SELECT NEW.name1, name2, -1 FROM Table2 END </code></pre>
0
2016-09-12T19:52:45Z
[ "python", "mysql", "database-design", "foreign-keys", "database-trigger" ]
Field content not always visible in kivy Textinput
39,457,516
<p>I am encountering some strange/unexpected behaviour when displaying content in a Textinput field (Initially used for new record input - subsequently for show record data). Data is available in a dictionary and is assigned to Textinput fields. For short data the characters will be hidden sometimes: </p> <p><a href="http://i.stack.imgur.com/wma4C.png" rel="nofollow"><img src="http://i.stack.imgur.com/wma4C.png" alt="enter image description here"></a></p> <p>It seems that the cursor is at the end of the string and all characters are at the left side and 'hidden'(?) behind the label. After mouseclick in the field and arrow left, the characters appear.</p> <p><a href="http://i.stack.imgur.com/uBoLO.png" rel="nofollow"><img src="http://i.stack.imgur.com/uBoLO.png" alt="enter image description here"></a></p> <p>What is wrong in my kv definitions? :</p> <pre><code>BoxLayout: orientation: "horizontal" height: 25 size_hint_y: None Label: id: _socialsource_label size_hint: 0.35,1 text: "Social access token:" size: self.texture_size halign: 'left' valign: 'middle' font_size: 14 color: .3,.3,.3,1 TextInput: id: socialsource padding: 4,2,4,0 size_hint: 0.65,1 font_size: 14 multiline: False readonly: False text_size: self.width, None halign: 'left' foreground_color: .3,.3,.3,1 disabled_foreground_color: .3,.3,.3,1 background_normal: './images/tinputBGnormal.png' background_active: './images/tinputBGactive.png' background_disabled_normal: './images/tinputBGdisnormal.png' background_disabled_active: './images/tinputBGdisactive.png' </code></pre> <p>In the python code the data is assigned by:</p> <pre><code>self.socialchnl.text = projdict[0]['PRJSocchnl:'] self.socialsource.text = projdict[0]['PRJSocsrc:'] self.socialprovdr.text = projdict[0]['PRJSocprv:'] </code></pre>
0
2016-09-12T19:34:09Z
39,461,888
<p>Use <code>hint_text</code> instead of <code>text</code> for your TextInputs. Something like:</p> <pre><code>    MyTextInput:
        id: social
        hint_text: some_social_name
</code></pre>
0
2016-09-13T03:42:26Z
[ "python", "kivy", "textinput", "kivy-language" ]
How to print the \n character in Jinja2
39,457,587
<p>I have the following string in Python: <code>thestring = "123\n456"</code></p> <p>In my Jinja2 template, I use <code>{{thestring}}</code> and the output is:</p> <blockquote> <p>123<br> 456</p> </blockquote> <p>The only way I can get Jinja2 to print the exact representation <code>123\n456</code> (including the <code>\n</code>) is by escaping <code>thestring = "123\\n456"</code>.</p> <p><strong>Is there any other way this can be done directly in the template?</strong></p>
0
2016-09-12T19:38:17Z
39,458,104
<p>I'll answer my own, maybe it helps someone who has the same question.</p> <p>This works: <code>{{thestring.encode('string_escape')}}</code></p>
1
2016-09-12T20:13:37Z
[ "python", "flask", "jinja2" ]
Airflow: How to SSH and run BashOperator from a different server
39,457,592
<p>Is there a way to ssh to different server and run BashOperator using Airbnb's Airflow? I am trying to run a hive sql command with Airflow but I need to SSH to a different box in order to run the hive shell. My tasks should look like this:</p> <ol> <li>SSH to server1</li> <li>start Hive shell</li> <li>run Hive command</li> </ol> <p>Thanks!</p>
0
2016-09-12T19:38:28Z
39,494,330
<p>I think that I just figured it out:</p> <ol> <li><p>Create an SSH connection in the UI under Admin &gt; Connection. Note: the connection will be deleted if you reset the database.</p></li> <li><p>In the Python file add the following</p> <pre><code>from airflow.contrib.hooks import SSHHook
from airflow.contrib.operators.ssh_execute_operator import SSHExecuteOperator

sshHook = SSHHook(conn_id=&lt;YOUR CONNECTION ID FROM THE UI&gt;)
</code></pre></li> <li><p>Add the SSH operator task</p> <pre><code>t1 = SSHExecuteOperator(
    task_id="task1",
    bash_command=&lt;YOUR COMMAND&gt;,
    ssh_hook=sshHook,
    dag=dag)
</code></pre></li> </ol> <p>Thanks!</p>
0
2016-09-14T15:29:01Z
[ "python", "ssh", "airflow" ]
improve linear search for KNN efficiency w/ NumPY
39,457,604
<p>I am trying to calculate the distance of each point in the testing set from each point in the training set.</p> <p>This is what my loop looks like right now:</p> <pre><code>for x in testingSet:
    for y in trainingSet:
        print numpy.linalg.norm(x-y)
</code></pre> <p>Where testingSet and trainingSet are numpy arrays in which each row of the two sets holds the feature data for one example.</p> <p>However, it's running extremely slowly, taking more than 10 minutes since my data set is fairly big (testing set of 3000, training set of ~10,000). Does this have to do with my method or am I utilizing NumPy incorrectly?</p>
0
2016-09-12T19:39:12Z
39,457,704
<p>This is because you naively iterate over your data, and loops are slow in python. Instead, use sklearn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html" rel="nofollow">pairwise distance functions</a>, or even better - use sklearn <a href="http://scikit-learn.org/stable/modules/neighbors.html" rel="nofollow">efficient nearest neighbour</a> search (like BallTree or KDTree). If you do not want to use sklearn, there is also a <a href="http://docs.scipy.org/doc/scipy/reference/spatial.distance.html" rel="nofollow">module in scipy</a>. Finally you can do "matrix tricks" to compute this, since</p> <pre><code>|| x - y ||^2 = &lt;x-y, x-y&gt; = &lt;x,x&gt; + &lt;y,y&gt; - 2&lt;x,y&gt; </code></pre> <p>you can do (assuming your data is in matrix form given as X and Y):</p> <pre><code>X2 = (X**2).sum(axis=1).reshape((-1, 1)) Y2 = (Y**2).sum(axis=1).reshape((1, -1)) distances = np.sqrt(X2 + Y2 - 2*X.dot(Y.T)) </code></pre>
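<p>For reference, a small sketch of the scipy/sklearn routes mentioned above; the array shapes mirror the question (3000 test points, 10,000 training points) and the feature dimension of 20 is an assumption:</p> <pre><code>import numpy as np
from scipy.spatial.distance import cdist
from sklearn.neighbors import BallTree

X = np.random.rand(3000, 20)   # testing set
Y = np.random.rand(10000, 20)  # training set

# All pairwise Euclidean distances at once (3000 x 10000 matrix)
D = cdist(X, Y)

# Or skip the full matrix and query only the nearest neighbour of each test point
tree = BallTree(Y)
dist, ind = tree.query(X, k=1)
</code></pre>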
3
2016-09-12T19:45:16Z
[ "python", "numpy", "machine-learning" ]
What is the proper level of indent for hanging indent with type hinting in python?
39,457,607
<p>What is the proper syntax for a hanging indent for a method with multiple parameters and type hinting?</p> <p><strong>Align under first parameter</strong></p> <pre><code>def get_library_book(self, book_id: str, library_id: str )-&gt; Book: </code></pre> <p><strong>Indent one level beneath</strong></p> <pre><code>def get_library_book( self, book_id: str, library_id: str ) -&gt; Book: </code></pre> <p>PEP8 supports the <em>Indent one level beneath</em> case, but does not specify if <em>Align under first parameter</em> is allowed. It states: </p> <blockquote> <p>When using a hanging indent the following should be considered; there should be no arguments on the first line and further indentation should be used to clearly distinguish itself as a continuation line.</p> </blockquote>
4
2016-09-12T19:39:21Z
39,458,753
<p>Read the previous line of <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a> more carefully, the part before "or using a hanging indent".</p> <blockquote> <p>Continuation lines should align wrapped elements either vertically using Python's implicit line joining inside parentheses, brackets and braces, or using a hanging indent.</p> </blockquote> <p>This is intended to cover the first "Yes" example, and your first example above.</p> <pre><code># Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)
</code></pre>
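<p>Applied to the signature from the question, the "aligned with opening delimiter" form would read something like this (a sketch; the <code>Book</code> return type is taken from the question):</p> <pre><code>def get_library_book(self,
                     book_id: str,
                     library_id: str) -&gt; Book:
    ...
</code></pre>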
2
2016-09-12T21:03:35Z
[ "python", "python-3.x", "pep8", "type-hinting" ]
What is the proper level of indent for hanging indent with type hinting in python?
39,457,607
<p>What is the proper syntax for a hanging indent for a method with multiple parameters and type hinting?</p> <p><strong>Align under first parameter</strong></p> <pre><code>def get_library_book(self, book_id: str, library_id: str )-&gt; Book: </code></pre> <p><strong>Indent one level beneath</strong></p> <pre><code>def get_library_book( self, book_id: str, library_id: str ) -&gt; Book: </code></pre> <p>PEP8 supports the <em>Indent one level beneath</em> case, but does not specify if <em>Align under first parameter</em> is allowed. It states: </p> <blockquote> <p>When using a hanging indent the following should be considered; there should be no arguments on the first line and further indentation should be used to clearly distinguish itself as a continuation line.</p> </blockquote>
4
2016-09-12T19:39:21Z
39,458,856
<p>Apart from Terry's answer, take an example from <a href="https://github.com/python/typeshed" rel="nofollow"><code>typeshed</code></a>, which is the project on Python's GitHub for annotating the <code>stdlib</code> with stubs.</p> <p>For example, in <a href="https://github.com/python/typeshed/blob/master/stdlib/3/importlib/machinery.pyi" rel="nofollow"><code>importlib.machinery</code></a> (and in other cases if you look) annotations are done using your first form, <a href="https://github.com/python/typeshed/blob/master/stdlib/3/importlib/machinery.pyi#L16" rel="nofollow">for example</a>:</p> <pre><code>def find_module(cls, fullname: str,
                path: Optional[Sequence[importlib.abc._Path]]
                ) -&gt; Optional[importlib.abc.Loader]:
</code></pre>
2
2016-09-12T21:12:29Z
[ "python", "python-3.x", "pep8", "type-hinting" ]
What is the proper level of indent for hanging indent with type hinting in python?
39,457,607
<p>What is the proper syntax for a hanging indent for a method with multiple parameters and type hinting?</p> <p><strong>Align under first parameter</strong></p> <pre><code>def get_library_book(self, book_id: str, library_id: str )-&gt; Book: </code></pre> <p><strong>Indent one level beneath</strong></p> <pre><code>def get_library_book( self, book_id: str, library_id: str ) -&gt; Book: </code></pre> <p>PEP8 supports the <em>Indent one level beneath</em> case, but does not specify if <em>Align under first parameter</em> is allowed. It states: </p> <blockquote> <p>When using a hanging indent the following should be considered; there should be no arguments on the first line and further indentation should be used to clearly distinguish itself as a continuation line.</p> </blockquote>
4
2016-09-12T19:39:21Z
39,459,117
<p>PEP8 has many good ideas in it, but I wouldn't rely on it to decide this kind of question about whitespace. When I studied PEP8's recommendations on whitespace, I found them to be inconsistent and even contradictory.</p> <p>Instead, I would look at general principles that apply to nearly all programming languages, not just Python.</p> <p>The column alignment shown in the first example has many disadvantages, and I don't use or allow it in any of my projects.</p> <p>Some of the disadvantages:</p> <ul> <li>If you change the function name so its length is different, you must realign all of the parameters.</li> <li>When you do that realignment, your source control diffs are cluttered with unnecessary whitespace changes.</li> <li>As the code is updated and maintained, it's likely that you'll miss some of the alignment when renaming variables, leading to misaligned code.</li> <li>You get much longer line lengths.</li> <li>The alignment doesn't work in a proportional font. (Yes, some developers prefer proportional fonts, and if you avoid column alignment, your code will be equally readable in monospaced or proportional fonts.)</li> </ul> <p>It gets even worse if you use column alignment in more complex cases. Consider this example:</p> <pre><code>let mut rewrites = try_opt!(subexpr_list.iter() .rev() .map(|e| { rewrite_chain_expr(e, total_span, context, max_width, indent) }) .collect::&lt;Option&lt;Vec&lt;_&gt;&gt;&gt;()); </code></pre> <p>This is Rust code from the Servo browser, whose coding style mandates this kind of column alignment. While it isn't Python code, exactly the same principles apply in Python or nearly any language.</p> <p>It should be apparent in this code sample how the use of column alignment leads to a bad situation. What if you needed to call another function, or had a longer variable name, inside that nested <code>rewrite_chain_expr</code> call? You're just about out of room unless you want <em>very</em> long lines.</p> <p>Compare the above with either of these versions which use a purely indentation-based style like your second Python example:</p> <pre><code>let mut rewrites = try_opt!( subexpr_list .iter() .rev() .map( |e| { rewrite_chain_expr( e, total_span, context, max_width, indent ) }) .collect::&lt;Option&lt;Vec&lt;_&gt;&gt;&gt;() ); </code></pre> <p>Or, if the parameters to <code>rewrite_chain_expr</code> were longer or if you just wanted shorter lines:</p> <pre><code>let mut rewrites = try_opt!( subexpr_list .iter() .rev() .map( |e| { rewrite_chain_expr( e, total_span, context, max_width, indent ) }) .collect::&lt;Option&lt;Vec&lt;_&gt;&gt;&gt;() ); </code></pre> <p>In contrast to the column-aligned style, this pure indentation style has many advantages and no disadvantages at all.</p>
2
2016-09-12T21:32:18Z
[ "python", "python-3.x", "pep8", "type-hinting" ]
Openshift python requests proxy permission denied
39,457,610
<p>I'm trying to use a proxy with the python 'requests' package on an Openshift server. I am getting a permission denied error. See below.</p> <p>Is Openshift blocking the connection or am I not configuring it correctly? Something else? Openshift doesn't want to let me connect to a proxy because the code works fine locally and on Heroku.</p> <p><strong>Code</strong></p> <pre><code>from ssl import PROTOCOL_TLSv1 import ssladapter proxies = {'https': 'http://{}:{}@96.44.147.34:6060'.format(CFG.proxy_username, CFG.proxy_password)} url1 = 'https://reservaciones.volaris.com/Flight/DeepLinkSearch' session = requests.Session() session.mount('https://', ssladapter.SSLAdapter(ssl_version=PROTOCOL_TLSv1)) request1 = session.get(url1, proxies=proxies) </code></pre> <p><strong>Traceback</strong></p> <pre><code>requests.exceptions.ProxyError: HTTPSConnectionPool(host='reservaciones.volaris.com', port=443): Max retries exceeded with url: /Flight/DeepLinkSearch (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('&lt;requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f4e78386ad0&gt;: Failed to establish a new connection: [Errno 13] Permission denied',))) </code></pre>
0
2016-09-12T19:39:30Z
39,620,002
<p>Most probably OpenShift blocks uncommon outgoing ports for <a href="http://security.stackexchange.com/questions/24310/why-block-outgoing-network-traffic-with-a-firewall">security reasons</a>. Your proxy is listening on 6060. You should try to ssh into your gear and try <code>telnet</code>.</p> <p>In my gear, port 6060 is blocked. See the attached screenshot. <a href="http://portquiz.net/" rel="nofollow">portquiz</a> listens on all TCP ports.</p> <p><a href="http://i.stack.imgur.com/tdBWP.png" rel="nofollow"><img src="http://i.stack.imgur.com/tdBWP.png" alt="enter image description here"></a></p>
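<p>If <code>telnet</code> is not available on the gear, a quick connectivity check can also be done from Python with the standard library; the host and port below are simply the proxy address from the question:</p> <pre><code>import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect(('96.44.147.34', 6060))
    print('port 6060 is reachable')
except socket.error as exc:
    print('blocked or unreachable: %s' % exc)
finally:
    s.close()
</code></pre>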
0
2016-09-21T14:55:54Z
[ "python", "proxy", "openshift", "python-requests" ]
is there a 2D dictionary in python?
39,457,653
<p>I was about to create a matrix like :</p> <pre><code> 33 12 23 42 11 32 43 22 33 − 1 1 1 0 0 1 1 12 1 − 1 1 0 0 1 1 23 1 1 − 1 1 1 0 0 42 1 1 1 − 1 1 0 0 11 0 0 1 1 − 1 1 1 32 0 0 1 1 1 − 1 1 43 1 1 0 0 1 1 − 1 22 1 1 0 0 1 1 1 − </code></pre> <p>I want to query by horizontal or vertical titles, so I created the matrix by:</p> <pre><code>a = np.matrix('99 33 12 23 42 11 32 43 22;33 99 1 1 1 0 0 1 1;12 1 99 1 1 0 0 1 1;23 1 1 99 1 1 1 0 0;42 1 1 1 99 1 1 0 0;11 0 0 1 1 99 1 1 1;32 0 0 1 1 1 99 1 1;43 1 1 0 0 1 1 99 1;22 1 1 0 0 1 1 1 99') </code></pre> <p>I want to have the certain data if I query a[23][11] = 1</p> <p>so is there a way we can create a 2D dictionary, so that a[23][11] = 1?</p> <p>Thanks</p>
3
2016-09-12T19:42:14Z
39,458,160
<p>You're clearly asking for something outside of <code>numpy</code>. </p> <p>A <a href="https://docs.python.org/2/library/collections.html#defaultdict-objects" rel="nofollow"><code>defauldict</code></a> with the <em><code>default_factory</code></em> as <code>dict</code> gives a sense of the <em>2D dictionary</em> you want:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; a = defaultdict(dict) &gt;&gt;&gt; a[23][11] = 1 &gt;&gt;&gt; a[23] {11: 1} &gt;&gt;&gt; a[23][11] 1 </code></pre>
3
2016-09-12T20:17:27Z
[ "python", "numpy" ]
is there a 2D dictionary in python?
39,457,653
<p>I was about to create a matrix like :</p> <pre><code> 33 12 23 42 11 32 43 22 33 − 1 1 1 0 0 1 1 12 1 − 1 1 0 0 1 1 23 1 1 − 1 1 1 0 0 42 1 1 1 − 1 1 0 0 11 0 0 1 1 − 1 1 1 32 0 0 1 1 1 − 1 1 43 1 1 0 0 1 1 − 1 22 1 1 0 0 1 1 1 − </code></pre> <p>I want to query by horizontal or vertical titles, so I created the matrix by:</p> <pre><code>a = np.matrix('99 33 12 23 42 11 32 43 22;33 99 1 1 1 0 0 1 1;12 1 99 1 1 0 0 1 1;23 1 1 99 1 1 1 0 0;42 1 1 1 99 1 1 0 0;11 0 0 1 1 99 1 1 1;32 0 0 1 1 1 99 1 1;43 1 1 0 0 1 1 99 1;22 1 1 0 0 1 1 1 99') </code></pre> <p>I want to have the certain data if I query a[23][11] = 1</p> <p>so is there a way we can create a 2D dictionary, so that a[23][11] = 1?</p> <p>Thanks</p>
3
2016-09-12T19:42:14Z
39,458,273
<p>If I understand correctly you just want to label your rows/columns. To stay within the numpy array framework, a simple solution would be to create a mapping between the labels and the array order. I am also going to assume that it is OK to convert the labels into strings as they can be anything (though integers would also be fine).</p> <pre><code>import scipy as sp
import scipy.linalg  # makes sp.linalg available

l = {str(x): ind for ind, x in enumerate((33, 12, 23, 42, 11, 32, 43, 22))}
a = sp.linalg.circulant([99, 1, 1, 1, 0, 0, 1, 1])
a[l['32'], l['23']]
</code></pre>
0
2016-09-12T20:24:33Z
[ "python", "numpy" ]
is there a 2D dictionary in python?
39,457,653
<p>I was about to create a matrix like :</p> <pre><code> 33 12 23 42 11 32 43 22 33 − 1 1 1 0 0 1 1 12 1 − 1 1 0 0 1 1 23 1 1 − 1 1 1 0 0 42 1 1 1 − 1 1 0 0 11 0 0 1 1 − 1 1 1 32 0 0 1 1 1 − 1 1 43 1 1 0 0 1 1 − 1 22 1 1 0 0 1 1 1 − </code></pre> <p>I want to query by horizontal or vertical titles, so I created the matrix by:</p> <pre><code>a = np.matrix('99 33 12 23 42 11 32 43 22;33 99 1 1 1 0 0 1 1;12 1 99 1 1 0 0 1 1;23 1 1 99 1 1 1 0 0;42 1 1 1 99 1 1 0 0;11 0 0 1 1 99 1 1 1;32 0 0 1 1 1 99 1 1;43 1 1 0 0 1 1 99 1;22 1 1 0 0 1 1 1 99') </code></pre> <p>I want to have the certain data if I query a[23][11] = 1</p> <p>so is there a way we can create a 2D dictionary, so that a[23][11] = 1?</p> <p>Thanks</p>
3
2016-09-12T19:42:14Z
39,458,815
<p>Are you looking for a dictionary with pairs as keys?</p> <pre><code>d = {} d[33, 12] = 1 d[33, 23] = 1 # etc </code></pre> <p>Note that in python <code>d[a, b]</code> is just syntactic sugar for <code>d[(a, b)]</code></p>
1
2016-09-12T21:08:29Z
[ "python", "numpy" ]
is there a 2D dictionary in python?
39,457,653
<p>I was about to create a matrix like :</p> <pre><code> 33 12 23 42 11 32 43 22 33 − 1 1 1 0 0 1 1 12 1 − 1 1 0 0 1 1 23 1 1 − 1 1 1 0 0 42 1 1 1 − 1 1 0 0 11 0 0 1 1 − 1 1 1 32 0 0 1 1 1 − 1 1 43 1 1 0 0 1 1 − 1 22 1 1 0 0 1 1 1 − </code></pre> <p>I want to query by horizontal or vertical titles, so I created the matrix by:</p> <pre><code>a = np.matrix('99 33 12 23 42 11 32 43 22;33 99 1 1 1 0 0 1 1;12 1 99 1 1 0 0 1 1;23 1 1 99 1 1 1 0 0;42 1 1 1 99 1 1 0 0;11 0 0 1 1 99 1 1 1;32 0 0 1 1 1 99 1 1;43 1 1 0 0 1 1 99 1;22 1 1 0 0 1 1 1 99') </code></pre> <p>I want to have the certain data if I query a[23][11] = 1</p> <p>so is there a way we can create a 2D dictionary, so that a[23][11] = 1?</p> <p>Thanks</p>
3
2016-09-12T19:42:14Z
39,458,901
<p>Another possibility is to use tuples as the dictionary keys</p> <pre><code>{(33, 12): 1, (23, 12): 1, ...}
</code></pre> <p><code>scipy.sparse</code> has a sparse matrix format that stores its values in such a dictionary. With your values such a matrix would represent a 50x50 matrix with mostly 0 values, and just 1's at these selected coordinates.</p> <p>Keep in mind that the keys of a dictionary (ordinary at least) are not ordered.</p> <p>What are you going to be doing with this data? A dictionary, whether tuple-keyed or nested, is good for one kind of usage, but bad for others. A matrix such as your sample is better for other things, like operations along rows or columns. The dictionary format largely obscures that kind of ordered layout.</p>
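<p>The format alluded to is the "dictionary of keys" matrix; a minimal sketch, assuming a 50x50 shape is enough for these labels:</p> <pre><code>from scipy.sparse import dok_matrix

m = dok_matrix((50, 50), dtype=int)
m[33, 12] = 1
m[23, 12] = 1
print(m[33, 12])   # 1
print(m[40, 40])   # 0 -- unset entries behave as zero
</code></pre>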
1
2016-09-12T21:15:21Z
[ "python", "numpy" ]
Python extract italic content from html
39,457,658
<p>I am trying to extract 'Italic' Content from a pdf in python. I have converted the pdf to html so that I can use the italic tag to extract the text. Here is how the html looks like</p> <pre><code>&lt;br&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="position:absolute; border: textbox 1px solid; writing-mode:lr-tb; left:71px; top:225px; width:422px; height:15px;"&gt;&lt;span style="font-family: TTPGFA+Symbol; font- size:12px"&gt;•&lt;/span&gt;&lt;span style="font-family: YUWTQX+ArialMT; font- size:14px"&gt; Kornai, Janos. 1992. &lt;/span&gt;&lt;span style="font-family: PUCJZV+Arial-ItalicMT; font-size:14px"&gt;The Socialist System: The Political Economy of Communism&lt;/span&gt;&lt;span style="font-family: YUWTQX+ArialMT; font-size:14px"&gt;. </code></pre> <p>This is how the code looks:</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(open("/../..myfile.html")) bTags = [] for i in soup.findAll('span'): bTags.append(i.text) </code></pre> <p>I am not sure how can I get only the italic text.</p>
2
2016-09-12T19:42:35Z
39,458,011
<p>Try this:</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(html) bTags = [] for i in soup.find_all('span', style=lambda x: x and 'Italic' in x): bTags.append(i.text) print bTags </code></pre> <p>Passing a function to the <code>style</code> argument will filter results by the result of that function, with its input as the value of the <code>style</code> attribute. We check to see if the string <code>Italic</code> is inside the attribute, and if so, return True.</p> <p>You may need a more sophisticated algorithm depending on the rest of what your HTML looks like.</p>
2
2016-09-12T20:06:41Z
[ "python", "html", "italic" ]
Django No module named backendssocial.apps.django_app.context_processors
39,457,702
<p>I'm adding an authentication via facebook and when I run my localhost I'm getting this error in terminal:</p> <pre><code>xx-MacBook-Pro:bookstore xx$ python manage.py runserver /Library/Python/2.7/site-packages/django/db/models/fields/subclassing.py:22: RemovedInDjango110Warning: SubfieldBase has been deprecated. Use Field.from_db_value instead. RemovedInDjango110Warning) /Library/Python/2.7/site-packages/django/db/models/fields/subclassing.py:22: RemovedInDjango110Warning: SubfieldBase has been deprecated. Use Field.from_db_value instead. RemovedInDjango110Warning) Performing system checks... /Library/Python/2.7/site-packages/social/apps/django_app/urls.py:12: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got auth). Pass the callable instead. name='begin'), /Library/Python/2.7/site-packages/social/apps/django_app/urls.py:14: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got complete). Pass the callable instead. name='complete'), /Library/Python/2.7/site-packages/social/apps/django_app/urls.py:17: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got disconnect). Pass the callable instead. name='disconnect'), /Library/Python/2.7/site-packages/social/apps/django_app/urls.py:19: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got disconnect). Pass the callable instead. 'disconnect', name='disconnect_individual'), /Library/Python/2.7/site-packages/social/apps/django_app/urls.py:19: RemovedInDjango110Warning: django.conf.urls.patterns() is deprecated and will be removed in Django 1.10. Update your urlpatterns to be a list of django.conf.urls.url() instances instead. 'disconnect', name='disconnect_individual'), System check identified no issues (0 silenced). September 12, 2016 - 19:22:13 Django version 1.9, using settings 'bookstore.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. </code></pre> <p>I'm guessing that the bottom portion of this is simply saying that the newer version of Django will be different. I'm also guessing that the top portion is saying there there is a warning leading up to it. However, when I actually go to the localhost I get this error:</p> <pre><code>Request Method: GET Request URL: http://localhost:8000/store/ Django Version: 1.9 Python Version: 2.7.10 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'social.apps.django_app.default', 'registration', 'store'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback: File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response 147. 
response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Users/rseck/Desktop/stoneriverelearning/bookstore/store/views.py" in store 17. return render(request, 'store.html', context) File "/Library/Python/2.7/site-packages/django/shortcuts.py" in render 67. template_name, context, request=request, using=using) File "/Library/Python/2.7/site-packages/django/template/loader.py" in render_to_string 97. return template.render(context, request) File "/Library/Python/2.7/site-packages/django/template/backends/django.py" in render 95. return self.template.render(context) File "/Library/Python/2.7/site-packages/django/template/base.py" in render 204. with context.bind_template(self): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py" in __enter__ 17. return self.gen.next() File "/Library/Python/2.7/site-packages/django/template/context.py" in bind_template 256. processors = (template.engine.template_context_processors + File "/Library/Python/2.7/site-packages/django/utils/functional.py" in __get__ 33. res = instance.__dict__[self.name] = self.func(instance) File "/Library/Python/2.7/site-packages/django/template/engine.py" in template_context_processors 105. return tuple(import_string(path) for path in context_processors) File "/Library/Python/2.7/site-packages/django/template/engine.py" in &lt;genexpr&gt; 105. return tuple(import_string(path) for path in context_processors) File "/Library/Python/2.7/site-packages/django/utils/module_loading.py" in import_string 20. module = import_module(module_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py" in import_module 37. __import__(name) Exception Type: ImportError at /store/ Exception Value: No module named backendssocial.apps.django_app.context_processors </code></pre> <p>When I pip freeze I see that I have python-social-auth==0.2.7, so I'm not sure what is causing all this. </p> <p>This is parts of my settings.py file:</p> <pre><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'social.apps.django_app.default', 'registration', 'store', ] TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'social.apps.django_app.context_processors.backends' 'social.apps.django_app.context_processors.login_redirect' ], }, }, ] WSGI_APPLICATION = 'bookstore.wsgi.application' AUTHENTICATION_BACKENDS = ( 'social.backends.facebook.FacebookOAuth2', 'django.contrib.auth.backends.ModelBackend' ) # Registration ACCOUNT_ACTIVATION_DAYS = 7 REGISTRATION_AUTO_LOGIN = True LOGIN_REDIRECT_URL ='/store/' # Email settings EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend" EMAIL_HOST = "smtp.gmail.com" EMAIL_HOST_USER = "xx@gmail.com" EMAIL_HOST_PASSWORD = "xx" EMAIL_PORT = 587 EMAIL_USE_TLS = True DEFAULT_FROM_EMAIL = "xx@xx.com" # Social Auth - Facebook SOCIAL_AUTH_FACEBOOK_KEY = 'xx' SOCIAL_AUTH_FACEBOOK_SECRET = 'xx' </code></pre> <p>In advance, thank you for your help!</p>
0
2016-09-12T19:45:10Z
39,457,758
<p>You are missing one comma in <code>TEMPLATES</code> variable:</p> <pre><code>'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'social.apps.django_app.context_processors.backends' &lt;--- MISSING COMMA 'social.apps.django_app.context_processors.login_redirect' ], </code></pre>
1
2016-09-12T19:49:17Z
[ "python", "django", "facebook" ]
Python pandas: conditionally select a uniform sample from a dataframe
39,457,762
<p>Say I have a dataframe as such</p> <pre><code>category1 category2 other_col another_col ....
a         1
a         2
a         2
a         3
a         3
a         1
b         10
b         10
b         10
b         11
b         11
b         11
</code></pre> <p>I want to obtain a sample from my dataframe so that <code>category1</code> is represented a uniform number of times. I'm assuming that there are an equal number of each type in <code>category1</code>. I know that this can be done with pandas using <code>pandas.sample()</code>. However, I also want to ensure that the sample I select has <code>category2</code> equally represented as well. So, for example, if I have a sample size of 5, I would want something such as:</p> <pre><code>a 1
a 2
b 10
b 11
b 10
</code></pre> <p>I would <em>not</em> want something such as:</p> <pre><code>a 1
a 1
b 10
b 10
b 10
</code></pre> <p>While this is a valid random sample of <code>n=4</code>, it would not meet my requirements as I want to vary as much as possible the types of <code>category2</code>.</p> <p>Notice that in the first example, because <code>a</code> was only sampled twice, <code>3</code> was not represented from <code>category2</code>. This is okay. The goal is just to represent the sample data as uniformly as possible.</p> <p>If it helps to provide a clearer example, one could think of having the categories <code>fruit</code>, <code>vegetables</code>, <code>meat</code>, <code>grains</code>, <code>junk</code>. In a sample size of 10, I would want as much as possible to represent each category. So ideally, 2 of each. Then each of those 2 selected rows belonging to the chosen categories would have subcategories that are also represented as uniformly as possible. So, for example, fruit could have subcategories of red_fruits, yellow_fruits, etc. For the 2 fruit categories that are selected out of the 10, red_fruits and yellow_fruits would both be represented in the sample. Of course, if we had a larger sample size, we would include more of the subcategories of fruit (green_fruits, blue_fruits, etc.).</p>
6
2016-09-12T19:49:44Z
39,460,259
<p>Trick is building up a balanced array. I provided a clumsy way of doing it. Then cycle through the groupby object sampling by referencing the balanced array.</p> <pre><code>def rep_sample(df, col, n, *args, **kwargs): nu = df[col].nunique() m = len(df) mpb = n // nu mku = n - mpb * nu fills = np.zeros(nu) fills[:mku] = 1 sample_sizes = (np.ones(nu) * mpb + fills).astype(int) gb = df.groupby(col) sample = lambda sub_df, i: sub_df.sample(sample_sizes[i], *args, **kwargs) subs = [sample(sub_df, i) for i, (_, sub_df) in enumerate(gb)] return pd.concat(subs) </code></pre> <hr> <h3>Demonstration</h3> <pre><code>rep_sample(df, 'category1', 5) </code></pre> <p><a href="http://i.stack.imgur.com/cCoqj.png" rel="nofollow"><img src="http://i.stack.imgur.com/cCoqj.png" alt="enter image description here"></a></p>
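<p>One small note on the <code>*args, **kwargs</code> pass-through (my own addition): since they are forwarded to <code>DataFrame.sample</code>, the draw can, for example, be made reproducible:</p>

<pre><code>rep_sample(df, 'category1', 5, random_state=0)
</code></pre>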
1
2016-09-12T23:39:40Z
[ "python", "pandas", "dataframe", "uniform" ]
How to use regex to disallow non-digits but allow a dot in Python?
39,457,780
<p>I am new to regex and trying to remove all <strong>non-digits</strong> but keep the <strong>dot</strong> (<strong>.</strong>) in a string:</p>

<pre><code>x = ['ABCD, EFGH ', ' 20.9&amp;dog; ', ' IJKLM /&gt;']
</code></pre>

<p>So far I have tried the following:</p>

<pre><code>&gt;&gt;&gt; x = re.sub("\D", "", x)
209
</code></pre>

<p>However I am trying to get the following outcome:</p>

<pre><code>20.9
</code></pre>

<p>Thanks.</p>
0
2016-09-12T19:51:31Z
39,457,854
<p>Instead of using the class <code>\D</code> you can define your own class of characters using <code>[...]</code>, and invert that class using <code>[^...]</code>. Now just put all the digits <code>0-9</code> and the <code>.</code> into that class:</p> <pre><code>&gt;&gt;&gt; x = ['ABCD, EFGH ', ' 20.9&amp;dog; ', ' IJKLM /&gt;'] &gt;&gt;&gt; [re.sub("[^0-9.]", "", y) for y in x] ['', '20.9', ''] </code></pre> <p>Of course, instead of <em>removing</em> everything that is <em>not</em> a number or dot, you could also use <code>re.findall</code> or <code>re.search</code> to get those parts of the string that <em>are</em> numbers or dot. This has the benefit that if the string contains more than one number, those will not clump together:</p> <pre><code>&gt;&gt;&gt; z = "foo20.9bar42.1blub" &gt;&gt;&gt; re.sub("[^0-9.]", "", z) '20.942.1' &gt;&gt;&gt; re.findall("[0-9.]+", z) ['20.9', '42.1'] </code></pre>
2
2016-09-12T19:56:10Z
[ "python", "regex", "python-2.7" ]
How to use regex to disallow non-digits but allow a dot in Python?
39,457,780
<p>I am new to regex and trying to remove all <strong>non-digits</strong> but keep the <strong>dot</strong> (<strong>.</strong>) in a string:</p>

<pre><code>x = ['ABCD, EFGH ', ' 20.9&amp;dog; ', ' IJKLM /&gt;']
</code></pre>

<p>So far I have tried the following:</p>

<pre><code>&gt;&gt;&gt; x = re.sub("\D", "", x)
209
</code></pre>

<p>However I am trying to get the following outcome:</p>

<pre><code>20.9
</code></pre>

<p>Thanks.</p>
0
2016-09-12T19:51:31Z
39,457,859
<p>This is a simple requirement which can be made explicit:</p> <pre><code>for item in x: print re.sub(r'[^0-9.]', "", item) </code></pre>
1
2016-09-12T19:56:19Z
[ "python", "regex", "python-2.7" ]
How to use regex to disallow non-digits but allow a dot in Python?
39,457,780
<p>I am new to regex and trying to remove all <strong>non-digits</strong> but keep the <strong>dot</strong> (<strong>.</strong>) in a string:</p>

<pre><code>x = ['ABCD, EFGH ', ' 20.9&amp;dog; ', ' IJKLM /&gt;']
</code></pre>

<p>So far I have tried the following:</p>

<pre><code>&gt;&gt;&gt; x = re.sub("\D", "", x)
209
</code></pre>

<p>However I am trying to get the following outcome:</p>

<pre><code>20.9
</code></pre>

<p>Thanks.</p>
0
2016-09-12T19:51:31Z
39,457,866
<p>You want an inverted character class:</p> <pre><code>re.sub(r"[^\d.]", "", x) </code></pre> <p>Note that <code>[^0-9.]</code> and <code>[^\d.]</code> are not the same, because <code>\d</code> matches many more characters than just <code>0123456789</code>:</p> <pre><code>&gt;&gt;&gt; print(textwrap.fill( ... "".join(x for x in (chr(y) for y in range(0x110000)) ... if re.match(r"\d", x)), ... break_long_words=True, width=10)) 0123456789 ٠١٢٣٤٥٦٧٨٩ ۰۱۲۳۴۵۶۷۸۹ ߀߁߂߃߄߅߆߇߈߉ ०१२३४५६७८९ ০১২৩৪৫৬৭৮৯ ੦੧੨੩੪੫੬੭੮੯ ૦૧૨૩૪૫૬૭૮૯ ୦୧୨୩୪୫୬୭୮୯ ௦௧௨௩௪௫௬௭௮௯ ౦౧౨౩౪౫౬౭౮౯ ೦೧೨೩೪೫೬೭೮೯ ൦൧൨൩൪൫൬൭൮൯ ෦෧෨෩෪෫෬෭෮෯ ๐๑๒๓๔๕๖๗๘๙ ໐໑໒໓໔໕໖໗໘໙ ༠༡༢༣༤༥༦༧༨༩ ၀၁၂၃၄၅၆၇၈၉ ႐႑႒႓႔႕႖႗႘႙ ០១២៣៤៥៦៧៨៩ ᠐᠑᠒᠓᠔᠕᠖᠗᠘᠙ ᥆᥇᥈᥉᥊᥋᥌᥍᥎᥏ ᧐᧑᧒᧓᧔᧕᧖᧗᧘᧙ ᪀᪁᪂᪃᪄᪅᪆᪇᪈᪉ ᪐᪑᪒᪓᪔᪕᪖᪗᪘᪙ ᭐᭑᭒᭓᭔᭕᭖᭗᭘᭙ ᮰᮱᮲᮳᮴᮵᮶᮷᮸᮹ ᱀᱁᱂᱃᱄᱅᱆᱇᱈᱉ ᱐᱑᱒᱓᱔᱕᱖᱗᱘᱙ ꘠꘡꘢꘣꘤꘥꘦꘧꘨꘩ ꣐꣑꣒꣓꣔꣕꣖꣗꣘꣙ ꤀꤁꤂꤃꤄꤅꤆꤇꤈꤉ ꧐꧑꧒꧓꧔꧕꧖꧗꧘꧙ ꧰꧱꧲꧳꧴꧵꧶꧷꧸꧹ ꩐꩑꩒꩓꩔꩕꩖꩗꩘꩙ ꯰꯱꯲꯳꯴꯵꯶꯷꯸꯹ 0123456789 𐒠𐒡𐒢𐒣𐒤𐒥𐒦𐒧𐒨𐒩 𑁦𑁧𑁨𑁩𑁪𑁫𑁬𑁭𑁮𑁯 𑃰𑃱𑃲𑃳𑃴𑃵𑃶𑃷𑃸𑃹 𑄶𑄷𑄸𑄹𑄺𑄻𑄼𑄽𑄾𑄿 𑇐𑇑𑇒𑇓𑇔𑇕𑇖𑇗𑇘𑇙 𑋰𑋱𑋲𑋳𑋴𑋵𑋶𑋷𑋸𑋹 𑓐𑓑𑓒𑓓𑓔𑓕𑓖𑓗𑓘𑓙 𑙐𑙑𑙒𑙓𑙔𑙕𑙖𑙗𑙘𑙙 𑛀𑛁𑛂𑛃𑛄𑛅𑛆𑛇𑛈𑛉 𑜰𑜱𑜲𑜳𑜴𑜵𑜶𑜷𑜸𑜹 𑣠𑣡𑣢𑣣𑣤𑣥𑣦𑣧𑣨𑣩 𖩠𖩡𖩢𖩣𖩤𖩥𖩦𖩧𖩨𖩩 𖭐𖭑𖭒𖭓𖭔𖭕𖭖𖭗𖭘𖭙 𝟎𝟏𝟐𝟑𝟒𝟓𝟔𝟕𝟖𝟗 𝟘𝟙𝟚𝟛𝟜𝟝𝟞𝟟𝟠𝟡 𝟢𝟣𝟤𝟥𝟦𝟧𝟨𝟩𝟪𝟫 𝟬𝟭𝟮𝟯𝟰𝟱𝟲𝟳𝟴𝟵 𝟶𝟷𝟸𝟹𝟺𝟻𝟼𝟽𝟾𝟿 </code></pre> <p>I bet you didn't know there were so many variations of the <a href="https://en.wikipedia.org/wiki/Hindu%E2%80%93Arabic_numeral_system" rel="nofollow">Hindu-Arabic numeral system</a>.</p> <p>It's also worth mentioning that even in the latest 3.x, Python's regular expressions do <strong>not</strong> support <a href="http://www.regular-expressions.info/posixbrackets.html" rel="nofollow">POSIX ERE named character classes</a> (scroll down to "character classes" -- sadly, there is no anchor). <code>[^[:digit:].]</code> won't do what you want.</p>
2
2016-09-12T19:56:48Z
[ "python", "regex", "python-2.7" ]
How to use regex to disallow non-digits but allow a dot in Python?
39,457,780
<p>I am new to regex and trying to remove all <strong>non-digits</strong> but keep the <strong>dot</strong> (<strong>.</strong>) in a string:</p>

<pre><code>x = ['ABCD, EFGH ', ' 20.9&amp;dog; ', ' IJKLM /&gt;']
</code></pre>

<p>So far I have tried the following:</p>

<pre><code>&gt;&gt;&gt; x = re.sub("\D", "", x)
209
</code></pre>

<p>However I am trying to get the following outcome:</p>

<pre><code>20.9
</code></pre>

<p>Thanks.</p>
0
2016-09-12T19:51:31Z
39,457,953
<p>All the answers have this issue of skipping the DOT without making sure that it is actually part of a decimal number. Hence, in a string like <code>Mr.Bean</code> the DOT survives even though it is not part of a number, since DOT is part of the negated character class (the exclusion list) and is therefore never removed.</p>

<p>To fix this issue you can use this negative lookahead regex:</p>

<pre><code>&gt;&gt;&gt; re.sub(r"(?!\.\d)\D", "", ' 20.9&amp;dog; ')
'20.9'
</code></pre>

<p><code>(?!\.\d)</code> asserts that the character we are about to match with <code>\D</code> is not a DOT followed by a digit.</p>
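<p>A quick illustration of the difference (my own example, not from the question):</p>

<pre><code>&gt;&gt;&gt; re.sub(r"[^0-9.]", "", 'Mr.Bean owes 20.9 dollars')
'.20.9'
&gt;&gt;&gt; re.sub(r"(?!\.\d)\D", "", 'Mr.Bean owes 20.9 dollars')
'20.9'
</code></pre>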
0
2016-09-12T20:03:06Z
[ "python", "regex", "python-2.7" ]
Python: Turn List of Tuples into Dictionary of Nested Dictionaries
39,457,792
<p>So I have a bit of an issue on my hands. I have a list of tuples (made up of a level number and a message) which will eventually become an HTML list. My issue is that before this happens, I would like to turn the tuple values into a dictionary of nested dictionaries. So here is the example:</p>

<pre><code># I have this list of tuples in format of (level_number, message)
tuple_list = [(1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')]

# And I want to turn it into this
a_dict = {
    'line 1': {
        'line 2': {
            'line 3': {}
        }
    },
    'line 4': {}
}
</code></pre>

<p>Any help would be appreciated, as long as it is valid Python 3. Thanks!</p>
2
2016-09-12T19:52:42Z
39,457,912
<p>Assuming you only have three levels, something like following would do:</p> <pre><code>tuple_list = [(1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')] a_dict = {} for prio, key in tuple_list: if prio == 1: a_dict[key] = {} first_level = key if prio == 2: a_dict[first_level][key] = {} second_level = key if prio == 3: a_dict[first_level][second_level][key] = {} # So on ... print a_dict </code></pre> <p>This also assumes that hierarchies are listed in order, meaning level 1, level 1', level 2, level 3 would be a single dict for level 1, and a hierarchal order like level 1' -> level 2 -> level 3. So the following</p> <pre><code>tuple_list = [(1, 'line 5'), (1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')] </code></pre> <p>Would yield the following : </p> <pre><code>{'line 1': {'line 2': {'line 3': {}}}, 'line 4': {}, 'line 5': {}} </code></pre> <p>Or a little more complicated:</p> <pre><code>tuple_list = [(1, 'line 1'), (2, 'line 2'), (2, 'line 6'), (3, 'line 3'), (3, 'line 7'), (1, 'line 4'), (1, 'line 5')] </code></pre> <p>would yield</p> <pre><code>{'line 1': {'line 2': {}, 'line 6': {'line 3': {}, 'line 7': {}}}, 'line 4': {}, 'line 5': {}} </code></pre> <p>Since your levels are not limited to a small number, <strong>it's not a good approach to just do it through plain IFs</strong>. It's better to construct a tree, then traverse the tree and create the representation you want. Doing so is also easy, you have several root nodes (where parent=None), each one has a list of children and this repeats for the children, so you have a tree. You now start from the root and make the ordering you want ! </p> <p>It's easily implementable and I guess you get the idea !</p>
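<p>To make the depth-agnostic idea concrete, here is one possible sketch (my own, and a slightly different take than building an explicit tree: it keeps a stack of the nested dicts built so far instead of hard-coded IFs):</p>

<pre><code>def build_nested(tuple_list):
    root = {}
    stack = [(0, root)]                     # (level, dict that owns that level)
    for level, name in tuple_list:
        while stack and stack[-1][0] &gt;= level:
            stack.pop()                     # climb back up to the right parent
        parent = stack[-1][1] if stack else root
        parent[name] = {}
        stack.append((level, parent[name]))
    return root

print(build_nested([(1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')]))
# {'line 1': {'line 2': {'line 3': {}}}, 'line 4': {}}
</code></pre>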
0
2016-09-12T20:00:09Z
[ "python", "list", "python-3.x", "dictionary", "tuples" ]
Python: Turn List of Tuples into Dictionary of Nested Dictionaries
39,457,792
<p>So I have a bit of an issue on my hands. I have a list of tuples (made up of a level number and a message) which will eventually become an HTML list. My issue is that before this happens, I would like to turn the tuple values into a dictionary of nested dictionaries. So here is the example:</p>

<pre><code># I have this list of tuples in format of (level_number, message)
tuple_list = [(1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')]

# And I want to turn it into this
a_dict = {
    'line 1': {
        'line 2': {
            'line 3': {}
        }
    },
    'line 4': {}
}
</code></pre>

<p>Any help would be appreciated, as long as it is valid Python 3. Thanks!</p>
2
2016-09-12T19:52:42Z
39,458,557
<p>As I pointed out in a comment, you should STRONGLY consider changing your incoming data structure if you have any control at all over it. A sequential list of tuples is definitely not ideal for what you're doing here. However it is possible if you treat it like a tree. Let's build a (sane) data structure to parse this with</p> <pre><code>class Node(object): def __init__(self, name, level, parent=None): self.children = [] self.name = name self.level = level self.parent = parent def make_child(self, othername, otherlevel): other = self.__class__(othername, otherlevel, self) self.children.append(other) return other </code></pre> <p>Now you should be able to iterate over your data structure in some sensible way</p> <pre><code>def make_nodes(tuple_list): """Builds an ordered grouping of Nodes out of a list of tuples of the form (level, name). Returns the last Node. """ curnode = Node("root", level=-float('inf')) # base Node who should always be first. for level, name in tuple_list: while curnode.level &gt;= level: curnode = curnode.parent # if we've done anything but gone up levels, go # back up the tree to the first parent who can own this curnode = curnode.make_child(name, level) # then make the node and move the cursor to it return curnode </code></pre> <p>Once your structure is complete, you can iterate on it. Doesn't much matter here if you go depth-first or breadth-first, so let's do a DFS just for ease of implementation.</p> <pre><code>def parse_tree(any_node): """Given any node in a singly-rooted tree, returns a dictionary of the form requested in the question """ def _parse_subtree(basenode): """Actually does the parsing, starting with the node given as its root. """ if not basenode.children: # base case, if there are no children then return an empty dict return {} subresult = {} for child in basenode.children: subresult.update({child.name: _parse_subtree(child)}) return subresult cursor = any_node while cursor.parent: cursor = cursor.parent # finds the root node result = {} for child in cursor.children: result[child.name] = _parse_subtree(child) return result </code></pre> <p>Then feed in your tuple list <em>et voila</em></p> <pre><code>tuple_list = [(1, 'line 1'), (2, 'line 2'), (3, 'line 3'), (1, 'line 4')] last_node = make_nodes(tuple_list) result = parse_tree(last_node) # {'line 1': {'line 2': {'line 3': {}}}, 'line 4': {}} </code></pre>
2
2016-09-12T20:47:27Z
[ "python", "list", "python-3.x", "dictionary", "tuples" ]
Alternative of threading.Timer?
39,457,850
<p>I have a producer-consumer pattern Queue; it consumes incoming events and schedules qualified events to be sent out after 5 seconds. I am using <code>threading.Timer()</code> (<a href="https://docs.python.org/2/library/sched.html" rel="nofollow">Python documentation</a>) to do it, and everything was working fine.</p>

<p>Recently, I was asked to change the scheduled time from 5 seconds to 30 minutes, and <code>threading.Timer()</code> crashes my script: previously the thread objects were created and released very quickly (they only lasted 5 seconds), but now each one has to stay alive for 30 minutes.</p>

<p>Here's the code:</p>

<pre><code>    if scheduled_time and out_event:
        threading.Timer(scheduled_time, self.send_out_event, (socket_connection, received_event, out_event,)).start() # schedule event send out
</code></pre>

<p>Can someone shed some light on this? How can I solve this problem, or is there any alternative to <code>threading.Timer()</code>?</p>
0
2016-09-12T19:55:54Z
39,478,434
<p>Thanks for @dano 's comment about the 3rd party modules! Based on my work requirement, I didn't install them on the server. </p> <p>Instead of using <code>threading.Timer()</code>, I choose to use a Redis based Delay Queue, I found some helpful source online: <a href="http://www.saltycrane.com/blog/2011/11/unique-python-redis-based-queue-delay/" rel="nofollow">A unique Python redis-based queue with delay</a>. It solved my issue.</p> <p>Briefly, the author creates a sorted set in redis and give it a name, <code>add()</code> would appends new data into the sorted set. Every time it pops at most <strong>one</strong> element from the sorted set based upon the epoc-time score, the element which holds qualified minimum score would be pop out(<strong>Not remove from redis</strong>)</p> <pre><code>def add(self, received_event, delay_queue_name="delay_queue", delay=config.SECOND_RETRY_DELAY): try: score = int(time.time()) + delay self.__client.zadd(delay_queue_name, score, received_event) self.__logger.debug("added {0} to delay queue, delay time:{1}".format(received_event, delay)) except Exception as e: self.__logger.error("error: {0}".format(e)) def pop(self, delay_queue_name="delay_queue"): min_score, max_score, element = 0, int(time.time()), None try: result = self.__client.zrangebyscore(delay_queue_name, min_score, max_score, start=0, num=1, withscores=False) except Exception as e: self.__logger.error("failed query from redis:{0}".format(e)) return None if result and len(result) == 1: element = result[0] self.__logger.debug("poped {0} from delay queue".format(element)) else: self.__logger.debug("no qualified element") return element def remove(self, element, delay_queue_name="delay_queue"): self.__client.zrem(delay_queue_name, element) </code></pre> <p><code>self.__client</code> is a Redis client instance, <code>redis.StrictRedis(host=rhost,port=rport, db=rindex)</code>. </p> <p><strong>The difference between the online source with mine is that I switched <code>zadd()</code> parameters. The order of <code>score</code> and <code>data</code> are switched. Below is docs of</strong> <code>zadd()</code> </p> <p>Here's the python redis doc:</p> <pre><code># SORTED SET COMMANDS def zadd(self, name, *args, **kwargs): """ Set any number of score, element-name pairs to the key ``name``. Pairs can be specified in two ways: As *args, in the form of: score1, name1, score2, name2, ... or as **kwargs, in the form of: name1=score1, name2=score2, ... The following example would add four values to the 'my-key' key: redis.zadd('my-key', 1.1, 'name1', 2.2, 'name2', name3=3.3, name4=4.4) """ pieces = [] if args: if len(args) % 2 != 0: raise RedisError("ZADD requires an equal number of " "values and scores") pieces.extend(args) for pair in iteritems(kwargs): pieces.append(pair[1]) pieces.append(pair[0]) return self.execute_command('ZADD', name, *pieces) </code></pre>
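<p>For completeness, a rough sketch of how these methods could be driven (my own illustration; <code>DelayQueue</code> is a hypothetical name for the class holding the methods above, and <code>send_out_event</code> stands in for the existing send logic):</p>

<pre><code>import time

queue = DelayQueue()                      # hypothetical wrapper around redis.StrictRedis

# producer side: schedule an event for 30 minutes (1800 s) from now
queue.add(received_event, delay=1800)

# consumer side: poll for events whose score has come due
while True:
    event = queue.pop()
    if event is not None:
        send_out_event(event)             # existing send-out logic
        queue.remove(event)               # delete only after it was handled
    time.sleep(1)
</code></pre>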
0
2016-09-13T20:24:50Z
[ "python", "python-2.7", "timer", "python-multithreading" ]
Returning the words after the first hyphen is found
39,457,881
<p>Suppose I have a list of items - <code>['test_item_A-engine-blade', 'test_item_A-engine-part-initial', 'test_prop_prep-default-set']</code></p>

<p>and I am trying to grab the words after the first hyphen is found, such that the result should be as follows:</p>

<ul>
<li>test_item_A-engine-blade =&gt; engine_blade</li>
<li>test_item_A-engine-part-initial =&gt; engine_part_initial</li>
<li>test_prop_prep-default-set =&gt; default_set</li>
</ul>

<p>I tried something like <code>re.sub("[^A-Z\d]", "", &lt;my string&gt;.split('-', 1))</code> but it seems that it only gives me the words before the first hyphen...</p>
0
2016-09-12T19:58:21Z
39,457,934
<p>You can just use <a href="https://docs.python.org/3.5/library/stdtypes.html#str.split"><code>split</code></a> with a maximum number of one.</p> <pre><code>something.split('-', maxsplit=1) </code></pre>
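<p>If the remaining hyphens should also become underscores (as in the <code>engine_blade</code> style output asked for), one way to finish the job could be:</p>

<pre><code>items = ['test_item_A-engine-blade', 'test_item_A-engine-part-initial', 'test_prop_prep-default-set']
result = [s.split('-', 1)[1].replace('-', '_') for s in items]
# ['engine_blade', 'engine_part_initial', 'default_set']
</code></pre>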
5
2016-09-12T20:01:48Z
[ "python" ]
How can I import a different version of a python module?
39,457,963
<p>I need to run my python script under sklearn v0.17 and on the server they have sklearn v0.15 installed.</p> <p>So I downloaded the <code>scikit-learn-0.17</code> package into <code>/home/mydir/lib/python2.7/site-packages/</code> and installed the package.</p> <p>However when I goto other directories and tried to run python and <code>import sklearn</code> the version is still 0.15.</p> <p>I created <code>~/.startup.py</code> and put the following code</p> <pre><code>import sys sys.path.insert(0,"/home/mydir/lib/Python2.7/site-packages/") </code></pre> <p>then I pointed to <code>~/.startup.py</code> in <code>~/.bashrc</code> with </p> <pre><code>PYTHONSTARTUP=~/.startup.py </code></pre> <p>But it does not help.</p> <p>I am wondering how to fix this. Thank you!</p> <p>The following files/dirs are in <code>site-packages/</code></p> <pre><code>easy-install.pth scikit-learn-0.17 site.pyc pysam-0.9.1.4 scikit_learn-0.17-py2.7.egg-info site.pyo pysam-0.9.1.4-py2.7-linux-x86_64.egg site.py sklearn </code></pre>
0
2016-09-12T20:03:53Z
39,458,109
<p>Python virtual environments were made to fix this problem. Create a virtual environment by navigating to your project's directory and entering the <code>pyvenv ./Env</code> command. Activate the environment on a Linux system with <code>source ./Env/bin/activate</code>. Now you have a sandboxed Python environment; whatever package you install now will be scoped to this environment only. So if you <code>pip install scikit-learn==0.17</code>, you will only point to <em>THAT</em> package. All other packages that are not in this virtual environment are ignored unless you explicitly add them via methods like <code>pip</code>.</p>

<p>There are many other benefits to virtual environments; I highly recommend reading more about them <a href="https://docs.python.org/3/library/venv.html" rel="nofollow">here</a>.</p>
0
2016-09-12T20:13:58Z
[ "python", "python-2.7", "scikit-learn", "python-module" ]
How can I import a different version of a python module?
39,457,963
<p>I need to run my python script under sklearn v0.17 and on the server they have sklearn v0.15 installed.</p> <p>So I downloaded the <code>scikit-learn-0.17</code> package into <code>/home/mydir/lib/python2.7/site-packages/</code> and installed the package.</p> <p>However when I goto other directories and tried to run python and <code>import sklearn</code> the version is still 0.15.</p> <p>I created <code>~/.startup.py</code> and put the following code</p> <pre><code>import sys sys.path.insert(0,"/home/mydir/lib/Python2.7/site-packages/") </code></pre> <p>then I pointed to <code>~/.startup.py</code> in <code>~/.bashrc</code> with </p> <pre><code>PYTHONSTARTUP=~/.startup.py </code></pre> <p>But it does not help.</p> <p>I am wondering how to fix this. Thank you!</p> <p>The following files/dirs are in <code>site-packages/</code></p> <pre><code>easy-install.pth scikit-learn-0.17 site.pyc pysam-0.9.1.4 scikit_learn-0.17-py2.7.egg-info site.pyo pysam-0.9.1.4-py2.7-linux-x86_64.egg site.py sklearn </code></pre>
0
2016-09-12T20:03:53Z
39,458,120
<p>General advice here would be to use <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a>, it allows you to have isolated environments for all your python projects.</p> <p>So each of your project can use different scikit version. </p> <p>Tutorial: <a href="https://www.sitepoint.com/virtual-environments-python-made-easy/" rel="nofollow">https://www.sitepoint.com/virtual-environments-python-made-easy/</a></p>
0
2016-09-12T20:14:22Z
[ "python", "python-2.7", "scikit-learn", "python-module" ]
How to trigger a Windows Task Scheduler restart after a Python script fails
39,458,024
<p>I have a Python script that throws <code>sys.exit(1)</code> when it finds an error. This results in the Task Scheduler showing a "(0x1)" comment under Last Run Result. A successful run returns "The operation completed successfully. (0x0)". Unfortunately, this does not trigger the task to be run again, even though under Settings I have the "If the task fails, restart every:" checkbox checked. Any thoughts on how to improve this?</p>

<p>Another post has the following answer, but I cannot find where to put the custom filter among the four steps: Create a Basic Task, When an Event is Logged, Action, Finish.</p>

<p>You can:</p>

<p>1. activate history for the Scheduler (if not already)
2. on a History "Action completed" entry, right click "Attach Task To This Event..."
3. set a custom filter like this:</p>

<p>*[System[(EventID=201)]] and *[EventData[Data[@Name='ResultCode']='1']]</p>
1
2016-09-12T20:07:49Z
39,460,520
<p>Given that the task scheduler's event history is enabled, you can add a trigger for each exit code for which the task should be restarted. Trigger "on an event" with a custom XML query. The trigger should probably be delayed by at least 30 seconds to throttle attempts to restart the task. </p> <p>Here's an example query. It looks for event ID 201 (action completed) with a task named "\PyTest" (use the full path, starting at the root "\" folder) and an exit code of 0xC000013A (i.e. <code>STATUS_CTRL_C_EXIT</code>, i.e. a console process killed by Ctrl+Break).</p> <pre class="lang-xml prettyprint-override"><code>&lt;QueryList&gt; &lt;Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational"&gt; &lt;Select Path="Microsoft-Windows-TaskScheduler/Operational"&gt; *[System[EventID=201]] and *[EventData[Data[@Name='TaskName']='\PyTest']] and *[EventData[Data[@Name='ResultCode']='0xC000013A']] &lt;/Select&gt; &lt;/Query&gt; &lt;/QueryList&gt; </code></pre>
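<p>On the Python side (a generic sketch, not tied to the asker's script), the value the script passes to <code>sys.exit()</code> is what surfaces as <code>ResultCode</code> in that query, so keep the two in sync:</p>

<pre><code>import sys

try:
    main()            # whatever the script actually does
except Exception:
    sys.exit(1)       # appears as ResultCode 0x1 in the task history
sys.exit(0)           # success: ResultCode 0x0
</code></pre>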
1
2016-09-13T00:21:44Z
[ "python", "windows", "error-handling", "scheduled-tasks" ]
Comparing two different solutions to a quiz
39,458,061
<p>I've been going through a course on CS on Udemy, and got a quiz to solve.</p> <p>Here's what is says:</p> <blockquote> <p>Define a procedure, find_last, that takes as input two strings, a search string and a target string, and returns the last position in the search string where the target string appears, or -1 if there are no occurrences.</p> <p><strong>Example</strong>: <code>find_last('aaaa', 'a')</code> returns 3</p> </blockquote> <p>The first solution is mine, the last is theirs. My question is, in what ways is theirs better than mine, and how could I think next time when solving this kind of a problem, so I would come to a solution faster and better. (I've spent at least 45 min thinking of a solution). </p> <pre><code>def find_last(s, t): some = s.find(t) i = some while s.find(t, i) != -1: some=s.find(t, i) i=i+1 return some def find_last(s,t): last_pos = -1 while True: pos = s.find(t, last_pos+1) if pos==-1: return last_pos last_pos = pos </code></pre> <p>Here are the examples:</p> <pre><code>print find_last('aaaa', 'a') # returns 3 print find_last('aaaaa', 'aa') # returns 3 print find_last('aaaa', 'b') # returns -1 print find_last("111111111", "1") # returns 8 print find_last("222222222", "") # returns 9 print find_last("", "3") # returns -1 print find_last("", "") # returns 0 </code></pre>
1
2016-09-12T20:10:29Z
39,458,117
<p>No need to code anything; just use Python's built-in string "right find":</p>

<pre><code>print('aaaa'.rfind('a'))
</code></pre>

<p>result: 3</p>

<pre><code>print('bbbbb'.rfind('a'))
</code></pre>

<p>result: -1</p>

<p>It also works for "more-than-1-char" search strings, of course:</p>

<pre><code>print('bbbxb'.rfind('bx'))
</code></pre>

<p>result: 2</p>

<p>Of course, on most widespread platforms like Linux or Windows, the processing is native, which means you can't beat the speed.</p>

<p>Edit: someone kindly suggested coding your <code>find_last</code> in one line. Very nice:</p>

<pre><code>find_last = lambda x,y: x.rfind(y)
</code></pre>
4
2016-09-12T20:14:20Z
[ "python", "algorithm" ]
Comparing two different solutions to a quiz
39,458,061
<p>I've been going through a course on CS on Udemy, and got a quiz to solve.</p> <p>Here's what is says:</p> <blockquote> <p>Define a procedure, find_last, that takes as input two strings, a search string and a target string, and returns the last position in the search string where the target string appears, or -1 if there are no occurrences.</p> <p><strong>Example</strong>: <code>find_last('aaaa', 'a')</code> returns 3</p> </blockquote> <p>The first solution is mine, the last is theirs. My question is, in what ways is theirs better than mine, and how could I think next time when solving this kind of a problem, so I would come to a solution faster and better. (I've spent at least 45 min thinking of a solution). </p> <pre><code>def find_last(s, t): some = s.find(t) i = some while s.find(t, i) != -1: some=s.find(t, i) i=i+1 return some def find_last(s,t): last_pos = -1 while True: pos = s.find(t, last_pos+1) if pos==-1: return last_pos last_pos = pos </code></pre> <p>Here are the examples:</p> <pre><code>print find_last('aaaa', 'a') # returns 3 print find_last('aaaaa', 'aa') # returns 3 print find_last('aaaa', 'b') # returns -1 print find_last("111111111", "1") # returns 8 print find_last("222222222", "") # returns 9 print find_last("", "3") # returns -1 print find_last("", "") # returns 0 </code></pre>
1
2016-09-12T20:10:29Z
39,459,740
<p>I think <strong>the main idea of the course is to learn algorithms</strong> and this exercise is good to start with (<em>regardless if the solution is not the most efficient way to resolve such problems in a current programming language</em>). So, their solution is better because they 'jump' during iteration, and you steps by 1, after you found index of first occurrence of <code>t</code>. I will try to explain with simple example:</p> <p>You have got a string <code>s = 'abbbabbba'</code> and you need to find <code>'a'</code>. When you use your function, let it be <code>fun_last_1(s, 'a')</code>:</p> <pre><code>def find_last_1(s, t): some = s.find(t) # some = 0 i = some # i = 0 while s.find(t, i) != -1: # ok some=s.find(t, i) # you again start from 0 (some = 0) ??? i=i+1 # and from here you steps by 1 # put print(i) in this row return some # in total you will have 9 iterations </code></pre> <p>you will need 9 iterations to achieve final result in this case. While they need only 4, lets add a print statement inside their loop:</p> <pre><code>def find_last_2(s,t): last_pos = -1 while True: pos = s.find(t, last_pos+1) print(pos) # add print here if pos==-1: return last_pos last_pos = pos # here they jumps &gt;&gt;&gt; find_last_2(s,'a') 0 4 8 -1 # and of course it will return 8 </code></pre> <p>You can also add this debuggy <code>print()</code> inside the loop in your function and compare:</p> <pre><code>&gt;&gt;&gt; find_last_1(s, 'a') 1 2 3 4 5 6 7 8 9 # and of course it will return 8 </code></pre> <p>Hope this will help you to understand the difference.</p>
1
2016-09-12T22:33:13Z
[ "python", "algorithm" ]
Insert rows and add missing data
39,458,148
<p>I wonder if somebody could give a few pointers on how to proceed with the following. Being a newbie to Pandas, I feel at the moment my overall knowledge and skill level is not sufficient at the moment to be able to process the request I outline below. </p> <p>I have a pandas dataframe which has a list of some 2000+ part numbers. For each part there are years of sale for the parts, a month number, a quantity sold and a sales value. For each year, there are likely to be occasional missing months. In the example data shown below for year 2007,month 11 is missing as there were no sales during that month. Similarly for 2008, months 11 &amp; 12 are missing. What I would like to do is to insert the missing months for each year and insert a row containing the appropriate year, month and a zero value for the Qty and Sales within each part_id group.<br> In total the data is approx. 60200, rows with approx. 2000 part id's. I do not mind spending time on developing a solution but could do with a few pointers to aid my education.</p> <pre><code>index Part_ID Year Month Qty Sales 60182 ZZSSL 2007 5 11.0 724.85 60183 ZZSSL 2007 6 7.0 537.94 60184 ZZSSL 2007 7 17.0 1165.02 60185 ZZSSL 2007 8 3.0 159.56 60186 ZZSSL 2007 9 67.0 4331.28 60187 ZZSSL 2007 10 72.0 4582.98 60188 ZZSSL 2007 12 42.0 2651.42 60189 ZZSSL 2008 1 22.0 1422.32 60190 ZZSSL 2008 2 16.0 1178.98 60191 ZZSSL 2008 3 20.0 1276.60 60192 ZZSSL 2008 4 28.0 2120.84 60193 ZZSSL 2008 5 2.0 83.03 60194 ZZSSL 2008 6 16.0 1250.24 60195 ZZSSL 2008 9 17.0 1323.34 60196 ZZSSL 2008 10 2.0 197.98 60197 ZZSSL 2009 1 21.0 1719.30 60198 ZZSSL 2009 2 1.0 78.15 60199 ZZSSL 2009 3 3.0 281.34 60200 ZZSSL 2009 4 25.0 2214.25 60201 ZZSSL 2009 5 10.0 833.60 60202 ZZSSL 2009 6 1.0 83.36 60203 ZZSSL 2009 7 1.0 83.36 </code></pre>
5
2016-09-12T20:16:23Z
39,458,416
<p>I think you need first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> columns by <code>MultiIndex</code> created from <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_product.html" rel="nofollow"><code>from_product</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a>:</p> <pre><code>mux = pd.MultiIndex.from_product([['Qty','Sales'],np.arange(1,13)]) print (df.set_index(['Part_ID','Year', 'Month']) .unstack(fill_value=0) .reindex(columns=mux, fill_value=0) .stack() .rename_axis(['Part_ID','Year','Month']) .reset_index()) </code></pre> <pre><code> Part_ID Year Month Qty Sales 0 ZZSSL 2007 1 0.0 0.00 1 ZZSSL 2007 2 0.0 0.00 2 ZZSSL 2007 3 0.0 0.00 3 ZZSSL 2007 4 0.0 0.00 4 ZZSSL 2007 5 11.0 724.85 5 ZZSSL 2007 6 7.0 537.94 6 ZZSSL 2007 7 17.0 1165.02 7 ZZSSL 2007 8 3.0 159.56 8 ZZSSL 2007 9 67.0 4331.28 9 ZZSSL 2007 10 72.0 4582.98 10 ZZSSL 2007 11 0.0 0.00 11 ZZSSL 2007 12 42.0 2651.42 12 ZZSSL 2008 1 22.0 1422.32 13 ZZSSL 2008 2 16.0 1178.98 14 ZZSSL 2008 3 20.0 1276.60 15 ZZSSL 2008 4 28.0 2120.84 16 ZZSSL 2008 5 2.0 83.03 17 ZZSSL 2008 6 16.0 1250.24 18 ZZSSL 2008 7 0.0 0.00 19 ZZSSL 2008 8 0.0 0.00 20 ZZSSL 2008 9 17.0 1323.34 21 ZZSSL 2008 10 2.0 197.98 22 ZZSSL 2008 11 0.0 0.00 23 ZZSSL 2008 12 0.0 0.00 24 ZZSSL 2009 1 21.0 1719.30 25 ZZSSL 2009 2 1.0 78.15 26 ZZSSL 2009 3 3.0 281.34 27 ZZSSL 2009 4 25.0 2214.25 28 ZZSSL 2009 5 10.0 833.60 29 ZZSSL 2009 6 1.0 83.36 30 ZZSSL 2009 7 1.0 83.36 31 ZZSSL 2009 8 0.0 0.00 32 ZZSSL 2009 9 0.0 0.00 33 ZZSSL 2009 10 0.0 0.00 34 ZZSSL 2009 11 0.0 0.00 35 ZZSSL 2009 12 0.0 0.00 </code></pre> <p>If need only missing values between start and end <code>Month</code> for each <code>year</code>:</p> <pre><code>df['Month'] = pd.to_datetime(df.Month.astype(str) + '-01-' + df.Year.astype(str)) df = df.set_index('Month') .groupby(['Part_ID','Year']) .resample('MS') .asfreq() .fillna(0) .drop(['Part_ID','Year'], axis=1) .reset_index() df['Month'] = df['Month'].dt.month print (df) Part_ID Year Month Qty Sales 0 ZZSSL 2007 5 11.0 724.85 1 ZZSSL 2007 6 7.0 537.94 2 ZZSSL 2007 7 17.0 1165.02 3 ZZSSL 2007 8 3.0 159.56 4 ZZSSL 2007 9 67.0 4331.28 5 ZZSSL 2007 10 72.0 4582.98 6 ZZSSL 2007 11 0.0 0.00 7 ZZSSL 2007 12 42.0 2651.42 8 ZZSSL 2008 1 22.0 1422.32 9 ZZSSL 2008 2 16.0 1178.98 10 ZZSSL 2008 3 20.0 1276.60 11 ZZSSL 2008 4 28.0 2120.84 12 ZZSSL 2008 5 2.0 83.03 13 ZZSSL 2008 6 16.0 1250.24 14 ZZSSL 2008 7 0.0 0.00 15 ZZSSL 2008 8 0.0 0.00 16 ZZSSL 2008 9 17.0 1323.34 17 ZZSSL 2008 10 2.0 197.98 18 ZZSSL 2009 1 21.0 1719.30 19 ZZSSL 2009 2 1.0 78.15 20 ZZSSL 2009 3 3.0 281.34 21 ZZSSL 2009 4 25.0 2214.25 22 ZZSSL 2009 5 10.0 833.60 23 ZZSSL 2009 6 1.0 83.36 24 ZZSSL 2009 7 1.0 83.36 </code></pre>
2
2016-09-12T20:35:42Z
[ "python", "pandas", "insert", null, "reindex" ]
Insert rows and add missing data
39,458,148
<p>I wonder if somebody could give a few pointers on how to proceed with the following. Being a newbie to Pandas, I feel at the moment my overall knowledge and skill level is not sufficient at the moment to be able to process the request I outline below. </p> <p>I have a pandas dataframe which has a list of some 2000+ part numbers. For each part there are years of sale for the parts, a month number, a quantity sold and a sales value. For each year, there are likely to be occasional missing months. In the example data shown below for year 2007,month 11 is missing as there were no sales during that month. Similarly for 2008, months 11 &amp; 12 are missing. What I would like to do is to insert the missing months for each year and insert a row containing the appropriate year, month and a zero value for the Qty and Sales within each part_id group.<br> In total the data is approx. 60200, rows with approx. 2000 part id's. I do not mind spending time on developing a solution but could do with a few pointers to aid my education.</p> <pre><code>index Part_ID Year Month Qty Sales 60182 ZZSSL 2007 5 11.0 724.85 60183 ZZSSL 2007 6 7.0 537.94 60184 ZZSSL 2007 7 17.0 1165.02 60185 ZZSSL 2007 8 3.0 159.56 60186 ZZSSL 2007 9 67.0 4331.28 60187 ZZSSL 2007 10 72.0 4582.98 60188 ZZSSL 2007 12 42.0 2651.42 60189 ZZSSL 2008 1 22.0 1422.32 60190 ZZSSL 2008 2 16.0 1178.98 60191 ZZSSL 2008 3 20.0 1276.60 60192 ZZSSL 2008 4 28.0 2120.84 60193 ZZSSL 2008 5 2.0 83.03 60194 ZZSSL 2008 6 16.0 1250.24 60195 ZZSSL 2008 9 17.0 1323.34 60196 ZZSSL 2008 10 2.0 197.98 60197 ZZSSL 2009 1 21.0 1719.30 60198 ZZSSL 2009 2 1.0 78.15 60199 ZZSSL 2009 3 3.0 281.34 60200 ZZSSL 2009 4 25.0 2214.25 60201 ZZSSL 2009 5 10.0 833.60 60202 ZZSSL 2009 6 1.0 83.36 60203 ZZSSL 2009 7 1.0 83.36 </code></pre>
5
2016-09-12T20:16:23Z
39,458,835
<p>try this:</p> <pre><code>In [220]: r = (df.reset_index() .....: .set_index(pd.to_datetime(df.Year.map(str) + '-' + df.Month.map(str).str.zfill(2) + '-01')) .....: .resample('MS') .....: ) In [221]: new = r.pad().drop(['Qty','Sales'],1).join(r.mean().replace(np.nan, 0)[['Qty','Sales']]) In [222]: new.Month = new.index.month In [223]: new.reset_index(drop=True) Out[223]: index Part_ID Year Month Qty Sales 0 60182 ZZSSL 2007 5 11.0 724.85 1 60183 ZZSSL 2007 6 7.0 537.94 2 60184 ZZSSL 2007 7 17.0 1165.02 3 60185 ZZSSL 2007 8 3.0 159.56 4 60186 ZZSSL 2007 9 67.0 4331.28 5 60187 ZZSSL 2007 10 72.0 4582.98 6 60187 ZZSSL 2007 11 0.0 0.00 7 60188 ZZSSL 2007 12 42.0 2651.42 8 60189 ZZSSL 2008 1 22.0 1422.32 9 60190 ZZSSL 2008 2 16.0 1178.98 10 60191 ZZSSL 2008 3 20.0 1276.60 11 60192 ZZSSL 2008 4 28.0 2120.84 12 60193 ZZSSL 2008 5 2.0 83.03 13 60194 ZZSSL 2008 6 16.0 1250.24 14 60194 ZZSSL 2008 7 0.0 0.00 15 60194 ZZSSL 2008 8 0.0 0.00 16 60195 ZZSSL 2008 9 17.0 1323.34 17 60196 ZZSSL 2008 10 2.0 197.98 18 60196 ZZSSL 2008 11 0.0 0.00 19 60196 ZZSSL 2008 12 0.0 0.00 20 60197 ZZSSL 2009 1 21.0 1719.30 21 60198 ZZSSL 2009 2 1.0 78.15 22 60199 ZZSSL 2009 3 3.0 281.34 23 60200 ZZSSL 2009 4 25.0 2214.25 24 60201 ZZSSL 2009 5 10.0 833.60 25 60202 ZZSSL 2009 6 1.0 83.36 26 60203 ZZSSL 2009 7 1.0 83.36 </code></pre>
1
2016-09-12T21:10:55Z
[ "python", "pandas", "insert", null, "reindex" ]
How to serialize related models in Django Rest API?
39,458,189
<p>I have tried all the solutions. Still cannot resolve it. Here are the codes.</p> <p><strong><em>models.py</em></strong> </p> <pre><code>class Car(models.Model): car_name = models.CharField(max_length=250) car_description = models.CharField(max_length=250) def __str__(self): return self.car_name + ' - ' + str(self.pk) class Owners(models.Model): car = models.ForeignKey(Car, on_delete=models.CASCADE, default=0) owner_name = models.CharField(max_length=250) owner_desc = models.CharField(max_length=250) def get_absolute_url(self): return reverse('appname:index') def __str__(self): return self.owner_name + ' - ' + self.owner_desc </code></pre> <p><strong><em>serializers.py</em></strong></p> <pre><code>class OwnersSerializer(serializers.ModelSerializer): class Meta: model = Owners fields = '__all__' class CarSerializer(serializers.ModelSerializer): owners = OwnersSerializer(many=True, read_only=True) class Meta: model = Car fields = '__all__' </code></pre> <p><strong><em>views.py</em></strong></p> <pre><code>class CarList(APIView): def get(self, request): cars = Car.objects.all() serializer = CarSerializer(cars, many=True) return Response(serializer.data) def post(self): pass </code></pre> <p>I can't get to view all the 'Owner' objects related to a certain object of the 'Car' class.</p>
0
2016-09-12T20:19:39Z
39,458,674
<p>You need to define a related name on the ForeignKey to create the reverse reference.</p> <pre><code>class Owners(models.Model): car = models.ForeignKey(Car, on_delete=models.CASCADE, default=0, related_name='owners') </code></pre>
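<p>A short sketch of how the serializer then lines up with it (my own illustration; listing the fields explicitly avoids any surprises with the declared <code>owners</code> field):</p>

<pre><code>class CarSerializer(serializers.ModelSerializer):
    # the field name matches related_name='owners' on Owners.car
    owners = OwnersSerializer(many=True, read_only=True)

    class Meta:
        model = Car
        fields = ('id', 'car_name', 'car_description', 'owners')
</code></pre>

<p>Alternatively, if the model is left untouched, the serializer field could point at the default reverse accessor instead, e.g. <code>OwnersSerializer(many=True, read_only=True, source='owners_set')</code>.</p>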
0
2016-09-12T20:56:58Z
[ "python", "django", "python-2.7", "rest", "django-rest-framework" ]
Using List/Tuple/etc. from typing vs directly referring type as list/tuple/etc
39,458,193
<p>What's the difference of using <code>List</code>, <code>Tuple</code>, etc. from <code>typing</code> module:</p> <pre><code>from typing import Tuple def f(points: Tuple): return map(do_stuff, points) </code></pre> <p>As opposed to referring to Python's types directly:</p> <pre><code>def f(points: tuple): return map(do_stuff, points) </code></pre> <p>And when should I use one over the other?</p>
2
2016-09-12T20:19:59Z
39,458,225
<p><code>typing.Tuple</code> and <code>typing.List</code> are <a href="https://docs.python.org/3/library/typing.html#generics" rel="nofollow"><em>Generic types</em></a>; this means you can specify what type their <em>contents</em> must be:</p> <pre><code>def f(points: Tuple[float, float]): return map(do_stuff, points) </code></pre> <p>This specifies that the tuple passed in must contain two <code>float</code> values. You can't do this with the built-in <code>tuple</code> type.</p> <p><a href="https://docs.python.org/3/library/typing.html#typing.Tuple" rel="nofollow"><code>typing.Tuple</code></a> is special here in that it lets you specify a specific number of elements expected and the type of each position. Use ellipsis if the length is not set and the type should be repeated: <code>Tuple[float, ...]</code> describes a variable-length <code>tuple</code> with <code>float</code>s.</p> <p>For <a href="https://docs.python.org/3/library/typing.html#typing.List" rel="nofollow"><code>typing.List</code></a> and other sequence types you generally only specify the type for all elements; <code>List[str]</code> is a list of strings, of any size. Note that functions should preferentially take <a href="https://docs.python.org/3/library/typing.html#typing.Sequence" rel="nofollow"><code>type.Sequence</code></a> as arguments and <code>typing.List</code> is typically only used for return types; generally speaking most functions would take any sequence and only iterate, but when you return a <code>list</code>, you really are returning a specific, mutable sequence type.</p> <p>You should always pick the <code>typing</code> generics even when you are not currently restricting the contents. It is easier to add that constraint later with a generic type as the resulting change will be smaller. </p>
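<p>A tiny illustration of the Sequence-in, List-out convention mentioned above (my own example):</p>

<pre><code>from typing import List, Sequence

def scale(points: Sequence[float], factor: float) -&gt; List[float]:
    # accepts any sequence (list, tuple, ...) but promises a concrete list back
    return [p * factor for p in points]
</code></pre>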
8
2016-09-12T20:22:17Z
[ "python", "python-3.5", "typing", "type-hinting" ]
How to allow a tkinter window to be opened multiple times
39,458,318
<p>I am making a makeshift sign in system with python. Currently if you enter the correct password it brings up a new admin window. If you enter the wrong one it brings up a new window that says wrong password. If you exit out of one of those windows and then try to enter a password again it breaks. <code>tkinter.TclError: can't invoke "wm" command: application has been destroyed</code> Is there a way to prevent this so if someone enters the wrong password they don't need to restart the app? </p> <pre><code>import tkinter from tkinter import * #define root window root = tkinter.Tk() root.minsize(width=800, height = 600) root.maxsize(width=800, height = 600) #define admin window admin = tkinter.Tk() admin.minsize(width=800, height = 600) admin.maxsize(width=800, height = 600) admin.withdraw() #define wrong window wrong = tkinter.Tk() wrong.minsize(width=200, height = 100) wrong.maxsize(width=200, height = 100) wrong.withdraw() Label(wrong, text="Sorry that password is incorrect!", font=("Arial", 24), anchor=W, wraplength=180, fg="red").pack() #Admin sign in Label areAdmin = Label(root, text="Administrator sign in", font=("Arial", 18)) areAdmin.pack() #password label and password passwordLabel = Label(root, text="Password: ", font=("Arial", 12)) passwordLabel.place(x=300, y=30) #password entry adminPasswordEntry = Entry(root) adminPasswordEntry.place(x=385, y=32.5) #function for button def getEnteredPassword(): enteredPassword = adminPasswordEntry.get() if enteredPassword == password: admin.deiconify() else: wrong.deiconify() #enter button for password passwordEnterButton = Button(root, text="Enter", width=20, command=getEnteredPassword) passwordEnterButton.place(x=335, y=60) mainloop() </code></pre>
2
2016-09-12T20:28:16Z
39,458,522
<p>I don't know <code>tkinter</code> much but I could fix your code, I hope it's a proper fix.</p> <ol> <li>Create <code>Toplevel</code> windows not <code>Tk</code>. those are dialog windows, as opposed to <code>Tk</code> window which must be unique. Same look &amp; feel, same methods</li> <li>Create windows when needed, and each time. Else, closing them using the close gadget destroys them.</li> </ol> <p>Fixed code, enter wrong or good password as many times you want without crash:</p> <pre><code>import tkinter from tkinter import * password="good" #define root window root = tkinter.Tk() root.minsize(width=800, height = 600) root.maxsize(width=800, height = 600) #Admin sign in Label areAdmin = Label(root, text="Administrator sign in", font=("Arial", 18)) areAdmin.pack() #password label and password passwordLabel = Label(root, text="Password: ", font=("Arial", 12)) passwordLabel.place(x=300, y=30) #password entry adminPasswordEntry = Entry(root) adminPasswordEntry.place(x=385, y=32.5) #function for button def getEnteredPassword(): enteredPassword = adminPasswordEntry.get() if enteredPassword == password: admin = tkinter.Toplevel() admin.minsize(width=800, height = 600) admin.maxsize(width=800, height = 600) #admin.withdraw() else: wrong = tkinter.Toplevel() wrong.minsize(width=200, height = 100) wrong.maxsize(width=200, height = 100) Label(wrong, text="Sorry that password is incorrect!", font=("Arial", 24), anchor=W, wraplength=180, fg="red").pack() #enter button for password passwordEnterButton = Button(root, text="Enter", width=20, command=getEnteredPassword) passwordEnterButton.place(x=335, y=60) mainloop() </code></pre>
3
2016-09-12T20:44:30Z
[ "python", "tkinter" ]
Is there a way to append data to a hdfs file using Pydoop?
39,458,325
<p>I am trying to write contents of an object to a file in hdfs using python. For this, I have found a hdfs API implemented in python named Pydoop. Reading the API, I can easily use <code>dump()</code> method of pydoop to write contents to a file in hdfs path but have not seen any method like <code>append()</code> that could append new content to the old file. I know it is possible as I have found command line syntax for hdfs that does this but was thinking of using pydoop for doing this. Any help will be appreciated. Thanks</p>
0
2016-09-12T20:28:40Z
39,459,023
<p>Haven't used Pydoop, but this reads just like the Python API for appending to a regular file. </p> <pre><code>from pydoop import hdfs with hdfs.open('/path/to/file', 'a') as f: f.write('bla') </code></pre>
0
2016-09-12T21:24:52Z
[ "python", "hadoop", "hdfs" ]
How to call Python function from Node.JS
39,458,333
<p>I'm working on making a <a href="https://github.com/nfarina/homebridge" rel="nofollow">Homebridge</a> plugin for a project. <code>Homebridge</code> is a Node.JS server which I have running on a Raspberry Pi which emulates an Apple HomeKit Bridge.</p> <p>Using <a href="http://stackoverflow.com/questions/23450534/how-to-call-python-function-from-nodejs">this</a> link, I was able to execute Python code from the following Node.JS code:</p> <pre><code>var Service, Characteristic; var spawn = require('child_process').spawn; var py = spawn('python', ['/home/pi/Desktop/RFbulb/nRF24L01PLUS.py']); var data = [10,10,10]; var dataString = ''; var RFstatus = true; module.exports = function(homebridge) { Service = homebridge.hap.Service; Characteristic = homebridge.hap.Characteristic; homebridge.registerAccessory("homebridge-RFbulb", "RFbulb", RFbulbAccessory); } function RFbulbAccessory(log, config) { this.log = log; this.config = config; this.name = config["name"]; this.address = config["address"]; this.service = new Service.Lightbulb(this.name); this.service .getCharacteristic(Characteristic.On) .on('get', this.getOn.bind(this)) .on('set', this.setOn.bind(this)); } RFbulbAccessory.prototype.setOn = function(on, callback) { // This is the function throwing the error var state = on ? "on": "off"; if (state == "on") { data = [1,parseInt(this.address, 10),100]; dataString = ''; py.stdout.on('data', function(data) { dataString += data.toString(); }); py.stdout.on('end', function() { console.log(dataString); }); py.stdin.write(JSON.stringify(data)); py.stdin.end(); RFstatus = true; } callback(null); } RFbulbAccessory.prototype.getServices = function() { return [this.service]; } </code></pre> <p>Interestingly enough, when I activate the <code>setOn</code> function the first time (for example to turn the device on) it works fine, but when I activate the <code>setOn</code> function a second time (to turn the device off) I get the following errors and the server exits:</p> <pre><code>events.js:141 throw er; // Unhandled 'error' event ^ Error: write after end at writeAfterEnd (_stream_writable.js:166:12) at Socket.Writable.write (_stream_writable.js:211:5) at Socket.write (net.js:642:40) at RFbulbAccessory.setOn (/usr/lib/node_modules/homebridge-RFbulb/index.js:47:12) at emitThree (events.js:97:13) at emit (events.js:175:7) at Characteristic.setValue (/usr/lib/node_modules/homebridge/node_modules/hap-nodejs/lib/Characteristic.js:155:10) at Bridge.&lt;anonymous&gt; (/usr/lib/node_modules/homebridge/node_modules/hap-nodejs/lib/Accessory.js:710:22) at Array.forEach (native) at Bridge.Accessory._handleSetCharacteristics (/usr/lib/node_modules/homebridge/node_modules/hap-nodejs/lib/Accessory.js:655:8) </code></pre> <p>What could be causing this error? Especially since the function appears to work fine for a single use.</p>
0
2016-09-12T20:29:16Z
39,458,584
<p>You're getting that error because you're closing the input stream:</p> <pre><code>py.stdin.end(); </code></pre> <p>After a stream has been closed, you can no longer write to it like you are here: </p> <pre><code>py.stdin.write(JSON.stringify(data)); </code></pre> <p>If the Python program you're running accepts multiple commands over STDIN then simply remove the <code>py.stdin.end()</code> line.</p> <p>However, it's likely that your Python program runs once then completes. If that's the case, you will need to respawn the process every time you want the program to run.</p> <pre><code>if (state === "on") { py = spawn('python', ['/home/pi/Desktop/RFbulb/nRF24L01PLUS.py']); ... } </code></pre>
1
2016-09-12T20:49:38Z
[ "javascript", "python", "node.js" ]
Is there a way to add close buttons to tabs in tkinter.ttk.Notebook?
39,458,337
<p>I want to add close buttons to each tab in <code>tkinter.ttk.Notebook</code>. I already tried adding image and react to click event but unfortunately <code>BitmapImage</code> does not have <code>bind()</code> method.</p> <p>How can I fix this code?</p> <pre><code>#!/usr/binenv python3 from tkinter import * from tkinter.ttk import * class Application(Tk): def __init__(self): super().__init__() notebook = Notebook(self) notebook.pack(fill=BOTH, expand=True) self.img = BitmapImage(master=self, file='./image.xbm') self.img.bind('&lt;Button-1&gt;', self._on_click) notebook.add(Label(notebook, text='tab content'), text='tab caption', image=self.img) def _on_click(self, event): print('it works') app = Application() app.mainloop() </code></pre> <p>image.xbm</p> <pre><code>#define bullet_width 11 #define bullet_height 9 static char bullet_bits = { 0x00, 0x00, 0x00, 0x00, 0x78, 0x00, 0xf8, 0x00, 0xf8, 0x00, 0xf8, 0x00, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00 } </code></pre>
0
2016-09-12T20:29:27Z
39,459,376
<p>One advantage of the themed (ttk) widgets is that you can create new widgets out of individual widget "elements". While not exactly simple (nor well documented), you can create a new "close tab" element add add that to the "tab" element. </p> <p>I will present one possible solution. I'll admit it's not particularly easy to understand. Perhaps one of the best sources for how to create custom widget styles can be found at tkdocs.com, starting with the <a href="http://www.tkdocs.com/tutorial/styles.html" rel="nofollow">Styles and Themes</a> section. </p> <pre><code>try: import Tkinter as tk import ttk except ImportError: # Python 3 import tkinter as tk from tkinter import ttk class CustomNotebook(ttk.Notebook): """A ttk Notebook with close buttons on each tab""" __initialized = False def __init__(self, *args, **kwargs): if not self.__initialized: self.__initialize_custom_style() self.__inititialized = True kwargs["style"] = "CustomNotebook" ttk.Notebook.__init__(self, *args, **kwargs) self._active = None self.bind("&lt;ButtonPress-1&gt;", self.on_close_press, True) self.bind("&lt;ButtonRelease-1&gt;", self.on_close_release) def on_close_press(self, event): """Called when the button is pressed over the close button""" element = self.identify(event.x, event.y) if "close" in element: index = self.index("@%d,%d" % (event.x, event.y)) self.state(['pressed']) self._active = index def on_close_release(self, event): """Called when the button is released over the close button""" if not self.instate(['pressed']): return element = self.identify(event.x, event.y) index = self.index("@%d,%d" % (event.x, event.y)) if "close" in element and self._active == index: self.forget(index) self.event_generate("&lt;&lt;NotebookTabClosed&gt;&gt;") self.state(["!pressed"]) self._active = None def __initialize_custom_style(self): style = ttk.Style() self.images = ( tk.PhotoImage("img_close", data=''' R0lGODlhCAAIAMIBAAAAADs7O4+Pj9nZ2Ts7Ozs7Ozs7Ozs7OyH+EUNyZWF0ZWQg d2l0aCBHSU1QACH5BAEKAAQALAAAAAAIAAgAAAMVGDBEA0qNJyGw7AmxmuaZhWEU 5kEJADs= '''), tk.PhotoImage("img_closeactive", data=''' R0lGODlhCAAIAMIEAAAAAP/SAP/bNNnZ2cbGxsbGxsbGxsbGxiH5BAEKAAQALAAA AAAIAAgAAAMVGDBEA0qNJyGw7AmxmuaZhWEU5kEJADs= '''), tk.PhotoImage("img_closepressed", data=''' R0lGODlhCAAIAMIEAAAAAOUqKv9mZtnZ2Ts7Ozs7Ozs7Ozs7OyH+EUNyZWF0ZWQg d2l0aCBHSU1QACH5BAEKAAQALAAAAAAIAAgAAAMVGDBEA0qNJyGw7AmxmuaZhWEU 5kEJADs= ''') ) style.element_create("close", "image", "img_close", ("active", "pressed", "!disabled", "img_closepressed"), ("active", "!disabled", "img_closeactive"), border=8, sticky='') style.layout("CustomNotebook", [("CustomNotebook.client", {"sticky": "nswe"})]) style.layout("CustomNotebook.Tab", [ ("CustomNotebook.tab", { "sticky": "nswe", "children": [ ("CustomNotebook.padding", { "side": "top", "sticky": "nswe", "children": [ ("CustomNotebook.focus", { "side": "top", "sticky": "nswe", "children": [ ("CustomNotebook.label", {"side": "left", "sticky": ''}), ("CustomNotebook.close", {"side": "left", "sticky": ''}), ] }) ] }) ] }) ]) if __name__ == "__main__": root = tk.Tk() notebook = CustomNotebook(width=200, height=200) notebook.pack(side="top", fill="both", expand=True) for color in ("red", "orange", "green", "blue", "violet"): frame = tk.Frame(notebook, background=color) notebook.add(frame, text=color) root.mainloop() </code></pre> <p>Here's what it looks like on a linux system:</p> <p><a href="http://i.stack.imgur.com/1T72M.png" rel="nofollow"><img src="http://i.stack.imgur.com/1T72M.png" alt="enter image description here"></a></p>
2
2016-09-12T21:55:14Z
[ "python", "tkinter" ]
Making a random coordinate generator
39,458,369
<p>This code was originally meant for user input, however I want it to randomly create a polygon rather than manually selecting points myself.<br> I'll probably make it a for loop, rather than a while loop so you don't need to mention that.</p> <pre><code>import pygame from pygame.locals import * from sys import exit import random from random import * pygame.init() screen = pygame.display.set_mode((640, 480), 0, 32) points = [] while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() exit() point1 = randint(0,639) point2 = randint(0,479) points = (str(randint(0,639)), str(randint(0,479))) screen.fill((255,255,255)) if len(points) &gt;= 3: pygame.draw.polygon(screen, (0,255,0), points) for point in points: pygame.draw.circle(screen, (0,0,255), point, 5) pygame.display.update() </code></pre> <p>What I am attempting to do is to make a coordinate point randomizer.<br> However, it isn't compatible with this code for some reason. I tried other things as well, and remnants of those attempts may be visible.<br> The segment I changed goes from the <code>for event in pygame.event.get</code>to the <code>screen.fill((255,255,255))</code>.<br> The original code was like this:</p> <pre><code>while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() exit() if event.type == MOUSEBUTTONDOWN: points.append(event.pos) screen.fill((255,255,255)) </code></pre> <p>When I run the program, I get a</p> <pre><code>Traceback (most recent call last): File "H:/Documents/it/Python/manual_box drawer.py", line 26, in &lt;module&gt; pygame.draw.circle(screen, (0,0,255), point, 5) TypeError: must be 2-item sequence, not int </code></pre> <p>error report.</p>
0
2016-09-12T20:32:19Z
39,458,508
<p>I think this </p> <pre><code>points = (str(randint(0,639)), str(randint(0,479))) </code></pre> <p>Should be written like so (the extra comma makes a tuple). You need to append to the <code>points</code> list rather than re-assign the variable. </p> <pre><code>points.append( (point1, point2,) ) </code></pre> <p>Then you can loop over <code>points</code> and draw them like you are already trying to do. </p>
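<p>Putting that together, a minimal sketch of the loop could look like this (assuming a fixed number of random vertices is wanted; adjust the count as needed):</p>

<pre><code>points = []
for _ in range(10):                                   # generate 10 random vertices once
    points.append((randint(0, 639), randint(0, 479)))

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            exit()
    screen.fill((255, 255, 255))
    if len(points) &gt;= 3:
        pygame.draw.polygon(screen, (0, 255, 0), points)
    for point in points:
        pygame.draw.circle(screen, (0, 0, 255), point, 5)
    pygame.display.update()
</code></pre>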
0
2016-09-12T20:43:22Z
[ "python", "random", "coordinates" ]
Compare 2 Pandas dataframes, row by row, cell by cell
39,458,396
<p>I have 2 dataframes, <code>df1</code> and <code>df2</code>, and want to do the following, storing results in <code>df3</code>: </p> <pre><code>for each row in df1: for each row in df2: create a new row in df3 (called "df1-1, df2-1" or whatever) to store results for each cell(column) in df1: for the cell in df2 whose column name is the same as for the cell in df1: compare the cells (using some comparing function func(a,b) ) and, depending on the result of the comparison, write result into the appropriate column of the "df1-1, df2-1" row of df3) </code></pre> <p>For example, something like:</p> <pre><code>df1 A B C D foo bar foobar 7 gee whiz herp 10 df2 A B C D zoo car foobar 8 df3 df1-df2 A B C D foo-zoo func(foo,zoo) func(bar,car) func(foobar,foobar) func(7,8) gee-zoo func(gee,zoo) func(whiz,car) func(herp,foobar) func(10,8) </code></pre> <p>I've started with this: </p> <pre><code>for r1 in df1.iterrows(): for r2 in df2.iterrows(): for c1 in r1: for c2 in r2: </code></pre> <p>but am not sure what to do with it, and would appreciate some help. </p>
2
2016-09-12T20:33:49Z
39,458,848
<p>So to continue the discussion in the comments, you can use vectorization, which is one of the selling points of a library like pandas or numpy. Ideally, you shouldn't ever be calling <code>iterrows()</code>. To be a little more explicit with my suggestion:</p>

<pre><code># with df1 and df2 provided as above, an example
df3 = df1['A'] * 3 + df2['A']
# recall that df2 only has the one row so pandas will broadcast a NaN there

df3
0    foofoofoozoo
1             NaN
Name: A, dtype: object

# more generally
# we know that df1 and df2 share column names, so we can initialize df3 with those names
df3 = pd.DataFrame(columns=df1.columns)

for colName in df1:
    df3[colName] = func(df1[colName], df2[colName])
</code></pre>

<p>Now, you could even have different functions applied to different columns by, say, creating lambda functions and then zipping them with the column names:</p>

<pre><code># some example functions
colAFunc = lambda x, y: x + y
colBFunc = lambda x, y: x - y
....
columnFunctions = [colAFunc, colBFunc, ...]

# initialize df3 as above
df3 = pd.DataFrame(columns=df1.columns)

for func, colName in zip(columnFunctions, df1.columns):
    df3[colName] = func(df1[colName], df2[colName])
</code></pre>

<p>The only "gotcha" that comes to mind is that you need to be sure that your function is applicable to the data in your columns. For instance, if you were to do something like <code>df1['A'] - df2['A']</code> (with df1, df2 as you have provided), that would raise a <code>TypeError</code> as the subtraction of two strings is undefined. Just something to be aware of.</p>

<hr>

<p><strong>Edit, re: your comment:</strong> That is doable as well. Iterate over the dfX.columns that is larger, so you don't run into a <code>KeyError</code>, and throw an <code>if</code> statement in there:</p>

<pre><code># all the other jazz
# let's say df1 is [['A', 'B', 'C']] and df2 is [['A', 'B', 'C', 'D']]
# so iterate over df2 columns
for colName in df2:
    if colName not in df1:
        df3[colName] = np.nan # be sure to import numpy as np
    else:
        df3[colName] = func(df1[colName], df2[colName])
</code></pre>
2
2016-09-12T21:12:13Z
[ "python", "pandas", "dataframe", "iterator", "iteration" ]
Matplotlib- How to make color fill bias towards max & min values?
39,458,426
<p><strong>The issue</strong></p> <p>I have a plot of correlation of two variables with most of the values close to either -1 or 1. I'm using a seismic colormap (red &amp; blue w/ white in the middle), but most of the plot is either dark blue (close to -1) or dark red (close to 1), showing little detail near min &amp; max values.</p> <p><strong>The code</strong></p> <p>Here's the code block I used for plotting.</p> <pre><code>#Set variables lonlabels = ['0','45E','90E','135E','180','135W','90W','45W','0'] latlabels = ['90S','60S','30S','Eq.','30N','60N','90N'] bounds = np.array([-1.0,-0.8,-0.6,-0.4,-0.2,0,0.2,0.4,0.6,0.8,1.0]) #Create basemap fig,ax = plt.subplots(figsize=(15.,10.)) m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c') m.drawcoastlines(linewidth=1,color='w') m.drawcountries(linewidth=1,color='w') m.drawparallels(np.arange(-90,90,30.),linewidth=0.3) m.drawmeridians(np.arange(-180.,180.,45.),linewidth=0.3) meshlon,meshlat = np.meshgrid(lon,lat) x,y = m(meshlon,meshlat) #Plot variable corre = m.pcolormesh(x,y,corrcoef,cmap='seismic', shading='gouraud',vmin=-1.0,vmax=1.0) #Set titles &amp; labels #Colorbar cbar = m.colorbar(corre,size="8%",ticks=bounds,location='bottom',pad=0.8) cbar.set_label(label='Correlation Coefficient',size=25) cbar.set_ticklabels(bounds) for t in cbar.ax.get_xticklabels(): t.set_fontsize(25) #Titles fig.suptitle('Correlation of Local Precipitation to Global (CanESM2)',fontsize=30,x=0.51,y=0.92) ax.set_xlabel('Longitude',fontsize=25) ax.set_xticks(np.arange(0, 405,45)) ax.set_xticklabels(lonlabels,fontsize=20) ax.set_ylabel('Latitude', fontsize=25) ax.set_yticks(np.arange(-90,120,30)) ax.set_yticklabels(latlabels,fontsize=20) </code></pre> <p>And here's the plot it generates.</p> <p><a href="http://i.stack.imgur.com/bWkw8.png" rel="nofollow"><img src="http://i.stack.imgur.com/bWkw8.png" alt="enter image description here"></a></p> <p><strong>The Question</strong></p> <p>I'd like to adjust the color fill scheme so the middle portion of the colormap, say the -0.9 to 0.9 range, is compacted (almost like a break but not quite) and the color fill better defines the values at the ends. How can I do that? Like a symmetric logarithmic distribution, but biased towards the max &amp; min instead of the middle value.</p>
0
2016-09-12T20:36:43Z
39,458,843
<p>There is a keyword argument <strong>norm</strong> that you can pass to <strong>pcolormesh</strong> in order to change how values are mapped onto the colormap. Take a look at the <a href="http://matplotlib.org/devdocs/users/colormapnorms.html#symmetric-logarithmic" rel="nofollow">matplotlib documentation</a> on symmetric logarithmic normalization (<code>SymLogNorm</code>); its <strong>linthresh</strong> parameter controls how wide the middle range is. I haven't tried it but I think it might solve your problem.</p>
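<p>A minimal sketch of what that could look like with the <code>pcolormesh</code> call from your question (note that <code>vmin</code>/<code>vmax</code> move into the norm; the <code>linthresh</code> and <code>linscale</code> values here are guesses you would need to tune, and whether this gives enough contrast near ±1 depends on your data):</p> <pre><code>import matplotlib.colors as colors

corre = m.pcolormesh(x, y, corrcoef, cmap='seismic', shading='gouraud',
                     norm=colors.SymLogNorm(linthresh=0.9, linscale=0.1,
                                            vmin=-1.0, vmax=1.0))
</code></pre>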
1
2016-09-12T21:12:06Z
[ "python", "matplotlib", "colors", "colormap" ]
store dictionary in pandas dataframe
39,458,806
<p>I want to store a dictionary in a data frame</p> <pre><code>dictionary_example={1234:{'choice':0,'choice_set':{0:{'A':100,'B':200,'C':300},1:{'A':200,'B':300,'C':300},2:{'A':500,'B':300,'C':300}}}, 234:{'choice':1,'choice_set':{0:{'A':100,'B':400},1:{'A':100,'B':300,'C':1000}}}, 1876:{'choice':2,'choice_set':{0:{'A': 100,'B':400,'C':300},1:{'A':100,'B':300,'C':1000},2:{'A':600,'B':200,'C':100}}} } </code></pre> <p>and put it into </p> <pre><code>id choice 0_A 0_B 0_C 1_A 1_B 1_C 2_A 2_B 2_C 1234 0 100 200 300 200 300 300 500 300 300 234 1 100 400 - 100 300 1000 - - - 1876 2 100 400 300 100 300 1000 600 200 100 </code></pre>
3
2016-09-12T21:07:26Z
39,459,076
<p>I think the following is pretty close, the core idea is simply to convert those dictionaries into json and relying on pandas.read_json to parse them. </p> <pre><code>dictionary_example={ "1234":{'choice':0,'choice_set':{0:{'A':100,'B':200,'C':300},1:{'A':200,'B':300,'C':300},2:{'A':500,'B':300,'C':300}}}, "234":{'choice':1,'choice_set':{0:{'A':100,'B':400},1:{'A':100,'B':300,'C':1000}}}, "1876":{'choice':2,'choice_set':{0:{'A': 100,'B':400,'C':300},1:{'A':100,'B':300,'C':1000},2:{'A':600,'B':200,'C':100}}} } df = pd.read_json(json.dumps(dictionary_example)).T def to_s(r): return pd.read_json(json.dumps(r)).unstack() flattened_choice_set = df["choice_set"].apply(to_s) flattened_choice_set.columns = ['_'.join((str(col[0]), col[1])) for col in flattened_choice_set.columns] result = pd.merge(df, flattened_choice_set, left_index=True, right_index=True).drop("choice_set", axis=1) result </code></pre> <p><a href="http://i.stack.imgur.com/w4Txu.png" rel="nofollow"><img src="http://i.stack.imgur.com/w4Txu.png" alt="enter image description here"></a></p>
3
2016-09-12T21:28:55Z
[ "python", "pandas", "dictionary" ]
error using Python Elasticserarch-py package
39,458,810
<p>So I am trying to create a connection to AWS ES. I have successfully connected to my S3 bucket in the same zone. However, when I try to connect to ES, I get this message every time.</p> <pre><code>Please install requests to use RequestsHttpConnection. </code></pre> <p>I have imported the correct module but nothing seems to fix this issue. Here is my code</p> <pre><code>import elasticsearch from elasticsearch import Elasticsearch, RequestsHttpConnection from boto3 import client, logging, s3, Session host = 'search-esdomain-t3rfr4trerdgfdh6t4t43ef.us-east-1.es.amazonaws.com' es = Elasticsearch( hosts = host, connection_class = RequestsHttpConnection, http_auth = ('user', 'password'), use_ssl = True, verify_certs = False) </code></pre> <p>This looks the same as every example I can find but for some reason it will not connect.</p> <p>This is with Python 3.5 and my dev environment is VS 2015.</p>
1
2016-09-12T21:08:05Z
39,473,127
<p>As per the documentation for <a href="http://elasticsearch-py.readthedocs.io/en/master/transports.html" rel="nofollow">elasticsearch-py</a>:</p> <blockquote> <p>Note that the RequestsHttpConnection requires requests to be installed.</p> </blockquote> <p>You need to explicitly install the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> module if it does not already exist on your <code>PYTHONPATH</code>.</p>
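<p>For example, from the Python 3.5 environment that runs your script (assuming you use pip there):</p> <pre><code>pip install requests
</code></pre>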
1
2016-09-13T14:56:57Z
[ "python", "amazon-web-services", "elasticsearch", "amazon-elasticsearch" ]
Creating a Unittest
39,458,818
<p>Hey I'm pretty new to python and I'm trying to create a Unit test for some code and I'm running into a lot of trouble, to be honest I'm wondering if the code is even testable. </p> <pre><code>""" Core employee class """ class Employee(object): empCount = 0 def __init__(self, name, salary, debt, takehome): self.name = name self.salary = salary self.debt = debt self.takehome = takehome def display_employee(self): print "Name : ", self.name, ",Salary: ", self.salary, "Debt: ", \ self.debt, "Take Home: ", self.takehome emp1 = Employee("Scott", 2000, 200, 2000-200) emp2 = Employee("Mary", 5000, 300, 5000-300) emp3 = Employee("Sam", 4000, 700, 4000-700) emp4 = Employee("Sarah", 7000, 2000, 7000-200) emp5 = Employee("Charlie", 10000, 5000, 10000-5000) emp6 = Employee("Tony", 16000, 20000, 16000-20000) employees = [emp1, emp2, emp3, emp4, emp5, emp6] for employee in employees: employee.display_employee() Employee.empCount = len(employees) print "Total Employees is %d" % Employee.empCount </code></pre> <p>If anyone would give me a hand that would be much appreciated, and any tips on whether I need to change the code so it's easier to run unit tests on it would be great too. </p>
-1
2016-09-12T21:08:47Z
39,458,977
<p>I think your class is testable, but the tests are fairly trivial since you only have an <code>__init__</code> method and a display method.</p> <p>The starting point for unit testing is the <strong><a href="https://docs.python.org/2/library/unittest.html" rel="nofollow">unittest</a></strong> library.</p> <p>Write a single test function for each "public" method:</p> <pre><code>class TestEmployee(unittest.TestCase): def test_init(self): ... def test_display_employee(self): ... </code></pre> <p>There are lots of tutorials on the Internet, look at this one: <a href="http://docs.python-guide.org/en/latest/writing/tests/" rel="nofollow">Testing Your Code</a>. </p> <p>See also: <a href="http://pythontesting.net/framework/unittest/unittest-introduction/" rel="nofollow">Python testing</a></p>
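<p>A minimal sketch of what those tests could look like, assuming the <code>Employee</code> class lives in a module named <code>employee.py</code> (adjust the import to your actual file name) and that the module-level demo code is moved under an <code>if __name__ == "__main__":</code> guard so importing it does not print anything:</p> <pre><code>import unittest
from employee import Employee   # hypothetical module name

class TestEmployee(unittest.TestCase):
    def test_init(self):
        # the constructor should store all four attributes unchanged
        emp = Employee("Scott", 2000, 200, 1800)
        self.assertEqual(emp.name, "Scott")
        self.assertEqual(emp.salary, 2000)
        self.assertEqual(emp.debt, 200)
        self.assertEqual(emp.takehome, 1800)

if __name__ == "__main__":
    unittest.main()
</code></pre> <p>Testing <code>display_employee</code> is harder because it prints instead of returning a value; a common refactor is to have it return the formatted string (and let the caller print it), which makes it easy to assert on.</p>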
0
2016-09-12T21:20:43Z
[ "python", "python-2.7" ]
Django model reload_from_db() vs. explicitly recalling from db
39,458,820
<p>If I have an object retrieved from a model, for example:</p> <pre><code>obj = Foo.objects.first() </code></pre> <p>I know that if I want to reference this object later and make sure that it has the current values from the database, I can call:</p> <pre><code>obj.refresh_from_db() </code></pre> <p>My question is, is there any advantage to using the <code>refresh_from_db()</code> method over simply doing?:</p> <pre><code>obj = Foo.objects.get(id=obj.id) </code></pre> <p>As far as I know, the result will be the same. <code>refresh_from_db()</code> seems more explicit, but in some cases it means an extra line of code. Lets say I update the <code>value</code> field for <code>obj</code> and later want to test that it has been updated to <code>False</code>. Compare:</p> <pre><code>obj = Foo.objects.first() assert obj.value is True # value of foo obj is updated somewhere to False and I want to test below obj.refresh_from_db() assert obj.value is False </code></pre> <p>with this:</p> <pre><code>obj = Foo.objects.first() assert obj.value is True # value of foo obj is updated somewhere to False and I want to test below assert Foo.objects.get(id=obj.id).value is False </code></pre> <p>I am not interested in a discussion of which of the two is more pythonic. Rather, <strong>I am wondering if one method has a practical advantage over the other in terms of resources, performance, etc</strong>. I have read <a href="https://docs.djangoproject.com/en/1.10/ref/models/instances/#refreshing-objects-from-database" rel="nofollow">this bit of documentation</a>, but I was not able to ascertain from that whether there is an advantage to using <code>reload_db()</code>. Thank you!</p>
1
2016-09-12T21:08:55Z
39,461,946
<p>Django sources are usually relatively easy to follow. If we look at the <a href="https://github.com/django/django/blob/master/django/db/models/base.py#L656" rel="nofollow">refresh_from_db() implementation</a>, at its core it is still using this same <code>Foo.objects.get(id=obj.id)</code> approach:</p> <pre><code>db_instance_qs = self.__class__._default_manager.using(db).filter(pk=self.pk) ... db_instance_qs = db_instance_qs.only(*fields) ... db_instance = db_instance_qs.get() </code></pre> <p>Only there are couple extra bells and whistles:</p> <ul> <li><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.defer" rel="nofollow">deferred</a> fields are ignored</li> <li>stale foreign key references are cleared (according to the comment explanation)</li> </ul> <p>So for everyday usage it is safe to say that they are pretty much the same, use whatever you like.</p>
1
2016-09-13T03:50:38Z
[ "python", "django", "django-models" ]
How to select json data randomly
39,458,831
<p>I have the following JSON file and I need a way to randomly select JSON data and print its value. </p> <p><strong>json file :</strong></p> <pre><code>{ "base": [{"1": "add"},{"2": "act"}], "past": [{"add": "added"},{"act": "acted"}], "past-participle": [{"add": "added"},{"act": "acted"}], "s-es-ies": [{"add": "adds"},{"act": "acts"}], "ing": [{"add": "adding"},{"act": "acting"}] } </code></pre> <p><strong>example</strong></p> <pre><code>user_input = 'past' &gt;&gt; past code randomly selects 'add' or 'act' from past &gt;&gt; add prints out its value &gt;&gt; added </code></pre>
3
2016-09-12T21:09:58Z
39,458,941
<p>Use <code>random.choice</code> (i.e. <code>from random import choice</code>), supplying as choices the list stored under the selected key; here <code>j</code> is the dictionary loaded from your JSON file (for example with <code>json.load</code>):</p> <pre><code>user_input = input('&gt; ') &gt; past list(choice(j[user_input]).values())[0] Out[177]: 'added' </code></pre> <p>Factor it into a function to make it more compact:</p> <pre><code>def random_json_val(json_obj, k): return list(choice(json_obj[k]).values())[0] </code></pre> <p>Calling it gets you a random value for a given <code>k</code>:</p> <pre><code>&gt;&gt;&gt; random_json_val(j, 'past') 'added' &gt;&gt;&gt; random_json_val(j, 'past') 'acted' &gt;&gt;&gt; random_json_val(j, 's-es-ies') 'acts' </code></pre>
2
2016-09-12T21:18:26Z
[ "python", "json", "python-3.x" ]
Pandas: how to increment a column's cell value based on a list of ids
39,458,871
<p>I have a list of ids that correspond to the row of a data frame. From that list of ids, I want to increment a value of another column that intersects with that id's row.</p> <p>What I was thinking was something like this:</p> <pre><code>ids = [1,2,3,4] for id in ids: my_df.loc[my_df['id']] == id]['other_column'] += 1 </code></pre> <p>But this doesn't work. How can I mutate the original df, <code>my_df</code>?</p>
3
2016-09-12T21:13:37Z
39,458,910
<p>try this:</p> <pre><code>my_df.loc[my_df['id'].isin(ids), 'other_column'] += 1 </code></pre> <p>Demo:</p> <pre><code>In [233]: ids=[0,2] In [234]: df = pd.DataFrame(np.random.randint(0,3, (5, 3)), columns=list('abc')) In [235]: df Out[235]: a b c 0 2 2 1 1 1 0 2 2 2 2 0 3 0 2 1 4 0 1 2 In [236]: df.loc[df.a.isin(ids), 'c'] += 100 In [237]: df Out[237]: a b c 0 2 2 101 1 1 0 2 2 2 2 100 3 0 2 101 4 0 1 102 </code></pre>
3
2016-09-12T21:16:12Z
[ "python", "pandas", "dataframe" ]
Pandas: how to increment a column's cell value based on a list of ids
39,458,871
<p>I have a list of ids that correspond to the row of a data frame. From that list of ids, I want to increment a value of another column that intersects with that id's row.</p> <p>What I was thinking was something like this:</p> <pre><code>ids = [1,2,3,4] for id in ids: my_df.loc[my_df['id']] == id]['other_column'] += 1 </code></pre> <p>But this doesn't work. How can I mutate the original df, <code>my_df</code>?</p>
3
2016-09-12T21:13:37Z
39,458,914
<p>the ids are unique, correct? If so, you can directly place the <code>id</code> into the <code>df</code>:</p> <pre><code>ids = [1,2,3,4] for id in ids: df.loc[id,'column_name'] = id+1 </code></pre>
1
2016-09-12T21:16:31Z
[ "python", "pandas", "dataframe" ]
How to stop automatic resizing of frames
39,458,874
<p>I have three frames, but when a new label appears the frames automatically readjust to a new size. How do I stop the readjustment and have the size of each frame set and immutable.</p> <pre><code>#Import tkinter to make gui from tkinter import * from tkinter import ttk import codecs #Program that results when user attempts to log in def login(*args): file = open("rot13.txt", "r") lines = file.readlines() uname = user.get() pword = pw.get() for i in lines: x = i.split() if codecs.encode(uname,'rot13') == x[0] and codecs.encode(pword,'rot13') == x[1]: result.set("Successful") break; else: result.set("Access Denied") root = Tk() root.title("Login") #Configures column and row settings and sets padding mainframe = ttk.Frame(root, padding="3 3 12 12") mainframe['borderwidth'] = 5 mainframe['relief'] = "solid" mainframe.grid(column=1, row=1, columnspan=3, rowspan=2) mainframe.columnconfigure(0, weight=1) mainframe.rowconfigure(0, weight=1) mainframe2 = ttk.Frame(root, padding="3 3 12 12") mainframe2['borderwidth'] = 5 mainframe2['relief'] = "solid" mainframe2.grid(column=1, row=3, columnspan=1, rowspan=3) mainframe2.columnconfigure(0, weight=1) mainframe2.rowconfigure(0, weight=1) mainframe3 = ttk.Frame(root, padding="3 3 12 12") mainframe3['borderwidth'] = 5 mainframe3['relief'] = "solid" mainframe3.grid(column=2, row=5) mainframe3.columnconfigure(0, weight=1) mainframe3.rowconfigure(0, weight=1) #anchors for widgets user = StringVar() pw = StringVar() result = StringVar() #Asks user input user_entry = ttk.Entry(mainframe, width=20, textvariable=user) user_entry.grid(column=2, row=1, sticky=(W, E)) pw_entry = ttk.Entry(mainframe, width=20, textvariable=pw) pw_entry.grid(column=2, row=2, sticky=(W, E)) #Labels to make user-friendly and able to understand ttk.Label(mainframe, text="Username ").grid(column=1, row=1, sticky=W) ttk.Label(mainframe, text="Password ").grid(column=1, row=2, sticky=W) ttk.Label(mainframe2, text="").grid(column=1, row=3, sticky=W) ttk.Label(mainframe2, text="Result").grid(column=1, row=4, sticky=W) ttk.Label(mainframe2, text="").grid(column=1, row=5, sticky=W) #Button to log in ttk.Button(mainframe3, text="Login", command=login).grid(column=3, row=5, sticky=(W,E)) #Makes a spot to put in result ttk.Label(mainframe2, textvariable=result).grid(column=2, row=4, sticky=(W, E)) #Opens up with item selected and allows you to enter username without having to click it user_entry.focus() #Runs calculate if click enter root.bind('&lt;Return&gt;', login) root.mainloop() </code></pre> <p>Here is the before and after picture of the results:</p> <p><a href="http://i.stack.imgur.com/aB1VS.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/aB1VS.jpg" alt="beginning"></a></p> <p><a href="http://i.stack.imgur.com/qpxAD.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qpxAD.jpg" alt="and then after a result appears this happens"></a></p> <p>As you can see the frames change sizes after the program runs its course. How do i stop this re-size and stick to a predetermined size?</p>
0
2016-09-12T21:13:51Z
39,459,490
<p>The simplest solution in this specific case is to give the label with the text a fixed width. When you specify a fixed width for a label, tkinter will do it's best to honor that width (though a lot depends on how you place it on the screen with <code>pack</code>, <code>place</code> or <code>grid</code>).</p> <pre><code>ttk.Label(..., width=12).grid(...) </code></pre> <p>Another solution is to turn off geometry propagation, which means you're responsible for giving the frame an explicit width and height: </p> <pre><code>mainframe2 = ttk.Frame(..., width=200, height=100) ... mainframe2.grid_propagate(False) </code></pre> <p>I do not recommend turning geometry propagation off. Tkinter is usually very good at computing the right size for widgets, and this usually results in very poor behavior if you change fonts, screen resolutions, or root window sizes. </p>
0
2016-09-12T22:06:07Z
[ "python", "tkinter" ]
How to stop automatic resizing of frames
39,458,874
<p>I have three frames, but when a new label appears the frames automatically readjust to a new size. How do I stop the readjustment and have the size of each frame set and immutable.</p> <pre><code>#Import tkinter to make gui from tkinter import * from tkinter import ttk import codecs #Program that results when user attempts to log in def login(*args): file = open("rot13.txt", "r") lines = file.readlines() uname = user.get() pword = pw.get() for i in lines: x = i.split() if codecs.encode(uname,'rot13') == x[0] and codecs.encode(pword,'rot13') == x[1]: result.set("Successful") break; else: result.set("Access Denied") root = Tk() root.title("Login") #Configures column and row settings and sets padding mainframe = ttk.Frame(root, padding="3 3 12 12") mainframe['borderwidth'] = 5 mainframe['relief'] = "solid" mainframe.grid(column=1, row=1, columnspan=3, rowspan=2) mainframe.columnconfigure(0, weight=1) mainframe.rowconfigure(0, weight=1) mainframe2 = ttk.Frame(root, padding="3 3 12 12") mainframe2['borderwidth'] = 5 mainframe2['relief'] = "solid" mainframe2.grid(column=1, row=3, columnspan=1, rowspan=3) mainframe2.columnconfigure(0, weight=1) mainframe2.rowconfigure(0, weight=1) mainframe3 = ttk.Frame(root, padding="3 3 12 12") mainframe3['borderwidth'] = 5 mainframe3['relief'] = "solid" mainframe3.grid(column=2, row=5) mainframe3.columnconfigure(0, weight=1) mainframe3.rowconfigure(0, weight=1) #anchors for widgets user = StringVar() pw = StringVar() result = StringVar() #Asks user input user_entry = ttk.Entry(mainframe, width=20, textvariable=user) user_entry.grid(column=2, row=1, sticky=(W, E)) pw_entry = ttk.Entry(mainframe, width=20, textvariable=pw) pw_entry.grid(column=2, row=2, sticky=(W, E)) #Labels to make user-friendly and able to understand ttk.Label(mainframe, text="Username ").grid(column=1, row=1, sticky=W) ttk.Label(mainframe, text="Password ").grid(column=1, row=2, sticky=W) ttk.Label(mainframe2, text="").grid(column=1, row=3, sticky=W) ttk.Label(mainframe2, text="Result").grid(column=1, row=4, sticky=W) ttk.Label(mainframe2, text="").grid(column=1, row=5, sticky=W) #Button to log in ttk.Button(mainframe3, text="Login", command=login).grid(column=3, row=5, sticky=(W,E)) #Makes a spot to put in result ttk.Label(mainframe2, textvariable=result).grid(column=2, row=4, sticky=(W, E)) #Opens up with item selected and allows you to enter username without having to click it user_entry.focus() #Runs calculate if click enter root.bind('&lt;Return&gt;', login) root.mainloop() </code></pre> <p>Here is the before and after picture of the results:</p> <p><a href="http://i.stack.imgur.com/aB1VS.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/aB1VS.jpg" alt="beginning"></a></p> <p><a href="http://i.stack.imgur.com/qpxAD.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qpxAD.jpg" alt="and then after a result appears this happens"></a></p> <p>As you can see the frames change sizes after the program runs its course. How do i stop this re-size and stick to a predetermined size?</p>
0
2016-09-12T21:13:51Z
39,459,653
<p>Just add a <code>width</code> option to <code>ttk.Label(mainframe2, textvariable=result).grid(column=2, row=4, sticky=(W, E))</code>:<br/> So it will become like this:<br/> <code>ttk.Label(mainframe2, textvariable=result, width=20).grid(column=2, row=4, sticky=(W, E))</code></p>
0
2016-09-12T22:22:24Z
[ "python", "tkinter" ]
String instead of integer in finding "bob"
39,458,987
<p>My question code:</p> <pre><code>count = 0 for char in s: if char.startswith("bob"): count += 1 print ("Number of times bob occurs is: " + str(count)) </code></pre> <p>I have a good solution as follows:</p> <pre><code>count = 0 for i in range(len(s)): if s[i: i+3] == "bob": count += 1 print ("Number of times bob occurs is: " + str(count)) </code></pre> <p>My question: Instead of a solution using the integer index "for i in range(len(s))", I want an alternative solution using characters/strings. Could anyone tell me why my first solution returns "0" in finding "bob"? Thanks.</p>
-1
2016-09-12T21:21:24Z
39,459,307
<p>Your first code doesn't work because</p> <pre><code>for char in s: </code></pre> <p>just sets <code>char</code> to individual characters in the string. So if <code>s = "bob is bob"</code>, <code>char</code> will be <code>"b"</code>, <code>"o"</code>, <code>"b"</code>, <code>"i"</code>, etc. None of those single-character strings starts with <code>"bob"</code>, so the <code>if</code> test will never succeed.</p> <p>There's no built-in looping syntax for iterating over substrings, so you need to iterate over the index, as in the sketch below.</p>
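<p>For completeness, here is the index-based count condensed into a single expression (this is just your second approach rewritten as a generator expression, not a different algorithm):</p> <pre><code>count = sum(1 for i in range(len(s)) if s[i:i+3] == "bob")
print("Number of times bob occurs is: " + str(count))
</code></pre>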
0
2016-09-12T21:49:43Z
[ "python" ]
String instead of integer in finding "bob"
39,458,987
<p>My question code:</p> <pre><code>count = 0 for char in s: if char.startswith("bob"): count += 1 print ("Number of times bob occurs is: " + str(count)) </code></pre> <p>I have a good solution as follows:</p> <pre><code>count = 0 for i in range(len(s)): if s[i: i+3] == "bob": count += 1 print ("Number of times bob occurs is: " + str(count)) </code></pre> <p>My question: Instead of a solution using the integer index "for i in range(len(s))", I want an alternative solution using characters/strings. Could anyone tell me why my first solution returns "0" in finding "bob"? Thanks.</p>
-1
2016-09-12T21:21:24Z
39,459,320
<p>What is wrong with your first piece of code will become apparent with a print statement in the loop. The statement <code>for char in s</code> loops through each character in <code>s</code>, and no single character starts with the word <code>bob</code>.</p> <p>If you really want a <code>for something in something</code> type loop, you can do:</p> <pre><code>count = 0 for word in s.split(): if word == "bob": count += 1 print ("Number of times bob occurs is: " + str(count)) </code></pre> <p>This will only work if you want to match only occurrences of bob that occur as a word alone. If you want to match bob anywhere, use the string.count method:</p> <pre><code>count = s.count("bob") </code></pre> <p>Or alternatively, regex:</p> <pre><code>import re count = len(re.findall("bob", s)) </code></pre> <p>If you want overlapping matches, the following is actually what you are doing in your for loop, just more concisely. I don't think there is any simpler way to count overlapping occurrences than this.</p> <pre><code>[s[i:i+3] for i in range(len(s))].count('bob') </code></pre>
0
2016-09-12T21:50:27Z
[ "python" ]
scikit ShuffleSplit raising pandas "IndexError: index N is out of bounds for axis 0 with size M"
39,459,006
<p>I'm trying to use a scikit's GridSearch to find the best alpha for a Lasso, and one of parameters I want it iterate is the cross validation split. So, I'm doing:</p> <pre><code># X_train := Pandas Dataframe with no index (auto numbered index) and 62064 rows # y_train := Pandas 1-column Dataframe with no index (auto numbered index) and 62064 rows from sklearn import linear_model as lm from sklearn import cross_validation as cv from sklearn import grid_search model = lm.LassoCV(eps=0.001, n_alphas=1000) params = {"cv": [cv.ShuffleSplit(n=len(X_train), test_size=0.2), cv.ShuffleSplit(n=len(X_train), test_size=0.1)]} m_model = grid_search.GridSearchCV(model, params) m_model.fit(X_train, y_train) </code></pre> <p>But it raises the exception</p> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-113-f791cb0644c1&gt; in &lt;module&gt;() 10 m_model = grid_search.GridSearchCV(model, params) 11 ---&gt; 12 m_model.fit(X_train.as_matrix(), y_train.as_matrix()) /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/grid_search.py in fit(self, X, y) 802 803 """ --&gt; 804 return self._fit(X, y, ParameterGrid(self.param_grid)) 805 806 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/grid_search.py in _fit(self, X, y, parameter_iterable) 551 self.fit_params, return_parameters=True, 552 error_score=self.error_score) --&gt; 553 for parameters in parameter_iterable 554 for train, test in cv) 555 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable) 798 # was dispatched. In particular this covers the edge 799 # case of Parallel used with an exhausted iterator. 
--&gt; 800 while self.dispatch_one_batch(iterator): 801 self._iterating = True 802 else: /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator) 656 return False 657 else: --&gt; 658 self._dispatch(tasks) 659 return True 660 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch) 564 565 if self._pool is None: --&gt; 566 job = ImmediateComputeBatch(batch) 567 self._jobs.append(job) 568 self.n_dispatched_batches += 1 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __init__(self, batch) 178 # Don't delay the application, to avoid keeping the input 179 # arguments in memory --&gt; 180 self.results = batch() 181 182 def get(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in &lt;listcomp&gt;(.0) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/cross_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, error_score) 1529 estimator.fit(X_train, **fit_params) 1530 else: -&gt; 1531 estimator.fit(X_train, y_train, **fit_params) 1532 1533 except Exception as e: /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/linear_model/coordinate_descent.py in fit(self, X, y) 1146 for train, test in folds) 1147 mse_paths = Parallel(n_jobs=self.n_jobs, verbose=self.verbose, -&gt; 1148 backend="threading")(jobs) 1149 mse_paths = np.reshape(mse_paths, (n_l1_ratio, len(folds), -1)) 1150 mean_mse = np.mean(mse_paths, axis=1) /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable) 798 # was dispatched. In particular this covers the edge 799 # case of Parallel used with an exhausted iterator. 
--&gt; 800 while self.dispatch_one_batch(iterator): 801 self._iterating = True 802 else: /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator) 656 return False 657 else: --&gt; 658 self._dispatch(tasks) 659 return True 660 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch) 564 565 if self._pool is None: --&gt; 566 job = ImmediateComputeBatch(batch) 567 self._jobs.append(job) 568 self.n_dispatched_batches += 1 /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __init__(self, batch) 178 # Don't delay the application, to avoid keeping the input 179 # arguments in memory --&gt; 180 self.results = batch() 181 182 def get(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in &lt;listcomp&gt;(.0) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /home/user/Programs/repos/pyenv/versions/3.5.2/envs/work/lib/python3.5/site-packages/sklearn/linear_model/coordinate_descent.py in _path_residuals(X, y, train, test, path, path_params, alphas, l1_ratio, X_order, dtype) 931 avoid memory copies 932 """ --&gt; 933 X_train = X[train] 934 y_train = y[train] 935 X_test = X[test] IndexError: index 60527 is out of bounds for axis 0 with size 41376 </code></pre> <p>I tried to use X_train.as_matrix() but didn't work either, giving the same error.</p> <p>Strange that I can use it manually:</p> <pre><code>cv_split = cv.ShuffleSplit(n=len(X_train), test_size=0.2) for tr, te in cv_split: print(X_train.as_matrix()[tr], y_train.as_matrix()[tr]) [[0 0 0 ..., 0 0 1] [0 0 0 ..., 0 0 1] [0 0 0 ..., 0 0 1] ..., [0 0 0 ..., 0 0 1] [0 0 0 ..., 0 0 1] [0 0 0 ..., 0 0 1]] [2 1 1 ..., 1 4 1] [[ 0 0 0 ..., 0 0 1] [1720 0 0 ..., 0 0 1] [ 0 0 0 ..., 0 0 1] ..., [ 773 0 0 ..., 0 0 1] [ 0 0 0 ..., 0 0 1] [ 501 1 0 ..., 0 0 1]] [1 1 1 ..., 1 2 1] </code></pre> <p>What am I not seeing here? Am I doing something wrong or is that a scikit bug?</p> <hr> <p><strong>Update 1</strong></p> <p>Just found out that cv parameter is not a cv.ShuffleSplit object. This is counterintuitive for me, since <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html#sklearn-linear-model-lassocv" rel="nofollow">the docs says</a></p> <p><a href="http://i.stack.imgur.com/nHLXC.png" rel="nofollow"><img src="http://i.stack.imgur.com/nHLXC.png" alt="enter image description here"></a></p> <p>Aren't cross_validation classes "object to be used as a cross-validation generator"?</p> <p>Thanks!</p>
0
2016-09-12T21:23:00Z
39,460,772
<p>You shouldn't be varying <code>cv</code> in the cross-validation parameter grid; the idea is that you have a fixed cross-validation and use it to grid-search over other parameters, something like this:</p> <pre><code>m_model = grid_search.GridSearchCV(model, {'learning_rate': [0.1, 0.05, 0.02]}, cv = cv.ShuffleSplit(n=len(X_train), test_size=0.2)) </code></pre>
0
2016-09-13T01:01:07Z
[ "python", "pandas", "dataframe", "scikit-learn", "cross-validation" ]
Argparse: How to disallow some options in the presence of others - Python
39,459,015
<p>I have the following utility:</p> <pre><code>import argparse parser = argparse.ArgumentParser(description='Do some action.') parser.add_argument('--foo', '--fo', type=int, default=-1, help='do something foo') parser.add_argument('--bar', '--br', type=int, default=-1, help='do something bar') parser.add_argument('--baz', '--bz', type=int, default=-1, help='do something baz') parser.add_argument('--bat', '--bt', type=int, default=-1, help='do something bat') </code></pre> <p>However, if the <code>--foo</code> option is used, the <code>--bat</code> option should be disallowed, and conversely, the <code>--bat</code> option should only be used if <code>--bar</code> and <code>--baz</code> are present. How can I accomplish that using <code>argparse</code>? Sure, I could add a bunch of <code>if / else</code> blocks to check for that, but there's something built-in <code>argparse</code> that could do that for me? </p>
2
2016-09-12T21:23:55Z
39,459,093
<p>You can create mutually-exclusive groups of options with <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow"><code>parser.add_mutually_exclusive_group</code></a>:</p> <pre><code>group = parser.add_mutually_exclusive_group() group.add_argument('--foo', '--fo', type=int, default=-1, help='do something foo') group.add_argument('--bat', '--bt', type=int, default=-1, help='do something bat') </code></pre> <p>, but for more complex dependency graphs (for example, <code>--bat</code> requiring <code>--bar</code> and <code>--baz</code>), <code>argparse</code> doesn't offer any specific support. That'd be going too far in the direction of the <a href="https://en.wikipedia.org/wiki/Inner-platform_effect" rel="nofollow">inner-platform effect</a>, trying to rebuild too much of the full generality of a complete programming language within the <code>argparse</code> subsystem.</p>
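<p>For the <code>--bat</code>-requires-<code>--bar</code>-and-<code>--baz</code> rule, the usual pattern is a small check after <code>parse_args()</code>. The sketch below assumes (as in your example) that the default of <code>-1</code> means "option not supplied"; <code>parser.error()</code> prints the usage message and exits:</p> <pre><code>args = parser.parse_args()
if args.bat != -1 and (args.bar == -1 or args.baz == -1):
    parser.error("--bat requires both --bar and --baz")
</code></pre>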
1
2016-09-12T21:30:34Z
[ "python", "parsing", "command-line", "argparse" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,459,149
<p>Your value <code>x</code> holds a string such as 'ddd', and a string has no <code>next()</code>. <code>next()</code> belongs to the iterator and is used to get the next element from the iterator. The correct way to call it is <code>it.next()</code>:</p> <pre><code>it=iter(content) for x in it: print x, it.next(); </code></pre> <p>But you will get an exception once you have consumed all elements of the iterator, so you need to catch the StopIteration exception.</p> <pre><code>for x in it: try: line, next_line = x, it.next() # do your count logic over here except StopIteration: break </code></pre> <p><code>dic.update({[line,next_line]: 1})</code> does not work either: a list is not hashable, so it cannot be a dictionary key (use a tuple instead), and always setting the value to 1 overwrites earlier counts. Note also that advancing the iterator inside the loop consumes two lines per iteration, so you will skip some of the possible pairs.</p>
-1
2016-09-12T21:35:03Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,459,152
<p><code>line</code>, like all <code>strs</code>, is an iter<strong>able</strong>, which means it has an <code>__iter__</code> method. But <code>next</code> works with iter<strong>ators</strong>, which have a <code>__next__</code> method (in Python 2 it's a <code>next</code> method). When the interpreter executes <code>next(line)</code>, it attempts to call <code>line.__next__</code>. Since <code>line</code> does not have a <code>__next__</code> method it raises <code>TypeError: str object is not an iterator</code>. </p> <p>Since <code>line</code> is an iter<strong>able</strong> and has an <code>__iter__</code> method, we can set <code>it = iter(line)</code>. <code>it</code> is an iter<strong>ator</strong> with a <code>__next__</code> method, and <code>next(it)</code> returns the next character in <code>line</code>. But you are looking for the next line in the file, so try something like:</p> <pre><code>from collections import defaultdict dic = defaultdict(int) with open('file.txt') as f: content = f.readlines() for i in range(len(content) - 1): key = content[i].rstrip() + ',' + content[i+1].rstrip() dic[key] += 1 for k,v in dic.items(): print(k,':',v) </code></pre> <p><strong>Output</strong> (<em>file.txt</em> as in OP)</p> <pre><code>err,ddd : 1 ddd,fff : 1 aaa,bob : 1 fff,err : 2 bob,fff : 1 </code></pre>
1
2016-09-12T21:35:24Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,459,209
<p>As others mentioned, you can't use <code>next</code> on a line, which is a string. You can use <code>itertools.tee</code> to create two independent iterators from your file object, then use <code>collections.Counter</code> and <code>zip</code> to create a counter object from the pairs of lines:</p> <pre><code>from itertools import tee from collections import Counter with open('test.txt') as f: # f = (line.rstrip() for line in f) # if you don't want the trailing new lines f, ne = tee(f) next(ne) print(Counter(zip(f, ne))) </code></pre> <p>Note that the file object yields lines with a trailing newline; if you don't want that, you can strip the lines (see the commented-out generator expression above). </p>
0
2016-09-12T21:41:47Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,459,338
<pre><code>from collections import Counter with open(file, 'r') as f: content = f.readlines() result = Counter((a, b) for a, b in zip(content[0:-1], content[1:])) </code></pre> <p>That will be a dictionary whose keys are the line pairs (in order) and whose values are the number of times that pair occurred.</p>
3
2016-09-12T21:52:02Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,459,395
<p>As others said, <strong>line</strong> is a string and thus cannot be used with the <strong>next()</strong> method. Also you can't use a list as a key for the dictionary because lists are not hashable. You can use a tuple instead. A simple solution:</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} for i in range(len(content)-1): print(content[i], content[i+1]) try: dic[(content[i], content[i+1])] += 1 except KeyError: dic[(content[i], content[i+1])] = 1 </code></pre> <p>Also notice that by using <strong>readlines()</strong> you also keep the '\n' of each line. You might want to strip it off first:</p> <pre><code> content = [] with open(file,'r') as f: for line in f: content.append(line.strip('\n')) </code></pre>
1
2016-09-12T21:57:12Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,460,098
<p>You can use a 2 line <a href="https://docs.python.org/2.7/library/collections.html#collections.deque" rel="nofollow">deque</a> and a <a href="https://docs.python.org/2.7/library/collections.html#collections.Counter" rel="nofollow">Counter</a>:</p> <pre><code>from collections import Counter, deque lc=Counter() d=deque(maxlen=2) with open(fn) as f: d.append(next(f)) for line in f: d.append(line) lc+=Counter(["{},{}".format(*[e.rstrip() for e in d])]) &gt;&gt;&gt; lc Counter({'fff,err': 2, 'ddd,fff': 1, 'bob,fff': 1, 'aaa,bob': 1, 'err,ddd': 1}) </code></pre> <p>You can also use a <a href="https://regex101.com/r/uA0oG2/1" rel="nofollow">regex</a> with a capturing look ahead:</p> <pre><code>with open(fn) as f: lc=Counter((m.group(1)+','+m.group(2),) for m in re.finditer(r"(\w+)\n(?=(\w+))", f.read())) </code></pre>
1
2016-09-12T23:18:59Z
[ "python", "dictionary", "iterator", "generator" ]
TypeError: str object is not an iterator
39,459,121
<p>I have a file consisting of words, one word on each line. The file looks like this:</p> <pre><code>aaa bob fff err ddd fff err </code></pre> <p>I want to count the frequency of the pair of words which occur one after the other.</p> <p>For example,</p> <pre><code>aaa,bob: 1 bob,fff:1 fff,err:2 </code></pre> <p>and so on. I have tried this</p> <pre><code>f=open(file,'r') content=f.readlines() f.close() dic={} it=iter(content) for line in content: print line, next(line); dic.update({[line,next(line)]: 1}) </code></pre> <p>I got the error: </p> <pre><code>TypeError: str object is not an iterator </code></pre> <p>I then tried using an iterator:</p> <pre><code>it=iter(content) for x in it: print x, next(x); </code></pre> <p>Got the same error again. Please help!</p>
6
2016-09-12T21:32:36Z
39,460,918
<p>You just need to keep track of the previous line. A file object is its own iterator, so you don't need <em>iter</em> or <em>readlines</em> at all; call <em>next</em> once at the very start to create a variable <em>prev</em>, then just keep updating <em>prev</em> in the loop:</p> <pre><code>from collections import defaultdict d = defaultdict(int) with open("in.txt") as f: prev = next(f).strip() for line in map(str.strip,f): # python2 use itertools.imap d[prev, line] += 1 prev = line </code></pre> <p>Which would give you:</p> <pre><code>defaultdict(&lt;type 'int'&gt;, {('aaa', 'bob'): 1, ('fff', 'err'): 2, ('err', 'ddd'): 1, ('bob', 'fff'): 1, ('ddd', 'fff'): 1}) </code></pre>
6
2016-09-13T01:23:22Z
[ "python", "dictionary", "iterator", "generator" ]
How does one configure a proxy upstream of browsermob on osx?
39,459,171
<p>I'm looking to configure an upstream proxy for browsermob, preferably programmatically from within a python or shell script.</p> <p>It doesn't look like the python bindings for browsermob include an upstream-proxy configuration command or method. Is there another method I can use?</p>
1
2016-09-12T21:37:13Z
39,482,388
<p>The python bindings do actually allow you to configure an upstream proxy. When creating a proxy using <code>create_proxy</code>, you can set the value of <code>httpProxy</code> to the IP address and port of the upstream proxy (see the <a href="https://github.com/AutomatedTester/browsermob-proxy-py/blob/master/browsermobproxy/server.py#L30" rel="nofollow"><code>params</code> parameter on create_proxy</a> for details).</p>
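<p>A rough sketch of what that looks like from a script (the binary path and the upstream address are placeholders you would replace with your own; this assumes the <code>browsermob-proxy</code> Python package is installed):</p> <pre><code>from browsermobproxy import Server

server = Server("/path/to/browsermob-proxy/bin/browsermob-proxy")
server.start()

# httpProxy is passed through to BrowserMob as the upstream (chained) proxy
proxy = server.create_proxy(params={"httpProxy": "upstream.example.com:3128"})

# ... point your browser or requests at proxy.proxy (host:port) ...

server.stop()
</code></pre>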
1
2016-09-14T04:15:03Z
[ "python", "osx", "proxy", "browsermob" ]
Getting data from a site, which cant be found in main HTML file in Python
39,459,199
<p>I'm using Python and making a request: <code>page = requests.get('http://www.finam.ru/profile/moex-akcii/aeroflot/news/?start-date=2016-01-01&amp;end-date=2016-12-31',auth=('user', 'pass'))</code></p> <p>I expect that I will be able to find everything that I can see when I view the website. But as I don't know that for certain and am not familiar with the libraries, I try to check it manually:</p> <ol> <li>I right-click on a random part of the page and select "show the page code". And actually, I can not find the needed info there!</li> <li>I right-click on a random part of the page and select "research this element", and I CAN find it there in a weird "tree" structure : <a href="http://i.stack.imgur.com/Bm0cK.png" rel="nofollow"><img src="http://i.stack.imgur.com/Bm0cK.png" alt="research this item"></a></li> </ol> <p>So the question is, which HTML file do I receive when making the request? And how do I retrieve topic names from the "tree structure"? Total noob with HTML.</p>
2
2016-09-12T21:40:18Z
39,459,258
<p>Besides the source HTML, there is JavaScript code running on the web site which manipulates and changes the DOM (the tree structure that you describe). When you request the page via Python, the JavaScript does not run, so you only see the initial HTML. Scraping such pages requires a tool that drives a real browser, such as Selenium.</p>
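<p>A minimal sketch of fetching the rendered DOM with Selenium (assumes you have installed the <code>selenium</code> package and a matching browser driver, e.g. chromedriver; the URL is the one from your question):</p> <pre><code>from selenium import webdriver

driver = webdriver.Chrome()  # or webdriver.Firefox(), depending on your driver
driver.get('http://www.finam.ru/profile/moex-akcii/aeroflot/news/'
           '?start-date=2016-01-01&amp;end-date=2016-12-31')

html = driver.page_source  # HTML after the JavaScript has run
driver.quit()
</code></pre> <p>You can then feed <code>html</code> to a parser such as BeautifulSoup to pull out the topic names from the tree.</p>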
2
2016-09-12T21:45:34Z
[ "python", "html" ]
python pandas.Series.str.contains words with space
39,459,277
<p>I'm trying to find strings that contain either " internet ", " program ", " socket programming " in the pandas dataframe. </p> <pre><code>df.col_name.str.contains(" internet | program | socket programming ", case=False) </code></pre> <p>Is this right way to do so? or Do I need to escape space using \ and raw string?</p>
1
2016-09-12T21:47:20Z
39,459,419
<p>Here is a small demo:</p> <pre><code>In [250]: df Out[250]: txt 0 Internet 1 There is no Internet in this apartment 2 Program2 3 I am learning socket programming too In [251]: df.txt.str.contains(" internet | program | socket programming ", case=False) Out[251]: 0 False 1 True 2 False 3 True Name: txt, dtype: bool </code></pre> <p>If you want to "match" also the first row: <code>Internet</code>:</p> <pre><code>In [252]: df.txt.str.contains(r"\b(?:internet|program|socket\s+programming)\b", case=False) Out[252]: 0 True 1 True 2 False 3 True Name: txt, dtype: bool </code></pre>
3
2016-09-12T21:58:42Z
[ "python", "regex", "pandas", "dataframe" ]
Setting up a result backend (rpc) with Celery in Django
39,459,290
<p>I am attempting to get a result backend working on my local machine for a project I'm working on but I am running into an issue.</p> <p>Currently I am trying to create a queue system in order for my lab to create cases. This is to prevent duplicate sequence numbers from being used. I am already using Celery for our printing so I figured I would create a new Celery queue and use that to handle the case. The front-end also needs to get the results of the case creations to display the case number that was created.</p> <p><a href="http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#rabbitmq" rel="nofollow">http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#rabbitmq</a></p> <p>I was following the above tutorial on getting my Celery configured. Below is the source:</p> <p>celeryconfig.py:</p> <pre><code>from kombu import Queue CELERY_DEFAULT_QUEUE = 'celery' CELERY_DEFAULT_EXCHANGE = 'celery' CELERY_DEFAULT_EXCHANGE_TYPE = 'direct' CELERY_RESULT_BACKEND = 'rpc://' CELERY_RESULT_PERSISTENT = False CELERY_QUEUES = ( Queue('celery', routing_key="celery"), Queue('case_creation', routing_key='create.#') ) CELERY_ROUTES = { 'case.tasks.create_case': { 'queue': 'case_creation', 'routing_key': 'create.1' }, 'print.tasks.connect_and_serve': { 'queue': 'celery', 'routing_key': 'celery' } } </code></pre> <p>celery.py:</p> <pre><code>import os from celery import Celery from django.conf import settings os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings.local') app = Celery('proj', broker='amqp://guest@localhost//') app.config_from_object('proj.celeryconfig') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) </code></pre> <p>tasks.py:</p> <pre><code>import celery from django.db import IntegrityError from case.case_create import CaseCreate @celery.task(bind=True) def create_case(self, data, user, ip): try: acc = CaseCreate(data, user, ip) return acc.begin() except IntegrityError as e: self.retry(exc=e, countdown=2) </code></pre> <p>Here is my view that calls the above task:</p> <pre><code>@require_authentication() @requires_api_signature() @csrf_exempt @require_http_methods(['POST']) def api_create_case(request): result = create_case.delay(json.loads(request.body.decode('utf-8')), request.user, get_ip_address(request)) print(str(result)) # Prints the Task ID print(str(result.get(timeout=1))) # Throws error return HttpResponse(json.dumps({'result': str(result)}), status=200) </code></pre> <p>I start my celery queue with the following command:</p> <pre><code>celery -A proj worker -Q case_creation -n case_worker -c 1 </code></pre> <p>When I run the celery worker I do see results show up under config:</p> <pre><code> -------------- celery@case_worker v3.1.16 (Cipater) ---- **** ----- --- * *** * -- Windows-8-6.2.9200 -- * - **** --- - ** ---------- [config] - ** ---------- .&gt; app: proj:0x32a2990 - ** ---------- .&gt; transport: amqp://guest:**@localhost:5672// - ** ---------- .&gt; results: rpc:// - *** --- * --- .&gt; concurrency: 1 (prefork) -- ******* ---- --- ***** ----- [queues] -------------- .&gt; case_creation exchange=celery(direct) key=create.# </code></pre> <p>When I run the program and submit a new case this is the error message that I get:</p> <pre><code>No result backend configured. Please see the documentation for more information. </code></pre> <p>I have attempted every single thing I can find online. Is there anyone out there that can point me in the right direction? 
I'm so very close and so very tired of looking at this code.</p>
0
2016-09-12T21:48:12Z
39,460,999
<p>If you want to keep your result, see <a href="http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results" rel="nofollow">Keeping Results</a> and pass a backend when creating the app:</p> <pre><code>app = Celery('proj', backend='amqp', broker='amqp://guest@localhost//') </code></pre> <h3>EDIT</h3> <blockquote> <p>Make sure the client is configured with the right backend.</p> <p>If for some reason the client is configured to use a different backend than the worker, you will not be able to receive the result, so make sure the backend is correct by inspecting it:</p> </blockquote> <p>Try this to see the output:</p> <pre><code>&gt;&gt;&gt; result = task.delay(…) &gt;&gt;&gt; print(result.backend) </code></pre> <p>Another option: instead of</p> <pre><code>app = Celery('proj', backend='amqp', broker='amqp://', include=['proj.tasks']) </code></pre> <p>try:</p> <pre><code>app = Celery('proj', broker='amqp://', include=['proj.tasks']) app.conf.update( CELERY_RESULT_BACKEND='amqp' ) </code></pre>
1
2016-09-13T01:34:13Z
[ "python", "django", "celery" ]
When printing outputs an empty line appears before my outputs
39,459,312
<p>I have attempted to write a program which asks the user for a string and a number (on the same line) and then prints all possible combinations of the string up to the size of the number. The output format should be: all capitals, each combination on its own line, ordered by length of combination (shortest first) and then alphabetically.</p> <p>My code outputs the right combinations in the right order but it places an empty line before the outputs and I'm not sure why.</p> <pre><code>from itertools import combinations allcombo = [] S = input().strip() inputlist = S.split() k = int(inputlist[1]) S = inputlist[0] # for L in range(0, k+1): allcombo = [] for pos in combinations(S, L): pos = sorted(pos) pos = str(pos).translate({ord(c): None for c in "[]()', "}) allcombo.append(pos) allcombo = sorted(allcombo) print(*allcombo, sep = '\n') </code></pre> <p>Input:</p> <pre><code>HACK 2 </code></pre> <p>Output:</p> <pre><code>(Empty Line) A C H K AC AH AK CH CK HK </code></pre> <p>Also I've only been coding for about a week, so if anyone would like to show me how to write this properly, I'd be very pleased.</p>
1
2016-09-12T21:49:50Z
39,459,608
<p>Observe the line:</p> <pre><code>for L in range(0, k+1) # Notice that L is starting at 0. </code></pre> <p>Now, observe this line:</p> <pre><code>for pos in combinations(S, L): </code></pre> <p>So, we will have the following during the first iteration of the outer loop:</p> <pre><code>for pos in combinations(S, 0): # This yields a single empty combination on the first pass. </code></pre> <p><code>combinations(S, 0)</code> produces one empty tuple, which your code turns into an empty string, appends to <code>allcombo</code>, and prints as a blank line.</p> <p>Change the following code:</p> <pre><code>for L in range(0, k+1) </code></pre> <p>to this:</p> <pre><code>for L in range(1, k+1) # Skips the length-0 combination since L starts at 1. </code></pre> <p>and this will fix your problem.</p>
0
2016-09-12T22:17:37Z
[ "python", "combinations" ]
Have Pandas column containing lists, how to pivot unique list elements to columns?
39,459,321
<p>I wrote a web scraper to pull information from a table of products and build a dataframe. The data table has a Description column which contains a comma separated string of attributes describing the product. I want to create a column in the dataframe for every unique attribute and populate the row in that column with the attribute's substring. Example df below.</p> <pre><code>PRODUCTS DATE DESCRIPTION Product A 2016-9-12 Steel, Red, High Hardness Product B 2016-9-11 Blue, Lightweight, Steel Product C 2016-9-12 Red </code></pre> <p>I figure the first step is to split the description into a list.</p> <pre><code>In: df2 = df['DESCRIPTION'].str.split(',') Out: DESCRIPTION ['Steel', 'Red', 'High Hardness'] ['Blue', 'Lightweight', 'Steel'] ['Red'] </code></pre> <p>My desired output looks like the table below. The column names are not particularly important.</p> <pre><code>PRODUCTS DATE STEEL_COL RED_COL HIGH HARDNESS_COL BLUE COL LIGHTWEIGHT_COL Product A 2016-9-12 Steel Red High Hardness Product B 2016-9-11 Steel Blue Lightweight Product C 2016-9-12 Red </code></pre> <p>I believe the columns can be set up using a Pivot but I'm not sure the most Pythonic way to populate the columns after establishing them. Any help is appreciated.</p> <h2>UPDATE</h2> <p>Thank you very much for the answers. I selected @MaxU's response as correct since it seems slightly more flexible, but @piRSquared's gets a very similar result and may even be considered the more Pythonic approach. I tested both version and both do what I needed. Thanks!</p>
6
2016-09-12T21:50:30Z
39,459,769
<p>How about something that places an 'X' in the feature column if the product has that feature?</p> <p>The below creates a list of unique features ('Steel', 'Red', etc.), then creates a column for each feature in the original df. Then we iterate through each row and for each product feature, we place an 'X' in the cell.</p> <pre><code>ml = [item for l in df.DESCRIPTION for item in l] unique_list_of_attributes = list(set(ml)) # unique features list # place empty columns in original df for each feature df = pd.concat([df,pd.DataFrame(columns=unique_list_of_attributes)]).fillna(value='') # add 'X' in column if product has feature for row in df.iterrows(): for attribute in row[1]['DESCRIPTION']: df.loc[row[0],attribute] = 'X' </code></pre> <p>updated with example output:</p> <pre><code> PRODUCTS DATE DESCRIPTION Blue HighHardness \ 0 Product A 2016-9-12 [Steel, Red, HighHardness] X 1 Product B 2016-9-11 [Blue, Lightweight, Steel] X 2 Product C 2016-9-12 [Red] Lightweight Red Steel 0 X X 1 X X 2 X </code></pre>
0
2016-09-12T22:35:33Z
[ "python", "pandas", "numpy", "dataframe", "pivot" ]