Dataset columns:
title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
Replace Values in Dictionary
39,602,476
<p>I am trying to replace the <code>values</code> in the <code>lines dictionary</code> by the <code>values of the corresponding key</code> in the <code>points dictionary</code></p> <pre><code>lines = {'24': ('2', '10'), '25': ('17', '18')} #k,v =&gt; Line ID(Points) points = {'10': ('2.416067758476', '2.075872272548'), '17': ('2.454725131264', '5.000000000000'), '18': ('1.828299357105', '5.000000000000'), '2': ('3.541310767185', '2.774647545044')} #Point ID =&gt; (X,Y) i = 1 while i in range(len(lines)): for k, v in lines.iteritems(): for k1, v1 in points.iteritems(): for n, new in enumerate(v): if k1 == new: lines[k] = v1 i += 1 </code></pre> <p>The expected output is:</p> <pre><code>lines = {'24': ('2.416067758476', '2.075872272548', '3.541310767185', '2.774647545044'), '25': ('2.454725131264', '5.000000000000', '1.828299357105', '5.000000000000')} </code></pre> <p>The output from the above code</p> <pre><code>lines = {'24': ('3.541310767185', '2.774647545044'), '25': ('1.828299357105', '5.000000000000')} </code></pre> <p>If i create a list, then append that into the <code>lines[k]</code> i end up getting the all the values in the points list as that of each key element of lines. I think it is not looping out of the for loop properly. </p>
2
2016-09-20T19:37:41Z
39,602,647
<p>Just use a dictionary comprehension:</p> <pre><code>&gt;&gt;&gt; {k:points[v[0]]+points[v[1]] for k,v in lines.items()} {'24': ('3.541310767185', '2.774647545044', '2.416067758476', '2.075872272548'), '25': ('2.454725131264', '5.000000000000', '1.828299357105', '5.000000000000')} </code></pre>
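<p>Not part of the original answer: if a line can reference more than two point IDs, a hedged generalization of the same comprehension is to flatten the coordinates of every referenced point:</p> <pre><code>lines = {'24': ('2', '10'), '25': ('17', '18')}
points = {'10': ('2.416067758476', '2.075872272548'),
          '17': ('2.454725131264', '5.000000000000'),
          '18': ('1.828299357105', '5.000000000000'),
          '2': ('3.541310767185', '2.774647545044')}

# works for any number of point IDs per line
new_lines = {k: tuple(coord for pid in v for coord in points[pid])
             for k, v in lines.items()}
print(new_lines['24'])  # ('3.541310767185', '2.774647545044', '2.416067758476', '2.075872272548')
</code></pre>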
3
2016-09-20T19:49:03Z
[ "python", "python-2.7", "dictionary" ]
How to append lists of lists into one list
39,602,512
<p>Mylists = [['A B','A B C','A B C D'],['C','D E F','A B']]</p> <p>How can I get a list like: myli = [['A','B','A','B','C','A','B','C','D'],['C','D','E','F','A','B']]</p> <p>Here is my code:</p> <pre><code>x = [] myli = [] for item in Mylists: for i in range(len(item)): x.extend(item[i].split()) myli.append(x) </code></pre>
1
2016-09-20T19:40:35Z
39,602,606
<p>You need to reinitialize x inside the outer for loop.</p> <p>Your code</p> <pre><code>Mylists = [['A B','A B C','A B C D'],['C','D E F','A B']] x = [] myli = [] for item in Mylists: for i in range(len(item)): # BUG: since x is not reinitialized, it is still the same old # list object that was appended to myli # extending x means you are modifying the x that you previously appended to myli # in fact, every appended x is the same x ! x.extend(item[i].split()) # appending the same old x, which will inevitably be modified # in the next iteration myli.append(x) </code></pre> <p>Correction</p> <pre><code>Mylists = [['A B','A B C','A B C D'],['C','D E F','A B']] myli = [] for item in Mylists: x = [] # reinitialize x for i in range(len(item)): x.extend(item[i].split()) myli.append(x) </code></pre>
2
2016-09-20T19:46:26Z
[ "python", "list" ]
How to append lists of lists into one list
39,602,512
<p>Mylists = [['A B','A B C','A B C D'],['C','D E F','A B']]</p> <p>How can I get a list like: myli = [['A','B','A','B','C','A','B','C','D'],['C','D','E','F','A','B']]</p> <p>Here is my code:</p> <pre><code>x = [] myli = [] for item in Mylists: for i in range(len(item)): x.extend(item[i].split()) myli.append(x) </code></pre>
1
2016-09-20T19:40:35Z
39,602,642
<p>Assuming that <code>ml</code> is a list of lists of strings where words are separated by spaces:</p> <pre><code>ml = [['A B','A B C','A B C D'],['C','D E F','A B']] l = [] for inner in ml: newinner = [] for innerstr in inner: for word in innerstr.split(' '): newinner.append(word) l.append(newinner) </code></pre>
0
2016-09-20T19:48:47Z
[ "python", "list" ]
Select reverse Foreign Key in ModelChoicefield
39,602,579
<p>I have an order form where the user can choose one item and select a quantity. The price depends on how much is ordered. For example, each item is $10 if you order &lt;100, but $7 if you order 100-200.</p> <p>In the template, I want to display the pricing information underneath the form for each choice.</p> <p>These are my Models:</p> <pre><code>class Product(models.Model): name = models.TextField() class Price(models.Model): """ This lets us define rules such as: When ordering &lt;100 items, the price is $10 When ordering 100-200 items, the price is $7 When ordering 200-300 items, the price is $5 etc """ price = models.FloatField() min_quantity = models.PositiveIntegerField() max_quantity = models.PositiveIntegerField() product = models.ForeignKey(Product) class Order(models.Model): product = models.ForeignKey(Product, null=False, blank=False, default='') quantity = models.IntegerField() </code></pre> <p>I can loop over the form fields and the queryset independently:</p> <pre><code>{% for choice in form.product.field.queryset %} &lt;h1&gt;{{choice}} {{choice.price_set.all}}&lt;/h1&gt; {% endfor %} {% for choice in form.product %} &lt;h1&gt;{{ choice.tag }} {{ choice.choice_label }}&lt;/h1&gt; {% endfor %} </code></pre> <p>... but I don't know how to combine the loops to display the prices underneath the form fields.</p> <p>Essentially, I want to select a reverse foreign key from a ModelChoicefield widget. I either need to loop over both the form fields and the queryset at once or access elements in the queryset from the form element. Ideally, this is what I'd like to do in my template:</p> <pre><code>{% for choice in form.product %} &lt;h1&gt;{{ choice.tag }} {{ choice.choice_label }}&lt;/h1&gt; {% for price in choice.price_set.all %} &lt;h1&gt;{{price}} etc...&lt;/h1&gt; {% endfor %} {% endfor %} </code></pre> <p>Surely I'm not the first person with this use case. What's the best way to do this?</p> <p><strong>Edit:</strong> As requested, this is my form and my view. Reviewing this, I suppose I should have mentioned I was using the <code>RadioSelect</code> widget.</p> <p>Form:</p> <pre><code>class OrderForm(forms.ModelForm): class Meta: exclude = ['date_added'] widgets = { 'mailer': forms.RadioSelect } model = Order </code></pre> <p>View:</p> <pre><code>def processOrder(request): if request.method == 'POST': orderForm = OrderForm(request.POST) if orderForm.is_valid(): orderObject = orderForm.save() return render(request, TEMPLATE_PREFIX + "confirm.html", {"orderObject": orderObject}) else: return render(request, TEMPLATE_PREFIX + "register.html", { "form": orderForm }) else: return render(request, TEMPLATE_PREFIX + "register.html", { "form": OrderForm()}) </code></pre>
0
2016-09-20T19:45:01Z
39,604,904
<p>For (non)perfectionists with deadlines, this code works, albeit inefficiently.</p> <pre><code>{% for choice in form.product %} {% for price_candidate in form.mailer.field.queryset %} {% if price_candidate.id == choice.choice_value|add:0 %} &lt;h1&gt;{{ choice.tag }} {{ choice.choice_label }}&lt;/h1&gt; {% for price in price_candidate.price_set.all %} &lt;h1&gt;{{price}} etc...&lt;/h1&gt; {% endfor %} {% endif %} {% endfor %} {% endfor %} </code></pre> <p>(The <code>add:0</code> hack converts choice_value into an int. cf <a href="http://zurb.com/forrst/posts/Compare_string_and_integer_in_Django_templates-0Az" rel="nofollow">http://zurb.com/forrst/posts/Compare_string_and_integer_in_Django_templates-0Az</a> )</p>
0
2016-09-20T22:49:33Z
[ "python", "django", "django-templates" ]
Is there a way to share a link with only a specific mail recipient?
39,602,586
<p>Not sure if this question should come to SO, but here it goes.</p> <p>I have the following scenario:</p> <p>A Flask app with typical users that can login using username / password. Users can share some resources among them, but now we want to let them share those with anyone, not users of the app basically.</p> <p>Because the resources content is important, only the person that received the email should be able to access the resource. Not everyone with the link, in other words.</p> <p>What I've thought so far:</p> <ul> <li><p>Create a one-time link -> This could work, but I'd prefer if the link is permanent</p></li> <li><p>Add some Javascript in the HTML email message sent and add a parameter to the request sent so I can make sure the email address that opened the link was the correct one. This assuming that I can do that with Javascript...which is not clear to me. This will make the link permanent though.</p></li> </ul> <p>Any toughts? Thanks</p>
0
2016-09-20T19:45:31Z
39,603,344
<p>The first time someone accesses the URL, you could send them a random cookie, and save that cookie with the document. On future accesses, check if the cookie matches the saved cookie. If they share the URL with someone, that person won't have the cookie.</p> <p>Caveats:</p> <ol> <li><p>If they share the URL with someone else, and the other person goes to the URL first, <strong>they</strong> will be the one who can access it, not the original recipient.</p></li> <li><p>If the recipient clears cookies, they'll lose access to the document. You'll need a recovery procedure. This could send a new URL to the original email address.</p></li> </ol>
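<p>Not part of the original answer: a minimal Flask sketch of the cookie idea, assuming an in-memory store (a real app would persist the token alongside the shared resource) and invented route/field names:</p> <pre><code>import uuid
from flask import Flask, request, make_response, abort

app = Flask(__name__)
issued_tokens = {}  # resource_id -&gt; token handed out on first visit (stand-in for a DB column)

@app.route('/shared/&lt;resource_id&gt;')
def shared_resource(resource_id):
    saved = issued_tokens.get(resource_id)
    if saved is None:
        # first access: issue a random cookie and remember it for this resource
        token = uuid.uuid4().hex
        issued_tokens[resource_id] = token
        resp = make_response('resource content here')
        resp.set_cookie('share_token', token)
        return resp
    if request.cookies.get('share_token') != saved:
        abort(403)  # the link was opened by someone without the original cookie
    return 'resource content here'
</code></pre>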
2
2016-09-20T20:39:38Z
[ "javascript", "python", "email", "security" ]
Python, Pandas - Issue applying function to a column in a dataframe to replace only certain items
39,602,588
<p>I have a dictionary of abbreviations of some city names that our system (for some reason) applies to data (i.e. 'Kansas City' is abbreviated 'Kansas CY', and Oklahoma City is spelled correctly).</p> <p>I am having an issue getting my function to apply to the column of the dataframe but it works when I pass in strings of data. Code sample below:</p> <pre><code>def multiple_replace(text, dict): # Create a regular expression from the dictionary keys regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys()))) # For each match, look-up corresponding value in dictionary return regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], text) testDict = {"Kansas CY": "Kansas City"} dfData['PREV_CITY'] = dfData['PREV_CITY'].apply(multiple_replace, dict=testDict) </code></pre> <p>When I add 'axis=1' into that last line it errors out saying I provided too many args. Otherwise, it runs without error it just doesn't make the changes when there is a match to the dictionary.</p> <p>Thank you in advance! -Reece</p>
1
2016-09-20T19:45:32Z
39,602,922
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> and pass a dict to replace exact matches of the dict keys with the dict values. As the matching is case-sensitive, I'd <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.lower.html" rel="nofollow"><code>lower</code></a> all the strings prior to the match:</p> <pre><code>dfData['PREV_CITY'] = dfData['PREV_CITY'].str.lower().map(testDict, na_action='ignore') </code></pre> <p>This assumes the keys in your dict are also lower case.</p>
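<p>Not part of the original answer: a small, hedged illustration of the <code>map</code> approach with made-up data. Note that, unlike <code>replace</code>, <code>map</code> turns any city missing from the dict into <code>NaN</code>, so the dict needs a lower-case key for every city you want to keep.</p> <pre><code>import pandas as pd

# hypothetical sample data, not taken from the question
dfData = pd.DataFrame({'PREV_CITY': ['Kansas CY', 'Oklahoma City', None]})
testDict = {'kansas cy': 'Kansas City', 'oklahoma city': 'Oklahoma City'}

# lower-case first, then map exact matches; missing values stay NaN
dfData['PREV_CITY'] = dfData['PREV_CITY'].str.lower().map(testDict, na_action='ignore')
print(dfData['PREV_CITY'].tolist())  # ['Kansas City', 'Oklahoma City', nan]
</code></pre>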
2
2016-09-20T20:07:37Z
[ "python", "pandas", "data-munging" ]
Custom Actions triggered by Parsed values in Python
39,602,760
<p>I am looking to build or more preferably use a framework to implement custom Assertions in python. I am listing below potential input that will be parsed to trigger the various assertions on retrieved data </p> <pre><code>assertValue : [ SOME STRING A ] or assertValue : [ SOME STRING B ] or assertValue : [ SOME STRING C ] </code></pre> <p>So above, when the parsed value is <code>"SOME STRING A"</code>, I would want to trigger an appropriate assertion. For eg, if the provided value is <code>"NOT NULL"</code> I would assert that retrieved data does not have NULL values in it. </p> <p>The goal for the framework is to provide flexibility to add support for different Assertions and the implementations it will trigger. I tried searching for any existing frameworks that i can use for this purpose in Python. I only found event driven frameworks like "PyDispatcher" or "Django Dispatch" , which i thought is a bit over kill for this. </p> <p>Has anyone come across a similar need and used something successfully. Thanks much</p>
1
2016-09-20T19:56:37Z
39,603,335
<p>Maybe you're looking for something along the lines of one of <a href="https://github.com/cucumber/cucumber/wiki/Python" rel="nofollow">Cucumber's Python ports? </a></p>
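<p>To make the suggestion concrete, here is a hedged sketch using behave (one of those ports); the step text and the <code>context</code> attributes are invented for illustration, not taken from the question:</p> <pre><code># features/steps/assertions.py  (behave step definitions)
from behave import then

@then('the retrieved data is NOT NULL')
def assert_not_null(context):
    # context.rows is assumed to be set by an earlier step that fetched the data
    assert all(value is not None for value in context.rows)

@then('the retrieved data equals "{expected}"')
def assert_equals(context, expected):
    assert context.value == expected
</code></pre>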
0
2016-09-20T20:39:00Z
[ "python", "event-triggers" ]
Complex time operations in Pandas
39,602,785
<p>Below is a small sample of my very large dataframe:</p> <pre><code>In [38]: df Out[38]: Send_Customer Pay_Customer Send_Time 0 1000000000284044644 1000000000251680999 2016-08-01 09:55:48 1 2000000000223021617 1000000000190078650 2016-08-01 02:44:23 2 2000000000289301033 1000000000309048473 2016-08-01 09:20:14 3 1000000000333893941 1000000000333956151 2016-08-01 09:20:14 4 1000000000340371553 2000000000103942022 2016-08-01 09:20:14 5 2000000000098132192 2000000000089264458 2016-08-01 09:21:27 6 1000000000007716594 2000000000144437513 2016-08-01 09:20:54 7 1000000000135884145 1000000000278399847 2016-08-01 09:21:43 8 2000000000141318366 2000000000151080468 2016-08-01 09:20:46 9 1000000000056842546 2000000000139908360 2016-08-01 09:20:55 10 1000000000275051425 2000000000254558241 2016-08-01 09:20:17 11 1000000000162362467 1000000000340653197 2016-08-01 09:23:45 12 1000000000039529533 1000000000072903285 2016-08-01 09:22:56 13 1000000000034147075 2000000000079408765 2016-08-01 09:20:17 14 1000000000319501203 1000000000337830072 2016-08-01 09:20:20 15 1000000000025289495 2000000000287368163 2016-08-01 09:20:31 16 1000000000043110429 1000000000209850047 2016-08-01 09:22:33 </code></pre> <p>I need to find out, in a timespan of 10 hours, how many non-unique or unique <code>Pay_Customers</code> a <code>Send_Customer</code> has?</p> <p>So, here is the approach I am using:</p> <pre><code>In [39]: df['time_diff'] = df.groupby('Send_Customer')['Send_Time'].apply(lambda x : x.diff().abs()) In [41]: df[df['time_diff']&lt;=dt.timedelta(seconds=36000)] Out[41]: Send_Customer Pay_Customer Send_Time \ 4361 1000000000284044644 1000000000326834813 2016-08-01 14:32:17 7530 2000000000223021617 1000000000340199555 2016-08-01 04:49:41 10937 2000000000148219588 1000000000312697109 2016-08-01 04:49:40 12876 1000000000339947901 2000000000218218239 2016-08-01 14:51:51 13553 1000000000248905073 1000000000248729812 2016-08-01 16:44:35 14281 2000000000270573223 1000000000341120021 2016-08-01 09:35:11 time_diff 4361 00:10:37 7530 00:17:06 10937 01:09:45 12876 00:53:59 13553 01:12:17 14281 05:19:34 </code></pre> <p>This approach works partially as using <code>.diff()</code> on <code>['Send_Time']</code> eliminates the first row that was used to take the difference. Any thoughts on how to preserve such rows?</p>
4
2016-09-20T19:58:04Z
39,612,548
<p>If I understand correctly: after a <code>diff</code> the first row is <code>NaT</code>. In order to preserve the first row you could replace <code>NaT</code> values with something that would not be filtered out by your condition, for instance <code>0</code>.</p> <p>Here I simply add <code>.fillna(0)</code> at the end of your first line:</p> <pre><code>df['time_diff'] = df.groupby('Send_Customer')['Send_Time'].apply( lambda x : x.diff().abs() ).fillna(0) df[df['time_diff'] &lt;= dt.timedelta(seconds=36000)] </code></pre>
0
2016-09-21T09:26:03Z
[ "python", "pandas" ]
Python Break Inside Function
39,602,810
<p>I am using Python 3.5, and I would like to use the <code>break</code> command inside a function, but I do not know how. I would like to use something like this:</p> <pre><code>def stopIfZero(a): if int(a) == 0: break else: print('Continue') while True: stopIfZero(input('Number: ')) </code></pre> <p>I know that I could just use this code:</p> <pre><code>while True: a = int(input('Number: ')) if a == 0: break else: print('Continue') </code></pre> <p>And if you don't care about the <code>print('Continue')</code> part, you can even do this one-liner: <code>while a != 0: a = int(input('Number: '))</code>(as long as a was already assigned to something other than 0) However, I would like to use a function, because other times it could help a lot.</p> <p>Thanks for any help.</p>
2
2016-09-20T19:59:25Z
39,602,851
<p>Usually, this is done by returning some value that lets you decide whether or not you want to stop the while loop (i.e. whether some condition is true or false):</p> <pre><code>def stopIfZero(a): if int(a) == 0: return True else: print('Continue') return False while True: if stopIfZero(input('Number: ')): break </code></pre>
5
2016-09-20T20:02:18Z
[ "python", "function", "python-3.x", "break" ]
Python Break Inside Function
39,602,810
<p>I am using Python 3.5, and I would like to use the <code>break</code> command inside a function, but I do not know how. I would like to use something like this:</p> <pre><code>def stopIfZero(a): if int(a) == 0: break else: print('Continue') while True: stopIfZero(input('Number: ')) </code></pre> <p>I know that I could just use this code:</p> <pre><code>while True: a = int(input('Number: ')) if a == 0: break else: print('Continue') </code></pre> <p>And if you don't care about the <code>print('Continue')</code> part, you can even do this one-liner: <code>while a != 0: a = int(input('Number: '))</code>(as long as a was already assigned to something other than 0) However, I would like to use a function, because other times it could help a lot.</p> <p>Thanks for any help.</p>
2
2016-09-20T19:59:25Z
39,602,853
<p>A function can't <code>break</code> on behalf of its caller. The <code>break</code> has to be syntactically inside the loop.</p>
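<p>Not in the original answer, but worth seeing once: putting <code>break</code> in a function body that is not itself inside a loop fails at compile time, before the function is ever called:</p> <pre><code>def stopIfZero(a):
    if int(a) == 0:
        break  # SyntaxError: 'break' outside loop

# The error is raised when Python parses the file, not when stopIfZero runs.
</code></pre>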
3
2016-09-20T20:02:24Z
[ "python", "function", "python-3.x", "break" ]
Python Break Inside Function
39,602,810
<p>I am using Python 3.5, and I would like to use the <code>break</code> command inside a function, but I do not know how. I would like to use something like this:</p> <pre><code>def stopIfZero(a): if int(a) == 0: break else: print('Continue') while True: stopIfZero(input('Number: ')) </code></pre> <p>I know that I could just use this code:</p> <pre><code>while True: a = int(input('Number: ')) if a == 0: break else: print('Continue') </code></pre> <p>And if you don't care about the <code>print('Continue')</code> part, you can even do this one-liner: <code>while a != 0: a = int(input('Number: '))</code>(as long as a was already assigned to something other than 0) However, I would like to use a function, because other times it could help a lot.</p> <p>Thanks for any help.</p>
2
2016-09-20T19:59:25Z
39,602,864
<p>You want to use <code>return</code>, not <code>break</code>. </p> <p><a href="https://docs.python.org/3/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops" rel="nofollow"><code>break</code></a> is used to stop a loop. </p> <blockquote> <p>The break statement, like in C, breaks out of the smallest enclosing for or while loop.</p> </blockquote> <p><a href="https://docs.python.org/3/reference/simple_stmts.html#return" rel="nofollow"><code>return</code></a> is used to exit the function and return a value. You can also return <code>None</code>. </p>
0
2016-09-20T20:03:08Z
[ "python", "function", "python-3.x", "break" ]
Python Break Inside Function
39,602,810
<p>I am using Python 3.5, and I would like to use the <code>break</code> command inside a function, but I do not know how. I would like to use something like this:</p> <pre><code>def stopIfZero(a): if int(a) == 0: break else: print('Continue') while True: stopIfZero(input('Number: ')) </code></pre> <p>I know that I could just use this code:</p> <pre><code>while True: a = int(input('Number: ')) if a == 0: break else: print('Continue') </code></pre> <p>And if you don't care about the <code>print('Continue')</code> part, you can even do this one-liner: <code>while a != 0: a = int(input('Number: '))</code>(as long as a was already assigned to something other than 0) However, I would like to use a function, because other times it could help a lot.</p> <p>Thanks for any help.</p>
2
2016-09-20T19:59:25Z
39,602,910
<p>The <code>break</code> keyword is meant to be used as in your loop example only and must be inside the loop's scope.</p> <p>Update your function to return <code>True</code> or <code>False</code> only instead:</p> <pre><code>def is_zero(value): if value == 0: return True return False </code></pre> <p>Then in your loop, simply use like this:</p> <pre><code>while True: try: value = int(input("Enter a value (0 to exit): ")) except ValueError: # user entered a non-numeric value (e.g. a letter) # try asking again immediately instead of crashing :) print('Use numbers only, please') continue if is_zero(value): break print('Good entry. Continuing...') </code></pre> <p>Your <code>stopIfZero</code> function is also somewhat overloaded (i.e. has more than just 1 responsibility) because it does more than just check if a value is zero or not. For example, it:</p> <ol> <li>converts the arguments into an <code>int</code>,</li> <li>prints a message, and</li> <li>is not re-usable in other parts of your program</li> </ol> <p>Take those reasons into consideration to improve your code.</p>
0
2016-09-20T20:06:46Z
[ "python", "function", "python-3.x", "break" ]
pandas: replace string with another string
39,602,824
<p>I have the following data frame</p> <pre><code> prod_type 0 responsive 1 responsive 2 respon 3 r 4 respon 5 r 6 responsive </code></pre> <p>I would like to replace <code>respon</code> and <code>r</code> with <code>responsive</code>, so the final data frame is</p> <pre><code> prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre> <p>I tried the following but it did not work:</p> <pre><code>df['prod_type'] = df['prod_type'].replace({'respon' : 'responsvie'}, regex=True) df['prod_type'] = df['prod_type'].replace({'r' : 'responsive'}, regex=True) </code></pre>
4
2016-09-20T20:00:39Z
39,602,862
<p>You don't need to pass <code>regex=True</code> here, as this will look for partial matches. Since you're after exact matches, just pass the params as separate args:</p> <pre><code>In [7]: df['prod_type'] = df['prod_type'].replace('respon' ,'responsvie') df['prod_type'] = df['prod_type'].replace('r', 'responsive') df Out[7]: prod_type 0 responsive 1 responsive 2 responsvie 3 responsive 4 responsvie 5 responsive 6 responsive </code></pre>
2
2016-09-20T20:03:06Z
[ "python", "string", "python-2.7", "pandas", "replace" ]
pandas: replace string with another string
39,602,824
<p>I have the following data frame</p> <pre><code> prod_type 0 responsive 1 responsive 2 respon 3 r 4 respon 5 r 6 responsive </code></pre> <p>I would like to replace <code>respon</code> and <code>r</code> with <code>responsive</code>, so the final data frame is</p> <pre><code> prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre> <p>I tried the following but it did not work:</p> <pre><code>df['prod_type'] = df['prod_type'].replace({'respon' : 'responsvie'}, regex=True) df['prod_type'] = df['prod_type'].replace({'r' : 'responsive'}, regex=True) </code></pre>
4
2016-09-20T20:00:39Z
39,602,902
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow"><code>replace</code></a> by <code>dictionary</code>:</p> <pre><code>df['prod_type'] = df['prod_type'].replace({'respon':'responsive', 'r':'responsive'}) print (df) prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre> <p>If need set all values in column to some <code>string</code>:</p> <pre><code>df['prod_type'] = 'responsive' </code></pre>
3
2016-09-20T20:06:18Z
[ "python", "string", "python-2.7", "pandas", "replace" ]
pandas: replace string with another string
39,602,824
<p>I have the following data frame</p> <pre><code> prod_type 0 responsive 1 responsive 2 respon 3 r 4 respon 5 r 6 responsive </code></pre> <p>I would like to replace <code>respon</code> and <code>r</code> with <code>responsive</code>, so the final data frame is</p> <pre><code> prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre> <p>I tried the following but it did not work:</p> <pre><code>df['prod_type'] = df['prod_type'].replace({'respon' : 'responsvie'}, regex=True) df['prod_type'] = df['prod_type'].replace({'r' : 'responsive'}, regex=True) </code></pre>
4
2016-09-20T20:00:39Z
39,603,224
<p>Other solution in case all items from <code>df['prod_type']</code> will be the same:</p> <pre><code>df['prod_type'] = ['responsive' for item in df['prod_type']] In[0]: df Out[0]: prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre>
1
2016-09-20T20:30:55Z
[ "python", "string", "python-2.7", "pandas", "replace" ]
comparing object with repr() is throwing NameError
39,602,833
<p>So I'm trying to compare a Deck object with the evaluated representation of a Deck object and getting</p> <pre><code>Traceback (most recent call last): File "C:/Users/Philipp/PycharmProjects/fnaround/src.py", line 3, in &lt;module&gt; print(Deck() == eval(repr(Deck()))) File "&lt;string&gt;", line 1, in &lt;module&gt; NameError: name 'Card' is not defined </code></pre> <p>I can not figure out what it is as I also have overridden the <code>__repr__</code> method in other classes and it works fine. I think it has something to do with it jumping from the Deck class to the Card class but I'm not sure. Can someone explain to me how the program is moving through the classes and how to fix the error. Thanks </p> <pre><code>class Deck: suits = ['\u2660', '\u2661', '\u2662', '\u2663'] ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A'] def __init__(self): self.deck = [] for suit in self.suits: for rank in self.ranks: self.deck.append(Card(rank, suit)) def __repr__(self): return 'Deck({})'.format(self.deck) def __eq__(self, other): return self.deck == other.deck class Card: def __init__(self, rank, suit): self.rank = rank self.suit = suit def __repr__(self): return "Card('{}', '{}')".format(self.rank, self.suit) def __eq__(self, other): return self.rank == other.rank and self.suit == other.suit print(Deck() == eval(repr(Deck()))) </code></pre>
-1
2016-09-20T20:01:08Z
39,603,269
<p>Your Deck class takes no arguments, but your representation passes a list of cards as argument. You can change your <code>Deck.__init__</code> to something like this:</p> <pre><code>def __init__(self, deck=None): if deck: self.deck = deck else: self.deck = [Card(r, s) for r in self.ranks for s in self.suits] </code></pre> <p>And the final comparison should also work as you expect.</p>
0
2016-09-20T20:34:16Z
[ "python", "class", "eval", "repr" ]
comparing object with repr() is throwing NameError
39,602,833
<p>So I'm trying to compare a Deck object with the evaluated representation of a Deck object and getting</p> <pre><code>Traceback (most recent call last): File "C:/Users/Philipp/PycharmProjects/fnaround/src.py", line 3, in &lt;module&gt; print(Deck() == eval(repr(Deck()))) File "&lt;string&gt;", line 1, in &lt;module&gt; NameError: name 'Card' is not defined </code></pre> <p>I can not figure out what it is as I also have overridden the <code>__repr__</code> method in other classes and it works fine. I think it has something to do with it jumping from the Deck class to the Card class but I'm not sure. Can someone explain to me how the program is moving through the classes and how to fix the error. Thanks </p> <pre><code>class Deck: suits = ['\u2660', '\u2661', '\u2662', '\u2663'] ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A'] def __init__(self): self.deck = [] for suit in self.suits: for rank in self.ranks: self.deck.append(Card(rank, suit)) def __repr__(self): return 'Deck({})'.format(self.deck) def __eq__(self, other): return self.deck == other.deck class Card: def __init__(self, rank, suit): self.rank = rank self.suit = suit def __repr__(self): return "Card('{}', '{}')".format(self.rank, self.suit) def __eq__(self, other): return self.rank == other.rank and self.suit == other.suit print(Deck() == eval(repr(Deck()))) </code></pre>
-1
2016-09-20T20:01:08Z
39,603,296
<p>I get this output:</p> <pre><code>deck.py", line 32, in &lt;module&gt; print(Deck() == eval(repr(Deck()))) File "&lt;string&gt;", line 1, in &lt;module&gt; TypeError: __init__() takes 1 positional argument but 2 were given &lt;&lt;&lt; Process finished. (Exit code 1) </code></pre> <p>Your repr is trying to pass a second argument to <code>Deck.__init__()</code>. It only takes one. This works:</p> <pre><code>def __repr__(self): return 'Deck()' </code></pre>
0
2016-09-20T20:36:01Z
[ "python", "class", "eval", "repr" ]
comparing object with repr() is throwing NameError
39,602,833
<p>So I'm trying to compare a Deck object with the evaluated representation of a Deck object and getting</p> <pre><code>Traceback (most recent call last): File "C:/Users/Philipp/PycharmProjects/fnaround/src.py", line 3, in &lt;module&gt; print(Deck() == eval(repr(Deck()))) File "&lt;string&gt;", line 1, in &lt;module&gt; NameError: name 'Card' is not defined </code></pre> <p>I can not figure out what it is as I also have overridden the <code>__repr__</code> method in other classes and it works fine. I think it has something to do with it jumping from the Deck class to the Card class but I'm not sure. Can someone explain to me how the program is moving through the classes and how to fix the error. Thanks </p> <pre><code>class Deck: suits = ['\u2660', '\u2661', '\u2662', '\u2663'] ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A'] def __init__(self): self.deck = [] for suit in self.suits: for rank in self.ranks: self.deck.append(Card(rank, suit)) def __repr__(self): return 'Deck({})'.format(self.deck) def __eq__(self, other): return self.deck == other.deck class Card: def __init__(self, rank, suit): self.rank = rank self.suit = suit def __repr__(self): return "Card('{}', '{}')".format(self.rank, self.suit) def __eq__(self, other): return self.rank == other.rank and self.suit == other.suit print(Deck() == eval(repr(Deck()))) </code></pre>
-1
2016-09-20T20:01:08Z
39,603,520
<p>The current issue (causing the <code>NameError</code>) is that when you're running the <code>eval</code> on a <code>Deck</code> instance's <code>repr</code>, you don't have the name <code>Card</code> in the local namespace. This means that when <code>eval</code> tries to interpret the string that <code>repr(Deck())</code> returns, it fails when it comes across the first <code>Card</code>.</p> <p>You can fix this part of the problem simply by importing <code>Card</code> as well as <code>Deck</code> into the namespace where you're running the other code:</p> <pre><code>from deck_module import Deck # you've not said what your module names are, from card_module import Card # so I'm making these up eval(repr(Deck())) # raises a TypeError, but not a NameError any more </code></pre> <p>Adding the <code>import</code> of the <code>Card</code> class doesn't make the code work (due to shortcomings of the <code>Deck</code> constructor), but at least it will get you past the <code>NameError</code>.</p>
0
2016-09-20T20:51:59Z
[ "python", "class", "eval", "repr" ]
How to find whether a file is an image or a document or ... without an extension or content type?
39,602,869
<h1>File Manager</h1> <p>I want uploading any file and i have a file manager service that get the file and saving without extension and files name are UUID and return file information.</p> <p>my file manager handler :</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- import json from pyramid_storage.exceptions import FileNotAllowed import uuid from pyramid.view import view_config from pyramid.response import Response import os class UploadHandler: def __init__(self, request): self.request = request self.settings = self.request.registry.settings @view_config(route_name='upload', request_method='POST', renderer='json') def post(self): # file f = self.request.POST.items() # file name file_name_main = f[0][1].filename # content type content_type = str(f[0][1].type) if content_type: extension_main = content_type.split('.')[-1] # set id for file name if extension_main: f[0][1].filename = str(uuid.uuid4()) else: response = Response(body=json.dumps({'ERROR': 'Your File Not Allowed'})) response.headers.update({ 'Access-Control-Allow-Origin': self.settings['Access-Control-Allow-Origin'], }) return response else: response = Response(body=json.dumps({'ERROR': 'Your File Not Allowed'})) response.headers.update({ 'Access-Control-Allow-Origin': self.settings['Access-Control-Allow-Origin'], }) return response try: # save file file_name = str(self.request.storage.save(f[0][1])) except FileNotAllowed: response = Response(body=json.dumps({'ERROR': 'Your File Not Allowed'})) response.headers.update({ 'Access-Control-Allow-Origin': self.settings['Access-Control-Allow-Origin'], }) return response # file name == file id f_name = file_name file_path = self.request.storage.base_path + os.sep + f_name file_size = os.path.getsize(file_path) response = Response(body=json.dumps( {'file_id': f_name, 'file_name': file_name_main, 'content_type': content_type, 'size': file_size, 'extension': extension_main})) response.headers.update({ 'Access-Control-Allow-Origin': self.settings['Access-Control-Allow-Origin'], }) return response </code></pre>
2
2016-09-20T20:03:42Z
39,602,941
<p>There's a UNIX utility called <code>file</code> that uses "magic" to recognize known file types. <code>file</code> uses a library called <code>libmagic</code> for this purpose.</p> <p>The python interface to <code>libmagic</code> is called <code>filemagic</code> and you can get it <a href="https://pypi.python.org/pypi/filemagic/" rel="nofollow">here</a>.</p>
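<p>Not part of the original answer: a hedged sketch of checking an uploaded UUID-named file with the <code>filemagic</code> package. The class, method and flag names are taken from the filemagic docs as I recall them; verify them against the version you install. The path is invented for illustration.</p> <pre><code>import magic  # provided by the 'filemagic' package

def detect_mime(path):
    with magic.Magic(flags=magic.MAGIC_MIME_TYPE) as m:
        return m.id_filename(path)  # e.g. 'image/jpeg' even with no extension

mime = detect_mime('/uploads/3f2b8c1e-uuid')  # hypothetical stored path
if mime.startswith('image/'):
    print('stored file is an image')
</code></pre>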
1
2016-09-20T20:08:41Z
[ "python", "file-upload" ]
Django form. How to hide the colon from initial_text?
39,602,903
<p>I'm trying to do this:</p> <pre> class NoClearableFileInput(ClearableFileInput): initial_text = '' input_text = '' class ImageUploadForm(forms.ModelForm): title = forms.CharField(label="TITLE", required=False,widget=forms.TextInput(attrs={'placeholder': 'name'}), label_suffix="") image = forms.ImageField(label='NEW FILE',widget=NoClearableFileInput, label_suffix="") class Meta: model = Image fields = ('title','image') </pre> <p>In the NoClearableFileInput class I cleared the initial_text value. The fields 'title' and 'image' use label_suffix, but the ":" symbol from initial_text remains.</p> <p><a href="http://i.stack.imgur.com/Hdxmq.png" rel="nofollow">result</a></p> <p>How do I get rid of it?</p>
0
2016-09-20T20:06:26Z
39,604,248
<p>You have to override the <code>label_suffix</code> on initialization. Try making the following changes:</p> <pre><code>class ImageUploadForm(forms.ModelForm): def __init__(self, *args, **kwargs): kwargs.setdefault('label_suffix', '') super(ImageUploadForm, self).__init__(*args, **kwargs) # ... (the rest of your code) ... </code></pre>
0
2016-09-20T21:49:02Z
[ "python", "django", "forms" ]
Unable to load pickle file
39,602,907
<p>I was previously able to load a pickle file. I saved a new file under a different name. I am unable to load either the old or the new file. Which is a bummer as it contains data which I have worked hard to scrub.</p> <p>Here is the code that I use to save:</p> <pre><code>def pickleStore(): pickle.dump(store, open("...shelf3.p", "wb")) </code></pre> <p>Here is the code that I use to re-load:</p> <pre><code>def pickleLoad(): store = pickle.load(open(".../shelf3.p","rb" ) ) </code></pre> <p>The created file exists, and I run pickleLoad() no errors come up, neither does any variables show in the panel variable explorer. If I try to load a non-existent file, I get a error message. </p> <p>I am running spyder, python 3.5.</p> <p>Any suggestions? </p>
1
2016-09-20T20:06:39Z
39,602,992
<p>As a general and more versatile approach I would suggest something like this:</p> <pre><code>def load(file_name): with open(file_name, 'rb') as pickle_file: return pickle.load(pickle_file) def save(file_name, data): with open(file_name, 'wb') as f: pickle.dump(data, f) </code></pre> <p>I have added this snippet to several projects in order to avoid rewriting the same code several times.</p>
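<p>Usage would look like this (not from the original answer; the path is the one from the question, and the loaded object still has to be bound to a name where you need it):</p> <pre><code>import pickle

save('.../shelf3.p', store)   # after scrubbing the data
store = load('.../shelf3.p')  # on the next run
</code></pre>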
1
2016-09-20T20:12:18Z
[ "python", "pickle" ]
Unable to load pickle file
39,602,907
<p>I was previously able to load a pickle file. I saved a new file under a different name. I am unable to load either the old or the new file. Which is a bummer as it contains data which I have worked hard to scrub.</p> <p>Here is the code that I use to save:</p> <pre><code>def pickleStore(): pickle.dump(store, open("...shelf3.p", "wb")) </code></pre> <p>Here is the code that I use to re-load:</p> <pre><code>def pickleLoad(): store = pickle.load(open(".../shelf3.p","rb" ) ) </code></pre> <p>The created file exists, and I run pickleLoad() no errors come up, neither does any variables show in the panel variable explorer. If I try to load a non-existent file, I get a error message. </p> <p>I am running spyder, python 3.5.</p> <p>Any suggestions? </p>
1
2016-09-20T20:06:39Z
39,603,293
<p>If you want to write to a module-level variable from a function, you need to use the <code>global</code> keyword:</p> <pre><code>store = None def pickleLoad(): global store store = pickle.load(open(".../shelf3.p","rb" ) ) </code></pre> <p>...or return the value and perform the assignment from module-level code:</p> <pre><code>store = None def pickleLoad(): return pickle.load(open(".../shelf3.p","rb" ) ) store = pickleLoad() </code></pre>
2
2016-09-20T20:35:49Z
[ "python", "pickle" ]
How to display specific parts of json?
39,602,939
<p>Can someone help me with this python api calling program? </p> <pre><code>import json from pprint import pprint import requests weather = requests.get('http://api.openweathermap.org/data/2.5/weather? q=London&amp;APPID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') pprint(weather.json()) wjson = weather.read() wjdata = json.load(weather) print (wjdata['temp_max']) </code></pre> <p>So with this piece of code I'm trying to get information from the weather api it prints it properly but when I want to select certain values only I get this error.</p> <pre><code>Traceback (most recent call last): File "gawwad.py", line 7, in &lt;module&gt; wjson = weather.read() AttributeError: 'Response' object has no attribute 'read' </code></pre>
1
2016-09-20T20:08:38Z
39,602,964
<p><a href="http://docs.python-requests.org/en/master/user/quickstart/#json-response-content"><code>.json()</code></a> is a built into <code>requests</code> JSON decoder, no need to parse JSON separately:</p> <pre><code>import requests weather = requests.get('http://api.openweathermap.org/data/2.5/weather?q=London&amp;APPID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') wjdata = weather.json() print (wjdata['temp_max']) </code></pre>
5
2016-09-20T20:10:13Z
[ "python", "json", "api" ]
How to display specific parts of json?
39,602,939
<p>Can someone help me with this python api calling program? </p> <pre><code>import json from pprint import pprint import requests weather = requests.get('http://api.openweathermap.org/data/2.5/weather? q=London&amp;APPID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') pprint(weather.json()) wjson = weather.read() wjdata = json.load(weather) print (wjdata['temp_max']) </code></pre> <p>So with this piece of code I'm trying to get information from the weather api it prints it properly but when I want to select certain values only I get this error.</p> <pre><code>Traceback (most recent call last): File "gawwad.py", line 7, in &lt;module&gt; wjson = weather.read() AttributeError: 'Response' object has no attribute 'read' </code></pre>
1
2016-09-20T20:08:38Z
39,604,118
<p>alecxe is correct about using the JSON decoder. If you want to know more about referencing specific values in a JSON object, look into Python's dictionary data type; that is what the decoded JSON essentially becomes.</p> <p><a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow">https://docs.python.org/3/tutorial/datastructures.html#dictionaries</a></p> <p>The link above will show you how to use them. Dictionaries are 'key-value' data structures where keys are unique and must be hashable, and values can be of any type.</p> <p>Quick Python dictionary example: </p> <pre><code>d = {'keyA': 'valueA', 'keyB': 'valueB'} # To reference a value, use the key! print(d['keyA']) # To add something d['keyC'] = 'valueC' # now my dictionary looks like this: # {'keyA': 'valueA', 'keyB': 'valueB', 'keyC': 'valueC'} </code></pre> <p>The print statement will output 'valueA'.</p>
0
2016-09-20T21:36:14Z
[ "python", "json", "api" ]
Using more memory than available
39,602,962
<p>I have written a program that expands a database of prime numbers. This program is written in python and runs on windows 10 (x64) with 8GB RAM.</p> <p>The program stores all primes it has found in a <code>list</code> of <code>integers</code> for further calculations and uses approximately <code>6-7GB</code> of RAM while running. During some runs however, this figure has dropped to below <code>100MB</code>. The memory usage then stays low for the duration of the run, though increasing as expected as more numbers are added to the prime array. Note that not all runs result in a memory drop.</p> <p><em>Memory usage measured with task manager</em></p> <p>These, seemingly random, drops has led me the following theories:</p> <ol> <li>There's a bug in my code, making it drop critical data and messing up the results (most likely but not supported by the results)</li> <li>Python just happens to optimize my code extremely well after a while.</li> <li>Python or Windows is compensating for my over-usage of the RAM by cleaning out portions of my prime-number array that aren't used that much. (eventually resulting in incorrect calculations)</li> <li>Python or Windows is compensating for my over-usage of the RAM by allocating disk space instead of ram.</li> </ol> <h1>Questions</h1> <ol> <li>What could be the reason(s) for this memory drop?</li> <li>How does python handle programs that use more than available RAM?</li> <li>How does Windows handle programs that use more than available RAM?</li> </ol>
2
2016-09-20T20:09:55Z
39,603,005
<p>1, 2, and 3 are incorrect theories.</p> <p>4 is correct. Windows (not Python) is moving some of your process memory to swap space. This is almost totally transparent to your application - you don't need to do anything special to respond to or handle this situation. The only thing you will notice is your application may get slower as information is written to and read from disk. But it all happens transparently. See <a href="https://en.wikipedia.org/wiki/Virtual_memory" rel="nofollow">https://en.wikipedia.org/wiki/Virtual_memory</a> for more information.</p>
3
2016-09-20T20:12:59Z
[ "python", "windows", "memory", "memory-management" ]
Using more memory than available
39,602,962
<p>I have written a program that expands a database of prime numbers. This program is written in python and runs on windows 10 (x64) with 8GB RAM.</p> <p>The program stores all primes it has found in a <code>list</code> of <code>integers</code> for further calculations and uses approximately <code>6-7GB</code> of RAM while running. During some runs however, this figure has dropped to below <code>100MB</code>. The memory usage then stays low for the duration of the run, though increasing as expected as more numbers are added to the prime array. Note that not all runs result in a memory drop.</p> <p><em>Memory usage measured with task manager</em></p> <p>These, seemingly random, drops has led me the following theories:</p> <ol> <li>There's a bug in my code, making it drop critical data and messing up the results (most likely but not supported by the results)</li> <li>Python just happens to optimize my code extremely well after a while.</li> <li>Python or Windows is compensating for my over-usage of the RAM by cleaning out portions of my prime-number array that aren't used that much. (eventually resulting in incorrect calculations)</li> <li>Python or Windows is compensating for my over-usage of the RAM by allocating disk space instead of ram.</li> </ol> <h1>Questions</h1> <ol> <li>What could be the reason(s) for this memory drop?</li> <li>How does python handle programs that use more than available RAM?</li> <li>How does Windows handle programs that use more than available RAM?</li> </ol>
2
2016-09-20T20:09:55Z
39,603,049
<p>Have you heard of paging? Windows dumps some RAM (that hasn't been used in a while) to your hard drive to keep your computer from running out of RAM and ultimately crashing.</p> <p>Only Windows deals with the memory management here, not Python. Additionally, if you use Windows 10, it will also compress your memory, somewhat like a zip file.</p>
0
2016-09-20T20:16:18Z
[ "python", "windows", "memory", "memory-management" ]
Pyjnius, Facebook/Google SDK for Sign in Button with Kivy
39,603,112
<p>So I have been wondering: can this be implemented with Kivy?</p> <p>I have read <a href="https://kivy.org/planet/2013/08/using-facebook-sdk-with-python-for-android-kivy/" rel="nofollow">this post on Kivy Planet</a> about it, but noticed that the login button was a problem. The article is from 2013, so I would like to know the current situation regarding Facebook and Google SDKs and Kivy integration with Pyjnius.</p> <p>First of all, I'd like to know how to install the SDKs without Android Studio, if possible. Then, how to properly integrate them with my app.</p> <p>Hope someone can shed some light on the matter.</p>
0
2016-09-20T20:21:09Z
39,603,276
<p>A Facebook/Google sign in can be implemented with Kivy. You do not necessarily have to use the Google or Facebook SDK, there are other auth libraries available in Python.</p> <p>If you are to use the Facebook/Google SDKs, it is still possible as you can execute any Java code with Pyjnius. Per your question about installing 'SDKs' without Android studio, I'm not sure what you mean, but you may specify dependencies in your Buildozer dependency file.</p>
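<p>Not part of the original answer: a hedged sketch of the Pyjnius pattern for calling Java from a Kivy app. The class paths below are illustrative (they assume the current python-for-android toolchain, not the Facebook/Google SDKs themselves); SDK classes would be loaded the same way once their jars/aars are declared as Buildozer Android dependencies.</p> <pre><code>from jnius import autoclass

# Grab the running Android activity (path assumes the newer p4a toolchain;
# older toolchains used org.renpy.android.PythonActivity instead).
PythonActivity = autoclass('org.kivy.android.PythonActivity')
current_activity = PythonActivity.mActivity

# Any Java class on the classpath can be loaded the same way, e.g.:
Intent = autoclass('android.content.Intent')
</code></pre>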
0
2016-09-20T20:34:36Z
[ "android", "python", "facebook", "kivy", "pyjnius" ]
Having issues with optimizing query on foreign key
39,603,206
<p>I have tried different ways of querying my data, but still results in a large amount of pings to the DB. </p> <p>I have tried using <code>select_related</code>.</p> <p>Here are my models:</p> <pre><code>class Order(models.Model): num = models.CharField(max_length=50, unique=True) class OrderInfo(models.Model): info = models.CharField(max_length=100, unique=True) datetime = models.DateTimeField(auto_now_add=True, blank=True) order_fk = models.ForeignKey(Order) </code></pre> <p><strong>What I am trying to achieve:</strong></p> <p><code>OrderInfo</code> has a bunch of information pertaining to the <code>Order</code>.</p> <p>What I want is to be able to get the most recent <code>OrderInfo</code> from the DB, but I want it unique to <code>Order</code>. The unique part is where I am struggling to minimize my query amount.</p> <pre><code>ois = OrderInfo.objects.order_by('-datetime').select_related('order_fk') </code></pre> <p>When I try to filter it is doing a query on each Order to check for uniqueness the queries ramp up.</p> <pre><code>For instances: _ = [oi.order_fk for oi in ois] # queries reach to 20k, takes too long. </code></pre> <p>Also, then I just need to limit how many responses I get, but I need to know how many unique <code>Orders</code> there are first in order to limit it. </p> <p>Anyone, know a proper approach to minimize these queries or possibly I may need to restructure my models.</p> <p>Notes:</p> <ul> <li>Django 1.7</li> <li>Python 2.7</li> <li>SQLite</li> </ul>
0
2016-09-20T20:29:58Z
39,604,088
<p>Based on <a href="http://stackoverflow.com/a/14293530/4978266">this stack overflow answer to a similar question</a>, I think you'd just need to split this up into two queries to hopefully reduce how much you have to peg the database.</p> <p>Since you're using SQLite, you do not have access to <code>DISTINCT ON</code>, so perhaps try the following:</p> <pre><code>from django.db.models import Q, Max import operator ois = OrderInfo.objects.values('order_id').annotate(max_datetime=Max('datetime')) filters = reduce(operator.or_, [(Q(order_id=oi['order_id']) &amp; Q(datetime=oi['max_datetime'])) for oi in ois]) filtered_ois = OrderInfo.objects.filter(filters) </code></pre> <p>To explain what this is doing (again, this is based off of the linked stack overflow answer):</p> <ul> <li>Get the OrderInfo objects grouped by their <code>order_id</code> and annotate those with the maximum datetime among all of the <code>OrderInfo</code> objects that have that <code>order_id</code>.</li> <li>Use these values to create a new filter on the <code>OrderInfo</code> objects.</li> <li>Use the filter on <code>OrderInfo</code> objects and return them.</li> </ul>
0
2016-09-20T21:33:21Z
[ "python", "django", "sqlite3" ]
Having issues with optimizing query on foreign key
39,603,206
<p>I have tried different ways of querying my data, but still results in a large amount of pings to the DB. </p> <p>I have tried using <code>select_related</code>.</p> <p>Here are my models:</p> <pre><code>class Order(models.Model): num = models.CharField(max_length=50, unique=True) class OrderInfo(models.Model): info = models.CharField(max_length=100, unique=True) datetime = models.DateTimeField(auto_now_add=True, blank=True) order_fk = models.ForeignKey(Order) </code></pre> <p><strong>What I am trying to achieve:</strong></p> <p><code>OrderInfo</code> has a bunch of information pertaining to the <code>Order</code>.</p> <p>What I want is to be able to get the most recent <code>OrderInfo</code> from the DB, but I want it unique to <code>Order</code>. The unique part is where I am struggling to minimize my query amount.</p> <pre><code>ois = OrderInfo.objects.order_by('-datetime').select_related('order_fk') </code></pre> <p>When I try to filter it is doing a query on each Order to check for uniqueness the queries ramp up.</p> <pre><code>For instances: _ = [oi.order_fk for oi in ois] # queries reach to 20k, takes too long. </code></pre> <p>Also, then I just need to limit how many responses I get, but I need to know how many unique <code>Orders</code> there are first in order to limit it. </p> <p>Anyone, know a proper approach to minimize these queries or possibly I may need to restructure my models.</p> <p>Notes:</p> <ul> <li>Django 1.7</li> <li>Python 2.7</li> <li>SQLite</li> </ul>
0
2016-09-20T20:29:58Z
39,608,023
<p>How many <code>OrderInfo</code> objects are there per order? If it's small, it is probably easiest to use <code>prefetch_related</code> and just do the filtering in Python:</p> <pre><code>from operator import attrgetter class Order(models.Model): num = models.CharField(max_length=50, unique=True) @property def latest_order_info(self): return max(self.orderinfo_set.all(), key=attrgetter('datetime')) </code></pre> <p>Then in your application code, you can do:</p> <pre><code>orders = Order.objects.filter(...).prefetch_related('orderinfo_set') </code></pre> <p>It's a little wasteful, but in my experience it is usually not a bottleneck unless the parent model has a very large number of children.</p>
0
2016-09-21T05:12:38Z
[ "python", "django", "sqlite3" ]
Numpy create index/slicing programmatically from array
39,603,246
<p>I can use <code>numpy.mgrid</code> as follows:</p> <pre><code>a = numpy.mgrid[x0:x1, y0:y1] # 2 dimensional b = numpy.mgrid[x0:x1, y0:y1, z0:z1] # 3 dimensional </code></pre> <p>Now, I'd like to create the expression in brackets programmatically, because I do not know whether I have 1, 2, 3 or more dimensions. I'm looking for something like:</p> <pre><code>shape = np.array([[x0, x1], [y0, y1], ... maybe more dimensions ...]) idx = (s[0]:s[1] for s in shape) a = numpy.mgrid[idx] </code></pre> <p>That gives at least a syntax error in the second line. <strong>How can I properly generate those indices/slices programmatically?</strong> (The mgrid here is rather an example/use case, the question is really about indexing in general.) </p>
2
2016-09-20T20:32:49Z
39,603,562
<p>Use the <a href="https://docs.python.org/2/library/functions.html#slice" rel="nofollow"><code>slice</code> object</a>. For example:</p> <pre><code>shape = np.array([[0, 10], [0, 10]]) idx = tuple(slice(s[0],s[1], 1) for s in shape) #yields the following #(slice(0, 10, 1), slice(0, 10, 1)) np.mgrid[idx] </code></pre> <p>yields</p> <pre><code>array([[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]], [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]]) </code></pre> <p>Alternatively, you could use the Numpy shorthand <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html" rel="nofollow"><code>np.s_</code></a>, e.g. <code>np.s_[0:10:1]</code>, instead of <code>slice(0, 10, 1)</code>, but they are equivalent objects.</p>
4
2016-09-20T20:55:38Z
[ "python", "numpy" ]
Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed
39,603,248
<p>I made a quick program that sends a file using sockets in python.</p> <p>Server:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Bind the socket. sock.bind( ("", 5050) ) #Start listening. sock.listen() #Accept client. client, addr = sock.accept() #Open a new file jpg file. file = open("out.jpg", "wb") #Receive all the bytes and write them into the file. while True: received = client.recv(5) #Stop receiving. if received == b'': file.close() break #Write bytes into the file. file.write( received ) </code></pre> <p>Client:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Connect to the server. sock.connect(("192.168.1.3", 5050)) #Open a file for read. file = open("cpp.jpg", "rb") #Read first 5 bytes. read = file.read(5) #Keep sending bytes until reaching EOF. while read != b'': #Send bytes. sock.send(read) #Read next five bytes from the file. read = file.read(1024) sock.close() file.close() </code></pre> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them. If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet? How can i improve my program to send the file as quick as possible depending on my internet speed?</p>
1
2016-09-20T20:32:53Z
39,603,386
<p>Just send those bytes in a loop until all were sent, here's an <a href="https://docs.python.org/2/howto/sockets.html#using-a-socket" rel="nofollow">example from the docs</a></p> <pre><code>def mysend(self, msg): totalsent = 0 while totalsent &lt; MSGLEN: sent = self.sock.send(msg[totalsent:]) if sent == 0: raise RuntimeError("socket connection broken") totalsent = totalsent + sent </code></pre> <p>In your case, <code>MSGLEN</code> would be 1024, and since you're not using a class, you don't need the self argument</p>
1
2016-09-20T20:42:19Z
[ "python", "sockets" ]
Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed
39,603,248
<p>I made a quick program that sends a file using sockets in python.</p> <p>Server:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Bind the socket. sock.bind( ("", 5050) ) #Start listening. sock.listen() #Accept client. client, addr = sock.accept() #Open a new file jpg file. file = open("out.jpg", "wb") #Receive all the bytes and write them into the file. while True: received = client.recv(5) #Stop receiving. if received == b'': file.close() break #Write bytes into the file. file.write( received ) </code></pre> <p>Client:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Connect to the server. sock.connect(("192.168.1.3", 5050)) #Open a file for read. file = open("cpp.jpg", "rb") #Read first 5 bytes. read = file.read(5) #Keep sending bytes until reaching EOF. while read != b'': #Send bytes. sock.send(read) #Read next five bytes from the file. read = file.read(1024) sock.close() file.close() </code></pre> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them. If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet? How can i improve my program to send the file as quick as possible depending on my internet speed?</p>
1
2016-09-20T20:32:53Z
39,606,825
<p>There are input/output buffers at all steps along the way between your source and destination. Once a buffer fills, nothing else will be accepted onto it until space has been made available.</p> <p>As your application attempts to send data, it will fill up a buffer in the operating system that is cleared as the operating system is able to offload that data to the network device driver (which also has a buffer).</p> <p>The network device driver interfaces with the actual network and understands how to know when it can send data and how receipt will be confirmed by the other side (if at all). As data is sent, that buffer is emptied, allowing the OS to push more data from its buffer. That, in turn, frees up room for your application to push more of its data to the OS.</p> <p>There are a bunch of other things that factor into this process (timeouts and max hops are two I can think of offhand), but the general process is that you have to buffer the data at each step until it can be sent to the next step.</p>
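<p>For instance, here is a minimal sketch showing that the OS-level send buffer mentioned above is a real, queryable thing; the exact sizes returned and granted are OS-dependent, and the numbers here are purely illustrative:</p> <pre><code>import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Current size of the kernel's send buffer for this socket, in bytes.
current = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("OS send buffer:", current, "bytes")

# Ask for a bigger buffer; the OS may round, double or cap the requested value.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
print("After request:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF), "bytes")
</code></pre>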
1
2016-09-21T02:59:33Z
[ "python", "sockets" ]
Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed
39,603,248
<p>I made a quick program that sends a file using sockets in python.</p> <p>Server:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Bind the socket. sock.bind( ("", 5050) ) #Start listening. sock.listen() #Accept client. client, addr = sock.accept() #Open a new file jpg file. file = open("out.jpg", "wb") #Receive all the bytes and write them into the file. while True: received = client.recv(5) #Stop receiving. if received == b'': file.close() break #Write bytes into the file. file.write( received ) </code></pre> <p>Client:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Connect to the server. sock.connect(("192.168.1.3", 5050)) #Open a file for read. file = open("cpp.jpg", "rb") #Read first 5 bytes. read = file.read(5) #Keep sending bytes until reaching EOF. while read != b'': #Send bytes. sock.send(read) #Read next five bytes from the file. read = file.read(1024) sock.close() file.close() </code></pre> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them. If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet? How can i improve my program to send the file as quick as possible depending on my internet speed?</p>
1
2016-09-20T20:32:53Z
39,606,911
<p><code>send</code> makes no guarantees that all the data is sent (it's not directly tied to network speed; there are multiple reasons it could send less than requested), just that it lets you know how much was sent. You could explicitly write loops to <code>send</code> until it's all really sent, per <a href="http://stackoverflow.com/a/39603386/364696">Dunno's answer</a>.</p> <p>Or you could just use <a href="https://docs.python.org/3/library/socket.html#socket.socket.sendall" rel="nofollow"><code>sendall</code></a> and avoid the hassle. <code>sendall</code> is basically the wrapper described in <a href="http://stackoverflow.com/a/39603386/364696">the other answer</a>, but Python does all the heavy lifting for you.</p> <p>If you don't care about slurping the whole file into memory, you could use this to replace your whole loop structure with just:</p> <pre><code>sock.sendall(file.read())
</code></pre> <p>If you're on modern Python (3.5 or higher) on a UNIX-like OS, you could optimize a bit to avoid even reading the file data into Python by using <a href="https://docs.python.org/3/library/socket.html#socket.socket.sendfile" rel="nofollow"><code>socket.sendfile</code></a> (which should only lead to partial <code>send</code> on error):</p> <pre><code>sock.sendfile(file)
</code></pre> <p>If Python doesn't support <code>os.sendfile</code> on your OS, this is effectively just a loop that <code>read</code>s and <code>send</code>s repeatedly, but on a system that supports it, this directly copies from file to socket in the kernel, without even handling file data in Python (which can improve throughput significantly by reducing system calls and eliminating some memory copies entirely).</p>
1
2016-09-21T03:10:17Z
[ "python", "sockets" ]
Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed
39,603,248
<p>I made a quick program that sends a file using sockets in python.</p> <p>Server:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Bind the socket. sock.bind( ("", 5050) ) #Start listening. sock.listen() #Accept client. client, addr = sock.accept() #Open a new file jpg file. file = open("out.jpg", "wb") #Receive all the bytes and write them into the file. while True: received = client.recv(5) #Stop receiving. if received == b'': file.close() break #Write bytes into the file. file.write( received ) </code></pre> <p>Client:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Connect to the server. sock.connect(("192.168.1.3", 5050)) #Open a file for read. file = open("cpp.jpg", "rb") #Read first 5 bytes. read = file.read(5) #Keep sending bytes until reaching EOF. while read != b'': #Send bytes. sock.send(read) #Read next five bytes from the file. read = file.read(1024) sock.close() file.close() </code></pre> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them. If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet? How can i improve my program to send the file as quick as possible depending on my internet speed?</p>
1
2016-09-20T20:32:53Z
39,607,144
<blockquote> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them.</p> </blockquote> <p>Since you are using a TCP Socket (i.e. SOCK_STREAM), speed-of-transmission issues are handled for you automatically. That is, once some bytes have been copied from your buffer (and into the socket's internal send-buffer) by the send() call, the TCP layer will make sure they make it to the receiving program, no matter how long it takes (well, within reason, anyway; the TCP layer will eventually give up on resending packets if it can't make any progress at all over the course of multiple minutes).</p> <blockquote> <p>If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> </blockquote> <p>This is incorrect; you are not going to "lose bytes", as the TCP layer will automatically resend any lost packets when necessary. What might happen, however, is that send() might decide not to accept all of the bytes that you offered it. That's why it is absolutely necessary to check the return value of send() to see how many bytes send() actually accepted responsibility for -- you <em>cannot</em> simply assume that send() will always accept all the bytes you offered to it.</p> <blockquote> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet?</p> </blockquote> <p>You can't. Instead, you have to look at the value returned by send() to know how many bytes send() has copied out of your buffer. That way, on your next call to send() you'll know what data to pass in (i.e. starting with the next byte after the last one that was sent in the previous call)</p> <blockquote> <p>How can i improve my program to send the file as quick as possible depending on my internet speed?</p> </blockquote> <p>Offer send() as many bytes as you can at once; that will give it the most flexibility to optimize what it's doing behind the scenes. Other than that, just call send() in a loop, using the return value of each send() call to determine what bytes to pass to send() the next time (e.g. if the first call returns 5, you know that send() read the first 5 bytes out of your buffer and will make sure they get to their destination, so your next call to send() should pass in a buffer starting at the 6th byte of your data stream... and so on). (Or if you don't want to deal with that logic yourself, you can call sendall() like @ShadowRanger suggested; sendall() is just a wrapper containing a loop around send() that does that logic for you. The only disadvantage is that e.g. if you call sendall() on 20 gigabytes of data, it might be several hours before the sendall() call returns! Whether or not that would pose a problem for you depends on what else your program might want to accomplish, if anything, while sending the data).</p> <p>That's really all there is to it for TCP.</p> <p>If you were sending data using a UDP socket, on the other hand, things would be very different; in the UDP case, packets can simply be dropped, and it's up to the programmer to manage speed-of-transmission issues, packet resends, etc, explicitely. But with TCP all that is handled for you by the OS.</p>
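<p>Concretely, the loop described above might be sketched like this; names such as <code>send_file</code> and the chunk size are just illustrative choices, not anything mandated by the socket API:</p> <pre><code>def send_file(sock, filename, chunk_size=64 * 1024):
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break  # reached EOF, everything has been handed to send()
            # send() may accept only part of the chunk; keep offering the
            # unsent tail until the whole chunk has been accepted.
            while chunk:
                sent = sock.send(chunk)
                chunk = chunk[sent:]
</code></pre>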
1
2016-09-21T03:39:39Z
[ "python", "sockets" ]
Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed
39,603,248
<p>I made a quick program that sends a file using sockets in python.</p> <p>Server:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Bind the socket. sock.bind( ("", 5050) ) #Start listening. sock.listen() #Accept client. client, addr = sock.accept() #Open a new file jpg file. file = open("out.jpg", "wb") #Receive all the bytes and write them into the file. while True: received = client.recv(5) #Stop receiving. if received == b'': file.close() break #Write bytes into the file. file.write( received ) </code></pre> <p>Client:</p> <pre><code>import socket, threading #Create a socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Connect to the server. sock.connect(("192.168.1.3", 5050)) #Open a file for read. file = open("cpp.jpg", "rb") #Read first 5 bytes. read = file.read(5) #Keep sending bytes until reaching EOF. while read != b'': #Send bytes. sock.send(read) #Read next five bytes from the file. read = file.read(1024) sock.close() file.close() </code></pre> <p>From experience a learn that send can send an amount of bytes that your network speed is capble of sending them. If you give for example: sock.send(20 gb) you are going to lose bytes because most network connections can't send 20 gb at once. You must send them part by part.</p> <p>So my question is: How can i know the maximum amount of bytes that socket.send() can send over the internet? How can i improve my program to send the file as quick as possible depending on my internet speed?</p>
1
2016-09-20T20:32:53Z
39,611,867
<p>@Jeremy Friesner</p> <p>So I can do something like this:</p> <pre><code>file = open(filename, "rb")
read = file.read(1024**3) #Read 1 gb.
totalsend = 0

#Send Loop
while totalsend &lt; filesize:

    #Try to send all the bytes.
    send = sock.send(read)
    totalsend += send

    #If failed, then seek into the file the position
    #where the next read will also read the missing bytes.
    if send &lt; 1024**3:
        file.seek(totalsend)
        read = file.read(1024**3) #Read 1 gb.
</code></pre> <p>Is this correct?</p> <p>Also, from this example I understood one more thing. The data you send in each loop iteration can't be bigger than your memory, because you are bringing bytes from the disk into memory. So theoretically, even if your network speed were infinite, you couldn't send all the bytes at once if the file is bigger than your memory.</p>
0
2016-09-21T08:56:33Z
[ "python", "sockets" ]
How to delete unpopulated placeholder items using python-pptx
39,603,318
<p>This is very simple, but I cannot find the actual method anywhere in the documentation or otherwise. </p> <p>I am using the python-pptx module and all I need to do is delete a single placeholder item, an empty text box, on some slides (without having to create a completely new layout just for these slides) - the closest thing to an answer is here: <a href="http://python-pptx.readthedocs.io/en/latest/user/placeholders-understanding.html" rel="nofollow">http://python-pptx.readthedocs.io/en/latest/user/placeholders-understanding.html</a> under <strong>Unpopulated vs. populated</strong> but it still does not say <em>how</em> to actually delete/remove the placeholder.</p> <p>I've tried all the obvious methods .delete(), .remove(), etc.</p>
0
2016-09-20T20:37:50Z
39,608,066
<p>There is no API support for this, but if you delete the text box shape element, that should do the trick. It would be something like this:</p> <pre><code>textbox = shapes[textbox_idx] sp = textbox.element sp.getparent().remove(sp) </code></pre> <p>Using the <code>textbox</code> variable/reference after this operation is not likely to go well. But otherwise, I expect this will do the trick. </p>
0
2016-09-21T05:16:06Z
[ "python", "python-pptx" ]
Most efficient way to store list of integers
39,603,364
<p>I have recently been doing a project in which one of the aims is to use as little memory as possible to store a series of files using Python 3. Almost all of the files take up very little space, apart from one list of integers that is roughly <code>333,000</code> integers long and has integers up to about <code>8000</code> in size. </p> <p>I'm currently using <code>pickle</code> to store the list, which takes up around <code>7mb</code>, but I feel like there must be a more memory efficient way to do this.</p> <p>I have tried storing it as a text file and <code>csv</code>, bur both of these used in excess of <code>10mb</code> of space.</p>
0
2016-09-20T20:41:15Z
39,603,768
<p>One <code>stdlib</code> solution you could use is arrays from <a href="https://docs.python.org/3/library/array.html" rel="nofollow"><code>array</code></a>, from the docs:</p> <blockquote> <p>This module defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. </p> </blockquote> <p>This generally sheds a bit of memory of large lists, for example, with a 10 million element a list, the array trims up <code>11mb</code>:</p> <pre><code>import pickle from array import array l = [i for i in range(10000000)] a = array('i', l) # tofile can also be used. with open('arrfile', 'wb') as f: pickle.dump(a, f) with open('lstfile', 'wb') as f: pickle.dump(l, f) </code></pre> <p>Sizes: </p> <pre><code>!du -sh ./* 39M arrfile 48M lstfile </code></pre>
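<p>As hinted by the <code>tofile</code> comment above, you can also skip <code>pickle</code> entirely and let the array write its raw machine values; a small sketch of the round trip (the file name is arbitrary):</p> <pre><code>import os
from array import array

a = array('i', range(1000))

with open('arrfile.bin', 'wb') as f:
    a.tofile(f)             # raw items, no pickle framing

b = array('i')
n_items = os.path.getsize('arrfile.bin') // a.itemsize
with open('arrfile.bin', 'rb') as f:
    b.fromfile(f, n_items)  # fromfile needs the item count up front
</code></pre>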
2
2016-09-20T21:10:15Z
[ "python", "list", "python-3.x", "memory", "integer" ]
Most efficient way to store list of integers
39,603,364
<p>I have recently been doing a project in which one of the aims is to use as little memory as possible to store a series of files using Python 3. Almost all of the files take up very little space, apart from one list of integers that is roughly <code>333,000</code> integers long and has integers up to about <code>8000</code> in size. </p> <p>I'm currently using <code>pickle</code> to store the list, which takes up around <code>7mb</code>, but I feel like there must be a more memory efficient way to do this.</p> <p>I have tried storing it as a text file and <code>csv</code>, bur both of these used in excess of <code>10mb</code> of space.</p>
0
2016-09-20T20:41:15Z
39,603,864
<p>Here is a small demo, which uses Pandas module:</p> <pre><code>import numpy as np import pandas as pd import feather # let's generate an array of 1M int64 elements... df = pd.DataFrame({'num_col':np.random.randint(0, 10**9, 10**6)}, dtype=np.int64) df.info() %timeit -n 1 -r 1 df.to_pickle('d:/temp/a.pickle') %timeit -n 1 -r 1 df.to_hdf('d:/temp/a.h5', 'df_key', complib='blosc', complevel=5) %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_blosc.h5', 'df_key', complib='blosc', complevel=5) %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_zlib.h5', 'df_key', complib='zlib', complevel=5) %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_bzip2.h5', 'df_key', complib='bzip2', complevel=5) %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_lzo.h5', 'df_key', complib='lzo', complevel=5) %timeit -n 1 -r 1 feather.write_dataframe(df, 'd:/temp/a.feather') </code></pre> <p><strong>DataFrame info:</strong></p> <pre><code>In [56]: df.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 1000000 entries, 0 to 999999 Data columns (total 1 columns): num_col 1000000 non-null int64 dtypes: int64(1) memory usage: 7.6 MB </code></pre> <p><strong>Results (speed):</strong></p> <pre><code>In [49]: %timeit -n 1 -r 1 df.to_pickle('d:/temp/a.pickle') 1 loop, best of 1: 16.2 ms per loop In [50]: %timeit -n 1 -r 1 df.to_hdf('d:/temp/a.h5', 'df_key', complib='blosc', complevel=5) 1 loop, best of 1: 39.7 ms per loop In [51]: %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_blosc.h5', 'df_key', complib='blosc', complevel=5) 1 loop, best of 1: 40.6 ms per loop In [52]: %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_zlib.h5', 'df_key', complib='zlib', complevel=5) 1 loop, best of 1: 213 ms per loop In [53]: %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_bzip2.h5', 'df_key', complib='bzip2', complevel=5) 1 loop, best of 1: 1.09 s per loop In [54]: %timeit -n 1 -r 1 df.to_hdf('d:/temp/a_lzo.h5', 'df_key', complib='lzo', complevel=5) 1 loop, best of 1: 32.1 ms per loop In [55]: %timeit -n 1 -r 1 feather.write_dataframe(df, 'd:/temp/a.feather') 1 loop, best of 1: 3.49 ms per loop </code></pre> <p><strong>Results (size):</strong></p> <pre><code>{ temp } » ls -lh a* /d/temp -rw-r--r-- 1 Max None 7.7M Sep 20 23:15 a.feather -rw-r--r-- 1 Max None 4.1M Sep 20 23:15 a.h5 -rw-r--r-- 1 Max None 7.7M Sep 20 23:15 a.pickle -rw-r--r-- 1 Max None 4.1M Sep 20 23:15 a_blosc.h5 -rw-r--r-- 1 Max None 4.0M Sep 20 23:15 a_bzip2.h5 -rw-r--r-- 1 Max None 4.1M Sep 20 23:15 a_lzo.h5 -rw-r--r-- 1 Max None 3.9M Sep 20 23:15 a_zlib.h5 </code></pre> <p><strong>Conclusion:</strong> pay attention at HDF5 (+ <code>blosc</code> or <code>lzo</code> compression) if you need both speed and a reasonable size or at <a href="https://blog.rstudio.org/2016/03/29/feather/" rel="nofollow">Feather-format</a> if you only care of speed - it's 4 times faster compared to Pickle!</p>
1
2016-09-20T21:17:54Z
[ "python", "list", "python-3.x", "memory", "integer" ]
Most efficient way to store list of integers
39,603,364
<p>I have recently been doing a project in which one of the aims is to use as little memory as possible to store a series of files using Python 3. Almost all of the files take up very little space, apart from one list of integers that is roughly <code>333,000</code> integers long and has integers up to about <code>8000</code> in size. </p> <p>I'm currently using <code>pickle</code> to store the list, which takes up around <code>7mb</code>, but I feel like there must be a more memory efficient way to do this.</p> <p>I have tried storing it as a text file and <code>csv</code>, bur both of these used in excess of <code>10mb</code> of space.</p>
0
2016-09-20T20:41:15Z
39,604,308
<p>I like <a href="http://stackoverflow.com/a/39603768/1392132">Jim's suggestion</a> of using the <a href="https://docs.python.org/3/library/array.html" rel="nofollow"><code>array</code></a> module. If your numeric values are small enough to fit into the machine's native <code>int</code> type, then this is a fine solution. (I'd prefer to serialize the array with the <code>array.tofile</code> method instead of using <code>pickle</code>, though.) If an <code>int</code> is 32 bits, then this uses 4 bytes per number.</p> <p>I would like to question how you did your text file, though. If I create a file with 333&#x202f;000 integers in the range [0, 8&#x202f;000] with one number per line,</p> <pre><code>import random with open('numbers.txt', 'w') as ostr: for i in range(333000): r = random.randint(0, 8000) print(r, file=ostr) </code></pre> <p>it comes out to a size of only 1.6&#x202f;MiB which isn't all that bad compared to the 1.3&#x202f;MiB that the binary representation would use. And if you do happen to have a value outside the range of the native <code>int</code> type one day, the text file will handle it happily without overflow.</p> <p>Furthermore, if I <em>compress</em> the file using gzip, the file size shrinks down to 686&#x202f;KiB. That's better than gzipping the binary data! When using bzip2, the file size is only 562&#x202f;KiB. Python's standard library has support for both <a href="https://docs.python.org/3/library/gzip.html" rel="nofollow"><code>gzip</code></a> and <a href="https://docs.python.org/3/library/bz2.html" rel="nofollow"><code>bz2</code></a> so you might want to give the plain-text format plus compression another try.</p>
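<p>A minimal sketch of that compressed plain-text variant (<code>bz2.open</code> can be swapped in the same way; file names are arbitrary):</p> <pre><code>import gzip
import random

# write the same one-number-per-line text format, but through gzip
with gzip.open('numbers.txt.gz', 'wt') as ostr:
    for i in range(333000):
        print(random.randint(0, 8000), file=ostr)

# reading it back is just as simple
with gzip.open('numbers.txt.gz', 'rt') as istr:
    numbers = [int(line) for line in istr]
</code></pre>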
0
2016-09-20T21:53:50Z
[ "python", "list", "python-3.x", "memory", "integer" ]
using more than one highlight color in pygments
39,603,381
<p>I'm using pygments to highlight lines from a file, but I want to highlight different lines with different colors.</p> <p><strong>note</strong> While I was writing this question I tried different things until I found what looks like a decent solution that solves my problem. I'll post it in the answers.</p> <p>My first attempt at altering the default yellow (which is very pale) was:</p> <pre class="lang-py prettyprint-override"><code>HIGHLIGHT_COLOR = '#F4E004' formatter = HtmlFormatter(linenos='inline', hl_lines=hl_lines, lineanchors='foo') style = formatter.get_style_defs() with open(the_xml_fullpath) as f: highlighted = highlight(f.read(), XmlLexer(), formatter) # make the yellow more ...yellow _style = re.sub(r'background-color: \#.+ ', 'background-color: {} '.format(HIGHLIGHT_COLOR), style) </code></pre> <p>Now I am fully aware of <a href="http://stackoverflow.com/a/1732454/204634">the perils of using a regular expression to parse HTML</a> but I thought the only alternative was to use the <code>noclasses=True</code> option of <code>highlight()</code> which does not use CSS classes inline CSS and then iterate through the entire file and replace the background colour of the lines I want.</p> <p>So my question is: how can I highlight different set of lines using pygments with different colors?</p>
0
2016-09-20T20:42:10Z
39,603,382
<p>My solution subclassed the <code>HtmlFormatter</code> class as suggested by the documentation, like this:</p> <pre class="lang-py prettyprint-override"><code>class MyFormatter(HtmlFormatter):
    """Overriding formatter to highlight more than one kind of lines"""

    def __init__(self, **kwargs):
        super(MyFormatter, self).__init__(**kwargs)
        # a list of [ (highlight_colour, [lines]) ]
        self.highlight_groups = kwargs.get('highlight_groups', [])

    def wrap(self, source, outfile):
        return self._wrap_code(source)

    # generator: returns 0, html if it's not a source line; 1, line if it is
    def _wrap_code(self, source):
        _prefix = ''
        if self.cssclass is not None:
            _prefix += '&lt;div class="highlight"&gt;'
        if self.filename is not None:
            _prefix += '&lt;span class="filename"&gt;{}&lt;/span&gt;'.format(self.filename)
        yield 0, _prefix + '&lt;pre&gt;'

        for count, _t in enumerate(source):
            i, t = _t
            if i == 1:
                # it's a line of formatted code
                for highlight_group in self.highlight_groups:
                    col, lines = highlight_group
                    # count starts from 0...
                    if (count + 1) in lines:
                        # it's a highlighted line - set the colour
                        _row = '&lt;span style="background-color:{}"&gt;{}&lt;/span&gt;'.format(col, t)
                        t = _row
            yield i, t

        # close open things
        _postfix = ''
        if self.cssclass is not None:
            _postfix += '&lt;/div&gt;'
        yield 0, '&lt;/pre&gt;' + _postfix
</code></pre> <p>To use it:</p> <pre class="lang-py prettyprint-override"><code># dark yellow
HIGHLIGHT_COLOR = '#F4E004'
# pinkish
DEPRECATED_COLOR = '#FF4ED1'

formatter = MyFormatter(
    linenos='inline',
    # no need to highlight lines - we take care of it in the formatter
    # hl_lines=hl_lines,
    filename="sourcefile",
    # a list of tuples (colour, [lines]) indicating which colour to use
    highlight_groups=[
        (HIGHLIGHT_COLOR, hl_lines),
        (DEPRECATED_COLOR, deprecated_lines),
    ]
)
</code></pre>
0
2016-09-20T20:42:10Z
[ "python", "css", "pygments" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,603,504
<p>Just use a plain old for loop:</p> <pre><code>results = {} for function in [check_a, check_b, ...]: results[function.__name__] = result = function() if not result: break </code></pre> <p>The results will be a mapping of the function name to their return values, and you can do what you want with the values after the loop breaks. </p> <p>Use an <code>else</code> clause on the for loop if you want special handling for the case where all of the functions have returned truthy results. </p>
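<p>A sketch of that <code>else</code> variant, wrapped in a function (this assumes an insertion-ordered dict; use <code>collections.OrderedDict</code> on older interpreters if you need the values back in check order):</p> <pre><code>def run_checks(functions):
    results = {}
    for function in functions:
        results[function.__name__] = result = function()
        if not result:
            break
    else:
        # only reached when the loop was never broken, i.e. every check passed
        return tuple(results.values())
    return None  # at least one check failed
</code></pre>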
27
2016-09-20T20:51:07Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,603,506
<p>Write a function that takes an iterable of functions to run. Call each one and append the result to a list, or return <code>None</code> if the result is <code>False</code>. Either the function will stop calling further checks after one fails, or it will return the results of all the checks.</p> <pre><code>def all_or_none(checks, *args, **kwargs): out = [] for check in checks: rv = check(*args, **kwargs) if not rv: return None out.append(rv) return out </code></pre> <pre><code>rv = all_or_none((check_a, check_b, check_c)) # rv is a list if all checks passed, otherwise None if rv is not None: return rv </code></pre> <pre><code>def check_a(obj): ... def check_b(obj): ... # pass arguments to each check, useful for writing reusable checks rv = all_or_none((check_a, check_b), obj=my_object) </code></pre>
9
2016-09-20T20:51:16Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,603,512
<p>Try this:</p> <pre><code>mapping = {'a': assign_a(), 'b': assign_b()} if None not in mapping.values(): return mapping </code></pre> <p>Where <code>assign_a</code> and <code>assign_b</code> are of the form:</p> <pre><code>def assign_&lt;variable&gt;(): if condition: return value else: return None </code></pre>
-2
2016-09-20T20:51:32Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,603,516
<p>You could use either a list or an OrderedDict, using a for loop would serve the purpose of emulating short circuiting.</p> <pre><code>from collections import OrderedDict def check_a(): return "A" def check_b(): return "B" def check_c(): return "C" def check_d(): return False def method1(*args): results = [] for i, f in enumerate(args): value = f() results.append(value) if not value: return None return results def method2(*args): results = OrderedDict() for f in args: results[f.__name__] = result = f() if not result: return None return results # Case 1, it should return check_a, check_b, check_c for m in [method1, method2]: print(m(check_a, check_b, check_c)) # Case 1, it should return None for m in [method1, method2]: print(m(check_a, check_b, check_d, check_c)) </code></pre>
3
2016-09-20T20:51:46Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,603,737
<p>If I understand correctly, you don't need to skip the last functions, because if the first condition fails, the others will not be evaluated.</p> <p>For me, your code:</p> <pre><code>a = check_a()
b = check_b()
c = check_c()
....

if a and b and c and ...
    return (a, b, c, ...)
</code></pre> <p>is right. If a fails, b and c will not be evaluated (redundant but necessary).</p>
-3
2016-09-20T21:07:49Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,605,495
<p>In other languages that did have <a href="http://stackoverflow.com/q/4869770/1048572">assignments as expressions</a> you would be able to use</p> <pre><code>if (a = check_a()) and (b = check_b()) and (c = check_c()): </code></pre> <p>but Python is no such language. Still, we can circumvent the restriction and emulate that behaviour:</p> <pre><code>result = [] def put(value): result.append(value) return value if put(check_a()) and put(check_b()) and put(check_c()): # if you need them as variables, you could do # (a, b, c) = result # but you just want return tuple(result) </code></pre> <p>This might loosen the connection between the variables and function calls a bit too much, so if you want to do lots of separate things with the variables, instead of using the <code>result</code> elements in the order they were put in the list, I would rather avoid this approach. Still, it might be quicker and shorter than some loop.</p>
6
2016-09-20T23:57:59Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,609,961
<p>main logic:</p> <pre><code>results = list(takewhile(lambda x: x, map(lambda x: x(), function_list))) if len(results) == len(function_list): return results </code></pre> <p>you can learn a lot about collection transformations if you look at all methods of an api like <a href="http://www.scala-lang.org/api/2.11.7/#scala.collection.immutable.List" rel="nofollow">http://www.scala-lang.org/api/2.11.7/#scala.collection.immutable.List</a> and search/implement python equivalents</p> <p>logic with setup and alternatives:</p> <pre><code>import sys if sys.version_info.major == 2: from collections import imap map = imap def test(bool): def inner(): print(bool) return bool return inner def function_for_return(): function_list = [test(True),test(True),test(False),test(True)] from itertools import takewhile print("results:") results = list(takewhile(lambda x:x,map(lambda x:x(),function_list))) if len(results) == len(function_list): return results print(results) #personally i prefer another syntax: class Iterator(object): def __init__(self,iterable): self.iterator = iter(iterable) def __next__(self): return next(self.iterator) def __iter__(self): return self def map(self,f): return Iterator(map(f,self.iterator)) def takewhile(self,f): return Iterator(takewhile(f,self.iterator)) print("results2:") results2 = list( Iterator(function_list) .map(lambda x:x()) .takewhile(lambda x:x) ) print(results2) print("with additional information") function_list2 = [(test(True),"a"),(test(True),"b"),(test(False),"c"),(test(True),"d")] results3 = list( Iterator(function_list2) .map(lambda x:(x[0](),x[1])) .takewhile(lambda x:x[0]) ) print(results3) function_for_return() </code></pre>
1
2016-09-21T07:23:25Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,610,678
<p>Since I cannot comment on wim's answer as a guest, I'll just add an extra answer. Since you want a tuple, you should collect the results in a list and then cast to a tuple.</p> <pre><code>def short_eval(*checks):
    result = []
    for check in checks:
        checked = check()
        if not checked:
            break
        result.append(checked)
    return tuple(result)

# Example
wished = short_eval(check_a, check_b, check_c)
</code></pre>
0
2016-09-21T07:58:42Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,611,845
<p>There are lots of ways to do this! Here's another.</p> <p>You can use a generator expression to defer the execution of the functions. Then you can use <code>itertools.takewhile</code> to implement the short-circuiting logic by consuming items from the generator until one of them is false.</p> <pre><code>from itertools import takewhile functions = (check_a, check_b, check_c) generator = (f() for f in functions) results = tuple(takewhile(bool, generator)) if len(results) == len(functions): return results </code></pre>
2
2016-09-21T08:55:32Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,612,853
<p>You can try use <code>@lazy_function</code> decorator from <code>lazy_python</code> <a href="https://pypi.python.org/pypi/lazy_python/0.2.1" rel="nofollow">package</a>. Example of usage:</p> <pre><code>from lazy import lazy_function, strict @lazy_function def check(a, b): strict(print('Call: {} {}'.format(a, b))) if a + b &gt; a * b: return '{}, {}'.format(a, b) a = check(-1, -2) b = check(1, 2) c = check(-1, 2) print('First condition') if c and a and b: print('Ok: {}'.format((a, b))) print('Second condition') if c and b: print('Ok: {}'.format((c, b))) # Output: # First condition # Call: -1 2 # Call: -1 -2 # Second condition # Call: 1 2 # Ok: ('-1, 2', '1, 2') </code></pre>
0
2016-09-21T09:39:03Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,613,007
<p>Flexible short circuiting is really best done with exceptions. For a very simple prototype you could even just assert each check result:</p> <pre><code>try:
    a = check_a()
    assert a
    b = check_b()
    assert b
    c = check_c()
    assert c
    return a, b, c
except AssertionError as e:
    return None
</code></pre> <p>You should probably raise a custom exception instead. You could change your check_X functions to raise exceptions themselves, in an arbitrarily nested way. Or you could wrap or decorate your check_X functions to raise errors on falsy return values.</p> <p>In short, exception handling is very flexible and exactly what you are looking for; don't be afraid to use it. If you learned somewhere that exception handling is not to be used for your own flow control, this does not apply to Python. Liberal use of exception handling is considered Pythonic, as in <a href="https://docs.python.org/2/glossary.html#term-eafp" rel="nofollow" title="EAFP">EAFP</a>.</p>
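<p>The wrapping/decorating variant mentioned above could be sketched roughly like this; <code>CheckFailed</code> and <code>raise_on_falsy</code> are illustrative names, not an existing API:</p> <pre><code>class CheckFailed(Exception):
    """Raised when a wrapped check returns a falsy value."""

def raise_on_falsy(check):
    def wrapper(*args, **kwargs):
        result = check(*args, **kwargs)
        if not result:
            raise CheckFailed(check.__name__)
        return result
    return wrapper

try:
    checks = (raise_on_falsy(check_a)(),
              raise_on_falsy(check_b)(),
              raise_on_falsy(check_c)())
except CheckFailed:
    checks = None  # some check failed; the remaining ones were never called
</code></pre>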
1
2016-09-21T09:46:09Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,615,548
<p>If you don't need to take an arbitrary number of expressions at runtime (possibly wrapped in lambdas), you can expand your code directly into this pattern:</p> <pre><code>def f (): try: return (&lt;a&gt; or jump(), &lt;b&gt; or jump(), &lt;c&gt; or jump()) except NonLocalExit: return None </code></pre> <p>Where those definitions apply:</p> <pre><code>class NonLocalExit(Exception): pass def jump(): raise NonLocalExit() </code></pre>
1
2016-09-21T11:38:48Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,619,506
<p>This is similar to Bergi's answer but I think that answer misses the point of wanting separate functions (check_a, check_b, check_c):</p> <pre><code>list1 = [] def check_a(): condition = True a = 1 if (condition): list1.append(a) print ("checking a") return True else: return False def check_b(): condition = False b = 2 if (condition): list1.append(b) print ("checking b") return True else: return False def check_c(): condition = True c = 3 if (condition): list1.append(c) print ("checking c") return True else: return False if check_a() and check_b() and check_c(): # won't get here tuple1 = tuple(list1) print (tuple1) # output is: # checking a # (1,) </code></pre> <p>Or, if you don't want to use the global list, pass a reference of a local list to each of the functions.</p>
0
2016-09-21T14:32:14Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,642,669
<p>Another way to tackle this is using a generator, since generators use lazy evaluation. First put all checks into a generator:</p> <pre><code>def checks(): yield check_a() yield check_b() yield check_c() </code></pre> <p>Now you could force evaluation of everything by converting it to a list:</p> <pre><code>list(checks()) </code></pre> <p>But the standard all function does proper short cut evaluation on the iterator returned from checks(), and returns whether all elements are truthy:</p> <pre><code>all(checks()) </code></pre> <p>Last, if you want the results of succeeding checks up to the failure you can use itertools.takewhile to take the first run of truthy values only. Since the result of takewhile is lazy itself you'll need to convert it to a list to see the result in a REPL:</p> <pre><code>from itertools import takewhile takewhile(lambda x: x, checks()) list(takewhile(lambda x: x, checks())) </code></pre>
2
2016-09-22T15:06:54Z
[ "python", "short-circuiting" ]
Short-circuit evaluation like Python's "and" while storing results of checks
39,603,391
<p>I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of <code>and</code>. I could nest <code>if</code> statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of <code>and</code> while also storing the results for later use?</p> <pre><code>def check_a(): # do something and return the result, # for simplicity, just make it "A" return "A" def check_b(): # do something and return the result, # for simplicity, just make it "B" return "B" ... </code></pre> <p>This doesn't short-circuit:</p> <pre><code>a = check_a() b = check_b() c = check_c() if a and b and c: return a, b, c </code></pre> <p>This is messy if there are many checks:</p> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>Is there a shorter way to do this?</p>
26
2016-09-20T20:42:29Z
39,675,635
<p>If the main objection is</p> <blockquote> <p>This is messy if there are many checks:</p> </blockquote> <pre><code>if a: b = check_b() if b: c = check_c() if c: return a, b, c </code></pre> <p>A fairly nice pattern is to reverse the condition and return early</p> <pre><code>if not a: return # None, or some value, or however you want to handle this b = check_b() if not b: return c = check_c() if not c: return # ok, they were all truthy return a, b, c </code></pre>
0
2016-09-24T11:06:15Z
[ "python", "short-circuiting" ]
pandas daily average, pandas.resample
39,603,399
<p>I have a csv file similar to this</p> <pre><code>Date,Temp1,Temp2 23-Oct-09 01:00:00,21.1,22.3 23-Oct-09 04:00:00,22.3,23.8 23-Oct-09 07:00:00,21.4,21.3 23-Oct-09 10:00:00,21.5,21.6 23-Oct-09 13:00:00,22.3,23.8 23-Oct-09 16:00:00,21.4,21.3 23-Oct-09 19:00:00,21.1,22.3 23-Oct-09 22:00:00,21.4,21.3 24-Oct-09 01:00:00,22.3,23.8 24-Oct-09 04:00:00,22.3,23.8 24-Oct-09 07:00:00,21.1,22.3 24-Oct-09 10:00:00,22.3,23.8 24-Oct-09 13:00:00,21.1,22.3 24-Oct-09 16:00:00,22.3,23.8 24-Oct-09 19:00:00,21.1,22.3 24-Oct-09 22:00:00,22.3,23.8 </code></pre> <p>I have read the data with:</p> <pre><code>df=pd.read_csv('data.csv', index_col=0) </code></pre> <p>and converted the index to date time</p> <pre><code>df.index=pd.to_datetime(df.index) </code></pre> <p>Now I want to take the mean of each daily temperature, I have been trying to use pd.resample as below, but have been receiving errors. I've read the pandas.resample docs and numerous examples on here and am still at a loss...</p> <pre><code>df_avg = df.resample('D', how = 'mean') </code></pre> <blockquote> <p>DataError: No numeric types to aggregate</p> </blockquote> <p>I would like df_avg to be a dataframe with a datetime index and the two 2 columns. I am using pandas 0.17.1 and python 3.5.2, any help greatly appreciated!</p>
3
2016-09-20T20:43:13Z
39,603,437
<p>You need to convert the <code>string</code> columns to <code>float</code> first:</p> <pre><code># add the parse_dates parameter to convert the first column to datetime
df = pd.read_csv('data.csv', index_col=0, parse_dates=[0])

df['Temp1'] = df.Temp1.astype(float)
df['Temp2'] = df.Temp2.astype(float)
df_avg = df.resample('D', how = 'mean')
</code></pre> <hr> <p>If <code>astype</code> raises an error, the problem is that there are some non-numeric values. In that case you need to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> with <code>errors='coerce'</code> - then all 'problematic' values are converted to <code>NaN</code>:</p> <pre><code>df['Temp1'] = pd.to_numeric(df.Temp1, errors='coerce')
df['Temp2'] = pd.to_numeric(df.Temp2, errors='coerce')
</code></pre> <p>You can also check all rows with problematic values using <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>print(df[pd.to_numeric(df.Temp1, errors='coerce').isnull()])
print(df[pd.to_numeric(df.Temp2, errors='coerce').isnull()])
</code></pre>
2
2016-09-20T20:46:07Z
[ "python", "pandas", "mean", "numeric", "resampling" ]
Determining Increasing or Decreasing Values based on Sequential Rows Per Key
39,603,514
<p>I have a dataset of 90,000 records. These 90,000 records belong to about 3,000 unique Keys. For each Key, the values are ordered starting with an ItemNumber of 1 and going up to 'n'. </p> <p>For each Key 1 to n, I want to compare the 2nd row to the first row, the 3rd row to the 2nd row and so on. A sample of my table is given below with some values populated as an example of what's expected.</p> <p>I have a column for milepost values and want to know whether the values are ascending or descending between consecutive records. </p> <p>Example image found at URL: <a href="http://i.imgur.com/i1nuAK9.png" rel="nofollow">http://i.imgur.com/i1nuAK9.png</a> since I am too new to embed a picture.</p> <p>I am very new to python and am having trouble getting started. Even if I can compare "ProjectKey A, ItemNum 2" to "ProjectKey A, ItemNum 1" to know that it's ascending what can I compare the 1st record to? I am having trouble determining which direction to parse. </p> <p>Any help would be greatly appreciated!</p> <p>EDIT: Snippet as a csv</p> <pre><code>ProjectKey,ItemNum,BMP,Direction
A,1,0.2,_
A,2,1.7,_
A,3,2.5,_
A,4,5,_
A,5,9,_
A,6,12,_
B,1,25,_
B,2,24.2,_
B,3,21.7,_
B,4,20.3,_
C,1,3,_
C,2,4,_
C,3,5,_
C,4,6,_
C,5,5,_
C,6,4,_
C,7,3,_
C,8,2,_
</code></pre>
0
2016-09-20T20:51:42Z
39,603,694
<p>Once you have transferred the data to text, you could use a set of lists and a for loop to parse through and compare (note the <code>len(keylistA) - 1</code> bound, so the loop never compares past the last element):</p> <pre><code>keylistA = [0.2, 1.7, 2.5, 5, 9, 12] listAdirection = ['(start)'] for i in range(0, len(keylistA) - 1): if keylistA[i] &gt; keylistA[i+1]: listAdirection.append('DESC') elif keylistA[i] &lt; keylistA[i+1]: listAdirection.append('ASC') else: listAdirection.append('SAME') </code></pre> <p>This would give you some lists like this:</p> <pre><code>listAdirection = ['(start)', 'ASC', 'ASC', 'ASC', 'ASC', 'ASC'] </code></pre> <p>You could save the lists in a nested-list format, or use some sort of dictionary setup keyed by ProjectKey. Of course, this is all highly dependent on how you choose to export the data from those columns into text.</p>
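<p>A more compact alternative - just a sketch that assumes the exact column names from the CSV snippet in the question (<code>ProjectKey</code>, <code>ItemNum</code>, <code>BMP</code>) and a hypothetical file name - is to let pandas compute the row-to-row difference within each key:</p>
<pre><code>import pandas as pd

df = pd.read_csv('mileposts.csv')                 # hypothetical file name
df = df.sort_values(['ProjectKey', 'ItemNum'])

# BMP difference between consecutive rows of the same ProjectKey
diff = df.groupby('ProjectKey')['BMP'].diff()

df['Direction'] = 'SAME'
df.loc[diff &gt; 0, 'Direction'] = 'ASC'
df.loc[diff &lt; 0, 'Direction'] = 'DESC'
df.loc[diff.isnull(), 'Direction'] = '(start)'    # first row of each key has nothing to compare to
print(df)
</code></pre>
<p>Because <code>diff()</code> is applied per group, the comparison never crosses from one ProjectKey into the next.</p>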
0
2016-09-20T21:04:47Z
[ "python", "list", "direction" ]
Python authentication for ServiceNow JSON Web Service
39,603,523
<p>I'm working on a reporting tool in Python which would fetch data from JSON Web Service of ServiceNow. Our ServiceNow instance uses normal user id / pw authentication plus SHA-1 certification. My problem is that I'm not able to access the JSON Web Service result page (<a href="https://servicenowserver.com/table.do?JSONv2&amp;sysparm_query=active=true" rel="nofollow">https://servicenowserver.com/table.do?JSONv2&amp;sysparm_query=active=true</a>^number=12345678) with my script to grab the data from there. I can log in with my script to the main page (<a href="https://servicenowserver.com" rel="nofollow">https://servicenowserver.com</a>), it authenticates and it gives HTTP 200 but when I'm calling the JSON webservice page is gives me HTTP 401 (Unauthorized).</p> <p>Once I logged in through a browser to ServiceNow and my session started I can call the JSON service on a new tab an it shows me the result, but this is not working with my Python script. I tried to use both <code>urllib3</code> and <code>requests</code> libraries together with a session parameter to keep the session opened but it's not working neither. I think my script just closes the session immediately after I call the main page. I tried to pass cookies as well without any luck.</p> <p>Long story short: It works from my browser but it doesn't if I use Python script.</p> <p>Do you have any idea how should I authenticate to get the JSON result? Or at least if someone can guide me how can I get a more detailed debug?</p> <p>Below you can find one of the solutions that I have tried:</p> <pre><code>import requests s = requests.session() s.auth = ('user', 'password') s.verify = 'sn.cer' r = s.get('https://servicenowserver.com', verify=True) print (r) # This gives HTTP 200 r2 = s.get ('https://servicenowserver.com/table.do?JSONv2&amp;sysparm_query=active=true^number=12345678', verify=True, cookies=s.cookies) print (r2) # This gives HTTP 401 </code></pre>
0
2016-09-20T20:52:08Z
39,663,432
<p>I could manage to figure out the solution, so I'm publishing it here. What I'm going to publish here is not the exact solution for my problem rather a general approach to understand and check how authentication can be tracked. I used this technique to trace the login process in my case.</p> <p>In my case ServiceNow uses a cookie based authentication and passes information back and forth among 4 pages. The first page generates an ID called NSC, and passes it towards to the second page as a cookie to generate another ID called SMSESSION ID, which is then passed to a third page together with NSC ID in the cookie to generate the final JSESSION ID. Finally the process passes over all previously generated 3 IDs to the login page in a cookie to validate the session.</p> <p>I used Google Developer Tools to figure this out. What I would recommend you to do is the following.</p> <p>1.) Go to the login page in Google Chrome that you would like to pass and wait until the site loads. Do not log in yet.</p> <p>2.) Open Developer Tools (Right click, Inspect elements menu option). If you are familiar with other browser's dev capabilities that's also fine.</p> <p>3.) Go to Application tab of Dev Tools and click on Clear storage at the left side menu bar. This will clear all data which is stored for this page. You can do the same in Setting menu of Chrome by clearing the cookies and other data. This is required to clear all historical steps which happened already on the page to not to make any confusion.</p> <p>4.) Once it's done go to the Network tab of Dev Tools and click on the Clear menu option (next to the Record button). This will clear the Network log history.</p> <p>5.) As a next step tick the Preserve log checkbox on the Network tab. This will allow us to keep track of every steps even in case of any redirection. If you don't tick this option you will loose all data once your login page redirects you to somewhere else, because it clears the Network log.</p> <p>6.) Now as we removed all historical data and set up everything, we can start the investigation. Log in to the page with your user id and password and keep Developer Tools open, so you can see all network requests. Wait until the login process finishes and start to examine your network log entries one by one.</p> <p>7.) You will see some GET and POST requests. This is your login process flow. Open the first one by double clicking on it. It will show you information organized in sections just like (General, Response Headers, Request Headers, Query Params, Form Data etc.). This is the information exchange which happens between the web server and the client (your machine). You need to simulate the same with you script.This means whatever you see in the Request Headers section, you need to pass exactly the same with your script. 
This way you will receive the very same Response Headers and you can grab all the information from there which is required to move forward.</p> <p>Let me show you and example.</p> <p>In my first POST request I can see the following in the Network log:</p> <pre><code>General Request URL:https://mysnserver.net/siteminderagent/forms/dssologinprod.fcc?TYPE=33554433&amp;REALMOID=06-0cffd45f-7ca7-106f-bbab-84fb3af10000&amp;GUID=&amp;SMAUTHREASON=0&amp;METHOD=GET&amp;SMAGENTNAME=-SM-28THtkr3KQi%2fJmb193GjY0nVjpKo6ULc%2fJNV5hRyjzC17qWZfgyVPkR%2f7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3&amp;TARGET=-SM-%2f Request Method:POST Status Code:302 Found Remote Address:1.1.1.196:443 Response Headers Cache-Control:no-store Connection:Keep-Alive Content-Length:1541 Content-Type:text/html; charset=iso-8859-1 Date:Wed, 21 Sep 2016 19:11:46 GMT Keep-Alive:timeout=5, max=496 Location:https://anothersite.com/SmMakeCookie.ccc?SMSESSION=-SM-w0Gp2DpiPEG&amp;PERSIST=0&amp;TARGET=-SM-https%3a%2f%2fservicemanagement%2net%2f Set-Cookie:SMSESSION=w0Gp2DpiPEGPrLepzXds9qUTVER/Xl75WO36n37IxRpLaE6dwQPwN2+iaNn4rQZODb+65k2Gy9fggnKU04I7rSU6; path=/; domain=.mysnserver.net; secure Set-Cookie:SMIDENTITY=EoIkGNtD3Y+FBWumdJuml3J78o61Qtc07b73XmqEeze; path=/; domain=.mysnserver.net; secure Set-Cookie:NSC_1.1.1.196-443-C72169=ffffffffaaa3746145525d5f4f58455e445a4a4253a5;expires=Wed, 21-Sep-2016 21:11:47 GMT;path=/;secure;httponly Set-Cookie:SMTRYNO=; expires=Fri, 25 Mar 2016 19:11:46 GMT; path=/; domain=.mysnserver.net; secure Request Headers Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Accept-Encoding:gzip, deflate, br Accept-Language:en-US,en;q=0.8 Cache-Control:max-age=0 Connection:keep-alive Content-Length:238 Content-Type:application/x-www-form-urlencoded Host:mysnserver.net Origin:https://mysnserver.net Referer:https://anothersite.net/forms/dssologinprod.fcc?TYPE=33554433&amp;REALMOID=06-0cffd45f-7ca7-106f-bbab-84fb3af10000&amp;GUID=&amp;SMAUTHREASON=0&amp;METHOD=GET&amp;SMAGENTNAME=-SM-28THtkr3KQi%2fJmb193GjY0nVjpKo6ULc%2fJNV5hRyjzC17qWZfgyVPkR%2f7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3&amp;TARGET=-SM-%2f Upgrade-Insecure-Requests:1 User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36 Query String Parameters TYPE:33554433 REALMOID:06-0cffd45f-7ca7-106f-bbab-84fb3af10000 GUID: SMAUTHREASON:0 METHOD:GET SMAGENTNAME:-SM-28THtkr3KQi/Jmb193GjY0nVjpKo6ULc/JNV5hRyjzC17qWZfgyVPkR/7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3 TARGET:-SM-/ Form Data SMENC:ISO-8859-1 SMLOCALE:US-EN target:/ smquerydata: smauthreason:0 smagentname:28THtkr3KQi/Jmb193GjY0nVjpKo6ULc/JNV5hRyjzC17qWZfgyVPkR/7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3 postpreservationdata: USER:my_userid PASSWORD:my_password </code></pre> <p>Whatever you can see in the Request Headers section, that needs to be passed to the first URL to get the Response Headers information. If you see in the Response Headers I have received several IDs which was given by the server. This means that I need to prepare my first request in Python to pass the very same information that I have in the Request Header. 
Like this:</p> <pre><code>auth_url1 = 'https://mysnserver.net/siteminderagent/forms/dssologinprod.fcc?TYPE=33554433&amp;REALMOID=06-0cffd45f-7ca7-106f-bbab-84fb3af10000&amp;GUID=&amp;SMAUTHREASON=0&amp;METHOD=GET&amp;SMAGENTNAME=-SM-28THtkr3KQi%2fJmb193GjY0nVjpKo6ULc%2fJNV5hRyjzC17qWZfgyVPkR%2f7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3&amp;TARGET=-SM-%2f' # Initiating session s = requests.session() request_header_1 = { 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Encoding':'gzip, deflate, br', 'Accept-Language':'en-US,en;q=0.8', 'Cache-Control':'max-age=0', 'Connection':'keep-alive', 'Content-Length':'238', 'Content-Type':'application/x-www-form-urlencoded', 'Host':'mysnserver.net', 'Origin':'https://mysnserver.net', 'Referer':'https:///anothersite.net/forms/dssologinprod.fcc?TYPE=33554433&amp;REALMOID=06-0cffd45f-7ca7-106f-bbab-84fb3af10000&amp;GUID=&amp;SMAUTHREASON=0&amp;METHOD=GET&amp;SMAGENTNAME=-SM-28THtkr3KQi%2fJmb193GjY0nVjpKo6ULc%2fJNV5hRyjzC17qWZfgyVPkR%2f7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3&amp;TARGET=-SM-%2f', 'Upgrade-Insecure-Requests':'1', 'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36' } form_data_1 = { 'SMENC':'ISO-8859-1', 'SMLOCALE':'US-EN', 'target':'/', 'smquerydata':'', 'smauthreason':'0', 'smagentname':'28THtkr3KQi/Jmb193GjY0nVjpKo6ULc/JNV5hRyjzC17qWZfgyVPkR/7EAWoDVu3Gd3y3kTm3N2p0B8KVp0Hixjin0ZsDZ3', 'postpreservationdata':'', 'USER':'my_userid', #&lt;----- Put your user ID here 'PASSWORD':'my_password' #&lt;----- Put your password here } r = s.post(auth_url1, headers=request_header_1, data=form_data_1, verify=False, allow_redirects=False) # Get NSC ID from the response header which needs to be passed over in the 3rd request nsc_id = r.cookies.keys()[2] + "=" + r.cookies.values()[2] </code></pre> <p>That's it. You need to follow the very same process if you have more redirection until you pass the last page and your session authenticates. After this you can use the cookie information that you collected to authenticate all of your upcoming requests. As you can see I have initiated a session with <code>s = requests.session()</code> command, that I can use to submit all of my requests without passing my user id and pw for all requests. Take care when you need to send a GET and when you need to send a POST request. You can see this in the General information section of the header.</p> <p>One more important note. Use <code>allow_redirects=False</code> in your <code>requests</code> if you have redirection on your site. In this way you can make sure that your request is not got redirected to other sites an you get back the proper Response Headers information.</p>
1
2016-09-23T14:35:41Z
[ "python", "authentication", "servicenow" ]
How to Combine CSV Files with Pandas (And Add Identifying Column)
39,603,567
<p>How do I add multiple CSV files together and an extra column to indicate where each file came from?</p> <p>So far I have:</p> <pre><code>import os import pandas as pd import glob os.chdir('C:\...') # path to folder where all CSVs are stored for f, i in zip(glob.glob('*.csv'), short_list): df = pd.read_csv(f, header = None) df.index = i * len(df) dfs.append(df) all_data = pd.concat(dfs, ignore_index=True) </code></pre> <p>It all works well, except for the identifying column. <code>i</code> is a list of <code>strings</code> that I want to put in column A of <code>all_data</code>. One string for every row of each column. Instead it returns a lot of numbers, and gives a <code>TypeError: Index(....) must be called witha collection of some kind</code>. </p> <p>Expected output:</p> <pre><code>str1 file1entry1 str1 file1entry2 str1 file1entry3 str2 file2entry1 str2 file2entry2 str2 file2entry3 </code></pre> <p>Where <code>short_list = ['str1', 'str2', 'str3']</code>, and <code>file1entery1, file2entry2... etc</code> comes from the CSV files I already have.</p> <p>Solution: I wasn't able to get it all in one line like the solution suggested, however it pointed me in the right direction. </p> <pre><code>for f zip(glob.glob('*csv')): df = pd.read_csv(f, header = None) df = df.assign(id = os.path.basename(f)) # simpler than pulling from the array. Adds file name to each line. dfs.append(df) all_data = pd.concat(dfs) </code></pre>
1
2016-09-20T20:56:00Z
39,604,159
<p>you can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow">.assign(id=i)</a> method, which will add an <code>id</code> column to each parsed CSV and populate it with the corresponding <code>i</code> value:</p> <pre><code>df = pd.concat([pd.read_csv(f, header = None).assign(id=i) for f, i in zip(glob.glob('*.csv'), short_list)], ignore_index=True) </code></pre>
3
2016-09-20T21:40:03Z
[ "python", "csv", "pandas" ]
Key error in data-frame handling
39,603,571
<p>I have a dataframe <code>stockData</code>. A part example looks like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ BBG.XCSE.CARLB.S_FX ..... date 2015-09-11 0.1340 490.763 2015-09-14 0.1340 484.263 2015-09-15 0.1340 484.755 2015-09-16 0.1340 507.703 2015-09-17 0.1340 514.104 ..... </code></pre> <p>each column has a data type , dtype: float64</p> <p>I am looping a static data dataframe which contans every name in my universe and I iterate through this, then iterating through each day for each name (in this example the name is BBG.XCSE.CARLB.S but there are hundreds of names in reality) taking the column 'name_LAST_ADJ' and multiplying by the column 'name_FX'.<br> the code that I am using looks like:</p> <pre><code>for i, row in staticData.iterrows(): unique_id = i #Create new column for the current name that will take the result of the following calculation stockData[unique_id+"_LAST_ADJ_EUR"] = np.nan #Perform calculation - this is where I get the KeyError when there is no data in the name_ADJ_LAST column. stockData[unique_id+"_LAST_ADJ_EUR"] = stockData[unique_id+"_FX"]*stockData[unique_id+"_LAST_ADJ"] return stockData </code></pre> <p>However sometimes the data does not exist (because there is no history for the name) and I receive a key error because the columns for the name are not in the data-frame.</p> <p>With the above code I am trying to create an additional column called name_LAST_ADJ_EUR and when there is data it should look like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ BBG.XCSE.CARLB.S_FX BBG.XCSE.CARLB.S_LAST_ADJ_EUR date 2015-09-11 0.1340 490.763 65.762242 2015-09-14 0.1340 484.263 64.891242 2015-09-15 0.1340 484.755 64.95717 2015-09-16 0.1340 507.703 68.032202 2015-09-17 0.1340 514.104 68.889936 </code></pre> <p>and when there is data no data in the name_LAST_ADJ column is there a way generate an NaN output for he column so it looks like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ_EUR date 2015-09-11 NaN 2015-09-14 NaN 2015-09-15 NaN 2015-09-16 NaN 2015-09-17 NaN </code></pre> <p>I have tried using the following:</p> <pre><code>stockData[unique_id+"_LAST_ADJ_EUR"] = np.where((stockData[unique_id+"_LAST_ADJ"] == np.nan),stockData[unique_id+"_LAST_ADJ_EUR"]='NaN',stockData[unique_id+"_LAST_ADJ_EUR"] = stockData[unique_id+"_FX"] * stockData[unique_id+"_LAST_ADJ"]) </code></pre> <p>which would be fine if there was a column but when there is no column to reference it throws the KeyError exception.</p> <p>Any help much appreciated</p>
3
2016-09-20T20:56:15Z
39,603,747
<p>In your <code>for</code> loop, try adding something akin to</p> <pre><code>for uid, row in staticData.iterrows(): if uid not in stockData.columns: stockData[uid + "_FX"] = np.nan stockData[uid + "_LAST_ADJ"] = np.nan # continue with what you have: # no longer needed #stockData[uid+"_LAST_ADJ_EUR"] = np.nan stockData[uid+"_LAST_ADJ_EUR"] = stockData[uid+"_FX"]*stockData[uid+"_LAST_ADJ"] </code></pre> <p>While doing it inside the <code>for</code> loop is probably most efficient, you could also do it all at once like:</p> <pre><code>stockData = pd.concat([stockData, pd.DataFrame(columns=staticData.index)]) </code></pre> <p>For example:</p> <pre><code>df = pd.DataFrame(np.random.rand(10, 3), columns=list('abc')) a b c 0 0.627303 0.183463 0.714470 1 0.458124 0.135907 0.515340 2 0.629373 0.725247 0.306275 3 0.113927 0.259965 0.996407 4 0.321131 0.734002 0.766044 5 0.740858 0.238741 0.531810 6 0.063990 0.974056 0.178260 7 0.977651 0.047287 0.435681 8 0.972060 0.606288 0.600896 9 0.250377 0.807237 0.153419 pd.concat([df, pd.DataFrame(columns=list('abcde'))]) a b c d e 0 0.627303 0.183463 0.714470 NaN NaN 1 0.458124 0.135907 0.515340 NaN NaN 2 0.629373 0.725247 0.306275 NaN NaN 3 0.113927 0.259965 0.996407 NaN NaN 4 0.321131 0.734002 0.766044 NaN NaN 5 0.740858 0.238741 0.531810 NaN NaN 6 0.063990 0.974056 0.178260 NaN NaN 7 0.977651 0.047287 0.435681 NaN NaN 8 0.972060 0.606288 0.600896 NaN NaN 9 0.250377 0.807237 0.153419 NaN NaN </code></pre>
1
2016-09-20T21:08:32Z
[ "python", "pandas" ]
Key error in data-frame handling
39,603,571
<p>I have a dataframe <code>stockData</code>. A part example looks like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ BBG.XCSE.CARLB.S_FX ..... date 2015-09-11 0.1340 490.763 2015-09-14 0.1340 484.263 2015-09-15 0.1340 484.755 2015-09-16 0.1340 507.703 2015-09-17 0.1340 514.104 ..... </code></pre> <p>each column has a data type , dtype: float64</p> <p>I am looping a static data dataframe which contans every name in my universe and I iterate through this, then iterating through each day for each name (in this example the name is BBG.XCSE.CARLB.S but there are hundreds of names in reality) taking the column 'name_LAST_ADJ' and multiplying by the column 'name_FX'.<br> the code that I am using looks like:</p> <pre><code>for i, row in staticData.iterrows(): unique_id = i #Create new column for the current name that will take the result of the following calculation stockData[unique_id+"_LAST_ADJ_EUR"] = np.nan #Perform calculation - this is where I get the KeyError when there is no data in the name_ADJ_LAST column. stockData[unique_id+"_LAST_ADJ_EUR"] = stockData[unique_id+"_FX"]*stockData[unique_id+"_LAST_ADJ"] return stockData </code></pre> <p>However sometimes the data does not exist (because there is no history for the name) and I receive a key error because the columns for the name are not in the data-frame.</p> <p>With the above code I am trying to create an additional column called name_LAST_ADJ_EUR and when there is data it should look like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ BBG.XCSE.CARLB.S_FX BBG.XCSE.CARLB.S_LAST_ADJ_EUR date 2015-09-11 0.1340 490.763 65.762242 2015-09-14 0.1340 484.263 64.891242 2015-09-15 0.1340 484.755 64.95717 2015-09-16 0.1340 507.703 68.032202 2015-09-17 0.1340 514.104 68.889936 </code></pre> <p>and when there is data no data in the name_LAST_ADJ column is there a way generate an NaN output for he column so it looks like:</p> <pre><code>Name: BBG.XCSE.CARLB.S_LAST_ADJ_EUR date 2015-09-11 NaN 2015-09-14 NaN 2015-09-15 NaN 2015-09-16 NaN 2015-09-17 NaN </code></pre> <p>I have tried using the following:</p> <pre><code>stockData[unique_id+"_LAST_ADJ_EUR"] = np.where((stockData[unique_id+"_LAST_ADJ"] == np.nan),stockData[unique_id+"_LAST_ADJ_EUR"]='NaN',stockData[unique_id+"_LAST_ADJ_EUR"] = stockData[unique_id+"_FX"] * stockData[unique_id+"_LAST_ADJ"]) </code></pre> <p>which would be fine if there was a column but when there is no column to reference it throws the KeyError exception.</p> <p>Any help much appreciated</p>
3
2016-09-20T20:56:15Z
39,603,963
<p>I'd start by parsing your columns into a multiindex</p> <pre><code>tups = df.columns.to_series() \ .str.extract(r'(.*)_(LAST_ADJ|FX)', expand=False) \ .apply(tuple, 1).tolist() df.columns = pd.MultiIndex.from_tuples(tups).swaplevel(0, 1) df </code></pre> <p><a href="http://i.stack.imgur.com/1dSlS.png" rel="nofollow"><img src="http://i.stack.imgur.com/1dSlS.png" alt="enter image description here"></a></p> <p>Then multiplication becomes simple</p> <pre><code>df.LAST_ADJ * df.FX </code></pre> <p><a href="http://i.stack.imgur.com/OMDls.png" rel="nofollow"><img src="http://i.stack.imgur.com/OMDls.png" alt="enter image description here"></a></p> <p>Tricky part for me is inserting it back with <code>'EUR'</code>. I did this</p> <pre><code>pd.concat([df, pd.concat([df.LAST_ADJ.mul(df.FX)], axis=1, keys=['EUR'])], axis=1) </code></pre> <p><a href="http://i.stack.imgur.com/5SiHG.png" rel="nofollow"><img src="http://i.stack.imgur.com/5SiHG.png" alt="enter image description here"></a></p>
2
2016-09-20T21:25:04Z
[ "python", "pandas" ]
tastypie: sequence item 0: expected string, function found
39,603,617
<pre><code>class ActionResource(ModelResource): #place = fields.ManyToManyField('restaurant.resource.PlaceResource', 'place', full=True, null=True) place = fields.ManyToManyField(PlaceResource, attribute=lambda bundle: PlaceInfo.objects.filter(action=bundle.obj)) class Meta: queryset = ActionInfo.objects.all() resource_name = 'action' filtering = { 'place' : ALL_WITH_RELATIONS, } class PlaceResource(ModelResource): location = fields.ManyToManyField(PlaceLocationResource, 'location') class Meta: queryset = PlaceInfo.objects.all() resource_name = 'place' filtering = { 'id' : ALL, } </code></pre> <p>This is my resource.py. With this code i want to filter places with some id:</p> <pre><code>http://localhost/api/v1/action/?place__id=2&amp;format=json </code></pre> <p>By action id i can find only one place, action is uniqe for place. With this url i get error: </p> <pre><code>sequence item 0: expected string, function found </code></pre> <p>Django models look like PlaceModel has ManyToMany field with reference to ActionModel</p> <pre><code>http://localhost/api/v1/action/2/?format=json </code></pre> <p>Gives me normal json with reference to places</p> <p><strong>Additional:</strong></p> <p>My Django models:</p> <pre><code>class ActionInfo(models.Model): name = models.ForeignKey(ActionName, related_name="title", on_delete=models.CASCADE, null=True, blank=True) class PlaceInfo(models.Model): name = models.ForeignKey(PlaceName, related_name="title", on_delete=models </code></pre> <p>.CASCADE, null=True) action = models.ManyToManyField(ActionInfo, related_name="action", blank=True)</p> <p>I've fond that i must constract my resources like this:</p> <pre><code>class ActionResource(ModelResource): place = fields.ToOneField(PlaceResource, 'place') class PlaceResource(ModelResource): location = fields.ManyToManyField(PlaceLocationResource, action = fields.ToManyField('menus.resources.ActionResource', 'action', full=True) </code></pre> <p>But with such code i getting:</p> <pre><code>error: "The model '&amp;lt;ActionInfo: name&amp;gt;' has an empty attribute 'place' and doesn't allow a null value." </code></pre> <p><strong>Solve:</strong></p> <pre><code>class ActionResource(ModelResource): place = fields.ToManyField(PlaceResource, 'action', null=True) </code></pre> <p>Now it workes with:</p> <pre><code>http://localhost/api/v1/action/?place=1&amp;format=json </code></pre>
0
2016-09-20T20:59:59Z
39,622,751
<p>Try setting <code>ActionResource.place.attribute</code> equal to the name (or reverse name, depending on your models) of the relation.</p> <pre><code>class ActionResource(ModelResource): place = fields.ToOneField(PlaceResource, 'reverse_name_here') </code></pre>
0
2016-09-21T17:16:08Z
[ "python", "django", "tastypie" ]
nltk semantic word substitution
39,603,633
<p>I'm trying to find different ways of writing "events in [city]" which are semantically similar. I am trying to do this by finding words that are semantically similar to "events" so I can substitute them in. </p> <p>To find these words I'm using nltk's wordnet corpus, but I'm getting some pretty strange results. For example, using the hyponyms of 'event.n.01', I'm getting "Miracles in Ottawa". </p> <p>co-hyponyms and hypernyms seem just as bad or worse. I wonder if anyone understands the structure better and can offer a potential solution?</p> <p>Here's some sample code:</p> <pre><code>!/usr/bin/python3 import nltk lemma = 'event.n.01' synset = nltk.corpus.wordnet.synset(lemma) print("%s: %s" % (synset.name(), synset.definition())) print("\nFinding hyponyms...") print([s.split('.')[0] for w in synset.hyponyms() for s in w.lemma_names()]) print("\nFinding hypernym paths...") print([s.split('.')[0] for hyprs in synset.hypernym_paths() for hypr in hyprs for s in hypr.lemma_names()]) print("\nFinding co-hyponyms...") for hypers in synset.hypernym_paths(): for hyper in hypers: print(hyper.name()) for hypos in hyper.hyponyms(): print("\t%s" % (', '.join(hypos.lemma_names()))) print(synset.similar()) </code></pre>
0
2016-09-20T21:00:46Z
39,605,656
<p>The <em>hyponyms</em> of "event" are types of "event". One of them is "miracle", some others are:</p> <pre><code>&gt;&gt;&gt; [s for w in synset.hyponyms() for s in w.lemma_names][:7] # is 7 enough? :) ['zap', 'act', 'deed', 'human_action', 'human_activity', 'happening', 'occurrence'] </code></pre> <p>"Event's" <em>hypernyms</em> are the oposite. Terms that "event" is a type of:</p> <pre><code>&gt;&gt;&gt; synset.hypernyms() [Synset('psychological_feature.n.01')] </code></pre> <p>You can see that "event" is one of it's <em>hyponyms</em>:</p> <pre><code>&gt;&gt;&gt; synset.hypernyms()[0].hyponyms() [Synset('motivation.n.01'), Synset('cognition.n.01'), Synset('event.n.01')] </code></pre> <p>Those are not really "similar" terms ("Psychological features in Ottawa" may seem like a correct result to a robot, but not to humans).</p> <p>Perhaps it is better to go at it from a completely different angle, e.g.</p> <pre><code>&gt;&gt;&gt; text = nltk.Text(word.lower() for word in nltk.corpus.brown.words()) &gt;&gt;&gt; text.similar('event') time day man order state way case house one place action night point situation work year act and area audience </code></pre> <p>Now, take those and sort them e.g. by path_similarity:</p> <pre><code>&gt;&gt;&gt; words = 'time day man order state way case house one place action night point'\ ... ' situation work year act and area audience'.split() &gt;&gt;&gt; &gt;&gt;&gt; def get_symilarity(synset, word): ... return max([synset.path_similarity(synset2) ... for synset2 in nltk.corpus.wordnet.synsets(word)]+[0]) &gt;&gt;&gt; &gt;&gt;&gt; sorted(words, key=lambda w: get_symilarity(synset, w), reverse=True)[:5] ['act', 'case', 'action', 'time', 'way'] </code></pre> <p>Is that a good result? I don't know. I guess it could work: "Acts in Ottawa", "Cases in New York", "Action in Rome", "Time in Tokyo", "Ways in Amsterdam"...</p>
1
2016-09-21T00:22:10Z
[ "python", "nlp", "nltk" ]
nltk semantic word substitution
39,603,633
<p>I'm trying to find different ways of writing "events in [city]" which are semantically similar. I am trying to do this by finding words that are semantically similar to "events" so I can substitute them in. </p> <p>To find these words I'm using nltk's wordnet corpus, but I'm getting some pretty strange results. For example, using the hyponyms of 'event.n.01', I'm getting "Miracles in Ottawa". </p> <p>co-hyponyms and hypernyms seem just as bad or worse. I wonder if anyone understands the structure better and can offer a potential solution?</p> <p>Here's some sample code:</p> <pre><code>!/usr/bin/python3 import nltk lemma = 'event.n.01' synset = nltk.corpus.wordnet.synset(lemma) print("%s: %s" % (synset.name(), synset.definition())) print("\nFinding hyponyms...") print([s.split('.')[0] for w in synset.hyponyms() for s in w.lemma_names()]) print("\nFinding hypernym paths...") print([s.split('.')[0] for hyprs in synset.hypernym_paths() for hypr in hyprs for s in hypr.lemma_names()]) print("\nFinding co-hyponyms...") for hypers in synset.hypernym_paths(): for hyper in hypers: print(hyper.name()) for hypos in hyper.hyponyms(): print("\t%s" % (', '.join(hypos.lemma_names()))) print(synset.similar()) </code></pre>
0
2016-09-20T21:00:46Z
39,623,043
<p>You can take a deep learning approach. Train a word2vec model and get the most similar vectors to the "event" vector.</p> <p>You can test a model here <a href="http://turbomaze.github.io/word2vecjson/" rel="nofollow" title="Word2Vec">Word2Vec Demo</a></p>
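<p>A rough sketch of that idea with gensim, training a small model on the Brown corpus that the other answer already uses (the neighbours will be noisier than with a large pre-trained model, and the API names assume a reasonably recent gensim):</p>
<pre><code>import nltk
from gensim.models import Word2Vec

# train a small word2vec model on the Brown corpus
sentences = [[w.lower() for w in sent] for sent in nltk.corpus.brown.sents()]
model = Word2Vec(sentences, min_count=5)

# words whose vectors are closest to "event"
print(model.wv.most_similar('event', topn=5))
</code></pre>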
1
2016-09-21T17:33:29Z
[ "python", "nlp", "nltk" ]
Clustering dense data points horizontally
39,603,813
<p>I have a set of about 34,000 data labels with their respective features (state probabilities) in a 2D numpy array that, visualised as a scatter plot, looks <img src="http://i.stack.imgur.com/UlYJG.png" alt="like this">.</p> <p>It's easy to see that the majority of b data points is at the bottom and quite dense. I would like to use a clustering algorithm to extract the bottom area. I don't strive for a perfect result. This is simply about extracting the majority of b points.</p> <p>So far I have tried the DBSCAN algorithm:</p> <pre><code>import sklearn.cluster as sklc data1, data2 = zip(*dist_list[1]) data = np.array([data1, data2]).T core_samples, labels_db = sklc.dbscan( data, # array has to be (n_samples, n_features) eps=2.0, min_samples=5, metric='euclidean', algorithm='auto' ) core_samples_mask = np.zeros_like(labels_db, dtype=bool) core_samples_mask[core_samples] = True unique_labels = set(labels_db) n_clusters_ = len(unique_labels) - (1 if -1 in labels_db else 0) colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels))) for k, col in zip(unique_labels, colors): if k == -1: # Black used for noise. col = 'k' class_member_mask = (labels_db == k) xy = data[class_member_mask &amp; core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=6) xy = data[class_member_mask &amp; ~core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'x', markerfacecolor=col, markeredgecolor='k', markersize=4) plt.rcParams["figure.figsize"] = (15, 15) plt.title('Estimated number of clusters: %d' % n_clusters_) plt.show() </code></pre> <p>which yields <img src="http://i.stack.imgur.com/PPX1E.png" alt="this plot">.</p> <p>Increasing the minimal amount of samples simply results in smaller vertical lines being classified as noise and the longer (and denser) vertical lines staying.</p> <p>I also tried clustering with <code>scipy.cluster.hierarchy</code>:</p> <pre><code>thresh = 2 clusters = hcluster.fclusterdata(data, thresh, criterion="distance") plt.scatter(*data.T, c=clusters) title = "t=%f, n=%d" % (thresh, len(set(clusters))) plt.title(title) plt.show() </code></pre> <p>which resulted in a similar vertical classification. Please view the comment for the plot. With my reputation I'm not allowed to post more than two links yet.</p> <p>Now my question is, did I make a mistake in the calibration of the algorithms? Or was my choice of algorithms wrong in the first place? How can I extract the dense area of b data points?</p>
0
2016-09-20T21:13:30Z
39,608,788
<ol> <li><p>Never include ID attributes. Your "node index" supposedly should not be used for similarity computations, should it?</p></li> <li><p>When your <strong>attributes have different units and scale</strong> you need to be very careful. A popular heuristic is <code>StandardScaler</code>, i.e. normalize each attribute to have unit variance. But given the distribution of your data (uniform on x, skewed on y) I do not think the use of standardization is well supported by theory - it assumes Gaussian distributions.</p></li> </ol> <p>The combination of these two is why you get this result: your x axis completely dominates. It literally has over 10000x the influence of your y axis, so you are effectively clustering only based on Node ID...</p> <p><strong>Never assume clustering "just works"</strong>. All of these methods are <em>very</em> sensitive to preprocessing, and therefore need to be applied with care.</p>
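<p>As a minimal, purely illustrative sketch of the first point - with <code>eps</code> and <code>min_samples</code> left as placeholder values you would still have to tune - you could cluster on the probability feature alone (here assumed to be <code>data2</code>, the y-axis values) and keep the node index only for plotting:</p>
<pre><code>import numpy as np
from sklearn.cluster import DBSCAN

data1, data2 = zip(*dist_list[1])
X = np.array(data2, dtype=float).reshape(-1, 1)   # feature only, node index excluded

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(X)

# points labelled -1 are noise; everything else belongs to a dense band
dense_mask = labels != -1
</code></pre>
<p>Whether 0.05 and 20 are sensible depends entirely on the scale and density of your probability values, so treat them strictly as placeholders.</p>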
0
2016-09-21T06:17:23Z
[ "python", "scipy", "scikit-learn", "cluster-analysis", "dbscan" ]
Python: How to activate an event by keypress with pyautogui?
39,603,897
<p>I've installed the pyautogui package to use the .hotkey() function to trigger an event. For example: If you press the key combination "Ctrl + c" the console shall display the message "Hello world".</p> <p>I tried something like this:</p> <pre><code>while True: if pyautogui.hotkey("ctrl", "c"): print("Hello World") </code></pre> <p>It's wrong I know but is there a possibility to print this message when I've pressed Ctrl and C at the same time?</p>
0
2016-09-20T21:20:01Z
39,608,720
<p>I solved the problem myself. It turns out you don't need the pyautogui module at all; tkinter key bindings are enough (note that they only fire while the tkinter window has focus):</p> <pre><code>from tkinter import * root = Tk() def keyevent(event): if event.keycode == 67: #Check if pressed key has code 67 (character 'c') print("Hello World") root.bind("&lt;Control-Key&gt;", keyevent) #Fires when you press Ctrl and another key at the same time root.mainloop() </code></pre>
0
2016-09-21T06:13:09Z
[ "python", "events", "key", "key-events", "pyautogui" ]
Reading a tarfile into BytesIO
39,603,978
<p>I'm using <a href="https://docker-py.readthedocs.io/en/latest/api/#put_archive" rel="nofollow">Docker-py</a> API to handle and manipulate <code>Docker</code> containers. In the <code>API</code>, the <code>put_archive</code> function expects the <code>data</code> field to be in bytes. So, using the <code>tarfile</code> library I have:</p> <pre><code>import tarfile import io container = client.create_container(image="myimage", command="/bin/bash") source = "~/.ssh/id_rsa.pub" tarfile = create_tar_file(path=source, name="keys.tar") # tarfile = "keys.tar" # How can I read the tar file was a BytesIO() object? data = io.BytesIO() client.put_archive(container=container, path="/tmp", data=data) </code></pre> <p>The API says:</p> <blockquote> <p>put_archive</p> <p>Insert a file or folder in an existing container using a tar archive as source.</p> <p>Params:</p> <p>container (str): The container where the file(s) will be extracted path (str): Path inside the container where the file(s) will be extracted. Must exist. data (bytes): tar data to be extracted Returns (bool): True if the call succeeds. docker.errors.APIError will be raised if an error occurs.</p> </blockquote> <p>My Question is: How can I read the tar file as a <code>BytesIO()</code> so it can be passed to the <code>put_archive()</code> function?</p>
1
2016-09-20T21:25:59Z
39,605,507
<p>You could do it this way (untested because I don't have <code>Docker-py</code> installed):</p> <pre><code>with open('keys.tar', 'rb') as fin: data = io.BytesIO(fin.read()) </code></pre>
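<p>If the tar file does not exist yet, you can also build it entirely in memory - a sketch that assumes <code>client</code>, <code>container</code> and <code>source</code> are the objects from the question and that <code>put_archive</code> accepts raw bytes, as its documentation states:</p>
<pre><code>import io
import os
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    # add the public key under the name it should have inside the container
    tar.add(os.path.expanduser(source), arcname='id_rsa.pub')
buf.seek(0)

client.put_archive(container=container, path='/tmp', data=buf.read())
</code></pre>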
1
2016-09-20T23:59:44Z
[ "python", "byte", "tar" ]
Python Code not Working in Certain Versions
39,604,004
<p>So for my school, I have to make this script that calculates the tip and tax of a meal. I'm ahead of everyone in my class, so no others have had this issue. </p> <p>The code works fine in the Python3 IDLE on my PC, it also works fine at <a href="https://repl.it/languages/python3" rel="nofollow">repl.it</a>. Oddly enough, at my school's IDLE, which is private, and <a href="http://pythonfiddle.com/" rel="nofollow">Python Fiddle, which is pretty much the same</a> the tip and the tax do not properly calculate. </p> <p>Also, there are some other bugs, such as displaying extra digits or not enough digits. I tried my hardest to fix this with string slicing but it didn't work. The only other way I know how to do it is with <strong>if</strong> statements which aren't allowed.</p> <p>Any help would be appreciated, thanks in advance.</p> <p>Code:</p> <pre><code>#Name #9/20/16 #This program calculates the total cost of a meal. def main(): #INPUT meal = float(30.96) am = int(input("How many meals would you like? ")) tx = int(input("What is the tax %? ")) tp = int(input("How much % do you want to tip?" )) #CALCULATIONS subT = am*meal tax1 = tx/100 tax = tax1*subT subTotalWithTax = subT + tax tip1 = tp/100 tip = tip1*subTotalWithTax total = subTotalWithTax + tip clTip = str(tip)[0: 4] clTax = str(tax)[0: 4] clTotal = str(total)[0: 6] clSubT = str(subT)[0: 6] #OUTPUT print("-----------------------------------------------") print("Items: ") print(str(am) + " Overloaded Potato Skins ------------- $7.99") print(str(am) + " Grilled Top Sirloin 12oz ------------ $16.49") print(str(am) + " Sweet Tea --------------------------- $1.99") print(str(am) + " Southern Pecan Pie ------------------ $3.99") print("------------------------------------------------") print("Totals: ") print("Subtotal: ----------------------------- $" + str(clSubT)) print("Tax: ---------------------------------- $" + str(clTax)) print("Tip: ---------------------------------- $" + str(clTip)) print("Total --------------------------------- $" + str(clTotal)) print("------------------------------------------------") main() </code></pre>
1
2016-09-20T21:28:07Z
39,604,295
<p>I agree with edwinksl's comment: check which version of Python is on your school's computer. You can right-click your Python file and choose Edit with IDLE; the version should be in the top right corner of the window (next to the file path).</p> <p>I have one other note, however. Your teacher could have specified otherwise, but typically the subtotal is the total for the meal PLUS the tax; your tip then gets calculated based on that and added in. (Unless your teacher said otherwise, follow their guidelines.)</p> <pre><code>subT = am*meal tax1 = tx/100 tax = tax1*subT subTotalWithTax = subT + tax tip1 = tp/100 tip = tip1*subTotalWithTax total = subTotalWithTax + tip </code></pre>
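<p>You can also check the version programmatically; the following line works the same in Python 2 and 3:</p>
<pre><code>import sys
print(sys.version)   # e.g. '2.7.12 ...' or '3.5.2 ...'
</code></pre>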
0
2016-09-20T21:52:54Z
[ "python", "python-2.7", "python-3.x", "calculator" ]
Python Code not Working in Certain Versions
39,604,004
<p>So for my school, I have to make this script that calculates the tip and tax of a meal. I'm ahead of everyone in my class, so no others have had this issue. </p> <p>The code works fine in the Python3 IDLE on my PC, it also works fine at <a href="https://repl.it/languages/python3" rel="nofollow">repl.it</a>. Oddly enough, at my school's IDLE, which is private, and <a href="http://pythonfiddle.com/" rel="nofollow">Python Fiddle, which is pretty much the same</a> the tip and the tax do not properly calculate. </p> <p>Also, there are some other bugs, such as displaying extra digits or not enough digits. I tried my hardest to fix this with string slicing but it didn't work. The only other way I know how to do it is with <strong>if</strong> statements which aren't allowed.</p> <p>Any help would be appreciated, thanks in advance.</p> <p>Code:</p> <pre><code>#Name #9/20/16 #This program calculates the total cost of a meal. def main(): #INPUT meal = float(30.96) am = int(input("How many meals would you like? ")) tx = int(input("What is the tax %? ")) tp = int(input("How much % do you want to tip?" )) #CALCULATIONS subT = am*meal tax1 = tx/100 tax = tax1*subT subTotalWithTax = subT + tax tip1 = tp/100 tip = tip1*subTotalWithTax total = subTotalWithTax + tip clTip = str(tip)[0: 4] clTax = str(tax)[0: 4] clTotal = str(total)[0: 6] clSubT = str(subT)[0: 6] #OUTPUT print("-----------------------------------------------") print("Items: ") print(str(am) + " Overloaded Potato Skins ------------- $7.99") print(str(am) + " Grilled Top Sirloin 12oz ------------ $16.49") print(str(am) + " Sweet Tea --------------------------- $1.99") print(str(am) + " Southern Pecan Pie ------------------ $3.99") print("------------------------------------------------") print("Totals: ") print("Subtotal: ----------------------------- $" + str(clSubT)) print("Tax: ---------------------------------- $" + str(clTax)) print("Tip: ---------------------------------- $" + str(clTip)) print("Total --------------------------------- $" + str(clTotal)) print("------------------------------------------------") main() </code></pre>
1
2016-09-20T21:28:07Z
39,604,418
<p>The fact that you get $0 as the answer suggests that Python (likely Python 2) is doing integer division on <code>tx/100</code> and <code>tp/100</code>. Try explicitly casting to floats, such as:</p> <pre><code>tip1 = tp/100.0 # instead of 100 </code></pre> <p>or</p> <pre><code>tx = float(input("What is the tax %? ")) </code></pre> <p>Also, the string slicing used for display is a bit messy; try</p> <pre><code>print("Total --------------------------------------- ${}".format(total)) </code></pre> <p><code>.format()</code> is the trick you're looking for. There are ways to show only two decimals, check around SO or <a href="https://pyformat.info/" rel="nofollow">https://pyformat.info/</a> --but do try <code>"{:.2f}".format(total)</code> :-)</p> <hr> <p><strong>Edit</strong> Alternatively, without <code>format</code>: <code>print("%.2f" % total)</code></p> <p>And now, for a completely convoluted way to print the price (that is, if formatting is not allowed but string manipulation is):</p> <pre><code>totalDollar, totalCent = str(total).split('.') totalCent += "00" print("Total --------------------------------------- $" + totalDollar + "." + totalCent[:2]) </code></pre>
1
2016-09-20T22:02:37Z
[ "python", "python-2.7", "python-3.x", "calculator" ]
Path for Windows10 Python 2.7 USB COM6
39,604,083
<p>I am trying to access my USB COM6 port on my windows10 using python2.7</p> <p>The original code was for Linux was: </p> <pre><code>board = MultiWii("/dev/ttyUSB0") </code></pre> <p>My modified code for Windows is:</p> <pre><code>board = MultiWii("/COM6") </code></pre> <p>Is the correct code?</p>
0
2016-09-20T21:33:05Z
39,604,704
<p>No, <code>/COM6</code> is not correct in Windows. Try <code>COM6</code> and <code>\\.\COM6</code>. To write the latter one inside a string literal in Python, you probably have to escape the back slashes, so you would write <code>"\\\\.\\COM6"</code>.</p>
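<p>Applied to the code from the question, that would look like the following (a raw string avoids the double escaping; this assumes the <code>MultiWii</code> constructor simply passes the port name on to the underlying serial library):</p>
<pre><code>board = MultiWii("COM6")        # usually sufficient on Windows
# or, using the device-namespace form with a raw string:
board = MultiWii(r"\\.\COM6")   # raw string, so no extra escaping needed
</code></pre>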
0
2016-09-20T22:29:27Z
[ "python", "python-2.7", "usb", "hardware" ]
Pandas delete all rows that are not a 'datetime' type
39,604,094
<p>I've got a large file with login information for a list of users. The problem is that the file includes other information in the <code>Date</code> column. I would like to remove all rows that are not of type <code>datetime</code> in the <code>Date</code> column. My data resembles</p> <pre><code>df= Name Date name_1 | 2012-07-12 22:20:00 name_1 | 2012-07-16 22:19:00 name_1 | 2013-12-16 17:50:00 name_1 | 4345 # type = 'int' # type = 'float' name_2 | 2010-01-11 19:54:00 name_2 | 2010-02-06 12:10:00 ... name_2 | 2012-07-18 22:12:00 name_2 | 4521 ... name_5423 | 2013-11-23 10:21:00 ... name_5423 | 7532 </code></pre> <p>I've tried modifying the solution to </p> <p><a href="http://stackoverflow.com/questions/21771133/finding-non-numeric-rows-in-dataframe-in-pandas">finding non-numeric rows in dataframe in pandas?</a> </p> <p><a href="http://stackoverflow.com/questions/26771471/remove-rows-where-column-value-type-is-string-pandas">Remove rows where column value type is string Pandas</a></p> <p>and <a href="https://www.quora.com/How-should-I-delete-rows-from-a-DataFrame-in-Python-Pandas" rel="nofollow">How-should-I-delete-rows-from-a-DataFrame-in-Python-Pandas</a> </p> <p>to fit my needs. </p> <p>The problem is that whenever I attempt the change I either get an error or the entire dataframe gets deleted</p>
2
2016-09-20T21:33:50Z
39,604,232
<p>Use <code>pd.to_datetime</code> with parameter <code>errors='coerce'</code> to make non-dates into <code>NaT</code> null values. Then you can drop those rows</p> <pre><code>df['Date'] = pd.to_datetime(df['Date'], errors='coerce') df = df.dropna(subset=['Date']) df </code></pre> <p><a href="http://i.stack.imgur.com/8tdmh.png"><img src="http://i.stack.imgur.com/8tdmh.png" alt="enter image description here"></a></p>
6
2016-09-20T21:47:09Z
[ "python", "pandas" ]
(python and jes) How to find the position of a character in a string if 2 characters are the same?
39,604,193
<pre><code>def sumNumbers1(num1,num2): sum= str(num1+num2) print "Sum is " + sum for char in sum: digit = sum.find(char)+1 print "Digit " + str(digit) + " is " + char </code></pre> <p>I'm trying to get a function that prints the sum of two numbers, then the first digit of the sum and what that character is, and so on for each digit. However, if the sum is a number with two of the same characters, (77 for example), my function prints "Sum is 77 Digit 1 is 7 Digit 1 is 7" I realize this is an issue with index but how do I fix it? Thanks!</p>
0
2016-09-20T21:42:35Z
39,604,221
<p>You likely want to use <code>enumerate</code>. <code>sum</code> is a string, enumerate will let you iterate over the characters in that string while also returning the index of each character. Python indexing is 0-based, so if you want the <em>digit</em> index to start at 1, you need to add 1 to <code>i</code></p> <pre><code>for i, char in enumerate(sum): print "Digit " + str(i+1) + " is " + char </code></pre>
1
2016-09-20T21:45:55Z
[ "python", "jes" ]
(python and jes) How to find the position of a character in a string if 2 characters are the same?
39,604,193
<pre><code>def sumNumbers1(num1,num2): sum= str(num1+num2) print "Sum is " + sum for char in sum: digit = sum.find(char)+1 print "Digit " + str(digit) + " is " + char </code></pre> <p>I'm trying to get a function that prints the sum of two numbers, then the first digit of the sum and what that character is, and so on for each digit. However, if the sum is a number with two of the same characters, (77 for example), my function prints "Sum is 77 Digit 1 is 7 Digit 1 is 7" I realize this is an issue with index but how do I fix it? Thanks!</p>
0
2016-09-20T21:42:35Z
39,604,696
<pre><code># Variant 1: enumerate() yields the 1-based position and the character def sumNumbers1(num1, num2): sum = num1 + num2 print "the sum is", sum s = str(sum) for pos, digit in enumerate(s, start=1): print "digit", pos, "is", digit sumNumbers1(2230, 20) # Variant 2: keep a manual position counter def sumNumbers2(num1, num2): sum = num1 + num2 pos = 1 print "the sum is", sum for e in str(sum): print "digit", pos, "is", e pos += 1 sumNumbers2(2230, 20) </code></pre>
0
2016-09-20T22:28:46Z
[ "python", "jes" ]
subprocess Popen: signal not propagating
39,604,228
<p>execute_sleep.py</p> <pre><code>import os import time import subprocess sleep_script = 'sleep_forever.py' child = subprocess.Popen(sleep_script, shell=True, close_fds=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) subout = child.communicate()[0] print subout rc = child.returncode </code></pre> <p>sleep_forever.py</p> <pre><code>#!/usr/bin/python import time while True: time.sleep(1) </code></pre> <p>When I run execute_sleep.py and then <code>kill -TERM -- -&lt;pid of execute_sleep.py&gt;</code>, execute_sleep.py dies but sleep_forever.py keeps running.</p> <p>What can I do so that when execute_sleep.py gets signal, it propagates down to sleep_forever.py?</p> <p>Do any of these options block signals?</p> <ul> <li><code>shell=True</code></li> <li><code>close_fds=True</code></li> <li>the use of <code>subprocess.PIPE</code></li> </ul>
1
2016-09-20T21:47:04Z
39,604,645
<p>You need to handle the signal yourself, otherwise you are creating orphans (the sleep_forever.py process keeps running). Catch the signal that terminates the parent process and propagate it to the child before exiting:</p> <pre><code>def handler(signum, frame): functionToKillSleeper(pidtokill) signal.signal(signal.SIGTERM, handler) </code></pre> <p><strong>Note that SIGKILL cannot be caught.</strong></p> <p>So you will need to send SIGINT or SIGTERM to the parent process for the handler to run; otherwise the problem will remain.</p>
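<p>A more concrete sketch for execute_sleep.py, using the <code>child</code> Popen object from the question (one possible approach: because the script is launched with <code>shell=True</code>, the child is put into its own process group so the whole tree can be signalled):</p>
<pre><code>import os
import signal
import subprocess
import sys

sleep_script = 'sleep_forever.py'
child = subprocess.Popen(sleep_script, shell=True, close_fds=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         preexec_fn=os.setsid)   # child gets its own process group

def handler(signum, frame):
    # forward the termination signal to the child's whole process group
    os.killpg(os.getpgid(child.pid), signal.SIGTERM)
    sys.exit(1)

signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)

subout = child.communicate()[0]
print subout
</code></pre>
<p>Whether you forward SIGTERM, SIGINT or something else to the child is up to you; the important part is that the parent's handler does the forwarding before it exits.</p>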
1
2016-09-20T22:23:51Z
[ "python", "subprocess", "signals" ]
Django 1.8 calls to database in html code
39,604,282
<p>long-time lurker for this website, but I finally decided to join the community.</p> <p>I have a quick question on some of my code. I took a job this year for my university developing a website for the journalist department. The website was being built the previous year by another student using Django 1.8, python 2, and everything else that comes with that. I knew a decent amount about these languages, and I have learned a lot testing out different methods for hours on end. However, there is one thing I am having trouble with that I have researched for forever.</p> <p>Basically, for my website, I have different "sections" for different pages of articles. These articles have many traits. One trait is called "section" and this section has the names of the pages. So for example:</p> <p>One page is named "look". I can call my code and display all of my featured_articles. HOWEVER, I am trying to only display the articles where the name of the section equals "look".</p> <p>Here is my current code. Any ideas? I have tried many things but I can't get it to work properly. For loops, if statements, different HTML processes, different pages in django, etc...</p> <pre><code>{% for article, section in featured_articles %} &lt;div class="media panel panel-default"&gt; &lt;div class="panel-body"&gt; &lt;div class="media-left"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;img class="media-object thumbnail-featured" src="{{ article.image }}"&gt; &lt;/a&gt; &lt;/div&gt; &lt;div class="media-body"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;h3 class="media-heading"&gt;{{ article.title }}&lt;/h3&gt; &lt;/a&gt; &lt;!-- TODO figure out how to iterate through the authors field, manytomany --&gt; {% for contributor in article.authors.all %} &lt;p&gt;&lt;a href="/{{ section.url }}"&gt;{{ section.name }}&lt;/a&gt; | &lt;a href="/contributors/{{ contributor.twitter }}"&gt;{{contributor}}&lt;/a&gt;&lt;/p&gt; {% endfor %} &lt;p&gt;{{article.preview}}&lt;/p&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} </code></pre> <p>Thank you for any help!!</p>
0
2016-09-20T21:51:50Z
39,604,365
<p>As I understand, you just need to add if statement:</p> <pre><code>{% for article, section in featured_articles %} {% if section.name == 'look' %} &lt;div class="media panel panel-default"&gt; &lt;div class="panel-body"&gt; &lt;div class="media-left"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;img class="media-object thumbnail-featured" src="{{ article.image }}"&gt; &lt;/a&gt; &lt;/div&gt; &lt;div class="media-body"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;h3 class="media-heading"&gt;{{ article.title }}&lt;/h3&gt; &lt;/a&gt; &lt;!-- TODO figure out how to iterate through the authors field, manytomany --&gt; {% for contributor in article.authors.all %} &lt;p&gt;&lt;a href="/{{ section.url }}"&gt;{{ section.name }}&lt;/a&gt; | &lt;a href="/contributors/{{ contributor.twitter }}"&gt;{{ contributor }}&lt;/a&gt; &lt;/p&gt; {% endfor %} &lt;p&gt;{{article.preview}}&lt;/p&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endif %} {% endfor %} </code></pre>
0
2016-09-20T21:58:09Z
[ "python", "html", "django" ]
Django 1.8 calls to database in html code
39,604,282
<p>long-time lurker for this website, but I finally decided to join the community.</p> <p>I have a quick question on some of my code. I took a job this year for my university developing a website for the journalist department. The website was being built the previous year by another student using Django 1.8, python 2, and everything else that comes with that. I knew a decent amount about these languages, and I have learned a lot testing out different methods for hours on end. However, there is one thing I am having trouble with that I have researched for forever.</p> <p>Basically, for my website, I have different "sections" for different pages of articles. These articles have many traits. One trait is called "section" and this section has the names of the pages. So for example:</p> <p>One page is named "look". I can call my code and display all of my featured_articles. HOWEVER, I am trying to only display the articles where the name of the section equals "look".</p> <p>Here is my current code. Any ideas? I have tried many things but I can't get it to work properly. For loops, if statements, different HTML processes, different pages in django, etc...</p> <pre><code>{% for article, section in featured_articles %} &lt;div class="media panel panel-default"&gt; &lt;div class="panel-body"&gt; &lt;div class="media-left"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;img class="media-object thumbnail-featured" src="{{ article.image }}"&gt; &lt;/a&gt; &lt;/div&gt; &lt;div class="media-body"&gt; &lt;a href="articles/{{ article.url }}"&gt; &lt;h3 class="media-heading"&gt;{{ article.title }}&lt;/h3&gt; &lt;/a&gt; &lt;!-- TODO figure out how to iterate through the authors field, manytomany --&gt; {% for contributor in article.authors.all %} &lt;p&gt;&lt;a href="/{{ section.url }}"&gt;{{ section.name }}&lt;/a&gt; | &lt;a href="/contributors/{{ contributor.twitter }}"&gt;{{contributor}}&lt;/a&gt;&lt;/p&gt; {% endfor %} &lt;p&gt;{{article.preview}}&lt;/p&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} </code></pre> <p>Thank you for any help!!</p>
0
2016-09-20T21:51:50Z
39,608,619
<p>Overall, it is not such a good idea to send all the data to the template engine and do the filtering there.</p> <p>Why not filter it in the view function / view class, return that data inside a template variable, and then render it in the front end?</p> <pre><code>def detail(request, poll_id): filtered_data = .......objects.get(name='look') return render(request, 'polls/detail.html', {'look_data': filtered_data}) {% for article, section in look_data %} &lt;div class="media panel panel-default"&gt; .... blah blah blah &lt;/div&gt; {% endfor %} </code></pre>
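<p>As a slightly more concrete sketch - assuming the <code>Article</code> model has a relation called <code>section</code> whose <code>name</code> field holds the page name, which the question implies but does not show - the view-level filter could look roughly like this:</p>
<pre><code>from django.shortcuts import render

def look(request):
    # hypothetical model/field names - adjust to the real schema
    featured_articles = Article.objects.filter(section__name='look')
    return render(request, 'look.html', {'featured_articles': featured_articles})
</code></pre>
<p>The template then only has to loop over <code>featured_articles</code>, with no <code>{% if %}</code> check per row.</p>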
1
2016-09-21T06:03:46Z
[ "python", "html", "django" ]
How to catch an index error
39,604,377
<p>Given <code>mylist = [0, 1]</code></p> <pre><code>def catch_index_error(value): try: return value except IndexError: return None catch_index_error(mylist[5]) </code></pre> <p>returns an <code>IndexError</code></p> <p>The argument is evaluated prior to the function being executed and therefore the function can't catch the exception. Is there a way of catching it?</p>
1
2016-09-20T21:58:53Z
39,604,416
<pre><code>try: catch_index_error(mylist[5]) except IndexError: pass # handle the error here </code></pre> <p>The error occurs while evaluating the argument, before the function body ever runs, so it has to be caught at the call site.</p>
1
2016-09-20T22:02:18Z
[ "python" ]
How to catch an index error
39,604,377
<p>Given <code>mylist = [0, 1]</code></p> <pre><code>def catch_index_error(value): try: return value except IndexError: return None catch_index_error(mylist[5]) </code></pre> <p>returns an <code>IndexError</code></p> <p>The argument is evaluated prior to the function being executed and therefore the function can't catch the exception. Is there a way of catching it?</p>
1
2016-09-20T21:58:53Z
39,604,430
<p>Use this:</p> <pre><code>try: mylist[5] except IndexError: pass #your code goes here </code></pre>
1
2016-09-20T22:03:29Z
[ "python" ]
How to catch an index error
39,604,377
<p>Given <code>mylist = [0, 1]</code></p> <pre><code>def catch_index_error(value): try: return value except IndexError: return None catch_index_error(mylist[5]) </code></pre> <p>returns an <code>IndexError</code></p> <p>The argument is evaluated prior to the function being executed and therefore the function can't catch the exception. Is there a way of catching it?</p>
1
2016-09-20T21:58:53Z
39,604,436
<p>The expression <code>mylist[5]</code> causes the <code>IndexError</code>, because it is evaluated <em>before</em> the function is called.</p> <p>The only way to fix this is by letting the function return the correct value <em>from <code>mylist</code></em>:</p> <pre><code>mylist = [0,1] def catch_index_error(index): try: return mylist[index] except IndexError: return None catch_index_error(0) # returns 0 catch_index_error(4) # returns None </code></pre>
2
2016-09-20T22:04:08Z
[ "python" ]
How to catch an index error
39,604,377
<p>Given <code>mylist = [0, 1]</code></p> <pre><code>def catch_index_error(value): try: return value except IndexError: return None catch_index_error(mylist[5]) </code></pre> <p>returns an <code>IndexError</code></p> <p>The argument is evaluated prior to the function being executed and therefore the function can't catch the exception. Is there a way of catching it?</p>
1
2016-09-20T21:58:53Z
39,604,882
<p>All the answers mentioned here will meet your requirement, but I believe the cleaner way to achieve this is with a <code>decorator</code> - they exist in Python for exactly this kind of scenario. For example, create a decorator like this:</p> <pre><code>def wrap_index_error(func): def wrapped_func(*args, **kwargs): try: func_response = func(*args, **kwargs) except IndexError: func_response = None return func_response return wrapped_func </code></pre> <p>Now you can use this decorator as <code>@wrap_index_error</code> on any function where you want to catch the <code>IndexError</code> exception. For example, in your case:</p> <pre><code>@wrap_index_error def get_item_from_list(my_list, index): return my_list[index] </code></pre> <p>Now you can access the value with a simple call:</p> <pre><code>&gt;&gt;&gt; new_list = [1, 3, 5] &gt;&gt;&gt; print get_item_from_list(new_list, 2) 5 &gt;&gt;&gt; print get_item_from_list(new_list, 8) None </code></pre> <p><em>Note:</em> I am passing the list as an argument to the function because it is not good practice to read or update a global value inside a function; passing it as an argument keeps the code more modular. For example, maybe you want to perform the same operation on another list?</p> <p>To know more about decorators, check: <a href="http://thecodeship.com/patterns/guide-to-python-function-decorators/" rel="nofollow">A guide to Python's function decorators</a></p>
0
2016-09-20T22:46:55Z
[ "python" ]
Counting paths in a recursive call
39,604,394
<p>The problem I'm working on is this, from Cracking the Coding Interview:</p> <p>"A child is running up a staircase with n steps, and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs."</p> <p>Coming from C++ I know that a counter can be passed as reference, but in python you can't. I am also trying to track the step sequence that results in a success. I'm writing my code like this:</p> <pre><code>def __calculatePaths(currPathLength, paths, currSeries): if currPathLength == 0: print "successful series is", currSeries return 1 elif currPathLength &lt; 0: return 0 for i in range(1, 4): newSeries = list(currSeries) # make new series to track steps newSeries.append(i) paths += __calculatePaths(currPathLength - i, paths, newSeries) return paths def calculatePaths(pathLength): paths = __calculatePaths(pathLength, 0, []) return paths if __name__ == '__main__': calculatePaths(3) </code></pre> <p>The output for this call is:</p> <pre><code>successful series is [1, 1, 1] successful series is [1, 2] successful series is [2, 1] successful series is [3] 6 </code></pre> <p>I'm confused because my program gets the correct path sequences, but the wrong number of paths. How should I be incrementing my paths? I know how to do this without a global variable, but I can't figure it out without using one. Thank you!</p>
1
2016-09-20T22:00:24Z
39,604,622
<p>In your function <code>__calculatePaths</code>, you have to set <code>paths = 0</code> before the <code>for</code> loop. Otherwise you keep adding to the <code>paths</code> value that was passed in, which already carries counts accumulated by earlier calls, and that is why you are getting the wrong answer.</p> <pre><code>def __calculatePaths(currPathLength, paths, currSeries): if currPathLength == 0: print "successful series is", currSeries return 1 elif currPathLength &lt; 0: return 0 paths = 0 for i in range(1, 4): newSeries = list(currSeries) # make new series to track steps newSeries.append(i) paths += __calculatePaths(currPathLength - i, paths, newSeries) return paths def calculatePaths(pathLength): paths = __calculatePaths(pathLength, 0, []) return paths if __name__ == '__main__': calculatePaths(3) </code></pre> <p>You can also get the number of ways in a much more efficient manner: with dynamic programming it is O(N), and with matrix exponentiation it is O(log N). A memoised sketch of the O(N) idea follows below.</p>
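<p>(For reference, here is one way the O(N) idea could look, as a hedged sketch only: a memoised top-down version of the same count, written in the question's Python 2 style. The function and cache names are mine, not from the original post.)</p> <pre><code># Hypothetical sketch: top-down dynamic programming for the same count.
# memo maps a remaining length to its number of ways; 0 steps left = 1 way.
def count_paths(n, memo={0: 1}):  # the shared default dict is deliberate, used as a cache
    if n &lt; 0:
        return 0
    if n not in memo:
        memo[n] = count_paths(n - 1) + count_paths(n - 2) + count_paths(n - 3)
    return memo[n]

print count_paths(3)  # 4, matching the four sequences printed by the fixed code
</code></pre>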
0
2016-09-20T22:21:15Z
[ "python", "recursion", "counter" ]
Counting paths in a recursive call
39,604,394
<p>The problem I'm working on is this, from Cracking the Coding Interview:</p> <p>"A child is running up a staircase with n steps, and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs."</p> <p>Coming from C++ I know that a counter can be passed as reference, but in python you can't. I am also trying to track the step sequence that results in a success. I'm writing my code like this:</p> <pre><code>def __calculatePaths(currPathLength, paths, currSeries): if currPathLength == 0: print "successful series is", currSeries return 1 elif currPathLength &lt; 0: return 0 for i in range(1, 4): newSeries = list(currSeries) # make new series to track steps newSeries.append(i) paths += __calculatePaths(currPathLength - i, paths, newSeries) return paths def calculatePaths(pathLength): paths = __calculatePaths(pathLength, 0, []) return paths if __name__ == '__main__': calculatePaths(3) </code></pre> <p>The output for this call is:</p> <pre><code>successful series is [1, 1, 1] successful series is [1, 2] successful series is [2, 1] successful series is [3] 6 </code></pre> <p>I'm confused because my program gets the correct path sequences, but the wrong number of paths. How should I be incrementing my paths? I know how to do this without a global variable, but I can't figure it out without using one. Thank you!</p>
1
2016-09-20T22:00:24Z
39,604,777
<p>Most important, realize that you don't have to determine those sequences: you need only to count them. For instance, there's only one way to finish from step N-1: hop 1 step. From N-2, there are two ways: hop both steps at once, or hop 1 step and finish from there. Our "ways to finish" list now looks like this, working backwards:</p> <pre><code>way = [1, 2, ...] </code></pre> <p>Now, watch what happens with step N-3. We finally have 3 choices:</p> <ol> <li>Hop 1 step and have 2 ways to finish</li> <li>Hop 2 steps and have 1 way to finish</li> <li>Hop 3 steps and be done.</li> </ol> <p>That's a total of 2+1+1, or 4 ways to finish.</p> <p>That initializes our algorithm. Now for the recurrence relationship. The initial list looks like this:</p> <pre><code>way = [1, 2, 4, ...] </code></pre> <p>From here on, we can't make a single hop to the top. Instead, we have to depend on the three steps above us. Our choices from step N-J are:</p> <ol> <li>Hop 1 step and have <strong>way[J-1]</strong> ways to finish</li> <li>Hop 2 steps and have <strong>way[J-2]</strong> ways to finish</li> <li>Hop 3 steps and have <strong>way[J-3]</strong> ways to finish</li> </ol> <p>Thus, for all j >= 3:</p> <pre><code>way[j] = way[j-1] + way[j-2] + way[j-3] </code></pre> <p>This gives you the solution in <strong>O(N)</strong> time.</p>
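<p>(A hedged aside: a direct translation of the recurrence above into code might look like the sketch below; the function and variable names are mine. It assumes <code>way[0]</code>, <code>way[1]</code>, <code>way[2]</code> hold the counts for 1, 2 and 3 remaining steps.)</p> <pre><code># Hypothetical sketch: bottom-up table for way[j] = way[j-1] + way[j-2] + way[j-3]
def count_ways(n):
    way = [1, 2, 4]  # 1, 2 and 3 steps remaining
    if n &lt;= 3:
        return way[n - 1]
    for j in range(3, n):
        way.append(way[j - 1] + way[j - 2] + way[j - 3])
    return way[-1]

print count_ways(3)  # 4
print count_ways(4)  # 7
</code></pre>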
1
2016-09-20T22:35:34Z
[ "python", "recursion", "counter" ]
Counting paths in a recursive call
39,604,394
<p>The problem I'm working on is this, from Cracking the Coding Interview:</p> <p>"A child is running up a staircase with n steps, and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs."</p> <p>Coming from C++ I know that a counter can be passed as reference, but in python you can't. I am also trying to track the step sequence that results in a success. I'm writing my code like this:</p> <pre><code>def __calculatePaths(currPathLength, paths, currSeries): if currPathLength == 0: print "successful series is", currSeries return 1 elif currPathLength &lt; 0: return 0 for i in range(1, 4): newSeries = list(currSeries) # make new series to track steps newSeries.append(i) paths += __calculatePaths(currPathLength - i, paths, newSeries) return paths def calculatePaths(pathLength): paths = __calculatePaths(pathLength, 0, []) return paths if __name__ == '__main__': calculatePaths(3) </code></pre> <p>The output for this call is:</p> <pre><code>successful series is [1, 1, 1] successful series is [1, 2] successful series is [2, 1] successful series is [3] 6 </code></pre> <p>I'm confused because my program gets the correct path sequences, but the wrong number of paths. How should I be incrementing my paths? I know how to do this without a global variable, but I can't figure it out without using one. Thank you!</p>
1
2016-09-20T22:00:24Z
39,604,838
<p>This should be an efficient way to do it computing-wise while keeping the shape of your solution (note that it relies on <code>nonlocal</code>, so it is Python 3 code):</p> <pre><code>from collections import deque def calculate_paths(length): count = 0 # Global count def calculate(remaining_length): # 0 means success # 1 means only 1 option is available (hop 1) if remaining_length &lt; 2: nonlocal count # Refer to outer count count += 1 return # Calculates, removing the length already passed. # For 1...4 or remaining_length+1 if it's less than 4. # deque(, maxlen=0) is the fastest way of consuming an iterator # without also keeping its data. This is the most efficient both # memory-wise and clock-wise deque((calculate(remaining_length-i) for i in range(1, min(4, remaining_length+1))), maxlen=0) calculate(length) return count &gt;&gt;&gt; calculate_paths(2) 2 &gt;&gt;&gt; calculate_paths(3) 4 &gt;&gt;&gt; calculate_paths(4) 7 </code></pre> <p>As you can see, there is no need to keep the path, as only the remaining length matters.</p> <hr> <p>@Prune's answer has a better algorithm. Here it is implemented:</p> <pre><code>def calculate_paths(length): results = deque((1, 2, 4), maxlen=3) if length &lt;= 3: return results[length-1] for i in range(3, length): results.append(sum(results)) return results.pop() </code></pre> <p>Eliminating the recursion also means fewer stack frames are used, and it does not hit the maximum recursion depth.</p>
0
2016-09-20T22:42:53Z
[ "python", "recursion", "counter" ]
Adaline Learning Algorithm
39,604,428
<p>I can not seem to debug the following implementation of an Adaline neuron... I'm hoping someone can spot what I cannot. I think the problem lies in the last few lines of my train method?</p> <pre><code>from numpy import random, array, dot import numpy as np import matplotlib.pyplot as plt from random import choice import math import sympy class adalineANN(object): def __init__(self, gamma=.2, trials=500, errors=[], weights=[]): self.gamma = gamma self.trials = trials self.errors = errors self.weights = weights def train(self): self.weights = random.rand(3) coordinates_class1 = [] coordinates_class2 = [] for x in np.random.normal(2, .5, 20): for y in np.random.normal(3, .5, 20): coordinates_class1.append(([x, y, 1], 1)) break for x in np.random.normal(2, .25, 20): for y in np.random.normal(-1, .25, 20): coordinates_class2.append(([x, y, 1], -1)) break trainingData = coordinates_class1 + coordinates_class2 for i in range(self.trials): x, target = choice(trainingData) y = np.dot(x, self.weights) error, errors = [], [] error = (target - y) self.errors.append(error) for i in range(0, 3): self.weights[i] += self.gamma * x[i] * (target - y) #????* (sympy.cosh(y)**(-1))**2 def plot(self): plt.plot(self.errors) plt.show() A = adalineANN() A.train() A.plot() </code></pre> <p>Do I need the derivative of my threshold function too? See the above note.</p>
0
2016-09-20T22:03:12Z
39,624,302
<p>There is nothing really wrong, but:</p> <ol> <li><p>You are using <code>i</code> as the iterator in <strong>both</strong> loops. In this code it does not matter, since you do not really use it in the outer loop, but it can lead to very nasty bugs in general.</p></li> <li><p>Your gamma is too high; change it to 0.01.</p></li> <li><p>Do not plot the signed error: Adaline minimizes the <strong>squared error</strong>, so you should plot <code>error = (target - y)**2</code>; you can even go for accuracy, to see that it discriminates just fine, with <code>error = target != np.sign(y)</code>.</p></li> </ol> <p>A compact sketch of these changes is shown below.</p> <p><a href="http://i.stack.imgur.com/8xDny.png" rel="nofollow"><img src="http://i.stack.imgur.com/8xDny.png" alt="after changes"></a></p>
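<p>(A hedged sketch of what the three suggestions look like in isolation, using synthetic data of roughly the same shape as in the question rather than the original class; all names below are mine.)</p> <pre><code>import numpy as np

# Hypothetical, self-contained sketch: a distinct loop variable, gamma = 0.01,
# and squared-error tracking, on two Gaussian blobs like those in the question.
rng = np.random.RandomState(0)
X = np.vstack([rng.normal([2, 3], 0.5, (20, 2)),
               rng.normal([2, -1], 0.25, (20, 2))])
X = np.hstack([X, np.ones((40, 1))])   # bias column
t = np.array([1] * 20 + [-1] * 20)

gamma = 0.01                           # suggestion 2: smaller learning rate
weights = rng.rand(3)
errors = []
for trial in range(500):               # suggestion 1: a loop variable used only here
    idx = rng.randint(len(X))
    x, target = X[idx], t[idx]
    y = np.dot(x, weights)
    errors.append((target - y) ** 2)   # suggestion 3: track the squared error
    weights += gamma * (target - y) * x
</code></pre>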
0
2016-09-21T18:46:06Z
[ "python", "machine-learning", "neural-network" ]
API Call - Multi dimensional nested dictionary to pandas data frame
39,604,461
<p>I need your help with converting a multidimensional dict to a pandas data frame. I get the dict from a JSON file which I retrieve from a API call (Shopify). </p> <pre><code>response = requests.get("URL", auth=("ID","KEY")) data = json.loads(response.text) </code></pre> <p>The "data" dictionary looks as follows:</p> <pre><code>{'orders': [{'created_at': '2016-09-20T22:04:49+02:00', 'email': 'test@aol.com', 'id': 4314127108, 'line_items': [{'destination_location': {'address1': 'Teststreet 12', 'address2': '', 'city': 'Berlin', 'country_code': 'DE', 'id': 2383331012, 'name': 'Test Test', 'zip': '10117'}, 'gift_card': False, 'name': 'Blueberry Cup'}] }]} </code></pre> <p>In this case the dictionary has 4 Dimensions and I would like to convert the dict into a pandas data frame. I tried everything ranging from json_normalize() to pandas.DataFrame.from_dict(), yet I did not manage to get anywhere. When I try to convert the dict to a df, I get columns which contain list of lists.</p> <p>Does anyone know how to approach that?</p> <p>Thanks</p> <p>EDITED:</p> <p>Thank you @piRSquared. Your solution works fine! However, how you solve it if there was another product in the order? Because then it does work. JSON response of an order with 2 products is as follows (goals is to have a second row with the same "created_at". "email" etc. columns):</p> <pre><code>{'orders': [{'created_at': '2016-09-20T22:04:49+02:00', 'email': 'test@aol.com', 'id': 4314127108, 'line_items': [{'destination_location': {'address1': 'Teststreet 12', 'address2': '', 'city': 'Berlin', 'country_code': 'DE', 'id': 2383331012, 'name': 'Test Test', 'zip': '10117'}, 'gift_card': False, 'name': 'Blueberry Cup'}, {'destination_location': {'address1': 'Teststreet 12', 'address2': '', 'city': 'Berlin', 'country_code': 'DE', 'id': 2383331012, 'name': 'Test Test', 'zip': '10117'}, 'gift_card': False, 'name': 'Strawberry Cup'}] }]} </code></pre> <p>So the df in the end should be on a row by row basis for all sold products. Thank you, I really appreciate your help!</p>
2
2016-09-20T22:05:58Z
39,604,682
<p>There are a number of ways to do this. This is just the way I decided to do it. You need to explore how you want to see this represented, then figure out how to get there.</p> <pre><code>import pandas as pd df = pd.DataFrame(data['orders']) df1 = df.line_items.str[0].apply(pd.Series) df2 = df1.destination_location.apply(pd.Series) pd.concat([df.drop('line_items', 1), df1.drop('destination_location', 1), df2], axis=1, keys=['', 'line_items', 'destination_location']) </code></pre> <p><a href="http://i.stack.imgur.com/zYw6A.png" rel="nofollow"><img src="http://i.stack.imgur.com/zYw6A.png" alt="enter image description here"></a></p>
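<p>(A hedged follow-up for the multi-line-item case in the edited question: <code>json_normalize</code> can produce one row per line item directly. This is only a sketch; depending on the pandas version it is imported from <code>pandas.io.json</code> or available as <code>pd.json_normalize</code>, and it assumes the <code>data</code> dict from the question.)</p> <pre><code>from pandas.io.json import json_normalize  # on newer pandas: pd.json_normalize

# One row per element of 'line_items', with selected order-level fields repeated;
# nested dicts such as destination_location become dotted column names.
rows = json_normalize(data['orders'],
                      record_path='line_items',
                      meta=['created_at', 'email', 'id'])
print(rows)
</code></pre>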
1
2016-09-20T22:27:17Z
[ "python", "json", "pandas", "dictionary", "dataframe" ]
break out of list comprehension?
39,604,476
<p>I am taking in an integer value, finding the factorial of that value and trying to count the number of trailing zeros if any are present. For example:</p> <pre><code>def zeros(n): import math factorial = str(math.factorial(n)) zeros_lst = [number if number == "0" (else) for number in factorial[::-1]] return len(zeros_lst) </code></pre> <p>The "else" in parenthesis is where the issue is occurring. I want to leave the loop if the as soon as it encounters a number that is not zero. I tried using break like you normally would, then looking up some examples but found nothing of similarity. </p> <p>If someone knows how to break from a list comprehension or if is even possible that would be great. I am sure there are better ways to solve this problem, please post if you do. </p>
0
2016-09-20T22:07:14Z
39,604,572
<p>There is no "breaking" in list comprehensions, but there are other tricks, e.g. <code>itertools.takewhile</code> which iterates an iterable while a condition is satisfied:</p> <pre><code>&gt;&gt;&gt; from itertools import takewhile &gt;&gt;&gt; &gt;&gt;&gt; values = [7, 9, 11, 4, 2, 78, 9] &gt;&gt;&gt; list(takewhile(lambda x: x &gt; 5, values)) [7, 9, 11] </code></pre> <p>In your case (<em>I want to leave the loop if the as soon as it encounters a number that is not zero</em>):</p> <pre><code>zeros_lst = list(takewhile(lambda x: x=="0", factorial[::-1])) </code></pre>
0
2016-09-20T22:15:15Z
[ "python", "list-comprehension", "break" ]
break out of list comprehension?
39,604,476
<p>I am taking in an integer value, finding the factorial of that value and trying to count the number of trailing zeros if any are present. For example:</p> <pre><code>def zeros(n): import math factorial = str(math.factorial(n)) zeros_lst = [number if number == "0" (else) for number in factorial[::-1]] return len(zeros_lst) </code></pre> <p>The "else" in parenthesis is where the issue is occurring. I want to leave the loop if the as soon as it encounters a number that is not zero. I tried using break like you normally would, then looking up some examples but found nothing of similarity. </p> <p>If someone knows how to break from a list comprehension or if is even possible that would be great. I am sure there are better ways to solve this problem, please post if you do. </p>
0
2016-09-20T22:07:14Z
39,604,607
<blockquote> <p>If someone knows how to break from a list comprehension </p> </blockquote> <p>You cannot break out of a list comprehension.</p> <p>But you can modify your list comprehension with an <code>if</code> condition on the <code>for</code> loop. With <code>if</code>, you can decide which values should become part of the list:</p> <pre><code>def zeros(n): import math factorial = str(math.factorial(n)) # Check this line zeros_lst = [number for number in factorial[::-1] if number == '0'] return len(zeros_lst) </code></pre> <p>(Note that this counts <em>every</em> zero digit in the number, not just the trailing ones; the suggestion at the end counts only trailing zeros.)</p> <p>It is better to use a simple <code>for</code> loop. In fact, for loops can be faster than list comprehensions in terms of performance; check <a href="http://stackoverflow.com/a/39518977/2063361">HERE</a> for the comparison I did for another question.</p> <p>That said, list comprehensions are often preferred because they are clean and more readable. Again, it is a matter of opinion: readability versus speed.</p> <p><strong>Suggestion</strong>:</p> <p>Also, there is an easier way to achieve what you are doing:</p> <pre><code>import math def find_zeros_in_factorial(n): num_str = str(math.factorial(n)) return len(num_str)-len(num_str.rstrip('0')) </code></pre> <p>The idea here is to subtract the length of the string <em>without trailing zeroes</em> from the total length of the string.</p>
0
2016-09-20T22:19:34Z
[ "python", "list-comprehension", "break" ]
break out of list comprehension?
39,604,476
<p>I am taking in an integer value, finding the factorial of that value and trying to count the number of trailing zeros if any are present. For example:</p> <pre><code>def zeros(n): import math factorial = str(math.factorial(n)) zeros_lst = [number if number == "0" (else) for number in factorial[::-1]] return len(zeros_lst) </code></pre> <p>The "else" in parenthesis is where the issue is occurring. I want to leave the loop if the as soon as it encounters a number that is not zero. I tried using break like you normally would, then looking up some examples but found nothing of similarity. </p> <p>If someone knows how to break from a list comprehension or if is even possible that would be great. I am sure there are better ways to solve this problem, please post if you do. </p>
0
2016-09-20T22:07:14Z
39,604,895
<p>There is a more mathematical approach to this problem that is very simple and easy to implement. We only need to count how many factors of ten there are in <code>factorial(n)</code>. We have an excess of factors of 2, so we choose to count factors of 5. It doesn't look as clean, but it avoids computing the factorial itself. The algorithm accounts for the extra factors of 5 that show up in numbers like 25, 50, 125 and so on. (<code>//</code> is used so the division stays an integer division on Python 3 as well.)</p> <pre><code>def find_zeros_in_factorial(n): factors_of_5 = [n//5] while factors_of_5[-1] &gt; 0: factors_of_5.append(factors_of_5[-1]//5) return sum(factors_of_5) </code></pre>
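<p>(A quick, hedged sanity check of the factor-of-5 idea against the string-based count; the helper names here are mine and not from either answer.)</p> <pre><code>import math

def zeros_by_fives(n):
    # count multiples of 5, 25, 125, ... up to n
    count, power = 0, 5
    while power &lt;= n:
        count += n // power
        power *= 5
    return count

def zeros_by_string(n):
    s = str(math.factorial(n))
    return len(s) - len(s.rstrip('0'))

for n in (5, 10, 25, 100):
    assert zeros_by_fives(n) == zeros_by_string(n)
</code></pre>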
0
2016-09-20T22:48:35Z
[ "python", "list-comprehension", "break" ]
break out of list comprehension?
39,604,476
<p>I am taking in an integer value, finding the factorial of that value and trying to count the number of trailing zeros if any are present. For example:</p> <pre><code>def zeros(n): import math factorial = str(math.factorial(n)) zeros_lst = [number if number == "0" (else) for number in factorial[::-1]] return len(zeros_lst) </code></pre> <p>The "else" in parenthesis is where the issue is occurring. I want to leave the loop if the as soon as it encounters a number that is not zero. I tried using break like you normally would, then looking up some examples but found nothing of similarity. </p> <p>If someone knows how to break from a list comprehension or if is even possible that would be great. I am sure there are better ways to solve this problem, please post if you do. </p>
0
2016-09-20T22:07:14Z
39,605,022
<p>Here is a function that will count the zeros; you just need to pass it your number. This saves the string operations you had before. It terminates once there are no more trailing zeros.</p> <pre><code>import math def count_zeros(n): n_zeros = 0 while True: if n % 10 == 0: n = n // 10 n_zeros += 1 else: return n_zeros print(count_zeros(math.factorial(12))) </code></pre>
0
2016-09-20T23:02:58Z
[ "python", "list-comprehension", "break" ]
Problems Using the Berkeley DB Transactional Processing
39,604,509
<p>I'm writing a set of programs that have to operate on a common database, possibly concurrently. For the sake of simplicity (for the user), I didn't want to require the setup of a database server. Therefore I setteled on Berkeley DB, where one can just fire up a program and let it create the DB if it doesn't exist.</p> <p>In order to let programs work concurrently on a database, one has to use the transactional features present in the 5.x release (here I use python3-bsddb3 6.1.0-1+b2 with libdb5.3 5.3.28-12): the documentation clearly says that it can be done. However I quickly ran in trouble, even with some basic tasks :</p> <ul> <li>Program 1 initializes records in a table </li> <li>Program 2 has to scan the records previously added by program 1 and updates them with additional data.</li> </ul> <p>To speed things up, there is an index for said additional data. When program 1 creates the records, the additional data isn't present, so the pointer to that record is added to the index under an empty key. Program 2 can then just quickly seek to the not-yet-updated records.</p> <p>Even when not run concurrently, the record updating program crashes after a few updates. First it complained about insufficient space in the mutex area. I had to resolve this with an obscure DB_CONFIG file and then run db_recover.</p> <p>Next, again after a few updates it complained '<em>Cannot allocate memory -- BDB3017 unable to allocate space from the buffer cache</em>'. db_recover and relaunching the program did the trick, only for it to crash again with the same error a few records later.</p> <p>I'm not even mentioning concurrent use: when one of the programs is launched while the other is running, they almost instantly crash with deadlock, panic about corrupted segments and ask to run recover. I made many changes so I went throug a wide spectrum of errors which often yield irrelevant matches when searched for. I even rewrote the db calls to use lmdb, which in fact works quite well and is really quick, which tends to indicate my program logic isn't at fault. Unfortunately it seems the datafile produced by lmdb is quite sparse, and quickly grew to unacceptable sizes.</p> <p>From what I said, it seems that maybe some resources are being leaked somewhere. I'm hesitant to rewrite all this directly in C to check if the problem can come from the Python binding.</p> <p>I can and I will update the question with code, but for the moment ti is long enough. I'm looking for people who have used the transactional stuff in BDB, for similar uses, which could point me to some of the gotchas.</p> <p>Thanks</p>
0
2016-09-20T22:10:14Z
39,919,736
<p>RPM (see <a href="http://rpm5.org" rel="nofollow">http://rpm5.org</a>) uses Berkeley DB in transactional mode. There are a fair number of gotchas, depending on what you are attempting.</p> <p>You have already found DB_CONFIG: you MUST configure the sizes for mutexes and locks; the defaults are invariably too small.</p> <p>Needing to run db_recover while developing is quite painful too. The best fix (imho) is to automate recovery while opening, by checking the return code for DB_RUNRECOVERY and then reopening the dbenv with DB_RECOVER.</p> <p>Deadlocks are usually design/coding errors: run db_stat -CA to see what is deadlocked (or what locks are held) and adjust your program. "Works with lmdb" isn't sufficient to claim working code ;-)</p> <p>Leaks can be seen with valgrind and/or by compiling BDB with -fsanitize=address. Note that valgrind will report false uninitialized-value errors unless you use valgrind suppressions and/or compile BDB to initialize its memory.</p>
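<p>(To illustrate the "automate recovery" suggestion with the bsddb3 binding mentioned in the question, a hedged sketch only: the flag set and the home directory are assumptions, and real code would also have to make sure only one process runs recovery at a time.)</p> <pre><code>from bsddb3 import db

BASE_FLAGS = (db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_MPOOL |
              db.DB_INIT_LOCK | db.DB_INIT_LOG)

def open_env(home='/path/to/dbenv'):   # hypothetical path
    env = db.DBEnv()
    try:
        env.open(home, BASE_FLAGS)
        return env
    except db.DBRunRecoveryError:
        # Environment is marked as needing recovery: reopen with DB_RECOVER.
        env.close()
        env = db.DBEnv()
        env.open(home, BASE_FLAGS | db.DB_RECOVER)
        return env
</code></pre>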
0
2016-10-07T14:30:42Z
[ "python", "python-3.x", "berkeley-db" ]
Pandas apply with list output gives ValueError once df contains timeseries
39,604,586
<p>I'm trying to implement an apply function that returns two values because the calculations are similar and pretty time consuming, so I don't want to do apply twice. The below is an MWE that is pretty stupid and I know there are easier ways to achieve what this MWE does. My actual function is more complicated, but I already run into an error with this MWE:</p> <p>So, I got this to work:</p> <pre><code>def function(row): return [row.A, row.A/2] df = pd.DataFrame({'A' : np.random.randn(8), 'B' : np.random.randn(8)}) df[['D','E']] = df.apply(lambda row: function(row), axis=1).apply(pd.Series) </code></pre> <p>However, this does not:</p> <pre><code>df2 = pd.DataFrame({'A' : np.random.randn(8), 'B' : pd.date_range('1/1/2011', periods=8, freq='H'), 'C' : np.random.randn(8)}) df2[['D','E']] = df2.apply(lambda row: function(row), axis=1).apply(pd.Series) </code></pre> <p>Instead, it gives me ValueError: Shape of passed values is (8, 2), indices imply (8, 3)</p> <p>I don't understand why changing the type of the B column would impact the outcome, it is not even used in the apply function at all?</p> <p>I guess I could avoid this issue in the example by temporary excluding the date column. However, in my function later I will need to use the date.</p> <p>Can someone explain me, why this example does not work? What changes by including a TS?</p>
3
2016-09-20T22:16:47Z
39,604,756
<p>Have <code>function</code> return a <code>pd.Series</code> instead. Returning a list makes <code>apply</code> try to fit the list back into the existing row (which has three columns here, hence the length mismatch). Returning a <code>pd.Series</code> tells pandas to expand the result into new columns instead.</p> <pre><code>import numpy as np import pandas as pd def function(row): return pd.Series([row.A, row.A/2]) df2 = pd.DataFrame({'A' : np.random.randn(8), 'B' : pd.date_range('1/1/2011', periods=8, freq='H'), 'C' : np.random.randn(8)}) df2[['D','E']] = df2.apply(function, axis=1) df2 </code></pre> <p><a href="http://i.stack.imgur.com/oreC4.png" rel="nofollow"><img src="http://i.stack.imgur.com/oreC4.png" alt="enter image description here"></a></p> <hr> <p><strong><em>Attempt to explain</em></strong> </p> <pre><code>s = pd.Series([1, 2, 3]) s 0 1 1 2 2 3 dtype: int64 </code></pre> <hr> <pre><code>s.loc[:] = [4, 5, 6] s 0 4 1 5 2 6 dtype: int64 </code></pre> <hr> <pre><code>s.loc[:] = [7, 8] </code></pre> <blockquote> <p>ValueError: cannot set using a slice indexer with a different length than the value</p> </blockquote>
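<p>(A hedged addendum: on newer pandas versions, 0.23 and later if memory serves, the list-returning version from the question can also be made to work by asking <code>apply</code> to expand the result into columns.)</p> <pre><code>import numpy as np
import pandas as pd

df2 = pd.DataFrame({'A': np.random.randn(8),
                    'B': pd.date_range('1/1/2011', periods=8, freq='H'),
                    'C': np.random.randn(8)})
# result_type='expand' turns each returned list into new columns
df2[['D', 'E']] = df2.apply(lambda row: [row.A, row.A / 2],
                            axis=1, result_type='expand')
</code></pre>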
2
2016-09-20T22:33:56Z
[ "python", "pandas", "time-series" ]
Can you construct csrf from request object - csrf constructor exception?
39,604,591
<p>I am following an example of user registration and my code looks like this</p> <pre><code>from django.views.decorators import csrf def register_user(request): args={} args.update(csrf(request)) #----&gt;Crashes here args["form"] = UserCreationForm() return render_to_response("register.html",args) </code></pre> <p>I get an exception at the statement </p> <pre><code>args.update(csrf(request)) </code></pre> <p>stating that the </p> <pre><code>module object is not callable. </code></pre> <p>Any suggestions on what I might be doing wrong ?</p>
0
2016-09-20T22:17:26Z
39,604,627
<p>A quick google shows that the correct import for the csrf decorator is</p> <pre><code>from django.views.decorators.csrf import csrf_protect </code></pre> <p>My guess (I don't have Django available to test) is that you're importing the <code>csrf</code> module itself rather than a function, which is why it isn't callable :)</p>
1
2016-09-20T22:21:53Z
[ "python", "django" ]
Can you construct csrf from request object - csrf constructor exception?
39,604,591
<p>I am following an example of user registration and my code looks like this</p> <pre><code>from django.views.decorators import csrf def register_user(request): args={} args.update(csrf(request)) #----&gt;Crashes here args["form"] = UserCreationForm() return render_to_response("register.html",args) </code></pre> <p>I get an exception at the statement </p> <pre><code>args.update(csrf(request)) </code></pre> <p>stating that the </p> <pre><code>module object is not callable. </code></pre> <p>Any suggestions on what I might be doing wrong ?</p>
0
2016-09-20T22:17:26Z
39,604,691
<p>There are two ways to CSRF-protect your Django website:</p> <h1>1 - Using the middleware, the simplest way:</h1> <p>The <code>django.middleware.csrf.CsrfViewMiddleware</code> automatically adds a CSRF token to the context.</p> <p>This middleware is enabled by default in your <code>settings.py</code> file and you can directly use this token in your template.</p> <p><strong>With this solution you have nothing to do but use the {% csrf_token %} tag in your template, as shown below.</strong></p> <h1>2 - Using the <code>csrf_protect</code> decorator:</h1> <p>If you deactivate the middleware (which is not recommended), you can still use the <code>csrf_protect</code> decorator (it seems to be the solution you're trying, but not with the correct import, as Danielle pointed out).</p> <p>But your problem seems to be that you don't use it as you should.</p> <p>It's a <strong>decorator</strong>, i.e. a function that returns a modified version of the function passed to it as a parameter. Here you're passing it a request object instead.</p> <p>With Python, you can use a decorator this way:</p> <pre><code>@decorator def function([...]): [...] </code></pre> <p>So your view should look like:</p> <pre><code>@csrf_protect def your_view(request, *args, **kwargs): # Your view code </code></pre> <h1>Using the <code>{% csrf_token %}</code> tag:</h1> <p>After using one of these solutions, you can directly use the <code>{% csrf_token %}</code> tag in your template, since the csrf token should be in your context at template rendering time (thanks to either the middleware or the <code>csrf_protect</code> decorator):</p> <pre><code>&lt;form&gt; {% csrf_token %} {{ form.as_p }} &lt;input type="submit" value="Submit" /&gt; &lt;/form&gt; </code></pre> <p>Here is more about CSRF protection with Django:</p> <p><a href="https://docs.djangoproject.com/en/1.10/ref/csrf/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/csrf/</a></p> <p>and here is more about decorators with Python:</p> <p><a href="https://wiki.python.org/moin/PythonDecorators" rel="nofollow">https://wiki.python.org/moin/PythonDecorators</a></p>
2
2016-09-20T22:28:04Z
[ "python", "django" ]
Can you construct csrf from request object - csrf constructor exception?
39,604,591
<p>I am following an example of user registration and my code looks like this</p> <pre><code>from django.views.decorators import csrf def register_user(request): args={} args.update(csrf(request)) #----&gt;Crashes here args["form"] = UserCreationForm() return render_to_response("register.html",args) </code></pre> <p>I get an exception at the statement </p> <pre><code>args.update(csrf(request)) </code></pre> <p>stating that the </p> <pre><code>module object is not callable. </code></pre> <p>Any suggestions on what I might be doing wrong ?</p>
0
2016-09-20T22:17:26Z
39,605,030
<p>Judging from your code, you need <code>django.template.context_processors.csrf()</code>, not <code>django.views.decorators.csrf</code>. This puts the csrf token in the template context.</p> <p>The recommended way is to use <code>render</code> instead of <code>render_to_response</code>. This will run all configured <a href="https://docs.djangoproject.com/en/1.10/ref/templates/api/#using-requestcontext" rel="nofollow">context processors</a>, including the <code>csrf</code> context processor.</p> <pre><code>from django.shortcuts import render def register_user(request): args = {} args["form"] = UserCreationForm() return render(request, "register.html", args) </code></pre> <p>This is what allows you to use the <code>{% csrf_token %}</code> template tag in your templates.</p> <p>You still need to use either the <code>CsrfViewMiddleware</code> (recommended) or <code>csrf_protect</code> decorator to actually protect your views. </p>
1
2016-09-20T23:03:35Z
[ "python", "django" ]
Sorting 2D array by Mathematical Equation
39,604,638
<p>I have a 2D array list that contains (x,y), however I want to sort this list by the Equation of Minimum value of (square root of (x^2 + y^2)).</p> <p>For example I have these four 2D lists:</p> <pre><code>(20,10) (3,4) (5,6) (1.2,7) </code></pre> <p>If I take the square root of each 2D array in this list and return the minimum of the sorted list, the output is:</p> <pre><code>(3,4) (1.2,7) (6.5,4) (5,6) (20,10) </code></pre> <p>the Code :</p> <pre><code>M=[ [20,10],[3,4],[5,6],[1.2,7],[6.5,4]] </code></pre> <p>s=np.sqrt(M)</p> <p>a=[]</p> <p>print s</p> <p>for i in range(0,h):</p> <pre><code> for j in range(0,w): a[i] =s[i][j]+a[i] </code></pre> <p>Any ideas?</p>
0
2016-09-20T22:23:00Z
39,605,073
<p>The code below will solve what you request; uncomment the print statements if you wish to see how the ordering works.</p> <pre><code>import math array = [(20,10), (3,4), (5,6), (1.2,7)] sortList = [] count = 0 tempList = [] placeholder = [] #Compute the Equation of Minimum Value for x,y in array: tempList.append(math.sqrt((x**2) + (y**2))) tempList.append(array[count]) sortList.append(tempList) tempList = [] count += 1 #Sort list count = 1 placeholder = sortList[0][:] ##print('ORIGINAL LIST\n', sortList) while count &lt; (len(sortList)): if sortList[count - 1][0] &lt; sortList[count][0]: ## print('THIS IS COUNT', count) count += 1 else: placeholder = sortList[count - 1][:] ## print("this is placeholder: ", placeholder) sortList[count - 1] = sortList[count] ## print(sortList) sortList[count] = placeholder ## print(sortList) placeholder = [] count = 1 </code></pre>
0
2016-09-20T23:07:44Z
[ "python", "arrays", "sorting" ]
Sorting 2D array by Mathematical Equation
39,604,638
<p>I have a 2D array list that contains (x,y), however I want to sort this list by the Equation of Minimum value of (square root of (x^2 + y^2)).</p> <p>For example I have these four 2D lists:</p> <pre><code>(20,10) (3,4) (5,6) (1.2,7) </code></pre> <p>If I take the square root of each 2D array in this list and return the minimum of the sorted list, the output is:</p> <pre><code>(3,4) (1.2,7) (6.5,4) (5,6) (20,10) </code></pre> <p>the Code :</p> <pre><code>M=[ [20,10],[3,4],[5,6],[1.2,7],[6.5,4]] </code></pre> <p>s=np.sqrt(M)</p> <p>a=[]</p> <p>print s</p> <p>for i in range(0,h):</p> <pre><code> for j in range(0,w): a[i] =s[i][j]+a[i] </code></pre> <p>Any ideas?</p>
0
2016-09-20T22:23:00Z
39,605,122
<p>Switch your data structure to a list of tuples and then sort using the minimum-value computation as the key function (with memoization for efficiency):</p> <pre><code>M = [(20, 10), (3, 4), (5,6), (1.2, 7), (6.5, 4)] def minimum_value(coordinate, dictionary={}): # intentional dangerous default value if coordinate not in dictionary: dictionary[coordinate] = coordinate[0]**2 + coordinate[1]**2 return dictionary[coordinate] M_sorted = sorted(M, key=minimum_value) print(M_sorted) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>[(3, 4), (1.2, 7), (6.5, 4), (5, 6), (20, 10)] </code></pre> <p>Since we're only sorting, we don't need to calculate the square roots; the squares order the points just as well.</p>
0
2016-09-20T23:12:57Z
[ "python", "arrays", "sorting" ]
Sorting 2D array by Mathematical Equation
39,604,638
<p>I have a 2D array list that contains (x,y), however I want to sort this list by the Equation of Minimum value of (square root of (x^2 + y^2)).</p> <p>For example I have these four 2D lists:</p> <pre><code>(20,10) (3,4) (5,6) (1.2,7) </code></pre> <p>If I take the square root of each 2D array in this list and return the minimum of the sorted list, the output is:</p> <pre><code>(3,4) (1.2,7) (6.5,4) (5,6) (20,10) </code></pre> <p>the Code :</p> <pre><code>M=[ [20,10],[3,4],[5,6],[1.2,7],[6.5,4]] </code></pre> <p>s=np.sqrt(M)</p> <p>a=[]</p> <p>print s</p> <p>for i in range(0,h):</p> <pre><code> for j in range(0,w): a[i] =s[i][j]+a[i] </code></pre> <p>Any ideas?</p>
0
2016-09-20T22:23:00Z
39,605,145
<p>Use the built-in sort method of a list with a comparison function (return a negative value when the first element should come first):</p> <pre><code>from math import sqrt def dist(elem): return sqrt(pow(elem[0], 2) + pow(elem[1], 2)) def sorting_func(first, second): if dist(first) &lt; dist(second): return -1 elif dist(second) &lt; dist(first): return 1 else: return 0 bla = [(3, 2), (5, 4)] bla.sort(sorting_func) print bla </code></pre>
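<p>(A hedged aside: the comparator above works on Python 2, but the same ordering is simpler with a key function, which also survives Python 3, where <code>cmp</code>-style sort functions are gone. The list below is the one from the question.)</p> <pre><code>from math import sqrt

def dist(elem):
    return sqrt(elem[0] ** 2 + elem[1] ** 2)

bla = [(20, 10), (3, 4), (5, 6), (1.2, 7), (6.5, 4)]
bla.sort(key=dist)
print bla  # [(3, 4), (1.2, 7), (6.5, 4), (5, 6), (20, 10)]
</code></pre>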
1
2016-09-20T23:14:32Z
[ "python", "arrays", "sorting" ]
Can SlidingWindows have half second periods in python apache beam?
39,604,646
<p>SlidingWindows seems to be rounding my periods. If I set the period to 1.5 and size to 3.5 seconds, it will create 3.5 windows every 1.0 second. It expected to have 3.5 second windows every 1.5 seconds. </p> <p>Is it possible to have a period that is a fraction of a second?</p>
1
2016-09-20T22:24:14Z
39,624,079
<p>It should be possible; I've filed <a href="https://issues.apache.org/jira/browse/BEAM-662" rel="nofollow">https://issues.apache.org/jira/browse/BEAM-662</a> to address this</p>
1
2016-09-21T18:32:55Z
[ "python", "google-cloud-dataflow", "dataflow", "apache-beam" ]
Java byte array to Ruby
39,604,672
<p>I have a Desktop application using java swing frames. I have to rewrite the application to either Ruby or python. However I understand java to certain extent - I need help rewriting certain piece of the code in java.</p> <p>1.</p> <pre><code>byte[] ModuleGuid = new byte[]{ (byte)0xe1, (byte)0x9a, (byte)0x69, (byte)0x01, (byte)0xb8, (byte)0xc2, (byte)0x49, (byte)0x80, (byte)0x87, (byte)0x7e, (byte)0x11, (byte)0xd4, (byte)0xd8, (byte)0xf1, (byte)0xbe, (byte)0x79 }; </code></pre> <p>2.</p> <pre><code>JAVA_PvAPI_SensorInfoEx[] lptSensorInfo = new JAVA_PvAPI_SensorInfoEx[(int)PsConstant.JAVA_PvAPI_GET_SENSOR_INFO_MAX]; </code></pre>
-2
2016-09-20T22:26:14Z
39,604,992
<ol> <li><p>There's no such thing as <code>byte</code> in Ruby. When you want to store a sequence of bytes, you store it in a string:</p> <pre><code>module_guid = "\xE1\x9A\x69\x01...".force_encoding('ASCII-8BIT') </code></pre> <p>If you want to copy and paste from Java code, then use:</p> <pre><code>module_guid = [0xe1, 0x9a, ...].pack('c*') </code></pre> <p>By the way, I didn't declare the variable as <code>ModuleGuid</code> because capitalized identifiers are for constants in Ruby, and I didn't see the <code>final</code> keyword in your Java code.</p></li> <li><p>Arrays in Ruby are just arrays. They can store any kind of object. There's no <code>int[]</code> or <code>String[]</code> or <code>Whatever[]</code> in Ruby. Ruby arrays don't have fixed sizes and can be expanded or shrunk at any time. Your second piece of Java code can be rewritten as:</p> <pre><code>sensor_info = [] </code></pre> <p>But, depending on your use-case, this statement itself may not be necessary, because in Ruby you have so many ways to obtain an array.</p></li> </ol>
2
2016-09-20T22:59:00Z
[ "java", "python", "ruby" ]
Running PySpark using Cronjob (crontab)
39,604,706
<p>First, I assume that we have <code>SPARK_HOME</code> set up, in my case it's at <code>~/Desktop/spark-2.0.0</code>. Basically, I want to run my PySpark script using Cronjob (e.g. <code>crontab -e</code>). My question is how to add environment path to make Spark script works with Cronjob. Here is my sample script, <code>example.py</code></p> <pre><code>import os from pyspark import SparkConf, SparkContext # Configure the environment if 'SPARK_HOME' not in os.environ: os.environ['SPARK_HOME'] = '~/Desktop/spark-2.0.0' conf = SparkConf().setAppName('example').setMaster('local[8]') sc = SparkContext(conf=conf) if __name__ == '__main__': ls = range(100) ls_rdd = sc.parallelize(ls, numSlices=10) ls_out = ls_rdd.map(lambda x: x+1).collect() f = open('test.txt', 'w') for item in ls_out: f.write("%s\n" % item) # save list to test.txt </code></pre> <p>My bash script in <code>run_example.sh</code> is as follows</p> <pre><code>rm test.txt ~/Desktop/spark-2.0.0/bin/spark-submit \ --master local[8] \ --driver-memory 4g \ --executor-memory 4g \ example.py </code></pre> <p>Here, I want to run <code>run_example.sh</code> every minutes using <code>crontab</code>. However, I don't know how to custom path when I run <code>crontab -e</code>. So far, I only see this <a href="https://zedar.gitbooks.io/spark-hadoop-notes/content/spark_job_running_in_cron.html" rel="nofollow">Gitbook link</a>. I have something like this in my Cronjob editor that doesn't run my code yet.</p> <pre><code>#!/bin/bash # add path to cron (this line is the one I don't know) PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin # run script every minutes * * * * * source run_example.sh </code></pre> <p>Thanks in advance!</p>
0
2016-09-20T22:29:30Z
39,605,198
<p>What you can do is add the following line to your <code>.bashrc</code> file in your home directory:</p> <pre><code>export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin </code></pre> <p>Then you can have the following entry in your crontab:</p> <pre><code>* * * * * source ~/.bashrc;sh run_example.sh </code></pre> <p>This line will execute your .bashrc file first, which will set the PATH value, and then it will execute run_example.sh.</p> <p>Alternatively, you can set the PATH in run_example.sh itself:</p> <pre><code>export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin rm test.txt ~/Desktop/spark-2.0.0/bin/spark-submit \ --master local[8] \ --driver-memory 4g \ --executor-memory 4g \ example.py </code></pre>
2
2016-09-20T23:21:58Z
[ "python", "cron", "pyspark", "crontab" ]