Columns: title (string), question_id (int64), question_body (string), question_score (int64), question_date (string), answer_id (int64), answer_body (string), answer_score (int64), answer_date (string), tags (list)
Removing zigzag line joining data in matplotlib
39,511,404
<p>I am plotting using the matplotlib.pyplot plot() method:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib
#matplotlib.style.use('ggplot')

plt.plot(df_both_grouped['TSD_mean'], df_both_grouped['hgd_mean'], 'or', color = 'blue')
plt.errorbar(df_both_grouped['TSD_mean'], df_both_grouped['hgd_mean'],
             yerr = df_both_grouped['TSD_err'], xerr = df_both_grouped['home_goal_err'],
             color = 'blue')
</code></pre>
<p>Everything works great except I cannot seem to get rid of the annoying zigzag line that connects my points (example below).</p>
<p><a href="http://i.stack.imgur.com/P95Nd.png" rel="nofollow"><img src="http://i.stack.imgur.com/P95Nd.png" alt="enter image description here"></a></p>
<p>N.B: There is an additional fitting line I wish to keep. I didn't add the code for brevity.</p>
<p>Is there an argument that I need to add/remove? I am sure this is a simple issue, but it is driving me insane ;)</p>
<p>Thanks in advance!</p>
0
2016-09-15T12:46:02Z
39,511,534
<p>You are looking for the <em>fmt</em> argument of the <em>errorbar</em> method.</p>
<p>Simply change your code to</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib
#matplotlib.style.use('ggplot')

plt.plot(df_both_grouped['TSD_mean'], df_both_grouped['hgd_mean'], 'or', color = 'blue')
plt.errorbar(df_both_grouped['TSD_mean'], df_both_grouped['hgd_mean'],
             yerr = df_both_grouped['TSD_err'], xerr = df_both_grouped['home_goal_err'],
             color = 'blue', fmt='None')
</code></pre>
<p>to get rid of the connecting lines, as documented <a href="http://matplotlib.org/1.2.1/api/axes_api.html?highlight=errorbar#matplotlib.axes.Axes.errorbar" rel="nofollow">here (errorbar)</a>.</p>
<p>Alternatively, you can also use the <em>linestyle</em> or <em>ls</em> argument (coming from the resulting <a href="http://matplotlib.org/1.2.1/api/artist_api.html#matplotlib.lines.Line2D" rel="nofollow">Line2D</a> object), just as with the ordinary plot command:</p>
<pre><code>plt.errorbar(df_both_grouped['TSD_mean'], df_both_grouped['hgd_mean'],
             yerr = df_both_grouped['TSD_err'], xerr = df_both_grouped['home_goal_err'],
             color = 'blue', ls='None')
</code></pre>
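<p>As a minimal, self-contained sketch of the same fix (synthetic arrays stand in for the <code>df_both_grouped</code> columns from the question):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.arange(5)
y = x ** 2

plt.plot(x, y, 'o', color='blue')        # the points themselves
plt.errorbar(x, y, yerr=0.5, xerr=0.2,
             color='blue', fmt='none')   # error bars only, no connecting line
plt.show()
</code></pre>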
0
2016-09-15T12:52:35Z
[ "python", "matplotlib", "plot" ]
What determines how "at most size bytes are read and returned" with Python read()?
39,511,613
<p>In the documentation for Python's input/output, it states under Reading and Writing Files:</p>
<p><a href="https://docs.python.org/3.5/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow">https://docs.python.org/3.5/tutorial/inputoutput.html#methods-of-file-objects</a></p>
<p>"When size is omitted or negative, the entire contents of the file will be read and returned; it’s your problem if the file is twice as large as your machine’s memory. Otherwise, at most size bytes are read and returned."</p>
<p>Let's take the following code:</p>
<pre><code>size = 1000
with open('file.txt', 'r') as f:
    while True:
        read_data = f.read(size)
        if not read_data:
            break
        print(read_data)  # outputs data in sizes equal to at most 1000 bytes
</code></pre>
<p>Here, <code>size</code> is at most 1000 bytes. What determines "at most"?</p>
<p>Let's say we are parsing rows of structured data. Each row is 750 bytes. Would read "cut off" the next row, or stop at the <code>\n</code>?</p>
-2
2016-09-15T12:56:41Z
39,511,682
<p><code>read</code> is not <code>readline</code> or <code>readlines</code>. It just reads bytes regardless of the file content (apart from the end-of-line translation, since your file is open in text mode).</p>
<ul>
<li>If there are 1000 bytes to be read in the buffer, it returns 1000 bytes (or fewer if the file has <code>\r\n</code> line endings (Windows CR+LF) and is read as text, in which case the <code>\r</code> chars are stripped).</li>
<li>If there are 700 bytes left, it returns 700 bytes (give or take the <code>\r</code> issue).</li>
<li>If there's nothing left to read, it returns an empty string (<code>len(read_data) == 0</code>).</li>
</ul>
<p>So no, it will not stop at a <code>\n</code>: a 1000-byte read will cut straight through the middle of a 750-byte row.</p>
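<p>A small, self-contained demonstration (the file contents are arbitrary) showing that <code>read(size)</code> returns fixed-size chunks that ignore line boundaries; note that in text mode, size counts characters rather than raw bytes:</p>
<pre><code>with open('file.txt', 'w') as f:
    f.write('first row\nsecond row\nthird row\n')

with open('file.txt', 'r') as f:
    while True:
        chunk = f.read(10)   # at most 10 characters per call
        if not chunk:
            break
        print(repr(chunk))   # chunks cut straight across the \n boundaries
</code></pre>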
1
2016-09-15T12:59:08Z
[ "python", "file", "input", "chunking" ]
Django field Error
39,511,646
<p>I am a Django beginner working on Django v1.9, trying to learn and replicate the DjangoGirls tutorial. I am stuck at "Dynamic data in templates" and "Django templates".</p>
<pre><code>from django.shortcuts import render
from .models import Post
from django.utils import timezone

# Create your views here.
def post_list(request):
    posts = Post.objects.filter(published_date__lte=timezone.now()).order_by('published_date')
    return render(request, 'blog/post_list.html', {'posts':posts})
</code></pre>
<p>The given views.py shows this error:</p>
<pre><code>Exception Value: Cannot resolve keyword 'published_date' into the field. Choices are: author, author_id, creat_date, id, publish_Data, text, title
</code></pre>
<p>I have tried everything possible but it's not working... kindly help.</p>
0
2016-09-15T12:58:00Z
39,511,679
<p>Your model's field is named <code>publish_Data</code> (see the list of choices in the error message), not <code>published_date</code>, so the lookup must use that name:</p>
<pre><code>from django.shortcuts import render
from .models import Post
from django.utils import timezone

def post_list(request):
    posts = Post.objects.filter(publish_Data__lte=timezone.now()).order_by('publish_Data')
    return render(request, 'blog/post_list.html', {'posts':posts})
</code></pre>
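<p>A cleaner long-term fix, sketched here under the assumption that the model lives in blog/models.py as in the tutorial, is to rename the field to the conventional name and run a migration:</p>
<pre><code>from django.db import models

class Post(models.Model):
    # renamed from publish_Data; afterwards run:
    #   python manage.py makemigrations blog
    #   python manage.py migrate
    published_date = models.DateTimeField(blank=True, null=True)
</code></pre>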
1
2016-09-15T12:59:02Z
[ "python", "django" ]
Python simple ProcessPoolExecutor example won't work
39,511,695
<p>I'm trying out multithreading in Python 3, but I can't figure out why my example won't work. It should just print the numbers and, as confirmation, print each number again. But it runs into many errors.</p>
<pre><code>from concurrent import futures

def print_me(num):
    print('Zahl: ' + str(num))
    return num

def say_bye(job):
    num = job.result()
    print(str(num) + ' is out!')

def test_multi():
    num_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
    with futures.ProcessPoolExecutor(max_workers=4) as executor:
        for num in num_list:
            job = executor.submit(print_me, num)
            job.add_done_callback(say_bye(job))

test_multi()
</code></pre>
<p>Theoretically, instead of:</p>
<pre><code>for num in num_list:
    job = executor.submit(print_me, num)
    job.add_done_callback(say_bye(job))
</code></pre>
<p>I could use this, or?</p>
<pre><code>executor.map(print_me, num_list)
</code></pre>
0
2016-09-15T12:59:40Z
39,576,162
<p>To get the code running, I just needed to add a main guard around the call:</p>
<pre><code>if __name__ == "__main__":
    test_multi()
</code></pre>
<p>(With <code>ProcessPoolExecutor</code> on platforms that spawn workers, notably Windows, the main module is re-imported in each worker process, so the entry point must be protected this way; otherwise each worker would try to start workers of its own.)</p>
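<p>Worth noting: even with the guard, the loop in the question passes <code>say_bye(job)</code>, a call, not a callable, to <code>add_done_callback</code>, which expects the function itself and later invokes it with the finished future. A corrected, self-contained sketch:</p>
<pre><code>from concurrent import futures

def print_me(num):
    print('Zahl: ' + str(num))
    return num

def say_bye(job):
    # called automatically once the future completes
    print(str(job.result()) + ' is out!')

def test_multi():
    with futures.ProcessPoolExecutor(max_workers=4) as executor:
        for num in range(21):
            job = executor.submit(print_me, num)
            job.add_done_callback(say_bye)  # pass the function, do not call it

if __name__ == "__main__":
    test_multi()
</code></pre>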
0
2016-09-19T14:53:06Z
[ "python", "multithreading" ]
Generating a list of random lists
39,511,708
<p>I'm new to Python, so I might be making basic errors; apologies first.</p>
<p>Here is the kind of result I'm trying to obtain:</p>
<pre><code>foo = [ ["B","C","E","A","D"],
        ["E","B","A","C","D"],
        ["D","B","A","E","C"],
        ["C","D","E","B","A"] ]
</code></pre>
<p>So basically, a list of lists of randomly permuted letters without repeats.</p>
<p>Here is the look of what I can get so far:</p>
<pre><code>foo = ['BDCEA', 'BDCEA', 'BDCEA', 'BDCEA']
</code></pre>
<p>The main problem being that every time it is the same permutation. This is my code so far:</p>
<pre><code>import random
import numpy as np

letters = ["A", "B", "C", "D", "E"]
nblines = 4
foo = np.repeat(''.join(random.sample(letters, len(letters))), nblines)
</code></pre>
<p>Help appreciated. Thanks</p>
1
2016-09-15T13:00:20Z
39,511,956
<p><code>np.repeat</code> repeats the same array. Your approach would work if you changed it to:</p>
<pre><code>[''.join(random.sample(letters, len(letters))) for _ in range(nblines)]
Out: ['EBCAD', 'BCEAD', 'EBDCA', 'DBACE']
</code></pre>
<p>This is a short way of writing this:</p>
<pre><code>foo = []
for _ in range(nblines):
    foo.append(''.join(random.sample(letters, len(letters))))

foo
Out: ['DBACE', 'CBAED', 'ACDEB', 'ADBCE']
</code></pre>
1
2016-09-15T13:12:10Z
[ "python" ]
Generating a list of random lists
39,511,708
1
2016-09-15T13:00:20Z
39,512,413
<p>Here's a plain Python solution using a "traditional" style <code>for</code> loop.</p>
<pre><code>from random import shuffle

nblines = 4
letters = list("ABCDE")
foo = []
for _ in range(nblines):
    shuffle(letters)
    foo.append(letters[:])

print(foo)
</code></pre>
<p><strong>typical output</strong></p>
<pre><code>[['E', 'C', 'D', 'A', 'B'], ['A', 'B', 'D', 'C', 'E'], ['A', 'C', 'B', 'E', 'D'], ['C', 'A', 'E', 'B', 'D']]
</code></pre>
<p>The <code>random.shuffle</code> function shuffles the list in-place. We append a <em>copy</em> of the list to <code>foo</code> using <code>letters[:]</code>, otherwise <code>foo</code> would just end up containing 4 references to the one list object.</p>
<hr>
<p>Here's a slightly more advanced version, using a generator function to handle the shuffling. Each time we call <code>next(sh)</code> it shuffles the <code>lst</code> list stored in the generator and returns a copy of it. So we can call <code>next(sh)</code> in a list comprehension to build the list, which is a little neater than using a traditional <code>for</code> loop. Also, list comprehensions can be slightly faster than using <code>.append</code> in a traditional <code>for</code> loop.</p>
<pre><code>from random import shuffle

def shuffler(seq):
    lst = list(seq)
    while True:
        shuffle(lst)
        yield lst[:]

sh = shuffler('ABCDE')
foo = [next(sh) for _ in range(10)]
for row in foo:
    print(row)
</code></pre>
<p><strong>typical output</strong></p>
<pre><code>['C', 'B', 'A', 'E', 'D']
['C', 'A', 'E', 'B', 'D']
['D', 'B', 'C', 'A', 'E']
['E', 'D', 'A', 'B', 'C']
['B', 'A', 'E', 'C', 'D']
['B', 'D', 'C', 'E', 'A']
['A', 'B', 'C', 'E', 'D']
['D', 'C', 'A', 'B', 'E']
['D', 'C', 'B', 'E', 'A']
['E', 'D', 'A', 'C', 'B']
</code></pre>
1
2016-09-15T13:31:57Z
[ "python" ]
Generating a list of random lists
39,511,708
1
2016-09-15T13:00:20Z
39,512,427
<p>The problem with your code is that the line</p>
<pre><code>foo = np.repeat(''.join(random.sample(letters, len(letters))), nblines)
</code></pre>
<p>will first create a random permutation and then repeat that same permutation nblines times. <code>numpy.repeat</code> does not repeatedly invoke a function; it repeats elements of an already existing array, which you created with <code>random.sample</code>.</p>
<p>Another thing is that numpy is designed to work with numbers, not strings. Here is a short code snippet (without using numpy) to obtain your desired result:</p>
<pre><code>[random.sample(letters, len(letters)) for i in range(nblines)]
</code></pre>
<p>Result: similar to this:</p>
<pre><code>foo = [ ["B","C","E","A","D"],
        ["E","B","A","C","D"],
        ["D","B","A","E","C"],
        ["C","D","E","B","A"] ]
</code></pre>
<p>I hope this helped ;)</p>
<p>PS: I see that others gave similar answers to this while I was writing it.</p>
2
2016-09-15T13:32:28Z
[ "python" ]
Data structures homework in Python
39,511,713
<p>I'm supposed to write a function called manipulate_data which will act as follows:</p>
<p>When given a list of integers, return a list where the first element is the count of positive numbers and the second element is the sum of negative numbers.</p>
<p>Here is my code:</p>
<pre><code>def manipulate_data(data):
    if isinstance(data, (list, tuple, set)):  # checking if its a list
        return [len([s for s in data if isinstance(s, int) and s &gt; 0]),
                sum(s for s in data if isinstance(s, int) and s &lt; 0)]
</code></pre>
<p>Code it should be tested on:</p>
<pre><code>import unittest

class ManipulateDataTestCases(unittest.TestCase):
    def test_only_lists_allowed(self):
        result = manipulate_data({})
        self.assertEqual(result, 'Only lists allowed', msg='Invalid argument')

    def test_it_returns_correct_output_with_positives(self):
        result = manipulate_data([1, 2, 3, 4])
        self.assertEqual(result, [4, 0], msg='Invalid output')

    def test_returns_correct_ouptut_with_negatives(self):
        result = manipulate_data([1, -9, 2, 3, 4, -5])
        self.assertEqual(result, [4, -14], msg='Invalid output')
</code></pre>
-6
2016-09-15T13:00:32Z
39,520,648
<p>Apparently, you have only to add the following at the end of your function (aligned with the <code>if</code>):</p>
<pre><code>    else:
        return 'Only lists allowed'
</code></pre>
<p>I would say it works pretty well... ;)</p>
<p><a href="http://i.stack.imgur.com/8W3t6.png" rel="nofollow"><img src="http://i.stack.imgur.com/8W3t6.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/duL3I.png" rel="nofollow"><img src="http://i.stack.imgur.com/duL3I.png" alt="enter image description here"></a></p>
-1
2016-09-15T21:28:14Z
[ "python" ]
Data structures homework in Python
39,511,713
-6
2016-09-15T13:00:32Z
39,544,969
<p>This should work well; I just needed to correct some things.</p>
<pre><code>def manipulate_data(data):
    if isinstance(data, list):
        return [sum(1 for n in data if isinstance(n, int) and n &gt; 0),
                sum(n for n in data if isinstance(n, int) and n &lt; 0)]
    else:
        return 'Only lists allowed'
</code></pre>
0
2016-09-17T09:06:12Z
[ "python" ]
Data structures homework in Python
39,511,713
-6
2016-09-15T13:00:32Z
40,057,215
<p>This should work now for home study:</p>
<pre><code>import unittest

def manipulate_data(data):
    if isinstance(data, list):
        return [sum(1 for n in data if isinstance(n, int) and n &gt; 0),
                sum(n for n in data if isinstance(n, int) and n &lt; 0)]
    else:
        return 'Only lists allowed'

class manipulateDataTestCases(unittest.TestCase):
    def test_only_lists_allowed(self):
        result = manipulate_data({})
        self.assertEqual(result, 'Only lists allowed', msg='invalid argument')

    def test_returns_correct_output_with_positives(self):
        result = manipulate_data([1, 2, 3, 4])
        self.assertEqual(result, [4, 0], msg='invalid output')

    def test_returns_correct_output_with_negatives(self):
        result = manipulate_data([1, -9, 2, 3, 4, -5])
        self.assertEqual(result, [4, -14], msg='invalid output')

if __name__ == '__main__':
    unittest.main()
</code></pre>
0
2016-10-15T09:11:58Z
[ "python" ]
Import Module Benchmark
39,511,774
<p>I have been doing some performance testing and got rather curious at my latest findings.</p>
<pre><code>&gt;&gt;&gt; timeit("import timeit")
0.8010718822479248
&gt;&gt;&gt; timeit("from timeit import timeit")
1.3421258926391602
</code></pre>
<p>How can importing the <strong>whole module be faster</strong> than importing just a <strong>specific part</strong>?</p>
<p>Based on the answers, I have been doing some tests and I came across the following:</p>
<pre><code>&gt;&gt;&gt; timeit("x = timeit.timeit", setup="import timeit")
0.09205102920532227
&gt;&gt;&gt; timeit("x = timeit", setup="from timeit import timeit")
0.0244600772857666
</code></pre>
<p>Regarding <strong>performance</strong>, if you plan on using a <em>class/function/submodule</em> a lot, it takes less time if you specify where to import from, and this can offset or even make up for the time lost in the import.</p>
3
2016-09-15T13:03:16Z
39,511,879
<p><code>import timeit</code> only has to import the module and bind it to one name, while <code>from timeit import timeit</code> performs that same module import <em>plus</em> an extra step: looking up the <code>timeit</code> attribute in the module and binding it. Hence the results.</p>
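<p>A small sketch of why the from-import cannot be cheaper: both forms load and cache the entire module, and the from-form merely adds an attribute binding on top.</p>
<pre><code>import sys

from timeit import timeit  # the whole timeit module is still executed and cached

print('timeit' in sys.modules)  # True: the full module object exists either way
</code></pre>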
2
2016-09-15T13:08:36Z
[ "python", "performance", "time" ]
Import Module Benchmark
39,511,774
3
2016-09-15T13:03:16Z
39,511,889
<p>When you import one part of a module, Python has to search the module's namespace for that name, push the object onto the stack and store it, whereas for importing the module at once it just does one step: binding the module to its name.</p>
<p>For a better demonstration you can use the <code>dis</code> module to check the bytecode of the two recipes separately:</p>
<pre><code>In [10]: def import_all():
   ....:     import timeit
   ....:

In [11]: def import_one():
   ....:     from timeit import timeit
   ....:

In [12]: import dis

In [13]: dis.dis(import_all)
  2           0 LOAD_CONST               1 (0)
              3 LOAD_CONST               0 (None)
              6 IMPORT_NAME              0 (timeit)
              9 STORE_FAST               0 (timeit)
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE

In [14]: dis.dis(import_one)
  2           0 LOAD_CONST               1 (0)
              3 LOAD_CONST               2 (('timeit',))
              6 IMPORT_NAME              0 (timeit)
              9 IMPORT_FROM              0 (timeit)
             12 STORE_FAST               0 (timeit)
             15 POP_TOP
             16 LOAD_CONST               0 (None)
             19 RETURN_VALUE
</code></pre>
<p>As you can see, in the second case we have an <code>IMPORT_FROM</code> and a <code>POP_TOP</code> more than in the first one.</p>
2
2016-09-15T13:09:08Z
[ "python", "performance", "time" ]
Why wrap a single static method in a class?
39,511,826
<p>I'm reading some code where the author is using a coding style with which I am unfamiliar; they put absolutely every function definition into a class. For example (details removed so as to not identify the author and codebase):</p>
<pre><code>class CSVChecker:
    @staticmethod
    def is_ok(file):
        #some stuff that could return False
        return True
</code></pre>
<p>and that's the end of this class. Many similar. No <code>__init__</code> or <code>self</code>. Usage: <code>if CSVChecker.is_ok(afile)</code></p>
<p>Is this just an odd stylistic quirk carried over to Python from some other language? Or is there a Pythonic reason for this, rather than just <code>def csv_check_file_ok(file):</code> at the top level of the file?</p>
0
2016-09-15T13:05:38Z
39,513,383
<p>@deceze probably has the answer in his comment. Some of these function-objects are indeed stored in lists, and some elements of the lists may be instances of "proper" objects. Stripped down to the absolute basics:</p>
<pre><code>class F1:
    def __init__(self, a):
        self.max = a
    def ok(self, x):
        return x &lt; self.max

class F2:
    @staticmethod
    def ok(x):
        return x &gt; 0
</code></pre>
<p>elsewhere</p>
<pre><code>checkers = []
...
checkers.append( F1(i+j) )
checkers.append( F2 )
....
if all( check.ok(x) for check in checkers ):
    ...
</code></pre>
<p>Personally I'd not have bothered with <code>@staticmethod</code>: I'd just have written the classes that didn't need any initialization with a dummy <code>def __init__(self): pass</code> and instantiated <code>F2()</code>. Also there's probably some carry-over of style from another language (Java?), because not all the single-static-method function objects are used in this way. Then again, if the author thought that they might be so used in the future, or might acquire the need to be instantiated with parameters, it makes sense.</p>
<p>Anyway, I've learned something, and hopefully others will too.</p>
<p>EDIT added later: this usage accomplishes the same as what might have been done using <code>functools.partial</code>:</p>
<pre><code>def f1( x, max=None ):
    return x &lt; max

# elsewhere
...
checkers.append( functools.partial( f1, max=i+j ))
</code></pre>
<p>Now pondering which is best. Also how this usage of classes fits into the "inheritance, composition, aggregation" classification of objects. And whether it's an exception to</p>
<p><em>There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch.</em></p>
1
2016-09-15T14:13:27Z
[ "python", "oop" ]
Pygame (A bit racey) Objects are there, but not visible
39,511,858
<p>I have a couple of objects defined/entered into variables:</p>
<pre><code>car1 = pygame.image.load('C:\Users\itzrb_000\Downloads\download (3).png')
car2 = pygame.image.load('C:\Users\itzrb_000\Downloads\download (2).png')
car3 = pygame.image.load('C:\Users\itzrb_000\Downloads\images.png')
rock = pygame.image.load('C:\Users\itzrb_000\Downloads\Rock.png')
</code></pre>
<p>Then I created a function called <code>cars</code> which takes the x and y start position as arguments:</p>
<pre><code>def cars(thingx, thingy):
    objects = [car1, car2, car3, rock]
    num_of_objects = random.randint(1,4)
    for x in range(num_of_objects):
        y = random.choice(objects)
        gameDisplay.blit(y,(random.randrange(130, 625),-300))
</code></pre>
<p>Inside the <code>game_loop</code> function:</p>
<pre><code>thing_startx = random.randrange(130,625)
thing_starty = -600
thing_speed = 10
</code></pre>
<p>Then I call all the objects:</p>
<pre><code>background()
things(thing_startx, thing_starty, thing_width, thing_height, black)
car(x,y)
cars(thing_startx, thing_starty)
thing_starty += thing_speed
score(dodged)
</code></pre>
<p><code>car</code> is the car that the player plays with; <code>things</code> is a function that draws a black square that later on is the thing that I shouldn't crash into:</p>
<pre><code>def things(thingx, thingy, thingw, thingh, color):
    pygame.draw.rect(gameDisplay, color, [thingx, thingy, thingw, thingh])
</code></pre>
<p>I'm trying to display the car images instead of just a black block, so I commented out the <code>things</code> function and its call in <code>game_loop</code>. When I tested, there's nothing visible; however, I noticed that my car at some point crashes, so the conclusion is that the car image is there, but it's not visible.</p>
<p>Any idea on how to solve this?</p>
0
2016-09-15T13:07:24Z
39,513,920
<p>You're blitting the objects off screen. <code>gameDisplay.blit(y,(random.randrange(130, 625),-300))</code> will blit the image at an x between 130 and 625 and a y of -300 (which means 300 pixels above the top edge of the screen, since y increases downward).</p>
<p>But is the function doing what you want it to do? Every loop you're blitting a random number of random objects at random positions. I strongly suggest finding a better tutorial which teaches basics such as code structure, good naming, how to use docstrings and object-oriented programming, like <a href="https://www.youtube.com/watch?v=bMt47wvK6u0&amp;list=PL4Yp6gRH-R1Birdm-Gs-SdBFWLUC1q3Fa&amp;index=4" rel="nofollow">this</a>.</p>
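<p>A small sketch of the coordinate convention, assuming a display created with <code>pygame.display.set_mode</code> and one of the images loaded in the question (the path here is a stand-in), showing how an object that starts above the screen becomes visible as its y grows:</p>
<pre><code>import random
import pygame

pygame.init()
gameDisplay = pygame.display.set_mode((800, 600))
rock = pygame.image.load('Rock.png')  # stand-in path

thing_x = random.randrange(130, 625)
thing_y = -300          # 300 px above the top edge: not visible yet
thing_speed = 10

# inside the game loop, each frame:
thing_y += thing_speed  # moves down; visible once thing_y &gt; -rock.get_height()
gameDisplay.blit(rock, (thing_x, thing_y))
pygame.display.update()
</code></pre>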
1
2016-09-15T14:38:19Z
[ "python", "pygame" ]
Why does providing an environment to my Python subprocess cause it to abort?
39,511,923
<p>I am attempting to run a subprocess through Python of an executable developed by my company; let's call it <code>prog.exe</code>. I can run this command just fine from CMD; I can run it just fine through <code>subprocess</code>; but if I try to pass <code>env</code> to <code>subprocess</code>, I get an error:</p>
<pre><code>C:\Users\me&gt; prog.exe -h
prog V1.2.2 (Build 09-07-2016.12.52)
more dummy help text...

C:\Users\me&gt; python
Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import subprocess
&gt;&gt;&gt; import os
&gt;&gt;&gt; subprocess.Popen("prog.exe -h").wait()
prog V1.2.2 (Build 09-07-2016.12.52)
more dummy help text...
0
&gt;&gt;&gt;
&gt;&gt;&gt; subprocess.Popen("prog.exe -h", env=os.environ).wait()
</code></pre>
<p>After executing that command, the following dialog opens, informing me that "<code>prog.exe</code> has stopped working" and "Windows is checking for a solution to the problem...":</p>
<p><a href="http://i.stack.imgur.com/KNukk.png" rel="nofollow"><img src="http://i.stack.imgur.com/KNukk.png" alt="prog.exe has stopped working"></a></p>
<p>which turns into "A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.":</p>
<p><a href="http://i.stack.imgur.com/S2wMj.png" rel="nofollow"><img src="http://i.stack.imgur.com/S2wMj.png" alt="prog.exe has truly stopped working"></a></p>
<p>When I close that dialog, the subprocess exits with an error:</p>
<pre><code>255
&gt;&gt;&gt;
</code></pre>
<p>What is going on? I thought that <code>os.environ</code> is essentially passed as <code>env</code> to <code>subprocess</code> if I do not specify <code>env</code>. So why, when I specify it, does it cause my program to die?</p>
<p>I have tried Python 3.5 and Python 2.7 with the same results.</p>
0
2016-09-15T13:10:55Z
39,514,071
<p>As described <a href="http://stackoverflow.com/a/19023293/1449189">in an older SO post</a>, the <code>os.environ</code> keys are stored/accessed in a case-insensitive manner. <code>nt.environ</code> preserves the case of the environment variables as passed into the Python process.</p>
<p>In this case, <code>prog.exe</code> is evidently accessing environment variables in a case-sensitive manner, thus it requires the original mixed-case environment to be passed in.</p>
<p>Using <code>nt.environ</code> rather than <code>os.environ</code> resolves the issue:</p>
<pre><code>&gt;&gt;&gt; import nt
&gt;&gt;&gt; subprocess.Popen("prog.exe -h", env=nt.environ).wait()
prog V1.2.2 (Build 09-07-2016.12.52)
more dummy help text...
0
&gt;&gt;&gt;
</code></pre>
1
2016-09-15T14:45:40Z
[ "python", "windows", "subprocess", "case-sensitive" ]
After reading a csv using the header/skiprows argument, values become NaN
39,511,976
<p>I'm trying to open a csv file with the header spanning multiple rows. To avoid dealing with MultiIndex I am using the header argument to skip some lines, but all values become NaNs.</p>
<p>An example which reproduces the error (the contents of test.csv):</p>
<pre><code>,,
x,a,c
y,b,d
labels,l1,l2
2016-01-01,1,6
2016-01-02,2.0,7.0
2016-01-03,3.0,8
</code></pre>
<pre><code>t = pandas.read_csv('test.csv', skiprows=3, header=[0], index_col=[0])
</code></pre>
<p>or</p>
<pre><code>t = pandas.read_csv('test.csv', header=[3], index_col=[0])
</code></pre>
<p>produce the same output</p>
<pre><code>            l1  l2
labels
2016-01-01 NaN NaN
2016-01-02 NaN NaN
2016-01-03 NaN NaN

[3 rows x 2 columns]
</code></pre>
<p>When I'm using all 3 header rows</p>
<pre><code>t = pandas.read_csv('test.csv', header=[1,2,3], index_col=[0])
</code></pre>
<p>it works and I can access the data.</p>
<p>Am I missing something or is this a bug?</p>
<p>PS: I'm now using the MultiIndex; I had a problem where I got a KeyError because I forgot one argument (the real header has 8 rows ...)</p>
0
2016-09-15T13:13:28Z
39,512,109
<p>how about this:</p>
<pre><code>my_file = 'test.csv'
df = pd.read_csv(my_file, sep=',', names=['labels', 'l1', 'l2'], skiprows=4, header=None)
</code></pre>
<p>Forget about the first 4 rows entirely and specify the headers yourself.</p>
1
2016-09-15T13:19:23Z
[ "python", "pandas" ]
After reading a csv using the header/skiprows argument, values become NaN
39,511,976
0
2016-09-15T13:13:28Z
39,513,207
<p>try this:</p>
<pre><code>In [20]: pd.read_csv(filename, skiprows=3)
Out[20]:
       labels   l1   l2
0  2016-01-01  1.0  6.0
1  2016-01-02  2.0  7.0
2  2016-01-03  3.0  8.0
</code></pre>
1
2016-09-15T14:06:00Z
[ "python", "pandas" ]
How to recursively query in django efficiently?
39,511,993
<p>I have a model, which looks like:</p>
<pre><code>class StaffMember(models.Model):
    id = models.OneToOneField(to=User, unique=True, primary_key=True, related_name='staff_member')
    supervisor = models.ForeignKey(to='self', null=True, blank=True, related_name='team_members')
</code></pre>
<p>My current hierarchy of the team is designed in such a way that there is, let's say, an Admin (who is at the topmost point of the hierarchy). Now, let's say 3 people (A, B, C) report to Admin, and each one of A, B and C have their own team reporting to them, and so on.</p>
<p>I want to find all the team members (boiling down to the bottom-most level of the hierarchy) for any employee. My current method to get all the team members of a person is like:</p>
<pre><code>def get_team(self):
    team = [self]
    for c in self.team_members.all():
        team += list(c.get_team())
        if len(team) &gt; 2000:
            break
    return team
</code></pre>
<p>I get the team members of a member by:</p>
<pre><code>member = StaffMember.objects.get(pk=72)
team = member.get_team()
</code></pre>
<p>But obviously, this leads to a lot of db calls and my API eventually times out. What could be a more efficient way to fetch all the members of a team?</p>
2
2016-09-15T13:14:07Z
39,932,852
<p>I found a solution to the problem. The recursive solution takes the node, goes to its first child and goes deep down to the bottom of the hierarchy, then comes back up again to the second child (if it exists), and then again goes down to the bottom. In short, it explores all the nodes one by one and appends all the members to an array. The solution I came up with fetches the members layer-wise:</p>
<pre><code>def get_final_team(qs):
    team = []
    staffmembers = StaffMember.objects.filter(supervisor__in=qs)
    team += staffmembers
    if staffmembers:
        interim_team_qs = get_final_team(staffmembers)
        for member in interim_team_qs:
            team.append(member)
    return team

member = StaffMember.objects.get(id__id=user_id)
new_list = [member]
new_list = get_final_team(new_list)
</code></pre>
<p>The number of db calls this method entails is the number of layers (of hierarchy) that are present beneath the member whose team we want to find out.</p>
0
2016-10-08T13:19:46Z
[ "python", "django", "recursion", "list-comprehension", "django-orm" ]
How to recursively query in django efficiently?
39,511,993
2
2016-09-15T13:14:07Z
39,933,958
<p>If you're using a database that supports recursive common table expressions (e.g. PostgreSQL), this is precisely the use-case. Note that the recursive join must match each row's <code>supervisor</code> against the ids already in the team, so that the query walks <em>down</em> the hierarchy from the starting member:</p>
<pre><code>team = StaffMember.objects.raw('''
    WITH RECURSIVE team(id, supervisor) AS (
          SELECT id, supervisor
          FROM staff_member
          WHERE id = 42
        UNION ALL
          SELECT sm.id, sm.supervisor
          FROM staff_member AS sm, team AS t
          WHERE sm.supervisor = t.id
    )
    SELECT * FROM team
''')
</code></pre>
<p>References: <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/" rel="nofollow">Raw SQL queries in Django</a><br> <a href="https://www.postgresql.org/docs/current/static/queries-with.html" rel="nofollow">Recursive Common Table Expressions in PostgreSQL</a></p>
0
2016-10-08T15:09:56Z
[ "python", "django", "recursion", "list-comprehension", "django-orm" ]
Convert whole dataframe from lower case to upper case with Pandas
39,512,002
<p>I have a dataframe like the one displayed below:</p>
<pre><code># Create an example dataframe about a fictional army
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks'],
            'company': ['1st', '1st', '2nd', '2nd'],
            'deaths': ['kkk', 52, '25', 616],
            'battles': [5, '42', 2, 2],
            'size': ['l', 'll', 'l', 'm']}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'deaths', 'battles', 'size'])
</code></pre>
<p><a href="http://i.stack.imgur.com/fiOEt.png" rel="nofollow"><img src="http://i.stack.imgur.com/fiOEt.png" alt="enter image description here"></a></p>
<p>My goal is to transform every single string inside of the dataframe to upper case so that it looks like this:</p>
<p><a href="http://i.stack.imgur.com/gprF6.png" rel="nofollow"><img src="http://i.stack.imgur.com/gprF6.png" alt="enter image description here"></a></p>
<p>Notice: all data types are objects and must not be changed; the output must contain all objects. I want to avoid converting every single column one by one... I would like to do it generally over the whole dataframe, possibly.</p>
<p>What I tried so far is this, but without success:</p>
<pre><code>df.str.upper()
</code></pre>
0
2016-09-15T13:14:27Z
39,512,116
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html" rel="nofollow">astype()</a> will cast each series to the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#astype" rel="nofollow">dtype</a> object (string) and then call the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.html#pandas.Series.str" rel="nofollow">str()</a> method on the converted series to get the string literally and call the function <a href="https://docs.python.org/2/library/stdtypes.html#str.upper" rel="nofollow">upper()</a> on it. Note that after this, the dtype of all columns changes to object.</p> <pre><code>In [17]: df Out[17]: regiment company deaths battles size 0 Nighthawks 1st kkk 5 l 1 Nighthawks 1st 52 42 ll 2 Nighthawks 2nd 25 2 l 3 Nighthawks 2nd 616 2 m In [18]: df.apply(lambda x: x.astype(str).str.upper()) Out[18]: regiment company deaths battles size 0 NIGHTHAWKS 1ST KKK 5 L 1 NIGHTHAWKS 1ST 52 42 LL 2 NIGHTHAWKS 2ND 25 2 L 3 NIGHTHAWKS 2ND 616 2 M </code></pre> <p>You can later convert the 'battles' column to numeric again, using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html#pandas.to_numeric" rel="nofollow">to_numeric()</a>:</p> <pre><code>In [42]: df2 = df.apply(lambda x: x.astype(str).str.upper()) In [43]: df2['battles'] = pd.to_numeric(df2['battles']) In [44]: df2 Out[44]: regiment company deaths battles size 0 NIGHTHAWKS 1ST KKK 5 L 1 NIGHTHAWKS 1ST 52 42 LL 2 NIGHTHAWKS 2ND 25 2 L 3 NIGHTHAWKS 2ND 616 2 M In [45]: df2.dtypes Out[45]: regiment object company object deaths object battles int64 size object dtype: object </code></pre>
5
2016-09-15T13:19:39Z
[ "python", "pandas", "type-conversion", "uppercase", "lowercase" ]
Convert whole dataframe from lower case to upper case with Pandas
39,512,002
0
2016-09-15T13:14:27Z
39,512,200
<p>Since <code>str</code> only works for series, you can apply it to each column individually then concatenate:</p>
<pre><code>In [6]: pd.concat([df[col].astype(str).str.upper() for col in df.columns], axis=1)
Out[6]:
     regiment company deaths battles size
0  NIGHTHAWKS     1ST    KKK       5    L
1  NIGHTHAWKS     1ST     52      42   LL
2  NIGHTHAWKS     2ND     25       2    L
3  NIGHTHAWKS     2ND    616       2    M
</code></pre>
<hr>
<p>Edit: <strong>performance comparison</strong></p>
<pre><code>In [10]: %timeit df.apply(lambda x: x.astype(str).str.upper())
100 loops, best of 3: 3.32 ms per loop

In [11]: %timeit pd.concat([df[col].astype(str).str.upper() for col in df.columns], axis=1)
100 loops, best of 3: 3.32 ms per loop
</code></pre>
<p>Both answers perform equally on a small dataframe.</p>
<pre><code>In [15]: df = pd.concat(10000 * [df])

In [16]: %timeit pd.concat([df[col].astype(str).str.upper() for col in df.columns], axis=1)
10 loops, best of 3: 104 ms per loop

In [17]: %timeit df.apply(lambda x: x.astype(str).str.upper())
10 loops, best of 3: 130 ms per loop
</code></pre>
<p>On a large dataframe my answer is slightly faster.</p>
2
2016-09-15T13:23:52Z
[ "python", "pandas", "type-conversion", "uppercase", "lowercase" ]
Py.test mixin class can't access `self`
39,512,042
<p>I am trying to make a mixin for a shared set of tests. I want to be able to inherit from the mixin whenever I want those generic tests to run.</p>
<p>Here is some of my mixin:</p>
<pre><code>class CommonRuleWhenTestsMixin(TestCase):
    def test_returns_false_if_rule_inactive(self):
        self.rule.active = False
        assert not self.rule.when(self.sim)
</code></pre>
<p>Here is when I use the mixin:</p>
<pre><code>class TestWhen(CommonRuleWhenTestsMixin):
    def setUp(self):
        self.customer = mommy.make(Customer)
        self.rule = mommy.make(
            UsageRule,
            customer=self.customer,
            max_recharges_per_month=2
        )
        self.sim = mommy.make(
            Sim,
            msisdn='0821234567',
            customer=self.customer
        )
        assert self.rule.when(self.sim)

    def test_returns_false_if_airtime_max_recharges_exceeded(self):
        self.rule.recharge_type = AIRTIME
        mommy.make(
            SimRechargeHistory,
            sim=self.sim,
            product_type=AIRTIME,
            _quantity=3
        )
        assert not self.rule.when(self.sim)
</code></pre>
<p>I keep getting this message:</p>
<pre><code>_________ CommonRuleWhenTestsMixin.test_returns_false_if_rule_inactive _________
simcontrol/rules/tests/test_models.py:14: in test_returns_false_if_rule_inactive
    self.rule.active = False
E   AttributeError: 'CommonRuleWhenTestsMixin' object has no attribute 'rule'
</code></pre>
<p>How can my mixin access the variables set on <code>self</code> by the child class?</p>
0
2016-09-15T13:16:41Z
39,513,850
<p>Your mixin inherits from <code>unittest.TestCase</code>, so its test gets picked up by pytest on its own (and would probably get picked up by <code>unittest</code> as well), without the child's <code>setUp</code> ever running.</p>
<p>Instead, don't inherit your mixin from anything (or inherit from <code>object</code> on Python 2), and make your <code>TestWhen</code> class inherit from both <code>unittest.TestCase</code> and <code>CommonRuleWhenTestsMixin</code>.</p>
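<p>A sketch of the suggested structure (class bodies elided where they match the question):</p>
<pre><code>from unittest import TestCase

class CommonRuleWhenTestsMixin(object):  # not a TestCase: won't be collected alone
    def test_returns_false_if_rule_inactive(self):
        self.rule.active = False
        assert not self.rule.when(self.sim)

class TestWhen(CommonRuleWhenTestsMixin, TestCase):
    def setUp(self):
        ...  # create self.rule and self.sim as in the question
</code></pre>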
2
2016-09-15T14:35:20Z
[ "python", "py.test" ]
Writing to two files in parallel in Python
39,512,186
<p>I am just trying to write to two files in parallel with the help of threading.</p>
<pre><code>def dmesg (i):
    cmd = 'dmesg'
    print cmd
    (status, cmd_out) = commands.getstatusoutput(cmd)
    fil = open('dmesg_logs', 'w')
    fil.write(cmd_out)
    fil.close()

def dump (i):
    cmd = 'lsmod'
    print cmd
    (status, cmd_out) = commands.getstatusoutput(cmd)
    fil = open('logs', 'w')
    fil.write(cmd_out)
    fil.close()

if __name__ == "__main__":
    t1 = threading.Thread(target = dmesg, args=(0,))
    t1.start()
    t2 = threading.Thread(target = dump, args=(0,))
    t2.start()

    while True :
        "My own code"
</code></pre>
<p>Here my problem is that the logs file is not created in thread 2. Can I know what I am doing wrong?</p>
1
2016-09-15T13:23:20Z
39,675,378
<p>Found the solution. The code below works for me:</p>
<pre><code>cmd = ['dmesg']
with open('dmesg_log.txt', 'w') as out1:
    proc = subprocess.Popen(cmd, shell = True, stdout=out1)
</code></pre>
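<p>A sketch extending that idea to both commands so the two log files really are written concurrently (the child processes do the writing, so no threads are needed):</p>
<pre><code>import subprocess

with open('dmesg_logs', 'w') as out1, open('logs', 'w') as out2:
    p1 = subprocess.Popen(['dmesg'], stdout=out1)  # each child writes its own file
    p2 = subprocess.Popen(['lsmod'], stdout=out2)
    p1.wait()
    p2.wait()
</code></pre>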
0
2016-09-24T10:33:39Z
[ "python", "python-2.7" ]
Python Scrapy - Execute code after spider exits
39,512,249
<p>I am not able to find an answer to this question: how can I execute Python code after a Scrapy spider exits?</p>
<p>I did the following inside the function which parses the response (<code>def parse_item(self, response):</code>): I called <code>self.my_function()</code> and then defined <code>my_function()</code>, but the problem is that it is still inside the loop of the spider. My main idea is to execute a given code in a function, outside the spider's loop, with the gathered data. Thanks.</p>
0
2016-09-15T13:26:03Z
39,516,118
<p>Use the <a href="http://doc.scrapy.org/en/latest/topics/signals.html#spider-closed" rel="nofollow">closed</a> method of the spider class as follows:</p>
<pre><code>class MySpider(scrapy.Spider):
    # some attributes
    spider_attr = []

    def parse(self, response):
        # do your logic here
        # page_text = response.xpath('//text()').extract()
        self.spider_attr.append(whatever)

    def closed(self, reason):
        # will be called when the crawler process ends
        # any code
        # do something with collected data
        for i in self.spider_attr:
            print i
</code></pre>
1
2016-09-15T16:32:14Z
[ "python", "scrapy" ]
calculating Gini coefficient in Python/numpy
39,512,260
<p>I'm calculating the <a href="https://en.wikipedia.org/wiki/Gini_coefficient" rel="nofollow">Gini coefficient</a> (similar to: <a href="http://stackoverflow.com/questions/31416664/python-gini-coefficient-calculation-using-numpy">Python - Gini coefficient calculation using Numpy</a>) but I get an odd result. For a uniform distribution sampled from <code>np.random.rand()</code>, the Gini coefficient is 0.3, but I would have expected it to be close to 0 (perfect equality). What is going wrong here?</p>
<pre><code>def G(v):
    bins = np.linspace(0., 100., 11)
    total = float(np.sum(v))
    yvals = []
    for b in bins:
        bin_vals = v[v &lt;= np.percentile(v, b)]
        bin_fraction = (np.sum(bin_vals) / total) * 100.0
        yvals.append(bin_fraction)
    # perfect equality area
    pe_area = np.trapz(bins, x=bins)
    # lorenz area
    lorenz_area = np.trapz(yvals, x=bins)
    gini_val = (pe_area - lorenz_area) / float(pe_area)
    return bins, yvals, gini_val

v = np.random.rand(500)
bins, result, gini_val = G(v)
plt.figure()
plt.subplot(2, 1, 1)
plt.plot(bins, result, label="observed")
plt.plot(bins, bins, '--', label="perfect eq.")
plt.xlabel("fraction of population")
plt.ylabel("fraction of wealth")
plt.title("GINI: %.4f" %(gini_val))
plt.legend()
plt.subplot(2, 1, 2)
plt.hist(v, bins=20)
</code></pre>
<p>For the given set of numbers, the above code calculates the fraction of the total distribution's values that are in each percentile bin.</p>
<p>The result:</p>
<p><a href="http://i.stack.imgur.com/YKOUG.png" rel="nofollow"><img src="http://i.stack.imgur.com/YKOUG.png" alt="enter image description here"></a></p>
<p>Uniform distributions should be near "perfect equality", so the Lorenz curve bending is off.</p>
1
2016-09-15T13:26:21Z
39,513,799
<p>This is to be expected. A random sample from a uniform distribution does not result in uniform values (i.e. values that are all relatively close to each other). With a little calculus, it can be shown that the <em>expected</em> value (in the statistical sense) of the Gini coefficient of a sample from the uniform distribution on [0, 1] is 1/3, so getting values around 1/3 for a given sample is reasonable.</p>
<p>You'll get a lower Gini coefficient with a sample such as <code>v = 10 + np.random.rand(500)</code>. Those values are all close to 10.5; the <em>relative</em> variation is lower than the sample <code>v = np.random.rand(500)</code>. In fact, the expected value of the Gini coefficient for the sample <code>base + np.random.rand(n)</code> is 1/(6*base + 3).</p>
<p>Here's a simple implementation of the Gini coefficient. It uses the fact that the Gini coefficient is half the <a href="https://en.wikipedia.org/wiki/Mean_absolute_difference#Relative_mean_absolute_difference" rel="nofollow">relative mean absolute difference</a>.</p>
<pre><code>def gini(x):
    # (Warning: This is a concise implementation, but it is O(n**2)
    # in time and memory, where n = len(x). *Don't* pass in huge
    # samples!)

    # Mean absolute difference
    mad = np.abs(np.subtract.outer(x, x)).mean()
    # Relative mean absolute difference
    rmad = mad/np.mean(x)
    # Gini coefficient
    g = 0.5 * rmad
    return g
</code></pre>
<p>Here's the Gini coefficient for several samples of the form <code>v = base + np.random.rand(500)</code>:</p>
<pre><code>In [80]: v = np.random.rand(500)

In [81]: gini(v)
Out[81]: 0.32760618249832563

In [82]: v = 1 + np.random.rand(500)

In [83]: gini(v)
Out[83]: 0.11121487509454202

In [84]: v = 10 + np.random.rand(500)

In [85]: gini(v)
Out[85]: 0.01567937753659053

In [86]: v = 100 + np.random.rand(500)

In [87]: gini(v)
Out[87]: 0.0016594595244509495
</code></pre>
1
2016-09-15T14:33:05Z
[ "python", "numpy", "statistics" ]
TensorFlow simple operations: tensors vs Python variables
39,512,276
<p>I'm unsure about the practical differences between the 4 variations below (they all evaluate to the same value). My understanding is that if I call <code>tf</code>, it <em>will</em> create an operation on the graph, and otherwise it <em>might</em>. If I don't create the <code>tf.constant()</code> at the beginning, I believe that the constants will be created implicitly when doing the addition; but for <code>tf.add(a,b)</code> vs <code>a + b</code> where <code>a</code> and <code>b</code> are both Tensors (#1 and #3), I can see no difference besides the default naming (former is <code>Add</code> and the latter one is <code>add</code>). Can anyone shed some light on the differences between those, and when should one use each?</p>
<pre><code>## 1
a = tf.constant(1)
b = tf.constant(1)
x = tf.add(a, b)
with tf.Session() as sess:
    x.eval()

## 2
a = 1
b = 1
x = tf.add(a, b)
with tf.Session() as sess:
    x.eval()

## 3
a = tf.constant(1)
b = tf.constant(1)
x = a + b
with tf.Session() as sess:
    x.eval()

## 4
a = 1
b = tf.constant(1)
x = a + b
with tf.Session() as sess:
    x.eval()
</code></pre>
3
2016-09-15T13:27:02Z
39,513,161
<p>They are all the same.</p>
<p>The Python <code>+</code> in <code>a + b</code> is captured by TensorFlow and actually does generate the same op as <code>tf.add(a, b)</code> does.</p>
<p><code>tf.constant</code> allows you more specifics, such as defining the shape, type and name of the created tensor. But again, TensorFlow owns that <code>a</code> in your example <code>a = 1</code>, and it is equivalent to <code>tf.constant(1)</code> (treating the constant as an int value in this case).</p>
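<p>This can be checked directly; a small sketch, assuming the TF 1.x graph API that was current when this was asked:</p>
<pre><code>import tensorflow as tf

a = tf.constant(1)
b = tf.constant(1)
x = a + b          # the overloaded + dispatches to the same kernel
y = tf.add(a, b)
print(x.op.type, y.op.type)  # both print: Add (only the op names differ)
</code></pre>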
1
2016-09-15T14:03:54Z
[ "python", "tensorflow" ]
TensorFlow simple operations: tensors vs Python variables
39,512,276
3
2016-09-15T13:27:02Z
39,513,596
<p>The result is the same because every operator (<code>add</code>, or <code>__add__</code>, which is the overload of <code>+</code>) calls <a href="https://www.tensorflow.org/versions/master/api_docs/python/framework.html#convert_to_tensor%60" rel="nofollow"><code>tf.convert_to_tensor</code></a> on its operands.</p>
<p>The difference between <code>tf.add(a, b)</code> and <code>a + b</code> is that the former gives you the ability to give a name to the operation with the <code>name</code> parameter. The latter does not give you this ability, and it also makes it possible that the computation is done by the Python interpreter itself, outside of the TensorFlow environment.</p>
<p>This happens if (and only if) neither <code>a</code> nor <code>b</code> is a <code>Tensor</code> object, in which case TensorFlow is not involved in the computation at all.</p>
1
2016-09-15T14:23:43Z
[ "python", "tensorflow" ]
TensorFlow simple operations: tensors vs Python variables
39,512,276
3
2016-09-15T13:27:02Z
39,513,635
<p>The four examples you gave will all give the same result, and generate the same graph (if you ignore that some of the operation names in the graph are different). TensorFlow will convert many different Python objects into <code>tf.Tensor</code> objects when they are passed as arguments to TensorFlow operators, such as <code>tf.add()</code> here. The <code>+</code> operator is just a simple wrapper on <code>tf.add()</code>, and the overload is used when either the left-hand or right-hand argument is a <code>tf.Tensor</code> (or <code>tf.Variable</code>).</p> <p>Given that you can just pass many Python objects to TensorFlow operators, why would you ever use <code>tf.constant()</code>? There are a few reasons:</p> <ul> <li><p>If you use the same Python object as the argument to multiple different operations, TensorFlow will convert it to a tensor multiple times, and represent each of those tensors in the graph. Therefore, if your Python object is a large NumPy array, you may run out of memory if you make too many copies of that array's data. In that case, you may wish to convert the array to a <code>tf.Tensor</code> once</p></li> <li><p>Creating a <code>tf.constant()</code> explicitly allows you to set its <code>name</code> property, which can be useful for TensorBoard debugging and graph visualization. (Note though that the default TensorFlow ops will attempt to give a meaningful name to each automatically converted tensor, based on the name of the op's argument.)</p></li> <li><p>Creating a <code>tf.constant()</code> explicitly allows you to set the exact element type of the tensor. TensorFlow will convert Python <code>int</code> objects to <code>tf.int32</code>, and <code>float</code> objects to <code>tf.float32</code>. If you want <code>tf.int64</code> or <code>tf.float64</code>, you can get this by passing the same value to <code>tf.constant()</code> and passing an explicit <code>dtype</code> argument.</p></li> <li><p>The <code>tf.constant()</code> function also offers a useful feature when creating large tensors with a repeated value:</p> <pre><code>c = tf.constant(17.0, shape=[1024, 1024], dtype=tf.float32) </code></pre> <p>The tensor <code>c</code> above represents 4 * 1024 * 1024 bytes of data, but TensorFlow will represent it compactly in the graph as a single float <code>17.0</code> plus shape information that indicates how it should be interpreted. If you have many large, filled constants in your graph, it can be more efficient to create them this way.</p></li> </ul>
2
2016-09-15T14:25:13Z
[ "python", "tensorflow" ]
Python 3 | Sorted, Max, Min Dictionary through a ZIP, Error
39,512,333
<p>I'm learning Python 3 and I'm trying to use zip to transform a dictionary into a zip object, so that I can use functions like sorted, max and min on it.</p> <p>Stocks is the dictionary, by the way. I tested it out like this, and it worked:</p> <pre><code>print(min(zip(Stocks.values(),Stocks.keys())))
print(max(zip(Stocks.values(),Stocks.keys())))
print(sorted(zip(Stocks.values(),Stocks.keys())))
</code></pre> <p>Then I tried to do this:</p> <pre><code>stock_zip = zip(Stocks.values(), Stocks.keys())
print(max(stock_zip))
print(min(stock_zip))
print(sorted(stock_zip))
</code></pre> <p>This was the console output, and I'm rather confused about why it happened:</p> <pre><code>(520, 'GOOG')
Traceback (most recent call last):
  File "----------------------------------------------------------------", line 11, in &lt;module&gt;
    print(min(stock_zip))
ValueError: min() arg is an empty sequence
</code></pre> <p>If anyone can explain why the second piece of code doesn't work, I would really appreciate it :)</p>
-1
2016-09-15T13:29:02Z
39,512,376
<p>In Python 3.x, <code>zip</code> returns an iterator; once you pass it to a function you have consumed it, so when you pass it to another function you're passing an empty iterator.</p> <pre><code>In [15]: a = zip(range(3), range(3))

In [16]: list(a)
Out[16]: [(0, 0), (1, 1), (2, 2)]

In [17]: list(a)
Out[17]: []
</code></pre>
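<p>A minimal fix for the code in the question is therefore to materialize the iterator into a list once, then reuse the list:</p> <pre><code>stock_zip = list(zip(Stocks.values(), Stocks.keys()))
print(max(stock_zip))
print(min(stock_zip))
print(sorted(stock_zip))
</code></pre>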
3
2016-09-15T13:30:50Z
[ "python", "python-3.x" ]
Python 3 | Sorted, Max, Min Dictionary through a ZIP, Error
39,512,333
<p>I'm learning Python 3 and I'm trying to use zip to transform a dictionary into a zip object, so that I can use functions like sorted, max and min on it.</p> <p>Stocks is the dictionary, by the way. I tested it out like this, and it worked:</p> <pre><code>print(min(zip(Stocks.values(),Stocks.keys())))
print(max(zip(Stocks.values(),Stocks.keys())))
print(sorted(zip(Stocks.values(),Stocks.keys())))
</code></pre> <p>Then I tried to do this:</p> <pre><code>stock_zip = zip(Stocks.values(), Stocks.keys())
print(max(stock_zip))
print(min(stock_zip))
print(sorted(stock_zip))
</code></pre> <p>This was the console output, and I'm rather confused about why it happened:</p> <pre><code>(520, 'GOOG')
Traceback (most recent call last):
  File "----------------------------------------------------------------", line 11, in &lt;module&gt;
    print(min(stock_zip))
ValueError: min() arg is an empty sequence
</code></pre> <p>If anyone can explain why the second piece of code doesn't work, I would really appreciate it :)</p>
-1
2016-09-15T13:29:02Z
39,512,403
<p><code>zip</code> returns an iterator.</p> <p>When you call <code>max(stock_zip)</code>, it iterates over and consumes the <code>stock_zip</code> iterator. By the time <code>min(stock_zip)</code> is called, <code>stock_zip</code> is totally consumed and is empty.</p> <p>Instead of saving a reference to the output of <code>zip(dict.keys(), dict.values())</code>, you can simply use <code>dict.items()</code>:</p> <pre><code>print(max(Stocks.items()))
print(min(Stocks.items()))
</code></pre>
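<p>One caveat worth noting: tuples from <code>items()</code> compare key-first, while the question's <code>zip(values, keys)</code> compared value-first. If the value-first ordering matters, a <code>key</code> function restores it; a quick sketch:</p> <pre><code>print(max(Stocks.items(), key=lambda kv: kv[1]))
print(min(Stocks.items(), key=lambda kv: kv[1]))
print(sorted(Stocks.items(), key=lambda kv: kv[1]))
</code></pre>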
1
2016-09-15T13:31:30Z
[ "python", "python-3.x" ]
How to import time column from snowflake to jupyter notebook dataframe?
39,512,396
<p>I need to import data from snowflake to Jupyter. In the dataset I have a time column which is derived from timestamp values. </p> <p>Every time I try to import the data, Jupyter says the process failed and below is the error message.</p> <p>How should I get around this issue?</p> <pre><code>ERROR:snowflake.connector.converter:Failed to convert: field T: TIME::76493.000000000 Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/snowflake/connector/converter.py", line 88, in to_python type_name=type_name)) AttributeError: 'SnowflakeConverter' object has no attribute '_TIME_to_python' ERROR:snowflake.connector.cursor:failed to convert row to python Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/snowflake/connector/cursor.py", line 658, in __row_to_python res += (self._connection.converter.to_python(col_desc, col),) File "/usr/local/lib/python2.7/site-packages/snowflake/connector/converter.py", line 88, in to_python type_name=type_name)) AttributeError: 'SnowflakeConverter' object has no attribute '_TIME_to_python' ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line string', (1, 0)) </code></pre>
1
2016-09-15T13:31:22Z
39,553,249
<p>Can you check the Python Connector version? The error indicates the TIME data type is not supported by your version of the Python Connector. The TIME data type has been supported since v1.0.6. As of today, the latest version is 1.2.8: <a href="https://pypi.python.org/pypi/snowflake-connector-python/" rel="nofollow">https://pypi.python.org/pypi/snowflake-connector-python/</a></p> <p>Here is an example of the TIME data type in a Jupyter notebook: <a href="https://gist.github.com/smtakeda/e401c80d71f2da4aa7452d238c5ccffa" rel="nofollow">https://gist.github.com/smtakeda/e401c80d71f2da4aa7452d238c5ccffa</a></p>
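<p>To check what is installed and upgrade it, plain pip commands should do (adjust for your environment):</p> <pre><code>pip show snowflake-connector-python
pip install --upgrade snowflake-connector-python
</code></pre>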
0
2016-09-18T01:29:56Z
[ "python", "jupyter", "snowflake-datawarehouse" ]
How to add queryset to ManyToMany relationship?
39,512,467
<p>I have the following models:</p> <pre><code>class EnMovielist(models.Model):
    content_ID = models.CharField(max_length=30)
    release_date = models.CharField(max_length=30)
    running_time = models.CharField(max_length=10)
    actress = models.CharField(max_length=300)
    series = models.CharField(max_length=30)
    studio = models.CharField(max_length=30, null=True)
    director = models.CharField(max_length=30)

    def __str__(self):
        return self.content_ID

class EnActress(models.Model):
    name = models.CharField(max_length=100, null=True)
    movielist = models.ManyToManyField(EnMovielist, related_name='movies')

    def __str__(self):
        return self.name
</code></pre> <p>I get an error when I try this in the Django shell:</p> <pre><code>b = EnActress.objects.values_list('name', flat=True)
a = EnMovielist.objects.filter(actress__contains=b).values_list('content_ID')
b.movielist.add(a)

AttributeError: 'QuerySet' object has no attribute 'movielist'
</code></pre> <p>How can I add a Django queryset into a many-to-many field? I have no idea why this is happening. Any help appreciated! :)</p>
0
2016-09-15T13:34:39Z
39,512,564
<p>You should call the m2m <code>add</code> method on a model instance, and the entity you add must also be a model instance. Otherwise your expression doesn't make sense.</p> <pre><code>b = EnActress.objects.get(pk=some_pk)   # get an instance, not a queryset
a = EnMovielist.objects.get(pk=some_pk) # also an instance
b.movielist.add(a)
</code></pre>
0
2016-09-15T13:38:46Z
[ "python", "django", "django-queryset", "manytomanyfield" ]
How to add queryset to ManyToMany relationship?
39,512,467
<p>I have the following models:</p> <pre><code>class EnMovielist(models.Model):
    content_ID = models.CharField(max_length=30)
    release_date = models.CharField(max_length=30)
    running_time = models.CharField(max_length=10)
    actress = models.CharField(max_length=300)
    series = models.CharField(max_length=30)
    studio = models.CharField(max_length=30, null=True)
    director = models.CharField(max_length=30)

    def __str__(self):
        return self.content_ID

class EnActress(models.Model):
    name = models.CharField(max_length=100, null=True)
    movielist = models.ManyToManyField(EnMovielist, related_name='movies')

    def __str__(self):
        return self.name
</code></pre> <p>I get an error when I try this in the Django shell:</p> <pre><code>b = EnActress.objects.values_list('name', flat=True)
a = EnMovielist.objects.filter(actress__contains=b).values_list('content_ID')
b.movielist.add(a)

AttributeError: 'QuerySet' object has no attribute 'movielist'
</code></pre> <p>How can I add a Django queryset into a many-to-many field? I have no idea why this is happening. Any help appreciated! :)</p>
0
2016-09-15T13:34:39Z
39,512,677
<p>You should not be using <code>values_list</code> if you intend to <code>add</code> a new relation afterwards. From the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#values-list" rel="nofollow">docs</a>:</p> <blockquote> <p><code>values()</code> and <code>values_list()</code> are both intended as optimizations for a specific use case: retrieving a subset of data <strong>without the overhead of creating a model instance</strong></p> </blockquote> <p>[<em>Emphasis mine</em>]</p> <p>It's hard to tell what you're up to without having a good description of what you want to achieve.</p>
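<p>If the intent is to link each actress to every movie whose <code>actress</code> field contains her name, a hedged sketch (this is one guess at the goal) would work with model instances instead of <code>values_list</code>:</p> <pre><code>for actress in EnActress.objects.all():
    movies = EnMovielist.objects.filter(actress__contains=actress.name)
    actress.movielist.add(*movies)
</code></pre>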
0
2016-09-15T13:43:46Z
[ "python", "django", "django-queryset", "manytomanyfield" ]
Django - Tango With Django upload picture
39,512,498
<p>I'm on chapter 9 in Tango With Django - creating user authentication. In the registration page I have the option of uploading a picture. In my admin file everything looks good after I register myself. I show up in the User Profiles, and it even shows the image I uploaded: <code>Picture: Currently: profile_images/earth.jpeg Clear</code>. However when I click on that picture this is the error message I get:</p> <pre><code>Page not found (404) Request Method: GET Request URL: http://localhost:8000/admin/rango/userprofile/1/change/profile_images/earth.jpeg/change/ Raised by: django.contrib.admin.options.change_view user profile object with primary key u'1/change/profile_images/earth.jpeg' does not exist. You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 404 page. </code></pre> <p>models.py:</p> <pre><code>from __future__ import unicode_literals from django.db import models from django.template.defaultfilters import slugify from django.contrib.auth.models import User class Category(models.Model): name = models.CharField(max_length=128, unique=True) views = models.IntegerField(default=0) likes = models.IntegerField(default=0) slug = models.SlugField(unique=True) def save(self, *args, **kwargs): self.slug = slugify(self.name) super(Category, self).save(*args, **kwargs) class Meta: verbose_name_plural = 'categories' def __str__(self): return self.name class Page(models.Model): category = models.ForeignKey(Category) title = models.CharField(max_length=128) url = models.URLField() views = models.IntegerField(default=0) def __str__(self): return self.title class UserProfile(models.Model): user = models.OneToOneField(User) website = models.URLField(blank=True) picture = models.ImageField(upload_to='profile_images', blank=True) def __str__(self): return self.user.username </code></pre> <p>views.py - only the register():</p> <pre><code>def register(request): registered = False if request.method == 'POST': user_form = UserForm(data=request.POST) profile_form = UserProfileForm(data=request.POST) if user_form.is_valid() and profile_form.is_valid(): user = user_form.save() user.set_password(user.password) user.save() profile = profile_form.save(commit=False) profile.user = user if 'picture' in request.FILES: profile.picture = request.FILES['picture'] profile.save() registered = True else: print user_form.errors, profile_form.errors else: user_form = UserForm() profile_form = UserProfileForm() return render(request, 'rango/register.html', {'user_form': user_form, 'profile_form': profile_form, 'registered': registered} ) </code></pre> <p>finally, my register.html file:</p> <pre><code>{% extends 'rango/base.html' %} {% load staticfiles %} {% block title_block %} Register {% endblock %} {% block body_block %} &lt;h1&gt;Register with Rango&lt;/h1&gt; {% if registered %} Rango says: &lt;strong&gt;thank you for registering!&lt;/strong&gt; &lt;a href="/rango/"&gt;Return to the homepage&lt;/a&gt;&lt;br/&gt; {% else %} Rango says: &lt;strong&gt;register here!&lt;/strong&gt; Click &lt;a href="/rango/"&gt;here&lt;/a&gt; to go to the homepage&lt;br/&gt; &lt;form id="user_form" method="post" action="/rango/register/" enctype="multipart/form-data"&gt; {% csrf_token %} {{ user_form.as_p }} {{ profile_form.as_p }} &lt;input type="submit" name="submit" value="Register" /&gt; &lt;/form&gt; {% endif %} {% endblock %} </code></pre>
0
2016-09-15T13:35:43Z
39,516,325
<blockquote> <p>user profile object with primary key u'1/change/profile_images/earth.jpeg' does not exist.</p> </blockquote> <p>It looks like one of your URL patterns may be off; it probably just wants to capture that <code>1</code> to use as the PK for a lookup, but instead is capturing <code>1/change/profile_images/earth.jpeg</code>. </p>
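<p>Purely as an illustration (the pattern and view name here are hypothetical, not taken from the question), the difference between a greedy and an anchored capture looks like this:</p> <pre><code># too greedy: .+ also swallows "1/change/profile_images/earth.jpeg"
url(r'^userprofile/(?P&lt;pk&gt;.+)/change/$', views.change_view),

# anchored: captures only the numeric pk
url(r'^userprofile/(?P&lt;pk&gt;[0-9]+)/change/$', views.change_view),
</code></pre>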
1
2016-09-15T16:44:19Z
[ "python", "django" ]
csv python file not saving in individual line
39,512,506
<ol> <li><p>The code works, but the URLs are all being saved on a single line rather than each on its own line.</p> <pre><code>from bs4 import BeautifulSoup
import urllib2
import requests
import csv
import re

page=requests.get("https://www.tutorialspoint.com/python/dictionary_values.htm")
data=BeautifulSoup(page.content)
csv1=open("123.csv","wb+")

for link in data.find_all('a'):
    print(link.get('href'))
    csv1.write(link.get('href'))

csv1.close()
</code></pre></li> <li><p>Thanks in advance</p></li> </ol>
0
2016-09-15T13:35:58Z
39,516,448
<p>Instead of raw <code>csv1.write</code>, use the <code>csv.writer</code> module:</p> <pre><code>from bs4 import BeautifulSoup
import urllib2
import requests
import csv
import re

page=requests.get("https://www.tutorialspoint.com/python/dictionary_values.htm")
data=BeautifulSoup(page.content)
csv1=open("123.csv","wb+")
csv1writer = csv.writer(csv1)

for link in data.find_all('a'):
    print(link.get('href'))
    csv1writer.writerow([link.get('href')])

csv1.close()
</code></pre>
0
2016-09-15T16:51:33Z
[ "python", "csv", "url", "beautifulsoup" ]
Regexp: some questions
39,512,633
<p><strong>Python 3</strong></p> <pre><code>text = "(CNN)Meaalofa Te'o -- Buemi. Canberra,"

def discard_punctuation(text):
    regex = '\W*^\s^\d*-'
    return re.sub(regex, "", text)

def handle_text(text):
    text_without_punctuation = discard_punctuation(text)
    words_array = text_without_punctuation.split()
    pass  // Breakpoint

handle_text(text)
</code></pre> <p>From an arbitrary text <strong>I want to select words only.</strong> Investigating the problem, I have discovered that sometimes a hyphen occurs inside a word, and a number may as well (9-year-old, canyon-like).</p> <p>My regex is regex = '\W*^\s^\d*-'.</p> <p>The idea: take all non-alphanumeric characters; exclude all whitespace characters, which are necessary for the split method; exclude all numbers that are not followed by a hyphen.</p> <p>I should also exclude hyphens that are not inside words.</p> <p>The result is: ['(CNN)Meaalofa', "Te'o", '--', 'Buemi.', 'Canberra,']</p> <p>The documentation: <a href="https://docs.python.org/3/howto/regex.html" rel="nofollow">https://docs.python.org/3/howto/regex.html</a></p> <pre><code>\W
Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_].
</code></pre> <p>I thought that periods, commas, hyphens, brackets and apostrophes should match \W.</p> <p>Questions: <strong>1. I can't understand why brackets, periods, commas and the apostrophe are still present.</strong></p> <ol start="2"> <li><p>I would say that I excluded the apostrophe. I need it, and it is present in the result, which is OK. But I can't see how it got there. <strong>Could you help me understand how the apostrophe ended up in the result?</strong></p></li> <li><p><strong>Well, "--" is definitely an error here. How do I cope with it?</strong></p></li> <li><p>Could you please suggest a better regexp?</p></li> </ol>
0
2016-09-15T13:41:43Z
39,513,184
<p>With your rather vague definition of a "word", you could come up with:</p> <pre><code>import re

rx = re.compile(r'\s*(\S+)\s*')

string = """(CNN)Meaalofa Te'o -- Buemi. Canberra,"""

words = rx.findall(string)
print(words)
# ['(CNN)Meaalofa', "Te'o", '--', 'Buemi.', 'Canberra,']
</code></pre> <p>See <a href="http://ideone.com/WB9QmL" rel="nofollow"><strong>a demo on ideone.com</strong></a> and on <a href="https://regex101.com/r/zH2bQ6/1" rel="nofollow"><strong>regex101.com</strong></a>. You might redefine what a "word" is.</p>
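<p>If "word" should mean runs of letters/digits that may contain internal hyphens or apostrophes (so <code>Te'o</code> and <code>9-year-old</code> survive but a bare <code>--</code> and trailing punctuation do not), one tighter definition is the following sketch:</p> <pre><code>rx = re.compile(r"\w+(?:['-]\w+)*")
rx.findall("(CNN)Meaalofa Te'o -- 9-year-old Buemi. Canberra,")
# ['CNN', 'Meaalofa', "Te'o", '9-year-old', 'Buemi', 'Canberra']
</code></pre>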
1
2016-09-15T14:05:04Z
[ "python", "regex", "python-3.x" ]
many2many fields both have the same values?
39,512,749
<p>I have this code:</p> <p>In the .py file:</p> <pre><code>class newsaleorderline(models.Model):
    _inherit='sale.order.line'

    supply_tax_id = fields.Many2many('account.tax',string='Supply Taxes',domain=['|', ('active', '=', False), ('active', '=', True)])
    labour_tax_id = fields.Many2many('account.tax',string='Labour Taxes',domain=['|', ('active', '=', False), ('active', '=', True)])
</code></pre> <p>In the .xml file:</p> <pre><code>&lt;field name="supply_tax_id" widget="many2many_tags" domain="[('type_tax_use','=','sale'),('company_id','=',parent.company_id)]" attrs="{'readonly': [('qty_invoiced', '&amp;gt;', 0)]}"/&gt;
&lt;field name="labour_tax_id" widget="many2many_tags" domain="[('type_tax_use','=','sale'),('company_id','=',parent.company_id)]" attrs="{'readonly': [('qty_invoiced', '&amp;gt;', 0)]}"/&gt;
</code></pre> <p>While editing, changing <code>supply_tax_id</code> works, but after saving, <code>supply_tax_id</code> and <code>labour_tax_id</code> both contain the same values. I don't know how they are interconnected. I want <code>supply_tax_id</code> and <code>labour_tax_id</code> to hold different values, with both fields coming from <code>account.tax</code>.</p> <p>Please help me find a solution to this problem. Thanks in advance for any suggestions.</p>
2
2016-09-15T13:46:53Z
39,515,895
<p>Odoo generates relational tables in your database. You can choose a table name yourself in the field definition:</p> <pre class="lang-py prettyprint-override"><code>class MyModel(models.Model):
    _name = "my.model"

    my_m2m_field = fields.Many2many(
        comodel_name="another.model",           # required
        relation="my_model_another_model_rel",  # optional
        column1="my_model_id",                  # optional
        column2="another_model_id",             # optional
        string="Another Records"                # optional
    )
</code></pre> <p>Your example isn't using the <code>relation</code> parameter in the field definition, so Odoo generates a name by itself. It uses both model (table) names and adds an "_rel" suffix:</p> <p><code>sale_order_line_account_tax_rel</code></p> <p>The problem here: you're using the same two models on two different Many2many fields, so both fields end up in one relational table. When using the fields, both will therefore represent the same data in the client.</p> <p><strong>Solution</strong>: Use the parameter <code>relation</code> and define two different names for the relation tables.</p>
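<p>Applied to the two fields from the question, that would look something like the sketch below (the relation table names are arbitrary; they just have to differ from each other):</p> <pre><code>supply_tax_id = fields.Many2many(
    'account.tax',
    relation='sale_order_line_supply_tax_rel',
    string='Supply Taxes',
    domain=['|', ('active', '=', False), ('active', '=', True)])
labour_tax_id = fields.Many2many(
    'account.tax',
    relation='sale_order_line_labour_tax_rel',
    string='Labour Taxes',
    domain=['|', ('active', '=', False), ('active', '=', True)])
</code></pre>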
3
2016-09-15T16:18:24Z
[ "python", "postgresql", "openerp" ]
Enums with pointers in struct definitions
39,512,841
<p>I am working on creating access to a dynamic library in Python using the ctypes module. While duplicating some of the typedef'd structures in my Python implementation, I came across a bit of code that has me stumped as to what is happening. Basically what I have is</p> <pre><code>enum foo {
    a,
    b,
    c,
};

typedef struct barStruct bar;
struct barStruct {
    enum foo (*lem)();
    enum foo (*baz)(bar *next);
};
</code></pre> <p>in a header file.</p> <p>I am trying to understand the two enum calls in the struct definition. I thought that the first one was creating a pointer called <code>lem</code> which has to point to values contained in the enumeration (so I essentially ignored the open/close parentheses at the end). When I noticed the second one, with another pointer in the parentheses, I got really confused as to what is going on, and doubted my guess as to what the first one meant.</p> <p>I apologize if this question already has an answer (which I'm sure it does), but unfortunately I cannot think of how to effectively search for what I need.</p>
1
2016-09-15T13:51:01Z
39,512,925
<p>Those aren't "enum calls", those are declaring two members in the structure, members that are pointers to functions.</p> <p>For example</p> <pre><code>enum foo (*lem)(); </code></pre> <p>declares a structure member variable <code>lem</code> that is a pointer to a function taking an indeterminate number of arguments, and <em>returns</em> a <code>foo</code> enumeration.</p>
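<p>Since the question is ctypes-oriented: here is a minimal sketch of mirroring that struct in Python (the Python names are mine; at the ABI level an enum is just an int):</p> <pre><code>import ctypes

class Bar(ctypes.Structure):
    pass  # forward declaration, so baz can take a pointer to Bar

LemFunc = ctypes.CFUNCTYPE(ctypes.c_int)                       # enum foo (*)()
BazFunc = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.POINTER(Bar))  # enum foo (*)(bar *)

Bar._fields_ = [("lem", LemFunc),
                ("baz", BazFunc)]
</code></pre>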
3
2016-09-15T13:54:25Z
[ "python", "c", "pointers", "enums", "ctypes" ]
sorting by dictionary value in array python
39,512,942
<p>Okay, so I've been working on processing some annotated text output. What I have so far is a dictionary with the annotation as key and an array of elements as value:</p> <pre><code>{'Adenotonsillectomy': ['0', '18', '1869', '1716'],
 'OSAS': ['57', '61'],
 'apnea': ['41', '46'],
 'can': ['94', '97', '1796', '1746'],
 'deleterious': ['103', '114'],
 'effects': ['122', '129', '1806', '1752'],
 'for': ['19', '22'],
 'gain': ['82', '86', '1776', '1734'],
 'have': ['98', '102', ['1776 1786 1796 1806 1816'], '1702'],
 'health': ['115', '121'],
 'lead': ['67', '71', ['1869 1879 1889'], '1695'],
 'leading': ['135', '142', ['1842 1852'], '1709'],
 'may': ['63', '66', '1879', '1722'],
 'obesity': ['146', '153'],
 'obstructive': ['23', '34'],
 'sleep': ['35', '40'],
 'syndrome': ['47', '55'],
 'to': ['143', '145', '1852', '1770'],
 'weight': ['75', '81'],
 'when': ['130', '134', '1842', '1758'],
 'which': ['88', '93', '1786', '1740']}
</code></pre> <p>What I want to do is sort this by the first element in the array and reorder the dict as:</p> <pre><code>'Adenotonsillectomy': ['0', '18', '1869', '1716']
'for': ['19', '22'],
'obstructive': ['23', '34'],
'sleep': ['35', '40'],
'apnea': ['41', '46'],
etc...
</code></pre> <p>Right now I've tried to sort by value:</p> <pre><code>sorted(dependency_dict.items(), key=lambda x: x[1][0])
</code></pre> <p>However, the output I'm getting is still incorrect:</p> <pre><code>[('Adenotonsillectomy', ['0', '18', '1869', '1716']),
 ('deleterious', ['103', '114']),
 ('health', ['115', '121']),
 ('effects', ['122', '129', '1806', '1752']),
 ('when', ['130', '134', '1842', '1758']),
 ('leading', ['135', '142', ['1842 1852'], '1709']),
 ('to', ['143', '145', '1852', '1770']),
 ('obesity', ['146', '153']),
 ('for', ['19', '22']),
 ('obstructive', ['23', '34']),
 ('sleep', ['35', '40']),
 ('apnea', ['41', '46']),
 ('syndrome', ['47', '55']),
 ('OSAS', ['57', '61']),
 ('may', ['63', '66', '1879', '1722']),
 ('lead', ['67', '71', ['1869 1879 1889'], '1695']),
 ('weight', ['75', '81']),
 ('gain', ['82', '86', '1776', '1734']),
 ('which', ['88', '93', '1786', '1740']),
 ('can', ['94', '97', '1796', '1746']),
 ('have', ['98', '102', ['1776 1786 1796 1806 1816'], '1702'])]
</code></pre> <p>I'm not sure what's going wrong. Any help is appreciated.</p>
1
2016-09-15T13:55:12Z
39,512,969
<p>The entries are sorted in alphabetical (string) order, which is why '103' sorts before '19'. If you want to sort them on integer value, convert the value to int first:</p> <pre><code>sorted(dependency_dict.items(), key=lambda x: int(x[1][0]))
</code></pre>
5
2016-09-15T13:56:34Z
[ "python", "arrays", "sorting", "dictionary" ]
How to return array list as a python object?
39,513,084
<p>I have a method which sends a message to Facebook using the <a href="https://developers.facebook.com/docs/messenger-platform/send-api-reference/generic-template" rel="nofollow">Generic template</a>. My code:</p> <pre><code>def send_receipt(self, fbid, title, url, img_url, summary):
    return self._send(message={
        "recipient": {
            "id": fbid
        },
        "message": {
            "attachment": {
                "type": "template",
                "payload": {
                    "template_type": "generic",
                    "elements": [
                        {
                            "title": title,
                            "item_url": url,
                            "image_url": img_url,
                            "subtitle": summary
                        }
                    ]
                }
            }
        }
    })
</code></pre> <p>It works fine for me, but it only sends 1 element. I want to send 2 or 3 elements built from JSON, so I think I can make it work by building the elements list separately and passing it in:</p> <pre><code>def send_receipt(self, fbid, elements):
    return self._send(message={
        "recipient": {
            "id": fbid
        },
        "message": {
            "attachment": {
                "type": "template",
                "payload": {
                    "template_type": "generic",
                    "elements": elements
                }
            }
        }
    })
</code></pre> <p>I did write a method to return the elements, but I'm new to Python, and what I have done didn't work for me:</p> <pre><code>elements = [{
    "title": title,
    "item_url": url,
    "image_url": img_url,
    "subtitle": summary
}]
</code></pre>
0
2016-09-15T14:01:06Z
39,514,471
<p>What I did was write a method that converts the products into a list of elements (the product dict keys below reflect my data; adjust them to yours):</p> <pre><code>def build_elements(products):
    elements = []
    for index, product in enumerate(products):
        element = {'title': product['title'],
                   'subtitle': product['summary'],
                   'item_url': product['item_url']}
        # not every product has image_url, so guard against a KeyError
        if 'image_url' in product:
            element['image_url'] = product['image_url']
        elements.append(element)
        # the Facebook API limits a generic template to 10 elements
        if index == 9:
            break
    return elements
</code></pre>
0
2016-09-15T15:05:45Z
[ "python", "arrays", "json", "list" ]
percentage completion of a long-running python task
39,513,085
<p>I have a Python program that crunches a large dataset using Pandas. It currently takes about 15 minutes to complete. I want to log the progress of the task (to stdout, and send the metric to Datadog). Is there a way to get the %-complete of the task (or of a function)? In the future I might be dealing with larger datasets. The Python task that I am doing is a simple grouping of a large pandas data frame. Something like this:</p> <pre><code>dfDict = {}
for cat in categoryList:
    df1 = df[df['category'] == cat]
    if len(df1.index) &gt; 0:
        df1[dateCol] = pd.to_datetime(df1[dateCol])
        dfDict[cat] = df1
</code></pre> <p>Here, the categoryList has about 20000 items, and df is a large data frame with (say) 5 million rows.</p> <p>I am not looking for anything fancy (like progress bars); just a percentage-complete value. Any ideas?</p> <p>Thanks!</p>
0
2016-09-15T14:01:07Z
39,513,221
<p>You can modify the following according to your needs.</p> <pre><code>from time import sleep

for i in range(12):
    sleep(1)
    print("\r\t&gt; Progress\t:{:.2%}".format((i + 1)/12), end='')
</code></pre> <p>What this basically does is prevent <code>print()</code> from writing the default end character (<code>end=''</code>) while, at the same time, writing a carriage return (<code>'\r'</code>) before anything else. In simple terms, you are overwriting the previous <code>print()</code> output.</p>
0
2016-09-15T14:06:29Z
[ "python" ]
percentage completion of a long-running python task
39,513,085
<p>I have a Python program that crunches a large dataset using Pandas. It currently takes about 15 minutes to complete. I want to log the progress of the task (to stdout, and send the metric to Datadog). Is there a way to get the %-complete of the task (or of a function)? In the future I might be dealing with larger datasets. The Python task that I am doing is a simple grouping of a large pandas data frame. Something like this:</p> <pre><code>dfDict = {}
for cat in categoryList:
    df1 = df[df['category'] == cat]
    if len(df1.index) &gt; 0:
        df1[dateCol] = pd.to_datetime(df1[dateCol])
        dfDict[cat] = df1
</code></pre> <p>Here, the categoryList has about 20000 items, and df is a large data frame with (say) 5 million rows.</p> <p>I am not looking for anything fancy (like progress bars); just a percentage-complete value. Any ideas?</p> <p>Thanks!</p>
0
2016-09-15T14:01:07Z
39,513,348
<p>The naive solution would be to just use the total number of rows in your dataset and the index you are at, then calculate the progress:</p> <pre><code>size = len(dataset)
for index, element in enumerate(dataset):
    print(index / size * 100)
</code></pre> <p>This will only be somewhat reliable if every row takes around the same time to complete. Because you have a large dataset, it might average out over time, but if some rows take a millisecond and another takes 10 minutes, the percentage will be garbage.</p> <p>Also consider rounding the percentage to one decimal:</p> <pre><code>size = len(dataset)
for index, element in enumerate(dataset):
    print(round(index / size * 100, 1))
</code></pre> <p>Printing for every row might slow your task down significantly, so consider this improvement:</p> <pre><code>size = len(dataset)
percentage = 0
for index, element in enumerate(dataset):
    new_percentage = round(index / size * 100, 1)
    if percentage != new_percentage:
        percentage = new_percentage
        print(percentage)
</code></pre> <p>There are, of course, also modules for this:</p> <p><a href="https://pypi.python.org/pypi/progressbar" rel="nofollow">progressbar</a></p> <p><a href="https://pypi.python.org/pypi/progress" rel="nofollow">progress</a></p>
0
2016-09-15T14:12:09Z
[ "python" ]
Python operator overloading with multiple operands
39,513,191
<p>I know I can do simple operator overloading in Python in the following way.</p> <p>Let's say I'm overloading the '+' operator:</p> <pre><code>class A(object):
    def __init__(self,value):
        self.value = value

    def __add__(self,other):
        return self.value + other.value

a1 = A(10)
a2 = A(20)
print a1 + a2
</code></pre> <p>But it fails when I try to do the following,</p> <pre><code>a1 = A(10)
a2 = A(20)
a3 = A(30)
print a1 + a2 + a3
</code></pre> <p>since <code>__add__</code> accepts only 2 parameters. What is the best solution to achieve operator overloading with n operands?</p>
1
2016-09-15T14:05:25Z
39,513,296
<p>This is failing because <code>a1 + a2</code> returns an <code>int</code> instance, and it is the int's <code>__add__</code> that is then called, which doesn't support addition with the custom class <code>A</code>; you could return an <code>A</code> instance from <code>__add__</code> to eliminate the exception for this specific operation:</p> <pre><code>class A(object):
    def __init__(self,value):
        self.value = value

    def __add__(self,other):
        return type(self)(self.value + other.value)
</code></pre> <p>Adding them together now behaves in the expected way:</p> <pre><code>&gt;&gt;&gt; a1 = A(10)
&gt;&gt;&gt; a2 = A(20)
&gt;&gt;&gt; a3 = A(30)
&gt;&gt;&gt; print(a1 + a2 + a3)
&lt;__main__.A object at 0x7f2acd7d25c0&gt;
&gt;&gt;&gt; print((a1 + a2 + a3).value)
60
</code></pre> <p>This class of course suffers from the same issue with other operations; you need to implement the other dunders to return an instance of your class, or else you'll bump into the same result with other operations.</p> <p>If you want a nice result displayed when you <code>print</code> these objects, you should also implement <code>__str__</code> to return the value when called:</p> <pre><code>class A(object):
    def __init__(self,value):
        self.value = value

    def __add__(self,other):
        return A(self.value + other.value)

    def __str__(self):
        return "{}".format(self.value)
</code></pre> <p>Now printing has the effect you need:</p> <pre><code>&gt;&gt;&gt; print(a1 + a2 + a3)
60
</code></pre>
2
2016-09-15T14:10:04Z
[ "python", "python-2.7", "python-3.x", "python-2.x" ]
Python operator overloading with multiple operands
39,513,191
<p>I know I can do simple operator overloading in Python in the following way.</p> <p>Let's say I'm overloading the '+' operator:</p> <pre><code>class A(object):
    def __init__(self,value):
        self.value = value

    def __add__(self,other):
        return self.value + other.value

a1 = A(10)
a2 = A(20)
print a1 + a2
</code></pre> <p>But it fails when I try to do the following,</p> <pre><code>a1 = A(10)
a2 = A(20)
a3 = A(30)
print a1 + a2 + a3
</code></pre> <p>since <code>__add__</code> accepts only 2 parameters. What is the best solution to achieve operator overloading with n operands?</p>
1
2016-09-15T14:05:25Z
39,513,302
<p>The problem is that <code>a1 + a2 + a3</code> evaluates as <code>(a1 + a2) + a3</code>, i.e. <code>30 + a3</code>.</p> <p>30 is an int, and ints do not know how to add themselves to an <code>A</code>.</p> <p>You should return an instance of <code>A</code> from your <code>__add__</code> method.</p>
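<p>A minimal sketch of that change:</p> <pre><code>def __add__(self, other):
    return A(self.value + other.value)
</code></pre>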
0
2016-09-15T14:10:21Z
[ "python", "python-2.7", "python-3.x", "python-2.x" ]
Getting invalid syntax
39,513,232
<p>I'm getting the error below when I run my Python script.</p> <pre><code>  File "supreme.py", line 24
    print UTCtoEST(),':: Parsing page...'
        ^
SyntaxError: invalid syntax
</code></pre> <p>Here is the relevant part of the script:</p> <pre><code>import sys, json, time, requests, urllib2
from datetime import datetime

qty='1'

def UTCtoEST():
    current=datetime.now()
    return str(current) + ' EST'

print
poll=raw_input("Polling interval? ")
poll=int(poll)
keyword=raw_input("Product name? ").title()  # hardwire here by declaring keyword as a string
color=raw_input("Color? ").title()           # hardwire here by declaring keyword as a string
sz=raw_input("Size? ").title()               # hardwire here by declaring keyword as a string
print
print UTCtoEST(),':: Parsing page...'

def main():.....
</code></pre> <p>Any fix for this? I need help.</p> <p>Thanks in advance.</p>
-3
2016-09-15T14:06:43Z
39,513,766
<p>It seems your issue here is not the code, but the version of Python you are running it with. Your code is written in Python 2.7, but you are running it with Python 3.5.</p> <p>Option one: run it with Python 2.7.</p> <p>Option two: change the code...</p> <pre><code># imports ^

qty='1'

def UTCtoEST():
    current=datetime.now()
    return str(current) + ' EST'

print()
poll=input("Polling interval? ")
poll=int(poll)
keyword=input("Product name? ").title()
color=input("Color? ").title()
sz=input("Size? ").title()
print()
print(UTCtoEST(), ':: Parsing page...')
</code></pre>
1
2016-09-15T14:31:45Z
[ "python", "python-3.x" ]
Appending a list into a list of lists in python
39,513,256
<p>I am trying to create a list of lists and append to the inner lists by index, but appending to any one element is automatically applied to all the lists inside it. First of all, I have a list like this:</p> <pre><code>sygma_list = [[]] * 3
</code></pre> <p>and I have other lists of the form</p> <pre><code>mts_columns1 = [[1,2,3], [4,5,6], [6,7,8]]
mts_columns2 = [[1,2,3], [4,5,6], [6,7,8]]
</code></pre> <p>When looping over sygma_list like so:</p> <pre><code>for i in range(0, 3):
    sygma_list[i].append(mts_column[i])
</code></pre> <p>the results are quite shocking: append() acts on every element of the list, instead of producing the final result I expect:</p> <pre><code>sygma_list = [[[1,2,3], [1,2,3]], [[4,5,6],[4,5,6]], [[6,7,8],[6,7,8]]]
</code></pre>
0
2016-09-15T14:08:09Z
39,513,304
<p>The biggest catch in your code is this: when you do</p> <pre><code>sygma_list = [[]] * 3
</code></pre> <p>you create a size-3 list of references to the same inner list, which is not what you generally want, and certainly not here.</p> <p>Do this instead:</p> <pre><code>sygma_list = [list() for x in range(3)]
</code></pre> <p>That will create 3 distinct lists.</p> <p>(This construct is OK for immutable objects, like <code>[0]*3</code> or <code>["foo"]*4</code>.)</p> <p>Let's consider this fixed code:</p> <pre><code>sygma_list = [list() for _ in range(3)]
mts_column = [[1,2,3], [4,5,6], [6,7,8]]

for i in range(0, 3):
    sygma_list[i].append(mts_column[i])

print(sygma_list)
</code></pre> <p>which yields:</p> <pre><code>[[[1, 2, 3]], [[4, 5, 6]], [[6, 7, 8]]]
</code></pre> <p>BTW: I'm not sure if you want to append or extend your list (flatten it or not);</p> <pre><code>sygma_list[i].extend(mts_column[i])
</code></pre> <p>would make a list of lists instead of a list of lists of lists.</p>
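<p>A two-line demonstration of the aliasing pitfall:</p> <pre><code>shared = [[]] * 3
shared[0].append('x')
print(shared)  # [['x'], ['x'], ['x']] -- all three slots are the same list
</code></pre>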
3
2016-09-15T14:10:27Z
[ "python", "list", "append" ]
Appending a list into a list of lists in python
39,513,256
<p>I am trying to create a list of lists and append to the inner lists by index, but appending to any one element is automatically applied to all the lists inside it. First of all, I have a list like this:</p> <pre><code>sygma_list = [[]] * 3
</code></pre> <p>and I have other lists of the form</p> <pre><code>mts_columns1 = [[1,2,3], [4,5,6], [6,7,8]]
mts_columns2 = [[1,2,3], [4,5,6], [6,7,8]]
</code></pre> <p>When looping over sygma_list like so:</p> <pre><code>for i in range(0, 3):
    sygma_list[i].append(mts_column[i])
</code></pre> <p>the results are quite shocking: append() acts on every element of the list, instead of producing the final result I expect:</p> <pre><code>sygma_list = [[[1,2,3], [1,2,3]], [[4,5,6],[4,5,6]], [[6,7,8],[6,7,8]]]
</code></pre>
0
2016-09-15T14:08:09Z
39,519,792
<p>If all the <code>mts_columns</code> lists contain the same sublists as in your post, then you could use a list comprehension like so:</p> <pre><code>sygma_list = [ [i]*2 for i in mts_columns]
</code></pre> <p>In this case <code>sygma_list</code> doesn't have to be initialized as in your post (you could simply do <code>sygma_list = []</code>, but you don't have to), as it will be created automatically, and it will give you the following output:</p> <pre><code>[[[1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6]], [[6, 7, 8], [6, 7, 8]]]
</code></pre> <p>Of course you can change <strong>2</strong> to whatever number you like (2 is the number of <code>mts_columns</code> lists we choose for the creation of <code>sygma_list</code>).</p> <p>Now if the <code>mts_columns</code> lists do not contain the same sublists, i.e. you have <em>mts_columns1, mts_columns2, ...</em> lists and each of them contains different sublists from the others, then you could also use a list comprehension like:</p> <pre><code>sygma_list = [ [i,j] for i,j in zip(mts_columns1,mts_columns2)]
</code></pre>
0
2016-09-15T20:26:58Z
[ "python", "list", "append" ]
My if statement keeps returning 'None' for empty list
39,513,310
<p>I'm a beginner at coding in Python and I've been practising with exercises on CodeWars.</p> <p>There's an exercise which basically wants you to recreate the display function of the "likes" on Facebook, i.e. how it shows the number of likes you have on a post etc.</p> <p>Here is my code:</p> <pre><code>def likes(names):
    for name in names:
        if len(names) == 0:
            return 'no one likes this'
        elif len(names) == 1:
            return '%s likes this' % (name)
        elif len(names) == 2:
            return '%s and %s like this' % (names[0], names[1])
        elif len(names) == 3:
            return '%s, %s and %s like this' % (names[0], names[1], names[2])
        elif len(names) &gt;= 4:
            return '%s, %s and %s others like this' % (names[0], names[1], len(names) - 2)

print likes([])
print likes(['Peter'])
print likes(['Alex', 'Jacob', 'Mark', 'Max'])
</code></pre> <p>This prints out:</p> <pre><code>None
Peter likes this
Alex, Jacob and 2 others like this
</code></pre> <p>My main issue here is that my first 'if' statement is not producing the string 'no one likes this' when the argument [] is empty. Is there a way around this problem?</p>
2
2016-09-15T14:10:49Z
39,513,375
<p>If <code>names</code> is an empty list the <code>for</code> loop won't be executed at all, which will cause the function to return <code>None</code>. You should change the structure of your function (hint: you might not even need a loop, not an explicit one at least). There is no point in having a loop and then <code>return</code> on the very first iteration.</p>
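<p>One possible restructuring along those lines: check the length directly, with no loop at all:</p> <pre><code>def likes(names):
    n = len(names)
    if n == 0:
        return 'no one likes this'
    elif n == 1:
        return '%s likes this' % names[0]
    elif n == 2:
        return '%s and %s like this' % (names[0], names[1])
    elif n == 3:
        return '%s, %s and %s like this' % (names[0], names[1], names[2])
    return '%s, %s and %s others like this' % (names[0], names[1], n - 2)
</code></pre>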
7
2016-09-15T14:13:06Z
[ "python", "list", "function", "if-statement" ]
My if statement keeps returning 'None' for empty list
39,513,310
<p>I'm a beginner at coding in Python and I've been practising with exercises on CodeWars.</p> <p>There's an exercise which basically wants you to recreate the display function of the "likes" on Facebook, i.e. how it shows the number of likes you have on a post etc.</p> <p>Here is my code:</p> <pre><code>def likes(names):
    for name in names:
        if len(names) == 0:
            return 'no one likes this'
        elif len(names) == 1:
            return '%s likes this' % (name)
        elif len(names) == 2:
            return '%s and %s like this' % (names[0], names[1])
        elif len(names) == 3:
            return '%s, %s and %s like this' % (names[0], names[1], names[2])
        elif len(names) &gt;= 4:
            return '%s, %s and %s others like this' % (names[0], names[1], len(names) - 2)

print likes([])
print likes(['Peter'])
print likes(['Alex', 'Jacob', 'Mark', 'Max'])
</code></pre> <p>This prints out:</p> <pre><code>None
Peter likes this
Alex, Jacob and 2 others like this
</code></pre> <p>My main issue here is that my first 'if' statement is not producing the string 'no one likes this' when the argument [] is empty. Is there a way around this problem?</p>
2
2016-09-15T14:10:49Z
39,513,453
<p>The for loop you wrote runs once for each element of the list, but you have no elements in the list, so the loop body never runs and the return value is <code>None</code>.</p> <pre><code>def likes(names):
    #for name in names: # LOOK HERE: you definitely do not need this loop
    if len(names) == 0:
        return 'no one likes this'
    elif len(names) == 1:
        return '%s likes this' % (names[0])
    elif len(names) == 2:
        return '%s and %s like this' % (names[0], names[1])
    elif len(names) == 3:
        return '%s, %s and %s like this' % (names[0], names[1], names[2])
    elif len(names) &gt;= 4:
        return '%s, %s and %s others like this' % (names[0], names[1], len(names) - 2)
</code></pre>
4
2016-09-15T14:17:15Z
[ "python", "list", "function", "if-statement" ]
Python DataFrame Transform Rows to Column
39,513,332
<p>I have a csv that stores an amount per device, date and category:</p> <pre><code>Device  Date        Category  Amount
Pen     01/01/2014  A         12
Pen     01/01/2014  B         42
Pen     01/01/2014  C         10
Pen     01/01/2014  D         5
Pen     02/01/2014  A         7
Pen     02/01/2014  B         52
Pen     02/01/2014  C         1
Pen     02/01/2014  D         7
Pencil  01/01/2014  A         22
Pencil  01/01/2014  B         42
Pencil  01/01/2014  C         70
Pencil  01/01/2014  D         8
</code></pre> <p>I want to read it into a DataFrame and make each Category a column, so that the amounts for a specific device and date form a single row. This will make the dataset much smaller.</p> <pre><code>Device  Date        A   B   C   D
Pen     01/01/2014  12  42  10  5
Pen     02/01/2014  7   52  1   7
Pencil  01/01/2014  22  42  70  8
</code></pre>
1
2016-09-15T14:11:39Z
39,513,425
<p>You can use <code>pivot_table</code>, where columns that you want to keep are set as <code>index</code>, columns that go to the header are set as <code>columns</code>, and columns that fill the cells in the output data frame are set as <code>values</code>:</p> <pre><code>df.pivot_table(index=['Device', 'Date'], columns='Category', values='Amount').reset_index()

# Category  Device        Date   A   B   C  D
# 0            Pen  01/01/2014  12  42  10  5
# 1            Pen  02/01/2014   7  52   1  7
# 2         Pencil  01/01/2014  22  42  70  8
</code></pre>
5
2016-09-15T14:15:49Z
[ "python", "pandas", "dataframe" ]
Python DataFrame Transform Rows to Column
39,513,332
<p>I have a csv that stores an amount per device, date and category:</p> <pre><code>Device  Date        Category  Amount
Pen     01/01/2014  A         12
Pen     01/01/2014  B         42
Pen     01/01/2014  C         10
Pen     01/01/2014  D         5
Pen     02/01/2014  A         7
Pen     02/01/2014  B         52
Pen     02/01/2014  C         1
Pen     02/01/2014  D         7
Pencil  01/01/2014  A         22
Pencil  01/01/2014  B         42
Pencil  01/01/2014  C         70
Pencil  01/01/2014  D         8
</code></pre> <p>I want to read it into a DataFrame and make each Category a column, so that the amounts for a specific device and date form a single row. This will make the dataset much smaller.</p> <pre><code>Device  Date        A   B   C   D
Pen     01/01/2014  12  42  10  5
Pen     02/01/2014  7   52  1   7
Pencil  01/01/2014  22  42  70  8
</code></pre>
1
2016-09-15T14:11:39Z
39,514,169
<p>using <code>groupby</code> and <code>unstack</code></p> <pre><code>df.groupby(['Device', 'Date', 'Category'])['Amount'].sum().unstack().reset_index() </code></pre> <p><a href="http://i.stack.imgur.com/ZOOY6.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZOOY6.png" alt="enter image description here"></a></p>
2
2016-09-15T14:50:41Z
[ "python", "pandas", "dataframe" ]
Given an arc with a known start(x, y), end(x, y) and angle, how can I calculate its bounding box?
39,513,353
<p>The title says it all. Given an arc with (for example):</p> <pre><code>Start Point: x = 53.34, y = 52.07
End Point:   x = 13.97, y = 52.07
Angle: 180 degrees
</code></pre> <p><a href="http://i.stack.imgur.com/xxQ0i.png" rel="nofollow"><img src="http://i.stack.imgur.com/xxQ0i.png" alt="enter image description here"></a> How can I find its bounding box?</p> <p>Even though I am writing in Python, pseudocode is preferred, so that it will be useful to other people.</p> <p>Thanks!</p> <p>-Tom</p>
0
2016-09-15T14:12:24Z
39,513,824
<pre><code>h = Sqrt((start.x - end.x)^2 + (start.y - end.y)^2)
or
h = Math.Hypot(start.x - end.x, start.y - end.y)

R = Abs(h / (2*Sin(Angle/2)))

if angle &lt;= Pi/2
    top = end.y
    left = end.x
    bottom = start.y
    right = start.x
else if angle &lt;= Pi
    top = start.y - R
    left = end.x
    bottom = start.y
    right = start.x
else if angle &lt;= 3*Pi/2
    top = start.y - R
    left = start.x - 2*R
    bottom = end.y
    right = start.x
else
    top = start.y - R
    left = start.x - 2*R
    bottom = start.y + R
    right = start.x
</code></pre>
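<p>Since the question mentions Python, here is a direct translation of the case analysis above. It assumes the same conventions as the pseudocode (angle in radians, screen-style coordinates, and the same start/end orientation as the question's example); the function name is mine:</p> <pre><code>import math

def arc_bbox(start, end, angle):
    # chord length between the two endpoints
    h = math.hypot(start[0] - end[0], start[1] - end[1])
    r = abs(h / (2 * math.sin(angle / 2)))
    if angle &lt;= math.pi / 2:
        top, left, bottom, right = end[1], end[0], start[1], start[0]
    elif angle &lt;= math.pi:
        top, left, bottom, right = start[1] - r, end[0], start[1], start[0]
    elif angle &lt;= 3 * math.pi / 2:
        top, left, bottom, right = start[1] - r, start[0] - 2 * r, end[1], start[0]
    else:
        top, left, bottom, right = start[1] - r, start[0] - 2 * r, start[1] + r, start[0]
    return left, top, right, bottom
</code></pre>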
1
2016-09-15T14:34:07Z
[ "python", "math", "geometry", "bounding-box" ]
RedisClusterException when trying to write using Celery
39,513,405
<p><strong>My environment:</strong> I have three Ubuntu servers. One server is used as a load balancer running Nginx. The other two servers contain the exact same project (they are identical, apart from redis, where one box is the master and the other is the slave).</p> <p><strong>The programs/applications I'm using:</strong> Python, Gunicorn, Django, Celery, Redis, Sentinel</p> <p><strong>What my project does:</strong> I have a URL which takes in a GET request, does some logic regarding the request, and saves the result into redis.</p> <pre><code>def save(key, value):
    conn = redis_cluster_connect()
    print conn
    conn.set(key, value)
    conn.set('bar','foo')
</code></pre> <p>This was working fine when my connection was:</p> <pre><code>def redis_connect():
    print 'redis connected'
    return redis.Redis(host="xxx.xx.xxx.x", port=6380, db=1)  # CHANGE TO THE REDIS PORT THAT IS ACTIVE
</code></pre> <p>But when I use the cluster connection:</p> <pre><code>def redis_cluster_connect():
    startup_nodes = [{"host": "xxx.xx.xxx.x", "port": "6380"}]
    rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)
    return rc
</code></pre> <p><strong>The error I get:</strong></p> <pre><code>[2016-09-15 13:28:59,682: INFO/MainProcess] Received task: testapp.tasks.mobilise_stops[cc64c425-bd37-4896-b6ab-4319de5fb743]
[2016-09-15 13:28:59,684: WARNING/Worker-1] Oh no! Task failed: RedisClusterException("ERROR sending 'cluster slots' command to redis server: {'host': 'xxx.xx.xxx.x', 'port': '6380'}",)
[2016-09-15 13:28:59,685: ERROR/MainProcess] Task testapp.tasks.mobilise_stops[cc64c425-bd37-4896-b6ab-4319de5fb743] raised unexpected: RedisClusterException("ERROR sending 'cluster slots' command to redis server: {'host': 'xxx.xx.xxx.x', 'port': '6380'}",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "/var/www/gateway/venv/gateway/testapp/tasks.py", line 50, in mobilise_stops
    save(mobilise_stops.request.id, 'Success')
  File "/var/www/gateway/venv/gateway/testapp/tasks.py", line 28, in save
    conn = redis_cluster_connect()
  File "/var/www/gateway/venv/gateway/testapp/tasks.py", line 19, in redis_cluster_connect
    rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)
  File "/usr/local/lib/python2.7/dist-packages/rediscluster/client.py", line 157, in __init__
    **kwargs
  File "/usr/local/lib/python2.7/dist-packages/rediscluster/connection.py", line 83, in __init__
    self.nodes.initialize()
  File "/usr/local/lib/python2.7/dist-packages/rediscluster/nodemanager.py", line 148, in initialize
    raise RedisClusterException("ERROR sending 'cluster slots' command to redis server: {0}".format(node))
RedisClusterException: ERROR sending 'cluster slots' command to redis server: {'host': 'xxx.xx.xxx.x', 'port': '6380'}
</code></pre> <p>When I run redis-cli and set a variable on the master (on server 1), I can retrieve it on the slave (on server 2):</p> <pre><code>server 1:
xxx.xx.xxx.x:6380&gt; set test 100
OK

server 2:
xxx.xx.xxx.x:6381&gt; get test
"100"
</code></pre> <p>When I try the cluster slots command in the redis client, I get the same error as above:</p> <pre><code>xxx.xx.xxx.x:6380&gt; cluster slots
(error) ERR unknown command 'cluster'
</code></pre> <p>One thing to consider: my redis config file doesn't use a TCP socket. When I tried using TCP, redis would not run and gave me an error. So the config was changed to the default Unix socket that comes in the default redis.conf file.</p> <p>From:</p> <pre><code>tcp-backlog 511
</code></pre> <p>To:</p> <pre><code>unixsocket /var/run/redis/redis.sock
unixsocketperm 700
</code></pre>
0
2016-09-15T14:14:30Z
39,548,072
<p>You are not running a redis cluster, so attempting to run redis-cluster only commands are going to fail. </p>
0
2016-09-17T14:43:50Z
[ "python", "django", "sockets", "redis", "redis-cluster" ]
Python: How to restart a FOR loop, which iterates over a csv
39,513,569
<p>I am using Python 3.5 and I wanna load data from a csv into several lists, but it only works exactly one time with a FOR-Loop. Then it loads 0 into it.</p> <p>Here is the code:</p> <pre><code>f1 = open("csvfile.csv", encoding="latin-1") csv_f1 = csv.reader(f1, delimiter=';') list_f1_vorname = [] for row_f1 in csv_f1: list_f1_vorname.append(row_f1[2]) list_f1_name = [] for row_f1 in csv_f1: # &lt;-- HERE IS THE ERROR, IT DOESNT WORK A SECOND TIME! list_f1_name.append(row_f1[3]) </code></pre> <p>Does anybody know how to restart this thing? Many thanks and best regards, Saitam</p>
2
2016-09-15T14:22:19Z
39,513,620
<p>Try:</p> <pre><code>csv_f1 = list(csv.reader(f1, delimiter=';')) </code></pre> <p>It is not exactly restarting the reader, but rather caching the file contents in a list, which may be iterated many times.</p>
0
2016-09-15T14:24:47Z
[ "python", "loops", "csv", "for-loop", "restart" ]
Python: How to restart a FOR loop, which iterates over a csv
39,513,569
<p>I am using Python 3.5 and I wanna load data from a csv into several lists, but it only works exactly one time with a FOR-Loop. Then it loads 0 into it.</p> <p>Here is the code:</p> <pre><code>f1 = open("csvfile.csv", encoding="latin-1") csv_f1 = csv.reader(f1, delimiter=';') list_f1_vorname = [] for row_f1 in csv_f1: list_f1_vorname.append(row_f1[2]) list_f1_name = [] for row_f1 in csv_f1: # &lt;-- HERE IS THE ERROR, IT DOESNT WORK A SECOND TIME! list_f1_name.append(row_f1[3]) </code></pre> <p>Does anybody know how to restart this thing? Many thanks and best regards, Saitam</p>
2
2016-09-15T14:22:19Z
39,513,657
<p><code>csv_f1</code> is not a <code>list</code>; it is an iterator.</p> <p>Either cache <code>csv_f1</code> in a list by using <code>list()</code>, or just recreate the object.</p> <p>I would recommend recreating the object in case your csv data gets very big. This way, the data is not loaded into RAM in its entirety.</p>
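<p>Recreating it is just a rewind plus a new reader; a quick sketch:</p> <pre><code>f1.seek(0)  # rewind the underlying file
csv_f1 = csv.reader(f1, delimiter=';')
</code></pre>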
3
2016-09-15T14:26:28Z
[ "python", "loops", "csv", "for-loop", "restart" ]
Python: How to restart a FOR loop, which iterates over a csv
39,513,569
<p>I am using Python 3.5 and I wanna load data from a csv into several lists, but it only works exactly one time with a FOR-Loop. Then it loads 0 into it.</p> <p>Here is the code:</p> <pre><code>f1 = open("csvfile.csv", encoding="latin-1") csv_f1 = csv.reader(f1, delimiter=';') list_f1_vorname = [] for row_f1 in csv_f1: list_f1_vorname.append(row_f1[2]) list_f1_name = [] for row_f1 in csv_f1: # &lt;-- HERE IS THE ERROR, IT DOESNT WORK A SECOND TIME! list_f1_name.append(row_f1[3]) </code></pre> <p>Does anybody know how to restart this thing? Many thanks and best regards, Saitam</p>
2
2016-09-15T14:22:19Z
39,513,935
<p>The simple answer is to iterate over the csv once and store it in a list, something like:</p> <pre><code>my_list = []
for row in csv_f1:
    my_list.append(row)
</code></pre> <p>or what abukaj wrote with</p> <pre><code>csv_f1 = list(csv.reader(f1, delimiter=';'))
</code></pre> <p>and then move on and iterate over that list as many times as you want.</p> <p>However, if you are only trying to get certain columns, then you can simply do that in the same for loop:</p> <pre><code>list_f1_vorname = []
list_f1_name = []
for row in csv_f1:
    list_f1_vorname.append(row[2])
    list_f1_name.append(row[3])
</code></pre> <p>The reason it doesn't work multiple times is that the reader is an iterator: it iterates over the values once and does not restart at the beginning after it has been exhausted.</p>
1
2016-09-15T14:38:58Z
[ "python", "loops", "csv", "for-loop", "restart" ]
Python: How to restart a FOR loop, which iterates over a csv
39,513,569
<p>I am using Python 3.5 and I wanna load data from a csv into several lists, but it only works exactly one time with a FOR-Loop. Then it loads 0 into it.</p> <p>Here is the code:</p> <pre><code>f1 = open("csvfile.csv", encoding="latin-1") csv_f1 = csv.reader(f1, delimiter=';') list_f1_vorname = [] for row_f1 in csv_f1: list_f1_vorname.append(row_f1[2]) list_f1_name = [] for row_f1 in csv_f1: # &lt;-- HERE IS THE ERROR, IT DOESNT WORK A SECOND TIME! list_f1_name.append(row_f1[3]) </code></pre> <p>Does anybody know how to restart this thing? Many thanks and best regards, Saitam</p>
2
2016-09-15T14:22:19Z
39,515,167
<p>One thing nobody has noticed so far is that you're trying to store names and last names in two separate lists. This is not going to be very convenient to use later on. Therefore, although the other answers show correct ways to read names and last names from the csv into two separate lists, I propose using a single list of dicts instead:</p> <pre><code>f1 = open("csvfile.csv", encoding="latin-1")
csv_f1 = csv.reader(f1, delimiter=";")

list_of_names = []
for row_f1 in csv_f1:
    list_of_names.append({
        "vorname": row_f1[2],
        "name": row_f1[3]
    })
</code></pre> <p>Then you can iterate over this list and take the value you want. For example, to simply print the values:</p> <pre><code>for row in list_of_names:
    print(row["vorname"])
    print(row["name"])
</code></pre> <p>Last but not least, you could also build this list with a list comprehension (more Pythonic):</p> <pre><code>list_of_names = [
    {
        "vorname": row_f1[2],
        "name": row_f1[3]
    }
    for row_f1 in csv_f1
]
</code></pre> <p>As I said, I appreciate the other answers. They solve the issue of the csv reader being an iterator and not a list-like object.</p> <p>Nevertheless I see a little bit of an XY Problem in your question. I've seen many attempts to store entity properties (name and last name are obviously related properties and form a simple entity together) in multiple lists. It always ends up with code that is hard to read and maintain.</p>
0
2016-09-15T15:39:50Z
[ "python", "loops", "csv", "for-loop", "restart" ]
Django Count of Items in a Field
39,513,627
<p>models.py</p> <pre><code>class Event(models.Model): name = models.CharField(max_length=20, unique=True) distance = models.IntegerField() date = models.DateField() class Category(models.Model): name = models.CharField(max_length=20, unique=True) description = models.CharField(max_length=20, unique=True) isnew = models.BooleanField(default=False) class Result(models.Model): event = models.ForeignKey(Event) category = models.ForeignKey(Category) score = models.IntegerField() </code></pre> <p>I want to do a query to return a count of each unique Category in the Result table, for a given Event.</p> <p>What I'm doing now is something like:</p> <pre><code>results = Result.objects.filter(event=myevent) categorycountdict = {} for r in results: if r.category in categorycountdict: categorycountdict[r.category] += 1 else: categorycountdict[r.category] = 1 </code></pre> <p>Is there a better way, perhaps by query instead of python.</p>
3
2016-09-15T14:24:58Z
39,514,173
<p>Use <a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/" rel="nofollow"><code>annotate</code></a>:</p> <pre><code>from django.db.models import Count

Result.objects.filter(event=some_event).values('category').annotate(count=Count('category'))
</code></pre>
0
2016-09-15T14:51:00Z
[ "python", "django", "django-models" ]
Django Count of Items in a Field
39,513,627
<p>models.py</p> <pre><code>class Event(models.Model): name = models.CharField(max_length=20, unique=True) distance = models.IntegerField() date = models.DateField() class Category(models.Model): name = models.CharField(max_length=20, unique=True) description = models.CharField(max_length=20, unique=True) isnew = models.BooleanField(default=False) class Result(models.Model): event = models.ForeignKey(Event) category = models.ForeignKey(Category) score = models.IntegerField() </code></pre> <p>I want to do a query to return a count of each unique Category in the Result table, for a given Event.</p> <p>What I'm doing now is something like:</p> <pre><code>results = Result.objects.filter(event=myevent) categorycountdict = {} for r in results: if r.category in categorycountdict: categorycountdict[r.category] += 1 else: categorycountdict[r.category] = 1 </code></pre> <p>Is there a better way, perhaps by query instead of python.</p>
3
2016-09-15T14:24:58Z
39,514,186
<p>You can use <code>annotate()</code> with <code>values()</code>. This approach is shown in the docs for <a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/#values" rel="nofollow"><code>values()</code></a>. To get the count for each category name, you could do:</p> <pre><code>from django.db.models import Count

categories = Result.objects.filter(
    event=myevent,
).order_by('category').values(
    'category__name'
).annotate(count=Count('category__name'))
</code></pre> <p>This will return a list of dictionaries with keys <code>category__name</code> and <code>count</code>, for example:</p> <pre><code>[{'count': 3, 'category__name': u'category1'},
 {'count': 1, 'category__name': u'category2'}]
</code></pre> <p>You could convert this to a single dictionary by using a dictionary comprehension:</p> <pre><code>counts_by_category = {d['category__name']: d['count'] for d in categories}
</code></pre>
4
2016-09-15T14:51:49Z
[ "python", "django", "django-models" ]
Django Count of Items in a Field
39,513,627
<p>models.py</p> <pre><code>class Event(models.Model): name = models.CharField(max_length=20, unique=True) distance = models.IntegerField() date = models.DateField() class Category(models.Model): name = models.CharField(max_length=20, unique=True) description = models.CharField(max_length=20, unique=True) isnew = models.BooleanField(default=False) class Result(models.Model): event = models.ForeignKey(Event) category = models.ForeignKey(Category) score = models.IntegerField() </code></pre> <p>I want to do a query to return a count of each unique Category in the Result table, for a given Event.</p> <p>What I'm doing now is something like:</p> <pre><code>results = Result.objects.filter(event=myevent) categorycountdict = {} for r in results: if r.category in categorycountdict: categorycountdict[r.category] += 1 else: categorycountdict[r.category] = 1 </code></pre> <p>Is there a better way, perhaps by query instead of Python?</p>
3
2016-09-15T14:24:58Z
39,514,304
<p>There is <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow">collections.Counter</a> in the Python standard library:</p> <pre><code>from collections import Counter results = Result.objects.filter(event=myevent).select_related('category') c = Counter(r.category for r in results) </code></pre> <p>Now c is a dict-like object where keys are Category instances, and values are counts.</p> <p>This option is not suitable for large datasets though, since it doesn't use database features. So if you have a lot of data, approaches that use values, annotate and Count are more suitable.</p> <p><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#select-related" rel="nofollow">select_related</a> is used here in order to eliminate a database hit for every result. Basically it makes 1 query with a join and generates Python objects for the categories.</p> <hr> <hr> <p>It turns out that ManyToManyField keeps track of unique records only, so the answer below is incorrect.</p> <p>Your Event and Category models expose a many-to-many connection through the Result model.</p> <p>You can express it using <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ManyToManyField.through" rel="nofollow">through</a> in Django (note the categories ManyToManyField):</p> <pre><code>class Event(models.Model): name = models.CharField(max_length=20, unique=True) distance = models.IntegerField() date = models.DateField() categories = models.ManyToManyField(Category, through='Result', related_name='events') </code></pre> <p>In your code, for a given event you could query like this:</p> <pre><code>event.categories.count() </code></pre> <p>Taking Alasdair's note into account:</p> <pre><code>event.categories.values('pk').annotate(count=Count('pk')) </code></pre>
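<p>For example, to print the per-category counts (each key is a <code>Category</code> instance, so its <code>name</code> field from the models above is available):</p> <pre><code>for category, count in c.items():
    print(category.name, count)
</code></pre>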
0
2016-09-15T14:57:50Z
[ "python", "django", "django-models" ]
from pytrends.pyGTrends import pyGTrends not working
39,514,109
<p>I installed pytrends using this wheel file <a href="https://pypi.python.org/packages/b5/b9/1942e2b6cfa643212d5d856793ae9584d635be2f536e94983fb74496ab5b/pytrends-3.1.0-py2.py3-none-any.whl" rel="nofollow">pytrends.whl</a> and ended up with the directories below</p> <pre><code> Directory of C:\Python35\Lib\site-packages\pytrends 09/15/2016 10:30 AM &lt;DIR&gt; . 09/15/2016 10:30 AM &lt;DIR&gt; .. 09/15/2016 10:30 AM 6,799 request.py 09/15/2016 10:30 AM 0 __init__.py 09/15/2016 10:30 AM &lt;DIR&gt; __pycache__ 2 File(s) 6,799 bytes 3 Dir(s) 323,016,486,912 bytes free </code></pre> <p>and</p> <pre><code> Directory of C:\Python35\Lib\site-packages\pytrends-3.1.0.dist-info 09/15/2016 10:30 AM &lt;DIR&gt; . 09/15/2016 10:30 AM &lt;DIR&gt; .. 09/15/2016 10:30 AM 7,900 DESCRIPTION.rst 09/15/2016 10:30 AM 4 INSTALLER 09/15/2016 10:30 AM 8,622 METADATA 09/15/2016 10:30 AM 878 metadata.json 09/15/2016 10:30 AM 835 RECORD 09/15/2016 10:30 AM 9 top_level.txt 09/15/2016 10:30 AM 116 WHEEL 7 File(s) 18,364 bytes 2 Dir(s) 323,017,011,200 bytes free </code></pre> <p>And from here, unlike other packages where the imported submodules are inside the package folder, this one basically contains no submodules. It might be the way I installed the package. This one liner,</p> <pre><code>from pytrends.pyGTrends import pyGTrends </code></pre> <p>gives the following error:</p> <pre><code>Traceback (most recent call last): File "GoogleTrend.py", line 1, in &lt;module&gt; from pytrends.pyGTrends import pyGTrends ImportError: No module named 'pytrends.pyGTrends' Press any key to continue . . . </code></pre> <p>Or is it better to use the <code>gtrends</code> module instead? I am using <code>python 3.5</code>. </p>
0
2016-09-15T14:47:51Z
39,621,481
<p>You should probably use <code>from pytrends.request import TrendReq</code>.</p> <p>See <a href="https://github.com/GeneralMills/pytrends/blob/master/examples/example.py" rel="nofollow">example.py</a>.</p>
2
2016-09-21T16:04:53Z
[ "python", "google-trends" ]
Counting how many times in a row the result of a sum is positive (or negative)
39,514,202
<p><strong>First Part</strong></p> <blockquote> <p>I have a dataframe with finance data (33023 rows, here is the link to the data: <a href="https://mab.to/Ssy3TelRs" rel="nofollow">https://mab.to/Ssy3TelRs</a>); df.open is the price of the title and df.close is the closing price.</p> <p>I have been trying to see how many times in a row the title closed with a gain and with a loss.</p> <p>The result that I'm looking for should tell me that the title was positive 2 days in a row x times, 3 days in a row y times, 4 days in a row z times and so forth.</p> <p>I have started with a for:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] </code></pre> <p>and then an unsuccessful series of if statements...</p> <p>Thank you for your help.</p> <p>CronosVirus00</p> <p>EDITS:</p> <pre><code>&gt;&gt;&gt; df.head(7) data ora open max min close Unnamed: 6 0 20160801 0 1.11781 1.11781 1.11772 1.11773 0 1 20160801 100 1.11774 1.11779 1.11773 1.11777 0 2 20160801 200 1.11779 1.11800 1.11779 1.11795 0 3 20160801 300 1.11794 1.11801 1.11771 1.11771 0 4 20160801 400 1.11766 1.11772 1.11763 1.11772 0 5 20160801 500 1.11774 1.11798 1.11774 1.11796 0 6 20160801 600 1.11796 1.11796 1.11783 1.11783 0 </code></pre> <p>Ifs:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] if y &gt; 0 : green += 1 y = df.close[x+1] - df.close[x+1] twotimes += 1 if y &gt; 0 : green += 1 y = df.close[x+2] - df.close[x+2] threetimes += 1 if y &gt; 0 : green += 1 y = df.close[x+3] - df.close[x+3] fourtimes += 1 </code></pre> <p>FINAL SOLUTION</p> <p>Thank you all! In the end I did this:</p> <pre><code>df['test'] = df.close - df.open &gt; 0 green = df.test #days that it was positive def gg(z): tot = green.count() giorni = range (1,z+1) # days in a row i wanna check for x in giorni: y = (green.rolling(x).sum()&gt;x-1).sum() print(x," ",y, " ", round((y/tot)*100,1),"%") gg(5) 1 14850 45.0 % 2 6647 20.1 % 3 2980 9.0 % 4 1346 4.1 % 5 607 1.8 % </code></pre> </blockquote>
2
2016-09-15T14:52:39Z
39,514,416
<p>It sounds like what you want to do is: </p> <ul> <li>compute the difference of two series (open &amp; close), eg <code>diff = df.open - df.close</code></li> <li>apply a condition to the result to get a boolean series <code>diff &gt; 0</code></li> <li>pass the resulting boolean series to the DataFrame to get a subset of the DataFrame where the condition is true <code>df[diff &gt; 0]</code></li> <li>Find all contiguous subsequences by applying a column wise function to identify and count </li> </ul> <p>I need to board a plane, but I will provide a sample of what the last step looks like when I can.</p>
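<p>In the meantime, here is a rough sketch of what those steps could look like (my own illustration, using <code>itertools.groupby</code> to find the runs of consecutive gains):</p> <pre><code>from collections import Counter
from itertools import groupby

diff = df['close'] - df['open']  # difference of the two series
gains = diff &gt; 0                 # boolean series: True where the day closed with a gain

# length of every contiguous run of gains, then a tally of runs per length
run_lengths = [sum(1 for _ in run) for is_gain, run in groupby(gains) if is_gain]
print(Counter(run_lengths))      # e.g. Counter({1: 5, 2: 3, 3: 1})
</code></pre>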
2
2016-09-15T15:03:27Z
[ "python", "pandas", "dataframe" ]
Counting how many times in a row the result of a sum is positive (or negative)
39,514,202
<p><strong>First Part</strong></p> <blockquote> <p>I have a dataframe with finance data (33023 rows, here is the link to the data: <a href="https://mab.to/Ssy3TelRs" rel="nofollow">https://mab.to/Ssy3TelRs</a>); df.open is the price of the title and df.close is the closing price.</p> <p>I have been trying to see how many times in a row the title closed with a gain and with a loss.</p> <p>The result that I'm looking for should tell me that the title was positive 2 days in a row x times, 3 days in a row y times, 4 days in a row z times and so forth.</p> <p>I have started with a for:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] </code></pre> <p>and then an unsuccessful series of if statements...</p> <p>Thank you for your help.</p> <p>CronosVirus00</p> <p>EDITS:</p> <pre><code>&gt;&gt;&gt; df.head(7) data ora open max min close Unnamed: 6 0 20160801 0 1.11781 1.11781 1.11772 1.11773 0 1 20160801 100 1.11774 1.11779 1.11773 1.11777 0 2 20160801 200 1.11779 1.11800 1.11779 1.11795 0 3 20160801 300 1.11794 1.11801 1.11771 1.11771 0 4 20160801 400 1.11766 1.11772 1.11763 1.11772 0 5 20160801 500 1.11774 1.11798 1.11774 1.11796 0 6 20160801 600 1.11796 1.11796 1.11783 1.11783 0 </code></pre> <p>Ifs:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] if y &gt; 0 : green += 1 y = df.close[x+1] - df.close[x+1] twotimes += 1 if y &gt; 0 : green += 1 y = df.close[x+2] - df.close[x+2] threetimes += 1 if y &gt; 0 : green += 1 y = df.close[x+3] - df.close[x+3] fourtimes += 1 </code></pre> <p>FINAL SOLUTION</p> <p>Thank you all! In the end I did this:</p> <pre><code>df['test'] = df.close - df.open &gt; 0 green = df.test #days that it was positive def gg(z): tot = green.count() giorni = range (1,z+1) # days in a row i wanna check for x in giorni: y = (green.rolling(x).sum()&gt;x-1).sum() print(x," ",y, " ", round((y/tot)*100,1),"%") gg(5) 1 14850 45.0 % 2 6647 20.1 % 3 2980 9.0 % 4 1346 4.1 % 5 607 1.8 % </code></pre> </blockquote>
2
2016-09-15T14:52:39Z
39,514,805
<p>If I understood you correctly, you want the number of days that have at least <code>n</code> positive days in a row ending on (and including) that day.</p> <p>Similarly to what @Thang suggested, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow">rolling</a>:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(10, 2), columns=["open", "close"]) # This just sets up random test data, for example: # open close # 0 0.997986 0.594789 # 1 0.052712 0.401275 # 2 0.895179 0.842259 # 3 0.747268 0.919169 # 4 0.113408 0.253440 # 5 0.199062 0.399003 # 6 0.436424 0.514781 # 7 0.180154 0.235816 # 8 0.750042 0.558278 # 9 0.840404 0.139869 positiveDays = df["close"]-df["open"] &gt; 0 # This will give you a series that is True for positive days: # 0 False # 1 True # 2 False # 3 True # 4 True # 5 True # 6 True # 7 True # 8 False # 9 False # dtype: bool daysToCheck = 3 positiveDays.rolling(daysToCheck).sum()&gt;daysToCheck-1 </code></pre> <p>This will now give you a series indicating, for every day, whether it has been positive for <code>daysToCheck</code> days in a row:</p> <pre><code>0 False 1 False 2 False 3 False 4 False 5 True 6 True 7 True 8 False 9 False dtype: bool </code></pre> <p>Now you can use <code>(positiveDays.rolling(daysToCheck).sum()&gt;daysToCheck-1).sum()</code> to get the number of days (in the example <code>3</code>) that obey this, which is what you want, as far as I understand.</p>
2
2016-09-15T15:21:36Z
[ "python", "pandas", "dataframe" ]
Counting how many times in a row the result of a sum is positive (or negative)
39,514,202
<p><strong>First Part</strong></p> <blockquote> <p>I have a dataframe with finance data (33023 rows, here is the link to the data: <a href="https://mab.to/Ssy3TelRs" rel="nofollow">https://mab.to/Ssy3TelRs</a>); df.open is the price of the title and df.close is the closing price.</p> <p>I have been trying to see how many times in a row the title closed with a gain and with a loss.</p> <p>The result that I'm looking for should tell me that the title was positive 2 days in a row x times, 3 days in a row y times, 4 days in a row z times and so forth.</p> <p>I have started with a for:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] </code></pre> <p>and then an unsuccessful series of if statements...</p> <p>Thank you for your help.</p> <p>CronosVirus00</p> <p>EDITS:</p> <pre><code>&gt;&gt;&gt; df.head(7) data ora open max min close Unnamed: 6 0 20160801 0 1.11781 1.11781 1.11772 1.11773 0 1 20160801 100 1.11774 1.11779 1.11773 1.11777 0 2 20160801 200 1.11779 1.11800 1.11779 1.11795 0 3 20160801 300 1.11794 1.11801 1.11771 1.11771 0 4 20160801 400 1.11766 1.11772 1.11763 1.11772 0 5 20160801 500 1.11774 1.11798 1.11774 1.11796 0 6 20160801 600 1.11796 1.11796 1.11783 1.11783 0 </code></pre> <p>Ifs:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] if y &gt; 0 : green += 1 y = df.close[x+1] - df.close[x+1] twotimes += 1 if y &gt; 0 : green += 1 y = df.close[x+2] - df.close[x+2] threetimes += 1 if y &gt; 0 : green += 1 y = df.close[x+3] - df.close[x+3] fourtimes += 1 </code></pre> <p>FINAL SOLUTION</p> <p>Thank you all! In the end I did this:</p> <pre><code>df['test'] = df.close - df.open &gt; 0 green = df.test #days that it was positive def gg(z): tot = green.count() giorni = range (1,z+1) # days in a row i wanna check for x in giorni: y = (green.rolling(x).sum()&gt;x-1).sum() print(x," ",y, " ", round((y/tot)*100,1),"%") gg(5) 1 14850 45.0 % 2 6647 20.1 % 3 2980 9.0 % 4 1346 4.1 % 5 607 1.8 % </code></pre> </blockquote>
2
2016-09-15T14:52:39Z
39,515,000
<p>This should work:</p> <pre><code>import pandas as pd import numpy as np test = pd.DataFrame(np.random.randn(100,2), columns = ['open','close']) test['gain?'] = (test['open']-test['close'] &lt; 0) test['cumulative'] = 0 for i in test.index[1:]: if test['gain?'][i]: test['cumulative'][i] = test['cumulative'][i-1] + 1 test['cumulative'][i-1] = 0 results = test['cumulative'].value_counts() </code></pre> <p>Ignore the '0' row in the results. It can be modified without too much trouble if you want to e.g. count both days in a run-of-two as runs-of-one as well.</p> <p>Edit: without the warnings -</p> <pre><code>import pandas as pd import numpy as np test = pd.DataFrame(np.random.randn(100,2), columns = ['open','close']) test['gain?'] = (test['open']-test['close'] &lt; 0) test['cumulative'] = 0 for i in test.index[1:]: if test['gain?'][i]: test.loc[i,'cumulative'] = test.loc[i-1,'cumulative'] + 1 test.loc[i-1,'cumulative'] = 0 results = test['cumulative'].value_counts() </code></pre>
0
2016-09-15T15:31:18Z
[ "python", "pandas", "dataframe" ]
Counting how many times in a row the result of a sum is positive (or negative)
39,514,202
<p><strong>First Part</strong></p> <blockquote> <p>I have a dataframe with finance data (33023 rows, here is the link to the data: <a href="https://mab.to/Ssy3TelRs" rel="nofollow">https://mab.to/Ssy3TelRs</a>); df.open is the price of the title and df.close is the closing price.</p> <p>I have been trying to see how many times in a row the title closed with a gain and with a loss.</p> <p>The result that I'm looking for should tell me that the title was positive 2 days in a row x times, 3 days in a row y times, 4 days in a row z times and so forth.</p> <p>I have started with a for:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] </code></pre> <p>and then an unsuccessful series of if statements...</p> <p>Thank you for your help.</p> <p>CronosVirus00</p> <p>EDITS:</p> <pre><code>&gt;&gt;&gt; df.head(7) data ora open max min close Unnamed: 6 0 20160801 0 1.11781 1.11781 1.11772 1.11773 0 1 20160801 100 1.11774 1.11779 1.11773 1.11777 0 2 20160801 200 1.11779 1.11800 1.11779 1.11795 0 3 20160801 300 1.11794 1.11801 1.11771 1.11771 0 4 20160801 400 1.11766 1.11772 1.11763 1.11772 0 5 20160801 500 1.11774 1.11798 1.11774 1.11796 0 6 20160801 600 1.11796 1.11796 1.11783 1.11783 0 </code></pre> <p>Ifs:</p> <pre><code>for x in range(1,df.close.count()): y = df.close[x]-df.open[x] if y &gt; 0 : green += 1 y = df.close[x+1] - df.close[x+1] twotimes += 1 if y &gt; 0 : green += 1 y = df.close[x+2] - df.close[x+2] threetimes += 1 if y &gt; 0 : green += 1 y = df.close[x+3] - df.close[x+3] fourtimes += 1 </code></pre> <p>FINAL SOLUTION</p> <p>Thank you all! In the end I did this:</p> <pre><code>df['test'] = df.close - df.open &gt; 0 green = df.test #days that it was positive def gg(z): tot = green.count() giorni = range (1,z+1) # days in a row i wanna check for x in giorni: y = (green.rolling(x).sum()&gt;x-1).sum() print(x," ",y, " ", round((y/tot)*100,1),"%") gg(5) 1 14850 45.0 % 2 6647 20.1 % 3 2980 9.0 % 4 1346 4.1 % 5 607 1.8 % </code></pre> </blockquote>
2
2016-09-15T14:52:39Z
39,516,000
<p>If I understood your question correctly, you can do it this way:</p> <pre><code>In [76]: df.groupby((df.close.diff() &lt; 0).cumsum()).cumcount() Out[76]: 0 0 1 1 2 2 3 0 4 1 5 2 6 0 7 0 dtype: int64 </code></pre> <blockquote> <p>The result that I'm looking for should tell me that the title was positive 2 days in a row x times, 3 days in a row y times, 4 days in a row z times and so forth.</p> </blockquote> <pre><code>In [114]: df.groupby((df.close.diff() &lt; 0).cumsum()).cumcount().value_counts().to_frame('count') Out[114]: count 0 4 2 2 1 2 </code></pre> <p>Data set:</p> <pre><code>In [78]: df Out[78]: data ora open max min close 0 20160801 0 1.11781 1.11781 1.11772 1.11773 1 20160801 100 1.11774 1.11779 1.11773 1.11777 2 20160801 200 1.11779 1.11800 1.11779 1.11795 3 20160801 300 1.11794 1.11801 1.11771 1.11771 4 20160801 400 1.11766 1.11772 1.11763 1.11772 5 20160801 500 1.11774 1.11798 1.11774 1.11796 6 20160801 600 1.11796 1.11796 1.11783 1.11783 7 20160801 700 1.11783 1.11799 1.11783 1.11780 In [80]: df.close.diff() Out[80]: 0 NaN 1 0.00004 2 0.00018 3 -0.00024 4 0.00001 5 0.00024 6 -0.00013 7 -0.00003 Name: close, dtype: float64 </code></pre>
1
2016-09-15T16:25:09Z
[ "python", "pandas", "dataframe" ]
Execute python script via Arduino Uno in windows
39,514,309
<p>I am wondering is there any way to run python script via Arduino commands in Windows ?</p>
1
2016-09-15T14:58:09Z
39,515,223
<p>I believe there won't be any Arduino library that supports Python, because Python is interpreted and the Arduino doesn't have the memory for the entire Python runtime. If you're looking to program an Arduino, then maybe just try C; the code you need to learn for programming the Arduino isn't too different from the code you would find in Python. Most of the reference code can be found here: <a href="https://www.arduino.cc/en/Reference/HomePage" rel="nofollow">https://www.arduino.cc/en/Reference/HomePage</a></p> <p>That said, these are some of the Python modules related to running Python on an Arduino: <a href="http://playground.arduino.cc/CommonTopics/PyMite" rel="nofollow">http://playground.arduino.cc/CommonTopics/PyMite</a></p>
0
2016-09-15T15:42:55Z
[ "python", "arduino-uno" ]
Execute python script via Arduino Uno in windows
39,514,309
<p>I am wondering is there any way to run python script via Arduino commands in Windows ?</p>
1
2016-09-15T14:58:09Z
39,521,928
<p>I don't know if this answers your question, but you can download the Vpython library to create some cool projects with it, or connect sensors and get data back into Python from the Arduino, or vice versa.</p> <p>So for example:</p> <pre><code>int trigPin=13; //Sensor Trig pin connected to Arduino pin 13 int echoPin=11; //Sensor Echo pin connected to Arduino pin 11 float pingTime; //time for ping to travel from sensor to target and return float targetDistance; //Distance to Target in inches float speedOfSound=776.5; //Speed of sound in miles per hour when temp is 77 degrees. void setup() { // put your setup code here, to run once: Serial.begin(9600); pinMode(trigPin, OUTPUT); pinMode(echoPin, INPUT); } void loop() { // put your main code here, to run repeatedly: digitalWrite(trigPin, LOW); //Set trigger pin low delayMicroseconds(2000); //Let signal settle digitalWrite(trigPin, HIGH); //Set trigPin high delayMicroseconds(15); //Delay in high state digitalWrite(trigPin, LOW); //ping has now been sent delayMicroseconds(10); //Delay in low state pingTime = pulseIn(echoPin, HIGH); //pingTime is presented in microseconds pingTime=pingTime/1000000; //convert pingTime to seconds by dividing by 1000000 (microseconds in a second) pingTime=pingTime/3600; //convert pingTime to hours by dividing by 3600 (seconds in an hour) targetDistance= speedOfSound * pingTime; //This will be in miles, since speed of sound was miles per hour targetDistance=targetDistance/2; //Remember ping travels to target and back from target, so you must divide by 2 for actual target distance. targetDistance= targetDistance*63360; //Convert miles to inches by multiplying by 63360 (inches per mile) Serial.println(targetDistance); delay(100); //delay tenth of a second to slow things down a little. } </code></pre> <p>And in Python</p> <pre><code>import serial #Import Serial Library from visual import * #Import all the vPython library arduinoSerialData = serial.Serial('com11', 9600) #Create an object for the Serial port. Adjust 'com11' to whatever port your arduino is sending to. measuringRod = cylinder( radius= .1, length=6, color=color.yellow, pos=(-3,-2,0)) lengthLabel = label(pos=(0,5,0), text='Target Distance is: ', box=False, height=30) target=box(pos=(0,-.5,0), length=.2, width=3, height=3, color=color.green) while (1==1): #Create a loop that continues to read and display the data rate(20) #Tell vpython to run this loop 20 times a second if (arduinoSerialData.inWaiting()&gt;0): #Check to see if a data point is available on the serial port myData = arduinoSerialData.readline() #Read the distance measure as a string print myData #Print the measurement to confirm things are working distance = float(myData) #convert reading to a floating point number measuringRod.length=distance #Change the length of your measuring rod to your last measurement target.pos=(-3+distance,-.5,0) myLabel= 'Target Distance is: ' + myData #Create label by appending string myData to string lengthLabel.text = myLabel #display updated myLabel on your graphic </code></pre> <p>This will make graphics in Python representing something you are holding in front of an ultrasonic sensor, and you can see the object moving in real time.</p> <p>I took the code from this website:</p> <p><a href="http://www.toptechboy.com/arduino/python-with-ardiuno-3-example-using-ultrasonic-sensor/" rel="nofollow">Toptechboy</a></p> <p>This site has a really good tutorial on how to hook up an Arduino to Python, and it is very simple.</p>
0
2016-09-15T23:48:35Z
[ "python", "arduino-uno" ]
Nginx - serve flask python on https and another port without https
39,514,407
<p>What I'm trying to accomplish: have a domain on https. Check. It's working OK using the following config. The flask app runs on port 1337 -> nginx takes it -> serves it through https. Everything is working nicely.</p> <p>Now I want to run another app, on port 1338 let's say. But if I do this, the browser (chrome) automatically redirects it to https. I want <a href="http://domain.com:1338" rel="nofollow">http://domain.com:1338</a> to work; instead I get <a href="https://domain.com:1338" rel="nofollow">https://domain.com:1338</a> with a certificate error.</p> <p>My question is: how can I make the other app (on port 1338) either work with https:// or work with http://?</p> <p>Here's my config...</p> <pre><code>server { listen 80 default_server; listen [::]:80 default_server; root /home/cleverbots; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html; server_name _; # SSL configuration # listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; ssl_certificate /xxxxxxxxxx.crt; ssl_certificate_key /xxxxxxxxxx.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_ecdh_curve secp384r1; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; # Disable preloading HSTS for now. You can use the commented out header line that includes # the "preload" directive if you understand the implications. #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; ssl_dhparam /xxxxxx/dhparam.pem; location /static/ { expires 30d; add_header Last-Modified $sent_http_Expires; alias /home/my_first_app/application/static/; } location / { try_files $uri @tornado; } location @tornado { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:1337; } } </code></pre>
0
2016-09-15T15:03:02Z
39,517,216
<p>The answer to your question depends on what exactly you want the user experience to be.</p> <p>As I understand your goal, you only have one domain (example.com). Your first app, (I'm going to call it <code>app1337</code>) is running on port 1337 and you can access in a browser at <a href="https://example.com/" rel="nofollow">https://example.com/</a>. Now you want to add another app (<code>app1338</code>) that you want to be able to access at <a href="https://example.com:1338/" rel="nofollow">https://example.com:1338/</a>. The problem here is that only one service can run on a given port on a given interface. This can work, but means that you have to be really careful to make sure that your flask app <em>only</em> listens on loopback (127.0.0.1) and Nginx only listens on your Ethernet interface. If not, you'll get "socket already in use" errors. I would recommend instead using something else like 8338 in Nginx to avoid this confusion.</p> <p>The fastest solution I can see would be to leave your existing server block exactly as is. Duplicate the entire thing, and in the new block:</p> <ol> <li>Change the 2 <code>listen 443</code> lines to the port you want to use in browser (8338).</li> <li>Remove the <code>listen 80</code> lines or, if you want to serve the app on both ssl and non-ssl, change the port to the non-ssl port you want to use.</li> <li>Change your <code>proxy_pass</code> line to point to your second flask app.</li> </ol> <p>Like Keenan, I would recommend you use subdomains to sort your traffic. Something like <a href="https://app1337.example.com/" rel="nofollow">https://app1337.example.com/</a> and <a href="https://app1338.example.com/" rel="nofollow">https://app1338.example.com/</a> to make for a better user experience. To do this, duplicate the server block as above, but this time leave the ports the same, but change the "server_name" directive in each block to match the domain. Remove all of the "default_server" parts from the listen directives.</p> <p>As an example:</p> <pre><code>server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name app1337.example.com; # SSL configuration # Certificate and key for "app1337.example.com" ssl_certificate /xxxxxxxxxx.crt; ssl_certificate_key /xxxxxxxxxx.key; # The rest of the ssl stuff is common and can be moved to a shared file and included # in whatever blocks it is needed. include sslcommon.conf; root /home/cleverbots; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html; location /static/ { expires 30d; add_header Last-Modified $sent_http_Expires; alias /home/my_first_app/application/static/; } location / { try_files $uri @tornado; } location @tornado { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:1337; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name app1338.example.com; # SSL configuration # Certificate and key for "app1338.example.com" ssl_certificate /xxxxxxxxxx.crt; ssl_certificate_key /xxxxxxxxxx.key; # The rest of the ssl stuff is common and can be moved to a shared file and included # in whatever blocks it is needed. 
include sslcommon.conf; ## This might be different for app1338 root /home/cleverbots; # Add index.php to the list if you are using PHP index index.html index.htm index.nginx-debian.html; ## This might be different for app1338 location /static/ { expires 30d; add_header Last-Modified $sent_http_Expires; alias /home/my_first_app/application/static/; } location / { try_files $uri @app1338; } location @app1338 { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:1338; } } </code></pre>
2
2016-09-15T17:41:01Z
[ "python", "nginx", "flask" ]
How do you output a list out without quotes around strings?
39,514,657
<p>I'm trying to set up a block that accepts only inputs that are in a list. It first asks for the input with the input function, but I can't seem to get rid of the quotes around the strings in the list. Here is some example code:</p> <pre><code>def Sinput(acceptable): while True: acceptable = [str(i) for i in acceptable] a = input('Enter'+str(acceptable[:-1]).strip('[]')+' or '+str(acceptable[-1]+': ')) if a in acceptable: return a break a = Sinput([ 1, 2.01, '\'cat\'', 'dog']) print('you entered:', a) </code></pre> <p>The input asks: <code>Enter'1', '2.01', "'cat'" or dog:</code> I want it to ask: <code>Enter 1, 2.01, 'cat' or dog:</code></p> <p>Using <code>.replace('\'', '')</code> won't work because the input 'cat' would no longer display correctly.</p> <p>Thanks for any help, I've only been doing coding for about a week.</p>
0
2016-09-15T15:14:26Z
39,514,729
<p>Use <a href="https://docs.python.org/2/library/stdtypes.html#str.join" rel="nofollow"><code>.join(...)</code></a> which is the recommended way for joining an iterable of strings:</p> <pre><code>a = input('Enter ' + ', '.join(acceptable[:-1]) + ...) # ^^^^^^^^^ </code></pre> <p>P.S. I don't see why you need a <code>break</code> after that <code>return</code> statement.</p>
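<p>For reference, the complete line could look like this (a sketch assuming the same <code>acceptable</code> list, already converted to strings as in the question):</p> <pre><code>a = input('Enter ' + ', '.join(acceptable[:-1]) + ' or ' + acceptable[-1] + ': ')
</code></pre>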
2
2016-09-15T15:17:49Z
[ "python", "python-3.x" ]
How do you output a list out without quotes around strings?
39,514,657
<p>I'm trying to set up a block to accept only inputs that are in a list but first it asks for the inputs in the input function but I can't seem to get rid of the quotes around the strings in the list. Here is some example code:</p> <pre><code>def Sinput(acceptable): while True: acceptable = [str(i) for i in acceptable] a = input('Enter'+str(acceptable[:-1]).strip('[]')+' or '+str(acceptable[-1]+': ')) if a in acceptable: return a break a = Sinput([ 1, 2.01, '\'cat\'', 'dog']) print('you entred:', a) </code></pre> <p>The input asks: <code>Enter'1', '2.01', "'cat'" or dog:</code> I want it to ask: <code>Enter 1, 2.01, 'cat' or dog:</code></p> <p>Using <code>.replace('\'', '')</code> won't work because the input 'cat' would no longer display correctly</p> <p>Thanks for any help, I've only been doing coding for about a week.</p>
0
2016-09-15T15:14:26Z
39,515,206
<p>I think this would work well for you:</p> <pre><code> a = input('Enter {} or {}'.format(', '.join(acceptable[:-1]), acceptable[-1])) </code></pre>
2
2016-09-15T15:41:50Z
[ "python", "python-3.x" ]
How to retrieve subset in partitioning algorithm?
39,514,839
<p>I have an array and I would like to split it into two parts such that their sums are equal; for example <code>[10, 30, 20, 40]</code> can be split into <code>[10, 40] , [20, 30]</code>. Both have a sum of 50. This is essentially the partitioning algorithm, but I'd like to retrieve the subsets, not just identify whether it's partitionable. So, I went ahead and did the following:</p> <p><strong>Update</strong>: updated script to handle duplicates</p> <pre><code>from collections import Counter def is_partitionable(a): possible_sums = [a[0]] corresponding_subsets = [[a[0]]] target_value = sum(a)/2 if a[0] == target_value: print("yes",[a[0]],a[1:]) return for x in a[1:]: temp_possible_sums = [] for (ind, t) in enumerate(possible_sums): cursum = t + x if cursum &lt; target_value: corresponding_subsets.append(corresponding_subsets[ind] + [x]) temp_possible_sums.append(cursum) if cursum == target_value: one_subset = corresponding_subsets[ind] + [x] another_subset = list((Counter(a) - Counter(one_subset)).elements()) print("yes", one_subset,another_subset) return possible_sums.extend(temp_possible_sums) print("no") return is_partitionable(list(map(int, input().split()))) </code></pre> <p>Sample Input &amp; Output:</p> <pre><code>&gt;&gt;&gt; is_partitionable([10,30,20,40]) yes [10, 40] [30, 20] &gt;&gt;&gt; is_partitionable([10,30,20,20]) yes [10, 30] [20, 20] &gt;&gt;&gt; is_partitionable([10,30,20,10]) no </code></pre> <p>I'm essentially storing the corresponding values that were added to get a value in <code>corresponding_subsets</code>. But, as the size of <code>a</code> increases, it's obvious that <code>corresponding_subsets</code> would have way too many sub-lists (equal to the number of elements in <code>possible_sums</code>). Is there a better/more efficient way to do this?</p>
4
2016-09-15T15:23:33Z
39,515,717
<p>Though it is still a hard problem, you could try the following. I assume that there are <code>n</code> elements and they are stored in the array named <code>arr</code> (I assume 1-based indexing). Let us make two teams <code>A</code> and <code>B</code>, such that I want to partition the elements of <code>arr</code> among teams <code>A</code> and <code>B</code> such that the sum of elements in both teams is equal. Each element of <code>arr</code> has an option of either going to team <code>A</code> or team <code>B</code>. Say if an element (say the ith element) goes to team <code>A</code> we denote it by <code>-arr[i]</code> and if it goes to team <code>B</code> we let it be <code>arr[i]</code>. Thus after assigning each element to a team, if the total sum is <code>0</code> our job is done. We will create <code>n</code> sets (they do not store duplicates). I will work with the example <code>arr = {10,20,30,40}</code>. Follow these steps:</p> <pre><code>set_1 = {10,-10} # -10 if it goes to Team A and 10 if goes to B set_2 = {30,-10,10,-30} # four options as we add -20 and 20 set_3 = {60,0,20,-40,-20,-60,40} # note we don't need to store duplicates set_4 = {100,20,40,-40,60,-20,-80,0,-60,-100,80} # see there is a zero, which means our task is possible </code></pre> <p>Now all you have to do is backtrack from the <code>0</code> in the last set to see if the ith element <code>arr[i]</code> was added as <code>arr[i]</code> or as <code>-arr[i]</code>, i.e. whether it is added to Team <code>A</code> or <code>B</code>.</p> <p><strong>EDIT</strong></p> <p>The backtracking routine. So we have <code>n</code> sets from <code>set_1</code> to <code>set_n</code>. Let us make two lists, <code>list_A</code> to push the elements that belong to team <code>A</code>, and similarly <code>list_B</code>. We start from <code>set_n</code>, thus using a variable <code>current_set</code> initially having value <code>n</code>. Also we are focusing on element <code>0</code> in the last list, thus using a variable <code>current_element</code> initially having value <code>0</code>. Follow the approach in the code below (I assume all sets 1 to <code>n</code> have been formed; for the sake of ease I have stored them as a list of lists, but you should use the set data structure). Also the code below assumes a <code>0</code> is seen in the last list, i.e. our task is possible.</p> <pre><code>sets = [ [0], #see this dummy set it is important, this is set_0 #because initially we add -arr[0] or arr[0] to 0 [10,-10], [30,-10,10,-30], [60,0,20,-40,-20,-60,40], [100,20,40,-40,60,-20,-80,0,-60,-100,80]] # my array is 1 based so ignore the zero arr = [0,10,20,30,40] list_A = [] list_B = [] current_element = 0 current_set = 4 # Total number of sets in this case is n=4 while current_set &gt;= 1: print(current_set, current_element) for element in sets[current_set-1]: if element + arr[current_set] == current_element: list_B.append(arr[current_set]) current_element = element current_set -= 1 break elif element - arr[current_set] == current_element: list_A.append(arr[current_set]) current_element = element current_set -= 1 break print(list_A, list_B) </code></pre>
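<p>For completeness, a minimal sketch of how the sets themselves could be built (my own illustration, not part of the routine above; Python sets drop duplicates automatically):</p> <pre><code>def build_sets(arr):
    sets = [{0}]  # the dummy set_0 from the explanation above
    for x in arr:
        prev = sets[-1]
        sets.append({s + x for s in prev} | {s - x for s in prev})
    return sets

print(build_sets([10, 20, 30, 40])[-1])  # contains 0, so an equal split exists
</code></pre>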
3
2016-09-15T16:08:30Z
[ "python", "algorithm", "performance", "python-3.x", "partitioning" ]
How to retrieve subset in partitioning algorithm?
39,514,839
<p>I have an array and I would like to split it into two parts such that their sums are equal; for example <code>[10, 30, 20, 40]</code> can be split into <code>[10, 40] , [20, 30]</code>. Both have a sum of 50. This is essentially the partitioning algorithm, but I'd like to retrieve the subsets, not just identify whether it's partitionable. So, I went ahead and did the following:</p> <p><strong>Update</strong>: updated script to handle duplicates</p> <pre><code>from collections import Counter def is_partitionable(a): possible_sums = [a[0]] corresponding_subsets = [[a[0]]] target_value = sum(a)/2 if a[0] == target_value: print("yes",[a[0]],a[1:]) return for x in a[1:]: temp_possible_sums = [] for (ind, t) in enumerate(possible_sums): cursum = t + x if cursum &lt; target_value: corresponding_subsets.append(corresponding_subsets[ind] + [x]) temp_possible_sums.append(cursum) if cursum == target_value: one_subset = corresponding_subsets[ind] + [x] another_subset = list((Counter(a) - Counter(one_subset)).elements()) print("yes", one_subset,another_subset) return possible_sums.extend(temp_possible_sums) print("no") return is_partitionable(list(map(int, input().split()))) </code></pre> <p>Sample Input &amp; Output:</p> <pre><code>&gt;&gt;&gt; is_partitionable([10,30,20,40]) yes [10, 40] [30, 20] &gt;&gt;&gt; is_partitionable([10,30,20,20]) yes [10, 30] [20, 20] &gt;&gt;&gt; is_partitionable([10,30,20,10]) no </code></pre> <p>I'm essentially storing the corresponding values that were added to get a value in <code>corresponding_subsets</code>. But, as the size of <code>a</code> increases, it's obvious that <code>corresponding_subsets</code> would have way too many sub-lists (equal to the number of elements in <code>possible_sums</code>). Is there a better/more efficient way to do this?</p>
4
2016-09-15T15:23:33Z
39,516,239
<p>This is my implementation of @sasha's algo on the feasibility. Note that <code>balance</code> has to be reset on every pass of the loop; otherwise sums from earlier levels leak into later ones and can produce false positives:</p> <pre><code>def my_part(my_list): item = my_list.pop() temp = [item, -item] while len(my_list) != 0: new_player = my_list.pop() balance = [] # reset at every level for items in temp: balance.append(items + new_player) balance.append(items - new_player) temp = balance[:] if 0 in set(temp): return 'YES' else: return 'NO' </code></pre> <p>I am working on the backtracking too.</p>
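<p>A quick check against the data from the question:</p> <pre><code>print(my_part([10, 30, 20, 40]))  # YES
print(my_part([10, 30, 20, 10]))  # NO
</code></pre>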
0
2016-09-15T16:39:05Z
[ "python", "algorithm", "performance", "python-3.x", "partitioning" ]
ImportError: No module named durationfield.db.models.fields.duration (Python, Django 1.9)
39,514,880
<p>I'm trying to put a duration field in my models and I'm following the instructions <a href="https://django-durationfield.readthedocs.io/en/latest/" rel="nofollow">here</a>. First problems I run into is that I can't seem to import the module. Doesn't this come standard with Django?</p> <pre><code> from durationfield.db.models.fields.duration import DurationField ImportError: No module named durationfield.db.models.fields.duration </code></pre> <p>Following Daniel Roseman's suggestion, I changed this to: </p> <pre><code>from django.db.models.field.duration </code></pre> <p>Now I'm getting:</p> <pre><code>ImportError: No module named duration </code></pre>
-1
2016-09-15T15:25:30Z
39,514,916
<p>You should be importing from django.db.models....</p>
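<p>For example (this is the built-in field, available in Django 1.8+):</p> <pre><code>from django.db.models import DurationField
</code></pre>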
0
2016-09-15T15:27:26Z
[ "python", "django", "import", "models", "duration" ]
ImportError: No module named durationfield.db.models.fields.duration (Python, Django 1.9)
39,514,880
<p>I'm trying to put a duration field in my models and I'm following the instructions <a href="https://django-durationfield.readthedocs.io/en/latest/" rel="nofollow">here</a>. First problems I run into is that I can't seem to import the module. Doesn't this come standard with Django?</p> <pre><code> from durationfield.db.models.fields.duration import DurationField ImportError: No module named durationfield.db.models.fields.duration </code></pre> <p>Following Daniel Roseman's suggestion, I changed this to: </p> <pre><code>from django.db.models.field.duration </code></pre> <p>Now I'm getting:</p> <pre><code>ImportError: No module named duration </code></pre>
-1
2016-09-15T15:25:30Z
39,515,057
<p>It's here:</p> <pre><code>from django.db.models import DurationField </code></pre> <p>And yes, it comes with Django 1.8+ so you don't need to install it. </p>
1
2016-09-15T15:34:00Z
[ "python", "django", "import", "models", "duration" ]
Pandas - Expand DataFrame by fetching data using one column
39,515,043
<p>I start with a DataFrame configuration</p> <pre><code>import pandas as pd getData = lambda n: pd.util.testing.makeTimeDataFrame(n) origDF = pd.DataFrame([{'weight':70, 'name':'GOLD', 'n':3}, {'weight':30, 'name':'SILVER', 'n':4}]) n name weight 0 3 GOLD 70 1 4 SILVER 30 </code></pre> <p>Now I want to expand this configuration DataFrame into a full data DataFrame by fetching data using the <code>n</code> column. The result I want is</p> <pre><code>res = [] for row in origDF.iterrows(): tmp = getData(row[1]['n']) for c,v in row[1].iteritems(): if c != 'n': tmp[c] = v res.append(tmp) res = pd.concat(res) A B C D name weight 2000-01-03 -0.084821 -0.345260 -0.789547 0.001570 GOLD 70 2000-01-04 -0.035577 -1.283943 -0.304142 -0.978453 GOLD 70 2000-01-05 0.014727 0.400858 -0.607918 1.769886 GOLD 70 2000-01-03 -0.644647 2.142646 0.617880 -0.178515 SILVER 30 2000-01-04 0.256490 -1.037556 -0.224503 0.148258 SILVER 30 2000-01-05 0.679844 0.976823 -0.403927 -0.459163 SILVER 30 2000-01-06 0.433366 0.429025 0.951633 -0.026547 SILVER 30 </code></pre> <p>Is there a nice Pandas routine to get this directly without a loop?</p>
1
2016-09-15T15:33:27Z
39,516,715
<p>Here is a solution which loops once through your <code>origDF</code>:</p> <pre><code>In [167]: res = getData(origDF.n.sum()) In [168]: res['name'] = 'N/A' In [169]: res['weight'] = 0 In [170]: res Out[170]: A B C D name weight 2000-01-03 1.097798 -1.537407 0.692180 -0.359577 N/A 0 2000-01-04 1.762158 0.568963 0.420136 0.265061 N/A 0 2000-01-05 -0.241067 -0.471753 0.370949 0.533276 N/A 0 2000-01-06 0.099100 -1.757071 -0.680193 0.261295 N/A 0 2000-01-07 -0.818920 0.201746 1.251089 0.834474 N/A 0 2000-01-10 1.551190 -0.329135 0.323669 -0.365978 N/A 0 2000-01-11 -1.941802 0.496720 0.969223 -0.413078 N/A 0 In [171]: i = 0 In [172]: for idx, row in origDF.iterrows(): .....: res.ix[i : i + row.n, 'name'] = row['name'] .....: res.ix[i : i + row.n, 'weight'] = row.weight .....: i += row.n .....: In [173]: res Out[173]: A B C D name weight 2000-01-03 1.097798 -1.537407 0.692180 -0.359577 GOLD 70 2000-01-04 1.762158 0.568963 0.420136 0.265061 GOLD 70 2000-01-05 -0.241067 -0.471753 0.370949 0.533276 GOLD 70 2000-01-06 0.099100 -1.757071 -0.680193 0.261295 SILVER 30 2000-01-07 -0.818920 0.201746 1.251089 0.834474 SILVER 30 2000-01-10 1.551190 -0.329135 0.323669 -0.365978 SILVER 30 2000-01-11 -1.941802 0.496720 0.969223 -0.413078 SILVER 30 </code></pre>
0
2016-09-15T17:08:21Z
[ "python", "pandas", "dataframe" ]
Set a current row in a dataframe based off of future values
39,515,086
<p>Given:</p> <pre class="lang-py prettyprint-override"><code> d = { 'datetime': ['2010-01-08 09:45:00', '2010-01-08 10:00:00', '2010-01-08 10:15:00', '2010-01-08 10:30:00', '2010-01-08 10:45:00', '2010-01-08 11:00:00', '2010-01-08 11:15:00', '2010-01-08 11:30:00', '2010-01-08 11:45:00', '2010-01-08 12:00:00', '2010-01-08 12:15:00', '2010-01-08 12:30:00', '2010-01-08 12:45:00', '2010-01-08 13:00:00', '2010-01-08 13:15:00', '2010-01-08 13:30:00', '2010-01-08 13:45:00', '2010-01-08 14:00:00', '2010-01-08 14:15:00', '2010-01-08 14:30:00', '2010-01-08 14:45:00', '2010-01-08 15:00:00', '2010-01-08 15:15:00', '2010-01-08 15:30:00', '2010-01-08 15:45:00', '2010-01-08 16:00:00', '2010-01-08 16:15:00'], 'Total-tops': [0,-1,-1,2,3,0,0,4,0,0,0,0,5,6,7,8,-1,0,0,0,0,0,0,0,-1,-1,2] } df = pandas.DataFrame(d) df = df.set_index('datetime') </code></pre> <p>I want to add another column which is a boolean for whether that row will break or not. A break means the tops are at a number greater than 1 and then a -1 occurs somewhere in the future. For example the first 2 will break at the next -1 it encounters. Here is the desired dataframe: <a href="http://i.stack.imgur.com/3TlcG.png" rel="nofollow"><img src="http://i.stack.imgur.com/3TlcG.png" alt="desired_dataframe"></a></p> <p>Here is the function I am currently using, but it runs very slow, since I iterate over all rows.</p> <pre class="lang-py prettyprint-override"><code>def does_break(data): cur_breaks = [] for index, row in data.iterrows(): if row['Total-tops'] &gt; 1: # Get all rows after this time that are new tops breaks = data[(data['Total-tops'] == -1) &amp; (data.index.time &gt; index.time())] if len(breaks) &gt; 0: cur_breaks.append(True) else: cur_breaks.append(False) else: cur_breaks.append(False) return cur_breaks </code></pre>
0
2016-09-15T15:35:23Z
39,515,387
<p>You can use the ungainly expression</p> <pre><code>In [56]: import numpy as np In [57]: ((np.cumsum((df['Total-tops'] == -1)[:: -1])[:: -1] &gt; 0) &amp; (df['Total-tops'] &gt; 0)).astype(int) Out[57]: datetime 2010-01-08 09:45:00 0 2010-01-08 10:00:00 0 2010-01-08 10:15:00 0 2010-01-08 10:30:00 1 2010-01-08 10:45:00 1 2010-01-08 11:00:00 0 2010-01-08 11:15:00 0 2010-01-08 11:30:00 1 2010-01-08 11:45:00 0 2010-01-08 12:00:00 0 2010-01-08 12:15:00 0 2010-01-08 12:30:00 0 2010-01-08 12:45:00 1 2010-01-08 13:00:00 1 2010-01-08 13:15:00 1 2010-01-08 13:30:00 1 2010-01-08 13:45:00 0 2010-01-08 14:00:00 0 2010-01-08 14:15:00 0 2010-01-08 14:30:00 0 2010-01-08 14:45:00 0 2010-01-08 15:00:00 0 2010-01-08 15:15:00 0 2010-01-08 15:30:00 0 2010-01-08 15:45:00 0 2010-01-08 16:00:00 0 2010-01-08 16:15:00 0 Name: Total-tops, dtype: int64 </code></pre> <p>(Of course, for your new column, you can use <code>df['breaks'] = ...</code>.)</p> <p>What this does is as follows:</p> <ol> <li>We find where the values are -1, and reverse. Now any operation we do on the past (in particular <code>cumsum</code>) is really performed on the future.</li> <li>We find the cumulative sum, and reverse again. At this point, the meaning is how many times in the future we will see a -1.</li> <li>We find where the result is larger than 0, since we don't care <em>how many times</em> we will see a -1, only <em>whether</em> we will see it.</li> <li>Finally, we also require that the current entry is positive. This is just the definition from your question.</li> </ol>
0
2016-09-15T15:51:19Z
[ "python", "pandas" ]
Set a current row in a dataframe based off of future values
39,515,086
<p>Given:</p> <pre class="lang-py prettyprint-override"><code> d = { 'datetime': ['2010-01-08 09:45:00', '2010-01-08 10:00:00', '2010-01-08 10:15:00', '2010-01-08 10:30:00', '2010-01-08 10:45:00', '2010-01-08 11:00:00', '2010-01-08 11:15:00', '2010-01-08 11:30:00', '2010-01-08 11:45:00', '2010-01-08 12:00:00', '2010-01-08 12:15:00', '2010-01-08 12:30:00', '2010-01-08 12:45:00', '2010-01-08 13:00:00', '2010-01-08 13:15:00', '2010-01-08 13:30:00', '2010-01-08 13:45:00', '2010-01-08 14:00:00', '2010-01-08 14:15:00', '2010-01-08 14:30:00', '2010-01-08 14:45:00', '2010-01-08 15:00:00', '2010-01-08 15:15:00', '2010-01-08 15:30:00', '2010-01-08 15:45:00', '2010-01-08 16:00:00', '2010-01-08 16:15:00'], 'Total-tops': [0,-1,-1,2,3,0,0,4,0,0,0,0,5,6,7,8,-1,0,0,0,0,0,0,0,-1,-1,2] } df = pandas.DataFrame(d) df = df.set_index('datetime') </code></pre> <p>I want to add another column which is a boolean for whether that row will break or not. A break means the tops are at a number greater than 1 and then a -1 occurs somewhere in the future. For example the first 2 will break at the next -1 it encounters. Here is the desired dataframe: <a href="http://i.stack.imgur.com/3TlcG.png" rel="nofollow"><img src="http://i.stack.imgur.com/3TlcG.png" alt="desired_dataframe"></a></p> <p>Here is the function I am currently using, but it runs very slow, since I iterate over all rows.</p> <pre class="lang-py prettyprint-override"><code>def does_break(data): cur_breaks = [] for index, row in data.iterrows(): if row['Total-tops'] &gt; 1: # Get all rows after this time that are new tops breaks = data[(data['Total-tops'] == -1) &amp; (data.index.time &gt; index.time())] if len(breaks) &gt; 0: cur_breaks.append(True) else: cur_breaks.append(False) else: cur_breaks.append(False) return cur_breaks </code></pre>
0
2016-09-15T15:35:23Z
39,518,063
<p>You could select all rows until the last index where -1 occurs and assign values to a new column, <em>breaks</em>, where <em>Total-tops</em> values are greater than 1, using <code>loc</code>.</p> <p>Then, cast the bool types to integers, followed by filling values which come after these indices with 0's, as shown (note that <code>astype</code> returns a copy, so its result must be assigned back):</p> <pre><code>index = df[df['Total-tops'] == -1].index.tolist()[-1] df.loc[ :index, 'breaks'] = (df['Total-tops'] &gt; 1).astype(int) df.fillna(0, inplace=True) df['breaks'] = df['breaks'].astype(int) df['breaks'].values #array([0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, # 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) </code></pre>
0
2016-09-15T18:31:57Z
[ "python", "pandas" ]
urllib.error.HTTPError: HTTP Error 403: Forbidden Python
39,515,096
<p>My code: </p> <pre><code>import sqlite3, os, urllib.request from xml.dom import minidom if os.path.exists("data.db"): con = sqlite3.connect("data.db") cursor = con.cursor() sql = "SELECT * FROM data WHERE test= '123'" cursor.execute(sql) else: print("ERROR") for dsatz in cursor: #print(dsatz) link = 'http://test.org/publication/' + dsatz[0] + '' + dsatz[1] +'/bib' #print(link) web_data = urllib.request.urlopen(link) xmldoc = minidom.parse(web_data) di = xmldoc.getElementsByTagName("document-id")[:1] for x in di: publicationcountry = x.getElementsByTagName("country")[0].firstChild.data publicationdocnumber = x.getElementsByTagName("doc-number")[0].firstChild.data punlicationkind = x.getElementsByTagName("kind")[0].firstChild.data publicationdate = x.getElementsByTagName("date")[0].firstChild.data sql = "INSERT INTO link_xml_data VALUES('" \ + publicationcountry + "', '" \ + str(publicationdocnumber) + "', '" \ + punlicationkind + "')" con.close() </code></pre> <p>But after like 15 links I get the ERROR :</p> <pre><code>Traceback (most recent call last): File "C:\Users\j\3.py", line 34, in &lt;module&gt; web_data = urllib.request.urlopen(link) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 163, in urlopen return opener.open(url, data, timeout) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 472, in open response = meth(req, response) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 582, in http_response 'http', request, response, code, msg, hdrs) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 510, in error return self._call_chain(*args) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 444, in _call_chain result = func(*args) File "C:\Users\j\Python35-32\lib\urllib\request.py", line 590, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden </code></pre> <p>What do should I add or change?</p>
-1
2016-09-15T15:36:05Z
39,515,701
<p>The web server is telling you that link is forbidden. There's (probably) nothing wrong with your code.</p> <p>Do some links always work and other links always fail, or does the pattern change over time?</p> <p>After getting a 403 Forbidden response, have you tried going back and re-requesting one of the earlier successful links?</p> <p>Perhaps the server eventually recognizes you as a web scraper and is telling you to go away?</p>
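<p>If it does turn out to be scraper detection, one common workaround is to send a browser-like <code>User-Agent</code> header (a sketch; the header value here is just an example):</p> <pre><code>req = urllib.request.Request(link, headers={'User-Agent': 'Mozilla/5.0'})
web_data = urllib.request.urlopen(req)
</code></pre>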
0
2016-09-15T16:07:51Z
[ "python", "xml", "parsing" ]
Pandas, Concurrent.Futures and the GIL
39,515,123
<p>I'm writing code using Pandas 0.18/Python 3.5 on an intel i3 (four cores).</p> <p>I have read this: <a href="https://www.continuum.io/content/pandas-releasing-gil" rel="nofollow">https://www.continuum.io/content/pandas-releasing-gil</a></p> <p>I also have some work that is IO bound (parsing CSV files into dataframes). I have to do a lot of calculation that is mostly multiplying dataframes.</p> <p>My code is currently parallel using <code>concurrent.futures ThreadPoolExecutor</code>.</p> <p>My question is:</p> <ul> <li>In general, should I be using threads to run pandas jobs in parallel, or does pandas make effective use of all cores without me having to explicitly tell it to? (in which case, I will execute my jobs serially).</li> </ul>
1
2016-09-15T15:37:28Z
39,515,335
<p>Best I can tell from reading the docs, pandas <a href="http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#releasing-the-gil" rel="nofollow">simply releases the GIL for certain operations</a>:</p> <blockquote> <p>We are releasing the global-interpreter-lock (GIL) on some cython operations. This will allow other threads to run simultaneously during computation, potentially allowing performance improvements from multi-threading. Notably <code>groupby</code>, <code>nsmallest</code>, <code>value_counts</code> and some indexing operations benefit from this.</p> </blockquote> <p>All this means is that other threads can be executed by the Python interpreter while the calculations being done by pandas continue. It doesn't mean that pandas automatically scales the calculations across many threads. They sort of mention this in the docs as well:</p> <blockquote> <p>Releasing of the GIL could benefit an application that uses threads for user interactions (e.g. QT), or performing multi-threaded computations.</p> </blockquote> <p>In order to get parallelization benefits, you need to actually be creating and executing multiple threads in your own code. So, you should continue using the <code>ThreadPoolExecutor</code> if you're trying to get parallel execution in your application.</p> <p>Keep in mind that pandas is only releasing the GIL for <em>some</em> operations, so you may not get performance improvements with multiple threads if you're not calling any methods that actually release it.</p>
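<p>For example, a minimal sketch of dispatching independent per-frame computations to a pool (the <code>process_frame</code> function and the <code>frames</code> list are placeholders for your own work):</p> <pre><code>from concurrent.futures import ThreadPoolExecutor

def process_frame(df):
    # groupby is one of the operations that releases the GIL
    return df.groupby('key').sum()

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_frame, frames))
</code></pre>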
1
2016-09-15T15:48:44Z
[ "python", "multithreading", "pandas", "python-multithreading" ]
Python, Pandas - How can I get something printed in a data range?
39,515,152
<p>I'm supposed to create a function that allows the user to pick a range, and it will print out the numbers within that range. However, I keep getting an empty DataFrame with my code. Can anyone help me?</p> <pre><code>import pandas as pd if __name__ == "__main__": file_name = "sales_rossetti.xlsx" # Formatting numbers (e.g. $1,000,000) pd.options.display.float_format = '${:,.0f}'.format # Reading Excel file df = pd.read_excel(file_name, index_col = 0, convert_float = False) print ("Welcome to Rossetti's Sales program\n") print ("1) Search by State") print ("2) Search by Jan Sales") print ("3) Search by Q2 sales") print ("4) Exit") my_option = input ("Please select a menu option:") if (my_option=="2"): my_columns = ["Name", "City", "State", "Jan"] your_sales = input("please enter the minimum sale: ") your_sales = input("please enter the maxium sale: ") print (df[my_columns][df.Jan&gt;int(your_sales)][df.Jan&lt;int(your_sales)]) </code></pre>
0
2016-09-15T15:39:13Z
39,515,373
<p>You're overwriting the <code>your_sales</code> variable as you're reusing it, so you should use a different variable name for the min and max params. You then need to generate a proper boolean mask using <code>loc</code>, enclosing each boolean condition in parentheses and combining them with <code>&amp;</code>:</p> <pre><code>if (my_option=="2"): my_columns = ["Name", "City", "State", "Jan"] min_sales = input("please enter the minimum sale: ") max_sales = input("please enter the maxium sale: ") print (df.loc[(df.Jan &gt; int(min_sales) ) &amp; (df.Jan &lt; int(max_sales)), my_columns]) </code></pre> <p>The above should work.</p>
0
2016-09-15T15:50:43Z
[ "python", "pandas", "dataframe" ]
Convert from decimal to Binary function
39,515,184
<p>I tried creating a function to convert to binary the long way, but I keep getting a very basic error that I can't seem to figure out. Would appreciate an extra pair of eyes.</p> <pre><code>def convert_to_binary(n): if (-1.0 &lt; n &lt; 256.0): number_list = [] while (n != 0): rem = n % 2 number_list.append(rem) n = n // 2 new_list = number_list[::-1] print("".join(str(x) for x in new_list)) else: print("Invalid input") </code></pre> <p>The error I keep getting is:</p> <p><code>File "", line 13 else : ^ SyntaxError: invalid syntax</code></p> <p>I'd really appreciate any feedback. Thanks</p>
0
2016-09-15T15:40:36Z
39,515,410
<pre><code>def convert_to_binary(n):
    if (-1.0 &lt; n &lt; 256.0):
        print('{0:b}'.format(n))
    else:
        print("Invalid input")
</code></pre>
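<p>Note that the <code>'b'</code> format code only accepts integers, while the range check above admits floats, so you may want to cast first. A sketch:</p> <pre><code>def convert_to_binary(n):
    if -1.0 &lt; n &lt; 256.0:
        print('{0:b}'.format(int(n)))  # bin(int(n))[2:] is equivalent
    else:
        print("Invalid input")
</code></pre>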
0
2016-09-15T15:52:08Z
[ "python" ]
Get file from Folder in Python
39,515,199
<p>The issue has been reported <a href="https://github.com/fchollet/keras/issues/2369" rel="nofollow">here</a> too.</p> <p>I have code like:</p> <pre><code>from keras.datasets.data_utils import get_file
path = get_file('babi-tasks-v1-2.tar.gz', origin='http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz')
tar = tarfile.open(path)
</code></pre> <p>When I put in the original file location (as the origin above), which is my Downloads folder:</p> <pre><code>/home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz
</code></pre> <p>I get the error:</p> <pre><code>ValueError: unknown url type: /home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz
</code></pre> <p>How can I resolve this, given that the file location is actually correct?</p>
1
2016-09-15T15:41:27Z
39,515,306
<p>Your function <em>get_file</em> seems to take a <strong>URL as an argument</strong>, not an absolute path.</p> <p>Could you give us more details about this function?</p>
1
2016-09-15T15:47:28Z
[ "python", "url", "error-handling", "anaconda" ]
Get file from Folder in Python
39,515,199
<p>The issue has been reported <a href="https://github.com/fchollet/keras/issues/2369" rel="nofollow">here</a> too.</p> <p>I have code like:</p> <pre><code>from keras.datasets.data_utils import get_file
path = get_file('babi-tasks-v1-2.tar.gz', origin='http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz')
tar = tarfile.open(path)
</code></pre> <p>When I put in the original file location (as the origin above), which is my Downloads folder:</p> <pre><code>/home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz
</code></pre> <p>I get the error:</p> <pre><code>ValueError: unknown url type: /home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz
</code></pre> <p>How can I resolve this, given that the file location is actually correct?</p>
1
2016-09-15T15:41:27Z
39,515,504
<p>The error message suggests that your library requires a resource type at the beginning of the URL. Try specifying your path as a <code>file://</code> URL:</p> <pre class="lang-none prettyprint-override"><code>file:///home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz
</code></pre>
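<p>If <code>get_file</code> still rejects the path, a simpler workaround (a sketch, untested against your Keras version) is to skip the download helper entirely and open the local archive directly:</p> <pre><code>import tarfile

path = '/home/xxxxxx/Downloads/tasks_1-20_v1-2.tar.gz'
tar = tarfile.open(path)  # tarfile accepts plain filesystem paths
</code></pre>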
1
2016-09-15T15:55:58Z
[ "python", "url", "error-handling", "anaconda" ]
Python CSV: How to ignore writing similar rows given one row meets a condition?
39,515,291
<p>I'm currently keeping track of the large scale digitization of video tapes and need help pulling data from multiple CSVs. Most tapes have multiple copies, but we only digitize one tape from the set. I would like to create a new CSV containing only tapes of shows that have yet to be digitized. Here's a mockup of my original CSV:</p> <pre><code>Date Digitized | Series   | Episode Number | Title | Format
---------------|----------|----------------|-------|--------
01-01-2016     | Series A | 101            |       | VHS
               | Series A | 101            |       | Beta
               | Series A | 101            |       | U-Matic
               | Series B | 101            |       | VHS
</code></pre> <p>From here, I'd like to ignore all fields containing "Series A" AND "101", as this show has a value in the "Date Digitized" cell. I attempted isolating these conditions but can't seem to get a complete list of undigitized content. Here's my code:</p> <pre><code>import csv, glob, os

names = glob.glob("*.csv")
names = [os.path.splitext(each)[0] for each in names]
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader( source )
        with open("%s_edit.csv" % name,"wb") as result:
            writer = csv.writer( result )
            for row in reader:
                if row[0]:
                    series = row[1]
                    epnum = row[2]
                    if row[1] != series and row[2] != epnum:
                        writer.writerow(row)
</code></pre> <p>I'll add that this is my first question and I'm very new to Python, so any advice would be much appreciated!</p>
1
2016-09-15T15:46:54Z
39,535,979
<p>The simplest approach is to make two reads of the set of CSV files: one to build a list of all digitized tapes, the second to build a unique list of all tapes not on the digitized list:</p> <pre><code># build list of digitized tapes digitized = [] for name in names: with open("%s_.csv" % name, "rb") as source: reader = csv.reader(source) next(reader) # skip header for row in reader: if row[0] and ((row[1], row[2]) not in digitized): digitized.append((row[1], row[2])) # build list of non-digitized tapes digitize_me = [] for name in names: with open("%s_.csv" % name, "rb") as source: reader = csv.reader(source) header = next(reader)[1:3] # skip / save header for row in reader: if not row[0] and ((row[1], row[2]) not in digitized + digitize_me): digitize_me.append((row[1], row[2])) # write non-digitized tapes to 'digitize.csv` with open("digitize.csv","wb") as result: writer = csv.writer(result) writer.writerow(header) for tape in digitize_me: writer.writerow(tape) </code></pre> <p><em>input file 1:</em></p> <pre><code>Date Digitized,Series,Episode Number,Title,Format 01-01-2016,Series A,101,,VHS ,Series A,101,,Beta ,Series C,101,,Beta ,Series D,102,,VHS ,Series B,101,,U-Matic </code></pre> <p><em>input file 2:</em></p> <pre><code>Date Digitized,Series,Episode Number,Title,Format ,Series B,101,,VHS ,Series D,101,,Beta 01-01-2016,Series C,101,,VHS </code></pre> <p><strong>Output:</strong></p> <pre><code>Series,Episode Number Series D,102 Series B,101 Series D,101 </code></pre> <hr> <p>As per OP comment, the line</p> <pre><code>header = next(reader)[1:3] # skip / save header </code></pre> <p>serves two purposes:</p> <ol> <li>Assuming each <code>csv</code> file starts with a header, we do not want to read that header row as if it contained data about our tapes, so we need to "skip" the header row in that sense</li> <li>But we also want to save the relevant parts of the header for when we write the output <code>csv</code> file. We want that file to have a header as well. Since we are only writing the <code>series</code> and <code>episode number</code>, which are <code>row</code> fields <code>1</code> and <code>2</code>, we assign just that slice, i.e. <code>[1:3]</code>, of the header row to the <code>header</code> variable</li> </ol> <p>It's not really standard to have a line of code serve two pretty unrelated purposes like that, which is why I commented it. It also assigns to <code>header</code> multiple times (assuming multiple input files) when <code>header</code> only needs to be assigned once. Perhaps a cleaner way to write that section would be:</p> <pre><code># build list of non-digitized tapes digitize_me = [] header = None for name in names: with open("%s_.csv" % name, "rb") as source: reader = csv.reader(source) if header: next(reader) # skip header else: header = next(reader)[1:3] # read header for row in reader: ... </code></pre> <p>It's a question of which form is more readable. Either way is close but I thought combining 5 lines into one keeps the focus on the more salient parts of the code. I would probably do it the other way next time.</p>
0
2016-09-16T16:13:54Z
[ "python", "csv" ]
Python CSV: How to ignore writing similar rows given one row meets a condition?
39,515,291
<p>I'm currently keeping track of the large scale digitization of video tapes and need help pulling data from multiple CSVs. Most tapes have multiple copies, but we only digitize one tape from the set. I would like to create a new CSV containing only tapes of shows that have yet to be digitized. Here's a mockup of my original CSV:</p> <pre><code>Date Digitized | Series   | Episode Number | Title | Format
---------------|----------|----------------|-------|--------
01-01-2016     | Series A | 101            |       | VHS
               | Series A | 101            |       | Beta
               | Series A | 101            |       | U-Matic
               | Series B | 101            |       | VHS
</code></pre> <p>From here, I'd like to ignore all fields containing "Series A" AND "101", as this show has a value in the "Date Digitized" cell. I attempted isolating these conditions but can't seem to get a complete list of undigitized content. Here's my code:</p> <pre><code>import csv, glob, os

names = glob.glob("*.csv")
names = [os.path.splitext(each)[0] for each in names]
for name in names:
    with open("%s_.csv" % name, "rb") as source:
        reader = csv.reader( source )
        with open("%s_edit.csv" % name,"wb") as result:
            writer = csv.writer( result )
            for row in reader:
                if row[0]:
                    series = row[1]
                    epnum = row[2]
                    if row[1] != series and row[2] != epnum:
                        writer.writerow(row)
</code></pre> <p>I'll add that this is my first question and I'm very new to Python, so any advice would be much appreciated!</p>
1
2016-09-15T15:46:54Z
39,536,891
<p>I am not a hundred percent sure I've understood your needs. However, this might put you on the right track. I am using the <a href="http://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow"><code>pandas</code></a> module:</p> <pre><code>data = """
Date Digitized | Series | Episode Number | Title | Format
---------------|----------|----------------|-------|--------
01-01-2016 | Series A | 101 | | VHS
 | Series A | 101 | | Beta
 | Series A | 101 | | U-Matic
 | Series B | 101 | | VHS"""

# useful module for treating csv files (and many others)
import pandas as pd
# module to handle data as if it were a csv file
import io

# read the csv into a pandas DataFrame
# use row 0 as the header
# fields are separated by |
df = pd.read_csv(
    io.StringIO(data),
    header=0,
    sep="|"
)

# there is a bit of a problem with white space
# remove white space from the column names
df.columns = [x.strip() for x in df.columns]

# remove white space from all string fields
df = df.applymap(lambda x: x.strip() if type(x) == str else x)

# finally choose the subset we want
# for some reason pandas guessed the type of Episode Number wrong
# it should be integer, this probably won't be a problem when loading
# directly from file
df = df[~((df["Series"] == "Series A") &amp; (df["Episode Number"] == "101"))]

# print the result
print(df)

#     Date Digitized      Series  Episode Number    Title    Format
# 0  ---------------  ----------  ---------------- -------  --------
# 4                     Series B   101                       VHS
</code></pre> <p>Feel free to ask; hopefully I'll be able to change the code according to your actual needs or help in any other way.</p>
1
2016-09-16T17:13:59Z
[ "python", "csv" ]
Django Model reference assignment in loop
39,515,354
<p>Is the following code correct? The <code>e</code> should refer to a new object in the beginning of each for iteration, after try/except blocks are executed. I suspect some interference with an old object, because there is a bug which I cannot reproduce now.</p> <pre><code> from webapp.models import Profile .... for e in Profile.objects.all(): if not e.profile_link in profile_data: e.delete() try: for key, employee in profile_data.iteritems(): #e still holds old reference try: #edit DB object if exists e = Profile.objects.all().filter(profile_link=key)[0] except Exception: #or create a new one e = Profile(profile_link=key) #modify e using employee e.save() except Exception: #handle exception </code></pre>
0
2016-09-15T15:49:44Z
39,515,825
<p>If you want all the try/except blocks to run within the for loop, you need to make sure your indentation is correct.</p> <p>Try (check the indentation):</p> <pre><code>from webapp.models import Profile
....

for e in Profile.objects.all():
    if not e.profile_link in profile_data:
        e.delete()

try:
    for key, employee in profile_data.iteritems():
        #e still holds old reference
        try:
            #edit DB object if exists
            e = Profile.objects.get(profile_link=key)
        except Exception:
            #or create a new one
            e = Profile(profile_link=key)
        #modify e using employee
        e.save()
except Exception:
    #handle exception
</code></pre>
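<p>As a side note (assuming <code>profile_link</code> uniquely identifies a profile), Django's built-in <code>get_or_create</code> collapses the inner try/except into one call:</p> <pre><code>for key, employee in profile_data.iteritems():
    # returns the existing object, or creates a new one if none matches
    e, created = Profile.objects.get_or_create(profile_link=key)
    #modify e using employee
    e.save()
</code></pre>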
0
2016-09-15T16:13:53Z
[ "python", "django" ]
Can one form a view into diagonal of ndarray in numpy
39,515,391
<p>Simple slices form views into the parent array. The strides of the view are generically multiples of the strides of the parent array.</p> <p>Given a 2D parent array with strides <code>(s0, s1)</code>, the 1D array with strides <code>(s0+s1,)</code> gives a view onto the diagonal of the parent array.</p> <p>Is there a way to create such a view in top-level Python/numpy? Thank you in advance.</p>
1
2016-09-15T15:51:25Z
39,516,049
<p>With <code>as_strided</code> I can do what you want:</p> <pre><code>In [298]: X=np.eye(5)

In [299]: X.strides
Out[299]: (40, 8)

In [300]: np.lib.stride_tricks.as_strided(X,shape=(5,),strides=(48,))
Out[300]: array([ 1.,  1.,  1.,  1.,  1.])
</code></pre> <p>though some would argue that <code>as_strided</code> is a step closer to the 'guts' than most <code>numpy</code> Python code.</p> <p>I can do the same striding by indexing the flattened array:</p> <pre><code>In [311]: X.ravel()[::6]
Out[311]: array([ 1.,  2.,  3.,  4.,  5.])
</code></pre> <p>(here the <code>X</code> values had been changed by an earlier <code>view</code> test, hence the 1 to 5 on the diagonal instead of all ones).</p>
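<p>To confirm that this really is a writable view into <code>X</code> (prompt numbers below are illustrative), assign through it:</p> <pre><code>In [301]: d = np.lib.stride_tricks.as_strided(X, shape=(5,), strides=(48,))

In [302]: d[:] = 7   # writes straight through to the diagonal of X

In [303]: X[0, 0], X[4, 4]
Out[303]: (7.0, 7.0)
</code></pre>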
2
2016-09-15T16:28:10Z
[ "python", "numpy" ]
Can one form a view into diagonal of ndarray in numpy
39,515,391
<p>Simple slices form views into the parent array. The strides of the view are generically multiples of the strides of the parent array.</p> <p>Given a 2D parent array with strides <code>(s0, s1)</code>, the 1D array with strides <code>(s0+s1,)</code> gives a view onto the diagonal of the parent array.</p> <p>Is there a way to create such a view in top-level Python/numpy? Thank you in advance.</p>
1
2016-09-15T15:51:25Z
39,523,651
<p>If you are using numpy 1.9 or later, and a read-only view is sufficient, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.diagonal.html" rel="nofollow"><code>numpy.diagonal</code></a>. The docstring says that in some future version of numpy, <code>numpy.diagonal</code> will return a read/write view, but that doesn't help you now. If you need a read/write view, @hpaulj's suggestion to use <code>as_strided</code> will work. I suggest something like </p> <pre><code>diag = as_strided(a, shape=(min(a.shape),), strides=(sum(a.strides),)) </code></pre> <p>Be sure to read the "Notes" section of the <code>as_strided</code> docstring.</p>
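<p>For illustration, a small sketch of both behaviours (the exact flags depend on your numpy version):</p> <pre><code>import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.eye(3)
d = np.diagonal(a)
print(d.flags.writeable)   # False on numpy 1.9+: a read-only view

diag = as_strided(a, shape=(min(a.shape),), strides=(sum(a.strides),))
diag[:] = 5                # read/write view: modifies a's diagonal
print(a)
</code></pre>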
2
2016-09-16T04:15:31Z
[ "python", "numpy" ]
Plotting two groupby() series in python
39,515,408
<p>I am new to Python in general but I did do a lot of research to see if I could find a solution to this problem. Hope you guys can help.</p> <p>Say A is a dataframe with Cost and Site fields. B is a similar dataframe with Cost and Site fields. I want to group by the Site field and plot A/B for each site as a bar graph.</p> <pre><code>A= pd.DataFrame({'Cost':[20,30,40,50,60,60,82,92,35],
                 'Site':['S1','S1','S2','S3','S3','S3','S4','S5','S5']})
B= pd.DataFrame({'Cost':[40,75,92,105,110,200,15,62,32,12],
                 'Site':['S1','S2','S2','S3','S4','S1','S5','S3','S4','S5']})
C=A.groupby('Site')['Cost'].sum()/B.groupby('Site')['Cost'].sum()
</code></pre> <p>Now how do I plot a bar chart of C where each bar is a site name and the displayed value is from C?</p>
0
2016-09-15T15:51:55Z
39,515,728
<p>You're nearly there!</p> <pre><code>import pylab as P

t = P.bar(range(5), C, tick_label=C.index, align='center')
</code></pre> <p>The first argument gives the x positions of the 5 bars, the second gives their heights from C; tick_label and align just name and align the bar labels.</p>
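<p>Alternatively, since <code>C</code> is a pandas Series, its built-in matplotlib wrapper handles the positions and labels for you:</p> <pre><code>import matplotlib.pyplot as plt

C.plot(kind='bar')   # one bar per site, labelled from C.index
plt.show()
</code></pre>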
0
2016-09-15T16:09:07Z
[ "python", "dataframe" ]
How can admin post content visible by other users in django 1.8?
39,515,435
<p>First of all, I'm new to Django; this is my first project, so I have minimal knowledge.</p> <p>I'm working with Django 1.8 and I have made a basic website. My problem is the following: you know how, when you visit a website, the content is updated by the admin? (News, a schedule, or a reminder of a deadline if you're a uni student.) Is there a way to do that so the admin uses only an interface, without touching the code? I mean, supposing that the admin knows nothing about Django and has a website on which he wants to upload "news" or "announcements" that will be visible to all users, or edit/delete old posts...</p> <p>I would appreciate it if you could guide me by giving me useful links to documentation, tutorials or existing projects on GitHub, to see how it actually works. Thank you for your help.</p>
0
2016-09-15T15:53:25Z
39,519,937
<p>You can use the Django Admin to create/edit models through a web interface with zero programming.</p> <p>Of course, for this to work you'd have to keep your editable content in models instead of hardcoding them in your templates.</p> <p>Let's assume our site is a small blog and our posts are instances of the following model:</p> <pre><code>class Post(models.Model): title = models.CharField(max_length=50) content = models.TextField() </code></pre> <p>And you are displaying them like so on the homepage:</p> <pre><code>&lt;ul&gt; {% for post in posts %} &lt;li&gt; &lt;p&gt;{{ post.content }}&lt;/p&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; </code></pre> <p>In <code>&lt;app&gt;/admin.py</code> you need to <em>register</em> the model into the admin, so the admin interface knows about the model and displays it:</p> <pre><code>from .models import Post # import your model admin.site.register(Post) </code></pre> <p>Now you should be able to easily create and modify instances of <code>Post</code> from the admin interface, and they should automatically appear on the homepage.</p> <p>I suggest taking a look at the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">official Django tutorial</a> or the <a href="https://tutorial.djangogirls.org/en/" rel="nofollow">Django Girls blog tutorial</a> which show in detail how to use the admin to update what's displayed on the customer-facing webpages.</p>
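<p>One assumption in the above is that some view actually hands the posts to the template; a minimal sketch of such a view (the template name here is illustrative) might be:</p> <pre><code>from django.shortcuts import render
from .models import Post

def home(request):
    # pass every post to the template as 'posts'
    return render(request, 'home.html', {'posts': Post.objects.all()})
</code></pre>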
1
2016-09-15T20:35:37Z
[ "python", "django", "model-view-controller", "admin", "django-1.8" ]
What is the downside of using object.__new__ in __new__?
39,515,463
<p>Coding exception classes, I came across this error:</p> <p><code>TypeError: object.__new__(A) is not safe, use Exception.__new__()</code></p> <p>There's a similar question posted here: <a href="http://stackoverflow.com/questions/12952184/typeerror-object-new-int-is-not-safe-use-int-new">TypeError: object.__new__(int) is not safe, use int.__new__()</a>. So <code>__new__</code> was deprecated for the following reason:</p> <p><a href="https://mail.python.org/pipermail/python-dev/2008-February/076854.html" rel="nofollow">[Python-Dev] <code>__new__</code> deprecation</a></p> <blockquote> <p>Guido van Rossum </p> <p><em>"The message means just what it says. :-) There's no point in calling <code>object.__new__()</code> with more than a class parameter, and any code that did so was just dumping those args into a black hole."</em></p> </blockquote> <p>But the warning "is not safe" that I get in 3.3 is scary. To understand the implications of using <code>object.__new__</code>, let's consider this example:</p> <pre><code>&gt;&gt;&gt; class A(Exception):
...     def __new__(cls, *args):
...         return object.__new__(A)
...
&gt;&gt;&gt; A()
TypeError: object.__new__(A) is not safe, use Exception.__new__()
</code></pre> <p>Fails miserably. Another example:</p> <pre><code>&gt;&gt;&gt; class A(object):
...     def __new__(cls, *args):
...         return object.__new__(A)
...
&gt;&gt;&gt;
&gt;&gt;&gt; A()
&lt;__main__.A object at 0x0000000002F2E278&gt;
</code></pre> <p>works fine. Although <code>object</code> is a builtin class just like <code>Exception</code> (they share the trait of being builtin classes), the first example raises <code>TypeError</code> with <code>Exception</code>, but with <code>object</code> it does not.</p> <p><strong>(a) What are the downsides of using <code>object.__new__</code> that made Python raise the error (<code>TypeError:...is not safe...</code>) in the first example?</strong></p> <p><strong>(b) What sort of checking does Python perform before calling <code>__new__</code>? Or: what is the condition that makes Python raise the error in the first example?</strong></p>
3
2016-09-15T15:54:20Z
39,515,655
<p>Calling <code>object.__new__(A)</code> returns an instance of <code>A</code>, but does so without ever calling <code>Exception.__new__()</code>, which is defined and expected to run.</p>
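<p>A minimal sketch of the fix (delegate to the next class in the MRO instead of jumping straight to <code>object</code>, and pass <code>cls</code> through):</p> <pre><code>class A(Exception):
    def __new__(cls, *args):
        # keeps Exception.__new__ in the chain
        return super(A, cls).__new__(cls, *args)

A()  # no TypeError
</code></pre>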
0
2016-09-15T16:04:59Z
[ "python" ]
What is the downside of using object.__new__ in __new__?
39,515,463
<p>Coding exception classes, I came across this error:</p> <p><code>TypeError: object.__new__(A) is not safe, use Exception.__new__()</code></p> <p>There's a similar question posted here: <a href="http://stackoverflow.com/questions/12952184/typeerror-object-new-int-is-not-safe-use-int-new">TypeError: object.__new__(int) is not safe, use int.__new__()</a>. So <code>__new__</code> was deprecated for the following reason:</p> <p><a href="https://mail.python.org/pipermail/python-dev/2008-February/076854.html" rel="nofollow">[Python-Dev] <code>__new__</code> deprecation</a></p> <blockquote> <p>Guido van Rossum </p> <p><em>"The message means just what it says. :-) There's no point in calling <code>object.__new__()</code> with more than a class parameter, and any code that did so was just dumping those args into a black hole."</em></p> </blockquote> <p>But the warning "is not safe" that I get in 3.3 is scary. To understand the implications of using <code>object.__new__</code>, let's consider this example:</p> <pre><code>&gt;&gt;&gt; class A(Exception):
...     def __new__(cls, *args):
...         return object.__new__(A)
...
&gt;&gt;&gt; A()
TypeError: object.__new__(A) is not safe, use Exception.__new__()
</code></pre> <p>Fails miserably. Another example:</p> <pre><code>&gt;&gt;&gt; class A(object):
...     def __new__(cls, *args):
...         return object.__new__(A)
...
&gt;&gt;&gt;
&gt;&gt;&gt; A()
&lt;__main__.A object at 0x0000000002F2E278&gt;
</code></pre> <p>works fine. Although <code>object</code> is a builtin class just like <code>Exception</code> (they share the trait of being builtin classes), the first example raises <code>TypeError</code> with <code>Exception</code>, but with <code>object</code> it does not.</p> <p><strong>(a) What are the downsides of using <code>object.__new__</code> that made Python raise the error (<code>TypeError:...is not safe...</code>) in the first example?</strong></p> <p><strong>(b) What sort of checking does Python perform before calling <code>__new__</code>? Or: what is the condition that makes Python raise the error in the first example?</strong></p>
3
2016-09-15T15:54:20Z
39,519,014
<p>There is no problem in calling <code>object.__new__</code>, but there is a problem in not calling <code>Exception.__new__</code>.</p> <p>The <code>Exception</code> class was designed in such a way that it is crucial that its <code>__new__</code> be called, so it complains if that is not done.</p> <p>There was a question why this happens only with built-in classes. Python in fact does it with every class which is programmed to do that.</p> <p>Here is a simplified poor man's implementation of the same mechanism in a custom class:</p> <pre><code>class A(object):
    def __new__(cls):
        rtn = object.__new__(cls)
        rtn.new_called = True
        return rtn

    def __init__(self):
        assert getattr(self,'new_called',False), \
            "object.__new__ is unsafe, use A.__new__"

class B(A):
    def __new__(cls):
        return object.__new__(cls)
</code></pre> <p>And now:</p> <pre><code>&gt;&gt;&gt; A()
&lt;__main__.A object at 0x00000000025CFF98&gt;
&gt;&gt;&gt; B()
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "&lt;stdin&gt;", line 7, in __init__
AssertionError: object.__new__ is unsafe, use A.__new__
</code></pre> <hr> <p>As a side note, this example from the question actually has two errors:</p> <pre><code>&gt;&gt;&gt; class A(Exception):
...     def __new__(cls, *args):
...         return object.__new__(A)
</code></pre> <p>The first is that <code>__new__</code> is called on <code>object</code>, thus ignoring <code>Exception.__new__</code>.</p> <p>The other, just as severe, is that <code>A</code> is passed to <code>__new__</code> instead of <code>cls</code>, which breaks all classes inherited from <code>A</code>.</p> <p>See this example:</p> <pre><code>class A(object):
    def __new__(cls):
        return object.__new__(A)  # The same erroneous line, without Exception

class B(A):
    pass
</code></pre> <p>And now <code>B()</code> does not create an instance of <code>B</code>:</p> <pre><code>&gt;&gt;&gt; B()
&lt;__main__.A object at 0x00000000025D30B8&gt;
</code></pre>
1
2016-09-15T19:35:00Z
[ "python" ]
Autotesting application written on Qt with Python
39,515,498
<p>I want to test an application written in Qt using Python. The workflow I want:</p> <ol> <li>A Python script should run the .exe</li> <li>The Python script should get/set info from/into the active window.</li> </ol> <p>Is it possible to manipulate a Qt window if I know its "object name" (<a href="http://doc.qt.io/qt-5/qobject.html#objectName-prop" rel="nofollow">http://doc.qt.io/qt-5/qobject.html#objectName-prop</a>)?</p> <p>Thanks a lot! :)</p>
-1
2016-09-15T15:55:48Z
39,548,179
<p>You will need a method of communication between the Python test and the program.</p> <p>For example, the program could read commands from STDIN or a socket when started in a test mode, and the test would write to that.</p> <p>Depending on the platform it might also be possible to expose objects via a remote procedure calling mechanism, e.g. D-Bus (using QtDBus on the program side and Python's D-Bus bindings on the test side).</p> <p>Ultimately it might be better, though, to consider using an existing test suite with Qt support, such as <a href="https://www.froglogic.com/squish/gui-testing/" rel="nofollow">Squish</a> or <a href="https://wiki.ubuntu.com/Touch/Testing/Autopilot" rel="nofollow">AutoPilot</a>.</p>
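<p>A bare-bones sketch of the STDIN approach from the test side (the command protocol here is made up; the program would have to implement it):</p> <pre><code>import subprocess

proc = subprocess.Popen(['./myapp.exe', '--test-mode'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

# hypothetical protocol: one command per line, addressed by objectName
proc.stdin.write(b'set lineEditName hello\n')
proc.stdin.write(b'get lineEditName\n')
proc.stdin.flush()
print(proc.stdout.readline())  # the program's reply
</code></pre>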
0
2016-09-17T14:53:58Z
[ "python", "c++", "qt", "automated-tests" ]
scipy.integrate.quad precision on big numbers
39,515,582
<p>I am trying to compute the following integral (actually the cdf of the exponential distribution, via its pdf) with <code>scipy.integrate.quad()</code>:</p> <pre><code>import numpy as np
from scipy.integrate import quad

def g(x):
    return .5 * np.exp(-.5 * x)

print quad(g, a=0., b=np.inf)
print quad(g, a=0., b=10**6)
print quad(g, a=0., b=10**5)
print quad(g, a=0., b=10**4)
</code></pre> <p>And the result is as follows:</p> <pre><code>(1.0, 3.5807346295637055e-11)
(0.0, 0.0)
(3.881683817604194e-22, 7.717972744764185e-22)
(1.0, 1.6059202674761255e-14)
</code></pre> <p>All attempts to use a big finite upper integration limit yield an incorrect answer, though using <code>np.inf</code> solves the problem.</p> <p>A similar case is discussed in <a href="https://github.com/scipy/scipy/issues/5428" rel="nofollow">scipy issue #5428 on GitHub</a>.</p> <p>What should I do to avoid such an error when integrating other density functions?</p>
0
2016-09-15T16:00:24Z
39,517,294
<p>I believe the issue is due to <code>np.exp(-x)</code> quickly becoming very small as <code>x</code> increases, which results in it evaluating to zero due to limited numerical precision. For example, even for <code>x</code> as small as <code>x=10**2</code>, <code>np.exp(-x)</code> evaluates to <code>3.72007597602e-44</code>, whereas <code>x</code> values of order <code>10**3</code> or above result in <code>0</code>.</p> <p>I do not know the implementation specifics of <code>quad</code>, but it probably performs some kind of sampling of the function to be integrated over the given integration range. For a large upper integration limit, most of the samples of <code>np.exp(-x)</code> evaluate to zero, hence the integral value is underestimated. (Note that in these cases the absolute error provided by <code>quad</code> is of the same order as the integral value, which is an indicator that the latter is unreliable.)</p> <p>One approach to avoid this issue is to restrict the integration upper bound to a value above which the numerical function becomes very small (and, hence, contributes marginally to the integral value). From your code snippet, the value of <code>10**4</code> appears to be a good choice; however, a value of <code>10**2</code> also results in an accurate evaluation of the integral.</p> <p>Another approach to avoid numerical precision issues is to use a module that performs computation in <em>arbitrary</em> precision arithmetic, such as <code>mpmath</code>. For example, for <code>x=10**5</code>, <code>mpmath</code> evaluates <code>exp(-x)</code> as follows (using the native <code>mpmath</code> exponential function):</p> <pre><code>import mpmath as mp
print(mp.exp(-10**5))
</code></pre> <blockquote> <p><code>3.56294956530937e-43430</code></p> </blockquote> <p>Note how small this value is. With standard hardware numerical precision (used by <code>numpy</code>) this value becomes <code>0</code>.</p> <p><code>mpmath</code> offers an integration function (<code>mp.quad</code>), which can provide an accurate estimate of the integral for arbitrary values of the upper integration bound.</p> <pre><code>import mpmath as mp
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, mp.inf]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**13]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**8]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**5]))
</code></pre> <blockquote> <pre><code>1.0
0.999999650469474
0.999999999996516
0.999999999999997
</code></pre> </blockquote> <p>We can also obtain even more accurate estimates by increasing the precision to, say, <code>50</code> decimal points (from <code>15</code>, which is the standard precision):</p> <pre><code>mp.mp.dps = 50
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, mp.inf]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**13]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**8]))
print(mp.quad(lambda x : .5 * mp.exp(-.5 * x), [0, 10**5]))
</code></pre> <blockquote> <pre><code>1.0
0.99999999999999999999999999999999999999999829880262
0.99999999999999999999999999999999999999999999997463
0.99999999999999999999999999999999999999999999999998
</code></pre> </blockquote> <p>In general, the cost of obtaining this accuracy is increased computation time.</p> <p>P.S.: It goes without saying that if you are able to evaluate your integral analytically in the first place (e.g., with the help of <code>Sympy</code>) you can forget all the above.</p>
1
2016-09-15T17:45:54Z
[ "python", "numpy", "scipy", "calculus" ]
scipy.integrate.quad precision on big numbers
39,515,582
<p>I am trying to compute the following integral (actually the cdf of the exponential distribution, via its pdf) with <code>scipy.integrate.quad()</code>:</p> <pre><code>import numpy as np
from scipy.integrate import quad

def g(x):
    return .5 * np.exp(-.5 * x)

print quad(g, a=0., b=np.inf)
print quad(g, a=0., b=10**6)
print quad(g, a=0., b=10**5)
print quad(g, a=0., b=10**4)
</code></pre> <p>And the result is as follows:</p> <pre><code>(1.0, 3.5807346295637055e-11)
(0.0, 0.0)
(3.881683817604194e-22, 7.717972744764185e-22)
(1.0, 1.6059202674761255e-14)
</code></pre> <p>All attempts to use a big finite upper integration limit yield an incorrect answer, though using <code>np.inf</code> solves the problem.</p> <p>A similar case is discussed in <a href="https://github.com/scipy/scipy/issues/5428" rel="nofollow">scipy issue #5428 on GitHub</a>.</p> <p>What should I do to avoid such an error when integrating other density functions?</p>
0
2016-09-15T16:00:24Z
39,518,212
<p>Use the <code>points</code> argument to tell the algorithm where the support of your function roughly is:</p> <pre><code>import numpy as np from scipy.integrate import quad def g(x): return .5 * np.exp(-.5 * x) print quad(g, a=0., b=10**3, points=[1, 100]) print quad(g, a=0., b=10**6, points=[1, 100]) print quad(g, a=0., b=10**9, points=[1, 100]) print quad(g, a=0., b=10**12, points=[1, 100]) </code></pre>
2
2016-09-15T18:41:53Z
[ "python", "numpy", "scipy", "calculus" ]
getting percentage of values meeting conditions within a Pandas group by
39,515,589
<p>I have a dataframe of <code>People</code>, <code>Days</code>, and <code>Types</code>. The data doesn't really make sense; it's just an example.</p> <p>I'd like to do a group by first on <code>People</code> then <code>Type</code>, and then find the percentage of <code>Days</code> that are less than or equal to 3.</p> <p>In order to do this, I have created a <code>Boolean</code> column for equal to or under 3 days, then applied <code>count</code> and <code>sum</code> aggregates. I'm not a big fan of this method because I really only need the <code>count</code> for the <code>Days</code> column and <code>sum</code> for the <code>Under Day Limit</code> column. This method essentially creates two unnecessary columns and adds a number of extra steps. How can I clean this code up so it runs more efficiently over my larger dataset?</p> <pre><code>import pandas as pd

# create dataframe
df = pd.DataFrame(data=[['A', 4, 'Type 1'],
                        ['A', 1, 'Type 1'],
                        ['A', 3, 'Type 2'],
                        ['A', 0, 'Type 1'],
                        ['A', 12, 'Type 2'],
                        ['B', 1, 'Type 1'],
                        ['B', 3, 'Type 1'],
                        ['B', 5, 'Type 2']],
                  columns=['Person', 'Days', 'Type'])

df['Under Day Limit'] = df['Days'] &lt;= 3
print df

df = df.groupby(['Person', 'Type']).agg(['count', 'sum'])
df['Percent under Day Limit'] = df['Under Day Limit']['sum'] / df['Days']['count']
print df
</code></pre> <p>Output:</p> <pre><code>               Days     Under Day Limit     Percent under Day Limit
              count sum           count sum
Person Type
A      Type 1     3   5               3   2                0.666667
       Type 2     2  15               2   1                0.500000
B      Type 1     2   4               2   2                1.000000
       Type 2     1   5               1   0                0.000000
</code></pre>
1
2016-09-15T16:00:52Z
39,516,256
<ul> <li><code>set_index</code> on <code>Person</code> and <code>Type</code></li> <li>boolean series of <code>Days</code> >= 3</li> <li><code>groupby</code> levels in index</li> <li><code>value_counts(normalize=True)</code></li> </ul> <hr> <pre><code>df.set_index(['Person', 'Type']).Days.ge(3).groupby(level=[0, 1]).value_counts(True) Person Type Days A Type 1 False 0.666667 True 0.333333 Type 2 True 1.000000 B Type 1 False 0.500000 True 0.500000 Type 2 True 1.000000 Name: Days, dtype: float64 </code></pre> <hr> <p><strong><em>With a wee bit more formatting</em></strong></p> <pre><code>df.set_index(['Person', 'Type']).Days.rename('&gt;= 3').ge(3) \ .groupby(level=[0, 1]).value_counts(True).unstack(fill_value=0) </code></pre> <p><a href="http://i.stack.imgur.com/i4sSG.png" rel="nofollow"><img src="http://i.stack.imgur.com/i4sSG.png" alt="enter image description here"></a></p>
2
2016-09-15T16:39:53Z
[ "python", "python-2.7", "pandas" ]
Efficiently looping through lists in Python
39,515,671
<p>I need to make a Python function which, given ordered lists <code>a</code> and <code>b</code>, returns <code>True</code> if there exists an item in <code>a</code> for which there is an item in <code>b</code> equal to that item plus 1.</p> <p>This is of course easily done by using something like this:</p> <pre><code>for item in a:
    if (item + 1 in b):
        return True
</code></pre> <p>However, I need to make this as efficient as possible because the function will be used to process loads of data. The tip I was given was to use the <code>iter()</code> and <code>next()</code> operations, but I have not found a way to use these for efficient processing yet. Does anyone know how to implement these, or another fast algorithm? Thanks in advance.</p>
0
2016-09-15T16:06:27Z
39,516,275
<p>I can see two options that are more efficient.</p> <ol> <li>Go through each element in <code>a</code>, and perform a binary search for <code>element+1</code> in <code>b</code>.</li> </ol> <p>Time complexity: O(n*log(m)) where n = <code>|a|</code> and m = <code>|b|</code>.</p> <pre><code> for element in a:
     if binary_search(b, element + 1):
         return True
 return False
</code></pre> <ol start="2"> <li>Increment two counters through [0,<code>|a|</code>) and [0,<code>|b|</code>), say <code>i</code> and <code>j</code>. Loop while <code>i</code> is less than <code>|a|</code> and <code>j</code> is less than <code>|b|</code>. Compare <code>a[i] + 1</code> with <code>b[j]</code>. If they are equal, return <code>True</code>. If the value of <code>a[i] + 1 &gt; b[j]</code>, increment <code>j</code>. Otherwise, increment <code>i</code>.</li> </ol> <p>Time complexity: O(n+m) where n = <code>|a|</code> and m = <code>|b|</code>.</p> <pre><code> i = 0
 j = 0
 while i &lt; len(a) and j &lt; len(b):
     if a[i] + 1 == b[j]:
         return True
     elif a[i] + 1 &gt; b[j]:
         j += 1
     else:
         i += 1
 return False
</code></pre>
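<p>The <code>binary_search</code> helper above is pseudocode; in Python it could be implemented with the standard <code>bisect</code> module, for example:</p> <pre><code>from bisect import bisect_left

def binary_search(lst, value):
    # lst must be sorted; returns True if value is present
    i = bisect_left(lst, value)
    return i &lt; len(lst) and lst[i] == value
</code></pre>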
0
2016-09-15T16:41:40Z
[ "python", "list", "loops", "iterator" ]
Efficiently looping through lists in Python
39,515,671
<p>I need to make a Python function which, given ordered lists <code>a</code> and <code>b</code>, returns <code>True</code> if there exists an item in <code>a</code> for which there is an item in <code>b</code> equal to that item plus 1.</p> <p>This is of course easily done by using something like this:</p> <pre><code>for item in a:
    if (item + 1 in b):
        return True
</code></pre> <p>However, I need to make this as efficient as possible because the function will be used to process loads of data. The tip I was given was to use the <code>iter()</code> and <code>next()</code> operations, but I have not found a way to use these for efficient processing yet. Does anyone know how to implement these, or another fast algorithm? Thanks in advance.</p>
0
2016-09-15T16:06:27Z
39,516,408
<p>Warning: not very well tested code. Assuming sorted lists, as you wrote.</p> <p>The idea behind the hint with <code>iter</code> and <code>next</code> is that within a single loop you can advance in one or the other list. If the first number is too small, you try the next first number. If the second one is too small, you try the next second number.</p> <pre><code>def test1(a, b):
    ia = iter(a)
    ib = iter(b)
    try:
        ea = next(ia)
        eb = next(ib)
        while True:
            print("debug: comparing {} -- {}".format(ea, eb))
            diff = ea - eb
            if diff == -1:
                print("debug: OK!")
                return True
            elif diff &lt; -1:
                ea = next(ia)
            else:
                eb = next(ib)
    except StopIteration:
        print("debug: not found")
        return False

lista=[1,2,4,10,31,33,45,67]
listb=[7,16,22,29,34,39,49,59,60,100,200,300]

test1(lista, listb)
</code></pre> <p>The output shows the algorithm at work:</p> <pre><code>debug: comparing 1 -- 7
debug: comparing 2 -- 7
debug: comparing 4 -- 7
debug: comparing 10 -- 7
debug: comparing 10 -- 16
debug: comparing 31 -- 16
debug: comparing 31 -- 22
debug: comparing 31 -- 29
debug: comparing 31 -- 34
debug: comparing 33 -- 34
debug: OK!
</code></pre>
0
2016-09-15T16:49:02Z
[ "python", "list", "loops", "iterator" ]
Efficiently looping through lists in Python
39,515,671
<p>I need to make a Python function which, given ordered lists <code>a</code> and <code>b</code>, returns <code>True</code> if there exists an item in <code>a</code> for which there is an item in <code>b</code> equal to that item plus 1.</p> <p>This is of course easily done by using something like this:</p> <pre><code>for item in a:
    if (item + 1 in b):
        return True
</code></pre> <p>However, I need to make this as efficient as possible because the function will be used to process loads of data. The tip I was given was to use the <code>iter()</code> and <code>next()</code> operations, but I have not found a way to use these for efficient processing yet. Does anyone know how to implement these, or another fast algorithm? Thanks in advance.</p>
0
2016-09-15T16:06:27Z
39,520,027
<p>Thank you both for your answers! I ended up using a combined version of your solutions (with the initial <code>next()</code> calls moved inside the <code>try</code>, and <code>while True</code> so that a value of 0 doesn't end the loop early):</p> <pre><code>a = iter(l1)
b = iter(l2)
try:
    i = next(a)
    j = next(b)
    while True:
        if i + 1 == j:
            return True
        elif i + 1 &gt; j:
            j = next(b)
        else:
            i = next(a)
except StopIteration:
    return False
</code></pre>
0
2016-09-15T20:41:48Z
[ "python", "list", "loops", "iterator" ]
How can I kill omxplayer player on a Raspberry Pi using Python
39,515,757
<p>I'm doing a DIY project using a Raspberry Pi 3 where I need to play 4 videos using omxplayer.</p> <p>Each video is played once you press a certain button on the protoboard:</p> <ul> <li>Press Button 1 - Play Video 1</li> <li>Press Button 2 - Play Video 2</li> <li>Press Button 3 - Play Video 3</li> <li>Press Button 4 - Play Video 4</li> </ul> <p>I successfully play the 4 videos whenever I press any of the buttons using the following Python code:</p> <pre><code>import RPi.GPIO as GPIO
import time
import os

GPIO.setmode(GPIO.BCM)   # refer to the pins by their BCM (GPIO) numbers
GPIO.setwarnings(False)

GPIO.setup(4, GPIO.IN)   # GPIO 4 as input
GPIO.setup(17, GPIO.IN)  # GPIO 17 as input
GPIO.setup(27, GPIO.IN)  # GPIO 27 as input
GPIO.setup(22, GPIO.IN)  # GPIO 22 as input

pathVideos = "/home/pi/VideoHD/Belen"  # directory containing the HD videos

def reproducirVideos(nameVideo):
    command = "omxplayer -p -o hdmi %s/%s.mp4" % (pathVideos,nameVideo)
    os.system(command)
    print "Reproduciendo el Video: %s " % nameVideo

def programaPrincipal():
    print("Inicio")
    while True:
        if (GPIO.input(4)):
            print("Iniciando Video: AMANECER")
            reproducirVideos("amanecer")
        elif (GPIO.input(17)):
            print("Iniciando Video: DIA")
            reproducirVideos("dia")
        elif (GPIO.input(27)):
            print("Iniciando Video: ATARDECER")
            reproducirVideos("atardecer")
        elif (GPIO.input(22)):
            print("Iniciando Video: ANOCHECER")
            reproducirVideos("anochecer")
        else:
            pass
    print("Fin de programa")
    GPIO.cleanup()  # clean up the GPIO pins

programaPrincipal()  # call the programaPrincipal function to run the program
</code></pre> <p>Here is my issue.</p> <p>When I press a button, e.g. Button 1, the whole of video 1 plays properly on the screen. If I press any button while video 1 is running, nothing happens. What I want to achieve is that whenever I press any button on the protoboard, omxplayer should stop playing the current video (if there is any playing) and start a new one.</p> <p>I have read about killing omxplayer using PIPE, as described in the following link, but without success:</p> <p><a href="http://stackoverflow.com/questions/20871658/how-can-i-kill-omxplayer-by-python-subprocess">How can I kill omxplayer by Python Subprocess</a></p> <p>Any help will be appreciated.</p>
0
2016-09-15T16:10:50Z
39,552,427
<p>A bit hacky, I guess, but have you tried running <code>killall</code> before starting omxplayer?</p> <pre><code>command = "killall omxplayer; omxplayer -p -o hdmi %s/%s.mp4" % (pathVideos,nameVideo)
os.system(command)
</code></pre>
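<p>A sketch of a less blunt alternative (untested on the Pi): keep a handle on the player process and stop it yourself before starting the next video. Note that <code>omxplayer</code> is typically a shell wrapper around <code>omxplayer.bin</code>, so terminating the wrapper may not be enough; you may still need to signal the whole process group or fall back to <code>killall omxplayer.bin</code>.</p> <pre><code>import subprocess

player = None

def reproducirVideos(nameVideo):
    global player
    if player is not None and player.poll() is None:
        player.terminate()   # stop the video that is still running
        player.wait()
    video = "%s/%s.mp4" % (pathVideos, nameVideo)
    player = subprocess.Popen(["omxplayer", "-p", "-o", "hdmi", video])
    print("Reproduciendo el Video: %s" % nameVideo)
</code></pre>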
1
2016-09-17T22:45:49Z
[ "python", "raspberry-pi", "omxplayer" ]