Python Slice a List Using a List of Multiple Tuples
39,617,956
<p>I have a list of numbers from which I would like to slice out the ranges given by a list of tuples. For example, I have a list that looks like:</p> <pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0,] </code></pre> <p>I also have a list of tuples holding the indices of the values that I want, which looks like:</p> <pre><code>my_tups = [(5,9), (14,18)] </code></pre> <p>How would I return only those values of my_list, using my_tups as the indexes?</p>
1
2016-09-21T13:27:48Z
39,618,092
<p>You can use the built-in <a href="https://docs.python.org/2/library/functions.html#slice" rel="nofollow"><code>slice</code></a> as follows:</p> <pre><code>my_list = [5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0] my_tups = [(5, 9), (14, 18)] my_list2 = [my_list[slice(*o)] for o in my_tups] print(my_list2) &gt;&gt;&gt; [[1, 3, 4, 8], [21, 34, 25, 91]] </code></pre>
1
2016-09-21T13:33:23Z
[ "python", "list", "tuples", "slice" ]
39,618,094
<p>You could also use <code>slice</code> with <a href="https://docs.python.org/2/library/itertools.html#itertools.starmap" rel="nofollow"><code>itertools.starmap</code></a>:</p> <pre><code>from itertools import starmap my_list = [ 5., 8., 3., 0., 0., 1., 3., 4., 8., 13., 0., 0., 0., 0., 21., 34., 25., 91., 61., 0., 0.] my_tups = [(5,9), (14,18)] result = [my_list[slc] for slc in starmap(slice, my_tups)] # [[1.0, 3.0, 4.0, 8.0], [21.0, 34.0, 25.0, 91.0]] </code></pre> <hr> <p>With <code>starmap</code>, you could add a <em>step</em> parameter to <em>any</em> of your slices arbitrarily:</p> <pre><code>&gt;&gt;&gt; my_tups = [(5,9,2), (14,18)] # step first slice by 2 &gt;&gt;&gt; [my_list[slc] for slc in starmap(slice, my_tups)] [[1.0, 4.0], [21.0, 34.0, 25.0, 91.0]] </code></pre>
0
2016-09-21T13:33:24Z
39,618,116
<p>This should do the trick:</p> <pre><code>for a, b in my_tups: print(my_list[a:b]) </code></pre> <p>Instead of printing, you can do something else with each slice.</p>
0
2016-09-21T13:34:03Z
39,618,169
<p>Using list comprehension:</p> <pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0,] my_tups = [(5,9), (14,18)] new_list = [my_list[i:j] for i,j in my_tups] </code></pre> <hr> <p>After your comment:</p> <pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0] my_tups = [(5,9), (14,18)] new_list = [0 for i in my_list] # Create a list filled with zeros for i,j in my_tups: new_list[i:j] = my_list[i:j] # Replace items with items from my_list using the indexes from my_tups </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; new_list [0, 0, 0, 0, 0, 1, 3, 4, 8, 0, 0, 0, 0, 0, 21, 34, 25, 91, 0, 0, 0] </code></pre>
3
2016-09-21T13:36:29Z
39,618,288
<p>You can do something like this with a list comprehension:</p> <pre><code>slices = [my_list[x:y] for x, y in my_tups if x &lt; len(my_list) and y &lt;= len(my_list)] </code></pre> <p>Note that an end index equal to <code>len(my_list)</code> is a valid slice bound, hence <code>&lt;=</code> for <code>y</code>; the check is only useful if you want to reject tuples that reach past the end on purpose.</p>
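As a side note, the bounds check above is mostly unnecessary: unlike indexing, slicing in Python clips out-of-range bounds instead of raising. A quick self-contained demonstration using the question's data:

```python
my_list = [5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0]
my_tups = [(5, 9), (14, 18)]

# Indexing past the end raises IndexError, but slicing just clips.
clipped = my_list[14:999]
assert clipped == [21, 34, 25, 91, 61, 0, 0]

# So the guard can simply be dropped.
slices = [my_list[x:y] for x, y in my_tups]
assert slices == [[1, 3, 4, 8], [21, 34, 25, 91]]
```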
0
2016-09-21T13:41:28Z
Python - separate duplicate objects into different list
39,618,210
<p>Let's say I have this class:</p> <pre><code>class Spam(object): def __init__(self, a): self.a = a </code></pre> <p>And now I have these objects:</p> <pre><code>s1 = Spam((1, 1, 1, 4)) s2 = Spam((1, 2, 1, 4)) s3 = Spam((1, 2, 1, 4)) s4 = Spam((2, 2, 1, 4)) s5 = Spam((2, 1, 1, 8)) s6 = Spam((2, 1, 1, 8)) objects = [s1, s2, s3, s4, s5, s6] </code></pre> <p>After running some kind of method, I need two lists: one containing the objects that share the same <code>a</code> attribute value, and another containing the objects whose <code>a</code> attribute is unique.</p> <p>Like this:</p> <pre><code>dups = [s2, s3, s5, s6] normal = [s1, s4] </code></pre> <p>So it is something like finding duplicates, except that the first occurrence of an object sharing an <code>a</code> value should also be included.</p> <p>I have written this method and it seems to be working, but it is quite ugly in my opinion (and probably not very optimal).</p> <pre><code>def eggs(objects): vals = [] dups = [] normal = [] for obj in objects: if obj.a in vals: dups.append(obj) else: normal.append(obj) vals.append(obj.a) dups_vals = [o.a for o in dups] # separate again new_normal = [] for n in normal: if n.a in dups_vals: dups.append(n) else: new_normal.append(n) return dups, new_normal </code></pre> <p>Can anyone suggest a more Pythonic approach to this problem?</p>
0
2016-09-21T13:38:09Z
39,618,362
<p>I would group together the objects in a dictionary, using the <code>a</code> attribute as the key. Then I would separate them by the size of the groups.</p> <pre><code>import collections def separate_dupes(seq, key_func): d = collections.defaultdict(list) for item in seq: d[key_func(item)].append(item) dupes = [item for v in d.values() for item in v if len(v) &gt; 1] uniques = [item for v in d.values() for item in v if len(v) == 1] return dupes, uniques class Spam(object): def __init__(self, a): self.a = a #this method is not necessary for the solution, just for displaying the results nicely def __repr__(self): return "Spam({})".format(self.a) s1 = Spam((1, 1, 1, 4)) s2 = Spam((1, 2, 1, 4)) s3 = Spam((1, 2, 1, 4)) s4 = Spam((2, 2, 1, 4)) s5 = Spam((2, 1, 1, 8)) s6 = Spam((2, 1, 1, 8)) objects = [s1, s2, s3, s4, s5, s6] dupes, uniques = separate_dupes(objects, lambda item: item.a) print(dupes) print(uniques) </code></pre> <p>Result:</p> <pre><code>[Spam((2, 1, 1, 8)), Spam((2, 1, 1, 8)), Spam((1, 2, 1, 4)), Spam((1, 2, 1, 4))] [Spam((1, 1, 1, 4)), Spam((2, 2, 1, 4))] </code></pre>
2
2016-09-21T13:44:03Z
[ "python", "duplicates", "unique" ]
39,618,416
<p>If you add an <code>__eq__</code> method to <code>Spam</code>, defined as</p> <pre><code>def __eq__(self, other): return self.a == other.a </code></pre> <p>then you can do this quite simply with something like</p> <pre><code># you can inline this if you want, just wanted to give it a name def except_at(elems, ind): return elems[:ind] + elems[ind+1:] dups = [obj for (i, obj) in enumerate(objects) if obj in except_at(objects, i)] normal = [obj for (i, obj) in enumerate(objects) if obj not in except_at(objects, i)] </code></pre>
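For reference, a self-contained sketch of this approach with the question's data. Two caveats: in Python 3, defining `__eq__` without `__hash__` makes instances unhashable, and the repeated membership tests make this quadratic (fine for small lists):

```python
class Spam(object):
    def __init__(self, a):
        self.a = a

    # Equality based on the 'a' attribute, as suggested above.
    def __eq__(self, other):
        return self.a == other.a

def except_at(elems, ind):
    # All elements except the one at index ind.
    return elems[:ind] + elems[ind + 1:]

objects = [Spam((1, 1, 1, 4)), Spam((1, 2, 1, 4)), Spam((1, 2, 1, 4)),
           Spam((2, 2, 1, 4)), Spam((2, 1, 1, 8)), Spam((2, 1, 1, 8))]

# An object is a dup iff an equal object exists elsewhere in the list.
dups = [obj for i, obj in enumerate(objects) if obj in except_at(objects, i)]
normal = [obj for i, obj in enumerate(objects) if obj not in except_at(objects, i)]

assert [o.a for o in dups] == [(1, 2, 1, 4), (1, 2, 1, 4), (2, 1, 1, 8), (2, 1, 1, 8)]
assert [o.a for o in normal] == [(1, 1, 1, 4), (2, 2, 1, 4)]
```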
1
2016-09-21T13:46:18Z
39,618,672
<p>Using <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a>, these are the keys shared by more than one object:</p> <pre><code>import collections common = [k for (k, v) in collections.Counter([o.a for o in objects]).items() if v &gt; 1] </code></pre> <p>Your two lists are then:</p> <pre><code>[o for o in objects if o.a in common], [o for o in objects if o.a not in common] </code></pre>
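Put together with the question's setup, this becomes the following runnable sketch (using a set for the common keys so each membership test is O(1) instead of scanning a list):

```python
import collections

class Spam(object):
    def __init__(self, a):
        self.a = a

objects = [Spam((1, 1, 1, 4)), Spam((1, 2, 1, 4)), Spam((1, 2, 1, 4)),
           Spam((2, 2, 1, 4)), Spam((2, 1, 1, 8)), Spam((2, 1, 1, 8))]

# Count how many objects share each 'a' value; keep the values seen more than once.
counts = collections.Counter(o.a for o in objects)
common = {k for k, v in counts.items() if v > 1}

dups = [o for o in objects if o.a in common]
normal = [o for o in objects if o.a not in common]

assert [o.a for o in dups] == [(1, 2, 1, 4), (1, 2, 1, 4), (2, 1, 1, 8), (2, 1, 1, 8)]
assert [o.a for o in normal] == [(1, 1, 1, 4), (2, 2, 1, 4)]
```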
0
2016-09-21T13:56:29Z
39,618,675
<p>One way to do this, if the list of objects isn't too large, is to sort the list of objects and then apply <code>groupby</code> to it to get the duplicates. To sort the list we supply a key function that extracts the value of the object's <code>.a</code> attribute.</p> <pre><code>from operator import attrgetter from itertools import groupby class Spam(object): def __init__(self, a): self.a = a def __repr__(self): return 'Spam({})'.format(self.a) s1 = Spam((1, 1, 1, 4)) s2 = Spam((1, 2, 1, 4)) s3 = Spam((1, 2, 1, 4)) s4 = Spam((2, 2, 1, 4)) s5 = Spam((2, 1, 1, 8)) s6 = Spam((2, 1, 1, 8)) objects = [s1, s2, s3, s4, s5, s6] keyfunc = attrgetter('a') dupe, unique = [], [] for k, g in groupby(sorted(objects, key=keyfunc), key=keyfunc): g = list(g) target = unique if len(g) == 1 else dupe target.extend(g) print('dupe', dupe) print('unique', unique) </code></pre> <p><strong>output</strong></p> <pre><code>dupe [Spam((1, 2, 1, 4)), Spam((1, 2, 1, 4)), Spam((2, 1, 1, 8)), Spam((2, 1, 1, 8))] unique [Spam((1, 1, 1, 4)), Spam((2, 2, 1, 4))] </code></pre>
0
2016-09-21T13:56:44Z
Python Django populate() isn't reentrant
39,618,364
<p>I've had a program that has been running fine for months. I've been trying to install Postfix on the server this morning and suddenly started getting an error on the site. Here is the traceback:</p> <pre><code>mod_wsgi (pid=11948): Target WSGI script '/var/www/zouzoukos/zouzoukos/wsgi.py$ mod_wsgi (pid=11948): Exception occurred processing '/var/www/zouzoukos/zouzoukos/wsgi.py'. Traceback (most recent call last): File "/var/www/zouzoukos/zouzoukos/wsgi.py", line 29, in &lt;module&gt; application = get_wsgi_application() File "/var/www/zouzoukos/env/lib/python2.7/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application django.setup() File "/var/www/zouzoukos/env/lib/python2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/var/www/zouzoukos/env/lib/python2.7/site-packages/django/__init__.py", line 18, in setup raise RuntimeError("populate() isn't reentrant") RuntimeError: populate() isn't reentrant </code></pre> <p>The thing is, I have a couple more versions of the site running for other people and they're still fine (this was the first one). I cannot understand what I need to update to get it working again.</p> <p>I've tried everything in <a href="http://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant">this thread</a> and still nothing.</p>
0
2016-09-21T13:44:04Z
39,618,850
<p>This error basically means that something has already tried to mess with the <code>app_configs</code> ordered dict of the <code>Apps</code> class before Django was able to set up the installed apps properly in the first place. Check <code>django.apps.registry.Apps#populate</code>; it says:</p> <pre><code># app_config should be pristine, otherwise the code below won't # guarantee that the order matches the order in INSTALLED_APPS. if self.app_configs: raise RuntimeError("populate() isn't reentrant") </code></pre> <p>Try checking what's in this <code>app_configs</code> dictionary to get more information. Simply restarting everything might also help.</p>
0
2016-09-21T14:03:27Z
[ "python", "django" ]
39,620,676
<p>I tried the approach from @valentjjedi, then tried <code>manage.py</code> and got a different error indicating a MySQL-python issue, so I uninstalled and reinstalled it and it worked:</p> <pre><code>env/bin/pip uninstall mysql-python env/bin/pip install mysql-python </code></pre>
0
2016-09-21T15:24:55Z
BeautifulSoup HTML Table Parsing for the tags without classes
39,618,388
<p>I have this HTML table, and I need to get specific data from it and assign it to variables; I do not need all the information. For example: flag = "United Arab Emirates", home_port = "Sharjah", etc. Since there are no <code>class</code> attributes on the HTML elements, how do we extract this data?</p> <pre><code> r = requests.get('http://maritime-connector.com/ship/'+str(imo_number), headers={'User-Agent': 'Mozilla/5.0'}) soup = BeautifulSoup(r.content, "lxml") table = soup.find("table", { "class" : "ship-data-table" }) for row in table.findAll("tr"): tname = row.findAll("th") cells = row.findAll("td") print (type(tname)) print (type(cells)) </code></pre> <p>I am using the Python module BeautifulSoup.</p> <pre><code>&lt;table class="ship-data-table" style="margin-bottom:3px"&gt; &lt;thead&gt; &lt;tr&gt; &lt;th&gt;IMO number&lt;/th&gt; &lt;td&gt;9492749&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Name of the ship&lt;/th&gt; &lt;td&gt;SHARIEF PILOT&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Type of ship&lt;/th&gt; &lt;td&gt;ANCHOR HANDLING VESSEL&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;MMSI&lt;/th&gt; &lt;td&gt;470535000&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Gross tonnage&lt;/th&gt; &lt;td&gt;499 tons&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;DWT&lt;/th&gt; &lt;td&gt;222 tons&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Year of build&lt;/th&gt; &lt;td&gt;2008&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Builder&lt;/th&gt; &lt;td&gt;NANYANG SHIPBUILDING - JINGJIANG, CHINA&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Flag&lt;/th&gt; &lt;td&gt;UNITED ARAB EMIRATES&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Home port&lt;/th&gt; &lt;td&gt;SHARJAH&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Manager &amp; owner&lt;/th&gt; &lt;td&gt;GLOBAL MARINE SERVICES - SHARJAH, UNITED ARAB EMIRATES&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;Former names&lt;/th&gt; &lt;td&gt;SUPERIOR PILOT until 2008 Sep&lt;/td&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;/table&gt; </code></pre>
2
2016-09-21T13:44:40Z
39,618,492
<p>Go over all the <code>th</code> elements in the table, get the text and the following <code>td</code> sibling's text:</p> <pre><code>from pprint import pprint from bs4 import BeautifulSoup data = """your HTML here""" soup = BeautifulSoup(data, "html.parser") result = {header.get_text(strip=True): header.find_next_sibling("td").get_text(strip=True) for header in soup.select("table.ship-data-table tr th")} pprint(result) </code></pre> <p>This would construct a nice dictionary with headers as keys and corresponding <code>td</code> texts as values:</p> <pre><code>{'Builder': 'NANYANG SHIPBUILDING - JINGJIANG, CHINA', 'DWT': '222 tons', 'Flag': 'UNITED ARAB EMIRATES', 'Former names': 'SUPERIOR PILOT until 2008 Sep', 'Gross tonnage': '499 tons', 'Home port': 'SHARJAH', 'IMO number': '9492749', 'MMSI': '470535000', 'Manager &amp; owner': 'GLOBAL MARINE SERVICES - SHARJAH, UNITED ARAB EMIRATES', 'Name of the ship': 'SHARIEF PILOT', 'Type of ship': 'ANCHOR HANDLING VESSEL', 'Year of build': '2008'} </code></pre>
2
2016-09-21T13:49:34Z
[ "python", "html", "beautifulsoup" ]
39,619,209
<p>I would do something like this:</p> <pre><code>html = """ &lt;your table&gt; """ from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'html.parser') flag = soup.find("th", string="Flag").find_next("td").get_text(strip=True) home_port = soup.find("th", string="Home port").find_next("td").get_text(strip=True) print(flag) print(home_port) </code></pre> <p>That way I'm making sure I match the text only in <code>th</code> elements and get the contents of the next <code>td</code>.</p>
0
2016-09-21T14:18:28Z
Simulating multi-user to operate PostgreSQL by Python?
39,618,478
<p>I want to compare my distributed database with PostgreSQL, so I need to simulate multi-user SQL operations on PostgreSQL using Python.</p> <p>Can I use the <code>multiprocessing</code> module?</p> <p>Here is my code:</p> <pre><code> # encoding=utf-8 import datetime import multiprocessing import psycopg2 def exe(cmd): conn = psycopg2.connect("dbname = test user = pj password = dbrgdbrg") cur = conn.cursor() try: sql = "SELECT id FROM test WHERE ST_MAKEENVELOPE(118,38,119,39,4326) &amp;&amp; wkb_geometry;" cur.execute(sql) print cur.fetchone() except Exception, e: print e if __name__ == "__main__": cmds=range(5) for cmd in cmds: p = multiprocessing.Process(target=exe,args=(cmd,)) p.start() p.join() </code></pre> <p>But I don't know if this is correct. What should I do if I wanted to pass random parameters to my SQL statements?</p>
0
2016-09-21T13:48:56Z
39,619,169
<p>You can use the <code>random</code> module to generate random <code>ST_MAKEENVELOPE</code> arguments, assuming they belong to some known range, and then use it to get 5 different envelopes (and thus 5 different sets of IDs).</p> <p>Also, you can just use <code>for cmd in range(5):</code> instead of creating a list first and then iterating over it.</p>
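As a sketch of that idea (the coordinate ranges and the `random_envelope` helper below are made-up placeholders for illustration, not taken from the question's data), each worker could build a random envelope and pass it as query parameters; psycopg2's `execute` accepts a parameters tuple for `%s` placeholders, which is safer than formatting values into the SQL string:

```python
import random

def random_envelope(lon_range=(118.0, 120.0), lat_range=(38.0, 40.0), size=1.0):
    """Return (xmin, ymin, xmax, ymax) for a random box of the given size.

    The ranges are hypothetical; adjust them to match your data's extent.
    """
    xmin = random.uniform(lon_range[0], lon_range[1] - size)
    ymin = random.uniform(lat_range[0], lat_range[1] - size)
    return (xmin, ymin, xmin + size, ymin + size)

# Parameterized SQL: psycopg2 substitutes %s placeholders from a tuple.
sql = ("SELECT id FROM test "
       "WHERE ST_MAKEENVELOPE(%s, %s, %s, %s, 4326) && wkb_geometry;")

params = random_envelope()
# cur.execute(sql, params)   # run inside each worker process

assert params[0] < params[2] and params[1] < params[3]
```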
1
2016-09-21T14:16:46Z
[ "python", "postgresql", "parallel-processing" ]
Python Sockets, Advanced Chat Box
39,618,547
<p>I want to create a server that <strong>handles</strong> a lot of clients at the same time (<strong>handles</strong>: receiving data from clients and sending data to all clients at the same time).</p> <p>Actually I'm trying to create a chat box. The program will work like this:</p> <p>1) There's going to be a server which handles the clients.</p> <p>2) More than one client can join the server.</p> <p>3) Clients send messages (strings) to the server.</p> <p>4) The server receives a message from a client and then sends it to all the clients except the one it got it from.</p> <p>And this is how the clients will communicate with each other. No private messages available. When someone hits enter, all the clients will see the message on their screen.</p> <p>The client module is easy to make, because the client communicates with only one socket (the server).</p> <p>The server module, on the other hand, is really complicated, and I don't know how to do it (I do know about threads).</p> <p>This is my attempt:</p> <pre><code>import socket, threading class Server: def __init__(self, ip = "", port = 5050): '''Server Constructor. If __init__ returns None, then you can use self.error to print the specified error message.''' #Error message. self.error = "" #Creating a socket object. self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Trying to bind it. try: self.server.bind( (ip, port) ) pass #Failed, because socket has been shut down. except OSError : self.error = "The server socket has been shut down." return None #Failed, because socket has been forcibly reset. except ConnectionResetError: self.error = "The server socket has been forcibly reset." return None #Start Listening. self.server.listen() #_____Other Variables_____# #A flag to know when to shut down thread loops. self.running = True #Store clients here. self.clients = [] #_____Other Variables_____# #Start accepting clients. thread = threading.Thread(target = self.acceptClients) thread.start() #Start handling the clients. self.clientHandler() #Accept Clients. def acceptClients(self): while self.running: self.clients.append( self.server.accept() ) #Close the server. self.server.close() #Handle clients. def clientHandler(self): while self.running: for client in self.clients: sock = client[0] addr = client[1] #Receive at most 1 MB of data. #The problem is that recv will block the loop!!! data = sock.recv(1024 ** 2) </code></pre> <p>As you can see, I accept clients in a thread so that server.accept() won't block the program, and I store the clients in a list.</p> <p>But the problem is in the <strong>clientHandler</strong>. How am I going to recv from all clients at the same time? The first recv will block the loop!!!</p> <p>I also tried to start a new thread (clientHandler) for every new client, but the problem was the synchronization.</p> <p>And what about send? The server must send data to all the clients, so the clientHandler is not yet finished. But if I mix the <strong>recv</strong> and <strong>send</strong> methods, the problem becomes even more complicated.</p> <p>So what is the proper and best way to do this? I'd also like an example.</p>
0
2016-09-21T13:51:30Z
39,619,882
<p>Multithreading is great when the different clients are independent of each other: you write your code as if only one client existed and you start a thread for each client.</p> <p>But here, what comes from one client must be sent to the others. One thread per client will certainly lead to a synchronization nightmare. So let's call <code>select</code> to the rescue! <code>select.select</code> lets you poll a list of sockets and returns as soon as one is ready. Here you can just build a list containing the listening socket and all the accepted ones (that part is initially empty...):</p> <ul> <li>when the listening socket is ready for read, accept a new socket and add it to the list</li> <li>when another socket is ready for read, read some data from it. If you read 0 bytes, its peer has been shut down or closed: close it and remove it from the list</li> <li>if you have read something from one accepted socket, loop over the list, skipping the listening socket and the one from which you have read, and send the data to every other one</li> </ul> <p>Code could be (more or less):</p> <pre><code> main = socket.socket() # create the listening socket main.bind((addr, port)) main.listen(5) socks = [main] # initialize the list and optionally count the accepted sockets count = 0 mx = 5 # max number of accepted clients (pick any limit) while True: r, w, x = select.select(socks, [], socks) if main in r: # a new client s, addr = main.accept() if count == mx: # reject (optionally) if max number of clients reached s.close() else: socks.append(s) # append the new socket to the list count += 1 elif len(r) &gt; 0: data = r[0].recv(1024) # an accepted socket is ready: read if len(data) == 0: # nothing to read: close it r[0].close() socks.remove(r[0]) count -= 1 else: for s in socks[1:]: # send the data to any other socket if s != r[0]: s.send(data) elif main in x: # close if exceptional condition met (optional) break elif len(x) &gt; 0: x[0].close() socks.remove(x[0]) count -= 1 # if the loop ends, close everything for s in socks[1:]: s.close() main.close() </code></pre> <p>You will certainly need to implement a mechanism to ask the server to stop, and to test all of that, but it should be a starting point.</p>
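To see the readiness mechanism from the answer above in isolation, here is a minimal self-contained demo using `socket.socketpair()` (no real network involved); `select.select` only reports a socket as readable once its peer has written something:

```python
import select
import socket

# A pair of connected sockets standing in for server side and client side.
a, b = socket.socketpair()

# Nothing written yet: select with a short timeout reports no readable sockets.
r, _, _ = select.select([a], [], [], 0.1)
assert r == []

b.sendall(b"hello")

# Now 'a' is reported readable, and recv will not block.
r, _, _ = select.select([a], [], [], 1.0)
assert a in r
assert a.recv(16) == b"hello"

a.close()
b.close()
```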
1
2016-09-21T14:50:40Z
[ "python", "multithreading", "sockets" ]
39,622,680
<p><em>This is my final program and it works like a charm.</em></p> <p><strong>Server.py</strong></p> <pre><code>import socket, select class Server: def __init__(self, ip = "", port = 5050): '''Server Constructor. If __init__ returns None, then you can use self.error to print the specified error message.''' #Error message. self.error = "" #Creating a socket object. self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Trying to bind it. try: self.server.bind( (ip, port) ) #Failed, because socket has been shut down. except OSError : self.error = "The server socket has been shut down." #Failed, because socket has been forcibly reset. except ConnectionResetError: self.error = "The server socket has been forcibly reset." #Start Listening. self.server.listen() #_____Other Variables_____# #A flag to know when to shut down thread loops. self.running = True #Store the sockets here. self.sockets = [self.server] #_____Other Variables_____# #Start Handling the sockets. self.handleSockets() #Handle Sockets. def handleSockets(self): while True: r, w, x = select.select(self.sockets, [], self.sockets) #If server is ready to accept. if self.server in r: client, address = self.server.accept() self.sockets.append(client) #Elif a client sent data. elif len(r) &gt; 0: #Receive data. try: data = r[0].recv( 1024 ) #If the client disconnects suddenly. except ConnectionResetError: r[0].close() self.sockets.remove( r[0] ) print("A user has been disconnected forcibly.") continue #Connection has been closed or lost. if len(data) == 0: r[0].close() self.sockets.remove( r[0] ) print("A user has been disconnected.") #Else send the data to all users. else: #For all sockets except the server. for client in self.sockets[1:]: #Do not send to yourself. 
if client != r[0]: client.send(data) server = Server() print("Errors:",server.error) </code></pre> <p><strong>Client.py</strong></p> <pre><code>import socket, threading from tkinter import * class Client: def __init__(self, ip = "192.168.1.3", port = 5050): '''Client Constructor. If __init__ returns None, then you can use self.error to print the specified error message.''' #Error message. self.error = "" #Creating a socket object. self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #Trying to connect. try: self.server.connect( (ip, port) ) #Failed, because socket has been shut down. except OSError : self.error = "The client socket has been shut down." return #Failed, because socket has been forcibly reset. except ConnectionResetError: self.error = "The client socket has been forcibly reset." return #Failed, because the server refused the connection. except ConnectionRefusedError: self.error = "The server socket refuses the connection." return #_____Other Variables_____# #A flag to know when to shut down thread loops. self.running = True #_____Other Variables_____# #Start the GUI Interface. def startGUI(self): #Initializing tk. screen = Tk() screen.geometry("200x100") #Tk variable. self.msg = StringVar() #Creating widgets. entry = Entry( textvariable = self.msg ) button = Button( text = "Send", command = self.sendMSG ) #Packing widgets. entry.pack() button.pack() screen.mainloop() #Send the message. def sendMSG(self): self.server.send( str.encode( self.msg.get() ) ) #Receive messages. def recvMSG(self): while self.running: data = self.server.recv(1024) print( bytes.decode(data) ) #New client. main = Client() print("Errors:", main.error) #Start a thread with the recvMSG method. thread = threading.Thread(target = main.recvMSG) thread.start() #Start the gui. main.startGUI() #Close the connection when the program terminates and stop threads. 
main.running = False main.server.close() </code></pre> <p>The program works fine, exactly as I wanted.</p> <p>But I have some more questions.</p> <p><strong>r, w, x = select.select(self.sockets, [], self.sockets)</strong></p> <p><strong>r</strong> is a list which contains all the ready sockets. But I did not understand what <strong>w</strong> and <strong>x</strong> are.</p> <p>The <strong>first</strong> parameter is the sockets list and the <strong>second</strong> is the accepted clients, but what is the <strong>third</strong> parameter? Why am I passing the sockets list again?</p>
0
2016-09-21T17:12:59Z
[ "python", "multithreading", "sockets" ]
django admin custom view in model properties
39,618,586
<p>I have such models:</p> <pre><code>class Product(models.Model): ... id = models.IntegerField(unique=True) ... class Question(models.Model): product = models.ForeignKey(Product, related_name='question', null=True) answer = models.ForeignKey(Answer, related_name='question', blank=True, null=True) user = models.ForeignKey(User, null=True) text = models.TextField(null=True) ... class Answer(models.Model): user = models.ForeignKey(User, null=True) text = models.TextField(null=True) ... </code></pre> <p>All these models are registered in the Django admin. How can I get a custom report table while editing one of the Questions (/admin/qa/question/1/change/):</p> <pre><code>... editable standard fields from the Question model ... non-standard report (without editable fields): all questions: related answers to them User: Question(related to a product) - User: Answer to it User: Question(if it exists) - User: answer to it </code></pre> <p>Is it possible in the admin site?</p>
0
2016-09-21T13:53:11Z
39,619,301
<p>You'll need to create a custom <strong>ModelAdmin</strong> for <strong>Questions</strong> and override the <strong>form</strong> property, as explained at <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.form" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.form</a>.</p> <p>In your case, you should get the dynamically created form using <strong>ModelAdmin.get_form()</strong> and add the report you want to it, using Django's form framework.</p>
1
2016-09-21T14:21:46Z
[ "python", "django", "django-models", "django-admin", "django-views" ]
Impute missing data, while forcing correlation coefficient to remain the same
39,618,648
<p>Consider the following (Excel) dataset:</p> <pre><code>m | r ----|------ 2.0 | 3.3 0.8 | | 4.0 1.3 | 2.1 | 5.2 | 2.3 | 1.9 2.5 | 1.2 | 3.0 2.0 | 2.6 </code></pre> <p><strong>My goal is to fill in missing values using the following condition:</strong></p> <blockquote> <p>Denote as R the pairwise correlation between the above two columns (around 0.68). Denote as R* the correlation <strong>after</strong> the empty cells have been filled in. <strong>Fill in the table so that (R - R*)^2 = 0</strong>. That is, I want to keep the correlation structure of the data intact.</p> </blockquote> <p>So far I have done it using Matlab:</p> <pre><code>clear all; m = xlsread('data.xlsx','A2:A11') ; r = xlsread('data.xlsx','B2:B11') ; rho = corr(m,r,'rows','pairwise'); x0 = [1,1,1,1,1,1]; lb = [0,0,0,0,0,0]; f = @(x)my_correl(x,rho); SOL = fmincon(f,x0,[],[],[],[],lb) </code></pre> <p>where the function <code>my_correl</code> is:</p> <pre><code>function X = my_correl(x,rho) sum_m = (11.9 + x(1) + x(2) + x(3)); sum_r = (22.3 + x(4) + x(5) + x(6)); avg_m = (11.9 + x(1) + x(2) + x(3))/8; avg_r = (22.3 + x(4) + x(5) + x(6))/8; rho_num = 8*(26.32 + 4*x(1) + 2.3*x(2) + 1.9*x(3) + 0.8*x(4) + 1.3*x(5) + 2.5*x(6)) - sum_m*sum_r; rho_den = sqrt(8*(22.43 + (4*x(1))^2 + (2.3*x(2))^2 + (1.9*x(3))^2) - sum_m^2)*sqrt(8*(78.6 + (0.8*x(4))^2 + (1.3*x(5))^2 + (2.5*x(6))^2) - sum_r^2); X = (rho - rho_num/rho_den)^2; end </code></pre> <p>This function computes the correlation manually, where each missing value is a variable <code>x(i)</code>.</p> <p><strong>The problem: my actual dataset has more than 20,000 observations.</strong> There is no way I can create that rho formula manually.</p> <p><strong>How can I fill in my dataset?</strong></p> <p>Note 1: I am open to using alternative languages like Python, Julia, or R. Matlab is just my default.</p> <p>Note 2: a 100-point bounty will be awarded to the answer. I promise.</p>
7
2016-09-21T13:55:39Z
39,622,066
<p>This is how I would approach it, with an implementation in R provided: </p> <p>There is not a unique solution for imputing the missing data points, such that the pairwise correlation of the complete (imputed) data is equal to the pairwise correlation of the incomplete data. So to find a 'good' solution rather than just 'any' solution, we can introduce an additional criteria that the complete imputed data should also share the same linear regression with the original data. This leads us to a fairly simple approach. </p> <ol> <li>calculate a linear regression model for the original data. </li> <li>find the imputed values for missing values that would lie exactly on this regression line. </li> <li>generate a random scatter of residuals for the imputed values around this regression line</li> <li>scale the imputed residuals to force the correlation of the complete imputed data to be equal to that of the original data</li> </ol> <p>A solution like this in R:</p> <pre><code>library(data.table) set.seed(123) rho = cor(dt$m,dt$r,'pairwise') # calculate linear regression of original data fit1 = lm(r ~ m, data=dt) fit2 = lm(m ~ r, data=dt) # extract the standard errors of regression intercept (in each m &amp; r direction) # and multiply s.e. by sqrt(n) to get standard deviation sd1 = summary(fit1)$coefficients[1,2] * sqrt(dt[!is.na(r), .N]) sd2 = summary(fit2)$coefficients[1,2] * sqrt(dt[!is.na(m), .N]) # find where data points with missing values lie on the regression line dt[is.na(r), r.imp := coefficients(fit1)[1] + coefficients(fit1)[2] * m] dt[is.na(m), m.imp := coefficients(fit2)[1] + coefficients(fit2)[2] * r] # generate randomised residuals for the missing data, using the s.d. 
calculated above dt[is.na(r), r.ran := rnorm(.N, sd=sd1)] dt[is.na(m), m.ran := rnorm(.N, sd=sd2)] # function that scales the residuals by a factor x, then calculates how close correlation of imputed data is to that of original data obj = function(x, dt, rho) { dt[, r.comp := r][, m.comp := m] dt[is.na(r), r.comp := r.imp + r.ran*x] dt[is.na(m), m.comp := m.imp + m.ran*x] rho2 = cor(dt$m.comp, dt$r.comp,'pairwise') (rho-rho2)^2 } # find the value of x that minimises the discrepancy of imputed versus original correlation fit = optimize(obj, c(-5,5), dt, rho) x=fit$minimum dt[, r.comp := r][, m.comp := m] dt[is.na(r), r.comp := r.imp + r.ran*x] dt[is.na(m), m.comp := m.imp + m.ran*x] rho2 = cor(dt$m.comp, dt$r.comp,'pairwise') (rho-rho2)^2 # check that rho2 is approximately equal to rho </code></pre> <p>As a final check, calculate the linear regression of the complete imputed data and plot to show that the regression line is the same as for the original data. Note that the plot below was for the large data set shown below, to demonstrate use of this method on large data.</p> <pre><code>fit.comp = lm(r.comp ~ m.comp, data=dt) plot(dt$m.comp, dt$r.comp) points(dt$m, dt$r, col="red") abline(fit1, col="green") abline(fit.comp, col="blue") mtext(paste(" Rho =", round(rho,5)), at=-1) mtext(paste(" Rho2 =", round(rho2, 5)), at=6) </code></pre> <p><a href="http://i.stack.imgur.com/nH8fm.png" rel="nofollow"><img src="http://i.stack.imgur.com/nH8fm.png" alt="enter image description here"></a></p> <p><strong>DATA</strong></p> <p>original toy data from OP example:</p> <pre><code>dt=structure(list(m = c(2, 0.8, NA, 1.3, 2.1, NA, NA, 2.5, 1.2, 2), r = c(3.3, NA, 4, NA, 5.2, 2.3, 1.9, NA, 3, 2.6)), .Names = c("m", "r"), row.names = c(NA, -10L), class = c("data.table", "data.frame")) </code></pre> <p>A larger data set to demonstrate on big data</p> <pre><code>dt = data.table(m=rnorm(1e5, 3, 2))[, r:=1.5 + 1.1*m + rnorm(1e5,0,2)] dt[sample(.N, 3e4), m:=NA] dt[sample(which(!is.na(m)), 3e4), r := NA] 
</code></pre>
5
2016-09-21T16:38:12Z
[ "python", "matlab", "julia-lang" ]
Pandas convert columns type from list to np.array
39,618,678
<p>I'm trying to apply a function to a pandas dataframe; the function requires two np.arrays as input and fits them using a well defined model.</p> <p>The point is that I'm not able to apply this function starting from the selected columns, since their "rows" contain lists read from a JSON file and not np.arrays.</p> <p>Now, I've tried different solutions:</p> <pre><code>#Here is where I discover the problem train_df['result'] = train_df.apply(my_function(train_df['col1'],train_df['col2'])) #so I've tried to cast the Series before passing them to the function in both these ways: X_col1_casted = train_df['col1'].dtype(np.array) X_col2_casted = train_df['col2'].dtype(np.array) </code></pre> <p>doesn't work.</p> <pre><code>X_col1_casted = train_df['col1'].astype(np.array) X_col2_casted = train_df['col2'].astype(np.array) </code></pre> <p>doesn't work.</p> <p>What I'm now thinking of doing is a long procedure like this:</p> <p>starting from the uncast column Series, convert them into list()s, iterate over them, apply the function to the single np.array() elements, and append the results to a temporary list. Once done, I will convert this list into a new column. (Clearly, I don't know if it will work.)</p> <p>Does any of you know how to help me?</p> <p>EDIT: I'll add one example to be clear:</p> <p>The function assumes two np.arrays as input. Now it has two lists, since they are retrieved from a JSON file. The situation is this one:</p> <pre><code>col1 col2 result [1,2,3] [4,5,6] [5,7,9] [0,0,0] [1,2,3] [1,2,3] </code></pre> <p>Clearly the function is not the sum, but my own function. For a moment, assume that this sum can work only starting from arrays and not from lists; what should I do?</p> <p>Thanks in advance </p>
0
2016-09-21T13:56:46Z
39,619,291
<p>Use <code>apply</code> to convert each element to its equivalent array:</p> <pre><code>df['col1'] = df['col1'].apply(lambda x: np.array(x)) type(df['col1'].iloc[0]) numpy.ndarray </code></pre> <p>Data:</p> <pre><code>df = pd.DataFrame({'col1': [[1,2,3],[0,0,0]]}) df </code></pre> <p><a href="http://i.stack.imgur.com/F2F0B.png" rel="nofollow"><img src="http://i.stack.imgur.com/F2F0B.png" alt="Image"></a></p>
2
2016-09-21T14:21:17Z
[ "python", "pandas", "numpy", "dataframe", "casting" ]
Converting list of list to dictionary
39,618,805
<p>I have the following list of lists:</p> <pre><code>[[u'3', u'4'], [u'4', u'5'], [u'7', u'8'], [u'1', u'2'], [u'2', u'3'], [u'6', u'7'], [u'5', u'6']] </code></pre> <p>I want to obtain:</p> <pre><code>{3:'4', 4:'5', 7:'8', 1:'2', 2:'3', 6:'7', 5:'6'} </code></pre> <p>The list could be long, so it needs to be as efficient as possible.</p> <p>The first element of each "pair" <code>[[first, second], ...]</code> is unique, so we can make a dictionary from it.</p> <p>I tried the following, but I think it will be slow:</p> <pre><code>def getdict(l): result = {} for e in l: result[int(e[0])] = e[1] return result </code></pre>
-4
2016-09-21T14:02:02Z
39,618,854
<p>You can use a <em>dictionary comprehension</em>:</p> <pre><code>&gt;&gt;&gt; l = [[u'3', u'4'], [u'4', u'5'], [u'7', u'8'], [u'1', u'2'], [u'2', u'3'], [u'6', u'7'], [u'5', u'6']] &gt;&gt;&gt; {int(key): value for key, value in l} {1: u'2', 2: u'3', 3: u'4', 4: u'5', 5: u'6', 6: u'7', 7: u'8'} </code></pre> <p>Note that you would "lose" duplicates, as in @Kevin's example:</p> <pre><code>&gt;&gt;&gt; l = [['1', '2'], ['1', '3']] &gt;&gt;&gt; {int(key): value for key, value in l} {1: '3'} </code></pre>
3
2016-09-21T14:03:43Z
[ "python", "algorithm", "list", "dictionary" ]
Converting list of list to dictionary
39,618,805
<p>I have the following list of lists:</p> <pre><code>[[u'3', u'4'], [u'4', u'5'], [u'7', u'8'], [u'1', u'2'], [u'2', u'3'], [u'6', u'7'], [u'5', u'6']] </code></pre> <p>I want to obtain:</p> <pre><code>{3:'4', 4:'5', 7:'8', 1:'2', 2:'3', 6:'7', 5:'6'} </code></pre> <p>The list could be long, so it needs to be as efficient as possible.</p> <p>The first element of each "pair" <code>[[first, second], ...]</code> is unique, so we can make a dictionary from it.</p> <p>I tried the following, but I think it will be slow:</p> <pre><code>def getdict(l): result = {} for e in l: result[int(e[0])] = e[1] return result </code></pre>
-4
2016-09-21T14:02:02Z
39,618,860
<pre><code>my_dict = {} for item in my_list: my_dict[int(item[0])] = item[1] </code></pre>
1
2016-09-21T14:03:56Z
[ "python", "algorithm", "list", "dictionary" ]
Plotting timestampt data from CSV using matplotlib
39,618,881
<p>I am trying to plot data from a CSV file using matplotlib. There is one column of values against a timestamp:</p> <pre><code>26-08-2016 00:01 0.062964691 26-08-2016 00:11 0.047209214 26-08-2016 00:21 0.047237823 </code></pre> <p>I have only been able to create a simple plot using integers with the code below, which doesn't work when the data contains timestamps. What do I need to add? This may seem simple, but I am pressed for time :/ thanks!</p> <pre><code>from matplotlib import pyplot as plt from matplotlib import style import numpy as np import datetime as dt x,y = np.loadtxt('I112-1.csv', unpack=True, delimiter = ',') plt.plot(x,y) plt.title('Title') plt.ylabel('Y axis') plt.xlabel('X axis') plt.show() </code></pre>
0
2016-09-21T14:04:33Z
39,619,423
<p>Here's my example for this problem:</p> <pre><code>import pandas as pd from io import StringIO from datetime import datetime %matplotlib inline import matplotlib import matplotlib.pyplot as plt data_file = StringIO(""" time,value 26-08-2016 00:01,0.062964691 26-08-2016 00:11,0.047209214 26-08-2016 00:21,0.047237823""") df = pd.read_table(data_file,delimiter=",") df['datetime']= df.time.map(lambda l: datetime.strptime(l, '%d-%m-%Y %H:%M')) ax = df.set_index("datetime",drop=False)[['value','datetime']].plot(title="Title",yticks=df.value) </code></pre> <p><a href="http://i.stack.imgur.com/rGqBq.png" rel="nofollow"><img src="http://i.stack.imgur.com/rGqBq.png" alt="enter image description here"></a></p>
0
2016-09-21T14:28:05Z
[ "python", "csv", "matplotlib", "plot" ]
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?
39,618,943
<p>I know that most decimals don't have an exact floating point representation (<a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a>).</p> <p>But I don't see why <code>4*0.1</code> is printed nicely as <code>0.4</code>, but <code>3*0.1</code> isn't, when both values actually have ugly decimal representations:</p> <pre><code>&gt;&gt;&gt; 3*0.1 0.30000000000000004 &gt;&gt;&gt; 4*0.1 0.4 &gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') &gt;&gt;&gt; Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625') </code></pre>
148
2016-09-21T14:07:21Z
39,619,388
<p><code>repr</code> (and <code>str</code> in Python 3) will put out as many digits as required to make the value unambiguous. In this case the result of the multiplication <code>3*0.1</code> isn't the closest value to 0.3 (0x1.3333333333333p-2 in hex), it's actually one LSB higher (0x1.3333333333334p-2) so it needs more digits to distinguish it from 0.3.</p> <p>On the other hand, the multiplication <code>4*0.1</code> <em>does</em> get the closest value to 0.4 (0x1.999999999999ap-2 in hex), so it doesn't need any additional digits.</p> <p>You can verify this quite easily:</p> <pre><code>&gt;&gt;&gt; 3*0.1 == 0.3 False &gt;&gt;&gt; 4*0.1 == 0.4 True </code></pre> <p>I used hex notation above because it's nice and compact and shows the bit difference between the two values. You can do this yourself using e.g. <code>(3*0.1).hex()</code>. If you'd rather see them in all their decimal glory, here you go:</p> <pre><code>&gt;&gt;&gt; Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') &gt;&gt;&gt; Decimal(0.3) Decimal('0.299999999999999988897769753748434595763683319091796875') &gt;&gt;&gt; Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625') &gt;&gt;&gt; Decimal(0.4) Decimal('0.40000000000000002220446049250313080847263336181640625') </code></pre>
73
2016-09-21T14:26:14Z
[ "python", "floating-point", "rounding", "floating-accuracy", "ieee-754" ]
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?
39,618,943
<p>I know that most decimals don't have an exact floating point representation (<a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a>).</p> <p>But I don't see why <code>4*0.1</code> is printed nicely as <code>0.4</code>, but <code>3*0.1</code> isn't, when both values actually have ugly decimal representations:</p> <pre><code>&gt;&gt;&gt; 3*0.1 0.30000000000000004 &gt;&gt;&gt; 4*0.1 0.4 &gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') &gt;&gt;&gt; Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625') </code></pre>
148
2016-09-21T14:07:21Z
39,619,467
<p>The simple answer is because <code>3*0.1 != 0.3</code> due to quantization (roundoff) error (whereas <code>4*0.1 == 0.4</code> because multiplying by a power of two is usually an "exact" operation).</p> <p>You can use the <code>.hex</code> method in Python to view the internal representation of a number (basically, the <em>exact</em> binary floating point value, rather than the base-10 approximation). This can help to explain what's going on under the hood.</p> <pre><code>&gt;&gt;&gt; (0.1).hex() '0x1.999999999999ap-4' &gt;&gt;&gt; (0.3).hex() '0x1.3333333333333p-2' &gt;&gt;&gt; (0.1*3).hex() '0x1.3333333333334p-2' &gt;&gt;&gt; (0.4).hex() '0x1.999999999999ap-2' &gt;&gt;&gt; (0.1*4).hex() '0x1.999999999999ap-2' </code></pre> <p>0.1 is 0x1.999999999999a times 2^-4. The "a" at the end means the digit 10 - in other words, 0.1 in binary floating point is <em>very slightly</em> larger than the "exact" value of 0.1 (because the final 0x0.99 is rounded up to 0x0.a). When you multiply this by 4, a power of two, the exponent shifts up (from 2^-4 to 2^-2) but the number is otherwise unchanged, so <code>4*0.1 == 0.4</code>.</p> <p>However, when you multiply by 3, the little tiny difference between 0x0.99 and 0x0.a0 (0x0.07) magnifies into a 0x0.15 error, which shows up as a one-digit error in the last position. This causes 0.1*3 to be <em>very slightly</em> larger than the rounded value of 0.3.</p> <p>Python 3's float <code>repr</code> is designed to be <em>round-trippable</em>, that is, the value shown should be exactly convertible into the original value. Therefore, it cannot display <code>0.3</code> and <code>0.1*3</code> exactly the same way, or the two <em>different</em> numbers would end up the same after round-tripping. Consequently, Python 3's <code>repr</code> engine chooses to display one with a slight apparent error.</p>
287
2016-09-21T14:30:11Z
[ "python", "floating-point", "rounding", "floating-accuracy", "ieee-754" ]
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?
39,618,943
<p>I know that most decimals don't have an exact floating point representation (<a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a>).</p> <p>But I don't see why <code>4*0.1</code> is printed nicely as <code>0.4</code>, but <code>3*0.1</code> isn't, when both values actually have ugly decimal representations:</p> <pre><code>&gt;&gt;&gt; 3*0.1 0.30000000000000004 &gt;&gt;&gt; 4*0.1 0.4 &gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') &gt;&gt;&gt; Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625') </code></pre>
148
2016-09-21T14:07:21Z
39,623,207
<p>Here's a simplified conclusion from other answers.</p> <blockquote> <p>If you check a float on Python's command line or print it, it goes through function <code>repr</code> which creates its string representation.</p> <p>Starting with version 3.2, Python's <code>str</code> and <code>repr</code> use a complex rounding scheme, which prefers nice-looking decimals if possible, but uses more digits where necessary to guarantee bijective (one-to-one) mapping between floats and their string representations.</p> <p>This scheme guarantees that value of <code>repr(float(s))</code> looks nice for simple decimals, even if they can't be represented precisely as floats (eg. when <code>s = "0.1")</code>.</p> <p>At the same time it guarantees that <code>float(repr(x)) == x</code> holds for every float <code>x</code></p> </blockquote>
19
2016-09-21T17:42:10Z
[ "python", "floating-point", "rounding", "floating-accuracy", "ieee-754" ]
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?
39,618,943
<p>I know that most decimals don't have an exact floating point representation (<a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a>).</p> <p>But I don't see why <code>4*0.1</code> is printed nicely as <code>0.4</code>, but <code>3*0.1</code> isn't, when both values actually have ugly decimal representations:</p> <pre><code>&gt;&gt;&gt; 3*0.1 0.30000000000000004 &gt;&gt;&gt; 4*0.1 0.4 &gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') &gt;&gt;&gt; Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625') </code></pre>
148
2016-09-21T14:07:21Z
39,633,884
<p>Not really specific to Python's implementation but should apply to any float to decimal string functions.</p> <p>A floating point number is essentially a binary number, but in scientific notation with a fixed limit of significant figures.</p> <p>The inverse of any number that has a prime number factor that is not shared with the base will always result in a recurring dot point representation. For example 1/7 has a prime factor, 7, that is not shared with 10, and therefore has a recurring decimal representation, and the same is true for 1/10 with prime factors 2 and 5, the latter not being shared with 2; this means that 0.1 cannot be exactly represented by a finite number of bits after the dot point.</p> <p>Since 0.1 has no exact representation, a function that converts the approximation to a decimal point string will usually try to approximate certain values so that they don't get unintuitive results like 0.1000000000004121.</p> <p>Since the floating point is in scientific notation, any multiplication by a power of the base only affects the exponent part of the number. For example 1.231e+2 * 100 = 1.231e+4 for decimal notation, and likewise, 1.00101010e11 * 100 = 1.00101010e101 in binary notation. If I multiply by a non-power of the base, the significant digits will also be affected. For example 1.2e1 * 3 = 3.6e1</p> <p>Depending on the algorithm used, it may try to guess common decimals based on the significant figures only. Both 0.1 and 0.4 have the same significant figures in binary, because their floats are essentially truncations of (8/5)*(2^-4) and (8/5)*(2^-2) respectively. If the algorithm identifies the 8/5 sigfig pattern as the decimal 1.6, then it will work on 0.1, 0.2, 0.4, 0.8, etc. 
It may also have magic sigfig patterns for other combinations, such as the float 3 divided by float 10 and other magic patterns statistically likely to be formed by division by 10.</p> <p>In the case of 3*0.1, the last few significant figures will likely be different from dividing a float 3 by float 10, causing the algorithm to fail to recognize the magic number for the 0.3 constant depending on its tolerance for precision loss.</p> <p>Edit: <a href="https://docs.python.org/3.1/tutorial/floatingpoint.html" rel="nofollow">https://docs.python.org/3.1/tutorial/floatingpoint.html</a></p> <blockquote> <p>Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction. For example, the numbers 0.1 and 0.10000000000000001 and 0.1000000000000000055511151231257827021181583404541015625 are all approximated by 3602879701896397 / 2 ** 55. Since all of these decimal values share the same approximation, any one of them could be displayed while still preserving the invariant eval(repr(x)) == x.</p> </blockquote> <p>There is no tolerance for precision loss: if float x (0.3) is not exactly equal to float y (0.1*3), then repr(x) is not exactly equal to repr(y).</p>
5
2016-09-22T08:25:23Z
[ "python", "floating-point", "rounding", "floating-accuracy", "ieee-754" ]
Deprecated Scikit-learn module prevents joblib from loading it
39,618,985
<p>I have a Hidden Markov Model that has been pickled with joblib using the sklearn.hmm module. Apparently, in version 0.17.x this module has been deprecated and moved to hmmlearn. I am unable to load the model and I get the following error:</p> <blockquote> <p>ImportError: No module named 'sklearn.hmm'</p> </blockquote> <p>I have tried to revert back to version 0.16.x but still cannot load the model. I get the following error:</p> <blockquote> <p>ImportError: libopenblas.so.0: cannot open shared object file: No such file or directory</p> </blockquote> <p>I do not have access to the source code to recreate the model and re-pickle it</p> <p>I am running Python 3.5</p> <p>Has anyone else experienced this problem and have you found a solution? Does anyone know if scikit-learn has any way to guarantee persistence since the deprecation?</p>
0
2016-09-21T14:08:53Z
39,621,200
<p>After reverting to scikit-learn 0.16.x, I just needed to install OpenBLAS for Ubuntu. It appears that the problem was more a feature of the operating system than of Python.</p>
0
2016-09-21T15:49:27Z
[ "python", "scikit-learn", "joblib" ]
Why look ahead is returning matches for time-stamp
39,619,098
<p>I am trying to write a script in Python for some post-processing. I have a file that contains messages with a time-stamp. I want to extract all the messages into a list.<br> Regex - start from "message" until the next time-stamp. </p> <pre><code>findallItems = re.findall(r'(?s)((?&lt;=message).*?(?=((\d{4})\-((0[1-9])|(1[0-2]))\-((0[1-9])|(1[0-2]))|\Z)))', fileread) </code></pre> <p>This works fine, but it also returns time-stamps as matches. How can I return only the message, without the time-stamps?</p> <p>If I use literal text in the lookahead instead, it works fine. For example: </p> <pre><code>findallItems = re.findall(r'(?s)((?&lt;=message).*?(?=message|\Z))',fileread) </code></pre>
1
2016-09-21T14:13:48Z
39,619,230
<p>You need to remove unnecessary capturing parentheses and convert others to non-capturing:</p> <pre><code>findallItems = re.findall(r'(?s)(?&lt;=message).*?(?=(?:\d{4}-(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-2])|\Z))', fileread) </code></pre> <p>See <a href="https://regex101.com/r/yE7iF1/1" rel="nofollow">this regex demo</a></p> <p>However, you may just keep 1 capturing group over your necessary pattern and <code>re.findall</code> will only return this group value:</p> <pre><code>(?s)message(.*?)(?:\d{4}-(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-2])|\Z) ^ ^ </code></pre> <p>See <a href="https://regex101.com/r/yE7iF1/2" rel="nofollow">another regex demo</a></p>
1
2016-09-21T14:19:03Z
[ "python", "regex" ]
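The single-capturing-group variant from the answer above can be sanity-checked on a small invented sample (the file contents and dates below are made up for illustration; note that, like the question's pattern, the "day" part of the date is restricted to 01-12):

```python
import re

# invented sample log: messages introduced by the word "message",
# separated by ISO-like time-stamps
fileread = (
    "2016-03-04 message first payload here\n"
    "2016-05-12 message second payload\n"
)

# keep exactly one capturing group over the message text;
# re.findall then returns only that group, never the time-stamps
pattern = r'(?s)message(.*?)(?:\d{4}-(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-2])|\Z)'
messages = [m.strip() for m in re.findall(pattern, fileread)]
print(messages)  # ['first payload here', 'second payload']
```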
Cast to Array failed with moongoose and dict
39,619,118
<p>I have a problem inserting data with Mongoose into MongoDB.</p> <p>Here is my db.js model : </p> <pre><code>var Appointment = new Schema({ date: Date, coach: ObjectId, complement: String, isOwner: Boolean, fiter : ObjectId, fiters: [ { user: ObjectId, isOwner: Boolean, status: String, invitationDate: Date } ], place: ObjectId, objectif : ObjectId, pricing: Number, status: String, ratings: [ { date: Date, user: ObjectId, score: Number, comment: String, target: ObjectId, targetType: String } ], annulation : Boolean, late: Number, log: [{ logType: String, date: Date, user: ObjectId, details: String, relatedTo: ObjectId }] }, { timestamps: true }); </code></pre> <p>Here is my Python test script: </p> <pre><code>appointment = { "_id":idFiter, "date": "2016-09-25T00:00:00.0000000Z", "coach":"57dfd22f7f8effc700bfa16f", "fiters" : [ { "user": "57da891db39797707093c6e1", "isOwner": False, "status": "invite", "invitationDate": "2016-09-25T00:00:00.0000000Z", }], "place" : "57d66a5b73c0ab6c007beb74", "objectif": "57e28b64cae2161f33b641e3", } r = requests.post("http://127.0.0.1:8010/appointment/", data=appointment,headers=headers) print(r.status_code) print(r.content) </code></pre> <p>and here is my entry point in Node.js with Express:</p> <pre><code>router.post('/', authenticate.passport.authenticate('bearer', { session: false }), function(req, res) { appointmentToInsert = { date : req.body.date, coach : req.body.coach, fiter : req.body._id, fiters : req.body.fiters, place : req.body.place, objectif : req.body.objectif, isOwner : true, }; new Appointment(appointmentToInsert).save(function (error, appointment) { if (error == null) { res.status(200).send(appointment); } else { console.log(error); res.status(500).send(error); } }); }); </code></pre> <p>Here is the error:</p> <pre><code>{ [ValidationError: Appointment validation failed] message: 'Appointment validation failed', name: 'ValidationError', errors: { fiters: { [CastError: Cast to Array failed for value "[ 
'status', 'isOwner', 'invitationDate', 'user' ]" at path "fiters"] message: 'Cast to Array failed for value "[ \'status\', \'isOwner\', \'invitationDate\', \'user\' ]" at path "fiters"', name: 'CastError', kind: 'Array', value: [Object], path: 'fiters', reason: [Object] } } } </code></pre> <p>So the error seems to come from the <code>fiters</code> dict field, but I don't understand why; if anyone has any clue, please share.</p> <p>Thanks and regards</p>
1
2016-09-21T14:14:48Z
39,630,674
<p>Your Python script is only sending the keys of the dictionary for <code>fiters</code>; try adding <code>.items()</code> to send 2-tuples. I'm not exactly sure which format your ORM expects.</p> <p>If that doesn't work, JSON can also be used to pass complex structures through POST.</p>
1
2016-09-22T05:07:02Z
[ "python", "node.js", "express" ]
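The behaviour described in the answer above (only the dict keys arriving on the server) can be reproduced offline. Posting with <code>data=</code> form-encodes the payload with <code>urlencode</code>-style semantics, and when a value being urlencoded is itself a dict, iterating it yields only its keys. A stdlib reconstruction (the payload is a cut-down version of the one in the question; the exact encoding path inside requests is an assumption, but it ends in the same <code>urlencode(..., doseq=True)</code> call):

```python
from urllib.parse import urlencode

# cut-down version of the question's nested payload
fiter = {"user": "57da891db39797707093c6e1", "isOwner": False, "status": "invite"}

# with doseq=True, urlencode iterates the dict value, which yields only
# its keys: exactly the ['user', 'isOwner', 'status'] seen in the CastError
body = urlencode([("fiters", fiter)], doseq=True)
print(body)  # fiters=user&fiters=isOwner&fiters=status
```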
Cast to Array failed with moongoose and dict
39,619,118
<p>I have a problem inserting data with Mongoose into MongoDB.</p> <p>Here is my db.js model : </p> <pre><code>var Appointment = new Schema({ date: Date, coach: ObjectId, complement: String, isOwner: Boolean, fiter : ObjectId, fiters: [ { user: ObjectId, isOwner: Boolean, status: String, invitationDate: Date } ], place: ObjectId, objectif : ObjectId, pricing: Number, status: String, ratings: [ { date: Date, user: ObjectId, score: Number, comment: String, target: ObjectId, targetType: String } ], annulation : Boolean, late: Number, log: [{ logType: String, date: Date, user: ObjectId, details: String, relatedTo: ObjectId }] }, { timestamps: true }); </code></pre> <p>Here is my Python test script: </p> <pre><code>appointment = { "_id":idFiter, "date": "2016-09-25T00:00:00.0000000Z", "coach":"57dfd22f7f8effc700bfa16f", "fiters" : [ { "user": "57da891db39797707093c6e1", "isOwner": False, "status": "invite", "invitationDate": "2016-09-25T00:00:00.0000000Z", }], "place" : "57d66a5b73c0ab6c007beb74", "objectif": "57e28b64cae2161f33b641e3", } r = requests.post("http://127.0.0.1:8010/appointment/", data=appointment,headers=headers) print(r.status_code) print(r.content) </code></pre> <p>and here is my entry point in Node.js with Express:</p> <pre><code>router.post('/', authenticate.passport.authenticate('bearer', { session: false }), function(req, res) { appointmentToInsert = { date : req.body.date, coach : req.body.coach, fiter : req.body._id, fiters : req.body.fiters, place : req.body.place, objectif : req.body.objectif, isOwner : true, }; new Appointment(appointmentToInsert).save(function (error, appointment) { if (error == null) { res.status(200).send(appointment); } else { console.log(error); res.status(500).send(error); } }); }); </code></pre> <p>Here is the error:</p> <pre><code>{ [ValidationError: Appointment validation failed] message: 'Appointment validation failed', name: 'ValidationError', errors: { fiters: { [CastError: Cast to Array failed for value "[ 
'status', 'isOwner', 'invitationDate', 'user' ]" at path "fiters"] message: 'Cast to Array failed for value "[ \'status\', \'isOwner\', \'invitationDate\', \'user\' ]" at path "fiters"', name: 'CastError', kind: 'Array', value: [Object], path: 'fiters', reason: [Object] } } } </code></pre> <p>So the error seems to come from the <code>fiters</code> dict field, but I don't understand why; if anyone has any clue, please share.</p> <p>Thanks and regards</p>
1
2016-09-21T14:14:48Z
39,637,911
<p>The answer was to send <code>json</code> instead of <code>data</code>:</p> <pre><code>appointment = { "_id":idFiter, "date": "2016-09-25T00:00:00.0000000Z", "coach":"57dfd22f7f8effc700bfa16f", "fiters" : [ { "user": "57da891db39797707093c6e1", "isOwner": False, "status": "invite", "invitationDate": "2016-09-25T00:00:00.0000000Z", }], "place" : "57d66a5b73c0ab6c007beb74", "objectif": "57e28b64cae2161f33b641e3", } r = requests.post("http://127.0.0.1:8010/appointment/", json=appointment,headers=headers) print(r.status_code) print(r.content) </code></pre>
1
2016-09-22T11:35:11Z
[ "python", "node.js", "express" ]
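The reason <code>json=</code> works is that the payload is serialized with <code>json.dumps</code>, which keeps the nested list of dicts intact instead of flattening it like form encoding does. A stdlib sketch of the round-trip (that requests uses standard JSON serialization here is the only assumption):

```python
import json

payload = {"fiters": [{"user": "57da891db39797707093c6e1",
                       "isOwner": False, "status": "invite"}]}

# json.dumps preserves the full nesting; the server can recover the
# original structure exactly
body = json.dumps(payload)
restored = json.loads(body)
print(restored == payload)  # True
```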
Plotting direction field in python
39,619,128
<p>I want to plot a direction field for a simple equation: </p> <pre><code>y' = 3 − 2y </code></pre> <p>I have found a similar Matlab problem <a href="http://www.math.tamu.edu/~efendiev/math442_spring04/matlabode.pdf" rel="nofollow">here</a> (1.3). But I do not know how to rewrite it in Python. My last try is: </p> <pre><code>from matplotlib.pyplot import cm import matplotlib.pyplot as plt import numpy as np nx, ny = .3, .3 x = np.arange(-3, 3, nx) y = np.arange(-2, 2, ny) X, Y = np.meshgrid(x, y) dy = X + np.sin(Y) dx = np.ones((10,10)) plot2 = plt.figure() plt.quiver(X, Y, dx, dy, color='Teal', headlength=7) plt.title('Quiver Plot, Single Colour') plt.show(plot2) </code></pre> <p>But I'm getting an error:</p> <pre><code>builtins.ValueError: operands could not be broadcast together with shapes (100,) (280,) </code></pre> <p>I thought it would be very simple, but after a few hours of searching how to plot a simple direction field, I am quite depressed.</p>
0
2016-09-21T14:15:10Z
39,619,305
<p><code>dx</code> and <code>dy</code> must be of the same shape as <code>X</code> and <code>Y</code>.</p> <p>Currently, you have a shape of <code>(14, 20)</code> for <code>X</code>, <code>Y</code> and <code>dy</code>, but <code>(10,10)</code> for <code>dx</code>. </p> <p>If you change the line defining <code>dx</code> to:</p> <pre><code>dx = np.ones(dy.shape) </code></pre> <p>Everything works fine:</p> <p><a href="http://i.stack.imgur.com/uYKqY.png" rel="nofollow"><img src="http://i.stack.imgur.com/uYKqY.png" alt="enter image description here"></a></p>
2
2016-09-21T14:21:56Z
[ "python", "matlab", "numpy", "matplotlib", "scipy" ]
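The shape mismatch behind the error in the question can be checked without plotting anything; only the numpy part matters (numpy assumed installed):

```python
import numpy as np

nx, ny = .3, .3
x = np.arange(-3, 3, nx)       # 20 points
y = np.arange(-2, 2, ny)       # 14 points
X, Y = np.meshgrid(x, y)       # both (14, 20)
dy = X + np.sin(Y)             # (14, 20)

dx_bad = np.ones((10, 10))     # 100 elements; quiver flattens this to (100,)
dx_ok = np.ones(dy.shape)      # (14, 20), matching X, Y and dy (280 elements)

print(X.shape, dy.shape, dx_ok.shape)
```

The error message's `(100,)` and `(280,)` are exactly `dx_bad.size` and `dy.size` after quiver flattens its inputs.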
Plotting direction field in python
39,619,128
<p>I want to plot a direction field for a simple equation: </p> <pre><code>y' = 3 − 2y </code></pre> <p>I have found a similar Matlab problem <a href="http://www.math.tamu.edu/~efendiev/math442_spring04/matlabode.pdf" rel="nofollow">here</a> (1.3). But I do not know how to rewrite it in Python. My last try is: </p> <pre><code>from matplotlib.pyplot import cm import matplotlib.pyplot as plt import numpy as np nx, ny = .3, .3 x = np.arange(-3, 3, nx) y = np.arange(-2, 2, ny) X, Y = np.meshgrid(x, y) dy = X + np.sin(Y) dx = np.ones((10,10)) plot2 = plt.figure() plt.quiver(X, Y, dx, dy, color='Teal', headlength=7) plt.title('Quiver Plot, Single Colour') plt.show(plot2) </code></pre> <p>But I'm getting an error:</p> <pre><code>builtins.ValueError: operands could not be broadcast together with shapes (100,) (280,) </code></pre> <p>I thought it would be very simple, but after a few hours of searching how to plot a simple direction field, I am quite depressed.</p>
0
2016-09-21T14:15:10Z
39,620,591
<p>You could also use the <a href="https://en.wikipedia.org/wiki/Streamlines,_streaklines,_and_pathlines" rel="nofollow">streamlines</a> of the field to give a nice impression of the flow, and colour the curves according to some property of the field (dy in this case). Look at the following example:</p> <pre><code>nx, ny = .3, .3 x = np.arange(-3, 3, nx) y = np.arange(-2, 2, ny) X, Y = np.meshgrid(x, y) dy = X + np.sin(Y) dx = np.ones(dy.shape) color = dy lw = 1 plt.streamplot(X,Y,dx, dy, color=color, density=1., cmap='jet', arrowsize=1) </code></pre> <p>which produces:</p> <p><a href="http://i.stack.imgur.com/PpvBG.png" rel="nofollow"><img src="http://i.stack.imgur.com/PpvBG.png" alt="enter image description here"></a></p>
2
2016-09-21T15:20:44Z
[ "python", "matlab", "numpy", "matplotlib", "scipy" ]
Django saving in a model with User field
39,619,133
<p>I have token auth implemented in django and my models looks like-</p> <pre><code>class Portfolio(models.Model): owner = models.ForeignKey(User, verbose_name='User', null=True) company = models.TextField(null=True) volume = models.IntegerField(blank=True) date = models.DateField(null=True) </code></pre> <p>And to save in this model, I have the following in views-</p> <pre><code>arr = [] contents = request.data user = User.objects.filter(username=request.user) user_is = User(username=user) for i in range(0, len(portfolio_contents)): line = portfolio_contents[i].split(",") get_isin = Endday.objects.get(company=line[0]) datestuff = line[2] datestuff = datestuff[0:10] arr.append(Portfolio(owner=user_is, company=line[0], volume=line[1], date=datestuff)) Portfolio.objects.bulk_create(arr) </code></pre> <p>This code saves the data but when I try to see the data, I get this-</p> <pre><code>[ { "company": "BAL", "volume": 1425, "date": "2014-02-19", "owner": null }, { "company": "RLD", "volume": 2245, "date": "2014-02-19", "owner": null }, </code></pre> <p>Owner should not be <strong>null</strong> because if I try to <code>print(user.username)</code>, it prints <code>&lt;QuerySet [&lt;User: ku&gt;]&gt;</code>.</p> <p>What seems to be the problem?</p>
0
2016-09-21T14:15:25Z
39,619,215
<p>Your <code>user_is</code> is a <code>User</code> model instance which is not <em>saved</em> in the database:</p> <pre><code>user_is = User(username=user) # ??? username != user </code></pre> <hr> <p>Why not do:</p> <pre><code>user_is = User.objects.get(username=request.user.username) </code></pre> <p>Or simply pass <code>owner=request.user</code> when constructing each <code>Portfolio</code> in your loop.</p>
0
2016-09-21T14:18:36Z
[ "python", "django" ]
Django saving in a model with User field
39,619,133
<p>I have token auth implemented in django and my models looks like-</p> <pre><code>class Portfolio(models.Model): owner = models.ForeignKey(User, verbose_name='User', null=True) company = models.TextField(null=True) volume = models.IntegerField(blank=True) date = models.DateField(null=True) </code></pre> <p>And to save in this model, I have the following in views-</p> <pre><code>arr = [] contents = request.data user = User.objects.filter(username=request.user) user_is = User(username=user) for i in range(0, len(portfolio_contents)): line = portfolio_contents[i].split(",") get_isin = Endday.objects.get(company=line[0]) datestuff = line[2] datestuff = datestuff[0:10] arr.append(Portfolio(owner=user_is, company=line[0], volume=line[1], date=datestuff)) Portfolio.objects.bulk_create(arr) </code></pre> <p>This code saves the data but when I try to see the data, I get this-</p> <pre><code>[ { "company": "BAL", "volume": 1425, "date": "2014-02-19", "owner": null }, { "company": "RLD", "volume": 2245, "date": "2014-02-19", "owner": null }, </code></pre> <p>Owner should not be <strong>null</strong> because if I try to <code>print(user.username)</code>, it prints <code>&lt;QuerySet [&lt;User: ku&gt;]&gt;</code>.</p> <p>What seems to be the problem?</p>
0
2016-09-21T14:15:25Z
39,619,312
<p>It's because you didn't save your user, i.e. it doesn't have an id in the database. Also, you need a <code>User</code> instance and not the <code>QuerySet</code> which you'll get from calling <code>User.objects.filter(username=request.user)</code>.</p> <p>Try:</p> <pre><code>user_is = User.objects.get(username=request.user.username) </code></pre>
0
2016-09-21T14:22:15Z
[ "python", "django" ]
Measure border overlap between numpy 2d regions
39,619,187
<p>I have a large numpy 2-d array (10000,10000) with many regions (clustered cells with the same cell value). What I want is to merge neighbouring regions that show more than 35% border overlap. This overlap should be measured by dividing the size of the common border with the neighbour, by the total border size of the region. </p> <p>I know how to detect the neighbouring regions (<a href="http://stackoverflow.com/a/38081569/6199354">Look here</a>), but I have no idea how to measure the border overlap.</p> <p>As I am working with large arrays, a vectorized solution would be ideal.</p> <hr> <h1>Example</h1> <pre><code>#input region_arr=np.array([[1,1,3,3],[1,2,2,3],[2,2,4,4],[5,5,4,4]]) </code></pre> <p><a href="http://i.stack.imgur.com/50q7q.png" rel="nofollow"><img src="http://i.stack.imgur.com/50q7q.png" alt="enter image description here"></a></p> <p>Output of the neighbour detection script is a numpy 2-d array with the region in the first and the neighbour in the second column.</p> <pre><code>#result of neighbour detection &gt;&gt;&gt; region_neighbour=detect_neighbours(region_arr) &gt;&gt;&gt; region_neighbour array([[1, 2], [1, 3], [2, 1], [2, 3], [2, 4], [2, 5], [3, 1], [3, 2], [3, 4], [4, 2], [4, 3], [4, 5], [5, 2], [5, 4]]) </code></pre> <p>I would like to add a column to the result of the neighbour detection, which contains the percentual overlap between the region and its neighbour. <em>Percentual overlap between region 1 and 3 = 1/8 = 0.125 = common border size/total border size of region 1.</em> </p> <p>In this example the desired output would look like this:</p> <pre><code>#output &gt;&gt;&gt; percentual_overlap=measure_border_overlap(region_arr,region_neighbour) &gt;&gt;&gt; percentual_overlap array([[ 1. , 3. , 0.125 ], [ 1. , 2. , 0.375 ], [ 2. , 1. , 0.3 ], [ 2. , 3. , 0.3 ], [ 2. , 4. , 0.2 ], [ 2. , 5. , 0.2 ], [ 3. , 1. , 0.125 ], [ 3. , 2. , 0.25 ], [ 3. , 4. , 0.125 ], [ 4. , 2. , 0.375 ], [ 4. , 3. , 0.125 ], [ 4. , 5. , 0.125 ], [ 5. 
, 2. , 0.333333], [ 5. , 4. , 0.166667]]) </code></pre> <p>With this output it is relatively easy to merge the regions that overlap more than 35% (regions 1 and 2; regions 4 and 2). After the region merging the new array will look like this:</p> <p><a href="http://i.stack.imgur.com/OlU5o.png" rel="nofollow"><img src="http://i.stack.imgur.com/OlU5o.png" alt="enter image description here"></a></p> <h1>Edit</h1> <p>You can calculate the perimeter of each region by applying the function of <a href="http://stackoverflow.com/a/13444457/6199354">pv.</a>. </p>
1
2016-09-21T14:17:29Z
39,634,445
<p>Take a look at this <a href="http://stackoverflow.com/questions/39346545/count-cells-of-adjacent-numpy-regions/39348877#39348877">Count cells of adjacent numpy regions</a> for inspiration. Deciding how to merge based on such information is a problem with multiple answers I think; it may not have a unique solution depending on the order in which you proceed...</p> <pre><code>import numpy as np import numpy_indexed as npi x = region_arr neighbors = np.concatenate([x[:, :-1].flatten(), x[:, +1:].flatten(), x[+1:, :].flatten(), x[:-1, :].flatten()]) centers = np.concatenate([x[:, +1:].flatten(), x[:, :-1].flatten(), x[:-1, :].flatten(), x[+1:, :].flatten()]) border = neighbors != centers (neighbors, centers), counts = npi.count((neighbors[border], centers[border])) region_group = npi.group_by(centers) regions, neighbors_per_region = region_group.sum(counts) fractions = counts / neighbors_per_region[region_group.inverse] for result in zip(centers, neighbors, fractions): print(result) </code></pre>
1
2016-09-22T08:53:18Z
[ "python", "arrays", "numpy", "multidimensional-array", "vectorization" ]
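For comparison, here is a pure-numpy sketch of a `measure_border_overlap` along the lines the question asks for. The choice to count a region's edges on the image boundary towards its total border is an assumption; with that convention most entries of the example output above are reproduced (e.g. 1↔2 = 0.375, 1↔3 = 0.125, 3↔2 = 0.25), though not every one, so treat it as one possible interpretation:

```python
import numpy as np

def measure_border_overlap(region_arr):
    """Rows of (region, neighbour, shared_border / total_border_of_region).

    Assumption: a region's total border counts both edges shared with other
    regions and edges lying on the array boundary.
    """
    a = region_arr
    # every horizontally / vertically adjacent cell pair, in both orders
    pairs = np.concatenate([
        np.stack([a[:, :-1].ravel(), a[:, 1:].ravel()], axis=1),
        np.stack([a[:-1, :].ravel(), a[1:, :].ravel()], axis=1),
    ])
    pairs = np.concatenate([pairs, pairs[:, ::-1]])
    inter = pairs[pairs[:, 0] != pairs[:, 1]]        # edges between regions
    uniq, counts = np.unique(inter, axis=0, return_counts=True)
    # border shared with any neighbour, per region
    regions, inner = np.unique(inter[:, 0], return_counts=True)
    # outside edges per region: each boundary cell appears once per array
    # edge it touches in this concatenation (corner cells twice)
    edge_vals, edge_counts = np.unique(
        np.concatenate([a[0], a[-1], a[:, 0], a[:, -1]]), return_counts=True)
    outer = np.zeros_like(inner)
    # assumes every region on the boundary also has at least one neighbour
    outer[np.searchsorted(regions, edge_vals)] = edge_counts
    total = dict(zip(regions, inner + outer))
    frac = counts / np.array([total[r] for r in uniq[:, 0]])
    return np.column_stack([uniq, frac])

region_arr = np.array([[1, 1, 3, 3], [1, 2, 2, 3], [2, 2, 4, 4], [5, 5, 4, 4]])
overlap = measure_border_overlap(region_arr)
print(overlap)
```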
Python date comparison with string output/format
39,619,219
<p>I'm currently trying to write a script to automate a function at work, but I'm not intimately familiar with Python. I'm trying to take an XML dump and compare a specific entry's date to see if the time has passed or not.</p> <p>The date is in a particular format, given:</p> <pre><code>&lt;3-letter Month&gt; &lt;DD&gt; &lt;HH:MM:SS&gt; &lt;YYYY&gt; &lt;3-letter Timezone&gt; </code></pre> <p>For example:</p> <pre><code>May 14 20:11:20 2014 GMT </code></pre> <p>I've parsed out a string in that raw form, and need to somehow compare it with the current time to find out if the time has passed or not. That said, I'm having a bit of trouble figuring out how I should go about either formatting my text, or choosing the right mask/time format in Python.</p> <p>I've been messing around with different variations of the same basic format:</p> <pre><code>if(trimmed &lt; time.strftime("%x") ): </code></pre> <p>Trimmed is the clean date/time string. Time is derived from <em>import time</em>.</p> <p>Is there a simple way to fix this or will I have to dig into converting the format etc.? I know the above attempt is simplistic, but I'm still very new to Python. Thanks for your time and patience!</p>
0
2016-09-21T14:18:43Z
39,619,771
<p>You should use a combination of <code>gmtime</code> (for GMT time), <code>mktime</code> and <code>datetime</code>.</p> <pre><code>from time import gmtime,mktime from datetime import datetime s = "May 14 20:11:20 2014 GMT" f = "%b %d %H:%M:%S %Y GMT" dt = datetime.strptime(s, f) gmt = datetime.fromtimestamp(mktime(gmtime())) if dt&lt;gmt: print(dt) else: print(gmt) </code></pre>
1
2016-09-21T14:43:57Z
[ "python", "date", "time", "scripting" ]
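On Python 3, an alternative to the mktime/gmtime round-trip in the answer above is to make both datetimes timezone-aware and compare them directly (<code>datetime.timezone</code> is available from Python 3.2; note that <code>%b</code> parsing is locale-dependent, so an English locale is assumed here):

```python
from datetime import datetime, timezone

s = "May 14 20:11:20 2014 GMT"
f = "%b %d %H:%M:%S %Y GMT"

# parse the naive timestamp, then attach UTC since the trailing token is GMT
dt = datetime.strptime(s, f).replace(tzinfo=timezone.utc)
now = datetime.now(timezone.utc)

if dt < now:
    print("the time has passed")
else:
    print("the time is still in the future")
```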
/bin/sh: line 62: to: command not found
39,619,221
<p>I have a python code in which I am calling a shell command. The part of the code where I did the shell command is:</p> <pre><code>try: def parse(text_list): text = '\n'.join(text_list) cwd = os.getcwd() os.chdir("/var/www/html/alenza/hdfs/user/alenza/sree_account/sree_project/src/core/data_analysis/syntaxnet/models/syntaxnet") synnet_output = subprocess.check_output(["echo '%s' | syntaxnet/demo.sh 2&gt;/dev/null"%text], shell = True) os.chdir(cwd) return synnet_output except Exception as e: sys.stdout.write(str(e)) </code></pre> <p>Now, when i run this code on a local file with some sample input (I did <code>cat /home/sree/example.json | python parse.py</code>) it works fine and I get the required output. But I am trying to run the code with an input on my HDFS (the same <code>cat</code> command but input file path is from HDFS) which contains exactly the same type of json entries and it fails with an error:</p> <pre><code>/bin/sh: line 62: to: command not found list index out of range </code></pre> <p>I read similar questions on Stack Overflow and the solution was to include a Shebang line for the shell script that is being called. I do have the shebang line <code>#!/usr/bin/bash</code> in <code>demo.sh</code> script.</p> <p>Also, <code>which bash</code> gives <code>/usr/bin/bash</code>.</p> <p>Someone please elaborate.</p>
0
2016-09-21T14:18:47Z
39,619,413
<p>You rarely, if ever, want to combine passing a list argument with <code>shell=True</code>. Just pass the string:</p> <pre><code>synnet_output = subprocess.check_output("echo '%s' | syntaxnet/demo.sh 2&gt;/dev/null"%(text,), shell=True) </code></pre> <p>However, you don't really need a shell pipeline here. Note that <code>subprocess</code> needs real file objects with a file descriptor, so a <code>StringIO</code> cannot be used as <code>stdin</code>; feed the text through a pipe instead:</p> <pre><code>import os from subprocess import Popen, PIPE with open(os.devnull, "wb") as devnull: proc = Popen(["syntaxnet/demo.sh"], stdin=PIPE, stdout=PIPE, stderr=devnull) synnet_output, _ = proc.communicate(text.encode()) </code></pre>
1
2016-09-21T14:27:44Z
[ "python", "json", "shell", "hadoop", "subprocess" ]
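The pipe-free pattern can be tried out with a harmless stand-in for <code>demo.sh</code>; here <code>cat</code> simply echoes stdin back, which is enough to show that <code>input=</code> delivers the text to the child process without any shell quoting (Python 3.4+ for <code>input=</code>, 3.3+ for <code>DEVNULL</code>; a POSIX <code>cat</code> is assumed):

```python
import subprocess

text = "hello\nsyntaxnet"
# no shell, no quoting problems: the text goes straight to the child's stdin
out = subprocess.check_output(
    ["cat"], input=text.encode(), stderr=subprocess.DEVNULL)
print(out.decode())
```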
/bin/sh: line 62: to: command not found
39,619,221
<p>I have a python code in which I am calling a shell command. The part of the code where I did the shell command is:</p> <pre><code>try: def parse(text_list): text = '\n'.join(text_list) cwd = os.getcwd() os.chdir("/var/www/html/alenza/hdfs/user/alenza/sree_account/sree_project/src/core/data_analysis/syntaxnet/models/syntaxnet") synnet_output = subprocess.check_output(["echo '%s' | syntaxnet/demo.sh 2&gt;/dev/null"%text], shell = True) os.chdir(cwd) return synnet_output except Exception as e: sys.stdout.write(str(e)) </code></pre> <p>Now, when i run this code on a local file with some sample input (I did <code>cat /home/sree/example.json | python parse.py</code>) it works fine and I get the required output. But I am trying to run the code with an input on my HDFS (the same <code>cat</code> command but input file path is from HDFS) which contains exactly the same type of json entries and it fails with an error:</p> <pre><code>/bin/sh: line 62: to: command not found list index out of range </code></pre> <p>I read similar questions on Stack Overflow and the solution was to include a Shebang line for the shell script that is being called. I do have the shebang line <code>#!/usr/bin/bash</code> in <code>demo.sh</code> script.</p> <p>Also, <code>which bash</code> gives <code>/usr/bin/bash</code>.</p> <p>Someone please elaborate.</p>
0
2016-09-21T14:18:47Z
39,624,197
<p>There was a problem with some special characters appearing in the text string that I was inputting to <code>demo.sh</code>. I solved this by storing <code>text</code> into a temporary file and sending the contents of that file to <code>demo.sh</code>.</p> <p>That is: </p> <pre><code>try: def parse(text_list): text = '\n'.join(text_list) cwd = os.getcwd() with open('/tmp/data', 'w') as f: f.write(text) os.chdir("/var/www/html/alenza/hdfs/user/alenza/sree_account/sree_project/src/core/data_analysis/syntaxnet/models/syntaxnet") synnet_output = subprocess.check_output("cat /tmp/data | syntaxnet/demo.sh 2&gt;/dev/null", shell = True) os.chdir(cwd) return synnet_output except Exception as e: sys.stdout.write(str(e)) </code></pre>
0
2016-09-21T18:40:02Z
[ "python", "json", "shell", "hadoop", "subprocess" ]
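The temp-file workaround above can be exercised with a stand-in pipeline (<code>tr</code> plays the role of <code>demo.sh</code>; a POSIX shell is assumed). Because the text is read from a file, its quotes, dollar signs, and percent characters never pass through the shell:

```python
import os
import subprocess
import tempfile

text = "special 'quotes' and $vars and % signs"

# write the text to a temp file so the shell never sees its contents
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write(text)
    path = f.name

# the pipeline consumer only sees the file's contents on stdin
out = subprocess.check_output("cat %s | tr a-z A-Z" % path, shell=True)
os.remove(path)
print(out.decode())  # SPECIAL 'QUOTES' AND $VARS AND % SIGNS
```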
Laplacian sharpening - grey image as result
39,619,222
<p>As many people before me, I am trying to implement an example of image sharpening from Gonzalez and Woods "Digital image processing" book.</p> <p>I create a negative Laplacian kernel (-1, -1, -1; -1, 8, -1; -1, -1,-1) and convolve it with the image, then subtract the result from the original image. (I also tried taking positive Laplacian (1, 1, 1; 1, -8, 1; 1, 1, 1) and adding it to the image). On each stage I perform fitting the results into the (0, 255) range, the normalized Laplacian looks nice and grey as it is supposed.</p> <pre><code>import matplotlib.cm as cm import scipy.misc import scipy.ndimage.filters #Function for plotting abs: pic_n = 1 def show_abs(I, plot_title): plt.title(plot_title) plt.tight_layout() plt.axis('off') plt.imshow(abs(I), cm.gray) #Reading the image into numpy array: A = scipy.misc.imread('moon1.jpg', flatten=True) plt.figure(pic_n) pic_n += 1 show_abs(A, 'Original image') A -= np.amin(A) #map values to the (0, 255) range A *= 255.0/np.amax(A) #Kernel for negative Laplacian kernel = np.ones((3,3))*(-1) kernel[1,1] = 8 #Convolution of the image with the kernel: Lap = scipy.ndimage.filters.convolve(A, kernel) #Laplacian now has negative values in range (-255, 255): print('L', np.amax(Lap), np.amin(Lap)) plt.figure(pic_n) pic_n += 1 show_abs(Lap, 'Laplacian') #Map Laplacian to the (0, 255) range: Lap -= np.amin(Lap) Lap *= 255.0/np.amax(Lap) print('L', np.amax(Lap), np.amin(Lap)) plt.figure(pic_n) pic_n += 1 show_abs(Lap, 'Normalized Laplacian') A += Lap #Add negative Laplacian to the original image print('A', np.amax(A), np.amin(A)) A -= np.amin(A) A *= 255.0/np.amax(A) print('A', np.amax(A), np.amin(A)) plt.figure(pic_n) pic_n += 1 show_abs(A, 'Laplacian filtered img') plt.show() </code></pre> <h2>Original image:</h2> <p><img src="http://i.stack.imgur.com/tM3V5.jpg" alt="Original image"></p> <h2>Results:</h2> <p><img src="http://i.stack.imgur.com/o6Dmy.png" alt="Results"></p> <p>The problem is that the final sharpened image looks 
faded and grey. I tried doing histogram equalization to make it more contrasting, but the result was weird. I thought about applying gamma-correction, but I don't like the voluntary choice of the gamma coefficient.</p> <p>It seems there must be an easy and convenient way of bringing the image back to the original dynamic range. I would appreciate ideas and comments on the code. Thank you!</p>
1
2016-09-21T14:18:48Z
39,620,430
<p>It seems to me that part of the problem has to do with how you are rescaling <code>Lap</code>. I don't think you want to subtract the minimum first - sharpening should decrease the intensity of some pixels as well as increasing that of others. You may also wish to play around with the scaling factor you multiply <code>Lap</code> by in order to control the degree of sharpening (255 might be too extreme).</p> <p>The reason the background looks grey in the final image is probably because after adding the negative Laplacian there will be pixels in the moon that are darker than the background (the magnitude of the Laplacian in this part of the image will be greater since it contains more local structure). This means that you do your rescaling the background pixels will map to some value > 0. If you don't subtract the min from <code>Lap</code> then these darker pixels would have negative values, so you could then clip the pixel values in the resulting image such that they are all > 0. That way you will end up with a pure black background.</p>
0
2016-09-21T15:14:11Z
[ "python", "image-processing", "scipy" ]
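The clip-instead-of-rescale suggestion in the answer above can be seen on a toy array: clipping pins true black at 0, whereas min-max rescaling lifts the darkest (negative) value to 0 and shifts everything else upward, which is what turns the background grey (numpy assumed installed; the sample values are invented):

```python
import numpy as np

# hypothetical pixel values after adding the negative Laplacian
sharpened = np.array([-40.0, 0.0, 120.0, 300.0])

clipped = np.clip(sharpened, 0, 255)   # negatives -> 0, overshoot -> 255
rescaled = (sharpened - sharpened.min()) * 255.0 / (sharpened.max() - sharpened.min())

print(clipped)    # [  0.   0. 120. 255.]
print(rescaled)   # the original 0 (black) maps to 30, i.e. grey
```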
Laplacian sharpening - grey image as result
39,619,222
<p>As many people before me, I am trying to implement an example of image sharpening from Gonzalez and Woods "Digital image processing" book.</p> <p>I create a negative Laplacian kernel (-1, -1, -1; -1, 8, -1; -1, -1,-1) and convolve it with the image, then subtract the result from the original image. (I also tried taking positive Laplacian (1, 1, 1; 1, -8, 1; 1, 1, 1) and adding it to the image). On each stage I perform fitting the results into the (0, 255) range, the normalized Laplacian looks nice and grey as it is supposed.</p> <pre><code>import matplotlib.cm as cm import scipy.misc import scipy.ndimage.filters #Function for plotting abs: pic_n = 1 def show_abs(I, plot_title): plt.title(plot_title) plt.tight_layout() plt.axis('off') plt.imshow(abs(I), cm.gray) #Reading the image into numpy array: A = scipy.misc.imread('moon1.jpg', flatten=True) plt.figure(pic_n) pic_n += 1 show_abs(A, 'Original image') A -= np.amin(A) #map values to the (0, 255) range A *= 255.0/np.amax(A) #Kernel for negative Laplacian kernel = np.ones((3,3))*(-1) kernel[1,1] = 8 #Convolution of the image with the kernel: Lap = scipy.ndimage.filters.convolve(A, kernel) #Laplacian now has negative values in range (-255, 255): print('L', np.amax(Lap), np.amin(Lap)) plt.figure(pic_n) pic_n += 1 show_abs(Lap, 'Laplacian') #Map Laplacian to the (0, 255) range: Lap -= np.amin(Lap) Lap *= 255.0/np.amax(Lap) print('L', np.amax(Lap), np.amin(Lap)) plt.figure(pic_n) pic_n += 1 show_abs(Lap, 'Normalized Laplacian') A += Lap #Add negative Laplacian to the original image print('A', np.amax(A), np.amin(A)) A -= np.amin(A) A *= 255.0/np.amax(A) print('A', np.amax(A), np.amin(A)) plt.figure(pic_n) pic_n += 1 show_abs(A, 'Laplacian filtered img') plt.show() </code></pre> <h2>Original image:</h2> <p><img src="http://i.stack.imgur.com/tM3V5.jpg" alt="Original image"></p> <h2>Results:</h2> <p><img src="http://i.stack.imgur.com/o6Dmy.png" alt="Results"></p> <p>The problem is that the final sharpened image looks 
faded and grey. I tried doing histogram equalization to make it more contrasting, but the result was weird. I thought about applying gamma-correction, but I don't like the voluntary choice of the gamma coefficient.</p> <p>It seems there must be an easy and convenient way of bringing the image back to the original dynamic range. I would appreciate ideas and comments on the code. Thank you!</p>
1
2016-09-21T14:18:48Z
39,659,905
<p>After correcting the code as <strong>ali_m</strong> advised, I apply local histogram equalization - it slows down the code and also adds dependency on the OpenCV library, but the resulting image looks fine.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm import scipy.misc import scipy.ndimage.filters import cv2 #Function for plotting abs: pic_n = 1 def show_abs(I, plot_title): plt.title(plot_title) plt.tight_layout() plt.axis('off') plt.imshow(abs(I), cm.gray) #Reading of the image into numpy array: A0 = scipy.misc.imread('moon1.jpg', flatten=True) A0 -= np.amin(A0)#map values to the (0, 255) range A0 *= 255.0/np.amax(A0) print('Img ', np.amax(A0), np.amin(A0)) #&gt;&gt;&gt; Img 255.0 0.0 #Kernel for negative Laplacian kernel = np.ones((3,3))*(-1) kernel[1,1] = 8 #Convolution of the image with the kernel: Lap = scipy.ndimage.filters.convolve(A0, kernel) #Laplacian now has negative values print('Original Lap', np.amax(Lap), np.amin(Lap)) #&gt;&gt;&gt; Original Lap 1151.0 -1166.0 #Map Laplacian to some new range: Laps = Lap*100.0/np.amax(Lap) #Sharpening factor! 
print('Scaled Lap ', np.amax(Laps), np.amin(Laps)) #&gt;&gt;&gt; Scaled Lap 100.0 -101.303 plt.figure(pic_n) pic_n += 1 plt.subplot(1,2,1) show_abs(Lap, 'Laplacian') plt.subplot(1,2,2) show_abs(Laps, 'Scaled Laplacian') A = A0 + Laps #Add negative Laplacian to the original image print('SharpImg ', np.amax(A), np.amin(A)) #&gt;&gt;&gt; SharpImg 350.917 -81.06 A = abs(A) #Get rid of negative values print('SharpImg abs', np.amax(A), np.amin(A)) A *= 255.0/np.amax(A) print('SharpImg after scaling', np.amax(A), np.amin(A)) #&gt;&gt;&gt; SharpImg abs 350.917 0.0 # Local Histogram Equalization with OpenCV: A_cv2 = A A_cv2 = A_cv2.astype(np.uint8) tile_s0 = 4 tile_s1 = 4 clahe = cv2.createCLAHE(clipLimit=1, tileGridSize=(tile_s0,tile_s1)) A_cv2 = clahe.apply(A_cv2) plt.figure(pic_n) pic_n += 1 plt.subplot(2,1,1) plt.hist(A_cv2) plt.title('Original Histogram') plt.subplot(2,1,2) plt.hist(A_cv2) plt.title('Locally Equalized Histogram') plt.figure(pic_n) pic_n += 1 plt.subplot(1,3,1) show_abs(A0, 'Original image') plt.subplot(1,3,2) show_abs(A, 'Laplacian filtered img') plt.subplot(1,3,3) show_abs(A_cv2, 'Local Hist equalized img') plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/pbZN9.png" rel="nofollow"><img src="http://i.stack.imgur.com/pbZN9.png" alt="Results:"></a></p>
0
2016-09-23T11:40:23Z
[ "python", "image-processing", "scipy" ]
need to click on particular object on website to load more content multiple time using python and chrome driver
39,619,256
<h1>I need to click on a particular object multiple times, but the click executes only once. Please help me.</h1> <pre><code> from selenium import webdriver from selenium.webdriver.common.keys import Keys import time chrome_path=r"C:\Users\Bhanwar\Desktop\New folder (2)\chromedriver.exe" driver =webdriver.Chrome(chrome_path) driver.get("https://priceraja.com/mobile/pricelist/samsung-mobile-price-list-in-india") driver.implicitly_wait(10) i=0 while i&lt;4: driver.find_element_by_css_selector('.loadMore').click() driver.implicitly_wait(10) i+=1 </code></pre>
0
2016-09-21T14:19:44Z
39,622,637
<p>I think that the line <code>driver.implicitly_wait(10)</code> doesn't actually make the driver wait at that point, and because there is an element with a css selector of <code>.loadMore</code> in the page the whole time, that element may be getting clicked faster than the page can handle. Another way to check would be to wait for a css selector of a new element that only appears after the page changes. Each phone item (on this page) has a specific id, so you can check for the id of the last item to see how many items are in the page. The id seems to be "product-itmes-", itmes is correct, it is not items.</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as ec from selenium.webdriver.common.by import By import time chrome_path=r"C:\Users\Bhanwar\Desktop\New folder (2)\chromedriver.exe" driver = webdriver.Chrome(chrome_path) driver.get("https://priceraja.com/mobile/pricelist/samsung-mobile-price-list-in-india") driver.implicitly_wait(10) i=0 wait_time = 5 # renamed from time so it does not shadow the time module by = By.ID hook = "product-itmes-" # The id of one item seems to be this plus # the number of the item button = '.loadMore' while i&lt;4: element_number = 25*i # It looks like there are 25 items added each time, and it starts at 25 WebDriverWait(driver, wait_time).until(ec.presence_of_element_located((by, hook + str(element_number)))) driver.find_element_by_css_selector(button).click() time.sleep(1) # Gives the page time to change i+=1 </code></pre>
0
2016-09-21T17:10:28Z
[ "python", "google-chrome", "selenium" ]
Integer parameter in pandas DataFrame.drop function
39,619,295
<p>In the following line of code</p> <pre><code> X = np.array(df.drop(['label'], 1)) </code></pre> <p>could you please explain what the number <code>1</code> does? </p> <p>From the documentation I understand that the <code>DataFrame.drop</code> function drops the desired column named <code>'label'</code> from the dataframe and returns a new dataframe without this column. But I don't understand what this particular integer parameter <code>1</code> does.</p>
1
2016-09-21T14:21:22Z
39,619,316
<p>It is the parameter <code>axis</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a>; passing <code>1</code> positionally is the same as <code>axis=1</code>. It means the columns specified in the first parameter, <code>labels</code>, are removed from the <code>DataFrame</code>.</p> <p>The keyword name <code>labels</code> is omitted most of the time.<br> The parameter <code>axis</code> can be omitted when removing rows by <code>index</code>, because the default is <code>axis=0</code>. <code>axis=1</code> is sometimes shortened to just <code>1</code> because it is less to type, but it is also less readable.</p> <p>Sample:</p> <pre><code>import pandas as pd df = pd.DataFrame({'label':[1,2,3], 'label1':[4,5,6], 'label2':[7,8,9]}) print (df) label label1 label2 0 1 4 7 1 2 5 8 2 3 6 9 print (df.drop(['label'], 1)) label1 label2 0 4 7 1 5 8 2 6 9 #most commonly used print (df.drop(['label'], axis=1)) label1 label2 0 4 7 1 5 8 2 6 9 print (df.drop(labels=['label'], axis=1)) label1 label2 0 4 7 1 5 8 2 6 9 </code></pre>
2
2016-09-21T14:22:24Z
[ "python", "pandas", "dataframe", "int", "multiple-columns" ]
How to perform a left join in SQLAlchemy?
39,619,353
<p>I have a SQL query which performs a series of left joins on a few tables:</p> <pre><code>SELECT &lt;some attributes&gt; FROM table1 t1 INNER JOIN table2 t2 ON attr = 1 AND attr2 = 1 LEFT JOIN table3 t3 ON t1.Code = t2.Code AND t3.Date_ = t1.Date_ LEFT JOIN table4 t4 ON t4.Code = t1.code AND t4.Date_ = t1.Date_ </code></pre> <p>So far, I have:</p> <pre><code>(sa.select([idc.c.Code]) .select_from( t1.join(t2, and_(t1.c.attr == 1, t2.c.attr2 == 1)) .join(t3, t3.c.Code == t1.c.Code))) </code></pre> <p>but I can't figure out how to make the join a <code>LEFT JOIN</code>.</p>
0
2016-09-21T14:24:26Z
39,669,377
<p>The isouter=True flag will produce a <code>LEFT OUTER JOIN</code> which is the same as a <code>LEFT JOIN</code>.</p>
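For illustration, a minimal sketch with SQLAlchemy Core — the table and column names echo the question, everything else here is assumed. `isouter=True` on `join()` is what renders the `LEFT OUTER JOIN`, and the same flag works inside a `.select_from(...)` chain:

```python
from sqlalchemy import Table, Column, Integer, Date, MetaData, and_

metadata = MetaData()
t1 = Table("table1", metadata,
           Column("Code", Integer), Column("Date_", Date))
t3 = Table("table3", metadata,
           Column("Code", Integer), Column("Date_", Date))

# isouter=True is what turns the join into a LEFT OUTER JOIN
joined = t1.join(t3,
                 and_(t3.c.Code == t1.c.Code, t3.c.Date_ == t1.c.Date_),
                 isouter=True)

print(joined)  # rendered SQL contains "table1 LEFT OUTER JOIN table3 ON ..."
```

Compiling the join (here just via `print`) shows the `LEFT OUTER JOIN` clause directly, without needing a database connection.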
0
2016-09-23T20:53:28Z
[ "python", "sql", "sqlalchemy" ]
Django Tables2 Display Multiple Tables
39,619,369
<p>I've got user records that have related(ish) course and enrollment records. I want to click on a user and see raw data from the user table, course table, and enrollment table in the same page.</p> <p>The process breaks down when I attempt to render the tables.</p> <p>views.py:</p> <pre><code>def explore_related(request, client_id, user_id): client = get_object_or_404(Client, pk=client_id) users = Users.objects.filter(pk=user_id) enrollments = Enrollments.objects.filter(client_id=client_id).filter(userID__in=users.values_list('userID', flat=True)).all() courses = Courses.objects.filter(client_id=client_id).filter(sectionSchoolCode__in=enrollments.values_list('sectionSchoolCode', flat=True)).all() userTable = UserTable(users, prefix='u_') courseTable = CourseTable(courses, prefix='c_') enrollmentTable = EnrollmentTable(enrollments, prefix='e_') payload = { 'userTable': userTable, 'enrollmentTable': enrollmentTable, 'courseTable': courseTable, } return render(request, 'importer/explore_related.html', payload) </code></pre> <p>explore_related.html:</p> <pre><code>{% load render_table from django_tables2 %} &lt;html&gt; &lt;body&gt; {% render_table userTable %} &lt;br&gt; {% render_table courseTable %} &lt;br&gt; {% render_table enrollmentTable %} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>tables.py</p> <pre><code>class UserTable(tables.Table): selection = tables.CheckBoxColumn(accessor='userID', orderable=False) errors = tables.Column() User_ID = tables.LinkColumn( 'importer:explore_related', text=lambda record: record.userID, kwargs={ 'client_id': tables.A('client_id'), 'file_kind': 'user', 'object_id': tables.A('id'), }, empty_values='NULL', ) class Meta: model = Users attrs = {'class': 'paleblue'} exclude = ['id', 'client', 'userID'] sequence = ( 'selection', '...', 'errors' ) class CourseTable(tables.Table): selection = tables.CheckBoxColumn(accessor='pk', orderable=False) errors = tables.Column() class Meta: model = Courses attrs = {'class': 'paleblue'} 
exclude = ['id', 'client'] sequence = ( 'selection', '...', 'errors' ) class EnrollmentTable(tables.Table): selection = tables.CheckBoxColumn(accessor='pk', orderable=False) errors = tables.Column() class Meta: model = Enrollments attrs = {'class': 'paleblue'} exclude = ['id', 'client'] sequence = ( 'selection', '...', 'errors' ) </code></pre>
0
2016-09-21T14:25:05Z
39,619,749
<p>If you use custom table classes, you need to use a <strong>RequestConfig</strong> object to properly set up the table.</p> <p>In your example, it should be enough to add</p> <pre><code>from django_tables2 import RequestConfig RequestConfig(request, paginate=False).configure(userTable) RequestConfig(request, paginate=False).configure(courseTable) RequestConfig(request, paginate=False).configure(enrollmentTable) </code></pre> <p>before adding them to <em>payload</em>.</p>
0
2016-09-21T14:42:40Z
[ "python", "django", "django-tables2" ]
Read and Write file from MS Azure using Python
39,619,379
<p>I am a newbie to Python and Spark, and I am trying to load a file from Azure into a table. Below is my simple code.</p> <pre><code>import os import sys os.environ['SPARK_HOME'] = "C:\spark-2.0.0-bin-hadoop2.74" sys.path.append("C:\spark-2.0.0-bin-hadoop2.7\python") sys.path.append("C:\spark-2.0.0-bin-hadoop2.7\python\lib\py4j-0.10.1-src.zip") from pyspark import SparkContext from pyspark import SparkConf from pyspark.sql.types import * from pyspark.sql import * sc = SparkContext("local", "Simple App") def loadFile(path, rowDelimeter, columnDelimeter, firstHeaderColName): loadedFile = sc.newAPIHadoopFile(path, "org.apache.hadoop.mapreduce.lib.input.TextInputFormat", "org.apache.hadoop.io.LongWritable", "org.apache.hadoop.io.Text", conf={"textinputformat.record.delimiter": rowDelimeter}) rddData = loadedFile.map(lambda l:l[1].split(columnDelimeter)).filter(lambda f: f[0] != firstHeaderColName) return rddData Schema= StructType([ StructField("Column1", StringType(), True), StructField("Column2", StringType(), True), StructField("Column3", StringType(), True), StructField("Column4", StringType(), True) ]) rData= loadFile("wasbs://Storagename@Accountname.blob.core.windows.net/File.txt", '\r\n',"#|#","Column1") DF = sc.createDataFrame(Data,Schema) DF.write.saveAsTable("Table1")</code></pre> <p>I am getting an error like FileNotFoundError: [WinError 2] The system cannot find the file specified</p>
0
2016-09-21T14:25:35Z
39,653,591
<p>@Miruthan, as far as I know, if we'd like to read data from WASB into Spark, the URL syntax is as follows:</p> <pre><code>wasb[s]://&lt;containername&gt;@&lt;accountname&gt;.blob.core.windows.net/&lt;path&gt; </code></pre> <p>Meanwhile, Azure Storage Blob (WASB) is used as the storage account associated with an HDInsight cluster. Could you please double-check it? If there is any update, please let me know. </p>
0
2016-09-23T06:01:27Z
[ "python", "azure", "apache-spark", "pyspark" ]
Replace multiple lines after pattern in python?
39,619,471
<p>I want to write a python script to replace first word in multiple lines after my pattern, by now I can only replace 1 line after my pattern, how can it replace more lines? Let's say 3 lines. </p> <p>lines.txt (input file, pattern"section 2") :</p> <pre><code>section 1 line 1 line 2 line 3 line 4 endsection section 2 line 1 line 2 line 3 line 4 endsection section 3 line 1 line 2 line 3 line 4 endsection </code></pre> <p>lines_mod.txt (result with my current code) :</p> <pre><code>section 1 line 1 line 2 line 3 line 4 endsection section 2 mod 1 line 2 line 3 line 4 endsection section 3 line 1 line 2 line 3 line 4 endsection </code></pre> <p>Here is my python script:</p> <pre><code>with open('E:/lines.txt') as fin, open('E:/lines_m.txt', 'w') as fout: flag = 0 for line in fin: if flag == 1: mod_line = 'mod ' + line.split()[-1] + '\n' fout.write(mod_line) flag = 0 continue fout.write(line) if line.find('section 2') != -1: flag = 1 </code></pre> <p>Thanks for help.</p>
0
2016-09-21T14:30:16Z
39,620,135
<pre><code>list_of_words_to_replace = ['mod','apple','xxx'] with open('E:/lines.txt') as fin, open('E:/lines_mod.txt', 'w') as fout: flag = 0 counter = 0 # &lt;&lt;&lt;&lt;------- added a counter for line in fin: if flag == 1: counter += 1 #&lt;&lt;&lt;&lt;-------- increasing counter by one every time it loops mod_line = list_of_words_to_replace[counter-1] + ' ' + line.split()[-1] + '\n' #&lt;---- changed 'mod' to a list of replacement words.... yes it's counter - 1 because counter is incremented before use and list indexes start at 0 fout.write(mod_line) if counter &gt;= 3: #replaces 3 lines, change 3 to however many lines you want to override flag = 0 continue fout.write(line) if line.find('section 2') != -1: flag = 1 counter = 0 #&lt;&lt;&lt;&lt;--------- just in case you want to find another section </code></pre> <p>As said in the comments, you were setting <code>flag = 0</code> after one written line. So we now have a counter that counts how many lines you have written, and when it reaches the limit it sets <code>flag = 0</code>. </p> <p>Any other questions? </p>
0
2016-09-21T15:00:22Z
[ "python", "python-2.7" ]
Replace multiple lines after pattern in python?
39,619,471
<p>I want to write a python script to replace first word in multiple lines after my pattern, by now I can only replace 1 line after my pattern, how can it replace more lines? Let's say 3 lines. </p> <p>lines.txt (input file, pattern"section 2") :</p> <pre><code>section 1 line 1 line 2 line 3 line 4 endsection section 2 line 1 line 2 line 3 line 4 endsection section 3 line 1 line 2 line 3 line 4 endsection </code></pre> <p>lines_mod.txt (result with my current code) :</p> <pre><code>section 1 line 1 line 2 line 3 line 4 endsection section 2 mod 1 line 2 line 3 line 4 endsection section 3 line 1 line 2 line 3 line 4 endsection </code></pre> <p>Here is my python script:</p> <pre><code>with open('E:/lines.txt') as fin, open('E:/lines_m.txt', 'w') as fout: flag = 0 for line in fin: if flag == 1: mod_line = 'mod ' + line.split()[-1] + '\n' fout.write(mod_line) flag = 0 continue fout.write(line) if line.find('section 2') != -1: flag = 1 </code></pre> <p>Thanks for help.</p>
0
2016-09-21T14:30:16Z
39,620,853
<p>You need to revisit your flag update criteria to match the expected output.</p> <p>Since you set <code>flag = 1</code> for the <code>if line.find('section 2') != -1</code> condition, it will modify only one line following the matched line.</p> <p>Since you mention you want to replace more lines, you can probably add a counter that tracks the number of lines you've modified since <code>flag = 1</code>... and when you hit the desired number of lines, then reset <code>flag = 0</code>.</p> <pre><code>count = 0 for line in fin: if flag == 1 and count &lt; 3: mod_line = 'mod ' + line.split()[-1] + '\n' count += 1 fout.write(mod_line) continue </code></pre>
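Putting the count guard together with the rest of the loop from the question, here is a complete runnable sketch — in-memory lists of lines stand in for the file handles, so swap in your `open()` calls for real files:

```python
def modify_after_pattern(lines, pattern='section 2', n=3, prefix='mod '):
    """Replace the first word of the n lines following the pattern line."""
    out = []
    flag = False
    count = 0
    for line in lines:
        if flag and count < n:
            out.append(prefix + line.split()[-1] + '\n')
            count += 1
            continue
        out.append(line)
        if pattern in line:
            flag = True
            count = 0
    return out

text = ['section 2\n', 'line 1\n', 'line 2\n',
        'line 3\n', 'line 4\n', 'endsection\n']
print(''.join(modify_after_pattern(text)))
# section 2
# mod 1
# mod 2
# mod 3
# line 4
# endsection
```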
0
2016-09-21T15:33:07Z
[ "python", "python-2.7" ]
How to find file path for specific file for multiple folders
39,619,574
<p>The title doesn't really explain it well, so I'll try to explain it better here. I'm not expecting to be spoon-fed, but I'd like to be pointed in the right direction as to how I would do this.</p> <p>I have one folder, let's call it A.</p> <p>A has 100 other folders in, that have the folder name as just a completely random number.</p> <p>Inside each of these folders, there is a png file, a txt file, and a bsp file.</p> <p>I have to put the folder directory to the bsp files, into a text file, so that another program can read them (but I think that is irrelevant)</p> <p>I'd expect to be able to find the file directories for each file, e.g. it would print </p> <p><code>\A\&lt;random number for the folder&gt;\&lt;bsp file name&gt; \A\&lt;another random number for another folder&gt;\&lt;the bsp file name for the file in that folder&gt;</code></p> <p>I'm aware its quite vague, but how would I approach this?</p>
0
2016-09-21T14:35:21Z
39,620,619
<p>What you require can be achieved with the 'find' command on the shell.</p> <p>After changing into the main directory with all the subdirectories, to get the relative paths, execute (quote the pattern so the shell does not expand it):</p> <pre><code>find . -name '*.bsp' ./1/aa.bsp ./2/bb.bsp ./3/cc.bsp </code></pre> <p>And for absolute paths, execute:</p> <pre><code>find . -name '*.bsp' -exec readlink -f {} \; /home/user/A/1/aa.bsp /home/user/A/2/bb.bsp /home/user/A/3/cc.bsp </code></pre> <p>You can execute this from Python using os.system()</p>
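Alternatively, the same thing can be done in pure Python with `os.walk`, without shelling out — a sketch (the root folder name and output filename are just placeholders for the question's setup):

```python
import os

def collect_bsp_paths(root):
    """Recursively collect the paths of all .bsp files under root."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".bsp"):
                paths.append(os.path.join(dirpath, name))
    return paths

def write_bsp_list(root, out_path):
    """Write one .bsp path per line, for the other program to read."""
    with open(out_path, "w") as out:
        out.write("\n".join(collect_bsp_paths(root)))
```

For the layout in the question you would call something like `write_bsp_list("A", "bsp_list.txt")`.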
0
2016-09-21T15:22:03Z
[ "python" ]
struct.unpack(struct.pack(float)) has roundoff error?
39,619,636
<p>When testing my library, <a href="https://github.com/construct/construct" rel="nofollow">Construct</a>, I found out that tests fail when numbers are built then parsed back to a float. Should floats not be represented exactly as in-memory floats?</p> <pre><code>In [14]: d = struct.Struct("&lt;f") In [15]: d.unpack(d.pack(1.23)) Out[15]: (1.2300000190734863,) </code></pre>
0
2016-09-21T14:37:48Z
39,619,662
<p>Floating point is inherently imprecise, but you are packing a double-precision float (<code>binary64</code>) into a single-precision (<code>binary32</code>) space there. See <a href="https://en.wikipedia.org/wiki/IEEE_floating_point#Basic_and_interchange_formats" rel="nofollow"><em>Basic and interchange formats</em></a> in the Wikipedia article on IEEE floating point formats; the Python <code>float</code> format uses double precision (see the <a href="https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex" rel="nofollow">standard types docs</a>; <em>Floating point numbers are usually implemented using double in C</em>).</p> <p>Use <code>d</code> to use double precision:</p> <pre><code>&gt;&gt;&gt; import struct &gt;&gt;&gt; d = struct.Struct("&lt;d") &gt;&gt;&gt; d.unpack(d.pack(1.23)) (1.23,) </code></pre> <p>From the <a href="https://docs.python.org/3/library/struct.html#format-characters" rel="nofollow"><em>Format characters</em> section</a>:</p> <blockquote> <p>format: <code>f</code>, C Type: <code>float</code>, Python type: <code>float</code>, Standard size: <code>4</code>, Footnote: (5)<br> format: <code>d</code>, C Type: <code>double</code>, Python type: <code>float</code>, Standard size: <code>8</code>, Footnote: (5) </p> <ol start="5"> <li>For the <code>'f'</code> and <code>'d'</code> conversion codes, the packed representation uses the IEEE 754 binary32 (for <code>'f'</code>) or binary64 (for <code>'d'</code>) format, regardless of the floating-point format used by the platform.</li> </ol> </blockquote>
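To see the size of the rounding step, you can round-trip the same value through both formats (plain stdlib):

```python
import struct

value = 1.23

single = struct.unpack('<f', struct.pack('<f', value))[0]  # binary32 round trip
double = struct.unpack('<d', struct.pack('<d', value))[0]  # binary64 round trip

print(single)  # 1.2300000190734863 -- the nearest binary32 neighbour of 1.23
print(double)  # 1.23               -- binary64 matches Python's float exactly
print(abs(single - value))  # about 1.9e-08, within one binary32 ulp
```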
3
2016-09-21T14:38:59Z
[ "python", "floating-point", "pack" ]
csv.reader read from Requests stream: iterator should return strings, not bytes
39,619,676
<p>I'm trying to stream a response to <code>csv.reader</code> using <code>requests.get(url, stream=True)</code> to handle quite big data feeds. My code worked fine with <code>python2.7</code>. Here's the code:</p> <pre><code>response = requests.get(url, stream=True) ret = csv.reader(response.iter_lines(decode_unicode=True), delimiter=delimiter, quotechar=quotechar, dialect=csv.excel_tab) for line in ret: line.get('name') </code></pre> <p>Unfortunately after migration to python3.6 I got the following error: </p> <pre><code>_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?) </code></pre> <p>I was trying to find some wrapper/decorator that would convert the result of the <code>response.iter_lines()</code> iterator from bytes to string, but no luck with that. I already tried to use the <code>io</code> package and also <code>codecs</code>. Using <code>codecs.iterdecode</code> doesn't split the data into lines; it is probably split by <code>chunk_size</code> instead, and in this case <code>csv.reader</code> complains in the following way:</p> <pre><code>_csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode? </code></pre>
5
2016-09-21T14:39:44Z
39,620,977
<p>I'm guessing you could wrap this in a <code>genexp</code> and feed decoded lines to it:</p> <pre><code>import csv import requests from contextlib import closing with closing(requests.get(url, stream=True)) as r: f = (line.decode('utf-8') for line in r.iter_lines()) reader = csv.reader(f, delimiter=',', quotechar='"') for row in reader: print(row) </code></pre> <p>Using some sample data on <code>3.5</code>, this satisfies <code>csv.reader</code>: every line fed to it is first <code>decoded</code> in the genexp. Also, I'm using <a href="https://docs.python.org/3/library/contextlib.html#contextlib.closing" rel="nofollow"><code>closing</code></a> from <a href="https://docs.python.org/3/library/contextlib.html" rel="nofollow"><code>contextlib</code></a>, as is <a href="http://docs.python-requests.org/en/master/user/advanced/#body-content-workflow" rel="nofollow"><em>generally suggested</em></a>, to automatically <code>close</code> the response.</p>
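The decoding genexp is independent of requests, so the idea can be checked with plain stdlib objects — here a list of bytes stands in for what `response.iter_lines()` would yield:

```python
import csv

# Simulated byte lines, as requests' iter_lines() would produce them
raw_lines = [b'name,qty', b'spam,1', b'eggs,2']

decoded = (line.decode('utf-8') for line in raw_lines)  # bytes -> str, lazily
reader = csv.reader(decoded, delimiter=',', quotechar='"')

rows = list(reader)
print(rows)  # [['name', 'qty'], ['spam', '1'], ['eggs', '2']]
```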
2
2016-09-21T15:38:30Z
[ "python", "django", "python-3.x", "csv", "python-requests" ]
How to read django autocomplete light's value to generate choices dynamically?
39,619,709
<p>Hey, I would like to get the value of django autocomplete light in the model form and generate choices for the next fields accordingly.</p> <pre><code>class GroupPropertiesForm(forms.ModelForm): &lt;strike&gt;fields['equipment_grade']: forms.ChoiceField( choices=[(o.id, str(o)) for o in GroupProperties.objects.all(group=???group???)]&lt;/strike&gt; class Meta: model = GroupProperties fields = ('group', 'bells') widgets = { 'group': autocomplete.ModelSelect2( url='groups-autocomplete') ) } </code></pre>
0
2016-09-21T14:41:03Z
39,762,980
<p>This is not the correct way to implement this functionality. Instead, django-autocomplete-light can <em>forward</em> the value of one form field to the autocomplete view of the next field, as described in the <a href="https://django-autocomplete-light.readthedocs.io/en/master/tutorial.html#filtering-results-based-on-the-value-of-other-fields-in-the-form" rel="nofollow">django-autocomplete-light documentation</a>.</p>
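Roughly, following that tutorial section, it looks like the sketch below — untested, and the names `BellsAutocomplete` and `Bells` are hypothetical; only the `group` field comes from the question. The widget forwards the selected group, and the dependent field's autocomplete view reads it from `self.forwarded`:

```python
# forms.py -- forward the chosen group to the next field's autocomplete view
widgets = {
    'group': autocomplete.ModelSelect2(url='groups-autocomplete'),
    'bells': autocomplete.ModelSelect2(url='bells-autocomplete',
                                       forward=['group']),
}

# views.py -- the autocomplete view behind 'bells-autocomplete'
class BellsAutocomplete(autocomplete.Select2QuerySetView):
    def get_queryset(self):
        qs = Bells.objects.all()
        group = self.forwarded.get('group', None)
        if group:
            qs = qs.filter(group_id=group)
        return qs
```

The `forward=[...]` widget argument and the view's `self.forwarded` dict are the documented mechanism for filtering one autocomplete by another field's value.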
1
2016-09-29T06:32:26Z
[ "python", "django", "django-forms", "django-admin", "django-autocomplete-light" ]
How to convert a Numpy matrix into a Sympy
39,619,718
<p>I have constructed a randomly generated matrix of a specified size and that part works great. Give it a row and column size and boom a matrix of whole numbers from 0 to 100. More recently I tried to perform a sympy operation to a numpy matrix and python kept crashing on me. I soon learned that operations from sympy could not work on a numpy matrix. So I looked into how to convert a numpy into a sympy, but more often than not I have only found sympy into numpy using lambdify. I was wondering if I could use lambdify still to convert from numpy to sympy however. Here is the code I have</p> <pre><code>import math import numpy as SHIT import sympy as GARBAGE from sympy import * from sympy import Matrix from sympy.utilities.lambdify import lambdify, implemented_function from sympy import Function import __future__ import __init__ # init_print(use_unicode=True) alpha = eval(input("How many rows? ")) beta = eval(input("How many columns? ")) def make_matrix(alpha,beta): matrix_thing = SHIT.random.randint(0,50,(alpha,beta)) return(matrix_thing) print(make_matrix(alpha,beta)) matrix_thing_sympy = lambdify(alpha,beta,make_matrix(alpha,beta), SHIT) </code></pre> <p>Traceback: Argument must be either a string, dict or module but it is: [24 11] FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison if modname in modlist:</p> <p>The [24 11] you see was from a randomly generated 2 by 2 matrix. So if lambdify is reading this row by row, how is this not a string of numbers? This is the string: 24, 11. 
But python doesn't seem to agree with me on that.</p> <p>I have varied the statement of the final line to the following, none have worked.</p> <pre><code>matrix_thing_sympy = lambdify(alpha,beta,make_matrix, SHIT) </code></pre> <p>AttributeError: module 'numpy' has no attribute 'doprint'</p> <pre><code>matrix_thing_sympy = lambdify((alpha,beta),make_matrix(alpha,beta), SHIT) </code></pre> <p>VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future rational=rational) for x in a])</p> <p>lambda 2,2: ([[17 6] ^ SyntaxError: invalid syntax</p> <p>More importantly to me is, why won't this just work by default? I had figured if a matrix were a matrix that it is a matrix and who cares about if it were made using numpy sympy or any py for that matter. I digress but maybe this isn't a half bad point for me to understand as well.</p>
-2
2016-09-21T14:41:22Z
39,621,887
<p><strong>TL;DR</strong> Perform <code>sympy.Matrix(numpy_matrix)</code></p> <p>From the comments, I suggest this:</p> <h1>Converting NumPy matrix to SymPy matrix</h1> <pre><code>import math import numpy as SHIT import sympy as GARBAGE from sympy import * from sympy import Matrix from sympy.utilities.lambdify import lambdify, implemented_function from sympy import Function import __future__ import __init__ # init_print(use_unicode=True) alpha = eval(input("How many rows? ")) beta = eval(input("How many columns? ")) def make_matrix(alpha,beta): matrix_thing = SHIT.random.randint(0,50,(alpha,beta)) return(matrix_thing) matrix_sympy = Matrix(make_matrix(alpha, beta)) # use sympy.Matrix() </code></pre> <p>Then</p> <pre><code>matrix_sympy.rref() </code></pre> <h1>Alternatively,</h1> <p>NumPy can get close to this (strictly speaking, through SciPy): QR decomposition gives an upper-triangular factor R, which is a row echelon form — note it is not the <em>reduced</em> row echelon form that SymPy's <code>rref()</code> returns</p> <pre><code>import numpy as np import scipy.linalg as la def make_matrix(alpha,beta): matrix_thing = np.random.randint(0,50,(alpha,beta)) return(matrix_thing) matrix_numpy = make_matrix(alpha, beta) (_, r) = la.qr(matrix_numpy) # QR decomposition; R is upper triangular (row echelon, not reduced) </code></pre> <p>Neither method requires symbolic variables. NumPy is not a SHIT thing.</p> <h1>When to use SymPy</h1> <p>Generally, you need SymPy when you want to find a general solution which is represented with arbitrary variables without specific values.</p> <pre><code>import sympy x, a, b, c = sympy.symbols('x a b c') y = a * x ** 2 + b * x + c # general quadratic equation sympy.solve(y, x) </code></pre> <p>output:</p> <pre><code>[(-b + sqrt(-4*a*c + b**2))/(2*a), -(b + sqrt(-4*a*c + b**2))/(2*a)] </code></pre> <p>In your example, nothing stays symbolic with respect to <code>alpha</code> and <code>beta</code>, so the numeric approaches above are enough.</p>
0
2016-09-21T16:28:03Z
[ "python", "numpy", "matrix", "sympy" ]
Python Pandas select rows based on membership of another collection (set)
39,619,799
<p>Suppose I have a <code>DataFrame</code> constructed as follows:</p> <pre><code>import pandas import numpy column_names = ["name", "age", "score"] names = numpy.random.choice(["Jorge", "Xavier", "Joaquin", "Juan", "Jose"], 50) ages = numpy.random.randint(0, 100, 50) scores = numpy.random.rand(50) df = pandas.DataFrame.from_dict(dict(zip(column_names, [names, ages, scores]))) </code></pre> <p>The top 10 rows of the above <code>DataFrame</code> looks like the following.</p> <pre><code> age name score 0 15 Jorge 0.031380 1 44 Juan 0.373199 2 84 Xavier 0.999065 3 55 Juan 0.159873 4 55 Joaquin 0.211931 5 33 Juan 0.484350 6 22 Xavier 0.510276 7 86 Joaquin 0.490013 8 2 Jose 0.185086 9 51 Juan 0.979015 </code></pre> <p>I want to be able to select rows for which the elements of the <code>name</code> column is a member of <code>{"Xavier", "Joaquin"}</code>. Instinctively I'm thinking of something like <code>df.iloc[df["name"] in {"Xavier", "Joaquin"}, :]</code> but that doesn't work. So how do I achieve it?</p> <h3>Note</h3> <p>I know I can achieve this particular example by</p> <pre><code>df.loc[numpy.logical_or(df["name"] == "Xavier", df["name"] == "Joaquin"), :] </code></pre> <p>but that's not the point. This is just a simplified example of my real problem. I have a <code>DataFrame</code> of height 2,340,923 and a name set <code>names</code> of size 3,624 and I want to select the rows whose names are members of the name set <code>names</code>.</p>
1
2016-09-21T14:45:03Z
39,619,839
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p> <pre><code>print (df.loc[df["name"].isin(["Xavier", "Joaquin"]), :]) age name score 1 66 Joaquin 0.767056 2 17 Joaquin 0.721369 7 53 Joaquin 0.209415 10 9 Xavier 0.394815 13 20 Joaquin 0.276596 14 17 Xavier 0.810725 15 76 Xavier 0.918273 17 91 Joaquin 0.974723 18 39 Xavier 0.869607 21 3 Xavier 0.200578 22 34 Joaquin 0.938018 23 90 Xavier 0.664387 26 51 Xavier 0.946753 28 49 Xavier 0.859911 30 22 Joaquin 0.602381 34 7 Xavier 0.759837 35 96 Joaquin 0.790691 39 13 Joaquin 0.599557 40 10 Xavier 0.563933 41 69 Xavier 0.983787 43 58 Xavier 0.542903 44 8 Joaquin 0.307106 45 77 Joaquin 0.330278 46 55 Joaquin 0.980077 47 12 Xavier 0.177509 49 15 Joaquin 0.590958 </code></pre> <p>It works with <code>set</code> nice also:</p> <pre><code>names = set(["Xavier", "Joaquin"]) print (df.loc[df["name"].isin(names), :]) age name score 1 66 Joaquin 0.767056 2 17 Joaquin 0.721369 7 53 Joaquin 0.209415 10 9 Xavier 0.394815 13 20 Joaquin 0.276596 14 17 Xavier 0.810725 15 76 Xavier 0.918273 17 91 Joaquin 0.974723 18 39 Xavier 0.869607 21 3 Xavier 0.200578 22 34 Joaquin 0.938018 23 90 Xavier 0.664387 26 51 Xavier 0.946753 28 49 Xavier 0.859911 30 22 Joaquin 0.602381 34 7 Xavier 0.759837 35 96 Joaquin 0.790691 39 13 Joaquin 0.599557 40 10 Xavier 0.563933 41 69 Xavier 0.983787 43 58 Xavier 0.542903 44 8 Joaquin 0.307106 45 77 Joaquin 0.330278 46 55 Joaquin 0.980077 47 12 Xavier 0.177509 49 15 Joaquin 0.590958 </code></pre>
2
2016-09-21T14:46:59Z
[ "python", "pandas", "indexing", "set", "condition" ]
Can Django collectstatic overwrite old files?
39,619,900
<p>In my deb postinst file:</p> <pre><code>PYTHON=/usr/bin/python PYTHON_VERSION=`$PYTHON -c 'import sys; print sys.version[:3]'` SITE_PACKAGES=/opt/pkgs/mypackage/lib/python$PYTHON_VERSION/site-packages export PYTHONPATH=$SITE_PACKAGES echo "collect static files" $PYTHON manage.py collectstatic --noinput </code></pre> <blockquote> <p>When I run 'dpkg -i mypackage.deb' to install the package, no problem.</p> <p>When I run 'dpkg -i mypackage.deb' to <strong>re-install the package, old css files unchanged</strong>. </p> <p>When I changed '$PYTHON manage.py collectstatic --noinput ' to '$PYTHON manage.py collectstatic --noinput -c' and run 'dpkg -i mypackage.deb' to <strong>re-install the package</strong>, the error is following: OSError: [Errno 2] No such file or directory: '/opt/pkgs/myporject/static'</p> </blockquote> <p>Any idea?</p> <p><strong>Can Django collectstatic overwrite old files?</strong></p>
0
2016-09-21T14:51:28Z
39,640,609
<p>(Adding this here in case someone runs into the same problem.) Yes.</p> <p>The timestamp of the css files in /opt/pkgs/mypropject/lib/python2.7/site-packages/mypropject-py2.7.egg/myapp/static/css (directory A) is the time when the package build finished, not the time when the css files were installed.</p> <p>But the timestamp of the css files in /opt/pkgs/myporject/static (directory B) is the time of installation.</p> <p>That is why collectstatic sometimes cannot overwrite my old css files (it cannot copy some css files from directory A to directory B).</p>
0
2016-09-22T13:35:37Z
[ "python", "css", "django", "debian", "deb" ]
Django-Haystack(elasticsearch) Autocomplete giving results for substring in search term
39,619,944
<p>I have a search index with elasticsearch as backend:</p> <pre><code>class MySearchIndex(indexes.SearchIndex, indexes.Indexable): ... name = indexes.CharField(model_attr='name') name_auto = indexes.NgramField(model_attr='name') ... </code></pre> <p>Suppose I have the following values in elasticsearch:</p> <p><code>Cable</code><br/> <code>Magnet</code><br/> <code>Network</code><br/> <code>Internet</code><br/> <code>Switch</code><br/></p> <p>When I execute a search for '<strong>netw</strong>', it returns <strong>Magnet</strong> &amp; <strong>Internet</strong> along with <strong>Network</strong>. From some other test cases, I think haystack is also matching substrings of the search term, like <strong>net</strong> in <strong>netw</strong>, as you can see in the above example. </p> <p>Here is the code:</p> <pre><code>sqs = sqs.filter(category='cat_name').using(using) querried = sqs.autocomplete(name_auto=q) </code></pre> <p>Also tried with:</p> <pre><code>querried = sqs.autocomplete(name_auto__contains=q) </code></pre> <p>How can I resolve this and make it return only those results that contain the exact search term?</p> <p>Using django-haystack==2.4.1 Django==1.9.1 elasticsearch==1.9.0</p>
0
2016-09-21T14:53:15Z
39,654,127
<p>Customize your elasticsearch backend settings with <a href="https://github.com/rhblind/django-hesab" rel="nofollow">django-hesab</a></p> <p>The default settings of django-hesab will return the exact search result. </p>
0
2016-09-23T06:37:20Z
[ "python", "django", "elasticsearch", "django-haystack" ]
Is this the most efficient and accurate way to extrapolate using scipy?
39,619,960
<p>I have a set of data points over time, but there is some missing data and the data is not at regular intervals. In order to get a full data set over time at regular intervals I did the following:</p> <pre><code>import pandas as pd import numpy as np from scipy import interpolate x = data['time'] y = data['shares'] f = interpolate.interp1d(x, y, fill_value='extrapolate') time = np.arange(0, 3780060, 600) new_data = [] for interval in time: new_data.append(f(interval)) test = pd.DataFrame({'time': time, 'shares': new_data}) test = test.astype(float) </code></pre> <p>When both the original and the extrapolated data sets are plotted, they seem to line up almost perfectly, but I still wonder if there is a more efficient and/or accurate way to accomplish the above.</p>
0
2016-09-21T14:54:14Z
39,620,414
<p>You should apply the interpolation function only once, to the whole array, like this</p> <pre><code>new_data = f(time) </code></pre> <p>If all of your new points lie inside the original interval, fill_value='extrapolate' is redundant, because then it is just interpolation. You may use 'extrapolate' if your new interval is wider than the original one, but that is bad practice.</p>
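To make the difference concrete, here is a minimal runnable sketch (with made-up x/y values, since the original `data` frame isn't shown) comparing the per-point loop from the question with a single vectorized call:

```python
import numpy as np
from scipy import interpolate

# Toy data standing in for the question's data['time'] / data['shares']
# (hypothetical values, just to illustrate the vectorized call)
x = np.array([0.0, 600.0, 1200.0, 1800.0, 2400.0])
y = np.array([5.0, 7.0, 6.0, 9.0, 11.0])

f = interpolate.interp1d(x, y, fill_value='extrapolate')

time = np.arange(0, 3000, 600)

# Loop version from the question ...
looped = np.array([f(t) for t in time])

# ... and the single vectorized call, which gives the same values
vectorized = f(time)

assert np.allclose(looped, vectorized)
```

For large `time` arrays the single call avoids Python-level loop overhead entirely, because `interp1d` evaluates the whole array in compiled code.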
1
2016-09-21T15:13:42Z
[ "python", "numpy", "scipy" ]
Python regex freezes with small input string
39,619,973
<p>I am using regular expressions on a bunch of wikipedia pages. Actually working really good for the first like 20 pages, but then it suddenly freezes without me seeing a reason. Interrupting the script delivers this:</p> <pre><code>File "imageListFiller.py", line 30, in getImage foundImage = re.search(urlRegex, str(decodedLine)) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/re.py",line 173, in search return _compile(pattern, flags).search(string) </code></pre> <p>this is my code:</p> <pre><code>def getImage(wikiHtml): urlRegex = """File:((?:[a-zA-Z]|[0-9]|[$-_@.&amp;+]|[!*\(\),]|(?:[0-9a-fA-F][0-9a-fA-F]))*?\.(png|jpg|svg|JPG))""" uselessPictures = ("Wiktionary-logo-v2.svg", "Disambig_gray.svg", "Question_book-new.svg", "Commons-logo.png") for line in wikiHtml: decodedLine = line.decode('utf-8') foundImage = re.search(urlRegex, str(decodedLine)) if foundImage: if not foundImage.group(1) in uselessPictures: return foundImage.group(1) </code></pre> <p>and this is the input string that causes it to freeze:</p> <blockquote> href="/wiki/File:EARTH_-_WIKIPEDIA_SPOKEN_ARTICLE_(Part_01).ogg" title="Listen to this article"> src="//upload.wikimedia.org/wikipedia/commons/thumb/4/47/Sound-icon.svg/20px-Sound-icon.svg.png" width="20" height="15" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/47/Sound-icon.svg/30px-Sound-icon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/47/Sound-icon.svg/40px-Sound-icon.svg.png 2x" data-file-width="128" data-file-height="96" > /> </blockquote> <p>The regex is not actually supposed to match here, it just needs to skip this line. Thanks!</p>
2
2016-09-21T14:54:38Z
39,620,170
<p>The <code>$-_</code> part in your pattern created a range matching uppercase letters and digits, and even more chars. Since other alternative branches in the group could match at the same location (like <code>[a-zA-Z]</code>) that led to a time out / catastrophic backtracking issue.</p> <p>You need to just concat all the character classes that only match 1 char in the 1st group, and either escape the <code>-</code> inside the character class or put it at the start/end of the character class (I'd still escape it if the pattern is going to be updated in the future):</p> <pre><code>r"""File:((?:[0-9a-fA-F]{2}|[a-zA-Z0-9\-$_@.&amp;+!*(),])*?\.(png|jpg|svg|JPG))""" </code></pre> <p>See the <a href="https://regex101.com/r/sF2vH8/3" rel="nofollow">regex demo</a>.</p> <p>Also, the longer alternative should precede the shorter one, so the <code>[0-9a-fA-F]{2}</code> should go first.</p> <p>Also, <code>\w</code> can be used to shorten the pattern a bit (to replace <code>[a-zA-Z0-9_]</code>):</p> <pre><code>r"""File:((?:[0-9a-fA-F]{2}|[\w\-$@.&amp;+!*(),])*?\.(png|jpg|svg|JPG))""" ^^^ </code></pre>
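As a quick sanity check, the corrected pattern can be exercised on strings shaped like the ones in the question (the file names here are made up for illustration):

```python
import re

fixed = re.compile(r"""File:((?:[0-9a-fA-F]{2}|[\w\-$@.&+!*(),])*?\.(png|jpg|svg|JPG))""")

# A filename that should be captured:
m = fixed.search('href="/wiki/File:Some_image-1.png"')
assert m is not None and m.group(1) == 'Some_image-1.png'

# The .ogg link from the freezing input should simply not match,
# and the search returns promptly instead of backtracking forever:
assert fixed.search('href="/wiki/File:EARTH_(Part_01).ogg"') is None
```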
1
2016-09-21T15:02:27Z
[ "python", "regex", "freeze" ]
Pycharm cannot find installed packages: keras
39,620,014
<p>I installed pycharm-2016.1.4 on my PC running Ubuntu 14.04. I installed keras (a python package) using <code>pip install keras</code>, and pycharm <strong>could</strong> find it <strong>before</strong>. But it <strong>cannot</strong> find keras <strong>now</strong>. I did not modify any settings, so this problem seems weird. My python version is python2.7. <a href="http://i.stack.imgur.com/LO1bo.png" rel="nofollow"><img src="http://i.stack.imgur.com/LO1bo.png" alt="enter image description here"></a></p> <p>I use <code>pip list</code> to verify that I have keras installed:</p> <p><a href="http://i.stack.imgur.com/es7ZC.png" rel="nofollow"><img src="http://i.stack.imgur.com/es7ZC.png" alt="enter image description here"></a></p> <p>But when I check this package in pycharm using: Settings -> Project -> Project Interpreter, Keras is <strong>NOT</strong> in the package list (the interpreter used is the same as the result of <code>which python</code> in the terminal).</p> <p>Any suggestions are welcome. Thank you.</p>
1
2016-09-21T14:56:26Z
39,635,116
<p>This is strange, but you can install Keras directly through Pycharm.</p> <p>You can follow these steps:</p> <ul> <li>Go to <strong>Settings -> Project -> Project Interpreter</strong> </li> <li>Click on the <strong>plus icon</strong> in the top-right corner</li> <li>Search for <strong>Keras</strong> and press <strong>Install Package</strong></li> </ul> <p>Please let me know if this procedure solves your issue.</p>
2
2016-09-22T09:24:01Z
[ "python", "python-2.7", "pycharm", "keras" ]
Pycharm cannot find installed packages: keras
39,620,014
<p>I installed pycharm-2016.1.4 on my PC running Ubuntu 14.04. I installed keras (a python package) using <code>pip install keras</code>, and pycharm <strong>could</strong> find it <strong>before</strong>. But it <strong>cannot</strong> find keras <strong>now</strong>. I did not modify any settings, so this problem seems weird. My python version is python2.7. <a href="http://i.stack.imgur.com/LO1bo.png" rel="nofollow"><img src="http://i.stack.imgur.com/LO1bo.png" alt="enter image description here"></a></p> <p>I use <code>pip list</code> to verify that I have keras installed:</p> <p><a href="http://i.stack.imgur.com/es7ZC.png" rel="nofollow"><img src="http://i.stack.imgur.com/es7ZC.png" alt="enter image description here"></a></p> <p>But when I check this package in pycharm using: Settings -> Project -> Project Interpreter, Keras is <strong>NOT</strong> in the package list (the interpreter used is the same as the result of <code>which python</code> in the terminal).</p> <p>Any suggestions are welcome. Thank you.</p>
1
2016-09-21T14:56:26Z
39,673,259
<p>I do not know what happened, but the problem was solved with the following steps.</p> <ol> <li>Uninstall the old keras </li> <li>Re-install keras: <code>pip install keras</code></li> </ol> <p>Then I could <code>import keras</code> in pycharm.</p> <p><strong>NOTE:</strong> It is strange, since I have keras installed but cannot find it in the <strong>Project Interpreter's package list</strong>.</p>
0
2016-09-24T06:26:29Z
[ "python", "python-2.7", "pycharm", "keras" ]
Type of variable changes based on access?
39,620,023
<p>I have a directed graph structure consisting of nodes and edges both of which subclass an Event parent class. Depending on external events, edges can either be active or inactive. I then find all the directed paths from a given node to the root node, but I really only care about the nodes along the way, not the edges. For instance, to convert from a set of edges to a set of nodes I use:</p> <pre><code>&gt;&gt;&gt; paths [[&lt;Edge F&gt;, &lt;Edge B&gt;]] &gt;&gt;&gt; lst = [set(map(lambda e: e.tail, path)) for path in paths] </code></pre> <p>where path is a list of edges. This is what confuses me: when I go to check the contents of <code>lst</code>, it changes depending on how I access it</p> <pre><code>&gt;&gt;&gt; lst [set([&lt;Node 2&gt;, &lt;Node 1&gt;])] &gt;&gt;&gt; [type(n) for n in path for path in lst] [&lt;class 'libs.network.Edge'&gt;, &lt;class 'libs.network.Edge'&gt;] &gt;&gt;&gt; [type(n) for n in lst[0]] [&lt;class 'libs.network.Node'&gt;, &lt;class 'libs.network.Node'&gt;] </code></pre> <p>Why aren't these two ways of accessing the type information the same?</p>
0
2016-09-21T14:56:45Z
39,620,120
<p>You have your list comprehension order wrong. Nested loops are listed from <em>left to right</em>.</p> <p>So the expression</p> <pre><code>[type(n) for n in path for path in lst] </code></pre> <p>is executed as</p> <pre><code>for n in path: for path in lst: type(n) </code></pre> <p>so <code>n</code> is taken from some random pre-assigned <code>path</code> variable you had before, and it is <em>that variable</em> that contains <code>Edge</code> instances. Those objects have <em>nothing</em> to do with the contents of <code>lst[0]</code> you loop over in your other expression.</p> <p>You probably wanted to do this the other way around:</p> <pre><code>[type(n) for path in lst for n in path] </code></pre> <p>so that <code>path</code> is actually set from <code>lst</code> <em>before</em> you iterate over it.</p>
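A tiny self-contained illustration of the same effect, using plain strings and sets as stand-ins for the Edge and Node objects:

```python
path = ['Edge F', 'Edge B']            # leftover variable from an earlier loop
lst = [[{'Node 1'}, {'Node 2'}]]       # the list of node sets we actually want

# Wrong order: `n` iterates over the stale outer `path`, so we see the
# types of the old edge strings, not the node sets inside `lst`.
wrong = [type(n).__name__ for n in path for path in lst]

# Right order: `path` is bound from `lst` before `n` iterates over it.
right = [type(n).__name__ for path in lst for n in path]

print(wrong)   # ['str', 'str']
print(right)   # ['set', 'set']
```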
3
2016-09-21T15:00:02Z
[ "python", "types", "list-comprehension" ]
Converting between projections using pyproj in Pandas dataframe
39,620,105
<p>This is undoubtedly a bit of a "can't see the wood for the trees" moment. I've been staring at this code for an hour and can't see what I've done wrong. I know it's staring me in the face but I just can't see it!</p> <p>I'm trying to convert between two geographical co-ordinate systems using Python.</p> <p>I have longitude (x-axis) and latitude (y-axis) values and want to convert to OSGB 1936. For a single point, I can do the following:</p> <pre><code>import numpy as np import pandas as pd import shapefile import pyproj inProj = pyproj.Proj(init='epsg:4326') outProj = pyproj.Proj(init='epsg:27700') x1,y1 = (-2.772048, 53.364265) x2,y2 = pyproj.transform(inProj,outProj,x1,y1) print(x1,y1) print(x2,y2) </code></pre> <p>This produces the following:</p> <pre><code>-2.772048 53.364265 348721.01039783185 385543.95241055806 </code></pre> <p>Which seems reasonable and suggests that longitude of -2.772048 is converted to a co-ordinate of 348721.0103978.</p> <p>In fact, I want to do this in a Pandas dataframe. 
The dataframe contains columns containing longitude and latitude and I want to add two additional columns that contain the converted co-ordinates (called newLong and newLat).</p> <p>An exemplar dataframe might be:</p> <pre><code> latitude longitude 0 53.364265 -2.772048 1 53.632481 -2.816242 2 53.644596 -2.970592 </code></pre> <p>And the code I've written is:</p> <pre><code>import numpy as np import pandas as pd import shapefile import pyproj inProj = pyproj.Proj(init='epsg:4326') outProj = pyproj.Proj(init='epsg:27700') df = pd.DataFrame({'longitude':[-2.772048,-2.816242,-2.970592],'latitude':[53.364265,53.632481,53.644596]}) def convertCoords(row): x2,y2 = pyproj.transform(inProj,outProj,row['longitude'],row['latitude']) return pd.Series({'newLong':x2,'newLat':y2}) df[['newLong','newLat']] = df.apply(convertCoords,axis=1) print(df) </code></pre> <p>Which produces:</p> <pre><code> latitude longitude newLong newLat 0 53.364265 -2.772048 385543.952411 348721.010398 1 53.632481 -2.816242 415416.003113 346121.990302 2 53.644596 -2.970592 416892.024217 335933.971216 </code></pre> <p>But now it seems that the newLong and newLat values have been mixed up (compared with the results of the single point conversion shown above).</p> <p>Where have I got my wires crossed to produce this result? (I apologise if it's completely obvious!)</p>
0
2016-09-21T14:59:33Z
39,620,728
<p>When you do <code>df[['newLong','newLat']] = df.apply(convertCoords,axis=1)</code>, you are indexing the columns of the <code>df.apply</code> output. However, the column order is arbitrary because your series was defined using a dictionary (which is inherently unordered).</p> <p>You can opt to return a Series with a fixed column ordering:</p> <pre><code>return pd.Series([x2, y2]) </code></pre> <p>Alternatively, if you want to keep the <code>convertCoords</code> output labelled, then you can use <code>.join</code> to combine results instead:</p> <pre><code>return pd.Series({'newLong':x2,'newLat':y2}) ... df = df.join(df.apply(convertCoords, axis=1)) </code></pre>
2
2016-09-21T15:27:07Z
[ "python", "python-3.x", "pandas", "gis", "proj" ]
Importing my package from a Django management command
39,620,143
<p>I've written a package which was originally a command-line tool, but I've decided that for Django it should be run from a management command. I've installed my external package (called <code>codequal</code>) using <code>pip install --editable</code>, and I can successfully use <code>manage.py shell</code> to import a module from that package:</p> <pre><code>in[0]: from codequal import something in[1]: something.some_method() out[2]: u'result' </code></pre> <p>This works fine. However, when I try to do the same thing in a management command, I run into an error:</p> <pre><code>File "/home/path/to/django/project/some_app/management/commands/codequal.py", line 8, in &lt;module&gt; from codequal import something ImportError: cannot import name something </code></pre> <p>Why is this? I can use other installed packages from management commands. Could it be something to do with my setup.py? I can post snippets from that if needed. Mainly I'm wondering if this part is to blame:</p> <pre><code> entry_points={ 'console_scripts': [ 'codequal = codequal.cli:main', ], </code></pre> <p>Does this prevent from the module being imported from certain places? I can't see how it would, since I can do it from <code>manage.py shell</code>.</p>
0
2016-09-21T15:00:54Z
39,620,231
<p>The problem is that your file (codequal.py) has the same name as the module. You need to change one of them. I recommend renaming the file inside the app:</p> <pre><code>/home/path/to/django/project/some_app/management/commands/codequal.py </code></pre> <p>to</p> <pre><code>/home/path/to/django/project/some_app/management/commands/codequal_utils.py </code></pre>
2
2016-09-21T15:05:52Z
[ "python", "django", "setuptools", "django-management-command" ]
SciPy Minimize with monotonically decreasing Xs constraint
39,620,149
<p>I am looking to do a strenuous optimization in which I use SciPy to optimize discount factors for bond cashflows (application less important, but if interested). So essentially I take multiple known values 'P', where P[i] is a function of C[i] known constant, and array X (X[j]=x(t) where x is a function of time). where the sum-product of C[i] and X = P.</p> <p>Hope that makes some sense, but essentially in order for a sensible result, I want to put a constraint where X (my array of x values) has the constraint that x[j] &lt; x[j-1], that is, x's are monotonically decreasing.</p> <p>Here is my code snippet for the optimization function:</p> <p>In [400]:</p> <pre><code>import numpy as np import pandas as pd import scipy as s def MyOptimization(X): P=np.array([99.,100.,105.,110.]) #just example known "P" array, in reality closer to 40 values c=np.array([1.25,4.,3.1,2.5]) #Cash flows for each P t=np.array([[1.2,2.,4.,10.0],[0.5,1.],[2.3,5.,10.5],[1.7]]) #time t of each cash flow, multiple per 'P' #remember P=X(t)*c[i] and x(t) where x[i+1]&lt;x[i] tlist=[] #t's will be used as index, so pulling individual values for i in t: for j in i: tlist.append(j) df=pd.DataFrame(data=X,index=tlist).drop_duplicates().sort() #dataframe to hold t (index) and x, x(t), and P(x,c) where c is known #print df sse=0 for i in range(0,len(P)): pxi = np.sum(df.loc[t[i],0].values*c[i])+100*df.loc[t[i][-1],0] sse=sse+(pxi-P[i])**2 #want to minimize sum squared errors between calculated P(x,c) and known P return sse cons=({'type':'ineq','fun': lambda x: x[1] &lt; x[0]}) #trying to define constraint that x is decreasing with t opti=s.optimize.minimize(MyOptimization,x0=[0.90,0.89,0.88,0.87,0.86,0.85,0.84,0.83,0.82,0.81],bounds=([0,1],)*10,constraints=cons) </code></pre> <p>In [401]:</p> <pre><code>opti </code></pre> <p>Out[401]:</p> <pre><code>status: 0 success: True njev: 4 nfev: 69 fun: 5.445290696814009e-15 x: array([ 0.90092322, 0.89092322, 0.88092322, 0.94478062, 0.86301329, 0.92834564, 
0.84444848, 0.83444848, 0.96794781, 1.07317073]) message: 'Optimization terminated successfully.' jac: array([ -7.50609263e-05, -7.50609263e-05, -7.50609263e-05, -5.92906077e-03, 3.46914830e-04, 9.17475767e-03, -4.89504256e-04, -4.89504256e-04, -1.61263312e-02, 8.35321580e-03, 0.00000000e+00]) nit: 4 </code></pre> <p>And it is clear to see where in the results the x array is not decreasing. (tried adding (0,1) bounds as well but result failed, so focussing on this for now.</p> <p>The important line here for the constraint that I'm really not sure about is:</p> <p><code>cons=({'type':'ineq','fun': lambda x: x[1] &lt; x[0]})</code></p> <p>I tried following the documentation, but clearly it hasn't worked.</p> <p>Any ideas greatly appreciated.</p>
1
2016-09-21T15:01:14Z
39,621,270
<p>Let's try </p> <pre><code>def con(x): for i in range(len(x)-1): if x[i] &lt;= x[i+1]: return -1 return 1 cons=({'type':'ineq','fun': con}) </code></pre> <p>This should reject lists that aren't set up like you want, but I'm not sure if scipy is going to like it.</p>
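The constraint function above can be unit-tested on its own before handing it to the optimizer. One caveat worth flagging: a step function like this gives gradient-based solvers such as SLSQP no slope to follow, so a smooth per-pair formulation (one `x[i] - x[i+1]` inequality per adjacent pair) usually behaves better; the sketch below only checks the accept/reject logic:

```python
def con(x):
    # positive return value = feasible for an 'ineq' constraint,
    # negative = infeasible
    for i in range(len(x) - 1):
        if x[i] <= x[i + 1]:
            return -1
    return 1

assert con([0.90, 0.89, 0.88]) == 1    # strictly decreasing: accepted
assert con([0.90, 0.94, 0.88]) == -1   # 0.94 > 0.90: rejected
assert con([0.90, 0.90, 0.88]) == -1   # ties count as violations too
```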
0
2016-09-21T15:52:57Z
[ "python", "optimization", "scipy" ]
sklearn Logistic Regression with n_jobs=-1 doesn't actually parallelize
39,620,185
<p>I'm trying to train a huge dataset with sklearn's logistic regression. I've set the parameter n_jobs=-1 (also have tried n_jobs = 5, 10, ...), but when I open htop, I can see that it still uses only one core.</p> <p>Does it mean that logistic regression just ignores the n_jobs parameter?</p> <p>How can I fix this? I really need this process to become parallelized...</p> <p>P.S. I am using sklearn 0.17.1</p>
0
2016-09-21T15:03:31Z
39,620,443
<p>The parallel processing backend also depends on the solver method. If you want to utilize multiple cores, the <code>multiprocessing</code> backend is needed. </p> <p>But solvers like 'sag' can only use the <code>threading</code> backend.</p> <p>Also, in many cases the run is dominated by pre-processing, which blocks parallel execution.</p>
0
2016-09-21T15:14:52Z
[ "python", "python-2.7", "parallel-processing", "scikit-learn", "logistic-regression" ]
Derive a Google Form View ID from the Definition ID
39,620,273
<p>When I define a form, it has the url:</p> <p><a href="https://docs.google.com/a/domain.com/forms/d/1d2Y9R9JJwymtjEbZI6xTMLtS9wB7GDMaopTQeNNNaD0/edit" rel="nofollow">https://docs.google.com/a/domain.com/forms/d/1d2Y9R9JJwymtjEbZI6xTMLtS9wB7GDMaopTQeNNNaD0/edit</a></p> <p>I finish adding the questions I want the form to have and use the "Send" button to distribute my form; I do not click the "Include form in email" tickbox. The person that I e-mail the form to now receives an e-mail inviting them to complete the form; when they do so, they end up on a page with the following url:</p> <p><a href="https://docs.google.com/a/domain.com/forms/d/e/1FAIpQLScINR5rudwEKNVwlvz45ersqk_SO0kNcyN_EM1tJe3mXeFksw/viewform" rel="nofollow">https://docs.google.com/a/domain.com/forms/d/e/1FAIpQLScINR5rudwEKNVwlvz45ersqk_SO0kNcyN_EM1tJe3mXeFksw/viewform</a></p> <p>In my AppEngine / Python 2.7 code I only have access to this id - '1d2Y9R9JJwymtjEbZI6xTMLtS9wB7GDMaopTQeNNNaD0' - my question is: how can I get from this original definition id to the view id - '1FAIpQLScINR5rudwEKNVwlvz45ersqk_SO0kNcyN_EM1tJe3mXeFksw'?</p> <p>I have tried using the Google Drive v2 AND v3 REST APIs to retrieve the 'definition' file, but the response has no reference (that I can see) to this 'view' id. I have also tried using the Google AppsScript FormApp API to retrieve the 'definition' form, but this also contains no reference (that I can see) to the 'view' id. In all the cases I have tried, if I use the 'view' ID as the source, the APIs return a 404 error indicating that this 'view' ID is not the ID of a Google Drive file.</p> <p>Any ideas?</p>
0
2016-09-21T15:07:50Z
39,665,908
<p>Use Google Apps Script to publish a Web App (or use the Execution API) to leverage the "getPublishedUrl" function:</p> <pre><code>function doGet(theRequest){ var theDefinitionId = theRequest.parameters.formId; var thePublishedUrl = myAPI_(theDefinitionId); var output = ContentService.createTextOutput(thePublishedUrl); output = output.setMimeType(ContentService.MimeType.JSON); return output; } function myAPI_(theDefinitionId){ var thePublishedUrl = null; try{ var existingForm = FormApp.openById(theDefinitionId); thePublishedUrl = existingForm.getPublishedUrl(); } catch (err){ Logger.log(err); } var theReturn = {publishedUrl: thePublishedUrl}; var theJSONReturn = JSON.stringify(theReturn); return theJSONReturn; } </code></pre>
0
2016-09-23T16:53:07Z
[ "python", "google-form" ]
How do I add a custom field to logstash/kibana?
39,620,302
<p>I am using <a href="https://github.com/vklochan/python-logstash" rel="nofollow">python-logstash</a> in order to write to logstash. It offers the option to add extra fields but problem is that all fields are under the <code>message</code> field. What I want to accomplish is adding a new field at the higher level. </p> <p>I found the option to do that from the <code>logstash.config</code> (using <code>ruby</code>/<code>grok</code>/<code>mutate</code> plugins) but this solution is not a scalable one (Would have to configure for every machine instance)</p> <p>Something like:</p> <pre><code>logger.info('my message') </code></pre> <p>And in Kibana I will see: </p> <pre><code>{ '@timestamp': ... ... ... 'message': 'my message' 'new_field' : 'new_field_value' } </code></pre> <p>How do I do that?</p> <p>Thanks.</p>
0
2016-09-21T15:08:41Z
39,650,991
<p>From reading the python-logstash documentation, they refer to "extra" fields and say they "add extra fields to logstash message", which seems like exactly what you want!</p> <pre><code># add extra field to logstash message extra = { 'test_string': 'python version: ' + repr(sys.version_info), 'test_boolean': True, 'test_dict': {'a': 1, 'b': 'c'}, 'test_float': 1.23, 'test_integer': 123, 'test_list': [1, 2, '3'], } test_logger.info('python-logstash: test extra fields', extra=extra) </code></pre> <p>And then ensure that you have the "json" codec in the logstash input block like this:</p> <pre><code>input { udp { port =&gt; 5959 codec =&gt; json } } </code></pre> <p><strong>EDIT</strong>: I tested this with the sample script and get the following Logstash output:</p> <pre><code>{ "host" =&gt; "xxx", "path" =&gt; "pylog.py", "test_float" =&gt; 1.23, "type" =&gt; "logstash", "tags" =&gt; [], "test_list" =&gt; [ [0] 1, [1] 2, [2] "3" ], "test_boolean" =&gt; true, "level" =&gt; "INFO", "stack_info" =&gt; nil, "message" =&gt; "python-logstash: test extra fields", "@timestamp" =&gt; "2016-09-25T15:23:58.259Z", "test_dict" =&gt; { "a" =&gt; 1, "b" =&gt; "c" }, "test_string" =&gt; "python version: sys.version_info(major=3, minor=5, micro=1, releaselevel='final', serial=0)", "@version" =&gt; "1", "test_integer" =&gt; 123, "logger_name" =&gt; "python-logstash-logger" } </code></pre> <p>All of the extra fields added show at the same level as the message field. They are not added to the 'message' inner-object.</p>
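The reason the extras end up at the top level is plain `logging` machinery: keyword `extra` entries become attributes on the `LogRecord`, which python-logstash's formatter then serializes next to the built-in fields. A stdlib-only sketch of that mechanism (no logstash required):

```python
import logging

class CaptureHandler(logging.Handler):
    """Keeps emitted records so we can inspect the extra fields."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger('python-logstash-demo')
logger.setLevel(logging.INFO)
handler = CaptureHandler()
logger.addHandler(handler)

extra = {'test_string': 'hello', 'test_integer': 123}
logger.info('test extra fields', extra=extra)

record = handler.records[0]
# The extras became attributes of the LogRecord itself, which is why a
# logstash formatter can serialize them at the top level of the message.
assert record.test_string == 'hello'
assert record.test_integer == 123
assert record.getMessage() == 'test extra fields'
```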
0
2016-09-23T00:57:14Z
[ "python", "logstash" ]
Optimal bulk query size for multiSearch in ElasticSearch
39,620,328
<p>I am trying to query elasticsearch with multisearch, but it doesn't seem to improve the time a lot.</p> <p>For approx <strong>70k queries</strong>, the times taken by different bulk sizes are:</p> <p>For <strong>single search</strong> for every item, time taken = <strong>2611s</strong></p> <p>For <strong>multisearch (bulksize=1000)</strong>, time taken = <strong>2400s</strong> </p> <p>For <strong>multisearch (bulksize=10)</strong>, time taken = <strong>2326s</strong></p> <p>So, I need to know: </p> <p>a) Is this the correct way to do MultiSearch?</p> <p>b) What is the optimal bulk size for MultiSearch?</p> <p>Here's my code:</p> <pre><code>search_arr = [] for k in range(i,i+BULK_SIZE): search_arr.append({'index':'test'}) search_arr.append({"query": {"match": {"title": title[k]}}, "size": 5}) request ='' for each in search_arr: request += '%s \n' %json.dumps(each) resp = es.msearch(body=request) </code></pre>
0
2016-09-21T15:09:46Z
39,624,742
<p>The number of concurrent searches is limited by the Search Thread Pool. </p> <p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html" rel="nofollow">https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html</a></p> <blockquote> <p>For count/search operations. Thread pool type is fixed with a size of int((# of available_processors * 3) / 2) + 1, queue_size of 1000.</p> </blockquote> <p>This means if you have single processor, then you will have 2 concurrent searches, and rest of the requests will go into the queue and will be processed as and when the threads become available again. </p>
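The quoted sizing rule is easy to tabulate for a few machine sizes (this mirrors only the documented default; a cluster may override the pool size in its settings):

```python
def search_thread_pool_size(available_processors):
    # fixed pool: int((# of available_processors * 3) / 2) + 1
    return int((available_processors * 3) / 2) + 1

for cores in (1, 2, 4, 8):
    print(cores, '->', search_thread_pool_size(cores))
# 1 -> 2, 2 -> 4, 4 -> 7, 8 -> 13

assert search_thread_pool_size(1) == 2   # the single-processor case above
```

With a queue_size of 1000, requests beyond the pool size wait in the queue rather than running concurrently, which is why ever-larger msearch batches stop paying off.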
1
2016-09-21T19:11:48Z
[ "python", "algorithm", "elasticsearch" ]
Web Scraping with Javascript Contents using Python PyQt
39,620,351
<p>I am now performing a task of scraping content systematically from a course list which seems to be rendered by javascript. I followed some scripts using PyQt4 on the web but failed (which I copied below). More precisely, the script works at some websites with javascript which loads content with clicking on its specific link. However, the following website (ouhk, the link I copied below in the script) does not seem to carry link for directing users to specific content, namely Programme Information, Programme Structure and Fee, etc. Instead, it uses tag containers and FTP for storing and loading information (that I found from its source code). </p> <p>I am wondering if there is anyway to modify the following script so that I can scrape those content by using PyQt4, or I have to look for other ways to achieve this purpose?</p> <pre><code>import sys from PyQt4.QtGui import * from PyQt4.QtCore import * from PyQt4.QtWebKit import * from lxml import html from bs4 import BeautifulSoup #import urllib.request #from urllib.parse import urljoin #Take this class for granted.Just use result of rendering. class Render(QWebPage): def __init__(self, url): self.app = QApplication(sys.argv) QWebPage.__init__(self) self.loadFinished.connect(self._loadFinished) self.mainFrame().load(QUrl(url)) self.app.exec_() def _loadFinished(self, result): self.frame = self.mainFrame() self.app.quit() url = 'http://www.ouhk.edu.hk/wcsprd/Satellite?pagename=OUHK/tcSchSing2014&amp;c=C_LIPACE&amp;cid=1450268562831&amp;lang=eng&amp;sch=LIP' r = Render(url) result = r.frame.toHtml() print result </code></pre>
3
2016-09-21T15:10:50Z
39,621,672
<p>Look into using the selenium library! I have scraped multiple websites with this library. People state that it is slow, but for my purposes it works great.</p> <p>Also, if you're kinda new to web scraping, look into what XPaths are for scraping elements that would otherwise be difficult to get to. With XPath, all you need to do in a Chrome browser is right click the page, inspect element, unfold all the tags, and then right click the tag you want to scrape and click Copy XPath; then you can paste the path into the selenium library's calls. Really simple; here's a link for selenium information. </p> <p><a href="http://selenium-python.readthedocs.io/" rel="nofollow">http://selenium-python.readthedocs.io/</a></p>
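To get a feel for XPath without driving a browser, the standard library's ElementTree understands a small subset of it; selenium's `find_element_by_xpath` then accepts full XPath expressions of the same shape (the markup below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny XHTML-like snippet standing in for a real page
html = """
<html>
  <body>
    <div id="courses">
      <span class="title">Programme Information</span>
      <span class="title">Programme Structure</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(html)
# ElementTree supports attribute predicates like this one:
titles = [e.text for e in root.findall(".//span[@class='title']")]
assert titles == ['Programme Information', 'Programme Structure']
```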
0
2016-09-21T16:16:03Z
[ "javascript", "python", "web-scraping", "pyqt" ]
Django filter with two hops over ForeignKeys
39,620,374
<p>In Django, is it possible to do something like this?</p> <p><code>foo = Account.objects.filter(owner__address__zipcode='94704').get()</code></p> <p>with the following premises:</p> <ul> <li>Account has an <code>Owner</code> foreign key to an Owner model.</li> <li>Owner has an <code>Address</code> foreign key to an Address model.</li> <li>Address has a <code>zipcode</code> char field.</li> </ul>
0
2016-09-21T15:11:29Z
39,620,525
<p>Yes, this is supported in Django ORM. </p> <p>The feature is <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#lookups-that-span-relationships" rel="nofollow">documented here</a>. </p>
0
2016-09-21T15:18:15Z
[ "python", "django", "django-models" ]
My code scrapes data reliably on my machine, but not others
39,620,387
<p>I use Ubuntu 16.04, and I wrote some code that takes in an eBay link and scrapes the number of units of that product sold, if the information is available. My version looks like this:</p> <pre><code>import requests from bs4 import BeautifulSoup def sold(url): soup = BeautifulSoup(requests.get(url).text, 'html.parser') amount_sold = soup.find('span', attrs={'class':"qtyTxt vi-bboxrev-dsplblk vi-qty-fixAlignment"}) if amount_sold: amount_sold = amount_sold.find('a') amount_sold = amount_sold.get_text().replace(u',', u'')[:-5] else: amount_sold = "N/A" </code></pre> <p>When I use this function on my machine, it reliably scrapes the data every time it is available. For example, for this page: <a href="http://www.ebay.com/itm/13-lighted-led-bat-halloween-window-silhouette-decoration/311651989599?hash=item488fe7f85f" rel="nofollow">http://www.ebay.com/itm/13-lighted-led-bat-halloween-window-silhouette-decoration/311651989599?hash=item488fe7f85f</a> , it will reliably return 53 (at the time of this writing). However, when I try this code on another machine (on the other machine I've tried Windows, Cygwin, and even an external Ubuntu server), my function returns "N/A", because my amount_sold variable does not find anything at all. I went into the page to make sure that the html code is the same, and when I tested my code by manually writing out each line on this other machine, it worked perfectly. It is only when it runs within a function that it fails to work.</p> <p>Is there a way to ensure that my code will work on any machine? What are some reasons the code may work on my own, but not on a different machine? And why does the code work when I test it line-by-line, but not when it's inside a function?</p>
0
2016-09-21T15:12:08Z
39,620,720
<p>I don't know how you got it to work on your machine, but to search for a tag with multiple classes, you're supposed to pass them in as a list whose elements are the names of the classes you're interested in, like this:</p> <pre><code>soup.find('span', attrs={'class':["qtyTxt", "vi-bboxrev-dsplblk", "vi-qty-fixAlignment"]}) </code></pre> <p>I suspect you're using a different version of <code>beautifulsoup</code> that allows you to pass in the classes as a single string (I don't know of any version that does that).</p> <p>This answer was tested on <code>beautifulsoup4</code> version <code>4.5</code></p> <p>This is the output from <code>pip freeze</code> on my machine.</p> <p><code>beautifulsoup4==4.5.0</code></p>
0
2016-09-21T15:26:46Z
[ "python", "web-scraping", "ebay" ]
the k-fold test can get result just by itself?
39,620,395
<p>My problem is that when I comment out the pipe_lr in the first line, I can still get an accuracy score. It makes me confused; I think it should report an error. Why doesn't it? It looks like it doesn't need the definition of pipe_lr. This is the relevant part. <a href="http://i.stack.imgur.com/l0xbB.png" rel="nofollow">enter image description here</a></p> <pre><code>from sklearn.ensemble import ExtraTreesClassifier from sklearn.feature_selection import SelectFromModel import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import LabelEncoder from sklearn.pipeline import Pipeline from sklearn.tree import DecisionTreeClassifier from sklearn.cross_validation import StratifiedKFold pipe_lr=Pipeline([('normalization',mms), ('feature_selection',feature_selection), ('classification',DecisionTreeClassifier())]) """ K-fold test """ kfold=StratifiedKFold(y=y_train,n_folds=10,random_state=1) scores=[] features=[] for k,(train,test) in enumerate (kfold): pipe_lr.fit(X_train[train],y_train[train]) score=pipe_lr.score(X_train[test],y_train[test]) scores.append(score) print('Fold: %s, Class dist.: %s,Acc: %.3f' %(k+1, np.bincount(y_train[train]), score)) </code></pre>
-1
2016-09-21T15:12:31Z
39,623,745
<p>The short answer is <strong>no, it cannot</strong>. Your code will not run without the <code>pipe_lr</code> line. The only reason for it to run without it is that you ran it in some peculiar environment, like an ipython notebook, and previously ran the same code <strong>with</strong> this line, so the object still persists in memory in the same scope (thus the second run still accesses the original <code>pipe_lr</code> object). Or you actually defined this variable somewhere else in your code (for example in the chunk which defines the data, <code>mms</code>, etc., which you removed from the question itself). There is no way for this code to work (or even run) without <code>pipe_lr</code> when <strong>correctly run</strong> in a clean environment. </p>
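The stale-binding effect is easy to demonstrate: in a fresh process the commented-out name is simply gone, while a long-lived notebook kernel keeps the earlier binding alive:

```python
# Pretend the defining line was commented out and this is a clean interpreter:
try:
    pipe_lr          # name was never bound in this process
except NameError:
    outcome = 'NameError'
else:
    outcome = 'still defined'

print(outcome)   # NameError - exactly what the questioner expected to see
```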
0
2016-09-21T18:14:09Z
[ "python", "machine-learning", "scikit-learn" ]
Dividing every value in one list by every value in another list
39,620,517
<p>I am writing a function to find all the rational zeros of a polynomial and in need to divide each number in a list by each number in a list Ex.</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [1/1,2/1,3/1,4/1,...,1/2,2/2,3/2,4/2,...1/3,2/3,3/3,4/3,etc...] </code></pre> <p>How would I accomplish this? Edit: Here is my code:</p> <pre><code>from fractions import Fraction def factor(x): x = abs(x) factors = [] new_factors = [] for n in range(1,x): if x % n == 0: factors.append(n) new_factors.append(n) for y in factors: new_factors.append(y * -1) new_factors = sorted(new_factors) return new_factors print(factor(8)) def find_zeros(fnctn,pwr): last = fnctn[len(fnctn)] first = fnctn[0] P_s = factor(last) Q_s = factor(first) p_zeros = [] </code></pre> <p>Would I do something like this:</p> <pre><code>for x in P_s: for y in Q_s: p_zeros.append(x/y) </code></pre>
-1
2016-09-21T15:17:50Z
39,620,654
<pre><code>zeros = [] for i in list1: zeros.extend([i/j for j in list2]) </code></pre>
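For example, with shortened inputs (a sketch of how this behaves under Python 3 true division):

```python
list1 = [1, 2, 3, 4]
list2 = [1, 2]

zeros = []
for i in list1:
    zeros.extend([i / j for j in list2])

print(zeros)  # [1.0, 0.5, 2.0, 1.0, 3.0, 1.5, 4.0, 2.0]
```

Under Python 2, add `from __future__ import division` (or cast one operand to `float`) to get the same fractional results.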
0
2016-09-21T15:23:46Z
[ "python", "math" ]
Dividing every value in one list by every value in another list
39,620,517
<p>I am writing a function to find all the rational zeros of a polynomial and in need to divide each number in a list by each number in a list Ex.</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [1/1,2/1,3/1,4/1,...,1/2,2/2,3/2,4/2,...1/3,2/3,3/3,4/3,etc...] </code></pre> <p>How would I accomplish this? Edit: Here is my code:</p> <pre><code>from fractions import Fraction def factor(x): x = abs(x) factors = [] new_factors = [] for n in range(1,x): if x % n == 0: factors.append(n) new_factors.append(n) for y in factors: new_factors.append(y * -1) new_factors = sorted(new_factors) return new_factors print(factor(8)) def find_zeros(fnctn,pwr): last = fnctn[len(fnctn)] first = fnctn[0] P_s = factor(last) Q_s = factor(first) p_zeros = [] </code></pre> <p>Would I do something like this:</p> <pre><code>for x in P_s: for y in Q_s: p_zeros.append(x/y) </code></pre>
-1
2016-09-21T15:17:50Z
39,620,856
<p>You can achieve it by using <code>itertools.product()</code> and <code>map()</code></p> <pre><code>&gt;&gt;&gt; from itertools import product &gt;&gt;&gt; from operator import div &gt;&gt;&gt; a = [1,2,3,4,5,6,10,12,15,20,30,60] &gt;&gt;&gt; b = [1,2,3,6] &gt;&gt;&gt; map(lambda x: div(x[1], x[0]), product(a, b)) [1, 2, 3, 6, 0, 1, 1, 3, 0, 0, 1, 2, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] </code></pre> <p><strong>Explanation:</strong></p> <p><code>itertools.product()</code> gives cartesian product of two lists. For example, in your case:</p> <pre><code>&gt;&gt;&gt; list(product(a, b)) [(1, 1), (1, 2), (1, 3), (1, 6), (2, 1), (2, 2), (2, 3), (2, 6), (3, 1), (3, 2), (3, 3), (3, 6), (4, 1), (4, 2), (4, 3), (4, 6), (5, 1), (5, 2), (5, 3), (5, 6), (6, 1), (6, 2), (6, 3), (6, 6), (10, 1), (10, 2), (10, 3), (10, 6), (12, 1), (12, 2), (12, 3), (12, 6), (15, 1), (15, 2), (15, 3), (15, 6), (20, 1), (20, 2), (20, 3), (20, 6), (30, 1), (30, 2), (30, 3), (30, 6), (60, 1), (60, 2), (60, 3), (60, 6)] </code></pre> <p>where as, <code>map(func, list)</code> is used to executed function <code>func</code> on every element of <code>list</code> and return new list</p>
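Note that `product(a, b)` yields pairs `(a_element, b_element)`, so `div(x[1], x[0])` above actually computes `b/a`; to divide each element of the first list by each element of the second, the operands go the other way. A Python 3 sketch (where `operator.div` no longer exists) with shortened inputs:

```python
from itertools import product

a = [1, 2, 3]
b = [1, 2]

result = [x / y for x, y in product(a, b)]
print(result)  # [1.0, 0.5, 2.0, 1.0, 3.0, 1.5]
```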
-1
2016-09-21T15:33:20Z
[ "python", "math" ]
Dividing every value in one list by every value in another list
39,620,517
<p>I am writing a function to find all the rational zeros of a polynomial and in need to divide each number in a list by each number in a list Ex.</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [1/1,2/1,3/1,4/1,...,1/2,2/2,3/2,4/2,...1/3,2/3,3/3,4/3,etc...] </code></pre> <p>How would I accomplish this? Edit: Here is my code:</p> <pre><code>from fractions import Fraction def factor(x): x = abs(x) factors = [] new_factors = [] for n in range(1,x): if x % n == 0: factors.append(n) new_factors.append(n) for y in factors: new_factors.append(y * -1) new_factors = sorted(new_factors) return new_factors print(factor(8)) def find_zeros(fnctn,pwr): last = fnctn[len(fnctn)] first = fnctn[0] P_s = factor(last) Q_s = factor(first) p_zeros = [] </code></pre> <p>Would I do something like this:</p> <pre><code>for x in P_s: for y in Q_s: p_zeros.append(x/y) </code></pre>
-1
2016-09-21T15:17:50Z
39,620,966
<p>All I needed were nested for loops</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [] for x in list1: for y in list2: zeros.append(x/y) print(zeros) </code></pre> <p>output: <code>[1.0, 0.5, 0.3333333333333333, 0.16666666666666666, 2.0, 1.0, 0.6666666666666666, 0.3333333333333333, 3.0, 1.5, 1.0, 0.5, 4.0, 2.0, 1.3333333333333333, 0.6666666666666666, 5.0, 2.5, 1.6666666666666667, 0.8333333333333334, 6.0, 3.0, 2.0, 1.0, 10.0, 5.0, 3.3333333333333335, 1.6666666666666667, 12.0, 6.0, 4.0, 2.0, 15.0, 7.5, 5.0, 2.5, 20.0, 10.0, 6.666666666666667, 3.3333333333333335, 30.0, 15.0, 10.0, 5.0, 60.0, 30.0, 20.0, 10.0]</code> I appreciate your help everyone!</p>
0
2016-09-21T15:37:41Z
[ "python", "math" ]
Dividing every value in one list by every value in another list
39,620,517
<p>I am writing a function to find all the rational zeros of a polynomial and in need to divide each number in a list by each number in a list Ex.</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [1/1,2/1,3/1,4/1,...,1/2,2/2,3/2,4/2,...1/3,2/3,3/3,4/3,etc...] </code></pre> <p>How would I accomplish this? Edit: Here is my code:</p> <pre><code>from fractions import Fraction def factor(x): x = abs(x) factors = [] new_factors = [] for n in range(1,x): if x % n == 0: factors.append(n) new_factors.append(n) for y in factors: new_factors.append(y * -1) new_factors = sorted(new_factors) return new_factors print(factor(8)) def find_zeros(fnctn,pwr): last = fnctn[len(fnctn)] first = fnctn[0] P_s = factor(last) Q_s = factor(first) p_zeros = [] </code></pre> <p>Would I do something like this:</p> <pre><code>for x in P_s: for y in Q_s: p_zeros.append(x/y) </code></pre>
-1
2016-09-21T15:17:50Z
39,621,974
<p>A solution without itertools, for whoever may want it:</p> <p>To get this result:</p> <pre><code>list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] zeros = [1/1,2/1,3/1,4/1,...,1/2,2/2,3/2,4/2,...1/3,2/3,3/3,4/3,etc...] </code></pre> <p><strong>English Explanation</strong></p> <p>You can iterate a variable, say <code>i</code>, through <code>range(len(list1))</code> to get the index of each element in <code>list1</code> in order. </p> <p>To get the corresponding index in <code>list2</code> just floor divide <code>i</code> by <code>len(list1)/float(len(list2))</code>.</p> <p>Then we'll take the element from <code>list1</code> and divide by the element in <code>list2</code> and add that to some initially empty list (you named this zeros).</p> <p><strong>Python Code</strong> (correct me if I'm wrong; it's untested and hastily written)</p> <pre><code>zeros = [] list1 = [1,2,3,4,5,6,10,12,15,20,30,60] list2 = [1,2,3,6] for i in range(len(list1)): zeros.append(list1[i] / float(list2[int(i // (len(list1) / float(len(list2))))])) print(zeros) </code></pre> <p><strong>Postword</strong></p> <p>You didn't mention what version of python you're using. This will influence the default behavior when dividing integers. See <a href="http://stackoverflow.com/questions/183853/in-python-what-is-the-difference-between-and-when-used-for-division">this SO Q/A</a> for an explanation of division for the major versions of python. I expect you want float division because 1/4 will evaluate to zero with floor division. The code I wrote should work for either version of python, as 1//2 is explicit floor division and 1/float(2) is explicit float division.</p>
0
2016-09-21T16:32:30Z
[ "python", "math" ]
Python, why is my probabilistic neural network (PNN) always predicting zeros?
39,620,547
<p>I'm working with Python 2.7.5 on Linux CentOS 7 machine. I'm trying to apply a <strong>probabilistic neural network (PNN)</strong> my dataset, to solve a <strong>binary classification problem</strong>.</p> <p>I'm using the following Python packages: numpy, sklearn, neupy.algorithms. I'm trying to follow <a href="https://github.com/itdxer/neupy/blob/master/examples/rbfn/pnn_iris.py">this example used for the iris dataset</a>.</p> <p>The issue is that my <strong>PNN always predicts zero values</strong> (elements classified as zeros), and I cannot understand why...</p> <p>Here's my dataset ("dataset_file.csv"). There are 34 features and 1 label target (the last column, that might be 0 or 1):</p> <pre><code>47,1,0,1,0,20,1,0,1,24,1,1,0,2,1,8050,9,1,274,60,258,65,7,3.2,105,289,0,0,79,1,0,0,0,34,0 55,1,0,1,0,45,1,0,0,1,1,1,1,3,0,11200,7,0,615,86,531,97,5.4,2.6,96,7541,1.6,0.8,6,1,1,1,1,42,0 29,1,1,1,0,23,0,1,0,1,0,0,0,2,1,5300,12,1,189,30,203,72,7,3.5,93,480,0,0,90,1,0,0,0,43,1 39,1,0,1,0,10,1,0,0,3,0,1,1,0,1,7910,14,1,462,28,197,50,8,4.5,93,459,5,2.8,45,1,1,0,0,21,0 47,1,0,1,0,10,1,1,1,1.5,1,1,0,3,1,9120,4,0,530,71,181,60,6.2,3.8,83,213,3.6,1.95,53,1,1,0,0,11,0 57,1,0,1,0,50,0,1,0,24,1,0,1,3,1,16000,9,0,330,78,172,74,5.9,2.9,112,332,4.1,2.1,82,1,1,0,0,23,1 44,1,0,1,0,15,1,1,0,0.5,1,1,1,2,0,5800,14,0,275,44,155,105,7.2,3.5,84,360,3.44,1.6,55,1,1,0,0,24,0 49,1,3,1,0,25,1,1,1,1,0,1,0,3,1,8200,12,1,441,74,237,111,6.2,3.6,79,211,0,0,91,1,0,0,0,43,0 56,1,0,1,0,5,1,0,0,3,1,0,0,3,1,5100,7,1,188,58,185,62,7.8,3.9,112,472,0,0,83,1,0,0,0,34,0 33,1,4,1,0,20,1,0,1,3,0,0,0,3,1,7300,10,1,329,40,139,80,6.9,3.7,89,122,3.4,1.2,75,1,1,0,0,33,0 22,0,0,1,0,15,1,0,0,1,1,1,1,0,1,3700,8,0,617,53,267,128,6.2,3.8,91,3060,3.1,1.9,63,1,1,0,0,54,0 82,0,5,1,0,60,1,0,1,3,1,1,1,0,0,8900,11,1,275,83,255,93,5.9,3.1,95,455,4.8,1.9,68,1,1,0,0,55,0 49,0,2,1,0,20,1,0,1,2,1,0,0,0,1,8500,6,1,292,84,241,79,6.8,3.9,100,158,3.4,1.25,75,1,1,1,0,65,0 
51,1,4,1,0,51,1,1,1,2,1,0,1,0,1,18300,14,1,522,91,268,105,6.1,3.1,98,758,4.2,2.5,19,1,1,1,1,67,0 61,1,2,1,0,20,1,0,0,3,1,0,0,3,1,6600,9,1,563,101,268,78,6.4,3.7,115,694,5.2,3,29,1,1,1,1,77,0 48,0,1,1,0,28,1,0,0,12,1,0,0,3,1,9100,22,0,114,18,165,63,7.2,3.6,103,429,0,0,84,1,0,0,0,34,0 57,0,0,1,0,40,1,0,1,1,0,0,0,3,1,8100,8,0,264,15,120,69,6.8,3.4,91,390,0,0,91,1,0,0,0,23,0 57,0,0,1,0,25,1,0,0,12,0,1,0,0,0,6900,16,0,847,111,289,78,5.3,2.4,105,162,3.1,1.9,68,1,1,1,0,78,0 47,1,4,1,0,40,0,1,1,6,1,1,1,2,1,21500,10,0,632,121,219,108,7.5,2.8,149,1158,3.17,1.77,8,1,1,1,0,58,1 52,0,0,1,0,30,1,1,1,2,1,0,1,0,0,14600,5,1,405,88,280,140,5.8,3.1,121,983,3.9,1.8,17,1,1,1,1,76,0 50,1,2,1,0,16,1,1,0,1,1,1,1,0,1,12200,9,1,280,7,176,71,7.4,4.2,105,293,4.5,2.7,68,1,1,0,0,67,0 63,1,4,1,0,18,1,0,1,0.5,1,1,1,3,0,16400,8,0,479,93,140,64,5.8,3.7,226,1286,6.22,3.6,18,1,1,0,0,19,0 54,0,0,1,0,20,0,0,1,8,0,0,1,0,1,7200,10,0,366,71,284,73,6.4,3.7,114,384,4.1,2.8,65,1,1,0,0,24,1 31,0,3,1,0,10,0,1,0,1,1,1,1,1,1,3800,8,0,568,102,236,59,6.4,3.7,99,387,0,0,78,1,0,0,0,45,1 44,0,6,1,0,10,1,1,0,2,1,1,1,0,1,7700,15,1,274,44,139,62,6.7,4.1,93,129,0,0,76,1,0,0,0,24,0 50,0,6,1,0,20,1,0,0,3,1,1,0,0,1,5200,6,0,403,90,224,79,6.3,3.1,109,151,3.1,1.4,79,1,0,0,0,34,0 61,1,3,1,0,30,0,1,0,3,1,0,1,2,1,11500,7,0,668,88,178,65,6.7,3.08,104,680,4.1,2.5,22,1,1,0,0,23,1 </code></pre> <p>And here's my Python code:</p> <pre><code>import numpy as np from sklearn import datasets from sklearn.metrics import matthews_corrcoef from sklearn.cross_validation import StratifiedKFold from neupy.algorithms import PNN fileName="dataset_file.csv" TARGET_COLUMN=35 from numpy import genfromtxt input_dataset_data = genfromtxt(fileName, delimiter=',', skip_header=0, usecols=(range(0, TARGET_COLUMN-1))) #print(input_dataset_data) input_dataset_target = genfromtxt(fileName, delimiter=',', skip_header=0, usecols=(TARGET_COLUMN-1)) #print(input_dataset_target) kfold_number = 2 skfold = StratifiedKFold(input_dataset_target, kfold_number, 
shuffle=True) avarage_result = 0 print("&gt; Start classify input_dataset dataset") for i, (train, test) in enumerate(skfold, start=1): pnn_network = PNN(std=0.1, step=0.2, verbose=True) pnn_network.train(input_dataset_data[train], input_dataset_target[train]) predictions = pnn_network.predict(input_dataset_data[test]) print(predictions) #print(input_dataset_target[test]) mcc = matthews_corrcoef(input_dataset_target[test], predictions) print "The Matthews correlation coefficient is %f" % mcc print("kfold #{:&lt;2}: Guessed {} out of {}".format( i, np.sum(predictions == input_dataset_target[test]), test.size )) </code></pre> <p>Does anyone know <strong>why</strong> I am getting only predictions having 0 value? Can you give me some <strong>suggestion</strong> to solve this issue?</p> <p>Thanks!</p> <p>EDIT: Here's the normalized dataset (normalized by column):</p> <pre><code>0.55,1,0,1,0,0.29,1,0,1,0.46,1,1,0,0.67,1,0.37,0.41,1,0.08,0.47,0.23,0.13,0.82,0.46,0.25,0.04,0,0,0.52,1,0,0,0,0.33,0 0.65,1,0,1,0,0.64,1,0,0,0.02,1,1,1,1,0,0.52,0.32,0,0.18,0.67,0.47,0.2,0.64,0.38,0.23,1,0.24,0.18,0.04,1,1,1,1,0.41,0 0.34,1,0.13,1,0,0.33,0,0.5,0,0.02,0,0,0,0.67,1,0.25,0.55,1,0.06,0.23,0.18,0.15,0.82,0.51,0.22,0.06,0,0,0.6,1,0,0,0,0.42,1 0.46,1,0,1,0,0.14,1,0,0,0.06,0,1,1,0,1,0.37,0.64,1,0.14,0.22,0.17,0.1,0.94,0.65,0.22,0.06,0.75,0.64,0.3,1,1,0,0,0.2,0 0.55,1,0,1,0,0.14,1,0.5,1,0.03,1,1,0,1,1,0.42,0.18,0,0.16,0.55,0.16,0.12,0.73,0.55,0.2,0.03,0.54,0.44,0.35,1,1,0,0,0.11,0 0.67,1,0,1,0,0.71,0,0.5,0,0.46,1,0,1,1,1,0.74,0.41,0,0.1,0.6,0.15,0.15,0.69,0.42,0.27,0.04,0.61,0.48,0.54,1,1,0,0,0.22,1 0.52,1,0,1,0,0.21,1,0.5,0,0.01,1,1,1,0.67,0,0.27,0.64,0,0.08,0.34,0.14,0.21,0.85,0.51,0.2,0.05,0.51,0.36,0.36,1,1,0,0,0.23,0 0.58,1,0.38,1,0,0.36,1,0.5,1,0.02,0,1,0,1,1,0.38,0.55,1,0.13,0.57,0.21,0.23,0.73,0.52,0.19,0.03,0,0,0.6,1,0,0,0,0.42,0 0.66,1,0,1,0,0.07,1,0,0,0.06,1,0,0,1,1,0.24,0.32,1,0.06,0.45,0.16,0.13,0.92,0.57,0.27,0.06,0,0,0.55,1,0,0,0,0.33,0 
0.39,1,0.5,1,0,0.29,1,0,1,0.06,0,0,0,1,1,0.34,0.45,1,0.1,0.31,0.12,0.16,0.81,0.54,0.21,0.02,0.51,0.27,0.5,1,1,0,0,0.32,0 0.26,0,0,1,0,0.21,1,0,0,0.02,1,1,1,0,1,0.17,0.36,0,0.19,0.41,0.24,0.26,0.73,0.55,0.22,0.41,0.46,0.43,0.42,1,1,0,0,0.52,0 0.96,0,0.63,1,0,0.86,1,0,1,0.06,1,1,1,0,0,0.41,0.5,1,0.08,0.64,0.23,0.19,0.69,0.45,0.23,0.06,0.72,0.43,0.45,1,1,0,0,0.53,0 0.58,0,0.25,1,0,0.29,1,0,1,0.04,1,0,0,0,1,0.4,0.27,1,0.09,0.65,0.21,0.16,0.8,0.57,0.24,0.02,0.51,0.28,0.5,1,1,1,0,0.63,0 0.6,1,0.5,1,0,0.73,1,0.5,1,0.04,1,0,1,0,1,0.85,0.64,1,0.16,0.71,0.24,0.21,0.72,0.45,0.23,0.1,0.63,0.57,0.13,1,1,1,1,0.65,0 0.72,1,0.25,1,0,0.29,1,0,0,0.06,1,0,0,1,1,0.31,0.41,1,0.17,0.78,0.24,0.16,0.75,0.54,0.27,0.09,0.78,0.68,0.19,1,1,1,1,0.75,0 0.56,0,0.13,1,0,0.4,1,0,0,0.23,1,0,0,1,1,0.42,1,0,0.03,0.14,0.15,0.13,0.85,0.52,0.24,0.06,0,0,0.56,1,0,0,0,0.33,0 0.67,0,0,1,0,0.57,1,0,1,0.02,0,0,0,1,1,0.38,0.36,0,0.08,0.12,0.11,0.14,0.8,0.49,0.22,0.05,0,0,0.6,1,0,0,0,0.22,0 0.67,0,0,1,0,0.36,1,0,0,0.23,0,1,0,0,0,0.32,0.73,0,0.25,0.86,0.26,0.16,0.62,0.35,0.25,0.02,0.46,0.43,0.45,1,1,1,0,0.76,0 0.55,1,0.5,1,0,0.57,0,0.5,1,0.12,1,1,1,0.67,1,1,0.45,0,0.19,0.94,0.19,0.22,0.88,0.41,0.35,0.15,0.47,0.4,0.05,1,1,1,0,0.56,1 0.61,0,0,1,0,0.43,1,0.5,1,0.04,1,0,1,0,0,0.68,0.23,1,0.12,0.68,0.25,0.29,0.68,0.45,0.29,0.13,0.58,0.41,0.11,1,1,1,1,0.74,0 0.59,1,0.25,1,0,0.23,1,0.5,0,0.02,1,1,1,0,1,0.57,0.41,1,0.08,0.05,0.16,0.15,0.87,0.61,0.25,0.04,0.67,0.61,0.45,1,1,0,0,0.65,0 0.74,1,0.5,1,0,0.26,1,0,1,0.01,1,1,1,1,0,0.76,0.36,0,0.14,0.72,0.12,0.13,0.68,0.54,0.54,0.17,0.93,0.82,0.12,1,1,0,0,0.18,0 0.64,0,0,1,0,0.29,0,0,1,0.15,0,0,1,0,1,0.33,0.45,0,0.11,0.55,0.25,0.15,0.75,0.54,0.27,0.05,0.61,0.64,0.43,1,1,0,0,0.23,1 0.36,0,0.38,1,0,0.14,0,0.5,0,0.02,1,1,1,0.33,1,0.18,0.36,0,0.17,0.79,0.21,0.12,0.75,0.54,0.24,0.05,0,0,0.52,1,0,0,0,0.44,1 0.52,0,0.75,1,0,0.14,1,0.5,0,0.04,1,1,1,0,1,0.36,0.68,1,0.08,0.34,0.12,0.13,0.79,0.59,0.22,0.02,0,0,0.5,1,0,0,0,0.23,0 
0.59,0,0.75,1,0,0.29,1,0,0,0.06,1,1,0,0,1,0.24,0.27,0,0.12,0.7,0.2,0.16,0.74,0.45,0.26,0.02,0.46,0.32,0.52,1,0,0,0,0.33,0 0.72,1,0.38,1,0,0.43,0,0.5,0,0.06,1,0,1,0.67,1,0.53,0.32,0,0.2,0.68,0.16,0.13,0.79,0.45,0.25,0.09,0.61,0.57,0.15,1,1,0,0,0.22,1 </code></pre>
8
2016-09-21T15:18:56Z
39,719,772
<p>I have tried your code and it seems to me that your network predicts both classes: </p> <pre><code>import numpy as np from sklearn.cross_validation import StratifiedKFold from neupy.algorithms import PNN fileName="dataset_file.csv" TARGET_COLUMN=35 from numpy import genfromtxt input_dataset_data = genfromtxt(fileName, delimiter=',', skip_header=0, usecols=(range(0, TARGET_COLUMN-1))) input_dataset_target = genfromtxt(fileName, delimiter=',', skip_header=0, usecols=(TARGET_COLUMN-1)) kfold_number = 5 skfold = StratifiedKFold(input_dataset_target, kfold_number, shuffle=True) print("&gt; Start classify input_dataset dataset") for std in [0.2, 0.4, 0.6, 0.8, 1]: average_results = [] for i, (train, test) in enumerate(skfold, start=1): pnn_network = PNN(std=std, step=0.2, verbose=False, batch_size=2) pnn_network.train(input_dataset_data[train], input_dataset_target[train]) predictions = pnn_network.predict(input_dataset_data[test]) print("Positive in predictions:", 1 in predictions) average_results.append(np.sum(predictions == input_dataset_target[test]) /float(len(predictions))) print std, np.average(average_results) </code></pre> <p>An example output while tuning std:</p> <pre><code>1 0.881558441558 ('Positive in predictions:', True) ('Positive in predictions:', True) ('Positive in predictions:', True) ('Positive in predictions:', True) ('Positive in predictions:', True) </code></pre> <p>That is, for <code>std=1</code> the average accuracy across the folds is ~0.88, and 1 is predicted in each round of the stratified cross-validation. I have used the normalized data from your edit.</p>
4
2016-09-27T08:29:19Z
[ "python", "numpy", "scikit-learn", "neural-network", "neupy" ]
Import scipy on webserver online
39,620,559
<p>I'm running my Python matplotlib script online. I get an error like this:</p> <pre><code>Traceback (most recent call last): File "heatmap.py", line 8, in &lt;module&gt; from scipy.stats.kde import gaussian_kde ImportError: No module named scipy.stats.kde </code></pre> <p>I think that <code>scipy</code> is not found while importing. Is there any solution to this problem? How can I get around this so my script works?</p>
-1
2016-09-21T15:19:28Z
39,675,289
<p>SciPy was missing on my server; I had to install it manually. That solved the problem.</p>
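For anyone hitting the same error: a quick, generic way to check whether a package is importable in the current environment (a sketch; the fix itself is typically `pip install scipy` or the distro package manager, e.g. `yum install scipy` on CentOS):

```python
import importlib.util

def module_available(name):
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

print(module_available("scipy"))  # False until SciPy is installed
```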
0
2016-09-24T10:23:03Z
[ "python", "matplotlib", "scipy" ]
PEP8 style for decorators exceeding recommended line length
39,620,577
<p>I have been tinkering with <a href="https://github.com/pyinvoke/invoke" rel="nofollow">Invoke</a>, but I came across something that is quite an odd case that there seems to be no real PEP guideline for it.</p> <p>Invoke lets you define your own CLI arguments for whatever tasks you define, and you can optionally provide "help notes" to a <code>task</code> decorator. Specific example can be found <a href="http://docs.pyinvoke.org/en/latest/getting_started.html#adding-help-for-parameters" rel="nofollow">here</a>.</p> <p>If there are more parameters, I could probably do it like this, but it feels kind of weird if there are many tasks. What coding style would you guys do?</p> <pre><code>help_for_hi = { 'name1': 'Name of Person 1', 'name2': 'Name of Person 2', 'name3': 'Name of Person 3', } @task(help=help_for_hi) def hi(ctx, name1, name2, name3): """Say hi to three people.""" print("Hi %s, %s, and %s!" % (name1, name2, name3)) </code></pre> <p><strong>UPDATED</strong> </p> <p>As requested, this is what 'too long' would probably look like.</p> <pre><code>@task(help={'name1': 'Name of Person 1', 'name2': 'Name of Person 2', 'name3': 'Name of Person 3'}) def hi(ctx, name1, name2, name3): """Say hi to three people.""" print("Hi %s, %s, and %s!" % (name1, name2, name3)) </code></pre>
0
2016-09-21T15:20:05Z
39,620,750
<p>You'd just break up the decorator expression into many lines the same way you'd break up any other function call. One example could be:</p> <pre><code>@task(help={ 'name1': 'Name of Person 1', 'name2': 'Name of Person 2', 'name3': 'Name of Person 3'}) def hi(ctx, name1, name2, name3): """Say hi to three people.""" print("Hi %s, %s, and %s!" % (name1, name2, name3)) </code></pre> <p>... A codebase that I work on daily has quite a few <code>@mock.patch.object</code> written out this way.</p> <p>Obviously, breaking out the dictionary into a separate variable can work in this case as you've done in the question (I might actually prefer it assuming that the variable is well named :-).</p> <pre><code># Nothing wrong with this as far as I can tell ... help_for_hi = { 'name1': 'Name of Person 1', 'name2': 'Name of Person 2', 'name3': 'Name of Person 3', } @task(help=help_for_hi) def hi(ctx, name1, name2, name3): """Say hi to three people.""" print("Hi %s, %s, and %s!" % (name1, name2, name3)) </code></pre>
2
2016-09-21T15:28:07Z
[ "python", "python-decorators", "pep8", "pyinvoke" ]
Positioning the exponent of tick labels when using scientific notation in matplotlib
39,620,700
<p>I am looking for a way to change the position of the exponent on an axis when using scientific notation. I got stuck at this problem already a few times. I know already that the default formatter is the <code>ScalarFormatter</code> but it has no option to access the exponent somehow.</p> <p>There is a <a href="http://werthmuller.org/blog/2014/move-scientific-notation/" rel="nofollow">workaround</a> but I don't like it, since it also manipulates the existing ticklabels. So far I found out, that the list returned by the axes' <code>get_xmajorticklabels()</code> method contains one text object more if scientific notation is used. For example:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np fig= plt.figure(figsize = plt.figaspect(0.5)) ax1= fig.add_subplot(121) ax1.plot(np.arange(146), np.random.rand(146)) print(ax1.get_xmajorticklabels()) ax2= fig.add_subplot(122) ax2.plot(np.arange(146)*1e-6, np.random.rand(146)) print(ax2.get_xmajorticklabels()) </code></pre> <p><img src="http://i.stack.imgur.com/iq4iQ.png" alt="Image with the resulting plots"></p> <p>The prints give: <code>&lt;a list of 9 Text xticklabel objects&gt;</code> and <code>&lt;a list of 10 Text xticklabel objects&gt;</code> So I thought the additional list item could be the text object for the exponent. But it's empty when I print the text.</p> <p>Is there any way to access this exponent as a text object? Then it should be possible to set its position, isn't it?</p>
1
2016-09-21T15:25:43Z
39,622,229
<p>To access the list of <a href="http://matplotlib.org/api/text_api.html#matplotlib.text.Text" rel="nofollow">Text objects</a> you can use a method of that class, e.g. <code>get_text()</code>:</p> <pre><code>print([s.get_text() for s in ax2.get_xmajorticklabels()]) </code></pre> <p>However, the result of that is</p> <pre><code>&lt;a list of 9 Text xticklabel objects&gt; [u'', u'', u'', u'', u'', u'', u'', u'', u''] &lt;a list of 10 Text xticklabel objects&gt; [u'', u'', u'', u'', u'', u'', u'', u'', u'', u''] </code></pre> <p>After running <code>fig.tight_layout()</code>, the output of these <code>Text xticklabel objects</code> can now be enumerated:</p> <pre><code>&lt;a list of 9 Text xticklabel objects&gt; [(0.0, 0), (20.0, 0), (40.0, 0), (60.0, 0), (80.0, 0), (100.0, 0), (120.0, 0), (140.0, 0), (160.0, 0)] &lt;a list of 10 Text xticklabel objects&gt; [(0.0, 0), (2.0000000000000002e-05, 0), (4.0000000000000003e-05, 0), (6.0000000000000008e-05, 0), (8.0000000000000007e-05, 0), (0.0001, 0), (0.00012000000000000002, 0), (0.00014000000000000001, 0), (0.00016000000000000001, 0), (0, 0)] </code></pre> <p>For an exponent like <code>-7</code>, there is actually the same amount of objects in both lists.</p> <p>The closest method I've found for positioning the label is detailed <a href="http://stackoverflow.com/a/30772807/6779606">here by Scott</a>. Sadly, it will only work horizontally for the x-axis and vertically for the y-axis, so you can't really arbitrarily position the label on the graph.</p> <pre><code>ax2.get_xaxis().get_offset_text().set_position((0.5,0)) </code></pre> <p><a href="http://i.stack.imgur.com/9mGT9.png" rel="nofollow"><img src="http://i.stack.imgur.com/9mGT9.png" alt="exponent label moved horizontally"></a></p>
0
2016-09-21T16:47:35Z
[ "python", "matplotlib" ]
Spark: How to "reduceByKey" when the keys are numpy arrays which are not hashable?
39,620,767
<p>I have an RDD of (key,value) elements. The keys are NumPy arrays. NumPy arrays are not hashable, and this causes a problem when I try to do a <code>reduceByKey</code> operation.</p> <p>Is there a way to supply the Spark context with my manual hash function? Or is there any other way around this problem (other than actually hashing the arrays "offline" and passing to Spark just the hashed key)?</p> <p>Here is an example:</p> <pre><code>import numpy as np from pyspark import SparkContext sc = SparkContext() data = np.array([[1,2,3],[4,5,6],[1,2,3],[4,5,6]]) rd = sc.parallelize(data).map(lambda x: (x,np.sum(x))).reduceByKey(lambda x,y: x+y) rd.collect() </code></pre> <p>The error is:</p> <blockquote> <p>An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.</p> <p>...</p> <p>TypeError: unhashable type: 'numpy.ndarray'</p> </blockquote>
2
2016-09-21T15:29:01Z
39,621,058
<p>The simplest solution is to convert it to an object that is hashable. For example:</p> <pre><code>from operator import add reduced = sc.parallelize(data).map( lambda x: (tuple(x), x.sum()) ).reduceByKey(add) </code></pre> <p>and convert it back later if needed.</p> <blockquote> <p>Is there a way to supply the Spark context with my manual hash function</p> </blockquote> <p>Not a straightforward one. The whole mechanism depends on the object implementing a <code>__hash__</code> method, and C extension types cannot be monkey-patched. You could try to use dispatching to override <code>pyspark.rdd.portable_hash</code>, but I doubt it is worth it even when you consider the cost of the conversions. </p>
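The underlying issue is easy to reproduce without Spark: dictionary keys (which is effectively what `reduceByKey` relies on, via `__hash__`) reject lists and arrays but accept tuples. A plain-Python sketch of the convert-then-restore round trip, using the question's data as nested lists:

```python
data = [[1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6]]

sums = {}
for row in data:
    key = tuple(row)              # hashable stand-in for the array
    sums[key] = sums.get(key, 0) + sum(row)

print(sums)  # {(1, 2, 3): 12, (4, 5, 6): 30}
restored = [list(k) for k in sums]  # convert keys back when needed
```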
2
2016-09-21T15:42:58Z
[ "python", "numpy", "apache-spark", "pyspark" ]
How can I add a pylint config file for Visual Studio 2015?
39,620,834
<p>I've been using and loving the Python Tools for Visual Studio. <a href="https://www.visualstudio.com/en-us/features/python-vs.aspx" rel="nofollow">https://www.visualstudio.com/en-us/features/python-vs.aspx</a> New to Python, but I've been using VS for a very long time and it has been very quick to get up and running in the familiar environment.</p> <p>I've used jslint before but just saw that there is pylint integration in VS 2015. It does exactly what I want and very happy with it. I would like to edit the config file for it and disable some of the warnings. I've searched high and low and unable to find any information about where the file goes that is specific to Visual Studio and what the name of the config file needs to be. </p> <p>The jslint integration in VS had a right-click config section that you could alter the config file and mark certain variables as globals or hide other warnings. Does anyone know if you can do this for pylint? or what the file name and path should be to edit it manually?</p>
0
2016-09-21T15:32:10Z
39,620,835
<p>It took a while, but I found through trial and error that you can place a .pylintrc file in the project or solution folder and pylint will pick it up.</p> <p>To create this file, open a command window and type</p> <pre><code>pylint --generate-rcfile &gt; .pylintrc </code></pre> <p>You can then move that file to the root folder of your project or solution.</p> <p>Another interesting thing I learned in this investigation is that Windows won't let you rename a file so that it starts with a dot. You will get an error that says "You must type a file name." You can get around it by ending the filename with a dot as well: name it .pylintrc. and Windows will remove the trailing dot, leaving .pylintrc</p> <p>You want to add your message codes to the <code>disable=</code> list under the <code>[MESSAGES CONTROL]</code> section. They made it easy by allowing either the numeric code (C0303) or the symbolic name (trailing-whitespace).</p> <p>A great list of error codes and their meanings can be found here: <a href="http://pylint-messages.wikidot.com/all-codes" rel="nofollow">http://pylint-messages.wikidot.com/all-codes</a></p>
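For reference, a minimal sketch of the relevant part of the generated `.pylintrc` (the exact set of options varies between pylint versions; the disabled messages here are only examples):

```ini
[MESSAGES CONTROL]
# Either symbolic names or message ids work here.
disable=trailing-whitespace,missing-docstring,C0103
```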
0
2016-09-21T15:32:10Z
[ "python", "visual-studio", "visual-studio-2015", "pylint" ]
A query in SQLite3 with python
39,620,874
<p>I'm testing a query with python and sqlite3. First method works fine, but second is not working. It is about the defined type of variable containing the resgisters in <code>DB</code>:</p> <pre><code>import sqlite3 def insertar(): db1=sqlite3.connect('tabla.db') print("Estas en la funcion insertar") nombre1=raw_input("Escribe el titulo de la novela: ") autor1=raw_input("Escribe el autor de la novela: ") year1=str(input("Digita el any de la novela: ")) consulta=db1.cursor() strConsulta = "insert into tabla(nombre, autor, year) values\ ('"+nombre1+"','"+autor1+"','"+year1+"')" print(strConsulta) consulta.execute(strConsulta) consulta.close() db1.commit() db1.close() def consultar(): db2 = sqlite3.connect("tabla.db") print("Estas en la funcion insertar") db2row_factory = sqlite3.Row consulta = db2.cursor() consulta.execute("select * from tabla") filas = consulta.fetchall() lista = [] for fila in filas: s = {} s['nombre'] = fila['nombre'] s['autor'] = fila['autor'] s['year'] = str(fila['year']) lista.append(s) consulta.close() db2.close() return(lista) #consultar() def menu(): Opcion= input("\nIngresa la opcion deseada\n1.Inserta un valor en la tabla\n2.Consultar los valores de la tabla\n") if Opcion==1: insertar() menu() elif Opcion==2: ListaNovelas = consultar() for novela in ListaNovelas: print(novela['nombre'],novela['autor'],novela['year']) menu() menu() </code></pre> <blockquote> <p>I get this error while testing the second method <code>consultar()</code>.</p> </blockquote> <pre><code>$ python file.py Ingresa la opcion deseada 1.Inserta un valor en la tabla 2.Consultar los valores de la tabla 2 Estas en la funcion insertar Traceback (most recent call last): File "insertar.py", line 56, in &lt;module&gt; menu() File "insertar.py", line 51, in menu ListaNovelas = consultar() File "insertar.py", line 33, in consultar s['nombre'] = fila['nombre'] TypeError: tuple indices must be integers, not str </code></pre>
0
2016-09-21T15:34:05Z
39,620,935
<blockquote> <p><code>db2row_factory = sqlite3.Row</code></p> </blockquote> <p>This is the problematic line. Instead you meant to set the <code>row_factory</code> factory on the <code>db2</code> connection instance:</p> <pre><code>db2.row_factory = sqlite3.Row </code></pre> <p>Then, all the fetched rows would be now <a href="https://docs.python.org/2/library/sqlite3.html#sqlite3.Row" rel="nofollow"><code>sqlite3.Row</code></a> instances having dictionary-like access to field values.</p>
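A minimal, self-contained sketch of the corrected pattern (in-memory database, made-up row values):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row      # set on the connection instance
db.execute("create table tabla (nombre text, autor text, year integer)")
db.execute("insert into tabla values ('Rayuela', 'Cortazar', 1963)")

fila = db.execute("select * from tabla").fetchone()
print(fila["nombre"], fila["year"])  # Rayuela 1963
db.close()
```

Cursors created after `row_factory` is set inherit it, which is why every fetched row then supports string-keyed access.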
1
2016-09-21T15:36:03Z
[ "python", "sqlite3" ]
With pytest, why do inherited test methods not give proper assertion output?
39,620,910
<p>I am working on an application that has a command line interface that I would like to test using pytest. My idea was to define test classes like</p> <pre><code>class TestEchoCmd(Scenario): commands = [ "echo foo" ] stdout = "foo\n" </code></pre> <p>and then use a class with a test method to do the actual test. In other words, rather than define the test method – which is always the same – in every class that describes a scenario (which would be very tedious), these classes inherit the test method from the <code>Scenario</code> class:</p> <pre><code>class Scenario: commands = [] stdout = "" def test_scenario(self, capsys): for cmd in self.commands: ret = my_app.execute_command(shlex.split(cmd)) assert ret == 0 stdout, stderr = capsys.readouterr() assert stdout == self.stdout </code></pre> <p>This works fine as long as the tests pass. If a test fails, then pytest simply outputs an <code>AssertionError</code> with no additional information, unlike in the case when the test method is not inherited and it describes the <code>assert</code>ed expression in great detail. This is counterproductive because it is impossible to tell exactly why the assertion failed.</p> <p>Is there a way to make this work? I would really like to make the scenario descriptions as concise as possible. (I know about <code>@pytest.mark.parametrize</code> but I don't think it makes for very readable code in this case.)</p> <p>(Oh, incidentally, this is pytest 3.0.2 as provided by Debian GNU/Linux.)</p>
2
2016-09-21T15:35:13Z
39,621,521
<p>Found the answer myself: Pytest likes to rewrite the <code>assert</code> statements in the Python AST to add the more explicit output. This rewriting occurs (among other places) in classes that it deems to contain test methods, i.e., ones whose names start with <code>Test</code>. If a test method is inherited from a class in a different module that wouldn't otherwise be considered a test, the <code>assert</code> doesn't get rewritten, and hence no fancy error messages.</p> <p>The solution, according to the pytest docs, is to get pytest to rewrite the <code>assert</code> statements in that other module, too, using</p> <pre><code>pytest.register_assert_rewrite("module.name.goes.here") </code></pre> <p>This must be done before the module in question is imported.</p>
1
2016-09-21T16:06:38Z
[ "python", "inheritance", "py.test" ]
Replace/Add a class file to subfolder in jar using python zipfile command
39,621,003
<p>I have a jar file and a path which represents a location inside the jar file.</p> <p>Using this location, I need to replace a class file inside the jar (or add a class file in some cases). The class file I want to add is in another folder next to where the jar is located; this class file has to be moved into the jar.</p> <p>The code with which I am trying to achieve the above objective:</p> <pre><code> import zipfile import os zf = zipfile.ZipFile(os.path.normpath('D:\\mystuff\\test.jar'),mode='a') try: print('adding testclass.class') zf.write(os.path.normpath('D:\\mystuff\\testclass.class')) finally: print('closing') zf.close() </code></pre> <p>After executing the above code, the jar had the format shown below:</p> <pre><code> Jar |----META-INF |----com.XYZ |----Mystuff |--testclass.class </code></pre> <p>The actual output I need is:</p> <pre><code> Jar |----META-INF |----com.XYZ |--ABC |-testclass.class </code></pre> <p>How can I achieve this using the zipfile.write command, or any other way in Python?</p> <p>I didn't find any parameter in the write command where I can provide the destination file location inside the jar/zip file:</p> <p>ZipFile.write(filename, arcname=None, compress_type=None)</p>
0
2016-09-21T15:39:30Z
39,621,045
<p>Specify <code>arcname</code> to change the name of the file in the archive.</p> <pre><code>import zipfile import os zf = zipfile.ZipFile(os.path.normpath(r'D:\mystuff\test.jar'),mode='a') try: print('adding testclass.class') zf.write(os.path.normpath(r'D:\mystuff\testclass.class'),arcname="com.XYZ/ABC/testclass.class") finally: print('closing') zf.close() </code></pre> <p>Note: I doubt <code>test.jar</code> is your real jar name, since you didn't protect your string against special chars and the jar file opened would have been <code>'D:\mystuff\&lt;TAB&gt;est.jar'</code> (well, it doesn't work :))</p> <p>EDIT: if you want to add the new file but remove the old one, you have to proceed differently: you cannot delete from a zip file, so you have to rebuild a new one (inspired by <a href="http://stackoverflow.com/questions/513788/delete-file-from-zipfile-with-the-zipfile-module">Delete file from zipfile with the ZipFile Module</a>)</p> <pre><code>import zipfile import os infile = os.path.normpath(r'D:\mystuff\test.jar') outfile = os.path.normpath(r'D:\mystuff\test_new.jar') zin = zipfile.ZipFile(infile,mode='r') zout = zipfile.ZipFile(outfile,mode='w') for item in zin.infolist(): if os.path.basename(item.filename)=="testclass.class": pass # skip item else: # write the item to the new archive buffer = zin.read(item.filename) zout.writestr(item, buffer) print('adding testclass.class') zout.write(os.path.normpath(r'D:\mystuff\testclass.class'),arcname="com.XYZ/ABC/testclass.class") zout.close() zin.close() os.remove(infile) os.rename(outfile,infile) </code></pre>
0
2016-09-21T15:42:22Z
[ "python", "jar", "cmd", "zip" ]
adding a list of menu items in a django session
39,621,174
<p>I have the user menu in a list of objects, and I want to put it into the Django session. I've been trying, but Django tells me</p> <pre><code>'list' object has no attribute '_meta' </code></pre> <p>This is the object that represents an item in the menu:</p> <pre><code>class MenuItem(object): def __init__(self, id, name, link, items=None): self.id = id self.name = name self.link = link self.items = items </code></pre> <p>In a function I append <code>MenuItem</code> instances to a list:</p> <pre><code>menu = [] menu.append(MenuItem(1, "hi", "some_link")) </code></pre> <p>Finally, in the view, I try to put the menu into the session:</p> <pre><code>request.session['menu'] = menu </code></pre> <p>And at this point Django throws the</p> <blockquote> <p>'list' object has no attribute '_meta' error.</p> </blockquote>
0
2016-09-21T15:47:56Z
39,621,546
<p>This is happening because the object you're trying to store in the session is not serializable.</p> <p>You can test this with</p> <pre><code>import json json.dumps(MenuItem(1, "hi", "some_link")) </code></pre> <p>which gives</p> <pre><code>MenuItem object at ... is not JSON serializable </code></pre> <p>One thing you can do is write your own function to serialize the object. Here's one way to approach it:</p> <pre><code>class MenuItem(object): def __init__(self, id, name, link, items=None): self.id = id self.name = name self.link = link self.items = items def serialize(self): return self.__dict__ </code></pre> <p>Then,</p> <pre><code>menu = [] menu.append(MenuItem(1, "hi", "some_link").serialize()) request.session["menu"] = menu </code></pre> <p>Note that if <code>items</code> itself contains <code>MenuItem</code> objects, those will need to be serialized recursively as well.</p>
1
2016-09-21T16:08:09Z
[ "python", "django", "session", "menu" ]
os.rename returning winerror 2
39,621,199
<p>I'm trying to write a script that renames a file to the date it was sent as an email (which is the first part of the script, but doesn't matter for this part), then renames and sorts it into a 'Complete' folder. This is what my code looks like.</p> <p>Edit - I have all the imports way up at the top and I didn't show them, but I assume I have the right things imported; if you would like to see them, just ask.</p> <pre><code>dir5 = "C:\\Users\\Michael D\\Documents\\Test\\AmLit" dir6 = "C:\\Users\\Michael D\\Documents\\Test\\History" dir7 = "C:\\Users\\Michael D\\Documents\\Test\\MultiLit" dir8 = "C:\\Users\\Michael D\\Documents\\Test\\Physics" dir5_final = "C:\\Users\\Michael D\\Documents\\TestMove\\AmLit" dir6_final = "C:\\Users\\Michael D\\Documents\\TestMove\\History" dir7_final = "C:\\Users\\Michael D\\Documents\\TestMove\\MultiLit" dir8_final = "C:\\Users\\Michael D\\Documents\\TestMove\\Physics" now = datetime.datetime.now() now1 = (str(now.day) + '/' + str(now.month) + '/' + str(now.year)) dir5_files = os.listdir(dir5) dir6_files = os.listdir(dir6) dir7_files = os.listdir(dir7) dir8_files = os.listdir(dir8) for f in dir5_files: if (f.startswith("A") or f.startswith("a")): os.rename(f, now1 + " " + f) </code></pre> <p>but I keep getting this error:</p> <pre><code> RESTART: C:/Users/Michael D/Documents/Coding/Schoolwork Email/Email Sender Beta 1.7.21.9.16.py Traceback (most recent call last): File "C:/Users/Michael D/Documents/Coding/Schoolwork Email/Email Sender Beta 1.7.21.9.16.py", line 148, in &lt;module&gt; os.rename(f, now1 + " " + f) FileNotFoundError: [WinError 2] The system cannot find the file specified: 'A Test.txt' -&gt; '21/9/2016 A Test.txt' </code></pre> <p>Any thoughts as to what I'm doing wrong?</p>
0
2016-09-21T15:49:25Z
39,621,246
<p>2 errors:</p> <ol> <li><p>The files are not in your current working directory, so you must build full paths.</p></li> <li><p>You simply cannot have slashes in file names; the filesystem won't allow it, since the slash is used to separate path components.</p></li> </ol> <p>First, generate the date directly with underscores:</p> <pre><code>now1 = (str(now.day) + '_' + str(now.month) + '_' + str(now.year)) </code></pre> <p>Then replace</p> <pre><code>os.rename(f, now1 + " " + f) </code></pre> <p>with</p> <pre><code>os.rename(os.path.join(dir5, f), os.path.join(dir5, now1 + " " + f)) </code></pre> <p>and <code>A Test.txt</code> would be renamed to <code>21_9_2016 A Test.txt</code> in the directory you specified.</p>
0
2016-09-21T15:51:55Z
[ "python", "python-3.x" ]
Is it possible to show a link in Django to admins only?
39,621,264
<p>Is it possible to check in Django whether the logged-in user is an admin, and only in that case show a link that normal users can't see?</p> <p>Thanks<br> Croghs</p>
0
2016-09-21T15:52:34Z
39,621,308
<p>There's a built-in attribute for that (depending on what you call an admin, though):</p> <p><strong><a href="https://docs.djangoproject.com/en/1.10/ref/contrib/auth/#django.contrib.auth.models.User.is_superuser" rel="nofollow">user.is_superuser</a></strong></p> <p>You may even want <code>user.is_staff</code>; again, it depends on what you call an admin.</p>
1
2016-09-21T15:55:23Z
[ "python", "django" ]
Python Numpy - What is numpy.numarray?
39,621,298
<p>When experimenting with Numpy, I found:</p> <pre><code>In [1]: numpy.numarray Out[1]: 'removed' In [2]: type(numpy.numarray) Out[2]: str </code></pre> <p>What is <code>numpy.numarray</code>? What is its purpose in Numpy? Why does it only say <code>'removed'</code>?</p>
1
2016-09-21T15:54:56Z
39,621,370
<p><a href="https://wiki.python.org/moin/NumArray" rel="nofollow"><code>numarray</code></a> was a predecessor of <code>numpy</code>. A long time ago, there were several packages (<code>numarray</code>, <code>numeric</code>) which had lots of overlap and were eventually superseded by <code>numpy</code>.</p> <p>(Wikipedia has a <a href="https://en.wikipedia.org/wiki/NumPy#History" rel="nofollow">whole section on <code>numpy</code> history</a>, if you're into this sort of stuff.)</p> <p><code>numarray</code> was <a href="http://docs.scipy.org/doc/numpy/reference/routines.numarray.html" rel="nofollow">removed in 1.9</a>. The attribute has probably been replaced by this string so that attempts to reference it produce something legible. In any case, there is nothing useful in it anymore.</p>
1
2016-09-21T15:59:19Z
[ "python", "string", "numpy" ]
How to not parse periods with sklearn TfidfVectorizer?
39,621,351
<p>I just picked up sklearn so pardon my blatant ignorance :)...Right now I am trying to figure out how TfidfVectorizer works and how to avoid splitting on periods. </p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer docs= ("'CSC.labtrunk', 'CSC.datacenter', 'CSC.netbu', 'CSC.asr5k.general', 'CSC.ena', 'CSC.embu'", "'CSC.ena'", "'CSC.embu', 'CSC.security', 'CSC.ena'", "'CSC.embu', 'CSC.datacenter', 'CSC.labtrunk', 'CSC.content-security', 'CSC.ena', 'CSC.embu.dev', 'CSC.spv.custom-prods', 'CSC.voice', 'CSC.policy-mgmt', 'CSC.nuova'", "'CSC.embu', 'CSC.sys', 'CSC.policy-mgmt', 'CSC.content-security', 'CSC.datacenter'", "'CSC.asr5k.general'", "'CSC.sys'", "'CSC.labtrunk'") vec = TfidfVectorizer() trfm_data = vec.fit_transform(docs) print trfm_data </code></pre> <p>Output sample:</p> <pre><code> (0, 6) 0.200552591995 (0, 7) 0.200552591995 (0, 8) 0.265074737928 (0, 0) 0.265074737928 (0, 11) 0.316288846342 (0, 4) 0.228737749732 (0, 9) 0.228737749732 (0, 2) 0.757857197424 print vec.inverse_transform(trfm_data) </code></pre> <p>Output sample:</p> <pre><code>[u'embu', u'ena', u'general', u'asr5k', u'netbu', u'datacenter', u'labtrunk', u'csc'] </code></pre> <p>Ideally, I'd like to treat each item as a string such as <code>"'CSC.labtrunk', 'CSC.datacenter', 'CSC.netbu', 'CSC.asr5k.general', 'CSC.ena', 'CSC.embu'"</code>.</p>
0
2016-09-21T15:58:07Z
39,623,684
<p>Not sure if it is proper convention, but I used a list of strings rather than a tuple of strings, together with a custom tokenizer that splits each document on commas (stripping the surrounding quotes) instead of the default word pattern, and got the desired output.</p> <p>Sample data:</p> <pre><code>data = ["'CSC.labtrunk', 'CSC.datacenter', 'CSC.netbu', 'CSC.asr5k.general', 'CSC.ena', 'CSC.embu'", "'CSC.ena'", "'CSC.embu', 'CSC.security', 'CSC.ena'", "'CSC.embu', 'CSC.datacenter', 'CSC.labtrunk', 'CSC.content-security', 'CSC.ena', 'CSC.embu.dev', 'CSC.spv.custom-prods', 'CSC.voice', 'CSC.policy-mgmt', 'CSC.nuova'", "'CSC.embu', 'CSC.sys', 'CSC.policy-mgmt', 'CSC.content-security', 'CSC.datacenter'", "'CSC.asr5k.general'", "'CSC.sys'", "'CSC.labtrunk'"] vec = TfidfVectorizer(tokenizer=lambda doc: [t.strip(" '") for t in doc.split(',')], lowercase=False) trfm_data = vec.fit_transform(data) print vec.inverse_transform(trfm_data) </code></pre> <p>Sample output:</p> <pre><code>[array(['CSC.embu', 'CSC.ena', 'CSC.asr5k.general', 'CSC.netbu', 'CSC.datacenter', 'CSC.labtrunk'], dtype='|S20'), array(['CSC.ena'], dtype='|S20'), array(['CSC.security', 'CSC.embu', 'CSC.ena'] </code></pre>
0
2016-09-21T18:10:31Z
[ "python", "string", "scikit-learn", "vectorization" ]
How do you get at a value in a list of lists /replace it with a dictionary value?
39,621,399
<p>I need to get at a value in a list of lists. The list in question is called 'newdetails'. Its contents are below:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] </code></pre> <p>I found that print(newdetails[0]) prints the entire first list. What I need, however, is to get at index [0] of the first list, which is 12345670. I also need to replace index 3 (of both lists) with the VALUE in a dictionary which corresponds to the first GTIN number.</p> <p>The code I have so far is:</p> <pre><code>for gtin,currentstock in dictionary.items(): if newdetails[0]==gtin: newdetails[3]=currentstock print("checking to see what newdetails[0] and [3] and [9] is") print(newdetails[0]) print("or is it a matrix or a 2d array") print(newdetails[0,3]) print("now will this work...print replacement list") print(newdetails) </code></pre> <p>Can someone help?</p> <p>UPDATE:</p> <p>Thank you for your suggestion. I tried this (but it came up with an error):</p> <pre><code>for sub_list in newdetails: sub_list[3] = dictionary(sub_list[3], sub_list[3]) print("sublist") print(sub_list) </code></pre> <p>Error: sub_list[3] = dictionary(sub_list[3], sub_list[3]) TypeError: 'dict' object is not callable</p> <p>To clarify, the list I have is called 'newdetails' - and it has two lists inside it (read in from a file). The name of the dictionary is simply dictionary (for now), and it has a GTIN key and a 'currentstock' VALUE. I want the GTIN key in the dictionary that corresponds with the same GTIN value in BOTH lists to update index 3 (currently showing as 5) with the value 'currentstock' in the dictionary that corresponds to the GTIN number.</p> <p>Thanks in advance to the helpful genius who can help me solve this!</p>
1
2016-09-21T16:00:53Z
39,621,454
<p>If you have a list of lists</p> <pre><code>data = [['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] </code></pre> <p>then, as you've found, <code>data[0]</code> will return the first list in the list. If you then want the first item of that list, <code>data[0][0]</code> is what you need.</p> <p>Replacing index 3 of both lists can be done like this:</p> <pre><code>for row in data: row[3] = 'foo' </code></pre>
0
2016-09-21T16:03:32Z
[ "python", "list", "dictionary" ]
How do you get at a value in a list of lists /replace it with a dictionary value?
39,621,399
<p>I need to get at a value in a list of lists. The list in question is called 'newdetails'. Its contents are below:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] </code></pre> <p>I found that print(newdetails[0]) prints the entire first list. What I need, however, is to get at index [0] of the first list, which is 12345670. I also need to replace index 3 (of both lists) with the VALUE in a dictionary which corresponds to the first GTIN number.</p> <p>The code I have so far is:</p> <pre><code>for gtin,currentstock in dictionary.items(): if newdetails[0]==gtin: newdetails[3]=currentstock print("checking to see what newdetails[0] and [3] and [9] is") print(newdetails[0]) print("or is it a matrix or a 2d array") print(newdetails[0,3]) print("now will this work...print replacement list") print(newdetails) </code></pre> <p>Can someone help?</p> <p>UPDATE:</p> <p>Thank you for your suggestion. I tried this (but it came up with an error):</p> <pre><code>for sub_list in newdetails: sub_list[3] = dictionary(sub_list[3], sub_list[3]) print("sublist") print(sub_list) </code></pre> <p>Error: sub_list[3] = dictionary(sub_list[3], sub_list[3]) TypeError: 'dict' object is not callable</p> <p>To clarify, the list I have is called 'newdetails' - and it has two lists inside it (read in from a file). The name of the dictionary is simply dictionary (for now), and it has a GTIN key and a 'currentstock' VALUE. I want the GTIN key in the dictionary that corresponds with the same GTIN value in BOTH lists to update index 3 (currently showing as 5) with the value 'currentstock' in the dictionary that corresponds to the GTIN number.</p> <p>Thanks in advance to the helpful genius who can help me solve this!</p>
1
2016-09-21T16:00:53Z
39,621,461
<p>For multidimensional arrays, use the <a href="http://www.numpy.org/" rel="nofollow">numpy</a> library.</p> <p>In plain Python, simply use nested list indexing:</p> <pre><code>&gt;&gt;&gt; x = [['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] &gt;&gt;&gt; x[0] ['12345670', 'Iphone 9.0', '500', '5', '3', '5'] &gt;&gt;&gt; x[0][0] '12345670' &gt;&gt;&gt; x[0][3] = '600' &gt;&gt;&gt; x[0][3] '600' </code></pre>
0
2016-09-21T16:03:48Z
[ "python", "list", "dictionary" ]
How do you get at a value in a list of lists /replace it with a dictionary value?
39,621,399
<p>I need to get at a value in a list of lists. The list in question is called 'newdetails'. Its contents are below:</p> <pre><code>[['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] </code></pre> <p>I found that print(newdetails[0]) prints the entire first list. What I need, however, is to get at index [0] of the first list, which is 12345670. I also need to replace index 3 (of both lists) with the VALUE in a dictionary which corresponds to the first GTIN number.</p> <p>The code I have so far is:</p> <pre><code>for gtin,currentstock in dictionary.items(): if newdetails[0]==gtin: newdetails[3]=currentstock print("checking to see what newdetails[0] and [3] and [9] is") print(newdetails[0]) print("or is it a matrix or a 2d array") print(newdetails[0,3]) print("now will this work...print replacement list") print(newdetails) </code></pre> <p>Can someone help?</p> <p>UPDATE:</p> <p>Thank you for your suggestion. I tried this (but it came up with an error):</p> <pre><code>for sub_list in newdetails: sub_list[3] = dictionary(sub_list[3], sub_list[3]) print("sublist") print(sub_list) </code></pre> <p>Error: sub_list[3] = dictionary(sub_list[3], sub_list[3]) TypeError: 'dict' object is not callable</p> <p>To clarify, the list I have is called 'newdetails' - and it has two lists inside it (read in from a file). The name of the dictionary is simply dictionary (for now), and it has a GTIN key and a 'currentstock' VALUE. I want the GTIN key in the dictionary that corresponds with the same GTIN value in BOTH lists to update index 3 (currently showing as 5) with the value 'currentstock' in the dictionary that corresponds to the GTIN number.</p> <p>Thanks in advance to the helpful genius who can help me solve this!</p>
1
2016-09-21T16:00:53Z
39,621,539
<p>Let's say you have a list of <code>key</code>s mapped to the corresponding elements of each sub-list mentioned in the question. Below is sample code to achieve that:</p> <pre><code>my_list = [['12345670', 'Iphone 9.0', '500', '5', '3', '5'], ['12121212', 'Samsung Laptop', '900', '5', '3', '5']] my_key = ['item1', 'item2', 'item3', 'item4', 'item5', 'item6'] dict_list = [{k: v for k, v in zip(my_key, sub_list)} for sub_list in my_list] # Value of "dict_list": [{ 'item2': 'Iphone 9.0', 'item3': '500', 'item6': '5', 'item4': '5', 'item5': '3', 'item1': '12345670' }, { 'item2': 'Samsung Laptop', 'item3': '900', 'item6': '5', 'item4': '5', 'item5': '3', 'item1': '12121212' }] </code></pre> <p>To know more about how this code works, check <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow"><code>zip()</code></a> and <a href="http://stackoverflow.com/questions/14507591/python-dictionary-comprehension"><code>Dict Comprehension in Python</code></a>.</p> <p>In case you want <strong>to update the <code>value</code> of a column</strong>, say 'item6' of the <code>0</code>th entry in the list, you may do:</p> <pre><code>dict_list[0]['item6'] = 'new value' </code></pre> <p><strong>Update</strong>: Based on the comment from the OP:</p> <pre><code># my_gtin &lt;-- GTIN dict for sub_list in my_list: sub_list[3] = my_gtin.get(sub_list[3], sub_list[3]) </code></pre> <p>The above code will update the entry at index <code>3</code> of <code>sub_list</code> if the <code>key</code> is present in the <code>my_gtin</code> dict; otherwise <code>sub_list</code> will keep the same value.</p>
0
2016-09-21T16:07:47Z
[ "python", "list", "dictionary" ]
comparing parts of lines in two tsv files in python
39,621,474
<p>So I want to sum/analyse values pertaining to a given line in one file which match another file. The format of the first file I wish to compare against is:</p> <pre><code>Acetobacter cibinongensis Acetobacter Acetobacteraceae Rhodospirillales Proteobacteria Bacteria Acetobacter ghanensis Acetobacter Acetobacteraceae Rhodospirillales Proteobacteria Bacteria Acetobacter pasteurianus Acetobacter Acetobacteraceae Rhodospirillales Proteobacteria Bacteria </code></pre> <p>And the second file is like:</p> <pre><code>Blochmannia endosymbiont of Polyrhachis (Hedomyrma) turneri Candidatus Blochmannia Enterobacteriaceae Enterobacteriales Proteobacteria Bacteria 1990 7.511 14946.9 Blochmannia endosymbiont of Polyrhachis (Hedomyrma) turneri Candidatus Blochmannia Enterobacteriaceae Enterobacteriales Proteobacteria Bacteria 2061 6.451 13295.5 Calyptogena okutanii thioautotrophic gill symbiont Proteobacteria-undef Proteobacteria-undef Proteobacteria-undef Proteobacteria Bacteria 7121 2.466 17560.4 </code></pre> <p>What I want to do is parse every line in the first file, and for every line in the second file where the first 6 fields match, perform analysis on the numbers in the 3 fields following the species info.</p> <p>My code is as follows:</p> <pre><code>with open('file1', 'r') as file1: with open('file2', 'r') as file2: for line in file1: count = 0 line = line.split("\t") for l in file2: l = l.split("\t") if l[0:6] == line[0:6]: count+=1 count = str(count) print line + '\t' + count +'\t'+'\n' </code></pre> <p>Which I'm hoping will give me the line from the first file and the number of times that species was found in the second file. I know there's probably a better way of doing THIS particular part of the analysis but I wanted to give a simple example of the objective.. Anyway, I don't get any matches, i.e. I never see an instance where l[0:6] == line[0:6] is True. Any ideas?? :-S</p>
1
2016-09-21T16:04:26Z
39,621,575
<p>The root cause is that you consume <code>file2</code> during the first iteration of the outer loop; after that, the inner loop always iterates over nothing.</p> <p>Quick fix: read file2 fully and put it in a list. However, this is rather inefficient in terms of speed (O(N^2): double loop). It would be better to create a dictionary whose key is a tuple of the first 6 values.</p> <pre><code>with open('file2', 'r') as f: file2 = list(f) with open('file1', 'r') as file1: for line in file1: count = 0 line = line.split("\t") for l in file2: l = l.split("\t") if l[0:6] == line[0:6]: count+=1 count = str(count) print line + '\t' + count +'\t'+'\n' </code></pre> <p>Also, using the <code>csv</code> module configured with TAB as the separator would avoid some surprises in the future.</p> <p>Better version, using a dictionary for faster access to the data of <code>file2</code> (the first 6 elements form the key; note that we cannot use a <code>list</code> as a key since it's mutable, so we have to convert it to a <code>tuple</code>):</p> <pre><code>d = dict() # create the dictionary from file2 with open('file2', 'r') as file2: for l in file2: fields = l.split("\t") d[tuple(fields[0:6])] = fields[6:] # iterate through file1, and use dict lookup on data of file2 # much, much faster if file2 contains a lot of data with open('file1', 'r') as file1: for line in file1: count = 0 line = line.split("\t") if tuple(line[0:6]) in d: # check if in dictionary count+=1 # we could extract the extra data by accessing # d[tuple(line[0:6])] count = str(count) print(line + '\t' + count +'\t'+'\n') </code></pre>
0
2016-09-21T16:09:52Z
[ "python", "csv", "compare" ]
Match multiple times a group in a string
39,621,578
<p>I'm trying to use a regular expression. I have this string that has to be matched:</p> <pre><code> influences = {{hlist |[[Plato]] |[[Aristotle]] |[[Socrates]] |[[David Hume]] |[[Adam Smith]] |[[Cicero]] |[[John Locke]]}} {{hlist |[[Saint Augustine]] |[[Saint Thomas Aquinas]] |[[Saint Thomas More]] |[[Richard Hooker]] |[[Edward Coke]]}} {{hlist |[[Thomas Hobbes]] |[[Rene Descartes]] |[[Montesquieu]] |[[Joshua Reynolds]] |[[Sir William Blackstone|William Blackstone]]}} {{hlist |[[Niccolo Machiavelli]] |[[Dante Alighieri]] |[[Samuel Johnson]] |[[Voltaire]] |[[Jean Jacques Rousseau]] |[[Jeremy Bentham]]}} </code></pre> <p>I would like to extract from the text the following templates: </p> <pre><code>{{hlist .... }} </code></pre> <p>The following text, however, must not be matched:</p> <pre><code>main_interests = {{hlist |[[Music]] |[[Art]] |[[Theatre]] |[[Literature]]}} </code></pre> <p>I wrote this regex, but it doesn't work:</p> <pre><code>(?:^\|\s*)?(?:influences)\s*?=\s*?(?:(?:\s*\{\{hlist)\s*\|([\d\w\s\-()*—&amp;;\[\]|#%.&lt;&gt;·:/",\'!{}=•?’ á~ü°œéö$àèìòùÀÈÌÒÙáéíóúýÁÉÍÓÚÝâêîôûÂÊÎÔÛãñõÃÑÕäëïöüÿÄËÏÖÜŸçÇߨøÅ寿œ]*?)(?=\n))+ </code></pre> <p>I'm using Python.</p>
2
2016-09-21T16:10:07Z
39,621,712
<p>You can use a list comprehension with some regular expressions:</p> <pre><code>import re string = """ influences = {{hlist |[[Plato]] |[[Aristotle]] |[[Socrates]] |[[David Hume]] |[[Adam Smith]] |[[Cicero]] |[[John Locke]]}} {{hlist |[[Saint Augustine]] |[[Saint Thomas Aquinas]] |[[Saint Thomas More]] |[[Richard Hooker]] |[[Edward Coke]]}} {{hlist |[[Thomas Hobbes]] |[[Rene Descartes]] |[[Montesquieu]] |[[Joshua Reynolds]] |[[Sir William Blackstone|William Blackstone]]}} {{hlist |[[Niccolo Machiavelli]] |[[Dante Alighieri]] |[[Samuel Johnson]] |[[Voltaire]] |[[Jean Jacques Rousseau]] |[[Jeremy Bentham]]}} """ matches = [template.group(1) for match in re.findall(r'\{\{hlist.+?\}}', string) for template in re.finditer(r'\[\[([^]]+)\]\]', match)] print(matches) # ['Plato', 'Aristotle', 'Socrates', 'David Hume', 'Adam Smith', 'Cicero', 'John Locke', 'Saint Augustine', 'Saint Thomas Aquinas', 'Saint Thomas More', 'Richard Hooker', 'Edward Coke', 'Thomas Hobbes', 'Rene Descartes', 'Montesquieu', 'Joshua Reynolds', 'Sir William Blackstone|William Blackstone', 'Niccolo Machiavelli', 'Dante Alighieri', 'Samuel Johnson', 'Voltaire', 'Jean Jacques Rousseau', 'Jeremy Bentham'] </code></pre> <p>This uses two expressions, one for the outer part (<code>{{hlist...}}</code>) and another one for the inner part (<code>[[...]]</code>). <hr> See <a href="http://ideone.com/B7r3Tz" rel="nofollow"><strong>a demo on regex101.com</strong></a>.</p>
0
2016-09-21T16:18:25Z
[ "python", "regex" ]
django email username and password
39,621,594
<p>I'm implementing a contact form for one of my sites. One thing I'm not sure I understand completely is why you need <code>EMAIL_HOST_USER</code> and <code>EMAIL_HOST_PASSWORD</code>.</p> <p>The user would only need to provide his/her email address, so what is the <code>EMAIL_HOST_USER</code> referring to then and why would I need to specify an email and password?</p> <p>EDIT: I'm using webfaction as my mail server</p>
1
2016-09-21T16:11:13Z
39,628,967
<p><strong>EMAIL_HOST</strong> and <strong>EMAIL_PORT</strong> identify the SMTP server used to send emails to your users, while <strong>EMAIL_HOST_USER</strong> and <strong>EMAIL_HOST_PASSWORD</strong> are the credentials your site uses to authenticate to that server; they are not something the visitor provides.</p> <blockquote> <p>Mail is sent using the SMTP host and port specified in the EMAIL_HOST and EMAIL_PORT settings. The EMAIL_HOST_USER and EMAIL_HOST_PASSWORD settings, if set, are used to authenticate to the SMTP server, and the EMAIL_USE_TLS and EMAIL_USE_SSL settings control whether a secure connection is used.</p> </blockquote>
0
2016-09-22T01:41:37Z
[ "python", "django", "email", "smtp", "webfaction" ]
How to use a colormap from pyplot.jl as a function in julia-lang
39,621,637
<p>What would be the way to get, in Julia, something similar to</p> <pre><code>import numpy as np import matplotlib.pyplot as plt i, f, N = 0, 255, 100 colors = [ plt.cm.viridis(x) for x in np.linspace(i, f, N) ] </code></pre> <p>in Python? Generally speaking, I am looking for a way to get the list of RGB colors from a colormap, with a desired number of colors (N), using PyPlot.jl in Julia.</p>
2
2016-09-21T16:13:52Z
39,622,304
<p>This is easiest using PlotUtils (which is re-exported on <code>using Plots</code>):</p> <pre><code>julia&gt; using PlotUtils julia&gt; cm = cgrad(:viridis); julia&gt; colors = [cm[i] for i in linspace(0,1,10)] 10-element Array{ColorTypes.RGBA{Float64},1}: RGBA{Float64}(0.267004,0.004874,0.329415,1.0) RGBA{Float64}(0.280935,0.155726,0.468508,1.0) RGBA{Float64}(0.242949,0.291862,0.537908,1.0) RGBA{Float64}(0.190784,0.406937,0.555933,1.0) RGBA{Float64}(0.147463,0.512137,0.556955,1.0) RGBA{Float64}(0.120243,0.618071,0.536316,1.0) RGBA{Float64}(0.209417,0.718547,0.472197,1.0) RGBA{Float64}(0.422721,0.805438,0.351155,1.0) RGBA{Float64}(0.710003,0.8685,0.169494,1.0) RGBA{Float64}(0.993248,0.906157,0.143936,1.0) </code></pre>
3
2016-09-21T16:51:52Z
[ "python", "matplotlib", "colors", "julia-lang" ]
how to create a for loop for this case in python?
39,621,698
<p>Maybe the title seems strange, but I've been stuck for some time trying to figure out how to write a for loop for this case.</p> <p>I have a list of lists having this format:</p> <pre><code> data=[['nature', author1, author2, ...author n] ['sport', author1, author2, ....author n] .... ] </code></pre> <p>I have tried this code:</p> <pre><code> authors=[author1, author2, ...author n] for i in range(len(authors)): data = [['nature', function(names[i], 'nature')], ['sport', function(names[i], 'sport') ..] </code></pre> <p>but unfortunately it returns a result in this format:</p> <pre><code> data=[['nature', author1] ['sport', author1] .... ] </code></pre>
-1
2016-09-21T16:17:37Z
39,621,849
<pre><code>ary_grp_Example = [["AA1", "BB1"], ["CC2", "DD2"],["EE3","FF3"]] ### A three-row, two-column matrix (nested list) ary_grp_Example.pop(0) ## Kill the first record leaving two #Loop through by row (x) for int_CurCount in range(0,len(ary_grp_Example)): print ("Row: " + str(int_CurCount) + " was #" + str(ary_grp_Example[int_CurCount][0]) + "#" + str(ary_grp_Example[int_CurCount][1]) +"#") #Loop through by Cell (x,y) for int_RowCount in range(0,len(ary_grp_Example)): for int_ColCount in range(0,len(ary_grp_Example[int_RowCount]) ): print ("Cell Data for location =(" + str(int_RowCount) + "," + str(int_ColCount) + ") was #" + ary_grp_Example[int_RowCount][int_ColCount] + "#") </code></pre> <p>You can get to the data by using two indices, e.g. data[1][1] or data[1][2]. This structure is sometimes referred to as a matrix array or matrix list. Python's name for it is simply a "list", but it is a multi-dimensional (nested) list, aka a matrix array/list.</p> <p>Sport Author 2, for example, would be data[1][2], since indexing is zero-based.</p> <p>To loop through all the data you have to loop by row and by column, like you would a database result set.</p>
0
2016-09-21T16:25:48Z
[ "python", "django", "for-loop" ]
how to create a for loop for this case in python?
39,621,698
<p>Maybe the title seems strange, but I've been stuck for some time trying to figure out how to write a for loop for this case.</p> <p>I have a list of lists having this format:</p> <pre><code> data=[['nature', author1, author2, ...author n] ['sport', author1, author2, ....author n] .... ] </code></pre> <p>I have tried this code:</p> <pre><code> authors=[author1, author2, ...author n] for i in range(len(authors)): data = [['nature', function(names[i], 'nature')], ['sport', function(names[i], 'sport') ..] </code></pre> <p>but unfortunately it returns a result in this format:</p> <pre><code> data=[['nature', author1] ['sport', author1] .... ] </code></pre>
-1
2016-09-21T16:17:37Z
39,622,460
<p>Is what you want something along these lines?</p> <pre><code>&gt;&gt;&gt; data=[['nature', 'author1', 'author2', 'author3'],['sport', 'author1', 'author2', 'author3'],['Horses', 'author1', 'author2']] &gt;&gt;&gt; for i in range(len(data)): for x in range(1, len(data[i])): print(data[i][0], data[i][x]) nature author1 nature author2 nature author3 sport author1 sport author2 sport author3 Horses author1 Horses author2 </code></pre>
0
2016-09-21T17:00:25Z
[ "python", "django", "for-loop" ]
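The category/author pairing that the nested loops above print can also be collected into a list with a single comprehension. A sketch with made-up placeholder data (the variable names are illustrative, not from the original answer):

```python
# Flatten each ['category', author1, author2, ...] row into (category, author)
# pairs, mirroring the nested loops in the answer above.
data = [['nature', 'author1', 'author2'], ['sport', 'author1']]
pairs = [(row[0], author) for row in data for author in row[1:]]
```

`row[1:]` skips the category label in each row, so only the authors are paired with it.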
how to create a for loop for this case in python?
39,621,698
<p>Maybe the title seems strange, but I have been stuck for some time searching for how to create a for loop for this case:</p> <p>I have a list of lists in this format:</p> <pre><code> data=[['nature', author1, author2, ...author n] ['sport', author1, author2, ....author n] .... ] </code></pre> <p>I have tried this code:</p> <pre><code> authors=[author1, author2, ...author n] for i in range(len(authors)): data = [['nature', function(names[i], 'nature')], ['sport', function(names[i], 'sport') ..] </code></pre> <p>but unfortunately it returns a result in this format:</p> <pre><code> data=[['nature', author1] ['sport', author1] .... ] </code></pre>
-1
2016-09-21T16:17:37Z
39,623,566
<p>Convert it to a dictionary, and then iterate over the key pairs</p> <pre><code>data = [['foo', 1, 2, 3], ['bar', 2,3,4]] dat = {i[0]: i[1:] for i in data } for k, v in dat.items(): print("{0}: {1}".format(k, v)) Output bar: [2, 3, 4] foo: [1, 2, 3] </code></pre> <p>except.... <em>dont do this</em>.</p> <p>I'm merely showing this to show that your data should be in a dictionary in the first place.</p>
0
2016-09-21T18:03:09Z
[ "python", "django", "for-loop" ]
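Going the other way from the dictionary sketch above: if the data starts life as a dict mapping topic to authors, the original list-of-lists shape can be rebuilt from it. The `categories` data here is hypothetical, purely for illustration:

```python
# Rebuild [['topic', author1, author2, ...], ...] from a dict of topic -> authors.
categories = {'nature': ['a1', 'a2'], 'sport': ['a3', 'a4']}

# Prepend each topic to its author list; sort the items for a stable row order.
data = [[topic] + authors for topic, authors in sorted(categories.items())]
```

Sorting is only needed because plain dicts (in older Pythons especially) do not guarantee iteration order.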
python loadtxt with exotic format of table
39,621,800
<p>I have a file from a simulation which reads like:</p> <pre><code>5.2000 -0.01047 -0.02721 0.823400 -0.56669 1.086e-5 2.109e-5 -1.57e-5 -3.12e-5 0.823400 -0.56669 -0.02166 -0.01949 -2.28e-5 -2.66e-5 1.435e-5 1.875e-5 1.086e-5 2.109e-5 -2.28e-5 -2.66e-5 -0.01878 -0.01836 0.820753 -0.57065 -1.57e-5 -3.12e-5 1.435e-5 1.875e-5 0.820753 -0.57065 -0.01066 -0.02402 5.2005 -0.01045 -0.02721 0.823354 -0.56676 1.086e-5 2.109e-5 -1.57e-5 -3.12e-5 0.823354 -0.56676 -0.02167 -0.01947 -2.28e-5 -2.66e-5 1.435e-5 1.875e-5 1.086e-5 2.109e-5 -2.28e-5 -2.66e-5 -0.01878 -0.01833 0.820703 -0.57073 -1.57e-5 -3.12e-5 1.435e-5 1.875e-5 0.820703 -0.57073 -0.01063 -0.02401 5.2010 -0.01043 -0.02721 0.823309 -0.56683 1.087e-5 2.108e-5 -1.57e-5 -3.12e-5 0.823309 -0.56683 -0.02168 -0.01945 -2.28e-5 -2.66e-5 1.435e-5 1.874e-5 1.087e-5 2.108e-5 -2.28e-5 -2.66e-5 -0.01878 -0.01830 0.820654 -0.57080 -1.57e-5 -3.12e-5 1.435e-5 1.874e-5 0.820654 -0.57080 -0.01061 -0.02400 </code></pre> <p>I would like to get each block as a float plus an array of floats (the float being the '5.2000' and the array being the 4x8 table that follows), but the numpy loadtxt function doesn't handle this exotic kind of structure. Is there a solution?</p>
2
2016-09-21T16:23:30Z
39,625,664
<p>If the "tables" are always 4x8 then it may be easier to read the data in as a 1D array, then index/reshape this in order to get the output you desire:</p> <pre><code>import numpy as np # to get s you could do something like s = open(fname, 'r').read() s = """ 5.2000 -0.01047 -0.02721 0.8234 -0.56669 1.086e-5 2.109e-5 -1.57e-5 -3.12e-5 0.8234 -0.56669 -0.02166 -0.01949 -2.28e-5 -2.66e-5 1.435e-5 1.875e-5 1.086e-5 2.109e-5 -2.28e-5 -2.66e-5 -0.01878 -0.01836 0.820753 -0.57065 -1.57e-5 -3.12e-5 1.435e-5 1.875e-5 0.820753 -0.57065 -0.01066 -0.02402 5.2005 -0.01045 -0.02721 0.823354 -0.56676 1.086e-5 2.109e-5 -1.57e-5 -3.12e-5 0.823354 -0.56676 -0.02167 -0.01947 -2.28e-5 -2.66e-5 1.435e-5 1.875e-5 1.086e-5 2.109e-5 -2.28e-5 -2.66e-5 -0.01878 -0.01833 0.820703 -0.57073 -1.57e-5 -3.12e-5 1.435e-5 1.875e-5 0.820703 -0.57073 -0.01063 -0.02401 5.2010 -0.01043 -0.02721 0.823309 -0.56683 1.087e-5 2.108e-5 -1.57e-5 -3.12e-5 0.823309 -0.56683 -0.02168 -0.01945 -2.28e-5 -2.66e-5 1.435e-5 1.874e-5 1.087e-5 2.108e-5 -2.28e-5 -2.66e-5 -0.01878 -0.0183 0.820654 -0.57080 -1.57e-5 -3.12e-5 1.435e-5 1.874e-5 0.820654 -0.5708 -0.01061 -0.02400 """ # a 1D array of floats x = np.array(s.split(), dtype=np.double) # we can extract the first column by indexing every 33rd element, since each "section" # contains one float in the left-hand column and 4*8 = 32 values in the "table". first_col = x[::33] # we can extract the values corresponding to the "tables" by constructing a boolean # vector that is True wherever the index is not divisible by 33 tables = x[(np.arange(x.size) % 33) &gt; 0] # finally we can reshape these values to get an array of 4x8 tables stacked in the # first dimension tables = tables.reshape(-1, 4, 8) print(repr(first_col)) # array([ 5.2 , 5.2005, 5.201 ]) print(repr(tables[0])) # array([[ -1.04700000e-02, -2.72100000e-02, 8.23400000e-01, # -5.66690000e-01, 1.08600000e-05, 2.10900000e-05, # -1.57000000e-05, -3.12000000e-05], # [ 8.23400000e-01, -5.66690000e-01, -2.16600000e-02, # -1.94900000e-02, -2.28000000e-05, -2.66000000e-05, # 1.43500000e-05, 1.87500000e-05], # [ 1.08600000e-05, 2.10900000e-05, -2.28000000e-05, # -2.66000000e-05, -1.87800000e-02, -1.83600000e-02, # 8.20753000e-01, -5.70650000e-01], # [ -1.57000000e-05, -3.12000000e-05, 1.43500000e-05, # 1.87500000e-05, 8.20753000e-01, -5.70650000e-01, # -1.06600000e-02, -2.40200000e-02]]) </code></pre>
1
2016-09-21T20:06:52Z
[ "python", "numpy" ]
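A variant of the reshape idea in the answer above: instead of strided indexing plus a boolean mask, reshape the flat token stream to one 33-value row per record and slice columns. The tiny synthetic data here stands in for the real file, purely to illustrate the shape manipulation:

```python
import numpy as np

# Two fake records: a leading scalar followed by 32 identical table values each.
tokens = []
for scalar, fill in [(5.2, 0.1), (5.3, 0.2)]:
    tokens.append(str(scalar))
    tokens.extend([str(fill)] * 32)
s = " ".join(tokens)

# Parse all whitespace-separated tokens into one flat float array.
x = np.array(s.split(), dtype=np.double)

records = x.reshape(-1, 33)                 # one 33-value row per record
first_col = records[:, 0]                   # the leading scalars
tables = records[:, 1:].reshape(-1, 4, 8)   # the stacked 4x8 tables
```

This produces the same `first_col` and `tables` as the answer's masking approach, but the intermediate `(n, 33)` array makes the record structure explicit.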