how to edit entire field values of CSV in Python?
39,521,744
<p>I am trying to manipulate CSV data like this</p> <pre><code>ID,val1,val2,data1,val3,val4 BIGINT,BOOL,INT,VARCHAR,INT,BIGINT 10000,'F',1,'batman',1,0 20000,'T',0,'robin',1,1 30000,'T',1,'joker',0,1 </code></pre> <p>to</p> <pre><code>ID,val1,val2,data1,val3,val4 BIGINT,BOOL,BOOL,VARCHAR,BOOL,BIGINT 10000,'F','T','batman','T',0 20000,'T','F','robin','T',1 30000,'T','T','joker','F',1 </code></pre> <p>I have written the code, which prints the row as a list after the edit but somehow does not write it to the file correctly. Can someone please help me see what I am doing wrong here?</p> <pre><code>index_list=[] with open('input.csv') as fr: reader = csv.reader(fr) reader.next() reader.next() column_type=(reader.next()) for index,val in enumerate(column_type): if val=='INT': index_list.append(index) #this prints the list of fields where change to go print(index_list) with open('input.csv', 'rb') as f: reader1 = csv.reader(f) reader1.next() reader1.next() with open('out.csv', 'wb') as fr: for row in reader1: for key in index_list: if row[key]==1: row[key]='T' elif row[key]==0: row[key]='F' #this row prints correct data print(row) writer = csv.writer(fr) writer.writerows(row) </code></pre>
0
2016-09-15T23:24:14Z
39,521,808
<p>Looks like your indentation is wrong for the write statements. Also note that the values read from the CSV are strings, so compare against <code>'1'</code>/<code>'0'</code>, create the writer once, and use <code>writerow</code> (one row at a time) rather than <code>writerows</code>, which would treat each string in the row as a separate row:</p> <pre><code>with open('input.csv', 'rb') as f: reader1 = csv.reader(f) reader1.next() reader1.next() with open('test.csv', 'wb') as fr: writer = csv.writer(fr) for row in reader1: for key in index_list: if row[key]=='1': row[key]='T' elif row[key]=='0': row[key]='F' #this row prints correct data print(row) writer.writerow(row) </code></pre>
0
2016-09-15T23:32:17Z
[ "python", "csv" ]
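A runnable Python 3 sketch of the same fix (the simplified sample data and the function name are hypothetical; on Python 3, `reader.next()` becomes `next(reader)` and files open in text mode). It creates the writer once and calls `writerow` per row:

```python
import csv
import io

def convert_int_flags(src, dst):
    # src/dst are text-mode file objects; the type row (row 2) decides
    # which columns hold 0/1 flags that should become 'F'/'T'
    rows = list(csv.reader(src))
    header, types, data = rows[0], rows[1], rows[2:]
    idx = [i for i, t in enumerate(types) if t == 'INT']
    writer = csv.writer(dst)          # create the writer once, not per row
    writer.writerow(header)
    writer.writerow(['BOOL' if i in idx else t for i, t in enumerate(types)])
    for row in data:
        for i in idx:
            row[i] = "'T'" if row[i] == '1' else "'F'"  # csv values are strings
        writer.writerow(row)          # writerow, not writerows

src = io.StringIO("ID,val1\nBIGINT,INT\n10000,1\n20000,0\n")
dst = io.StringIO()
convert_int_flags(src, dst)
print(dst.getvalue())
```

The `io.StringIO` objects stand in for real files so the sketch is self-contained; with real files, pass the opened file objects instead.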
how to edit entire field values of CSV in Python?
39,521,744
<p>I am trying to manipulate CSV data like this</p> <pre><code>ID,val1,val2,data1,val3,val4 BIGINT,BOOL,INT,VARCHAR,INT,BIGINT 10000,'F',1,'batman',1,0 20000,'T',0,'robin',1,1 30000,'T',1,'joker',0,1 </code></pre> <p>to</p> <pre><code>ID,val1,val2,data1,val3,val4 BIGINT,BOOL,BOOL,VARCHAR,BOOL,BIGINT 10000,'F','T','batman','T',0 20000,'T','F','robin','T',1 30000,'T','T','joker','F',1 </code></pre> <p>I have written the code, which is printing the row as list after edit but somehow not writing it to file correctly. can somone please help what I am doing wrong here ?</p> <pre><code>index_list=[] with open('input.csv') as fr: reader = csv.reader(fr) reader.next() reader.next() column_type=(reader.next()) for index,val in enumerate(column_type): if val=='INT': index_list.append(index) #this prints the list of fields where change to go print(index_list) with open('input.csv', 'rb') as f: reader1 = csv.reader(f) reader1.next() reader1.next() with open('out.csv', 'wb') as fr: for row in reader1: for key in index_list: if row[key]==1: row[key]='T' elif row[key]==0: row[key]='F' #this row prints correct data print(row) writer = csv.writer(fr) writer.writerows(row) </code></pre>
0
2016-09-15T23:24:14Z
39,535,445
<p>You can use <code>pandas</code> and replace all the values as desired. It would look something like:</p> <pre><code>from StringIO import StringIO import pandas as pd TESTDATA = StringIO("""BIGINT,BOOL,INT,VARCHAR,INT 10000,'F',1,'batman',1 20000,'T',0,'robin',1 30000,'T',1,'joker',0""") df = pd.read_csv(TESTDATA, sep=",", engine='python') for column in df: if "int" in column.lower(): df[column] = df[column].replace([1], 'T') df[column] = df[column].replace([0], 'F') </code></pre> <p>which returns:</p> <pre><code> BIGINT BOOL INT VARCHAR INT.1 0 10000 'F' T 'batman' T 1 20000 'T' F 'robin' T 2 30000 'T' T 'joker' F </code></pre> <p>After that you can save it back to a CSV file.</p>
0
2016-09-16T15:44:37Z
[ "python", "csv" ]
List converted to Dropdown in Flask/WTForms
39,521,749
<p>Basically I have a list in Python and would like to programmatically call on and create a dropdown form from this list using WTForms into an HTML doc. I can't figure out what I am missing here when trying to use the <strong>SelectField</strong> approach in WTForms. The list is within "updatef". </p> <p>I get the error: <strong>ValueError: need more than 1 value to unpack</strong> when trying to run this.</p> <pre><code>class ReusableForm(Form): # Define the form fields and validation parameters updates = SelectField(u'Update Frequency', choices = updatef, validators = [validators.required()]) @app.route("/editor", methods=['GET', 'POST']) def hello(): form = ReusableForm(request.form) # calls on form if request.method == 'POST': updates = request.form['updates'] </code></pre> <p>HTML</p> <pre><code> &lt;form id="main" action="" method="post" role="form" spellcheck="true"&gt; &lt;p&gt; {{ form.updates }} &lt;/p&gt; </code></pre>
0
2016-09-15T23:24:40Z
39,522,040
<p><code>choices</code> has to be a list of 2 value tuples such as <code>[(1,'Daily'),(2,'Weekly')]</code> - your error seems to suggest you might only have a list of values.</p>
0
2016-09-16T00:04:09Z
[ "python", "flask", "jinja2", "wtforms", "flask-wtforms" ]
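Assuming `updatef` is a plain list of strings (the values here are hypothetical), the 2-tuple `choices` list WTForms expects can be built with a comprehension, reusing each string as both the stored value and the displayed label:

```python
updatef = ["Daily", "Weekly", "Monthly"]  # hypothetical frequency list

# SelectField wants (value, label) pairs; use each string for both
choices = [(freq, freq) for freq in updatef]
print(choices)
```

Pass the result as `choices=choices` when defining the `SelectField`.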
Creating a dictionary from a dictionary using a tuple Python
39,521,817
<p>given a dictionary with N keys and a tuple of K keys, K&lt;=N is there a pythonic way to get a dictionary with only the K keys?</p> <p>ex. </p> <pre><code>orig_dict = {'key1':'value1', 'key2':'value2', ..., 'keyN':'valueN'} tuple = ('key2', 'keyM') newdict = myFunc(orig_dict, tuple) print newdict </code></pre> <p>Output:</p> <pre><code>'key2':'value2', 'keyM':'valueM' </code></pre>
1
2016-09-15T23:34:31Z
39,521,846
<p>You can use a dictionary comprehension:</p> <pre><code>{k:v for k,v in orig_dict.iteritems() if k in tuple_keys} </code></pre> <p>Observe:</p> <pre><code>&gt;&gt;&gt; orig_dict = {'key1':'value1', 'key2':'value2', 'keyN':'valueN'} &gt;&gt;&gt; tuple_keys = ('key2', 'keyN') &gt;&gt;&gt; {k:v for k,v in orig_dict.iteritems() if k in tuple_keys} {'keyN': 'valueN', 'key2': 'value2'} </code></pre>
2
2016-09-15T23:38:04Z
[ "python", "dictionary", "tuples" ]
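The answer above uses the Python 2 `iteritems()`; a Python 3 version of the same comprehension (where `iteritems()` becomes `items()`) runs as written:

```python
orig_dict = {'key1': 'value1', 'key2': 'value2', 'keyN': 'valueN'}
tuple_keys = ('key2', 'keyN')

# Python 3 spelling: dict.iteritems() was removed, use .items()
newdict = {k: v for k, v in orig_dict.items() if k in tuple_keys}
print(newdict)
```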
Creating a dictionary from a dictionary using a tuple Python
39,521,817
<p>given a dictionary with N keys and a tuple of K keys, K&lt;=N is there a pythonic way to get a dictionary with only the K keys?</p> <p>ex. </p> <pre><code>orig_dict = {'key1':'value1', 'key2':'value2', ..., 'keyN':'valueN'} tuple = ('key2', 'keyM') newdict = myFunc(orig_dict, tuple) print newdict </code></pre> <p>Output:</p> <pre><code>'key2':'value2', 'keyM':'valueM' </code></pre>
1
2016-09-15T23:34:31Z
39,521,865
<p>Just use a comprehension:</p> <pre><code>tple = ('key2', 'keyM') {k: orig_dict[k] for k in tple} </code></pre> <p>Or if you prefer functional:</p> <pre><code>from operator import itemgetter dict(zip(tple, itemgetter(*tple)(orig_dict))) </code></pre> <p>What is more <em>pythonic</em> is debatable; what is definitely not pythonic is using <em>tuple</em> as a variable name.</p> <p>If some keys may not exist you can get the <a href="https://en.wikipedia.org/wiki/Intersection_(set_theory)" rel="nofollow"><em>intersection</em></a> with <a href="https://docs.python.org/2/library/stdtypes.html#dict.viewkeys" rel="nofollow"><em>viewkeys</em></a>:</p> <pre><code>common = orig_dict.viewkeys() &amp; tple dict(zip(common, itemgetter(*common)(orig_dict))) {k : orig_dict[k] for k in orig_dict.viewkeys() &amp; tple} </code></pre> <p>For <code>python3</code> just use <code>.keys()</code>, which returns a <em>dict_view</em> object as opposed to a list in python2.</p> <p>If you wanted to give a default value of <em>None</em> for missing keys, you could also use <em>map</em> with <em>dict.get</em> so missing keys would have their value set to None.</p> <pre><code>dict(zip(tple, map(orig_dict.get, tple))) </code></pre>
2
2016-09-15T23:40:02Z
[ "python", "dictionary", "tuples" ]
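A Python 3 version of the `itemgetter` approach above (where `.viewkeys()` becomes plain `.keys()`, which already returns a view), runnable as written:

```python
from operator import itemgetter

orig_dict = {'key1': 'value1', 'key2': 'value2', 'keyM': 'valueM'}
tple = ('key2', 'keyM')

# itemgetter(*tple)(orig_dict) returns the values in tple's order
subset = dict(zip(tple, itemgetter(*tple)(orig_dict)))
print(subset)

# Python 3: dict.keys() is a view, so set intersection works directly
common = orig_dict.keys() & tple
safe_subset = {k: orig_dict[k] for k in common}
```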
Creating a dictionary from a dictionary using a tuple Python
39,521,817
<p>given a dictionary with N keys and a tuple of K keys, K&lt;=N is there a pythonic way to get a dictionary with only the K keys?</p> <p>ex. </p> <pre><code>orig_dict = {'key1':'value1', 'key2':'value2', ..., 'keyN':'valueN'} tuple = ('key2', 'keyM') newdict = myFunc(orig_dict, tuple) print newdict </code></pre> <p>Output:</p> <pre><code>'key2':'value2', 'keyM':'valueM' </code></pre>
1
2016-09-15T23:34:31Z
39,521,929
<p>Use a dictionary comprehension</p> <pre><code>orig_dict = {'key1':'value1', 'key2':'value2', 'keyN':'valueN'} keys = ('key2', 'keyM') &gt;&gt;&gt; {k:orig_dict[k] for k in keys if k in orig_dict} {'key2': 'value2'} </code></pre> <p>This will be more efficient than iterating over the dictionary's keys and checking whether the key exists in the tuple because it is an O(1) operation to lookup a dict vs O(n) to search in a tuple.</p> <p>Alternatively you can use a set to get the common keys and combine that with a dict comprehension:</p> <pre><code>&gt;&gt;&gt; {k:orig_dict[k] for k in set(keys).intersection(orig_dict)} {'key2': 'value2'} </code></pre>
1
2016-09-15T23:49:02Z
[ "python", "dictionary", "tuples" ]
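Both spellings in this answer silently drop keys that are missing from the dictionary, which is easy to confirm with a key that does not exist:

```python
orig_dict = {'key1': 'value1', 'key2': 'value2'}
keys = ('key2', 'keyM')  # 'keyM' is absent from orig_dict

# guarded comprehension: missing keys are simply skipped
subset = {k: orig_dict[k] for k in keys if k in orig_dict}

# set-intersection spelling gives the same result
subset2 = {k: orig_dict[k] for k in set(keys).intersection(orig_dict)}
assert subset == subset2
print(subset)
```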
Listing out names of Python dictionary instances
39,521,869
<p>I collected public course data from Udemy and put it all in a json file. Each course has an identifier number under which all the data is stored. I can perfectly list out any details I want, except for these identifier numbers.</p> <p>How can I list out these numbers themselves? Thanks.</p> <pre><code>{ "153318": { "lectures data": "31 lectures, 5 hours video", "instructor work": "Academy Of Technical Courses, Grow Your Skills Today", "title": "Oracle Applications R12 Order Management and Pricing", "promotional price": "$19", "price": "$20", "link": "https://www.udemy.com/oracle-applications-r12-order-management-and-pricing/", "instructor": "Parallel Branch Inc" }, "616990": { "lectures data": "24 lectures, 1.5 hours video", "instructor work": "Learning Sans Location", "title": "Cloud Computing Development Essentials", "promotional price": "$19", "price": "$20", "link": "https://www.udemy.com/cloud-computing-development-essentials/", "instructor": "Destin Learning" } } </code></pre>
0
2016-09-15T23:40:28Z
39,521,901
<p>Parse the JSON into a Python dict, then loop over the keys (note: <code>json</code> must be imported, and avoid shadowing the built-in <code>input</code>):</p> <pre><code>import json parsed = json.loads(raw_json) # raw_json is the JSON text read from your file for key in parsed.keys(): print(key) </code></pre>
1
2016-09-15T23:45:19Z
[ "python", "json" ]
Listing out names of Python dictionary instances
39,521,869
<p>I collected public course data from Udemy and put it all in a json file. Each course has an identifier number under which all the data is stored. I can perfectly list out any details I want, except for these identifier numbers.</p> <p>How can I list out these numbers themselves? Thanks.</p> <pre><code>{ "153318": { "lectures data": "31 lectures, 5 hours video", "instructor work": "Academy Of Technical Courses, Grow Your Skills Today", "title": "Oracle Applications R12 Order Management and Pricing", "promotional price": "$19", "price": "$20", "link": "https://www.udemy.com/oracle-applications-r12-order-management-and-pricing/", "instructor": "Parallel Branch Inc" }, "616990": { "lectures data": "24 lectures, 1.5 hours video", "instructor work": "Learning Sans Location", "title": "Cloud Computing Development Essentials", "promotional price": "$19", "price": "$20", "link": "https://www.udemy.com/cloud-computing-development-essentials/", "instructor": "Destin Learning" } } </code></pre>
0
2016-09-15T23:40:28Z
39,521,904
<p>You want the <code>keys</code> of that dictionary.</p> <pre><code>import json with open('course.json') as json_file: course=json.load(json_file) print course.keys() </code></pre> <p><br>giving: <code>[u'616990', u'153318']</code></p>
4
2016-09-15T23:45:38Z
[ "python", "json" ]
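A self-contained Python 3 version of the same idea, with the JSON inlined as a string (shortened hypothetical course data) instead of read from a file:

```python
import json

raw = '{"153318": {"title": "Course A"}, "616990": {"title": "Course B"}}'
courses = json.loads(raw)

# the course identifier numbers are simply the top-level keys
ids = sorted(courses.keys())
print(ids)
```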
How do I get a regex pattern type for MyPy
39,521,895
<p>If I compile a regex</p> <pre><code>&gt;&gt;&gt; type(re.compile("")) &lt;class '_sre.SRE_Pattern'&gt; </code></pre> <p>And want to pass that regex to a function and use Mypy to type check</p> <pre><code>def my_func(compiled_regex: _sre.SRE_Pattern): </code></pre> <p>I'm running into this problem</p> <pre><code>&gt;&gt;&gt; import _sre &gt;&gt;&gt; from _sre import SRE_Pattern Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: cannot import name 'SRE_Pattern' </code></pre> <p>It seems that you can import <code>_sre</code> but for some reason <code>SRE_Pattern</code> isn't importable.</p>
0
2016-09-15T23:44:22Z
39,521,927
<p>Yeah, the types the <code>re</code> module uses aren't actually accessible by name. You'll need to use the <a href="https://docs.python.org/3/library/typing.html#typing.re" rel="nofollow"><code>typing.re</code></a> types for type annotations instead:</p> <pre><code>import typing def my_func(compiled_regex: typing.re.Pattern): ... </code></pre>
1
2016-09-15T23:48:31Z
[ "python", "mypy" ]
How do I get a regex pattern type for MyPy
39,521,895
<p>If I compile a regex</p> <pre><code>&gt;&gt;&gt; type(re.compile("")) &lt;class '_sre.SRE_Pattern'&gt; </code></pre> <p>And want to pass that regex to a function and use Mypy to type check</p> <pre><code>def my_func(compiled_regex: _sre.SRE_Pattern): </code></pre> <p>I'm running into this problem</p> <pre><code>&gt;&gt;&gt; import _sre &gt;&gt;&gt; from _sre import SRE_Pattern Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: cannot import name 'SRE_Pattern' </code></pre> <p>It seems that you can import <code>_sre</code> but for some reason <code>SRE_Pattern</code> isn't importable.</p>
0
2016-09-15T23:44:22Z
39,522,012
<p><code>mypy</code> is very strict in terms of what it can accept, so you can't just generate the types or use import locations that it doesn't know how to support (otherwise it will just complain about library stubs for the syntax to a standard library import it doesn't understand). Full solution:</p> <pre><code>import re from typing import Pattern def my_func(compiled_regex: Pattern): return compiled_regex.flags patt = re.compile('') print(my_func(patt)) </code></pre> <p>Example run:</p> <pre><code>$ mypy foo.py $ python foo.py 32 </code></pre>
2
2016-09-16T00:00:55Z
[ "python", "mypy" ]
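For context beyond the 2016-era answers above: on modern Python (3.8+) the compiled-pattern type is also exposed directly as `re.Pattern`, which works both at runtime and with mypy (`typing.Pattern` was deprecated in 3.8 and removed in 3.12). A small runnable sketch:

```python
import re

def count_matches(compiled_regex: re.Pattern, text: str) -> int:
    # the annotation documents intent for mypy; the code runs the same either way
    return len(compiled_regex.findall(text))

patt = re.compile(r"\d+")
print(count_matches(patt, "a1 b22 c333"))
```

You can also parameterize it as `re.Pattern[str]` to distinguish text patterns from bytes patterns.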
How set permissions for create action in rest_framework
39,521,924
<p>I want to set permissions for the create action (POST), and I don't know how to do it.</p> <p>This is my code:</p> <p>In permissions.py</p> <pre><code>class IsAdmin(permissions.BasePermission): def has_object_permission(self, request, view, category): if request.user.role == "admin": return request.user.role == "admin" return False </code></pre> <p>In view</p> <p>class CategoryViewSet(viewsets.ModelViewSet):</p> <pre><code>queryset = Category.objects.all() serializer_class = CategorySerializer def get_permissions(self): if self.request.method in permissions.SAFE_METHODS: return (permissions.DjangoModelPermissions(),) return (permissions.IsAuthenticated(), IsAdmin(),) </code></pre>
0
2016-09-15T23:48:24Z
39,521,957
<p>Just set the <code>permission_classes</code> on the viewset directly:</p> <pre><code>class CategoryViewSet(viewsets.ModelViewSet): queryset = Category.objects.all() serializer_class = CategorySerializer permission_classes = [IsAccountAdminOrReadOnly] </code></pre>
0
2016-09-15T23:53:39Z
[ "python", "django", "rest", "django-rest-framework" ]
How set permissions for create action in rest_framework
39,521,924
<p>I want to set permissions for the create action (POST), and I don't know how to do it.</p> <p>This is my code:</p> <p>In permissions.py</p> <pre><code>class IsAdmin(permissions.BasePermission): def has_object_permission(self, request, view, category): if request.user.role == "admin": return request.user.role == "admin" return False </code></pre> <p>In view</p> <p>class CategoryViewSet(viewsets.ModelViewSet):</p> <pre><code>queryset = Category.objects.all() serializer_class = CategorySerializer def get_permissions(self): if self.request.method in permissions.SAFE_METHODS: return (permissions.DjangoModelPermissions(),) return (permissions.IsAuthenticated(), IsAdmin(),) </code></pre>
0
2016-09-15T23:48:24Z
39,522,111
<p>I have already solved it; I just changed this method in my permissions.py file:</p> <pre><code> def has_object_permission(self, request, view, category): if request.user.role == "admin": return request.user.role == "admin" return False </code></pre> <p>to this method:</p> <pre><code> def has_permission(self, request, view): if request.user.role == "admin": return request.user.role == "admin" return False </code></pre> <p>It's simple, I was overcomplicating it.</p>
0
2016-09-16T00:15:42Z
[ "python", "django", "rest", "django-rest-framework" ]
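The key distinction the fix relies on: DRF calls `has_permission` on every request, including POST/create, before any object exists, while `has_object_permission` only runs for object-level checks. This can be sketched outside Django with stand-in request/user objects (all names here are hypothetical, not DRF internals):

```python
class FakeUser:
    def __init__(self, role):
        self.role = role

class FakeRequest:
    def __init__(self, user):
        self.user = user

class IsAdmin:
    # has_permission is the hook that covers create (POST): it runs
    # before the view handler, with no object fetched yet
    def has_permission(self, request, view):
        return getattr(request.user, "role", None) == "admin"

perm = IsAdmin()
print(perm.has_permission(FakeRequest(FakeUser("admin")), None))   # admin passes
print(perm.has_permission(FakeRequest(FakeUser("viewer")), None))  # others fail
```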
Django: How can I get the foreign key's class from a classmethod in the referred-to class?
39,521,948
<p>I have the following two Django Classes <code>MyClassA</code> and <code>MyClassB</code> in two separate files. <code>MyClassB</code> has a foreign key reference to an instance of <code>MyClassA</code>. <code>MyClassA</code> cannot import the class <code>MyClassB</code>.</p> <p>my_class_a/models.py:</p> <pre><code>from django.db import models class MyClassA(models.Model): name = models.CharField(max_length=50, null=False) @classmethod def my_method_a(cls): # What do I put here to call MyClassB.my_method_b()?? </code></pre> <p>my_class_b/models.py:</p> <pre><code>from my_class_a.models import MyClassA from django.db import models class MyClassB(models.Model): name = models.CharField(max_length=50, null=False) my_class_a = models.ForeignKey(MyClassA, related_name="MyClassB_my_class_a") @staticmethod def my_method_b(): return "Hello" </code></pre> <p>From within <code>MyClassA</code>'s class method <code>my_method_a</code>, I would like to call <code>MyClassB</code>'s static method <code>my_method_b</code>. How can I do it? </p> <p>If <code>my_method_a</code> was an instance method, I would simply do <code>self.MyClassB_my_class_a.model.my_method_b()</code>. But since I don't have an instance of <code>MyClassA</code>, I don't know how to do it. I would like to take advantage of the related_name field that allows for reverse lookups of instances.</p>
1
2016-09-15T23:52:10Z
39,522,809
<p>You can do it like this:</p> <pre><code>@classmethod def my_method_a(cls): from my_class_b.models import MyClassB # yes, you can have an import here; it will not # lead to a cyclic import error MyClassB.my_method_b() </code></pre> <p>The import failure happens only if you add the import to the top of the file. That would lead to a cyclic import: one module cannot be loaded because it depends on another, which in turn depends on the first. However, when the import is inside a method, the same problem does not arise, because the import only runs when the method is called, by which time both modules have been loaded.</p>
1
2016-09-16T02:06:52Z
[ "python", "django", "foreign-keys", "static-methods", "class-method" ]
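The deferred-import pattern works outside Django too. A minimal self-contained sketch, using the stdlib `json` module as a stand-in for `my_class_b.models`:

```python
def my_method_a():
    # deferred import: resolved only when the function is first called,
    # so module-load order no longer matters
    import json
    return json.dumps({"msg": "Hello"})

print(my_method_a())
```

Python caches modules in `sys.modules`, so repeated calls do not re-import; the only cost is a fast cache lookup after the first call.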
Do generators simply make new objects with __iter__ and next functions?
39,521,964
<p>I tried to search for this answer on my own, but there was too much noise.</p> <p>Are generators in python just a convenience wrapper for the user to make an iterator object?</p> <p>When you define the generator:</p> <pre><code>def test(): x = 0 while True: x += 1 yield x </code></pre> <p>is python simply making a new object, adding the <code>__iter__</code> method, then putting the rest of the code into the <code>next</code> function?</p> <pre><code>class Test(object): def __init__(self): self.x = 0 def __iter__(self): return self def next(self): self.x += 1 return self.x </code></pre>
1
2016-09-15T23:54:37Z
39,522,038
<p>No -- Generators also provide other methods (<code>.send</code>, <code>.throw</code>, etc) and can be used for more purposes than simply making iterators (e.g. coroutines).</p> <p>Indeed, <code>generators</code> are an entirely different beast and a core language feature. It'd be very hard (possibly impossible) to create one in vanilla python if they weren't baked into the language.</p> <hr> <p>With that said, <em>one</em> application of generators is to provide an easy syntax for creating an iterator :-).</p>
2
2016-09-16T00:03:41Z
[ "python" ]
Do generators simply make new objects with __iter__ and next functions?
39,521,964
<p>I tried to search for this answer on my own, but there was too much noise.</p> <p>Are generators in python just a convenience wrapper for the user to make an iterator object?</p> <p>When you define the generator:</p> <pre><code>def test(): x = 0 while True: x += 1 yield x </code></pre> <p>is python simply making a new object, adding the <code>__iter__</code> method, then putting the rest of the code into the <code>next</code> function?</p> <pre><code>class Test(object): def __init__(self): self.x = 0 def __iter__(self): return self def next(self): self.x += 1 return self.x </code></pre>
1
2016-09-15T23:54:37Z
39,522,039
<blockquote> <p>Are generators in python just a convenience wrapper for the user to make an iterator object?</p> </blockquote> <p>No. A <code>generator</code> is written as a function, whereas <code>iterators</code> are written as classes, so a generator is not an instance of your iterator class. But in some ways you can say that a <em>generator is a simplified approach to getting iterator-like capability</em>. That means:</p> <blockquote> <p>All generators are iterators, but not all iterators are generators.</p> </blockquote> <p>I strongly suggest you refer to these wiki links:</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Iterator" rel="nofollow">Iterator</a> - traverses a collection one at a time</li> <li><a href="https://en.wikipedia.org/wiki/Generator_(computer_programming)" rel="nofollow">Generator</a> - generates a sequence, one item at a time</li> </ul> <p>An iterator is typically something that has a <code>next</code> method to get the next element from a stream. A generator is an iterator that is tied to a function.</p> <p>I also suggest you read: <a href="http://stackoverflow.com/questions/2776829/difference-between-pythons-generators-and-iterators">Difference between Python's Generators and Iterators</a>.</p>
1
2016-09-16T00:03:53Z
[ "python" ]
Do generators simply make new objects with __iter__ and next functions?
39,521,964
<p>I tried to search for this answer on my own, but there was too much noise.</p> <p>Are generators in python just a convenience wrapper for the user to make an iterator object?</p> <p>When you define the generator:</p> <pre><code>def test(): x = 0 while True: x += 1 yield x </code></pre> <p>is python simply making a new object, adding the <code>__iter__</code> method, then putting the rest of the code into the <code>next</code> function?</p> <pre><code>class Test(object): def __init__(self): self.x = 0 def __iter__(self): return self def next(self): self.x += 1 return self.x </code></pre>
1
2016-09-15T23:54:37Z
39,522,076
<p>Nope. Like so:</p> <pre><code>&gt;&gt;&gt; def test(): ... x = 0 ... while True: ... x += 1 ... yield x ... &gt;&gt;&gt; type(test) &lt;type 'function'&gt; </code></pre> <p>So what it returns is a function object. Details from there get hairy; the short course is that the code object belonging to the function (<code>test.func_code</code>) is marked as a generator by one of the flags in <code>test.func_code.co_flags</code>.</p> <p>You can disassemble the bytecode for <code>test</code> to see that it's just like any other function otherwise, apart from that a generator function always contains a <code>YIELD_VALUE</code> opcode:</p> <pre><code>&gt;&gt;&gt; import dis &gt;&gt;&gt; dis.dis(test) 2 0 LOAD_CONST 1 (0) 3 STORE_FAST 0 (x) 3 6 SETUP_LOOP 25 (to 34) &gt;&gt; 9 LOAD_GLOBAL 0 (True) 12 POP_JUMP_IF_FALSE 33 4 15 LOAD_FAST 0 (x) 18 LOAD_CONST 2 (1) 21 INPLACE_ADD 22 STORE_FAST 0 (x) 5 25 LOAD_FAST 0 (x) 28 YIELD_VALUE 29 POP_TOP 30 JUMP_ABSOLUTE 9 &gt;&gt; 33 POP_BLOCK &gt;&gt; 34 LOAD_CONST 0 (None) 37 RETURN_VALUE </code></pre> <p>To do it the way you have in mind, the horrors just start ;-) if you think about how to create an object to mimic just this:</p> <pre><code>def test(): yield 2 yield 3 yield 4 </code></pre> <p>Now your <code>next()</code> method would have to carry additional hidden state just to remember which <code>yield</code> comes next. Wrap that in some nested loops with some conditionals, and "unrolling" it into a single-entry <code>next()</code> becomes a nightmare.</p>
2
2016-09-16T00:10:14Z
[ "python" ]
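A short runnable check of the two points these answers make: the function is flagged as a generator function at compile time (the `YIELD_VALUE`/generator flag mentioned above), and the generator object it returns carries `.send`/`.throw` on top of the plain iterator protocol:

```python
import inspect

def test():
    x = 0
    while True:
        x += 1
        yield x

# flagged as a generator function at compile time
print(inspect.isgeneratorfunction(test))

gen = test()
# generator objects add send/throw/close to the iterator protocol
print(hasattr(gen, "send") and hasattr(gen, "throw"))
print(next(gen), next(gen))
```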
How do we find validation error of linear regression and elastic-net using Scikit-learn and python?
39,522,079
<p>When do we use the test set and the validation set while calculating errors? I have linear regression and elastic-net models working. I am new to machine learning with scikit-learn and Python.</p> <p><strong>I am trying to solve this problem.</strong> <a href="http://i.stack.imgur.com/HsQ0F.png" rel="nofollow">Data Set: UCI Machine Learning Forest Fire data</a></p>
-2
2016-09-16T00:11:05Z
39,522,933
<p>You fit the model on the training set by minimizing the training error. After you build the model, you calculate the actual error on the validation set and use that error to tune the model's parameters; the test set is held out for a final, unbiased estimate. I hope this helps.</p>
0
2016-09-16T02:28:57Z
[ "python", "machine-learning", "scikit-learn", "linear-regression" ]
Python HackerRank Bonetrousle code times out
39,522,148
<p>I have been working on the Bonetrousle problem on HackerRank using Python. After many hours the below code passes all the test cases except one, where it times out. Any suggestions on how I can make the code quicker would be appreciated. I believe the problem is the code that deals with the remainder; I put comments below and above it so it is easy to find. Unfortunately I am at a loss on how to refactor it so it works faster.</p> <p>The code I wrote gets the right answer for all test cases; I have verified this in PyCharm. The only problem is that it is too slow for one of the HackerRank test cases. </p> <p>Here is the link to the problem <a href="https://www.hackerrank.com/challenges/bonetrousle" rel="nofollow">https://www.hackerrank.com/challenges/bonetrousle</a></p> <p>Here is a link to the test case it fails <a href="https://hr-testcases-us-east-1.s3.amazonaws.com/21649/input12.txt?AWSAccessKeyId=AKIAJAMR4KJHHUS76CYQ&amp;Expires=1473987910&amp;Signature=xaHGvYRVmUVJHh4r3on%2BWgoIsjs%3D&amp;response-content-type=text%2Fplain" rel="nofollow">https://hr-testcases-us-east-1.s3.amazonaws.com/21649/input12.txt?AWSAccessKeyId=AKIAJAMR4KJHHUS76CYQ&amp;Expires=1473987910&amp;Signature=xaHGvYRVmUVJHh4r3on%2BWgoIsjs%3D&amp;response-content-type=text%2Fplain</a></p> <pre><code>firstLine = int(input()) for a in range(0, firstLine): nums = input() numsArr = list(map(int, nums.split(" "))) n = numsArr[0] k = numsArr[1] b = numsArr[2] num1 = 0 rem = 0 answer = True remAdded = False count = 0 boxArr = [] for i in range(1, b+1): count += i boxArr.append(i) num1 = (n - count)//b rem = (n - count)%b for j in range(0, len(boxArr)): boxArr[j] = boxArr[j] + num1 if boxArr[j] &gt; k: answer = False # In below code -&gt; if there is a remainder I am adding it to an element in the array that has box numbers # I check to see if I can add the remainder to an element in the array #without that element exceeding k, the number of sticks. If I can't then the bool remAdded doesn't get set to True # The below code works but it seems inefficient and looks like the problem if rem == 0: remAdded = True elif answer != False: for r in range(len(boxArr) - 1, 0, -1): if boxArr[r] + rem &lt;= k and r == len(boxArr) - 1: boxArr[r] = boxArr[r] + rem remAdded = True break else: if boxArr[r] + rem &lt;= k and (boxArr[r] + rem) not in boxArr: boxArr[r] = boxArr[r] + rem remAdded = True break # above is code for dealing with remainder. Might be the problem if answer == False or remAdded == False: print(-1) elif 0 in boxArr: print(-1) else: for z in range(0, len(boxArr)): if z != len(boxArr) - 1: print(boxArr[z], end =" ") else: print(boxArr[z]) </code></pre>
3
2016-09-16T00:22:55Z
39,534,498
<p>Replace the code between your comments by:</p> <pre><code>if rem == 0: remAdded = True elif boxArr[-1] + 1 &gt; k: remAdded = False elif answer != False: l = len(boxArr)-1 for r in range(l, l-rem, -1): boxArr[r] += 1 remAdded = True </code></pre> <p>This gets rid of the expensive <code>(boxArr[r] + rem) not in boxArr</code>, basically. It passes all test cases for me.</p>
2
2016-09-16T14:55:24Z
[ "python", "algorithm" ]
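The accepted fix can be packaged as a standalone function (a sketch of the approach, not the exact HackerRank I/O harness): fill the boxes with 1..b, spread the quotient evenly, then add one stick to each of the largest `rem` boxes. Because the boxes start as consecutive integers, bumping only the top `rem` of them keeps all counts distinct, in O(rem) instead of the quadratic `not in boxArr` scan:

```python
def bonetrousle(n, k, b):
    # smallest possible distinct fill: 1, 2, ..., b
    boxes = list(range(1, b + 1))
    total = sum(boxes)
    if total > n:
        return None  # even the minimum fill overshoots n
    quot, rem = divmod(n - total, b)
    boxes = [x + quot for x in boxes]
    # add 1 to the largest `rem` boxes; distinctness is preserved
    for i in range(b - 1, b - 1 - rem, -1):
        boxes[i] += 1
    if boxes[-1] > k:
        return None  # the largest box exceeds the stick limit
    return boxes

print(bonetrousle(12, 8, 3))
print(bonetrousle(10, 3, 3))
```

Since the boxes stay strictly increasing, checking only the last element against `k` suffices.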
Attribute Error when importing dictionary from module to another
39,522,169
<p>So I have 4 different classes which I need to combine into one class at the end in a program called "Main" . However whenever I run the Main program, it keeps throwing up an attribute error when it comes to accessing the <code>emp_det</code> dictionary from the <code>Employee</code> class into the other classes so I can use the generated Employee ID stored in the dictionary. (There are two more classes called <code>Weekly_Paid</code> and <code>Timecard</code> but I've only mentioned 3 for the sake of the brevity and because the error seems to be universal). (All 3 classes are in different files.)</p> <p>The error I keep getting:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "C:/Users/Saloni/Documents/Case Sudy 3/MAIN.py", line 28, in &lt;module&gt; s.MaintainTimecard() File "C:/Users/Saloni/Documents/Case Sudy 3\Monthly_Paid.py", line 24, in MaintainTimecard if emp_id in Employee.emp_det: AttributeError: module 'Employee' has no attribute 'emp_det' </code></pre> <p>Employee Class:</p> <pre><code># ATTRIBUTES: emp_nm, emp_ph, emp_add,emp_id, emp_dob from random import * class Employee: emp_det={} #dictioary to save the details of the employees #emp_id:[emp_nm,emp_ph,emp_add,emp_dob] def add_emp(self): lst=[] #to store all inputed details print("Enter Employee Details:") emp_nm=input("Name: ") emp_ph=input("Contact Number: ") emp_add=input("Address: ") emp_dob=input("Date of Birth:") lst.extend([emp_nm,emp_ph,emp_add,emp_dob]) #store the details emp_id="emp"+str(randrange(1000,10000)) while emp_id in Employee.emp_det: # avoid repetition emp_id="emp"+str(randrange(1000,10000)) Employee.emp_det[emp_id]=lst # make dictionary key and store in list print("Your Employee ID is",emp_id) def del_emp(self): t=0 # to count number of invalid inputs while t&lt;=3: emp_id=input("Employee ID:") if emp_id in Employee.emp_det: del Employee.emp_det[emp_id] t=4 # to get the program out of the loop else: print("Invalid ID. 
Try Again.") t+=1 def edit_emp(self): t=0 # counting invalid inputs while t&lt;=3: emp_id=input("Employee ID:") if emp_id in Employee.emp_det: # checking validity print("\n Edit: \n 1.Contact \n 2.Address \n") ch=int(input("Option:")) if ch==1: Employee.emp_det[emp_id][1]=input("New Contact Number:") elif ch==2: Employee.emp_det[emp_id][2]=input("New Address:") else: print("Invalid Option") t=4 #t o get the program out of the loop else: print("Invalid ID. Try Again.") t+=1 def Edisplay(self): print("The Employees are:") print(" ID \t Name \t Contact \t Address \t Date of Birth") for i in Employee.emp_det: # access to each dictionary element print(i,"\t",end=" ") for j in Employee.emp_det[i]: # access every value under the key print(j,"\t",end=" ") print("\n") </code></pre> <p>Monthly-Paid Class</p> <pre><code>import Employee import Timecard class Monthly_Paid: fixSalary = 40000 def AcceptTimeCard (self): print ("Timecard details are:") for i in Timecard.emp_tcinfo: print(i, "\t", end ="") for j in Timecard.emp_tcinfo[i]: print(j,"\t",end=" ") def Gen_Paycheck (self): emp_id = input("please enter employee ID") if emp_id in Employee.emp_det: print ("Total Salary of " + emp_id + " is :" + fixSalary) def MaintainTimecard (self): emp_id = input("Please enter your employee ID") if emp_id in Employee.emp_det: print("\n 1.Edit Start Time Hour " "\n 2.Edit Start Time Minute " "\n 3. 
Edit End Time Hour " "\n 4.Edit End Time Minute") ch = int(input("Input option")) if ch == 1: Timecard.emp_tcinfo[emp_id][1] = input( "Input new Start Time Hour") if ch ==2: Timecard.emp_tcinfo[emp_id][2] = input( "Input new Start Time Minute") if ch == 3: Timecard.emp_tcinfo[emp_id][3] = input( "Input new End Time Hour") if ch == 4: Timecard.emp_tcinfo[emp_id][4] = input( "Input new End Time Minute") else: print("Invalid input") </code></pre> <p>Main script</p> <pre><code>print ("Welcome to Employee Time Card System") import Employee e= Employee.Employee() e.add_emp() print("What kind of employee are you?") print ("\n 1.Monthly Paid \n 2.Weekly Paid") ch= int(input("Enter Choice")) if ch ==1: import Monthly_Paid import Timecard s = Monthly_Paid.Monthly_Paid() w = Timecard.Timecard() print("Monthly Paid") t1= "y" while t1=="y" or t1=="Y": print ("\n 1.See Time Card \n2.Edit TimeCard \n 3.See Paycheck") ch1 = int(input("Enter Choice")) if ch1 == 1: s.AcceptTimeCard if ch1 == 2: s.MaintainTimecard() if ch1 == 3: s.Gen_Paycheck() else: print("Invalid Choice") t1 = input("Continue with Monthly Paid? Y/N") elif ch == 2: import Weekly_Paid a= Weekly_Paid.Weekly_Paid() t2= "y" print ("Weekly Paid") while t2=="y" or t2=="Y": print ("\n 1.See Time Card \n2.Edit TimeCard \n 3.See Paycheck") ch1 = int(input("Enter Choice")) if ch1 == 1: a.AcceptTimeCard() if ch1 == 2: a.MaintainTimeCard() if ch1 == 3: a.Gen_Paycheck() else: print("Invalid Choice") t2 = input("Continue with Weekly Paid? Y/N") else: print("Invalid choice") </code></pre>
0
2016-09-16T00:26:36Z
39,522,190
<p><code>import Employee</code> will look for a module Employee.py. If there is no file called <code>Employee.py</code> then you won't be able to make that import work.</p> <p>So in the <code>Monthly-Paid Class</code> file you have to do something like:</p> <p><code>from path.to.employee_file_name import Employee</code></p> <p>The problem then arises from the fact that there's a module called <code>Employee</code> but that contains a class called <code>Employee</code>. Importing the module Employee doesn't give you access to the class automatically. To access the attribute of the <code>Employee</code> class called <code>emp_det</code> you have to specify the class. So if you imported using</p> <p><code>from Employee import Employee</code></p> <p>To access you need:</p> <p><code>Employee.emp_det</code></p> <p>Alternatively if you imported with:</p> <p><code>import Employee</code></p> <p>then to access you need:</p> <p><code>Employee.Employee.emp_det</code></p>
0
2016-09-16T00:29:18Z
[ "python", "dictionary", "python-module" ]
Flask errorhandler isn't called for view that returns 404
39,522,184
<p>With the following functions, I'd expect the error handler to get called when the view returns 404. But it does not trigger. Why doesn't a 404 error handler get called when a view returns 404?</p> <pre><code>@app.errorhandler(404) def error(e): return render_template('error.html'), 404 @app.route('/error/') def error_test(): return '', 404 </code></pre>
1
2016-09-16T00:28:32Z
39,522,393
<p>You aren't triggering the error. You can use <code>abort</code> for that. </p> <pre><code>from flask import abort @app.route('/error/') def error_test(): abort(404) </code></pre>
1
2016-09-16T00:59:08Z
[ "python", "flask" ]
Flask errorhandler isn't called for view that returns 404
39,522,184
<p>With the following functions, I'd expect the error handler to get called when the view returns 404. But it does not trigger. Why doesn't a 404 error handler get called when a view returns 404?</p> <pre><code>@app.errorhandler(404) def error(e): return render_template('error.html'), 404 @app.route('/error/') def error_test(): return '', 404 </code></pre>
1
2016-09-16T00:28:32Z
39,522,427
<p>Error handlers handle errors. If you just <code>return '', 404</code> then your function has finished normally; it just returned a 404 HTTP code. There were no errors, so the error handler is not called.</p> <p>In Python we report errors with exceptions. If something goes wrong, you raise one. Flask has a special base exception class, <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#werkzeug.exceptions.HTTPException" rel="nofollow">HTTPException</a>. If you raise it, the appropriate error handler will be used to render it. See the <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#werkzeug.exceptions.HTTPException" rel="nofollow">HTTPException documentation</a> for examples. There are also a bunch of predefined exceptions, and in your case you need to raise <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#werkzeug.exceptions.NotFound" rel="nofollow">NotFound</a>.</p>
1
2016-09-16T01:04:45Z
[ "python", "flask" ]
GtkFileChooserDialog with image preview
39,522,192
<p>How to create an image preview in a <code>GtkFileChooserDialog</code>?</p> <p>Here is a simple ChooserDialog:</p> <pre><code>#!/usr/bin/env python3 import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk filechooserdialog = Gtk.FileChooserDialog() filechooserdialog.set_title("FileChooserDialog") filechooserdialog.add_button("_Open", Gtk.ResponseType.OK) filechooserdialog.add_button("_Cancel", Gtk.ResponseType.CANCEL) filechooserdialog.set_default_response(Gtk.ResponseType.OK) response = filechooserdialog.run() if response == Gtk.ResponseType.OK: print("File selected: %s" % filechooserdialog.get_filename()) filechooserdialog.destroy() </code></pre> <p>How to add a image preview?</p>
0
2016-09-16T00:29:24Z
39,534,472
<p>GtkFileChooser provides the <a href="https://developer.gnome.org/gtk3/stable/GtkFileChooser.html#gtk-file-chooser-set-preview-widget" rel="nofollow">set_preview_widget</a> function. You create a GtkImage, set it as the preview widget, and update the image whenever the <code>update-preview</code> signal is emitted:</p> <pre><code>from gi.repository import GdkPixbuf preview_image= Gtk.Image() filechooserdialog.set_preview_widget(preview_image) def update_preview(dialog): path= dialog.get_preview_filename() try: pixbuf= GdkPixbuf.Pixbuf.new_from_file(path) except Exception: dialog.set_preview_widget_active(False) else: #scale the image maxwidth, maxheight= 300.0, 700.0 # as floats to avoid integer division in python2 width, height= pixbuf.get_width(), pixbuf.get_height() scale= min(maxwidth/width, maxheight/height) if scale&lt;1: width, height= int(width*scale), int(height*scale) pixbuf= pixbuf.scale_simple(width, height, GdkPixbuf.InterpType.BILINEAR) preview_image.set_from_pixbuf(pixbuf) dialog.set_preview_widget_active(True) filechooserdialog.connect('update-preview', update_preview) </code></pre>
1
2016-09-16T14:54:09Z
[ "python", "gtk" ]
Is there any way to use TCP/IP communication between a BBB and PC connected only by USB cable?
39,522,220
<p>Is there any way to use TCP/IP communication between a Beaglebone Black and a PC connected only by a USB cable?</p> <p>I'm trying to create an oscilloscope using the Beaglebone Black ADC connected to a computer over a USB cable.</p> <p>I know that when I connect my BBB (Beaglebone Black) I can access it at the IP 192.168.7.2, and I see this device on my local network if I use ipconfig in cmd (I'm using Windows 10); if I ping this IP I receive the data. But on the Beaglebone side I cannot see my computer on its network or ping my computer's local address.</p> <p>I also tried a basic Python socket connection tutorial between my PC and BBB, here (PC as server and BBB as client): </p> <p><a href="https://pymotw.com/2/socket/tcp.html" rel="nofollow">https://pymotw.com/2/socket/tcp.html</a></p> <p>And I receive "connection denied" on my BBB.</p> <p>To be clear, I'm not trying to solve this with an Ethernet cable or WiFi module.</p>
0
2016-09-16T00:33:02Z
39,522,399
<p>You can send an outgoing ping, so you know that a TCP/IP connection can be established. However, that is an <em>outgoing</em> connection and you want to accept an <em>incoming</em> connection.</p> <p>Check your firewall settings on your development machine; if the incoming connection is denied there, you will not be able to connect. The outgoing ping might make it through the firewall but the incoming connection can still be denied. Because you know you can connect, I would recommend checking that the firewall rules allow an incoming connection to be made.</p> <p>If you need to debug further, go and get <a href="https://www.wireshark.org/" rel="nofollow">Wireshark</a> and see what traffic is going over the wire.</p>
0
2016-09-16T01:00:10Z
[ "python", "tcp", "embedded", "usb" ]
find the manhattan sum of a dataframe given two points
39,522,333
<p>consider the dataframe <code>df</code></p> <pre><code>df = pd.DataFrame(np.arange(25).reshape(5, 5), list('abcde'), list('ABCDE')) df </code></pre> <p><a href="http://i.stack.imgur.com/46MJj.png" rel="nofollow"><img src="http://i.stack.imgur.com/46MJj.png" alt="enter image description here"></a></p> <p>and two points <code>p0</code> and <code>p1</code></p> <pre><code>p0 = ('a', 'B') p1 = ('d', 'E') </code></pre> <p>I want to find the sum of all elements along the path from point <code>p0</code> to <code>p1</code>.<br> Assumptions</p> <ul> <li><code>p0</code> is always to the left or same column as <code>p1</code></li> <li><code>p0</code> is always above or same row as <code>p1</code></li> <li>path should go down first then right from <code>p0</code> to <code>p1</code></li> </ul> <p><a href="http://i.stack.imgur.com/ARQ6y.png" rel="nofollow"><img src="http://i.stack.imgur.com/ARQ6y.png" alt="enter image description here"></a></p> <p>I'm expecting a value of <code>88</code></p>
0
2016-09-16T00:50:28Z
39,522,476
<p>I <em>think</em> you can just sum down the first column and across the row. This will double count the bottom left item so you need to remove that one:</p> <pre><code>s1 = df.ix['a':'d', 'B'].sum() s2 = df.ix['d', 'B':'E'].sum() print s1 + s2 - df.ix['d', 'B'] </code></pre> <p>Or, if you prefer, you can do:</p> <pre><code>s2 = df.ix['d', 'B':'E'][1:].sum() print s1 + s2 </code></pre> <p>As this will slice off the element that would have been double counted...</p> <p>Perhaps a pandas guru can come up with a more efficient or more clever way to do it -- But this seems to work OK. I've cheated a little bit and hard-coded the points, but it should be easy enough to unravel that -- Just substitute:</p> <ul> <li><code>p0[0]</code> for <code>'a'</code></li> <li><code>p1[0]</code> for <code>'d'</code></li> <li><code>p0[1]</code> for <code>'B'</code></li> <li><code>p1[1]</code> for <code>'E'</code></li> </ul>
3
2016-09-16T01:12:17Z
[ "python", "pandas", "numpy", "dataframe" ]
Converting python2 PIL to python3 code
39,522,544
<p>A program I'm trying to convert from python2 to python3 uses <code>PIL</code>, the Python Imaging Library.</p> <p>I'm trying to download a thumbnail from the web to display within a tkinter-style GUI. Here is what I think is the offending line of code:</p> <pre><code># grab the thumbnail jpg. req = requests.get(video.thumb) photo = ImageTk.PhotoImage(Image.open(StringIO(req.content))) # now this is in PhotoImage format can be displayed by Tk. plabel = tk.Label(tf, image=photo) plabel.image = photo plabel.grid(row=0, column=2) </code></pre> <p>The program stops and gives a TypeError; here is the backtrace:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "/home/kreator/.local/bin/url-launcher.py", line 281, in &lt;module&gt; main() File "/home/kreator/.local/bin/url-launcher.py", line 251, in main runYoutubeDownloader() File "/home/kreator/.local/bin/url-launcher.py", line 210, in runYoutubeDownloader photo = ImageTk.PhotoImage(Image.open(StringIO(req.content))) TypeError: initial_value must be str or None, not bytes </code></pre> <p>How do I satisfy python3's requirements here?</p>
0
2016-09-16T01:22:18Z
39,522,571
<p>In this case you can see that the library you have imported doesn't support Python3. This isn't something you can fix from within your own code.</p> <p>The lack of Python3 support is because PIL has been discontinued for quite some time now. There is a fork that's actively maintained that I would recommend you use instead: <a href="https://pillow.readthedocs.io/" rel="nofollow">https://pillow.readthedocs.io/</a></p> <p>You can download it from <a href="https://pypi.python.org/pypi/Pillow" rel="nofollow">pypi</a>.</p>
1
2016-09-16T01:26:46Z
[ "python", "python-3.5" ]
append adding contents to first list item, not adding new items to list
39,522,578
<pre><code>def gen(filename): result=[] with open(filename) as sample: for line in sample.read().splitlines(): for ch in line.split(): result.append(ch) yield ch return result </code></pre> <p>If I pass in "ABCDEF", I get ["ABCDEF"] back in result, instead of ["A","B","C","D","E","F"].</p> <p>What could be the problem?</p> <p>Also, am I using the generator correctly? If not, what am I doing wrong? I am close to grasping the concept, but I am not quite there yet and feel that adding a list might be making the generator counterproductive.</p> <p>EDIT: Here is how I am opening the file:</p> <pre><code>with filled_filename("ABCDEF") as fn: self.assertEqual(list(project.gen(f)), ["A","B","C","D","E","F"]) print(list(project.gen(ff))) </code></pre>
0
2016-09-16T01:27:34Z
39,522,613
<p>I think this is what you want:</p> <pre><code>def gen(filename): with open(filename) as sample: for line in sample.read().splitlines(): for ch in line: yield ch # Example usage: for ch in gen('myfile.txt'): print("Got character '{}'.".format(ch)) </code></pre> <p>As you said, the <code>list</code> you're building up sort of makes the generator redundant. (You're <code>yield</code>ing each character as you go, but you're also returning the complete <code>list</code>, which is a pattern I've never seen before and probably not what you want to do.)</p> <p>The main issue with your code, though, is that I think you want to split a line into individual characters, but <code>line.split()</code> doesn't do that. Just use <code>for ch in line</code>.</p> <p><strong>EDIT</strong></p> <p>Trying to get somewhere closer to your code:</p> <pre><code>def gen(filename): with open(filename) as sample: for line in sample: line = line.rstrip() for ch in line: yield ch def filled_filename(text): with open('test.txt', 'wb') as f: f.write(text) return 'test.txt' filename = filled_filename(b'ABCDEF') assert list(gen(filename)) == ['A', 'B', 'C', 'D', 'E', 'F'] </code></pre> <p>This code works and passes the assert.</p>
1
2016-09-16T01:32:51Z
[ "python" ]
selenium click() skipping
39,522,579
<p>I have been doing some work with selenium lately and I am having an issue with the click() function.</p> <p>Given the following HTML code:</p> <pre><code> &lt;div id="sendreply"&gt; &lt;input type="submit" class="button norm-green" value="Send Message" name="sendmessage"&gt; == $0 &lt;/div&gt; </code></pre> <p>I have been trying to click on the "Send Message" button; however, click() just passes over the action and the program proceeds to downstream operations.</p> <p>I have tried finding the element by both name and xpath:</p> <pre><code> time.sleep(2) option='by_name' if option == 'by_name': driver.find_element_by_name('sendmessage').click() else: driver.find_element_by_xpath("//div[@id='sendreply']").click() </code></pre> <p>I get no error code whatsoever.</p> <p>Any help greatly appreciated.</p>
0
2016-09-16T01:27:41Z
39,522,915
<p>If you've tried everything and nothing has worked, try using <code>execute_script()</code> as an alternative way to perform the click and get around this issue, as below:</p> <pre><code>driver.execute_script("arguments[0].click()", driver.find_element_by_name('sendmessage')) </code></pre> <p><strong>Warning</strong>: The <code>JavaScript</code> injection <code>HTMLElement#click()</code> shouldn't be used in a testing context. It defeats the purpose of the test: first because it doesn't generate all the events of a real click <code>(focus, blur, mousedown, mouseup...)</code>, and second because it doesn't guarantee that a real user can interact with the element. But you can use it as a workaround for this issue.</p>
0
2016-09-16T02:26:42Z
[ "python", "selenium-webdriver" ]
How To Combine Multiple Rows Into One Based on Shared Value in Pandas
39,522,589
<p>I have a dataframe that generically looks like this: </p> <pre><code>df = pd.DataFrame({'Country': ['USA', 'USA', 'Canada', 'Canada'], 'GDP': [45000, 68000, 34000, 46000], 'Education': [5, 3, 7, 9]}) </code></pre> <p>Giving: </p> <pre><code> Country Education GDP 0 USA 5 45000 1 USA 3 68000 2 Canada 7 34000 3 Canada 9 46000 </code></pre> <p>I'd like to have all the values for each country listed on the same row, so it reads:</p> <pre><code>Country Education Education GDP GDP USA 5 3 45000 68000 </code></pre> <p>How does one accomplish this? </p> <p>And yes, some of the columns do have the same name.</p> <p>Thank you.</p>
0
2016-09-16T01:29:16Z
39,522,954
<p>Original DataFrame:</p> <pre><code>In [150]: df Out[150]: Country Education GDP 0 USA 5 45000 1 USA 3 68000 2 Canada 7 34000 3 Canada 9 46000 </code></pre> <p>Given that <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">each country</a> will have exactly two values for the same attribute:</p> <pre><code>In [151]: df1 = df.groupby('Country').nth(0).reset_index() In [152]: df1 Out[152]: Country Education GDP 0 Canada 7 34000 1 USA 5 45000 In [153]: df2 = df.groupby('Country').nth(1).reset_index() In [154]: df2 Out[154]: Country Education GDP 0 Canada 9 46000 1 USA 3 68000 </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">Concat</a> the two data frames and <a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/generated/pandas.DataFrame.drop.html" rel="nofollow">drop</a> duplicate column from any one: </p> <pre><code>In [155]: pd.concat([df1, df2.drop('Country', 1)], axis=1) Out[155]: Country Education GDP Education GDP 0 Canada 7 34000 9 46000 1 USA 5 45000 3 68000 </code></pre> <p>Rearrange the columns, if needed:</p> <pre><code>In [165]: df3 = pd.concat([df1, df2.drop('Country', 1)], axis=1) In [166]: df3 = df3[['Country', 'Education', 'GDP']] In [167]: df3 Out[167]: Country Education Education GDP GDP 0 Canada 7 9 34000 46000 1 USA 5 3 45000 68000 </code></pre>
1
2016-09-16T02:31:23Z
[ "python", "pandas" ]
How To Combine Multiple Rows Into One Based on Shared Value in Pandas
39,522,589
<p>I have a dataframe that generically looks like this: </p> <pre><code>df = pd.DataFrame({'Country': ['USA', 'USA', 'Canada', 'Canada'], 'GDP': [45000, 68000, 34000, 46000], 'Education': [5, 3, 7, 9]}) </code></pre> <p>Giving: </p> <pre><code> Country Education GDP 0 USA 5 45000 1 USA 3 68000 2 Canada 7 34000 3 Canada 9 46000 </code></pre> <p>I'd like to have all the values for each country listed on the same row, so it reads:</p> <pre><code>Country Education Education GDP GDP USA 5 3 45000 68000 </code></pre> <p>How does one accomplish this? </p> <p>And yes, some of the columns do have the same name.</p> <p>Thank you.</p>
0
2016-09-16T01:29:16Z
39,523,913
<p>The output that you want generally leads to loss of information.</p> <pre><code>Country Education Education GDP GDP USA 5 3 45000 68000 </code></pre> <p>In the above case you would need to keep track of which GDP column corresponds to which Education column.</p> <p>If you are not adamant about keeping it in this form, you can form a pivot table:</p> <pre><code>df2=df.pivot(index='Country',columns='Education',values='GDP').reset_index() </code></pre> <p>This makes each unique value of Education a column, and the value of that column will be the corresponding GDP value.</p> <pre><code>Education Country 3 5 7 9 0 Canada NaN NaN 34000.0 46000.0 1 USA 68000.0 45000.0 NaN NaN </code></pre> <p>A better-looking output can be obtained by:</p> <pre><code>df2=df.pivot(index='Country',columns='Education',values='GDP').reset_index().set_index('Country') </code></pre> <p>which yields</p> <pre><code>Country 3 5 7 9 Canada 34000.0 46000.0 USA 68000.0 45000.0 </code></pre>
1
2016-09-16T04:49:12Z
[ "python", "pandas" ]
convert pandas timedelta output from days to quarter
39,522,594
<p>Here is my input:</p> <pre><code> import pandas as pd dt_one = pd.to_datetime('2015/5/25') - pd.tseries.offsets.QuarterEnd() dt_two = pd.to_datetime('2016/9/15') - pd.tseries.offsets.QuarterEnd() </code></pre> <p>Here is my output:</p> <pre><code>(dt_two - dt_one) Out[75]: Timedelta('457 days 00:00:00') </code></pre> <p>However, I want to convert the above timedelta from days between the two dates to quarters between those two dates. How can I achieve this? The output should be '5'.</p>
0
2016-09-16T01:30:13Z
39,522,845
<p>You can calculate the number of quarters like this:</p> <pre><code>In [106]: dt_one = pd.to_datetime('2015/5/25') - pd.tseries.offsets.QuarterEnd() ...: dt_two = pd.to_datetime('2016/9/15') - pd.tseries.offsets.QuarterEnd() ...: In [107]: ((dt_two.year - dt_one.year)*12 + (dt_two.month - dt_one.month))/3.0 Out[107]: 5.0 </code></pre>
0
2016-09-16T02:14:08Z
[ "python", "pandas", "timestamp", "datetimeindex" ]
Bokeh - check checkboxes with a button/checkbox callback
39,522,642
<p>How can I check checkboxes in a CheckBoxGroup by clicking a button or checking a separate checkbox in bokeh?</p> <p>I am aware of this solution in javascript <a href="http://stackoverflow.com/questions/5229023/jquery-check-uncheck-all-checkboxes-with-a-button">jquery check uncheck all checkboxes with a button</a></p> <p>However, the CheckboxGroup bokeh object passed to CustomJS can't be manipulated with .prop! Also, I don't know of a way to access the individual checkboxes inside the CheckboxGroup, and I am not sure how to do that with the bokeh CheckboxGroup object.</p> <p>Here is what I tried; plots is a list containing different scatter plots in a figure:</p> <pre><code>checkbox = CheckboxGroup(labels=[str(i) for i in range(len(plots))],active=range(len(plots)),width=200) iterable = [('p'+str(i),plots[i]) for i in range(len(plots))]+[('checkbox',checkbox)] code = ''.join(['p'+str(i)+'.visible = '+str(i)+' not in checkbox.active;' for i in range(len(plots))]) checkbox.callback = CustomJS(args={key: value for key,value in iterable},lang="coffeescript", code=code) checkbox2 = CheckboxGroup(labels=['check all'],active=[0],width=100) checkbox2.callback = CustomJS(args={'checkbox':checkbox}, code = """ if (0 not in cb_obj.active){ checkbox.set("active",_.range(27); } checkbox.trigger("change"); """) </code></pre> <p>range(27) because len(plots)=27. My first CheckboxGroup works perfectly fine for toggling the visibility of plots in the figure. However, the second checkbox has no effect.</p>
0
2016-09-16T01:38:10Z
39,581,694
<p>I adapted the answer of Bigreddot to this question: <a href="http://stackoverflow.com/questions/39561553/bokeh-widget-callback-to-select-all-checkboxes/39562653#39562653">Bokeh widget callback to select all checkboxes</a> To have a similar effect from a CustomJS callback.</p> <p>Assuming a Figure "fig" and a list of plots in that figure "plots", here is an example with checkboxes that trigger line visibility:</p> <pre><code>N_plots = range(len(plots)) checkbox = CheckboxGroup(labels=[str(i) for i in N_plots],active=[],width=200) iterable = [('p'+str(i),plots[i]) for i in N_plots]+[('checkbox',checkbox)] checkbox_code = ''.join(['p'+str(i)+'.visible = '+str(i)+'in checkbox.active;' for i in N_plots]) checkbox.callback = CustomJS(args={key: value for key,value in iterable}, lang="coffeescript", code=checkbox_code) </code></pre> <p>Here is a button that can clear all checkboxes:</p> <pre><code>clear_button = Button(label='Clear all') clear_button_code = """checkbox.set("active",[]);"""+checkbox_code clear_button.callback = CustomJS(args={key: value for key,value in iterable}, lang="coffeescript", code=clear_button_code) </code></pre> <p>And here is a button that checks all the checkboxes:</p> <pre><code>check_button = Button(label='Check all') check_button_code = """checkbox.set("active","""+str(N_plots)+""");"""+checkbox_code check_button.callback = CustomJS(args={key: value for key,value in iterable}, lang="coffeescript", code=check_button_code) </code></pre>
0
2016-09-19T20:36:40Z
[ "javascript", "python", "checkbox", "bokeh" ]
Filter rows in a CSV file and then sort them based on a column
39,522,677
<p>I'm trying to parse the data file (below) to find only the rows where the user started before a certain date, then order the values from the words column of these rows in ascending order (by start date).</p> <pre class="lang-none prettyprint-override"><code>id, name, start_date, role, end_date, words 657, mystical, 1351140260, cleaner, 1951140260, very lazy 1987, kanyau, 1451189768, watchman, 1539742445, sleeping </code></pre> <p>Can anyone assist?</p> <p>P.S.: newbie here, but here is what I have been playing around with.</p> <pre><code>date_pivot = "6/09/2010 00:00:00" d = datetime.strptime(date_pivot, "%d/%m/%Y %H:%M:%S") date_pivot = time.mktime(d.timetuple()) dp = int(date_pivot) infile = csv.DictReader(open("sample_data.csv","rb"), delimiter=",") previous_users = [row for row in infile if row['start_date'] &lt; 'dp'] #print previous_users with open('final_test.csv','wb') as fou: dw = csv.DictWriter(fou, previous_users.keys()) dw.writeheader() dw.writerow(my_dict) </code></pre>
1
2016-09-16T01:44:00Z
39,522,794
<p>Should be fairly simple. Since you need to type convert and do lookup for your key function, a <code>lambda</code> is simplest:</p> <pre><code>previous_users.sort(key=lambda row: int(row['start_date'])) </code></pre> <p>A note: Passing <code>previous_users.keys()</code> to <code>DictWriter</code> as the fieldnames is doubly wrong. One, you'd need to do <code>previous_users[0].keys()</code> (after verifying it was non-empty), because <code>previous_users</code> is a <code>list</code> of <code>dict</code>, not <code>dict</code>. Two, <code>dict</code>s have no defined ordering, so your output columns will likely be rearranged. If that's not a problem, so be it. But you probably want to explicitly pass the field names in the desired order or read them in the right order from the <code>DictReader</code>, e.g. <code>csv.DictWriter(fou, infile.fieldnames)</code></p> <p>Additional typo note: Presumably you want to compare <code>int(row['start_date']) &lt; dp</code>; you need to convert to <code>int</code>, and you want to compare to the value in <code>dp</code>, not the string <code>"dp"</code>.</p>
0
2016-09-16T02:05:16Z
[ "python", "csv" ]
Python real time image classification problems with Neural Networks
39,522,693
<p>I'm attempting to use caffe and python to do real-time image classification. I'm using OpenCV to stream from my webcam in one process, and in a separate process, using caffe to perform image classification on the frames pulled from the webcam. Then I'm passing the result of the classification back to the main thread to caption the webcam stream.</p> <p>The problem is that even though I have an NVIDIA GPU and am performing the caffe predictions on the GPU, the main thread gets slowed down. Normally without doing any predictions, my webcam stream runs at 30 fps; however, with the predictions, my webcam stream gets at best 15 fps. </p> <p>I've verified that caffe is indeed using the GPU when performing the predictions, and that my GPU or GPU memory is not maxing out. I've also verified that my CPU cores are not getting maxed out at any point during the program. I'm wondering if I am doing something wrong or if there is no way to keep these 2 processes truly separate. Any advice is appreciated. 
Here is my code for reference</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted text = net.predict(image) self.result_queue.put(text) return import cv2 import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre> <p>I am pretty sure it is the caffe prediction slowing everything down, because when I comment out the prediction and pass dummy text back and forth between the processes, I get 30 fps again.</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted #text = net.predict(image) text = "dummy text" self.result_queue.put(text) return import cv2 
import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre>
24
2016-09-16T01:46:46Z
39,609,210
<p>One thing that might be happening in your code is that it works in GPU mode for the first call, but on later calls it calculates the classification in CPU mode, as that is the default mode. On older versions of caffe, setting GPU mode once was enough; on newer versions the mode needs to be set every time. You can try the following change:</p> <pre><code>def run(self): #Load caffe net -- code omitted while True: caffe.set_mode_gpu() caffe.set_device(0) image = self.task_queue.get() #crop image -- code omitted text = net.predict(image) self.result_queue.put(text) return </code></pre> <p>Also, please have a look at the GPU timings while the consumer thread is running. You can use the following command for NVIDIA:</p> <pre><code>nvidia-smi </code></pre> <p>The above command will show you the GPU utilization at runtime.</p> <p>If that doesn't solve it, another solution is to run the OpenCV frame extraction code in its own thread. As it involves I/O and device access, you might benefit from running it on a thread separate from the GUI/main thread. That thread would push frames into a queue and the current consumer thread would predict. In that case, handle the queue carefully with a critical section.</p>
0
2016-09-21T06:44:37Z
[ "python", "opencv", "multiprocessing", "gpgpu", "caffe" ]
Python real time image classification problems with Neural Networks
39,522,693
<p>I'm attempting to use caffe and python to do real-time image classification. I'm using OpenCV to stream from my webcam in one process, and in a separate process, using caffe to perform image classification on the frames pulled from the webcam. Then I'm passing the result of the classification back to the main thread to caption the webcam stream.</p> <p>The problem is that even though I have an NVIDIA GPU and am performing the caffe predictions on the GPU, the main thread gets slowed down. Normally without doing any predictions, my webcam stream runs at 30 fps; however, with the predictions, my webcam stream gets at best 15 fps. </p> <p>I've verified that caffe is indeed using the GPU when performing the predictions, and that my GPU or GPU memory is not maxing out. I've also verified that my CPU cores are not getting maxed out at any point during the program. I'm wondering if I am doing something wrong or if there is no way to keep these 2 processes truly separate. Any advice is appreciated. 
Here is my code for reference</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted text = net.predict(image) self.result_queue.put(text) return import cv2 import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre> <p>I am pretty sure it is the caffe prediction slowing everything down, because when I comment out the prediction and pass dummy text back and forth between the processes, I get 30 fps again.</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted #text = net.predict(image) text = "dummy text" self.result_queue.put(text) return import cv2 
import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre>
24
2016-09-16T01:46:46Z
39,738,897
<p>Try a multithreading approach instead of multiprocessing. Spawning processes is slower than spawning threads. Once they are running, there is not much difference. In your case I think a threading approach will benefit you, as there is so much frame data involved.</p>
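A tiny sketch of one practical difference behind this suggestion (illustrative names, no OpenCV/caffe involved): a thread shares memory with the main thread, so frame data does not have to be pickled through a multiprocessing queue:

```python
import threading

shared_frames = []   # lives in the main thread's memory

def worker():
    # a thread mutates the very same list object; nothing is serialized,
    # unlike passing frames to a multiprocessing.Process
    shared_frames.append("frame-0")

t = threading.Thread(target=worker)
t.start()
t.join()
print(shared_frames)   # ['frame-0'] -- the mutation is visible in the main thread
```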
0
2016-09-28T05:44:07Z
[ "python", "opencv", "multiprocessing", "gpgpu", "caffe" ]
Python real time image classification problems with Neural Networks
39,522,693
<p>I'm attempting to use caffe and python to do real-time image classification. I'm using OpenCV to stream from my webcam in one process, and in a separate process, using caffe to perform image classification on the frames pulled from the webcam. Then I'm passing the result of the classification back to the main thread to caption the webcam stream.</p> <p>The problem is that even though I have an NVIDIA GPU and am performing the caffe predictions on the GPU, the main thread gets slowed down. Normally without doing any predictions, my webcam stream runs at 30 fps; however, with the predictions, my webcam stream runs at best 15 fps. </p> <p>I've verified that caffe is indeed using the GPU when performing the predictions, and that my GPU or GPU memory is not maxing out. I've also verified that my CPU cores are not getting maxed out at any point during the program. I'm wondering if I am doing something wrong or if there is no way to keep these 2 processes truly separate. Any advice is appreciated. 
Here is my code for reference</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted text = net.predict(image) self.result_queue.put(text) return import cv2 import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre> <p>I am pretty sure it is the caffe prediction slowing everything down, because when I comment out the prediction and pass dummy text back and forth between the processes, I get 30 fps again.</p> <pre><code>class Consumer(multiprocessing.Process): def __init__(self, task_queue, result_queue): multiprocessing.Process.__init__(self) self.task_queue = task_queue self.result_queue = result_queue #other initialization stuff def run(self): caffe.set_mode_gpu() caffe.set_device(0) #Load caffe net -- code omitted while True: image = self.task_queue.get() #crop image -- code omitted #text = net.predict(image) text = "dummy text" self.result_queue.put(text) return import cv2 
import caffe import multiprocessing import Queue tasks = multiprocessing.Queue() results = multiprocessing.Queue() consumer = Consumer(tasks,results) consumer.start() #Creating window and starting video capturer from camera cv2.namedWindow("preview") vc = cv2.VideoCapture(0) #Try to get the first frame if vc.isOpened(): rval, frame = vc.read() else: rval = False frame_copy[:] = frame task_empty = True while rval: if task_empty: tasks.put(frame_copy) task_empty = False if not results.empty(): text = results.get() #Add text to frame cv2.putText(frame,text) task_empty = True #Showing the frame with all the applied modifications cv2.imshow("preview", frame) #Getting next frame from camera rval, frame = vc.read() frame_copy[:] = frame #Getting keyboard input key = cv2.waitKey(1) #exit on ESC if key == 27: break </code></pre>
24
2016-09-16T01:46:46Z
39,766,758
<p><strong>Some Explanations and Some Rethinks:</strong></p> <ol> <li><p>I ran my code below on a laptop with an <code>Intel Core i5-6300HQ @2.3GHz</code> cpu, <code>8 GB RAM</code> and an <code>NVIDIA GeForce GTX 960M</code> gpu (2GB memory), and the result was: </p> <p>Whether I ran the code with caffe running or not (by commenting out or not <code>net_output = this-&gt;net_-&gt;Forward(net_input)</code> and some necessary stuff in <code>void Consumer::entry()</code>), I could always get around 30 fps in the main thread.</p> <p>A similar result was obtained on a PC with an <code>Intel Core i5-4440</code> cpu, <code>8 GB RAM</code> and an <code>NVIDIA GeForce GT 630</code> gpu (1GB memory).</p></li> <li><p>I ran the code of <a href="http://stackoverflow.com/q/39522693/6281477">@user3543300</a> in the question on the same laptop, and the result was:</p> <p>Whether caffe was running (on gpu) or not, I could also get around 30 fps.</p></li> <li><p>According to <a href="http://stackoverflow.com/q/39522693/6281477">@user3543300</a>'s feedback, with the 2 versions of code mentioned above, @user3543300 could get only around 15 fps when running caffe (on a <code>Nvidia GeForce 940MX GPU and Intel® Core™ i7-6500U CPU @ 2.50GHz × 4</code> laptop). There is also a slowdown of the webcam's frame rate when caffe runs on the gpu as an independent program. </p></li> </ol> <p>So I still think that the problem most probably lies in hardware I/O limitations such as gpu memory bandwidth or RAM bandwidth. Hopefully <a href="http://stackoverflow.com/q/39522693/6281477">@user3543300</a> can check this or find out the true problem that I haven't realized.</p> <p>If the problem is indeed what I describe above, then a sensible thought would be to reduce the memory I/O overhead introduced by the CNN network. In fact, to solve similar problems on embedded systems with limited hardware resources, there has been some research on this topic, e.g. 
<a href="https://github.com/wenwei202/caffe/tree/scnn" rel="nofollow">Structurally Sparse Deep Neural Networks</a>, <a href="https://github.com/songhan/SqueezeNet-Deep-Compression" rel="nofollow">SqueezeNet</a>, <a href="https://github.com/songhan/Deep-Compression-AlexNet" rel="nofollow">Deep-Compression</a>. So hopefully, it will also help to improve the frame rate of webcam in the question by applying such skills.</p> <hr> <p><strong>Original Answer:</strong></p> <p>Try this c++ solution. It uses threads for the <a href="https://en.wikipedia.org/wiki/I/O_bound" rel="nofollow">I/O overhead</a> in your task, I tested it using <a href="https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet" rel="nofollow"><code>bvlc_alexnet.caffemodel</code></a> to do image classification and didn't see obvious slowing down of the main thread(webcam stream) when caffe running(on GPU):</p> <pre><code>#include &lt;stdio.h&gt; #include &lt;iostream&gt; #include &lt;string&gt; #include &lt;boost/thread.hpp&gt; #include &lt;boost/shared_ptr.hpp&gt; #include "caffe/caffe.hpp" #include "caffe/util/blocking_queue.hpp" #include "caffe/data_transformer.hpp" #include "opencv2/opencv.hpp" using namespace cv; //Queue pair for sharing image/results between webcam and caffe threads template&lt;typename T&gt; class QueuePair { public: explicit QueuePair(int size); ~QueuePair(); caffe::BlockingQueue&lt;T*&gt; free_; caffe::BlockingQueue&lt;T*&gt; full_; DISABLE_COPY_AND_ASSIGN(QueuePair); }; template&lt;typename T&gt; QueuePair&lt;T&gt;::QueuePair(int size) { // Initialize the free queue for (int i = 0; i &lt; size; ++i) { free_.push(new T); } } template&lt;typename T&gt; QueuePair&lt;T&gt;::~QueuePair(){ T *data; while (free_.try_pop(&amp;data)){ delete data; } while (full_.try_pop(&amp;data)){ delete data; } } template class QueuePair&lt;Mat&gt;; template class QueuePair&lt;std::string&gt;; //Do image classification(caffe predict) using a subthread class Consumer{ public: 
Consumer(boost::shared_ptr&lt;QueuePair&lt;Mat&gt;&gt; task , boost::shared_ptr&lt;QueuePair&lt;std::string&gt;&gt; result); ~Consumer(); void Run(); void Stop(); void entry(boost::shared_ptr&lt;QueuePair&lt;Mat&gt;&gt; task , boost::shared_ptr&lt;QueuePair&lt;std::string&gt;&gt; result); private: bool must_stop(); boost::shared_ptr&lt;QueuePair&lt;Mat&gt; &gt; task_q_; boost::shared_ptr&lt;QueuePair&lt;std::string&gt; &gt; result_q_; caffe::Blob&lt;float&gt; *net_input_blob_; boost::shared_ptr&lt;caffe::DataTransformer&lt;float&gt; &gt; data_transformer_; boost::shared_ptr&lt;caffe::Net&lt;float&gt; &gt; net_; std::vector&lt;std::string&gt; synset_words_; boost::shared_ptr&lt;boost::thread&gt; thread_; }; Consumer::Consumer(boost::shared_ptr&lt;QueuePair&lt;Mat&gt;&gt; task , boost::shared_ptr&lt;QueuePair&lt;std::string&gt;&gt; result) : task_q_(task), result_q_(result), thread_(){ //for data preprocess caffe::TransformationParameter trans_para; //set mean trans_para.set_mean_file("/path/to/imagenet_mean.binaryproto"); //set crop size, here is cropping 227x227 from 256x256 trans_para.set_crop_size(227); //instantiate a DataTransformer using trans_para for image preprocess data_transformer_.reset(new caffe::DataTransformer&lt;float&gt;(trans_para , caffe::TEST)); //initialize a caffe net net_.reset(new caffe::Net&lt;float&gt;(std::string("/path/to/deploy.prototxt") , caffe::TEST)); //net parameter net_-&gt;CopyTrainedLayersFrom(std::string("/path/to/bvlc_alexnet.caffemodel")); std::fstream synset_word("path/to/caffe/data/ilsvrc12/synset_words.txt"); std::string line; if (!synset_word.good()){ std::cerr &lt;&lt; "synset words open failed!" 
&lt;&lt; std::endl; } while (std::getline(synset_word, line)){ synset_words_.push_back(line.substr(line.find_first_of(' '), line.length())); } //a container for net input, holds data converted from cv::Mat net_input_blob_ = new caffe::Blob&lt;float&gt;(1, 3, 227, 227); } Consumer::~Consumer(){ Stop(); delete net_input_blob_; } void Consumer::entry(boost::shared_ptr&lt;QueuePair&lt;Mat&gt;&gt; task , boost::shared_ptr&lt;QueuePair&lt;std::string&gt;&gt; result){ caffe::Caffe::set_mode(caffe::Caffe::GPU); caffe::Caffe::SetDevice(0); cv::Mat *frame; cv::Mat resized_image(256, 256, CV_8UC3); cv::Size re_size(resized_image.cols, resized_image.rows); //for caffe input and output std::vector&lt;caffe::Blob&lt;float&gt; *&gt; net_input; std::vector&lt;caffe::Blob&lt;float&gt; *&gt; net_output; net_input.push_back(net_input_blob_); std::string *res; int pre_num = 1; while (!must_stop()){ std::stringstream result_strm; frame = task-&gt;full_.pop(); cv::resize(*frame, resized_image, re_size, 0, 0, CV_INTER_LINEAR); this-&gt;data_transformer_-&gt;Transform(resized_image, net_input_blob_); net_output = this-&gt;net_-&gt;Forward(net_input); task-&gt;free_.push(frame); res = result-&gt;free_.pop(); //Process results here for (int i = 0; i &lt; pre_num; ++i){ result_strm &lt;&lt; synset_words_[net_output[0]-&gt;cpu_data()[i]] &lt;&lt; " " &lt;&lt; net_output[0]-&gt;cpu_data()[i + pre_num] &lt;&lt; "\n"; } *res = result_strm.str(); result-&gt;full_.push(res); } } void Consumer::Run(){ if (!thread_){ try{ thread_.reset(new boost::thread(&amp;Consumer::entry, this, task_q_, result_q_)); } catch (std::exception&amp; e) { std::cerr &lt;&lt; "Thread exception: " &lt;&lt; e.what() &lt;&lt; std::endl; } } else std::cout &lt;&lt; "Consumer thread may have been running!" 
&lt;&lt; std::endl; }; void Consumer::Stop(){ if (thread_ &amp;&amp; thread_-&gt;joinable()){ thread_-&gt;interrupt(); try { thread_-&gt;join(); } catch (boost::thread_interrupted&amp;) { } catch (std::exception&amp; e) { std::cerr &lt;&lt; "Thread exception: " &lt;&lt; e.what() &lt;&lt; std::endl; } } } bool Consumer::must_stop(){ return thread_ &amp;&amp; thread_-&gt;interruption_requested(); } int main(void) { int max_queue_size = 1000; boost::shared_ptr&lt;QueuePair&lt;Mat&gt;&gt; tasks(new QueuePair&lt;Mat&gt;(max_queue_size)); boost::shared_ptr&lt;QueuePair&lt;std::string&gt;&gt; results(new QueuePair&lt;std::string&gt;(max_queue_size)); char str[100], info_str[100] = " results: "; VideoCapture vc(0); if (!vc.isOpened()) return -1; Consumer consumer(tasks, results); consumer.Run(); Mat frame, *frame_copy; namedWindow("preview"); double t, fps; while (true){ t = (double)getTickCount(); vc.read(frame); if (waitKey(1) &gt;= 0){ consumer.Stop(); break; } if (tasks-&gt;free_.try_peek(&amp;frame_copy)){ frame_copy = tasks-&gt;free_.pop(); *frame_copy = frame.clone(); tasks-&gt;full_.push(frame_copy); } std::string *res; std::string frame_info(""); if (results-&gt;full_.try_peek(&amp;res)){ res = results-&gt;full_.pop(); frame_info = frame_info + info_str; frame_info = frame_info + *res; results-&gt;free_.push(res); } t = ((double)getTickCount() - t) / getTickFrequency(); fps = 1.0 / t; sprintf(str, " fps: %.2f", fps); frame_info = frame_info + str; putText(frame, frame_info, Point(5, 20) , FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0)); imshow("preview", frame); } } </code></pre> <p>And in <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/util/blocking_queue.cpp" rel="nofollow">src/caffe/util/blocking_queue.cpp</a>, make a little change below and rebuild caffe:</p> <pre><code>...//Other stuff template class BlockingQueue&lt;Batch&lt;float&gt;*&gt;; template class BlockingQueue&lt;Batch&lt;double&gt;*&gt;; template class BlockingQueue&lt;Datum*&gt;; 
template class BlockingQueue&lt;shared_ptr&lt;DataReader::QueuePair&gt; &gt;; template class BlockingQueue&lt;P2PSync&lt;float&gt;*&gt;; template class BlockingQueue&lt;P2PSync&lt;double&gt;*&gt;; //add these 2 lines below template class BlockingQueue&lt;cv::Mat*&gt;; template class BlockingQueue&lt;std::string*&gt;; </code></pre>
2
2016-09-29T09:38:35Z
[ "python", "opencv", "multiprocessing", "gpgpu", "caffe" ]
How to pass an extra parameter using perform_create of django rest framework?
39,522,703
<p>I am using django rest framework's generic views. I am trying to insert the request user's id into the author field of a post. </p> <h1>Serializers</h1> <pre><code>class PostSerializer(serializers.ModelSerializer):
    spoter = serializers.PrimaryKeyRelatedField(
        queryset= User.objects.all(),
    )

    class Meta:
        model = PostModel
        fields = ('author','text')
</code></pre> <h1>View</h1> <pre><code>class UserRequestMixin(object):
    def perform_update(self, serializer):
        serializer.save(author = self.request.user.id)

    def perform_create(self, serializer):
        serializer.save(author = self.request.user.id)


class PostViewSet(UserRequestMixin,DefaultsMixin,viewsets.ModelViewSet):
    permission_classes = (IsOwnerOrReadOnly,)
    queryset = PostModel.objects.all()
    serializer_class = PostSerializer
</code></pre> <h1>Error</h1> <pre><code>status -&gt; 400
{
    "author": [
        "This field is required."
    ]
}
</code></pre>
0
2016-09-16T01:47:45Z
39,525,112
<p>Depending on what you want to do, you should:</p> <ul> <li>Mark the author field as either <code>read_only=True</code> or <code>required=False</code></li> <li>Set the author field with <a href="http://www.django-rest-framework.org/api-guide/validators/#createonlydefault" rel="nofollow"><code>CreateOnlyDefault</code></a></li> </ul> <p>For example:</p> <pre><code>class PostSerializer(serializers.ModelSerializer):
    spoter = serializers.PrimaryKeyRelatedField(
        queryset= User.objects.all(),
    )

    class Meta:
        model = PostModel
        fields = ('author','text')
        extra_kwargs = {
            'author': {
                'read_only': True,
                'default': serializers.CurrentUserDefault(),
            }
        }
</code></pre>
0
2016-09-16T06:34:12Z
[ "python", "django", "python-3.x", "django-rest-framework", "django-serializer" ]
Can't install M2Crypto into Linux mint Rafaela
39,522,846
<p>I'm trying to install the <code>M2Crypto</code> library with <code>pip</code> inside a <code>virtualenv</code>, but I just can't make it work.</p> <p>I have already done <code>sudo apt-get install python-dev</code> and <code>sudo apt-get install python-m2crypto</code>; they are on the system.</p> <p>I also tried installing <code>pip install pyopenssl</code>.</p> <p>These are the last lines of my traceback (it's way too long):</p> <pre><code>SWIG/_m2crypto_wrap.c: In function ‘dsa_get_g’:
SWIG/_m2crypto_wrap.c:6220:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘dsa_get_pub’:
SWIG/_m2crypto_wrap.c:6228:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘dsa_get_priv’:
SWIG/_m2crypto_wrap.c:6236:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘dsa_check_key’:
SWIG/_m2crypto_wrap.c:6489:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘dsa_check_pub_key’:
SWIG/_m2crypto_wrap.c:6493:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘dsa_keylen’:
SWIG/_m2crypto_wrap.c:6497:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘x509_name_get_der’:
SWIG/_m2crypto_wrap.c:7313:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘ecdsa_sig_get_r’:
SWIG/_m2crypto_wrap.c:8127:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
SWIG/_m2crypto_wrap.c: In function ‘ecdsa_sig_get_s’:
SWIG/_m2crypto_wrap.c:8131:1: warning: control reaches end of non-void function [-Wreturn-type]
 }
 ^
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre> <p>I also have <code>swig</code> installed on my system. Any ideas on what it could be?</p> <p>Thanks in advance!</p>
0
2016-09-16T02:14:12Z
39,523,290
<p>I managed to solve it, it was a version issue, just ran:</p> <pre><code>pip install M2Crypto==0.24.0 </code></pre> <p>Thank You</p>
0
2016-09-16T03:20:49Z
[ "python", "linux", "swig", "m2crypto" ]
Python Thread getting items from Queue - how do you determine thread has not called task_done()?
39,522,877
<pre><code>dispQ = Queue.Queue()
stop_thr_event = threading.Event()

def worker (stop_event):
    while not stop_event.wait(0):
        try:
            job = dispQ.get(timeout=1)
            job.do_work()   # does some work and may put more jobs in the dispQ
            dispQ.task_done()
        except Queue.Empty, msg:
            continue

t1 = threading.Thread( target=worker, args=(stop_thr_event,) )
t1.daemon = True
t1.start()

t2 = threading.Thread( target=worker, args=(stop_thr_event,) )
t2.daemon = True
t2.start()

# put some initial jobs in the dispQ here

while True:
    if dispQ.qsize() == 0:
        break
    else:
        # do something important
        pass

stop_thr_event.set()
t1.join()
t2.join()
</code></pre> <p>The problem:<br> dispQ.qsize() can be 0 while, inside the worker func, a job is still creating more items, and hence the loop breaks out (after breaking out of the loop, there are more jobs in dispQ)</p> <p>I need to do something like:<br> if dispQ.qsize() == 0 and there is no item being worked on inside the worker func, then break out of the while loop</p> <p>i.e. task_done() hasn't been called yet after get() was called</p> <p>It would be nice if it had dispQ.join() with a timeout</p>
1
2016-09-16T02:19:45Z
39,523,961
<p>Here is the source for Queue.join; it looks like you can just use <code>queue.unfinished_tasks &gt; 0</code> to check if any tasks are running or in queue. You may want to acquire (then release) the lock to ensure that the queue doesn't change while you're checking its status. (I believe) You don't need to do this if you can guarantee that no items will be added to the queue after the last task finishes.</p> <pre><code>def join(self):
    # ...
    self.all_tasks_done.acquire()
    try:
        while self.unfinished_tasks:
            self.all_tasks_done.wait()
    finally:
        self.all_tasks_done.release()
</code></pre>
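As an illustrative sketch of that check (shown with Python 3's queue module; Python 2's Queue has the same unfinished_tasks counter and all_tasks_done condition used in the source above):

```python
import queue

def queue_is_idle(q):
    # unfinished_tasks is incremented by put() and decremented by task_done(),
    # so it is 0 only when nothing is queued *and* nothing is being worked on.
    with q.all_tasks_done:          # the condition's lock guards the counter
        return q.unfinished_tasks == 0

dispQ = queue.Queue()
assert queue_is_idle(dispQ)         # empty, nothing in flight

dispQ.put("job")
assert not queue_is_idle(dispQ)     # queued, not yet done

job = dispQ.get()                   # qsize() is 0 now, but work is in flight
assert not queue_is_idle(dispQ)

dispQ.task_done()
assert queue_is_idle(dispQ)
```

This is exactly the situation qsize() misses: between get() and task_done() the queue looks empty even though a worker may still add more jobs.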
1
2016-09-16T04:55:31Z
[ "python", "multithreading", "queue" ]
Best way to filter a django QuerySet based on value without name (rank)
39,522,924
<p>So I've built a web app with django, using postgres for the database. Using <code>django.contrib.postgres.search.SearchRank</code>, I search for all recipes in my database with the word 'chocolate', and it gives me back a nice sorted QuerySet.</p> <pre><code>from django.contrib.postgres.search import SearchVector, SearchQuery, SearchRank

# example code:
# searching recipe_name:
vector = SearchVector('recipe_name')
# searching for the term 'chocolate':
query = SearchQuery('chocolate')

# Recipe = recipes class in Models.py; contains 'recipe_name' column.
search_recipes = Recipe.objects.annotate(
    rank=SearchRank(vector, query)
).order_by('-rank')

# this gives me a QuerySet of recipes, which have a rank attribute:
print(search_recipes[0].rank)   # 0.0607927
</code></pre> <p>I don't want recipes whose match scores are 0 to show up, so I want to filter by score ('rank'). Usually I'd do this with e.g.:</p> <pre><code>search_recipes.filter(rank_gt=0)
</code></pre> <p>But <code>'rank'</code> is an added attribute from postgres's special <code>SearchRank</code> function; it's not a column name or anything, so this creates an error (<code>Cannot resolve keyword 'rank_gt' into field</code>). I can get what I want with the following:</p> <pre><code>x = [r for r in search_recipes if r.rank &gt; 0]
</code></pre> <p>But I'm just wondering if there's a better, less hacky way to do this (seeing as the list comprehension gives me a list of recipes, and not a QuerySet --&gt; so I lose e.g. my 'rank' information that I had before).</p> <p>Thanks!</p>
0
2016-09-16T02:27:15Z
39,526,163
<p>You have an error in your filter: if you would like to apply additional conditions to a field, you have to do it with a double underscore <code>__</code>.</p> <p><code>search_recipes.filter(rank__gt=0)</code></p>
1
2016-09-16T07:39:18Z
[ "python", "django", "postgresql" ]
Digit Pattern matching
39,522,926
<p>I have a list:</p> <pre><code>a = [1, 2, 4, 5, 8, 2, 4, 2, 4, 6, 7] </code></pre> <p>I would like to count how many times <code>2, 4</code> appear together </p> <pre><code>( 2 at index i and 4 at index i+1). </code></pre> <p>How can I do that? </p>
1
2016-09-16T02:27:43Z
39,523,004
<p>In order to do that you'd need to check every pair in your given list to see if the pair contains <code>2</code> and <code>4</code>. In Python, you can create pairs by using the <code>zip</code> built-in function. </p> <p>In order to create adjacent pairs, we supply the list <code>a</code> as the first argument to <code>zip</code> and the <em>same list</em> one position forward:</p> <pre><code>list(zip(a, a[1:]))  # a[1:] is [2, 4, 5, 8, 2, 4, 2, 4, 6, 7]
</code></pre> <p>This produces pairs of the form:</p> <pre><code>[(1, 2), (2, 4), (4, 5), (5, 8), (8, 2), (2, 4), ...]
</code></pre> <p>Now, what we can do is <em>iterate through</em> these pairs (using <code>for</code>), unpack them to variables (<code>i</code> and <code>j</code>) and test our condition (one is equal to <code>2</code> and the other to <code>4</code>). This looks like this:</p> <pre><code>[i == 2 and j == 4 for i, j in zip(a, a[1:])]
</code></pre> <p>This creates a list with <code>True/False</code> entries for pairs that succeed and fail the condition we set:</p> <pre><code>[False, True, False, False, False, True, False, True, False, False]
</code></pre> <p>Here's where another built-in function comes in handy, <code>sum()</code>. You can supply <code>sum</code> with a list of values and it will sum these up to a single value; fortunately, in <code>Python</code>, <code>True</code> is treated as <code>1</code> so <code>sum</code> can easily sum it up and find a result if we supply the list we created previously:</p> <pre><code>sum(i == 2 and j == 4 for i,j in zip(a, a[1:]))
</code></pre> <p>Doing this helps us reach the final result of <code>3</code>.</p> <p>In a procedural way this can be translated as:</p> <pre><code>count = 0
for i,j in zip(a, a[1:]):
    if i == 2 and j == 4:
        count += 1
print(count)
</code></pre> <p>This again yields <code>3</code> as does the <code>sum</code> solution.</p>
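The same zip trick generalizes to patterns of any length; a sketch (the helper name here is made up for illustration):

```python
def count_pattern(seq, pattern):
    k = len(pattern)
    # zip(seq, seq[1:], ..., seq[k-1:]) yields every length-k sliding window
    windows = zip(*(seq[i:] for i in range(k)))
    return sum(w == tuple(pattern) for w in windows)

a = [1, 2, 4, 5, 8, 2, 4, 2, 4, 6, 7]
print(count_pattern(a, [2, 4]))      # 3, same as the pair-specific version
print(count_pattern(a, [2, 4, 2]))   # 1
```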
0
2016-09-16T02:38:22Z
[ "python", "python-3.x", "indexing" ]
Digit Pattern matching
39,522,926
<p>I have a list:</p> <pre><code>a = [1, 2, 4, 5, 8, 2, 4, 2, 4, 6, 7] </code></pre> <p>I would like to count how many times <code>2, 4</code> appear together </p> <pre><code>( 2 at index i and 4 at index i+1). </code></pre> <p>How can I do that? </p>
1
2016-09-16T02:27:43Z
39,523,395
<p>Join the list into a string and search for <code>'24'</code>.</p> <pre><code>&gt;&gt;&gt; a = [1, 2, 4, 5, 8, 2, 4, 2, 4, 6, 7]
&gt;&gt;&gt; ''.join(map(str, a)).count('24')
3
</code></pre>
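One caveat worth adding, sketched below: this string trick relies on every number being a single digit; multi-digit entries can produce false matches:

```python
a = [1, 2, 4, 5, 8, 2, 4, 2, 4, 6, 7]
assert ''.join(map(str, a)).count('24') == 3   # fine: all entries are single digits

b = [2, 43]                   # the adjacent pair (2, 4) never occurs here...
joined = ''.join(map(str, b))
print(joined)                 # '243' -- yet it still contains '24'
print(joined.count('24'))     # 1, a false positive
```

For lists that may hold multi-digit numbers, a zip-based pairwise count is safer.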
0
2016-09-16T03:35:32Z
[ "python", "python-3.x", "indexing" ]
Parse a Directory of json files with python
39,522,949
<p>I have been looking for how to do this but I cannot find it. I have a directory of .json files and I am supposed to parse each one. I know I have to use glob and os. I feel like the logic behind it is to loop over the directory and, when reading each file, extract the data that is needed, but I cannot find anything to help me, nor do I know the syntax. If it's against stack rules and people think this is me asking for answers to homework, that is fine, I can just search elsewhere; this isn't homework, I just don't understand. </p>
0
2016-09-16T02:31:09Z
39,523,013
<p>Assuming that your JSON files are named with a <code>.json</code> extension and that they are in the same directory that you are running the script from:</p> <pre><code>import json
from glob import glob

data = []
for file_name in glob('*.json'):
    with open(file_name) as f:
        data.append(json.load(f))
</code></pre> <p>This will give you a list of parsed JSON objects (dictionaries and/or lists).</p> <p>If you need to access the files in another directory you can construct your glob pattern like this:</p> <pre><code>import os.path

pattern = os.path.join('/path/to/json/files', '*.json')
for file_name in glob(pattern):
    ....
</code></pre>
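If you also want to know which file each object came from, and to skip files that fail to parse, a variation on the above; the throwaway temp directory here is only to make the sketch self-contained:

```python
import json
import os
import tempfile
from glob import glob

def load_json_dir(directory):
    data = {}
    for file_name in glob(os.path.join(directory, '*.json')):
        try:
            with open(file_name) as f:
                data[os.path.basename(file_name)] = json.load(f)
        except ValueError:    # json.JSONDecodeError is a subclass of ValueError
            print('skipping unparseable file: %s' % file_name)
    return data

# demo on a temporary directory with one good and one bad file
d = tempfile.mkdtemp()
with open(os.path.join(d, 'good.json'), 'w') as f:
    json.dump({'x': 1}, f)
with open(os.path.join(d, 'bad.json'), 'w') as f:
    f.write('{not json')

data = load_json_dir(d)
print(sorted(data))       # ['good.json']
print(data['good.json'])  # {'x': 1}
```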
0
2016-09-16T02:40:01Z
[ "python", "json", "parsing" ]
List all customers via stripe API
39,522,963
<p>I'm trying to get a list of all the customers in my stripe account but am limited by pagination, and I wanted to know the most pythonic way to do this.</p> <pre><code>customers = []
results = stripe.Customer.list(limit=100)
print len(results.data)
for c in results:
    customers.append(c)

results = stripe.Customer.list(limit=100, starting_after=results.data[-1].id)
for c in results:
    customers.append(c)
</code></pre> <p>This lists the first 200, but then how do I do this if I have, say, 300, 500, etc. customers?</p>
1
2016-09-16T02:32:25Z
39,528,416
<p>Stripe's Python library has an "auto-pagination" feature:</p> <pre><code>customers = stripe.Customer.list(limit=100)
for customer in customers.auto_paging_iter():
    # Do something with customer
</code></pre> <p>The <code>auto_paging_iter</code> method will iterate over every customer, firing new requests as needed in the background until every customer has been retrieved.</p> <p>The auto-pagination feature is documented <a href="https://stripe.com/docs/api/python#pagination" rel="nofollow">here</a> (you have to scroll down a bit).</p>
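Conceptually, the auto-pagination iterator just keeps requesting the next page, using the last object of each page as the starting_after cursor, until has_more is false. A stand-in sketch (no real Stripe calls; fetch_page fakes the API with plain integers):

```python
def fetch_page(limit, starting_after=None):
    # stand-in for stripe.Customer.list(); the "customers" are just ids 0..249
    all_ids = list(range(250))
    start = 0 if starting_after is None else all_ids.index(starting_after) + 1
    page = all_ids[start:start + limit]
    return {'data': page, 'has_more': start + limit < len(all_ids)}

def iter_all(limit=100):
    cursor = None
    while True:
        page = fetch_page(limit, starting_after=cursor)
        for obj in page['data']:
            yield obj
        if not page['has_more']:
            return
        cursor = page['data'][-1]   # last object's id becomes the next cursor

customers = list(iter_all())
print(len(customers))   # 250, fetched in three requests of at most 100
```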
2
2016-09-16T09:47:09Z
[ "python", "stripe-payments" ]
How to speed up a script to get backups from routers in python
39,522,975
<p>I have a script that gets IPs from a text file, and then gets a backup from or sends commands to routers or switches (ASR, Nexus, Catalyst, etc). I am using Python version 2.7.12 and the telnetlib module. The problem is that it takes 1 hour for just 200 devices, so it is not very efficient. Maybe running multiple processes in parallel would be the solution? I attached the snippet.</p> <pre><code>#!/usr/bin/python
#-------------------------------------------------------------------------------
# Purpose: Get Backup
# Enterprise: CLARO Cisco
# Date: 31 de Agosto
#-------------------------------------------------------------------------------
import datetime
import getpass
import sys
import telnetlib
import os

x = datetime.datetime.now()
#date = ("%s-%s-%s" % (x.year, x.month, x.day) )
date = ("%s-%s-%s_%s:%s" % (x.year, x.month, x.day,x.hour,x.minute) )

HOST = []
file = open('./file.txt','r')
NUM= len(file.readlines())
file.seek(0)
for j in range(0,NUM):
    JOC=file.readline()
    part=JOC.split()
    if len(part)&gt;1:
        HOST.append(part[0].strip())
    else:
        HOST.append(JOC.strip())
file.close()

print "###Script to get backup from Cisco devices####"
print HOST

user = "usr"
password = "pwd"
enable = "nbl"
carpeta = "/home/jocmtb/BACKUP_" + date
os.makedirs(carpeta)

print "###Getting info from devices listed above####"
for item in HOST:
    try:
        rutadir = "./BACKUP_"+date+"/"+date +"_"+ item
        tn = telnetlib.Telnet(item)
        tn.read_until("Username: ")
        tn.write((user + "\n").encode('ascii'))
        tn.read_until("Password: ")
        tn.write((password + "\n").encode('ascii'))
        tn.write("enable\n")
        tn.read_until("Password: ")
        tn.write((enable + "\n").encode('ascii'))
        tn.write("terminal len 0\n")
        tn.write("sh version | i Software\n")
        tn.write("exit\n")
        print "# Getting info from device "+item
        running = tn.read_until("^exit\n")
        FILE = open(rutadir, "w")
        FILE.write(running)
        FILE.close()
        print "# Finish"
        tn.close()
        del tn
    except:
        print("Unexpected error: on host " + item)
        exit()
</code></pre>
-1
2016-09-16T02:33:39Z
39,550,522
<p>Well, after 1 day of struggle, I did it. Netmiko is a great module but it only works for SSH and I needed telnet access, so I stuck with this one. Here is the final code. Hope someone finds it useful. I get the IPs of the devices via a text file that can have a different number of columns.</p> <pre><code>from multiprocessing import Pool
import datetime
import getpass
import time
import sys
import telnetlib
import os

def f1():
    print "###Script to get backup from Cisco devices####"
    print HOST
    print "###Getting info from devices listed above####"
    for item in HOST:
        try:
            rutadir = "./BACKUP_"+date+"/"+date +"_"+ item
            tn = telnetlib.Telnet(item)
            tn.read_until("Username: ")
            tn.write((user + "\n").encode('ascii'))
            time.sleep(1)
            tn.read_until("Password: ")
            tn.write((password + "\n").encode('ascii'))
            time.sleep(1)
            tn.write("enable\n")
            tn.read_until("Password: ")
            tn.write((enable + "\n").encode('ascii'))
            time.sleep(1)
            tn.write("terminal len 0\n")
            time.sleep(1)
            tn.write("sh version\n")
            time.sleep(1)
            tn.write("exit\n")
            time.sleep(1)
            #print "# Getting info from device "+item
            running = tn.read_until("^exit\n")
            FILE = open(rutadir, "w")
            FILE.write(running)
            FILE.close()
            print "# Getting info from device "+item+" OK"
            tn.close()
            del tn
        except:
            print("# Unexpected ERROR: on host " + item)

def f2():
    print HOST2
    for item2 in HOST2:
        try:
            rutadir = "./BACKUP_"+date+"/"+date +"_"+ item2
            tn = telnetlib.Telnet(item2)
            tn.read_until("Username: ")
            tn.write((user2 + "\n").encode('ascii'))
            time.sleep(1)
            tn.read_until("Password: ")
            tn.write((password2 + "\n").encode('ascii'))
            time.sleep(1)
            tn.write("terminal len 0\n")
            tn.write("sh version\n")
            tn.write("exit\n")
            #print "# Getting info from device "+item2
            running = tn.read_until("^exit\n")
            FILE = open(rutadir, "w")
            FILE.write(running)
            FILE.close()
            print "# Getting info from device "+item2+" OK"
            tn.close()
            del tn
        except:
            print("# Unexpected ERROR: on host " + item2)

def f3():
    print HOST3
    for item3 in HOST3:
        try:
            rutadir = "./BACKUP_"+date+"/"+date +"_"+ item3
            tn = telnetlib.Telnet(item3)
            tn.read_until("login: ")
            tn.write((user + "\n").encode('ascii'))
            time.sleep(1)
            tn.read_until("Password: ")
            tn.write((password + "\n").encode('ascii'))
            time.sleep(1)
            tn.write("terminal len 0\n")
            time.sleep(1)
            tn.write("sh version\n")
            time.sleep(1)
            tn.write("exit\n")
            #print "# Getting info from device "+item3
            running = tn.read_until("^exit\n")
            FILE = open(rutadir, "w")
            FILE.write(running)
            FILE.close()
            print "# Getting info from device "+item3+" OK"
            tn.close()
            del tn
        except:
            tn.close()
            FILE.close()
            del tn
            print("# Unexpected ERROR: on host " + item3)

def f4():
    print HOST4
    for item4 in HOST4:
        try:
            rutadir = "./BACKUP_"+date+"/"+date +"_"+ item4
            tn = telnetlib.Telnet(item4)
            tn.read_until("Username: ")
            tn.write((user2 + "\n").encode('ascii'))
            time.sleep(1)
            tn.read_until("Password: ")
            tn.write((password2 + "\n").encode('ascii'))
            time.sleep(1)
            tn.write("terminal len 0\n")
            tn.write("sh version\n")
            tn.write("exit\n")
            #print "# Getting info from device "+item4
            running = tn.read_until("^exit\n")
            FILE = open(rutadir, "w")
            FILE.write(running)
            FILE.close()
            print "# Getting info from device "+item4+" OK"
            tn.close()
            del tn
        except:
            print("# Unexpected ERROR: on host " + item4)

if __name__ == '__main__':
    x = datetime.datetime.now()
    date = ("%s-%s-%s_%s:%s" % (x.year, x.month, x.day,x.hour,x.minute) )
    START = ("%s-%s-%s_%s:%s:%s" % (x.year, x.month, x.day,x.hour,x.minute,x.second) )
    HOST = []
    HOST2 = []
    HOST3 = []
    HOST4 = []
    file = open('./file.txt','r')
    NUM = len(file.readlines())
    file.seek(0)
    for j in range(0,NUM):
        JOC = file.readline()
        part = JOC.split()
        if len(part)&gt;1:
            HOST.append(part[0].strip())
        else:
            HOST.append(JOC.strip())
    file2 = open('./file2.txt','r')
    NUM2 = len(file2.readlines())
    file2.seek(0)
    for j in range(0,NUM2):
        JOC2 = file2.readline()
        part2 = JOC2.split()
        if len(part2)&gt;1:
            HOST2.append(part2[0].strip())
        else:
            HOST2.append(JOC2.strip())
    file3 = open('./file3.txt','r')
    NUM3 = len(file3.readlines())
    file3.seek(0)
    for j in range(0,NUM3):
        JOC3 = file3.readline()
        part3 = JOC3.split()
        if len(part3)&gt;1:
            HOST3.append(part3[0].strip())
        else:
            HOST3.append(JOC3.strip())
    file4 = open('./file4.txt','r')
    NUM4 = len(file4.readlines())
    file4.seek(0)
    for j in range(0,NUM4):
        JOC4 = file4.readline()
        part4 = JOC4.split()
        if len(part4)&gt;1:
            HOST4.append(part4[0].strip())
        else:
            HOST4.append(JOC4.strip())
    file.close()
    file2.close()
    file3.close()
    file4.close()
    user = "user"
    password = "pwd"
    enable = "enable"
    user2 = "user2"
    password2 = "pwd2"
    carpeta = "/home/user/BACKUP_" + date
    os.makedirs(carpeta)
    pool = Pool(processes=4)       # start 4 worker processes
    result1 = pool.apply_async(f1) # evaluate f1() asynchronously
    result2 = pool.apply_async(f2)
    result3 = pool.apply_async(f3)
    result4 = pool.apply_async(f4)
    pool.close()
    result1.get()
    result2.get()
    result3.get()
    result4.get()
    y = datetime.datetime.now()
    STOP = ("%s-%s-%s_%s:%s:%s" % (y.year, y.month, y.day,y.hour,y.minute,y.second) )
    print("##Time Execution of the script##")
    print("# Time Script start: " + START)
    print("# Time Script stop: " + STOP)
    exit()
</code></pre>
0
2016-09-17T18:48:46Z
[ "python", "networking", "backup", "cisco" ]
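A minimal sketch of the same fan-out pattern, assuming the four near-identical `f1`–`f4` functions can be collapsed into one worker that takes the host as a parameter. `backup_host` here is a hypothetical stand-in for the real telnet session; the thread-based `multiprocessing.dummy.Pool` is used because a telnet backup is I/O-bound:

```python
from multiprocessing.dummy import Pool  # thread pool: telnet sessions are I/O-bound


def backup_host(host):
    # Hypothetical placeholder for one backup session. The real version
    # would open telnetlib.Telnet(host), log in, send the show commands,
    # and write the captured output to a per-host file.
    return (host, "ok")


def backup_all(hosts, workers=20):
    # Run backup_host over every host with a fixed-size worker pool,
    # instead of one hand-written function per host file.
    pool = Pool(workers)
    try:
        return dict(pool.map(backup_host, hosts))
    finally:
        pool.close()
        pool.join()


if __name__ == "__main__":
    print(backup_all(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
```

With this shape, adding a fifth credential set or host file means adding data, not copying a whole function.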
Receiving TypeError: can only concatenate list (not "NoneType") to list
39,522,985
<p>For my computer science class, we are learning recursion and I'm having difficulty understanding it. One part of my assignment is to create an algorithm that returns another list that is identical to L except all elements of e are removed. However, I am currently getting a </p> <blockquote> <p>TypeError: can only concatenate list (not "NoneType") to list</p> </blockquote> <p>What does this mean and how can I fix it?</p> <pre><code>def removeAll(e, L):
    '''returns another list that is identical to L except all elements of e are removed'''
    if L==[]:
        return []
    if L[0]!=e:
        return [L[0]]+removeAll(e,L[1:])
</code></pre> <p><a href="http://i.stack.imgur.com/bWVeG.png" rel="nofollow">Code</a></p>
1
2016-09-16T02:35:50Z
39,523,063
<p>You are checking for <code>L</code> being an empty list, and you are checking whether the first element of <code>L</code> is not equal to <code>e</code> (and thus part of your result), but as soon as an element is equal to <code>e</code>, your function returns nothing, meaning that it returns a default value of <code>None</code>. Add the proper handling for that situation with an <code>else</code>:</p> <pre><code>else:
    return removeAll(e, L[1:])
</code></pre>
1
2016-09-16T02:47:07Z
[ "python", "functional-programming" ]
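Putting the accepted fix together, the complete function might look like this (a sketch; the name is snake_cased but the logic is the assignment's three-branch recursion):

```python
def remove_all(e, L):
    """Return a copy of L with every occurrence of e removed, recursively."""
    if L == []:
        return []                              # base case: empty list
    if L[0] != e:
        return [L[0]] + remove_all(e, L[1:])   # keep the head, recurse on the tail
    return remove_all(e, L[1:])                # drop the head (the missing branch)


print(remove_all(2, [1, 2, 3, 2, 4]))  # → [1, 3, 4]
```

Every code path now returns a list, so the `[L[0]] + ...` concatenation never sees `None`.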
Matplotlib: animation for audio player cursor (sliding vertical line) with blitting looks broken on WxPython
39,523,011
<p>I am building up a project that requires an audio player for very specific purposes. I'm currently using the WxPython, Matplotlib and PyAudio packages. I'm also using Matplotlib's WXAgg backend (<code>backend_wxagg</code>).</p> <p>The basic idea is very simple: the <strong>audio data will be plotted on the main window and simultaneously played through PyAudio</strong> while showing playback progress on the plot — which I intend to do with an animated vertical line (<strong>horizontally sliding cursor</strong>), pretty much the type that you see in Audacity, for instance. I've already tried the generic Matplotlib animation examples, and some others spread through the web, but they are either too slow (no blitting), or rely upon FuncAnimation (an architecture that doesn't fit in my project), or utilize the technique that I'm trying to use (which is not working so far).</p> <p><strong>Something actually moves on the screen</strong> but the overall picture is a mess... A <strong>white filled rectangle appears over the plot</strong> on my Ubuntu 16 desktop, while a <strong>black filled rectangle</strong> appears on my Mint laptop. Despite having tried so hard to work on it for days and ALMOST getting it to work, the time has come to humbly ask you for help... :/</p> <p>I'm insisting on the <code>blit()</code> method because as far as I know it (i) allows me to refresh the plotting under custom events (in this case an audio frame consumption) and (ii) has good performance (which is a concern here due to the large, variable-size dataset).</p> <p>Stripping my project down to the bare minimum, here's the code that, once taken care of, will hopefully allow me to fix my entire application (2000+ lines):</p> <pre><code># -*- coding: UTF-8 -*-
#
import wx
import gettext
import struct
import matplotlib
matplotlib.use('WX')
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.figure import Figure
import pyaudio
import numpy as np
import time

CHUNK_SIZE = 1024     # Size (in samples) of each audio_callback reading.
BYTES_PER_FRAME = 2   # Number of bytes in each audio frame.
CHANNELS = 1          # Number of channels.
SAMPLING_RATE = 11025 # Audio sampling rate.

audio_chunks = []

# A simple frame with a tab (generated by WxGlade and simplified for example purposes):
class PlayerFrame(wx.Frame):
    def __init__(self, *args, **kwds):
        kwds["style"] = wx.CAPTION | wx.CLOSE_BOX | wx.MINIMIZE_BOX | wx.MAXIMIZE | wx.MAXIMIZE_BOX | wx.SYSTEM_MENU | wx.RESIZE_BORDER | wx.CLIP_CHILDREN
        wx.Frame.__init__(self, *args, **kwds)
        self.main_notebook = wx.Notebook(self, wx.ID_ANY, style=0)
        self.__set_properties()
        self.__do_layout()
        self.initAudio()     # Initiates audio variables and event binding.
        self.initPlotting()  # Initiates plotting variables and widgets.
        self.startPlayback() # Starts audio playback.
    def __set_properties(self):
        self.SetTitle(_("Audio signal plotting and playback with cursor"))
        self.SetSize((720, 654))
    def __do_layout(self):
        sizer_main = wx.BoxSizer(wx.VERTICAL)
        sizer_main.Add(self.main_notebook, 1, wx.LEFT | wx.RIGHT | wx.BOTTOM | wx.EXPAND, 25)
        self.SetSizer(sizer_main)
        self.Layout()
    # Audio stuff initialization:
    def initAudio(self):
        # Binds the playback move event to a handler:
        self.Bind(EVT_PLAYBACK_MOVE, self.OnPlaybackMove)
        # Creates an empty audio chunk with "CHUNK_SIZE" samples of zero value ([0, 0, ..., 0, 0]):
        empty_chunk = struct.pack("&lt;h", 0)*CHUNK_SIZE
        # Initializes audio chunk array with 20 empty audio chunks:
        audio_chunks.extend([empty_chunk]*20)
        # Points playback_counter to the first audio chunk:
        global playback_counter; playback_counter = 0
    def startPlayback(self):
        # Initializes audio playback:
        global p; p = pyaudio.PyAudio()
        global audio_stream; audio_stream = p.open ( format = p.get_format_from_width(BYTES_PER_FRAME)
                                                   , channels = CHANNELS
                                                   , rate = SAMPLING_RATE
                                                   , output = True
                                                   , stream_callback = audio_callback
                                                   , frames_per_buffer = CHUNK_SIZE )
    # Plotting stuff initialization:
    def initPlotting(self):
        # Converts the raw audio chunks to a normal array:
        samples = np.fromstring(b''.join(audio_chunks), dtype=np.int16)
        # Creates plot supporting widgets:
        self.pane = wx.Panel(self.main_notebook, wx.ID_ANY)
        self.canvas = FigureCanvas(self.pane, wx.ID_ANY, Figure())
        self.figure = self.canvas.figure
        self.pane.SetMinSize((664, 355))
        sizer_15 = wx.BoxSizer(wx.HORIZONTAL)
        sizer_16 = wx.BoxSizer(wx.VERTICAL)
        sizer_10 = wx.BoxSizer(wx.HORIZONTAL)
        sizer_10.Add(self.canvas, 1, wx.EXPAND, 0)
        sizer_16.Add(sizer_10, 2, wx.BOTTOM | wx.EXPAND, 25)
        sizer_15.Add(sizer_16, 1, wx.ALL | wx.EXPAND, 25)
        self.pane.SetSizer(sizer_15)
        self.main_notebook.AddPage(self.pane, _("my_audio.wav"))
        # ================================================
        # Initializes plotting (is the problem in here???)
        # ================================================
        t = range(len(samples))
        self.axes1 = self.figure.add_subplot(111)
        self.axes1.set_xlim(0, len(samples))
        self.axes1.set_ylim(-32768, 32767)
        self.line1, = self.axes1.plot(t, samples)
        self.Layout()
        self.background = self.figure.canvas.copy_from_bbox(self.axes1.bbox)
        self.playback_line = self.axes1.axvline(color="y", animated=True)
    # For each new chunk read by the audio_callback function, we update the cursor position on the plot.
    # It's important to notice that the audio_callback function CANNOT manipulate UI's widgets on it's
    # own, because they live in different threads and Wx allows only the main thread to perform UI changes.
    def OnPlaybackMove(self, event):
        # =================================================
        # Updates the cursor (vertical line) at each event:
        # =================================================
        self.figure.canvas.restore_region(self.background)
        new_position = playback_counter*CHUNK_SIZE
        self.playback_line.set_xdata(new_position)
        self.axes1.draw_artist(self.playback_line)
        self.canvas.blit(self.axes1.bbox)

# Playback move event (for indicating that a chunk has just been played and so the cursor must be moved):
EVT_PLAYBACK_MOVE = wx.PyEventBinder(wx.NewEventType(), 0)
class PlaybackMoveEvent(wx.PyCommandEvent):
    def __init__(self, eventType=EVT_PLAYBACK_MOVE.evtType[0], id=0):
        wx.PyCommandEvent.__init__(self, eventType, id)

# Callback function for audio playback (called each time the sound card needs "frame_count" more samples):
def audio_callback(in_data, frame_count, time_info, status):
    global playback_counter
    # In case we've run out of samples:
    if playback_counter == len(audio_chunks):
        print "Playback ended."
        # Returns an empty chunk, thus ending playback:
        return ("", pyaudio.paComplete)
    else:
        # Gets the next audio chunk, increments the counter and returns the new chunk:
        new_chunk = audio_chunks[playback_counter]
        main_window.AddPendingEvent(PlaybackMoveEvent())
        playback_counter += 1
        return (new_chunk, pyaudio.paContinue)

# WxGlade default initialization instructions:
if __name__ == "__main__":
    gettext.install("app")
    app = wx.PySimpleApp(0)
    wx.InitAllImageHandlers()
    main_window = PlayerFrame(None, wx.ID_ANY, "")
    app.SetTopWindow(main_window)
    main_window.Show()
    app.MainLoop() # UI's main loop. Checks for events and stuff.
    # Final lines (if we're executing here, this means the program is closing):
    audio_stream.close()
    p.terminate()
</code></pre> <p>Thank you so much for your help and patience! Hopefully this will help not only me, but also someone else struggling against WxPython WXAgg backend blitting.</p>
0
2016-09-16T02:39:36Z
39,650,382
<p>After more research, I finally found out that the solution was to call the <code>draw()</code> method of the canvas object before effectively copying from the bbox. Thus, the middle line here is the answer (the other ones serve only as a reference to the correct spot to place the fix):</p> <pre><code>    (...)
    self.Layout()
    self.figure.canvas.draw() # THIS is the solution.
    self.background = self.figure.canvas.copy_from_bbox(self.axes1.bbox)
</code></pre> <p>But I must add here that, while this might work for some cases, any scenario where your plot gets resized will probably result in a broken image again. So, in order to fix that, bind a method of yours to the <code>"resize_event"</code> of the figure canvas, and inside your method force a redraw and a new copy:</p> <pre><code>    self.playback_line = self.axes1.axvline(color="y", animated=True)
    # New line here:
    self.figure.canvas.mpl_connect("resize_event", self.on_resize_canvas)

# New method here:
def on_resize_canvas(self, event):
    self.figure.canvas.draw()
    self.background = self.figure.canvas.copy_from_bbox(self.axes1.bbox)

(...)
</code></pre> <p>And there you go! This problem has consumed a lot of my project's time, so I make it a point to share the solution with everyone else, especially because this might be the first functional audio player template available on the Internet with WxPython, Matplotlib and PyAudio. Hope you find it useful!</p>
0
2016-09-22T23:33:07Z
[ "python", "animation", "matplotlib", "wxpython", "blit" ]
Web-Scraping with Python on Canopy
39,523,055
<p>I'm having trouble with this piece of code, in which I want to print the stock prices for the four companies listed. My issue is that, while there are no errors when I run it, the code only prints out empty brackets where the stock prices should go. This is the source of my confusion.</p> <pre><code>import urllib2
import re

symbolslist = ["aapl","spy","goog","nflx"]

i = 0
while i&lt;len(symbolslist):
    url = "http://money.cnn.com/quote/quote.html?symb=' +symbolslist[i] + '"
    htmlfile = urllib2.urlopen(url)
    htmltext = htmlfile.read()
    regex = '&lt;span stream='+symbolslist[i]+' streamformat="ToHundredth" streamfeed="SunGard"&gt;(.+?)&lt;/span&gt;'
    pattern = re.compile(regex)
    price = re.findall(pattern,htmltext)
    print "the price of", symbolslist[i], " is ", price
    i+=1
</code></pre>
2
2016-09-16T02:45:59Z
39,530,003
<p>Because you don't pass the variable:</p> <pre><code>url = "http://money.cnn.com/quote/quote.html?symb=' +symbolslist[i] + '"
      ^^^^^ a string, not the list element
</code></pre> <p>Use <em>str.format</em>:</p> <pre><code>url = "http://money.cnn.com/quote/quote.html?symb={}".format(symbolslist[i])
</code></pre> <p>Also, you can iterate directly over the list (no need for a while loop). Never <a href="http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454">parse HTML with a regex</a>; use an HTML parser like <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">bs4</a>. Your regex is also wrong: there is no <code>stream="aapl"</code> etc. What you want is the span where <code>streamformat="ToHundredth"</code> and <code>streamfeed="SunGard"</code>:</p> <pre><code>import urllib2
from bs4 import BeautifulSoup

symbolslist = ["aapl","spy","goog","nflx"]

for symbol in symbolslist:
    url = "http://money.cnn.com/quote/quote.html?symb={}".format(symbol)
    htmlfile = urllib2.urlopen(url)
    soup = BeautifulSoup(htmlfile.read())
    price = soup.find("span", streamformat="ToHundredth", streamfeed="SunGard").text
    print "the price of {} is {}".format(symbol, price)
</code></pre> <p>You can see if we run the code:</p> <pre><code>In [1]: import urllib2

In [2]: from bs4 import BeautifulSoup

In [3]: symbols_list = ["aapl", "spy", "goog", "nflx"]

In [4]: for symbol in symbols_list:
   ...:     url = "http://money.cnn.com/quote/quote.html?symb={}".format(symbol)
   ...:     htmlfile = urllib2.urlopen(url)
   ...:     soup = BeautifulSoup(htmlfile.read(), "html.parser")
   ...:     price = soup.find("span", streamformat="ToHundredth", streamfeed="SunGard").text
   ...:     print "the price of {} is {}".format(symbol, price)
   ...:
the price of aapl is 115.57
the price of spy is 215.28
the price of goog is 771.76
the price of nflx is 97.34
</code></pre> <p>We get what you want.</p>
1
2016-09-16T11:07:53Z
[ "python", "web-scraping", "canopy" ]
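The quoting bug in the question is easy to reproduce without any network access. A small sketch of why the original URL never contained the symbol, and how `str.format` fixes it:

```python
symbol = "aapl"

# The original line: the double quotes never close around the variable,
# so the literal characters ' + symbol + ' end up inside the URL string
# and the symbol value is never interpolated.
broken = "http://money.cnn.com/quote/quote.html?symb=' + symbol + '"

# str.format (or concatenation performed outside the quotes) inserts the value:
fixed = "http://money.cnn.com/quote/quote.html?symb={}".format(symbol)

print(broken)
print(fixed)
```

That is why the regex found nothing: every request went to a page for a nonsense symbol, so `re.findall` returned an empty list and the script printed empty brackets.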
Django load resources on startup
39,523,075
<p>How can I load resources from a MySQL database when Django starts up and put them in memory (Redis), to be used by all the applications?</p> <p>I have seen this: <a href="https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready</a></p> <pre><code>class MyAppConfig(AppConfig):
    def ready(self):
</code></pre> <p>But the docs mention not to use DB connections inside the <code>ready</code> function. How can I do it when my website starts?</p> <p>And can I also set a cache value inside <code>ready</code>?</p> <pre><code>from django.core.cache import cache
cache.set()
</code></pre>
2
2016-09-16T02:48:24Z
39,523,820
<p>Since you are only loading into redis rather than creating instances of models that are held in memory and shared by all apps in your website, perhaps the best way is to use a <a class='doc-link' href="http://stackoverflow.com/documentation/django/1661/management-commands/5367/creating-and-running-a-management-command#t=201609160431023464416">custom management command</a> or create a <a class='doc-link' href="http://stackoverflow.com/documentation/django/5848/django-from-the-command-line/20596/django-from-the-command-line#t=201609160434189987315">Django CLI</a>.</p> <p>Since you are using redis, do you really need to store things in memcache as well? But if you need that too, it is something that can be done from a CLI.</p>
1
2016-09-16T04:37:18Z
[ "python", "django", "apache", "redis" ]
Pandas relabel rows to recognize unique values within a groupby
39,523,085
<p>I have a <code>Pandas DataFrame</code> that describes some version testing and looks like this:</p> <pre><code>MailingName   EmailSubject   MailingID
Promo_v1s1    Hello!         A8FEFE
Promo_v1s2    Line 2         A8FEFE
Promo_v2s1    Line 2         A8FEFE
Promo_v2s2    Yo!            A8FEFE
Promo_v2S3    Hello!         A8FEFE
deal_v2s1     Line 2         bbb
deal_v2s2     Yo!            bbb
deal_v2ss     Hello          bbb
</code></pre> <p>The same mailing campaign, with different version tests, can be identified by the <code>MailingID</code> (so that would be the <code>groupby</code> term for more characteristics).</p> <p>The naming convention for <code>MailingName</code> for these is that <code>v + a number</code> indicates the email body version that was tested, and <code>s + a number</code> indicates the email subject line that was tested in a particular combo. However, the convention is not helpful in the sense that the subject line from a <code>v1s1</code> is not necessarily the same as a subject line in <code>v2s2</code> even when the mailingID is shared.</p> <p>Within each <code>MailingID</code> group, I want all email subject lines that are actually identical to get the same 'subject line version number'. So I'd like to create another column that would result in something like this:</p> <pre><code>MailingName   EmailSubject   MailingID   TrueEmailVersionNumber
Promo_v1s1    Hello!         A8FEFE      1
Promo_v1s2    Line 2         A8FEFE      2
Promo_v2s1    Line 2         A8FEFE      2
Promo_v2s2    Yo!            A8FEFE      3
Promo_v2S3    Hello!         A8FEFE      1
deal_v2s1     Line 2         bbb         1
deal_v2s2     Yo!            bbb         2
deal_v2ss     Hello          bbb         3
</code></pre> <p>Basically I want to add unique labels, per group, to a column. How can I do this with <code>Pandas</code>?</p> <p>I had an idea for getting started in a clunky way like so:</p> <pre><code>def processThis(x):
    uni = list(set(x))
    keys = {x_i:uni.index(x_i) for x_i in x}
    return keys

ab_data.groupby('mailing_id')['subject'].apply(processThis)
</code></pre> <p>But this actually did not yield back a list of dictionaries, so even my first step is a non-starter. Thanks for any advice!</p>
0
2016-09-16T02:50:10Z
39,523,246
<pre><code>In [229]: df
Out[229]:
  MailingName EmailSubject MailingID
0  Promo_v1s1       Hello!    A8FEFE
1  Promo_v1s2       Line 2    A8FEFE
2  Promo_v2s1       Line 2    A8FEFE
3  Promo_v2s2          Yo!    A8FEFE
4  Promo_v2S3       Hello!    A8FEFE
5   deal_v2s1       Line 2       bbb
6   deal_v2s2          Yo!       bbb
7   deal_v2ss        Hello       bbb

In [230]: def f(x):
     ...:     unq = list(x['EmailSubject'].unique())
     ...:     return pd.Series([unq.index(y) + 1 for y in x['EmailSubject']])
     ...:

In [231]: df['TrueEmailVersionNumber'] = df.groupby('MailingID').apply(f).values

In [232]: df
Out[232]:
  MailingName EmailSubject MailingID  TrueEmailVersionNumber
0  Promo_v1s1       Hello!    A8FEFE                       1
1  Promo_v1s2       Line 2    A8FEFE                       2
2  Promo_v2s1       Line 2    A8FEFE                       2
3  Promo_v2s2          Yo!    A8FEFE                       3
4  Promo_v2S3       Hello!    A8FEFE                       1
5   deal_v2s1       Line 2       bbb                       1
6   deal_v2s2          Yo!       bbb                       2
7   deal_v2ss        Hello       bbb                       3
</code></pre>
1
2016-09-16T03:15:09Z
[ "python", "pandas" ]
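To make explicit what the groupby is computing, the same per-group labeling can be sketched in plain Python (pandas-free): walk the rows in order and, inside each group, hand out the next 1-based label the first time a value appears.

```python
def label_within_groups(rows):
    # rows: iterable of (group_key, value) pairs.
    # Returns a parallel list of 1-based labels: equal values inside the
    # same group share a label, assigned in order of first appearance.
    seen = {}
    labels = []
    for key, value in rows:
        group = seen.setdefault(key, {})
        labels.append(group.setdefault(value, len(group) + 1))
    return labels


rows = [("A8FEFE", "Hello!"), ("A8FEFE", "Line 2"), ("A8FEFE", "Line 2"),
        ("A8FEFE", "Yo!"), ("A8FEFE", "Hello!"),
        ("bbb", "Line 2"), ("bbb", "Yo!"), ("bbb", "Hello")]
print(label_within_groups(rows))  # → [1, 2, 2, 3, 1, 1, 2, 3]
```

Note that `set`, as in the question's `processThis`, would not work here because it discards order; the dict-of-first-appearance is what keeps the labels deterministic.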
Python for Windows hang while calling a function of MinGW-w64 compiled library
39,523,088
<p>I'm using ctypes to call a function of a MinGW-w64 compiled library.</p> <p>C code:</p> <pre><code>#include &lt;stdio.h&gt;

int hello() {
    printf("Hello world!\n");
    return 233;
}
</code></pre> <p>Python code:</p> <pre><code>from ctypes import *

lib = CDLL("a.dll")
hello = lib.hello
hello.restype = c_int
hello()
</code></pre> <p>Compile the C code with gcc in MinGW-w64:</p> <pre><code>gcc tdll.c -shared -o a.dll
</code></pre> <p>Then run the Python code in Python for Windows 3.5.2; Python hangs on <strong>hello()</strong> with 100% CPU usage.<br> I then tried running the Python code in Python for MinGW 3.4.3 (installed from the msys2 repo), and there it works without a problem.</p> <p>So what's wrong with my code? How can I work around it?</p>
2
2016-09-16T02:50:42Z
39,770,350
<p>Use 'x86_64-w64-mingw32-gcc' or 'i686-w64-mingw32-gcc' instead of 'gcc' in msys!</p> <p>The 'gcc' command calls x86_64-pc-msys-gcc.</p>
0
2016-09-29T12:27:57Z
[ "python", "c", "mingw", "ctypes", "msys" ]
What code can I use if I have ten numbers and I want to produce 1000 different randomly shuffled orders of the the ten numbers
39,523,112
<p>I can use this to find all of the possible options; however, I want a random sample of 1000 from this set:</p> <pre><code>items = range(1,11)
from itertools import permutations
for p in permutations(items):
    print(p)
</code></pre>
0
2016-09-16T02:54:07Z
39,523,500
<p>Here is a suggested solution. Based on your requirements, the list of 10 numbers is already chosen, and you just want a list that will have those 10 numbers shuffled in 1000 unique ways. So below, I have a list of 10 numbers. random.sample(num_list, 10) will select those 10 numbers at random and create a new list. Since the list has only 10 numbers, it selects all 10 in random order and creates the new list. It checks the overall answer list to see if that order has been selected before. If it hasn't, it adds it to the list. It does this until 1000 unique orders have been found:</p> <pre><code>import random

num_list = [6, 9, 1000, 53, 321, 8, -5, 714, 0, 2120]
answer_list = []

while len(answer_list) != 1000:
    new_sample = random.sample(num_list, 10)
    if new_sample not in answer_list:
        answer_list.append(new_sample)
</code></pre> <p>So after running the script, here are the last 5 items and the first 5 items on the list:</p> <pre><code>==== RESTART: C:/Users/Joe/Desktop/scripts/Stack_overflow/ransom.py ====
&gt;&gt;&gt; answer_list[-5:]
[[0, 9, 8, 53, 321, 1000, -5, 6, 2120, 714], [714, 0, 2120, 6, -5, 53, 8, 321, 9, 1000], [6, -5, 53, 321, 0, 1000, 2120, 8, 9, 714], [321, 53, 9, 8, 2120, 714, 0, 1000, -5, 6], [1000, 53, 2120, 6, 8, 9, -5, 714, 321, 0]]
&gt;&gt;&gt; answer_list[:5]
[[321, 9, 714, -5, 53, 6, 1000, 8, 0, 2120], [0, 6, 2120, -5, 714, 9, 1000, 8, 53, 321], [1000, 8, 9, 6, 53, 321, 2120, 714, 0, -5], [9, 0, 321, 6, 714, 53, 1000, 2120, -5, 8], [6, 1000, 321, 0, -5, 2120, 8, 714, 9, 53]]
&gt;&gt;&gt;
</code></pre>
0
2016-09-16T03:53:40Z
[ "python", "random", "permutation", "shuffle", "sample" ]
What code can I use if I have ten numbers and I want to produce 1000 different randomly shuffled orders of the the ten numbers
39,523,112
<p>I can use this to find all of the possible options, however, I want a random sample of 1000 from this set:</p> <pre><code>items = range(1,11) //from itertools import permutations //for p in permutations(items): print(p) </code></pre>
0
2016-09-16T02:54:07Z
39,535,529
<p>Since <code>set</code> guarantees the uniqueness of its elements, you can generate permutations via <code>random.sample()</code> and attempt adding the result until the size of the set gets up to the desired quantity:</p> <pre><code>import random

a = range(1,11)
solutions = set()

while len(solutions) &lt; 1000:
    solutions.add(tuple(random.sample(a, 10)))
</code></pre> <p>At this point <code>solutions</code> contains a set of tuples which are unique permutations. A variation on the theme uses shuffling rather than sampling:</p> <pre><code>import random

a = list(range(1,11))
solutions = set()

while len(solutions) &lt; 1000:
    random.shuffle(a)
    solutions.add(tuple(a))
</code></pre> <p>If you want a list of lists rather than a set of tuples, append the following after the <code>while</code> loop:</p> <pre><code>lols = [list(x) for x in solutions]
</code></pre> <p>Given that you want 1000 out of the 10! (which is 3628800) permutations, the risk of collisions is close to zero. The expected number of iterations of the loop is 1000 plus a small fraction, less than 1001.</p>
0
2016-09-16T15:49:29Z
[ "python", "random", "permutation", "shuffle", "sample" ]
What code can I use if I have ten numbers and I want to produce 1000 different randomly shuffled orders of the the ten numbers
39,523,112
<p>I can use this to find all of the possible options, however, I want a random sample of 1000 from this set:</p> <pre><code>items = range(1,11) //from itertools import permutations //for p in permutations(items): print(p) </code></pre>
0
2016-09-16T02:54:07Z
39,536,994
<p>Unfortunately, Python's permutation module only provides a way to iterate through the 3-million-plus permutations, and not a way to pick one arbitrarily by order. You'll need something like this:</p> <pre><code>def kthperm(S, k):
    P = []
    while S != []:
        f = math.factorial(len(S)-1)
        i = int(math.floor(k/f))
        x = S[i]
        k = k%f
        P.append(x)
        S = S[:i] + S[i+1:]
    return P
</code></pre> <p>Now you can pick a random number from 0 to 10!-1, compute that permutation directly, and add it to your list, as many times as needed:</p> <pre><code>perms = []
for i in range(1000):
    perms.append(kthperm(list(range(1,11)), random.randint(0, math.factorial(10)-1)))
</code></pre>
0
2016-09-16T17:20:15Z
[ "python", "random", "permutation", "shuffle", "sample" ]
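The `kthperm` idea above is a factorial-number-system decode: each quotient of `k` by a factorial picks the next element. A compact sketch of the same algorithm, checked against `itertools.permutations` (which, for sorted input, yields permutations in exactly this lexicographic order):

```python
import math
from itertools import permutations


def kth_perm(seq, k):
    # Decode k (0-based) in the factorial number system to obtain the
    # k-th lexicographic permutation of seq.
    pool = list(seq)
    out = []
    while pool:
        f = math.factorial(len(pool) - 1)
        i, k = divmod(k, f)  # which remaining element leads this block of f perms
        out.append(pool.pop(i))
    return out


print(kth_perm([1, 2, 3], 5))  # → [3, 2, 1], the last of the 3! = 6 permutations
```

Note that sampling `k` values with `random.randint` can repeat; wrapping the results in a set of tuples (as in the other answer) deduplicates them if strict uniqueness is required.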
How to use python split?
39,523,115
<p>I have some data like this:</p> <p><img src="http://i.stack.imgur.com/6Jgf7.jpg" alt="enter image description here"></p> <p>I use the split function to process the data into a list. I want to do some data mining on this training set, but I don't know how to separate the data into features like this in Python:</p> <p><img src="http://i.stack.imgur.com/GDBD2.jpg" alt="enter image description here"></p> <p>By the way, some of the data are floats. First, I want to fill all the data into a list, with code like this:</p> <pre><code>key_zi = []
for i in range(len(train_set['zi_id'])):
    key_zi = key_zi + train_set['zi_id'][i].split('/')
</code></pre> <p>but this raises:</p> <blockquote> <p>AttributeError: 'float' object has no attribute 'split'.</p> </blockquote> <p>Could you please help me?</p>
-1
2016-09-16T02:54:59Z
39,545,695
<p>You said that the documentation says</p> <pre><code>str.split(str="", num=string.count(str))
</code></pre> <p>What you're missing is that the first instance of <code>str</code> in this definition is the delimiter for splitting.</p> <p>Your error is that in this:</p> <pre><code>train_set['zi_id'][i].split('/')
</code></pre> <p>you are trying to call the <code>split</code> method of <code>train_set['zi_id'][i]</code>. But that thing is a float, not a string, so it has no <code>split</code> method.</p>
0
2016-09-17T10:23:29Z
[ "python", "machine-learning" ]
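A small sketch of one way to avoid that AttributeError (the column and helper names here are hypothetical): guard the cell with an `isinstance` check so that float cells, such as NaN values pandas produces for empty CSV fields, pass through untouched instead of being split.

```python
def split_cell(value, sep='/'):
    # Only strings have .split; pass any other cell type (e.g. a float
    # coming from an empty/NaN CSV cell) through unchanged.
    if isinstance(value, str):
        return value.split(sep)
    return [value]


cells = ["a/b/c", 3.5, "d"]
keys = []
for cell in cells:
    keys.extend(split_cell(cell))
print(keys)  # → ['a', 'b', 'c', 3.5, 'd']
```

Whether to keep, drop, or convert the float cells depends on what the feature extraction needs; the point is to decide explicitly rather than let `.split` crash on them.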
How much overhead does python numpy tolist() add?
39,523,210
<p>I am using a python program that uses numpy array as standard data type for arrays. For the heavy computation I pass the arrays to a C++ library. In order to do so, I use <a href="https://github.com/pybind/pybind11" rel="nofollow">pybind</a>. However, I am required to use python <code>list</code>. I do the conversion from <code>numpy</code> array and <code>list</code> via:</p> <pre><code>NativeSolver.vector_add(array1.tolist(), array2.tolist(), ...) </code></pre> <p>How much overhead does this conversion generate? I hope it doesn't create a whole new copy. Numpy reference says:</p> <blockquote> <p><strong>ndarray.tolist()</strong></p> <p>Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.</p> </blockquote>
0
2016-09-16T03:09:37Z
39,523,454
<p>A lot. For simple built-in types, you can use <code>sys.getsizeof</code> on an object to determine the memory overhead associated with that object (for containers, this does not include the values stored in it, only the pointers used to store them).</p> <p>So for example, a <code>list</code> of 100 smallish <code>int</code>s (but greater than 256 to avoid the small <code>int</code> cache) is (on my 3.5.1 Windows x64 install):</p> <pre><code>&gt;&gt;&gt; sys.getsizeof([0] * 100) + sys.getsizeof(0) * 100
3264
</code></pre> <p>or about 3 KB of memory required. If those same values were stored in a <code>numpy</code> <code>array</code> of <code>int32</code>s, with no Python objects per number, and no per-object pointers, the size would drop to roughly 100 * 4 (plus another few dozen bytes, for the <code>array</code> object overhead itself), somewhere under 500 bytes. The incremental cost for each additional small <code>int</code> is 24 bytes for the object (though it's free if it's in the small int cache for values from -5 to 256 IIRC), and 8 bytes for the storage in the <code>list</code>, 32 bytes total, vs. 4 for the C level type, roughly 8x the storage requirements (and you're still storing the original object too).</p> <p>If you have enough memory to deal with it, so be it. But otherwise, you might try looking at a wrapping that lets you pass in buffer-protocol-supporting objects (<code>numpy.array</code>, <code>array.array</code> on Py3, <code>ctypes</code> arrays populated via memoryview slice assignment, etc.) so conversion to Python level types isn't needed.</p>
3
2016-09-16T03:47:07Z
[ "python", "c++", "arrays", "numpy", "wrapper" ]
How much overhead does python numpy tolist() add?
39,523,210
<p>I am using a python program that uses numpy array as standard data type for arrays. For the heavy computation I pass the arrays to a C++ library. In order to do so, I use <a href="https://github.com/pybind/pybind11" rel="nofollow">pybind</a>. However, I am required to use python <code>list</code>. I do the conversion from <code>numpy</code> array and <code>list</code> via:</p> <pre><code>NativeSolver.vector_add(array1.tolist(), array2.tolist(), ...) </code></pre> <p>How much overhead does this conversion generate? I hope it doesn't create a whole new copy. Numpy reference says:</p> <blockquote> <p><strong>ndarray.tolist()</strong></p> <p>Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.</p> </blockquote>
0
2016-09-16T03:09:37Z
39,523,462
<p>Yes, it will be a new copy. The data layout for an array is very different from that of a list.</p> <p>An array has attributes like shape and strides, and a 1d data buffer that contains the elements - just a contiguous set of bytes. It's the other attributes and code that treat them as floats, ints, strings, 1d, 2d etc.</p> <p>A list is a buffer of pointers, with each pointer pointing to an object elsewhere in memory. It may point to a number, a string, or another list. It is not going to point to the array's data buffer or elements in it.</p> <p>There are ways of interfacing numpy arrays with compiled code and C arrays that make use of the array data buffer. <code>cython</code> is a common one. There is also a whole documentation section on the C API for numpy. I don't know anything about <code>pybind</code>. If it requires a list interface it may not be the best choice.</p> <p>When I've done <code>timeit</code> tests with <code>tolist()</code> it hasn't appeared to be that expensive.</p> <p>=======================</p> <p>But looking at the <code>pybind11</code> github I find a number of references to <code>numpy</code>, and this</p> <p><a href="http://pybind11.readthedocs.io/en/latest/advanced.html#numpy-support" rel="nofollow">http://pybind11.readthedocs.io/en/latest/advanced.html#numpy-support</a></p> <p>documentation page. It supports the buffer protocol, and numpy arrays. So you shouldn't have to go through the <code>tolist</code> step.</p> <pre><code>#include &lt;pybind11/numpy.h&gt; void f(py::array_t&lt;double&gt; array); </code></pre>
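The copy semantics are easy to verify: mutating the list never touches the array, and the elements come back as native Python ints rather than numpy scalars:

```python
import numpy as np

arr = np.arange(5)
as_list = arr.tolist()      # full copy into a new Python list

as_list[0] = 999            # mutate the copy...
print(arr[0], as_list[0])   # ...the original array is untouched

print(type(as_list[1]))     # elements are plain Python ints, not np.int64
```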
3
2016-09-16T03:47:56Z
[ "python", "c++", "arrays", "numpy", "wrapper" ]
Django - ImportError: No module named apps
39,523,214
<p>I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error:</p> <blockquote> <p>ImportError: No module named apps</p> </blockquote> <pre> Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) </pre> <p>How can I resolve this error?</p>
0
2016-09-16T03:10:14Z
39,531,859
<p>You need to install the required packages in your virtualenv to run a Django project. First and foremost, create a virtualenv for your project.</p> <pre><code>virtualenv env #For python 2.7 virtualenv -p python3 env #For python 3.4 </code></pre> <p>Activate the env to install your requirements</p> <pre><code>source env/bin/activate </code></pre> <p>By using pip you can install your required packages</p> <pre><code>pip install django </code></pre> <p>And then start your Django project</p>
1
2016-09-16T12:44:22Z
[ "python", "django", "python-2.7", "django-models" ]
Django - ImportError: No module named apps
39,523,214
<p>I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error:</p> <blockquote> <p>ImportError: No module named apps</p> </blockquote> <pre> Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) </pre> <p>How can I resolve this error?</p>
0
2016-09-16T03:10:14Z
39,537,105
<p>There is an error in the tutorial.</p> <p>It instructs you to add <code>polls.apps.PollsConfig</code> in the <code>INSTALLED_APPS</code> section of the <code>settings.py</code> file. I changed it from <code>polls.apps.PollsConfig</code> to simply <code>polls</code> and that did the trick. I was able to successfully make migrations.</p> <p>I hope this helps other people who face similar problems.</p>
0
2016-09-16T17:27:12Z
[ "python", "django", "python-2.7", "django-models" ]
Django - ImportError: No module named apps
39,523,214
<p>I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error:</p> <blockquote> <p>ImportError: No module named apps</p> </blockquote> <pre> Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) </pre> <p>How can I resolve this error?</p>
0
2016-09-16T03:10:14Z
39,537,343
<p>Your problem is that your Django version does not match the version of the tutorial.</p> <p>In Django 1.9+, the startapp command automatically creates an app config class, so <a href="https://docs.djangoproject.com/en/1.9/intro/tutorial02/#activating-models" rel="nofollow">the tutorial</a> asks you to add <code>polls.apps.PollsConfig</code> to <code>INSTALLED_APPS</code>.</p> <p>For Django 1.8 and earlier, <a href="https://docs.djangoproject.com/en/1.8/intro/tutorial01/#activating-models" rel="nofollow">the tutorial</a> asks you to add <code>polls</code> to <code>INSTALLED_APPS</code>. If you add <code>polls.apps.PollsConfig</code> instead, you will get an import error, unless you create the <code>PollsConfig</code> manually.</p>
0
2016-09-16T17:43:37Z
[ "python", "django", "python-2.7", "django-models" ]
Update AWS Lambda API Key Usage Plans with boto3
39,523,225
<p>I have an API Key associated with a particular usage plan. How do I use <code>boto3</code> to update the usage plan to another usage plan?</p> <p>I've tried the following methods:</p> <p><a href="http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.update_usage_plan" rel="nofollow">update_api_key()</a> // add, remove and replace operations do not have usage plan path <a href="http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.update_usage_plan" rel="nofollow">update_usage_plan()</a> // add, remove and replace operations do not have usage plan path</p> <p>I thought about remove the key from the plan then re-adding but there are no usage plan paths.</p>
0
2016-09-16T03:11:50Z
39,538,899
<p>You are looking for <a href="http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.create_usage_plan_key" rel="nofollow">create_usage_plan_key</a></p> <p>i.e.</p> <pre><code>response = client.create_usage_plan_key( usagePlanId='12345', keyId='[API_KEY_ID]', keyType='API_KEY' ) </code></pre>
0
2016-09-16T19:32:03Z
[ "python", "amazon-web-services", "aws-api-gateway", "boto3" ]
BAD REQUEST 400, when trying to access JSON
39,523,306
<p>I've been working on a bot for <code>Discord</code> in <code>discord.py</code> (not related) and I'm trying to pull from a server so I can interpret it, however, I'm getting a</p> <blockquote> <p>BAD REQUEST 400</p> </blockquote> <p>when trying to actually pull from the server. I've tried to add a header to specify it as a JSON but it won't work.</p> <pre><code>await bot.say("Fetching data") headers = {"Content-type": "application/json"} url = 'http://jisho.org/api/v1/search/words?keyword=boushi' response = requests.get(url, headers=headers).json() await bot.say(response) </code></pre> <p>The <code>bot.say</code> is just repeating back to me the output.</p>
0
2016-09-16T03:23:16Z
39,523,536
<p>I wouldn't use .json() at the end of the request, in case you want to check the status_code for a bad request first.</p> <pre><code>response = requests.get(url, headers=headers) if response.status_code == 200: print response.content </code></pre> <p>And if you want to do something with the dict you can use json.loads()</p> <pre><code>foo = json.loads(response.content) </code></pre>
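For example, with a stand-in payload (a hypothetical structure; the real jisho.org response will differ), the check-then-parse pattern looks like this:

```python
import json

class FakeResponse:            # stand-in for requests.get(...), since we
    status_code = 200          # can't hit the live API here
    content = '{"data": [{"slug": "boushi"}]}'

response = FakeResponse()
if response.status_code == 200:
    foo = json.loads(response.content)  # parse only after the status check
    print(foo["data"][0]["slug"])
```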
2
2016-09-16T03:58:07Z
[ "python", "json", "python-requests" ]
User defined aggregation function for spark dataframe (python)
39,523,317
<p>I have the below spark dataframe, where id is int and attributes is a list of string</p> <pre><code>id | attributes 1 | ['a','c', 'd'] 2 | ['a', 'e'] 1 | ['e', 'f'] 1 | ['g'] 3 | ['a', 'b'] 2 | ['e', 'g'] </code></pre> <p>I need to perform an aggregation, where the attributes lists for each id are concatenated. The results of the aggregation are:</p> <pre><code>id | concat(attributes) 1 | ['a', 'c', 'd', 'e', 'f', 'g'] 2 | ['a', 'e', 'e', 'g'] 3 | ['a', 'b'] </code></pre> <p>Is there a way to achieve this using python?</p> <p>Thanks.</p>
0
2016-09-16T03:24:56Z
39,523,836
<p>One way is to create a new frame, using reduceByKey:</p> <pre><code>&gt;&gt;&gt; df.show() +---+----------+ | id|attributes| +---+----------+ | 1| [a, c, d]| | 2| [a, e]| | 1| [e, f]| | 1| [g]| | 3| [a, b]| | 2| [e, g]| +---+----------+ &gt;&gt;&gt; custom_list = df.rdd.reduceByKey(lambda x,y:x+y).collect() &gt;&gt;&gt; new_df = sqlCtx.createDataFrame(custom_list, ["id", "attributes"]) &gt;&gt;&gt; new_df.show() +---+------------------+ | id| attributes| +---+------------------+ | 1|[a, c, d, e, f, g]| | 2| [a, e, e, g]| | 3| [a, b]| +---+------------------+ </code></pre> <blockquote> <p>reduceByKey(func, [numTasks]):</p> <p>When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V,V) => V. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.</p> </blockquote>
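The per-key merge that reduceByKey performs is just repeated list concatenation. In plain Python, shown here only to illustrate the semantics, the same aggregation is:

```python
from collections import defaultdict

rows = [(1, ['a', 'c', 'd']), (2, ['a', 'e']), (1, ['e', 'f']),
        (1, ['g']), (3, ['a', 'b']), (2, ['e', 'g'])]

merged = defaultdict(list)
for key, attrs in rows:
    merged[key] += attrs     # the same x + y merge reduceByKey applies per key

print(dict(merged))
```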
1
2016-09-16T04:39:15Z
[ "python", "apache-spark", "spark-dataframe", "udf" ]
Flask Security- check what Roles a User has
39,523,394
<p>I was looking at the Flask-Security API and I don't see any function that returns the list of roles a specific User has. Is there any way to return a list of Roles a user has?</p>
-1
2016-09-16T03:35:22Z
39,524,988
<p>If you look at how the <a href="https://github.com/mattupstate/flask-security/blob/develop/flask_security/core.py#L318" rel="nofollow"><code>has_role(...)</code> method</a> has been defined, it simply iterates through <code>self.roles</code>. So the <code>roles</code> attribute in the user is a list of <code>Role</code> objects.</p> <p>You need to define your <code>User</code> and <code>Role</code> models as in the example <a href="https://pythonhosted.org/Flask-Security/quickstart.html" rel="nofollow">here</a>, so that the <code>User</code> model has a many-to-many relationship to <code>Role</code> model set in the <code>User.roles</code> attribute.</p> <pre class="lang-py prettyprint-override"><code># This one is a list of Role objects roles = user.roles # This one is a list of Role names role_names = (role.name for role in user.roles) </code></pre>
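A minimal stand-in for those models (plain classes here rather than the SQLAlchemy-backed ones a real app would define) shows the shape of the data:

```python
class Role:
    def __init__(self, name):
        self.name = name

class User:
    def __init__(self, roles):
        self.roles = roles   # list of Role objects, as Flask-Security stores it

user = User([Role('admin'), Role('editor')])

roles = user.roles                               # Role objects
role_names = [role.name for role in user.roles]  # just the names
print(role_names)
```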
1
2016-09-16T06:25:35Z
[ "python", "flask", "flask-sqlalchemy", "flask-security" ]
Receiving a KeyError: '0_0' for my python program
39,523,479
<p>I was working on this question (it's from Leetcode):</p> <blockquote> <p>"Given an integer matrix, find the length of the longest increasing path.</p> <p>From each cell, you can either move to four directions: left, right, up or down. You may NOT move diagonally or move outside of the boundary (i.e. wrap-around is not allowed).</p> <pre><code>Example 1: nums = [ [9,9,4], [6,6,8], [2,1,1] ] Return 4. </code></pre> </blockquote> <p>I keep running into a KeyError specifically:</p> <pre><code>Traceback (most recent call last): File "/Users/Desktop/longest_increasing_path_in_a_matrix.py", line 41, in &lt;module&gt; print(test.longestIncreasingPath(matrix)) File "/Users/Desktop/longest_increasing_path_in_a_matrix.py", line 31, in longestIncreasingPath traverse(x, y, []) File "/Users/Desktop/longest_increasing_path_in_a_matrix.py", line 5, in traverse if traverse.traveled[str(x_coor) + "_" + str(y_coor)]: KeyError: '0_0' </code></pre> <p>and I'm not sure exactly what I'm doing wrong. I understand it has to do with my dictionary.
Please let me know if there is anything else I need to post:</p> <pre><code>class Solution(object): def longestIncreasingPath(self, matrix): def traverse(x_coor, y_coor, build): key = str(x_coor) + "_" + str(y_coor) if key in traverse.traveled and traverse.traveled[key]: if traveled[str(x_coor) + "_" + str(y_coor)]: return elif x_coor &lt; 0 or y_coor &lt; 0 or x_coor &gt;= len(matrix[0]) or y_coor &gt;= len(matrix)-1: return elif len(build) &gt; 0 and matrix[x_coor][y_coor] &lt;= build[-1]: if len(build) &gt; traverse.count: traverse.count = len(build) return traveled[str(x_coor) + "_" + str(y_coor)] = true build.append(matrix[y_coor][x_coor]) traverse(x_coor, y_coor-1, build) traverse(x_coor, y_coor+1, build) traverse(x_coor+1, y_coor, build) traverse(x_coor-1, y_coor, build) build.pop() del traveled[str(x_coor) + "_" + str(y_coor)] traverse.count = 0 traverse.traveled = {} for y in range(0, len(matrix)-1, 1): for x in range(0, len(matrix[0]), 1): traverse(x, y, []) return(traverse.count) matrix = [ [9,9,4], [6,6,8], [2,1,1] ] test = Solution() print(test.longestIncreasingPath(matrix)) </code></pre>
0
2016-09-16T03:49:29Z
39,523,566
<p>You are trying to access a key (<code>0_0</code>) that doesn't exist yet in the dictionary.</p> <p>You have to check beforehand if it exists, I'd suggest the <code>in</code> keyword:</p> <pre><code>key = str(x_coor) + "_" + str(y_coor) if key in traverse.traveled and traverse.traveled[key]: # ... </code></pre> <p>See also: <a class='doc-link' href="https://stackoverflow.com/documentation/python/396/dictionary/1313/avoiding-keyerror-exceptions#t=201609160403446807907">Avoiding KeyError Exceptions</a></p>
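A small self-contained illustration of the pattern (dict.get with a default is an equally common alternative):

```python
traveled = {}
key = "0_0"

# Membership test first, so a missing key never raises KeyError
if key in traveled and traveled[key]:
    print("already visited")
else:
    traveled[key] = True   # safe to mark it now

print(traveled.get("9_9", False))  # .get returns the default instead of raising
```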
1
2016-09-16T04:02:23Z
[ "python", "algorithm", "python-2.7", "data-structures" ]
jinja2 iterate through list of tuples
39,523,485
<p>I have a list of tuples called <em>items</em>:</p> <pre><code>[ (1,2), (3,4), (5,6), (7,8) ] </code></pre> <p>I thought I could iterate through it as follows, but it's not working:</p> <pre><code># Code output = template.render( items ) # Template {% for item in items %} {{ item[0] }}; {{ item[1] }}; {% endfor %} </code></pre> <p>Any suggestions? </p>
0
2016-09-16T03:50:23Z
39,523,551
<p>From the <a href="http://jinja.pocoo.org/docs/dev/api/#jinja2.Template.render" rel="nofollow">documentation</a>:</p> <blockquote> <p>render([context])</p> <p>This method accepts the same arguments as the dict constructor: A dict, a dict subclass or some keyword arguments. If no arguments are given the context will be empty. </p> </blockquote> <pre><code>from jinja2 import Environment TEMPLATE = """ {% for item in items %} {{ item[0] }}; {{ item[1] }}; {% endfor %} """ template = Environment().from_string(TEMPLATE) items = [(1,2), (3,4), (5,6), (7,8)] print template.render(items=items) </code></pre> <p>While parsing the template, jinja2 will look for a key called 'items' but in your case, there is none, you have to explicitly specify it.</p>
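Binding the list by keyword (or passing a dict) is what makes the template's items name resolve inside the loop:

```python
from jinja2 import Environment

template = Environment().from_string(
    "{% for item in items %}{{ item[0] }};{{ item[1] }}; {% endfor %}")

items = [(1, 2), (3, 4), (5, 6), (7, 8)]

print(template.render(items=items))        # keyword argument...
print(template.render({"items": items}))   # ...or an explicit dict
```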
2
2016-09-16T04:00:32Z
[ "python", "jinja2" ]
Nginx will not load Flask static files
39,523,533
<p>New flask/nginx/uWSGI server. Application loads and functions as expected but non of the html/css templates will load. Using developer tools in Chrome shows the pages are not being returned. </p> <p>"tail -f /var/log/nginx/error.log" says the file or directory does not exist. But it appears to be the correct path. Full path is "/opt/netmgmt/app/static/css/main.css"</p> <pre><code>2016/09/15 22:04:40 [error] 4939#0: *1 open() "/app/static/css/main.css" failed (2: No such file or directory), client: 10.0.0.69, server: , request: "GET /static/css/main.css HTTP/1.1", host: "nms-srv2", referrer: "http://nms-srv2/" </code></pre> <p>Flask debugging shows no issue at all. Ive been banging my head against this one for a while and found multiple other threads with the same issue but nothing is solving this for me. </p> <p>Thanks for the help!</p> <p>/etc/nginx/nginx.conf</p> <pre><code>worker_processes 1; events { worker_connections 1024; } http { sendfile on; gzip on; gzip_http_version 1.0; gzip_proxied any; gzip_min_length 500; gzip_disable "MSIE [1-6]\."; gzip_types text/plain text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml; # Configuration containing list of application servers upstream uwsgicluster { server 127.0.0.1:8080; # server 127.0.0.1:8081; # .. # . } # Configuration for Nginx server { # Running port listen 80; # Settings to by-pass for static files location ^~ /static/ { # Example: # root /full/path/to/application/static/file/dir; root /app/; } # Serve a static file (ex. favico) outside static dir. 
location = /favico.ico { root /app/favico.ico; } # Proxying connections to application servers location / { include uwsgi_params; uwsgi_pass uwsgicluster; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $server_name; } </code></pre> <p>App file (Ive tried changing the static location multiple ways and using alias instead of root with no luck.) </p> <pre><code>@app.route('/') def home(): return render_template('home.html') </code></pre> <p>Layout.html</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;NetMgmt&lt;/title&gt; &lt;strong&gt;&lt;link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}"&gt;&lt;/strong&gt; &lt;/head&gt; &lt;body&gt; &lt;header&gt; &lt;div class="container"&gt; &lt;h1 class="logo"&gt;NetMgmt (beta)&lt;/h1&gt; &lt;strong&gt;&lt;nav&gt; &lt;ul class="menu"&gt; &lt;li&gt;&lt;a href="{{ url_for('home') }}"&gt;Home&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/nav&gt;&lt;/strong&gt; &lt;/div&gt; &lt;/header&gt; &lt;div class="container"&gt; {% block content %} {% endblock %} &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
0
2016-09-16T03:57:25Z
39,527,071
<p>In your nginx configuration, use an absolute path. With <code>root</code>, nginx appends the full request URI (including <code>/static/</code>) to the path, so point it at the directory that <em>contains</em> <code>static</code> rather than at <code>static</code> itself:</p> <pre><code>location ^~ /static/ { root /full/path/to/app; } </code></pre>
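Concretely, assuming the layout from the question (files under /opt/netmgmt/app/static/), either form below should resolve /static/css/main.css. Note that root has the request URI appended to it, while alias replaces the matched prefix:

```nginx
location ^~ /static/ {
    # root: nginx appends the full URI, so /static/ must not be repeated here
    root /opt/netmgmt/app;
    # request /static/css/main.css -> /opt/netmgmt/app/static/css/main.css
}

# or, equivalently:
location ^~ /static/ {
    # alias: the matched /static/ prefix is replaced by this path
    alias /opt/netmgmt/app/static/;
}
```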
0
2016-09-16T08:37:09Z
[ "python", "nginx", "flask", "uwsgi" ]
Nested for-loop and appending to empty objects
39,523,665
<p>I am providing values to a website filter in order to generate different html which I parse. I want to save each page source to a different Python object in order to distinguish the data. I have a list of empty objects which I will append to as I parse each page source, so that each page source's data ends up in its own Python object within the list.</p> <p>The challenge is how to append the td elements from a particular html source to the particular empty object in the list. I need to store the html source at each iteration in a separate object, which is itself found in a list.</p> <p>I will simplify my example:</p> <pre><code>years = ['2015', '2016'] weeks = ['1', '2'] store = [[], [], [], []] </code></pre> <p>This gives me 4 sets of html source that I need to capture:</p> <pre><code>for y in years: for w in weeks: </code></pre> <p>I will use y and w in webdriver.select to provide values for the web page filter. I will then use BS to copy the page source for each iteration:</p> <pre><code>html = browser.page_source soup = BeautifulSoup(html, "lxml") </code></pre> <p>And then iterate through the particular page source to extract td elements:</p> <pre><code>counter = 0 for el in soup.find_all('td'): </code></pre> <p>to provide an index into the store list in order to append td elements to separate empty objects</p> <pre><code>for el in soup.find_all('td'): store[counter].append(el.get_text()) counter = counter + 1 </code></pre> <p>Strip the element of html characters, and increment the counter to move to the next object in the store list.</p> <p>But the result is that all the td elements get appended to the first object in the list instead of each html source having its own object. What am I missing? </p> <p>Would it be better to somehow use the map function?</p>
0
2016-09-16T04:17:53Z
39,524,368
<p>Your statement</p> <pre><code>counter=counter+1 </code></pre> <p>is not within the for loop.</p> <p>You need to indent it at the same level as the previous line, so that counter is incremented each time around the loop</p>
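One way to arrange the loops so each page's cells land in their own bucket, using hypothetical stand-in data in place of the scraped pages:

```python
store = [[], [], [], []]
pages = [["a", "b"], ["c"], ["d", "e"], ["f"]]   # stand-ins for each page's td texts

counter = 0
for cells in pages:          # one pass per fetched page
    for text in cells:
        store[counter].append(text)
    counter = counter + 1    # advance once per page, inside the outer loop
print(store)
```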
0
2016-09-16T05:37:33Z
[ "python", "nested-loops" ]
Missing class in beautifulsoup
39,523,680
<p>I am attempting to grab a div on the MTA info page. When I grab the html and parse it with BeautifulSoup it seems to be missing some data.</p> <p>Here is my code so far</p> <pre><code>from bs4 import BeautifulSoup import urllib # access the web # SUBWAY STATUS PROJECT userURL = "http://www.mta.info" # MTA SITE htmlfile = urllib.urlopen(userURL) #creates html file htmldoc = htmlfile.read() #creates html text soup = BeautifulSoup(htmldoc, 'html.parser') subChart = soup.find( id = 'subwayDiv') print subChart </code></pre> <p>I am using print just to be sure that I am getting all of the data. I see that I am missing some information that I am trying to grab. If I look to the page myself I can see that I am missing a div with the class that shows the subway status.</p> <p>I am very new to programming so please mind my ignorance</p>
0
2016-09-16T04:19:35Z
39,524,231
<p>In the subchart variable look for the elements with the class subwayCategory and store the values of the attribute id. For ex: from this part of the data</p> <pre><code>&lt;div style="float: left; width: 220px; border-bottom: 1px solid #7B7B98; padding: 4px 0;"&gt; &lt;div class="span-11"&gt;&lt;img alt="1 2 3 Subway" class="subwayIcon_123" src="http://www.mta.info/sites/all/modules/custom/servicestatus/images/img_trans.gif"/&gt;&lt;/div&gt; &lt;div class="subwayCategory" id="123" style="margin-top: 4px;"&gt;&lt;/div&gt; </code></pre> <p></p> <p>The id value of the div with the class subwayCategory is 123. Now make a request to <code>http://www.mta.info/status/subway/{ID}</code></p> <p>Replace the term <code>{ID}</code> with the id you want</p>
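Putting that together, here is a sketch that parses the snippet above for the subwayCategory ids and builds the per-line status URLs (the live page has more markup around it):

```python
from bs4 import BeautifulSoup

html = '''<div class="span-11"><img alt="1 2 3 Subway" class="subwayIcon_123"
src="img_trans.gif"/></div>
<div class="subwayCategory" id="123" style="margin-top: 4px;"></div>'''

soup = BeautifulSoup(html, "html.parser")
# every status div carries the id we need for the per-line request
ids = [div["id"] for div in soup.find_all("div", class_="subwayCategory")]
urls = ["http://www.mta.info/status/subway/{}".format(i) for i in ids]
print(urls)
```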
0
2016-09-16T05:27:09Z
[ "python", "html", "parsing", "beautifulsoup" ]
Missing class in beautifulsoup
39,523,680
<p>I am attempting to grab a div on the MTA info page. When I grab the html and parse it with BeautifulSoup it seems to be missing some data.</p> <p>Here is my code so far</p> <pre><code>from bs4 import BeautifulSoup import urllib # access the web # SUBWAY STATUS PROJECT userURL = "http://www.mta.info" # MTA SITE htmlfile = urllib.urlopen(userURL) #creates html file htmldoc = htmlfile.read() #creates html text soup = BeautifulSoup(htmldoc, 'html.parser') subChart = soup.find( id = 'subwayDiv') print subChart </code></pre> <p>I am using print just to be sure that I am getting all of the data. I see that I am missing some information that I am trying to grab. If I look to the page myself I can see that I am missing a div with the class that shows the subway status.</p> <p>I am very new to programming so please mind my ignorance</p>
0
2016-09-16T04:19:35Z
39,528,624
<p>The data is retrieved with an ajax request, you can get the info in <em>json</em> format, the only thing you need to pass it a <em>timestamp</em> which you can get with <em>time.time()</em> then just parse it with the <a href="https://docs.python.org/2/library/json.html" rel="nofollow">json</a> library:</p> <pre><code>from time import time from json import load, loads import urllib url = "http://www.mta.info/service_status_json/{}".format(int(time())) json_dict = loads(load(urllib.urlopen(url))) from pprint import pprint as pp pp(json_dict) </code></pre> <p>I won't add all the output as there is too much but using the <code>"BT"</code> we get:</p> <pre><code>{u'line': [{u'Date': {}, u'Time': {}, u'name': u'Bronx-Whitestone', u'status': u'GOOD SERVICE', u'text': {}}, {u'Date': {}, u'Time': {}, u'name': u'Cross Bay', u'status': u'GOOD SERVICE', u'text': {}}, {u'Date': {}, u'Time': {}, u'name': u'Henry Hudson', u'status': u'GOOD SERVICE', u'text': {}}, {u'Date': u'09/16/2016', u'Time': u' 5:57AM', u'name': u'Hugh L. Carey', u'status': u'SERVICE CHANGE', u'text': u" &lt;span class='TitleServiceChange' &gt;Service Change&lt;/span&gt; &lt;span class='DateStyle'&gt; &amp;nbsp;Posted:&amp;nbsp;09/16/2016&amp;nbsp; 5:57AM &lt;/span&gt;&lt;br/&gt;&lt;br/&gt; HLC - HOV Lane Open 6 AM to 10 AM. Two-Way Operations in effect. Three (3) lanes Manhattan-bound. One (1) lane Brooklyn-bound. &lt;br/&gt;&lt;br/&gt; "}, {u'Date': {}, u'Time': {}, u'name': u'Marine Parkway', u'status': u'GOOD SERVICE', u'text': {}}, {u'Date': u'09/16/2016', u'Time': u' 5:57AM', u'name': u'Queens Midtown', u'status': u'SERVICE CHANGE', u'text': u" &lt;span class='TitleServiceChange' &gt;Service Change&lt;/span&gt; &lt;span class='DateStyle'&gt; &amp;nbsp;Posted:&amp;nbsp;09/16/2016&amp;nbsp; 5:57AM &lt;/span&gt;&lt;br/&gt;&lt;br/&gt; QMT - HOV Lane Open 6 AM to 10 AM. Two-Way Operation in effect. Three (3) lanes Manhattan bound. One (1) lane Queens bound. 
&lt;br/&gt;&lt;br/&gt; &lt;span class='TitlePlannedWork' &gt;Planned Work&lt;/span&gt; &lt;br/&gt; &lt;P style='MARGIN: 0in 0in 0pt'&gt;&lt;SPAN style=''Times New Roman';2016; Queens-Midtown Tunnel downtown exit; One lane closed. Use 37&lt;SUP&gt;th&lt;/SUP&gt;&lt;/FONT&gt;&lt;FONT size=3&gt; St tunnel exit for access to 2&lt;/FONT&gt;&lt;SUP&gt;&lt;FONT size=3&gt;nd&lt;/FONT&gt;&lt;/SUP&gt;&lt;FONT size=3&gt; Ave. Motorists should allow extra time and may wish to use an alternate route if possible' Drivers should expect delays and plan accordingly. Motorists can sign up for MTA e-mail or text alerts at &lt;/FONT&gt;&lt;SPAN style='COLOR: blue'&gt;&lt;A href='http://www.mta.info/'&gt;&lt;SPAN style='COLOR: #0563c1'&gt;&lt;FONT size=3&gt;www.mta.info&lt;/FONT&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;FONT size=3&gt; &lt;/FONT&gt;&lt;/SPAN&gt;&lt;FONT size=3&gt;and check the Bridges and Tunnels homepage or Facebook page for the latest information on this planned work.&lt;/FONT&gt;&lt;/FONT&gt;&lt;/SPAN&gt;&lt;/P&gt; &lt;br/&gt;&lt;br/&gt; &lt;span class='TitlePlannedWork' &gt;Planned Work&lt;/span&gt; &lt;br/&gt; QMT- MANHATTAN PLAZA WORK REQUIRES CLOSURE OF 'CROSSTOWN' LANES FOR 2 MONTHS. CUSTOMERS SEEKING A CROSSTOWN MANHATTAN ROUTE USE THE UPTOWN LANES; EXPECT DELAYS. &lt;br/&gt;&lt;br/&gt; "}, {u'Date': u'08/15/2016', u'Time': u' 3:56PM', u'name': u'Robert F. Kennedy', u'status': u'PLANNED WORK', u'text': u" &lt;span class='TitlePlannedWork' &gt;Planned Work&lt;/span&gt; &lt;br/&gt; &lt;P style='MARGIN: 0in 0in 0pt'&gt;&lt;SPAN style='COLOR: #1f497d'&gt;&lt;FONT size=3 face=Calibri&gt;Starting Monday, August 15, 2016 and through early 2018, one lane will be closed on the Queens-to-Manhattan ramp at the Robert F. Kennedy Bridge for roadway rehabilitation. 
In addition, overnight on Thursday, August 18 and Friday, August 19, there will be a series of intermittent FULL ramp closures, lasting 15-20 minutes each.&lt;/FONT&gt;&lt;/SPAN&gt;&lt;/P&gt; &lt;br/&gt;&lt;br/&gt; "}, {u'Date': {}, u'Time': {}, u'name': u'Throgs Neck', u'status': u'GOOD SERVICE', u'text': {}}, {u'Date': u'09/16/2016', u'Time': u' 5:28AM', u'name': u'Verrazano-Narrows', u'status': u'PLANNED WORK', u'text': u" &lt;span class='TitlePlannedWork' &gt;Planned Work&lt;/span&gt; &lt;br/&gt; VNB: PLANNED WORK; S. I. BOUND LOWER LEVEL - ONE LANE CLOSED; EXPECT DELAYS. &lt;br/&gt;&lt;br/&gt; "}]} </code></pre> <p>So you just need to go through the dict and pick out what you want.</p>
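Once you have the parsed dict, picking out fields is plain dictionary and list access. For example, reducing a trimmed-down copy of the structure above to a name-to-status map:

```python
json_dict = {u'line': [
    {u'name': u'Throgs Neck', u'status': u'GOOD SERVICE'},
    {u'name': u'Queens Midtown', u'status': u'SERVICE CHANGE'},
]}

# keep just the fields most callers care about
statuses = {entry['name']: entry['status'] for entry in json_dict['line']}
print(statuses)
```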
0
2016-09-16T09:57:00Z
[ "python", "html", "parsing", "beautifulsoup" ]
Python Bisect Search- abs() causing failure
39,523,798
<p>So I am attempting to implement a <strong>bisection search algorithm</strong> in Python that returns an "optimal" savings rate.</p> <p>I've tried creating several different functions, and I don't understand why the program gets caught in an infinite loop. I do know that the abs(current_savings - down_payment) is what causes the recursive infinite loop but I do not know why.</p> <p>First things first, this doesn't really explain why my program doesn't work but here goes:</p> <blockquote> <p>At the end of each month I earn interest on current savings, which is applied first, and then I receive my monthly salary, which is just 1/12 of my annual salary.</p> </blockquote> <p>I am attempting to find the best rate to apply to my monthly salary, to then add to my current savings. </p> <p>My first function simply checks to see if one's salary is high enough to ever save for the 250K down payment. If their salary is not high enough, it prints that it is not adequate and returns False. </p> <p>My second function attempts to find the best rate ("portion saved"), or the best rate to save of monthly salary in order to fall within 100 dollars of the down_payment. In addition, I must record the number of "steps" my bisection search function takes to find the optimal rate. 
</p> <p>Here is the code:</p> <pre><code> #Givens annual_salary = 150000 initial_salary = annual_salary interest_rate = float(0.04/12.0) down_payment = float(250000.0) semi_annual_raise = 0.07 #Bisect-search low = float(0.0) high = float(10000.0) portion_saved = float((low+high)/2) current_savings = 0 months = 0 steps = 0 def isPossible(annual_salary): count = 0 current_savings = 0 while count &lt; 36: current_savings += (current_savings*interest_rate) + (annual_salary/12) count += 1 if count % 6 == 0: annual_salary += (annual_salary*semi_annual_raise) if current_savings &lt; down_payment: print("It is not possible to pay the down payment in three years.") return False else: return True def bisearch(initial_salary,interest_rate,down_payment,semi_annual_raise,low,high,portion_saved,steps): current_savings = 0 months = 0 while abs(current_savings - down_payment) &gt; 100.0: months = 0 current_savings = 0 while months &lt; 36: current_savings = current_savings + (initial_salary*interest_rate) current_savings = current_savings + (initial_salary/12*portion_saved/10000.0) months += 1 if months % 6 == 0: initial_salary += (initial_salary*semi_annual_raise) steps += 1 if current_savings &gt; down_payment: high = portion_saved else: low = portion_saved portion_saved = ((low+high)/2.0) print("Best saving rate: ", (portion_saved/10000.0)) print("Steps in bisection search: ", steps) if isPossible(annual_salary) == True: bisearch(initial_salary,interest_rate,down_payment,semi_annual_raise,low,high,portion_saved,steps) </code></pre> <p>And the test cases:</p> <p><strong>Note:</strong> the number of bisection search steps doesn't have to be the same, but the rate should be the same</p> <p>Test Case 1</p> <pre><code>Enter the starting salary: 150000 Best savings rate: 0.4411 Steps in bisection search: 12 </code></pre> <p>Test Case 2</p> <pre><code>Enter the starting salary: 300000 Best savings rate: 0.2206 Steps in bisection search: 9 </code></pre> <p>If anyone could help me out I 
would greatly appreciate it; I've been at this for hours trying to come up with a fix.</p>
0
2016-09-16T04:33:50Z
39,627,788
<p>I was stuck on the same problem and finally found a solution. Try resetting <code>initial_salary</code> back to <code>annual_salary</code> inside the first <code>while</code> loop within your bisection function; otherwise it just keeps increasing every time you hit six months in the inner loop. Does that work?</p>
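<p>For what it's worth, here is a rough, self-contained sketch of the whole search with that reset in place (names simplified, and the midpoint handled with integers, so treat it as an illustration rather than a drop-in fix for the code above):</p>

```python
def best_savings_rate(annual_salary, down_payment=250000.0,
                      semi_annual_raise=0.07, monthly_rate=0.04 / 12,
                      epsilon=100.0):
    """Bisection search over savings rates expressed in units of 1/10000."""
    low, high = 0, 10000
    steps = 0
    while low <= high:
        portion = (low + high) // 2
        rate = portion / 10000.0
        salary = annual_salary   # reset the salary for every new guess!
        savings = 0.0
        for month in range(1, 37):
            savings += savings * monthly_rate + salary / 12.0 * rate
            if month % 6 == 0:
                salary += salary * semi_annual_raise
        steps += 1
        if abs(savings - down_payment) <= epsilon:
            return rate, steps
        if savings > down_payment:
            high = portion - 1
        else:
            low = portion + 1
    return None, steps  # no rate gets within epsilon of the down payment

print(best_savings_rate(150000))
```

<p>With a starting salary of 150000 this lands on a rate very close to the expected 0.4411; the exact step count depends on how the midpoint is computed, which is why the assignment says it doesn't have to match.</p>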
0
2016-09-21T23:01:49Z
[ "python", "python-3.x", "bisection" ]
Process a list in a Dataframe column
39,523,900
<p>I created a DataFrame <code>neighbours</code> using <code>sim_measure_i</code> which is also a DataFrame.</p> <pre><code>neighbours= sim_measure_i.apply(lambda s: s.nlargest(k).index.tolist(), axis =1)
</code></pre> <p><code>neighbours</code> looks like this: </p> <pre><code>1500    [0, 1, 2, 3, 4]
1501    [0, 1, 2, 3, 4]
1502    [0, 1, 2, 3, 4]
1503    [7230, 12951, 13783, 8000, 18077]
1504    [1, 3, 6, 27, 47]
</code></pre> <p>The second column here has lists - I want to iterate over this DataFrame and work on the list such that I can read each element in the list - say 7230 - and look up a score for 7230 in another DataFrame I have which contains (id, score). </p> <p>I would then like to add a column to this DataFrame such that it looks like </p> <pre><code>test_case_id    nbr_list                             scores
1500            [0, 1, 2, 3, 4]                      [+1, -1, -1, +1, -1]
1501            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1502            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1503            [7230, 12951, 13783, 8000, 18077]    [+1, +1, +1, -1, -1]
1504            [1, 3, 6, 27, 47]                    [+1, +1, +1, -1, -1]
</code></pre> <p>Edit: I've written a method <code>get_scores()</code></p> <pre><code>def get_scores(list_of_neighbours):
    score_matrix = []
    for x, val in enumerate(list_of_neighbours):
        score_matrix.append(df.iloc[val].score)
    return score_matrix
</code></pre> <p>When I try to use <code>lambda</code> on each of <code>nbr_list</code>, I get this error: </p> <pre><code>TypeError: ("cannot do positional indexing on &lt;class 'pandas.indexes.numeric.Int64Index'&gt; with these indexers [0] of &lt;type 'str'&gt;", u'occurred at index 1500')
</code></pre> <p>The code causing this issue: </p> <pre><code>def nearest_neighbours(similarity_matrix, k):
    neighbours = pd.DataFrame(similarity_matrix.apply(lambda s: s.nlargest(k).index.tolist(), axis =1))
    neighbours = neighbours.rename(columns={0 : 'nbr_list'})
    nbr_scores = neighbours.apply(lambda l: get_scores(l.nbr_list), axis=1)
    print neighbours
</code></pre>
1
2016-09-16T04:47:31Z
39,524,002
<p>You can try a nested loop:</p> <pre><code>for i in range(neighbours.shape[0]): #iterate over each row
    for j in range(len(neighbours['neighbours_lists'].iloc[i])): #iterate over each element of the list
        a = neighbours['neighbours_lists'].iloc[i][j] #access the element of the list index j in cell location of row i
</code></pre> <p>where <code>i</code> is the outer loop variable which iterates over each row and <code>j</code> is the inner loop variable which iterates over the length of the list inside each cell.</p>
1
2016-09-16T05:01:58Z
[ "python", "pandas", "numpy", "dataframe" ]
Process a list in a Dataframe column
39,523,900
<p>I created a DataFrame <code>neighbours</code> using <code>sim_measure_i</code> which is also a DataFrame.</p> <pre><code>neighbours= sim_measure_i.apply(lambda s: s.nlargest(k).index.tolist(), axis =1)
</code></pre> <p><code>neighbours</code> looks like this: </p> <pre><code>1500    [0, 1, 2, 3, 4]
1501    [0, 1, 2, 3, 4]
1502    [0, 1, 2, 3, 4]
1503    [7230, 12951, 13783, 8000, 18077]
1504    [1, 3, 6, 27, 47]
</code></pre> <p>The second column here has lists - I want to iterate over this DataFrame and work on the list such that I can read each element in the list - say 7230 - and look up a score for 7230 in another DataFrame I have which contains (id, score). </p> <p>I would then like to add a column to this DataFrame such that it looks like </p> <pre><code>test_case_id    nbr_list                             scores
1500            [0, 1, 2, 3, 4]                      [+1, -1, -1, +1, -1]
1501            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1502            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1503            [7230, 12951, 13783, 8000, 18077]    [+1, +1, +1, -1, -1]
1504            [1, 3, 6, 27, 47]                    [+1, +1, +1, -1, -1]
</code></pre> <p>Edit: I've written a method <code>get_scores()</code></p> <pre><code>def get_scores(list_of_neighbours):
    score_matrix = []
    for x, val in enumerate(list_of_neighbours):
        score_matrix.append(df.iloc[val].score)
    return score_matrix
</code></pre> <p>When I try to use <code>lambda</code> on each of <code>nbr_list</code>, I get this error: </p> <pre><code>TypeError: ("cannot do positional indexing on &lt;class 'pandas.indexes.numeric.Int64Index'&gt; with these indexers [0] of &lt;type 'str'&gt;", u'occurred at index 1500')
</code></pre> <p>The code causing this issue: </p> <pre><code>def nearest_neighbours(similarity_matrix, k):
    neighbours = pd.DataFrame(similarity_matrix.apply(lambda s: s.nlargest(k).index.tolist(), axis =1))
    neighbours = neighbours.rename(columns={0 : 'nbr_list'})
    nbr_scores = neighbours.apply(lambda l: get_scores(l.nbr_list), axis=1)
    print neighbours
</code></pre>
1
2016-09-16T04:47:31Z
39,524,060
<p>Original Data Frame:</p> <pre><code>In [68]: df
Out[68]:
   test_case_id                   neighbours_lists
0          1500                    [0, 1, 2, 3, 4]
1          1501                    [0, 1, 2, 3, 4]
2          1502                    [0, 1, 2, 3, 4]
3          1503  [7230, 12951, 13783, 8000, 18077]
4          1504                  [1, 3, 6, 27, 47]
</code></pre> <p>Custom function which takes id and list and does some computation to evaluate score:</p> <pre><code>In [69]: def g(_id, nbs):
    ...:     return ['-1' if (_id + 1) % (nb + 1) else '+1' for nb in nbs]
    ...:
</code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">Apply</a> custom function to all rows of original data frame:</p> <pre><code>In [70]: scores = df.apply(lambda x: g(x.test_case_id, x.neighbours_lists), axis=1)
</code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_frame.html" rel="nofollow">Convert</a> the scores series to a data frame and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">concat</a> it with the original data frame: </p> <pre><code>In [71]: df = pd.concat([df, scores.to_frame(name='scores')], 1)

In [72]: df
Out[72]:
   test_case_id                   neighbours_lists                scores
0          1500                    [0, 1, 2, 3, 4]  [+1, -1, -1, -1, -1]
1          1501                    [0, 1, 2, 3, 4]  [+1, +1, -1, -1, -1]
2          1502                    [0, 1, 2, 3, 4]  [+1, -1, +1, -1, -1]
3          1503  [7230, 12951, 13783, 8000, 18077]  [-1, -1, -1, -1, -1]
4          1504                  [1, 3, 6, 27, 47]  [-1, -1, +1, -1, -1]
</code></pre>
1
2016-09-16T05:08:27Z
[ "python", "pandas", "numpy", "dataframe" ]
Process a list in a Dataframe column
39,523,900
<p>I created a DataFrame <code>neighbours</code> using <code>sim_measure_i</code> which is also a DataFrame.</p> <pre><code>neighbours= sim_measure_i.apply(lambda s: s.nlargest(k).index.tolist(), axis =1)
</code></pre> <p><code>neighbours</code> looks like this: </p> <pre><code>1500    [0, 1, 2, 3, 4]
1501    [0, 1, 2, 3, 4]
1502    [0, 1, 2, 3, 4]
1503    [7230, 12951, 13783, 8000, 18077]
1504    [1, 3, 6, 27, 47]
</code></pre> <p>The second column here has lists - I want to iterate over this DataFrame and work on the list such that I can read each element in the list - say 7230 - and look up a score for 7230 in another DataFrame I have which contains (id, score). </p> <p>I would then like to add a column to this DataFrame such that it looks like </p> <pre><code>test_case_id    nbr_list                             scores
1500            [0, 1, 2, 3, 4]                      [+1, -1, -1, +1, -1]
1501            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1502            [0, 1, 2, 3, 4]                      [+1, +1, +1, -1, -1]
1503            [7230, 12951, 13783, 8000, 18077]    [+1, +1, +1, -1, -1]
1504            [1, 3, 6, 27, 47]                    [+1, +1, +1, -1, -1]
</code></pre> <p>Edit: I've written a method <code>get_scores()</code></p> <pre><code>def get_scores(list_of_neighbours):
    score_matrix = []
    for x, val in enumerate(list_of_neighbours):
        score_matrix.append(df.iloc[val].score)
    return score_matrix
</code></pre> <p>When I try to use <code>lambda</code> on each of <code>nbr_list</code>, I get this error: </p> <pre><code>TypeError: ("cannot do positional indexing on &lt;class 'pandas.indexes.numeric.Int64Index'&gt; with these indexers [0] of &lt;type 'str'&gt;", u'occurred at index 1500')
</code></pre> <p>The code causing this issue: </p> <pre><code>def nearest_neighbours(similarity_matrix, k):
    neighbours = pd.DataFrame(similarity_matrix.apply(lambda s: s.nlargest(k).index.tolist(), axis =1))
    neighbours = neighbours.rename(columns={0 : 'nbr_list'})
    nbr_scores = neighbours.apply(lambda l: get_scores(l.nbr_list), axis=1)
    print neighbours
</code></pre>
1
2016-09-16T04:47:31Z
39,524,094
<p>Say you start with <code>neighbors</code> looking like this.</p> <pre><code>In [87]: neighbors = pd.DataFrame({'neighbors_list': [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]})

In [88]: neighbors
Out[88]:
    neighbors_list
0  [0, 1, 2, 3, 4]
1  [0, 1, 2, 3, 4]
</code></pre> <p>You didn't specify exactly how the other DataFrame (containing the id-score pairs) looks, so here is an approximation.</p> <pre><code>In [89]: id_score = pd.DataFrame({'id': [0, 1, 2, 3, 4], 'score': [1, -1, -1, 1, -1]})

In [90]: id_score
Out[90]:
   id  score
0   0      1
1   1     -1
2   2     -1
3   3      1
4   4     -1
</code></pre> <p>You can convert this to a dictionary:</p> <pre><code>In [91]: d = id_score.set_index('id')['score'].to_dict()
</code></pre> <p>And then <code>apply</code>:</p> <pre><code>In [92]: neighbors.neighbors_list.apply(lambda l: [d[e] for e in l])
Out[92]:
0    [1, -1, -1, 1, -1]
1    [1, -1, -1, 1, -1]
Name: neighbors_list, dtype: object
</code></pre>
1
2016-09-16T05:12:33Z
[ "python", "pandas", "numpy", "dataframe" ]
Odoo v9 - Using Onchange, how do you clear what is already entered in a field?
39,523,901
<p>I am extending product.template, and I've added a field called uom_class. When I change this field when editing a product, I'd like to clear what is entered in the Many2One field called "Unit of Measure" (uom_id in product.template). I am doing this because I am also changing the domain for the uom_id, and the existing selection ("Units" by default) will probably not be in that domain.</p> <p>I've tried the following, as suggested for earlier versions, but it did not work.</p> <pre><code>@api.onchange('uom_class')
def onchange_uom_class(self):
    # [...] not shown: previously set new domain for uom_id
    result['value'] ={'uom_id':False}
    return result
</code></pre> <p>I've seen other posts suggest I need to assign it an empty product.uom record, but I have no idea how to do that. Any help would be greatly appreciated.</p>
0
2016-09-16T04:47:32Z
39,524,063
<p>Well, I figured this one out. </p> <p>I just declared</p> <pre><code>uom_id = False </code></pre> <p>For some reason, returning the domain works, but not returning the value. Either that, or I just have no idea what I'm doing and I'm returning the value wrong... which is entirely possible.</p>
1
2016-09-16T05:08:42Z
[ "python", "model", "openerp" ]
I want to compare my sum of Q2 with the sum of every individual account
39,523,902
<p>I want to make my option 3 work: I want to compare the sum of Apr, May, Jun with the sum for every individual account (a sum over columns compared to a sum along the row axis). I keep getting "Series lengths must match to compare".</p> <pre><code>import pandas as pd

if __name__ == "__main__":
    file_name = "sales_rossetti.xlsx"

    # Formatting numbers (e.g. $1,000,000)
    pd.options.display.float_format = '${:,.0f}'.format

    # Reading Excel file
    df = pd.read_excel(file_name, index_col = 0, convert_float = False)

    print ("Welcome to Rossetti's Sales program\n")
    print ("1) Search by State")
    print ("2) Search by Jan Sales")
    print ("3) Search by Q2 sales")
    print ("4) Exit")
    my_option = input ("Please select a menu option:")

    if (my_option=="3"):
        my_columns = ["Apr","May","Jun"]
        all_columns = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
        your_sales = input ("Please enter your minimum sale: ")
        print (df[my_columns].sum()&lt;df[all_columns].sum(axis=1, skipna=None, level=None, numeric_only=True))
</code></pre> <p>Error message is:</p> <pre><code>File "C:\Users\jay\Anaconda3\lib\site-packages\pandas\core\ops.py", line 735, in wrapper
    raise ValueError('Series lengths must match to compare')

ValueError: Series lengths must match to compare
</code></pre>
-2
2016-09-16T04:47:50Z
39,531,489
<p>If you print out the result of <code>df[my_columns].sum()</code> and <code>df[all_columns].sum(axis=1, skipna=None, level=None, numeric_only=True)</code> you would probably be able to debug it for yourself.</p> <p>My guess is that because you have different values for skipna the first series has dropped some rows so you cannot compare them directly.</p>
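<p>For example, on a small made-up frame the two calls can return series of different lengths, which is exactly what the comparison then complains about:</p>

```python
import pandas as pd

df = pd.DataFrame({"Apr": [1, 2, 3, 4],
                   "May": [5, 6, 7, 8],
                   "Jun": [9, 10, 11, 12]})

per_column = df[["Apr", "May", "Jun"]].sum()       # one value per column -> length 3
per_row = df[["Apr", "May", "Jun"]].sum(axis=1)    # one value per row    -> length 4
print(per_column)
print(per_row)
```

<p>The numbers here are invented; the point is just to print both sides before comparing them.</p>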
0
2016-09-16T12:25:59Z
[ "python", "pandas" ]
Is there any issue with this Python script I am using through Sikuli? It's not giving me the correct time
39,523,914
<p>When I measure the time manually, it is less than the time that I got through this script:</p> <pre><code>import time
import os

def getTimes():
    try:
        times = []
        if(exists("1472205483589.png",60)):
            click("1472192774056.png")
            wait("1472040968178.png",10)
            click("1472036591623.png")
            click("1472036834091.png")
            click("1472036868986.png")
            if(exists("1472192829443.png",5)):
                click("1472192829443.png")
            u = time.time()
            click("1472539655695.png")
            wait("1472042542247.png",120)
            v = time.time()
            print("Open File to when views list appear  (sec) : " , int(v-u))
            times.append(int(v-u))
            u = time.time()
            click("1472042542247.png")
            wait("1472108424071.png",120)
            mouseMove("1472108424071.png")
            wait("1472108486171.png",120)
            v = time.time()
            print("Opening view (sec) : ",int(v-u))
            times.append(int(v-u))
            u = time.time()
            click("1472109163884.png")
            wait("1472042181291.png",120)
            v = time.time()
            print("Clicking element (sec) : ", float(v-u))
            times.append(int(v-u))
            return times
    except FindFailed as ex:
        print("Failed. Navigator might have stopped working")
        if(exists("1472204045678.png",10)):
            click("1472204045678.png")
        return -1

file = open(r"C:\BSW\SikulixScripts\NavigatorAutoTesting\log.txt",'w')
ret = getTimes()
if (ret == -1):
    file.write("-1")
    exit()
str = " ".join(str(x) for x in ret)
file.write(str)
file.close()
</code></pre>
0
2016-09-16T04:49:36Z
39,540,065
<p>By using time.time(), you are actually returning a number of seconds--the difference between "the epoch" and now. (The epoch is the same as <code>gmtime(0)</code>.) Instead, try using <code>datetime.now()</code> which will give you a datetime object. You can add and subtract datetime objects freely, resulting in a timedelta object as per the <a href="https://docs.python.org/2/library/datetime.html#module-datetime" rel="nofollow">Python docs</a>.</p> <pre><code>u = datetime.now()
click("1472539655695.png")
wait("1472042542247.png",120)
v = datetime.now()
tdelta = v-u
seconds = tdelta.total_seconds() #if you want the number of seconds as a floating point number... (available in Python 2.7 and up)
times.append(seconds)
</code></pre> <p>This should yield more accuracy for you. </p>
0
2016-09-16T21:05:48Z
[ "python", "sikuli" ]
Analyzing a stack sort algorithm's time complexity
39,523,947
<p>I've been working through problems in Cracking the Coding Interview to prepare for some interviews. I was able to solve the stack sort problem but I'm having a really hard time figuring out how to reason about the time complexity. My solution was very similar to the one supplied in the book and I have tested it quite a bit so I'm sure it is correct. Any insight into the thought process one would go through to analyze this algorithm would be very appreciated. The book says it's O(n^2). Here is the algorithm:</p> <pre><code>def sort_stack(stack):
    temp_stack = Stack()
    while not stack.is_empty():
        v = stack.pop()
        if temp_stack.is_empty() or temp_stack.peek() &lt;= v:
            temp_stack.push(v)
        else:
            while not temp_stack.is_empty() and temp_stack.peek() &gt; v:
                stack.push(temp_stack.pop())
            temp_stack.push(v)
    while not temp_stack.is_empty():
        stack.push(temp_stack.pop())
</code></pre> <p>As a side note: I used this approach to sort the stack in order to be within the constraints of the problem. I am aware that faster solutions exist.</p> <p>Thank you in advance.</p>
0
2016-09-16T04:54:23Z
39,524,066
<p>This may be an over-simplified approach to algorithm analysis, but whenever I see a nested loop, I think n^2. Three loops nested -- n^3, etc. As a rule of thumb for simple programs, count the nested loops. This is a pretty helpful tutorial: <a href="http://cs.lmu.edu/~ray/notes/alganalysis/" rel="nofollow">http://cs.lmu.edu/~ray/notes/alganalysis/</a></p>
1
2016-09-16T05:09:13Z
[ "python", "algorithm", "sorting", "stack", "big-o" ]
Analyzing a stack sort algorithm's time complexity
39,523,947
<p>I've been working through problems in Cracking the Coding Interview to prepare for some interviews. I was able to solve the stack sort problem but I'm having a really hard time figuring out how to reason about the time complexity. My solution was very similar to the one supplied in the book and I have tested it quite a bit so I'm sure it is correct. Any insight into the thought process one would go through to analyze this algorithm would be very appreciated. The book says it's O(n^2). Here is the algorithm:</p> <pre><code>def sort_stack(stack):
    temp_stack = Stack()
    while not stack.is_empty():
        v = stack.pop()
        if temp_stack.is_empty() or temp_stack.peek() &lt;= v:
            temp_stack.push(v)
        else:
            while not temp_stack.is_empty() and temp_stack.peek() &gt; v:
                stack.push(temp_stack.pop())
            temp_stack.push(v)
    while not temp_stack.is_empty():
        stack.push(temp_stack.pop())
</code></pre> <p>As a side note: I used this approach to sort the stack in order to be within the constraints of the problem. I am aware that faster solutions exist.</p> <p>Thank you in advance.</p>
0
2016-09-16T04:54:23Z
39,524,334
<p>Consider the worst case, in which sorting each and every item in the stack requires completely emptying the temp stack. (This happens when trying to sort a reverse-sorted stack.)</p> <p>What is the cost of emptying/refilling the temp stack for each item?<br> How many items are there?</p> <p>Combining these should give O(n^2).</p>
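<p>One way to see this concretely is to instrument the algorithm with plain lists standing in for the stacks and count every push and pop (this assumes the usual push/pop/peek semantics from the question, so the exact counts are only illustrative):</p>

```python
def count_ops(items):
    """Run the same sort with lists as stacks, counting pushes and pops."""
    stack, temp, ops = list(items), [], 0
    while stack:
        v = stack.pop(); ops += 1
        if not temp or temp[-1] <= v:
            temp.append(v); ops += 1
        else:
            while temp and temp[-1] > v:
                stack.append(temp.pop()); ops += 2
            temp.append(v); ops += 1
    while temp:
        stack.append(temp.pop()); ops += 2
    return stack, ops

# An ascending input pops off largest-remaining-first, which forces a
# full unload of the temp stack over and over -- the worst case.
for n in (10, 20, 40):
    print(n, count_ops(range(n))[1])
```

<p>Doubling the input size roughly quadruples the operation count, which is the O(n^2) signature.</p>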
2
2016-09-16T05:34:13Z
[ "python", "algorithm", "sorting", "stack", "big-o" ]
Why does python's datetime.datetime.strptime('201412', '%Y%m%d') not raise a ValueError?
39,523,968
<p>In the format I am given, the date 2014-01-02 would be represented by "20140102". This is correctly parsed with the standard strptime:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("20140102", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>In this format, "201412" would <em>not</em> be a valid date. The <a href="https://docs.python.org/2/library/time.html">docs</a> say that the "%m" directive is "Month as a zero-padded decimal number." It gives as examples "01, 02, ..., 12". The days directive "%d" is also supposed to be zero-padded.</p> <p>Based on this, I expected that "201412" would be an invalid input with this format, so would raise a ValueError. Instead, it is interpreted as 2014-01-02:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("201412", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>The question is: is there a way to specify "no seriously zero-padded only"? Or am I misunderstanding the term "zero-padded" in this context?</p> <p>Note that the question is not about how to parse dates in this format, but about understanding the behavior of strptime.</p>
10
2016-09-16T04:56:18Z
39,524,265
<p>If you look here at how the regex is defined for <code>%m</code> <a href="https://github.com/python/cpython/blob/2d264235f6e066611b412f7c2e1603866e0f7f1b/Lib/_strptime.py#L204">https://github.com/python/cpython/blob/2d264235f6e066611b412f7c2e1603866e0f7f1b/Lib/_strptime.py#L204</a></p> <p><code>'m': r"(?P&lt;m&gt;1[0-2]|0[1-9]|[1-9])"</code></p> <p>You can see you can either have a 10-12, 01-09, or 1-9 as acceptable months.</p>
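<p>You can poke at that alternation directly (dropping the named group for brevity):</p>

```python
import re

month = re.compile(r"1[0-2]|0[1-9]|[1-9]")

print(month.match("12").group())  # the two-digit branch matches first
print(month.match("1").group())   # a lone digit is accepted by the last branch
```

<p>Inside the full <code>%Y%m%d</code> pattern the engine can also backtrack, so for <code>201412</code> the month group settles for the single digit <code>1</code>, leaving <code>2</code> for the day.</p>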
6
2016-09-16T05:29:00Z
[ "python", "datetime", "strptime" ]
Why does python's datetime.datetime.strptime('201412', '%Y%m%d') not raise a ValueError?
39,523,968
<p>In the format I am given, the date 2014-01-02 would be represented by "20140102". This is correctly parsed with the standard strptime:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("20140102", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>In this format, "201412" would <em>not</em> be a valid date. The <a href="https://docs.python.org/2/library/time.html">docs</a> say that the "%m" directive is "Month as a zero-padded decimal number." It gives as examples "01, 02, ..., 12". The days directive "%d" is also supposed to be zero-padded.</p> <p>Based on this, I expected that "201412" would be an invalid input with this format, so would raise a ValueError. Instead, it is interpreted as 2014-01-02:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("201412", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>The question is: is there a way to specify "no seriously zero-padded only"? Or am I misunderstanding the term "zero-padded" in this context?</p> <p>Note that the question is not about how to parse dates in this format, but about understanding the behavior of strptime.</p>
10
2016-09-16T04:56:18Z
39,524,268
<p>According to the related <a href="https://bugs.python.org/issue22840">issue</a> on the Python tracker, with an example like this (a slight modification of this question, but the concept is exactly the same):</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime('20141110', '%Y%m%d').isoformat()
'2014-11-10T00:00:00'
&gt;&gt;&gt; datetime.datetime.strptime('20141110', '%Y%m%d%H%M').isoformat()
'2014-01-01T01:00:00'
</code></pre> <p>The above behavior is determined to be <strong>not</strong> a bug, as explained by <a href="https://bugs.python.org/issue22840#msg230991">this comment</a>, which states that the parsers conform to the <a href="http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html">OpenGroup strptime standard</a>, which specifies that "leading zeros are permitted but not required."</p> <p>I guess the workaround is to use a regex, or to check that the string has length 8 before passing it into <code>strptime</code>.</p>
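<p>A small wrapper along the lines of that length check (the function name is just a suggestion):</p>

```python
from datetime import datetime

def parse_yyyymmdd(text):
    """Accept only fully zero-padded YYYYMMDD strings."""
    if len(text) != 8 or not text.isdigit():
        raise ValueError("expected exactly 8 digits (YYYYMMDD), got %r" % text)
    return datetime.strptime(text, "%Y%m%d")

print(parse_yyyymmdd("20140102"))
```

<p>With this, <code>parse_yyyymmdd("201412")</code> raises a <code>ValueError</code> instead of silently parsing as 2014-01-02.</p>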
5
2016-09-16T05:29:15Z
[ "python", "datetime", "strptime" ]
Why does python's datetime.datetime.strptime('201412', '%Y%m%d') not raise a ValueError?
39,523,968
<p>In the format I am given, the date 2014-01-02 would be represented by "20140102". This is correctly parsed with the standard strptime:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("20140102", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>In this format, "201412" would <em>not</em> be a valid date. The <a href="https://docs.python.org/2/library/time.html">docs</a> say that the "%m" directive is "Month as a zero-padded decimal number." It gives as examples "01, 02, ..., 12". The days directive "%d" is also supposed to be zero-padded.</p> <p>Based on this, I expected that "201412" would be an invalid input with this format, so would raise a ValueError. Instead, it is interpreted as 2014-01-02:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("201412", "%Y%m%d")
datetime.datetime(2014, 1, 2, 0, 0)
</code></pre> <p>The question is: is there a way to specify "no seriously zero-padded only"? Or am I misunderstanding the term "zero-padded" in this context?</p> <p>Note that the question is not about how to parse dates in this format, but about understanding the behavior of strptime.</p>
10
2016-09-16T04:56:18Z
39,524,276
<p>This is pretty tricky, but it sounds like <code>strptime</code> just tries to match the string as closely as possible. Python's <code>strptime</code> is the same as C's <code>strptime</code>, and the docs say that padding is optional:</p> <blockquote> <p>is the month number [1,12]; leading zeros are permitted but not required.</p> </blockquote> <p><a href="http://pubs.opengroup.org/onlinepubs/7908799/xsh/strptime.html" rel="nofollow">http://pubs.opengroup.org/onlinepubs/7908799/xsh/strptime.html</a></p>
1
2016-09-16T05:29:58Z
[ "python", "datetime", "strptime" ]
Requesting and Parsing a JSON with Python
39,524,008
<p>I have this code below that I'm working on as part of an AI project. I'm looking to get tags that the user searches and return a url of a post with said tags.</p> <pre><code>from meya import Component

import requests
import json

API_URL = "https://e621.net/post/index.json?limit=1&amp;tags=rating:s+order:random+{tag}"


class e621_call(Component):
    """Produces a e621 post based on the user's tags."""

    def start(self):
        tag = self.properties.get('tag') or \
            self.db.flow.get('tag')

        response = requests.get(API_URL.format(tag=tag))
        img = response.json()['file_url']

        message = self.create_message(text=img)

        return self.respond(message=message, action="next")
</code></pre> <p>Everything is fine up until I assign img to the <em>file_url</em> value, which brings up a</p> <pre><code>TypeError: list indices must be integers, not str
</code></pre> <p>For the record, I am using meya.ai for this, and the packages are not the issue</p>
0
2016-09-16T05:02:37Z
39,524,111
<p>Your API returns a dictionary inside a list. Get inside the list first, then you can do what you wish with the dictionary.</p> <pre><code>response = requests.get(API_URL)
foo = json.loads(response.content)
file_url = foo[0].get('file_url')
</code></pre> <p>If you plan on having multiple dictionaries returned inside the list, you can just loop through foo and find the multiple urls.</p> <pre><code>for d in foo:
    print d.get('file_url')
</code></pre> <p>Also, I prefer not to use .json() (as you may have noticed, I didn't include it in my answer) because that way you can correctly check the status_code first before proceeding. Otherwise, if you get a 404 or a 500 for example, you will get errors.</p> <pre><code>if response.status_code == 200:
    do stuff here
else:
    print "Something is wrong"
</code></pre>
1
2016-09-16T05:14:22Z
[ "python", "json", "parsing" ]
Requesting and Parsing a JSON with Python
39,524,008
<p>I have this code below that I'm working on as part of an AI project. I'm looking to get tags that the user searches and return a url of a post with said tags.</p> <pre><code>from meya import Component

import requests
import json

API_URL = "https://e621.net/post/index.json?limit=1&amp;tags=rating:s+order:random+{tag}"


class e621_call(Component):
    """Produces a e621 post based on the user's tags."""

    def start(self):
        tag = self.properties.get('tag') or \
            self.db.flow.get('tag')

        response = requests.get(API_URL.format(tag=tag))
        img = response.json()['file_url']

        message = self.create_message(text=img)

        return self.respond(message=message, action="next")
</code></pre> <p>Everything is fine up until I assign img to the <em>file_url</em> value, which brings up a</p> <pre><code>TypeError: list indices must be integers, not str
</code></pre> <p>For the record, I am using meya.ai for this, and the packages are not the issue</p>
0
2016-09-16T05:02:37Z
39,524,131
<p>I found a temporary workaround: replacing <code>response.json()['file_url']</code> with <code>response.json()[0]['file_url']</code> works fine in the AI, although it still gives me the error.</p>
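<p>The reason the extra <code>[0]</code> is needed is that the API wraps the post in a JSON array, which parses to a Python list of dicts. A tiny stand-alone illustration (the URL is made up):</p>

```python
import json

payload = json.loads('[{"file_url": "https://example.com/img.png"}]')

print(type(payload).__name__)   # the top-level value is a list
print(payload[0]["file_url"])   # index into the list first, then use the key
```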
0
2016-09-16T05:16:47Z
[ "python", "json", "parsing" ]
How to obtain a dictionary of the named arguments in a Python function
39,524,183
<p>I know it's a common case for people to give a function an arbitrary number of kwargs with <code>**kwargs</code>, and then access them as a dictionary; however, I want to explicitly specify my function's kwargs, but still be able to access them as a dictionary.</p> <p>This is because I want my function to only receive specific <code>kwargs</code>, but I need to perform an identical operation with all of them, which I can put into a <code>for</code> loop.</p> <pre><code>def my_func(kwarg1=None, kwarg2=None, kwarg3=None):
    kwargs = {}  # After somehow getting all my kwargs into a dictionary
    for k in kwargs:
        # Do my operation
</code></pre> <p>I do <strong>not</strong> want my function to receive an arbitrary number of <code>kwargs</code>, but I <strong>do</strong> want to access my <code>kwargs</code> in a dictionary.</p>
0
2016-09-16T05:22:55Z
39,524,212
<p>Simply create a dictionary as normal, retrieving the values of each argument.</p> <pre><code>def my_func(kwarg1=None, kwarg2=None, kwarg3=None):
    kwargs = {'kwarg1':kwarg1, 'kwarg2':kwarg2, 'kwarg3':kwarg3}
    for k in kwargs:
        # Do my operation
</code></pre>
0
2016-09-16T05:25:17Z
[ "python", "dictionary", "parameters" ]
How to obtain a dictionary of the named arguments in a Python function
39,524,183
<p>I know it's a common case for people to give a function an arbitrary number of kwargs with <code>**kwargs</code>, and then access them as a dictionary; however, I want to explicitly specify my function's kwargs, but still be able to access them as a dictionary.</p> <p>This is because I want my function to only receive specific <code>kwargs</code>, but I need to perform an identical operation with all of them, which I can put into a <code>for</code> loop.</p> <pre><code>def my_func(kwarg1=None, kwarg2=None, kwarg3=None):
    kwargs = {}  # After somehow getting all my kwargs into a dictionary
    for k in kwargs:
        # Do my operation
</code></pre> <p>I do <strong>not</strong> want my function to receive an arbitrary number of <code>kwargs</code>, but I <strong>do</strong> want to access my <code>kwargs</code> in a dictionary.</p>
0
2016-09-16T05:22:55Z
39,524,254
<p>Assuming you have no positional arguments, you could get access to your kwargs via <code>locals</code> if you put it at the top of your function:</p> <pre><code>def my_func(kwarg1=None, kwarg2=None, kwarg3=None):
    # Don't add any more variables before the next line!
    kwargs = dict(locals())
    for k in kwargs:
        # Do my operation
</code></pre> <p>This is hacky (at best) and it's probably better to just spell it out:</p> <pre><code>kwargs = {'kwarg1': kwarg1, ...}
</code></pre>
1
2016-09-16T05:28:25Z
[ "python", "dictionary", "parameters" ]
How to obtain a dictionary of the named arguments in a Python function
39,524,183
<p>I know it's a common case for people to give a function an arbitrary number of kwargs with <code>**kwargs</code>, and then access them as a dictionary; however, I want to explicitly specify my function's kwargs, but still be able to access them as a dictionary.</p> <p>This is because I want my function to only receive specific <code>kwargs</code>, but I need to perform an identical operation with all of them, which I can put into a <code>for</code> loop.</p> <pre><code>def my_func(kwarg1=None, kwarg2=None, kwarg3=None):
    kwargs = {}  # After somehow getting all my kwargs into a dictionary
    for k in kwargs:
        # Do my operation
</code></pre> <p>I do <strong>not</strong> want my function to receive an arbitrary number of <code>kwargs</code>, but I <strong>do</strong> want to access my <code>kwargs</code> in a dictionary.</p>
0
2016-09-16T05:22:55Z
39,526,190
<p>This is Python3.3+ code that creates the list of keyword argument names automatically. Just for completeness. I would prefer any of the simpler solutions.</p> <pre><code>import inspect

def my_func(*, kwarg1=None, kwarg2=None, kwarg3=None):
    local_vars = locals()
    kwargs = {k: local_vars[k] for k in KWARGS_my_func}
    print(kwargs)

KWARGS_my_func = [p.name for p in inspect.signature(my_func).parameters.values()
                  if p.kind == inspect.Parameter.KEYWORD_ONLY]

my_func(kwarg2=2)
</code></pre>
1
2016-09-16T07:40:51Z
[ "python", "dictionary", "parameters" ]
PYTHON: enter the number of days after today for a future day and display the future day of the week
39,524,225
<p>I figured out how to code today's date, but I am having trouble finding out how to code the future date if the future date is larger than the integer 6. In the textbook they entered today's date as 0, so it is Sunday. But for days elapsed since Sunday, they entered 31. The result was "Today is Sunday and the future day is Wednesday". I do not understand how this was coded. This is what I have so far.</p> <pre><code>todaysDate = eval(input("Enter an interger for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6."))

if todaysDate == 0:
    print("Today is Sunday")
elif todaysDate == 1:
    print("Today is Monday")
elif todaysDate == 2:
    print("Today is Tuesday")
elif todaysDate == 3:
    print("Today is Wednesday")
elif todaysDate == 4:
    print("Today is Thursday")
elif todaysDate == 5:
    print("Today is Friday")
elif todaysDate == 6:
    print("Today is Saturday")

daysElapsed = eval(input("Enter the number of days elapsed since today:"))

if daysElapsed == 1:
    print("Today is Sunday and the future day is Monday")
</code></pre>
0
2016-09-16T05:26:40Z
39,524,261
<p>Use the modulus operator, <code>%</code>, to divide by 7 and return the remainder of that operation:</p> <pre><code>&gt;&gt;&gt; 0 % 7 0 &gt;&gt;&gt; 5 % 7 5 &gt;&gt;&gt; 7 % 7 0 &gt;&gt;&gt; 10 % 7 3 </code></pre> <p>Also, use <code>int()</code> instead of <code>eval()</code> to cast the user's input as an integer. It's much safer.</p> <p>You can put this together with a <code>list</code> to hold the values instead of a large <code>if</code> block:</p> <pre><code>days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] today = int(input("Enter an integer for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6.")) future = int(input("Enter the number of days elapsed since today:")) print('Today is {} and the future day is {}.'.format(days[today], days[(today+future)%7])) </code></pre>
0
2016-09-16T05:28:41Z
[ "python" ]
PYTHON: enter the number of days after today for a future day and display the future day of the week
39,524,225
<p>I figured out how to code today's date, but I am having trouble figuring out how to code the future date when it is larger than the integer 6. In the textbook, they entered today's date as 0, so it is Sunday. But for days elapsed since Sunday, they entered 31. The result was "Today is Sunday and the future day is Wednesday". I do not understand how this was coded. This is what I have so far.</p> <pre><code>todaysDate = eval(input("Enter an integer for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6.")) if todaysDate == 0: print("Today is Sunday") elif todaysDate == 1: print("Today is Monday") elif todaysDate == 2: print("Today is Tuesday") elif todaysDate == 3: print("Today is Wednesday") elif todaysDate == 4: print("Today is Thursday") elif todaysDate == 5: print("Today is Friday") elif todaysDate == 6: print("Today is Saturday") daysElapsed = eval(input("Enter the number of days elapsed since today:")) if daysElapsed == 1: print("Today is Sunday and the future day is Monday") </code></pre>
0
2016-09-16T05:26:40Z
39,524,389
<p>In addition to Tigerhawk's answer (I would comment, but I don't have enough rep :().</p> <p>You could minimise the code repetition by storing the days in a list and accessing them as such:</p> <pre><code>days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] todaysDate = int(input("Enter an integer for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6.")) print("Today is {}".format(days[todaysDate])) daysElapsed = int(input("Enter the number of days elapsed since today:")) print("Today is {} and the future day is {}".format(days[todaysDate], days[(todaysDate + daysElapsed) % 7])) </code></pre> <p>Definitely don't use eval, because every time someone writes it on SO, it gets bashed ;) Oh, and it's unsafe or something.</p>
0
2016-09-16T05:39:05Z
[ "python" ]
PYTHON: enter the number of days after today for a future day and display the future day of the week
39,524,225
<p>I figured out how to code today's date, but I am having trouble figuring out how to code the future date when it is larger than the integer 6. In the textbook, they entered today's date as 0, so it is Sunday. But for days elapsed since Sunday, they entered 31. The result was "Today is Sunday and the future day is Wednesday". I do not understand how this was coded. This is what I have so far.</p> <pre><code>todaysDate = eval(input("Enter an integer for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6.")) if todaysDate == 0: print("Today is Sunday") elif todaysDate == 1: print("Today is Monday") elif todaysDate == 2: print("Today is Tuesday") elif todaysDate == 3: print("Today is Wednesday") elif todaysDate == 4: print("Today is Thursday") elif todaysDate == 5: print("Today is Friday") elif todaysDate == 6: print("Today is Saturday") daysElapsed = eval(input("Enter the number of days elapsed since today:")) if daysElapsed == 1: print("Today is Sunday and the future day is Monday") </code></pre>
0
2016-09-16T05:26:40Z
39,524,525
<p>Here, <code>(daysElapsed + todaysDate) % 7</code> gives you the index of the future day:</p> <pre><code>def future(day): if day == 0: print("future is Sunday") elif day == 1: print("future is Monday") elif day == 2: print("future is Tuesday") elif day == 3: print("future is Wednesday") elif day == 4: print("future is Thursday") elif day == 5: print("future is Friday") elif day == 6: print("future is Saturday") todaysDate = int(input("Enter an integer for today's day of the week from 0 - 6, Sunday is 0 and Saturday is 6.")) if todaysDate == 0: print("Today is Sunday") elif todaysDate == 1: print("Today is Monday") elif todaysDate == 2: print("Today is Tuesday") elif todaysDate == 3: print("Today is Wednesday") elif todaysDate == 4: print("Today is Thursday") elif todaysDate == 5: print("Today is Friday") elif todaysDate == 6: print("Today is Saturday") daysElapsed = int(input("Enter the number of days elapsed since today:")) future((daysElapsed+todaysDate) % 7) </code></pre>
-1
2016-09-16T05:51:05Z
[ "python" ]
tf.SequenceExample with multidimensional arrays
39,524,323
<p>In TensorFlow, I want to save a multidimensional array to a TFRecord. For example:</p> <pre><code>[[1, 2, 3], [1, 2], [3, 2, 1]] </code></pre> <p>As the task I am trying to solve is sequential, I am trying to use TensorFlow's <code>tf.train.SequenceExample()</code>, and I can successfully write the data to a TFRecord file. However, when I try to load the data from the TFRecord file using <code>tf.parse_single_sequence_example</code>, I am greeted with a large number of cryptic errors:</p> <pre><code>W tensorflow/core/framework/op_kernel.cc:936] Invalid argument: Name: , Key: input_characters, Index: 1. Number of int64 values != expected. values size: 6 but output shape: [] E tensorflow/core/client/tensor_c_api.cc:485] Name: , Key: input_characters, Index: 1. Number of int64 values != expected. values size: 6 but output shape: [] </code></pre> <p>The function I am using to try to load my data is below:</p> <pre><code>def read_and_decode_single_example(filename): filename_queue = tf.train.string_input_producer([filename], num_epochs=None) reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) context_features = { "length": tf.FixedLenFeature([], dtype=tf.int64) } sequence_features = { "input_characters": tf.FixedLenSequenceFeature([], dtype=tf.int64), "output_characters": tf.FixedLenSequenceFeature([], dtype=tf.int64) } context_parsed, sequence_parsed = tf.parse_single_sequence_example( serialized=serialized_example, context_features=context_features, sequence_features=sequence_features ) context = tf.contrib.learn.run_n(context_parsed, n=1, feed_dict=None) print context </code></pre> <p>The function that I am using to save the data is here:</p> <pre><code># http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/ def make_example(input_sequence, output_sequence): """ Makes a single example from Python lists that follows the format of tf.train.SequenceExample. 
""" example_sequence = tf.train.SequenceExample() # 3D length sequence_length = sum([len(word) for word in input_sequence]) example_sequence.context.feature["length"].int64_list.value.append(sequence_length) input_characters = example_sequence.feature_lists.feature_list["input_characters"] output_characters = example_sequence.feature_lists.feature_list["output_characters"] for input_character, output_character in izip_longest(input_sequence, output_sequence): # Extend seems to work, therefore it replaces append. if input_sequence is not None: input_characters.feature.add().int64_list.value.extend(input_character) if output_characters is not None: output_characters.feature.add().int64_list.value.extend(output_character) return example_sequence </code></pre> <p>Any help would be welcomed.</p>
9
2016-09-16T05:33:17Z
39,681,738
<p>With the provided code I wasn't able to reproduce your error but making some educated guesses gave the following working code.</p> <pre><code>import tensorflow as tf import numpy as np import tempfile tmp_filename = 'tf.tmp' sequences = [[1, 2, 3], [1, 2], [3, 2, 1]] label_sequences = [[0, 1, 0], [1, 0], [1, 1, 1]] def make_example(input_sequence, output_sequence): """ Makes a single example from Python lists that follows the format of tf.train.SequenceExample. """ example_sequence = tf.train.SequenceExample() # 3D length sequence_length = len(input_sequence) example_sequence.context.feature["length"].int64_list.value.append(sequence_length) input_characters = example_sequence.feature_lists.feature_list["input_characters"] output_characters = example_sequence.feature_lists.feature_list["output_characters"] for input_character, output_character in zip(input_sequence, output_sequence): if input_sequence is not None: input_characters.feature.add().int64_list.value.append(input_character) if output_characters is not None: output_characters.feature.add().int64_list.value.append(output_character) return example_sequence # Write all examples into a TFRecords file def save_tf(filename): with open(filename, 'w') as fp: writer = tf.python_io.TFRecordWriter(fp.name) for sequence, label_sequence in zip(sequences, label_sequences): ex = make_example(sequence, label_sequence) writer.write(ex.SerializeToString()) writer.close() def read_and_decode_single_example(filename): filename_queue = tf.train.string_input_producer([filename], num_epochs=None) reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) context_features = { "length": tf.FixedLenFeature([], dtype=tf.int64) } sequence_features = { "input_characters": tf.FixedLenSequenceFeature([], dtype=tf.int64), "output_characters": tf.FixedLenSequenceFeature([], dtype=tf.int64) } return serialized_example, context_features, sequence_features save_tf(tmp_filename) ex,context_features,sequence_features 
= read_and_decode_single_example(tmp_filename) context_parsed, sequence_parsed = tf.parse_single_sequence_example( serialized=ex, context_features=context_features, sequence_features=sequence_features ) sequence = tf.contrib.learn.run_n(sequence_parsed, n=1, feed_dict=None) #check if the saved data matches the input data print(sequences[0] in sequence[0]['input_characters']) </code></pre> <hr> <p>The required changes were:</p> <ol> <li><code>sequence_length = sum([len(word) for word in input_sequence])</code> to <code>sequence_length = len(input_sequence)</code></li> </ol> <p>Otherwise it doesn't work for your example data</p> <ol start="2"> <li><code>extend</code> was changed to <code>append</code></li> </ol>
0
2016-09-24T22:59:29Z
[ "python", "multidimensional-array", "tensorflow", "protocol-buffers" ]
django-mptt show "no such column"
39,524,365
<p>I used MPTT to implement a forum which allows posting comments on the latest news. I can't solve this error: "no such column: news_comment.lft"</p> <p>The app name is news. This is models.py:</p> <pre><code>from django.db import models from mptt.models import MPTTModel, TreeForeignKey class News(models.Model): .... class Comment(MPTTModel): .... news = models.ForeignKey(News,related_name='news', null=True) parent = TreeForeignKey('self', null=True, blank=True, related_name='children', db_index=True) class MPTTMeta: order_insertion_by = ['created'] </code></pre> <p>This is views.py:</p> <pre><code>def show_news(request,request_id): news = News.objects.get(id=request_id) comments = Comment.objects.filter(news=news) return render( request, 'news.html', {'news': news, 'comments': comments,}) </code></pre> <p>This is part of news.html:</p> <pre><code>{% block title %}{{news.title}}{% endblock title %} {% block content %} &lt;div id='news_info'&gt; &lt;h1&gt;{{ news.title }}&lt;/h1&gt; &lt;p&gt;{{ news.created}}&lt;/p&gt; &lt;p&gt;{{ news.content}}&lt;/p&gt; &lt;/div&gt; &lt;div id="comment_list"&gt; &lt;h2&gt;Comments&lt;/h2&gt; **{% for comment in comments %} -- error here** {% if comment.replyto == null %} &lt;div class="comment_info"&gt; ....
</code></pre> <p>This is error report:</p> <pre><code>Environment: Request Method: GET Request URL: http://127.0.0.1:8000/news/4/ Django Version: 1.10.1 Python Version: 3.5.2 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'news', 'mptt'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Template error: In template /home/sinoease/Desktop/ross/Project/NewsPortal/news/templates/news.html, error at line 11 no such column: news_comment.lft 1 : {% block title %}{{news.title}}{% endblock title %} 2 : 3 : {% block content %} 4 : &lt;div id='news_info'&gt; 5 : &lt;h1&gt;{{ news.title }}&lt;/h1&gt; 6 : &lt;p&gt;{{ news.created}}&lt;/p&gt; 7 : &lt;p&gt;{{ news.content}}&lt;/p&gt; 8 : &lt;/div&gt; 9 : &lt;div id="comment_list"&gt; 10 : &lt;h2&gt;Comments&lt;/h2&gt; 11 : {% for comment in comments %} 12 : {% if comment.replyto == null %} 13 : &lt;div class="comment_info"&gt; 14 : &lt;div class="comment_content"&gt; 15 : comment info:{{comment.content}} 16 : &lt;/div&gt; 17 : &lt;div clss="comment_create"&gt; 18 : comment on: {{comment.created}} 19 : &lt;/div&gt; 20 : &lt;a href="{% url 'reply_comment' news.id comment.id %}"&gt;&lt;button&gt;Reply&lt;/button&gt;&lt;/a&gt; 21 : Traceback: File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py" in execute 64. return self.cursor.execute(sql, params) File "/usr/local/lib/python3.5/dist-packages/django/db/backends/sqlite3/base.py" in execute 337. 
return Database.Cursor.execute(self, query, params) The above exception (no such column: news_comment.lft) was the direct cause of the following exception: File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/exception.py" in inner 39. response = get_response(request) File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py" in _get_response 187. response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py" in _get_response 185. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/sinoease/Desktop/ross/Project/NewsPortal/news/views.py" in show_news 17. 'comments': comments,}) File "/usr/local/lib/python3.5/dist-packages/django/shortcuts.py" in render 30. content = loader.render_to_string(template_name, context, request, using=using) File "/usr/local/lib/python3.5/dist-packages/django/template/loader.py" in render_to_string 68. return template.render(context, request) File "/usr/local/lib/python3.5/dist-packages/django/template/backends/django.py" in render 66. return self.template.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in render 208. return self._render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in _render 199. return self.nodelist.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in render 994. bit = node.render_annotated(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in render_annotated 961. return self.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/loader_tags.py" in render 61. result = self.nodelist.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in render 994. bit = node.render_annotated(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py" in render_annotated 961. 
return self.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/defaulttags.py" in render 166. len_values = len(values) File "/usr/local/lib/python3.5/dist-packages/django/db/models/query.py" in __len__ 238. self._fetch_all() File "/usr/local/lib/python3.5/dist-packages/django/db/models/query.py" in _fetch_all 1087. self._result_cache = list(self.iterator()) File "/usr/local/lib/python3.5/dist-packages/django/db/models/query.py" in __iter__ 54. results = compiler.execute_sql() File "/usr/local/lib/python3.5/dist-packages/django/db/models/sql/compiler.py" in execute_sql 835. cursor.execute(sql, params) File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py" in execute 79. return super(CursorDebugWrapper, self).execute(sql, params) File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py" in execute 64. return self.cursor.execute(sql, params) File "/usr/local/lib/python3.5/dist-packages/django/db/utils.py" in __exit__ 94. six.reraise(dj_exc_type, dj_exc_value, traceback) File "/usr/local/lib/python3.5/dist-packages/django/utils/six.py" in reraise 685. raise value.with_traceback(tb) File "/usr/local/lib/python3.5/dist-packages/django/db/backends/utils.py" in execute 64. return self.cursor.execute(sql, params) File "/usr/local/lib/python3.5/dist-packages/django/db/backends/sqlite3/base.py" in execute 337. return Database.Cursor.execute(self, query, params) Exception Type: OperationalError at /news/4/ Exception Value: no such column: news_comment.lft </code></pre> <p>Can anyone give me some idea how to fix this problem?Thanks~</p>
0
2016-09-16T05:37:20Z
39,525,827
<p>MPTT adds fields to the models that use it - it should be enough to run <code>python manage.py makemigrations &lt;your app&gt; &amp;&amp; python manage.py migrate</code>, as far as I can remember. The <a href="http://django-mptt.github.io/django-mptt/tutorial.html" rel="nofollow">MPTT Tutorial</a> mentions this in the part <em>Set up your model</em>.</p>
0
2016-09-16T07:19:48Z
[ "python", "django", "django-mptt" ]
Python how cyclic fetch a pre-fixed number of elements in array
39,524,440
<p>I'm trying to make a function that will always return a pre-fixed number of elements from an array whose length is larger than that pre-fixed number:</p> <pre><code>def getElements(i,arr,size=10): return cyclic array return </code></pre> <p>where <code>i</code> stands for the index of the array to fetch from and <code>arr</code> represents the array of all elements:</p> <h2>Example:</h2> <pre><code>a = [0,1,2,3,4,5,6,7,8,9,10,11] b = getElements(9,a) &gt;&gt; b &gt;&gt; [9,10,11,0,1,2,3,4,5,6] b = getElements(1,a) &gt;&gt; b &gt;&gt; [1,2,3,4,5,6,7,8,9,10] </code></pre> <p>where for <code>i = 9</code> the function returns <code>[9:11]+[0:7]</code> to complete <strong>10 elements</strong>; for <code>i = 1</code> there is no need to cycle the array, just get <code>[1:11]</code>.</p> <p>Thanks for the help.</p> <h2>Initial code (not working):</h2> <pre><code>def getElements(i,arr,size=10): total = len(arr) start = i%total end = start+size return arr[start:end] #not working because it is not cyclic </code></pre> <h2>EDIT:</h2> <p>I can't use any <code>import</code> in this script.</p>
5
2016-09-16T05:43:47Z
39,524,550
<pre><code>def get_elements(i, arr, size=10): if size - (len(arr) - i) &lt; 0: return arr[i:size+i] return arr[i:] + arr[:size - (len(arr) - i)] </code></pre> <p>Is that what you want? <strong>Updated to work with lower numbers.</strong></p>
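Reproducing the function above, a quick check against the examples from the question:

```python
def get_elements(i, arr, size=10):
    # same logic as the answer: plain slice when no wrap-around is needed,
    # otherwise take the tail and wrap around to the front
    if size - (len(arr) - i) < 0:
        return arr[i:size + i]
    return arr[i:] + arr[:size - (len(arr) - i)]

a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(get_elements(9, a))  # [9, 10, 11, 0, 1, 2, 3, 4, 5, 6]
print(get_elements(1, a))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```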
2
2016-09-16T05:53:42Z
[ "python", "arrays", "circular-list" ]
Python how cyclic fetch a pre-fixed number of elements in array
39,524,440
<p>I'm trying to make a function that will always return a pre-fixed number of elements from an array whose length is larger than that pre-fixed number:</p> <pre><code>def getElements(i,arr,size=10): return cyclic array return </code></pre> <p>where <code>i</code> stands for the index of the array to fetch from and <code>arr</code> represents the array of all elements:</p> <h2>Example:</h2> <pre><code>a = [0,1,2,3,4,5,6,7,8,9,10,11] b = getElements(9,a) &gt;&gt; b &gt;&gt; [9,10,11,0,1,2,3,4,5,6] b = getElements(1,a) &gt;&gt; b &gt;&gt; [1,2,3,4,5,6,7,8,9,10] </code></pre> <p>where for <code>i = 9</code> the function returns <code>[9:11]+[0:7]</code> to complete <strong>10 elements</strong>; for <code>i = 1</code> there is no need to cycle the array, just get <code>[1:11]</code>.</p> <p>Thanks for the help.</p> <h2>Initial code (not working):</h2> <pre><code>def getElements(i,arr,size=10): total = len(arr) start = i%total end = start+size return arr[start:end] #not working because it is not cyclic </code></pre> <h2>EDIT:</h2> <p>I can't use any <code>import</code> in this script.</p>
5
2016-09-16T05:43:47Z
39,524,588
<p>You could return </p> <pre><code>array[i: i + size] + array[: max(0, i + size - len(array))] </code></pre> <p>For example</p> <pre><code>In [144]: array = list(range(10)) In [145]: array Out[145]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] In [146]: i, size = 1, 10 In [147]: array[i: i + size] + array[: max(0, i + size - len(array))] Out[147]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] In [148]: i, size = 2, 3 In [149]: array[i: i + size] + array[: max(0, i + size - len(array))] Out[149]: [2, 3, 4] In [150]: i, size = 5, 9 In [151]: array[i: i + size] + array[: max(0, i + size - len(array))] Out[151]: [5, 6, 7, 8, 9, 0, 1, 2, 3] </code></pre>
3
2016-09-16T05:56:42Z
[ "python", "arrays", "circular-list" ]
Python how cyclic fetch a pre-fixed number of elements in array
39,524,440
<p>I'm trying to make a function that will always return a pre-fixed number of elements from an array whose length is larger than that pre-fixed number:</p> <pre><code>def getElements(i,arr,size=10): return cyclic array return </code></pre> <p>where <code>i</code> stands for the index of the array to fetch from and <code>arr</code> represents the array of all elements:</p> <h2>Example:</h2> <pre><code>a = [0,1,2,3,4,5,6,7,8,9,10,11] b = getElements(9,a) &gt;&gt; b &gt;&gt; [9,10,11,0,1,2,3,4,5,6] b = getElements(1,a) &gt;&gt; b &gt;&gt; [1,2,3,4,5,6,7,8,9,10] </code></pre> <p>where for <code>i = 9</code> the function returns <code>[9:11]+[0:7]</code> to complete <strong>10 elements</strong>; for <code>i = 1</code> there is no need to cycle the array, just get <code>[1:11]</code>.</p> <p>Thanks for the help.</p> <h2>Initial code (not working):</h2> <pre><code>def getElements(i,arr,size=10): total = len(arr) start = i%total end = start+size return arr[start:end] #not working because it is not cyclic </code></pre> <h2>EDIT:</h2> <p>I can't use any <code>import</code> in this script.</p>
5
2016-09-16T05:43:47Z
39,524,757
<p>The <a href="https://docs.python.org/3/library/itertools.html#itertools.cycle" rel="nofollow"><code>itertools</code></a> module is a fantastic library with lots of cool things. For this case we can use <code>cycle</code> and <code>islice</code>.</p> <pre><code>from itertools import cycle, islice def getElements(i, a, size=10): c = cycle(a) # make a cycle out of the array list(islice(c,i)) # skip the first `i` elements return list(islice(c, size)) # get `size` elements from the cycle </code></pre> <p>Works just as you wanted.</p> <pre><code>&gt;&gt;&gt; getElements(9, [0,1,2,3,4,5,6,7,8,9,10,11]) [9, 10, 11, 0, 1, 2, 3, 4, 5, 6] </code></pre>
3
2016-09-16T06:09:08Z
[ "python", "arrays", "circular-list" ]
Python how cyclic fetch a pre-fixed number of elements in array
39,524,440
<p>I'm trying to make a function that will always return a pre-fixed number of elements from an array whose length is larger than that pre-fixed number:</p> <pre><code>def getElements(i,arr,size=10): return cyclic array return </code></pre> <p>where <code>i</code> stands for the index of the array to fetch from and <code>arr</code> represents the array of all elements:</p> <h2>Example:</h2> <pre><code>a = [0,1,2,3,4,5,6,7,8,9,10,11] b = getElements(9,a) &gt;&gt; b &gt;&gt; [9,10,11,0,1,2,3,4,5,6] b = getElements(1,a) &gt;&gt; b &gt;&gt; [1,2,3,4,5,6,7,8,9,10] </code></pre> <p>where for <code>i = 9</code> the function returns <code>[9:11]+[0:7]</code> to complete <strong>10 elements</strong>; for <code>i = 1</code> there is no need to cycle the array, just get <code>[1:11]</code>.</p> <p>Thanks for the help.</p> <h2>Initial code (not working):</h2> <pre><code>def getElements(i,arr,size=10): total = len(arr) start = i%total end = start+size return arr[start:end] #not working because it is not cyclic </code></pre> <h2>EDIT:</h2> <p>I can't use any <code>import</code> in this script.</p>
5
2016-09-16T05:43:47Z
39,526,491
<pre><code>a = [1, 2, 3] def cyclic(a, i): b = a * 2 return b[i:i + len(a)] print(cyclic(a, 2)) </code></pre>
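The doubling trick generalizes to an arbitrary <code>size</code> (a sketch; it assumes <code>size &lt;= len(arr)</code>), matching the question's examples:

```python
def get_elements(i, arr, size=10):
    # doubling the list guarantees the slice can run past the end and wrap
    return (arr * 2)[i:i + size]

a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(get_elements(9, a))  # [9, 10, 11, 0, 1, 2, 3, 4, 5, 6]
print(get_elements(1, a))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```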
2
2016-09-16T08:01:25Z
[ "python", "arrays", "circular-list" ]
How to get rid of row numbers, pd.read_excel?
39,524,570
<p>I am a complete beginner with Python. I am working on an assignment and I can't seem to figure out how to get rid of the <em>row numbers</em> from my Excel spreadsheet, while using <code>import pandas</code>. </p> <p>This is what I get when I run the code:</p> <pre><code>0 $20,000,000 $159,000,000 1 $9,900,000 $35,600,000 2 $35,000,000 $45,000,000 3 $9,900,000 $35,600,000 4 $12,000,000 $9,400,000 </code></pre> <p>But instead I just want:</p> <pre><code>$20,000,000 $159,000,000 $9,900,000 $35,600,000 $35,000,000 $45,000,000 $9,900,000 $35,600,000 $12,000,000 $9,400,000 </code></pre> <p>This is inside my main block, for formatting:</p> <pre><code>if __name__ == "__main__": file_name = "movie_theme.xlsx" # Formatting numbers (e.g. $1,000,000) pd.options.display.float_format = '${:,.0f}'.format # Reading Excel file df = pd.read_excel(file_name, convert_float = False) </code></pre> <p>Any suggestions on how to go about doing this?</p>
0
2016-09-16T05:55:19Z
39,525,204
<p>You can try </p> <pre><code>df = df.set_index('Salary') </code></pre> <p>where <code>Salary</code> is the column name you want to be the index.</p> <p>The row number is called the index.</p>
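For instance (hypothetical data; <code>Salary</code> here is just an example column name):

```python
import pandas as pd

df = pd.DataFrame({'Salary': [100, 200], 'Name': ['a', 'b']})
df = df.set_index('Salary')
print(df)  # rows are now labeled by Salary instead of 0, 1
```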
0
2016-09-16T06:41:15Z
[ "python", "excel", "pandas" ]
How to get rid of row numbers, pd.read_excel?
39,524,570
<p>I am a complete beginner with Python. I am working on an assignment and I can't seem to figure out how to get rid of the <em>row numbers</em> from my Excel spreadsheet, while using <code>import pandas</code>. </p> <p>This is what I get when I run the code:</p> <pre><code>0 $20,000,000 $159,000,000 1 $9,900,000 $35,600,000 2 $35,000,000 $45,000,000 3 $9,900,000 $35,600,000 4 $12,000,000 $9,400,000 </code></pre> <p>But instead I just want:</p> <pre><code>$20,000,000 $159,000,000 $9,900,000 $35,600,000 $35,000,000 $45,000,000 $9,900,000 $35,600,000 $12,000,000 $9,400,000 </code></pre> <p>This is inside my main block, for formatting:</p> <pre><code>if __name__ == "__main__": file_name = "movie_theme.xlsx" # Formatting numbers (e.g. $1,000,000) pd.options.display.float_format = '${:,.0f}'.format # Reading Excel file df = pd.read_excel(file_name, convert_float = False) </code></pre> <p>Any suggestions on how to go about doing this?</p>
0
2016-09-16T05:55:19Z
39,526,619
<p>Internally your dataframe always needs an index. If you get rid of the integer index, another column has to be your index, and you should only use a data column as your index if you need to for some special purpose.</p> <p>When you write your dataframe to a file, e.g. with the <code>to_csv()</code> method, you can always specify the keyword <code>index=False</code> and you won't get that index written to your output.</p>
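For example (the column names here are made up for illustration):

```python
import io

import pandas as pd

df = pd.DataFrame({'budget': [20000000, 9900000],
                   'gross': [159000000, 35600000]})

buf = io.StringIO()
df.to_csv(buf, index=False)  # index=False leaves out the row numbers
print(buf.getvalue())
```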
1
2016-09-16T08:09:15Z
[ "python", "excel", "pandas" ]
How to create an sObject for a sType without using get_unique_sobject method?
39,524,577
<p>I wish to create a new sobject for a specific stype. Currently I am using <code>server.get_unique_sobject( stype, data)</code>, but it assumes that an sobject is already present, i.e. it creates a new sobject <strong><em>iff</em></strong> no sobject with the same data <strong>already</strong> exists in the DB.</p> <p>I wish to create a new sobject every time, <strong>even if an sobject with the same name and data is already present.</strong></p>
0
2016-09-16T05:55:42Z
39,710,247
<p>Found it: the API itself provides a method for inserting an sObject into any sType in the system. It is <code>server.insert( sType, data = {})</code>, where <code>data</code> is a dictionary of key-value pairs.</p>
0
2016-09-26T18:50:42Z
[ "python", "tactic" ]
Write a CSV to store in Google Cloud Storage
39,524,581
<p>Background: I'm taking data in my Python/AppEngine project and creating a .tsv file so that I can create charts with d3.js. Right now I'm writing the CSV with each page load; I want to instead store the file once in Google Cloud Storage and read it from there.</p> <p>How I'm currently writing the file, each time the page is loaded!:</p> <pre><code>def get(self): ## this gets called when loading myfile.tsv from d3.js datalist = MyEntity.all() self.response.headers['Content-Type'] = 'text/csv' writer = csv.writer(self.response.out, delimiter='\t') writer.writerow(['field1', 'field2']) for eachco in datalist: writer.writerow([eachco.variable1, eachco.variable2]) </code></pre> <p>And while inefficient, this is working just fine.</p> <p>Using <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage" rel="nofollow">this Google Cloud Storage documentation</a>, I've been trying to get something like this working:</p> <pre><code>def get(self): filename = '/bucket/myfile.tsv' datalist = MyEntity.all() bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name()) write_retry_params = gcs.RetryParams(backoff_factor=1.1) writer = csv.writer(self.response.out, delimiter='\t') gcs_file = gcs.open(filename, 'w', content_type='text/csv', retry_params=write_retry_params) gcs_file.write(writer.writerow(['field1', 'field2'])) for eachco in datalist: gcs_file.write(writer.writerow([eachco.variable1, eachco.variable2])) gcs_file.close() </code></pre> <p>But I'm getting:</p> <pre><code>TypeError: Expected str but got &lt;type 'NoneType'&gt;. </code></pre> <p>I thought that the output of csv.writer would be a string, so I'm not sure why I'm getting the TypeError.</p> <p>So I can think of two situations:</p> <ol> <li>I've got something screwed up in my code that writes the tsv to Cloud Storage. 
It should be simple to iterate through and write a TSV/CSV file to Cloud Storage though, right?</li> <li>I've gone about this the completely wrong way entirely, and should maybe even use BlobStore or db.TextProperty() to store this .tsv data. (The files aren't that big; definitely well under 1MB)</li> </ol> <p>I'd appreciate any help!</p> <p><strong>edit - full traceback</strong></p> <pre><code>Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1530, in __call__ rv = self.router.dispatch(request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher return route.handler_adapter(request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1102, in __call__ return handler.dispatch() File "/mydirectory/myapp/handlers.py", line 21, in dispatch webapp2.RequestHandler.dispatch(self) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 572, in dispatch return self.handle_exception(e, self.app.debug) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 570, in dispatch return method(*args, **kwargs) File "/mydirectory/myapp/thisapp.py", line 384, in get gcs_file.write(writer.writerow(['field1', 'field2'])) File "lib/cloudstorage/storage_api.py", line 754, in write raise TypeError('Expected str but got %s.' % type(data)) TypeError: Expected str but got &lt;type 'NoneType'&gt;. </code></pre>
0
2016-09-16T05:56:07Z
39,525,044
<p>The problem is that <code>writer.writerow</code> doesn't return anything: its return value is <code>None</code>, and you are passing that <code>None</code> to <code>gcs_file.write()</code>, which expects a string. <code>csv.writer</code> writes each row directly into the file-like object it was constructed with, so construct the writer on <code>gcs_file</code> instead of on <code>self.response.out</code> and call <code>writer.writerow(...)</code> on its own, without wrapping it in <code>gcs_file.write()</code>.</p>
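To see the failure mechanism in isolation, here is a minimal self-contained sketch (plain Python, no App Engine or GCS involved — the `NullWritingFile` class below is a hypothetical stand-in for `gcs_file`, whose `write()` likewise returns nothing on the Python 2 runtime the question targets):

```python
import csv

class NullWritingFile:
    """Stand-in for a write-only file object such as gcs_file."""
    def __init__(self):
        self.chunks = []
    def write(self, s):
        # Stores the data and returns None, like gcs_file.write()
        # on the Python 2 App Engine runtime.
        self.chunks.append(s)

f = NullWritingFile()
writer = csv.writer(f, delimiter='\t')

# The formatted row goes into f via f.write(); the call itself
# yields None -- which is why gcs_file.write(writer.writerow(...))
# raises "Expected str but got <type 'NoneType'>."
result = writer.writerow(['field1', 'field2'])

print(result)            # None
print(''.join(f.chunks)) # the row text landed in the file object
```

So the row data never comes back to the caller; it is delivered straight to the underlying file object.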
0
2016-09-16T06:29:36Z
[ "python", "csv", "google-app-engine", "google-cloud-storage" ]
Write a CSV to store in Google Cloud Storage
39,524,581
<p>Background: I'm taking data in my Python/AppEngine project and creating a .tsv file so that I can create charts with d3.js. Right now I'm writing the CSV with each page load; I want to instead store the file once in Google Cloud Storage and read it from there.</p> <p>How I'm currently writing the file, each time the page is loaded!:</p> <pre><code>def get(self): ## this gets called when loading myfile.tsv from d3.js datalist = MyEntity.all() self.response.headers['Content-Type'] = 'text/csv' writer = csv.writer(self.response.out, delimiter='\t') writer.writerow(['field1', 'field2']) for eachco in datalist: writer.writerow([eachco.variable1, eachco.variable2]) </code></pre> <p>And while inefficient, this is working just fine.</p> <p>Using <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage" rel="nofollow">this Google Cloud Storage documentation</a>, I've been trying to get something like this working:</p> <pre><code>def get(self): filename = '/bucket/myfile.tsv' datalist = MyEntity.all() bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name()) write_retry_params = gcs.RetryParams(backoff_factor=1.1) writer = csv.writer(self.response.out, delimiter='\t') gcs_file = gcs.open(filename, 'w', content_type='text/csv', retry_params=write_retry_params) gcs_file.write(writer.writerow(['field1', 'field2'])) for eachco in datalist: gcs_file.write(writer.writerow([eachco.variable1, eachco.variable2])) gcs_file.close() </code></pre> <p>But I'm getting:</p> <pre><code>TypeError: Expected str but got &lt;type 'NoneType'&gt;. </code></pre> <p>I thought that the output of csv.writer would be a string, so I'm not sure why I'm getting the TypeError.</p> <p>So I can think of two situations:</p> <ol> <li>I've got something screwed up in my code that writes the tsv to Cloud Storage. 
It should be simple to iterate through and write a TSV/CSV file to Cloud Storage though, right?</li> <li>I've gone about this the completely wrong way entirely, and should maybe even use BlobStore or db.TextProperty() to store this .tsv data. (The files aren't that big; definitely well under 1MB)</li> </ol> <p>I'd appreciate any help!</p> <p><strong>edit - full traceback</strong></p> <pre><code>Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1530, in __call__ rv = self.router.dispatch(request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher return route.handler_adapter(request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 1102, in __call__ return handler.dispatch() File "/mydirectory/myapp/handlers.py", line 21, in dispatch webapp2.RequestHandler.dispatch(self) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 572, in dispatch return self.handle_exception(e, self.app.debug) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.1/webapp2.py", line 570, in dispatch return method(*args, **kwargs) File "/mydirectory/myapp/thisapp.py", line 384, in get gcs_file.write(writer.writerow(['field1', 'field2'])) File "lib/cloudstorage/storage_api.py", line 754, in write raise TypeError('Expected str but got %s.' % type(data)) TypeError: Expected str but got &lt;type 'NoneType'&gt;. </code></pre>
0
2016-09-16T05:56:07Z
39,525,052
<p>You're still attempting to create the writer on a response:</p> <pre><code>writer = csv.writer(self.response.out, delimiter='\t') </code></pre> <p>You need to write to the GCS file. Something like this:</p> <pre><code> datalist = MyEntity.all() bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name()) filename = os.path.join(bucket_name, 'myfile.tsv') write_retry_params = gcs.RetryParams(backoff_factor=1.1) gcs_file = gcs.open(filename, 'w', content_type='text/csv', retry_params=write_retry_params) writer = csv.writer(gcs_file, delimiter='\t') writer.writerow(['field1', 'field2']) for eachco in datalist: writer.writerow([eachco.variable1, eachco.variable2]) gcs_file.close() </code></pre> <p>Notes: </p> <ul> <li>not actually tested</li> <li>I also adjusted the filename to use <code>bucket_name</code></li> <li>if you do this in the <code>get()</code> request you may want to check if the file already exists and, if so, use it; otherwise you'd still be generating it on every request. Alternatively you could move this code into a task or into the <code>.tsv</code> upload handler.</li> </ul>
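Since the files are well under 1 MB, another workable pattern is to build the whole TSV in an in-memory buffer first and hand the finished string to storage in a single `write()` call. A standalone sketch in Python 3 syntax — the `rows` list stands in for `MyEntity.all()`, and the commented-out `gcs_file.write(...)` line is the assumed GCS call, not tested here:

```python
import csv
import io

# Stand-in for the datastore query results (MyEntity.all()).
rows = [('batman', 1), ('robin', 0)]

# Build the complete TSV in memory.
buf = io.StringIO()
writer = csv.writer(buf, delimiter='\t')
writer.writerow(['field1', 'field2'])
for name, flag in rows:
    writer.writerow([name, flag])

tsv_data = buf.getvalue()
print(tsv_data)
# gcs_file.write(tsv_data)  # single write to Cloud Storage (assumed API)
# gcs_file.close()
```

This keeps the `csv` machinery entirely separate from the storage layer, which makes the serialization easy to unit-test on its own.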
1
2016-09-16T06:30:00Z
[ "python", "csv", "google-app-engine", "google-cloud-storage" ]