Columns: title (string), question_id (int64), question_body (string), question_score (int64), question_date (string), answer_id (int64), answer_body (string), answer_score (int64), answer_date (string), tags (list)
Unexpected kwargs, TypeError at /courses/course/1/1/
39,493,533
<p>I'm getting a TypeError for the pk. Something like:</p> <p>step_detail() got an unexpected keyword argument 'pk' for the second one in /courses/course/1/1, whereas it has been taken care of in the step_detail method below. What am I doing wrong? </p> <p>views.py</p> <pre><code>from django.shortcuts import render from django.shortcuts import get_object_or_404 from .models import Course, Step def course_list(request): courses = Course.objects.all() return render(request, 'courses/course_list.html', {'courses': courses}) def course_detail(request, pk): # course=Course.objects.get(pk=pk) course = get_object_or_404(Course, pk=pk) return render(request, 'courses/course_detail.html', {'course': course}) def step_detail(request, course_pk, step_pk): step = get_object_or_404(Step, course_id=course_pk, pk=step_pk) return render(request, 'courses/step_detail.html', {'step': step}) </code></pre> <p>And the url.py:</p> <pre><code>from django.conf.urls import url from . import views urlpatterns= [ url(r'^$', views.course_list), url(r'(?P&lt;course_pk&gt;\d+)/(?P&lt;pk&gt;\d+)/$', views.step_detail), url(r'(?P&lt;pk&gt;\d+)/$', views.course_detail), ] </code></pre>
0
2016-09-14T14:50:58Z
39,493,664
<p>You are using 2 different variable names:</p> <ul> <li><code>pk</code> in <code>urls.py</code></li> <li><code>step_pk</code> in <code>views.py</code></li> </ul> <p>You should use the same name.</p>
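<p>A minimal sketch of the fix, keeping the view signature from the question as-is: rename the second capture group in <code>urls.py</code> so that both group names match the parameters of <code>step_detail</code>.</p> <pre><code>from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.course_list),
    # group names must match step_detail(request, course_pk, step_pk)
    url(r'(?P&lt;course_pk&gt;\d+)/(?P&lt;step_pk&gt;\d+)/$', views.step_detail),
    url(r'(?P&lt;pk&gt;\d+)/$', views.course_detail),
]
</code></pre>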
1
2016-09-14T14:56:44Z
[ "python", "django" ]
Flask: how to pass variables between HTML pages / app routes
39,493,606
<p>I have a getvlanconfig.html page with a form that collects information like vlanid and vlannetwork. I want to be able to pass that information over to the page showvlanconfig.html that loads when the form is submitted. I am new to Flask and from whatever lookup i could do, I was unable to find out the best way to do it. Also I have taken a look at sessions and I am not sure if that would be my option here. </p> <pre><code> from flask import Flask, render_template, url_for, flash, request, redirect app = Flask(__name__) @app.route('/') def homepage(): return render_template("index.html") @app.route('/getvlanconfig/', methods=["GET","POST"]) def getvlan(): try: if request.method == "POST": getvlanid = request.form['vlanid'] getvlannetwork = request.form['vlannetwork'] return redirect(url_for('showvlan')) except Exception as e: return render_template("vlanconfig.html") @app.route('/showvlanconfig', methods=["GET","POST"]) def showvlan(): try: getvlanid = ??? getvlannetwork = ?? return render_template("index.html", vlanid = getvlanid, vlannetwork = getvlannetwork) except Exception as e: flash(e) if __name__ == "__main__": app.run(debug = True) </code></pre>
-1
2016-09-14T14:54:07Z
39,493,863
<p>Use the session to store data between requests from the same client.</p> <pre><code>from flask import session def getvlan(): session['vlanid'] = request.form['vlanid'] return redirect(url_for('showvlan')) def showvlan(): vlanid = session['vlanid'] ... </code></pre> <p>Use a database (or other external, persistent store) to store data in a more generally accessible sense.</p>
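<p>A fuller sketch built from the routes in the question (the secret key value here is only a placeholder; Flask's session requires one to be set):</p> <pre><code>from flask import Flask, render_template, request, redirect, url_for, session

app = Flask(__name__)
app.secret_key = 'replace-with-a-random-secret-key'  # required for session support

@app.route('/getvlanconfig/', methods=['GET', 'POST'])
def getvlan():
    if request.method == 'POST':
        session['vlanid'] = request.form['vlanid']
        session['vlannetwork'] = request.form['vlannetwork']
        return redirect(url_for('showvlan'))
    return render_template('vlanconfig.html')

@app.route('/showvlanconfig', methods=['GET', 'POST'])
def showvlan():
    # read the values stored by getvlan(); .get() avoids a KeyError on direct visits
    return render_template('index.html',
                           vlanid=session.get('vlanid'),
                           vlannetwork=session.get('vlannetwork'))
</code></pre>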
0
2016-09-14T15:04:34Z
[ "python", "html", "flask" ]
Json data not reaching Django properly
39,493,607
<p>In my project, the frontend is in Angular2, and I'm making a POST request to a Django URL like this:</p> <pre><code>let headers = new Headers({ 'Content-Type': 'application/json' }); let options = new RequestOptions({ headers: headers }); let body = JSON.stringify(this.myMetadata); let req = this.http.post(this.url,body,options) .map((res:Response) =&gt; res.json()) .subscribe( data =&gt; { this.response = data}, err =&gt; console.error(err), () =&gt; console.log('done') ); return req; </code></pre> <p>But the Django backend is not receiving the data as JSON.</p> <pre><code>if request.method == 'POST': logger.info(request.method) logger.info(type(request.body)) logger.info(request.POST) </code></pre> <p>The log dumps show that Django has received the data as a string, and because of this none of the json methods work on it.</p> <pre><code>Method:POST request.body type:&lt;type 'str'&gt; </code></pre> <p>What is the best way to resolve this?</p> <p>Is there a way to convert a string to a dictionary?</p>
0
2016-09-14T14:54:10Z
39,493,697
<blockquote> <p>Is there a way to convert a string to a dictionary?</p> </blockquote> <p>Yes just use the json module</p> <pre><code>import json ... if request.method == 'POST': data = json.loads(request.body) logger.info(type(data)) </code></pre>
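<p>A sketch of a complete view around that idea (the view name is hypothetical; on Python 3, <code>request.body</code> is bytes, so decode it first if your <code>json</code> version requires text):</p> <pre><code>import json
import logging

from django.http import JsonResponse

logger = logging.getLogger(__name__)

def metadata_view(request):  # hypothetical view name
    if request.method == 'POST':
        data = json.loads(request.body)  # dict parsed from the raw JSON body
        logger.info(type(data))
        return JsonResponse({'received': True})
    return JsonResponse({'error': 'POST expected'}, status=405)
</code></pre>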
0
2016-09-14T14:58:08Z
[ "python", "json", "django" ]
I need to add a '$' to my output of denominations used
39,493,699
<p><a href="http://ideone.com/QxPDFh" rel="nofollow">http://ideone.com/QxPDFh</a></p> <p>This is my program that I am working on but I need the <code>amount_denom</code> to display the $ with the number. How would I get this done? I need to fix the print statement but how? I am new to coding so sometimes I can't see the obvious stuff. Thank you. </p> <pre><code>def main(): denominations = [20, 10, 5, 1, .25, .10, .05, .01] used_denom = [] amount_denom = [] user_input = float(input('enter a dollar amount: ')) #Keep asking user to re-enter input until they enter a value (0-200) while user_input &lt; 0 or user_input &gt; 200: user_input = float(input('re-enter a dollar amount: ')) #Traverse the list to breakdown the user_input into denominations. remainder = user_input for d in denominations: num_denom = int(remainder / d) if num_denom &gt; 0: used_denom.append(d) amount_denom.append(num_denom) #Avoid dividing by a float (prevents .01 issue from occurring) remainder = (remainder*100) % (d * 100) / 100 #Traverse the amount_denom list and print the output to be formatted a certain way. for i in range(len(amount_denom)): print("{0: 2d}{1:8.2f}".format(amount_denom[i],used_denom[i]),end = "") print("s" if amount_denom[i] &gt; 1 else "") main() print("{0: 2d} ${1:8.2f}".format(amount_denom[i],used_denom[i]),end="") </code></pre> <p>this just shows me: <a href="https://gyazo.com/ff41f5c82567c71cca05340f23a51e98" rel="nofollow">https://gyazo.com/ff41f5c82567c71cca05340f23a51e98</a></p> <p>Found out that I just need to change the 8.2f to 5.2f to join the $ and the used_denom to the format I like. Thanks. </p>
0
2016-09-14T14:58:13Z
39,494,080
<p>Can you add a <code>$</code> to your <code>print</code> statement?</p> <pre><code>print("${0: 2d}{1:8.2f}".format(amount_denom[i],used_denom[i]),end = "") </code></pre> <p>or:</p> <pre><code>print("{0: 2d} ${1:8.2f}".format(amount_denom[i],used_denom[i]),end = "") </code></pre> <p>You can place the <code>$</code> wherever you wish within the string. You can also add any other characters (e.g. <code>_</code> or <code>-</code>) within the string.</p> <p>To reduce the white space between the <code>$</code> and the number, shrink the field widths in the format spec rather than calling <code>strip()</code> (the values here are numbers, so <code>strip()</code> does not apply and the <code>d</code>/<code>f</code> format codes would reject strings):</p> <pre><code>print("{0:2d} ${1:5.2f}".format(amount_denom[i],used_denom[i]),end = "") </code></pre>
0
2016-09-14T15:16:04Z
[ "python" ]
Generate unique alphanumeric ID such as in the UK NINO (SN-60-70-45-B) format
39,493,709
<p>How can I generate unique alphanumeric ID such as in the UK NINO (SN-60-70-45-B) format? I want to use this ID to uniquely identify the applications and also to be able to associate these IDs to their medical, tax records and so on.</p>
-1
2016-09-14T14:58:33Z
39,513,501
<p>Assuming the question is for python, start here: <a href="https://docs.python.org/2/library/uuid.html#module-uuid" rel="nofollow">uuid</a></p> <p>There is most likely a nicer way to do it but this should help:</p> <pre><code>import uuid, re myUID = uuid.uuid1() myNum = str(myUID.int).replace("-", "")[:6] myLet = re.sub('\d+', '', str(myUID)).replace("-", "")[:3].upper() myRes = myLet[:2] + "-" + myNum[:2] + "-" + myNum[2:4] + "-" + myNum[4:6] + "-" + myLet[1:2] print myRes </code></pre> <p>Live Demo:</p> <p><a href="https://repl.it/DaYP" rel="nofollow">https://repl.it/DaYP</a></p>
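<p>If the exact UUID-based split is not important, a simpler Python 3 sketch with the standard library can produce the same shape directly. This only copies the format, it does not follow the real NINO allocation rules, and uniqueness still has to be enforced against your existing records (e.g. with a unique database constraint):</p> <pre><code>import random
import string

_rng = random.SystemRandom()  # OS-backed randomness

def nino_style_id():
    """Return an ID shaped like SN-60-70-45-B (format only)."""
    prefix = ''.join(_rng.choice(string.ascii_uppercase) for _ in range(2))
    pairs = '-'.join('{:02d}'.format(_rng.randrange(100)) for _ in range(3))
    suffix = _rng.choice(string.ascii_uppercase)
    return '{}-{}-{}'.format(prefix, pairs, suffix)

print(nino_style_id())  # e.g. 'SN-60-70-45-B'
</code></pre>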
0
2016-09-15T14:19:02Z
[ "python" ]
Python how does == work for float/double?
39,493,732
<p>I know using <code>==</code> for float is generally not safe. But does it work for the below scenario?</p> <ol> <li>Read from csv file A.csv, save first half of the data to csv file B.csv without doing anything.</li> <li>Read from both A.csv and B.csv. Use <code>==</code> to check if data match everywhere in the first half.</li> </ol> <p>These are all done with Pandas. The columns in A.csv have types datetime, string, and float. Obviously <code>==</code> works for datetime and string, so if <code>==</code> works for float as well in this case, it saves a lot of work.</p> <p>It seems to be working for all my tests, but can I assume it will work all the time?</p>
0
2016-09-14T14:59:24Z
39,493,833
<p>The same string representation will become the same float representation when put through the same parse routine. The float inaccuracy issue occurs either when mathematical operations are performed on the values or when high-precision representations are used, but equality on low-precision values is no reason to worry.</p>
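<p>A quick illustration of the distinction:</p> <pre><code>&gt;&gt;&gt; float('0.1') == float('0.1')   # same text, same parser, same float
True
&gt;&gt;&gt; 0.1 + 0.2 == 0.3               # arithmetic is where the surprises live
False
</code></pre>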
3
2016-09-14T15:03:34Z
[ "python", "pandas", "floating-point" ]
Python how does == work for float/double?
39,493,732
<p>I know using <code>==</code> for float is generally not safe. But does it work for the below scenario?</p> <ol> <li>Read from csv file A.csv, save first half of the data to csv file B.csv without doing anything.</li> <li>Read from both A.csv and B.csv. Use <code>==</code> to check if data match everywhere in the first half.</li> </ol> <p>These are all done with Pandas. The columns in A.csv have types datetime, string, and float. Obviously <code>==</code> works for datetime and string, so if <code>==</code> works for float as well in this case, it saves a lot of work.</p> <p>It seems to be working for all my tests, but can I assume it will work all the time?</p>
0
2016-09-14T14:59:24Z
39,498,946
<p>No, you cannot assume that this will work all the time.</p> <p>For this to work, you need to know that the text value written out by Pandas when it's writing to a CSV file recovers the exact same value when read back in (again using Pandas). But by default, the Pandas <code>read_csv</code> function sacrifices accuracy for speed, and so the parsing operation does <em>not</em> automatically recover the same float.</p> <p>To demonstrate this, try the following: we'll create some random values, write them out to a CSV file, and read them back in, all using Pandas. First the necessary imports:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np </code></pre> <p>Now create some random values, and put them into a Pandas <code>Series</code> object:</p> <pre><code>&gt;&gt;&gt; test_values = np.random.rand(10000) &gt;&gt;&gt; s = pd.Series(test_values, name='test_values') </code></pre> <p>Now we use the <code>to_csv</code> method to write these values out to a file, and then read the contents of that file back into a <code>DataFrame</code>:</p> <pre><code>&gt;&gt;&gt; s.to_csv('test.csv', header=True) &gt;&gt;&gt; df = pd.read_csv('test.csv') </code></pre> <p>Finally, let's extract the values from the relevant column of <code>df</code> and compare. We'll sum the result of the <code>==</code> operation to find out how many of the <code>10000</code> input values were recovered exactly.</p> <pre><code>&gt;&gt;&gt; sum(test_values == df['test_values']) 7808 </code></pre> <p>So approximately 78% of the values were recovered correctly; the others were not.</p> <p>This behaviour is considered a feature of Pandas, rather than a bug. However, there's a workaround: Pandas 0.15 added a new <code>float_precision</code> argument to the CSV reader. By supplying <code>float_precision='round_trip'</code> to the <code>read_csv</code> operation, Pandas uses a slower but more accurate parser. Trying that on the example above, we get the values recovered perfectly:</p> <pre><code>&gt;&gt;&gt; df = pd.read_csv('test.csv', float_precision='round_trip') &gt;&gt;&gt; sum(test_values == df['test_values']) 10000 </code></pre> <p>Here's a second example, going in the other direction. The previous example showed that writing and then reading doesn't give back the same data. This example shows that reading and then writing doesn't preserve the data, either. The setup closely matches the one you describe in the question. First we'll create <code>A.csv</code>, this time using regularly-spaced values instead of random ones:</p> <pre><code>&gt;&gt;&gt; import pandas as pd, numpy as np &gt;&gt;&gt; s = pd.Series(np.arange(10**4) / 1e3, name='test_values') &gt;&gt;&gt; s.to_csv('A.csv', header=True) </code></pre> <p>Now we read <code>A.csv</code>, and write the first half of the data back out again to <code>B.csv</code>, as in your Step 1.</p> <pre><code>&gt;&gt;&gt; recovered_s = pd.read_csv('A.csv').test_values &gt;&gt;&gt; recovered_s[:5000].to_csv('B.csv', header=True) </code></pre> <p>Then we read in both <code>A.csv</code> and <code>B.csv</code>, and compare the first half of <code>A</code> with <code>B</code>, as in your Step 2.</p> <pre><code>&gt;&gt;&gt; a = pd.read_csv('A.csv').test_values &gt;&gt;&gt; b = pd.read_csv('B.csv').test_values &gt;&gt;&gt; (a[:5000] == b).all() False &gt;&gt;&gt; (a[:5000] == b).sum() 4251 </code></pre> <p>So again, several of the values don't compare correctly. Opening up the files, <code>A.csv</code> looks pretty much as I expect. 
Here are the first 15 entries in <code>A.csv</code>:</p> <pre><code>,test_values 0,0.0 1,0.001 2,0.002 3,0.003 4,0.004 5,0.005 6,0.006 7,0.007 8,0.008 9,0.009 10,0.01 11,0.011 12,0.012 13,0.013 14,0.014 15,0.015 </code></pre> <p>And here are the corresponding entries in <code>B.csv</code>:</p> <pre><code>,test_values 0,0.0 1,0.001 2,0.002 3,0.003 4,0.004 5,0.005 6,0.006 7,0.006999999999999999 8,0.008 9,0.009000000000000001 10,0.01 11,0.011000000000000001 12,0.012 13,0.013000000000000001 14,0.013999999999999999 15,0.015 </code></pre> <p>See this <a href="https://github.com/pydata/pandas/issues/8002" rel="nofollow">bug report</a> for more information on the introduction of the <code>float_precision</code> keyword to <code>read_csv</code>.</p>
3
2016-09-14T20:16:48Z
[ "python", "pandas", "floating-point" ]
Make nested dictionary, unflatten
39,493,783
<p>I have a list of lists containing key and value like so:</p> <pre><code>[ ['mounts:device', '/dev/sda3'], ['mounts:fstype:[0]', 'ext1'], ['mounts:fstype:[1]', 'ext3'] ] </code></pre> <p>Well I can easily change the list to this </p> <h1>(Lists aren't separated by ':')</h1> <pre><code>[ ['mounts:device', '/dev/sda3'], ['mounts:fstype[0]', 'ext1'], ['mounts:fstype[1]', 'ext3'] ] </code></pre> <p>Whichever suits this problem better.<br> The problem is to create a dictionary:</p> <pre><code>{ 'mounts': { 'device': '/dev/sda3', 'fstype': [ 'ext1', 'ext3' ] } </code></pre> <p>It should also be possible to have lists in lists, for example:</p> <pre><code>['mounts:test:lala:fstype[0][0]', 'abc'] </code></pre> <p>or</p> <pre><code>['mounts:test:lala:fstype:[0]:[0]', 'abc'] </code></pre> <p>This is what I have so far:</p> <pre><code>def unflatten(pair_list): root = {} for pair in pair_list: context = root key_list = pair[0].split(':') key_list_last_item = key_list.pop() for key in key_list: if key not in context: context[key] = {} context = context[key] context[key_list_last_item] = pair[1] return root </code></pre> <p>Based on this answer <a href="http://stackoverflow.com/a/18648007/5413035">http://stackoverflow.com/a/18648007/5413035</a>, but as requested I need recursiveness and lists in the mix. </p> <p>Thanks in advance</p>
0
2016-09-14T15:01:23Z
39,494,221
<pre><code>input1=[ ['mounts:device', '/dev/sda3'], ['mounts:fstype:[0]', 'ext1'], ['mounts:fstype:[1]', 'ext3'] ] input2={x[1]:x[0].split(':')[1] for x in input1} input3=['ext3', 'ext1', '/dev/sda3'] input4=['fstype', 'fstype', 'device'] res={} for x,y in zip(input3, input4): res.setdefault(y,[]).append(x) res1=res.keys() res2=res.values() res3=[x[0] for x in res2 if len(x)==1]+[x for x in res2 if len(x)&gt;1] result=dict(zip(res1,res3)) print result </code></pre> <p>Output :</p> <pre><code>{'device': '/dev/sda3', 'fstype': ['ext3', 'ext1']} </code></pre>
0
2016-09-14T15:23:44Z
[ "python", "python-3.x", "dictionary" ]
Make nested dictionary, unflatten
39,493,783
<p>I have a list of lists containing key and value like so:</p> <pre><code>[ ['mounts:device', '/dev/sda3'], ['mounts:fstype:[0]', 'ext1'], ['mounts:fstype:[1]', 'ext3'] ] </code></pre> <p>Well I can easily change the list to this </p> <h1>(Lists aren't separated by ':')</h1> <pre><code>[ ['mounts:device', '/dev/sda3'], ['mounts:fstype[0]', 'ext1'], ['mounts:fstype[1]', 'ext3'] ] </code></pre> <p>Whichever suits this problem better.<br> The problem is to create a dictionary:</p> <pre><code>{ 'mounts': { 'device': '/dev/sda3', 'fstype': [ 'ext1', 'ext3' ] } </code></pre> <p>It should also be possible to have lists in lists, for example:</p> <pre><code>['mounts:test:lala:fstype[0][0]', 'abc'] </code></pre> <p>or</p> <pre><code>['mounts:test:lala:fstype:[0]:[0]', 'abc'] </code></pre> <p>This is what I have so far:</p> <pre><code>def unflatten(pair_list): root = {} for pair in pair_list: context = root key_list = pair[0].split(':') key_list_last_item = key_list.pop() for key in key_list: if key not in context: context[key] = {} context = context[key] context[key_list_last_item] = pair[1] return root </code></pre> <p>Based on this answer <a href="http://stackoverflow.com/a/18648007/5413035">http://stackoverflow.com/a/18648007/5413035</a>, but as requested I need recursiveness and lists in the mix. </p> <p>Thanks in advance</p>
0
2016-09-14T15:01:23Z
39,494,699
<p>Here is a solution using a <strong>tree</strong> of <code>dict</code>:</p> <pre><code>import collections def tree(): return collections.defaultdict(tree) def unflatten(pair_list): root = tree() for mount, path in pair_list: parts = mount.split(":") curr = root for part in parts[:-1]: index = int(part[1:-1]) if part[0] == "[" else part curr = curr[index] part = parts[-1] index = int(part[1:-1]) if part[0] == "[" else part curr[index] = path return root </code></pre> <p>With the following input:</p> <pre><code>pair_list = [ ['mounts:device', '/dev/sda3'], ['mounts:fstype:[0]', 'ext1'], ['mounts:fstype:[1]', 'ext3'], ['mounts:test:lala:fstype:[0]:[0]', 'abc'] ] </code></pre> <p>You'll get:</p> <pre><code>{ "mounts": { "fstype": { "0": "ext1", "1": "ext3" }, "test": { "lala": { "fstype": { "0": { "0": "abc" } } } }, "device": "/dev/sda3" } } </code></pre> <p>Then you can use the recursive function <code>make_list</code>bellow to turn the integer indexes in a <code>list</code>.</p> <pre><code>def make_list(root): if isinstance(root, str): return root keys = list(root.keys()) if all(isinstance(k, int) for k in keys): values = [None] * (max(keys) + 1) for k in keys: values[k] = make_list(root[k]) return values else: return {k: make_list(v) for k, v in root.items()} </code></pre> <p>Here is the result with the <code>pair_list</code>:</p> <pre><code>flat = unflatten(pair_list) flat = make_list(flat) </code></pre> <p>You'll get:</p> <pre><code>{'mounts': {'device': '/dev/sda3', 'fstype': ['ext1', 'ext3'], 'test': {'lala': {'fstype': [['abc']]}}}} </code></pre> <p>Is it fine?</p>
1
2016-09-14T15:46:05Z
[ "python", "python-3.x", "dictionary" ]
SQLAlchemy ThreadPoolExecutor "Too many clients"
39,493,785
<p>I wrote a script with this sort of logic in order to insert many records into a PostgreSQL table as they are generated.</p> <pre><code>#!/usr/bin/env python3 import asyncio from concurrent.futures import ProcessPoolExecutor as pool from functools import partial import sqlalchemy as sa from sqlalchemy.ext.declarative import declarative_base metadata = sa.MetaData(schema='stackoverflow') Base = declarative_base(metadata=metadata) class Example(Base): __tablename__ = 'example' pk = sa.Column(sa.Integer, primary_key=True) text = sa.Column(sa.Text) sa.event.listen(Base.metadata, 'before_create', sa.DDL('CREATE SCHEMA IF NOT EXISTS stackoverflow')) engine = sa.create_engine( 'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow' ) Base.metadata.create_all(engine) session = sa.orm.sessionmaker(bind=engine, autocommit=True)() def task(value): engine.dispose() with session.begin(): session.add(Example(text=value)) async def infinite_task(loop): spawn_task = partial(loop.run_in_executor, None, task) while True: await asyncio.wait([spawn_task(value) for value in range(10000)]) def main(): loop = asyncio.get_event_loop() with pool() as executor: loop.set_default_executor(executor) asyncio.ensure_future(infinite_task(loop)) loop.run_forever() loop.close() if __name__ == '__main__': main() </code></pre> <p>This code works just fine, creating a pool of as many processes as I have CPU cores, and happily chugging along forever. I wanted to see how threads would compare to processes, but I could not get a working example. Here are the changes I made:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor as pool session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True) Session = sa.orm.scoped_session(session_maker) def task(value): engine.dispose() # create new session per thread session = Session() with session.begin(): session.add(Example(text=value)) # remove session once the work is done Session.remove() </code></pre> <p>This version runs for a while before a flood of "too many clients" exceptions:</p> <pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already </code></pre> <p>What am I missing?</p>
0
2016-09-14T15:01:35Z
39,508,857
<p>It looks like you're opening a lot of new connections without closing them; try adding engine.dispose() at the end:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor as pool session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True) Session = sa.orm.scoped_session(session_maker) def task(value): engine.dispose() # create new session per thread session = Session() with session.begin(): session.add(Example(text=value)) # remove session once the work is done Session.remove() engine.dispose() </code></pre> <p>Keep in mind the cost of a new connection: ideally you should have one connection per process/thread, but I'm not sure how ThreadPoolExecutor behaves here, and the connections are probably not being closed when a thread finishes its work.</p>
0
2016-09-15T10:31:27Z
[ "python", "multithreading", "postgresql", "sqlalchemy", "python-asyncio" ]
SQLAlchemy ThreadPoolExecutor "Too many clients"
39,493,785
<p>I wrote a script with this sort of logic in order to insert many records into a PostgreSQL table as they are generated.</p> <pre><code>#!/usr/bin/env python3 import asyncio from concurrent.futures import ProcessPoolExecutor as pool from functools import partial import sqlalchemy as sa from sqlalchemy.ext.declarative import declarative_base metadata = sa.MetaData(schema='stackoverflow') Base = declarative_base(metadata=metadata) class Example(Base): __tablename__ = 'example' pk = sa.Column(sa.Integer, primary_key=True) text = sa.Column(sa.Text) sa.event.listen(Base.metadata, 'before_create', sa.DDL('CREATE SCHEMA IF NOT EXISTS stackoverflow')) engine = sa.create_engine( 'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow' ) Base.metadata.create_all(engine) session = sa.orm.sessionmaker(bind=engine, autocommit=True)() def task(value): engine.dispose() with session.begin(): session.add(Example(text=value)) async def infinite_task(loop): spawn_task = partial(loop.run_in_executor, None, task) while True: await asyncio.wait([spawn_task(value) for value in range(10000)]) def main(): loop = asyncio.get_event_loop() with pool() as executor: loop.set_default_executor(executor) asyncio.ensure_future(infinite_task(loop)) loop.run_forever() loop.close() if __name__ == '__main__': main() </code></pre> <p>This code works just fine, creating a pool of as many processes as I have CPU cores, and happily chugging along forever. I wanted to see how threads would compare to processes, but I could not get a working example. Here are the changes I made:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor as pool session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True) Session = sa.orm.scoped_session(session_maker) def task(value): engine.dispose() # create new session per thread session = Session() with session.begin(): session.add(Example(text=value)) # remove session once the work is done Session.remove() </code></pre> <p>This version runs for a while before a flood of "too many clients" exceptions:</p> <pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already </code></pre> <p>What am I missing?</p>
0
2016-09-14T15:01:35Z
39,760,228
<p>It turns out that the problem is <code>engine.dispose()</code>, which, in the words of Mike Bayer (zzzeek) "is leaving PG connections lying open to be garbage collected."</p> <p>Source: <a href="https://groups.google.com/forum/#!topic/sqlalchemy/zhjCBNebnDY" rel="nofollow">https://groups.google.com/forum/#!topic/sqlalchemy/zhjCBNebnDY</a></p> <p>So the updated <code>task</code> function looks like this:</p> <pre><code>def task(value): # create new session per thread session = Session() with session.begin(): session.add(Example(text=value)) # remove session once the work is done Session.remove() </code></pre>
1
2016-09-29T02:01:55Z
[ "python", "multithreading", "postgresql", "sqlalchemy", "python-asyncio" ]
Tkinter class module reference exception
39,493,805
<p>I'm having some trouble calling a module(updateUI) within a class (Eventsim).</p> <p>The line Sim = EventSim() throws an exception because it's missing an argument (parent). I can't figure out how to fix this / reference the parent object.</p> <p>This is my first attempt wit Tkinter and my python knowledge is also rather limited (for now).</p> <pre><code>from Tkinter import * class EventSim(Frame): def __init__(self, parent): Frame.__init__(self, parent) self.parent = parent def updateUI(self,IP_Address,Port_Number,Events_Directory): self.parent.title('ECP Event Simulator') self.parent.resizable(0, 0) self.pack(fill=BOTH, expand=True) frame1 = Frame(self) frame1.pack(fill=X) frame2 = Frame(self) frame2.pack(fill=X) frame3 = Frame(self) frame3.pack(fill=X) frame4 = Frame(self) frame4.pack(fill=X) frame5 = Frame(self) frame5.pack(fill=X) frame6 = Frame(self) frame6.pack(fill=X,pady=(10,30)) frame7 = Frame(self) frame7.pack(fill=X) frame8 = Frame(self) frame8.pack(fill=X,pady=(10,0)) Main_Label = Label(frame1,text='ECP EventSim') Main_Label.pack(side=LEFT,padx=100) IP_Label = Label(frame2,text='IP Address:') IP_Label.pack(side=LEFT,padx=10) Port_Label = Label(frame2,text='Port:') Port_Label.pack(side=RIGHT,padx=70) IP_Text = Entry(frame3) IP_Text.pack(fill=X,side=LEFT,padx=10) IP_Text = Entry(frame3) IP_Text.pack(fill=X,side=RIGHT,padx=10) Dir_Label = Label(frame4,text='Events Directory:') Dir_Label.pack(side=LEFT,padx=10) Dir_Text = Entry(frame5) Dir_Text.pack(fill=X,side=LEFT,padx=10,expand=True) Save_Button = Button(frame6,text='Save Config') Save_Button.pack(fill=X,side=LEFT,padx=10,expand=True) Con_Button = Button(frame7,text='Connect') Con_Button.pack(fill=X,side=LEFT,padx=10,expand=True) Send_Button = Button(frame8,text='Start Sending Events') Send_Button.pack(fill=X,side=LEFT,padx=10,expand=True) def main(): root = Tk() root.geometry("300x300+750+300") app = EventSim(root) root.mainloop() Sim = EventSim() Sim.updateUI('1','1','1') main() </code></pre>
0
2016-09-14T15:02:19Z
39,494,069
<p>The <code>parent</code> should be <code>root</code>. So, replacing:</p> <pre><code>def main(): root = Tk() root.geometry("300x300+750+300") app = EventSim(root) root.mainloop() Sim = EventSim() Sim.updateUI('1','1','1') main() </code></pre> <p>with:</p> <pre><code>root = Tk() root.geometry("300x300+750+300") Sim = EventSim(root) Sim.updateUI('1','1','1') root.mainloop() </code></pre> <p>brings up the desired window. The <code>updateUI</code> method requires work to populate the entry fields but you can remove its <code>parent</code> parameter since you already have the <code>parent</code> instance variable.</p>
2
2016-09-14T15:15:15Z
[ "python", "class", "tkinter" ]
Tkinter class module reference exception
39,493,805
<p>I'm having some trouble calling a module(updateUI) within a class (Eventsim).</p> <p>The line Sim = EventSim() throws an exception because it's missing an argument (parent). I can't figure out how to fix this / reference the parent object.</p> <p>This is my first attempt wit Tkinter and my python knowledge is also rather limited (for now).</p> <pre><code>from Tkinter import * class EventSim(Frame): def __init__(self, parent): Frame.__init__(self, parent) self.parent = parent def updateUI(self,IP_Address,Port_Number,Events_Directory): self.parent.title('ECP Event Simulator') self.parent.resizable(0, 0) self.pack(fill=BOTH, expand=True) frame1 = Frame(self) frame1.pack(fill=X) frame2 = Frame(self) frame2.pack(fill=X) frame3 = Frame(self) frame3.pack(fill=X) frame4 = Frame(self) frame4.pack(fill=X) frame5 = Frame(self) frame5.pack(fill=X) frame6 = Frame(self) frame6.pack(fill=X,pady=(10,30)) frame7 = Frame(self) frame7.pack(fill=X) frame8 = Frame(self) frame8.pack(fill=X,pady=(10,0)) Main_Label = Label(frame1,text='ECP EventSim') Main_Label.pack(side=LEFT,padx=100) IP_Label = Label(frame2,text='IP Address:') IP_Label.pack(side=LEFT,padx=10) Port_Label = Label(frame2,text='Port:') Port_Label.pack(side=RIGHT,padx=70) IP_Text = Entry(frame3) IP_Text.pack(fill=X,side=LEFT,padx=10) IP_Text = Entry(frame3) IP_Text.pack(fill=X,side=RIGHT,padx=10) Dir_Label = Label(frame4,text='Events Directory:') Dir_Label.pack(side=LEFT,padx=10) Dir_Text = Entry(frame5) Dir_Text.pack(fill=X,side=LEFT,padx=10,expand=True) Save_Button = Button(frame6,text='Save Config') Save_Button.pack(fill=X,side=LEFT,padx=10,expand=True) Con_Button = Button(frame7,text='Connect') Con_Button.pack(fill=X,side=LEFT,padx=10,expand=True) Send_Button = Button(frame8,text='Start Sending Events') Send_Button.pack(fill=X,side=LEFT,padx=10,expand=True) def main(): root = Tk() root.geometry("300x300+750+300") app = EventSim(root) root.mainloop() Sim = EventSim() Sim.updateUI('1','1','1') main() </code></pre>
0
2016-09-14T15:02:19Z
39,494,169
<p>Remove <code>Sim = EventSim()</code> and move <code>Sim.updateUI('1','1','1')</code> to <code>main</code>: </p> <pre><code>def main(): root = Tk() root.geometry("300x300+750+300") app = EventSim(root) app.updateUI('1','1','1') root.mainloop() main() </code></pre>
1
2016-09-14T15:20:59Z
[ "python", "class", "tkinter" ]
Pandas DataFrame with levels of graph nodes and edges to square matrix
39,493,963
<p>My Googlefu has failed me!</p> <p>I have a Pandas <code>DataFrame</code> of the form:</p> <pre><code>Level 1 Level 2 Level 3 Level 4 ------------------------------------- A B C NaN A B D E A B D F G H NaN NaN G I J K </code></pre> <p>It basically contains nodes of a graph with the levels depicting an outgoing edge from a level of lower order to a level of a higher order. I want to convert the DataFrame/create a new DataFrame of the form:</p> <pre><code> A B C D E F G H I J K --------------------------------------------- A | 0 1 0 0 0 0 0 0 0 0 0 B | 0 0 1 1 0 0 0 0 0 0 0 C | 0 0 0 0 0 0 0 0 0 0 0 D | 0 0 0 0 1 1 0 0 0 0 0 E | 0 0 0 0 0 0 0 0 0 0 0 F | 0 0 0 0 0 0 0 0 0 0 0 G | 0 0 0 0 0 0 0 1 1 0 0 H | 0 0 0 0 0 0 0 0 0 0 0 I | 0 0 0 0 0 0 0 0 0 1 0 J | 0 0 0 0 0 0 0 0 0 0 1 K | 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>A cell containing <code>1</code> depicts an outgoing edge from the corresponding row to the corresponding column. Is there a Pythonic way to achieve this without loops and conditions in Pandas?</p>
1
2016-09-14T15:09:22Z
39,494,660
<p>Try this code:</p> <pre><code>df = pd.DataFrame({'level_1':['A', 'A', 'A', 'G', 'G'], 'level_2':['B', 'B', 'B', 'H', 'I'], 'level_3':['C', 'D', 'D', np.nan, 'J'], 'level_4':[np.nan, 'E', 'F', np.nan, 'K']}) </code></pre> <p>Your input dataframe is:</p> <pre><code> level_1 level_2 level_3 level_4 0 A B C NaN 1 A B D E 2 A B D F 3 G H NaN NaN 4 G I J K </code></pre> <p>And the solution is:</p> <pre><code># Get unique values from input dataframe and filter out 'nan' values list_nodes = [] for i_col in df.columns.tolist(): list_nodes.extend(filter(lambda v: v==v, df[i_col].unique().tolist())) # Initialize your result dataframe df_res = pd.DataFrame(columns=sorted(list_nodes), index=sorted(list_nodes)) df_res = df_res.fillna(0) # Get 'index-column' pairs from input dataframe ('nan's are exluded) list_indexes = [] for i_col in range(df.shape[1]-1): list_indexes.extend(list(set([tuple(i) for i in df.iloc[:, i_col:i_col+2]\ .dropna(axis=0).values.tolist()]))) # Use 'index-column' pairs to fill the result dataframe for i_list_indexes in list_indexes: df_res.set_value(i_list_indexes[0], i_list_indexes[1], 1) </code></pre> <p>And the final result is:</p> <pre><code> A B C D E F G H I J K A 0 1 0 0 0 0 0 0 0 0 0 B 0 0 1 1 0 0 0 0 0 0 0 C 0 0 0 0 0 0 0 0 0 0 0 D 0 0 0 0 1 1 0 0 0 0 0 E 0 0 0 0 0 0 0 0 0 0 0 F 0 0 0 0 0 0 0 0 0 0 0 G 0 0 0 0 0 0 0 1 1 0 0 H 0 0 0 0 0 0 0 0 0 0 0 I 0 0 0 0 0 0 0 0 0 1 0 J 0 0 0 0 0 0 0 0 0 0 1 K 0 0 0 0 0 0 0 0 0 0 0 </code></pre>
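<p>For what it's worth, here is a shorter sketch of the same idea, reusing the example <code>df</code> built above: collect the (source, target) pairs from consecutive level columns, then let <code>pd.crosstab</code> build the 0/1 matrix (the 'src'/'dst' column names and variable names are just illustrative):</p> <pre><code>import pandas as pd  # df is the example frame constructed in the snippet above

# edges between consecutive level columns, NaNs dropped, duplicates removed
cols = df.columns.tolist()
edges = pd.concat(
    [df[[a, b]].dropna().rename(columns={a: 'src', b: 'dst'})
     for a, b in zip(cols, cols[1:])],
    ignore_index=True
).drop_duplicates()

# square 0/1 adjacency matrix over every node that appears
nodes = sorted(set(edges['src']) | set(edges['dst']))
adj = (pd.crosstab(edges['src'], edges['dst'])
         .reindex(index=nodes, columns=nodes, fill_value=0))
print(adj)
</code></pre>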
1
2016-09-14T15:43:42Z
[ "python", "pandas", "graph", "digraphs" ]
Using LabelEncoder for a series in scikitlearn
39,494,001
<p>I have a Column in a Dataset which has categorical values and I want to convert them in Numerical values. I am trying to use LabelEncoder but get errors doing so.</p> <pre><code>from sklearn.preprocessing import LabelEncoder m = hsp_train["Alley"] m_enc = LabelEncoder() j = m_enc.fit_transform(m) </code></pre> <p>I am getting an error: </p> <blockquote> <p>unorderable types: float() > str()</p> </blockquote> <p>The series in the Column has 3 values. I want them to be 0, 1, 2 respectively but I am getting that error.</p> <p>I also tried this:</p> <pre><code>l = hsp_train["Alley"] l_enc = pd.factorize(l) hsp_train["Alley"] = l_enc[0] </code></pre> <p>But this is giving me values -1, 1, 2. which I don't want I want it from 1.</p>
1
2016-09-14T15:11:30Z
39,495,062
<p>It's obviously clear that you have missing values in your series. If you want to remove <code>NaN</code> values from your series, just do <code>hsp_train["Alley"].dropna()</code></p> <p><strong>Illustration:</strong></p> <pre><code>df = pd.DataFrame({'Categorical': ['apple', 'mango', 'apple', 'orange', 'mango', 'apple', 'orange', np.NaN]}) </code></pre> <p>Using <code>LabelEncoder</code> to encode the categorical labels:</p> <pre><code>enc = LabelEncoder() enc.fit_transform(df['Categorical']) </code></pre> <p>Gives:</p> <blockquote> <p>TypeError: unorderable types: float() > str()</p> </blockquote> <p>Doing <code>pd.factorize</code> automatically assigns -1 to missing values by default and hence you get those values:</p> <pre><code>pd.factorize(df['Categorical'])[0] array([ 0, 1, 0, 2, 1, 0, 2, -1]) </code></pre> <hr> <p>If you do not want <code>NAN</code> values to be identified and to consider them just as any string, you can do it while reading process using <code>na_filter</code>: </p> <pre><code>df = pd.read_csv(data, na_filter=False, ...) </code></pre> <p>It also improves the performance of reading a relatively large file drastically.</p> <hr> <p>Or, you could fill all the <code>NaN</code> values using <code>fillna</code> to the desired string of your choice:</p> <pre><code>df.fillna('Na', inplace=True) </code></pre> <p>This replaces all the <code>NaN</code> values to your string value "Na" and you can continue as before.</p>
2
2016-09-14T16:06:54Z
[ "python", "pandas", "machine-learning", "scikit-learn" ]
Custom distribution in scipy with pdf given
39,494,046
<p>I try to define a custom distribution with pdf given via scipy.stats</p> <pre><code>import numpy as np from scipy.stats import rv_continuous class CustomDistribution(rv_continuous): def __init__(self, pdf=None): super(CustomDistribution, self).__init__() self.custom_pdf = pdf print "Initialized!" def _pdf(self, x, *args): if self.custom_pdf is None: # print 'PDF is not overridden' return super(CustomDistribution, self)._pdf(x, *args) else: # print 'PDF is overridden' return self.custom_pdf(x) def g(x, mu): if x &lt; 0: return 0 else: return mu * np.exp(- mu * x) my_exp_dist = CustomDistribution(pdf=lambda x: g(x, .5)) print my_exp_dist.mean() </code></pre> <p>As you see I try to define exponential distribution wuth parameter mu=0.5, but the output is as follows.</p> <blockquote> <p>Initialized!</p> <p>D:\Anaconda2\lib\site-packages\scipy\integrate\quadpack.py:357: </p> <p>IntegrationWarning: The algorithm does not converge. Roundoff error is detected in the extrapolation table. It is assumed that the requested tolerance cannot be achieved, and that the returned result (if full_output = 1) is the best which can be obtained.<br> warnings.warn(msg, IntegrationWarning)</p> <p>D:\Anaconda2\lib\site-packages\scipy\integrate\quadpack.py:357: </p> <p>IntegrationWarning: The maximum number of subdivisions (50) has been achieved.</p> <p>2.0576933609</p> <p>If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used. warnings.warn(msg, IntegrationWarning)</p> </blockquote> <p>What should I do to improve this?</p> <p><strong>NOTE:</strong> The problem of computational accuracy is discussed in <a href="https://github.com/scipy/scipy/issues/6579" rel="nofollow">this GitHub issue</a>.</p>
1
2016-09-14T15:14:07Z
39,495,944
<p>This seems to do what you want. An instance of the class must be given a value for the lambda parameter each time the instance is created. rv_continuous is clever enough to infer items that you do not supply but you can, of course, offer more definitions that I have here.</p> <pre><code>from scipy.stats import rv_continuous import numpy class Neg_exp(rv_continuous): "negative exponential" def _pdf(self, x, lamda): self.lamda=lamda return lamda*numpy.exp(-lamda*x) def _cdf(self, x, lamda): return 1-numpy.exp(-lamda*x) def _stats(self,lamda): return [1/self.lamda,0,0,0] neg_exp=Neg_exp(name="negative exponential",a=0) print (neg_exp.pdf(0,.5)) print (neg_exp.pdf(5,.5)) print (neg_exp.stats(0.5)) print (neg_exp.rvs(0.5)) </code></pre>
0
2016-09-14T16:58:23Z
[ "python", "scipy", "statistics" ]
Progress bar for pandas.DataFrame.to_sql
39,494,056
<p>I want to migrate data from a large csv file to sqlite3 database.</p> <p>My code on Python 3.5 using pandas:</p> <pre><code>con = sqlite3.connect(DB_FILENAME) df = pd.read_csv(MLS_FULLPATH) df.to_sql(con=con, name="MLS", if_exists="replace", index=False) </code></pre> <p>Is it possible to print current status (progress bar) of execution of to_sql method? </p> <p>I looked the article about <a href="https://github.com/tqdm/tqdm" rel="nofollow">tqdm</a>, but didn't find how to do this.</p>
2
2016-09-14T15:14:41Z
39,495,229
<p>Unfortunately <code>DataFrame.to_sql</code> does not provide a chunk-by-chunk callback, which is needed by tqdm to update its status. However, you can process the dataframe chunk by chunk:</p> <pre><code>import sqlite3 import pandas as pd from tqdm import tqdm DB_FILENAME='/tmp/test.sqlite' def chunker(seq, size): # from http://stackoverflow.com/a/434328 return (seq[pos:pos + size] for pos in range(0, len(seq), size)) def insert_with_progress(df, dbfile): con = sqlite3.connect(dbfile) chunksize = int(len(df) / 10) # 10% with tqdm(total=len(df)) as pbar: for i, cdf in enumerate(chunker(df, chunksize)): replace = "replace" if i == 0 else "append" cdf.to_sql(con=con, name="MLS", if_exists=replace, index=False) pbar.update(chunksize) df = pd.DataFrame({'a': range(0,100000)}) insert_with_progress(df, DB_FILENAME) </code></pre> <p>Note I'm generating the DataFrame inline here for the sake of having a complete workable example without dependency.</p> <p>The result is quite stunning:</p> <p><a href="http://i.stack.imgur.com/Alz8X.png" rel="nofollow"><img src="http://i.stack.imgur.com/Alz8X.png" alt="enter image description here"></a></p>
2
2016-09-14T16:15:34Z
[ "python", "sqlite", "pandas", "dataframe", "tqdm" ]
Boto3 - Delete_snapshot not evaluating variables
39,494,092
<p>I'm trying to run boto3 to loop through snapshots older than 14 days. It can find all the snapshots older than 14 days fine, and I've verified that all that works okay. The problem is when it runs through the dictionary trying to delete, it looks like the function isn't correctly evaluating the variable (See below). </p> <p>It seems to just include it as a string. </p> <p>The loop runs through the dict using a "for snapshot in ..." if'ing the tags to find the snapshots ready for deletion. Here's the 'if' part:</p> <pre><code> if snap_start_time &lt; expiry: # check if it's more than a &lt;expiry&gt; old print "Deleting Snapshot: " + snapshot['SnapshotId'] response = ec2client.delete_snapshot( SnapshotId=snapshot['SnapshotId'] ) </code></pre> <p>errors here:</p> <pre><code>Deleting Snapshot: snap-f4f0079d Traceback (most recent call last): File "./aws-snap.py", line 27, in &lt;module&gt; SnapshotId=snapshot['SnapshotId'] File "/usr/lib/python2.6/site-packages/botocore/client.py", line 159, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/lib/python2.6/site-packages/botocore/client.py", line 494, in _make_api_call raise ClientError(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (InvalidSnapshot.NotFound) when calling the DeleteSnapshot operation: None </code></pre> <p>Any clues? \o/</p>
1
2016-09-14T15:16:57Z
39,506,812
<p>As it turns out, referencing straight from the dictionary is a bad idea. It needs to be wrapped in str() and provided with the DryRun=False option too.</p>
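<p>Applied to the loop from the question, that change looks roughly like this (a sketch, not verified against your snapshot data):</p> <pre><code>if snap_start_time &lt; expiry:  # older than the retention window
    snapshot_id = str(snapshot['SnapshotId'])
    print "Deleting Snapshot: " + snapshot_id
    response = ec2client.delete_snapshot(
        SnapshotId=snapshot_id,
        DryRun=False
    )
</code></pre>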
0
2016-09-15T08:52:28Z
[ "python", "boto3", "amazon-api" ]
Boto3 - Delete_snapshot not evaluating variables
39,494,092
<p>I'm trying to run boto3 to loop through snapshots older than 14 days. It can find all the snapshots older than 14 days fine, and I've verified that all that works okay. The problem is when it runs through the dictionary trying to delete, it looks like the function isn't correctly evaluating the variable (See below). </p> <p>It seems to just include it as a string. </p> <p>The loop runs through the dict using a "for snapshot in ..." if'ing the tags to find the snapshots ready for deletion. Here's the 'if' part:</p> <pre><code> if snap_start_time &lt; expiry: # check if it's more than a &lt;expiry&gt; old print "Deleting Snapshot: " + snapshot['SnapshotId'] response = ec2client.delete_snapshot( SnapshotId=snapshot['SnapshotId'] ) </code></pre> <p>errors here:</p> <pre><code>Deleting Snapshot: snap-f4f0079d Traceback (most recent call last): File "./aws-snap.py", line 27, in &lt;module&gt; SnapshotId=snapshot['SnapshotId'] File "/usr/lib/python2.6/site-packages/botocore/client.py", line 159, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/lib/python2.6/site-packages/botocore/client.py", line 494, in _make_api_call raise ClientError(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (InvalidSnapshot.NotFound) when calling the DeleteSnapshot operation: None </code></pre> <p>Any clues? \o/</p>
1
2016-09-14T15:16:57Z
39,529,383
<p>I suspect the SnapshotId might not be being passed as a string. Convert the SnapshotId to a string and pass that for deletion: <code>str(snapshot['SnapshotId'])</code></p>
0
2016-09-16T10:36:11Z
[ "python", "boto3", "amazon-api" ]
Is there any column match or row match function in python?
39,494,152
<p>I have two data frames, let's say:</p> <p>dataframe A with column 'name'</p> <pre><code> name 0 4 1 2 2 1 3 3 </code></pre> <p>Another dataframe B with two columns, i.e. name and value</p> <pre><code> name value 0 3 5 1 2 6 2 4 7 3 1 8 </code></pre> <p>I want to rearrange the values in dataframe B according to the name column in dataframe A.</p> <p>I am expecting a final dataframe similar to this: </p> <pre><code> name value 0 4 7 1 2 6 2 1 8 3 3 5 </code></pre>
0
2016-09-14T15:19:59Z
39,494,321
<p>Here are two options:</p> <pre><code>dfB.set_index('name').loc[dfA.name].reset_index() Out: name value 0 4 7 1 2 6 2 1 8 3 3 5 </code></pre> <p>Or, </p> <pre><code>dfA['value'] = dfA['name'].map(dfB.set_index('name')['value']) dfA Out: name value 0 4 7 1 2 6 2 1 8 3 3 5 </code></pre> <hr> <p>Timings:</p> <pre><code>import numpy as np import pandas as pd prng = np.random.RandomState(0) names = np.arange(10**7) prng.shuffle(names) dfA = pd.DataFrame({'name': names}) prng.shuffle(names) dfB = pd.DataFrame({'name': names, 'value': prng.randint(0, 100, 10**7)}) %timeit dfB.set_index('name').loc[dfA.name].reset_index() 1 loop, best of 3: 2.27 s per loop %timeit dfA['value'] = dfA['name'].map(dfB.set_index('name')['value']) 1 loop, best of 3: 1.65 s per loop %timeit dfB.set_index('name').ix[dfA.name].reset_index() 1 loop, best of 3: 1.66 s per loop </code></pre>
1
2016-09-14T15:28:27Z
[ "python", "pandas", "dataframe" ]
PEP8 import conventions
39,494,192
<p>I'm trying to stick to best practices when it comes to importing modules, and I'm trying to understand what <a href="https://www.python.org/dev/peps/pep-0008/#imports" rel="nofollow">PEP8</a> says about this. </p> <p>Let's say my framework has hundreds of classes and a few dozen packages. For instance, PyQt5 or sympy would be good candidates... what'd be the best choice from this set?</p> <p><strong>a) Import everything</strong></p> <pre><code>from PyQt5.QtCore import * from PyQt5.QtGui import * from PyQt5.QtWidgets import * print(QPoint) print(QPixmap) print(QApplication) </code></pre> <p><strong>b) Import only the big packages and use prefixes throughout the application</strong></p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets print(QtCore.QPoint) print(QtGui.QPixmap) print(QtWidgets.QApplication) </code></pre> <p><strong>c) Import specific classes from the big packages</strong></p> <pre><code>from PyQt5.QtCore import QPoint from PyQt5.QtGui import QPixmap from PyQt5.QtWidgets import QApplication print(QPoint) print(QPixmap) print(QApplication) </code></pre> <p>Option a) is discouraged by PEP8, but what about b) or c)... what is PEP8's recommendation about it?</p>
0
2016-09-14T15:22:19Z
39,494,309
<p>There is no recommendation because it depends too much on your project, and what potential name clashes you may experience. If you don't already have a <code>QPoint</code> object (either of your own, or potentially from a different package), you may find it easier to read and write just the <code>QPoint</code> symbol where it is needed.</p> <p>However, should you in the future interact with a different package that also provides a <code>QPoint</code>, you would need either to refer to them via their parent package, or use the <code>from PyQt5.QtCore import QPoint as PyQt5QPoint</code> syntax before referring to <code>PyQt5Point</code> in subsequent code.</p>
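<p>For example, a small runnable sketch of how such a clash could be handled (the local <code>QPoint</code> here is a made-up stand-in for a second source of that name):</p> <pre><code>from collections import namedtuple
from PyQt5.QtCore import QPoint as QtQPoint  # aliased so it cannot shadow the local name

QPoint = namedtuple('QPoint', 'x y')  # hypothetical local type reusing the name

qt_point = QtQPoint(1, 2)  # Qt's C++-backed point
my_point = QPoint(1, 2)    # the plain local tuple
</code></pre>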
0
2016-09-14T15:28:02Z
[ "python", "pyqt", "pep8" ]
How to plot data after groupby
39,494,246
<p>I have a data frame similar to this</p> <pre><code>import pandas as pd df = pd.DataFrame([['1','3','1','2','3','1','2','2','1','1'], ['ONE','TWO','ONE','ONE','ONE','TWO','ONE','TWO','ONE','THREE']]).T df.columns = [['age','data']] print(df) #printing dataframe. </code></pre> <p>I performed the groupby function on it to get the required output.</p> <pre><code>df['COUNTER'] =1 #initially, set that counter to 1. group_data = df.groupby(['age','data'])['COUNTER'].sum() #sum function print(group_data) </code></pre> <p>Now I want to plot the output using matplotlib. Please help me with it; I am not able to figure out how to start or what to do. I want to plot using the counter value, as something similar to a bar graph.</p>
1
2016-09-14T15:25:10Z
39,494,317
<p>Try:</p> <pre><code>group_data = group_data.reset_index() </code></pre> <p>in order to get rid of the multiple index that the <code>groupby()</code> has created for you.</p> <p>Your <code>print(group_data)</code> will give you this:</p> <pre><code>In [24]: group_data = df.groupby(['age','data'])['COUNTER'].sum() #sum function In [25]: print(group_data) age data 1 ONE 3 THREE 1 TWO 1 2 ONE 2 TWO 1 3 ONE 1 TWO 1 Name: COUNTER, dtype: int64 </code></pre> <p>Whereas, reseting will 'simplify' the new index:</p> <pre><code>In [26]: group_data = group_data.reset_index() In [27]: group_data Out[27]: age data COUNTER 0 1 ONE 3 1 1 THREE 1 2 1 TWO 1 3 2 ONE 2 4 2 TWO 1 5 3 ONE 1 6 3 TWO 1 </code></pre> <p>Then depending on what it is exactly that you want to plot, you might want to take a look at the <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html" rel="nofollow">Matplotlib docs</a></p> <p><strong>EDIT</strong></p> <p>I now read more carefully that you want to create a 'bar' chart.</p> <p>If that is the case then I would take a step back and <strong>not</strong> use <code>reset_index()</code> on the groupby result. Instead, try this:</p> <pre><code>In [46]: fig = group_data.plot.bar() In [47]: fig.figure.show() </code></pre> <p>I hope this helps</p>
2
2016-09-14T15:28:22Z
[ "python", "pandas", "matplotlib" ]
How to plot data after groupby
39,494,246
<p>I have a data frame similar to this</p> <pre><code>import pandas as pd df = pd.DataFrame([['1','3','1','2','3','1','2','2','1','1'], ['ONE','TWO','ONE','ONE','ONE','TWO','ONE','TWO','ONE','THREE']]).T df.columns = [['age','data']] print(df) #printing dataframe. </code></pre> <p>I performed the groupby function on it to get the required output.</p> <pre><code>df['COUNTER'] =1 #initially, set that counter to 1. group_data = df.groupby(['age','data'])['COUNTER'].sum() #sum function print(group_data) </code></pre> <p>Now I want to plot the output using matplotlib. Please help me with it; I am not able to figure out how to start or what to do. I want to plot using the counter value, as something similar to a bar graph.</p>
1
2016-09-14T15:25:10Z
39,495,809
<p>Try with this:</p> <pre><code># This is a great tool to add plots to jupyter notebook % matplotlib inline import pandas as pd import matplotlib.pyplot as plt # Params get plot bigger plt.rcParams["axes.labelsize"] = 16 plt.rcParams["xtick.labelsize"] = 14 plt.rcParams["ytick.labelsize"] = 14 plt.rcParams["legend.fontsize"] = 12 plt.rcParams["figure.figsize"] = [15, 7] df = pd.DataFrame([['1','3','1','2','3','1','2','2','1','1'], ['ONE','TWO','ONE','ONE','ONE','TWO','ONE','TWO','ONE','THREE']]).T df.columns = [['age','data']] df['COUNTER'] = 1 group_data = df.groupby(['age','data']).sum()[['COUNTER']].plot.bar(rot = 90) # If you want to rotate labels from x axis _ = group_data.set(xlabel = 'xlabel', ylabel = 'ylabel'), group_data.legend(['Legend']) # you can add labels and legend </code></pre>
0
2016-09-14T16:49:52Z
[ "python", "pandas", "matplotlib" ]
Too many legend entries with array column data in matplotlib
39,494,254
<p>I am trying to plot a simple rotation matrix result from list data, but my figure for the result array shows far too many legend entries, as in the screen dump image, and the second plot does not respect my attributes (line style, etc.). I guess I am making a mistake in how I handle the array for plotting, but I don't know where. Any comments are welcome. Thanks in advance.</p> <p><a href="http://i.stack.imgur.com/i9HAP.png" rel="nofollow"><img src="http://i.stack.imgur.com/i9HAP.png" alt="enter image description here"></a></p> <p>My code is below. </p> <pre><code>import numpy as np import matplotlib.pyplot as plt theta = np.radians(30) c, s = np.cos(theta), np.sin(theta) R = np.matrix('{} {}; {} {}'.format(c, -s, s, c)) x = [-9, -8, -7, -6, -5, -4, -3, -2, -1,0,1,2,3,4,5,6,7,8,9] y = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] line_b = [x,y] result_a = R*np.array(line_b) fig=plt.figure() ax1 = fig.add_subplot(111) plt.plot(line_b[0],line_b[1], color="blue", linewidth=2.5, linestyle="-", label='measured') plt.plot(result_a[0], result_a[1], 'r*-', label='rotated') ax1.set_ylim(-10,10) ax1.set_xlim(-10,10) plt.legend() # axis center to move 0,0 ax1.spines['right'].set_color('none') ax1.spines['top'].set_color('none') ax1.xaxis.set_ticks_position('bottom') ax1.spines['bottom'].set_position(('data',0)) ax1.yaxis.set_ticks_position('left') ax1.spines['left'].set_position(('data',0)) plt.show() </code></pre>
-1
2016-09-14T15:25:28Z
39,501,403
<p>The issue is that you are trying to plot the two rows of <code>result_a</code> as if they were 1-dimensional <code>np.ndarray</code>s, when in fact they are <code>np.matrix</code> which are always 2-dimensional. See for yourself:</p> <pre><code>&gt;&gt;&gt; result_a[0].shape (1, 19) </code></pre> <p>To remedy this, you need to convert your vectors <code>result_a[0], result_a[1]</code> to arrays. Simple ways can be found <a href="http://stackoverflow.com/questions/3337301/numpy-matrix-to-array">in this answer</a>. For example,</p> <pre><code>rx = result_a[0].A1 ry = result_a[1].A1 # alternatively, the more compact # rx, ry = np.array(result_a) plt.plot(rx, ry, 'r*-', label='rotated') </code></pre> <p>yields the following (with <code>plt.legend(); plt.show()</code>):</p> <p><a href="http://i.stack.imgur.com/eWsTc.png" rel="nofollow"><img src="http://i.stack.imgur.com/eWsTc.png" alt="enter image description here"></a></p>
0
2016-09-15T00:10:13Z
[ "python", "arrays", "matplotlib", "legend" ]
Multiple jsons to csv
39,494,294
<p>I have multiple files, each containing multiple highly nested json <em>rows</em>. The two first rows of one such file look like:</p> <pre><code>{ "u":"28", "evv":{ "w":{ "1":400, "2":{ "i":[{ "l":14, "c":"7", "p":"4" } ] } } } } { "u":"29", "evv":{ "w":{ "3":400, "2":{ "i":[{ "c":14, "y":"7", "z":"4" } ] } } } } </code></pre> <p>they are actually rows, I just wrote them here this way for more visibility.</p> <p>My question is the following:</p> <p><del>Is there any way to convert <em>all</em> these files to one (or multiple, i.e. one per file) csv/excel... ?</del></p> <p>Is there any simple way, that doesn't require writing dozens, or hundreds of lines in Python, specific to my file, to convert <em>all</em> these files to one (or multiple, i.e. one per file) csv/excel... ? One example would be using an external library, script... that handles this particular task, regardless of the names of the fields.</p> <p>The trap is that some elements do not appear in each line. For example, for the <em>"i"</em> key, we have 3 fields (l, c, p) in the first json, and 3 in the second one (c, y, z). Ideally, the csv should contain as many columns as possible fields (e.g. evv.w.2.i.l, evv.w.2.i.c, evv.w.2.i.p, evv.w.2.i.y, evv.w.2.i.z) at the risk of having (many) null values per csv row.</p> <p>A possible csv output for this example would have the following columns:</p> <pre><code>u, evv.w.1, evv.w.3, evv.w.2.i.l, evv.w.2.i.c, evv.w.2.i.p, evv.w.2.i.y, evv.w.2.i.z </code></pre> <p>Any idea/reference is welcome :)</p> <p>Thanks</p>
1
2016-09-14T15:27:19Z
39,496,177
<p>Please check if this (python3) solution works for you. </p> <pre><code>import json import csv with open('test.json') as data_file: with open('output.csv', 'w', newline='') as fp: for line in data_file: data = json.loads(line) output = [[data['u'], data['evv']['w'].get('1'), data['evv']['w'].get('3'), data['evv']['w'].get('2')['i'][0].get('l'), data['evv']['w'].get('2')['i'][0].get('c'), data['evv']['w'].get('2')['i'][0].get('p'), data['evv']['w'].get('2')['i'][0].get('y'), data['evv']['w'].get('2')['i'][0].get('z')]] a = csv.writer(fp, delimiter=',') a.writerows(output) </code></pre> <p><strong>test.json</strong></p> <pre><code>{ "u": "28", "evv": { "w": { "1": 400, "2": { "i": [{ "l": 14, "c": "7", "p": "4" }] } } }} {"u":"29","evv":{ "w":{ "3":400, "2":{ "i":[{ "c":14, "y":"7", "z":"4" } ] } } }} </code></pre> <p><strong>output</strong></p> <pre><code>python3 pyprog.py dac@dac-Latitude-E7450 ~/P/pyprog&gt; more output.csv 28,400,,14,7,4,, 29,,400,,14,,7,4 </code></pre>
1
2016-09-14T17:14:09Z
[ "python", "json", "excel", "csv" ]
Multiple jsons to csv
39,494,294
<p>I have multiple files, each containing multiple highly nested json <em>rows</em>. The two first rows of one such file look like:</p> <pre><code>{ "u":"28", "evv":{ "w":{ "1":400, "2":{ "i":[{ "l":14, "c":"7", "p":"4" } ] } } } } { "u":"29", "evv":{ "w":{ "3":400, "2":{ "i":[{ "c":14, "y":"7", "z":"4" } ] } } } } </code></pre> <p>they are actually rows, I just wrote them here this way for more visibility.</p> <p>My question is the following:</p> <p><del>Is there any way to convert <em>all</em> these files to one (or multiple, i.e. one per file) csv/excel... ?</del></p> <p>Is there any simple way, that doesn't require writing dozens, or hundreds of lines in Python, specific to my file, to convert <em>all</em> these files to one (or multiple, i.e. one per file) csv/excel... ? One example would be using an external library, script... that handles this particular task, regardless of the names of the fields.</p> <p>The trap is that some elements do not appear in each line. For example, for the <em>"i"</em> key, we have 3 fields (l, c, p) in the first json, and 3 in the second one (c, y, z). Ideally, the csv should contain as many columns as possible fields (e.g. evv.w.2.i.l, evv.w.2.i.c, evv.w.2.i.p, evv.w.2.i.y, evv.w.2.i.z) at the risk of having (many) null values per csv row.</p> <p>A possible csv output for this example would have the following columns:</p> <pre><code>u, evv.w.1, evv.w.3, evv.w.2.i.l, evv.w.2.i.c, evv.w.2.i.p, evv.w.2.i.y, evv.w.2.i.z </code></pre> <p>Any idea/reference is welcome :)</p> <p>Thanks</p>
1
2016-09-14T15:27:19Z
39,496,778
<p>No, there is no general-purpose program that does precisely what you ask for. </p> <p>You can, however, write a Python program that does it.</p> <p>This program might do what you want. It does not have any code specific to your key names, but it is specific to your file format.</p> <ul> <li>It can take several files on the command line.</li> <li>Each file is presumed to have one JSON object per line.</li> <li>It flattens the JSON object, joining labels with "."</li> </ul> <p>&nbsp;</p> <pre><code>import fileinput import json import csv def flattify(d, key=()): if isinstance(d, list): result = {} for i in d: result.update(flattify(i, key)) return result if isinstance(d, dict): result = {} for k, v in d.items(): result.update(flattify(v, key + (k,))) return result return {key: d} total = [] for line in fileinput.input(): if(line.strip()): line = json.loads(line) line = flattify(line) line = {'.'.join(k): v for k, v in line.items()} total.append(line) keys = set() for d in total: keys.update(d) with open('result.csv', 'w') as output_file: output_file = csv.DictWriter(output_file, sorted(keys)) output_file.writeheader() output_file.writerows(total) </code></pre>
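<p>For a rough idea of what the flattening produces: feeding the first sample row through <code>flattify</code> and joining the keys with <code>"."</code> should give a dict along these lines (key order not guaranteed):</p>

<pre><code>&gt;&gt;&gt; line = flattify(json.loads('{"u":"28","evv":{"w":{"1":400,"2":{"i":[{"l":14,"c":"7","p":"4"}]}}}}'))
&gt;&gt;&gt; {'.'.join(k): v for k, v in line.items()}
{'u': '28', 'evv.w.1': 400, 'evv.w.2.i.l': 14, 'evv.w.2.i.c': '7', 'evv.w.2.i.p': '4'}
</code></pre>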
1
2016-09-14T17:53:47Z
[ "python", "json", "excel", "csv" ]
What are the requirements of a protocol factory in python asyncio?
39,494,332
<p>I am looking at the example of the <a href="https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-server-protocol" rel="nofollow">UDP echo server</a>:</p> <pre><code>import asyncio class EchoServerProtocol: def connection_made(self, transport): self.transport = transport def datagram_received(self, data, addr): message = data.decode() print('Received %r from %s' % (message, addr)) print('Send %r to %s' % (message, addr)) self.transport.sendto(data, addr) loop = asyncio.get_event_loop() print("Starting UDP server") # One protocol instance will be created to serve all client requests listen = loop.create_datagram_endpoint( EchoServerProtocol, local_addr=('127.0.0.1', 9999)) transport, protocol = loop.run_until_complete(listen) try: loop.run_forever() except KeyboardInterrupt: pass transport.close() loop.close() </code></pre> <p>It seems that the call...</p> <p><code>loop.create_datagram_endpoint(EchoServerProtocol, local_addr=('127.0.0.1', 9999))</code></p> <p>...is doing all the work here. The <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.create_datagram_endpoint" rel="nofollow">method documentation</a> states the following for the first argument (<code>protocol_factory</code>):</p> <blockquote> <p>protocol_factory must be a callable returning a protocol instance.</p> </blockquote> <p>My questions:</p> <ul> <li>What defines a <code>protocol instance</code>?</li> <li>Is <code>returning a protocol instance</code> a different wording for <code>initiating a protocol object</code>?</li> <li>How does the <code>EchoServerProtocol</code> in the example fulfill this requirement?</li> </ul>
1
2016-09-14T15:29:08Z
39,495,859
<h3>What defines a <em>protocol instance</em>?</h3> <p>A protocol is a class you define that implements one of the interfaces defined in the <a href="https://docs.python.org/3/library/asyncio-protocol.html#protocol-classes" rel="nofollow">Protocols section</a>, i.e. provides implementations for a set of callbacks, e.g. <a href="https://docs.python.org/3/library/asyncio-protocol.html#connection-callbacks" rel="nofollow">Connection Callbacks</a>.</p> <p>So for the UDP echo server example you have posted, the <code>EchoServerProtocol</code> user defined class actually defines a protocol by implementing the <a href="https://docs.python.org/3/library/asyncio-protocol.html#asyncio.BaseProtocol.connection_made" rel="nofollow"><code>connection_made</code></a> and <a href="https://docs.python.org/3/library/asyncio-protocol.html#asyncio.DatagramProtocol.datagram_received" rel="nofollow"><code>datagram_received</code></a>.</p> <p>In summary, if you implement one of those callbacks in a class, you are said to be defining a <code>Protocol</code>. So an instance/object of that class would be a <strong>protocol instance</strong>.</p> <hr> <h3>Is <em>returning a protocol instance</em> a different wording for <em>initiating a protocol object</em>?</h3> <p>Formally <em>YES</em>. Before you would return a <strong>protocol instance</strong>, you would have <strong>initialized</strong> it. So basically one is a prerequisite of the other.</p> <hr> <h3>How does the <code>EchoServerProtocol</code> in the example fulfill this requirement?</h3> <p>So first of all, as answered the first question, the <code>EchoServerProtocol</code> defines a protocol. Thus the next thing is to provide a <code>protocol_factory</code>, which is <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.create_datagram_endpoint" rel="nofollow">defined</a> as:</p> <blockquote> <p><code>protocol_factory</code> must be a callable returning a protocol instance.</p> </blockquote> <p>So to satisfy this requirement, you could just have this simple method:</p> <pre><code>def my_protocol_factory(): return EchoServerProtocol() </code></pre> <p>Note, that this factory first initializes the protocol instance and then returns it.</p> <p>So the thing that might confuse you in the example, is that the class <code>EchoServerProtocol</code> itself is passed as the <code>protocol_factory</code>, but if you summarize what I've said, you will see that the <code>EchoServerProtocol</code> is actually a callable, and when it gets called, i.e. <code>EchoServerProtocol()</code> it actually initializes a <code>EchoServerProtocol</code> instance, and returns it.</p> <p>So yep, the example fulfills the requirement.</p> <hr>
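<p>Since <code>protocol_factory</code> is just any zero-argument callable, a <code>lambda</code> or <code>functools.partial</code> works as well, which helps if a protocol needs constructor arguments (the echo example above does not). A small sketch, where the extra <code>loop</code> argument is purely hypothetical:</p>

<pre><code>import functools

# equivalent to passing the class itself
listen = loop.create_datagram_endpoint(lambda: EchoServerProtocol(),
                                        local_addr=('127.0.0.1', 9999))

# or, if the protocol hypothetically took a loop argument in its constructor
listen = loop.create_datagram_endpoint(functools.partial(EchoServerProtocol, loop),
                                        local_addr=('127.0.0.1', 9999))
</code></pre>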
2
2016-09-14T16:53:49Z
[ "python", "python-3.5", "python-asyncio" ]
Drop Item If More Than 1 Item Present
39,494,356
<p>I'm working on a piece of code, and it is for a little AI creature to randomly come into a room, and look to see if there is anything there. If there is anything that the player has touched before, then it takes the item. The next room it goes to it drops that item and may pick up a new one. So far, I have:</p> <pre><code>import random rooms = ['kitchen', 'livingroom', 'basement'] itemsstatus = {'Umbrella': 1, 'Coin': 1} itemsstatus['Umbrella'] = raw_input() print "itemstatus['Umbrella']", itemsstatus['Umbrella'] roominventory = ['Umbrella', 'Coin'] goblininventory = ['baseball'] notpickedanythingelse = 'true' gotoroom = random.choice(rooms) if(gotoroom == 'kitchen') or (gotoroom == 'livingroom') or (gotoroom == 'basement'): ininventory = len(goblininventory) if(ininventory &gt;= 1): roominventory.append(goblininventory[0]) goblininventory.remove([0]) else: print "" for items in roominventory: if(itemsstatus[items] == 1) and (notpickedanythingelse == 'true'): goblininventory.append(items) roominventory.remove(items) notpickedanythingelse = 'false' else: print "" notpickedanythingelse = 'true' print roominventory print goblininventory </code></pre> <p>The itemstatus[''] = rawinput() will be done automatically by the game and won't be a raw imput, it is just here so I can test it. As well, each room will have it's own inventory and loop, but this is just for the simplicity of it. The goblin will pick up an item, and keep it, but it won't drop the one it already has (it can only carry 1 thing at a time). How can I get it so that it will drop the item it is holding once entering a new room?</p>
0
2016-09-14T15:30:20Z
39,494,682
<p><code>goblininventory.remove([0])</code> is incorrect: <code>remove()</code> expects the value to delete, not an index, and the list has no element equal to <code>[0]</code>. You should use <code>goblininventory.pop(0)</code> to remove and return the first element of the list (a bare <code>pop()</code> removes the <em>last</em> element).</p>

<p>See here for more info on <code>remove</code> and <code>pop</code>: <a href="https://docs.python.org/2/tutorial/datastructures.html#more-on-lists" rel="nofollow">https://docs.python.org/2/tutorial/datastructures.html#more-on-lists</a></p>
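<p>For illustration, a minimal sketch of handing the goblin's item over to the room before it picks something new up (variable names taken from the question) might look like:</p>

<pre><code>if goblininventory:                      # goblin is carrying something
    dropped = goblininventory.pop(0)     # take the first (and only) item out
    roominventory.append(dropped)        # and leave it in the current room
</code></pre>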
0
2016-09-14T15:45:24Z
[ "python" ]
GAE Python code MUCH slower in production than locally
39,494,684
<p>In my Python GAE app, the following snippet of code is MUCH slower in production than when run locally. The processing goes like this:</p> <ol> <li>A text file of about 1 MB is loaded in a POST. Each line of the text file is an "item".</li> <li>My code creates a list of items from the text file and checks for duplicates and validity (by comparing against a compiled RE).</li> </ol> <p>Here is the code: </p> <pre><code>def process_items(self, text): item_list = text.split() item_set = set() n_valid = 0 n_invalid = 0 n_dups = 0 out = "" for item in item_list: if item in item_set: n_dups += 1 out += "DUPLICATE: %s\n" % item elif valid_item(item): # This compares against a compiled RE item_set.add(item) n_valid += 1 out += "%s\n" % item else: n_invalid += 1 out += "INVALID: %s\n" % item return out </code></pre> <p>When I run this on the local dev server, a 1MB file of 50,000 lines takes 5 seconds to process.</p> <p>When I run this in production, the same file takes over a minute and the request times out. The file upload only takes about a second so I know the bottle neck is the above code.</p> <p>In the past, production code was about the same speed as my local code. I don't think this code has changed, so I suspect there may have been a change on Google's end.</p> <p>Any idea why this code is now much slower in production? Anything I can do to make this code faster? I need to return an annotated file to the user that indicates which lines are duplicates and which lines are invalid.</p> <p>EDIT:</p> <p>In response to mgilson's comment, I tried the following code, and it made a huge difference in execution time! The processing that previously timed out after a minute now takes only about 5 seconds. GAE is still slower than expected (even accounting the relatively slow server CPUs), but with the improved algorithm, it doesn't matter for me now.</p> <pre><code>def process_items(self, text): item_list = text.split() item_set = set() n_valid = 0 n_invalid = 0 n_dups = 0 for i, item in enumerate(item_list): item = item.strip() if item in item_set: n_dups += 1 item_list[i] = "DUPLICATE: %s" % item elif valid_item(item): # This compares against a compiled RE item_set.add(item) n_valid += 1 item_list[i] = item else: n_invalid += 1 item_list[i] = "INVALID: %s" % item return "\n".join(item_list) </code></pre>
2
2016-09-14T15:45:26Z
39,516,803
<p>It's not at all unexpected that GAE production would run slower than locally -- Depending on your <a href="https://cloud.google.com/appengine/docs/about-the-standard-environment#instance_classes" rel="nofollow">instance class</a>, your production CPU can be throttled as low as 600MHz which is significantly slower than most developer computers.</p> <p>One thing you <em>can</em> do to speed things up is to accumulate your results in a list (or yield them from a generator) and then use <code>str.join</code> to get the full result:</p> <pre><code>def process_items(self, text): item_list = text.split() item_set = set() n_valid = 0 n_invalid = 0 n_dups = 0 out = [] for item in item_list: if item in item_set: n_dups += 1 out.append("DUPLICATE: %s\n" % item) elif valid_item(item): # This compares against a compiled RE item_set.add(item) n_valid += 1 out.append("%s\n" % item) else: n_invalid += 1 out.append("INVALID: %s\n" % item) return "".join(out) </code></pre>
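<p>For completeness, here is a rough sketch of the generator variant mentioned above (the counter variables are dropped for brevity; untested on App Engine):</p>

<pre><code>def process_items(self, text):
    def annotate(items):
        seen = set()
        for item in items:
            if item in seen:
                yield "DUPLICATE: %s\n" % item
            elif valid_item(item):   # same compiled-RE check as before
                seen.add(item)
                yield "%s\n" % item
            else:
                yield "INVALID: %s\n" % item
    return "".join(annotate(text.split()))
</code></pre>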
4
2016-09-15T17:15:46Z
[ "python", "performance", "google-app-engine" ]
Converting time string 11:00am/pm into UTC in 24 hour format
39,494,737
<p>I have time string <code>11:15am</code> or <code>11:15pm</code>. I am trying to convert this string into UTC timezone with 24 hour format.</p> <p>FROM EST to UTC</p> <p>For example: When I pass <code>11:15am</code> It should convert into <code>15:15</code> and when I pass <code>11:15pm</code> then it should convert to <code>3:15</code>.</p> <p>I have this code which I am trying:</p> <pre><code>def appointment_time_string(time_str): import datetime a = time_str.split()[0] # b = re.findall(r"[^\W\d_]+|\d+",a) # c = str(int(b[0]) + 4) + ":" + b[1] # print("c", c) in_time = datetime.datetime.strptime(a,'%I:%M%p') print("In Time", in_time) start_time = str(datetime.datetime.strftime(in_time, "%H:%M:%S")) print("Start TIme", start_time) if time_str.split()[3] == 'Today,': start_date = datetime.datetime.utcnow().strftime("%Y-%m-%dT") elif time_str.split()[3] == 'Tomorrow,': today = datetime.date.today( ) start_date = (today + datetime.timedelta(days=1)).strftime("%Y-%m-%dT") appointment_time = str(start_date) + str(start_time) return appointment_time x = appointment_time_string(time_str) print("x", x) </code></pre> <p>But this is just converting to 24 hour not to UTC.</p>
1
2016-09-14T15:48:25Z
39,494,831
<p>To convert the time from 12-hour to 24-hour format, you may use the code below:</p>

<pre><code>from datetime import datetime
new_time = datetime.strptime('11:15pm', '%I:%M%p').strftime("%H:%M")
# new_time: '23:15'
</code></pre>

<p>In order to convert the time from <code>EST</code> to <code>UTC</code>, the most reliable way is to use the third-party library <a href="http://pytz.sourceforge.net/" rel="nofollow"><code>pytz</code></a>. Refer to <a href="http://stackoverflow.com/a/5491705/2063361">How to convert EST/EDT to GMT?</a> for more details.</p>
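<p>Putting the two steps together, a minimal sketch with <code>pytz</code> (assuming the string is US/Eastern wall-clock time and refers to today's date) could look like:</p>

<pre><code>from datetime import datetime
import pytz

eastern = pytz.timezone('US/Eastern')
naive = datetime.combine(datetime.now().date(),
                         datetime.strptime('11:15pm', '%I:%M%p').time())
utc_time = eastern.localize(naive).astimezone(pytz.utc)
print(utc_time.strftime('%H:%M'))   # '03:15' while daylight saving is in effect, '04:15' otherwise
</code></pre>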
1
2016-09-14T15:53:05Z
[ "python", "python-2.x" ]
Converting time string 11:00am/pm into UTC in 24 hour format
39,494,737
<p>I have time string <code>11:15am</code> or <code>11:15pm</code>. I am trying to convert this string into UTC timezone with 24 hour format.</p> <p>FROM EST to UTC</p> <p>For example: When I pass <code>11:15am</code> It should convert into <code>15:15</code> and when I pass <code>11:15pm</code> then it should convert to <code>3:15</code>.</p> <p>I have this code which I am trying:</p> <pre><code>def appointment_time_string(time_str): import datetime a = time_str.split()[0] # b = re.findall(r"[^\W\d_]+|\d+",a) # c = str(int(b[0]) + 4) + ":" + b[1] # print("c", c) in_time = datetime.datetime.strptime(a,'%I:%M%p') print("In Time", in_time) start_time = str(datetime.datetime.strftime(in_time, "%H:%M:%S")) print("Start TIme", start_time) if time_str.split()[3] == 'Today,': start_date = datetime.datetime.utcnow().strftime("%Y-%m-%dT") elif time_str.split()[3] == 'Tomorrow,': today = datetime.date.today( ) start_date = (today + datetime.timedelta(days=1)).strftime("%Y-%m-%dT") appointment_time = str(start_date) + str(start_time) return appointment_time x = appointment_time_string(time_str) print("x", x) </code></pre> <p>But this is just converting to 24 hour not to UTC.</p>
1
2016-09-14T15:48:25Z
39,496,107
<p>Developed the following script using provided options/solutions to satisfy my requirement. </p>

<pre><code>def appointment_time_string(time_str):
    import datetime
    import pytz

    a = time_str.split()[0]
    in_time = datetime.datetime.strptime(a,'%I:%M%p')
    start_time = str(datetime.datetime.strftime(in_time, "%H:%M:%S"))

    if time_str.split()[3] == 'Today,':
        start_date = datetime.datetime.utcnow().strftime("%Y-%m-%d")
    elif time_str.split()[3] == 'Tomorrow,':
        today = datetime.date.today( )
        start_date = (today + datetime.timedelta(days=1)).strftime("%Y-%m-%d")

    appointment_time = str(start_date) + " " + str(start_time)
    # print("Provided Time", appointment_time)

    utc=pytz.utc
    eastern=pytz.timezone('US/Eastern')
    fmt='%Y-%m-%dT%H:%M:%SZ'

    # testeddate = '2016-09-14 22:30:00'
    test_date = appointment_time
    dt_obj = datetime.datetime.strptime(test_date,'%Y-%m-%d %H:%M:%S')
    dt_str = datetime.datetime.strftime(dt_obj, '%m/%d/%Y %H:%M:%S')

    date=datetime.datetime.strptime(dt_str,"%m/%d/%Y %H:%M:%S")
    date_eastern=eastern.localize(date,is_dst=None)
    date_utc=date_eastern.astimezone(utc)
    # print("Required Time", date_utc.strftime(fmt))
    return date_utc.strftime(fmt)
</code></pre>
0
2016-09-14T17:08:08Z
[ "python", "python-2.x" ]
How to get bandwidth from eth0 use python script?
39,494,766
<p>Python on Ubuntu: how can I get the bandwidth of the <code>eth0</code> device from a Python script (for instance by calling <code>ifconfig</code> through the <code>os</code> module), with output like this:</p>

<pre><code>$python script.py
eth0 UP: 5 KB/sec DOWN: 30.5 KB/sec
</code></pre>

<p>The output should refresh every second, with the bandwidth values reported in KB/s.</p>
1
2016-09-14T15:49:54Z
39,495,392
<p>You can use <a href="https://pythonhosted.org/psutil/" rel="nofollow">Psutil</a> to get information about network interfaces like this:</p>

<pre><code>psutil.net_io_counters(pernic=True)
</code></pre>

<p>This will return something like the following:</p>

<pre><code>{'awdl0': snetio(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0),
 'bridge0': snetio(bytes_sent=342L, bytes_recv=0L, packets_sent=1L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0),
 'en0': snetio(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0),
 'en1': snetio(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0),
 'en4': snetio(bytes_sent=68008896L, bytes_recv=1972984495L, packets_sent=776722L, packets_recv=1487084L, errin=0L, errout=10L, dropin=0L, dropout=0),
 'lo0': snetio(bytes_sent=87119711L, bytes_recv=87119711L, packets_sent=54606L, packets_recv=54606L, errin=0L, errout=0L, dropin=0L, dropout=0)}
</code></pre>

<p>You can measure the difference in sent/received bytes once every second and print the up/down speeds. If you want the speeds to be human readable, take a look at <a href="https://stackoverflow.com/questions/1094841/reusable-library-to-get-human-readable-version-of-file-size">this question</a>.</p>
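<p>A rough sketch of that measurement loop for a single interface (the interface name <code>eth0</code> is assumed here; any key from the dict above works) could be:</p>

<pre><code>import time
import psutil

nic = 'eth0'
old = psutil.net_io_counters(pernic=True)[nic]
while True:
    time.sleep(1)
    new = psutil.net_io_counters(pernic=True)[nic]
    up = (new.bytes_sent - old.bytes_sent) / 1024.0    # KB sent in the last second
    down = (new.bytes_recv - old.bytes_recv) / 1024.0  # KB received in the last second
    print('%s UP: %.1f KB/sec DOWN: %.1f KB/sec' % (nic, up, down))
    old = new
</code></pre>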
0
2016-09-14T16:25:39Z
[ "python", "ubuntu", "module", "bandwidth" ]
How to import .e00 ArcGIS file in GeoPandas
39,494,787
<p>I'm trying to work with files from this site:</p> <p><a href="ftp://ftp.epa.gov/castnet/tdep/grids/" rel="nofollow">NADP Website</a></p> <p>The files are <code>.e00</code> format. When I attempt to open them with GeoPandas, I get a message that they appear to be compressed.</p> <p>If I try using e00conv or AVCE00 to decompress the files, then open them with GeoPandas, I get a <code>FionaValueError</code>, that no dataset has been found. </p> <p>Any suggestions for how I can get these files opened so I can put them in a format I can use?</p> <p>I can load the decompressed file using <code>np.fromfile</code> but then all I have is a vector.</p>
0
2016-09-14T15:51:21Z
39,500,205
<p>I finally figured this out. In this instance, even though the .e00 format is not usually used to store raster files, these files are raster images. They open fine with rasterio.</p>
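<p>For reference, a minimal sketch of reading one of the grids with <code>rasterio</code> (the file name here is just a placeholder):</p>

<pre><code>import rasterio

with rasterio.open('some_tdep_grid.e00') as src:
    data = src.read(1)                  # first band as a numpy array
    print(src.crs, src.bounds, data.shape)
</code></pre>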
0
2016-09-14T21:52:04Z
[ "python", "gis", "arcgis", "geopandas", "fiona" ]
Pass file descriptor via a dbus function call from Python (aka call flatpak's HostCommand)
39,494,813
<p>I want to call Flatpak's <a href="https://github.com/flatpak/flatpak/commit/c8df0e6208e96e2743aa63a824d2ea7d19ce4cde" rel="nofollow">new Development DBus service to spawn a process on the host</a>, rather than in the sandbox.</p> <p>To call the DBus service, I've come up with the following piece of code:</p> <pre><code>#!/usr/bin/env python import logging import os import sys import dbus def call_on_host(cmd): "Calls Flatpak via DBus to spawn a process" name = "org.freedesktop.Flatpak" path = "/org/freedesktop/Flatpak/Development" bus = dbus.SessionBus() proxy = bus.get_object(name, path) iface = "org.freedesktop.Flatpak.Development" fp_helper = dbus.Interface(proxy, iface) wd = '/tmp/' read_fd, write_fd = os.pipe() fds = {0:dbus.types.UnixFd(read_fd)} envs = {'FOO':'bar'} flags = 1 # cwd, cmd, fds, env, flags = ('/', ['ls'], {0:dbus.types.UnixFd(open('/etc/passwd'))}, {'foo':'bar'}, 1) logging.info("Executing %r %r %r %r %r", wd, cmd, fds, envs, flags) ret = fp_helper.HostCommand(wd, cmd, fds, envs, flags) return ret logging.basicConfig(level=logging.DEBUG) print (call_on_host(sys.argv[1:])) </code></pre> <p>That, however, does not work so well. The Flatpak DBus helper does not receive any values, i.e. they are all NULL.</p> <pre><code>$ python execute_on_host.py 'ls' / INFO:root:Executing '/tmp/' ['ls'] {0: &lt;dbus.UnixFd object at 0x7f4b5ae6c120&gt;} {'FOO': 'bar'} 1 Traceback (most recent call last): File "execute_on_host.py", line 42, in &lt;module&gt; print (call_on_host(sys.argv[1:])) File "execute_on_host.py", line 35, in call_on_host ret = fp_helper.HostCommand(wd, cmd, fds, envs, flags) File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 70, in __call__ return self._proxy_method(*args, **keywords) File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 145, in __call__ **keywords) File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking message, timeout) dbus.exceptions.DBusException: org.freedesktop.DBus.Error.InvalidArgs: No command 19430 given!!1 - *arg_argv[0] == 0 </code></pre> <p>I'm a bit confused now. 
Do I need to wrap my types into GVariants before calling the function on the proxy object?</p>

<p>To test whether I can call a service with that signature at all, I stole most of the stuff from <a href="http://stackoverflow.com/a/512723">this question</a> and came up with the following:</p>

<pre><code>import unittest
import os
import sys
import subprocess
import time

import dbus
import dbus.service
import dbus.glib
import gobject

class MyDBUSService(dbus.service.Object):
    def __init__(self):
        bus_name = dbus.service.BusName('test.helloservice', bus = dbus.SessionBus())
        dbus.service.Object.__init__(self, bus_name, '/test/helloservice')

    def listen(self):
        loop = gobject.MainLoop()
        loop.run()

    @dbus.service.method('test.helloservice', in_signature="ayaaya{uh}a{ss}u")
    def hello(self, cwd, cmd, fds, env, flags):
        print ([type(foo) for foo in (cwd, cmd, fds, env, flags)] )
        print ("cwd: %s" % cwd)
        print ("cmd: %s" % cmd)
        print ("fsd: %s" % fds)
        r = os.fdopen(fds[0].take()).read()
        return r

class BaseTestCase(unittest.TestCase):
    def setUp(self):
        env = os.environ.copy()
        self.p = subprocess.Popen(['python', __file__, 'server'], env=env)
        # Wait for the service to become available
        time.sleep(1)
        assert self.p.stdout == None
        assert self.p.stderr == None
        open("/tmp/dbus-test", "w").write("Hello, World!")

    def testHelloService(self):
        bus = dbus.SessionBus()
        helloservice = bus.get_object('test.helloservice', '/test/helloservice')
        hello = helloservice.get_dbus_method('hello', 'test.helloservice')
        cwd, cmd, fds, env, flags = ('/', ['ls'], {0:dbus.types.UnixFd(open('/tmp/dbus-test'))}, {'foo':'bar'}, 1)
        r = hello(cwd, cmd, fds, env, flags)
        assert r == "Hello, World!"

    def tearDown(self):
        # terminate() not supported in Python 2.5
        #self.p.terminate()
        os.kill(self.p.pid, 15)

if __name__ == '__main__':
    arg = ""
    if len(sys.argv) &gt; 1:
        arg = sys.argv[1]

    if arg == "server":
        myservice = MyDBUSService()
        myservice.listen()
    else:
        unittest.main()
</code></pre>

<p>That works nicely.</p>

<p>So I'm wondering: How can I call the Flatpak Development service from Python?</p>
2
2016-09-14T15:52:19Z
39,677,775
<p>To investigate whether my code is causing the same messages to be sent over the bus, I started <code>dbus-monitor</code> to check what happens if a known-good client sends that message. I got the following:</p> <pre><code>method call time=14743.5 sender=:1.6736 -&gt; destination=org.freedesktop.Flatpak serial=8 path=/org/freedesktop/Flatpak/Development; interface=org.freedesktop.Flatpak.Development; member=HostCommand array of bytes "/" + \0 array [ array of bytes "ls" + \0 ] array [ dict entry( uint32 0 file descriptor inode: 40 type: char ) dict entry( uint32 1 file descriptor inode: 58091333 type: fifo ) dict entry( uint32 2 file descriptor inode: 40 type: char ) ] array [ dict entry( string "CLUTTER_IM_MODULE" string "xim" ) ] uint32 1 </code></pre> <p>My own client, however, produced:</p> <pre><code>array of bytes "/" array [ array of bytes "ls" ] array [ dict entry( uint32 0 file descriptor inode: 1866322 type: file ) ] array [ dict entry( string "FOO" string "bar" ) ] uint32 1 </code></pre> <p>So the difference is the null-byte. When adding that to my code, it works. It turns out that there is <a href="https://github.com/flatpak/flatpak/issues/319" rel="nofollow">bug report</a> about this issue.</p>
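<p>For reference, a sketch of the adjustment that makes the call go through: explicitly null-terminate the strings that are marshalled as <code>ay</code> byte arrays (everything else stays as in the original snippet):</p>

<pre><code>wd = '/tmp/' + '\0'                    # working directory, null-terminated
cmd = [c + '\0' for c in cmd]          # every argv entry null-terminated
ret = fp_helper.HostCommand(wd, cmd, fds, envs, flags)
</code></pre>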
1
2016-09-24T15:02:41Z
[ "python", "file-descriptor", "dbus", "flatpak" ]
Django: How to put existing fields when displaying form for UpdateView?
39,494,896
<p>I'm making an episode tracker website and when I want the user to edit a show, the form provided starts with empty fields. How do I fill the form with already existing fields? For example, when a user is watching a show and is originally at episode 5, how do I call the update form with the Episode field already at 5 instead of it being empty?</p> <p><strong>views.py</strong></p> <pre><code>class ShowUpdate(UpdateView): model = Show slug_field = 'title' slug_url_kwarg = 'show' fields = ['description', 'season', 'episode'] </code></pre> <p><strong>show-detail.html</strong></p> <pre><code>&lt;form action="{% url 'show:show-update' show=show.title %}" method="post"&gt; {% csrf_token %} &lt;button type="submit"&gt;Edit&lt;/button&gt; &lt;/form&gt; </code></pre>
0
2016-09-14T15:57:03Z
39,495,040
<p>You can fill a form with the values in an UpdateView with</p> <pre><code>def get_initial(self): return { 'field1': 'something', 'field2': 'more stuff' } </code></pre> <p>Also the UpdateView inherits from the SingleObjectMixin, which provides <code>get_object</code> and <code>get_queryset</code>, so you could use either to get the object data you want to populate the form with. Something like:</p> <pre><code>def get_object(self): return Show.objects.get(pk=self.request.GET.get('pk')) </code></pre> <p><a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-single-object/#django.views.generic.detail.SingleObjectMixin.get_object" rel="nofollow">Check out the docs for more info</a></p> <p>Also, if you have a pk or slug as a parameter in your url, it should pick it up as well. Such as:</p> <pre><code>url(r'^shows/update/(?P&lt;pk&gt;\d+)/$',ShowUpdate.as_view()) </code></pre>
0
2016-09-14T16:05:41Z
[ "python", "django" ]
How to divide two dataframes with different length and duplicated indexs in Python
39,494,981
<p>Here is my code and I want to get the expected output, but, division of dataframes does not work, what is wrong here? </p> <pre><code>import pandas as pd data1 = {'name':['A', 'C', 'D'], 'cond_a':['B','B','B'], 'value':[10,12,14]} data2 = {'name':['A', 'C', 'D','D','A'], 'cond_a':['G','G','G','G','G'], 'value':[5,6,7,3,2]} df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) df1.set_index('name', inplace=True) df2.set_index('name', inplace=True) df2['new_col'] = df2['value'] / df1['value'] </code></pre> <p>expected output:</p> <pre><code> cond_a value new_col name A G 5 5/10 C G 6 6/12 D G 7 7/14 D G 3 3/14 A G 2 2/10 </code></pre>
2
2016-09-14T16:01:35Z
39,495,296
<p>As long as <code>df1</code> has a unique index, you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> it on <code>df2</code> when performing the division:</p> <pre><code>df2['new_col'] = df2['value'] / df1['value'].reindex(df2.index) </code></pre> <p>The resulting output:</p> <pre><code> cond_a value new_col name A G 5 0.500000 C G 6 0.500000 D G 7 0.500000 D G 3 0.214286 A G 2 0.200000 </code></pre>
4
2016-09-14T16:19:28Z
[ "python", "pandas" ]
How to divide two dataframes with different length and duplicated indexs in Python
39,494,981
<p>Here is my code and I want to get the expected output, but, division of dataframes does not work, what is wrong here? </p> <pre><code>import pandas as pd data1 = {'name':['A', 'C', 'D'], 'cond_a':['B','B','B'], 'value':[10,12,14]} data2 = {'name':['A', 'C', 'D','D','A'], 'cond_a':['G','G','G','G','G'], 'value':[5,6,7,3,2]} df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) df1.set_index('name', inplace=True) df2.set_index('name', inplace=True) df2['new_col'] = df2['value'] / df1['value'] </code></pre> <p>expected output:</p> <pre><code> cond_a value new_col name A G 5 5/10 C G 6 6/12 D G 7 7/14 D G 3 3/14 A G 2 2/10 </code></pre>
2
2016-09-14T16:01:35Z
39,495,300
<p>What doesn't work in your case is not DataFrame division, which you can easily check:</p> <pre><code>df2['value'] / df1['value'] Out[]: name A 0.500000 A 0.200000 C 0.500000 D 0.500000 D 0.214286 Name: value, dtype: float64 </code></pre> <p>The problem is that in the process of this division <code>pandas</code> loses track of the order of index <code>name</code>. Then when you are trying to assign the result back to the <code>df2</code>, you have duplicates in your index <code>name</code> and <code>pandas</code> doesn't know how to merge them, because it is an ambiguous situation to have. In general having duplicates in your index is not a good idea. Get rid of the duplicates and your code will work.</p>
1
2016-09-14T16:19:44Z
[ "python", "pandas" ]
when trying to run celery local I get an error message
39,495,003
<p>This is my file structure</p> <pre><code>venv/ |-src/ |-gettingstarted/ | |-settings/ | |-__init__.py | |-base.py | |-local.py | |-production.py | |-blog/ | |-__init__.py | |-admin.py | |-forms.py | |-models.py | |-tasks.py | |-urls.py | |-views.py | |-manage.py </code></pre> <p>my views.py</p> <pre><code>from .tasks import add, p_panties def shopan(request): # one = scrape_and_store_world() # two = panties() # argument_scrapes(one, two) p_panties.delay() return redirect('/') </code></pre> <p>my tasks.py</p> <pre><code>import requests import random import re import os from celery import Celery from bs4 import BeautifulSoup app = Celery('tasks', backend='redis://localhost', broker='redis://localhost') @app.task def add(x, y): return x + y @app.task def reverse(string): return string[::-1] @app.task def p_panties(): def swappo(): user_one = ' "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0" ' user_two = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)" ' user_thr = ' "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko" ' user_for = ' "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0" ' agent_list = [user_one, user_two, user_thr, user_for] a = random.choice(agent_list) return a headers = { "user-agent": swappo(), "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3", "accept-encoding": "gzip,deflate,sdch", "accept-language": "en-US,en;q=0.8", } pan_url = 'http://www.example.com' shtml = requests.get(pan_url, headers=headers) soup = BeautifulSoup(shtml.text, 'html5lib') video_row = soup.find_all('div', {'class': 'post-start'}) name = 'pan videos' def youtube_link(url): youtube_page = requests.get(url, headers=headers) soupdata = BeautifulSoup(youtube_page.text, 'html5lib') video_row = soupdata.find_all('p')[0] entries = [{'text': div, } for div in video_row] tubby = str(entries[0]['text']) urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&amp;+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', tubby) cleaned_url = urls[0].replace('?&amp;amp;autoplay=1', '') return cleaned_url def yt_id(code): the_id = code youtube_id = the_id.replace('https://www.example.com/embed/', '') return youtube_id def strip_hd(hd, move): str = hd new_hd = str.replace(move, '') return new_hd entries = [{'href': div.a.get('href'), 'text': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(), 'embed': youtube_link(div.a.get('href')), #embed 'comments': strip_hd(strip_hd(div.h2.text, '– Official video HD'), '– Oficial video HD').lstrip(), 'src': 'https://i.ytimg.com/vi/' + yt_id(youtube_link(div.a.get('href'))) + '/maxresdefault.jpg', #image 'name': name, 'url': div.a.get('href'), # 'author': author, 'video': True } for div in video_row][:13] return entries </code></pre> <p>in my terminal </p> <pre><code>(practice) apples-MBP:blog ray$ celery worker -A tasks -l info [2016-09-14 11:30:14,537: WARNING/MainProcess] /Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/apps/worker.py:161: CDeprecationWarning: Starting from version 3.2 Celery will refuse to accept pickle by default. The pickle serializer is a security concern as it may give attackers the ability to execute any command. It's important to secure your broker from unauthorized access when using pickle, so we think that enabling pickle should require a deliberate action and not be the default choice. 
If you depend on pickle then you should set a setting to disable this warning and to be sure that everything will continue working when you upgrade to Celery 3.2:: CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml'] You must only enable the serializers that you will actually use. warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED)) -------------- celery@apples-MBP.fios-router.home v3.1.23 (Cipater) ---- **** ----- --- * *** * -- Darwin-15.6.0-x86_64-i386-64bit -- * - **** --- - ** ---------- [config] - ** ---------- .&gt; app: tasks:0x1037836a0 - ** ---------- .&gt; transport: redis://localhost:6379// - ** ---------- .&gt; results: redis://localhost/ - *** --- * --- .&gt; concurrency: 4 (prefork) -- ******* ---- --- ***** ----- [queues] -------------- .&gt; celery exchange=celery(direct) key=celery [tasks] . tasks.add . tasks.p_panties . tasks.reverse [2016-09-14 11:30:14,689: INFO/MainProcess] Connected to redis://localhost:6379// [2016-09-14 11:30:14,702: INFO/MainProcess] mingle: searching for neighbors [2016-09-14 11:30:15,710: INFO/MainProcess] mingle: all alone [2016-09-14 11:30:15,723: WARNING/MainProcess] celery@apples-MBP.fios-router.home ready. </code></pre> <p>when I try to run my my program this error message I get is</p> <pre><code>[2016-09-14 11:30:32,171: ERROR/MainProcess] Received unregistered task of type 'blog.tasks.p_panties'. The message has been ignored and discarded. Did you remember to import the module containing this task? Or maybe you are using relative imports? Please see </code></pre> <p><a href="http://docs.celeryq.org/en/latest/userguide/tasks.html#task-names" rel="nofollow">here</a> for more information.</p> <pre><code>The full contents of the message body was: {'timelimit': (None, None), 'task': 'blog.tasks.p_panties', 'chord': None, 'taskset': None, 'errbacks': None, 'id': 'd9814eb1-98a9-4b45-b049-e36ac64fc55c', 'retries': 0, 'args': [], 'utc': True, 'expires': None, 'eta': None, 'kwargs': {}, 'callbacks': None} (263b) Traceback (most recent call last): File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/worker/consumer.py", line 456, in on_task_received strategies[name](message, body, KeyError: 'blog.tasks.p_panties' </code></pre> <p>Am I supposed to make this an app so I can use it?</p> <h1>EDIT</h1> <p>my blog/celery.py as you suggested me to write it</p> <pre><code>import os from celery import Celery from django.conf import settings # set the default Django settings module for the 'celery' program. os.environ.setdefault( 'DJANGO_SETTINGS_MODULE', 'gettingstarted.settings' ) app = Celery('tasks') # Using a string here means the worker will not have to # pickle the object when using Windows. app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) </code></pre> <p>How I run the celery worker</p> <pre><code>celery worker -A tasks -l info </code></pre> <p>and I'm running it in the blog directory because that's where the tasks.py is, else If I run in src I get </p> <pre><code>No module named 'tasks' </code></pre> <p>but If I run it in src I get</p> <pre><code>Parent module '' not loaded, cannot perform relative import </code></pre>
0
2016-09-14T16:02:59Z
39,495,595
<p>You're getting a long warning, telling you not to use <em>pickle</em> in case <a href="http://docs.celeryproject.org/en/latest/faq.html#isn-t-using-pickle-a-security-concern" rel="nofollow">you're not familiar with its possible side effects</a>. So it's better to set the celery serializer to use json.</p>

<pre><code>CELERY_TASK_SERIALIZER = 'json'
</code></pre>

<p>Now to the main problem: the task that's being run is, as the error indicates, <strong>blog.tasks.p_panties</strong>. But as the output of running the celery worker indicates, the task that's registered with celery is <strong>tasks.p_panties</strong>.</p>

<p>The problem arises because you're creating the celery app instance inside your <strong>tasks.py</strong>, so it's registering tasks with names like <strong>tasks.*</strong>.</p>

<p>The solution is to create a <em>celery.py</em> in your root directory, set up the celery app there, and then finally import that app object into your tasks.py file.</p>

<p>The contents of celery.py would be like the following:</p>

<pre><code>import os

from celery import Celery

from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault(
    'DJANGO_SETTINGS_MODULE', DEFAULT_SETTINGS_MODULE
)

app = Celery('app_name')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
</code></pre>

<p>and then in your tasks.py, import it like below:</p>

<pre><code>from blog.celery import app
</code></pre>

<p>As can be seen, in the <strong>celery.py</strong> file I've instantiated celery and called <code>app.config_from_object('django.conf:settings')</code>. This way celery knows to read its configuration parameters from your django settings file. So all celery settings would go there, including <code>BROKER_URL</code> and the other settings that you're currently passing directly to the celery object.</p>

<p><strong>Edit</strong></p>

<p>And don't run celery like: <code>celery worker -A tasks -l info</code></p>

<p>run it like: <code>celery worker -A project_name -l info</code></p>
0
2016-09-14T16:36:34Z
[ "python", "django", "celery", "relativelayout", "file-structure" ]
Persistent history in python cmd module
39,495,024
<p>Is there any way to configure the <a href="https://docs.python.org/2/library/cmd.html" rel="nofollow">CMD module from Python</a> to keep a persistent history even after the interactive shell has been closed?</p> <p>When I press the up and down keys I would like to access commands that were previously entered into the shell on previous occasions that I ran the python script as well as the ones I have just entered during this session.</p> <p>If its any help cmd uses <code>set_completer</code> imported from the <a href="https://docs.python.org/2/library/readline.html" rel="nofollow">readline module</a></p>
2
2016-09-14T16:04:19Z
39,495,060
<p><code>readline</code> automatically keeps a history of everything you enter. All you need to add is hooks to load and store that history.</p> <p>Use <a href="https://docs.python.org/2/library/readline.html#readline.read_history_file" rel="nofollow"><code>readline.read_history_file(filename)</code></a> to read a history file. Use <a href="https://docs.python.org/2/library/readline.html#readline.write_history_file" rel="nofollow"><code>readline.write_history_file()</code></a> to tell <code>readline</code> to persist the history so far. You may want to use <a href="https://docs.python.org/2/library/readline.html#readline.set_history_length" rel="nofollow"><code>readline.set_history_length()</code></a> to keep this file from growing without bound:</p> <pre><code>import os.path try: import readline except ImportError: readline = None histfile = os.path.expanduser('~/.someconsole_history') histfile_size = 1000 class SomeConsole(cmd.Cmd): def preloop(self): if readline and os.path.exists(histfile): readline.read_history_file(histfile) def postloop(self): if readline: readline.set_history_length(histfile_size) readline.write_history_file(histfile) </code></pre> <p>I used the <a href="https://docs.python.org/2/library/cmd.html#cmd.Cmd.preloop" rel="nofollow"><code>Cmd.preloop()</code></a> and <a href="https://docs.python.org/2/library/cmd.html#cmd.Cmd.postloop" rel="nofollow"><code>Cmd.postloop()</code></a> hooks to trigger loading and saving to the points where the command loop starts and ends.</p> <p>If you don't have <code>readline</code> installed, you could simulate this still by adding a <a href="https://docs.python.org/2/library/cmd.html#cmd.Cmd.precmd" rel="nofollow"><code>precmd()</code> method</a> and record the entered commands yourself.</p>
1
2016-09-14T16:06:50Z
[ "python", "readline", "python-module", "python-cmd" ]
Making Plot of Numpy Array Values
39,495,146
<p>I have a large numpy.ndarray that I want to make a plot of, where the x axis has to do with the values in the array and the y axis shows how often that value has appeared in the array. To be clear, I don't care about the order of the data in the array or if their order gets screwed up, I just want to take the numbers, bin them, and then plot them.</p> <p>Steps I have so far that I want to do, each separate in my Jupyter notebook</p> <ul> <li><p>Open/read my array (it's 1024x1024, so quite large)- step done</p></li> <li><p>Convert array into list- done</p></li> <li><p>Spit out null values in array... currently not working</p></li> <li><p>Bin data to count values... really lost here</p></li> <li><p>Scatter plot- trimmed vs count- this part will be fine once the previous two work, matplotlib and I get along</p> <p>import numpy as np</p> <p>import matplotlib.pyplot as plt</p> <p>scidata = np array of data that's 1024x1024</p> <p>lsci = []</p> <p>for r in range(1024):</p> <pre><code>scilist = scidata[r,:].tolist() lsci.extend(scilist) trimmed = lsci </code></pre> <p>for item in lsci: </p> <pre><code> if 12.58 &lt;= i== 12.59: #the null value I don't want is in this range r.remove(item) </code></pre></li> </ul> <p>I'm sorry, I wish I had more, but this is where things get dicey for me and I'm kinda ashamed to post what I've tried and failed at because most are dead ends. The only real solution I've thought of is binning the data... but that won't work for a scatter plot because the length of the two lists won't be the same and a histogram isn't what I want as my final product anyway. So is there another approach I can be using for this that I'm unaware of? (I feel like there's some chunk of coding knowledge that I just never learned- surely I'm not the first person to want to do this.) Thanks!</p> <p>Edit: Sorry, all my code isn't showing up as code even though I put four spaces...</p>
1
2016-09-14T16:10:56Z
39,496,906
<p>'Binning' is definitely a histogram feature but I get the impression you want a simple pivot table. How about:</p> <ol> <li>Remove undesired value</li> <li>Convert your numpy array to dataframe</li> <li>Create pivot table from dataframe</li> <li>Plot results</li> </ol> <hr> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt % matplotlib inline a = np.random.randint(10, size=100) # array([3, 0, 3, 8, 1, 9, 1, 8,...]) exclude_value = 3 # change as required a_new = [item for item in a if item != exclude_value] # new list without exclude value df = pd.DataFrame(a_new).pivot_table(columns=0, aggfunc='size') x = df.index.values y = df.values plt.bar(x,y) plt.xticks(x) plt.show() </code></pre> <hr> <p>OUTPUT:</p> <p><em>(note how one value has been excluded, in this case 3)</em> <a href="http://i.stack.imgur.com/wX2Tb.png" rel="nofollow"><img src="http://i.stack.imgur.com/wX2Tb.png" alt="enter image description here"></a></p>
0
2016-09-14T18:03:40Z
[ "python", "arrays", "numpy", "matplotlib" ]
Making Plot of Numpy Array Values
39,495,146
<p>I have a large numpy.ndarray that I want to make a plot of, where the x axis has to do with the values in the array and the y axis shows how often that value has appeared in the array. To be clear, I don't care about the order of the data in the array or if their order gets screwed up, I just want to take the numbers, bin them, and then plot them.</p> <p>Steps I have so far that I want to do, each separate in my Jupyter notebook</p> <ul> <li><p>Open/read my array (it's 1024x1024, so quite large)- step done</p></li> <li><p>Convert array into list- done</p></li> <li><p>Spit out null values in array... currently not working</p></li> <li><p>Bin data to count values... really lost here</p></li> <li><p>Scatter plot- trimmed vs count- this part will be fine once the previous two work, matplotlib and I get along</p> <p>import numpy as np</p> <p>import matplotlib.pyplot as plt</p> <p>scidata = np array of data that's 1024x1024</p> <p>lsci = []</p> <p>for r in range(1024):</p> <pre><code>scilist = scidata[r,:].tolist() lsci.extend(scilist) trimmed = lsci </code></pre> <p>for item in lsci: </p> <pre><code> if 12.58 &lt;= i== 12.59: #the null value I don't want is in this range r.remove(item) </code></pre></li> </ul> <p>I'm sorry, I wish I had more, but this is where things get dicey for me and I'm kinda ashamed to post what I've tried and failed at because most are dead ends. The only real solution I've thought of is binning the data... but that won't work for a scatter plot because the length of the two lists won't be the same and a histogram isn't what I want as my final product anyway. So is there another approach I can be using for this that I'm unaware of? (I feel like there's some chunk of coding knowledge that I just never learned- surely I'm not the first person to want to do this.) Thanks!</p> <p>Edit: Sorry, all my code isn't showing up as code even though I put four spaces...</p>
1
2016-09-14T16:10:56Z
39,497,035
<p>The same thing Nick Braunage proposed, but without pandas:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt a = np.random.randint(10, size=100) # or use yourarray.ravel() here to make it flat num, bins, _ = plt.hist(a) plt.show() </code></pre> <p>or</p> <pre><code>num, bins = np.histogram(a) plt.bar(bins[:-1], num) plt.show() </code></pre>
0
2016-09-14T18:12:31Z
[ "python", "arrays", "numpy", "matplotlib" ]
Python3 for-loop even or odd
39,495,160
<p>Hi i got stuck in an exercise i have in school. and could use some help. </p> <p>Create a for-loop that goes through the numbers:</p> <pre><code>67,2,12,28,128,15,90,4,579,450 </code></pre> <p>If the current number is even, you should add it to a variable and if the current number is odd, you should subtract it from the variable.</p> <p>Answer with the final result.</p> <p>Here is my code so far.</p> <pre><code>def listnum(a): for num in [67, 2, 12, 28, 128, 15, 90, 4, 579, 450]: if (num%2): a = a + num else: a = a - num return a </code></pre> <p>ANSWER = a</p> <p>when i run this program i get the answer:</p> <pre><code>5.6 FAIL. You said: 4 class 'int'&gt; </code></pre> <p>the correct answer should be <code>53</code> if initial value of <code>a</code> is <code>0</code>. </p> <p>can any one help me and tell what im doing wrong? or maybe point me in the right direction. thank you! </p>
1
2016-09-14T16:11:43Z
39,495,285
<p>It looks like you mostly need to adjust your condition.</p>

<pre><code>def listSum(a):
    for num in [67, 2, 12, 28, 128, 15, 90, 4, 579, 450]:
        if(num % 2 == 0): #subtle difference here.
            a += num
        else:
            a -= num
    return a
</code></pre>

<p>The only change is that subtle difference in the condition: <code>num % 2 == 0</code> is true for even numbers, whereas <code>num % 2</code> is true for odd ones, so your original code added the odd numbers and subtracted the even ones.</p>
0
2016-09-14T16:18:50Z
[ "python", "python-3.x", "for-loop" ]
Python3 for-loop even or odd
39,495,160
<p>Hi i got stuck in an exercise i have in school. and could use some help. </p> <p>Create a for-loop that goes through the numbers:</p> <pre><code>67,2,12,28,128,15,90,4,579,450 </code></pre> <p>If the current number is even, you should add it to a variable and if the current number is odd, you should subtract it from the variable.</p> <p>Answer with the final result.</p> <p>Here is my code so far.</p> <pre><code>def listnum(a): for num in [67, 2, 12, 28, 128, 15, 90, 4, 579, 450]: if (num%2): a = a + num else: a = a - num return a </code></pre> <p>ANSWER = a</p> <p>when i run this program i get the answer:</p> <pre><code>5.6 FAIL. You said: 4 class 'int'&gt; </code></pre> <p>the correct answer should be <code>53</code> if initial value of <code>a</code> is <code>0</code>. </p> <p>can any one help me and tell what im doing wrong? or maybe point me in the right direction. thank you! </p>
1
2016-09-14T16:11:43Z
39,495,542
<p>I think it would make more sense if your function input is the list and not the return value. Also (as others have noted) you need <code>num % 2 == 0</code> and your indentation is not quite right. Try this instead:</p> <pre><code>def listSum(l): ans = 0 for num in l: if num % 2 == 0: ans += num else: ans -= num return ans </code></pre> <p>Note that you could do this in a single line:</p> <pre><code>def listSum(l): return sum(i if i % 2 == 0 else -i for i in l) </code></pre> <p><code>print(listSum([67, 2, 12, 28, 128, 15, 90, 4, 579, 450]))</code> prints <code>53</code> in both cases.</p>
0
2016-09-14T16:32:44Z
[ "python", "python-3.x", "for-loop" ]
Immediate vs load-on-first-access availability
39,495,181
<p>The <a href="http://docs.sqlalchemy.org/en/rel_1_0/orm/tutorial.html#adding-and-updating-objects" rel="nofollow">docs say</a> (at the end of the linked section):</p> <blockquote> <p>After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access.</p> </blockquote> <p>What's the difference between <code>immediately</code> and <code>load-on-first-access</code>? Doesn't SQLAlchemy know the new identifiers and defaults after it completed the <code>INSERT</code> operation, and so don't they become available even without reloading?</p>
0
2016-09-14T16:12:52Z
39,499,583
<p>SQLAlchemy uses <code>INSERT .. RETURNING</code> for DBs that support it to fetch primary keys, or special functions like <a href="http://dev.mysql.com/doc/refman/5.7/en/mysql-insert-id.html" rel="nofollow"><code>mysql_insert_id</code></a> for DBs that don't.</p> <p>For default values, it tries to use <code>RETURNING</code> but if it's not supported, it has to use another <code>SELECT</code>.</p> <p>Furthermore, it may not be possible to use <code>RETURNING</code> (even on supported DBs) in specific situations (e.g. when the <code>INSERT</code> statements are batched, see <a href="https://groups.google.com/d/msg/sqlalchemy/3l6RhfAFpvM/3OqL6aXJo70J" rel="nofollow">this mailing list post</a>).</p> <p>Finally, SQLAlchemy only fetches the primary key by default after an <code>INSERT</code>. Thus, only the primary key is available "immediately" while unconfigured default columns are "load-on-first-access". If you need it to fetch generated values, specify <a href="http://docs.sqlalchemy.org/en/latest/core/defaults.html#triggered-columns" rel="nofollow"><code>server_default=FetchedValue()</code></a>. (In the case where the DB does not support <code>RETURNING</code> and you specify <code>FetchedValue</code> for a column, I am not certain whether it still fetches the value with an immediate <code>SELECT</code> or simply falls back to "load-on-first-access".)</p>
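<p>For illustration, a minimal sketch of marking a server-generated column with <code>FetchedValue</code> (the model and column names are made up):</p>

<pre><code>from sqlalchemy import Column, Integer, DateTime
from sqlalchemy.schema import FetchedValue
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    # value filled in by the database (e.g. a trigger or server-side default)
    created_at = Column(DateTime, server_default=FetchedValue())
</code></pre>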
1
2016-09-14T21:01:16Z
[ "python", "sqlalchemy" ]
Immediate vs load-on-first-access availability
39,495,181
<p>The <a href="http://docs.sqlalchemy.org/en/rel_1_0/orm/tutorial.html#adding-and-updating-objects" rel="nofollow">docs say</a> (at the end of the linked section):</p> <blockquote> <p>After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access.</p> </blockquote> <p>What's the difference between <code>immediately</code> and <code>load-on-first-access</code>? Doesn't SQLAlchemy know the new identifiers and defaults after it completed the <code>INSERT</code> operation, and so don't they become available even without reloading?</p>
0
2016-09-14T16:12:52Z
39,506,678
<p><strong>immediately:</strong> SQLAlchemy gets a primary key from the database for each session object inserted and assigns it to the session object. The session object needs the primary key immediately since, prior to the transaction snapshot being committed, the object enters the persistent state and is added to the Identity Map collection which keys it on its primary key. <a href="http://pyvideo.org/pycon-us-2013/the-sqlalchemy-session-in-depth-0.html" rel="nofollow">[The SQLAlchemy Session - In Depth]</a></p> <p><strong>load-on-first-access:</strong> database-generated defaults associated with an object may never be used in its session. So to conserve resources these are <a href="http://martinfowler.com/eaaCatalog/lazyLoad.html" rel="nofollow">lazy loaded</a> as needed.</p>
1
2016-09-15T08:45:31Z
[ "python", "sqlalchemy" ]
Noise removal in spatial data
39,495,287
<p>I have an array with two different values that characterize this image: <a href="http://i.stack.imgur.com/rnOYu.png" rel="nofollow"><img src="http://i.stack.imgur.com/rnOYu.png" alt="enter image description here"></a></p> <p>I would like to keep the linear trends in red and remove the "noise" (single red points). Is there a good way to do this? </p>
0
2016-09-14T16:19:05Z
39,508,198
<p>If there is no way to determine the noise from the signal based on a threshold value (i.e. all the red points have the same value or are just a 1/0 flag), a relatively simple and easy to implement approach could be to remove the noise based on the size of the clumps. </p> <p>Take a look at <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.ndimage.measurements.label.html" rel="nofollow">scipy's label</a>. This will give you an array where each separate 'clump' has an individual number. It is then a case of just removing those features that are less than some threshold number of pixels (<code>n_thresh</code> below).</p>

<pre><code>&gt;&gt;&gt; from scipy.ndimage.measurements import label
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; n_thresh = 1
&gt;&gt;&gt; a = np.array([[0,0,1,1,0,0],[0,0,0,1,0,0],
                  [1,1,0,0,1,0],[0,0,0,1,0,0],
                  [1,1,0,0,1,0],[0,0,1,1,0,0]])
&gt;&gt;&gt; a
array([[0, 0, 1, 1, 0, 0],
       [0, 0, 0, 1, 0, 0],
       [1, 1, 0, 0, 1, 0],
       [0, 0, 0, 1, 0, 0],
       [1, 1, 0, 0, 1, 0],
       [0, 0, 1, 1, 0, 0]])
&gt;&gt;&gt; labeled_array, num_features = label(a)
&gt;&gt;&gt; binc = np.bincount(labeled_array.ravel())
&gt;&gt;&gt; noise_idx = np.where(binc &lt;= n_thresh)
&gt;&gt;&gt; shp = a.shape
&gt;&gt;&gt; mask = np.in1d(labeled_array, noise_idx).reshape(shp)
&gt;&gt;&gt; a[mask] = 0
&gt;&gt;&gt; a
array([[0, 0, 1, 1, 0, 0],
       [0, 0, 0, 1, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 0, 1, 1, 0, 0]])
</code></pre>

<p>As the features are diagonal, you may want to pay attention to the example in <code>label</code>'s documentation that groups diagonally-touching pixels in the same clump.</p>
1
2016-09-15T09:58:16Z
[ "python", "numpy", "scipy", "noise-reduction" ]
Creating Multiple Plots Using Matplotlib
39,495,298
<p>I'm trying to create multiple separate plots using Matplotib, then save these to a single PDF document. Here's my code:</p> <pre><code>pdf = matplotlib.backends.backend_pdf.PdfPages('Activity_Report.pdf') fig1 = plt.figure(1) fig1.figure(figsize=(11.69, 8.27)) ax1 = fig1.add_subplot(111) # ******** product 1 ******** ax1.plot(Prod_01['Date'], Prod_01['Orders'], marker='o', label='Orders', color='navy', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Orders_MA'], linestyle='--', label='Orders (10-d)', color='darkblue', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Volume'], marker='o', label='Volume', color='firebrick', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Volume_MA'], linestyle='--', label='Volume (10-d)', color='firebrick', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Pass'], marker='o', label='Pass', color='darkgreen', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Pass_MA'], linestyle='--', label='Pass (10-d)', color='darkgreen', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Request'], marker='o', label='Request', color='cyan', linewidth='2') ax1.plot(Prod_01['Date'], Prod_01['Request_MA'], linestyle='--', label='Request (10-d)', color='cyan', linewidth='2') ax1.set_title('Prod_01', fontsize='20', rasterized=True) ax1.tick_params(axis='both', which='major', labelsize='10') ax1.legend(loc='upper left', fontsize='10') ax1.get_yaxis().set_major_formatter(tkr.FuncFormatter(lambda x, p: format(int(x), ','))) ax1.xaxis.set_major_formatter(custom_x_axis_format) # ******** product 2 ******** fig2 = plt.figure(2) fig2.figure(figsize=(11.69, 8.27)) ax2 = fig2.add_subplot(111) ax2.plot(Prod_02['Date'], Prod_02['Order'], marker='o', label='Order', color='navy', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Order_MA'], linestyle='--', label='Order (10-d)', color='darkblue', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Volume'], marker='o', label='Volume', color='firebrick', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Volume_MA'], linestyle='--', label='Volume (10-d)', color='firebrick', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Pass'], marker='o', label='Pass', color='darkgreen', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Pass_MA'], linestyle='--', label='Pass (10-d)', color='darkgreen', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Request'], marker='o', label='Request', color='cyan', linewidth='2') ax2.plot(Prod_02['Date'], Prod_02['Request_MA'], linestyle='--', label='Request (10-d)', color='cyan', linewidth='2') ax2.set_title('Prod_02', fontsize='20', rasterized=True) ax2.tick_params(axis='both', which='major', labelsize='10') ax2.legend(loc='upper left', fontsize='10') ax2.get_yaxis().set_major_formatter(tkr.FuncFormatter(lambda x, p: format(int(x), ','))) ax2.xaxis.set_major_formatter(custom_x_axis_format) pdf.savefig() pdf.close() </code></pre> <p>The problem I'm having is that this code is plotting to a <strong>single</strong> figure (rather than <strong>two separate</strong> figures. That is, <code>product 1</code> and <code>product 2</code> are plotted on a single figure (i.e. on top of each other). 
I tried to create separate plots using <code>Figure()</code> (as seen on a few posts on StackOverflow), but that doesn't seem to work.</p> <p>It's likely that I have <code>fig1</code>, <code>ax1</code>, <code>fig2</code> and <code>ax2</code> defined incorrectly (being new to Python, I still don't understand their use 100%).</p> <p>Does anyone see why this code is producing a single plot instead of the intended two separate plots?</p> <p>Thanks in advance!</p>
1
2016-09-14T16:19:35Z
39,495,530
<p>Your code is a bit over-complicated. These two lines are redundant:</p> <pre><code>fig1 = plt.figure(1) fig1.figure(figsize=(11.69, 8.27)) </code></pre> <p>I would just do:</p> <pre><code>FIGSIZE = (11.69, 8.27) fig1, ax1 = plt.subplots(figsize=FIGSIZE) # plot things fig2, ax2 = plt.subplots(figsize=FIGSIZE) # plot more things # etc etc pdf = matplotlib.backends.backend_pdf.PdfPages('Activity_Report.pdf') for fig in [fig1, fig2, ...]: pdf.savefig(fig) plt.close('all') </code></pre>
2
2016-09-14T16:32:22Z
[ "python", "matplotlib" ]
passing arguments python script
39,495,309
<p>I am trying to work out how I can pass these files as arguments to a .py file, and construct dataframes from them</p> <pre><code>pd.read_csv('C:/Users/Demonstrator/Downloads/file1.csv',delimiter=';', parse_dates=[0], infer_datetime_format = True) df_energy2=pd.read_csv('C:/Users/Demonstrator/Downloads/file2.csv', delimiter=';', parse_dates=[0], infer_datetime_format = True) </code></pre> <p>Thank you</p>
-3
2016-09-14T16:20:02Z
39,495,444
<p>Passing arguments is simple. You can have a look at <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">https://docs.python.org/3/library/argparse.html</a></p> <p>The easiest way to pass an argument to a Python script is by adding these lines to your script and modifying them as needed:</p> <pre><code>if __name__ == '__main__': import sys if len(sys.argv) != 2: # here I am expecting only one commandline argument print("USAGE: &lt;Scriptname&gt; &lt;commandlineargument&gt;") sys.exit(1) commandlineValue = sys.argv[1] # sys.argv[0] contains the file name you are running # Do whatever you want to do with the commandlineValue, it will just print it print("CommandlineValue Passed is : {}".format(commandlineValue)) </code></pre>
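<p>Tying this back to the question's pandas code, a rough sketch (variable and file names are just placeholders) that takes the two CSV paths from the command line, run as e.g. <code>python myscript.py file1.csv file2.csv</code>:</p> <pre><code>
import sys
import pandas as pd

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print("USAGE: python {} &lt;file1.csv&gt; &lt;file2.csv&gt;".format(sys.argv[0]))
        sys.exit(1)

    # build one dataframe per file path passed on the command line
    df_energy1 = pd.read_csv(sys.argv[1], delimiter=';',
                             parse_dates=[0], infer_datetime_format=True)
    df_energy2 = pd.read_csv(sys.argv[2], delimiter=';',
                             parse_dates=[0], infer_datetime_format=True)
    print(df_energy1.head())
    print(df_energy2.head())
</code></pre>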
1
2016-09-14T16:28:19Z
[ "python", "pandas" ]
Sort input file according to first column, no delimiters but spaces
39,495,312
<p>I'm a Python beginner and trying to post process the a long txt file which is a list without delimiters, only spaces. I wanna sort it according to the first column. </p> <p>The code compiles fine, but it only sorts my output file according to the very first value in the first column, but not according to the number itself. I also tried itemgetter operator without success.</p> <p>I have tried this for hours now and hope anyone can help me. Why does my line split function not have the effect that I want?</p> <pre><code>f = open("traj_nvt_20000000.txt","r+") lines = f.readlines() for line in f.readlines(): line = line.strip() parts = line.split(" ") lines = sorted(lines, key=lambda line: line[0]) with open('test123.txt', 'w') as text: text.writelines(lines) </code></pre> <p>An excerpt of the text file table that I want to sort:</p> <pre><code>54 2 -9.5377 -4.02842 -7.51558 7 55 2 -9.6834 -4.88656 -7.29358 7 459 2 -8.76522 -8.30942 -10.144 58 50 1 -9.33774 -4.46175 -7.24097 7 56 2 -8.84618 -4.59922 -7.44773 7 462 2 -10.3377 -9.37008 -10.2265 58 460 2 -8.59323 -8.64832 -9.32914 58 457 1 -8.96511 -8.38283 -9.63619 58 461 2 -9.0727 -7.89321 -9.40869 58 369 1 -5.93643 -6.20083 -7.56102 47 504 2 -7.94033 -2.66938 -10.3925 63 371 2 -6.24752 -6.57434 -7.3023 47 </code></pre> <p>Help is very much appreciated.</p>
0
2016-09-14T16:20:08Z
39,495,330
<p>you have to sort as numerical, not alphanumerical, so convert your string to integer or float (I don't have all your data, I'm not sure if they're all integers):</p> <pre><code>lines = sorted(lines, key=lambda line: float(line[0])) </code></pre> <p>but it would be even better to sort on all the values by returning a tuple of floats so if first values are equal, the rest can be used to discriminate:</p> <pre><code>lines = sorted(lines, key=lambda line: [float(x) for x in line]) </code></pre> <p>BTW your sample code is incorrect (there's a mix of readlines &amp; loops at the start which does not work). Here's a small test which works:</p> <pre><code>f = open(r"U:\test.txt","r") # sample file in the question lines=[] for line in f: lines.append(line.strip().split(" ")) f.close() lines.sort(key=lambda line: [float(x) for x in line]) for l in lines: print(",".join(l)) </code></pre> <p>result:</p> <pre><code>50,1,-9.33774,-4.46175,-7.24097,7 54,2,-9.5377,-4.02842,-7.51558,7 55,2,-9.6834,-4.88656,-7.29358,7 56,2,-8.84618,-4.59922,-7.44773,7 369,1,-5.93643,-6.20083,-7.56102,47 371,2,-6.24752,-6.57434,-7.3023,47 457,1,-8.96511,-8.38283,-9.63619,58 459,2,-8.76522,-8.30942,-10.144,58 460,2,-8.59323,-8.64832,-9.32914,58 461,2,-9.0727,-7.89321,-9.40869,58 462,2,-10.3377,-9.37008,-10.2265,58 504,2,-7.94033,-2.66938,-10.3925,63 </code></pre>
1
2016-09-14T16:21:43Z
[ "python", "sorting", "split", "lines" ]
Python - How to choose line in console to read from
39,495,407
<p>As I understand it, input() reads a new line every time it's called, but is there a way for me to make it read, say, the third line of input and then read the second line of input without first storing the second line?</p>
0
2016-09-14T16:26:31Z
39,495,935
<p>No. The <code>input()</code> function reads one line from the standard input stream and returns it to your code. Once a line has been read it is gone from the stream, so there is no way to go back and re-read an earlier line. </p>
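<p>In other words, if you need to use lines out of order you have to store them yourself first; a minimal sketch:</p> <pre><code>
# Read the lines up front, then use them in any order you like
lines = [input() for _ in range(3)]
print(lines[2])  # the third line that was entered
print(lines[1])  # the second line
</code></pre>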
0
2016-09-14T16:57:42Z
[ "python", "input" ]
How do I install scipy & its optimization module (using Python 2.7.10)
39,495,417
<p>I've run the following optimization example code (and alternatives) and it keeps giving me the following error(s) -- </p> <blockquote> <p>"Input Error: cannot import optimize" OR "No module named optimize"</p> </blockquote> <pre><code>import numpy as np from scipy import optimize from scipy.optimize import fmin_slsqp def f(x): return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2) def constraint(x): return np.atleast_1d(1.5 - np.sum(np.abs(x))) scipy.optimize.fmin_slsqp(f, np.array([0,0]), ieqcons = [constraint,]) </code></pre> <p>I've also tried to update Scipy and Optimize with the pip command. Scipy is updated and I get the following when trying to update Optimize: </p> <p>"Could not find a version that satisfies the requirement optimize (from versions: ) No matching distribution found for optimize"</p> <p>Thanks in advance</p>
-1
2016-09-14T16:27:13Z
39,496,105
<p>Can you upgrade <code>scipy</code> to version 0.17? I can't reproduce the error with version 0.17:</p> <p><code>pip install scipy==0.17.0</code></p>
0
2016-09-14T17:07:42Z
[ "python", "numpy", "optimization", "scipy" ]
MySQL python setup error
39,495,498
<p>I am following <a href="https://www.tutorialspoint.com/python/python_database_access.htm" rel="nofollow">https://www.tutorialspoint.com/python/python_database_access.htm</a> to connect Python with SQL. </p> <pre><code>gunzip MySQL-python-1.2.2.tar.gz tar -xvf MySQL-python-1.2.2.tar cd MySQL-python-1.2.2 python setup.py build python setup.py install </code></pre> <p>The last command "python setup.py install" gives :</p> <p>error: [Errno 13] Permission denied: '/anaconda/lib/python2.7/site-packages/easy-install.pth'</p>
0
2016-09-14T16:30:49Z
39,496,442
<p>Using Anaconda, you should use <code>conda install &lt;package-name&gt;</code></p> <p><strong>Linux / Windows:</strong> <code>conda install mysql-python</code></p> <p><strong>Mac OS:</strong> <code>conda install --channel https://conda.anaconda.org/coursera mysql-python</code></p> <p>Then it will check for dependencies and install them for you.</p>
1
2016-09-14T17:31:40Z
[ "python", "mysql", "sql", "python-2.7" ]
Writing cross-compatible python2/python3 code in pycharm
39,495,557
<p>I've taken care to make sure library works on both python2 and python3, but pycharm adds some vexatious red squiggles as seen below </p> <p><a href="http://i.stack.imgur.com/EBCpC.png"><img src="http://i.stack.imgur.com/EBCpC.png" alt="enter image description here"></a></p> <p>If I switch the project interpreter to python 3.5 instead, the nag just moves onto the other import. <strong>Which inspection is this? I want to turn it off.</strong> </p>
9
2016-09-14T16:33:45Z
40,065,652
<p>Although it doesn't solve the issue for all cases, you can solve this particular problem by using the <code>future</code> package.</p> <p>As you can see <a href="http://python-future.org/imports.html#imports-of-builtins">here</a>, the <code>future</code> package provides its own version of <code>builtins</code> for python 2 and python 3. By relying on this package instead of doing it yourself, you can import <code>future</code>'s implementation of builtins, thus removing the problematic code and avoiding Pycharm's (erroneous) error.</p>
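<p>For illustration, after <code>pip install future</code> the compatible imports look roughly like this (which names you import depends on which builtins you actually use):</p> <pre><code>
# Same code runs on Python 2 and Python 3 once `future` is installed
from builtins import input, range, str

for i in range(3):
    name = input('Name? ')
    print(str(i) + ': ' + name)
</code></pre>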
6
2016-10-16T00:35:14Z
[ "python", "pycharm" ]
How to evaluate an expression in for loop using Python?
39,495,700
<p>I want to evaluate an expression inside for loop. I am doing:</p> <pre><code>for i in range(0,255): Q[i+1,1] = (np.floor_divide(i, q) * q + q/2) </code></pre> <p>but this returns an error saying </p> <blockquote> <p>IndexError: index 1 is out of bounds for axis 1 with size 1". </p> </blockquote>
1
2016-09-14T16:42:37Z
39,495,741
<p>The size is 256x1, but you still have to start indexing at 0. So you need <code>Q[i,0]</code>. </p>
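<p>A sketch of the corrected loop (keeping the original <code>range(0, 255)</code>; use <code>range(256)</code> if all 256 rows should be filled):</p> <pre><code>
for i in range(0, 255):
    Q[i, 0] = np.floor_divide(i, q) * q + q / 2
</code></pre>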
2
2016-09-14T16:45:39Z
[ "python" ]
How to evaluate an expression in for loop using Python?
39,495,700
<p>I want to evaluate an expression inside for loop. I am doing:</p> <pre><code>for i in range(0,255): Q[i+1,1] = (np.floor_divide(i, q) * q + q/2) </code></pre> <p>but this returns an error saying </p> <blockquote> <p>IndexError: index 1 is out of bounds for axis 1 with size 1". </p> </blockquote>
1
2016-09-14T16:42:37Z
39,495,768
<p><em>python</em> (and <em>numpy</em>) uses <a href="https://en.wikipedia.org/wiki/Zero-based_numbering" rel="nofollow">zero-based indexing</a>, so the first position is 0. You should change your loop to:</p> <pre><code>for i in range(0,255): Q[i,0] = (np.floor_divide(i, q) * q + q/2) # ---^-^--- </code></pre>
0
2016-09-14T16:47:22Z
[ "python" ]
How to handle output with Luigi
39,495,757
<p>I'm trying to grasp how luigi works, and I get the idea, but actual implementation is a bit harder ;) This is what i have:</p> <pre><code>class MyTask(luigi.Task): x = luigi.IntParameter() def requires(self): return OtherTask(self.x) def run(self): print(self.x) class OtherTask(luigi.Task): x = luigi.IntParameter() def run(self): y = self.x + 1 print(y) </code></pre> <p>And this fails with <code>RuntimeError: Unfulfilled dependency at run time: OtherTask_3_5862334ee2</code>. I've figured that I need to produce output using <code>def output(self):</code> to workaround this issue\feature. And I can't comprehend how do I produce reasonable output without writing to a file, say:</p> <pre><code>def output(self): return luigi.LocalTarget('words.txt') def run(self): words = [ 'apple', 'banana', 'grapefruit' ] with self.output().open('w') as f: for word in words: f.write('{word}\n'.format(word=word)) </code></pre> <p>I've tried reading the documentation, but I can't understand the concept behind output at all. What if I need to output to screen only. What if I need to output an object to another task? Thanks!</p>
0
2016-09-14T16:46:35Z
39,502,239
<blockquote> <p>What if I need to output an object to another task?</p> </blockquote> <p>Luigi tasks can run in different processes. Therefore you usually have to write to disk, a database, a pickle, or some other external mechanism that allows data to be exchanged between the processes (and whose existence can be verified) if you want to pass on an object that is the result of a task.</p> <p>As an alternative to writing the output() method, which requires a target, you can also override the complete() method, where you can write any custom logic that determines whether the task counts as complete.</p>
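<p>A rough sketch of overriding <code>complete()</code> with an external marker file (the marker path here is just a hypothetical example):</p> <pre><code>
import os
import luigi

class OtherTask(luigi.Task):
    x = luigi.IntParameter()

    def _marker(self):
        # hypothetical location used to record completion across processes
        return '/tmp/other_task_{}.done'.format(self.x)

    def run(self):
        print(self.x + 1)
        open(self._marker(), 'w').close()

    def complete(self):
        # custom completion check instead of defining output()
        return os.path.exists(self._marker())
</code></pre>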
2
2016-09-15T02:16:22Z
[ "python", "luigi" ]
validation of python script for extracting values based on regex
39,495,815
<p>I have data of the form: </p> <pre><code>submission #,scores 882,"Overall evaluation: 1 Invite to interview: 1 Strength or novelty of the idea (1): 4 Strength or novelty of the idea (2): 4 Strength or novelty of the idea (3): 3 Use or provision of open data (1): 3 Use or provision of open data (2): 3 ""Open by default"" (1): 4 ""Open by default"" (2): 4 Value proposition and potential scale (1): 2 Value proposition and potential scale (2): 1 Market opportunity and timing (1): 3 Market opportunity and timing (2): 1 Triple bottom line impact (1): 2 Triple bottom line impact (2): 2 Triple bottom line impact (3): 4 Knowledge and skills of the team (1): 1 Knowledge and skills of the team (2): 2 Capacity to realise the idea (1): 1 Capacity to realise the idea (2): 3 Capacity to realise the idea (3): 1 Appropriateness of the budget to realise the idea: 3" 882,"Overall evaluation: 2 Invite to interview: 3 Strength or novelty of the idea (1): 4 Strength or novelty of the idea (2): 1 Strength or novelty of the idea (3): 4 Use or provision of open data (1): 4 Use or provision of open data (2): 3 ""Open by default"" (1): 3 ""Open by default"" (2): 4 Value proposition and potential scale (1): 4 Value proposition and potential scale (2): 4 Market opportunity and timing (1): 4 Market opportunity and timing (2): 4 Triple bottom line impact (1): 4 Triple bottom line impact (2): 1 Triple bottom line impact (3): 3 Knowledge and skills of the team (1): 3 Knowledge and skills of the team (2): 2 Capacity to realise the idea (1): 2 Capacity to realise the idea (2): 3 Capacity to realise the idea (3): 3 Appropriateness of the budget to realise the idea: 3" 883,"Overall evaluation: 1 Invite to interview: 1 Strength or novelty of the idea (1): 4 Strength or novelty of the idea (2): 3 Strength or novelty of the idea (3): 4 Use or provision of open data (1): 2 Use or provision of open data (2): 3 ""Open by default"" (1): 3 ""Open by default"" (2): 3 Value proposition and potential scale (1): 2 Value proposition and potential scale (2): 1 Market opportunity and timing (1): 3 Market opportunity and timing (2): 1 Triple bottom line impact (1): 1 Triple bottom line impact (2): 4 Triple bottom line impact (3): 2 Knowledge and skills of the team (1): 3 Knowledge and skills of the team (2): 3 Capacity to realise the idea (1): 1 Capacity to realise the idea (2): 3 Capacity to realise the idea (3): 4 Appropriateness of the budget to realise the idea: 3" 883,"Overall evaluation: 1 Invite to interview: 1 Strength or novelty of the idea (1): 2 Strength or novelty of the idea (2): 2 Strength or novelty of the idea (3): 1 Use or provision of open data (1): 2 Use or provision of open data (2): 1 ""Open by default"" (1): 3 ""Open by default"" (2): 2 Value proposition and potential scale (1): 2 Value proposition and potential scale (2): 2 Market opportunity and timing (1): 2 Market opportunity and timing (2): 2 Triple bottom line impact (1): 1 Triple bottom line impact (2): 2 Triple bottom line impact (3): 2 Knowledge and skills of the team (1): 4 Knowledge and skills of the team (2): 2 Capacity to realise the idea (1): 2 Capacity to realise the idea (2): 2 Capacity to realise the idea (3): 3 Appropriateness of the budget to realise the idea: 3" 885,"Overall evaluation: 2 Invite to interview: 1 Strength or novelty of the idea (1): 2 Strength or novelty of the idea (2): 2 Strength or novelty of the idea (3): 2 Use or provision of open data (1): 2 Use or provision of open data (2): 2 ""Open by default"" (1): 2 ""Open by default"" 
(2): 2 Value proposition and potential scale (1): 1 Value proposition and potential scale (2): 2 Market opportunity and timing (1): 2 Market opportunity and timing (2): 1 Triple bottom line impact (1): 2 Triple bottom line impact (2): 1 Triple bottom line impact (3): 1 Knowledge and skills of the team (1): 4 Knowledge and skills of the team (2): 2 Capacity to realise the idea (1): 1 Capacity to realise the idea (2): 3 Capacity to realise the idea (3): 2 Appropriateness of the budget to realise the idea: 3" </code></pre> <p>and the following python script: </p> <pre><code>map = {} lines=open("new_data.csv",'r').read().splitlines() for l in lines: data = l.split('"Overall evaluation:') if len(data) == 2: if data[0] not in map.keys(): map[data[0]] = (0,0) map[data[0]] = (map[data[0]][0]+int(data[1]) , map[data[0]][1]+1) for x, y in map.items(): print(str(x) + ", " + str(y[0]/y[1])) </code></pre> <p>what I think is happening is that it takes the average of the two <code>Overall evaluation:</code> numbers and outputs it next to the submission number, is that correct? </p>
-2
2016-09-14T16:50:29Z
39,495,940
<p>Your <code>map</code> values are tuples holding the running total of the values seen for a given submission and the number of items seen.</p> <p>Dividing the first by the second does indeed give the average (though under Python 2 the integer division truncates the result -- consider casting one or both operands to floating-point if you want a floating-point result rather than an integer one).</p>
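<p>For instance, one hypothetical way to adjust the final print loop for a floating-point average:</p> <pre><code>
for x, y in map.items():
    print(str(x) + ", " + str(float(y[0]) / y[1]))
</code></pre>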
1
2016-09-14T16:57:59Z
[ "python", "regex" ]
How to convert python list of tuples into tree?
39,495,924
<p>I have a list of tuples like</p> <pre><code>list_of_tuples = [(number, name, id, parent_id), (number, name, id, parent_id), ] </code></pre> <p>I am trying to sort it into an ordered structure like:</p> <pre><code>{ parent: [(id, name), (id, name)], parent: {parent: [(id, name)] { </code></pre> <p>So, any node could have a parent and/or children I tried with:</p> <pre><code>tree = defaultdict(lambda: [None, ()]) ancestors = set([item[3] for item in list_of_tuples]) for items in list_of_tuples: children_root = {} descendants = [] number, name, id, parent = items if parent is None: tree[id] = [(id, name)] elif parent: if parent not in tree.keys(): node = tree.get(parent) node.append((id, name)) children = (id, name) tree[parent].append(children) </code></pre> <p>But I'm losing deep hierarchy when a node has both a parent and children</p> <p>How do I make the ordering work correctly?</p>
2
2016-09-14T16:57:21Z
39,497,141
<p>I propose to represent the tree nodes as tuples ((id, name), dict_of_children).</p> <pre><code>list_of_tuples = [(1, 'name1', 1, None), (2, 'name2', 2, 1), (3, 'name3', 3, 1), (4, 'name4', 4, 2), (5, 'name5', 5, 2), (6, 'name5', 6, None), (7, 'name5', 7, 6), ] def build_tree(list_of_tuples): """ &gt;&gt;&gt; import pprint &gt;&gt;&gt; pprint.pprint(build_tree(list_of_tuples)) {1: ((1, 'name1'), {2: ((2, 'name2'), {4: ((4, 'name4'), {}), 5: ((5, 'name5'), {})}), 3: ((3, 'name3'), {})}), 6: ((6, 'name5'), {7: ((7, 'name5'), {})})} """ all_nodes = {n[2]:((n[2], n[1]), {}) for n in list_of_tuples} root = {} for item in list_of_tuples: number, name, id, parent = item if parent is not None: all_nodes[parent][1][id] = all_nodes[id] else: root[id] = all_nodes[id] return root </code></pre>
1
2016-09-14T18:20:41Z
[ "python", "algorithm", "tree", "binary-search-tree" ]
NoReverseMatch at /courses/course/1/1/
39,495,962
<p>I was editing the template to include a hyperlink. But when I do I get NoReverseMatch error.</p> <p>Reverse for 'views.hello_world' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []</p> <p>The template file: </p> <p>layout.html</p> <pre><code>{% load static from staticfiles %} &lt;!DOCTYPE html&gt; &lt;html lang="en"&gt; &lt;head&gt; &lt;meta charset="UTF-8"&gt; &lt;title&gt;{% block title %}{% endblock %}&lt;/title&gt; &lt;link rel="stylesheet" href="{% static 'css/layout.css' %}"&gt; &lt;/head&gt; &lt;body&gt; &lt;div class="site-container"&gt; &lt;nav&gt; &lt;a href="{% url 'views.hello_world' %}"&gt;Home&lt;/a&gt; [**Error here**] &lt;/nav&gt; {% block content %}{% endblock %} &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>urls.py</p> <pre><code>from django.conf.urls import include, url from django.contrib import admin from django.contrib.staticfiles.urls import staticfiles_urlpatterns from . import views urlpatterns = [ url(r'^courses/', include('courses.urls')), url(r'^admin/', admin.site.urls), url(r'^$', views.hello_world) ] urlpatterns+=staticfiles_urlpatterns() </code></pre> <p>views.py</p> <pre><code>from django.shortcuts import render def hello_world(request): return render(request, 'home.html') </code></pre> <p>The line, Home when removed I don't get any error. But I add, the NoReverseMatch arises. What am I doing wrong? </p>
0
2016-09-14T16:59:12Z
39,496,039
<p>You need to give the URL a name, and refer to that name in the url tag.</p> <pre><code>url(r'^$', views.hello_world, name='hello_world') </code></pre> <p>...</p> <pre><code>&lt;a href="{% url 'hello_world' %}"&gt; </code></pre>
0
2016-09-14T17:03:45Z
[ "python", "django" ]
Uploading files using Browse Button in Jupyter and Using/Saving them
39,495,994
<p>I came across <a href="https://github.com/peteut/ipython-file-upload/blob/master/README.rst" rel="nofollow">this snippet</a> for uploading files in Jupyter however I don't know how to save this file on the machine that executes the code or how to show the first 5 lines of the uploaded file. Basically I am looking for proper commands for accessing the file after it has been uploaded:</p> <pre><code>import io from IPython.display import display import fileupload def _upload(): _upload_widget = fileupload.FileUploadWidget() def _cb(change): decoded = io.StringIO(change['owner'].data.decode('utf-8')) filename = change['owner'].filename print('Uploaded `{}` ({:.2f} kB)'.format( filename, len(decoded.read()) / 2 **10)) _upload_widget.observe(_cb, names='data') display(_upload_widget) _upload() </code></pre>
0
2016-09-14T17:01:45Z
39,506,795
<p><code>_cb</code> is called when the upload finishes. As described in the comment above, you can write to a file there, or store it in a variable. For example:</p> <pre><code>from IPython.display import display import fileupload uploader = fileupload.FileUploadWidget() def _handle_upload(change): w = change['owner'] with open(w.filename, 'wb') as f: f.write(w.data) print('Uploaded `{}` ({:.2f} kB)'.format( w.filename, len(w.data) / 2**10)) uploader.observe(_handle_upload, names='data') display(uploader) </code></pre> <p>After the upload has finished, you can access the filename as:</p> <pre><code>uploader.filename </code></pre>
0
2016-09-15T08:51:39Z
[ "python", "file-upload", "ipython", "jupyter-notebook", "ipywidgets" ]
TypeError in python 3.x ('int' object is not subscriptable)
39,495,998
<pre><code>GTIN0 = int(GTIN[0]) </code></pre> <p>brings up the error</p> <pre><code>TypeError: 'int' object is not subscriptable </code></pre> <p>can someone explain to me why this happens, can you use simple terms as I'm not too experienced at coding so I'm not "in" with the code terms</p>
-4
2016-09-14T17:01:55Z
39,496,045
<p>Whatever <code>GTIN</code> is, it's not a list or a string; it seems to be an integer, so no subscripting is available. That is exactly what the <code>TypeError</code> says.</p>
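<p>Assuming <code>GTIN</code> originally came from user input, keeping it as a string makes the subscript work; a sketch:</p> <pre><code>
GTIN = input('Enter the GTIN: ')  # input() returns a string in Python 3
GTIN0 = int(GTIN[0])              # first character, converted back to an int
</code></pre>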
0
2016-09-14T17:04:04Z
[ "python", "python-3.x", "typeerror" ]
TypeError in python 3.x ('int' object is not subscriptable)
39,495,998
<pre><code>GTIN0 = int(GTIN[0]) </code></pre> <p>brings up the error</p> <pre><code>TypeError: 'int' object is not subscriptable </code></pre> <p>can someone explain to me why this happens, can you use simple terms as I'm not too experienced at coding so I'm not "in" with the code terms</p>
-4
2016-09-14T17:01:55Z
39,496,095
<p>It looks like GTIN is an integer rather than a list or tuple, so the interpreter is trying to tell you that you can't take element 0 of an integer because it's not a container type.</p> <p>How did <code>GTIN</code> acquire its value, and why do you think it should be subscriptable?</p>
0
2016-09-14T17:07:13Z
[ "python", "python-3.x", "typeerror" ]
How to get model instance's attribute in UpdateView (Django)?
39,496,041
<p>I have an UpdateView for a model. I want to get the 'car_owner' attribute (of the Newcars model) in the UpdateView. Here's the code.</p> <p><strong><em>models.py</em></strong></p> <pre><code>class Newcars(models.Model): shop_no = models.ForeignKey(Shop, on_delete=models.CASCADE, default=0, related_name='newcars') car_name = models.CharField(max_length=250) car_owner = models.CharField(max_length=250) def get_absolute_url(self): return reverse('carapp:index') def __str__(self): return self.car_name + ' - ' + self.car_owner </code></pre> <p><strong><em>views.py</em></strong> (Here's the UpdateView.)</p> <pre><code>class NewcarUpdate(UpdateView): model = Newcars fields = ['car_name', 'car_owner'] </code></pre> <p><strong><em>urls.py</em></strong> (only the necessary part of the urlpatterns)</p> <pre><code>url(r'^newcars/(?P&lt;pk&gt;[0-9]+)/$', views.NewcarUpdate.as_view(), name='newcar-update'), </code></pre> <p>This is what I intend to do with the UpdateView, but cannot understand how.</p> <pre><code>class NewcarUpdate(UpdateView): model = Newcars fields = ['car_name', 'car_owner'] #Get the selected newcar object's 'car_owner' attribute. #Check if the object's 'car_owner' attribute == "sometext" or not. #If matches, only then go the normal update form. #If doesn't, redirect to a 404 page. </code></pre>
1
2016-09-14T17:03:47Z
39,496,846
<p>add this method to your view:</p> <pre><code>def dispatch(self, request, *args, **kwargs): if self.get_object().car_owner != "sometext": raise Http404('Car owner does not match.') return super(NewcarUpdate, self).dispatch( request, *args, **kwargs) </code></pre> <p>You will need to import <code>Http404</code> from <code>django.http</code></p>
1
2016-09-14T17:58:20Z
[ "python", "django", "python-2.7", "python-3.x" ]
How to get model instance's attribute in UpdateView (Django)?
39,496,041
<p>I have an UpdateView for a model. I want to get the 'car_owner' attribute (of the Newcars model) in the UpdateView. Here's the code.</p> <p><strong><em>models.py</em></strong></p> <pre><code>class Newcars(models.Model): shop_no = models.ForeignKey(Shop, on_delete=models.CASCADE, default=0, related_name='newcars') car_name = models.CharField(max_length=250) car_owner = models.CharField(max_length=250) def get_absolute_url(self): return reverse('carapp:index') def __str__(self): return self.car_name + ' - ' + self.car_owner </code></pre> <p><strong><em>views.py</em></strong> (Here's the UpdateView.)</p> <pre><code>class NewcarUpdate(UpdateView): model = Newcars fields = ['car_name', 'car_owner'] </code></pre> <p><strong><em>urls.py</em></strong> (only the necessary part of the urlpatterns)</p> <pre><code>url(r'^newcars/(?P&lt;pk&gt;[0-9]+)/$', views.NewcarUpdate.as_view(), name='newcar-update'), </code></pre> <p>This is what I intend to do with the UpdateView, but cannot understand how.</p> <pre><code>class NewcarUpdate(UpdateView): model = Newcars fields = ['car_name', 'car_owner'] #Get the selected newcar object's 'car_owner' attribute. #Check if the object's 'car_owner' attribute == "sometext" or not. #If matches, only then go the normal update form. #If doesn't, redirect to a 404 page. </code></pre>
1
2016-09-14T17:03:47Z
39,496,850
<p>You could do that in the <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-single-object/#django.views.generic.detail.SingleObjectMixin.get_object" rel="nofollow">get_object</a> method, raising the 404 when the owner does not match:</p> <pre><code>from django.http import Http404 # ... class NewcarUpdate(UpdateView): # ... def get_object(self, queryset=None): obj = super(NewcarUpdate, self).get_object(queryset) if obj.car_owner != "sometext": raise Http404 return obj </code></pre>
1
2016-09-14T17:58:47Z
[ "python", "django", "python-2.7", "python-3.x" ]
Transform a locale-aware unicode string in a valid Decimal number in Python/Django
39,496,068
<p>In my <strong>Django 1.7.11</strong> app, I get data formatted with a spanish locale via HTTP. So, in my view, I get a unicode string representing a decimal number in spanish locale:</p> <pre><code>spanish_number = request.GET.get('some_post_value', 0) # spanish_number may be u'12,542' now, for example. And may come via POST. This is not important. Just the value itself is important: it contains commas as decimal separator. </code></pre> <p>And I want to store that in a Django Model's field of type DecimalField:</p> <pre><code>class MyModel(models.Model): my_number = models.DecimalField(max_digits=10, decimal_places=3, null=True, blank=True) </code></pre> <p>In my settings.py, I have </p> <pre><code>USE_L10N=True USE_I18N=True </code></pre> <p>If I try something like</p> <pre><code>m = MyModel() m.my_number = spanish_number # This is, u'12,542' m.save() </code></pre> <p>It fails with a ValidationError because </p> <pre><code>u'12,542' must be a decimal number </code></pre> <p><strong>What would be the right way to deal with this?</strong> I mean, <strong>with the fact that my application is going to receive numbers (and dates...) formatted this way (spanish locale)</strong></p> <p>P.S.: I know Django has a <a href="https://docs.djangoproject.com/en/1.7/ref/forms/fields/#decimalfield" rel="nofollow">DecimalField</a> for forms, with a localize=True option, but I'm directly dealing with Models, not with Forms. </p>
0
2016-09-14T17:05:43Z
39,496,692
<p>You're on the right track looking at form <code>DecimalField</code>. If you check its source, you'll see that when set to localize it runs the string through <code>formats.sanitize_separators</code>. You can call this directly to convert to the format <code>Decimal()</code> expects:</p> <pre><code>from django.utils import formats m.my_number = formats.sanitize_separators(spanish_number) </code></pre>
1
2016-09-14T17:47:37Z
[ "python", "django", "unicode", "localization" ]
Python Groupby omitting columns
39,496,086
<p>I have a dataframe that looks like this</p> <p>dg:</p> <pre><code>thing1 thing2 thing3 thing4 thing5 thing6 thing7 ID NAN 1 NAN NAN NAN NAN NAN 222 NAN NAN 3 NAN NAN NAN NAN 222 NAN NAN NAN 2 NAN NAN NAN 222 3 NAN NAN NAN NAN NAN 3 222 NAN NAN NAN NAN NAN NAN NAN 222 NAN NAN NAN NAN 4 NAN NAN 222 NAN NAN NAN NAN NAN 4 NAN 222 NAN 3 NAN 2 NAN NAN NAN 555 NAN NAN 3 NAN NAN NAN NAN 555 NAN NAN NAN NAN NAN NAN NAN 555 </code></pre> <p>when I do a groupby like this: </p> <pre><code>dg = dg.groupby('ID').max().reset_index() </code></pre> <p>it produces the following ouput, omitting two columns, like this: </p> <pre><code>ID thing2 thing3 thing4 thing5 thing7 222 1 3 2 4 3 555 3 2 </code></pre> <p>The dataframe follows that pattern but I don't know why two columns are being omitted </p> <p><strong>NAN values are np.nan</strong></p>
2
2016-09-14T17:06:50Z
39,496,363
<p>I found out I had the string "N/A" mixed in with my np.nan values. The lesson: columns that mix strings with numbers can silently disappear from groupby aggregations. The columns without any "N/A" strings did not disappear, and once I replaced the "N/A" strings with np.nan the remaining columns also survived the groupby.</p>
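<p>A minimal sketch of that clean-up (assuming the placeholder string is exactly "N/A"):</p> <pre><code>
import numpy as np
import pandas as pd

dg = dg.replace('N/A', np.nan)
# optionally coerce the affected columns back to numeric dtypes
dg = dg.apply(lambda col: pd.to_numeric(col, errors='ignore'))
dg = dg.groupby('ID').max().reset_index()
</code></pre>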
0
2016-09-14T17:26:07Z
[ "python", "pandas", "dataframe" ]
Understanding dictionary.get in Python
39,496,096
<p>I was reading this really helpful SO post on <a href="http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value">sorting dictionaries</a>. One of the most popular answers suggests this:</p> <pre><code>sorted(dict1, key=dict1.get) </code></pre> <p>While this seems to work perfectly fine, I don't get the <code>key=dict1.get</code> part. </p> <p>What exactly is <code>get</code> here and what does it do? </p> <p>I am only familiar with using <code>get('X')</code> to extract X from a dictionary... And I couldn't find anything in the <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">docs</a> on dictionaries and stdtypes, so any pointers are much appreciated!</p> <p>NB here is what <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">they</a> have to say about <code>get()</code>, or is this something entirely different? Thanks!</p> <blockquote> <p>get(key[, default]) Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError</p> </blockquote>
1
2016-09-14T17:07:18Z
39,496,153
<p>As you have found, <code>get</code> just gets the value corresponding to a given key. <code>sorted</code> will iterate through the iterable it's passed. In this case that iterable is a <code>dict</code>, and iterating through a <code>dict</code> just iterates through its keys. If you want to sort based on the values instead, you need to transform the keys to their corresponding values, and of course the obvious way to do this is with <code>get</code>.</p> <p>To clarify, this is for when you want a list of keys sorted based on their values. If you just wanted a sorted list of values you could do <code>sorted(dict1.values())</code>, and if you wanted the keys sorted by the keys themselves (not by the values they map to), you could just do <code>sorted(dict1)</code>.</p> <p>Example:</p> <pre><code>&gt;&gt;&gt; d = {'a': 3, 'b': 2, 'c': 1} &gt;&gt;&gt; sorted(d) ['a', 'b', 'c'] &gt;&gt;&gt; sorted(d.values()) [1, 2, 3] &gt;&gt;&gt; sorted(d, key=d.get) ['c', 'b', 'a'] </code></pre>
2
2016-09-14T17:12:08Z
[ "python", "dictionary" ]
Understanding dictionary.get in Python
39,496,096
<p>I was reading this really helpful SO post on <a href="http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value">sorting dictionaries</a>. One of the most popular answers suggests this:</p> <pre><code>sorted(dict1, key=dict1.get) </code></pre> <p>While this seems to work perfectly fine, I don't get the <code>key=dict1.get</code> part. </p> <p>What exactly is <code>get</code> here and what does it do? </p> <p>I am only familiar with using <code>get('X')</code> to extract X from a dictionary... And I couldn't find anything in the <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">docs</a> on dictionaries and stdtypes, so any pointers are much appreciated!</p> <p>NB here is what <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">they</a> have to say about <code>get()</code>, or is this something entirely different? Thanks!</p> <blockquote> <p>get(key[, default]) Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError</p> </blockquote>
1
2016-09-14T17:07:18Z
39,496,195
<pre><code>sorted(dict1, key=dict1.get) </code></pre> <p>is a less verbose and more pythonic way of saying:</p> <pre><code>sorted(dict1, key=lambda x: dict1[x] if x in dict1 else None) </code></pre> <p>Bear in mind that iterating on a dictionary will return its keys, therefore the <code>get</code> method takes arguments which are the dictionary keys, which in turn returns the value that key is pointing to. </p> <p>TL;DR It's a simple way of saying sort the dictionary keys using the values as the sort criteria.</p>
2
2016-09-14T17:15:16Z
[ "python", "dictionary" ]
Understanding dictionary.get in Python
39,496,096
<p>I was reading this really helpful SO post on <a href="http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value">sorting dictionaries</a>. One of the most popular answers suggests this:</p> <pre><code>sorted(dict1, key=dict1.get) </code></pre> <p>While this seems to work perfectly fine, I don't get the <code>key=dict1.get</code> part. </p> <p>What exactly is <code>get</code> here and what does it do? </p> <p>I am only familiar with using <code>get('X')</code> to extract X from a dictionary... And I couldn't find anything in the <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">docs</a> on dictionaries and stdtypes, so any pointers are much appreciated!</p> <p>NB here is what <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">they</a> have to say about <code>get()</code>, or is this something entirely different? Thanks!</p> <blockquote> <p>get(key[, default]) Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError</p> </blockquote>
1
2016-09-14T17:07:18Z
39,496,240
<p>The <code>key</code> argument to <code>sorted</code> is a <em>callable</em> (e.g. a function) which takes one argument.</p> <p>By default, <code>sorted</code> sorts the values by comparing them to each other. For example:</p> <pre><code>sorted([2, 3, 1]) # returns [1, 2, 3] </code></pre> <p>This is because 1 &lt; 2 &lt; 3.</p> <p>On the other hand, if a different value should be used for comparison, that can be defined with <code>key</code>. For example, to sort strings by length, one could do:</p> <pre><code>def string_length(s): return len(s) sorted(['abcd', 'efghi', 'jk'], key=string_length) # returns ['jk', 'abcd', 'efghi'] </code></pre> <p>This is because <code>string_length('jk') &lt; string_length('abcd') &lt; string_length('efghi')</code>.</p> <p>But instead of a function, you can pass any other <em>callable</em>. In your example, that is <code>dict1.get</code>, so for each key in the dict, <code>dict1.get(key)</code> will be executed and the result of that will be used in comparison.</p> <pre><code>dict1 = {'a':3, 'b':1, 'c':2} sorted(dict1, key=dict1.get) # returns ['b', 'c', 'a'] </code></pre> <p>This is because <code>dict1.get('b') &lt; dict1.get('c') &lt; dict1.get('a')</code>.</p>
6
2016-09-14T17:18:43Z
[ "python", "dictionary" ]
Understanding dictionary.get in Python
39,496,096
<p>I was reading this really helpful SO post on <a href="http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value">sorting dictionaries</a>. One of the most popular answers suggests this:</p> <pre><code>sorted(dict1, key=dict1.get) </code></pre> <p>While this seems to work perfectly fine, I don't get the <code>key=dict1.get</code> part. </p> <p>What exactly is <code>get</code> here and what does it do? </p> <p>I am only familiar with using <code>get('X')</code> to extract X from a dictionary... And I couldn't find anything in the <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">docs</a> on dictionaries and stdtypes, so any pointers are much appreciated!</p> <p>NB here is what <a href="https://docs.python.org/2/library/stdtypes.html" rel="nofollow">they</a> have to say about <code>get()</code>, or is this something entirely different? Thanks!</p> <blockquote> <p>get(key[, default]) Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError</p> </blockquote>
1
2016-09-14T17:07:18Z
39,496,400
<p>The <code>key</code> parameter of <code>sorted(dictionary, key=function)</code> expects a function, not a value:</p> <p>This means the method will compare the keys according to the value returned by the function applied to each item.</p> <p>The parentheses added after a function name mean you are passing a value, while without the parentheses you pass the function itself.</p> <p><code>dict.get(x)</code> is the value of the key x in dict </p> <p><code>dict.get</code> is the function that gets the said value from the dictionary</p> <p>supposing we have </p> <pre><code>d = {'a': 3, 'b': 2, 'c': 1} </code></pre> <p><code>sorted(d)</code> will compare 'a', 'b' and 'c' while <code>sorted(d, key=d.get)</code> will compare <code>d.get('a')</code>, <code>d.get('b')</code> and <code>d.get('c')</code></p>
1
2016-09-14T17:28:42Z
[ "python", "dictionary" ]
search and replace a space with a tab at a specific location in a string
39,496,234
<p>I need some help replacing a space with a tab on a string with multiple spaces. I need to search the string for a time format such as 08:20:10 and replace the space on the end with a tab. My code is the following:</p> <pre><code>alert = '9/14/2016 08:20:10 CH1 This is a test.' str = re.sub (r'(/d{2}:\d{2}:\d{2})(\s)$', '\t', alert) </code></pre> <p>I've been unsuccessful in this endeavor. The output should look like:</p> <pre><code>9/14/2016 08:20:10 CH1 This is a test. </code></pre> <p>What am I doing wrong?</p>
2
2016-09-14T17:18:31Z
39,496,320
<p>First, don't name your variable <code>str</code>, as this masks the built-in function of that name.</p> <p>Second, I don't see the need for regex here. Simply <code>split</code> the string and then rejoin it however you like.</p> <pre><code>&gt;&gt;&gt; alert = '9/14/2016 08:20:10 CH1 This is a test.' &gt;&gt;&gt; l = alert.split() &gt;&gt;&gt; new_alert = ' '.join(l[:2]) + '\t' + ' '.join(l[2:]) &gt;&gt;&gt; new_alert '9/14/2016 08:20:10\tCH1 This is a test.' </code></pre> <p>You can also repair your original expression by referring to the captured group in the replacement substring (and fixing the <code>/</code> to a <code>\</code>):</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.sub (r'(\d{2}:\d{2}:\d{2})(\s)', r'\1\t', alert) '9/14/2016 08:20:10\tCH1 This is a test.' </code></pre>
0
2016-09-14T17:23:51Z
[ "python" ]
search and replace a space with a tab at a specific location in a string
39,496,234
<p>I need some help replacing a space with a tab on a string with multiple spaces. I need to search the string for a time format such as 08:20:10 and replace the space on the end with a tab. My code is the following:</p> <pre><code>alert = '9/14/2016 08:20:10 CH1 This is a test.' str = re.sub (r'(/d{2}:\d{2}:\d{2})(\s)$', '\t', alert) </code></pre> <p>I've been unsuccessful in this endeavor. The output should look like:</p> <pre><code>9/14/2016 08:20:10 CH1 This is a test. </code></pre> <p>What am I doing wrong?</p>
2
2016-09-14T17:18:31Z
39,496,321
<p>How about a <em>positive lookbehind</em> without a line terminator:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; &gt;&gt;&gt; re.sub(r'(?&lt;=\d{2}:\d{2}:\d{2})\s', '\t', alert) '9/14/2016 08:20:10\tCH1 This is a test.' &gt;&gt;&gt; &gt;&gt;&gt; print(_) 9/14/2016 08:20:10 CH1 This is a test. </code></pre> <p>Also, your <code>/d</code> should have read <code>\d</code></p>
1
2016-09-14T17:23:51Z
[ "python" ]
search and replace a space with a tab at a specific location in a string
39,496,234
<p>I need some help replacing a space with a tab on a string with multiple spaces. I need to search the string for a time format such as 08:20:10 and replace the space on the end with a tab. My code is the following:</p> <pre><code>alert = '9/14/2016 08:20:10 CH1 This is a test.' str = re.sub (r'(/d{2}:\d{2}:\d{2})(\s)$', '\t', alert) </code></pre> <p>I've been unsuccessful in this endeavor. The output should look like:</p> <pre><code>9/14/2016 08:20:10 CH1 This is a test. </code></pre> <p>What am I doing wrong?</p>
2
2016-09-14T17:18:31Z
39,496,323
<p>Couple things to fix in your expression:</p> <ul> <li><code>/d</code> should have been <code>\d</code></li> <li>no need for the end of the string match <code>$</code></li> </ul> <p>Also, I would use a <a href="http://www.regular-expressions.info/lookaround.html" rel="nofollow">positive look behind</a> instead of a capturing group:</p> <pre><code>(?&lt;=\d{2}:\d{2}:\d{2})\s </code></pre> <p>Works for me:</p> <pre><code>&gt;&gt;&gt; print(re.sub(r'(?&lt;=\d{2}:\d{2}:\d{2})\s', '\t', alert)) 9/14/2016 08:20:10 CH1 This is a test. </code></pre>
2
2016-09-14T17:23:58Z
[ "python" ]
"Undeclared variable" declaration in python
39,496,411
<p>The following code is an example:</p> <pre><code>class A(object): def f(self): pass A.f.b = 42 </code></pre> <p>How is this variable being allocated? If I declare A.f.a, A.f.b, and A.f.c variables am I creating 3 different objects of A? Can someone explain what's going on in memory (as this does not appear to be something easily coded in C)?</p>
0
2016-09-14T17:29:25Z
39,496,898
<p><code>A.b = 42</code> adds a class variable to <code>A</code>, and thus makes it visible instantly for each instance of <code>A</code> (but only 1 entry in memory)</p> <p>You can add attributes to classes and instances anytime you like in Python. The cleanest way would be to do it a declare time or this could be misleading.</p> <pre><code>class A: b = 12 </code></pre> <p>But for quick "extensions" of classes or instances you could choose to dynamically add them.</p> <p>ex:</p> <pre><code>class A(object): pass a = A() print('b' in dir(a)) # False A.b = 42 print('b' in dir(a)) # True even if instanciated before creation of `b` </code></pre>
0
2016-09-14T18:03:00Z
[ "python" ]
"Undeclared variable" declaration in python
39,496,411
<p>The following code is an example:</p> <pre><code>class A(object): def f(self): pass A.f.b = 42 </code></pre> <p>How is this variable being allocated? If I declare A.f.a, A.f.b, and A.f.c variables am I creating 3 different objects of A? Can someone explain what's going on in memory (as this does not appear to be something easily coded in C)?</p>
0
2016-09-14T17:29:25Z
39,497,181
<p>The following only works in Python 3:</p> <pre><code>class A(object): def f(self): pass A.f.a = 41 A.f.b = 42 A.f.c = 43 </code></pre> <p><code>A.f</code> is an object of type <code>function</code>, and you have always been able to add new attributes to a <code>function</code> object. No instances of <code>A</code> have been created; the three attributes are referenced from the function <code>f</code> itself.</p> <p>If you had two instances <code>a1 = A()</code> and <code>a2 = A()</code>, however, you could not set attributes through <code>a1.f</code> or <code>a2.f</code>, because <code>a1.f</code> is not a reference to <code>A.f</code>; it is a reference to an object of type <code>method</code> that wraps the function. This results from how Python's descriptor protocol is used to implement instance methods.</p>
0
2016-09-14T18:22:58Z
[ "python" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,449
<pre><code>your_list = [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] new = [] for x in your_list: if x not in new: new.append(x) print(new) &gt;&gt;&gt;[('769817', [6]), ('769819', [4, 10])] </code></pre>
0
2016-09-14T17:32:26Z
[ "python", "dictionary" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,507
<p>You have a list, not a dictionary. Python dictionaries may have only one value for each key. Try</p> <pre><code>my_dict = dict([('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]) </code></pre> <p>result:</p> <pre><code>{'769817': [6], '769819': [4, 10]} </code></pre> <p>a Python dictionary. For more information <a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow">https://docs.python.org/3/tutorial/datastructures.html#dictionaries</a></p>
0
2016-09-14T17:36:16Z
[ "python", "dictionary" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,560
<p>Strikethrough applied to original question before edits, left for posterity: <s>You're not using a <code>dict</code> at all, just a <code>list</code> of two-<code>tuple</code>s, where the second element in each <code>tuple</code> is itself a <code>list</code>. If you actually want a <code>dict</code>, </p> <pre><code>dict([('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]) </code></pre> <p>will convert it, and uniquify by key (so you'd end up with <code>{'769817': [6], '769819': [4, 10]}</code>, though it loses order, and doesn't pay attention to whether the values (the sub-<code>list</code>s) are unique or not (it just keeps the last pairing for a given key).</s></p> <p>If you need to uniquify adjacent duplicates (where the values are important to uniqueness) while preserving order, and don't want/need a real <code>dict</code>, use <code>itertools.groupby</code>:</p> <pre><code>import itertools nonuniq = [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] uniq = [k for k, g in itertools.groupby(nonuniq)] # uniq is [('769817', [6]), ('769819', [4, 10])] # but it wouldn't work if the input was # [('769819', [4, 10]), ('769817', [6]), ('769819', [4, 10])] # because the duplicates aren't adjacent </code></pre> <p>If you need to collapse non-adjacent duplicates, and don't need to preserve order (or sorted order is fine), you can use <code>groupby</code> to get a <code>O(n log n)</code> solution (as opposed to naive solutions that create a new list and avoid duplicates by checking for presence in the new list at <code>O(n^2)</code> complexity, or <code>set</code> based solutions that would be <code>O(n)</code> but require you to convert sub-<code>list</code>s in your data to <code>tuple</code>s to make them hashable):</p> <pre><code># Only difference is sorting nonuniq before grouping uniq = [k for k, g in itertools.groupby(sorted(nonuniq))] # uniq is [('769817', [6]), ('769819', [4, 10])] </code></pre>
0
2016-09-14T17:39:46Z
[ "python", "dictionary" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,608
<p>How about this: I am just focusing on the list part:</p> <pre><code>&gt;&gt;&gt; s = [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] &gt;&gt;&gt; [(x,y) for x,y in {key: value for (key, value) in s}.items()] [('769817', [6]), ('769819', [4, 10])] &gt;&gt;&gt; </code></pre>
0
2016-09-14T17:42:24Z
[ "python", "dictionary" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,609
<p>This problem essentially boils down to removing duplicates from a list of <strong>unhashable</strong> types, for which converting to a set is not possible.</p> <p>One possible method is to check for membership in the current value while building up a new list value.</p> <pre><code>d = {'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]} for k, v in d.items(): new_list = [] for item in v: if item not in new_list: new_list.append(item) d[k] = new_list </code></pre> <p><em>Alternatively</em>, use <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow"><code>groupby()</code></a> for a more concise answer, although <strong>potentially</strong> slower (<em>the list must be sorted first; if it already is sorted, this is faster than doing membership checks</em>).</p> <pre><code>import itertools d = {'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]} for k, v in d.items(): v.sort() d[k] = [item for item, _ in itertools.groupby(v)] </code></pre> <p><strong>Output</strong> -&gt; <code>{'word': [('769817', [6]), ('769819', [4, 10])]}</code></p>
2
2016-09-14T17:42:25Z
[ "python", "dictionary" ]
remove duplicate values from items in a dictionary in Python
39,496,440
<p>How can I check and remove duplicate values from items in a dictionary? I have a large data set so I'm looking for an efficient method. The following is an example of values in a dictionary that contains a duplicate:</p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])] </code></pre> <p>needs to become </p> <pre><code>'word': [('769817', [6]), ('769819', [4, 10])] </code></pre>
1
2016-09-14T17:31:33Z
39,496,976
<p>You can uniqify the items based on the hash they generate. Hash could be anything, a sorted <code>json.dumps</code>, or <code>cPickle.dumps</code>. This one liner can uniqify your dict as required.</p> <pre><code>&gt;&gt;&gt; d = {'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]} &gt;&gt;&gt; import json &gt;&gt;&gt; { k: { json.dumps(x,sort_keys = True):x for x in v}.values() for k,v in d.iteritems()} {'word': [('769817', [6]), ('769819', [4, 10])]} </code></pre>
0
2016-09-14T18:08:28Z
[ "python", "dictionary" ]
how to filter by iloc
39,496,476
<p>I have a dataframe that has 2 columns. The second column contains one of only a few values. I want to make a method that returns a dataframe where only the rows where that column had a specific value are included.</p> <p>I had this working with this code:</p> <pre><code>def filterOnName(df1): d1columns = df1.columns return df1[df1[d1columns[1]] == "Jimmy"] </code></pre> <p>Seems quite convoluted, doesn't it? I guess there's a pandas method called iloc that should clean this up a bit but I'm having trouble implementing it. Can you explain what I'm doing wrong?</p> <pre><code>def filterOnName(df1): return df1[df1.iloc[1] == "Jimmy"] </code></pre> <p>Thanks for your help!</p>
2
2016-09-14T17:34:14Z
39,498,176
<p>First argument of <code>.iloc</code> is for rows. To get the second column, you'll need:</p> <pre><code>df.iloc[:, 1] </code></pre> <p>where <code>:</code> means "all rows". </p>
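<p>So the method from the question could be written roughly as:</p> <pre><code>
def filterOnName(df1):
    return df1[df1.iloc[:, 1] == "Jimmy"]
</code></pre>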
3
2016-09-14T19:25:43Z
[ "python", "pandas", "dataframe" ]
web scraping with beautifulsoup getting error
39,496,531
<p>I'm pretty new to Python and mainly need it for getting information from website. </p> <pre><code>def spider(max_pages): page = 1 while page &lt;= max_pages: url = 'https://www.example.com' source_code = requests.get(url) plain_text = source_code.text soup = BeautifulSoup(plain_text, "html.parser") for link in soup.findAll('a', {'class': 'c5'}): href = link.get('href') time.sleep(0.3) # print(href) single_item(href) page += 1 def single_item(item_url): s_code = requests.get(item_url) p_text = s_code.text soup = BeautifulSoup(p_text, "html.parser") upc = ('div', {'class': 'product-upc'}) for upc in soup.findAll('span', {'class': 'upcNum'}): print(upc.string) sku = ('span', {'data-selenium': 'bhSku'}) for sku in soup.findAll('span', {'class': 'fs16 c28'}): print(sku.text) price = ('span', {'class': 'price'}) for price in soup.findAll('meta', {'itemprop': 'price'}): print(price) outFile = open(r'C:\Users\abc.txt', 'a') outFile.write(str(upc)) outFile.write("\n") outFile.write(str(sku)) outFile.write("\n") outFile.write(str(price)) outFile.write('\n') outFile.close() spider(1) </code></pre> <p>What i want to get is "UPC:813066012487, price:26.45 and SKU:KBPTMCC2" without any span, meta or content attributes.I attached my output below Here is my output: <a href="http://i.stack.imgur.com/OWoH0.jpg" rel="nofollow">screenshot</a></p> <p>Where do i do wrong ? Hope someone can figure it out! Thanks!!</p>
0
2016-09-14T17:38:01Z
39,499,544
<p>The data you want is in the div attribute <em>data-itemdata</em>, you can call <code>json.loads</code> and it will give you a dict that you can access to get what you want:</p> <pre><code>from bs4 import BeautifulSoup import requests import json soup = BeautifulSoup(requests.get("https://www.bhphotovideo.com/c/buy/accessories/ipp/100/mnp/25/Ns/p_PRICE_2%7c0/ci/20861/pn/1/N/4005352853+35").content, "html.parser") for d in soup.select("div[data-selenium=itemDetail]"): data = json.loads(d["data-itemdata"]) print(data) </code></pre> <p>Each data dict will look like:</p> <pre><code>{u'catagoryId': u'20861', u'inCart': False, u'inWish': False, u'is': u'REG', u'itemCode': u'KBPTMCC2', u'li': [], u'price': u'26.45', u'searchTerm': u'', u'sku': u'890522'} </code></pre> <p>So just access by key i.e. <code>price = data["price"]</code>.</p> <p>To get the <em>UPC</em> we just need to visit the items page, we can get the url from <em>h3</em> with the <em>data-selenium</em> attribute:</p> <pre><code>for d in soup.select("div[data-selenium=itemDetail]"): url = d.select_one("h3[data-selenium] a")["href"] upc = BeautifulSoup(requests.get(url).content, "html.parser").select_one("span.upcNum").text.strip() data = json.loads(d["data-itemdata"]) </code></pre> <p>Not all pages have a UPC value so you will have to decide what to do; if you just want products with UPCs, first check if the select finds anything:</p> <pre><code>for d in soup.select("div[data-selenium=itemDetail]"): url = d.select_one("h3[data-selenium] a")["href"] upc = BeautifulSoup(requests.get(url).content, "html.parser").select_one("span.upcNum") if upc: data = json.loads(d["data-itemdata"]) text = upc.text.strip() </code></pre>
1
2016-09-14T20:58:14Z
[ "python", "web-scraping", "beautifulsoup" ]
Cannot subclass multiprocessing Queue in Python 3.5
39,496,554
<p>My eventual goal is to redirect the <code>stdout</code> from several subprocesses to some queues, and print those out somewhere (maybe in a little GUI).</p> <p>The first step is to subclass <code>Queue</code> into an object that behaves much like the <code>stdout</code>. But that is where I got stuck. Subclassing the multiprocessing <code>Queue</code> seems impossible in Python v3.5.</p> <pre><code># This is a Queue that behaves like stdout # Unfortunately, doesn't work in Python 3.5 :-( class StdoutQueue(Queue): def __init__(self,*args,**kwargs): Queue.__init__(self,*args,**kwargs, ctx='') def write(self,msg): self.put(msg) def flush(self): sys.__stdout__.flush() </code></pre> <p>I found this snippet in the following post (probably Python 3.5 did not yet exist at that moment): <a href="http://stackoverflow.com/questions/23947281/python-multiprocessing-redirect-stdout-of-a-child-process-to-a-tkinter-text">Python multiprocessing redirect stdout of a child process to a Tkinter Text</a></p> <p>In Python v3.5 you stumble on strange error messages when subclassing the multiprocessing <code>Queue</code> class. I found two bug reports describing the issue:</p> <p><a href="https://bugs.python.org/issue21367" rel="nofollow">https://bugs.python.org/issue21367</a></p> <p><a href="https://bugs.python.org/issue19895" rel="nofollow">https://bugs.python.org/issue19895</a></p> <p>I have 2 questions:</p> <ol> <li>Suppose I want to stick to Python v3.5 - going to a previous version is not really an option. What workaround can I use to subclass the multiprocessing Queue somehow?</li> <li>Is the bug still around if I upgrade to Python v3.6?</li> </ol> <hr> <p><strong>EDIT :</strong></p> <p>There is a known issue when you try to subclass the <code>Queue</code> class found in here:</p> <pre><code>from multiprocessing import Queue # &lt;- known issue: you cannot subclass # this Queue class, because it is # not a genuine python class. </code></pre> <p>But the following should work:</p> <pre><code>from multiprocessing.queues import Queue # &lt;- from this Queue class, you # should be able to make a # subclass. But Python 3.5 # refuses :-( </code></pre> <p>Sadly, even that doesn't work in Python v3.5. You get the following error:</p> <pre><code> C:\Users\..\myFolder &gt; python myTest.py Traceback (most recent call last): File "myTest.py", line 49, in &lt;module&gt; q = StdoutQueue() File "myTest.py", line 22, in __init__ super(StdoutQueue,self).__init__(*args,**kwargs) TypeError: __init__() missing 1 required keyword-only argument: 'ctx' </code></pre> <hr> <p><strong>EDIT :</strong></p> <p>Thank you Darth Kotik for solving the problem! Here is the complete code, updated with his solution. Now it works.</p> <pre><code>import sys import time import multiprocessing as mp import multiprocessing.queues as mpq from threading import Thread from tkinter import * '''-------------------------------------------------------------------''' ''' SUBCLASSING THE MULTIPROCESSING QUEUE ''' ''' ''' ''' ..and make it behave as a general stdout io ''' '''-------------------------------------------------------------------''' # The StdoutQueue is a Queue that behaves like stdout. # We will subclass the Queue class from the multiprocessing package # and give it the typical stdout functions. # # (1) First issue # Subclassing multiprocessing.Queue or multiprocessing.SimpleQueue # will not work, because these classes are not genuine # python classes. 
# Therefore, you need to subclass multiprocessing.queues.Queue or # multiprocessing.queues.SimpleQueue . This issue is known, and is not # the reason for asking this question. But I mention it here, for # completeness. # # (2) Second issue # There is another problem that arises only in Python V5 (and beyond). # When subclassing multiprocessing.queues.Queue, you have to provide # a 'multiprocessing context'. Not doing that, leads to an obscure error # message, which is in fact the main topic of this question. Darth Kotik # solved it. # His solution is visible in this code: class StdoutQueue(mpq.Queue): def __init__(self,*args,**kwargs): ctx = mp.get_context() super(StdoutQueue, self).__init__(*args, **kwargs, ctx=ctx) def write(self,msg): self.put(msg) def flush(self): sys.__stdout__.flush() '''-------------------------------------------------------------------''' ''' TEST SETUP ''' '''-------------------------------------------------------------------''' # This function takes the text widget and a queue as inputs. # It functions by waiting on new data entering the queue, when it # finds new data it will insert it into the text widget. def text_catcher(text_widget,queue): while True: text_widget.insert(END, queue.get()) def test_child(q): # This line only redirects stdout inside the current process sys.stdout = q # or sys.stdout = sys.__stdout__ if you want to print the child to the terminal print('child running') def test_parent(q): # Again this only redirects inside the current (main) process # commenting this like out will cause only the child to write to the widget sys.stdout = q print('parent running') time.sleep(0.5) mp.Process(target=test_child,args=(q,)).start() if __name__ == '__main__': gui_root = Tk() gui_txt = Text(gui_root) gui_txt.pack() q = StdoutQueue() gui_btn = Button(gui_root, text='Test', command=lambda:test_parent(q),) gui_btn.pack() # Instantiate and start the text monitor monitor = Thread(target=text_catcher,args=(gui_txt,q)) monitor.daemon = True monitor.start() gui_root.mainloop() </code></pre>
0
2016-09-14T17:39:23Z
39,496,873
<pre><code>&gt;&gt;&gt; import multiprocessing &gt;&gt;&gt; type(multiprocessing.Queue) &lt;class 'method'&gt; AttributeError: module 'multiprocessing' has no attribute 'queues' &gt;&gt;&gt; import multiprocessing.queues &gt;&gt;&gt; type(multiprocessing.queues.Queue) &lt;class 'type'&gt; </code></pre> <p>So as you can see <code>multiprocessing.Queue</code> is just constructor method for <code>multiprocessing.queues.Queue</code> class. If you want to make a child class just do <code>class MyQueue(multiprocessing.queues.Queue)</code></p> <p>You can see source of this method <a href="https://github.com/python/cpython/blob/master/Lib/multiprocessing/context.py#L99-L102" rel="nofollow">here</a></p> <p><strong>EDIT</strong>: Okay. I got your problem now. As you can see on a link above, <code>multiprocessing.Queue</code> passes <code>ctx</code> argument to Queue. So I managed to get it working by doing it myself in <code>__init__</code> method. I don't completely inderstand where <code>BaseContext</code> object supposed to get <code>_name</code> attribute, so I passed it manually.</p> <pre><code>def __init__(self,*args,**kwargs): from multiprocessing.context import BaseContext ctx = BaseContext() ctx._name = "Name" super(StdoutQueue,self).__init__(*args,**kwargs, ctx=ctx) </code></pre> <p><strong>EDIT2</strong>: Turned out docs have some information about context <a href="https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow">here</a>. So instead of manually creating it like I did you can do </p> <pre><code>import multiprocessing ctx = multiprocessing.get_context() </code></pre> <p>It will create proper context with <code>_name</code> set (to 'fork' in your particular case) and you can pass it to your queue.</p>
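<p>Pulling the two edits together, a minimal sketch of the subclass (the class and method names simply mirror the question) would be:</p>
<pre><code>import sys
import multiprocessing as mp
import multiprocessing.queues as mpq

class StdoutQueue(mpq.Queue):

    def __init__(self, *args, **kwargs):
        # Python 3.4+ requires an explicit context for multiprocessing.queues.Queue
        super().__init__(*args, ctx=mp.get_context(), **kwargs)

    def write(self, msg):
        self.put(msg)

    def flush(self):
        sys.__stdout__.flush()
</code></pre>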
1
2016-09-14T18:01:01Z
[ "python", "python-3.x", "multiprocessing", "python-multiprocessing" ]
printing a int variable in a sqlite3 update query
39,496,562
<p>I am trying to update a sqlite3 db once I get a true statement but I can't feed my variable, which is an int, to the query which is a str. Here is the database </p> <pre><code>CREATE TABLE STATICIPS( ID INTEGER PRIMARY KEY AUTOINCREMENT, IP CHAR(50) NOT NULL, CITY CHAR(50) NOT NULL, INCOMPLETE BOOL , CMTSIP CHAR(50)); </code></pre> <p>Here is the code</p> <pre><code>#!/usr/bin/python import sqlite3 conn = sqlite3.connect('ipdb.sqlite') cursor = conn.execute("SELECT ID, IP, CITY, INCOMPLETE, CMTSIP from STATICIPS WHERE CITY='LS'") for row in cursor: if (row[3] == 1): print row[1] searchfile = open("arp-ls.txt", "r") for line in searchfile: if row[1] + ' ' in line: print line conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = " + row[0]) conn.commit searchfile.close()` </code></pre> <p>The row[0] is the id in the db which is an int. I get this when i run the code:</p> <pre><code> Traceback (most recent call last): File "getinc.py", line 16, in &lt;module&gt; conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = " + row[0]) TypeError: cannot concatenate 'str' and 'int' objects </code></pre> <p>So my question is how do I make it that row[0] prints correctly in my query so I can update the sqlite entry of this specific ID?</p>
0
2016-09-14T17:39:52Z
39,496,655
<p>In Python 2 you can wrap the value in backticks, which is shorthand for <code>repr()</code> and converts the integer to a string before it is concatenated:</p> <pre><code>conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = " + `row[0]`) </code></pre> <p>Note that backticks were removed in Python 3, so <code>str(row[0])</code> is the more portable spelling.</p>
-1
2016-09-14T17:45:04Z
[ "python", "sqlite3" ]
printing a int variable in a sqlite3 update query
39,496,562
<p>I am trying to update a sqlite3 db once I get a true statement but I can't feed my variable, which is an int, to the query which is a str. Here is the database </p> <pre><code>CREATE TABLE STATICIPS( ID INTEGER PRIMARY KEY AUTOINCREMENT, IP CHAR(50) NOT NULL, CITY CHAR(50) NOT NULL, INCOMPLETE BOOL , CMTSIP CHAR(50)); </code></pre> <p>Here is the code</p> <pre><code>#!/usr/bin/python import sqlite3 conn = sqlite3.connect('ipdb.sqlite') cursor = conn.execute("SELECT ID, IP, CITY, INCOMPLETE, CMTSIP from STATICIPS WHERE CITY='LS'") for row in cursor: if (row[3] == 1): print row[1] searchfile = open("arp-ls.txt", "r") for line in searchfile: if row[1] + ' ' in line: print line conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = " + row[0]) conn.commit searchfile.close()` </code></pre> <p>The row[0] is the id in the db which is an int. I get this when i run the code:</p> <pre><code> Traceback (most recent call last): File "getinc.py", line 16, in &lt;module&gt; conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = " + row[0]) TypeError: cannot concatenate 'str' and 'int' objects </code></pre> <p>So my question is how do I make it that row[0] prints correctly in my query so I can update the sqlite entry of this specific ID?</p>
0
2016-09-14T17:39:52Z
39,496,730
<p>Don't use concatenation at all when constructing SQL queries. Use <em>SQL parameters</em> instead. These are placeholders in the query where the database will fill in the values for you.</p> <p>This ensures that those values are properly escaped (avoiding SQL injection attacks), and allows the database to re-use queries for different values (giving you a performance boost).</p> <p>In <code>sqlite3</code>, placeholders are question marks; you pass in the values in a sequence as a second argument to <code>execute()</code>:</p> <pre><code>conn.execute("UPDATE STATICIPS set INCOMPLETE = 0 where ID = ?", [row[0]]) </code></pre> <p>For the <em>general</em> case (so not using SQL queries), you'd convert that integer value to a string first. Either by using <code>str(row[0])</code> or by using <code>str.format()</code> string templating.</p>
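<p>For example, for that general (non-SQL) case, where <code>row_id</code> simply stands in for <code>row[0]</code>:</p>
<pre><code>row_id = 42                                      # stands in for row[0]
msg = "Updating row with ID " + str(row_id)      # explicit conversion
msg = "Updating row with ID {}".format(row_id)   # or str.format() templating
</code></pre>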
1
2016-09-14T17:50:31Z
[ "python", "sqlite3" ]
Pandas long to wide without losing timezone awareness
39,496,573
<p>I'm trying to reshape a pandas dataframe from long to wide format and the timestamps lose the timezone.</p> <p>Here is a reproducible example:</p> <pre><code>import pandas as pd long = pd.DataFrame(dict( ind=[1,1,2, 2], events=['event1', 'event2', 'event1', 'event2'], time=[pd.Timestamp('2015-03-30 00:00:00', tz='UTC'), pd.Timestamp('2015-03-30 01:00:00', tz='UTC'), pd.Timestamp('2015-03-30 02:00:00', tz='UTC'), pd.Timestamp('2015-03-30 03:00:00', tz='UTC')])) </code></pre> <p>Then when looking at <code>long.time</code> I get a timezone-aware serie.</p> <pre><code>0 2015-03-30 00:00:00+00:00 1 2015-03-30 01:00:00+00:00 2 2015-03-30 02:00:00+00:00 3 2015-03-30 03:00:00+00:00 Name: time, dtype: datetime64[ns, UTC] </code></pre> <p>and after reshaping like this</p> <pre><code>wide = long.set_index(['ind'] + ['events']).unstack(level=1).reset_index() </code></pre> <p>the timezone goes away. E.g. <code>wide.time.event1</code></p> <pre><code>0 2015-03-30 00:00:00 1 2015-03-30 02:00:00 Name: event1, dtype: datetime64[ns] </code></pre> <p>Is there another way of reshaping that does not lose the timezone?</p>
3
2016-09-14T17:40:18Z
39,497,481
<p><code>pandas</code> is tracking the timezone. When you <code>unstack</code>, that reshaping must be happening in <code>numpy</code>, which loses track. This is proven by</p> <pre><code>df = pd.concat([long.time, pd.Series(long.time.values)], axis=1, keys=['pandas', 'numpy']) df </code></pre> <p><a href="http://i.stack.imgur.com/FeBkH.png" rel="nofollow"><img src="http://i.stack.imgur.com/FeBkH.png" alt="enter image description here"></a></p> <pre><code>df.dtypes pandas datetime64[ns, UTC] numpy datetime64[ns] dtype: object </code></pre> <hr> <p>The workaround is to recast each column as the dtype you care about:</p> <pre><code>for c, col in wide.filter(like='time').iteritems(): wide[c] = col.astype(long.time.dtype) wide </code></pre> <p><a href="http://i.stack.imgur.com/m079M.png" rel="nofollow"><img src="http://i.stack.imgur.com/m079M.png" alt="enter image description here"></a></p>
1
2016-09-14T18:40:15Z
[ "python", "pandas", "timezone", "reshape" ]
Not able to save captured image in right directory
39,496,694
<p>I am working on a project related to image processing. I would like to capture an image from a webcam and want to display it on webpage. I am using django framework for web stuff. </p> <p>Program to capture image from webcam in views.py:</p> <pre><code>from django.shortcuts import render from django.views.decorators.csrf import csrf_exempt import cv2 def home(request): return render(request,'detect/home.html',{}) def get_image(camera): retval, im = camera.read() return im def webcam(): camera_port = 0 ramp_frames = 30 camera = cv2.VideoCapture(camera_port) for i in xrange(ramp_frames): temp = get_image(camera) print("Taking image...") camera_capture = get_image(camera) file = "/detect/static/test_image.png" cv2.imwrite(file, camera_capture) del(camera) @csrf_exempt def display(request): webcam() return render(request,'detect/display.html',{}) </code></pre> <p>Here is a screenshot of my directory structure: <a href="http://i.stack.imgur.com/OneNO.png" rel="nofollow"><img src="http://i.stack.imgur.com/OneNO.png" alt="enter image description here"></a></p> <p>If I don't mention any path and only include the name of the image file(file = "test_image.png"), the image gets saved in /moody/project. I would like to save image in /moody/project/detect/static/.</p>
-2
2016-09-14T17:47:48Z
39,497,220
<p>Use the full path from <code>/</code>, or a path relative to your current working directory by putting a dot ('.') before the first '/' in the file path.</p> <pre><code>file = "./detect/static/test_image.png" </code></pre>
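<p>If the current working directory cannot be relied on (common when Django is started from somewhere else), a sketch that anchors the path to the source file instead, assuming <code>views.py</code> lives inside the <code>detect</code> app directory, would be:</p>
<pre><code>import os

# directory containing this views.py, e.g. /moody/project/detect
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
file = os.path.join(BASE_DIR, 'static', 'test_image.png')
</code></pre>
<p><code>cv2.imwrite(file, camera_capture)</code> can then use that path regardless of where the process was launched from.</p>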
3
2016-09-14T18:25:13Z
[ "python", "django", "opencv" ]
How to create a numpy dtype from other dtypes?
39,496,726
<p>I normally create numpy dtypes like this: </p> <pre><code>C = np.dtype([('a',int),('b',float)]) </code></pre> <p>However in my code I also use the fields <code>a</code> and <code>b</code> individually elsewhere: </p> <pre><code>A = np.dtype([('a',int)]) B = np.dtype([('b',float)]) </code></pre> <p>For maintainability I'd like to derive <code>C</code> from types <code>A</code> and <code>B</code> somehow like this: </p> <pre><code>C = np.dtype([A,B]) # this gives a TypeError </code></pre> <p>Is there a way in numpy to create complex dtypes by combining other dtypes?</p>
2
2016-09-14T17:50:05Z
39,496,839
<p>You can combine the fields using the <code>.descr</code> attribute of the dtypes. For example, here are your <code>A</code> and <code>B</code>. Note that the <code>.descr</code> attrbute is a list containing an entry for each field:</p> <pre><code>In [44]: A = np.dtype([('a',int)]) In [45]: A.descr Out[45]: [('a', '&lt;i8')] In [46]: B = np.dtype([('b',float)]) In [47]: B.descr Out[47]: [('b', '&lt;f8')] </code></pre> <p>Because the values of the <code>.descr</code> attributes are lists, they can be added to create a new dtype:</p> <pre><code>In [48]: C = np.dtype(A.descr + B.descr) In [49]: C Out[49]: dtype([('a', '&lt;i8'), ('b', '&lt;f8')]) </code></pre>
3
2016-09-14T17:58:05Z
[ "python", "numpy", "numpy-dtype" ]
How to create a numpy dtype from other dtypes?
39,496,726
<p>I normally create numpy dtypes like this: </p> <pre><code>C = np.dtype([('a',int),('b',float)]) </code></pre> <p>However in my code I also use the fields <code>a</code> and <code>b</code> individually elsewhere: </p> <pre><code>A = np.dtype([('a',int)]) B = np.dtype([('b',float)]) </code></pre> <p>For maintainability I'd like to derive <code>C</code> from types <code>A</code> and <code>B</code> somehow like this: </p> <pre><code>C = np.dtype([A,B]) # this gives a TypeError </code></pre> <p>Is there a way in numpy to create complex dtypes by combining other dtypes?</p>
2
2016-09-14T17:50:05Z
39,496,843
<p>According to <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html" rel="nofollow">dtype documentation</a>, dtypes have an attribute <code>descr</code> which provides an "<em>Array-interface compliant full description of the data-type</em>". Therefore: </p> <pre><code>A = np.dtype([('a',int)]) # A.descr -&gt; [('a', '&lt;i4')] B = np.dtype([('b',float)]) # B.descr -&gt; [('b', '&lt;f8')] # then C = np.dtype([A.descr[0], B.descr[0]]) </code></pre>
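<p>A quick usage check of the combined dtype (field names as in the question), just to confirm both fields survive:</p>
<pre><code>import numpy as np

A = np.dtype([('a', int)])
B = np.dtype([('b', float)])
C = np.dtype([A.descr[0], B.descr[0]])

arr = np.zeros(3, dtype=C)   # structured array with both fields
arr['a'] = [1, 2, 3]
arr['b'] = 0.5
print(arr)                   # prints [(1, 0.5) (2, 0.5) (3, 0.5)]
</code></pre>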
4
2016-09-14T17:58:15Z
[ "python", "numpy", "numpy-dtype" ]
How to create a numpy dtype from other dtypes?
39,496,726
<p>I normally create numpy dtypes like this: </p> <pre><code>C = np.dtype([('a',int),('b',float)]) </code></pre> <p>However in my code I also use the fields <code>a</code> and <code>b</code> individually elsewhere: </p> <pre><code>A = np.dtype([('a',int)]) B = np.dtype([('b',float)]) </code></pre> <p>For maintainability I'd like to derive <code>C</code> from types <code>A</code> and <code>B</code> somehow like this: </p> <pre><code>C = np.dtype([A,B]) # this gives a TypeError </code></pre> <p>Is there a way in numpy to create complex dtypes by combining other dtypes?</p>
2
2016-09-14T17:50:05Z
39,497,303
<p>There is a backwater module in <code>numpy</code> that has a bunch of structured/rec array utilities.</p> <p><code>zip_descr</code> does this sort of <code>descr</code> concatenation, but it starts with arrays rather than the <code>dtypes</code>:</p> <pre><code>In [77]: import numpy.lib.recfunctions as rf In [78]: rf.zip_descr([np.zeros((0,),dtype=A),np.zeros((0,),dtype=B)]) Out[78]: [('a', '&lt;i4'), ('b', '&lt;f8')] In [81]: rf.zip_descr([np.array((0,),dtype=A),np.array((0,),dtype=B)]) Out[81]: [('a', '&lt;i4'), ('b', '&lt;f8')] </code></pre>
0
2016-09-14T18:29:31Z
[ "python", "numpy", "numpy-dtype" ]
How can I make this program ignore punctuation
39,496,745
<p>I'm new to python and I'm not sure how I can make this program ignore punctuation; I know it's really inefficient but I'm not bothered about it at this moment in time.</p> <pre><code>while True: y="y" n="n" Sentence=input("Please enter your sentence: ").upper() print("Your sentence is:",Sentence) Correct=input("Is your sentence correct? y/n ") if Correct==n: break elif Correct==y: Location=0 SplitSentence = Sentence.split(" ") for word in SplitSentence: locals()[word] = Location Location+=1 print("") FindWord=input("What word would you like to search? ").upper() if FindWord not in SplitSentence: print("Your chosen word is not in your sentence") else: iterate=0 WordBank=[] for word in SplitSentence: iterate=iterate+1 if word == FindWord: WordBank.append(iterate) print(FindWord, WordBank) break </code></pre> <p>I appreciate any help you can give me</p>
1
2016-09-14T17:52:00Z
39,497,093
<p>You can use Python's <code>string</code> module to help test for punctuation.</p> <pre><code>&gt;&gt; import string &gt;&gt; print string.punctuation !"#$%&amp;'()*+,-./:;&lt;=&gt;?@[\]^_`{|}~ &gt;&gt; sentence = "I am a sentence, and; I haven't been punctuated well.!" </code></pre> <p>You can <code>split</code> the sentence at each space to get the individual words from your sentence, then remove the punctuation from each word. Or, you can remove the punctuation from the sentence first then re-form the individual words. We'll do option 2 - make a list of all characters in the sentence, except for the punctuation marks, then join that list together.</p> <pre><code>&gt;&gt; cleaned_sentence = ''.join([c for c in sentence if c not in string.punctuation]) &gt;&gt; print cleaned_sentence 'I am a sentence and I havent been punctuated well' </code></pre> <p>Notice that the apostrophe from "haven't" got removed - a side effect of ignoring punctuation altogether.</p>
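<p>Dropped into the question's Python 3 code, the same idea (a sketch reusing the variable names from the question) would be:</p>
<pre><code>import string

Sentence = input("Please enter your sentence: ").upper()
cleaned = ''.join(c for c in Sentence if c not in string.punctuation)
SplitSentence = cleaned.split()   # word positions are no longer affected by punctuation
</code></pre>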
1
2016-09-14T18:17:13Z
[ "python" ]
kill a task with name containing whitespace using python os.system
39,496,858
<p>From a python script, I am trying to terminate a task. The task name contains whitespace. </p> <p>Like "My Program.exe" for instance.</p> <p>I use the following line but it complains about invalid argument because the name contains a whitespace.</p> <pre><code>os.system("TASKKILL /F /IM My Program.exe") </code></pre> <p>I can't find how to escape the whitespace.</p> <p>I tried</p> <pre><code>os.system("TASKKILL /F /IM "My Program.exe"") os.system("TASKKILL /F /IM \"My Program.exe\"") os.system("TASKKILL /F /IM 'My Program.exe'") os.system("TASKKILL /F /IM \'My Program.exe\'") </code></pre> <p>But still does not work.</p>
1
2016-09-14T17:59:28Z
39,497,130
<p>This worked for me:</p> <pre><code>os.system("TASKKILL /F /IM \"My Program.exe\"") </code></pre> <p>Check that your application is really named 'My Program.exe' in Task Manager. Also, check that you have sufficient privileges to kill that process.</p>
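<p>An alternative that sidesteps quoting altogether, assuming the standard library <code>subprocess</code> module is acceptable, is to pass the arguments as a list:</p>
<pre><code>import subprocess

# each argument is its own list element, so the space in the image name needs no escaping
subprocess.call(['TASKKILL', '/F', '/IM', 'My Program.exe'])
</code></pre>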
1
2016-09-14T18:20:12Z
[ "python", "whitespace", "taskkill" ]
"Name Error: name 'get_ipython' is not defined" while preparing a debugging session via "import ipdb"
39,496,891
<p>I'm trying to install and use ipdb (IPython-enabled pdb) on Python 3.3.5 32 bit on Win10 using PIP 8.1.2. I've installed via PIP (had to install it seprately) in windows cmd with no errors:</p> <pre><code> pip install ipdb </code></pre> <p>I wrote a simple test script expecting to stop in debugger before printing 'test' string, <em>ipdb_test.py</em>:</p> <pre><code>import ipdb ipdb.set_trace() print('test') </code></pre> <p>When running it from IDLE editor the following exceptions show up:</p> <pre><code>Traceback (most recent call last): File "C:\Python33.5-32\lib\site-packages\ipdb\__main__.py", line 44, in &lt;module&gt; get_ipython NameError: name 'get_ipython' is not defined During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/temp/ipdb_test.py", line 1, in &lt;module&gt; import ipdb File "C:\Python33.5-32\lib\site-packages\ipdb\__init__.py", line 7, in &lt;module&gt; from ipdb.__main__ import set_trace, post_mortem, pm, run # noqa File "C:\Python33.5-32\lib\site-packages\ipdb\__main__.py", line 51, in &lt;module&gt; (...) File "C:\Python33.5-32\lib\site-packages\prompt_toolkit\terminal\win32_output.py", line 266, in flush self.stdout.flush() AttributeError: 'NoneType' object has no attribute 'flush' </code></pre>
0
2016-09-14T18:02:06Z
39,501,284
<p>As the issue seemed to be related to IPython, I've checked that the version installed while resolving ipdb dependencies was: "ipython-5.1.0". </p> <p>The WA solution for the issue occured to be a fallback to version 4.2.1 of IPython:</p> <pre><code>pip install "ipython&lt;5" (...) Successfully uninstalled ipython-5.1.0 Successfully installed ipython-4.2.1 </code></pre> <p>After that ipdb halted on a breakpoint as expected:</p> <pre><code>$ python C:\temp\ipdb_test.py WARNING: Readline services not available or not loaded. WARNING: Proper color support under MS Windows requires the pyreadline library. You can find it at: http://ipython.org/pyreadline.html Defaulting color scheme to 'NoColor' &gt; c:\temp\ipdb_test.py(3)&lt;module&gt;() 1 import ipdb 2 ipdb.set_trace() ----&gt; 3 print('test') ipdb&gt; </code></pre> <p>It may be a valid case to contact the IPython project team on the issue, meanwhile I find an initial task of running a debug session completed.</p>
0
2016-09-14T23:51:22Z
[ "python", "python-3.x", "ipython", "ipdb" ]
I am having trouble using or importing matplotlib. This is the first line of my code: import matplotlib.pyplot as plt
39,496,987
<p>These are the messages I'm receiving and I have no idea what any of them mean.</p> <p>I have <code>python 2.7.12</code> and have imported <code>matplotlib</code>.</p> <p>In the command prompt window, the last update I installed was <strong>conda install matplotlib</strong> which seemed to update some stuff (<code>conda-env-2.5.2</code> and <code>conda-4.1.12</code>), but unfortunately not the stuff I needed.</p> <p>I have never used any of this before so please speak in layman's terms if possible.</p> <p>Traceback (most recent call last):</p> <pre><code> File "C:\Users\user\Desktop\plotting2.py", line 1, in &lt;module&gt; import matplotlib.pyplot as plt File "C:\Python27\lib\site-packages\matplotlib\pyplot.py", line 36, in &lt;module&gt; from matplotlib.figure import Figure, figaspect File "C:\Python27\lib\site-packages\matplotlib\figure.py", line 40, in &lt;module&gt; from matplotlib.axes import Axes, SubplotBase, subplot_class_factory File "C:\Python27\lib\site-packages\matplotlib\axes\__init__.py", line 4, in &lt;module&gt; from ._subplots import * File "C:\Python27\lib\site-packages\matplotlib\axes\_subplots.py", line 10, in &lt;module&gt; from matplotlib.axes._axes import Axes File "C:\Python27\lib\site-packages\matplotlib\axes\_axes.py", line 23, in &lt;module&gt; import matplotlib.dates as _ # &lt;-registers a date unit converter File "C:\Python27\lib\site-packages\matplotlib\dates.py", line 130, in &lt;module&gt; import dateutil.parser File "C:\Python27\lib\site-packages\dateutil\parser.py", line 38, in &lt;module&gt; from calendar import monthrange, isleap ImportError: cannot import name monthrange </code></pre>
0
2016-09-14T18:09:07Z
39,497,074
<p>It looks like there may be a conflicting module named calendar. Are there any other packages (files) called "calendar" (<code>calendar.py</code>) in your python path? Try opening up a new python console and type <code>from calendar import monthrange</code>.</p>
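<p>A quick way to check which <code>calendar</code> module is actually being picked up (the printed path should point into the standard library, not into your own project):</p>
<pre><code>import calendar

print(calendar.__file__)                  # where the module was loaded from
print(hasattr(calendar, 'monthrange'))    # True for the standard-library module
</code></pre>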
1
2016-09-14T18:15:58Z
[ "python", "python-2.7", "matplotlib" ]
Error reinstalling python3 in ubuntu 16.04
39,497,017
<p>I am using ubuntu 16.04 and I removed the preinstalled python3 and want to install it again. However, I'm getting an error when using <code>sudo apt-get -f install python3</code> :</p> <pre><code>Reading package lists... Done Building dependency tree Reading state information... Done python3.5 is already the newest version (3.5.2-2~16.01). python3.5 set to manually installed. The following packages were automatically installed and are no longer required: dictionaries-common emacsen-common gir1.2-appindicator3-0.1 gir1.2-atk-1.0 gir1.2-atspi-2.0 gir1.2-freedesktop gir1.2-gdkpixbuf-2.0 gir1.2-glib-2.0 gir1.2-gtk-3.0 gir1.2-pango-1.0 hunspell-en-us libcanberra0 libgirepository-1.0-1 libhunspell-1.3-0 libpangoxft-1.0-0 libvorbisfile3 sound-theme-freedesktop Use 'sudo apt autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue? [Y/n] y Setting up python3 (3.5.1-3) ... running python rtupdate hooks for python3.5... dpkg-query: package 'hplip-data' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of hplip-data error running python rtupdate hook hplip-data dpkg-query: package 'python3-uno' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of python3-uno error running python rtupdate hook python3-uno dpkg-query: package 'rhythmbox-plugins' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of rhythmbox-plugins error running python rtupdate hook rhythmbox-plugins dpkg-query: package 'rhythmbox' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. 
Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of rhythmbox error running python rtupdate hook rhythmbox dpkg-query: package 'totem-plugins' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53,from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of totem-plugins error running python rtupdate hook totem-plugins dpkg-query: package 'ubuntu-drivers-common' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of ubuntu-drivers-common error running python rtupdate hook ubuntu-drivers-common dpkg: error processing package python3 (--configure): subprocess installed post-installation script returned error exit status 4 dpkg: dependency problems prevent configuration of dh-python: dh-python depends on python3:any (&gt;= 3.3.2-2~); however: Package python3 is not configured yet. dpkg: error processing package dh-python (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: python3 dh-python E: Sub-process /usr/bin/dpkg returned an error code (1) </code></pre> <p>I have anaconda2 also installed which has python 2.7 and the PYTHON_PATH is set to that. I already tried changing that with the same results. I was originally trying to install gedbi-core using <code>sudo apt-get install gdebi-core</code> and I was getting the same error. After looking online, I tried </p> <pre><code>sudo apt-get clean sudo apt-get update sudo apt-get remove python3.* </code></pre> <p>Now I am trying to reinstall python3.5. </p>
0
2016-09-14T18:10:59Z
39,497,413
<p>Anaconda supports multiple environments for your Python installations. You should use:</p> <pre><code>conda create -n py35 python=3.5 anaconda </code></pre> <p>Then, to use the Python 3.5 environment, use:</p> <pre><code>source activate py35 </code></pre> <p>from the command line (on Windows it is just <code>activate py35</code>). You can refer to:</p> <p><a href="http://conda.pydata.org/docs/py2or3.html" rel="nofollow">http://conda.pydata.org/docs/py2or3.html</a></p> <p>You can also add the new environment to your Python IDE (if desired). The new environment would be created in:</p> <pre><code> Linux: /&lt;YOUR_ANACONDA_PATH&gt;/envs/py35 Windows: C:\&lt;YOUR_ANACONDA_PATH&gt;\envs\py35 </code></pre>
0
2016-09-14T18:36:29Z
[ "python", "ubuntu", "ubuntu-16.04" ]
Error reinstalling python3 in ubuntu 16.04
39,497,017
<p>I am using ubuntu 16.04 and I removed the preinstalled python3 and want to install it again. However, I'm getting an error when using <code>sudo apt-get -f install python3</code> :</p> <pre><code>Reading package lists... Done Building dependency tree Reading state information... Done python3.5 is already the newest version (3.5.2-2~16.01). python3.5 set to manually installed. The following packages were automatically installed and are no longer required: dictionaries-common emacsen-common gir1.2-appindicator3-0.1 gir1.2-atk-1.0 gir1.2-atspi-2.0 gir1.2-freedesktop gir1.2-gdkpixbuf-2.0 gir1.2-glib-2.0 gir1.2-gtk-3.0 gir1.2-pango-1.0 hunspell-en-us libcanberra0 libgirepository-1.0-1 libhunspell-1.3-0 libpangoxft-1.0-0 libvorbisfile3 sound-theme-freedesktop Use 'sudo apt autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue? [Y/n] y Setting up python3 (3.5.1-3) ... running python rtupdate hooks for python3.5... dpkg-query: package 'hplip-data' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of hplip-data error running python rtupdate hook hplip-data dpkg-query: package 'python3-uno' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of python3-uno error running python rtupdate hook python3-uno dpkg-query: package 'rhythmbox-plugins' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of rhythmbox-plugins error running python rtupdate hook rhythmbox-plugins dpkg-query: package 'rhythmbox' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. 
Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of rhythmbox error running python rtupdate hook rhythmbox dpkg-query: package 'totem-plugins' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53,from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of totem-plugins error running python rtupdate hook totem-plugins dpkg-query: package 'ubuntu-drivers-common' is not installed Use dpkg --info (= dpkg-deb --info) to examine archive files, and dpkg --contents (= dpkg-deb --contents) to list their contents. Traceback (most recent call last): File "/usr/bin/py3clean", line 210, in &lt;module&gt; main() File "/usr/bin/py3clean", line 196, in main pfiles = set(dpf.from_package(options.package)) File "/usr/share/python3/debpython/files.py", line 53, in from_package raise Exception("cannot get content of %s" % package_name) Exception: cannot get content of ubuntu-drivers-common error running python rtupdate hook ubuntu-drivers-common dpkg: error processing package python3 (--configure): subprocess installed post-installation script returned error exit status 4 dpkg: dependency problems prevent configuration of dh-python: dh-python depends on python3:any (&gt;= 3.3.2-2~); however: Package python3 is not configured yet. dpkg: error processing package dh-python (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: python3 dh-python E: Sub-process /usr/bin/dpkg returned an error code (1) </code></pre> <p>I have anaconda2 also installed which has python 2.7 and the PYTHON_PATH is set to that. I already tried changing that with the same results. I was originally trying to install gedbi-core using <code>sudo apt-get install gdebi-core</code> and I was getting the same error. After looking online, I tried </p> <pre><code>sudo apt-get clean sudo apt-get update sudo apt-get remove python3.* </code></pre> <p>Now I am trying to reinstall python3.5. </p>
0
2016-09-14T18:10:59Z
39,497,529
<p>Clearly looks like a packaging error in the Ubuntu <code>python3</code> package. Report it to Launchpad if it's not already there.</p>
0
2016-09-14T18:43:14Z
[ "python", "ubuntu", "ubuntu-16.04" ]
Not able to pass an array into another array within a for loop
39,497,038
<p>I want to evaluate an expression inside for loop. I am doing:</p> <pre><code> for i in range(1,height+1): for j in range(1,width+1): y[i,j] = Q[img[i,j]+1] </code></pre> <p>but this returns an error saying </p> <pre><code> y[i,j] = Q[img[i,j]+1] TypeError: 'numpy.ndarray' object is not callable </code></pre> <p>Thanks :)</p>
1
2016-09-14T18:12:42Z
39,497,087
<p><code>Q(x)</code> is trying to call <code>Q</code> with <code>x</code> as an argument. Use <code>Q[x]</code> to access index <code>x</code>.</p> <p>EDIT:</p> <p>Your posted block of code already uses the <code>Q[x]</code> syntax. Do you still get the error message?</p>
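<p>A tiny illustration of the difference, using a made-up array just to show the two syntaxes:</p>
<pre><code>import numpy as np

Q = np.arange(5)
print(Q[2])   # indexing with square brackets prints 2
# Q(2)        # calling with parentheses raises TypeError: 'numpy.ndarray' object is not callable
</code></pre>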
0
2016-09-14T18:16:50Z
[ "python", "numpy", "matplotlib" ]
Django+heroku: django logs appear, app logs don't
39,497,040
<p>I read all the StackOverflow answers, all the blog posts on the subject, and tried everything twice, but I still can't get my Django app's log messsages to appear in the heroku log (Django's own messages do appear).</p> <p>Can anyone please paste a full LOGGING config that works in heroku?</p> <pre><code># views.py import logging logger = logging.getLogger(__name__) def a_view(request): # ... logger.exception('error finding file') # ... </code></pre> <p>with:</p> <pre><code># settings.py LOGGING = { "version": 1, "disable_existing_loggers": False, 'formatters': { 'simple': { 'format': '%(levelname)s [%(name)s:%(lineno)s] %(message)s' }, }, 'handlers': { 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'simple', "stream": sys.stdout }, }, "loggers": { "root": { "handlers": ["console"], }, # last try :( "myapp": { "handlers": ["console"], }, "django": { "handlers": ["console"], } } } </code></pre>
2
2016-09-14T18:12:43Z
39,499,703
<p>Here is a configuration that worked for me (OP):</p> <pre><code>LOGGING = { "version": 1, "disable_existing_loggers": False, 'formatters': { 'simple': { 'format': '%(levelname)s [%(name)s:%(lineno)s] %(message)s' }, }, 'handlers': { 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'simple', "stream": sys.stdout }, }, "loggers": { "": { "handlers": ["console"], 'level': 'INFO', }, "django": { "handlers": ["console"], 'level': 'INFO', } } } </code></pre> <p>The modification that made the difference is changing the level of all loggers, but before that I changed <code>"root"</code> to <code>""</code>, so that might have been needed too.</p> <p>edit: removed "myapp" logger as it is captured by "" logger, resolved my double logging issue.</p>
2
2016-09-14T21:11:43Z
[ "python", "django", "logging", "heroku" ]
Python during runtime does not consider a number entered as a int or float
39,497,067
<p>I am a beginner and it might feel like a silly question. </p> <p>I am trying to create a simple program,a temperature converter from Celsius to F*.</p> <p>Which takes:</p> <ol> <li>'quit' for stopping program</li> <li>int values for providing the correct result</li> </ol> <p>Problem: any text value entered by user is handled through try expect error and a while loop which ask them to enter the int value again, but the second time user enters a correct value (which is int) the system considers it as a string and the loop is never ending.</p> <p>Here is my program:</p> <pre><code># Program to convert Celsius to Fahrenheit import time while True: userinput= input(" 1 Enter the temperature in celsius: " "Enter Quit if you want to close the converter") if userinput.lower()=='quit': print('Closing the application. Thanks for using the converter') break try: userinput= float(userinput) except ValueError: print('You cannot enter text, please use numbers') while userinput.lower()!='quit': if type(userinput)!=str: userinput = float(userinput) print(type(userinput)) break else: while type(userinput)==str: userinput = (input(" 2. Oops Enter the temperature in celsius: " "Enter Quit if you want to close the converter")) print(type(userinput)) ## my problem is here even if enter a integer 2 as userinput. i am getting the type userinput as str. Can someone help me type(userinput) userinput=float(userinput) f=userinput*9.0/5+32 print('Your input was:',userinput,'\n'*2,'The corresponding Fareinheit is',f,'\n'*3, '---'*25) </code></pre>
0
2016-09-14T18:15:09Z
39,497,165
<p>Not sure if I understand the problem correctly, but on Python 2 it is usually better to use <code>raw_input</code> instead of <code>input</code> in a case like this, because <code>raw_input</code> returns the text as a plain string instead of evaluating it. On Python 3, <code>input</code> already behaves that way.</p>
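<p>One way the retry prompt in the question could be restructured so the re-entered value is converted before the check runs again (a sketch that keeps the question's wording):</p>
<pre><code>while True:
    userinput = input("Oops, enter the temperature in celsius (or Quit to close): ")
    if userinput.lower() == 'quit':
        break
    try:
        userinput = float(userinput)   # now a float, so the outer logic can use it
        break
    except ValueError:
        print('You cannot enter text, please use numbers')
</code></pre>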
0
2016-09-14T18:21:46Z
[ "python", "input" ]