title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Receiving AssertionError using Flask sqlalchemy and restful | 39,845,187 | <p>I have been stumped and can't seem to figure out why I am receiving an AssertionError. I am currently working on a REST API using the flask_restful library. I am querying like this:</p>
<pre><code>@staticmethod
def find_by_id(id, user_id):
    f = File.query.filter_by(id=id).first() #Error is happening here
    if f is not None:
        if f.check_permission(user_id)>=4:
            return f
        print f.check_permission(user_id)
        FileErrors.InsufficientFilePermission()
    FileErrors.FileDoesNotExist()
</code></pre>
<p>The error message looks like this:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 271, in error_router
return original_handler(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 268, in error_router
return self.handle_error(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 271, in error_router
return original_handler(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1531, in handle_user_exception
assert exc_value is e
AssertionError
</code></pre>
<p>This is what my File model looks like:</p>
<pre><code>class File(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer)
    parts = db.Column(db.Integer)
    size = db.Column(db.Integer)
    name = db.Column(db.String(100))

    def __init__ (self, file_info):
        self.user_id = file_info['user_id']
        self.parts = file_info['parts']
        self.size = file_info['size']
        self.name = file_info['name']

    @staticmethod
    def create(file_info):
        return add_to_db(File(file_info))

    @staticmethod
    def delete(file_id, user_id):
        pass

    def check_permission(self,user_id):
        permission = 0
        print 'self.user_id {}'.format(self.user_id)
        print 'user_id {}'.format(user_id)
        if self.user_id == user_id:
            return 7
        fs = FileShare.find_by_file_and_user_id(self.id, user_id)
        if fs is not None:
            permission = fs.permission
        return permission

    @staticmethod
    def find_by_id(id, user_id):
        f = File.query.filter_by(id=id).first() #Error is happening here
        if f is not None:
            if f.check_permission(user_id)>=4:
                return f
            print f.check_permission(user_id)
            FileErrors.InsufficientFilePermission()
        FileErrors.FileDoesNotExist()
</code></pre>
<p>Any help would be appreciated. Thanks in advance.</p>
| 0 | 2016-10-04T06:04:11Z | 39,866,856 | <p>Although I couldn't figure out exactly why the error occurs, I did find how to prevent it. It seems to be caused by the query not pulling live data right after a commit. The way to prevent it is to use <code>db.session.query()</code>. So in my example I would change:</p>
<p><code>f = File.query.filter_by(id=id).first()</code> </p>
<p>to </p>
<p><code>f = db.session.query(File).filter_by(id=id).first()</code></p>
<p>For some reason that works, although I don't know why.</p>
<p><strong><em>EDIT</em></strong>: It seems to have to do with the class not receiving the updated session. For the time being I recommend using queries made through the session.</p>
| 0 | 2016-10-05T06:29:40Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy",
"flask-restful"
]
|
sorting json file using python having value as string | 39,845,303 | <p>I have a JSON file:</p>
<pre><code>ll = {"employees":[
    {"firstName":"John", "lastName":"Doe"},
    {"firstName":"Anna", "lastName":"Smith"},
    {"firstName":"James", "lastName":"Bond"},
    {"firstName":"Celestial", "lastName":"Systems"},
    {"firstName":"Peter", "lastName":"Jones"}
]}
</code></pre>
<p>I want to sort by key (lastName or firstName).</p>
<p>I know the sorting method we use when the value is a number, but I am not able to sort when the value is a string.</p>
<p>I tried the following:</p>
<pre><code>#data = json.load(file)
new = sorted(data, key = lambda k: k['employees'].get('lastName',0))
</code></pre>
<p>and I am getting this error:</p>
<pre><code>TypeError: string indices must be integers
</code></pre>
| 0 | 2016-10-04T06:13:27Z | 39,845,360 | <p>You need to give a list of objects to <code>sorted</code>, and <code>key</code> should be a function that returns the sort value for a single object.</p>
<pre><code>In [1]: ll = {"employees": [{"name": "abc"}, {"name": "bcd"}]}
In [2]: sorted(ll['employees'], key=lambda x: x['name'])
Out[2]: [{'name': 'abc'}, {'name': 'bcd'}]
</code></pre>
<p>As you can see, I pass the list of employees to <code>sorted</code>. Then <code>sorted</code> runs the <code>lambda</code> on every element of that list (so it calls <code>x['name']</code> on <code>{"name": "abc"}</code> and gets the name). Then <code>sorted</code> orders the elements using that key; it does not matter whether the key is a string or a number, since strings are compared character by character.</p>
<p>When you have the sorted list you can assign it back to <code>ll['employees']</code> or any other place.</p>
<p>Anyway, judging from your code (you didn't include all of it), I assume that you're calling <code>sorted</code> on the <code>dict</code>, not the <code>list</code>. You're also using the wrong <code>key</code>: <code>k</code> is not the full object, but a single element of the <code>list</code> passed to <code>sorted</code>.</p>
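<p>For instance, the same idea applied to the exact structure from the question (a runnable sketch using the question's data):</p>

```python
ll = {"employees": [
    {"firstName": "John", "lastName": "Doe"},
    {"firstName": "Anna", "lastName": "Smith"},
    {"firstName": "James", "lastName": "Bond"},
    {"firstName": "Celestial", "lastName": "Systems"},
    {"firstName": "Peter", "lastName": "Jones"},
]}

# Sort the inner list (not the outer dict); the key function
# receives one employee dict at a time.
ll["employees"] = sorted(ll["employees"], key=lambda e: e["lastName"])

print([e["lastName"] for e in ll["employees"]])
# -> ['Bond', 'Doe', 'Jones', 'Smith', 'Systems']
```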
| 1 | 2016-10-04T06:18:32Z | [
"python",
"json"
]
|
sorting json file using python having value as string | 39,845,303 | <p>I have a JSON file:</p>
<pre><code>ll = {"employees":[
    {"firstName":"John", "lastName":"Doe"},
    {"firstName":"Anna", "lastName":"Smith"},
    {"firstName":"James", "lastName":"Bond"},
    {"firstName":"Celestial", "lastName":"Systems"},
    {"firstName":"Peter", "lastName":"Jones"}
]}
</code></pre>
<p>I want to sort by key (lastName or firstName).</p>
<p>I know the sorting method we use when the value is a number, but I am not able to sort when the value is a string.</p>
<p>I tried the following:</p>
<pre><code>#data = json.load(file)
new = sorted(data, key = lambda k: k['employees'].get('lastName',0))
</code></pre>
<p>and I am getting this error:</p>
<pre><code>TypeError: string indices must be integers
</code></pre>
| 0 | 2016-10-04T06:13:27Z | 39,845,950 | <p>You can sort the <code>ll["employees"]</code> list in-place using the <code>list.sort</code> method. And while you can roll your own key function using <code>lambda</code> it's cleaner and more efficient to use <code>operator.itemgetter</code>.</p>
<p>In this code, my key function creates a tuple of the <code>lastName</code> followed by the <code>firstName</code>, so the list is primarily sorted by <code>lastName</code>, but people with the same <code>lastName</code> are sorted by <code>firstName</code>.</p>
<pre><code>from operator import itemgetter
from pprint import pprint

ll = {"employees":[
    {"firstName":"John", "lastName":"Doe"},
    {"firstName":"Patti", "lastName":"Smith"},
    {"firstName":"Anna", "lastName":"Smith"},
    {"firstName":"James", "lastName":"Bond"},
    {"firstName":"Celestial", "lastName":"Systems"},
    {"firstName":"Peter", "lastName":"Jones"}
]}

ll["employees"].sort(key=itemgetter("lastName", "firstName"))
pprint(ll)
</code></pre>
<p><strong>output</strong></p>
<pre><code>{'employees': [{'firstName': 'James', 'lastName': 'Bond'},
               {'firstName': 'John', 'lastName': 'Doe'},
               {'firstName': 'Peter', 'lastName': 'Jones'},
               {'firstName': 'Anna', 'lastName': 'Smith'},
               {'firstName': 'Patti', 'lastName': 'Smith'},
               {'firstName': 'Celestial', 'lastName': 'Systems'}]}
</code></pre>
| 1 | 2016-10-04T06:54:49Z | [
"python",
"json"
]
|
How to use python to fill in dates and values so that every day has the same number of entries? | 39,845,417 | <p>I have a data set that looks like the table below. Every day has time stamps at 15-minute intervals, so there should be 4*24 = 96 entries per day. It is important that each day has the same number of values, because the np.corrcoef() function I will use later to find the correlation of each day requires this. </p>
<pre><code>date value
5/1/2015 0:00:00 23
5/1/2015 0:15:00 22
5/1/2015 0:30:00 50
......
5/1/2015 23:30:00 60
5/1/2015 23:45:00 27
</code></pre>
<p>However, the problem right now is that some days are missing values. For example, 5/2/2015 below only has two entries. </p>
<pre><code>5/2/2015 0:00:00 60
5/2/2015 0:15:00 45
5/3/2015 0:00:00 60
......
</code></pre>
<p>What I hope to do is to complete the 5/2/2015 time series by adding 94 extra rows (0:30:00, 0:45:00, ...23:45:00) and use 45 (the last value in 5/2/2015) as the duplicate for all the "fake data" created for the rest of this day. I also hope the script can do the same for the other days that are missing values.</p>
<p>I heard about the python interpolation function (scipy.interpolate) but it does not seem to work here? </p>
<p>Apologies for not showing any code examples, as I have no clue how to do this in a Pythonic way. If you could give me a small example of how to solve this problem or just point me to the right function, that would be great. Thank you for your help in advance!</p>
| 0 | 2016-10-04T06:22:29Z | 39,847,337 | <p>Here is an example: </p>
<pre><code>import pandas as pd
index = pd.date_range('5/1/2015', periods=12, freq='15T')
series = pd.Series(range(12), index=index)
series = series.drop(series[2:5].index)
print(series)
print(series.resample('15T').pad())
print(series.resample('15T').bfill())
print(series.resample('15T').interpolate(method='linear'))
</code></pre>
<p>Use either <code>pad</code> (repeat the last value for missing ones), <code>bfill</code> (backfill), or <code>interpolate</code> with an appropriate interpolation method to fill missing values.</p>
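<p>A sketch closer to the question's exact scenario (timestamps and values taken from the question): <code>reindex</code> the short day onto the full 96-slot grid and forward-fill, so the last real value (45) is duplicated for the missing slots:</p>

```python
import pandas as pd

# The two readings that exist for 5/2/2015 in the question.
s = pd.Series([60, 45],
              index=pd.to_datetime(['2015-05-02 00:00:00',
                                    '2015-05-02 00:15:00']))

# Build the complete 96-slot grid for that day, then forward-fill
# the gaps with the last observed value.
full_index = pd.date_range('2015-05-02', periods=96, freq='15min')
filled = s.reindex(full_index).ffill()

print(len(filled))      # -> 96
print(filled.iloc[-1])  # -> 45.0
```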
| 1 | 2016-10-04T08:12:44Z | [
"python"
]
|
pgadmin4 - New install not working | 39,845,604 | <p>I downloaded the <code>postgresql-9.6.0-1-linux-x64.run</code> package and ran through the installer on <strong>ubuntu 16.04</strong>. <strong>Postgres</strong> is working fine. I am trying to use the <strong>pgadmin4</strong> package that was included with this installer. I created a site in Apache per the instructions. </p>
<p>This is the error I am getting in the server.log file in Apache. Not sure how to fix this.</p>
<pre><code> Traceback (most recent call last):
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgAdmin4.wsgi", line 8, in <module>
from pgAdmin4 import app as application
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgAdmin4.py", line 24, in <module>
from pgadmin import create_app
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgadmin/__init__.py", line 18, in <module>
from flask_babel import Babel, gettext
ImportError: No module named flask_babel
</code></pre>
| 0 | 2016-10-04T06:34:18Z | 39,848,018 | <p>This error message shows that your environment is missing a package called <code>flask_babel</code>. To install it, switch to the virtualenv your webserver uses and install it with this command:</p>
<pre><code>pip install flask_babel
</code></pre>
<p>If you are not using any virtual environment for your python scripts, you have to prepend <code>sudo</code> to the command. But you should really <a href="http://stackoverflow.com/questions/5844869/comprehensive-beginners-virtualenv-tutorial">consider using a virtualenv</a> for your projects.</p>
| 0 | 2016-10-04T08:52:21Z | [
"python",
"postgresql",
"pgadmin-4"
]
|
pgadmin4 - New install not working | 39,845,604 | <p>I downloaded the <code>postgresql-9.6.0-1-linux-x64.run</code> package and ran through the installer on <strong>ubuntu 16.04</strong>. <strong>Postgres</strong> is working fine. I am trying to use the <strong>pgadmin4</strong> package that was included with this installer. I created a site in Apache per the instructions. </p>
<p>This is the error I am getting in the server.log file in Apache. Not sure how to fix this.</p>
<pre><code> Traceback (most recent call last):
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgAdmin4.wsgi", line 8, in <module>
from pgAdmin4 import app as application
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgAdmin4.py", line 24, in <module>
from pgadmin import create_app
File "/opt/PostgreSQL/9.6/pgAdmin4/web/pgadmin/__init__.py", line 18, in <module>
from flask_babel import Babel, gettext
ImportError: No module named flask_babel
</code></pre>
| 0 | 2016-10-04T06:34:18Z | 39,871,086 | <p>If you are using virtualenv to run pgAdmin4 then you need to activate it first,
Refer Apache mine wsgi file.<a href="http://i.stack.imgur.com/8HbKw.png" rel="nofollow"><img src="http://i.stack.imgur.com/8HbKw.png" alt="enter image description here"></a></p>
| 0 | 2016-10-05T10:05:48Z | [
"python",
"postgresql",
"pgadmin-4"
]
|
The 'pip==7.1.0' distribution was not found and is required by the application | 39,845,636 | <p>I have the latest version of pip (8.1.1) on my Ubuntu 16 machine,
but I am not able to install any modules via pip because I get this error every time.</p>
<pre><code>File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
ws.require(__requires__)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.0' distribution was not found and is required by the application
</code></pre>
<p>I found a similar <a href="http://stackoverflow.com/questions/38587785/pkg-resources-distributionnotfound-the-pip-1-5-4-distribution-was-not-found">link</a>, but not helpful. </p>
| 1 | 2016-10-04T06:36:03Z | 39,953,036 | <p>Im repair this with</p>
<blockquote>
<p>easy_install pip</p>
</blockquote>
| 1 | 2016-10-10T07:17:41Z | [
"python",
"python-2.7",
"ubuntu",
"pip"
]
|
Generating shell commands from functions | 39,845,724 | <p>I wrote a library for manipulating specific files. That library comprises many functions that would be useful as command line scripts whose results can be piped together.</p>
<p>In my library I have a few functions:</p>
<pre><code>def do_this(p1, p2=[]):
    return "something"

def do_that(p1, p2, p3={}):
    return "else"
</code></pre>
<p>What would be nice on the <code>shell</code>:</p>
<pre><code>$ cat my_file > do_this -p1 doo -p2 be doo | do_that -p1 be -p2 doo -p3 key1=bee key2=bop
</code></pre>
<p>I could write a script for each function parsing arguments - with <code>argparse</code>, <code>docopt</code>, etc. However, I have several functions to turn into command line scripts and I can already see the redundancy in writing the function first and then the parsing of arguments before calling the function.</p>
<p>So I would like to know what kind of alternative - if any - could be used in that case? Are there different, more productive approaches to writing Python libraries and their associated command line scripts?</p>
<p>My current idea may be naive but I see something that would parse decorated/tagged functions. Then it would produce the command line script with all required parameters. It could be based on the docstring of the functions. It kind of reminds me of the library <code>docopt</code> which requires conventions and parses the command's doc string to generate the argument parser.</p>
| 1 | 2016-10-04T06:40:37Z | 39,847,102 | <p>As you use <code>cat</code> in your example, I assume that you use Linux or another Unix-like system. A common way would be to have several links to your python script, one for each function. Then in the script, you parse the arguments once and, depending on the value of <code>sys.argv[0]</code>, call the proper function.</p>
<pre><code>...
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='...')
...
cmd = os.path.basename(sys.argv[0])
args = parser.parse_args()
if cmd == 'cmd1':
cmd1(args)
elif cmd == 'cmd2':
cmd2(args)
...
</code></pre>
<p>and the script would be linked to <code>cmd1</code>, <code>cmd2</code>, ...</p>
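<p>To make the idea concrete, here is a self-contained sketch of the dispatch-on-<code>sys.argv[0]</code> pattern (the <code>do_this</code>/<code>do_that</code> command names and the <code>-p1</code> option are illustrative placeholders, not part of the answer's code):</p>

```python
import argparse
import os
import sys

# Hypothetical library functions, standing in for the ones in the question.
def do_this(args):
    return "something with {}".format(args.p1)

def do_that(args):
    return "else with {}".format(args.p1)

COMMANDS = {'do_this': do_this, 'do_that': do_that}

def main(argv=None, prog=None):
    # One script, several names: symlink it as do_this, do_that, ...
    # and dispatch on the basename it was invoked as.
    prog = prog or os.path.basename(sys.argv[0])
    parser = argparse.ArgumentParser(prog=prog)
    parser.add_argument('-p1', required=True)
    args = parser.parse_args(argv)
    if prog not in COMMANDS:
        parser.error("unknown command name: {}".format(prog))
    return COMMANDS[prog](args)

print(main(['-p1', 'doo'], prog='do_this'))  # -> something with doo
```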
| 0 | 2016-10-04T07:59:05Z | [
"python",
"function",
"parsing",
"command-line"
]
|
Working on dates with mm-dd-YY & YY-mm-dd format in pandas | 39,845,833 | <p>I am trying to do a simple test of pandas' ability to handle dates and formats.
For that I have created a dataframe with the values below:</p>
<pre><code>df = pd.DataFrame({'date1' : ['10-11-11','12-11-12','10-10-10','12-11-11',
                              '12-12-12','11-12-11','11-11-11']})
</code></pre>
<p>Here I am assuming that the values are dates. And I am converting it into proper format using pandas' to_datetime function.</p>
<pre><code>df['format_date1'] = pd.to_datetime(df['date1'])
print(df)
Out[3]:
date1 format_date1
0 10-11-11 2011-10-11
1 12-11-12 2012-12-11
2 10-10-10 2010-10-10
3 12-11-11 2011-12-11
4 12-12-12 2012-12-12
5 11-12-11 2011-11-12
6 11-11-11 2011-11-11
</code></pre>
<p>Here, Pandas is reading the dates in the dataframe as "MM/DD/YY" and converting them to its native format (i.e. YYYY/MM/DD). I want to check whether Pandas can take my input indicating that the date format is actually "YY/MM/DD" and then convert it to its native format. This will change the value of row 5. To do this, I ran the following code, but it gives me an error:</p>
<pre><code>df3['format_date2'] = pd.to_datetime(df3['date1'], format='%Y/%m/%d')
ValueError: time data '10-10-10' does not match format '%Y/%m/%d' (match)
</code></pre>
<p>I have seen a solution of this sort <a href="http://stackoverflow.com/questions/32131742/pandas-reading-dates-from-csv-in-yy-mm-dd-format">here</a>, but I was hoping for a simpler, crisper answer.</p>
| 1 | 2016-10-04T06:48:06Z | 39,845,879 | <p><code>%Y</code> in the format specifier takes the 4-digit year (i.e. 2016). <code>%y</code> takes the 2-digit year (i.e. 16, meaning 2016). Change the <code>%Y</code> to <code>%y</code> and it should work. </p>
<p>Also, the dashes in your data are not present in your format specifier. You need to change your format to <code>%y-%m-%d</code>.</p>
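<p>Putting it together on the question's data (a quick sketch; note how row 5 changes once the format is given explicitly):</p>

```python
import pandas as pd

df = pd.DataFrame({'date1': ['10-11-11', '12-11-12', '10-10-10', '12-11-11',
                             '12-12-12', '11-12-11', '11-11-11']})

# Tell pandas the strings are two-digit-year YY-MM-DD.
df['format_date2'] = pd.to_datetime(df['date1'], format='%y-%m-%d')

print(df.loc[5, 'format_date2'])  # '11-12-11' is now parsed as 2011-12-11
```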
| 1 | 2016-10-04T06:51:20Z | [
"python",
"date",
"datetime"
]
|
Sum of difference of squares between each combination of rows of 17,000 by 300 matrix | 39,845,960 | <p>Ok, so I have a matrix with 17000 rows (examples) and 300 columns (features). I want to compute basically the Euclidean distance between each possible combination of rows, so the sum of the squared differences for each possible pair of rows.
Obviously it's a lot, and iPython, while not completely crashing my laptop, says "(busy)" for a while and then I can't run anything anymore; it certainly seems to have given up, even though I can still move my mouse.</p>
<p>Is there any way to make this work? Here's the function I wrote. I used numpy everywhere I could.
What I'm doing is storing the differences in a difference matrix for each possible combination. I'm aware that the lower diagonal part of the matrix = the upper diagonal, but that would only save 1/2 the computation time (better than nothing, but not a game changer, I think).</p>
<p><strong>EDIT</strong>: I just tried using <code>scipy.spatial.distance.pdist</code> but it's been running for a good minute now with no end in sight; is there a better way? I should also mention that I have NaN values in there... but that's not a problem for numpy apparently.</p>
<pre><code>features = np.array(dataframe)
distances = np.zeros((17000, 17000))

def sum_diff():
    for i in range(17000):
        for j in range(17000):
            diff = np.array(features[i] - features[j])
            diff = np.square(diff)
            sumsquares = np.sum(diff)
            distances[i][j] = sumsquares
</code></pre>
| 3 | 2016-10-04T06:55:20Z | 39,846,562 | <p>You could always divide your computation time by 2, noticing that d(i, i) = 0 and d(i, j) = d(j, i).</p>
<p>But have you had a look at <code>sklearn.metrics.pairwise.pairwise_distances()</code> (in v 0.18, see <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html" rel="nofollow">the doc here</a>) ?</p>
<p>You would use it as:</p>
<pre><code>from sklearn.metrics import pairwise
import numpy as np
a = np.array([[0, 0, 0], [1, 1, 1], [3, 3, 3]])
pairwise.pairwise_distances(a)
</code></pre>
| 2 | 2016-10-04T07:29:11Z | [
"python",
"numpy",
"optimization",
"matrix"
]
|
Sum of difference of squares between each combination of rows of 17,000 by 300 matrix | 39,845,960 | <p>Ok, so I have a matrix with 17000 rows (examples) and 300 columns (features). I want to compute basically the Euclidean distance between each possible combination of rows, so the sum of the squared differences for each possible pair of rows.
Obviously it's a lot, and iPython, while not completely crashing my laptop, says "(busy)" for a while and then I can't run anything anymore; it certainly seems to have given up, even though I can still move my mouse.</p>
<p>Is there any way to make this work? Here's the function I wrote. I used numpy everywhere I could.
What I'm doing is storing the differences in a difference matrix for each possible combination. I'm aware that the lower diagonal part of the matrix = the upper diagonal, but that would only save 1/2 the computation time (better than nothing, but not a game changer, I think).</p>
<p><strong>EDIT</strong>: I just tried using <code>scipy.spatial.distance.pdist</code> but it's been running for a good minute now with no end in sight; is there a better way? I should also mention that I have NaN values in there... but that's not a problem for numpy apparently.</p>
<pre><code>features = np.array(dataframe)
distances = np.zeros((17000, 17000))

def sum_diff():
    for i in range(17000):
        for j in range(17000):
            diff = np.array(features[i] - features[j])
            diff = np.square(diff)
            sumsquares = np.sum(diff)
            distances[i][j] = sumsquares
</code></pre>
| 3 | 2016-10-04T06:55:20Z | 39,846,761 | <p>Forget about numpy, which is only a convenience solution for self expanding arrays.
Use python lists instead which have a very fast indexing access and are about 15 times faster.
Use it like this:</p>
<pre><code>features = dataframe.values.tolist()
distances = [[0]*17000 for _ in range(17000)]

def sum_diff():
    for i in range(17000):
        for j in range(17000):
            sumsquares = 0
            for k in range(300):
                diff = features[i][k] - features[j][k]
                sumsquares = sumsquares + diff*diff
            distances[i][j] = sumsquares
</code></pre>
<p>I hope this is faster than your solution, just try it and give feedback please.</p>
| -3 | 2016-10-04T07:40:14Z | [
"python",
"numpy",
"optimization",
"matrix"
]
|
Sum of difference of squares between each combination of rows of 17,000 by 300 matrix | 39,845,960 | <p>Ok, so I have a matrix with 17000 rows (examples) and 300 columns (features). I want to compute basically the Euclidean distance between each possible combination of rows, so the sum of the squared differences for each possible pair of rows.
Obviously it's a lot, and iPython, while not completely crashing my laptop, says "(busy)" for a while and then I can't run anything anymore; it certainly seems to have given up, even though I can still move my mouse.</p>
<p>Is there any way to make this work? Here's the function I wrote. I used numpy everywhere I could.
What I'm doing is storing the differences in a difference matrix for each possible combination. I'm aware that the lower diagonal part of the matrix = the upper diagonal, but that would only save 1/2 the computation time (better than nothing, but not a game changer, I think).</p>
<p><strong>EDIT</strong>: I just tried using <code>scipy.spatial.distance.pdist</code> but it's been running for a good minute now with no end in sight; is there a better way? I should also mention that I have NaN values in there... but that's not a problem for numpy apparently.</p>
<pre><code>features = np.array(dataframe)
distances = np.zeros((17000, 17000))

def sum_diff():
    for i in range(17000):
        for j in range(17000):
            diff = np.array(features[i] - features[j])
            diff = np.square(diff)
            sumsquares = np.sum(diff)
            distances[i][j] = sumsquares
</code></pre>
| 3 | 2016-10-04T06:55:20Z | 39,847,037 | <p>The big thing with numpy is to avoid using loops and to let it do its magic with the vectorised operations, so there are a few basic improvements that will save you some computation time: </p>
<pre><code>import numpy as np
import timeit

# I reduced the problem size to 1000*300 to keep the timing in reasonable range
n = 1000
features = np.random.rand(n, 300)
distances = np.zeros((n, n))

def sum_diff():
    for i in range(n):
        for j in range(n):
            diff = np.array(features[i] - features[j])
            diff = np.square(diff)
            sumsquares = np.sum(diff)
            distances[i][j] = sumsquares

# Here I removed the unnecessary copy induced by calling np.array
#  -> some improvement
def sum_diff_v0():
    for i in range(n):
        for j in range(n):
            diff = features[i] - features[j]
            diff = np.square(diff)
            sumsquares = np.sum(diff)
            distances[i][j] = sumsquares

# Collapsing of the statements -> no improvement
def sum_diff_v1():
    for i in range(n):
        for j in range(n):
            distances[i][j] = np.sum(np.square(features[i] - features[j]))

# Using broadcasting and vectorized operations -> big improvement
def sum_diff_v2():
    for i in range(n):
        distances[i] = np.sum(np.square(features[i] - features), axis=1)

# Computing only half the distances -> 1/2 computation time
def sum_diff_v3():
    for i in range(n):
        distances[i][i+1:] = np.sum(np.square(features[i] - features[i+1:]), axis=1)
    distances[:] = distances + distances.T

print("original :", timeit.timeit(sum_diff, number=10))
print("v0 :", timeit.timeit(sum_diff_v0, number=10))
print("v1 :", timeit.timeit(sum_diff_v1, number=10))
print("v2 :", timeit.timeit(sum_diff_v2, number=10))
print("v3 :", timeit.timeit(sum_diff_v3, number=10))
</code></pre>
<p><strong>Edit :</strong> For completeness I also timed Camilleri's solution that is <strong>much faster</strong>:</p>
<pre><code>from sklearn.metrics import pairwise

def Camilleri_solution():
    distances = pairwise.pairwise_distances(features)
</code></pre>
<p><strong>Timing results</strong> (in seconds, function run 10 times with 1000*300 input):</p>
<pre><code>original : 138.36921879299916
v0 : 111.39915344800102
v1 : 117.7582511530054
v2 : 23.702392491002684
v3 : 9.712442981006461
Camilleri's : 0.6131987979897531
</code></pre>
<p>So as you can see, we can easily gain an order of magnitude by using the proper numpy syntax. Note that with only 1/20th of the data the function runs in about one second, so I would expect the whole thing to run in the tens of minutes, as the script scales as N^2.</p>
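<p>Going one step further than v3 (a sketch, not part of the timings above): the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b removes the remaining Python loop entirely with a single matrix product. Like the loops, it propagates NaN values, and floating-point round-off can leave tiny negative entries that need clipping:</p>

```python
import numpy as np

def sum_diff_vectorized(features):
    # Row-wise squared norms, then ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.einsum('ij,ij->i', features, features)
    d = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.maximum(d, 0, out=d)  # clip round-off negatives
    return d

features = np.random.rand(100, 30)
# Brute-force reference via broadcasting, for a correctness check.
ref = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
print(np.allclose(sum_diff_vectorized(features), ref))  # -> True
```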
| 1 | 2016-10-04T07:55:34Z | [
"python",
"numpy",
"optimization",
"matrix"
]
|
How to do POST using requests module with Flask server? | 39,845,970 | <p>I am having trouble uploading a file to my Flask server using the Requests module for Python. </p>
<pre><code>import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename

UPLOAD_FOLDER = '/Upload/'

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

@app.route("/", methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        file = request.files['file']
        if file:
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return redirect(url_for('index'))
    return """
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form action="" method=post enctype=multipart/form-data>
    <p><input type=file name=file>
    <input type=submit value=Upload>
    </form>
    <p>%s</p>
    """ % "<br>".join(os.listdir(app.config['UPLOAD_FOLDER'],))

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
</code></pre>
<p>I am able to upload a file via the web page, but I wanted to upload a file with the requests module like this:</p>
<pre><code>import requests
r = requests.post('http://127.0.0.1:5000', files={'random.txt': open('random.txt', 'rb')})
</code></pre>
<p>It keeps returning 400 and saying that "The browser (or proxy) sent a request that this server could not understand"</p>
<p>I feel like I am missing something simple, but I cannot figure it out.</p>
| 0 | 2016-10-04T06:56:06Z | 39,846,239 | <p>You upload the file as the <code>random.txt</code> field:</p>
<pre><code>files={'random.txt': open('random.txt', 'rb')}
# ^^^^^^^^^^^^ this is the field name
</code></pre>
<p>but look for a field named <code>file</code> instead:</p>
<pre><code>file = request.files['file']
# ^^^^^^ the field name
</code></pre>
<p>Make those two match; using <code>file</code> for the <code>files</code> dictionary, for example:</p>
<pre><code>files={'file': open('random.txt', 'rb')}
</code></pre>
<p>Note that <code>requests</code> will automatically detect the filename for that open fileobject and include it in the part headers.</p>
| 0 | 2016-10-04T07:11:08Z | [
"python",
"post",
"flask",
"python-requests"
]
|
How to do POST using requests module with Flask server? | 39,845,970 | <p>I am having trouble uploading a file to my Flask server using the Requests module for Python. </p>
<pre><code>import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename

UPLOAD_FOLDER = '/Upload/'

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

@app.route("/", methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        file = request.files['file']
        if file:
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return redirect(url_for('index'))
    return """
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form action="" method=post enctype=multipart/form-data>
    <p><input type=file name=file>
    <input type=submit value=Upload>
    </form>
    <p>%s</p>
    """ % "<br>".join(os.listdir(app.config['UPLOAD_FOLDER'],))

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
</code></pre>
<p>I am able to upload a file via the web page, but I wanted to upload a file with the requests module like this:</p>
<pre><code>import requests
r = requests.post('http://127.0.0.1:5000', files={'random.txt': open('random.txt', 'rb')})
</code></pre>
<p>It keeps returning 400 and saying that "The browser (or proxy) sent a request that this server could not understand"</p>
<p>I feel like I am missing something simple, but I cannot figure it out.</p>
| 0 | 2016-10-04T06:56:06Z | 39,846,300 | <p>Because you have <code><input></code> with <code>name=file</code> so you need</p>
<pre><code> files={'file': ('random.txt', open('random.txt', 'rb'))}
</code></pre>
<p>Examples in requests doc: <a href="http://docs.python-requests.org/en/master/user/quickstart/#post-a-multipart-encoded-file" rel="nofollow">POST a Multipart-Encoded File</a></p>
| 0 | 2016-10-04T07:14:17Z | [
"python",
"post",
"flask",
"python-requests"
]
|
customizing django admin ChangeForm template / adding custom content | 39,845,982 | <p>I'm able to insert (lame) static text onto the change form admin page, but I'd really like it to use the context of the current object being edited! </p>
<p>For instance, on the MyObject change form I want to format a URL as a link that includes the ID of a ForeignKey-connected object (<code>obj</code>).</p>
<p>My admin objects:</p>
<pre><code>class MyObjectChangeForm(forms.ModelForm):
class Meta:
model = MyObject
fields = ('field1', 'obj',)
class MyObjectAdmin(admin.ModelAdmin):
form = MyObjectChangeForm
list_display = ('field1', 'obj')
def render_change_form(self, request, context, *args, **kwargs):
self.change_form_template = 'admin/my_change_form.html'
extra = {'lame_static_text': "something static",}
context.update(extra)
return super(MyObjectAdmin, self).render_change_form(request,
context, *args, **kwargs)
</code></pre>
<p>My template <code>templates/admin/my_change_form.html</code>:</p>
<pre><code>{% extends "admin/change_form.html" %}
{% block form_top %}
{{ lame_static_text }}
<a href="http://example.com/abc/{{ adminform.data.obj.id }}?"/>View Website</a>
{% endblock %}
</code></pre>
<p>The <code>{{adminform.data.obj.id}}</code> call obviously doesn't work, but I'd like something along those lines. </p>
<p>How do I insert dynamic context from the current object into the admin change form?</p>
| 1 | 2016-10-04T06:56:41Z | 39,848,376 | <p>Add your extra context in <a href="https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.change_view" rel="nofollow">change_view</a></p>
<pre><code>class MyObjectAdmin(admin.ModelAdmin):
# A template for a very customized change view:
change_form_template = 'admin/my_change_form.html'
def get_dynamic_info(self):
# ...
pass
def change_view(self, request, object_id, form_url='', extra_context=None):
extra_context = extra_context or {}
extra_context['osm_data'] = self.get_dynamic_info()
return super(MyObjectAdmin, self).change_view(
request, object_id, form_url, extra_context=extra_context,
)
</code></pre>
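<p>In <code>my_change_form.html</code> you can then use the extra key; for the object itself, the admin change form context also exposes <code>original</code> (the instance being edited). Something along these lines should work — the field names follow the question and are illustrative:</p>

```
{% extends "admin/change_form.html" %}
{% block form_top %}
{{ osm_data }}
<a href="http://example.com/abc/{{ original.obj.id }}">View Website</a>
{% endblock %}
```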
| 3 | 2016-10-04T09:10:35Z | [
"python",
"django",
"django-admin"
]
|
Check for duplicates in a randomly generated list and replace them | 39,846,207 | <p>I am making a minesweeper game with randomly generated bombs. Yet at times I have found that there are duplicates in my list of coordinates of bombs. How do I check for duplicates in a list and replace them with other randomised coordinates.</p>
<pre><code>from random import randint
def create_bombpos():
global BOMBS, NUM_BOMBS, GRID_TILES
for i in range(0, NUM_BOMBS):
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
BOMBS.append((x, y))
print(BOMBS)
</code></pre>
<p>The user can decide how big the board is by input of <code>GRID_TILES</code>.
If they input 5, the board will be 5x5. The amount of bombs is: </p>
<pre><code>GRID_TILES * GRID_TILES / 5
</code></pre>
| 2 | 2016-10-04T07:09:06Z | 39,846,321 | <pre><code>from random import randint
def create_bombpos():
global BOMBS, NUM_BOMBS, GRID_TILES
i = 0
while i<NUM_BOMBS:
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
        if (x, y) not in BOMBS:
            BOMBS.append((x, y))
            i = i + 1
print(BOMBS)
</code></pre>
<p>If the newly generated point is already in the list, then <code>i</code> won't get incremented, and we will find another newly generated point, till it is not present in <code>BOMBS</code>.</p>
<p>Hope it helps!!</p>
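<p>A self-contained variant of the same idea (no globals; the function name and parameters are illustrative), which makes it easy to check that only unique positions are produced:</p>

```python
from random import randint

def create_bombpos(grid_tiles, num_bombs):
    """Draw num_bombs distinct (x, y) positions on a grid_tiles x grid_tiles board."""
    bombs = []
    while len(bombs) < num_bombs:
        pos = (randint(1, grid_tiles), randint(1, grid_tiles))
        if pos not in bombs:      # reject duplicates and draw again
            bombs.append(pos)
    return bombs

bombs = create_bombpos(5, 5)
print(len(bombs) == len(set(bombs)))  # True: no duplicates
```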
| 3 | 2016-10-04T07:15:44Z | [
"python",
"python-3.x",
"random"
]
|
Check for duplicates in a randomly generated list and replace them | 39,846,207 | <p>I am making a minesweeper game with randomly generated bombs. Yet at times I have found that there are duplicates in my list of coordinates of bombs. How do I check for duplicates in a list and replace them with other randomised coordinates.</p>
<pre><code>from random import randint
def create_bombpos():
global BOMBS, NUM_BOMBS, GRID_TILES
for i in range(0, NUM_BOMBS):
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
BOMBS.append((x, y))
print(BOMBS)
</code></pre>
<p>The user can decide how big the board is by input of <code>GRID_TILES</code>.
If they input 5, the board will be 5x5. The amount of bombs is: </p>
<pre><code>GRID_TILES * GRID_TILES / 5
</code></pre>
 | 2 | 2016-10-04T07:09:06Z | 39,846,391 | <p>Searching each time through your whole BOMBS list would cost you <code>O(n)</code> (linear time). Why don't you use a <a href="https://docs.python.org/2/library/sets.html" rel="nofollow">set</a> instead? A set guarantees that you'll end up with distinct (in terms of hashing) elements.</p>
<pre><code>from random import randint
def create_bombpos():
BOMBS = set()
i = 0
while i<NUM_BOMBS:
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
        if (x, y) not in BOMBS:
            BOMBS.add((x, y))
            i = i + 1
print(BOMBS)
</code></pre>
<p>Let me give you an example of a set:</p>
<pre><code>>>> a = set()
>>> a.add((1,2))
>>> a
{(1, 2)}
>>> a.add((1,2))
>>> a.add((1,3))
>>> a.add((1,2))
>>> a
{(1, 2), (1, 3)}
</code></pre>
<p>I can add the same element to a set many times, but only 1 instance will be present.</p>
| 4 | 2016-10-04T07:19:28Z | [
"python",
"python-3.x",
"random"
]
|
Check for duplicates in a randomly generated list and replace them | 39,846,207 | <p>I am making a minesweeper game with randomly generated bombs. Yet at times I have found that there are duplicates in my list of coordinates of bombs. How do I check for duplicates in a list and replace them with other randomised coordinates.</p>
<pre><code>from random import randint
def create_bombpos():
global BOMBS, NUM_BOMBS, GRID_TILES
for i in range(0, NUM_BOMBS):
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
BOMBS.append((x, y))
print(BOMBS)
</code></pre>
<p>The user can decide how big the board is by input of <code>GRID_TILES</code>.
If they input 5, the board will be 5x5. The amount of bombs is: </p>
<pre><code>GRID_TILES * GRID_TILES / 5
</code></pre>
| 2 | 2016-10-04T07:09:06Z | 39,846,480 | <p>Use a python set for this, it will automatically check for duplicates and simply ignore every entry that already is in the list.
I also think the runtime is much better than using a list and checking for duplicates manually.</p>
<p>Link: <a href="https://docs.python.org/2/library/sets.html" rel="nofollow">https://docs.python.org/2/library/sets.html</a></p>
| 0 | 2016-10-04T07:24:41Z | [
"python",
"python-3.x",
"random"
]
|
Check for duplicates in a randomly generated list and replace them | 39,846,207 | <p>I am making a minesweeper game with randomly generated bombs. Yet at times I have found that there are duplicates in my list of coordinates of bombs. How do I check for duplicates in a list and replace them with other randomised coordinates.</p>
<pre><code>from random import randint
def create_bombpos():
global BOMBS, NUM_BOMBS, GRID_TILES
for i in range(0, NUM_BOMBS):
x = randint(1, GRID_TILES)
y = randint(1, GRID_TILES)
BOMBS.append((x, y))
print(BOMBS)
</code></pre>
<p>The user can decide how big the board is by input of <code>GRID_TILES</code>.
If they input 5, the board will be 5x5. The amount of bombs is: </p>
<pre><code>GRID_TILES * GRID_TILES / 5
</code></pre>
| 2 | 2016-10-04T07:09:06Z | 39,846,996 | <p>You could also use random.sample to achieve this:</p>
<pre><code>from random import sample
GRID_TILES = 100
NUM_BOMBS = 5
indexes = sample(range(GRID_TILES * GRID_TILES), NUM_BOMBS)
BOMBS = [(i // GRID_TILES, i % GRID_TILES) for i in indexes]
</code></pre>
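<p>Because <code>sample</code> draws without replacement, the positions are distinct by construction, so no duplicate check is needed. A quick check of that property (adding 1 to each coordinate here to match the question's 1-based grid):</p>

```python
from random import sample

GRID_TILES = 5
NUM_BOMBS = GRID_TILES * GRID_TILES // 5

# sample() picks NUM_BOMBS distinct cell indexes in one call
indexes = sample(range(GRID_TILES * GRID_TILES), NUM_BOMBS)
bombs = [(i // GRID_TILES + 1, i % GRID_TILES + 1) for i in indexes]

print(len(set(bombs)) == NUM_BOMBS)  # True: all positions distinct
```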
| 3 | 2016-10-04T07:53:24Z | [
"python",
"python-3.x",
"random"
]
|
library of react components in webpack bundle export to use in on runtime rendering | 39,846,223 | <p>I am upgrading a legacy web2py (python) application to use react components. I am using webpack to transpile the jsx files to minified js bundle. I want to be able to use:</p>
<pre><code>ReactDOM.render(
<ComponentA arg1="hello" arg2="world" />,
document.getElementById('react-container')
);
</code></pre>
<p>Where ComponentA is included in the bundle and the bundle is included on the web2py view. The issue is that I can't access ComponentA in the view. The following example will work:</p>
<pre><code><script>
var ComponentA = React.createClass({
render: function() {
var p = React.createElement('p', null, 'Passed these props: ',this.props.arg1, ' and ', this.props.arg2);
var div = React.createElement('div', { className: 'my-test' }, p);
return div;
}
});
var component = React.createElement(ComponentA, {arg1:"hello", arg2:"world"})
ReactDOM.render(
component,//I would rather use <ComponentA arg1="hello" arg2="world" />,
document.getElementById('react-sample')
);
</script>
</code></pre>
<p>I looked at <a href="https://github.com/webpack/docs/wiki/shimming-modules#exports-loader" rel="nofollow">exports-loader</a> and <a href="https://github.com/joeyguo/webpack-add-module-exports" rel="nofollow">webpack-add-module-exports</a> but I have not yet gotten it to work. Any help is greatly appreciated.</p>
| 0 | 2016-10-04T07:09:55Z | 39,864,896 | <p>I solved it after I came across <a href="http://stackoverflow.com/a/32786185/3163075">this StackOverflow answer</a></p>
<p>First make sure that your <code>main.jsx</code> file (which would import all the components) also exports them:</p>
<pre><code>import React from 'react';
import ReactDOM from 'react-dom';
import ComponentA from './components/A';
import ComponentB from './components/B';
import style from '../stylesheets/main.scss';
// This is how every tutorial shows you how to get started.
// However we want to use it "on-demand"
/* ReactDOM.render(
<ComponentA arg1="hello" arg2="world" />,
document.getElementById('react-container')
);*/
// ... other stuff here
// Do NOT forget to export the desired components!
export {
ComponentA,
ComponentB
};
</code></pre>
<p>Then make sure you use <code>output.library</code> <a href="http://webpack.github.io/docs/configuration.html#output-library" rel="nofollow">("more" info in the docs)</a> in the <code>webpack.config.js</code> file:</p>
<pre><code>module.exports = {
entry: {
// 'vendor': ['bootstrap', 'analytics.js'],
'main': './src/scripts/main.jsx'
},
output: {
filename: './dist/scripts/[name].js',
library: ['App', 'components']
// This will expose Window.App.components which will
// include your exported components e.g. ComponentA and ComponentB
}
// other stuff in the config
};
</code></pre>
<p>Then in the web2py view (make sure you include the build files e.g. main.js AND the appropriate containers):</p>
<pre><code><!-- Make sure you include the build files e.g. main.js -->
<!-- Some other view stuff -->
<div id="react-component-a"></div>
<div id="react-component-b"></div>
<script>
// Below is how it would be used.
// The web2py view would output a div with the proper id
// and then output a script tag with the render block.
ReactDOM.render(
React.createElement(App.components.ComponentA, {arg1:"hello", arg2:"world"}),
document.getElementById('react-component-a')
);
ReactDOM.render(
React.createElement(App.components.ComponentB, {arg1:"hello", arg2:"world"}),
document.getElementById('react-component-b')
);
</script>
</code></pre>
<p><strong>NOTE:</strong> I decided to use the vanilla react in the view instead of the JSX so no additional transpiling has to be done in the browser.</p>
| 0 | 2016-10-05T03:19:30Z | [
"python",
"reactjs",
"ecmascript-6",
"webpack",
"web2py"
]
|
Binary to hex in Python, low-nibble first encoding | 39,846,315 | <p>I have a protocol that requires converting binary data to and from a hex-encoded string where the low-nibble (lower 4 bits of each byte) come first - so for example the Python string '\xab\xcd' would be encoded as 'badc'. This odd format comes from the PHP/Perl <code>pack</code> and <code>unpack</code> functions with the <code>h*</code> format (see <a href="http://php.net/manual/en/function.pack.php" rel="nofollow">http://php.net/manual/en/function.pack.php</a>). </p>
<p>Obviously plain <code>binascii.hexlify()</code> et al won't achieve this. I've tried using various formats with the <code>struct</code> module as well as <code>array</code> byteswapping <a href="http://stackoverflow.com/questions/13155570/reorder-byte-order-in-hex-string-python">as suggested here</a> but couldn't get it to work. </p>
<p>I ended up solving it with some old-fashioned bit juggling like this: </p>
<pre><code>def to_low_nibble_hex(value):
"""Convert a binary string to hex using low-nibble first encoding
"""
r = []
for c in value:
c = ord(c)
hb = (0x0f & c) << 4
lb = (0xf0 & c) >> 4
nc = chr(hb | lb)
r += [nc]
return binascii.hexlify(''.join(r))
</code></pre>
<p>This seems to work well in my case but I was wondering if anyone can suggest a more formal approach.</p>
| 1 | 2016-10-04T07:15:03Z | 39,848,471 | <p>A more efficient way would be to precalculate a translation table and then simply apply it using <a href="https://docs.python.org/2/library/string.html#string.translate" rel="nofollow"><code>str.translate()</code></a></p>
<pre><code>_translation = bytearray(((0x0f & c) << 4 | (0xf0 & c) >> 4)
for c in range(256))
def to_low_nibble_hex(value):
return binascii.hexlify(value.translate(_translation))
</code></pre>
<p>This also works on python3 without modification, although there you could replace <code>bytearray</code> with <code>bytes</code></p>
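<p>For illustration, here is the same table-based approach written for Python 3 (using <code>bytes</code> for the table, as noted above), checked against the <code>'\xab\xcd'</code> → <code>'badc'</code> example from the question:</p>

```python
import binascii

# 256-entry table that swaps the high and low nibble of each byte
_translation = bytes(((0x0F & c) << 4 | (0xF0 & c) >> 4) for c in range(256))

def to_low_nibble_hex(value):
    """Hex-encode `value` with the low nibble of each byte first (PHP/Perl pack 'h*')."""
    return binascii.hexlify(value.translate(_translation))

print(to_low_nibble_hex(b'\xab\xcd'))  # b'badc'
```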
| 0 | 2016-10-04T09:15:11Z | [
"python",
"encoding",
"hex",
"binary-data",
"packing"
]
|
Automated transfer of results of computation on ec2 instance to local client using Python | 39,846,434 | <p>I'm computing smaller things on a AWS EC2 gpu-instance. Currently I have to find the right parameters, so there are shorter computations I have to analyze and then adjust some parameter to compute again.</p>
<p>For the analysis, I depend on things, which aren't available on my EC2 instance.
So I'm looking for a convenient way to transfer small amounts of data (not bigger than 2MB) from an EC2 instance directly onto my local computer.
I'm looking for a way that interferes with my workflow as little as possible, so that I can concentrate on analyzing the data and updating the parameters.
Currently I use the terminal and do it manually via SCP, but there has to be a better way.</p>
| 0 | 2016-10-04T07:21:51Z | 39,846,508 | <p>If you have <code>scp</code> access to your systems then the <code>paramiko</code> library can perform the transfers for you, as detailed in <a href="http://stackoverflow.com/questions/250283/how-to-scp-in-python">this answer</a>. There's an <a href="https://pypi.python.org/pypi/scp" rel="nofollow"><code>scp</code> library</a> that works well with it.</p>
<p>For fuller automation over <code>ssh</code> links consider <a href="http://www.fabfile.org/" rel="nofollow"><code>fabric</code></a>, built to handle single- and multi-system automation tasks.</p>
| 1 | 2016-10-04T07:26:28Z | [
"python",
"amazon-web-services",
"amazon-ec2",
"file-transfer"
]
|
Automated transfer of results of computation on ec2 instance to local client using Python | 39,846,434 | <p>I'm computing smaller things on a AWS EC2 gpu-instance. Currently I have to find the right parameters, so there are shorter computations I have to analyze and then adjust some parameter to compute again.</p>
<p>For the analysis, I depend on things, which aren't available on my EC2 instance.
So I'm looking for a convenient way to transfer small amounts of data (not bigger than 2MB) from an EC2 instance directly onto my local computer.
I'm looking for a way that interferes with my workflow as little as possible, so that I can concentrate on analyzing the data and updating the parameters.
Currently I use the terminal and do it manually via SCP, but there has to be a better way.</p>
| 0 | 2016-10-04T07:21:51Z | 39,846,575 | <p>You can use tar for your files.</p>
<pre><code>tar cvzf - -T list_of_filenames | ssh -i ec2key.pem ec2-user@hostname tar xzf -
</code></pre>
<p>You can also use recursive <code>scp</code>.</p>
| 0 | 2016-10-04T07:29:44Z | [
"python",
"amazon-web-services",
"amazon-ec2",
"file-transfer"
]
|
seconds since midnight to datetime (python) | 39,846,488 | <p>I have a timezone aware datetime date object:</p>
<pre><code>Timestamp('2004-03-29 00:00:00-0456', tz='America/New_York')
</code></pre>
<p>and a number of mili seconds since midnight (midnight in the local timezone):</p>
<p>34188542</p>
<p>How to combine them to get a valid datetime?</p>
 | 0 | 2016-10-04T07:25:15Z | 39,846,623 | <p>Create a <a href="https://docs.python.org/2/library/datetime.html#timedelta-objects" rel="nofollow"><code>timedelta</code></a> object and add it to your time like this:</p>
<pre><code>td = datetime.timedelta(milliseconds=34188542)
date_object = datetime.datetime.now() + td # change to your datetime object, I just use `now()`
</code></pre>
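<p>Applied to the numbers from the question (using a naive midnight here for illustration; the same arithmetic works unchanged on the timezone-aware Timestamp):</p>

```python
import datetime

midnight = datetime.datetime(2004, 3, 29)           # midnight, naive for illustration
offset = datetime.timedelta(milliseconds=34188542)  # milliseconds since midnight
moment = midnight + offset

print(moment)  # 2004-03-29 09:29:48.542000
```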
| 4 | 2016-10-04T07:32:20Z | [
"python",
"datetime"
]
|
seconds since midnight to datetime (python) | 39,846,488 | <p>I have a timezone aware datetime date object:</p>
<pre><code>Timestamp('2004-03-29 00:00:00-0456', tz='America/New_York')
</code></pre>
<p>and a number of mili seconds since midnight (midnight in the local timezone):</p>
<p>34188542</p>
<p>How to combine them to get a valid datetime?</p>
| 0 | 2016-10-04T07:25:15Z | 39,846,655 | <p>Assuming the datetime object is <code>ts</code>, and by "combine them", you mean "add them":</p>
<pre><code>ms = 34188545
new_datetime = ts + datetime.timedelta(milliseconds = ms)
</code></pre>
| 1 | 2016-10-04T07:34:00Z | [
"python",
"datetime"
]
|
what is foo.__add__ in python means? | 39,846,532 | <p>I usually see that when we create a class in Python we use <code>__init__(self)</code>; I just came across this code:</p>
<pre><code>>>> foo = 10
>>> print(foo.__add__)
<method-wrapper '__add__' of int object at 0x8502c0>
</code></pre>
<p>so i have two doubts:</p>
<p>Can someone explain in depth what it means when someone writes something as <code>__something__</code> (I know it looks private, but what does it do when we write something like <code>hello.__something__</code>)? Is it an instance?
My second doubt is why we use self in <code>__init__(self)</code>.</p>
<p>P.S: if you are going to redirect me at any other question which explain what is <code>__something__</code> then still i request you please answer what <code>foo.__add__</code> means ?</p>
| 0 | 2016-10-04T07:27:43Z | 39,847,355 | <p>When Python tries to evaluate <code>x + y</code>, it first attempts to call <code>x.__add__(y)</code>. If this fails then it falls back to <code>y.__radd__(x)</code>. In short, <code>__add__()</code> defines the behavior of <code>+</code> operator.</p>
<p>For example, <code>1 + 2</code> results in 3, whereas <code>[3] + [4]</code> results in <code>[3, 4]</code>. Below is the sample example:</p>
<pre><code>class Test(object):
def __init__(self, p):
self.p = p
def __add__(self, value):
return '{} - {}'.format(self.p, value)
test = Test('hello')
test + 'world'
# returns: 'hello - world'
</code></pre>
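<p>A small sketch of the fallback behaviour mentioned above: when the left operand's <code>__add__</code> cannot handle the right operand (it returns <code>NotImplemented</code>), Python tries the right operand's <code>__radd__</code>:</p>

```python
class Radd(object):
    def __radd__(self, other):
        # called when the LEFT operand doesn't know how to add us
        return '{} + Radd'.format(other)

# int.__add__(Radd()) returns NotImplemented, so Radd.__radd__(1) runs
print(1 + Radd())  # 1 + Radd
```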
| 1 | 2016-10-04T08:13:51Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Working with GUIs and text files - how do I 'synchronize' them? | 39,846,573 | <p>I am working on a larger programming project, and since I am a beginner it is not that complex. I will try to keep it straight forward: I want to create a GUI program that reads from a text file with "elements of notes", i.e. as a calendar.</p>
<p>The interface should have "text entry box", where you can submit new notes, and there should also be two buttons that upon pressing them "scrolls" up and down among the existing notes, i.d. <em>turning page</em>. A single note should be displayed at all times in a text box under the two buttons.</p>
<p>So, to wrap this up, my question is: How is the best way to "load" the text file's notes to the program, so that I can make the buttons scroll between them? Should I read the text file into a list that I give my <code>Application(Frame)</code> object?</p>
<p>Here is some of my code so far:</p>
<pre><code> from tkinter import *
class Application(Frame):
""" GUI application that creates diary. """
def __init__(self, master):
""" Initialize Frame. """
Frame.__init__(self, master)
self.grid()
self.create_widgets()
def create_widgets(self):
""" Create widgets to get info of choices and display notes. """
# create a label and text entry for new note
Label(self,
text = "Enter new note:"
).grid(row = 1, column = 0, sticky = W)
self.note_ent = Entry(self)
self.note_ent.grid(row = 1, column = 1, sticky = W)
# create a submit button for the new note
Button(self,
text = "Submit",
# command = self.add_note to a list within app obeject?
               ).grid(row = 2, column = 0, sticky = W)
# create a 'next note' button
Button(self,
text = "Next",
# command = self.next_note which goes to a list?
).grid(row = 6, column = 0, sticky = W)
# create a 'past note' button
Button(self,
text = "Back",
# command = self_past_note, or should I reuse next_note?
).grid(row = 6, column = 0, sticky = W)
# create a textbox (I am not sure?)
self.show_ent = Text(self, width = 75, height = 10, wrap = WORD)
self.show_ent.grid(row = 7, column = 0, columnspan = 4)
# main
text_file = open("diary.txt", "r")
note_list = text_file.readlines()
text_file.close()
# No idea where to put the note_list, which 'client' should receive it?
root = Tk()
root.title("Diary")
app = Application(root)
root.mainloop()
</code></pre>
<p>So now that you have examined my code, how to fill in the missing pieces?</p>
<p><strong>Edit:</strong> I added the text_file and note_list under the # main.</p>
<p><strong>Note:</strong> I have used <em>calendar</em> and <em>diary</em> interchangeably, but the program is more of a calendar. </p>
 | 0 | 2016-10-04T07:29:38Z | 39,847,576 | <p>You can do this inside the <code>Application</code> class:</p>
<pre><code>from Tkinter import *
class Application(Frame):
""" GUI application that creates diary. """
def __init__(self):
""" Initialize Frame. """
Frame.__init__(self)
self.note_list = []
with open("diary.txt", "r") as notes:
            self.note_list = notes.readlines()
self.grid()
self.create_widgets()
def create_widgets(self):
""" Create widgets to get info of choices and display notes. """
# create a label and text entry for new note
Label(self,
text = "Enter new note:"
).grid(row = 1, column = 0, sticky = W)
self.note_ent = Entry(self)
self.note_ent.grid(row = 1, column = 1, sticky = W)
# create a submit button for the new note
Button(self,
text = "Submit",
# command = self.add_note to a list within app obeject?
).grid(row = 2, column = 0, sticky = W)
# create a 'next note' button
Button(self,
text = "Next",
# command = self.next_note which goes to a list?
).grid(row = 6, column = 0, sticky = W)
# create a 'past note' button
Button(self,
text = "Back",
# command = self_past_note, or should I reuse next_note?
).grid(row = 6, column = 0, sticky = W)
# create a textbox (I am not sure?)
self.show_ent = Text(self, width = 75, height = 10, wrap = WORD)
self.show_ent.grid(row = 7, column = 0, columnspan = 4)
app = Application()
app.master.title("Diary")
app.mainloop()
</code></pre>
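<p>For the Next/Back buttons, the navigation itself is just index arithmetic over <code>self.note_list</code>. A GUI-free sketch of that idea (class and method names are illustrative; in the app, the button commands would call <code>next</code>/<code>back</code> and write the result into the text box):</p>

```python
class NoteCursor(object):
    """Keeps a position inside a list of notes; used by Next/Back handlers."""
    def __init__(self, notes):
        self.notes = notes
        self.index = 0

    def current(self):
        return self.notes[self.index] if self.notes else ''

    def next(self):
        # stop at the last note instead of wrapping around
        if self.index < len(self.notes) - 1:
            self.index += 1
        return self.current()

    def back(self):
        # stop at the first note
        if self.index > 0:
            self.index -= 1
        return self.current()

cursor = NoteCursor(['first note', 'second note', 'third note'])
print(cursor.next())  # second note
print(cursor.back())  # first note
print(cursor.back())  # first note (already at the start)
```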
| 0 | 2016-10-04T08:28:39Z | [
"python",
"oop",
"user-interface",
"tkinter",
"text-files"
]
|
Python non-numpy matrixes issue | 39,846,646 | <p>I have the following issue:</p>
<p>I have a list of lists with the following declaration:</p>
<pre><code>As = [[0]*3]*3
</code></pre>
<p>I then try to change the values of this "matrix" with this:</p>
<pre><code>for i in range(3):
for j in range(3):
As[i][j] = calculate(A, i, j)*((-1)**(i+j))
</code></pre>
<p>As you may have guessed, this is used in calculating the inverse of a 3x3 matrix.</p>
<p>The function calculate returns the following values:</p>
<pre><code>4.0
-2.0
-3.0
-4.0
-10.0
9.0
4.0
10.0
-21.0
</code></pre>
<p>However, As has the following value:</p>
<p><code>[[4.0, -10.0, -21.0], [4.0, -10.0, -21.0], [4.0, -10.0, -21.0]]</code>, which is unexpected.</p>
<p>What am I missing?</p>
| 1 | 2016-10-04T07:33:41Z | 39,846,702 | <p>When you build a list like this <code>[[0]*3]*3</code> you are creating 3 references to the same list, use a list comprehension instead:</p>
<pre><code>[[0 for _ in xrange(3)] for _ in xrange(3)]
</code></pre>
<p>See how changing one cell of the star-multiplied list changes every row, while with the comprehension only <code>[0][0]</code> is modified:</p>
<pre><code>>>> l1 = [[0]*3]*3
>>> l1[0][0] = 10
>>> l1
[[10, 0, 0], [10, 0, 0], [10, 0, 0]]
>>> l2 = [[0 for _ in xrange(3)] for _ in xrange(3)]
>>> l2[0][0] = 10
>>> l2
[[10, 0, 0], [0, 0, 0], [0, 0, 0]]
</code></pre>
| 2 | 2016-10-04T07:36:36Z | [
"python",
"list",
"matrix"
]
|
Python non-numpy matrixes issue | 39,846,646 | <p>I have the following issue:</p>
<p>I have a list of lists with the following declaration:</p>
<pre><code>As = [[0]*3]*3
</code></pre>
<p>I then try to change the values of this "matrix" with this:</p>
<pre><code>for i in range(3):
for j in range(3):
As[i][j] = calculate(A, i, j)*((-1)**(i+j))
</code></pre>
<p>As you may have guessed, this is used in calculating the inverse of a 3x3 matrix.</p>
<p>The function calculate returns the following values:</p>
<pre><code>4.0
-2.0
-3.0
-4.0
-10.0
9.0
4.0
10.0
-21.0
</code></pre>
<p>However, As has the following value:</p>
<p><code>[[4.0, -10.0, -21.0], [4.0, -10.0, -21.0], [4.0, -10.0, -21.0]]</code>, which is unexpected.</p>
<p>What am I missing?</p>
| 1 | 2016-10-04T07:33:41Z | 39,846,813 | <p>The problem is in your first line:</p>
<pre><code>As = [[0]*3]*3
</code></pre>
<p>This is equivalent to the following:</p>
<pre><code>a = [0] * 3
As = [a] * 3
</code></pre>
<p><code>As</code> therefore contains three references to the same list:</p>
<pre><code>>>> for a in As:
... print(id(a))
...
4293437996
4293437996
4293437996
</code></pre>
<p>So when you change this list, it's reflected in all of the rows of your matrix.</p>
<p>To get around this, you can use a list comprehension to construct the outer list:</p>
<pre><code>As = [[0] * 3 for _ in xrange(3)]
</code></pre>
<p>Note that the inner list <code>[0] * 3</code> is fine, since the list contains only an integer, which is immutable. It therefore doesn't matter that the list refers to the <em>same instance</em> of that integer, since they can't be changed.</p>
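<p>A quick way to confirm the fix (written for Python 3, where <code>xrange</code> is spelled <code>range</code>):</p>

```python
As = [[0] * 3 for _ in range(3)]
As[0][0] = 1

print(As)  # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
# every row is now a distinct list object
print(len({id(row) for row in As}))  # 3
```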
| 1 | 2016-10-04T07:43:23Z | [
"python",
"list",
"matrix"
]
|
Trying to concatenate string with integer but unwanted values is getting added | 39,846,653 | <p>I have a Dataframe which has 2 columns, I am trying to create a new column <code>column3</code> with a logic of concatenating values of <code>column1</code> (String) and <code>column2</code> (int) with a separator ('_'). </p>
<p>Below are the few initial values of the dataframe:</p>
<pre><code> column1 column2
0 Andy 1
1 Ashok 4
2 Collins 7
</code></pre>
<p>Below are my few attempts :</p>
<pre><code>df['column3'] = df['column1'].apply(lambda x: x + '_' + str(df['column2']))
df['column3'] = df['column1'] + '_' + str(df['column2'])
df['column3'] = pd.Series(df['column1']).str.cat(str(df['column2']), sep='_')
</code></pre>
<p>Below is the result:</p>
<pre><code>0 Andy_0 2\n1 2\n2 1\n3 ...
1 Ashok_0 2\n1 2\n2 1\n3 ...
2 Collins_0 2\n1 2\n2 1\n3 ...
</code></pre>
<p>But (<code>2\n1 2\n2 1\n3 ...</code>) is getting added to the result column3 value and only one value zero(0) is getting appended to the result column3.</p>
<p>Please let me know where the things are getting wrong ?</p>
 | 0 | 2016-10-04T07:33:55Z | 39,846,723 | <p>You don't need to make it so complicated; a DataFrame supports such an operation directly:</p>
<pre><code>df.column1 + "_" + df.column2.astype("str")
</code></pre>
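<p>A quick check with the sample frame from the question (assuming pandas is available):</p>

```python
import pandas as pd

df = pd.DataFrame({'column1': ['Andy', 'Ashok', 'Collins'],
                   'column2': [1, 4, 7]})

# element-wise string concatenation; astype(str) converts each int separately,
# unlike str(df['column2']) which stringifies the whole Series at once
df['column3'] = df.column1 + '_' + df.column2.astype(str)

print(df.column3.tolist())  # ['Andy_1', 'Ashok_4', 'Collins_7']
```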
| 1 | 2016-10-04T07:37:54Z | [
"python",
"dataframe"
]
|
Trying to concatenate string with integer but unwanted values is getting added | 39,846,653 | <p>I have a Dataframe which has 2 columns, I am trying to create a new column <code>column3</code> with a logic of concatenating values of <code>column1</code> (String) and <code>column2</code> (int) with a separator ('_'). </p>
<p>Below are the few initial values of the dataframe:</p>
<pre><code> column1 column2
0 Andy 1
1 Ashok 4
2 Collins 7
</code></pre>
<p>Below are my few attempts :</p>
<pre><code>df['column3'] = df['column1'].apply(lambda x: x + '_' + str(df['column2']))
df['column3'] = df['column1'] + '_' + str(df['column2'])
df['column3'] = pd.Series(df['column1']).str.cat(str(df['column2']), sep='_')
</code></pre>
<p>Below is the result:</p>
<pre><code>0 Andy_0 2\n1 2\n2 1\n3 ...
1 Ashok_0 2\n1 2\n2 1\n3 ...
2 Collins_0 2\n1 2\n2 1\n3 ...
</code></pre>
<p>But (<code>2\n1 2\n2 1\n3 ...</code>) is getting added to the result column3 value and only one value zero(0) is getting appended to the result column3.</p>
<p>Please let me know where the things are getting wrong ?</p>
| 0 | 2016-10-04T07:33:55Z | 39,846,886 | <p>You want this:</p>
<pre><code>def concat_cols(row):
return "{}_{}".format(row['column1'], row['column2'])
df['column3'] = df.apply(concat_cols, axis = 1)
</code></pre>
<p>The key aspect is <code>axis = 1</code>, which looks at the dataframe row-wise, rather than column-wise. In your code above, <code>df['column2']</code> in the lambda function was referring to the entire column, rather than just the row's value.</p>
| 0 | 2016-10-04T07:47:42Z | [
"python",
"dataframe"
]
|
What exactly does iter_content() function do? | 39,846,671 | <p>just finished learning web scraping from automate the boring stuff book, but i'm still really confused about iter_content function? what does it actually do?</p>
<p>try to download web page using normal: </p>
<pre><code>note = open('download.txt', 'w')
note.write(request)
note.close()
</code></pre>
<p>but the result is different from using:</p>
<pre><code>note = open('download.txt', 'wb')
for chunk in request.iter_content(100000):
note.write(chunk)
note.close()
</code></pre>
<p>???</p>
| -2 | 2016-10-04T07:34:35Z | 39,846,737 | <p><strong><em>iter_content</em></strong>(<em>chunk_size=1, decode_unicode=False</em>)</p>
<p>Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place.</p>
<p>chunk_size must be of type int or None. A value of None will function differently depending on the value of stream. stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk.</p>
<p>If decode_unicode is True, content will be decoded using the best available encoding based on the response.</p>
<p>link - <a href="http://docs.python-requests.org/en/master/api/" rel="nofollow">http://docs.python-requests.org/en/master/api/</a></p>
<p><em>here in this case it will iterate over the response with 100000 bytes each time</em></p>
| 1 | 2016-10-04T07:38:54Z | [
"python",
"python-3.x"
]
|
Google Foobar Challenge 3 - Find the Access Codes | 39,846,735 | <h1>Find the Access Codes</h1>
<p>Write a function answer(l) that takes a list of positive integers l and counts the number of "lucky triples" of (lst[i], lst[j], lst[k]) where i < j < k. The length of l is between 2 and 2000 inclusive. The elements of l are between 1 and 999999 inclusive. The answer fits within a signed 32-bit integer. Some of the lists are purposely generated without any access codes to throw off spies, so if no triples are found, return 0. </p>
<p>For example, [1, 2, 3, 4, 5, 6] has the triples: [1, 2, 4], [1, 2, 6], [1, 3, 6], making the answer 3 total.</p>
<h1>Test cases</h1>
<p>Inputs:
(int list) l = [1, 1, 1]
Output:
(int) 1</p>
<p>Inputs:
(int list) l = [1, 2, 3, 4, 5, 6]
Output:
(int) 3</p>
<h1>My Attempt</h1>
<pre><code>from itertools import combinations
def answer(l):
if len(l) < 3:
return 0
found = 0
for val in combinations(l,3):
# Ordering Check
if (val[0] <= val[1] <= val[2]) != True:
continue
# Answer Size Check against size of signed integer 32 bit
if int(val[0].__str__() + val[1].__str__() + val[2].__str__()) > 2147483647:
continue
# Division Check
if (val[1] % val[1] != 0) or (val[2] % val[1] != 0):
continue
# Increment 'found' variable by one
found += 1
return found
</code></pre>
| 3 | 2016-10-04T07:38:44Z | 39,846,872 | <p>Thing is: you let that library method <strong>combinations</strong> do all the "real" work for you. </p>
<p>And of course: normally that is exactly the way to go. You do <strong>not</strong> want to re-invent the wheel when there is an existing library function that gives you what you need. Your current code is pretty concise, and good to read (except maybe that you should give your list a more descriptive name than "l").</p>
<p>But this case is different: obviously, most of the execution time for this program will happen in that call. And it seems that Google thinks whatever this call is doing can be done <strong>faster</strong>. </p>
<p>So, the answer for you is: you actually want to re-invent the wheel, by rewriting your code in a way that is <strong>better</strong> than what it is doing right now! A first starting point might be to check out the source code of <strong>combinations</strong> to understand if/how that call is doing things that you do not need in your context.</p>
<p><em>Guessing</em>: that call creates <strong>a lot</strong> of combinations that can never be lucky triples. All of that is <strong>wasted</strong> time. You want to step back and consider how to build those lucky triples from your input <strong>without</strong> creating a ton of not-so-lucky triples!</p>
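Following that suggestion, here is a sketch (my own illustration in Python, not the asker's or answerer's code) of an O(n^2) rewrite that never materializes the unlucky triples — for each middle element, count its divisors on the left and its multiples on the right, then multiply:

```python
def answer(l):
    # A lucky triple (l[i], l[j], l[k]) is fixed by its middle element:
    # l[i] must divide l[j] (with i < j) and l[j] must divide l[k] (with j < k).
    n = len(l)
    left = [0] * n   # left[j]: count of i < j with l[j] % l[i] == 0
    right = [0] * n  # right[j]: count of k > j with l[k] % l[j] == 0
    for j in range(n):
        for i in range(j):
            if l[j] % l[i] == 0:
                left[j] += 1
        for k in range(j + 1, n):
            if l[k] % l[j] == 0:
                right[j] += 1
    # Each j sits in the middle of left[j] * right[j] lucky triples.
    return sum(left[j] * right[j] for j in range(n))

print(answer([1, 2, 3, 4, 5, 6]))  # 3
print(answer([1, 1, 1]))           # 1
```

The four checks in the original loop collapse into two modulo tests; no string-size check is needed, since the 32-bit limit in the prompt refers only to the returned count.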
| 1 | 2016-10-04T07:46:49Z | [
"java",
"python",
"combinations"
]
|
Google Foobar Challenge 3 - Find the Access Codes | 39,846,735 | <h1>Find the Access Codes</h1>
<p>Write a function answer(l) that takes a list of positive integers l and counts the number of "lucky triples" of (lst[i], lst[j], lst[k]) where i < j < k and lst[i] divides lst[j] and lst[j] divides lst[k]. The length of l is between 2 and 2000 inclusive. The elements of l are between 1 and 999999 inclusive. The answer fits within a signed 32-bit integer. Some of the lists are purposely generated without any access codes to throw off spies, so if no triples are found, return 0. </p>
<p>For example, [1, 2, 3, 4, 5, 6] has the triples: [1, 2, 4], [1, 2, 6], [1, 3, 6], making the answer 3 total.</p>
<h1>Test cases</h1>
<p>Inputs:
(int list) l = [1, 1, 1]
Output:
(int) 1</p>
<p>Inputs:
(int list) l = [1, 2, 3, 4, 5, 6]
Output:
(int) 3</p>
<h1>My Attempt</h1>
<pre><code>from itertools import combinations
def answer(l):
if len(l) < 3:
return 0
found = 0
for val in combinations(l,3):
# Ordering Check
if (val[0] <= val[1] <= val[2]) != True:
continue
# Answer Size Check against size of signed integer 32 bit
if int(val[0].__str__() + val[1].__str__() + val[2].__str__()) > 2147483647:
continue
# Division Check
        if (val[1] % val[0] != 0) or (val[2] % val[1] != 0):
continue
# Increment 'found' variable by one
found += 1
return found
</code></pre>
| 3 | 2016-10-04T07:38:44Z | 39,847,350 | <p>Here is a solution off the top of my head that has O(n^2) time and O(n) space complexity. I think there is a better solution (probably using dynamic programming), but this one beats generating all combinations.</p>
<pre><code>public static int foobar( int[] arr)
{
int noOfCombinations = 0;
int[] noOfDoubles = new int[arr.length];
// Count lucky doubles for each item in the array, except the first and last items
for( int i = 1; i < arr.length-1; ++i)
{
for( int j = 0; j < i; ++j)
{
if( arr[i] % arr[j] == 0)
++noOfDoubles[i];
}
}
// Count lucky triples
for( int i = 2; i < arr.length; i++)
{
for( int j = 1; j < i; ++j)
{
if( arr[i] % arr[j] == 0)
noOfCombinations += noOfDoubles[j];
}
}
return noOfCombinations;
}
</code></pre>
| 1 | 2016-10-04T08:13:37Z | [
"java",
"python",
"combinations"
]
|
Google Foobar Challenge 3 - Find the Access Codes | 39,846,735 | <h1>Find the Access Codes</h1>
<p>Write a function answer(l) that takes a list of positive integers l and counts the number of "lucky triples" of (lst[i], lst[j], lst[k]) where i < j < k and lst[i] divides lst[j] and lst[j] divides lst[k]. The length of l is between 2 and 2000 inclusive. The elements of l are between 1 and 999999 inclusive. The answer fits within a signed 32-bit integer. Some of the lists are purposely generated without any access codes to throw off spies, so if no triples are found, return 0. </p>
<p>For example, [1, 2, 3, 4, 5, 6] has the triples: [1, 2, 4], [1, 2, 6], [1, 3, 6], making the answer 3 total.</p>
<h1>Test cases</h1>
<p>Inputs:
(int list) l = [1, 1, 1]
Output:
(int) 1</p>
<p>Inputs:
(int list) l = [1, 2, 3, 4, 5, 6]
Output:
(int) 3</p>
<h1>My Attempt</h1>
<pre><code>from itertools import combinations
def answer(l):
if len(l) < 3:
return 0
found = 0
for val in combinations(l,3):
# Ordering Check
if (val[0] <= val[1] <= val[2]) != True:
continue
# Answer Size Check against size of signed integer 32 bit
if int(val[0].__str__() + val[1].__str__() + val[2].__str__()) > 2147483647:
continue
# Division Check
        if (val[1] % val[0] != 0) or (val[2] % val[1] != 0):
continue
# Increment 'found' variable by one
found += 1
return found
</code></pre>
| 3 | 2016-10-04T07:38:44Z | 39,929,490 | <p>I tried implementing this in python. It isn't quite fast enough to pass the test, but it runs 50x faster than uoyilmaz's solution ported to python. The code for that is below:</p>
<pre><code>#!/usr/bin/env python2.7
from bisect import insort_left
from itertools import combinations
def answer_1(l):
"""My own solution."""
indices = {}
setdefault_ = indices.setdefault
for i, x in enumerate(l):
setdefault_(x, []).append(i)
out = 0
highest_value = max(l)
for i, x in enumerate(l):
multiples = []
for m in xrange(1, int(highest_value / x) + 1):
if x * m in indices:
for j in indices[x * m]:
if i < j:
insort_left(multiples, (j, x * m))
if multiples:
multiples = [m[1] for m in multiples]
for pair in combinations(multiples, 2):
out += pair[1] % pair[0] == 0
return out
def answer_2(l):
"""@uoyilmaz's solution ported from Java."""
out = 0
pair_counts = [0] * len(l)
for i in xrange(1, len(l) - 1):
for j in xrange(i):
if l[i] % l[j] == 0:
pair_counts[i] += 1
for i in xrange(2, len(l)):
for j in xrange(1, i):
if l[i] % l[j] == 0:
out += pair_counts[j]
return out
answer = answer_1
# -----------------------------------------------------------------------------
_SEED = 1.23
def benchmark(sample_count):
from random import seed, randint
import timeit
clock = timeit.default_timer
seed(_SEED)
samples = [[randint(1, 999999) for _ in xrange(randint(2, 2000))]
for _ in xrange(sample_count)]
start = clock()
for sample in samples:
answer(sample)
end = clock()
print("%.4f s elapsed for %d samples." % (end - start, sample_count))
def test():
# Provided test cases.
assert(answer([1, 1, 1]) == 1)
assert(answer([1, 2, 3, 4, 5, 6]) == 3)
# Custom test cases.
assert(answer([1]) == 0)
assert(answer([1, 2]) == 0)
assert(answer([2, 4]) == 0)
assert(answer([1, 1, 1, 1]) == 4)
assert(answer([1, 1, 1, 1, 1]) == 10)
assert(answer([1, 1, 1, 1, 1, 1]) == 20)
assert(answer([1, 1, 1, 1, 1, 1, 1]) == 35)
assert(answer([1, 1, 2]) == 1)
assert(answer([1, 1, 2, 2]) == 4)
assert(answer([1, 1, 2, 2, 2]) == 10)
assert(answer([1, 1, 2, 2, 2, 3]) == 11)
assert(answer([1, 2, 4, 8, 16]) == 10)
assert(answer([2, 4, 5, 9, 12, 34, 45]) == 1)
assert(answer([2, 2, 2, 2, 4, 4, 5, 6, 8, 8, 8]) == 90)
assert(answer([2, 4, 8]) == 1)
assert(answer([2, 4, 8, 16]) == 4)
assert(answer([3, 4, 2, 7]) == 0)
assert(answer([6, 5, 4, 3, 2, 1]) == 0)
assert(answer([4, 7, 14]) == 0)
assert(answer([4, 21, 7, 14, 8, 56, 56, 42]) == 9)
assert(answer([4, 21, 7, 14, 56, 8, 56, 4, 42]) == 7)
assert(answer([4, 7, 14, 8, 21, 56, 42]) == 4)
assert(answer([4, 8, 4, 16]) == 2)
def main():
test()
benchmark(100)
if __name__ == '__main__':
main()
</code></pre>
<p>Now if anyone has an idea on how to speed this up further, I'm open for suggestions.</p>
| 0 | 2016-10-08T06:38:45Z | [
"java",
"python",
"combinations"
]
|
seaborn violinplot out of valid range | 39,846,760 | <p>I am using seaborn to create violin plots. Right now I am creating violin plots out of proportion values (so all values are between 0 and 1), but the resulting violin plot is quite off. Its bottom ranges into negative values and its top ranges into values greater than 1. Below is an example which I ran to test it:</p>
<pre><code>import seaborn as sns
import numpy as np
y = np.asarray([.1725,.1825,.163,.1625,.93,.943,.893,.93,.11225,.93,.812,.832,.9425,.953,.8525,.993,.963,.1425,.113,.752])
x = np.asarray([1]*len(y))
sns.violinplot(x=x,y=y)
sns.plt.show()
</code></pre>
<p>Clearly none of the values are outside of the range [0,1], yet the violin plot looks all screwy:</p>
<p><a href="http://i.stack.imgur.com/ZNGWu.png" rel="nofollow">Violin plot that is out of range</a></p>
<p>Help would be greatly appreciated!</p>
| 0 | 2016-10-04T07:40:11Z | 39,858,667 | <p>The link provided by mbatchkarov answers my question. Thanks!</p>
| 0 | 2016-10-04T17:46:16Z | [
"python",
"plot",
"seaborn",
"violin-plot"
]
|
How to parse text from html file | 39,846,780 | <pre><code>import urllib2
import nltk
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup
l = """<TR><TD><small style=font-family:courier> >M. tuberculosis H37Rv|Rv3676|crp<br />VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRR<br />APDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEIS<br />EQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEI<br />AQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR<br /></small><TR><td><b><big>Blastp: <a href="http://tuberculist.epfl.ch/blast_output/Rv3676.fasta.out"> Pre-computed results</a></big></b><TR><td><b><big>TransMembrane prediction using Hidden Markov Models: <a href="http://tuberculist.epfl.ch/tmhmm/Rv3676.html"> TMHMM</a></big></b><base target="_blank"/><TR><td><b><big>Genomic sequence</big></b><br /><br /><form action="dnaseq.php" method="get">"""
print l
</code></pre>
<p>I have one HTML line and want to extract the text which is embedded into the HTML tags. I have tried with all the available methods, but they are not working in my case. </p>
<p>How can I do it?</p>
<p>Expected output should be:</p>
<p>H37Rv|Rv3676|crp
VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRRAPDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEISEQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEIAQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR</p>
| 1 | 2016-10-04T07:41:31Z | 39,846,897 | <p>I notice you import BeautifulSoup, so you can use BeautifulSoup to help you extract these information.</p>
<pre><code>soup = BeautifulSoup(l,"html.parser")
print soup.get_text()
</code></pre>
<p>I've tried and it worked, but the sentence in the last tag will also be extracted, you have to cut the result if needed.</p>
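If BeautifulSoup is not an option, the standard library can do a scoped extraction as well. Below is a minimal sketch (the class name and the shortened sample string are mine) that keeps only the text inside <code><small></code>, which avoids picking up the trailing link text; the Python 2 spelling of the import is <code>from HTMLParser import HTMLParser</code>, as in the question:

```python
from html.parser import HTMLParser  # Python 3; Python 2: from HTMLParser import HTMLParser

class SmallTextExtractor(HTMLParser):
    """Collects only the text that appears inside <small>...</small>."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_small = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == 'small':
            self.in_small = True

    def handle_endtag(self, tag):
        if tag == 'small':
            self.in_small = False

    def handle_data(self, data):
        if self.in_small:
            self.chunks.append(data)

parser = SmallTextExtractor()
parser.feed('<TR><TD><small style=font-family:courier> >M. tuberculosis '
            'H37Rv|Rv3676|crp<br />VDEILA<br /></small><b><big>Blastp</big></b>')
text = ''.join(parser.chunks)
print(text)
```

The `<br />` tags simply never trigger `handle_data`, so the sequence fragments are concatenated without the line breaks.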
| 1 | 2016-10-04T07:48:21Z | [
"python"
]
|
How to parse text from html file | 39,846,780 | <pre><code>import urllib2
import nltk
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup
l = """<TR><TD><small style=font-family:courier> >M. tuberculosis H37Rv|Rv3676|crp<br />VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRR<br />APDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEIS<br />EQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEI<br />AQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR<br /></small><TR><td><b><big>Blastp: <a href="http://tuberculist.epfl.ch/blast_output/Rv3676.fasta.out"> Pre-computed results</a></big></b><TR><td><b><big>TransMembrane prediction using Hidden Markov Models: <a href="http://tuberculist.epfl.ch/tmhmm/Rv3676.html"> TMHMM</a></big></b><base target="_blank"/><TR><td><b><big>Genomic sequence</big></b><br /><br /><form action="dnaseq.php" method="get">"""
print l
</code></pre>
<p>I have one HTML line and want to extract the text which is embedded into the HTML tags. I have tried with all the available methods, but they are not working in my case. </p>
<p>How can I do it?</p>
<p>Expected output should be:</p>
<p>H37Rv|Rv3676|crp
VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRRAPDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEISEQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEIAQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR</p>
| 1 | 2016-10-04T07:41:31Z | 39,847,186 | <pre><code>try:
from BeautifulSoup import BeautifulSoup
except ImportError:
from bs4 import BeautifulSoup
html = BeautifulSoup(l, "html.parser")
small = html.find('small')
print (small.get_text())
</code></pre>
<p>This gets the small tag and prints out all the text in it</p>
| 1 | 2016-10-04T08:04:12Z | [
"python"
]
|
How to parse text from html file | 39,846,780 | <pre><code>import urllib2
import nltk
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup
l = """<TR><TD><small style=font-family:courier> >M. tuberculosis H37Rv|Rv3676|crp<br />VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRR<br />APDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEIS<br />EQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEI<br />AQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR<br /></small><TR><td><b><big>Blastp: <a href="http://tuberculist.epfl.ch/blast_output/Rv3676.fasta.out"> Pre-computed results</a></big></b><TR><td><b><big>TransMembrane prediction using Hidden Markov Models: <a href="http://tuberculist.epfl.ch/tmhmm/Rv3676.html"> TMHMM</a></big></b><base target="_blank"/><TR><td><b><big>Genomic sequence</big></b><br /><br /><form action="dnaseq.php" method="get">"""
print l
</code></pre>
<p>I have one HTML line and want to extract the text which is embedded into the HTML tags. I have tried with all the available methods, but they are not working in my case. </p>
<p>How can I do it?</p>
<p>Expected output should be:</p>
<p>H37Rv|Rv3676|crp
VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRRAPDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEISEQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEIAQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR</p>
| 1 | 2016-10-04T07:41:31Z | 39,849,162 | <p>I have tried with BeautifulSoup which was not working for me because it was producing unformatted version so i have decide to write down code with my self and its working absolutely fine and producing what i want.</p>
<pre><code>import urllib2
proxy = urllib2.ProxyHandler({'http': 'http://******************'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
res = urllib2.urlopen('http://tuberculist.epfl.ch/quicksearch.php?gene+name=Rv3676')
html = res.readlines()
for l in html:
if "Genomic sequence" in l:
l = l.split("</small>")[0]
l = l.split("<br />")
header = l[0]
sequence = l[1:]
print "".join([">", header.split(">")[4]])
print "".join(sequence)
</code></pre>
<p>output</p>
<pre><code>>M. tuberculosis H37Rv|Rv3676|crp
VDEILARAGIFQGVEPSAIAALTKQLQPVDFPRGHTVFAEGEPGDRLYIIISGKVKIGRRAPDGRENLLTIMGPSDMFGELSIFDPGPRTSSATTITEVRAVSMDRDALRSWIADRPEISEQLLRVLARRLRRTNNNLADLIFTDVPGRVAKQLLQLAQRFGTQEGGALRVTHDLTQEEIAQLVGASRETVNKALADFAHRGWIRLEGKSVLISDSERLARRAR
</code></pre>
| 1 | 2016-10-04T09:48:23Z | [
"python"
]
|
Python: from vertical line user input to list | 39,846,836 | <p>I know how to change horizontal user input to become a list.</p>
<pre><code>numbers = [int(n) for n in input().split()]
</code></pre>
<p>but not sure how to do if the user input is vertical(enter after input integer:224, 32, 5)...</p>
<p>example input:</p>
<blockquote>
<p>224 32 5</p>
</blockquote>
<p>example output:</p>
<blockquote>
<p>[224, 32, 5]</p>
</blockquote>
| 1 | 2016-10-04T07:44:47Z | 39,846,906 | <pre><code>number_read = raw_input()
number_list = []
while number_read != 'q':
number_list.append(int(number_read))
number_read = raw_input()
</code></pre>
<p>So as soon as a user writes <code>q</code> and presses enter, you will be done collecting input</p>
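The same sentinel idea can be written as a function over any iterable of lines, which makes it easy to test without interactive input (the function name is mine; with <code>raw_input</code> you would loop exactly as above):

```python
def collect_numbers(lines, sentinel='q'):
    # Consume lines until the sentinel value, converting each to int.
    numbers = []
    for line in lines:
        if line == sentinel:
            break
        numbers.append(int(line))
    return numbers

print(collect_numbers(['224', '32', '5', 'q']))  # [224, 32, 5]
```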
| 2 | 2016-10-04T07:48:52Z | [
"python"
]
|
Python: from vertical line user input to list | 39,846,836 | <p>I know how to change horizontal user input to become a list.</p>
<pre><code>numbers = [int(n) for n in input().split()]
</code></pre>
<p>but I am not sure what to do if the user input is vertical (enter after each input integer: 224, 32, 5)...</p>
<p>example input:</p>
<blockquote>
<p>224 32 5</p>
</blockquote>
<p>example output:</p>
<blockquote>
<p>[224, 32, 5]</p>
</blockquote>
| 1 | 2016-10-04T07:44:47Z | 39,850,683 | <p>As @vlad-ardelean mentioned, you can create a list and, by means of the "append" command, add each value as a single element to the list. However, if your given input is going to be a Python list (or any other iterable such as a tuple) you can convert it to a string for display:</p>
<pre><code>input_numbers = [224, 32, 5] # e.g. collected one per line with append, as in the other answer
vertical_list = '\n'.join(map(str, input_numbers))
print(vertical_list)
</code></pre>
| 1 | 2016-10-04T11:06:55Z | [
"python"
]
|
Invalid parameter estimator for estimator MLPClassifier | 39,846,864 | <p>When I tried to use GridSearchCV for MLPClassifier, I got this message: </p>
<blockquote>
<p>ValueError: Invalid parameter estimator for estimator <br>
MLPClassifier(activation='relu', alpha=0.0001, batch_size='auto', beta_1=0.9,
beta_2=0.999, early_stopping=False, epsilon=1e-08,<br>
hidden_layer_sizes=(100,), learning_rate='constant',<br>
learning_rate_init=0.001, max_iter=200, momentum=0.9,<br>
nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,<br>
solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,<br>
warm_start=False). Check the list of available parameters with estimator.get_params().keys().</p>
</blockquote>
<pre><code>from sklearn.neural_network import *
mlp = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(100, ), random_state=1)
paramgrid = {'estimator__alpha':logspace(-3,2,20),}
mlpcv = grid_search.GridSearchCV(mlp, paramgrid, cv = 5)
mlpcv.fit(trainXtf, trainY)
print mlpcv.best_params_
</code></pre>
<p>Which parameter has the mistakes?</p>
| 0 | 2016-10-04T07:46:18Z | 39,898,074 | <p>You are doing a grid search on <code>paramgrid</code> which contains the parameter <code>estimator__alpha</code>.</p>
<p>However <strong>MLPClassifier</strong> does not have this parameter. You should change your <code>paramgrid</code>.</p>
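Concretely: the <code>estimator__</code> prefix is only meaningful when the classifier is wrapped, e.g. as a <code>Pipeline</code> step named <code>estimator</code>. Since <code>mlp</code> is passed to <code>GridSearchCV</code> directly, the grid key must be the estimator's own parameter name, here <code>alpha</code>. A sketch of the corrected grid (with a small hand-written stand-in for <code>logspace(-3, 2, 20)</code>):

```python
# The key matches MLPClassifier's own parameter name -- no 'estimator__' prefix.
paramgrid = {'alpha': [10.0 ** e for e in (-3, -2, -1, 0, 1, 2)]}
# then, as before: mlpcv = grid_search.GridSearchCV(mlp, paramgrid, cv=5)
```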
| 0 | 2016-10-06T14:01:25Z | [
"python"
]
|
How does this work: except IOError as e: print("No such file: {0.filename}".format(e)) | 39,846,889 | <p>The code that uses the expression in question:</p>
<pre><code>def read_file(self,file_name):
try:
with open(file_name,'r') as file:
data=file.read()
return data.split()
except IOError as e:
print("Could not read file:{0.filename}".format(e))
sys.exit()
</code></pre>
<p>How does this work? What is the meaning of <code>"{0.filename}".format(e)</code>? Why do we use <code>{0.filename}</code> and not <code>{1.filename}</code>?</p>
| 0 | 2016-10-04T07:47:50Z | 39,846,925 | <p>This essentially means takes the positional argument at position <code>0</code> (in <code>format(e)</code>, <code>e</code> is the zero position arg) and grab the <code>filename</code> attribute defined on it:</p>
<pre><code>print("No such file: {0.filename}".format(e))
</code></pre>
<p>Is similar to:</p>
<pre><code>print("No such file: {0}".format(e.filename))
</code></pre>
<p>It isn't <code>1.filename</code> because format hasn't been called with an argument at position <code>1</code>, another example might help you out even more:</p>
<pre><code>print("{0}{1.filename}".format("No such File: ", e))
</code></pre>
<p>Here <code>{0}</code> will grab <code>"No such File: "</code> and <code>{1.filename}</code> will grab <code>e.filename</code> and add it to the resulting string.</p>
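A self-contained demonstration — the class below is a stand-in for the <code>IOError</code> instance, since all <code>format</code> needs is an object with a <code>filename</code> attribute:

```python
class FakeError(object):
    """Stand-in for an IOError; format only needs .filename."""
    filename = "data.txt"

e = FakeError()
print("No such file: {0.filename}".format(e))  # No such file: data.txt
print("No such file: {0}".format(e.filename))  # same result, equivalent spelling
```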
| 2 | 2016-10-04T07:49:49Z | [
"python",
"python-3.x"
]
|
Convert date to ordinal python? | 39,846,918 | <p>I want to convert
2010-03-01 to 733832</p>
<p>I just found this toordinal code</p>
<pre><code>d=datetime.date(year=2010, month=3, day=1)
d.toordinal()
</code></pre>
<p>from <a href="http://stackoverflow.com/questions/16542074/what-is-the-inverse-of-date-toordinal-in-python">this</a></p>
<p>But i want something more like</p>
<pre><code>d=datetime.date('2010-03-01')
d.toordinal()
</code></pre>
<p>Thanks in advance</p>
| 0 | 2016-10-04T07:49:32Z | 39,846,998 | <p>You'll need to use <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime" rel="nofollow"><code>strptime</code></a> on the date string, specifying the format, then you can call the <code>toordinal</code> method of the <code>date</code> object:</p>
<pre><code>>>> from datetime import datetime as dt
>>> d = dt.strptime('2010-03-01', '%Y-%m-%d').date()
>>> d
datetime.date(2010, 3, 1)
>>> d.toordinal()
733832
</code></pre>
<p>The call to the <code>date</code> method in this case is redundant, and is only kept for making the object consistent as a <code>date</code> object instead of a <code>datetime</code> object.</p>
<p>If you're looking to handle more date string formats, <a href="http://strftime.org/" rel="nofollow">Python's strftime directives</a> is one good reference you want to check out.</p>
| 3 | 2016-10-04T07:53:33Z | [
"python",
"datetime"
]
|
Convert date to ordinal python? | 39,846,918 | <p>I want to convert
2010-03-01 to 733832</p>
<p>I just found this toordinal code</p>
<pre><code>d=datetime.date(year=2010, month=3, day=1)
d.toordinal()
</code></pre>
<p>from <a href="http://stackoverflow.com/questions/16542074/what-is-the-inverse-of-date-toordinal-in-python">this</a></p>
<p>But i want something more like</p>
<pre><code>d=datetime.date('2010-03-01')
d.toordinal()
</code></pre>
<p>Thanks in advance</p>
| 0 | 2016-10-04T07:49:32Z | 39,847,003 | <p>like this:</p>
<pre><code>datetime.strptime("2016-01-01", "%Y-%m-%d").toordinal()
</code></pre>
| 2 | 2016-10-04T07:53:49Z | [
"python",
"datetime"
]
|
Convert date to ordinal python? | 39,846,918 | <p>I want to convert
2010-03-01 to 733832</p>
<p>I just found this toordinal code</p>
<pre><code>d=datetime.date(year=2010, month=3, day=1)
d.toordinal()
</code></pre>
<p>from <a href="http://stackoverflow.com/questions/16542074/what-is-the-inverse-of-date-toordinal-in-python">this</a></p>
<p>But i want something more like</p>
<pre><code>d=datetime.date('2010-03-01')
d.toordinal()
</code></pre>
<p>Thanks in advance</p>
| 0 | 2016-10-04T07:49:32Z | 39,847,069 | <p>You need to firstly convert the time string to <code>datetime</code> object using <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime" rel="nofollow">strptime()</a>. Then call <code>.toordinal()</code> on the <code>datetime</code> object</p>
<pre><code>>>> from datetime import datetime
>>> date = datetime.strptime('2010-03-01', '%Y-%m-%d')
>>> date.toordinal()
733832
</code></pre>
<p>You should be creating the function to achieve this as:</p>
<pre><code>def convert_date_to_ordinal(date):
    return datetime.strptime(date, '%Y-%m-%d').toordinal()
convert_date_to_ordinal('2010-03-01')
#returns: 733832
</code></pre>
| 1 | 2016-10-04T07:57:02Z | [
"python",
"datetime"
]
|
python DataFrame comparison using == operator | 39,846,946 | <p>I have two Python dataframes of the same structure and the same number of rows.
When I perform the '==' operation on them, they give wrong answers.</p>
<p>df1:</p>
<pre><code> 0 61561899
1 56598947
2 52231204
3 10069030
4 19900179
5 52892001
6 50015534
7 10071207
8 55455545
9 10075649
10 52050196
Name: spn, dtype: object
</code></pre>
<p>df2:</p>
<pre><code> 0 61561899
1 56598947
2 52231204
3 10069030
4 19900179
5 52892001
6 50015534
7 10071207
8 55455545
9 10075649
10 52050196
Name: spn, dtype: object
</code></pre>
<p>print df1 == df2<br>
the above python statement gives following output:</p>
<pre><code> 0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
Name: spn, dtype: bool
</code></pre>
<p>I don't know what I am missing. I am expecting all True.</p>
| 0 | 2016-10-04T07:51:00Z | 39,846,979 | <p>Try cast to <code>str</code> and then compare:</p>
<pre><code>df1.spn.astype(str) == df2.spn.astype(str)
</code></pre>
<p>Or maybe need compare columns only:</p>
<pre><code>df1.spn == df2.spn
</code></pre>
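A plain-Python illustration (the values are my example) of why an elementwise <code>==</code> over <code>object</code>-dtype columns can come back all <code>False</code> even though the printed values look identical: one frame may hold strings and the other integers, and a str never equals an int:

```python
a = "61561899"  # how the value may be stored in df1 (object dtype holding str)
b = 61561899    # how it may be stored in df2 (int)
print(a == b)            # False: str vs int never compare equal
print(str(a) == str(b))  # True once both sides are cast to a common type
```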
| 0 | 2016-10-04T07:52:34Z | [
"python",
"dataframe",
"comparison"
]
|
urllib.error.HTTPError: HTTP Error 400: Bad Request (bottlenose) | 39,847,007 | <p>After installing bottlenose and getting my API keys and associate tags I tried following the instructions in this guide: <a href="https://github.com/lionheart/bottlenose" rel="nofollow">https://github.com/lionheart/bottlenose</a></p>
<p>(I have removed my api keys)</p>
<p>This is the error I am getting:</p>
<pre><code>>>> import bottlenose
>>> amazon = bottlenose.Amazon(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ASSOCIATE_TAG)
>>> response = amazon.ItemLookup(ItemId="B007OZNUCE")
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
response = amazon.ItemLookup(ItemId="B007OZNUCE")
File "C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\site-packages\bottlenose\api.py", line 265, in __call__
{'api_url': api_url, 'cache_url': cache_url})
File "C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\site-packages\bottlenose\api.py", line 226, in _call_api
return urllib2.urlopen(api_request, timeout=self.Timeout)
File "C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 162, in urlopen
return opener.open(url, data, timeout)
File"C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 471, in open
response = meth(req, response)
File"C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 581, in http_response
'http', request, response, code, msg, hdrs)
File"C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 509, in error
return self._call_chain(*args)
File"C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 443, in _call_chain
result = func(*args)
File"C:\Users\Windows\AppData\Local\Programs\Python\Python35\lib\urllib\request.py", line 589, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
</code></pre>
| 0 | 2016-10-04T07:53:59Z | 39,849,015 | <p>You have to provide the value for keys</p>
<ul>
<li><strong>AWS_ACCESS_KEY_ID</strong></li>
<li><strong>AWS_SECRET_ACCESS_KEY</strong></li>
<li><strong>AWS_ASSOCIATE_TAG</strong></li>
</ul>
<p>In order to get the keys you have to register on AWS</p>
| 0 | 2016-10-04T09:41:11Z | [
"python",
"api",
"amazon-web-services"
]
|
Pythonic superclass with classmethod constructor: override, inherit, ...? | 39,847,154 | <p>What is the neatest and most pythonic way to solve this:</p>
<p>Given a class with a <code>@classmethod</code> constructor such as in code sample 1. But now subclass it with two classes which both require a totally different additional argument, such as in code sample 2. Should this be solved by using <code>*args, **kwargs</code> (sample 3)? Or should I not inherit the <code>@classmethod</code> but rather copy it in every class? Or create a superclass <code>def read_csv</code> and refer from the subclasses' <code>@classmethod</code>? </p>
<p>Subclassing is essential as there are other complex functions in the classes.</p>
<p>Other people working with this code should understand what arguments to pass when initializing the various classes.</p>
<p><strong>Code sample 1</strong></p>
<pre><code>class Car(object):
@classmethod
def from_csv(cls, csv):
df = pd.read_csv(csv)
# could be more complex
return cls(df)
def __init__(self, df):
self.df = df
</code></pre>
<p><strong>Code sample 2</strong></p>
<pre><code>class Ferrari(Car):
def __init__(self, df, ferrari_logo):
self.df = df
self.ferrari_logo = ferrari_logo
def somethingcomplex(self):
#complex ferrari method
class Fiat(Car):
def __init__(self, df, fiat_wheels):
self.df = df
self.fiat_wheels = fiat_wheels
def somethingcomplex(self):
#complex fiat method
Fiat.from_csv('fiat.csv', fiat_wheels=8)
Ferrari.from_csv('ferrari.csv', ferrari_logo='logo.jpg')
</code></pre>
<p><strong>Code sample 3</strong></p>
<pre><code>class Car(object):
@classmethod
def from_csv(cls, csv, *args, **kwargs):
df = pd.read_csv(csv)
# could be more complex
return cls(df, *args, **kwargs)
def __init__(self, df):
self.df = df
</code></pre>
| -1 | 2016-10-04T08:02:15Z | 39,847,610 | <p>I think you could avoid defining a new <em>subclass</em> (and <em>subclassing</em> altogether) each time you have a new attribute for the <code>Car</code> class by setting up the attributes of each instance from the <code>kwargs</code> passed to the <code>__init__</code> method of the class:</p>
<pre><code>class Car(object):
@classmethod
def from_csv(cls, csv, **kwargs):
df = pd.read_csv(csv)
# could be more complex
return cls(df, **kwargs)
def __init__(self, df=None, **kwargs):
self.df = df
for kw in kwargs:
setattr(self, kw, kwargs[kw])
</code></pre>
<p>Then you could do:</p>
<pre><code>ferrari = Car.from_csv('some.csv', name='ferrari', ferrari_logo='logo.jpg')
fiat = Car.from_csv('fiat.csv', name='fiat', fiat_wheels=8)
</code></pre>
<p><strong>Update:</strong></p>
<p>Then you could subclass the above if you need to define separate methods at the subclass level. You won't need to write a separate <code>__init__</code> for each subclass.</p>
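A minimal stdlib sketch of that update (no pandas; the class and attribute names are mine): the subclass only adds its complex method and inherits both <code>__init__</code> and the keyword handling:

```python
class Car(object):
    def __init__(self, df=None, **kwargs):
        self.df = df
        for kw in kwargs:
            setattr(self, kw, kwargs[kw])

class Ferrari(Car):
    # Only the subclass-specific behaviour lives here; no __init__ needed.
    def somethingcomplex(self):
        return "logo: %s" % self.ferrari_logo

f = Ferrari(df=[1, 2, 3], ferrari_logo="logo.jpg")
print(f.somethingcomplex())  # logo: logo.jpg
```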
| 1 | 2016-10-04T08:30:50Z | [
"python",
"inheritance",
"class-method"
]
|
How could I call a User defined function from spark sql queries in pyspark? | 39,847,547 | <p>I need to call a function from my spark sql queries. I have tried udf but I don't know how to manipulate it.
Here is the scenario:</p>
<pre><code># my python function example
def sum(effdate, trandate):
sum=effdate+trandate
return sum
</code></pre>
<p>and my spark sql query is like:</p>
<pre><code>spark.sql("select sum(cm.effdate, cm.trandate)as totalsum, name from CMLEdG cm ....").show()
</code></pre>
<p>These lines are not my code, but I am stating them as an example. How could I call my sum function inside spark.sql(sql queries) to get a result?
Could you please suggest any link or comment compatible with pyspark?</p>
<p>Any help would be appreciated. </p>
<p>Thanks</p>
<p>Kalyan </p>
| 0 | 2016-10-04T08:26:36Z | 39,848,223 | <p>Check this</p>
<pre><code> >>> from pyspark.sql.types import IntegerType
>>> sqlContext.udf.register("stringLengthInt", lambda x: len(x), IntegerType())
>>> sqlContext.sql("SELECT stringLengthInt('test')").collect()
[Row(_c0=4)]
</code></pre>
| 0 | 2016-10-04T09:01:41Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql",
"spark-dataframe"
]
|
Is it possible to use cassandra cqlsh copy command inside the python programme? | 39,847,605 | <p>Is it possible to execute Cassandra CQLSH copy command through a Python program. If yes, how?</p>
<p>Thanks in Advance.</p>
| -3 | 2016-10-04T08:30:36Z | 39,866,720 | <p>Yes, it is possible. I'm providing you a sample code about how we use cqlsh inside python. I guess, you'll can get some help from it but not interested to write the entire copy script for you.</p>
<pre><code> import subprocess
host = "xxx.xxx.xxx.xxx"
user = "cassandra"
password = "cassandra"
process = subprocess.Popen("cqlsh " + host + " -u " + user + " -p " + password, shell=True)
exitCode = process.wait()
if exitCode == 0:
print "Job Done!"
else:
print "Still working!"
</code></pre>
| 0 | 2016-10-05T06:20:37Z | [
"python",
"cassandra"
]
|
Access to django template variable from js block | 39,847,663 | <p>I have js code in my django template but project use Grunt js, so code must be in js block {% block extrajs %}.</p>
<p>This is my code in template:</p>
<pre><code><script type="text/javascript">
var pub_date = {{ obj.pub_date|date:'YmdHi' }};
var hour = moment().startOf('hour').fromNow();
var time_ago = moment(pub_date, "YYYYMMDDhhmm").locale('{{ LANGUAGE_CODE }}').fromNow();
document.write(time_ago);
</script>
</code></pre>
<p>Here I tried to test my code, but without success; I get an empty alert window:</p>
<pre><code>$(document).ready(function(){
var pub_date = '{{ obj.pub_date|date:'YmdHi' }}';
alert(pub_date);
});
</code></pre>
<p>Question is, how I can get access from js block to variable in template?</p>
| 0 | 2016-10-04T08:33:44Z | 39,848,265 | <p>In order to this work you have to have your js code inside your html template. Also when trying to do this add <code>|safe</code> behind your statement. For example :</p>
<pre><code>$(document).ready(function(){
var pub_date = '{{ obj.pub_date|date:'YmdHi'|safe }}';
alert(pub_date);
});
</code></pre>
| 0 | 2016-10-04T09:04:22Z | [
"javascript",
"python",
"django",
"gruntjs",
"django-templates"
]
|
Python Regular Expression for a paragraph | 39,847,681 | <p>Hi, I have this as my testing string:</p>
<pre><code><image>
<title>CNN.com - Technology</title>
<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>
</code></pre>
<p>and I want to select 'Technology' from it using a python regular expression; however, I need it to be specific so that it uses <code><image></code> and <code><link></code>. So far the expression I have is:</p>
<pre><code>'<title[^>]*>CNN.com - (.*?)</title>'
</code></pre>
<p>this expression works to select 'Technology', which is correct; however, I am unsure how to specialise my code using <code><image></code> and <code><link></code> in the expression. For example, I need something along the lines of the regular expression <code>'<image><title[^>]*>CNN.com - (.*?)</title><link>'</code> that would actually work to produce the same result, 'Technology'?</p>
| 2 | 2016-10-04T08:34:48Z | 39,848,100 | <p>How about something like this :</p>
<pre><code>(<image>\n<title>CNN.com - )(.*?)(<\/title>\n.*)
</code></pre>
<p>Group number 2 would be <code>Technology</code>.</p>
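A quick way to sanity-check this with Python's <code>re</code> module (a minimal sketch using the sample string from the question):

```python
import re

s = ('<image>\n'
     '<title>CNN.com - Technology</title>\n'
     '<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>')

# The escaped slash from the answer is optional in Python, so plain </title> works too.
m = re.search(r'(<image>\n<title>CNN.com - )(.*?)(</title>\n.*)', s)
print(m.group(2))  # Technology
```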
| 0 | 2016-10-04T08:56:05Z | [
"python",
"regex"
]
|
 Python Regular Expression for a paragraph | 39,847,681 | <p>Hi, I have this as my testing string:</p>
<pre><code><image>
<title>CNN.com - Technology</title>
<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>
</code></pre>
<p>and I want to select 'Technology' from it using a python regular expression; however, I need it to be specific so that it uses <code><image></code> and <code><link></code>. So far the expression I have is:</p>
<pre><code>'<title[^>]*>CNN.com - (.*?)</title>'
</code></pre>
<p>this expression works to select 'Technology', which is correct; however, I am unsure how to specialise my code using <code><image></code> and <code><link></code> in the expression. For example, I need something along the lines of the regular expression <code>'<image><title[^>]*>CNN.com - (.*?)</title><link>'</code> that would actually work to produce the same result, 'Technology'?</p>
| 2 | 2016-10-04T08:34:48Z | 39,848,174 | <p>Your regexp is not bad but you need to escape the slash in <code></title></code> with a backslash and it does not match because of the newlines in your string.</p>
<p>Newlines are whitespaces (like space, tabulation... \s is equivalent to [ \t\n\r\f\v] when the UNICODE flag is not set), so you can use \s to match them.</p>
<p>I assume you're using python3 but it does not matter.</p>
<pre><code>s = """<image>
<title>CNN.com - Technology</title>
<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>"""
r = r"<image>[\s]*<title[^>]*>CNN.com - (.*?)<\/title>[\s]*<link>"
m = re.search(r, s)
print(m.group(0))
print(m.group(1))
</code></pre>
<p>group(1) is "Technology".</p>
| 1 | 2016-10-04T08:59:13Z | [
"python",
"regex"
]
|
 Python Regular Expression for a paragraph | 39,847,681 | <p>Hi, I have this as my testing string:</p>
<pre><code><image>
<title>CNN.com - Technology</title>
<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>
</code></pre>
<p>and I want to select 'Technology' from it using a python regular expression; however, I need it to be specific so that it uses <code><image></code> and <code><link></code>. So far the expression I have is:</p>
<pre><code>'<title[^>]*>CNN.com - (.*?)</title>'
</code></pre>
<p>this expression works to select 'Technology', which is correct; however, I am unsure how to specialise my code using <code><image></code> and <code><link></code> in the expression. For example, I need something along the lines of the regular expression <code>'<image><title[^>]*>CNN.com - (.*?)</title><link>'</code> that would actually work to produce the same result, 'Technology'?</p>
 | 2 | 2016-10-04T08:34:48Z | 39,848,201 | <p>If you use the 'single line' option for regex, <code>.</code> will also match newlines. So, you can do:</p>
<pre><code><image>.<title[^>]*>CNN.com - (.*?)</title>.<link>
</code></pre>
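In Python the 'single line' option is the <code>re.DOTALL</code> flag; a minimal runnable sketch:

```python
import re

s = ('<image>\n'
     '<title>CNN.com - Technology</title>\n'
     '<link>http://www.cnn.com/TECH/index.html?eref=rss_tech</link>')

# With re.DOTALL, '.' also matches the '\n' between the tags.
m = re.search(r'<image>.<title[^>]*>CNN.com - (.*?)</title>.<link>', s, re.DOTALL)
print(m.group(1))  # Technology
```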
| 0 | 2016-10-04T09:00:28Z | [
"python",
"regex"
]
|
How to select all the rows with the same name and not only the first one? | 39,847,727 | <p>This is the xls file : </p>
<pre><code>moto 5 2 45
moto 2 4 43
coche 8 54 12
coche 43 21 6
coche 22 14 18
</code></pre>
<p>And this is the code working with pyexcel library:</p>
<pre><code>import pyexcel as pe
data = pe.get_sheet(file_name="vehiculo.xls")
sheet = pe.Sheet(data, name_rows_by_column=0)
sheet.row.select(['coche'])
sheet.save_as("output.xls")
</code></pre>
<p>it returns only the first row with name 'coche':</p>
<pre><code>coche 8 54 12
</code></pre>
<p>And I want all the rows with the name "coche".</p>
<p>Any idea?
Thanks</p>
| 0 | 2016-10-04T08:37:16Z | 39,848,048 | <p>Just iterate over the data:</p>
<pre><code>import pyexcel as pe
from collections import defaultdict
results = defaultdict(list)
data = pe.get_sheet(file_name="vehiculo.xls")
for row in data.rows():
results[row[0]] += row[1:]
</code></pre>
<p>You'll get the following:</p>
<pre><code>>>> results['moto']
[5L, 2L, 45L, 2L, 4L, 43L]
</code></pre>
| 0 | 2016-10-04T08:53:48Z | [
"python",
"python-2.7",
"pyexcel"
]
|
Can I get PyCharm to suppress a particular warning on a single line? | 39,847,884 | <p>PyCharm provides some helpful warnings on code style, conventions and logical gotchas. It also provides a notification if I try to commit code with warnings (or errors).</p>
<p>Sometimes I consciously ignore these warnings for particular lines of code (for various reasons, typically to account for implementation details of third-party libraries). I want to suppress the warning, but just for that line (if the warning crops up on a different line where I'm not being deliberate, I want to know about it!)</p>
<p>How can I do that in PyCharm? (Following a universal Python convention strongly preferable.)</p>
 | 1 | 2016-10-04T08:45:31Z | 39,847,910 | <p>Essentially, for the file in which you want to suppress the warning, run it through code inspection, after which PyCharm will list the warnings in that particular file. You can then review the warnings and tell PyCharm to suppress a warning for a particular statement.</p>
<p>Code Inspection can be accessed from Code-->Inspect Code menu from where you can select the file that you want to inspect.</p>
<p>Below is an image that shows how to do it, after running Code Inspection:
<a href="https://i.stack.imgur.com/erFCc.png" rel="nofollow"><img src="https://i.stack.imgur.com/erFCc.png" alt="Code Inspection Output"></a></p>
<p>Link for more details around suppressing warnings: <a href="https://www.jetbrains.com/help/pycharm/2016.1/suppressing-inspections.html#1" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/suppressing-inspections.html#1</a></p>
| 1 | 2016-10-04T08:47:09Z | [
"python",
"pycharm",
"compiler-warnings",
"suppress-warnings"
]
|
 Find max and min RGB values of a pixel using PIL | 39,847,891 | <p>I have a basic algorithm for desaturating an image using the pillow library and Python 3:</p>
<ul>
<li>find max of a pixel's RGB values</li>
<li>find min of a pixel's RGB values</li>
<li>calc average: (max + min) / 2</li>
</ul>
<p>How do I find the min and max red, green and blue values for each pixel? I'm completely confused! I tried this code as part of a for loop</p>
<pre><code> red = image.getextrema()
green = image.getextrema()
blue = image.getextrema()
average = int( (red + green + blue) / 2 )
</code></pre>
<p>but the error returned is</p>
<p>"TypeError: unsupported operand type(s) for /: 'tuple' and 'int'"</p>
<p>The same error msg appeared when I removed the int() function.</p>
<p>Not sure if I'm barking up the wrong tree completely or only slightly off the trail. Complete novice with the pillow library and just wanting to experiment with different effects. </p>
 | 0 | 2016-10-04T08:46:11Z | 39,847,965 | <p><code>img.getextrema()</code> returns the tuple of <code>(min_value, max_value)</code> (one such tuple per band for multi-band images). In order to get the average value for a single band, you have to do:</p>
<pre><code>value = img.getextrema()
avg = sum(value)/len(value) # OR, sum(value)/2, as len will always be 2
</code></pre>
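A minimal sketch, independent of Pillow, of the per-pixel computation the question describes (the "lightness" method of desaturation) on a single <code>(r, g, b)</code> tuple:

```python
def lightness(pixel):
    # average of the largest and smallest channel values
    return (max(pixel) + min(pixel)) // 2

print(lightness((200, 100, 50)))  # 125
print(lightness((255, 0, 0)))     # 127
```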
| 0 | 2016-10-04T08:50:08Z | [
"python"
]
|
 Pandas: improve algorithm with find substring in column | 39,847,973 | <p>I have a dataframe and I try to get only the rows where some column contains some strings.</p>
<p>I use:</p>
<pre><code>df_res = pd.DataFrame()
for i in substr:
res = df[df['event_address'].str.contains(i)]
</code></pre>
<p><code>df</code> looks like:</p>
<pre><code>member_id,event_address,event_time,event_duration
g1497o1ofm5a1963,fotki.yandex.ru/users/atanusha/albums,2015-05-01 00:00:05,8
g1497o1ofm5a1963,9829192.ru/apple-iphone.html,2015-05-01 00:00:15,2
g1497o1ofm5a1963,fotki.yandex.ru/users/atanusha/album/165150?&p=3,2015-05-01 00:00:17,2
g1497o1ofm5a1963,fotki.yandex.ru/tags/%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&search_author=utpaladev&&p=2,2015-05-01 00:01:31,10
g1497o1ofm5a1963,3gmaster.net,2015-05-01 00:01:41,6
g1497o1ofm5a1963,fotki.yandex.ru/search.xml?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&&p=2,2015-05-01 00:02:01,6
g1497o1ofm5a1963,fotki.yandex.ru/search.xml?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&search_author=Sunny-Fanny&,2015-05-01 00:02:31,2
g1497o1ofm5a1963,fotki.9829192.ru/apple-iphone.html,2015-05-01 00:03:25,6
</code></pre>
<p>and <code>substr</code> is: </p>
<pre><code>123.ru/gadgets/communicators
320-8080.ru/mobilephones
3gmaster.net
3-q.ru/products/smartfony/s
9829192.ru/apple-iphone.html
9829192.ru/index.php?cat=1
acer.com/ac/ru/ru/content/group/smartphones
aj.ru
</code></pre>
<p>I get the desired result with this code, but it takes too long.
I also tried to use the column directly (<code>substr</code> is <code>substr = urls.url.values.tolist()</code>),
and I tried</p>
<pre><code>res = df[df['event_address'].str.contains(urls.url)]
</code></pre>
<p>but it returns: </p>
<blockquote>
<p>TypeError: 'Series' objects are mutable, thus they cannot be hashed</p>
</blockquote>
<p>Is there any way to make it faster, or am I doing something wrong? </p>
 | 1 | 2016-10-04T08:50:41Z | 39,848,010 | <p>I think you need to join the substrings with <code>|</code> and pass the result to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow">str.<code>contains</code></a> if you need a faster solution:</p>
<pre><code>res = df[df['event_address'].str.contains('|'.join(urls.url))]
print (res)
member_id event_address event_time \
1 g1497o1ofm5a1963 9829192.ru/apple-iphone.html 2015-05-01 00:00:15
4 g1497o1ofm5a1963 3gmaster.net 2015-05-01 00:01:41
7 g1497o1ofm5a1963 fotki.9829192.ru/apple-iphone.html 2015-05-01 00:03:25
event_duration
1 2
4 6
7 6
</code></pre>
<p>Another <code>list comprehension</code> solution:</p>
<pre><code>res = df[df['event_address'].apply(lambda x: any([n in x for n in urls.url.tolist()]))]
print (res)
member_id event_address event_time \
1 g1497o1ofm5a1963 9829192.ru/apple-iphone.html 2015-05-01 00:00:15
4 g1497o1ofm5a1963 3gmaster.net 2015-05-01 00:01:41
7 g1497o1ofm5a1963 fotki.9829192.ru/apple-iphone.html 2015-05-01 00:03:25
event_duration
1 2
4 6
7 6
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#[8000 rows x 4 columns]
df = pd.concat([df]*1000).reset_index(drop=True)
In [68]: %timeit (df[df['event_address'].str.contains('|'.join(urls.url))])
100 loops, best of 3: 12 ms per loop
In [69]: %timeit (df.ix[df.event_address.map(check_exists)])
10 loops, best of 3: 155 ms per loop
In [70]: %timeit (df.ix[df.event_address.map(lambda x: any([True for i in urls.url.tolist() if i in x]))])
10 loops, best of 3: 163 ms per loop
In [71]: %timeit (df[df['event_address'].apply(lambda x: any([n in x for n in urls.url.tolist()] ))])
10 loops, best of 3: 174 ms per loop
</code></pre>
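One caveat worth noting (not part of the original answer): <code>str.contains</code> interprets the joined pattern as a regex, and these URLs contain metacharacters such as <code>.</code> and <code>?</code>. For literal substring matching it may be safer to escape them first:

```python
import re

urls = ['9829192.ru/index.php?cat=1', '3gmaster.net']
# re.escape turns each URL into a literal pattern before joining with '|'.
pattern = '|'.join(re.escape(u) for u in urls)
# pattern can then be passed to df['event_address'].str.contains(pattern)
print(pattern)
```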
| 1 | 2016-10-04T08:52:01Z | [
"python",
"string",
"pandas",
"indexing",
"substring"
]
|
 Pandas: improve algorithm with find substring in column | 39,847,973 | <p>I have a dataframe and I try to get only the rows where some column contains some strings.</p>
<p>I use:</p>
<pre><code>df_res = pd.DataFrame()
for i in substr:
res = df[df['event_address'].str.contains(i)]
</code></pre>
<p><code>df</code> looks like:</p>
<pre><code>member_id,event_address,event_time,event_duration
g1497o1ofm5a1963,fotki.yandex.ru/users/atanusha/albums,2015-05-01 00:00:05,8
g1497o1ofm5a1963,9829192.ru/apple-iphone.html,2015-05-01 00:00:15,2
g1497o1ofm5a1963,fotki.yandex.ru/users/atanusha/album/165150?&p=3,2015-05-01 00:00:17,2
g1497o1ofm5a1963,fotki.yandex.ru/tags/%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&search_author=utpaladev&&p=2,2015-05-01 00:01:31,10
g1497o1ofm5a1963,3gmaster.net,2015-05-01 00:01:41,6
g1497o1ofm5a1963,fotki.yandex.ru/search.xml?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&&p=2,2015-05-01 00:02:01,6
g1497o1ofm5a1963,fotki.yandex.ru/search.xml?text=%D0%B1%D0%BE%D1%81%D0%B8%D0%BA%D0%BE%D0%BC&search_author=Sunny-Fanny&,2015-05-01 00:02:31,2
g1497o1ofm5a1963,fotki.9829192.ru/apple-iphone.html,2015-05-01 00:03:25,6
</code></pre>
<p>and <code>substr</code> is: </p>
<pre><code>123.ru/gadgets/communicators
320-8080.ru/mobilephones
3gmaster.net
3-q.ru/products/smartfony/s
9829192.ru/apple-iphone.html
9829192.ru/index.php?cat=1
acer.com/ac/ru/ru/content/group/smartphones
aj.ru
</code></pre>
<p>I get the desired result with this code, but it takes too long.
I also tried to use the column directly (<code>substr</code> is <code>substr = urls.url.values.tolist()</code>),
and I tried</p>
<pre><code>res = df[df['event_address'].str.contains(urls.url)]
</code></pre>
<p>but it returns: </p>
<blockquote>
<p>TypeError: 'Series' objects are mutable, thus they cannot be hashed</p>
</blockquote>
<p>Is there any way to make it faster, or am I doing something wrong? </p>
 | 1 | 2016-10-04T08:50:41Z | 39,848,317 | <p>Do it like this:</p>
<pre><code>def check_exists(x):
for i in substr:
if i in x:
return True
return False
df2 = df.ix[df.event_address.map(check_exists)]
</code></pre>
<p>or, if you like, write it in one line:</p>
<pre><code>df.ix[df.event_address.map(lambda x: any([True for i in substr if i in x]))]
</code></pre>
<hr>
| 2 | 2016-10-04T09:07:06Z | [
"python",
"string",
"pandas",
"indexing",
"substring"
]
|
Mykrobe predictor JSON to TSV Converter | 39,847,995 | <p>I wanted to ask a question regarding file conversion.</p>
<p>I have a JSON file (after AMR prediction execution) that I want to convert to a TSV file based on Mykrobe-predictor scripts (json_to_tsv.py), and this is my JSON output (<a href="http://bioinfo.cs.ccu.edu.tw/result_TB.json" rel="nofollow">result_TB.json</a>).</p>
<pre><code>./json_to_tsv.py /path/to/JSON_file
</code></pre>
<p>When I pasted the command into the terminal, I got an IndexError at Line 78.</p>
<p><a href="https://github.com/iqbal-lab/Mykrobe-predictor/blob/master/scripts/json_to_tsv.py#L78" rel="nofollow">https://github.com/iqbal-lab/Mykrobe-predictor/blob/master/scripts/json_to_tsv.py#L78</a></p>
<pre><code> def get_sample_name(f):
return f.split('/')[-2]
</code></pre>
<p>And here is the error I get:</p>
<pre><code>mykrobe_version file plate_name sample drug phylo_group species lineage phylo_group_per_covg species_per_covg lineage_per_covg phylo_group_depth species_depth lineage_depth susceptibility variants (gene:alt_depth:wt_depth:conf) genes (prot_mut-ref_mut:percent_covg:depth)
Traceback (most recent call last):
File "./json_to_tsv.py", line 157, in <module>
sample_name = get_sample_name(f)
File "./json_to_tsv.py", line 78, in get_sample_name
return f.split('/')[-2]
IndexError: list index out of range
</code></pre>
<p>Any suggestions would be appreciated.</p>
| 2 | 2016-10-04T08:51:40Z | 39,872,732 | <p>Looking at the code I guess they expect to call the converter with something like:</p>
<pre><code>python json_to_tsv.py plate/sample1/sample1.json
</code></pre>
<p>Try copying your JSON file to a directory called <code>sample1</code> inside a directory called <code>plate</code> and see if you get the same error when you call it like in the example above.</p>
<hr>
<p><strong>Update</strong></p>
<p>The problem is indeed as described above.</p>
<p><strong>Doesn't work:</strong></p>
<pre><code>python json_to_tsv.py result_TB.json
</code></pre>
<blockquote>
<p>mykrobe_version file plate_name sample drug phylo_group species lineage phylo_group_per_covg species_per_covg lineage_per_covg phylo_group_depth species_depth lineage_depth susceptibility variants
(gene:alt_depth:wt_depth:conf) genes
(prot_mut-ref_mut:percent_covg:depth) </p>
<pre><code>Traceback (most recent call last): File "json_to_tsv.py", line 157, in <module>
sample_name = get_sample_name(f) File "json_to_tsv.py", line 78, in get_sample_name
return f.split('/')[-2] IndexError: list index out of range
</code></pre>
</blockquote>
<p><strong>Works:</strong></p>
<pre><code>python json_to_tsv.py plate/sample/result_TB.json
</code></pre>
<blockquote>
<p>mykrobe_version file plate_name sample drug phylo_group species lineage phylo_group_per_covg species_per_covg lineage_per_covg phylo_group_depth species_depth lineage_depth susceptibility variants (gene:alt_depth:wt_depth:conf) genes (prot_mut-ref_mut:percent_covg:depth)</p>
<p>-1 result_TB plate sample NA</p>
</blockquote>
| 0 | 2016-10-05T11:27:22Z | [
"python",
"json",
"linux",
"bioinformatics"
]
|
Memory error python computing permutations of list | 39,848,084 | <p>I want to compute all possible ways of constructing a binary list of length n with the following line</p>
<pre><code>combinations = map(list, itertools.product([0, 1], repeat=n))
</code></pre>
<p>This works fine with low n's but I want to compute this for big n's (i.e. values between 25-35). Is there a better and more efficient way of producing this list? </p>
| 0 | 2016-10-04T08:55:23Z | 39,848,454 | <p>Just create the list "lazily", so as to not store the entire thing in memory at once:</p>
<pre><code>n = some-largish-value
for i in itertools.product([0, 1], repeat=n):
result = do_something_with(list(i))
</code></pre>
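To make the difference concrete, here is a runnable variant (with a deliberately small <code>n</code> and a stand-in for the real work) that never materialises the full list:

```python
import itertools

n = 3
count = 0
for bits in itertools.product([0, 1], repeat=n):
    count += 1  # do_something_with(list(bits)) would go here
print(count)  # 2 ** n results processed, but only one tuple in memory at a time
```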
| 2 | 2016-10-04T09:14:04Z | [
"python",
"python-2.7",
"permutation",
"combinatorics"
]
|
Memory error python computing permutations of list | 39,848,084 | <p>I want to compute all possible ways of constructing a binary list of length n with the following line</p>
<pre><code>combinations = map(list, itertools.product([0, 1], repeat=n))
</code></pre>
<p>This works fine with low n's but I want to compute this for big n's (i.e. values between 25-35). Is there a better and more efficient way of producing this list? </p>
| 0 | 2016-10-04T08:55:23Z | 39,849,515 | <p>Your are trying to find all the combinations of 0 and 1 for n term. Total number of such combination will be <code>2**n</code>. For <code>n=30</code>, total such combinations are <code>1073741824</code>. Huge isn't? </p>
<p>In order to get rid of the memory error, you should be using <a href="https://wiki.python.org/moin/Generators" rel="nofollow">generator</a> which <code>yield</code> the combinations dynamically instead of storing these as list. Also, since it is the combination of just 0s and 1s. Why not print binary numbers from <code>0</code> to <code>'1'*n</code>?</p>
<p>Below is the iterator to achieve this as:</p>
<pre><code>def binary_combinations(num):
my_binary_string = '1'*num
my_int_num = int(my_binary_string, 2)
format_string = '{'+':0{}b'.format(num)+'}'
for i in xrange(my_int_num):
yield format_string.format(i)
else:
raise StopIteration('End of Memory Issue!')
</code></pre>
<p>In order to execute this, do:</p>
<pre><code>>>> for i in binary_combinations(3):
... print i
...
000
001
010
011
100
101
110
</code></pre>
<p>Here <code>n = 3</code>. Now you may use it with n = 30, 40, .. OR whatever you want ;)</p>
| 1 | 2016-10-04T10:05:43Z | [
"python",
"python-2.7",
"permutation",
"combinatorics"
]
|
Memory error python computing permutations of list | 39,848,084 | <p>I want to compute all possible ways of constructing a binary list of length n with the following line</p>
<pre><code>combinations = map(list, itertools.product([0, 1], repeat=n))
</code></pre>
<p>This works fine with low n's but I want to compute this for big n's (i.e. values between 25-35). Is there a better and more efficient way of producing this list? </p>
| 0 | 2016-10-04T08:55:23Z | 39,849,706 | <p>No, if you really want a list of lists then your code is almost as memory efficient as it could be. Your issue is the size of the list, not the way you are computing it.</p>
<p>Do you realize that for n=35 you will have 2**35 = 34,359,738,368 lists, containing 1,202,590,842,880 integers in total? Most (if not all) desktop computers can't hold that many Python objects in memory.</p>
| 0 | 2016-10-04T10:15:39Z | [
"python",
"python-2.7",
"permutation",
"combinatorics"
]
|
convert foreach to foreachpartition in spark | 39,848,127 | <p>I am working with foreach on spark RDD. Now I wanted to convert the same logic to foreachpartition.</p>
<p><strong>Here is my logic:</strong> </p>
<pre><code>batch = []
max = rdd.count()
processed = 0
def writeBatch(b):
///Push this batch to a database
def collect(row):
    dict={'name':row[0],'age':row[1]}
batch.append(dict)
global processed
processed += 1
if len(batch) >= 500 or processed >= max:
writeBatch(batch)
rdd.foreach(collect)
</code></pre>
<p>Here, I am using rdd.count() in the condition. How can I implement this using foreachPartition() effectively.</p>
 | 0 | 2016-10-04T08:57:28Z | 39,853,563 | <p>The problem is that this is not valid logic. It may work in some special cases, but in general you'll lose data.</p>
<p>Inside <code>foreach</code> you mutate two objects, <code>batch</code> and <code>processed</code>, where neither one is truly global (for a whole cluster) or isolated per partition (unless you configured Spark not to reuse interpreters).</p>
<p>Without using external dependencies a correct approach can be expressed like this:</p>
<pre><code>max_size = ...
def write_partition(items, max_size):
batch = []
size = 0
# Process items in batches
for item in items:
batch.append(item)
size += 1
        # Reached desired batch size
if size == max_size:
write_batch(batch)
# Reset batch
batch = []
size = 0
# If partition has been completed and there are still objects in batch
if batch:
write_batch(batch)
rdd.foreachPartition(lambda items: write_partition(items, max_size))
</code></pre>
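The batching logic itself can be checked outside Spark by stubbing out <code>write_batch</code>; a sketch (the element values are arbitrary):

```python
batches = []

def write_batch(batch):
    # Stub: record what would have been pushed to the database.
    batches.append(list(batch))

def write_partition(items, max_size):
    batch = []
    size = 0
    for item in items:
        batch.append(item)
        size += 1
        if size == max_size:
            write_batch(batch)
            batch = []
            size = 0
    if batch:  # flush the final, possibly partial, batch
        write_batch(batch)

write_partition(range(7), 3)
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```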
| 0 | 2016-10-04T13:24:27Z | [
"python",
"apache-spark",
"pyspark"
]
|
Change the underlying data representation with the descriptor protocol | 39,848,164 | <p>Suppose I have an existing class, for example doing some mathematical stuff:</p>
<pre><code>class Vector:
def __init__(self, x, y):
        self.x = x
self.y = y
def norm(self):
return math.sqrt(math.pow(self.x, 2) + math.pow(self.y, 2))
</code></pre>
<p>Now, for some reason, I'd like to have that Python does not store the members <code>x</code> and <code>y</code> like any variable. I'd rather want that Python internally stores them as strings. Or that it stores them into a dedicated buffer, maybe for interoperability with some C code. So (for the string case) I build the following <em>descriptor</em>:</p>
<pre><code>class MyStringMemory(object):
def __init__(self, convert):
self.convert = convert
def __get__(self, obj, objtype):
print('Read')
return self.convert(self.prop)
def __set__(self, obj, val):
print('Write')
self.prop = str(val)
def __delete__(self, obj):
print('Delete')
</code></pre>
<p>And I wrap the existing vector class in a new class where members <code>x</code> and <code>y</code> become <code>MyStringMemory</code>:</p>
<pre><code>class StringVector(Vector):
def __init__(self, x, y):
self.x = x
self.y = y
x = MyStringMemory(float)
y = MyStringMemory(float)
</code></pre>
<p>Finally, some driving code:</p>
<pre><code>v = StringVector(1, 2)
print(v.norm())
v.x, v.y = 10, 20
print(v.norm())
</code></pre>
<p>After all, I replaced the internal representation of <code>x</code> and <code>y</code> to be strings without any change in the original class, but still with its full functionality.</p>
<p>I just wonder: <strong>Will that concept work universally or do I run into serious pitfalls?</strong> As I said, the main idea is to store the data into a specific buffer location that is later on accessed by a C code.</p>
<p><strong>Edit:</strong> The intention of what I'm doing is as follows. Currently, I have a nicely working program where some physical objects, all of type <code>MyPhysicalObj</code> interact with each other. The code <em>inside</em> the objects is vectorized with Numpy. Now I'd also like to vectorize some code <em>over all</em> objects. For example, each object has an <code>energy</code> that is computed by a complicated vectorized code per-object. Now I'd like to sum up all energies. I can iterate over all objects and sum up, but that's slow. So I'd rather have that property <code>energy</code> for each object automatically stored into a globally predefined buffer, and I can just use <code>numpy.sum</code> over that buffer.</p>
 | 2 | 2016-10-04T08:58:53Z | 39,848,393 | <p>If you need a generic convertor (<code>convert</code>) like the one you wrote, this is the way to go.</p>
<p>The biggest downside will be <strong>performance</strong> if you need to create a lot of instances (I assume you might, since the class is called <code>Vector</code>): Python class instantiation is slow.</p>
<p>In this case you might consider using <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="nofollow">namedtuple</a>; the docs show a scenario similar to yours.</p>
<p>As a side note: if possible, why not create a dict with the string representations of x and y in the init method, and then keep using x and y as normal variables without all the converting?</p>
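For reference, a <code>namedtuple</code>-based variant of the original <code>Vector</code> might look like this (a sketch; it does not replicate the string-storage descriptor behaviour):

```python
import math
from collections import namedtuple

class Vector(namedtuple('Vector', ['x', 'y'])):
    __slots__ = ()  # keep instances lightweight, no per-instance __dict__

    def norm(self):
        return math.hypot(self.x, self.y)

v = Vector(3, 4)
print(v.norm())  # 5.0
```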
| 0 | 2016-10-04T09:11:22Z | [
"python"
]
|
Change the underlying data representation with the descriptor protocol | 39,848,164 | <p>Suppose I have an existing class, for example doing some mathematical stuff:</p>
<pre><code>class Vector:
def __init__(self, x, y):
        self.x = x
self.y = y
def norm(self):
return math.sqrt(math.pow(self.x, 2) + math.pow(self.y, 2))
</code></pre>
<p>Now, for some reason, I'd like to have that Python does not store the members <code>x</code> and <code>y</code> like any variable. I'd rather want that Python internally stores them as strings. Or that it stores them into a dedicated buffer, maybe for interoperability with some C code. So (for the string case) I build the following <em>descriptor</em>:</p>
<pre><code>class MyStringMemory(object):
def __init__(self, convert):
self.convert = convert
def __get__(self, obj, objtype):
print('Read')
return self.convert(self.prop)
def __set__(self, obj, val):
print('Write')
self.prop = str(val)
def __delete__(self, obj):
print('Delete')
</code></pre>
<p>And I wrap the existing vector class in a new class where members <code>x</code> and <code>y</code> become <code>MyStringMemory</code>:</p>
<pre><code>class StringVector(Vector):
def __init__(self, x, y):
self.x = x
self.y = y
x = MyStringMemory(float)
y = MyStringMemory(float)
</code></pre>
<p>Finally, some driving code:</p>
<pre><code>v = StringVector(1, 2)
print(v.norm())
v.x, v.y = 10, 20
print(v.norm())
</code></pre>
<p>After all, I replaced the internal representation of <code>x</code> and <code>y</code> to be strings without any change in the original class, but still with its full functionality.</p>
<p>I just wonder: <strong>Will that concept work universally or do I run into serious pitfalls?</strong> As I said, the main idea is to store the data into a specific buffer location that is later on accessed by a C code.</p>
<p><strong>Edit:</strong> The intention of what I'm doing is as follows. Currently, I have a nicely working program where some physical objects, all of type <code>MyPhysicalObj</code> interact with each other. The code <em>inside</em> the objects is vectorized with Numpy. Now I'd also like to vectorize some code <em>over all</em> objects. For example, each object has an <code>energy</code> that is computed by a complicated vectorized code per-object. Now I'd like to sum up all energies. I can iterate over all objects and sum up, but that's slow. So I'd rather have that property <code>energy</code> for each object automatically stored into a globally predefined buffer, and I can just use <code>numpy.sum</code> over that buffer.</p>
| 2 | 2016-10-04T08:58:53Z | 39,854,068 | <p>There is one pitfall regarding python descriptors.</p>
<p>Using your code, you will reference the same value, stored in StringVector.x.prop and StringVector.y.prop respectively:</p>
<pre><code>v1 = StringVector(1, 2)
print('current StringVector "x": ', StringVector.__dict__['x'].prop)
v2 = StringVector(3, 4)
print('current StringVector "x": ', StringVector.__dict__['x'].prop)
print(v1.x)
print(v2.x)
</code></pre>
<p>will have the following output:</p>
<pre><code>Write
Write
current StringVector "x": 1
Write
Write
current StringVector "x": 3
Read
3.0
Read
3.0
</code></pre>
<p>I suppose this is not what you want=). To store a unique value per object, make the following changes:</p>
<pre><code>class MyNewStringMemory(object):
def __init__(self, convert, name):
self.convert = convert
self.name = '_' + name
def __get__(self, obj, objtype):
print('Read')
return self.convert(getattr(obj, self.name))
def __set__(self, obj, val):
print('Write')
setattr(obj, self.name, str(val))
def __delete__(self, obj):
print('Delete')
class StringVector(Vector):
def __init__(self, x, y):
self.x = x
self.y = y
x = MyNewStringMemory(float, 'x')
y = MyNewStringMemory(float, 'y')
v1 = StringVector(1, 2)
v2 = StringVector(3, 4)
print(v1.x, type(v1.x))
print(v1._x, type(v1._x))
print(v2.x, type(v2.x))
print(v2._x, type(v2._x))
</code></pre>
<p>Output:</p>
<pre><code>Write
Write
Write
Write
Read
Read
1.0 <class 'float'>
1 <class 'str'>
Read
Read
3.0 <class 'float'>
3 <class 'str'>
</code></pre>
<p>Also, you could definitely save data in a centralized store, using the descriptor's <code>__set__</code> method.</p>
<p>Refer to this document: <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">https://docs.python.org/3/howto/descriptor.html</a></p>
| 1 | 2016-10-04T13:48:46Z | [
"python"
]
|
Django: Store many forms into one table | 39,848,235 | <p>I'm using django v1.8.</p>
<p>I have one table split into four forms.</p>
<p>Example from my <code>views.py</code></p>
<pre><code>ext_cent = ExternalCentersForm(request.POST, prefix='extcent')
ext_cent_diagnostic = ExternalCentersDiagnosticForm(request.POST,prefix='extcentDiagn')
ext_cent_outcomes = ExternalCentersOutcomesForm(request.POST,prefix='extcentOutcomes')
ext_cent_outcomes2 = ExternalCentersOutcomes2Form(request.POST,prefix='extcentOutcomesTwo')
</code></pre>
<p>When I'm trying to save them I use </p>
<pre><code>ext_cent_object = ext_cent.save(commit=False)
ext_cent_object.author = request.user
ext_cent_object.save()
ext_cent_diagnostic_object = ext_cent_diagnostic.save(commit=False)
ext_cent_diagnostic_object.author = request.user
ext_cent_diagnostic_object.save()
ext_cent_outcomes_object = ext_cent_outcomes.save(commit=False)
ext_cent_outcomes_object.author = request.user
ext_cent_outcomes_object.save()
ext_cent_outcomes2_object = ext_cent_outcomes2.save(commit=False)
ext_cent_outcomes2_object.author = request.user
ext_cent_outcomes2_object.save()
</code></pre>
<p>My <code>forms.py</code>: </p>
<pre><code>class ExternalCentersForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(ExternalCentersForm, self).__init__(*args, **kwargs)
self.helper=FormHelper(self)
#self.fields['patient'].queryset = Demographic.objects.filter(patient_id=self.instance.patient)
self.helper.field_class = 'col-md-8'
self.helper.label_class = 'col-md-3'
self.helper.layout = Layout(
Fieldset(
'<b>Center Information</b>',
Div(
#HTML(u'<br/><div class="col-md-9"><h4><b>Molecular analysis</b></h4></div><br/><br/>'),
Div('location_of_center',css_class='col-md-6'),
Div('name_of_center',css_class="col-md-6"),
Div('type_of_center',css_class="col-md-6"),
css_class='row',
),
),
FormActions(
Submit('submit', "Save changes"),
Submit('cancel',"Cancel")
),
)
self.helper.form_tag = False
self.helper.form_show_labels = True
class Meta:
model = Ext_centers
exclude = ['center_id', 'author']
list_display = ('title', 'pub_date', 'author')
class ExternalCentersDiagnosticForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(ExternalCentersDiagnosticForm, self).__init__(*args, **kwargs)
self.helper=FormHelper(self)
#self.fields['patient'].queryset = Demographic.objects.filter(patient_id=self.instance.patient)
self.helper.field_class = 'col-md-8'
self.helper.label_class = 'col-md-3'
self.helper.layout = Layout(
Fieldset(
'<b>Diagnostic categories</b>',
Div(
HTML(u'<div class="col-md-9"><h4><b>Paroxysmal nocturnal haemoglobinuria (PNH)</b></h4></div>'),
Div('diagn_categ_pnh_no_patient',css_class='col-md-6'),
Div('diagn_categ_pnh_distribution',css_class="col-md-6"),
css_class='row',
),
),
FormActions(
Submit('submit', "Save changes"),
Submit('cancel',"Cancel")
),
)
self.helper.form_tag = False
self.helper.form_show_labels = True
class Meta:
model = Ext_centers
exclude = ['center_id', 'author']
list_display = ('title', 'pub_date', 'author')
class ExternalCentersOutcomesForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(ExternalCentersOutcomesForm, self).__init__(*args, **kwargs)
self.helper=FormHelper(self)
#self.fields['patient'].queryset = Demographic.objects.filter(patient_id=self.instance.patient)
self.helper.field_class = 'col-md-8'
self.helper.label_class = 'col-md-3'
self.helper.layout = Layout(
Fieldset(
'<b>1. Deaths</b>',
Div(
HTML(u'<div class="col-md-9"><h4><b>2015</b></h4></div>'),
Div('outcomes_year2015_thal',css_class='col-md-6'),
Div('outcomes_year2015_sickle',css_class="col-md-6"),
Div('outcomes_year2015_rare',css_class="col-md-6"),
css_class='row',
),
),
FormActions(
Submit('submit', "Save changes"),
Submit('cancel',"Cancel")
),
)
self.helper.form_tag = False
self.helper.form_show_labels = True
class Meta:
model = Ext_centers
exclude = ['center_id', 'author']
list_display = ('title', 'pub_date', 'author')
class ExternalCentersOutcomes2Form(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(ExternalCentersOutcomes2Form, self).__init__(*args, **kwargs)
self.helper=FormHelper(self)
#self.fields['patient'].queryset = Demographic.objects.filter(patient_id=self.instance.patient)
self.helper.field_class = 'col-md-8'
self.helper.label_class = 'col-md-3'
self.helper.layout = Layout(
Fieldset(
'<b>2. More measurements</b>',
Div(
#HTML(u'<br/><div class="col-md-9"><h4><b>Molecular analysis</b></h4></div><br/><br/>'),
Div('out_patients_married',css_class='col-md-6'),
Div('out_patients_divorced',css_class="col-md-6"),
Div('out_patients_single',css_class="col-md-6"),
Div('out_patients_cohabiting',css_class="col-md-6"),
Div('out_patients_parented_children',css_class="col-md-6"),
Div('out_thal_women_preg',css_class="col-md-6"),
Div('out_patients_splene',css_class="col-md-6"),
css_class='row',
),
),
FormActions(
Submit('submit', "Save changes"),
Submit('cancel',"Cancel")
),
)
self.helper.form_tag = False
self.helper.form_show_labels = True
class Meta:
model = Ext_centers
exclude = ['center_id', 'author']
list_display = ('title', 'pub_date', 'author')
</code></pre>
<p>Example from <code>models.py</code></p>
<pre><code>class Ext_centers(models.Model):
center_id = models.AutoField(primary_key=True)
center_city = models.CharField('City',max_length=45, null=True, blank=True)
center_country = models.CharField('Country',max_length=45, null=True, blank=True)
center_name_of_medical_director = models.CharField('Name of medical director',max_length=45, null=True, blank=True)
center_name_of_respondent = models.CharField('Name of respondent',max_length=45, null=True, blank=True)
center_status_of_respondent = models.CharField('Status of respondent',max_length=45, null=True, blank=True)
center_telephone = models.IntegerField('Telephone', null=True,blank=True)
center_email = models.EmailField('E-mail',null=True,blank=True)
center_fax = models.IntegerField('Fax',null=True,blank=True)
center_website = models.CharField('Website',max_length=45, null=True, blank=True)
....
pub_date = models.DateTimeField(auto_now=True)
author = models.ForeignKey(User)
history = HistoricalRecords()
def __str__(self):
return self.name_of_center
</code></pre>
<p>As a result, for each form I get a new row in my table containing only the values from that specific form. I need to store all of these forms' data in one table.</p>
| 0 | 2016-10-04T09:02:06Z | 39,848,573 | <p>In place of a plain <code>save</code>, you need an update: bind each later form to the instance that was already saved.</p>
<p>Here is an example :</p>
<pre><code> ext_cent_stored = Ext_centers.objects.get(center_id=ext_cent_object.center_id)
form = ExternalCentersForm(request.POST, instance=ext_cent_stored)
if form.is_valid():
form.save()
form = ExternalCentersDiagnosticForm(request.POST, prefix='extcentDiagn',instance=ext_cent_stored)
if form.is_valid():
form.save()
form = ExternalCentersOutcomesForm(request.POST,prefix='extcentOutcomes', instance=ext_cent_stored)
if form.is_valid():
form.save()
form = ExternalCentersOutcomes2Form(request.POST,prefix='extcentOutcomesTwo', instance=ext_cent_stored)
if form.is_valid():
form.save()
</code></pre>
<p>Use this for every save method and you will not lose old data.</p>
<p>Thanks.</p>
| 1 | 2016-10-04T09:20:14Z | [
"python",
"django",
"django-models",
"django-forms",
"django-views"
]
|
How to dump a dictionary into an .xlsx file with proper column alignment? | 39,848,392 | <p>I have a dictionary with 2000 items which looks like this:</p>
<pre><code>d = {'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10), ...}
</code></pre>
<p>Given that I want to write it to an <code>.xlsx</code> file, I use this code (taken from <a href="http://stackoverflow.com/questions/23113231/python-write-dictionary-values-in-an-excel-file?answertab=active#tab-top">here</a>):</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('myfile.xlsx')
worksheet = workbook.add_worksheet()
row = 0
col = 0
order=sorted(d.keys())
for key in order:
row += 1
worksheet.write(row, col, key)
for item in d[key]:
worksheet.write(row, col + 1, item)
row += 1
workbook.close()
</code></pre>
<p>This produces an <code>.xlsx</code> file with the following alignment:</p>
<pre class="lang-none prettyprint-override"><code> A B
06030001 0
10
06030002 10
10
06030003 5
10
</code></pre>
<p>However, this is the alignment I am after:</p>
<pre class="lang-none prettyprint-override"><code>A B C
06030001 0 10
06030002 10 10
06030003 5 10
</code></pre>
<p><strong>What should I change in the script to achieve this?</strong></p>
| 1 | 2016-10-04T09:11:21Z | 39,848,529 | <p>This is what I think should help:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('myfile.xlsx')
worksheet = workbook.add_worksheet()
row = 0
col = 0
order=sorted(d.keys())
for key in order:
row += 1
worksheet.write(row, col, key)
i =1
for item in d[key]:
worksheet.write(row, col + i, item)
i += 1
workbook.close()
IN:
d = {'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10)}
OUT:
A B C
   06030011      6         0
   06030016      2        10
   10071353      0         0
</code></pre>
| 2 | 2016-10-04T09:18:17Z | [
"python",
"excel",
"dictionary",
"xlsxwriter"
]
|
How to dump a dictionary into an .xlsx file with proper column alignment? | 39,848,392 | <p>I have a dictionary with 2000 items which looks like this:</p>
<pre><code>d = {'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10), ...}
</code></pre>
<p>Given that I want to write it to an <code>.xlsx</code> file, I use this code (taken from <a href="http://stackoverflow.com/questions/23113231/python-write-dictionary-values-in-an-excel-file?answertab=active#tab-top">here</a>):</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('myfile.xlsx')
worksheet = workbook.add_worksheet()
row = 0
col = 0
order=sorted(d.keys())
for key in order:
row += 1
worksheet.write(row, col, key)
for item in d[key]:
worksheet.write(row, col + 1, item)
row += 1
workbook.close()
</code></pre>
<p>This produces an <code>.xlsx</code> file with the following alignment:</p>
<pre class="lang-none prettyprint-override"><code> A B
06030001 0
10
06030002 10
10
06030003 5
10
</code></pre>
<p>However, this is the alignment I am after:</p>
<pre class="lang-none prettyprint-override"><code>A B C
06030001 0 10
06030002 10 10
06030003 5 10
</code></pre>
<p><strong>What should I change in the script to achieve this?</strong></p>
| 1 | 2016-10-04T09:11:21Z | 39,849,006 | <p>The best way is to use <code>pandas</code> for this task. Some documentation is available here (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html</a> )</p>
<pre><code>import pandas as pd
a = {'1':[1,2,3,4], '2':[5,6,7,8]}
a = pd.DataFrame(a)
writer = pd.ExcelWriter('test.xlsx', engine='xlsxwriter')
a.to_excel(writer, sheet_name='Sheet1')
writer.save()
</code></pre>
<p>You may need to install <code>xlsxwriter</code> package</p>
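<p>For the dict of tuples from the question, the data first needs reshaping into one row per key. A small plain-Python sketch of that step (my own illustration, assuming the two-value tuples shown in the question):</p>

```python
# Illustrative helper: turn {'key': (v1, v2), ...} into sorted rows
# [['key', v1, v2], ...], ready for pd.DataFrame(rows, columns=[...]).
def dict_to_rows(d):
    return [[key] + list(d[key]) for key in sorted(d)]

rows = dict_to_rows({'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10)})
```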
| 1 | 2016-10-04T09:40:46Z | [
"python",
"excel",
"dictionary",
"xlsxwriter"
]
|
How to dump a dictionary into an .xlsx file with proper column alignment? | 39,848,392 | <p>I have a dictionary with 2000 items which looks like this:</p>
<pre><code>d = {'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10), ...}
</code></pre>
<p>Given that I want to write it to an <code>.xlsx</code> file, I use this code (taken from <a href="http://stackoverflow.com/questions/23113231/python-write-dictionary-values-in-an-excel-file?answertab=active#tab-top">here</a>):</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('myfile.xlsx')
worksheet = workbook.add_worksheet()
row = 0
col = 0
order=sorted(d.keys())
for key in order:
row += 1
worksheet.write(row, col, key)
for item in d[key]:
worksheet.write(row, col + 1, item)
row += 1
workbook.close()
</code></pre>
<p>This produces an <code>.xlsx</code> file with the following alignment:</p>
<pre class="lang-none prettyprint-override"><code> A B
06030001 0
10
06030002 10
10
06030003 5
10
</code></pre>
<p>However, this is the alignment I am after:</p>
<pre class="lang-none prettyprint-override"><code>A B C
06030001 0 10
06030002 10 10
06030003 5 10
</code></pre>
<p><strong>What should I change in the script to achieve this?</strong></p>
| 1 | 2016-10-04T09:11:21Z | 39,849,172 | <p>I think you just misplaced a variable.</p>
<pre><code>worksheet.write(row, col + 1, item)
row += 1
</code></pre>
<p>This is wrong: <code>row += 1</code> should be replaced with <code>col += 1</code>.</p>
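<p>The intended coordinate logic can be checked without Excel at all. Here is a plain-Python sketch (my own illustration, not part of the original answer) that maps the sorted dict onto <code>(row, column)</code> cells the way the corrected loop does:</p>

```python
# Illustrative only: build the (row, column) -> value mapping that the
# corrected xlsxwriter loop writes out.
def layout(d):
    cells = {}
    for row, key in enumerate(sorted(d), start=1):
        cells[(row, 0)] = key                      # column A holds the key
        for col, item in enumerate(d[key], start=1):
            cells[(row, col)] = item               # columns B, C hold the values
    return cells

grid = layout({'10071353': (0, 0), '06030011': (6, 0), '06030016': (2, 10)})
```

<p>Every key stays on its own row with its tuple values beside it, which is the A/B/C layout the question asks for.</p>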
<p>This is the correct way; you can reuse the same variables:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('myfile.xlsx')
worksheet = workbook.add_worksheet()
row = 0
col = 0
order=sorted(d.keys())
for key in order:
row += 1
print(key)
worksheet.write(row, col, key)
for item in d[key]:
print(item,row, col+1)
worksheet.write(row, col + 1, item)
col += 1
col = 0
workbook.close()
</code></pre>
<p>Output:</p>
<p><a href="http://i.stack.imgur.com/iMyOZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/iMyOZ.png" alt="enter image description here"></a></p>
| 2 | 2016-10-04T09:48:45Z | [
"python",
"excel",
"dictionary",
"xlsxwriter"
]
|
Scipy minimize a scalar with Brent method throws an Overflow 34 | 39,848,410 | <p>I'd like to find a local minimum of the function <code>f(x) = x^3 + x^2 + x - 2</code>
where <code>x</code> is between <code><-10; 10></code>. I use Anaconda 3 on Windows 64bit.</p>
<p>My scipy python code throws an error:</p>
<pre><code>from scipy import optimize
def f(x):
return (x**3)+(x**2)+x-2
x_min = optimize.minimize_scalar(f, bounds=[-10, 10], method='brent')
</code></pre>
<blockquote>
<p>OverflowError: (34, 'Result too large')</p>
</blockquote>
<p>Isn't a cubic function too simple to break a scipy optimization package?</p>
| 0 | 2016-10-04T09:12:05Z | 39,849,390 | <p>When using explicit bounds, changing <code>method</code> to <code>'bounded'</code> is required:</p>
<pre><code>from scipy import optimize
def f(x):
return (x**3)+(x**2)+x-2
x_min = optimize.minimize_scalar(f, bounds=[-10, 10], method='bounded')
print(x_min)
</code></pre>
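<p>As far as I can tell, the overflow happens because the <code>'brent'</code> method does not honour <code>bounds</code> at all: <code>f(x) = x^3 + x^2 + x - 2</code> is unbounded below, so Brent's iterates run off towards large negative <code>x</code> until <code>x**3</code> overflows a float, which matches errno 34. A quick sanity check of the bounded answer, without scipy (my own sketch):</p>

```python
# f'(x) = 3x^2 + 2x + 1 has a negative discriminant (4 - 12 < 0), so f is
# strictly increasing and the bounded minimum sits at the left endpoint.
def f(x):
    return x**3 + x**2 + x - 2

xs = [-10 + i * 0.01 for i in range(2001)]   # grid over [-10, 10]
x_min = min(xs, key=f)                       # expected: -10.0
```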
| 1 | 2016-10-04T09:59:36Z | [
"python",
"scipy"
]
|
Selenium WebDriver fails when using window_handles | 39,848,485 | <p>I am trying to handle two tabs in Python Selenium WebDriver with Chrome as the browser.</p>
<p>Finding an element by link text gives results on the first tab as well as the second tab if I keep the <strong>Chrome browser as the selected window</strong> (i.e. the foreground process).</p>
<p>When I change the control to new tab using</p>
<pre><code>driver.switch_to_window(driver.window_handles[1])
</code></pre>
<p>and minimise Google Chrome (i.e. if I select any window other than Google Chrome), I get an <strong>Element Not Found</strong> exception when finding the link text, but only on the second tab, never on the first tab.</p>
<p>The first tab always returns a result.</p>
<pre><code>def DriverCreation():
try:
        Driver = WebBase.initWebScraping(URL) # Methods visible Driver.driver and Driver.logger
        DriverWait = Driver.EC
print "Driver Creation Successful"
return Driver
except:
print "Driver Initalisation Failed"
sys.exit(1)
if __name__ == '__main__':
URL = 'https://www.example.com/'
Driver = DriverCreation() # will Load first Tab with www.Example.com
aboutlink = Driver.driver.find_element_by_link_text('about')
aboutlink.send_keys(Keys.CONTROL + Keys.RETURN)
Driver.driver.switch_to_window(Driver.driver.window_handles[1])
contactLink = Driver.driver.find_element_by_link_text('contact')
print contactLink.text() #** getting error if i change the focus from Google Chrome and works fine if i keep the window focus on Google Chrome**
</code></pre>
| 0 | 2016-10-04T09:15:53Z | 39,865,480 | <p>You can manage tabs using the following code:</p>
<pre><code>driver.execute_script("window.open('"+url+"', '_blank');")
driver.switch_to_window(driver.window_handles[1])
</code></pre>
| 0 | 2016-10-05T04:31:45Z | [
"python",
"google-chrome",
"selenium",
"selenium-webdriver",
"windows-7"
]
|
How to insert NaN array into a numpy 2D array | 39,848,700 | <p>I'm trying to insert an arbitrary number of rows of NaN values within a 2D array at specific places. I'm logging some data from a microcontroller in a .csv file and parsing with python.</p>
<p>The data is stored in a 3 column 2D array like this</p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0) ...,
(39.0, 1.0, -47.0) (40.0, 1.0, -45.0) (41.0, 1.0, -47.0)]
</code></pre>
<p>The first column is a sequence counter. What I'm trying to do is iterate through the sequence values, diff the current and previous sequence numbers, and insert as many rows of NaN as there are missing sequence numbers.</p>
<p>Basically, </p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0)]
</code></pre>
<p>would become</p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (nan, nan, nan) (125.0, 1.0, -44.0)]
</code></pre>
<p>However the following implementation of <code>np.insert</code> produces an error</p>
<pre><code>while (i < len(list[1])):
pid = list[i][0]
newMissing = (pid - LastGoodId + 255) % 256
TotalMissing = TotalMissing + newMissing
np.insert(list,i,np.zeros(newMissing,1) + np.nan)
i = i + newMissing
list[i][0] = TotalMissing
LastGoodId = pid
</code></pre>
<blockquote>
<p>---> 28 np.insert(list,i,np.zeros(newMissing,1) + np.nan)
29 i = i + newMissing
30 list[i][0] = TotalMissing</p>
<p>TypeError: data type not understood</p>
</blockquote>
<p>Any ideas on how I can accomplish this?</p>
| 0 | 2016-10-04T09:25:42Z | 39,848,762 | <p>From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html" rel="nofollow">doc of <code>np.insert()</code></a>:</p>
<pre><code>import numpy as np
a = np.array([(122.0, 1.0, -47.0), (123.0, 1.0, -47.0), (125.0, 1.0, -44.0)])
np.insert(a, 2, np.nan, axis=0)
array([[ 122., 1., -47.],
[ 123., 1., -47.],
[ nan, nan, nan],
[ 125., 1., -44.]])
</code></pre>
| 1 | 2016-10-04T09:29:19Z | [
"python",
"arrays",
"numpy"
]
|
How to insert NaN array into a numpy 2D array | 39,848,700 | <p>I'm trying to insert an arbitrary number of rows of NaN values within a 2D array at specific places. I'm logging some data from a microcontroller in a .csv file and parsing with python.</p>
<p>The data is stored in a 3 column 2D array like this</p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0) ...,
(39.0, 1.0, -47.0) (40.0, 1.0, -45.0) (41.0, 1.0, -47.0)]
</code></pre>
<p>The first column is a sequence counter. What I'm trying to do is iterate through the sequence values, diff the current and previous sequence numbers, and insert as many rows of NaN as there are missing sequence numbers.</p>
<p>Basically, </p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0)]
</code></pre>
<p>would become</p>
<pre><code>[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (nan, nan, nan) (125.0, 1.0, -44.0)]
</code></pre>
<p>However the following implementation of <code>np.insert</code> produces an error</p>
<pre><code>while (i < len(list[1])):
pid = list[i][0]
newMissing = (pid - LastGoodId + 255) % 256
TotalMissing = TotalMissing + newMissing
np.insert(list,i,np.zeros(newMissing,1) + np.nan)
i = i + newMissing
list[i][0] = TotalMissing
LastGoodId = pid
</code></pre>
<blockquote>
<p>---> 28 np.insert(list,i,np.zeros(newMissing,1) + np.nan)
29 i = i + newMissing
30 list[i][0] = TotalMissing</p>
<p>TypeError: data type not understood</p>
</blockquote>
<p>Any ideas on how I can accomplish this?</p>
| 0 | 2016-10-04T09:25:42Z | 39,848,990 | <p><strong>Approach #1</strong></p>
<p>We can use an initialization based approach to handle multiple gaps and gaps of any lengths -</p>
<pre><code># Pre-processing step to create monotonically increasing array for first col
id_arr = np.zeros(arr.shape[0])
id_arr[np.flatnonzero(np.diff(arr[:,0])<0)+1] = 256
a0 = id_arr.cumsum() + arr[:,0]
range_arr = np.arange(a0[0],a0[-1]+1)
out = np.full((range_arr.shape[0],arr.shape[1]),np.nan)
out[np.in1d(range_arr,a0)] = arr
</code></pre>
<p>Sample run -</p>
<pre><code>In [233]: arr # Input array
Out[233]:
array([[ 122., 1., -47.],
[ 123., 1., -47.],
[ 126., 1., -44.],
[ 39., 1., -47.],
[ 40., 1., -45.],
[ 41., 1., -47.]])
In [234]: out
Out[234]:
array([[ 122., 1., -47.],
[ 123., 1., -47.],
[ nan, nan, nan],
[ nan, nan, nan],
[ 126., 1., -44.],
[ nan, nan, nan], (168 NaN rows)
.....
[ nan, nan, nan],
[ nan, nan, nan],
[ 39., 1., -47.],
[ 40., 1., -45.],
[ 41., 1., -47.]])
</code></pre>
<p><strong>Approach #2</strong></p>
<p>An alternative approach could be suggested to handle such generic cases using <code>np.insert</code> instead of initialization, like so -</p>
<pre><code>idx = np.flatnonzero(~np.in1d(range_arr,a0))
out = np.insert(arr,idx - np.arange(idx.size),np.nan,axis=0)
</code></pre>
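<p>A self-contained run of Approach #2 on the single-gap example from the question (my own check; the sequence does not wrap here, so the wraparound preprocessing from Approach #1 is skipped):</p>

```python
import numpy as np

arr = np.array([[122., 1., -47.],
                [123., 1., -47.],
                [125., 1., -44.]])
a0 = arr[:, 0]
range_arr = np.arange(a0[0], a0[-1] + 1)        # 122, 123, 124, 125
idx = np.flatnonzero(~np.in1d(range_arr, a0))   # gap positions in the full range
out = np.insert(arr, idx - np.arange(idx.size), np.nan, axis=0)
```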
| 1 | 2016-10-04T09:39:44Z | [
"python",
"arrays",
"numpy"
]
|
Filtering and displaying mySQL Data on Webpage via Python? PHP? | 39,848,809 | <p>I'd like to build a website that reads data from a database, lets me filter that data with different Frontend dropdown-filters and presents me the filtered data in a table shown on the Webpage.</p>
<p>In addition I'd like to have the possibility to use that data further to store it into some file, or use it in a python script that handles that data further.</p>
<p>Should I build the website using PHP, or what would you recommend?
I'm new to this area, so I don't know much about website interaction yet; so far I only have some experience writing Python scripts.</p>
<p>Thank you for any advice.</p>
| -2 | 2016-10-04T09:31:22Z | 39,849,072 | <p>It depends on how familiar you are with PHP or Python. If you are new to both, I recommend you to start with <a href="http://www.w3schools.com/php/" rel="nofollow">PHP</a>. If you are familiar with Python, I recommend you to use <a href="http://flask.pocoo.org/docs/0.11/tutorial/" rel="nofollow">Flask</a> or <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">Django</a> to make your project.</p>
| 1 | 2016-10-04T09:43:55Z | [
"php",
"python",
"mysql"
]
|
Remote MySQL connection works from cmdline, but not with Apache (Webpy, Mod-WSGI) | 39,848,813 | <ul>
<li>Server OS: Red Hat Enterprise Linux Server release 5.11 </li>
<li>Apache: 2.2.3</li>
<li>Python: 2.7.6</li>
<li>Mod_WSGI 4.5.3, Web.Py, MySQLdb</li>
</ul>
<p>Hey! I have created a Web.Py site that queries data from a remote Oracle database which works perfectly, but I have now run into a problem when trying to create user authentication from a remote MySQL database.</p>
<p>Following the steps from here: <a href="http://webpy.org/cookbook/userauthpgsql" rel="nofollow">http://webpy.org/cookbook/userauthpgsql</a></p>
<p>The exact same program works from the python commandline, but for some reason when trying to connect to the remote MySQL database from Apache, I get the following error:</p>
<pre><code><type 'exceptions.AttributeError'> at /login
'module' object has no attribute 'connect'
</code></pre>
<p>Things I've tried:</p>
<ol>
<li>Disabled SELinux</li>
<li>Added permissions for apache@myserversIP to the MySQL.</li>
<li>Tried both MySQLdb & web.database for the connection</li>
<li>Googled a lot.</li>
</ol>
<p>Here's the code that is being run from my main program:</p>
<pre><code>myDB = MySQLdb.connect(host='ip',user='user',passwd='password',db='db',port=3306 );
cHandler = myDB.cursor()
cHandler.execute("SHOW TABLES;")
results = cHandler.fetchall()
for items in results:
print items
</code></pre>
<p>Any help regarding the issue, or ways to diagnose it further would be appreciated a lot! Thanks</p>
<p>Edit1: I checked where the MySQLdb is being searched from: /usr/local/lib/python2.7/site-packages<br>
Which seems to be correct.</p>
<p>Edit2:<br>
ldd mod_wsgi.so<br>
linux-vdso.so.1 => (0x00007fff4b5fd000)<br>
libpython2.7.so.1.0 => /lib64/libpython2.7.so.1.0<br>
libpthread.so.0 => /lib64/libpthread.so.0<br>
libdl.so.2 => /lib64/libdl.so.2<br>
libutil.so.1 => /lib64/libutil.so.1<br>
libm.so.6 => /lib64/libm.so.6<br>
libc.so.6 => /lib64/libc.so.6<br>
/lib64/ld-linux-x86-64.so.2 </p>
| 0 | 2016-10-04T09:31:26Z | 39,855,544 | <p>As has been commented, check whether there is a file named <code>MySQLdb.py</code> in the same directory as the file you're trying to run; a local file with that name would shadow the real module.</p>
| 0 | 2016-10-04T14:55:16Z | [
"python",
"apache",
"mod-wsgi",
"mysql-python",
"web.py"
]
|
Replace the column into designed format | 39,848,869 | <p>I have a column like that </p>
<pre><code>Hei-3_ctg7180000009945 pan gene 1 13249 . . . ID=Hei-3_ctg7180000009945;Name=Hei-3_ctg7180000009945
Hei-3_ctg7180000009946 pan gene 1 587 . . . ID=Hei-3_ctg7180000009946;Name=Hei-3_ctg7180000009946
</code></pre>
<p>And I want to make it like:</p>
<pre><code>Hei-3_ctg7180000009945 pan gene 1 13249 . . . ID=Hei-3_ctg7180000009945;Name=Hei-3_ctg7180000009945
Hei-3_ctg7180000009945 pan mRNA 1 13249 . . . ID=mHei-3_ctg7180000009945;parent=Hei-3_ctg7180000009945
Hei-3_ctg7180000009945 pan exon 1 13249 . . . ID=eHei-3_ctg7180000009945;parent=mHei-3_ctg7180000009945
</code></pre>
<p>Any suggestions on how I can do this easily? Basically I will print each row three times: the first row is the same as the input; the second row changes <code>gene</code> to <code>mRNA</code> and changes the last column accordingly; and similarly for the third row.</p>
<p>This is what I tried in Python. I can read each element, but I am not sure how to modify some of the elements:</p>
<pre><code>with open('10_pan.gff3') as f:
for line in f:
lines1 = line.rstrip('\n').split(' ')
ll = line.split('\t')
#print (ll)
print(lines1)
</code></pre>
| 0 | 2016-10-04T09:34:15Z | 39,849,118 | <p>Very vague question, but to print with a fixed length (5 in this case) you can use</p>
<pre><code>print("{:5}".format(123))
print("{:5}".format(12345))
</code></pre>
<p>yields</p>
<pre><code> 123
12345
</code></pre>
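<p>For the three-row expansion the question actually describes, here is a hedged sketch (my own reading of the desired output; the <code>ID=...;parent=...</code> attribute format is copied from the question's example):</p>

```python
# Emit each tab-separated gene line three times (gene, mRNA, exon),
# rewriting the feature type (column 3) and the attributes (column 9).
def expand(line):
    fields = line.rstrip('\n').split('\t')
    name = fields[0]
    mrna = fields[:]
    mrna[2] = 'mRNA'
    mrna[8] = 'ID=m{0};parent={0}'.format(name)
    exon = fields[:]
    exon[2] = 'exon'
    exon[8] = 'ID=e{0};parent=m{0}'.format(name)
    return ['\t'.join(f) for f in (fields, mrna, exon)]

rows = expand('Hei-3_ctg7180000009945\tpan\tgene\t1\t13249\t.\t.\t.\t'
              'ID=Hei-3_ctg7180000009945;Name=Hei-3_ctg7180000009945')
```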
| 0 | 2016-10-04T09:46:06Z | [
"python",
"shell",
"unix"
]
|
Problems while importing h5py package | 39,848,991 | <p>I am coding in python and trying to <code>import h5py</code>. I have installed this package before. When I try to do this, it gives this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/h5py/__init__.py", line 34, in <module>
from ._conv import register_converters as _register_converters
File "h5py/h5t.pxd", line 14, in init h5py._conv (/build/h5py-nQFNYZ/h5py-2.6.0/h5py/_conv.c:7359)
File "h5py/numpy.pxd", line 66, in init h5py.h5t (/build/h5py-nQFNYZ/h5py-2.6.0/h5py/h5t.c:20505)
ValueError: numpy.dtype has the wrong size, try recompiling
</code></pre>
<p>The point is that when I <code>import h5py</code> in the directory:<br>
<code>/usr/lib/python2.7/dist-packages/</code> it works, but I do not have enough space there. </p>
<p>Does anyone know how to import this package in my data directory?
(I tried to export but it did not work!)</p>
| 0 | 2016-10-04T09:39:44Z | 39,849,311 | <p>The fact that it works in one place and not another points to a possible conflict between several installs.</p>
<p>I suggest that you make sure to have only one install of NumPy and of h5py.</p>
<p>To diagnose the problem, issue the commands</p>
<pre><code>python -c 'import h5py; print h5py.__file__'
python -c 'import numpy; print numpy.__file__'
</code></pre>
<p>in your home directory then in <code>/usr/lib/python2.7/dist-packages/</code>, and copy the output here.</p>
<p>A likely solution is to</p>
<ol>
<li>pip uninstall h5py</li>
<li>pip uninstall numpy</li>
</ol>
<p>and rely on your package manager for the installation. If that is not appropriate (outdated packages, for instance), install everything with pip.</p>
| 0 | 2016-10-04T09:56:08Z | [
"python",
"numpy",
"h5py"
]
|
Skipping long entries for history search in IPython 5.x | 39,849,137 | <p>I use <code>ipython</code> console quite heavily for python workflow. As happy as I am with the new 5.x series <a href="http://ipython.readthedocs.io/en/stable/whatsnew/version5.html" rel="nofollow">released</a>, I find the ability to freely navigate inside the long code blocks a double-edged sword when it comes to history search.</p>
<p>For example, typing <code>import f</code> and hitting the up key for history search completion now prompts the following to appear if there was a recent pasted code block that started with importing <code>foo</code>:</p>
<pre><code>In [100]: import foo
...:
...: # copy-pasted code block that shows up in history
...: for foobar in foo.bar:
...: pass
...:
</code></pre>
<p>Now if you were simply looking for a one line import statement, and if the code snippet in history is sufficiently long, that's a lot of lines to navigate upwards before you can switch to an earlier (desired) <code>import foo</code> in history.</p>
<p>So my question is, is there a shortcut that allows to skip a long history entry to the previous one? Browsing history with <code>Ctrl+R</code> is an obvious workaround for this issue, but I'd like to know if there's a way to get it to work with the up key.</p>
| 0 | 2016-10-04T09:47:14Z | 39,849,333 | <p><code><Up></code>/<code><C-P></code> and <code><Down></code>/<code><C-N></code> iterate over every line in your history.</p>
<p>Use <code><PageDown></code> and <code><PageUp></code> keys to iterate over entries instead.</p>
<p>Here's a full list of shortcuts: <a href="http://ipython.readthedocs.io/en/stable/config/shortcuts/index.html" rel="nofollow">http://ipython.readthedocs.io/en/stable/config/shortcuts/index.html</a>.</p>
| 1 | 2016-10-04T09:57:04Z | [
"python",
"ipython",
"keyboard-shortcuts"
]
|
Regular expression tags on multiple lines | 39,849,145 | <p>How to extract the contents between these tags when they're on multiple/ different lines? </p>
<pre><code><link>
https://widget.websta.me/rss/n/bleh
</link>
</code></pre>
<p>I tried:</p>
<pre><code>content = findall('<link>(.*)</link>', web_page_contents, re.DOTALL)
</code></pre>
<p>But I get everything up to the next closing <code></link></code> instead of just this one.</p>
| -1 | 2016-10-04T09:47:41Z | 39,849,397 | <p>You can use <code>BeautifulSoup</code> for that. It has very good <a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">documentation</a> and is very easy to use.</p>
<p>The following code will work:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
r = requests.get(webpage_url)
soup = BeautifulSoup(r.content, 'lxml')
for link in soup.find_all('link'):
print link.text
</code></pre>
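<p>If a regex approach is still preferred, the fix for the original attempt is a non-greedy quantifier (<code>.*?</code>), so the match stops at the first closing tag instead of running on to a later one. A stdlib-only sketch (the sample text is my own stand-in for <code>web_page_contents</code>):</p>

```python
import re

page = """<link>
https://widget.websta.me/rss/n/bleh
</link>
<link>
https://example.com/other-feed
</link>"""

# Non-greedy group plus re.DOTALL so '.' also matches the newlines.
links = [m.strip() for m in re.findall(r'<link>(.*?)</link>', page, re.DOTALL)]
```

<p>That said, BeautifulSoup remains the more robust choice for real HTML or XML.</p>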
| 0 | 2016-10-04T09:59:49Z | [
"python",
"html",
"regex"
]
|
Translate lists to a 2D numpy matrix | 39,849,208 | <p>I have a list of lists in python. The lists is the following:</p>
<pre><code>[[196, 242, 3],
[186, 302, 3],
[22, 377, 1],
[196, 377, 3],
....
]
</code></pre>
<p>The first column corresponds to users (1:943), the second to items (1:1682), and the third to the users' votes for the items. I want to try the matrix factorization <a href="https://github.com/mnick/scikit-tensor" rel="nofollow">library</a>. Should I create a users x items matrix? If yes, how can I create such a matrix in Python, with one axis the size of the users, the other the size of the items, and the values the users' votes?</p>
<p><strong>EDIT:</strong> I also checked the implementation of <code>nmf.py</code>, which requires a 2D matrix as input, not a list or a sparse representation.</p>
| 0 | 2016-10-04T09:50:26Z | 39,849,268 | <p>Sure.</p>
<p>You can create a 2-dimensional numpy array (which you can treat as a matrix), using the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html" rel="nofollow"><code>np.array</code></a> function:</p>
<pre><code>mat = np.array(list_of_lists)
</code></pre>
| 1 | 2016-10-04T09:54:21Z | [
"python",
"numpy",
"matrix"
]
|
Translate lists to a 2D numpy matrix | 39,849,208 | <p>I have a list of lists in python. The lists is the following:</p>
<pre><code>[[196, 242, 3],
[186, 302, 3],
[22, 377, 1],
[196, 377, 3],
....
]
</code></pre>
<p>The first column corresponds to users (1:943), the second to items (1:1682), and the third to the users' votes for the items. I want to try the matrix factorization <a href="https://github.com/mnick/scikit-tensor" rel="nofollow">library</a>. Should I create a users x items matrix? If yes, how can I create such a matrix in Python, with one axis the size of the users, the other the size of the items, and the values the users' votes?</p>
<p><strong>EDIT:</strong> I also checked the implementation of <code>nmf.py</code>, which requires a 2D matrix as input, not a list or a sparse representation.</p>
| 0 | 2016-10-04T09:50:26Z | 39,849,412 | <p>Here is how you can create a sparse matrix from the list of triples:</p>
<pre><code>data = [
[196, 242, 3],
[186, 302, 3],
[22, 377, 1],
[196, 377, 3],
....
]
user_count = max(i[0] for i in data) + 1
item_count = max(i[1] for i in data) + 1
data_mx = scipy.sparse.dok_matrix((user_count, item_count))
for (user, item, value) in data:
data_mx[user, item] = value
</code></pre>
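<p>If a dense users x items matrix is acceptable (the question's EDIT suggests <code>nmf.py</code> wants a 2D array rather than a sparse one), the same triples can be scattered into a zero-initialised array. A sketch using only a few of the question's rows (ids kept 1-based by allocating one extra row and column):</p>

```python
import numpy as np

data = [[196, 242, 3],
        [186, 302, 3],
        [22, 377, 1]]
a = np.array(data)
mat = np.zeros((944, 1683))       # 943 users + 1, 1682 items + 1
mat[a[:, 0], a[:, 1]] = a[:, 2]   # scatter the votes; unrated cells stay 0
```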
| 1 | 2016-10-04T10:00:14Z | [
"python",
"numpy",
"matrix"
]
|
Translate lists to a 2D numpy matrix | 39,849,208 | <p>I have a list of lists in python. The lists is the following:</p>
<pre><code>[[196, 242, 3],
[186, 302, 3],
[22, 377, 1],
[196, 377, 3],
....
]
</code></pre>
<p>The first column corresponds to users (1:943), the second to items (1:1682), and the third to the users' votes for the items. I want to try the matrix factorization <a href="https://github.com/mnick/scikit-tensor" rel="nofollow">library</a>. Should I create a users x items matrix? If yes, how can I create such a matrix in Python, with one axis the size of the users, the other the size of the items, and the values the users' votes?</p>
<p><strong>EDIT:</strong> I also checked the implementation of <code>nmf.py</code>, which requires a 2D matrix as input, not a list or a sparse representation.</p>
| 0 | 2016-10-04T09:50:26Z | 39,858,030 | <p>Your data looks like a list of lists:</p>
<pre><code>In [168]: ll = [[196, 242, 3],
...: [186, 302, 3],
...: [22, 377, 1],
...: [196, 377, 3]]
</code></pre>
<p>Make an array from it - for convenience in the following operations</p>
<pre><code>In [169]: A = np.array(ll)
In [170]: ll
Out[170]: [[196, 242, 3], [186, 302, 3], [22, 377, 1], [196, 377, 3]]
In [171]: A
Out[171]:
array([[196, 242, 3],
[186, 302, 3],
[ 22, 377, 1],
[196, 377, 3]])
</code></pre>
<p>Shift the index columns to 0 base (optional)</p>
<pre><code>In [172]: A[:,:2] -= 1
</code></pre>
<p>With this it is quick and easy to define a sparse matrix using the <code>coo</code> (or <code>csr</code>) format, of <code>(data, (rows, cols))</code>. The iterative <code>dok</code> approach works, but this is faster.</p>
<pre><code>In [174]: from scipy import sparse
In [175]: M = sparse.csr_matrix((A[:,2],(A[:,0], A[:,1])), shape=(942,1681))
In [176]: M
Out[176]:
<942x1681 sparse matrix of type '<class 'numpy.int32'>'
with 4 stored elements in Compressed Sparse Row format>
In [177]: print(M)
(21, 376) 1
(185, 301) 3
(195, 241) 3
(195, 376) 3
</code></pre>
<p><code>M.A</code> creates a dense array from this sparse matrix. Some code, especially in the <code>sckit-learn</code> package can use sparse matrices directly.</p>
<p>A direct way of creating the dense array is:</p>
<pre><code>In [183]: N = np.zeros((942,1681),int)
In [184]: N[A[:,0],A[:,1]]= A[:,2]
In [185]: N.shape
Out[185]: (942, 1681)
In [186]: M.A.shape
Out[186]: (942, 1681)
In [187]: np.allclose(N, M.A) # it matches the sparse version
Out[187]: True
</code></pre>
| 1 | 2016-10-04T17:06:13Z | [
"python",
"numpy",
"matrix"
]
|
Override ModelViewSet's queryset with filter backends applied | 39,849,292 | <p>Is it possible to take into account <code>MyModelViewSet</code>'s <code>filter_backends</code> when creating custom queryset?</p>
<pre><code>class MyModelViewSet(viewsets.ModelViewSet):
filter_backends = (CustomFilter, )
serializer_class = MySerializer
def get_queryset(self):
# It should not return all objects, but only results from `CustomFilter`
queryset = LedgerEntry.objects.all()
# some extra filtering
return queryset
</code></pre>
<p>How should I implement this?</p>
<p><strong>Django</strong>: 1.10</p>
<p><strong>Django Rest Framework</strong>: 3.4.6</p>
| 0 | 2016-10-04T09:55:19Z | 39,849,686 | <p>Yes you can. Just extend <code>filter_queryset</code> method of ViewSet</p>
<pre><code>class MyModelViewSet(viewsets.ModelViewSet):
filter_backends = (CustomFilter, )
serializer_class = MySerializer
def filter_queryset(self, queryset):
# super needs to be called to filter backends to be applied
queryset = super().filter_queryset(queryset)
# some extra filtering
return queryset
</code></pre>
<p>In action methods in ViewSet it actually does this</p>
<pre><code>queryset = self.filter_queryset(self.get_queryset())
</code></pre>
<p>So the queryset that is sent to the serializer is the one created by <code>get_queryset</code> and then filtered by <code>filter_queryset</code>.</p>
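<p>That call order can be mimicked in plain Python, with no Django needed, to see which method runs first. <code>FakeViewSet</code> and its list data are stand-ins invented for illustration; they only mirror the sequence of calls, not real DRF behaviour:</p>

```python
class FakeViewSet:
    def get_queryset(self):
        # stands in for LedgerEntry.objects.all()
        return [1, 2, 3, 4, 5]

    def filter_queryset(self, queryset):
        # stands in for the filter backends plus the extra filtering
        return [x for x in queryset if x % 2 == 0]

view = FakeViewSet()
# the same order DRF's action methods use:
result = view.filter_queryset(view.get_queryset())
print(result)  # [2, 4]
```
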
| 1 | 2016-10-04T10:14:07Z | [
"python",
"django",
"python-3.x",
"django-rest-framework"
]
|
Get visible content of a page using selenium and BeautifulSoup | 39,849,497 | <p>I want to retrieve all visible content of a web page. Let say for example <a href="https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html" rel="nofollow">this</a> webpage. I am using a headless firefox browser remotely with selenium.</p>
<p>The script I am using looks like this</p>
<pre><code>driver = webdriver.Remote('http://0.0.0.0:xxxx/wd/hub', desired_capabilities)
driver.get(url)
dom = BeautifulSoup(driver.page_source, parser)
f = dom.find('iframe', id='dsq-app1')
driver.switch_to_frame('dsq-app1')
s = driver.page_source
f.replace_with(BeautifulSoup(s, 'html.parser'))
with open('out.html', 'w') as fe:
fe.write(dom.encode('utf-8'))
</code></pre>
<p>This is supposed to load the page, parse the dom, and then replace the iframe with id <code>dsq-app1</code> with it's visible content. If I execute those commands one by one via my python command line it works as expected. I can then see the paragraphs with all the visible content. When instead I execute all those commands at once, either by executing the script or by pasting all this snippet in my interpreter, it behaves differently. The paragraphs are missing, the content still exists in json format, but it's not what I want.</p>
<p>Any idea why this may be happening? Something to do with <code>replace_with</code> maybe?</p>
| 2 | 2016-10-04T10:04:34Z | 39,849,658 | <p>Sounds like the DOM elements are not yet loaded when your code tries to reach them.</p>
<p>Try to <a href="http://stackoverflow.com/questions/7781792/selenium-waitforelement">wait</a> for the elements to be fully loaded and just then replace.</p>
<p>This works for you when you run it command by command, because then you let the driver load all the elements before you execute more commands.</p>
| 1 | 2016-10-04T10:12:59Z | [
"python",
"html",
"selenium",
"beautifulsoup"
]
|
Get visible content of a page using selenium and BeautifulSoup | 39,849,497 | <p>I want to retrieve all visible content of a web page. Let say for example <a href="https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html" rel="nofollow">this</a> webpage. I am using a headless firefox browser remotely with selenium.</p>
<p>The script I am using looks like this</p>
<pre><code>driver = webdriver.Remote('http://0.0.0.0:xxxx/wd/hub', desired_capabilities)
driver.get(url)
dom = BeautifulSoup(driver.page_source, parser)
f = dom.find('iframe', id='dsq-app1')
driver.switch_to_frame('dsq-app1')
s = driver.page_source
f.replace_with(BeautifulSoup(s, 'html.parser'))
with open('out.html', 'w') as fe:
fe.write(dom.encode('utf-8'))
</code></pre>
<p>This is supposed to load the page, parse the dom, and then replace the iframe with id <code>dsq-app1</code> with it's visible content. If I execute those commands one by one via my python command line it works as expected. I can then see the paragraphs with all the visible content. When instead I execute all those commands at once, either by executing the script or by pasting all this snippet in my interpreter, it behaves differently. The paragraphs are missing, the content still exists in json format, but it's not what I want.</p>
<p>Any idea why this may be happening? Something to do with <code>replace_with</code> maybe?</p>
| 2 | 2016-10-04T10:04:34Z | 39,854,093 | <p>Try to get the Page Source after detecting the required ID/CSS_SELECTOR/CLASS or LINK.</p>
<p>You can always use explicit wait of Selenium WebDriver.</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Remote('http://0.0.0.0:xxxx/wd/hub', desired_capabilities)
driver.get(url)
f = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, idName)))
# here 10 is time for which script will try to find given id
# provide the id name
dom = BeautifulSoup(driver.page_source, parser)
f = dom.find('iframe', id='dsq-app1')
driver.switch_to_frame('dsq-app1')
s = driver.page_source
f.replace_with(BeautifulSoup(s, 'html.parser'))
with open('out.html', 'w') as fe:
fe.write(dom.encode('utf-8'))
</code></pre>
<p>Correct me if this does not work.</p>
| 1 | 2016-10-04T13:50:00Z | [
"python",
"html",
"selenium",
"beautifulsoup"
]
|
Get visible content of a page using selenium and BeautifulSoup | 39,849,497 | <p>I want to retrieve all visible content of a web page. Let say for example <a href="https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html" rel="nofollow">this</a> webpage. I am using a headless firefox browser remotely with selenium.</p>
<p>The script I am using looks like this</p>
<pre><code>driver = webdriver.Remote('http://0.0.0.0:xxxx/wd/hub', desired_capabilities)
driver.get(url)
dom = BeautifulSoup(driver.page_source, parser)
f = dom.find('iframe', id='dsq-app1')
driver.switch_to_frame('dsq-app1')
s = driver.page_source
f.replace_with(BeautifulSoup(s, 'html.parser'))
with open('out.html', 'w') as fe:
fe.write(dom.encode('utf-8'))
</code></pre>
<p>This is supposed to load the page, parse the dom, and then replace the iframe with id <code>dsq-app1</code> with it's visible content. If I execute those commands one by one via my python command line it works as expected. I can then see the paragraphs with all the visible content. When instead I execute all those commands at once, either by executing the script or by pasting all this snippet in my interpreter, it behaves differently. The paragraphs are missing, the content still exists in json format, but it's not what I want.</p>
<p>Any idea why this may be happening? Something to do with <code>replace_with</code> maybe?</p>
| 2 | 2016-10-04T10:04:34Z | 39,854,349 | <p>To add to Or Duan's answer I provide what I ended up doing. The problem of finding whether a page or parts of a page have loaded completely is an intricate one. I tried to use implicit and explicit waits but again I ended up receiving half-loaded frames. My workaround is to check the <code>readyState</code> of the original document and the readyState of iframes.</p>
<p>Here is a sample function</p>
<pre><code>from time import sleep

def _check_if_load_complete(driver, timeout=10):
elapsed_time = 1
while True:
if (driver.execute_script('return document.readyState') == 'complete' or
elapsed_time == timeout):
break
else:
sleep(0.0001)
elapsed_time += 1
</code></pre>
<p>then I used that function right after I changed the focus of the driver to the iframe</p>
<pre><code>driver.switch_to_frame('dsq-app1')
_check_if_load_complete(driver, timeout=10)
</code></pre>
| 1 | 2016-10-04T14:01:39Z | [
"python",
"html",
"selenium",
"beautifulsoup"
]
|
Unicoded string key error in python dict | 39,849,584 | <p>I have such a code:</p>
<pre><code>corpus_file = codecs.open("corpus_en-tr.txt", encoding="utf-8").readlines()
corpus = []
for a in range(0, len(corpus_file), 2):
corpus.append({'src': corpus_file[a].rstrip(), 'tgt': corpus_file[a+1].rstrip()})
params = {}
for sentencePair in corpus:
for tgtWord in sentencePair['tgt']:
for srcWord in sentencePair['src']:
params[srcWord][tgtWord] = 1.0
</code></pre>
<p>Basically I am trying to create a dictionary of dictionary of float. But I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "initial_guess.py", line 15, in <module>
params[srcWord][tgtWord] = 1.0
KeyError: u'A'
</code></pre>
<p><a href="http://stackoverflow.com/questions/36607145/utf-8-string-as-key-in-dictionary-causes-keyerror">UTF-8 string as key in dictionary causes KeyError</a></p>
<p>I checked the case above, but it doesn't help.</p>
<p>Basically I don't understand why the unicode string 'A' is not allowed as a key in Python. Is there any way to fix it?</p>
| 0 | 2016-10-04T10:09:25Z | 39,849,671 | <p>Your <code>params</code> dict is empty.</p>
<p>You can use tree for that:</p>
<pre><code>from collections import defaultdict
def tree():
return defaultdict(tree)
params = tree()
params['any']['keys']['you']['want'] = 1.0
</code></pre>
<p>Or a simpler <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a> case without <code>tree</code>:</p>
<pre><code>from collections import defaultdict
params = defaultdict(dict)
for sentencePair in corpus:
for tgtWord in sentencePair['tgt']:
for srcWord in sentencePair['src']:
params[srcWord][tgtWord] = 1.0
</code></pre>
<p>If you don't want to add anything like that, then just try to add dict to <code>params</code> on every iteration:</p>
<pre><code>params = {}
for sentencePair in corpus:
for srcWord in sentencePair['src']:
params.setdefault(srcWord, {})
for tgtWord in sentencePair['tgt']:
params[srcWord][tgtWord] = 1.0
</code></pre>
<p>Please note that I've changed the order of the <code>for</code> loops, because you need to know <code>srcWord</code> first.</p>
<p>Otherwise you need to check key existence too often:</p>
<pre><code>params = {}
for sentencePair in corpus:
for tgtWord in sentencePair['tgt']:
for srcWord in sentencePair['src']:
params.setdefault(srcWord, {})[tgtWord] = 1.0
</code></pre>
| 1 | 2016-10-04T10:13:23Z | [
"python",
"dictionary",
"unicode"
]
|
Unicoded string key error in python dict | 39,849,584 | <p>I have such a code:</p>
<pre><code>corpus_file = codecs.open("corpus_en-tr.txt", encoding="utf-8").readlines()
corpus = []
for a in range(0, len(corpus_file), 2):
corpus.append({'src': corpus_file[a].rstrip(), 'tgt': corpus_file[a+1].rstrip()})
params = {}
for sentencePair in corpus:
for tgtWord in sentencePair['tgt']:
for srcWord in sentencePair['src']:
params[srcWord][tgtWord] = 1.0
</code></pre>
<p>Basically I am trying to create a dictionary of dictionary of float. But I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "initial_guess.py", line 15, in <module>
params[srcWord][tgtWord] = 1.0
KeyError: u'A'
</code></pre>
<p><a href="http://stackoverflow.com/questions/36607145/utf-8-string-as-key-in-dictionary-causes-keyerror">UTF-8 string as key in dictionary causes KeyError</a></p>
<p>I checked the case above, but it doesn't help.</p>
<p>Basically I don't understand why the unicode string 'A' is not allowed as a key in Python. Is there any way to fix it?</p>
| 0 | 2016-10-04T10:09:25Z | 39,850,144 | <p>You can just use <code>setdefault</code>:</p>
<p>Replace</p>
<pre class="lang-python prettyprint-override"><code>params[srcWord][tgtWord] = 1.0
</code></pre>
<p>with</p>
<pre class="lang-python prettyprint-override"><code>params.setdefault(srcWord, {})[tgtWord] = 1.0
</code></pre>
| 1 | 2016-10-04T10:39:07Z | [
"python",
"dictionary",
"unicode"
]
|
Unable to extract zip in python to destination folder(server) from my local host | 39,849,766 | <p>I'm unable to extract a zip file in Python to a destination folder on the server from my local host. When extracting with <code>z.extract(name,"/destination/")</code>, it's unable to find the destination folder, because it searches for it locally instead of on the server.</p>
<pre><code> transport = paramiko.Transport((destinationIP))
transport.connect(username = destinationuserName, password = destinationPassword)
sftp = paramiko.SFTPClient.from_transport(transport)
fh = sftp.open('/destination/xxx.zip', 'rb')
z = zipfile.ZipFile(fh)
for name in z.namelist():
print name
z.extract(name,"/destination/")
fh.close()
sftp.close()
</code></pre>
| 0 | 2016-10-04T10:18:35Z | 39,850,327 | <p>It appears you would like the extracted files to appear on the server, even though you are extracting them on the client machine. Unfortunately that isn't going to fly, as the <code>zipfile.extract</code> method assumes that its second argument is a local path.</p>
<p>You <em>could</em> consider creating a local temporary directory in which to extract the files, but then you will have to copy each file back to its desired destination on the server. This doesn't seem like a sensible use of distributed resources, but if you don't have shell access to the server it might be the best you can do.</p>
<p>If you <em>do</em> have shell access to the server then consider using something like <code>fabric</code> or <code>paramiko</code> to execute the necessary commands on the server system.</p>
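<p>A runnable sketch of that temp-directory approach is below, using only the stdlib. A tiny zip is created locally first so the example runs end to end; with the live <code>sftp</code> connection from the question you would then push each extracted file back with <code>sftp.put()</code> (that line is commented out here because it needs the connection, and the <code>/destination/</code> path is taken from the question):</p>

```python
import os
import tempfile
import zipfile

# Build a tiny zip locally so the sketch is runnable end to end.
workdir = tempfile.mkdtemp()
zip_path = os.path.join(workdir, "xxx.zip")
with zipfile.ZipFile(zip_path, "w") as z:
    z.writestr("hello.txt", "hello")

extract_dir = tempfile.mkdtemp()   # the local temporary directory
with zipfile.ZipFile(zip_path) as z:
    for name in z.namelist():
        z.extract(name, extract_dir)
        local_path = os.path.join(extract_dir, name)
        # With a live connection you would now copy the file back, e.g.:
        # sftp.put(local_path, '/destination/' + name)

print(os.listdir(extract_dir))  # ['hello.txt']
```
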
| 1 | 2016-10-04T10:49:06Z | [
"python",
"ssh",
"sftp",
"paramiko"
]
|
New project in scrapy | 39,849,778 | <p>I am new to scrapy. I get an error when I create a new project.</p>
<p>So please tell me how to create a new project in Scrapy.
I am trying the following:</p>
<pre><code>scrapy start project
</code></pre>
| 0 | 2016-10-04T10:19:36Z | 39,849,896 | <p>Simply go to the directory where you want to create the project, using</p>
<pre><code>cd path/to/directory
</code></pre>
<p>in the command prompt.
Then run the command-</p>
<pre><code>scrapy startproject project_name
</code></pre>
<p>You are using a space between <code>start</code> and <code>project</code>, which is incorrect.</p>
| 0 | 2016-10-04T10:25:59Z | [
"python",
"scrapy"
]
|
Get index/position of the tag using text in Selenium WebDriver | 39,849,823 | <p>Here is html chunk:</p>
<pre><code><tbody>
<tr>
<td><td>
<td>
<img src="../imgs.gif">
<td>
<td><td>
<td><td>
<td><td>
</tr>
</tbody>
</code></pre>
<p>I want to iterate over the <strong>< td ></strong> elements and get the index position of the one containing the <strong>< img ></strong> element. In this case, the output should be <strong>"1"</strong>.</p>
<p>Tried a lot of xpath strategies including index-of(), count(), etree etc.</p>
<p>I suspect the following should be close.</p>
<pre><code>from selenium import webdriver
chrome_path = r"E:\chromedriver_win32\chromedriver.exe"
browser = webdriver.Chrome(chrome_path)
td = browser.find_element_by_xpath("//tbody//tr//td")
target = td.find_element_by_xpath("*[. = '../imgs.gif']")
children = td.find_elements_by_xpath("*")
print children.index(target)
</code></pre>
| 0 | 2016-10-04T10:21:50Z | 39,850,593 | <p>How about identifying the image, counting its preceding <code>td</code> siblings, and adding one to that count:</p>
<pre><code>$x("count(//img/parent::td/preceding-sibling::td) + 1")
</code></pre>
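<p>The same "index of the <code>td</code> that holds the <code>img</code>" can also be computed in Python with the stdlib <code>ElementTree</code>, without XPath <code>count()</code>. The markup below is a cleaned-up version of the question's chunk (the original unclosed <code><td></code> tags are closed properly so it parses):</p>

```python
import xml.etree.ElementTree as ET

html = """<tbody><tr>
  <td></td>
  <td><img src="../imgs.gif"/></td>
  <td></td>
</tr></tbody>"""

row = ET.fromstring(html).find("tr")
tds = row.findall("td")
# position of the first td that contains an img
index = next(i for i, td in enumerate(tds) if td.find("img") is not None)
print(index)  # 1
```
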
| 1 | 2016-10-04T11:01:58Z | [
"python",
"html",
"selenium",
"xpath"
]
|
Get index/position of the tag using text in Selenium WebDriver | 39,849,823 | <p>Here is html chunk:</p>
<pre><code><tbody>
<tr>
<td><td>
<td>
<img src="../imgs.gif">
<td>
<td><td>
<td><td>
<td><td>
</tr>
</tbody>
</code></pre>
<p>I want to iterate over the <strong>< td ></strong> elements and get the index position of the one containing the <strong>< img ></strong> element. In this case, the output should be <strong>"1"</strong>.</p>
<p>Tried a lot of xpath strategies including index-of(), count(), etree etc.</p>
<p>I suspect the following should be close.</p>
<pre><code>from selenium import webdriver
chrome_path = r"E:\chromedriver_win32\chromedriver.exe"
browser = webdriver.Chrome(chrome_path)
td = browser.find_element_by_xpath("//tbody//tr//td")
target = td.find_element_by_xpath("*[. = '../imgs.gif']")
children = td.find_elements_by_xpath("*")
print children.index(target)
</code></pre>
| 0 | 2016-10-04T10:21:50Z | 39,854,573 | <p>Tried a slightly different approach that works.</p>
<pre><code>import lxml.html

html = browser.page_source
tree = lxml.html.fromstring(html)
tds = tree.xpath("//tbody//tr//td")  # 'tds' instead of 're', which shadows the re module
for i, td in enumerate(tds):
    for img in td.xpath(".//img//@src"):
        print repr(img)
        print "img number in list:", i
</code></pre>
| 0 | 2016-10-04T14:11:51Z | [
"python",
"html",
"selenium",
"xpath"
]
|
I am getting the following error. I downloaded the Oracle client and provided the necessary paths to the environment variables | 39,849,837 | <blockquote>
<blockquote>
<blockquote>
<p>import cx_Oracle
Traceback (most recent call last):
File "", line 1, in
ImportError: DLL load failed: %1 is not a valid Win32 application.</p>
</blockquote>
</blockquote>
</blockquote>
| 0 | 2016-10-04T10:23:10Z | 39,852,993 | <p>I'll list the things that you need to check.</p>
<p>1) An Oracle client is required. The easiest to use is the Oracle instant client which you can get from this location: <a href="http://www.oracle.com/technetwork/database/features/instant-client/index.html" rel="nofollow">http://www.oracle.com/technetwork/database/features/instant-client/index.html</a></p>
<p>2) If Python is 64-bit, the Oracle client needs to be 64-bit and cx_Oracle needs to be 64-bit. If Python is 32-bit, the Oracle client needs to be 32-bit and cx_Oracle needs to be 32-bit. You can't mix and match!</p>
<p>3) The client needs to be in the PATH environment variable. No other environment variables (like ORACLE_HOME) should be set.</p>
<p>You can use the "depends" tool (<a href="http://www.dependencywalker.com/" rel="nofollow">http://www.dependencywalker.com/</a>) to help determine why Windows is refusing to load that DLL.</p>
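<p>The bitness requirement in point 2 can be verified from the Python prompt itself. This is a generic stdlib check, not specific to cx_Oracle:</p>

```python
import struct

# Pointer size in bytes times 8 gives the interpreter's bitness,
# which must match the Oracle client and cx_Oracle builds.
bits = struct.calcsize("P") * 8
print(bits)  # 64 on a 64-bit interpreter, 32 on a 32-bit one
```
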
| 0 | 2016-10-04T12:58:51Z | [
"python",
"django",
"oracle10g",
"cx-oracle",
"instantclient"
]
|
Disappearing {% csrf_token %} on website file | 39,849,875 | <p>When I want to use my registration form on my site, I get ERROR 403: "CSRF verification failed. Request aborted." In the source of the website I realised that it is missing. This is part of the view-source from my site:</p>
<pre><code><div style="margin-left:35%;margin-right:35%;">
<fieldset>
<legend> Wszystkie pola oprócz numeru telefonu należy wypełnić </legend>
<form method="post" action=".">
<p><label for="id_username">Login:</label> <input id="id_username" maxlength="30" name="username" type="text" required/></p>
<p><label for="id_email">Email:</label> <input id="id_email" name="email" type="email" required /></p>
<p><label for="id_password1">HasÅo:</label> <input id="id_password1" name="password1" type="password" required /></p>
<p><label for="id_password2">Powtórz hasÅo:</label> <input id="id_password2" name="password2" type="password" required /></p>
<p><label for="id_phone">Telefon:</label> <input id="id_phone" maxlength="20" name="phone" type="text" /></p>
<p><label for="id_log_on">Logowanie po rejestracji:</label><input id="id_log_on" name="log_on" type="checkbox" /></p>
<input type="submit" value="Rejestracja"><input type="reset" value="WartoÅci poczÄ
tkowe">
</form>
</fieldset>
</div>
</code></pre>
<p>I was surprised by that, because in my files on PythonAnywhere this fragment of code is present.</p>
<p>This is part of my file register.html on PythonAnywhere:</p>
<pre><code><div style="margin-left:35%;margin-right:35%;">
<fieldset>
<legend> Wszystkie pola oprócz numeru telefonu należy wypełnić </legend>
<form method="post" action=".">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Rejestracja"><input type="reset" value="WartoÅci poczÄ
tkowe">
</form>
</fieldset>
</div>
</code></pre>
<p>What am I doing wrong that my webpage doesn't see this piece of code? It is present on the server, but not in the webpage view-source.</p>
<p>EDIT:
This is the view which renders my template:</p>
<pre><code>def register(request):
if request.method == 'POST':
form = FormularzRejestracji(request.POST)
if form.is_valid():
user = User.objects.create_user(
username=form.cleaned_data['username'],
password=form.cleaned_data['password1'],
email=form.cleaned_data['email']
)
user.last_name = form.cleaned_data['phone']
user.save()
if form.cleaned_data['log_on']:
user = authenticate(username=form.cleaned_data['username'], password=form.cleaned_data['password1'])
login(request, user)
template = get_template("osnowa_app/point_list.html")
variables = RequestContext(request, {'user': user})
output = template.render(variables)
return HttpResponseRedirect("/")
else:
template = get_template("osnowa_app/register_success.html")
variables = RequestContext(request, {'username': form.cleaned_data['username']})
output = template.render(variables)
return HttpResponse(output)
else:
form = FormularzRejestracji()
template = get_template("osnowa_app/register.html")
form = FormularzRejestracji()
variables = RequestContext(request, {'form': form})
output = template.render(variables)
return HttpResponse(output)
</code></pre>
| 3 | 2016-10-04T10:25:15Z | 39,850,114 | <p>The CSRF token gets included in the HTML form by calling the <code>hidden_tag</code> function on your form object.</p>
<p>For example, check this <a href="https://gist.github.com/srahul07/6507758" rel="nofollow">gist</a>, line number 6. This is how you add a form and its elements in Jinja.</p>
| -2 | 2016-10-04T10:37:15Z | [
"python",
"html",
"django",
"django-templates",
"pythonanywhere"
]
|
Disappearing {% csrf_token %} on website file | 39,849,875 | <p>When I want to use my registration form on my site, I get ERROR 403: "CSRF verification failed. Request aborted." In the source of the website I realised that it is missing. This is part of the view-source from my site:</p>
<pre><code><div style="margin-left:35%;margin-right:35%;">
<fieldset>
<legend> Wszystkie pola oprócz numeru telefonu należy wypełnić </legend>
<form method="post" action=".">
<p><label for="id_username">Login:</label> <input id="id_username" maxlength="30" name="username" type="text" required/></p>
<p><label for="id_email">Email:</label> <input id="id_email" name="email" type="email" required /></p>
<p><label for="id_password1">HasÅo:</label> <input id="id_password1" name="password1" type="password" required /></p>
<p><label for="id_password2">Powtórz hasÅo:</label> <input id="id_password2" name="password2" type="password" required /></p>
<p><label for="id_phone">Telefon:</label> <input id="id_phone" maxlength="20" name="phone" type="text" /></p>
<p><label for="id_log_on">Logowanie po rejestracji:</label><input id="id_log_on" name="log_on" type="checkbox" /></p>
<input type="submit" value="Rejestracja"><input type="reset" value="WartoÅci poczÄ
tkowe">
</form>
</fieldset>
</div>
</code></pre>
<p>I was surprised by that, because in my files on PythonAnywhere this fragment of code is present.</p>
<p>This is part of my file register.html on PythonAnywhere:</p>
<pre><code><div style="margin-left:35%;margin-right:35%;">
<fieldset>
<legend> Wszystkie pola oprócz numeru telefonu należy wypełnić </legend>
<form method="post" action=".">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Rejestracja"><input type="reset" value="WartoÅci poczÄ
tkowe">
</form>
</fieldset>
</div>
</code></pre>
<p>What am I doing wrong that my webpage doesn't see this piece of code? It is present on the server, but not in the webpage view-source.</p>
<p>EDIT:
This is the view which renders my template:</p>
<pre><code>def register(request):
if request.method == 'POST':
form = FormularzRejestracji(request.POST)
if form.is_valid():
user = User.objects.create_user(
username=form.cleaned_data['username'],
password=form.cleaned_data['password1'],
email=form.cleaned_data['email']
)
user.last_name = form.cleaned_data['phone']
user.save()
if form.cleaned_data['log_on']:
user = authenticate(username=form.cleaned_data['username'], password=form.cleaned_data['password1'])
login(request, user)
template = get_template("osnowa_app/point_list.html")
variables = RequestContext(request, {'user': user})
output = template.render(variables)
return HttpResponseRedirect("/")
else:
template = get_template("osnowa_app/register_success.html")
variables = RequestContext(request, {'username': form.cleaned_data['username']})
output = template.render(variables)
return HttpResponse(output)
else:
form = FormularzRejestracji()
template = get_template("osnowa_app/register.html")
form = FormularzRejestracji()
variables = RequestContext(request, {'form': form})
output = template.render(variables)
return HttpResponse(output)
</code></pre>
| 3 | 2016-10-04T10:25:15Z | 39,850,762 | <p>You should pass a plain dict and the request object to <code>template.render()</code>, not a <code>RequestContext</code>. The template engine will convert it to a <code>RequestContext</code> for you:</p>
<pre><code>template = get_template("osnowa_app/register.html")
context = {'form': form}
output = template.render(context, request)
</code></pre>
<p>Right now, the <code>template.render()</code> function sees a dict-like object as the first argument, but no request as the second argument. Without a request as the second argument, it converts the dict-like <code>RequestContext</code> into a plain <code>Context</code> object. Since the <code>Context</code> object doesn't run context processors, your context is missing the csrf token. </p>
<p>Alternatively you can just use the <a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#render" rel="nofollow"><code>render</code> shortcut</a>, which returns a <code>HttpResponse</code> object with the rendered template as content:</p>
<pre><code>from django.shortcuts import render
def register(request):
...
return render(request, "osnowa_app/register.html", {'form': form})
</code></pre>
<p>This particular case is also being discussed in <a href="https://code.djangoproject.com/ticket/27258" rel="nofollow">ticket #27258</a>. </p>
| 4 | 2016-10-04T11:10:20Z | [
"python",
"html",
"django",
"django-templates",
"pythonanywhere"
]
|
Selectively feeding python csv class with lines | 39,850,173 | <p>I have a csv file with a few patterns. I only want to selectively load lines into the csv reader class of python. Currently, csv only takes a file object. Is there a way to get around this?<br>
In other words, what I need is:</p>
<pre><code>with open('filename') as f:
for line in f:
if condition(line):
record = csv.reader(line)
</code></pre>
<p>But, currently, csv class fails if it is given a line instead of a file object.</p>
| 0 | 2016-10-04T10:40:59Z | 39,850,257 | <pre><code>import shlex
lex = shlex.shlex('"sreeraag","100,ABC,XYZ",112', posix=True)
lex.whitespace += ','
lex.whitespace_split = True
print list(lex)
</code></pre>
<p>yields</p>
<pre><code>['sreeraag', '100,ABC,XYZ', '112']
</code></pre>
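<p>The stdlib <code>csv</code> module handles the same quoted-field case directly, which ties back to the question: <code>csv.reader</code> accepts any iterable of lines, so a one-element list works:</p>

```python
import csv

# csv.reader takes an iterable of lines; a list with one line is enough.
row = next(csv.reader(['"sreeraag","100,ABC,XYZ",112']))
print(row)  # ['sreeraag', '100,ABC,XYZ', '112']
```
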
| 1 | 2016-10-04T10:44:59Z | [
"python",
"csv"
]
|
Selectively feeding python csv class with lines | 39,850,173 | <p>I have a csv file with a few patterns. I only want to selectively load lines into the csv reader class of python. Currently, csv only takes a file object. Is there a way to get around this?<br>
In other words, what I need is:</p>
<pre><code>with open('filename') as f:
for line in f:
if condition(line):
record = csv.reader(line)
</code></pre>
<p>But, currently, csv class fails if it is given a line instead of a file object.</p>
| 0 | 2016-10-04T10:40:59Z | 39,851,685 | <p>To read the file as a stream you can use this:</p>
<pre><code>io.open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True)
</code></pre>
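<p>A runnable sketch of the streaming idea: <code>io.open</code> yields a buffered line iterator, and a generator filter passes only the matching lines on to <code>csv.reader</code>. The file contents and the <code>"keep"</code> condition below are made up for the demo:</p>

```python
import csv
import io
import os
import tempfile

# Write a small file so the sketch runs standalone.
path = os.path.join(tempfile.mkdtemp(), "filename")
with io.open(path, "w", encoding="utf-8") as f:
    f.write(u"keep,1\nskip,2\nkeep,3\n")

with io.open(path, mode="r", encoding="utf-8") as f:
    # only lines matching the condition reach the csv parser
    wanted = (line for line in f if line.startswith("keep"))
    rows = list(csv.reader(wanted))

print(rows)  # [['keep', '1'], ['keep', '3']]
```
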
| 1 | 2016-10-04T11:57:28Z | [
"python",
"csv"
]
|
Selectively feeding python csv class with lines | 39,850,173 | <p>I have a csv file with a few patterns. I only want to selectively load lines into the csv reader class of python. Currently, csv only takes a file object. Is there a way to get around this?<br>
In other words, what I need is:</p>
<pre><code>with open('filename') as f:
for line in f:
if condition(line):
record = csv.reader(line)
</code></pre>
<p>But, currently, csv class fails if it is given a line instead of a file object.</p>
| 0 | 2016-10-04T10:40:59Z | 39,852,797 | <p>From the <code>csv.reader</code> docstring:</p>
<blockquote>
<p><em>csvfile</em> can be any object which supports the iterator protocol and returns a string each time its <code>__next__()</code> method is called</p>
</blockquote>
<p>You can feed <code>csv.reader</code> with a generator iterator that yields only the selected rows.</p>
<pre><code>with open('filename') as f:
lines = (line for line in f if condition(line))
for record in csv.reader(lines):
do_something()
</code></pre>
| 3 | 2016-10-04T12:49:26Z | [
"python",
"csv"
]
|