title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Asynchronous action in bottle
| 39,591,410 |
<p>I have a simple application written in Bottle. I need to run the same method every 10 seconds. My first idea was something like this, but it is not working and I think it is an ugly solution:</p>
<pre><code>inc = 0
# after running the server, open the /loop page to initiate the loop
@route('/loop', method='GET')
def whiletrue():
global inc
inc += 1
print inc
if inc != 1:
return str(inc)
while True:
time.sleep(1)
print "X",
</code></pre>
<p>Could you suggest how to do this the right way?</p>
| 0 |
2016-09-20T10:13:07Z
| 39,591,790 |
<p>You can use the <code>threading</code> module to call the method via <code>threading.Timer</code>:</p>
<pre><code>from functools import partial
import threading
class While_True(threading.Thread):
def __init__(self, **kwargs):
threading.Thread.__init__(self)
def whileTrue(self, *args):
print args
def caller(self, *args):
        threading.Timer(10, partial(self.whileTrue, "Hallo")).start()
</code></pre>
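<p>Note that a single <code>threading.Timer</code> fires only once. If the method must really run every 10 seconds, as the question asks, a minimal sketch (function name illustrative) is one that reschedules itself:</p>
<pre><code>import threading

def every_10_seconds():
    print "X"  # the work to repeat
    threading.Timer(10, every_10_seconds).start()  # reschedule itself

every_10_seconds()  # start once before bottle's run(); the server keeps serving
</code></pre>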
| 1 |
2016-09-20T10:29:28Z
|
[
"python",
"asynchronous",
"bottle"
] |
Raspbian Python azure.storage ImportError: No module named cryptography.hazmat.primitives.keywrap
| 39,591,561 |
<p>I'm trying to upload pictures from my RPi3 to Azure blob storage. I'm using Raspbian and Python modules as described below.</p>
<ul>
<li>Set up a virtual environment 'azure' using virtualenvwrapper.</li>
<li>Installed azure-storage like here: <a href="https://github.com/Azure/azure-storage-python" rel="nofollow">https://github.com/Azure/azure-storage-python</a></li>
</ul>
<p>My problem is that whatever I do, I keep getting the following error:</p>
<pre><code>>>> from azure.storage import BlobService
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pi/.virtualenvs/azure/lib/python2.7/site-packages/azure_storage-0.33.0-py2.7.egg/azure/storage/__init__.py", line 21, in <module>
from .models import (
File "/home/pi/.virtualenvs/azure/lib/python2.7/site-packages/azure_storage-0.33.0-py2.7.egg/azure/storage/models.py", line 27, in <module>
from cryptography.hazmat.primitives.keywrap import(
ImportError: No module named cryptography.hazmat.primitives.keywrap
</code></pre>
<p>I've tried <code>pip install cryptography</code> and <a href="https://pypi.python.org/pypi/azure-storage" rel="nofollow">https://pypi.python.org/pypi/azure-storage</a>, but that didn't change anything. I keep getting the same error <code>ImportError: No module named cryptography.hazmat.primitives.keywrap</code>. I even tried just to import <code>azure.storage</code> but that throws the same error.<br>
If anyone could shed some light on how to get <code>azure-storage-blob</code> to work on Raspbian I would be very grateful.
Thank you in advance.</p>
| 0 |
2016-09-20T10:19:26Z
| 39,600,941 |
<p>You can try to stick to azure-storage 0.32.0 to avoid using cryptography, if you don't need the new features of 0.33.0. There are some difficulties getting cryptography working on some systems (<a href="https://github.com/Azure/azure-storage-python/issues/219" rel="nofollow">https://github.com/Azure/azure-storage-python/issues/219</a>)</p>
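<p>For example, a sketch of pinning the older version inside the virtualenv:</p>
<pre><code>pip uninstall azure-storage
pip install azure-storage==0.32.0
</code></pre>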
| 0 |
2016-09-20T18:08:57Z
|
[
"python",
"azure",
"raspbian",
"raspberry-pi3"
] |
Mask numbers in pandas
| 39,591,722 |
<p>I have input in a column of a dataframe such as 12345 and want to output it to an Excel sheet as 1XXX5. How can I do this? The data type of the dataframe column is integer.</p>
<pre><code>df=pd.read_excel('INVAMF.xls',sheetname=4,header=0,skiprows=0)
#df created
print df.dtypes
print np.count_nonzero(pd.value_counts(df['ACCOUNT_NUMBER'].values))
s = (df['ACCOUNT_NUMBER'])
print s
s = s.astype(str)
s.apply(lambda x: x[0] + 'X' * (len(x) - 2) + x[-1])
print s
0 32642
1 32643
2 32644
3 32677
4 32680
5 32680
6 32688
7 32688
8 32695
9 32708
10 32708
11 32709
12 32710
13 32734
14 32734
15 32738
16 32738
17 6109
18 6109
</code></pre>
| -3 |
2016-09-20T10:26:14Z
| 39,591,956 |
<p>Here is a general form, assuming the numbers are of varying length:</p>
<pre><code>In [141]:
s = pd.Series([8815392,2983])
s = s.astype(str)
s.apply(lambda x: x[0] + 'X' * (len(x) - 2) + x[-1])
Out[141]:
0 8XXXXX2
1 2XX3
dtype: object
</code></pre>
<p>if the numbers are equal length you can use a vectorised method to set the entire column:</p>
<pre><code>In [142]:
s = pd.Series([8815392,1291283])
s = s.astype(str)
s.str[0] + 'X' * (s.str.len() - 2)[0] + s.str[-1]
Out[142]:
0 8XXXXX2
1 1XXXXX3
dtype: object
</code></pre>
<p>Also, just to clarify a common problem: you need to assign back the result of the operation, as most pandas methods return a copy and don't work in place, although some methods do have an <code>inplace</code> arg. So you need to do the following:</p>
<pre><code>s = s.apply(lambda x: x[0] + 'X' * (len(x) - 2) + x[-1])
</code></pre>
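<p>Applied to the question's dataframe, assigning back and writing out might look like this (the output filename is illustrative):</p>
<pre><code>df['ACCOUNT_NUMBER'] = df['ACCOUNT_NUMBER'].astype(str) \
    .apply(lambda x: x[0] + 'X' * (len(x) - 2) + x[-1])
df.to_excel('INVAMF_masked.xls')
</code></pre>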
| 1 |
2016-09-20T10:38:28Z
|
[
"python",
"pandas"
] |
How to log in to a website which is authenticated by the Google+ API, using Python?
| 39,591,727 |
<p>How can I get logged into a website using Python code,
i.e. <code>www.example.com/auth/gmail</code>, which takes advantage of the Google+ API for user logins? I would like to log in with my credentials (Gmail or what?) using Python code. Please help me with how to approach this problem.</p>
<p>Thanks</p>
| -1 |
2016-09-20T10:26:34Z
| 39,591,944 |
<p>This is old (sandbox) code and was written really fast, so you will have to refactor it:</p>
<pre><code>import re
import sys
import imaplib
import getpass
import email
import datetime
import string
import get_mail_search
from sys import stdout
M = imaplib.IMAP4_SSL('imap.gmail.com')
class Get_mail(object):
"""docstring for Get_mail"""
def __init__(self, *args):
super(Get_mail, self).__init__()
c=1
self.login(c)
self.toast_msg()
raw_input()
def toast_msg(self, *args):
"""docstring for Get_mail"""
M = self.mailbox()
stdout.write("\n{}\n".format(get_mail_search.search_help_info))
        search_input = raw_input()
        rv, data = M.search(None, search_input)
if rv != 'OK':
print "No messages found!"
id_ls = data[0].split()
rev_id_ls = [i for i in reversed(id_ls)]
if rev_id_ls:
for o in rev_id_ls:
try:
msg_content = self.process_mailbox(M, o)
_date_ = msg_content[0]
_from_ = msg_content[1]
_to_ = msg_content[2]
_subject_ = msg_content[3]
_msg_ = msg_content[4]
stdout.write("$$$$$$$$$$$\nDate: {}\nFrom: {}\nTo: {}\nSubject: {}\nMSG: {}\n".format(_date_,_from_,_to_,_subject_,_msg_))
except Exception, e:
pass
else:
stdout.write("No {} Mail Found!".format(serach_input))
raw_input()
self.toast_msg()
def login(self, try_c, *args):
"""docstring for Get_mail"""
try:
stdout.write("\nMail:\n")
mail = raw_input()
if mail:
M.login(str(mail), getpass.getpass())
else:
sys.exit(1)
except imaplib.IMAP4.error:
if try_c<=3:
stdout.write("Versuch: {}/3\n".format(try_c))
stdout.write("Die eingegebene E-Mail-Adresse und das Passwort stimmen nicht uberein. Nochmal versuchen")
try_c+=1
self.login(try_c)
else:
sys.exit(1)
def mailbox(self, *args):
"""docstring for Get_mail"""
rv, mailboxes = M.list()
if rv == 'OK':
for menu in mailboxes:
print('{}'.format(menu))
rv, data = M.select("inbox")
if rv == 'OK':
return M
def eval_decode(self, header, *args):
"""docstring for Get_mail"""
return email.Header.decode_header(header)[0]
def process_mailbox(self, M, num, *args):
"""docstring for Get_mail"""
rv, header = M.fetch(num, '(RFC822)')
if rv != 'OK':
print "ERROR getting message", num
header_msg = email.message_from_string(header[0][1])
if header_msg.is_multipart():
body=[payload.get_payload(decode=True) for payload in header_msg.get_payload()]
else:
            body = [header_msg.get_payload(decode=True)]  # wrap in a list so body[0] below works
from_decode = self.eval_decode(header_msg['From'])
subject_decode = self.eval_decode(header_msg['Subject'])
date_decode = self.eval_decode(header_msg['Date'])
to_decode = self.eval_decode(header_msg['To'])
return (date_decode[0], from_decode[0], to_decode[0], subject_decode[0], str(body[0]))
def run():
try:
Get_mail()
except KeyboardInterrupt:
M.close()
M.logout()
sys.exit(1)
run()
</code></pre>
| 0 |
2016-09-20T10:37:58Z
|
[
"python",
"google-api",
"google-api-python-client"
] |
How to log in to a website which is authenticated by the Google+ API, using Python?
| 39,591,727 |
<p>How can I get logged into a website using Python code,
i.e. <code>www.example.com/auth/gmail</code>, which takes advantage of the Google+ API for user logins? I would like to log in with my credentials (Gmail or what?) using Python code. Please help me with how to approach this problem.</p>
<p>Thanks</p>
| -1 |
2016-09-20T10:26:34Z
| 39,592,370 |
<p>Here is an example using the google-api-python-client.
First of all you have to create your client secrets in the Google Developer Console.</p>
<p>The first time you execute the code you will have to authorize the script to use your Gmail account; then you can save the credentials to a file, as in the example, and reuse them in future executions without repeating this authentication.</p>
<pre><code>from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client import client
from email.mime.text import MIMEText
import base64
import httplib2
import json
import webbrowser
def getCredentials(secrets, scope,filename):
flow = client.flow_from_clientsecrets(
secrets,
scope=scope,
redirect_uri='urn:ietf:wg:oauth:2.0:oob')
auth_uri = flow.step1_get_authorize_url()
webbrowser.open(auth_uri)
auth_code = raw_input('Enter the auth code: ')
credentials = flow.step2_exchange(auth_code)
saveJson(filename,credentials.to_json())
def saveJson(filename, object):
with open(filename, 'w') as f:
json.dump(object, f)
def openJson(filename):
with open(filename, 'r') as f:
object = json.load(f)
return object
if __name__ == '__main__':
    client_secrets = 'client_secrets.json'  # client secrets to use the API
    credentials = 'auth_credentials.json'
    first_run = True  # set to False once auth_credentials.json exists
    if first_run:  # create a file with the auth credentials
        scope = 'https://www.googleapis.com/auth/gmail.send'
        getCredentials(client_secrets, scope, credentials)
cre = client.Credentials.new_from_json(openJson(credentials))
http_auth = cre.authorize(httplib2.Http())
gmail = build('gmail', 'v1', http=http_auth)
#gmail.doSomething
</code></pre>
| 0 |
2016-09-20T11:02:13Z
|
[
"python",
"google-api",
"google-api-python-client"
] |
scrapy list of requests delayed
| 39,591,753 |
<p>I need to scrape a list of web pages 10 times at 5 minute intervals.
This is to collect URLs for later scraping. Another way to look at it is</p>
<pre><code>url_list = []
for i in 1:10 {
url_list += scrape request
url_list += scrape request
url_list += scrape request
sleep 5 min
}
for site in url_list
scrape site
</code></pre>
<p>How can I add a delay between the sets, but no delay between the scraping requests?</p>
<p>How can I achieve this?</p>
<p>Thanks</p>
| 0 |
2016-09-20T10:27:52Z
| 39,635,948 |
<p>You can use the <code>DOWNLOAD_DELAY</code> project setting or the <code>download_delay</code> spider class attribute, set to the desired delay in seconds. </p>
<p><a href="http://doc.scrapy.org/en/latest/topics/settings.html#download-delay" rel="nofollow">Official docs on download_delay</a></p>
| 0 |
2016-09-22T09:58:25Z
|
[
"python",
"scrapy"
] |
Optimization in scipy from sympy
| 39,591,831 |
<p>I have four functions symbolically computed with Sympy and then lambdified:</p>
<pre><code>deriv_log_s_1 = sym.lambdify((z, m_1, m_2, s_1, s_2), deriv_log_sym_s_1, modules=['numpy', 'sympy'])
deriv_log_s_2 = sym.lambdify((z, m_1, m_2, s_1, s_2), deriv_log_sym_s_2, modules=['numpy', 'sympy'])
deriv_log_m_1 = sym.lambdify((z, m_1, m_2, s_1, s_2), deriv_log_sym_m_1, modules=['numpy', 'sympy'])
deriv_log_m_2 = sym.lambdify((z, m_1, m_2, s_1, s_2), deriv_log_sym_m_2, modules=['numpy', 'sympy'])
</code></pre>
<p>From these functions, I define a cost function to optimize:</p>
<pre><code>def cost_function(x, *args):
m_1, m_2, s_1, s_2 = x
print(args[0])
T1 = np.sum([deriv_log_m_1(y, m_1, m_2, s_1, s_2) for y in args[0]])
T2 = np.sum([deriv_log_m_2(y, m_1, m_2, s_1, s_2) for y in args[0]])
T3 = np.sum([deriv_log_m_1(y, m_1, m_2, s_1, s_2) for y in args[0]])
T4 = np.sum([deriv_log_m_1(y, m_1, m_2, s_1, s_2) for y in args[0]])
return T1 + T2 + T3 + T4
</code></pre>
<p>My function <code>cost_function</code> works as expected: </p>
<pre><code>a = 48.7161
b = 16.3156
c = 17.0882
d = 7.0556
z = [0.5, 1, 2, 1.2, 3]
test = cost_function(np.array([a, b, c, d]).astype(np.float32), z)
</code></pre>
<p>However, when I try to optimize it:</p>
<pre><code>from scipy.optimize import fmin_powell
res = fmin_powell(cost_function, x0=np.array([a, b, c, d], dtype=np.float32), args=(z, ))
</code></pre>
<p>It raises the following error:</p>
<pre><code>AttributeError: 'Float' object has no attribute 'sqrt'
</code></pre>
<p>I do not understand why such an error appears as my <code>cost_function</code> alone does not raise any error.</p>
| 4 |
2016-09-20T10:31:19Z
| 39,592,089 |
<p>The solution was, and I do not know why, to cast the inputs to <code>numpy.float32</code>:</p>
<pre><code>m_1 = np.float32(m_1)
m_2 = np.float32(m_2)
s_1 = np.float32(s_1)
s_2 = np.float32(s_2)
</code></pre>
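<p>Presumably the casts go at the top of <code>cost_function</code>, before the values reach the lambdified functions; a sketch:</p>
<pre><code>def cost_function(x, *args):
    # cast the scalars scipy passes in before they reach the sympy lambdas
    m_1, m_2, s_1, s_2 = [np.float32(v) for v in x]
    # ... rest of the cost function unchanged ...
</code></pre>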
| 2 |
2016-09-20T10:46:21Z
|
[
"python",
"numpy",
"scipy",
"sympy"
] |
Checking a .txt file for an input by the user (Python 2.7.11)
| 39,591,938 |
<p>So I am currently creating a quiz program which requires really basic authentication. This is the pseudocode that I have:</p>
<pre><code>Start
Open UserCredentials.txt (This is where the user ID is stored)
Get UserID
If UserID is in UserCredetials.txt( This is the part that i could not figure out how to do)
..Passtries = 0
..While passtries != 1
....Get password
....If password is UserCredentials.txt( The other part which i couldnt do.)
......Passtries = 1 (Correct password, it will exit the loop)
....else
......Passtries = 0 (wrong password, it will continue looping)
....WhileEnd
</code></pre>
<p>So the problem here is that I could not figure out how to check the UserCredentials.txt file based on what the user has inputted, and how to write the code for it in Python. (Python 2.7.11) </p>
<p>Note that in the UserCredentials.txt, the userID and password are stored side by side [Izzaz abc123] </p>
<p>Also I am still kind of new to Python, so go easy on me :)<br>
Any help is appreciated.</p>
| -2 |
2016-09-20T10:37:49Z
| 39,592,545 |
<p>You should start to use a <code>dict</code> as the data container and XML or JSON for the formatting:</p>
<pre><code>UserCredentials.xml
members_dict:
User(Name = "izzaz", Passwort: "abc123")
User(Name = "isssz", Passwort: "asassa")
.start
.open UserCredentials.xml/json
.get members_users
.if user_id_input in members_users continue
.if user_password_input == members[user_id_input][password]:
</code></pre>
<p>It's only a guide to how it can work; more information: <a href="https://docs.python.org/2/library/collections.html" rel="nofollow">OrderedDict</a>, <a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow">XML parsing</a>,
<a href="http://stackoverflow.com/questions/3605680/creating-a-simple-xml-file-using-python">creating a simple XML file</a></p>
| 0 |
2016-09-20T11:10:09Z
|
[
"python"
] |
Python MemoryError when 'stacking' arrays
| 39,592,117 |
<p>I am writing code to add data along the length of a numpy array (for combining satellite data records). In order to do this my code reads two arrays and then uses the function </p>
<pre><code>def swath_stack(array1, array2):
"""Takes two arrays of swath data and compares their dimensions.
The arrays should be equally sized in dimension 1. If they aren't the
function appends the smallest along dimension 0 until they are.
The arrays are then stacked on top of each other."""
if array1.shape[1] > array2.shape[1]:
no_lines = array1.shape[1] - array2.shape[1]
a = np.zeros((array2.shape[0], no_lines))
a.fill(-999.)
new_array = np.hstack((array2, a))
mask = np.zeros(new_array.shape, dtype=int)
mask[np.where(new_array==-999.)] = 1
array2 = ma.masked_array(new_array, mask=mask)
elif array1.shape[1] < array2.shape[1]:
no_lines = array2.shape[1] - array1.shape[1]
a = np.zeros((array1.shape[0], no_lines))
a.fill(-999.)
new_array = np.hstack((array1, a))
mask = np.zeros(new_array.shape, dtype=int)
mask[np.where(new_array==-999.)] = 1
array1 = ma.masked_array(new_array, mask=mask)
return np.vstack((array1, array2))
</code></pre>
<p>to make one array of the two in the line</p>
<pre><code>window_data = swath_stack(window_data, stack_data)
</code></pre>
<p>In the event that the arrays under consideration are equal in width, the swath_stack() function reduces to np.vstack(). My problem is that I keep encountering a <code>MemoryError</code> during this stage. I know that in the case of arithmetic operators it is more memory efficient to do the arithmetic in place (i.e. <code>array1 += array2</code> as opposed to <code>array1 = array1 + array2</code>), but I don't know how to avoid this kind of memory issue whilst using my swath_stack() function. </p>
<p>Can anyone please help?</p>
| -1 |
2016-09-20T10:48:16Z
| 39,599,489 |
<p>I changed your last line to <code>np.ma.vstack</code>, and got</p>
<pre><code>In [474]: swath_stack(np.ones((3,4)),np.zeros((3,6)))
Out[474]:
masked_array(data =
[[1.0 1.0 1.0 1.0 -- --]
[1.0 1.0 1.0 1.0 -- --]
[1.0 1.0 1.0 1.0 -- --]
[0.0 0.0 0.0 0.0 0.0 0.0]
[0.0 0.0 0.0 0.0 0.0 0.0]
[0.0 0.0 0.0 0.0 0.0 0.0]],
mask =
[[False False False False True True]
[False False False False True True]
[False False False False True True]
[False False False False False False]
[False False False False False False]
[False False False False False False]],
fill_value = 1e+20)
</code></pre>
<p>This preserves the masking that you created during the padding.</p>
<p>The masked padding doubles the memory use of the intermediate array.</p>
<p>Do you get memory errors when using 2 equal size arrays? I.e. with just the plain <code>vstack</code>? There's no way of doing in-place stacking. It must create one or more new arrays. Arrays have a fixed size, so can't grow in-place. And the final array must have a contiguous data buffer, so can't use the buffers of the originals.</p>
<p>It won't use masking, but <code>np.pad</code> might make padding a bit easier.</p>
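<p>For example, padding <code>array2</code> with your fill value via <code>np.pad</code>, a sketch for the first branch of <code>swath_stack</code>:</p>
<pre><code>no_lines = array1.shape[1] - array2.shape[1]
array2 = np.pad(array2, ((0, 0), (0, no_lines)),
                mode='constant', constant_values=-999.)
</code></pre>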
| 0 |
2016-09-20T16:36:31Z
|
[
"python",
"arrays",
"numpy",
"memory",
"satellite-image"
] |
Regex to extract date and time
| 39,592,176 |
<p>I am using nltk regex for date and time extraction:</p>
<pre><code>text = 'LEts have quick meeting on Wednesday at 9am'
week_day = "(monday|tuesday|wednesday|thursday|friday|saturday|sunday)"
month = "(january|february|march|april|may|june|july|august|september| \
october|november|december)"
dmy = "(year|day|week|month)"
exp2 = "(this|next|last)"
regxp2 = "(" + exp2 + " (" + dmy + "|" + week_day + "|" + month + "))"
reg2 = re.compile(regxp2, re.IGNORECASE)
found = reg2.findall(text)
found = [a[0] for a in found if len(a) > 1]
for timex in found:
timex_found.append(timex)
print timex_found
</code></pre>
<p>Everything looks right to me, but it does not tag <code>Wednesday</code>. Any clue? What change should I make so that it matches "wednesday" as well as "this wednesday"?</p>
<p>Will</p>
<pre><code>regxp2 = "((this|next|last)? (" + dmy + "| " + week_day + "| " + month+ "))"
</code></pre>
<p>consider my case?</p>
| -1 |
2016-09-20T10:51:47Z
| 39,592,298 |
<p>The regex is looking for <code>((this|next|last) (dmy|weekday|month))</code>.</p>
<p>Your input doesn't have a match.</p>
<p>Some alternatives that may work:</p>
<pre><code>((this|next|last|on) (dmy|weekday|month))
((this|next|last)? (dmy|weekday|month))
</code></pre>
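<p>A quick sketch of the optional-qualifier variant against the original sentence:</p>
<pre><code>import re

text = 'LEts have quick meeting on Wednesday at 9am'
week_day = "(monday|tuesday|wednesday|thursday|friday|saturday|sunday)"
regxp2 = "((this|next|last)? " + week_day + ")"
reg2 = re.compile(regxp2, re.IGNORECASE)
print reg2.findall(text)
# [(' Wednesday', '', 'Wednesday')] -- matches even without this/next/last
</code></pre>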
| 3 |
2016-09-20T10:58:21Z
|
[
"python",
"regex"
] |
PYTHON: Summing up elements from one list based on indices in another list
| 39,592,237 |
<p>So here is what I am trying to achieve in Python:</p>
<ul>
<li>I have a list "A" with unsorted and repeated indices. </li>
<li>I have a list "B" with some float values</li>
<li>Length A = Length B</li>
<li>I want list "C" with summed values of B based on the repeated indices in A in a sorted ascending manner.</li>
</ul>
<p>Example:</p>
<p><code>A=[0 , 1 , 0 , 3 , 2 , 1 , 2] (indicates unsorted and repeated indices)</code></p>
<p><code>B=[25 , 10 , 15 , 10 , 5 , 30 , 50] (values to be summed)</code></p>
<p><code>C=[25+15 , 10+30 , 5+50 , 15] (summed values in a sorted manner)</code></p>
<p>So far I know how to do the sorting bit with:</p>
<p><code>C= zip(*sorted(zip(A, B)))</code></p>
<p>Getting the result:</p>
<p><code>[(0, 0, 1, 1, 2, 2, 3), (15, 25, 10, 30, 5, 50, 10)]</code></p>
<p>But I do not know how to do the sum.</p>
<p>What would be a good way to create list C?</p>
| 0 |
2016-09-20T10:55:14Z
| 39,592,567 |
<p>Use <code>zip()</code> in combination with a <code>dict</code>:</p>
<pre><code>A = [0 , 1 , 0 , 3 , 2 , 1 , 2]
B = [25 , 10 , 15 , 10 , 5 , 30 , 50]
sums = {}
for key, value in zip(A,B):
try:
sums[key] += value
except KeyError:
sums[key] = value
print(sums)
# {0: 40, 1: 40, 2: 55, 3: 10}
</code></pre>
<p>And see <a href="http://ideone.com/OxbT5s" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
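<p>If you then need the question's list <code>C</code> in ascending index order, a short follow-up sketch:</p>
<pre><code>C = [sums[key] for key in sorted(sums)]
print(C)
# [40, 40, 55, 10]
</code></pre>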
| 2 |
2016-09-20T11:11:34Z
|
[
"python",
"list"
] |
PYTHON: Summing up elements from one list based on indices in another list
| 39,592,237 |
<p>So here is what I am trying to achieve in Python:</p>
<ul>
<li>I have a list "A" with unsorted and repeated indices. </li>
<li>I have a list "B" with some float values</li>
<li>Length A = Length B</li>
<li>I want list "C" with summed values of B based on the repeated indices in A in a sorted ascending manner.</li>
</ul>
<p>Example:</p>
<p><code>A=[0 , 1 , 0 , 3 , 2 , 1 , 2] (indicates unsorted and repeated indices)</code></p>
<p><code>B=[25 , 10 , 15 , 10 , 5 , 30 , 50] (values to be summed)</code></p>
<p><code>C=[25+15 , 10+30 , 5+50 , 15] (summed values in a sorted manner)</code></p>
<p>So far I know how to do the sorting bit with:</p>
<p><code>C= zip(*sorted(zip(A, B)))</code></p>
<p>Getting the result:</p>
<p><code>[(0, 0, 1, 1, 2, 2, 3), (15, 25, 10, 30, 5, 50, 10)]</code></p>
<p>But I do not know how to do the sum.</p>
<p>What would be a good way to create list C?</p>
| 0 |
2016-09-20T10:55:14Z
| 39,592,816 |
<p>You could use <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow">groupby</a>, if the order matters:</p>
<pre><code>In [1]: A=[0 , 1 , 0 , 3 , 2 , 1 , 2]
In [2]: B=[25 , 10 , 15 , 10 , 5 , 30 , 50]
In [3]: from itertools import groupby
In [4]: from operator import itemgetter
In [5]: C = [sum(map(itemgetter(1), group))
...: for key, group in groupby(sorted(zip(A, B)),
...: key=itemgetter(0))]
In [6]: C
Out[6]: [40, 40, 55, 10]
</code></pre>
<p>or <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict(float)</code></a>, if it does not:</p>
<pre><code>In [10]: from collections import defaultdict
In [11]: res = defaultdict(float)
In [12]: for k, v in zip(A, B):
...: res[k] += v
...:
In [13]: res
Out[13]: defaultdict(float, {0: 40.0, 1: 40.0, 2: 55.0, 3: 10.0})
</code></pre>
<p>Note that <code>dict</code>s in python are unordered (you are not to trust any CPython implementation details).</p>
| 0 |
2016-09-20T11:24:51Z
|
[
"python",
"list"
] |
PYTHON: Summing up elements from one list based on indices in another list
| 39,592,237 |
<p>So here is what I am trying to achieve in Python:</p>
<ul>
<li>I have a list "A" with unsorted and repeated indices. </li>
<li>I have a list "B" with some float values</li>
<li>Length A = Length B</li>
<li>I want list "C" with summed values of B based on the repeated indices in A in a sorted ascending manner.</li>
</ul>
<p>Example:</p>
<p><code>A=[0 , 1 , 0 , 3 , 2 , 1 , 2] (indicates unsorted and repeated indices)</code></p>
<p><code>B=[25 , 10 , 15 , 10 , 5 , 30 , 50] (values to be summed)</code></p>
<p><code>C=[25+15 , 10+30 , 5+50 , 15] (summed values in a sorted manner)</code></p>
<p>So far I know how to do the sorting bit with:</p>
<p><code>C= zip(*sorted(zip(A, B)))</code></p>
<p>Getting the result:</p>
<p><code>[(0, 0, 1, 1, 2, 2, 3), (15, 25, 10, 30, 5, 50, 10)]</code></p>
<p>But I do not know how to do the sum.</p>
<p>What would be a good way to create list C?</p>
| 0 |
2016-09-20T10:55:14Z
| 39,594,113 |
<p>It is actually a bit unclear what you want, but if you want them to be <em>indexed</em> by whatever the number is, you shouldn't even use a list, but a Counter instead:</p>
<pre><code>>>> from collections import Counter
>>> c = Counter()
>>> A = [0, 1, 0, 3, 2, 1, 2]
>>> B = [25, 10, 15, 10 , 5, 30, 50]
>>> for k, v in zip(A, B):
... c[k] += v
...
>>> c
Counter({2: 55, 0: 40, 1: 40, 3: 10})
>>> c[0]
40
</code></pre>
<p>If you really want a list, you can use</p>
<pre><code>>>> [i[1] for i in sorted(c.items())]
</code></pre>
<p>but then any missing key would cause the following values to shift to earlier positions, which may or may not be what you want.</p>
| 0 |
2016-09-20T12:28:39Z
|
[
"python",
"list"
] |
Celery - No module named 'celery.datastructures'
| 39,592,360 |
<p>I am using <code>Django 1.10</code> + <code>celery==4.0.0rc3</code> + <code>django-celery</code> at commit <code>79d9689b62db3d54ebd0346e00287f91785f6355</code>.</p>
<p>My settings are:</p>
<pre><code>CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
# http://docs.celeryproject.org/en/latest/getting-started/brokers/redis.html#visibility-timeout
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 259200} # 3 days
</code></pre>
<p>In my <code>tasks.py</code> I have:</p>
<pre><code>@task(queue='assign_rnal_id')
def assign_rnal_id_to_mongo(rnal_id, mongo_id):
print ("something")
return False
</code></pre>
<p>In my Django model I am overriding the save method to send a task to Celery:</p>
<pre><code>def save(self, *args, **kwargs):
super(Suggested, self).save(*args, **kwargs)
assign_rnal_id_to_mongo.delay(rnal_id=self.id, mongo_id=self.raw_data['_id'])
</code></pre>
<p>When I save my model object I get <code>No module named 'celery.datastructures'</code>.</p>
<p>Any ideas? I have similar code working with older versions of Django and Celery; did something change?</p>
<p>Thanks</p>
| 0 |
2016-09-20T11:01:38Z
| 39,592,658 |
<p>I was able to find the answer here <a href="https://github.com/celery/celery/issues/3303#issuecomment-246780116" rel="nofollow">https://github.com/celery/celery/issues/3303#issuecomment-246780116</a></p>
<p>Basically, <code>django-celery</code> does not support Celery 4.0 yet, so I downgraded to <code>celery==3.1.23</code> and it now works.</p>
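<p>For example, a sketch of the downgrade:</p>
<pre><code>pip install 'celery==3.1.23'
</code></pre>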
| 0 |
2016-09-20T11:16:50Z
|
[
"python",
"django",
"celery",
"django-celery"
] |
Indentation Errors in Python IDLE
| 39,592,427 |
<p>I'm just following the basics of Python and have been using IDLE, as I find it handy to experiment with scripts in real time. </p>
<p>While I can run this script with no issues as a file, I just cannot include the last print statement in IDLE! I have tried one indentation, 4 spaces, and no indentation. Please explain what I'm doing wrong. </p>
<pre><code>while True:
print ('Who are you?')
name = input()
if name != 'Joe':
continue
print('Hello, Joe. What is the password? (It is a fish.)')
password = input()
if password == 'swordfish':
break
print('value')
SyntaxError: invalid syntax
</code></pre>
| 0 |
2016-09-20T11:05:04Z
| 39,592,620 |
<p>You can type only one statement at a time. The <code>while</code> loop counts as a single statement together with its body, and since the loop is one code block everything inside it is okay.</p>
<p>The <code>print</code> at the end, however, is a new statement. You have to run the <code>while</code> loop first and type the <code>print</code> afterwards in the interpreter.</p>
| 1 |
2016-09-20T11:14:24Z
|
[
"python",
"indentation"
] |
Indentation Errors in Python IDLE
| 39,592,427 |
<p>I'm just following the basics of Python and have been using IDLE, as I find it handy to experiment with scripts in real time. </p>
<p>While I can run this script with no issues as a file, I just cannot include the last print statement in IDLE! I have tried one indentation, 4 spaces, and no indentation. Please explain what I'm doing wrong. </p>
<pre><code>while True:
print ('Who are you?')
name = input()
if name != 'Joe':
continue
print('Hello, Joe. What is the password? (It is a fish.)')
password = input()
if password == 'swordfish':
break
print('value')
SyntaxError: invalid syntax
</code></pre>
| 0 |
2016-09-20T11:05:04Z
| 39,592,629 |
<p>You can't paste more than one statement at a time into IDLE. The problem has nothing to do with indentation. The <code>while</code> loop constitutes one compound statement and the final <code>print</code> another.</p>
<p>The following also has issues when you try to paste it all at once into IDLE:</p>
<pre><code>print('A')
print('B')
</code></pre>
<p>The fact that this has a problem shows even more clearly that the issue isn't one of indentation.</p>
| 1 |
2016-09-20T11:14:55Z
|
[
"python",
"indentation"
] |
Indentation Errors in Python IDLE
| 39,592,427 |
<p>I'm just following the basics of Python and have been using IDLE, as I find it handy to experiment with scripts in real time. </p>
<p>While I can run this script with no issues as a file, I just cannot include the last print statement in IDLE! I have tried one indentation, 4 spaces, and no indentation. Please explain what I'm doing wrong. </p>
<pre><code>while True:
print ('Who are you?')
name = input()
if name != 'Joe':
continue
print('Hello, Joe. What is the password? (It is a fish.)')
password = input()
if password == 'swordfish':
break
print('value')
SyntaxError: invalid syntax
</code></pre>
| 0 |
2016-09-20T11:05:04Z
| 39,592,733 |
<p>You have an indentation error in line 10; you just need to add a space:</p>
<pre><code>while True:
print ('Who are you?')
name = input()
if name != 'Joe':
continue
print('Hello, Joe. What is the password? (It is a fish.)')
password = input()
if password == 'swordfish':
break
print('value')
</code></pre>
| 0 |
2016-09-20T11:20:30Z
|
[
"python",
"indentation"
] |
Indentation Errors in Python IDLE
| 39,592,427 |
<p>I'm just following the basics of Python and have been using IDLE, as I find it handy to experiment with scripts in real time. </p>
<p>While I can run this script with no issues as a file, I just cannot include the last print statement in IDLE! I have tried one indentation, 4 spaces, and no indentation. Please explain what I'm doing wrong. </p>
<pre><code>while True:
print ('Who are you?')
name = input()
if name != 'Joe':
continue
print('Hello, Joe. What is the password? (It is a fish.)')
password = input()
if password == 'swordfish':
break
print('value')
SyntaxError: invalid syntax
</code></pre>
| 0 |
2016-09-20T11:05:04Z
| 39,592,831 |
<p>As others have kindly pointed out, Python IDLE only allows one block of code to be executed at a time. In this instance the while loop is a 'block'. The last print statement is outside this block, thus it cannot be executed.</p>
| 0 |
2016-09-20T11:25:52Z
|
[
"python",
"indentation"
] |
Package version difference between pip and OS?
| 39,592,490 |
<p>I have a Debian OS with Python version 2.7 installed on it. But I have a strange issue with the package <code>six</code>: I want to use version 1.10. </p>
<p>I have installed six 1.10 via pip:</p>
<pre><code>$ pip list
...
six (1.10.0)
</code></pre>
<p>But when I run the following script</p>
<pre><code>python -c "import six; print(six.__version__)"
</code></pre>
<p>it says <code>1.8.0</code></p>
<p>The reason is that the version installed by the OS is different:</p>
<pre><code>$ sudo apt-cache policy python-six
python-six:
Installed: 1.8.0-1
Candidate: 1.8.0-1
Version table:
1.9.0-3~bpo8+1 0
100 http://172.24.70.103:9999/jessie-backports/ jessie-backports/main amd64 Packages
*** 1.8.0-1 0
500 ftp://172.24.70.103/mirror/jessie-debian/ jessie/main amd64 Packages
500 http://172.24.70.103:9999/jessie-debian/ jessie/main amd64 Packages
100 /var/lib/dpkg/status
</code></pre>
<p><strong>How do I force Python to use the package installed via pip?</strong></p>
| 0 |
2016-09-20T11:07:59Z
| 39,592,939 |
<p>You can use <code>virtualenv</code> for this.</p>
<pre><code>pip install virtualenv
cd project_folder
virtualenv venv
</code></pre>
<p><code>virtualenv venv</code> will create a folder in the current directory which will contain the Python executable files, and a copy of the pip library which you can use to install other packages. The name of the virtual environment (in this case, it was venv) can be anything; omitting the name will place the files in the current directory instead.</p>
<p>Set the desired Python interpreter: </p>
<pre><code>virtualenv -p /usr/bin/python2.7 venv
</code></pre>
<p>Activate the environment</p>
<pre><code>source venv/bin/activate
</code></pre>
<p>From now on, any package that you install using pip will be placed in the <code>venv</code> folder, isolated from the <strong>global</strong> Python installation.</p>
<pre><code>pip install six
</code></pre>
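<p>With the environment active you can check that Python now picks up the pip-installed version; the test from the question should report it:</p>
<pre><code>(venv) $ python -c "import six; print(six.__version__)"
1.10.0
</code></pre>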
<p>Now you run your code. When you have finished, simply deactivate <code>venv</code>:</p>
<pre><code>deactivate
</code></pre>
<p>See also <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">the original resources</a>.</p>
| 1 |
2016-09-20T11:31:13Z
|
[
"python",
"pip",
"six"
] |
exe file created with pyinstaller not recognizing the external sources
| 39,592,507 |
<p>I have successfully created an exe file with PyInstaller. However, when I run the exe file and fill in the path, file names &amp; sheet names in the message box that pops up, the exe file says that I have mistyped either a filename or a sheetname. I obviously typed this message myself, and therefore my question is: why does the exe file have trouble finding my files, whereas when I do exactly the same in PyCharm there is no trouble running it?</p>
<pre><code> import pandas as pd
import numpy as np
import tkinter as tk
from tkinter import messagebox
def create_file():
try:
FILEPATH = e0.get()
w_filename = e1.get()
x_filename = e2.get()
y_filename = e3.get()
z_inventory_filename = e4.get()
aa_active_filename = e5.get()
ab_test_filename = e6.get()
output_filename = e7.get()
w_sheetname = e1_sheet.get()
x_sheetname = e2_sheet.get()
y_sheetname = e3_sheet.get()
        z_inventory_sheetname = e4_sheet.get()
        aa_active_sheetname = e5_sheet.get()
ab_test_sheetname = e6_sheet.get()
except:
messagebox.showinfo("Error", "Please fill out all fields.")
try:
w= pd.read_excel(FILEPATH +"\\"+ w_filename, sheetname=w_sheetname, header=0)
x = pd.read_excel(FILEPATH +"\\"+ x_filename, sheetname=x_sheetname, header=0)
y = pd.read_excel(FILEPATH +"\\"+y_filename, sheetname=y_sheetname, header=0)
z_inventory = pd.read_excel(FILEPATH +"\\"+ z_inventory_filename, sheetname=z_inventory_sheetname, header=0)
aa_active = pd.read_excel(FILEPATH +"\\"+ aa_active_filename, sheetname=aa_active_sheetname, header=0)
ab_test_ready = pd.read_excel(FILEPATH +"\\"+ ab_test_filename, sheetname=ab_test_sheetname, header=0)
except:
messagebox.showinfo("Error", "You have mistyped either a filename or a sheetname.")
</code></pre>
<p>Hope anyone has a specific answer to this.</p>
<p>Thanks,</p>
<p>Jeroen</p>
| 0 |
2016-09-20T11:08:45Z
| 39,595,299 |
<p>Define a function to translate paths:</p>
<pre><code>import os, sys
def resource_path(relative_path):
if hasattr(sys, "_MEIPASS"):
base_path = sys._MEIPASS
else:
base_path = os.path.abspath(".")
return os.path.join(base_path, relative_path)
</code></pre>
<p>Use this function to wrap your file paths, for example: </p>
<pre><code>bingo_music = resource_path('resources/bingo.wav')
demo_file = resource_path('texts/demo.txt')
</code></pre>
<p>In your <code>.spec</code> file, put a list in <code>exe = EXE()</code>:</p>
<pre><code>[('resources/bingo.wav', r'C:\Users\Administrator\resources\bingo.wav', 'music'),
[('texts/demo.txt', r'C:\Users\Administrator\texts\demo.txt', 'text'),],
</code></pre>
<p>Write every file you use in your project as a tuple <code>(relative_path, absolute_path, folder_name_in_bundled_app)</code>; the third element is the name of the folder in the bundled app that your file will be copied into. Then the files will work properly.</p>
| 0 |
2016-09-20T13:20:43Z
|
[
"python",
"python-3.x",
"exe",
"pyinstaller"
] |
temp files/directories with custom names?
| 39,592,524 |
<p>How do I create temporary files/directories with user-defined names in Python? I am aware of <a href="https://docs.python.org/3/library/tempfile.html" rel="nofollow">tempfile</a>. However, I couldn't see any function that takes a filename as an argument.</p>
<p>Note: I need this for unit testing the glob(file name pattern matching) functionality on a temporary directory containing temporary files rather than using the actual file system. </p>
| 0 |
2016-09-20T11:09:17Z
| 39,592,704 |
<p>You can use <code>open()</code> with whatever file name you need.</p>
<p>e.g.</p>
<pre><code>open(name, 'w')
</code></pre>
<p><a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">Open</a></p>
<p>Or</p>
<pre><code>import os
import tempfile
print 'Building a file name yourself:'
filename = '/tmp/guess_my_name.%s.txt' % os.getpid()
temp = open(filename, 'w+b')
try:
print 'temp:', temp
print 'temp.name:', temp.name
finally:
temp.close()
# Clean up the temporary file yourself
os.remove(filename)
</code></pre>
<p>Or you can create a temp directory using <code>mkdtemp</code> and then use <code>open</code> to create a file in the temporary directory. </p>
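<p>For example, a minimal sketch combining <code>mkdtemp</code> with chosen file names, handy for the glob tests mentioned in the question:</p>
<pre><code>import os
import shutil
import tempfile

temp_dir = tempfile.mkdtemp()
for name in ('a_001.txt', 'a_002.txt', 'b_001.txt'):
    open(os.path.join(temp_dir, name), 'w').close()

# ... exercise the glob() pattern matching against temp_dir here ...

shutil.rmtree(temp_dir)  # clean up the whole directory at once
</code></pre>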
| 0 |
2016-09-20T11:19:23Z
|
[
"python",
"py.test",
"temporary-files",
"pyunit"
] |
PyQt5 OpenGL swapBuffers very slow
| 39,592,653 |
<p>I am trying to make a small application using PyQt5 and PyOpenGL. Everything works fine; however, rendering takes way too long even with only one sphere. I tried different routes to try and optimise the speed of the app, and right now I am using a simple QWindow with an OpenGLSurface. </p>
<p>I managed to figure out that it is the context.swapBuffers call that takes a long time to complete and varies between approx. 0.01s (which is fine) and 0.05s (which is way too long), when displaying 1 sphere with some shading and 240 vertices. </p>
<p>Now my questions are the following: Is this normal? If so, is there a way to speed this process up, or is this related to how PyQt works, since it is a Python wrapper around the library? Basically: is there any way for me to continue developing this program without needing to learn C++? It's quite a simple application that just needs to visualise some atomic structure and be able to manipulate it. </p>
<p>Is there another GUI toolkit I could maybe use to have less overhead when working with OpenGL from PyOpenGL?</p>
<p>This is the definition that does the rendering:</p>
<pre><code>def renderNow(self):
if not self.isExposed():
return
self.m_update_pending = False
needsInitialize = False
if self.m_context is None:
self.m_context = QOpenGLContext(self)
self.m_context.setFormat(self.requestedFormat())
self.m_context.create()
needsInitialize = True
self.m_context.makeCurrent(self)
if needsInitialize:
self.m_gl = self.m_context.versionFunctions()
self.m_gl.initializeOpenGLFunctions()
self.initialize()
self.render()
self.m_context.swapBuffers(self)
if self.m_animating:
self.renderLater()
</code></pre>
<p>I am using OpenGL directly without using the Qt OpenGL definitions; the format for the surface is given by:</p>
<pre><code>fmt = QSurfaceFormat()
fmt.setVersion(4, 2)
fmt.setProfile(QSurfaceFormat.CoreProfile)
fmt.setSamples(4)
fmt.setSwapInterval(1)
QSurfaceFormat.setDefaultFormat(fmt)
</code></pre>
<p><strong>Edit1:</strong>
Some more clarification on how my code works: </p>
<pre><code>def render(self):
t1 = time.time()
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
wtvMatrix = self.camera.get_wtv_mat()
transformMatrix = matrices.get_projection_matrix(60, self.width() / self.height(), 0.1, 30, matrix=wtvMatrix)
transformMatrixLocation = glGetUniformLocation(self.shader,"transformMatrix")
glUniformMatrix4fv(transformMatrixLocation,1,GL_FALSE,transformMatrix)
eye_pos_loc = glGetUniformLocation(self.shader, "eye_world_pos0")
glUniform3f(eye_pos_loc, self.camera.position[0], self.camera.position[1], self.camera.position[2])
glDrawElementsInstanced(GL_TRIANGLES,self.num_vertices,GL_UNSIGNED_INT,None,self.num_objects)
print("drawing took:{}".format(time.time()-t1))
self.frame+=1
t1=time.time()
self.m_context.swapBuffers(self)
print('swapping buffers took:{}'.format(time.time()-t1))
</code></pre>
<p>This is the only drawElementsInstanced that I call. Shaders are set up as follows (sorry for the mess):</p>
<pre><code>VERTEX_SHADER = compileShader("""#version 410
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_colour;
layout(location = 2) in vec3 vertex_normal;
layout(location = 3) in mat4 model_mat;
layout(location = 7) in float mat_specular_intensity;
layout(location = 8) in float mat_specular_power;
uniform mat4 transformMatrix;
uniform vec3 eye_world_pos0;
out vec3 normal0;
out vec3 colour;
out vec3 world_pos;
out float specular_intensity;
out float specular_power;
out vec3 eye_world_pos;
void main () {
colour = vertex_colour;
normal0 = (model_mat*vec4(vertex_normal,0.0)).xyz;
world_pos = (model_mat*vec4(vertex_position,1.0)).xyz;
eye_world_pos = eye_world_pos0;
specular_intensity = mat_specular_intensity;
specular_power = mat_specular_power;
gl_Position = transformMatrix*model_mat*vec4(vertex_position,1.0);
}""", GL_VERTEX_SHADER)
FRAGMENT_SHADER = compileShader("""#version 410
in vec3 colour;
in vec3 normal0;
in vec3 world_pos;
in float specular_intensity;
in float specular_power;
in vec3 eye_world_pos;
out vec4 frag_colour;
struct directional_light {
vec3 colour;
float amb_intensity;
float diff_intensity;
vec3 direction;
};
uniform directional_light gdirectional_light;
void main () {
vec4 ambient_colour = vec4(gdirectional_light.colour * gdirectional_light.amb_intensity,1.0f);
vec3 light_direction = -gdirectional_light.direction;
vec3 normal = normalize(normal0);
float diffuse_factor = dot(normal,light_direction);
vec4 diffuse_colour = vec4(0,0,0,0);
vec4 specular_colour = vec4(0,0,0,0);
if (diffuse_factor>0){
diffuse_colour = vec4(gdirectional_light.colour,1.0f) * gdirectional_light.diff_intensity*diffuse_factor;
vec3 vertex_to_eye = normalize(eye_world_pos-world_pos);
vec3 light_reflect = normalize(reflect(gdirectional_light.direction,normal));
float specular_factor = dot(vertex_to_eye, light_reflect);
if(specular_factor>0) {
specular_factor = pow(specular_factor,specular_power);
specular_colour = vec4(gdirectional_light.colour*specular_intensity*specular_factor,1.0f);
}
}
frag_colour = vec4(colour,1.0)*(ambient_colour+diffuse_colour+specular_colour);
}""", GL_FRAGMENT_SHADER)
</code></pre>
<p>Now the code that I use when I want to rotate the scene is the following (the camera updates etc are as normally done afaik):</p>
<pre><code>def mouseMoveEvent(self, event):
dx = event.x() - self.lastPos.x()
dy = event.y() - self.lastPos.y()
self.lastPos = event.pos()
if event.buttons() & QtCore.Qt.RightButton:
self.camera.mouse_update(dx,dy)
elif event.buttons()& QtCore.Qt.LeftButton:
pass
self.renderNow()
</code></pre>
<p>Some final info: all vertex info needed in the shaders is given through a VAO that I initialized and bound earlier in the initialize definition. It does not contain too many objects (I'm just testing, and it uses an icosahedron with 2 subdivisions to render a sphere; also, I removed the duplicate vertices, but that did not do anything, since that really should not be the bottleneck, I think). </p>
<p>To answer some questions: I did try various different versions of OpenGL just for giggles (no changes), tried without vsync (nothing changed), and tried different sample sizes (no changes). </p>
<p><strong>Edit2:</strong>
Might be a clue: the swapBuffers takes around 0.015s most of the time, but when I start moving around a lot, it stutters and jumps up to 0.05s for some renders. Why is this happening? From what I understand, every render has to process all the data anyways?</p>
| 0 |
2016-09-20T11:16:28Z
| 39,594,889 |
<p>By the way OpenGL works, the rendering commands you submit are sent to the GPU and executed asynchronously (frankly even the process of sending them to the GPU is asynchronous). When you request to display the back buffer by a call to <code>swapBuffers</code> the display driver must wait till the content of the back buffer finishes rendering (i.e. all previously issued commands finish executing), and only then it can swap the buffers.<sup>†</sup></p>
<p>If you experience a low frame rate then you should optimize your rendering code, that is, the stuff you submit to the GPU. Switching to C++ will not help you here (though it would be a great idea independently).</p>
<p><strong>EDIT:</strong> You say that when you do nothing then your <code>swapBuffers</code> executes in 0.015 seconds, which is suspiciously ~1/60th of a second. It implies that your rendering code is efficient enough to render at 60 FPS and you have no reason to optimize it yet. What probably happens is that your call to <code>renderNow()</code> from <code>mouseMoveEvent</code> causes re-rendering the scene more than 60 times per second, which is redundant. Instead you should call <code>renderLater()</code> in <code>mouseMoveEvent</code>, and restructure your code accordingly.</p>
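<p>A sketch of that restructuring of the mouse handler (everything else unchanged):</p>
<pre><code>def mouseMoveEvent(self, event):
    dx = event.x() - self.lastPos.x()
    dy = event.y() - self.lastPos.y()
    self.lastPos = event.pos()
    if event.buttons() & QtCore.Qt.RightButton:
        self.camera.mouse_update(dx, dy)
    self.renderLater()  # schedule a repaint instead of rendering synchronously
</code></pre>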
<p><strong>NOTE:</strong> you call <code>swapBuffers</code> twice, once in <code>render()</code> and once in <code>renderNow()</code> immediately after.</p>
<p><strong>DISCLAIMER:</strong> I'm not familiar with PyOpenGL. </p>
<hr>
<p>† <code>swapBuffer</code> may also execute asynchronously, but even then if the display driver swaps buffers faster than you can render you will eventually block on the <code>swapBuffer</code> call.</p>
| 1 |
2016-09-20T13:03:14Z
|
[
"python",
"qt",
"opengl",
"pyqt",
"pyopengl"
] |
Save data in associative model using one query in django
| 39,592,670 |
<p>I am working on the registration module in a Django project. For registering users I am using the auth_user table; to extend this table I have created one more model, Profile:</p>
<pre><code>from django.contrib.auth.models import User
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
start_date = models.DateField()
phone_number = models.CharField(max_length=12)
address = models.CharField(max_length=225)
subscription = models.BooleanField(default=False)
</code></pre>
<p>The Profile table has been created successfully. Now what I want is that when I submit the registration form, the fields related to the Profile model within the registration form should be inserted automatically after inserting the fields related to the auth_user model.
That means I don't want to first insert data into the auth_user model and then, after getting its id, insert data into the Profile table.
I want to insert the complete record in one query. Is that possible?</p>
| 0 |
2016-09-20T11:17:31Z
| 39,597,099 |
<p>I think you can define a registration form and override the form <code>save</code> method to save the <code>Profile</code> when creating the User model. Sample code for your reference:</p>
<pre><code>class RegistrationForm(forms.ModelForm):
start_date = forms.DateField()
phone_number = forms.CharField()
address = forms.CharField()
subscription = forms.BooleanField()
class Meta:
model = User
def save(self, commit=True):
instance = super(RegistrationForm, self).save(commit=commit)
        profile = Profile(user=instance,
                          start_date=self.cleaned_data['start_date'],
                          phone_number=self.cleaned_data['phone_number'],
                          address=self.cleaned_data['address'],
                          subscription=self.cleaned_data['subscription'])
profile.save()
return instance
def register(request):
if request.method == 'POST':
form = RegistrationForm(request.POST)
if form.is_valid():
user = form.save()
# do anything after user created
else:
            raise ValueError('form validation failed')
else:
# handling the GET method
</code></pre>
| 0 |
2016-09-20T14:41:52Z
|
[
"python",
"django",
"database"
] |
Numpy is adding dot(.) from CSV data
| 39,592,721 |
<p>I am reading CSV as:</p>
<pre><code>import numpy as np
features = np.genfromtxt('train.csv',delimiter=',',usecols=(1,2))
</code></pre>
<p>It outputs data as:</p>
<blockquote>
<p>[[-1. -1.] [ 1. -1.] [-1. 1.] [ 1. -1.]]. See the dot after <code>1</code> and <code>-1</code></p>
</blockquote>
<p><strong>train.csv</strong></p>
<pre><code>0,-1,-1
-1,1,-1
0,-1,1
1,1,-1
-1,1,-1
0,1,-1
</code></pre>
| -1 |
2016-09-20T11:20:10Z
| 39,595,715 |
<p>As stated in the comments: <code>np.genfromtxt</code> simply converts your data to floating-point numbers by default (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">see the default dtype argument in the function signature</a>). If you want to force the output to integers, just specify <code>dtype=np.int</code> in genfromtxt:</p>
<pre><code>features = np.genfromtxt('train.csv',delimiter=',',usecols=(1,2),dtype=np.int)
</code></pre>
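<p>With the <code>train.csv</code> from the question this should then yield integer output along the lines of:</p>
<pre><code>>>> features
array([[-1, -1],
       [ 1, -1],
       [-1,  1],
       [ 1, -1],
       [ 1, -1],
       [ 1, -1]])
</code></pre>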
| 1 |
2016-09-20T13:39:48Z
|
[
"python",
"numpy"
] |
How do I run a curl command in Python?
| 39,593,054 |
<p>This is the command I am running in <code>bash</code>:</p>
<pre><code>curl -d "username=xxxxxx" -d "password=xxxxx" <<address>> --insecure --silent
</code></pre>
<p>How can I run this using Python?</p>
| -2 |
2016-09-20T11:37:39Z
| 39,593,217 |
<p>Try <code>subprocess</code>:</p>
<pre><code>import subprocess
# ...
subprocess.call(["curl -d \"username=xxxxxx\" -d \"password=xxxxx\" https://xxxxx.com/logreg/login/ --insecure --silent" , shell=True)
</code></pre>
<p>I did not try what I wrote, but the essentials are here.</p>
<p>Check this page for more info : <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">Subprocess management</a></p>
| 0 |
2016-09-20T11:45:56Z
|
[
"python",
"curl"
] |
How do I run a curl command in Python?
| 39,593,054 |
<p>This is the command I am running in <code>bash</code>:</p>
<pre><code>curl -d "username=xxxxxx" -d "password=xxxxx" <<address>> --insecure --silent
</code></pre>
<p>How can I run this using Python?</p>
| -2 |
2016-09-20T11:37:39Z
| 39,596,626 |
<p>I hope you don't want to run <code>curl</code> explicitly, but just want to get the same result. Since the <code>-d</code> options make curl send a POST request, the closest equivalent with <code>requests</code> is:</p>
<pre><code>>>> import requests
>>> r = requests.post('https://example.com', data={'username': 'xxxxxx', 'password': 'xxxxx'}, verify=False)
>>> print(r.status_code)
200
</code></pre>
| 0 |
2016-09-20T14:20:32Z
|
[
"python",
"curl"
] |
Double Conversion to decimal value IEEE-754 in Python
| 39,593,087 |
<p>I am trying to convert my 64-bit <code>double</code> type data to a decimal value. I am following <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow">https://en.wikipedia.org/wiki/Double-precision_floating-point_format</a>
for the conversion. </p>
<p>I have tried it in following script:</p>
<pre><code>a = '\x3f\xd5\x55\x55\x55\x55\x55\x55' # Hexbyte representation of 1/3 value in double
sign_bit = bin(ord(a[0])).replace('0b', '').rjust(8, '0')[0]
sign = -1 ** int(sign_bit)
print sign # Sign bit
# Next 11 bits for exponent calculation
exp_bias = 1023
a11 = bin(ord(a[0])).replace('0b', '').rjust(8, '0')[1:] + bin(ord(a[1])).replace('0b', '').rjust(8, '0')[:4]
exp = int(a11, 2)
print exp
# Next 52 bits for fraction calculation
fraction = bin(ord(a[1])).replace('0b', '').rjust(8, '0')[4:] + bin(ord(a[2])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[3])).replace('0b', '').rjust(8, '0') + bin(ord(a[4])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[5])).replace('0b', '').rjust(8, '0') + bin(ord(a[6])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[7])).replace('0b', '').rjust(8, '0')
print len(fraction), fraction
fract = str(int(fraction, 2))
print len(fract), fract
fin = repr(float(fract)/ 10 ** 16)
print type(fin), fin # 16 digit precision
# final value calculation according equation
# eq = (-1)^sign * 2 ^(exp- exp_bias) * (1 + fin)
val = 2 ** (exp - exp_bias) * float(fin) # Looses precision
print val
</code></pre>
<p>Please, can anyone help me out with this? I am not able to understand where I am wrong. I can keep the fraction value's precision by using <code>repr()</code>, but whenever I try to use it in the equation it loses its precision in <code>float()</code>.</p>
<p>Is there any way, or an alternate way, to solve it?</p>
| 2 |
2016-09-20T11:39:17Z
| 39,593,353 |
<p>The easy way to do this conversion is to use the <a href="https://docs.python.org/2/library/struct.html" rel="nofollow"><code>struct</code></a> module.</p>
<pre><code>from struct import unpack
a = '\x3f\xd5\x55\x55\x55\x55\x55\x55'
n = unpack('>d', a)
print '%.18f' % n[0]
</code></pre>
<p><strong>output</strong></p>
<pre><code>0.33333333333333331
</code></pre>
<p>In Python 3, you need to specify the input string and the packing format string as byte strings, e.g.</p>
<pre><code>a = b'\x3f\xd5\x55\x55\x55\x55\x55\x55'
n = unpack(b'>d', a)
print(format(n[0], '.18f'))
</code></pre>
<p>You can also use the <code>b</code> string prefix in Python 2 (from 2.6 and later, IIRC). Python 2 just ignores that prefix, since normal Python 2 strings are byte strings. </p>
| 3 |
2016-09-20T11:52:54Z
|
[
"python",
"double",
"decimal",
"ieee-754"
] |
Double Conversion to decimal value IEEE-754 in Python
| 39,593,087 |
<p>I am trying to convert my 64-bit <code>double</code> type data to a decimal value. I am following <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow">https://en.wikipedia.org/wiki/Double-precision_floating-point_format</a>
for the conversion. </p>
<p>I have tried it in following script:</p>
<pre><code>a = '\x3f\xd5\x55\x55\x55\x55\x55\x55' # Hexbyte representation of 1/3 value in double
sign_bit = bin(ord(a[0])).replace('0b', '').rjust(8, '0')[0]
sign = -1 ** int(sign_bit)
print sign # Sign bit
# Next 11 bits for exponent calculation
exp_bias = 1023
a11 = bin(ord(a[0])).replace('0b', '').rjust(8, '0')[1:] + bin(ord(a[1])).replace('0b', '').rjust(8, '0')[:4]
exp = int(a11, 2)
print exp
# Next 52 bits for fraction calculation
fraction = bin(ord(a[1])).replace('0b', '').rjust(8, '0')[4:] + bin(ord(a[2])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[3])).replace('0b', '').rjust(8, '0') + bin(ord(a[4])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[5])).replace('0b', '').rjust(8, '0') + bin(ord(a[6])).replace('0b', '').rjust(8, '0') \
+ bin(ord(a[7])).replace('0b', '').rjust(8, '0')
print len(fraction), fraction
fract = str(int(fraction, 2))
print len(fract), fract
fin = repr(float(fract)/ 10 ** 16)
print type(fin), fin # 16 digit precision
# final value calculation according equation
# eq = (-1)^sign * 2 ^(exp- exp_bias) * (1 + fin)
val = 2 ** (exp - exp_bias) * float(fin) # Looses precision
print val
</code></pre>
<p>Please, can anyone help me out with this? I am not able to understand where I am wrong. I can keep the fraction value's precision by using <code>repr()</code>, but whenever I try to use it in the equation it loses its precision in <code>float()</code>.</p>
<p>Is there any way, or an alternate way, to solve it?</p>
| 2 |
2016-09-20T11:39:17Z
| 39,593,361 |
<p>Too much work.</p>
<pre><code>>>> import struct
>>> struct.unpack('>d', '\x3f\xd5\x55\x55\x55\x55\x55\x55')[0]
0.3333333333333333
</code></pre>
| 2 |
2016-09-20T11:53:04Z
|
[
"python",
"double",
"decimal",
"ieee-754"
] |
Using pytest fixtures and peewee transactions together
| 39,593,159 |
<p>I'm writing a set of unit tests using <code>pytest</code> for some database models that are implemented using <code>peewee</code>. I would like to use database transactions (the database is a Postgres one, if that is relevant) in order to roll back any database changes after each test. </p>
<p>I have a situation where I would like to use two fixtures in a test, but have both fixtures clean up their database models via the <code>rollback</code> method, like so:</p>
<pre><code>@pytest.fixture
def test_model_a():
with db.transaction() as txn: # `db` is my database object
yield ModelA.create(...)
txn.rollback()
@pytest.fixture
def test_model_b():
with db.transaction() as txn: # `db` is my database object
yield ModelB.create(...)
txn.rollback()
def test_models(test_model_a, test_model_b):
# ...
</code></pre>
<p>This works, but reading the <a href="http://peewee.readthedocs.io/en/latest/peewee/transactions.html#explicit-transaction" rel="nofollow">documentation for <code>peewee</code></a> suggests that this is error prone:</p>
<blockquote>
<p>If you attempt to nest transactions with peewee using the <code>transaction()</code> context manager, only the outer-most transaction will be used. However if an exception occurs in a nested block, this can lead to unpredictable behavior, so it is strongly recommended that you use <code>atomic()</code>.</p>
</blockquote>
<p>However, <code>atomic()</code> does not provide a <code>rollback()</code> method. It seems that when managing transactions explicitly, the key is to use an outer-most <code>transaction()</code>, and use <code>savepoint()</code> context managers within that transaction. But in my test code above, both fixtures are on the same "level", so to speak, and I don't know where to create the transaction, and where to create the savepoint.</p>
<p>My only other idea is to use the order that the fixtures get evaluated to decide where to put the transaction (<a href="http://stackoverflow.com/questions/25660064/in-which-order-are-pytest-fixtures-executed">which seems to be alphabetical</a>), but this seems very brittle indeed.</p>
<p>Is there a way to achieve this? Or does my test design need re-thinking?</p>
| 1 |
2016-09-20T11:43:10Z
| 39,621,218 |
<p>If you want to rollback all transactions created within a test, you could have a fixture which takes care of the transaction itself and make the model fixtures use it:</p>
<pre><code>@pytest.fixture
def transaction():
    with db.transaction() as txn:  # `db` is my database object
        yield txn
        txn.rollback()

@pytest.fixture
def test_model_a(transaction):
    yield ModelA.create(...)

@pytest.fixture
def test_model_b(transaction):
    yield ModelB.create(...)

def test_models(test_model_a, test_model_b):
    # ...
</code></pre>
<p>This way all models are created within the same transaction and rolled back at the end of the test. </p>
| 2 |
2016-09-21T15:50:43Z
|
[
"python",
"postgresql",
"transactions",
"py.test",
"peewee"
] |
How to disable SSL verification on Python Scrapy?
| 39,593,172 |
<p>I have been writing data scraping scripts in PHP for the past 3 years.</p>
<p>This is simple PHP script </p>
<pre><code>$url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY';
$fields = array(
'p_entity_name' => urlencode('AAA'),
'p_name_type' => urlencode('A'),
'p_search_type' => urlencode('BEGINS')
);
//url-ify the data for the POST
foreach ($fields as $key => $value) {
$fields_string .= $key . '=' . $value . '&';
}
$fields_string = rtrim($fields_string, '&');
//open connection
$ch = curl_init();
//set the url, number of POST vars, POST data
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_POST, count($fields));
curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
//execute post
$result = curl_exec($ch);
print curl_error($ch) . '<br>';
print curl_getinfo($ch, CURLINFO_HTTP_CODE) . '<br>';
print $result;
</code></pre>
<p>It works fine only if <code>CURLOPT_SSL_VERIFYPEER</code> is <code>false</code>. It returns empty response if we enable <code>CURLOPT_SSL_VERIFYPEER</code> or if use <code>http</code> instead of <code>https</code>.</p>
<p>But, I have to do this same project in Python Scrapy, here is same code in Scrapy.</p>
<pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http.request import Request
import urllib
from appext20.items import Appext20Item
class Appext20Spider(CrawlSpider):
name = "appext20"
allowed_domains = ["appext20.dos.ny.gov"]
DOWNLOAD_HANDLERS = {
'https': 'my.custom.downloader.handler.https.HttpsDownloaderIgnoreCNError',}
def start_requests(self):
payload = {"p_entity_name": 'AMEB', "p_name_type": 'A', 'p_search_type':'BEGINS'}
url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY'
yield Request(url, self.parse_data, method="POST", body=urllib.urlencode(payload))
def parse_data(self, response):
print('here is repos')
print response
</code></pre>
<p>It returns an empty response; SSL verification needs to be disabled.</p>
<p>Please pardon my lack of knowledge in Python Scrapy, I have searched a lot about it but didn't find any solution.</p>
| 0 |
2016-09-20T11:44:05Z
| 39,593,438 |
<p>I would recommend having a look at this page: <a href="http://doc.scrapy.org/en/1.0/topics/settings.html" rel="nofollow">http://doc.scrapy.org/en/1.0/topics/settings.html</a> it would appear that you can alter the way that the module behaves and change settings on various handlers.</p>
<p>I also believe this is a duplicate question from: <a href="http://stackoverflow.com/questions/32950694/disable-ssl-certificate-verification-in-scrapy">Disable SSL certificate verification in Scrapy</a></p>
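<p>(If I remember correctly, the relevant setting there is <code>DOWNLOADER_CLIENTCONTEXTFACTORY</code>, which lets you point Scrapy at a custom TLS/SSL context factory; the linked question shows an example.)</p>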
<p>HTHs</p>
<p>Thanks,</p>
<p>//P</p>
| 1 |
2016-09-20T11:56:38Z
|
[
"python",
"ssl",
"scrapy"
] |
How to disable SSL verification on Python Scrapy?
| 39,593,172 |
<p>I have been writing data scraping scripts in PHP for the past 3 years.</p>
<p>This is simple PHP script </p>
<pre><code>$url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY';
$fields = array(
'p_entity_name' => urlencode('AAA'),
'p_name_type' => urlencode('A'),
'p_search_type' => urlencode('BEGINS')
);
//url-ify the data for the POST
foreach ($fields as $key => $value) {
$fields_string .= $key . '=' . $value . '&';
}
$fields_string = rtrim($fields_string, '&');
//open connection
$ch = curl_init();
//set the url, number of POST vars, POST data
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_POST, count($fields));
curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
//execute post
$result = curl_exec($ch);
print curl_error($ch) . '<br>';
print curl_getinfo($ch, CURLINFO_HTTP_CODE) . '<br>';
print $result;
</code></pre>
<p>It works fine only if <code>CURLOPT_SSL_VERIFYPEER</code> is <code>false</code>. It returns empty response if we enable <code>CURLOPT_SSL_VERIFYPEER</code> or if use <code>http</code> instead of <code>https</code>.</p>
<p>But, I have to do this same project in Python Scrapy, here is same code in Scrapy.</p>
<pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http.request import Request
import urllib
from appext20.items import Appext20Item
class Appext20Spider(CrawlSpider):
name = "appext20"
allowed_domains = ["appext20.dos.ny.gov"]
DOWNLOAD_HANDLERS = {
'https': 'my.custom.downloader.handler.https.HttpsDownloaderIgnoreCNError',}
def start_requests(self):
payload = {"p_entity_name": 'AMEB', "p_name_type": 'A', 'p_search_type':'BEGINS'}
url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY'
yield Request(url, self.parse_data, method="POST", body=urllib.urlencode(payload))
def parse_data(self, response):
print('here is repos')
print response
</code></pre>
<p>It returns an empty response; SSL verification needs to be disabled.</p>
<p>Please pardon my lack of knowledge in Python Scrapy, I have searched a lot about it but didn't find any solution.</p>
| 0 |
2016-09-20T11:44:05Z
| 39,601,276 |
<p>This code worked for me</p>
<pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http import FormRequest
import urllib
from appext20.items import Appext20Item
from scrapy.selector import HtmlXPathSelector
class Appext20Spider(CrawlSpider):
name = "appext20"
allowed_domains = ["appext20.dos.ny.gov"]
payload = {"p_entity_name": 'AME', "p_name_type": 'A', 'p_search_type':'BEGINS'}
def start_requests(self):
url = 'https://appext20.dos.ny.gov/corp_public/CORPSEARCH.SELECT_ENTITY'
return [ FormRequest(url,
formdata= self.payload,
callback=self.parse_data) ]
def parse_data(self, response):
print('here is response')
questions = HtmlXPathSelector(response).xpath("//td[@headers='c1']")
# print questions
all_links = []
for tr in questions:
temp_dict = {}
temp_dict['link'] = tr.xpath('a/@href').extract()
temp_dict['title'] = tr.xpath('a/text()').extract()
all_links.extend([temp_dict])
print (all_links)
</code></pre>
| 0 |
2016-09-20T18:27:25Z
|
[
"python",
"ssl",
"scrapy"
] |
NameError in TensorFlow tutorial while using the Retrained Model
| 39,593,212 |
<p>I'm following the TensorFlow for Poets <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#5" rel="nofollow">tutorial</a>, specifically the paragraph about Using the Retrained Model.</p>
<p>When I try to run the Python file provided in the tutorial</p>
<pre><code>python /tf_files/label_image.py /tf_files/flower_photos/daisy/21652746_cc379e0eea_m.jpg
</code></pre>
<p>Code of the file label_image.py</p>
<pre><code>import tensorflow as tf
# change this as you see fit
image_path = sys.argv[1]
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
in tf.gfile.GFile("/tf_files/retrained_labels.txt")]
# Unpersists graph from file
with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
with tf.Session() as sess:
# Feed the image_data as input to the graph and get first prediction
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, \
{'DecodeJpeg/contents:0': image_data})
# Sort to show labels of first prediction in order of confidence
top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
for node_id in top_k:
human_string = label_lines[node_id]
score = predictions[0][node_id]
print('%s (score = %.5f)' % (human_string, score))
</code></pre>
<p>I got an error:</p>
<pre><code> File "/tf_files/label_image.py", line 4, in <module>
image_path=sys.argv[1]
NameError: name 'sys' is not defined
</code></pre>
<p>What am I doing wrong? Or how do I fix it?</p>
| 0 |
2016-09-20T11:45:54Z
| 39,593,386 |
<p>Maybe you could try with <code>import sys</code> ?</p>
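<p>That is, a minimal fix is to add the missing import at the top of <code>label_image.py</code>, before <code>sys.argv</code> is used:</p>
<pre><code>import sys
import tensorflow as tf

# change this as you see fit
image_path = sys.argv[1]
</code></pre>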
| 1 |
2016-09-20T11:54:11Z
|
[
"python",
"tensorflow"
] |
How to get data from R to pandas
| 39,593,275 |
<p>In a Jupyter notebook I created a 2-d list in R like</p>
<pre><code>%%R
first <- "first"
second <- "second"
names(first) <- "first_thing"
names(second) <- "second_thing"
x <- list()
index <- length(x)+1
x[[index]] = first
x[[index +1]] = second
</code></pre>
<p>a <code>%Rpull x</code> does not return the nice representation but rather a <code>ListVector</code>. How can I convert it into something nicer e.g. a dict / pd.Dataframe? So far I had no luck following <a href="http://pandas.pydata.org/pandas-docs/stable/r_interface.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/r_interface.html</a>
<a href="http://i.stack.imgur.com/2PBFh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/2PBFh.jpg" alt="pull from R"></a></p>
<h1>edit</h1>
<p>The list I want to convert is a 2-d list like <code>results</code> the updated code snipped from above</p>
| 3 |
2016-09-20T11:49:04Z
| 39,594,165 |
<p>Just slice the <code>ListVector</code>:</p>
<pre><code>%Rpull x
pd.DataFrame(data=[i[0] for i in x], columns=['X'])
</code></pre>
<p><a href="http://i.stack.imgur.com/qNYVx.png" rel="nofollow"><img src="http://i.stack.imgur.com/qNYVx.png" alt="Image"></a></p>
<p>If you want a dictionary instead:</p>
<pre><code>dict([[i,j[0]] for i,j in enumerate(x)])
{0: 'first', 1: 'second'}
</code></pre>
| 1 |
2016-09-20T12:30:29Z
|
[
"python",
"pandas",
"rpy2"
] |
How to get data from R to pandas
| 39,593,275 |
<p>In a Jupyter notebook I created a 2-d list in R like</p>
<pre><code>%%R
first <- "first"
second <- "second"
names(first) <- "first_thing"
names(second) <- "second_thing"
x <- list()
index <- length(x)+1
x[[index]] = first
x[[index +1]] = second
</code></pre>
<p>a <code>%Rpull x</code> does not return the nice representation but rather a <code>ListVector</code>. How can I convert it into something nicer e.g. a dict / pd.Dataframe? So far I had no luck following <a href="http://pandas.pydata.org/pandas-docs/stable/r_interface.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/r_interface.html</a>
<a href="http://i.stack.imgur.com/2PBFh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/2PBFh.jpg" alt="pull from R"></a></p>
<h1>edit</h1>
<p>The list I want to convert is a 2-d list like <code>x</code> in the updated code snippet from above</p>
| 3 |
2016-09-20T11:49:04Z
| 39,594,479 |
<p>Since you created an R list (rather than, say, a data frame),
the Python object returned is a <code>ListVector</code>.</p>
<p>R lists can have duplicated names, making the conversion to a <code>dict</code> something that cannot be guaranteed to be safe. </p>
<p>Should you feel lucky and want a dict, it is rather straightforward.
For example:</p>
<pre><code>from rpy2.robjects.vectors import ListVector

l = ListVector({'a': 1, 'b': 2})
d = dict(l.items())
</code></pre>
| 0 |
2016-09-20T12:44:54Z
|
[
"python",
"pandas",
"rpy2"
] |
How to get instance of entity in limit_choices_to (Django)?
| 39,593,577 |
<p>For instance:</p>
<pre><code>class Foo(models.Model):
bar = models.OneToOneField(
'app.Bar',
limit_choices_to=Q(type=1) & Q(foo=None) | Q(foo=instance)
)
class Bar(models.Model):
TYPE_CHOICE = (
(0, 'hello'),
(1, 'world')
)
type = models.SmallIntegerField(
choices=TYPE_CHOICE,
default=0
)
</code></pre>
<p>I want to show in the Django admin only those Bars that have <code>type = 1</code> and have no relation to a Foo, and also show the Bar linked to the entity being edited (if there is one).</p>
<p>Of course, we can do this by overriding the <code>formfield_for_foreignkey</code> method of <code>admin.ModelAdmin</code>, but we want to do it via <code>limit_choices_to</code>.</p>
<p>How do we get the instance of the edited entity?</p>
| 1 |
2016-09-20T12:02:42Z
| 39,598,243 |
<p>If you pass a callable to <code>limit_choices_to</code>, that callable has no reference to the current instance. As such, you can't filter based on the current instance either.</p>
<p>There are several other ways to achieve what you want, such as overriding <code>formfield_for_foreignkey()</code> as you mentioned, or overriding the formfield's queryset in the form's <code>__init__()</code> method. <code>limit_choices_to</code> just isn't one of them. </p>
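<p>For completeness, here is a minimal sketch of the form <code>__init__()</code> route (the form class name is an assumption for illustration):</p>
<pre><code>from django import forms

class FooForm(forms.ModelForm):
    class Meta:
        model = Foo
        fields = ['bar']

    def __init__(self, *args, **kwargs):
        super(FooForm, self).__init__(*args, **kwargs)
        # unlinked Bars of type 1 ...
        qs = Bar.objects.filter(type=1, foo=None)
        # ... plus the Bar already linked to the instance being edited
        if self.instance.pk and self.instance.bar_id:
            qs = qs | Bar.objects.filter(pk=self.instance.bar_id)
        self.fields['bar'].queryset = qs
</code></pre>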
| 1 |
2016-09-20T15:33:55Z
|
[
"python",
"django"
] |
python add value in dictionary in lambda expression
| 39,593,632 |
<p>Is it possible to add values to a dictionary in a lambda expression?</p>
<p>That is, to implement a lambda which has a similar function to the method below.</p>
<pre><code>def add_value(dict_x):
dict_x['a+b'] = dict_x['a'] + dict_x['b']
return dict_x
</code></pre>
| 2 |
2016-09-20T12:05:09Z
| 39,593,726 |
<p>Technically, you may use a side effect to update it, and exploit the fact that the <code>None</code> returned from <code>.update</code> is falsy to return the dict via boolean operations:</p>
<pre><code>add_value = lambda d: d.update({'a+b': d['a'] + d['b']}) or d
</code></pre>
<p>I just don't see any reason for doing it in real code though, both with lambda or with function written by you in question.</p>
| 2 |
2016-09-20T12:09:55Z
|
[
"python"
] |
python add value in dictionary in lambda expression
| 39,593,632 |
<p>Is it possible to add values to a dictionary in a lambda expression?</p>
<p>That is, to implement a lambda which has a similar function to the method below.</p>
<pre><code>def add_value(dict_x):
dict_x['a+b'] = dict_x['a'] + dict_x['b']
return dict_x
</code></pre>
| 2 |
2016-09-20T12:05:09Z
| 39,593,740 |
<p>You could build a custom dict, inheriting from <code>dict</code>, overriding its <code>__setitem__</code> function. <a href="https://docs.python.org/3/reference/datamodel.html?emulating-container-types#emulating-container-types" rel="nofollow">See the Python documentation.</a></p>
<pre><code>class MyCustomDict(dict):
def __setitem__(self, key, item):
# your method here
</code></pre>
| 0 |
2016-09-20T12:10:41Z
|
[
"python"
] |
Means of asymmetric arrays in numpy
| 39,593,672 |
<p>I have an asymmetric 2d array in numpy, as in some arrays are longer than others, such as: <code>[[1, 2], [1, 2, 3], ...]</code></p>
<p>But numpy doesn't seem to like this:</p>
<pre><code>import numpy as np
foo = np.array([[1], [1, 2]])
foo.mean(axis=1)
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tom/.virtualenvs/nlp/lib/python3.5/site-packages/numpy/core/_methods.py", line 56, in _mean
rcount = _count_reduce_items(arr, axis)
File "/home/tom/.virtualenvs/nlp/lib/python3.5/site-packages/numpy/core/_methods.py", line 50, in _count_reduce_items
items *= arr.shape[ax]
IndexError: tuple index out of range
</code></pre>
<p>Is there a nice way to do this or should I just do the maths myself?</p>
| 0 |
2016-09-20T12:07:05Z
| 39,593,854 |
<p>You could perform the mean for each sub-array of foo using a list comprehension:</p>
<pre><code>mean_foo = np.array( [np.mean(subfoo) for subfoo in foo] )
</code></pre>
<p>As suggested by @Kasramvd in another answer's comment, you can also use the <code>map</code> function (on Python 3, wrap it in <code>list()</code>, since <code>map</code> returns an iterator there):</p>
<pre><code>mean_foo = np.array( list(map(np.mean, foo)) )
</code></pre>
</code></pre>
| 2 |
2016-09-20T12:15:29Z
|
[
"python",
"numpy"
] |
Means of asymmetric arrays in numpy
| 39,593,672 |
<p>I have an asymmetric 2d array in numpy, as in some arrays are longer than others, such as: <code>[[1, 2], [1, 2, 3], ...]</code></p>
<p>But numpy doesn't seem to like this:</p>
<pre><code>import numpy as np
foo = np.array([[1], [1, 2]])
foo.mean(axis=1)
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tom/.virtualenvs/nlp/lib/python3.5/site-packages/numpy/core/_methods.py", line 56, in _mean
rcount = _count_reduce_items(arr, axis)
File "/home/tom/.virtualenvs/nlp/lib/python3.5/site-packages/numpy/core/_methods.py", line 50, in _count_reduce_items
items *= arr.shape[ax]
IndexError: tuple index out of range
</code></pre>
<p>Is there a nice way to do this or should I just do the maths myself?</p>
| 0 |
2016-09-20T12:07:05Z
| 39,594,084 |
<p>We could use an almost vectorized approach based upon <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow"><code>np.add.reduceat</code></a> that takes care of the irregular length <em>subarrays</em>, for which we are calculating the average values. <code>np.add.reduceat</code> sums up elements in those intervals of irregular lengths after getting a <code>1D</code> flattened version of the input array with <code>np.concatenate</code>. Finally, we need to divide the summations by the lengths of those subarrays to get the average values.</p>
<p>Thus, the implementation would look something like this -</p>
<pre><code>lens = np.array(map(len,foo)) # Thanks to @Kasramvd on this!
vals = np.concatenate(foo)
shift_idx = np.append(0,lens[:-1].cumsum())
out = np.add.reduceat(vals,shift_idx)/lens.astype(float)
</code></pre>
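<p>As a quick check with the sample input from the question, this gives:</p>
<pre><code>>>> foo = np.array([[1], [1, 2]])
>>> # ...run the four lines above...
>>> out
array([ 1. ,  1.5])
</code></pre>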
| 2 |
2016-09-20T12:27:24Z
|
[
"python",
"numpy"
] |
Merge two csv files in Python
| 39,593,747 |
<p>I am trying to merge two csv files. I don't want to remove duplicates; I simply want to check the first column "PDB ID" and then check the second column "Chain ID". All values are present in the input files. I want to merge them and add the columns of file 1 and file 2.</p>
<pre><code>import pandas as pd
a = pd.read_csv("testfile.csv")
b = pd.read_csv("testfile_1.csv")
b = b.dropna(axis=1)
merged = a.merge(b, on='PDB ID')
merged.to_csv("output.csv", index=False)
</code></pre>
<p>I used the above script but I am getting the same row value repeated several times in the result.</p>
<pre><code>File 1: Input
PDB ID Chain ID Ligand ID Uniprot Acc
3RSQ A NAI Q9X024
3RTD A NAI Q9X024
1E3E A NAI Q9QYY9
1E3E B NAI Q9QYY9
1E3I A NAI Q9QYY9
1E3I B NAI Q9QYY9
File 2: Input
PDB ID Chain ID Avg
1E3E A 31.566
1E3E B 17.867
3RSQ A 57.653
1E3I A 27.63
1E3I B 17.867
3RTD A 48.806
Getting Output:
PDB ID Chain ID_x Avg Ligand ID Uniprot Acc
3RSQ A 57.653 NAI Q9X024
3RTD A 48.806 NAI Q9X024
1E3E A 31.566 NAI Q9QYY9
1E3E A 31.566 NAI Q9QYY9
1E3E B 17.867 NAI Q9QYY9
1E3E B 17.867 NAI Q9QYY9
1E3I A 27.63 NAI Q9QYY9
1E3I A 27.63 NAI Q9QYY9
1E3I B 17.867 NAI Q9QYY9
1E3I B 17.867 NAI Q9QYY9
Expected Output:
3RSQ A 57.653 NAI Q9X024
3RTD A 48.806 NAI Q9X024
1E3E A 31.566 NAI Q9QYY9
1E3E B 17.867 NAI Q9QYY9
1E3I A 27.63 NAI Q9QYY9
1E3I B 17.867 NAI Q9QYY9
</code></pre>
| 1 |
2016-09-20T12:10:57Z
| 39,594,694 |
<p>Maybe you can use the <code>left_index</code> and <code>right_index</code> parameters of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">pandas <code>merge</code></a> method to not duplicate rows. Additionally, using <a href="http://stackoverflow.com/a/19125531/6779606">this solution</a> to not duplicate column names, I suggest the following:</p>
<pre><code>import pandas as pd
a = pd.read_csv("testfile.csv")
b = pd.read_csv("testfile_1.csv")
b = b.dropna(axis=1)
cols = b.columns.difference(a.columns)
merged = a.merge(b[cols], left_index=True, right_index=True)
merged.to_csv("output.csv", index=False)
</code></pre>
<p>which resulted in this:</p>
<pre><code> Chain ID Ligand ID PDB ID Uniprot Acc Avg
0 A NAI 3RSQ Q9X024 57.653
1 A NAI 3RTD Q9X024 48.806
2 A NAI 1E3E Q9QYY9 31.566
3 B NAI 1E3E Q9QYY9 17.867
4 A NAI 1E3I Q9QYY9 27.63
5 B NAI 1E3I Q9QYY9 17.867
</code></pre>
<hr>
<h1>EDIT:</h1>
<p>In order to accomplish this when the indices of each DataFrame do not correspond to the same <code>PDB ID</code>, I ended up sorting DataFrame <code>a</code> to retrieve its indices and setting the indices of the sorted version of DataFrame <code>b</code> to these values. Finally, I sort DataFrame <code>b</code> by its indices and <code>PDB ID</code> should be ordered the same way as DataFrame <code>a</code>.</p>
<pre><code>import pandas as pd
a = pd.read_csv("testfile.csv")
b = pd.read_csv("testfile_1.csv")
b = b.dropna(axis=1)
b = b.sort_values(by='PDB ID')
b.index = a.sort_values(by='PDB ID').index
b = b.sort_index()
cols = b.columns.difference(a.columns)
merged = a.merge(b[cols], left_index=True, right_index=True)
merged.to_csv("output.csv", index=False)
</code></pre>
<p>where merged resulted in this:</p>
<pre><code> Chain ID Ligand ID PDB ID Uniprot Acc Avg
0 A NAI 3RSQ Q9X024 57.653
1 A NAI 3RTD Q9X024 48.806
2 A NAI 1E3E Q9QYY9 31.566
3 B NAI 1E3E Q9QYY9 17.867
4 A NAI 1E3I Q9QYY9 27.63
5 B NAI 1E3I Q9QYY9 17.867
</code></pre>
<hr>
<h1>EDIT 2:</h1>
<p>Here is a much simpler solution, as found in <a href="http://stackoverflow.com/a/27314641/6779606">this answer</a>.</p>
<pre><code>import pandas as pd
a = pd.read_csv("testfile.csv")
b = pd.read_csv("testfile_1.csv")
b = b.dropna(axis=1)
merged = a.merge(b, on=['PDB ID', 'Chain ID'], how='outer')
merged.to_csv("output.csv", index=False)
</code></pre>
<p>The number of rows do not have to be equal and the result should be what you expect (my last row is an example of different number of rows):</p>
<pre><code> Chain ID Ligand ID PDB ID Uniprot Acc Avg
0 A NAI 3RSQ Q9X024 57.653
1 A NAI 3RTD Q9X024 48.806
2 A NAI 1E3E Q9QYY9 31.566
3 B NAI 1E3E Q9QYY9 17.867
4 A NAI 1E3I Q9QYY9 27.63
5 B NAI 1E3I Q9QYY9 17.867
6 a a a a NaN
</code></pre>
| 0 |
2016-09-20T12:54:58Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
UBER Python API returns uber_rides.errors.ClientError
| 39,593,754 |
<p>I wrote a piece of code to get product information. I used the code provided in their <a href="https://github.com/uber/rides-python-sdk" rel="nofollow">github</a>.</p>
<pre><code>from uber_rides.session import Session
session = Session(server_token="key_given_here")
from uber_rides.client import UberRidesClient
client = UberRidesClient(session)
response = client.get_products(12.9242845,77.582953)
products = response.json.get('products')
print products
</code></pre>
<p>It returns the following error:</p>
<p><code>uber_rides.errors.ClientError: The request contains bad syntax or cannot be filled due to a fault from the client sending the request.</code></p>
<p>Why this happening. What is wrong with my code ?</p>
| 1 |
2016-09-20T12:11:16Z
| 39,594,374 |
<p>I got the answer. The code is working perfectly. One character was missing in the <code>server_token</code> I provided.</p>
| 0 |
2016-09-20T12:40:02Z
|
[
"python",
"uber-api"
] |
Convert a dict to a pandas DataFrame
| 39,593,821 |
<p>My data look like this : </p>
<pre><code>{u'"57e01311817bc367c030b390"': u'{"ad_since": 2016, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}', u'"57e01311817bc367c030b3a8"': u'{"ad_since": 2012, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}'}
</code></pre>
<p>I want to convert it to a pandas Dataframe. But when I try </p>
<pre><code>df = pd.DataFrame(response.items())
</code></pre>
<p>I get a DataFrame with two columns, the first with the keys, and the second with the values for each key: </p>
<pre><code> 0 1
0 "57e01311817bc367c030b390" {"ad_since": 2016, "indoor_swimming_pool": "No...
1 "57e01311817bc367c030b3a8" {"ad_since": 2012, "indoor_swimming_pool": "No...
</code></pre>
<p>How can I get a single column for each key: <code>"ad_since"</code>, <code>"indoor_swimming_pool"</code>, <code>"seaside"</code>, <code>"handicapped_access"</code>? And keep the first column, or get the id as index.</p>
| 2 |
2016-09-20T12:14:18Z
| 39,594,846 |
<p>You need to convert the column of type <code>str</code> to <code>dict</code> by <code>.apply(literal_eval)</code> or <code>.apply(json.loads)</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_records.html" rel="nofollow"><code>DataFrame.from_records</code></a>:</p>
<pre><code>import pandas as pd
from ast import literal_eval
response = {u'"57e01311817bc367c030b390"': u'{"ad_since": 2016, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}',
u'"57e01311817bc367c030b3a8"': u'{"ad_since": 2012, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}'}
df = pd.DataFrame.from_dict(response, orient='index')
print (type(df.iloc[0,0]))
<class 'str'>
df.iloc[:,0] = df.iloc[:,0].apply(literal_eval)
print (pd.DataFrame.from_records(df.iloc[:,0].values.tolist(), index=df.index))
ad_since handicapped_access indoor_swimming_pool \
"57e01311817bc367c030b3a8" 2012 Yes No
"57e01311817bc367c030b390" 2016 Yes No
seaside
"57e01311817bc367c030b3a8" No
"57e01311817bc367c030b390" No
</code></pre>
<hr>
<pre><code>import pandas as pd
import json
response = {u'"57e01311817bc367c030b390"': u'{"ad_since": 2016, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}',
u'"57e01311817bc367c030b3a8"': u'{"ad_since": 2012, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}'}
df = pd.DataFrame.from_dict(response, orient='index')
df.iloc[:,0] = df.iloc[:,0].apply(json.loads)
print (pd.DataFrame.from_records(df.iloc[:,0].values.tolist(), index=df.index))
ad_since handicapped_access indoor_swimming_pool \
"57e01311817bc367c030b3a8" 2012 Yes No
"57e01311817bc367c030b390" 2016 Yes No
seaside
"57e01311817bc367c030b3a8" No
"57e01311817bc367c030b390" No
</code></pre>
| 1 |
2016-09-20T13:01:07Z
|
[
"python",
"json",
"pandas"
] |
Convert a dict to a pandas DataFrame
| 39,593,821 |
<p>My data look like this : </p>
<pre><code>{u'"57e01311817bc367c030b390"': u'{"ad_since": 2016, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}', u'"57e01311817bc367c030b3a8"': u'{"ad_since": 2012, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}'}
</code></pre>
<p>I want to convert it to a pandas Dataframe. But when I try </p>
<pre><code>df = pd.DataFrame(response.items())
</code></pre>
<p>I get a DataFrame with two columns, the first with the keys, and the second with the values for each key: </p>
<pre><code> 0 1
0 "57e01311817bc367c030b390" {"ad_since": 2016, "indoor_swimming_pool": "No...
1 "57e01311817bc367c030b3a8" {"ad_since": 2012, "indoor_swimming_pool": "No...
</code></pre>
<p>How can I get a single column for each key: <code>"ad_since"</code>, <code>"indoor_swimming_pool"</code>, <code>"seaside"</code>, <code>"handicapped_access"</code>? And keep the first column, or get the id as index.</p>
| 2 |
2016-09-20T12:14:18Z
| 39,594,999 |
<p>As the values are strings, you can use the <a href="https://docs.python.org/2/library/json.html" rel="nofollow"><code>json</code> module</a> and list comprehension:</p>
<pre><code>In [20]: d = {u'"57e01311817bc367c030b390"': u'{"ad_since": 2016, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}', u'"57e01311817bc367c030b3a8"': u'{"ad_since": 2012, "indoor_swimming_pool": "No", "seaside": "No", "handicapped_access": "Yes"}'}
In [21]: import json
In [22]: pd.DataFrame(dict([(k, [json.loads(e)[k] for e in d.values()]) for k in json.loads(d.values()[0])]), index=d.keys())
Out[22]:
ad_since handicapped_access indoor_swimming_pool \
"57e01311817bc367c030b390" 2016 Yes No
"57e01311817bc367c030b3a8" 2012 Yes No
seaside
"57e01311817bc367c030b390" No
"57e01311817bc367c030b3a8" No
</code></pre>
| 1 |
2016-09-20T13:07:58Z
|
[
"python",
"json",
"pandas"
] |
How to format datetime like java?
| 39,593,837 |
<p>I have the following java code:</p>
<pre><code>SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd'T'hh:mm:ss.SXXX");
String timestamp = simpleDateFormat.format(new Date());
System.out.println(timestamp);
</code></pre>
<p>which prints the date in the following format:</p>
<pre><code>2016-09-20T01:28:03.238+02:00
</code></pre>
<p>How do I achieve the same formatting in Python?</p>
<p>What I have so far is very cumbersome...</p>
<pre><code>import datetime
import pytz
def now_fmt():
oslo = pytz.timezone('Europe/Oslo')
otime = oslo.localize(datetime.datetime.now())
msecs = otime.microsecond
res = otime.strftime('%Y-%m-%dT%H:%M:%S.')
res += str(msecs)[:3]
tz = otime.strftime('%z')
return res + tz[:-2] + ':' + tz[-2:]
</code></pre>
| 2 |
2016-09-20T12:14:50Z
| 39,602,161 |
<h1>ISO 8601</h1>
<p>That format is one of a family of date-time string formats defined by the <a href="https://en.wikipedia.org/wiki/ISO_8601" rel="nofollow">ISO 8601</a> standard.</p>
<h1>Python</h1>
<p>Looks like many duplicates of your Question for Python.</p>
<p><a href="http://stackoverflow.com/search?q=python%20%22ISO%208601%22">Search Stack Overflow for: Python "ISO 8601"</a>. Get 233 hits.</p>
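<p>As a quick Python sketch of the same idea (note that <code>isoformat()</code> emits six fractional digits rather than three, so you may still need to truncate):</p>
<pre><code>from datetime import datetime
import pytz

oslo = pytz.timezone('Europe/Oslo')
print(oslo.localize(datetime.now()).isoformat())
# e.g. 2016-09-20T01:28:03.238512+02:00
</code></pre>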
<h1>java.time</h1>
<p>On the Java side, you are working too hard. And you are using troublesome old legacy classes now supplanted by the java.time classes.</p>
<p>The java.time classes use ISO 8601 formats by default when parsing/generating strings that represent date-time values. So no need to define a formatting pattern. Note that Java 8 has some bugs or limitations with parsing all variations of ISO 8601, remedied in Java 9.</p>
<pre><code>String input = "2016-09-20T01:28:03.238+02:00" ;
OffsetDateTime odt = OffsetDateTime.parse ( input ) ;
</code></pre>
<p>Ditto, for going the other direction in ISO 8601 format, no need to define a formatting pattern.</p>
<pre><code>String output = odt.toString () ; // 2016-09-20T01:28:03.238+02:00
</code></pre>
<p>If you really need a <code>java.util.Date</code> for old code not yet updated to the java.time types, you can convert using new methods added to the old classes.</p>
<h1>Fractional second</h1>
<p>Beware of the fractional second. </p>
<ul>
<li>The java.time classes handle values with up to <a href="https://en.wikipedia.org/wiki/Nanosecond" rel="nofollow">nanosecond</a> resolution, nine digits of decimal fraction. </li>
<li>The <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">Python <code>datetime</code> type</a> seems to have a <a href="https://en.wikipedia.org/wiki/Microsecond" rel="nofollow">microseconds</a> resolution, for six decimal places.</li>
<li>The old legacy date-time classes in Java are limited to <a href="https://en.wikipedia.org/wiki/Millisecond" rel="nofollow">millisecond</a> resolution, for three digits of decimal fraction.</li>
<li>The ISO 8601 standard prescribes no limit to the fraction, any number of decimal fraction digits.</li>
</ul>
<p>When going from Python to Java, this is another reason to avoid the old date-time classes and stick with java.time classes. When going from Java to python, you may want to truncate any nanoseconds value to microseconds as defined by <a href="http://docs.oracle.com/javase/8/docs/api/java/time/temporal/ChronoUnit.html#MICROS" rel="nofollow"><code>ChronoUnit.MICROS</code></a>.</p>
<pre><code>Instant instant = Instant.now().truncatedTo( ChronoUnit.MICROS ) ;
</code></pre>
<h2>About java.time</h2>
<p>The <a href="http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html" rel="nofollow">java.time</a> framework is built into Java 8 and later. These classes supplant the troublesome old date-time classes such as <a href="https://docs.oracle.com/javase/8/docs/api/java/util/Date.html" rel="nofollow"><code>java.util.Date</code></a>, <a href="https://docs.oracle.com/javase/8/docs/api/java/util/Calendar.html" rel="nofollow"><code>.Calendar</code></a>, & <a href="http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html" rel="nofollow"><code>java.text.SimpleDateFormat</code></a>.</p>
<p>The <a href="http://www.joda.org/joda-time/" rel="nofollow">Joda-Time</a> project, now in <a href="https://en.wikipedia.org/wiki/Maintenance_mode" rel="nofollow">maintenance mode</a>, advises migration to java.time.</p>
<p>To learn more, see the <a href="http://docs.oracle.com/javase/tutorial/datetime/TOC.html" rel="nofollow">Oracle Tutorial</a>. And search Stack Overflow for many examples and explanations.</p>
<p>Much of the java.time functionality is back-ported to Java 6 & 7 in <a href="http://www.threeten.org/threetenbp/" rel="nofollow">ThreeTen-Backport</a> and further adapted to <a href="https://en.wikipedia.org/wiki/Android_(operating_system)" rel="nofollow">Android</a> in <a href="https://github.com/JakeWharton/ThreeTenABP" rel="nofollow">ThreeTenABP</a> (see <a href="http://stackoverflow.com/q/38922754/642706"><em>How to use…</em></a>).</p>
<p>The <a href="http://www.threeten.org/threeten-extra/" rel="nofollow">ThreeTen-Extra</a> project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as <a href="http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html" rel="nofollow"><code>Interval</code></a>, <a href="http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html" rel="nofollow"><code>YearWeek</code></a>, <a href="http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html" rel="nofollow"><code>YearQuarter</code></a>, and <a href="http://www.threeten.org/threeten-extra/apidocs/index.html" rel="nofollow">more</a>.</p>
| 1 |
2016-09-20T19:17:56Z
|
[
"java",
"python"
] |
How to handle blank form value in python
| 39,594,024 |
<p>I have 2 form fields in my HTML code, which submit to a Python script. Everything works perfectly if I provide both inputs in the form. But in my case, the 2nd field is optional. I am getting a blank result when I submit the form with the 1st field only. How do I handle my code so it runs even when the 2nd field is blank/null?</p>
<p>python code:</p>
<pre><code>import cgi
form = cgi.FieldStorage()
first=form["firstSix"].value
last=form["lastFour"].value
</code></pre>
| 0 |
2016-09-20T12:23:59Z
| 39,594,096 |
<pre><code>import cgi

form = cgi.FieldStorage()
first = form["firstSix"].value
# blank/missing fields are omitted from FieldStorage, so test for the key
# first (indexing with form["lastFour"] would raise KeyError otherwise)
if "lastFour" in form:
    last = form["lastFour"].value
else:
    last = None
</code></pre>
| 0 |
2016-09-20T12:28:03Z
|
[
"python"
] |
How to handle blank form value in python
| 39,594,024 |
<p>I have 2 form fields in my HTML code, which submit to a Python script. Everything works perfectly if I provide both inputs in the form. But in my case, the 2nd field is optional. I am getting a blank result when I submit the form with the 1st field only. How do I handle my code so it runs even when the 2nd field is blank/null?</p>
<p>python code:</p>
<pre><code>import cgi
form = cgi.FieldStorage()
first=form["firstSix"].value
last=form["lastFour"].value
</code></pre>
| 0 |
2016-09-20T12:23:59Z
| 39,594,531 |
<p>You can check if something is empty with a boolean test.</p>
<pre><code>if not form.getvalue("lastFour"):
    print "lastFour is empty!"
else:
    print "lastFour has a value!"
</code></pre>
<p>Any string that is empty is considered false, and any that has a value is true. So <code>""</code> is false and <code>"Hello World!"</code> is true. Note that indexing <code>form["lastFour"]</code> directly raises <code>KeyError</code> when the field is blank, because blank fields are omitted from <code>FieldStorage</code>; <code>getvalue()</code> returns <code>None</code> instead, which you can check for separately:</p>
<pre><code>value = form.getvalue("lastFour")
if value is None:
    print "lastFour is missing!"
elif not value:
    print "lastFour is empty!"
else:
    print "lastFour has a value!"
</code></pre>
| 0 |
2016-09-20T12:47:16Z
|
[
"python"
] |
Using regex extract all digit and word numbers
| 39,594,066 |
<p>I am trying to extract all string and digit numbers from a text.</p>
<pre><code>text = 'one two three 10 number'
numbers = "(^a(?=\s)|one|two|three|four|five|six|seven|eight|nine|ten| \
eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen| \
eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty| \
ninety|hundred|thousand)"
print re.search(numbers, text).group(0)
</code></pre>
<p>This gives me only the first word number.</p>
<p>my expected result = ['one', 'two', 'three', '10']</p>
<p>How can I modify it so that I can get all word and digit numbers in a list?</p>
| 0 |
2016-09-20T12:26:12Z
| 39,594,403 |
<p>There are several issues here:</p>
<ul>
<li>The pattern should be used with the VERBOSE flag (add <code>(?x)</code> at the start)</li>
<li>The <code>nine</code> will match <code>nine</code> in <code>ninety</code>, so you should either put the longer values first, or use word boundaries <code>\b</code></li>
<li>Declare the pattern with a raw string literal to avoid issues like parsing <code>\b</code> as a backspace and not a word boundary</li>
<li>To match digits, you may add a <code>|\d+</code> branch to your number matching group</li>
<li>To match multiple non-overlapping occurrences of the substrings inside the input string, you need to use <code>re.findall</code> (or <code>re.finditer</code>), not <code>re.search</code>.</li>
</ul>
<p>Here is my suggestion:</p>
<pre><code>import re
text = 'one two three 10 number eleven eighteen ninety \n '
numbers = r"""(?x) # Turn on free spacing mode
(
^a(?=\s)| # Here we match a at the start of string before whitespace
\d+| # HERE we match one or more digits
\b # Initial word boundary
(?:
one|two|three|four|five|six|seven|eight|nine|ten|
eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|
eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|
ninety|hundred|thousand
) # A list of alternatives
\b # Trailing word boundary
)"""
print(re.findall(numbers, text))
</code></pre>
<p>See <a href="https://ideone.com/5l6rkc" rel="nofollow">Python demo</a></p>
<p>And here is a <a href="https://regex101.com/r/mQ8aF2/2" rel="nofollow">regex demo</a>.</p>
| 2 |
2016-09-20T12:40:55Z
|
[
"python",
"regex"
] |
Using regex extract all digit and word numbers
| 39,594,066 |
<p>I am trying to extract all string and digit numbers from a text.</p>
<pre><code>text = 'one two three 10 number'
numbers = "(^a(?=\s)|one|two|three|four|five|six|seven|eight|nine|ten| \
eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen| \
eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty| \
ninety|hundred|thousand)"
print re.search(numbers, text).group(0)
</code></pre>
<p>This gives me only the first word number.</p>
<p>my expected result = ['one', 'two', 'three', '10']</p>
<p>How can I modify it so that I can get all word and digit numbers in a list?</p>
| 0 |
2016-09-20T12:26:12Z
| 39,594,438 |
<p>Well, <code>re.findall</code> and the addition of <code>[0-9]+</code> work well for your list. Unfortunately, if you try to match something like "seventythree" you will get "seven" and "three", so you may need something better than the pattern below :-)</p>
<pre><code>numbers = "(^a(?=\s)|one|two|three|four|five|six|seven|eight|nine|ten| \
eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen| \
eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty| \
ninety|hundred|thousand|[0-9]+)"
x = re.findall(numbers, text)
</code></pre>
| 1 |
2016-09-20T12:42:35Z
|
[
"python",
"regex"
] |
How to compare dataframe unique values with a list?
| 39,594,080 |
<p>I have a pandas dataframe column, and I want to check if all values in my column come from another list.</p>
<p>For example, I want to check whether all values in my column are <code>A</code> or <code>B</code> and nothing else. My code should return true for the following inputs:</p>
<pre><code>myValues = ['A','B']
df = pd.DataFrame(['A','B','B','A'],columns=['Col']) # True
df = pd.DataFrame(['A','A'],columns=['Col']) # True
df = pd.DataFrame(['B'],columns=['Col']) # True
df = pd.DataFrame(['B','C'],columns=['Col']) # False
</code></pre>
| 1 |
2016-09-20T12:26:59Z
| 39,594,268 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> and pass your list to generate a boolean array and with <code>all</code> to return whether all values present:</p>
<pre><code>In [146]:
myValues = ['A','B']
df = pd.DataFrame(['A','B','B','A'],columns=['Col']) # True
print(df['Col'].isin(myValues).all())
df = pd.DataFrame(['A','A'],columns=['Col']) # True
print(df['Col'].isin(myValues).all())
df = pd.DataFrame(['B'],columns=['Col']) # True
print(df['Col'].isin(myValues).all())
df = pd.DataFrame(['B','C'],columns=['Col']) # False
print(df['Col'].isin(myValues).all())
True
True
True
False
</code></pre>
| 1 |
2016-09-20T12:35:31Z
|
[
"python",
"pandas"
] |
How to compare dataframe unique values with a list?
| 39,594,080 |
<p>I have a pandas dataframe column, and I want to check if all values in my column come from another list.</p>
<p>For example, I want to check whether all values in my column are <code>A</code> or <code>B</code> and nothing else. My code should return true for the following inputs:</p>
<pre><code>myValues = ['A','B']
df = pd.DataFrame(['A','B','B','A'],columns=['Col']) # True
df = pd.DataFrame(['A','A'],columns=['Col']) # True
df = pd.DataFrame(['B'],columns=['Col']) # True
df = pd.DataFrame(['B','C'],columns=['Col']) # False
</code></pre>
| 1 |
2016-09-20T12:26:59Z
| 39,594,798 |
<p>Here is an alternative solution:</p>
<pre><code>df.eval('Col in @myValues')
</code></pre>
<p>Demo:</p>
<pre><code>In [78]: pd.DataFrame(['A','B','B','A'],columns=['Col']).eval('Col in @myValues')
Out[78]:
0 True
1 True
2 True
3 True
dtype: bool
In [79]: pd.DataFrame(['A','A'],columns=['Col']).eval('Col in @myValues')
Out[79]:
0 True
1 True
dtype: bool
In [80]: pd.DataFrame(['B'],columns=['Col']).eval('Col in @myValues')
Out[80]:
0 True
dtype: bool
In [81]: pd.DataFrame(['B','C'],columns=['Col']).eval('Col in @myValues')
Out[81]:
0 True
1 False
dtype: bool
</code></pre>
| 1 |
2016-09-20T12:59:17Z
|
[
"python",
"pandas"
] |
Python os library does not see environment variables in Windows
| 39,594,083 |
<p>As you can see in the picture below, I have a 'SPARK_HOME' environment variable:</p>
<p><a href="http://i.stack.imgur.com/vARRs.png" rel="nofollow"><img src="http://i.stack.imgur.com/vARRs.png" alt="enter image description here"></a></p>
<p>However I just can't get it through python:</p>
<pre><code>import os
os.environ.get('SPARK_HOME', None) # returns None
"SPARK_HOME" in os.environ # returns False
</code></pre>
<p>What am I doing wrong? The operating system is Windows 7.
PS: I can get other variables, for example: </p>
<pre><code>spark_home = os.environ.get('PYTHONPATH', None)
print spark_home # returns correct path
</code></pre>
| 0 |
2016-09-20T12:27:21Z
| 39,594,216 |
<p>Note that indexing <code>os.environ["SPARK_HOME"]</code> directly raises <code>KeyError</code> when the variable is not visible, so use a membership test or <code>.get()</code>:</p>
<pre><code>import os

print "SPARK_HOME" in os.environ    # True or False
print os.environ.get("SPARK_HOME") # prints the "SPARK_HOME" path, or None if unset
</code></pre>
| 0 |
2016-09-20T12:32:39Z
|
[
"python",
"windows",
"python-2.7",
"operating-system",
"environment-variables"
] |
Python os library does not see environment variables in Windows
| 39,594,083 |
<p>As you can see in the picture below, I have a 'SPARK_HOME' environment variable:</p>
<p><a href="http://i.stack.imgur.com/vARRs.png" rel="nofollow"><img src="http://i.stack.imgur.com/vARRs.png" alt="enter image description here"></a></p>
<p>However I just can't get it through python:</p>
<pre><code>import os
os.environ.get('SPARK_HOME', None) # returns None
"SPARK_HOME" in os.environ # returns False
</code></pre>
<p>What am I doing wrong? The operating system is Windows 7.
PS: I can get other variables, for example: </p>
<pre><code>spark_home = os.environ.get('PYTHONPATH', None)
print spark_home # returns correct path
</code></pre>
| 0 |
2016-09-20T12:27:21Z
| 39,594,302 |
<p>To get your Python to start seeing new variables you need to restart your console, not only the <code>ipython notebook</code>!!!</p>
| 1 |
2016-09-20T12:36:45Z
|
[
"python",
"windows",
"python-2.7",
"operating-system",
"environment-variables"
] |
intellij python global environment set fail
| 39,594,131 |
<p>I set some environment variables in /etc/profile or .bashrc, but none of them can be used in IntelliJ with Python at runtime,
so I have to set those variables in IntelliJ as shown here below.
<a href="http://i.stack.imgur.com/P47B9.png" rel="nofollow">idea set global environment variables for all projects</a></p>
<p>However, when I run a script like this:</p>
<pre><code>import os
print os.environ['PYTHONUNBUFFERED']  # IntelliJ sets this automatically; works fine
print os.environ['CUDA_HOME']         # I set this one; it fails with KeyError
</code></pre>
<p>The output surprised me:
they are both set in the same place and in the same format, but behave differently.</p>
<p>Can anyone explain? Thanks.</p>
| 1 |
2016-09-20T12:29:22Z
| 39,594,271 |
<p>Setting environment variables in <code>.bashrc</code> makes them available to one user's sessions only.</p>
<p>And <code>/etc/profile</code> limits env variables to the shell.</p>
<p>Set your variables in <code>/etc/environment</code> or <code>/etc/security/pam_env.conf</code> instead.</p>
<p>See : <a href="http://www.linux-pam.org/Linux-PAM-html/sag-pam_env.html" rel="nofollow">http://www.linux-pam.org/Linux-PAM-html/sag-pam_env.html</a></p>
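<p>For example, a (hypothetical) entry in <code>/etc/environment</code>:</p>
<pre><code>CUDA_HOME=/usr/local/cuda
</code></pre>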
| 1 |
2016-09-20T12:35:41Z
|
[
"python",
"intellij-idea",
"environment-variables"
] |
Creating an incremental List
| 39,594,249 |
<p>When I try to create an incremental list from <code>a</code> to <code>b</code> (so something like [1,1.1,1.2,1.3,...,2] with a=1, b=2), I start having a problem with the specific code I came up with:</p>
<pre><code>def Range(a,b,r):
lst1=[]
y=a
while y<b:
lst1.append(round(float(y+10**-r),r))
y+=10**-r
return lst1
print Range(1,2,3)
</code></pre>
<p>Here, <code>r</code> is the power of my increments <code>(10**-r)</code>. For <code>r=1</code> or <code>r=2</code> the code works fine and ends at [...,2.0]. But for <code>r=3</code> or <code>r=4</code> it ends at [...,2.001] and [...,2.0001] respectively. But then, for <code>r=5</code> and <code>r=6</code> it goes back to ending at 2. Is there something wrong with my code or is this some sort of bug?</p>
| 0 |
2016-09-20T12:34:21Z
| 39,595,048 |
<blockquote>
<p>say, to get an element of 2.001, the list has to take in y=2.000</p>
</blockquote>
<p>That's not true; it's possible that y becomes, say, 1.99999999999997.</p>
<p>You check on <code>y < b</code>, but then add <code>round(float(y+10**-r),r)</code> to the list. It's of course possible that the first is true, but the second is still larger than b.</p>
<p>To see what's happening, remove the rounding, and look at the next-to-last number in your lists in the cases where it's going "wrong".</p>
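<p>A quick way to see this for yourself (a minimal sketch; the accumulated rounding error makes the comparison fail):</p>
<pre><code>>>> y = 1.0
>>> for _ in range(1000):
...     y += 10**-3
...
>>> y == 2.0  # off by a tiny floating-point error
False
</code></pre>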
| 2 |
2016-09-20T13:10:23Z
|
[
"python",
"list",
"increment"
] |
User keyboard input in pygame, how to get CAPS input
| 39,594,390 |
<p>I'm in the process of making my first game and want to prompt the user to enter information, e.g. their name. I've spent the last 3.5 hours writing the function below, which I intend to use (slightly modified, with a blinking underscore cursor to name one change) in my games moving forward.</p>
<p>As my code is currently written I cannot get CAPS input from the user, even though I have allowed for such characters. How might I do this?</p>
<p>Any other suggestions also welcome.</p>
<p>Code:</p>
<pre><code>import pygame
from pygame.locals import *
import sys
def enter_text(max_length, lower = False, upper = False, title = False):
"""
returns user name input of max length "max length and with optional
string operation performed
"""
BLUE = (0,0,255)
pressed = ""
finished = False
# create list of allowed characters by converting ascii values
# numbers 1-9, letters a-z(lower/upper)
allowed_chars = [chr(i) for i in range(97, 123)] +\
[chr(i) for i in range(48,58)] +\
[chr(i) for i in range(65,90)]
while not finished:
screen.fill((0,0,0))
pygame.draw.rect(screen, BLUE, (125,175,150,50))
print_text(font, 125, 150, "Enter Name:")
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
# if input is in list of allowed characters, add to variable
elif event.type == KEYUP and pygame.key.name(event.key) in \
allowed_chars and len(pressed) < max_length:
pressed += pygame.key.name(event.key)
# otherwise, only the following are valid inputs
elif event.type == KEYUP:
if event.key == K_BACKSPACE:
pressed = pressed[:-1]
elif event.key == K_SPACE:
pressed += " "
elif event.key == K_RETURN:
finished = True
print_text(font, 130, 180, pressed)
pygame.display.update()
# perform any selected string operations
if lower: pressed = pressed.lower()
if upper: pressed = pressed.upper()
if title: pressed = pressed.title()
return pressed
def print_text(font, x, y, text, color = (255,255,255)):
"""Draws a text image to display surface"""
text_image = font.render(text, True, color)
screen.blit(text_image, (x,y))
pygame.init()
screen = pygame.display.set_mode((400,400))
font = pygame.font.SysFont(None, 25)
fpsclock = pygame.time.Clock()
fps = 30
BLUE = (0,0,255)
# name entered?
name = False
while True:
fpsclock.tick(fps)
pressed = None
for event in pygame.event.get():
if event.type == KEYUP:
print(pygame.key.name(event.key))
print(ord(pygame.key.name(event.key)))
if event.type == QUIT:
pygame.quit()
sys.exit()
# key polling
keys = pygame.key.get_pressed()
screen.fill((0,0,0))
if not name:
name = enter_text(4, title = True)
print_text(font, 130, 180, name)
pygame.display.update()
</code></pre>
| 0 |
2016-09-20T12:40:26Z
| 39,596,960 |
<p>Below is the re-worked code with upper case input catered for. I've also put in a blinking underscore. Unsure if it's the most efficient way to do so, but there it is. I had fun doing it.</p>
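<p>For context: as far as I can tell, <code>pygame.key.name()</code> always returns the lowercase name of a key regardless of the shift state, which is why the original allowed-characters check never saw uppercase input; the modifier state has to be queried separately via <code>pygame.key.get_mods()</code>, as done below.</p>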
<pre><code>import pygame
from pygame.locals import *
import sys
from itertools import cycle
def enter_text(max_length, lower = False, upper = False, title = False):
"""
returns user name input of max length "max length and with optional
string operation performed
"""
BLUE = (0,0,255)
pressed = ""
finished = False
# create list of allowed characters using ascii values
# numbers 1-9, letters a-z
allowed_values = [i for i in range(97, 123)] +\
[i for i in range(48,58)]
# create blinking underscore
BLINK_EVENT = pygame.USEREVENT + 0
pygame.time.set_timer(BLINK_EVENT, 800)
blinky = cycle(["_", " "])
next_blink = next(blinky)
while not finished:
screen.fill((0,0,0))
pygame.draw.rect(screen, BLUE, (125,175,150,50))
print_text(font, 125, 150, "Enter Name:")
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
if event.type == BLINK_EVENT:
next_blink = next(blinky)
# if input is in list of allowed characters, add to variable
elif event.type == KEYUP and event.key in allowed_values \
and len(pressed) < max_length:
# caps entry?
if pygame.key.get_mods() & KMOD_SHIFT or pygame.key.get_mods()\
& KMOD_CAPS:
pressed += chr(event.key).upper()
# lowercase entry
else:
pressed += chr(event.key)
# otherwise, only the following are valid inputs
elif event.type == KEYUP:
if event.key == K_BACKSPACE:
pressed = pressed[:-1]
elif event.key == K_SPACE:
pressed += " "
elif event.key == K_RETURN:
finished = True
# only draw underscore if input is not at max character length
if len(pressed) < max_length:
print_text(font, 130, 180, pressed + next_blink)
else:
print_text(font, 130, 180, pressed)
pygame.display.update()
# perform any selected string operations
if lower: pressed = pressed.lower()
if upper: pressed = pressed.upper()
if title: pressed = pressed.title()
return pressed
def print_text(font, x, y, text, color = (255,255,255)):
"""Draws a text image to display surface"""
text_image = font.render(text, True, color)
screen.blit(text_image, (x,y))
pygame.init()
screen = pygame.display.set_mode((400,400))
font = pygame.font.SysFont(None, 25)
fpsclock = pygame.time.Clock()
fps = 30
BLUE = (0,0,255)
# name entered?
name = False
while True:
fpsclock.tick(fps)
pressed = None
for event in pygame.event.get():
if event.type == KEYUP:
print(pygame.key.name(event.key))
print(ord(pygame.key.name(event.key)))
if event.type == QUIT:
pygame.quit()
sys.exit()
# key polling
keys = pygame.key.get_pressed()
screen.fill((0,0,0))
if not name:
name = enter_text(10)
print_text(font, 130, 180, name)
pygame.display.update()
</code></pre>
| 0 |
2016-09-20T14:35:52Z
|
[
"python",
"pygame",
"ascii"
] |
Image Attachment in python SDK
| 39,594,421 |
<p>I have the below document which got synced from couchbase lite (iOS).</p>
<pre><code> {
"Date": "00170925591601",
"ID": "tmp652844",
"InvoiceNumber": "tmp652844",
"Supplier": "Temp",
"_attachments": {
"ImageAttachment": {
"content_type": "image/jpeg",
"digest": "sha1-8uKi9mywFwvoP8qNrTGEMWemgKU=",
"length": 1898952,
"revpos": 2,
"stub": true
}
},
"channels": [
"payment"
],
"type": "invoice"
}
</code></pre>
<p>How do I get the attachment from the document? I need to render the image attachment in the HTML. I believe the above attachment data is only metadata.
It would be of great help if someone could help me with this.</p>
<p>Thanks</p>
| 0 |
2016-09-20T12:41:39Z
| 39,594,735 |
<p>How do I get the attachment from the document? This is a plain dict with nested dicts and lists inside it, so you need a nested iterator function or a <a href="http://stackoverflow.com/questions/12507206/python-recommended-way-to-walk-complex-dictionary-structures-imported-from-json">recursive method</a>...</p>
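<p>A minimal sketch of such a recursive walker (a hypothetical helper, assuming <code>doc</code> is the parsed document dict):</p>
<pre><code>def find_key(obj, key):
    """Recursively search nested dicts/lists for the first value under key."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        for v in obj.values():
            found = find_key(v, key)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for v in obj:
            found = find_key(v, key)
            if found is not None:
                return found
    return None

attachment_meta = find_key(doc, '_attachments')  # doc is the parsed document dict
</code></pre>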
| 0 |
2016-09-20T12:56:48Z
|
[
"python",
"couchbase"
] |
Image Attachment in python SDK
| 39,594,421 |
<p>I have the below document which got synced from couchbase lite (iOS).</p>
<pre><code> {
"Date": "00170925591601",
"ID": "tmp652844",
"InvoiceNumber": "tmp652844",
"Supplier": "Temp",
"_attachments": {
"ImageAttachment": {
"content_type": "image/jpeg",
"digest": "sha1-8uKi9mywFwvoP8qNrTGEMWemgKU=",
"length": 1898952,
"revpos": 2,
"stub": true
}
},
"channels": [
"payment"
],
"type": "invoice"
}
</code></pre>
<p>How do I get the attachment from the document? I need to render the image attachment in the HTML. I believe the above attachment data is only metadata.
It would be of great help if someone could help me with this.</p>
<p>Thanks</p>
| 0 |
2016-09-20T12:41:39Z
| 39,612,299 |
<p>Use the function:</p>
<pre><code>get_attachment(id_or_doc, filename, default=None)
    Return an attachment from the specified doc id and filename.

    Parameters:
        id_or_doc - either a document ID or a dictionary or Document object
                    representing the document that the attachment belongs to
        filename  - the name of the attachment file
        default   - default value to return when the document or attachment
                    is not found
    Returns:
        a file-like object with read and close methods, or the value of the
        default argument if the attachment is not found
    Since:
        0.4.1
</code></pre>
<p>Check <a href="https://pythonhosted.org/CouchDB/client.html#couchdb.client.Database.get_attachment" rel="nofollow">https://pythonhosted.org/CouchDB/client.html#couchdb.client.Database.get_attachment</a> for more details.</p>
| 0 |
2016-09-21T09:16:03Z
|
[
"python",
"couchbase"
] |
A diagram for fluxes and stores of a hydrological model
| 39,594,436 |
<p>What is the easiest way in Python to draw a Sankey diagram with variable node (stock) sizes as well as arrows (flows)? </p>
<p>I'm currently working on a hydrological model. The structure of the model can be seen below</p>
<p><a href="http://i.stack.imgur.com/LNRfQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/LNRfQ.png" alt="enter image description here"></a></p>
<p>On every timestep of the model there are fluxes between the different storages and thus the storages decrease or increase in their amount of stored water. To better visualize the fluxes and the storage changes I want to plot them, using Python as it is the only language I know. The size of the drawn storages and fluxes should change according to the amount of water that is contained in a storage or transported by a flux. More or less it should look something like the model structure, but with changing sizes. </p>
<p>At first I thought about using the Sankey diagram of matplotlib, but this proved to be insufficient as it only plots fluxes and not stores. My second idea was to code it all myself using circles and arrows from matplotlib, but this would be quite a lot of work, and as I am only beginning to program it would probably look ugly, too. </p>
<p>So my question is:
Is there a Python tool that can plot fluxes and stores the way I imagine, or is there another way of doing it in Python myself?</p>
| 1 |
2016-09-20T12:42:23Z
| 39,721,556 |
<p>So I have played around a bit and finally got a solution that works and is more general and less crude than I expected. The output looks something like this:
<a href="http://i.stack.imgur.com/EB0C0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/EB0C0.jpg" alt="enter image description here"></a></p>
<p>Every storage is represented by a rectangle and every flux is an arrow. The usage is similar to that of the sankey diagram. I added the code here in case someone has the same problem and might find it useful. An example of how to use the Fluxogram is at the end of the post. The animation method is not yet working, so if anyone has an idea how to properly animate this Fluxogram, feel free to add a post. I will update the code if I am able to do it myself.</p>
<pre><code># -*- coding: utf-8 -*-
"""
Created on Thu Sep 22 14:13:32 2016
@author: Florian Jehn
"""
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
class Fluxogram:
"""
a class to draw and maintain all fluxes and storages from a model or
    some similar kind of thing, to be drawn as a sequence of storages
    and fluxes. Storages and fluxes are not drawn proportionally.
    It should look something like this:
order offset=-1 center offset=1
----------------------------------------------------
1. . . . stor1 . . .
. . .arrow . arrow . arrow . .
2. .stor2 . . stor3 . . stor4 .
. . . . . . arrow .
3. . . . . . stor5 .
"""
def __init__(self, max_flux, max_storage, grid_size = 20, storages = None,
fluxes = None):
"""
        initializes a fluxogram. Must be called with:
        - max_flux: maximum flux of all fluxes; needed for scaling
        - max_storage: maximum storage of all storages; needed for scaling
        - grid_size: grid size for drawing the fluxogram; determines how big
          everything is. Fluxes and storages are scaled accordingly
        - storages: all the storages the fluxogram has (usually empty to
begin with)
- fluxes: all the fluxes the fluxogram has (usually empty to begin
with)
"""
        self.storages = [] if storages is None else storages
        self.fluxes = [] if fluxes is None else fluxes
self.max_flux = max_flux
self.max_storage = max_storage
self.grid_size = grid_size
def add_storage(self, name, amount, order, offset):
"""
add a storage to the storages of the fluxogram
"""
        # len(self.storages) is used to give the storages consecutive numbers
self.storages.append(Storage(name, self.grid_size,len(self.storages) ,
amount, order, offset))
def add_flux(self, name, from_storage, to_storage, amount):
"""
add a flux to the fluxes of the fluxogram
"""
self.fluxes.append(Flux(name, self.grid_size, from_storage, to_storage,
amount))
def update_all_storages(self, amounts):
"""
updates the amount of all storages
"""
for storage, amount in zip(self.storages, amounts):
storage.update_storage(amount)
def update_all_fluxes(self, amounts):
"""
updates the amount of all fluxes
"""
for flux, amount in zip(self.fluxes, amounts):
flux.update_flux(amount)
def update_everything(self, amounts_storages, amounts_fluxes):
"""
updates all fluxes and storages
"""
self.update_all_fluxes(amounts_fluxes)
self.update_all_storages(amounts_storages)
def draw(self):
"""
draws all fluxes and storages
"""
plt.axes()
# draw all fluxes
for flux in self.fluxes:
# scale the amount
scaled_amount_flux = self.scaler(flux.amount, self.max_flux)
# width multiplied because if not, the arrows are so tiny
arrow = plt.Arrow(flux.x_start, flux.y_start, flux.dx, flux.dy,
width = scaled_amount_flux * 1.7, alpha = 0.8)
plt.gca().add_patch(arrow)
# draw all storages
for storage in self.storages:
# scale the amount
scaled_amount_stor = self.scaler(storage.amount, self.max_storage)
# change_x and y, so the storages are centered to the middle
# of their position and not to upper left
x = (storage.x + (1 - storage.amount / self.max_storage) * 0.5
* self.grid_size)
y = (storage.y - (1 - storage.amount / self.max_storage) * 0.5
* self.grid_size)
rectangle = plt.Rectangle((x, y), scaled_amount_stor,
-scaled_amount_stor, alpha = 0.4)
plt.gca().add_patch(rectangle)
plt.axis('scaled')
plt.show()
def animate(self, timeseries_fluxes, timeseries_storages):
"""
        animates a timeseries.
must be called with:
- timeseries_fluxes: a timeseries with amounts for all fluxes for
every day of the timeseries
- timeseries_storages: a timeseries with amounts for all storages
for every day of the timeseries
"""
pass
def animation_init(self, timeseries_fluxes, timeseries_storages):
pass
def scaler(self, value_in, base_max):
"""
        scales the fluxes and storages so they don't overstep their graphical
        bounds. Must be called with:
- valueIn: the value that needs rescaling
- baseMax: the upper limit of the original dataset
~ 100 for fluxes, ~250 for stores (in my model)
"""
# baseMin: the lower limit of the original dataset (usually zero)
base_min = 0
# limitMin: the lower limit of the rescaled dataset (usually zero)
limit_min = 0
# limitMax: the upper limit of the rescaled dataset (in our case grid)
limit_max = self.grid_size
# prevents wrong use of scaler
if value_in > base_max:
raise ValueError("Input value larger than base max")
return (((limit_max - limit_min) * (value_in - base_min)
/ (base_max - base_min)) + limit_min)
class Flux:
"""
a flux of a fluxogram
"""
def __init__(self, name, grid_size, from_storage, to_storage, amount = 0):
"""
initializes a flux with:
- name: name of the flux
- grid_size: grid size of the diagram
- from_storage: storage the flux is originating from
- to_storage: storage the flux is going into
- amount: how much stuff fluxes
"""
self.name = name
self.from_storage = from_storage
self.to_storage = to_storage
self.amount = amount
self.grid_size = grid_size
self.x_start,self.y_start,self.x_end,self.y_end, self.dx, self.dy = (
self.calc_start_end_dx_dy())
def update_flux(self, amount):
"""
update the amount of the flux
"""
self.amount = amount
def calc_start_end_dx_dy(self):
"""
calculates the starting and ending point of an arrow depending on the
order and offset of the starting and ending storages. This helps
determine the direction of the arrow
returns the start and end xy coordinates of the arrow as tuples
"""
# small corrections of x_start/y_start are to make a little gap
# between the arrow and the storage
# arrow pointing to left up
if (self.from_storage.offset > self.to_storage.offset and
self.from_storage.order > self.to_storage.order):
x_start = self.from_storage.x - self.grid_size * 0.1
y_start = self.from_storage.y + self.grid_size * 0.1
x_end = self.to_storage.x + self.grid_size
y_end = self.to_storage.y - self.grid_size
dx = abs(x_start - x_end) * (-1)
dy = abs(y_start - y_end)
# arrow pointing up
elif (self.from_storage.offset == self.to_storage.offset and
self.from_storage.order > self.to_storage.order):
x_start = self.from_storage.x + 0.5 * self.grid_size
y_start = self.from_storage.y + self.grid_size * 0.1
x_end = self.to_storage.x + 0.5 * self.grid_size
y_end = self.to_storage.y - self.grid_size
dx = abs(x_start - x_end)
dy = abs(y_start - y_end)
# arrow pointing right up
elif (self.from_storage.offset < self.to_storage.offset and
self.from_storage.order > self.to_storage.order):
x_start = self.from_storage.x + self.grid_size + self.grid_size*0.1
y_start = self.from_storage.y + self.grid_size * 0.1
x_end = self.to_storage.x
y_end = self.to_storage.y - self.grid_size
dx = abs(x_start - x_end)
dy = abs(y_start - y_end)
# arrow pointing right
elif (self.from_storage.offset < self.to_storage.offset and
self.from_storage.order == self.to_storage.order):
x_start = self.from_storage.x + self.grid_size + self.grid_size*0.1
y_start = self.from_storage.y - 0.5 * self.grid_size
x_end = self.to_storage.x
y_end = self.to_storage.y - 0.5 * self.grid_size
dx = abs(x_start - x_end)
dy = abs(y_start - y_end)
# arrow pointing right down
elif (self.from_storage.offset < self.to_storage.offset and
self.from_storage.order < self.to_storage.order):
x_start = self.from_storage.x + self.grid_size + self.grid_size*0.1
y_start = self.from_storage.y - self.grid_size - self.grid_size*0.1
x_end = self.to_storage.x
y_end = self.to_storage.y
dx = abs(x_start - x_end)
dy = abs(y_start - y_end) * (-1)
# arrow pointing down
elif (self.from_storage.offset == self.to_storage.offset and
self.from_storage.order < self.to_storage.order):
x_start = self.from_storage.x + 0.5 * self.grid_size
y_start = self.from_storage.y - self.grid_size - self.grid_size*0.1
x_end = self.to_storage.x + 0.5 * self.grid_size
y_end = self.to_storage.y
dx = abs(x_start - x_end)
dy = abs(y_start - y_end) * (-1)
# arrow pointing left down
elif (self.from_storage.offset > self.to_storage.offset and
self.from_storage.order < self.to_storage.order):
x_start = self.from_storage.x - self.grid_size * 0.1
y_start = self.from_storage.y - self.grid_size - self.grid_size*0.1
x_end = self.to_storage.x + self.grid_size
y_end = self.to_storage.y
dx = abs(x_start - x_end) * (-1)
dy = abs(y_start - y_end) * (-1)
# arrow pointing left
elif (self.from_storage.offset > self.to_storage.offset and
self.from_storage.order == self.to_storage.order):
x_start = self.from_storage.x - self.grid_size * 0.1
y_start = self.from_storage.y - 0.5 *self.grid_size
x_end = self.to_storage.x + self.grid_size
y_end = self.to_storage.y - 0.5 * self.grid_size
dx = abs(x_start - x_end) * (-1)
dy = abs(y_start - y_end)
# multiply by 0.9 so there is a gap between storages and arrows
dx = dx * 0.9
dy = dy * 0.9
return x_start, y_start, x_end, y_end, dx, dy
class Storage:
"""
a storage of a fluxogram
"""
def __init__(self, name, grid_size, number, amount = 0, order = 0,
offset = 0):
"""initializes a storage with:
- name: name of the storage
- number: consecutive number
- grid_size of the diagram
- amount: how much stuff is in it
        - order: how far down it is in the hierarchy (starts with 0)
- offset = how much the storage is offset to the left/right
in relationship to the center
"""
self.name = name
self.amount = amount
self.number = number
self.order = order
self.offset = offset
self.grid_size = grid_size
self.x, self.y = self.calculate_xy()
def update_storage(self, amount):
"""
update the amount of the storage
"""
self.amount = amount
def calculate_xy(self):
"""
calculates the xy coordinates of the starting point from where
        the rectangle is drawn. The additional multiplication by two is
to produce the gaps in the diagram
"""
x = self.offset * self.grid_size * 2
# multiply by -1 to draw the diagram from top to bottom
y = self.order * self.grid_size * 2 * -1
return x,y
</code></pre>
<p>How to use it:</p>
<pre><code># make a fluxogram instance
fl = Fluxogram(100, 150, grid_size = 10)
# add storages
fl.add_storage("up", 23, 0, 0)
fl.add_storage("right", 130, 1, 1)
fl.add_storage("middle", 149, 1, 0)
fl.add_storage("right_up", 90, 0, 2)
fl.add_storage("right_down", 50, 2, 1)
fl.add_storage("down", 23, 2, 0)
fl.add_storage("left_down", 76, 2, -1)
fl.add_storage("left", 43, 1, -2)
fl.add_storage("left_up", 34, 0, -1)
fl.add_storage("down_down", 78, 3, 0)
# add fluxes
fl.add_flux("middle_to_up", fl.storages[2], fl.storages[0], 30)
fl.add_flux("middle_to_right_up", fl.storages[2], fl.storages[3], 50)
fl.add_flux("middle_to_right", fl.storages[2], fl.storages[1], 100)
fl.add_flux("middle_to_right_down", fl.storages[2], fl.storages[4], 35)
fl.add_flux("middle_to_down", fl.storages[2], fl.storages[5], 50)
fl.add_flux("middle_to_left_down", fl.storages[2], fl.storages[6], 10)
fl.add_flux("middle_to_left", fl.storages[2], fl.storages[7], 50)
fl.add_flux("middle_to_left_up", fl.storages[2], fl.storages[8], 25)
fl.add_flux("down_to_down_down", fl.storages[5], fl.storages[9], 25)
fl.add_flux("down_to_left_down", fl.storages[5], fl.storages[4], 25)
# draw
fl.draw()
# generate data for updating the plots
data_fluxes = []
data_storages = []
for i in range(len(fl.storages) + len(fl.fluxes)):
if i % 2 == 0:
data_fluxes.append(np.random.randint(0, fl.max_flux))
else:
data_storages.append(np.random.randint(0, fl.max_storage))
# update fluxes/storages
fl.update_everything(data_storages, data_fluxes)
# draw again
fl.draw()
# generate random, somewhat oscillating data for animation
timeseries = {"fluxes" : {}, "storages" : {}}
timespan_timeseries = 365
timeseries["fluxes"][0] = data_fluxes
timeseries["storages"][0] = data_storages
for day in range(1, timespan_timeseries):
timeseries["fluxes"][day] = []
for flux in range(len(timeseries["fluxes"][0])):
upper_limit = timeseries["fluxes"][day - 1][flux] + 5
lower_limit = timeseries["fluxes"][day - 1][flux] - 5
new_flux = np.random.randint(lower_limit, upper_limit)
timeseries["fluxes"][day].append(new_flux)
if timeseries["fluxes"][day][flux] > fl.max_flux:
timeseries["fluxes"][day][flux] = fl.max_flux
if timeseries["fluxes"][day][flux] < 0:
timeseries["fluxes"][day][flux] = 0
timeseries["storages"][day] = []
for storage in range(len(timeseries["storages"][0])):
upper_limit = timeseries["storages"][day - 1][flux] + 5
lower_limit = timeseries["storages"][day - 1][flux] - 5
new_storage = np.random.randint(lower_limit, upper_limit)
timeseries["storages"][day].append(new_storage)
if timeseries["storages"][day][storage] > fl.max_storage:
timeseries["storages"][day][storage] = fl.max_storage
if timeseries["storages"][day][storage] < 0:
timeseries["storages"][day][storage] = 0
# animate the timeseries
fl.animate(timeseries["fluxes"], timeseries["storages"])
</code></pre>
| 1 |
2016-09-27T09:52:20Z
|
[
"python",
"matplotlib",
"sankey-diagram"
] |
Grouping contiguous values in a numpy array with their length
| 39,594,631 |
<p>In numpy / scipy (or pure python if you prefer), what would be a good way to group contiguous regions in a numpy array and count the length of these regions?</p>
<p>Something like this:</p>
<pre><code>x = np.array([1,1,1,2,2,3,0,0,0,0,0,1,2,3,1,1,0,0,0])
y = contiguousGroup(x)
print y
>> [[1,3], [2,2], [3,1], [0,5], [1,1], [2,1], [3,1], [1,2], [0,3]]
</code></pre>
<p>I have tried to do this just with loops; however, it takes longer than I would like (6 seconds) for a list with about 30 million samples and 20000 contiguous regions. </p>
<p>Edit:</p>
<p>And now for some speed comparisons (just using time.clock() and a few hundred iterations, or less if it is in seconds). </p>
<p>Firstly my python loop code tested on 5 samples. </p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 8.644007 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 10.533305 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 7.671015 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 5.014807 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 4.207359 seconds...
</code></pre>
<p>And now with Divakar's vectorized solution.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.063470 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.046293 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 1.654288 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 0.022651 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.021189 seconds...
</code></pre>
<p>The modified approach gives roughly the same times (maybe 5% slower at worst).</p>
<p>And now with the generator approach from Kasramvd.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 3.834922 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 4.785480 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 6.806867 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 2.264413 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 1.778873 seconds...
</code></pre>
<p>And now his numpythonic version.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.286336 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.174769 seconds...
Memory error sample 3 (too many regions)
Number of elements 19723928
Number of regions 10799
Time taken = 0.087028 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.084963 seconds...
</code></pre>
<p>Anyway I think the moral of the story is that numpy is very good. </p>
| 2 |
2016-09-20T12:51:15Z
| 39,594,730 |
<p>Here's a vectorized approach -</p>
<pre><code>idx = np.concatenate(([0],np.flatnonzero(x[:-1]!=x[1:])+1,[x.size]))
out = zip(x[idx[:-1]],np.diff(idx))
</code></pre>
<p>Sample run -</p>
<pre><code>In [34]: x
Out[34]: array([1, 1, 1, 2, 2, 3, 0, 0, 0, 0, 0, 1, 2, 3, 1, 1, 0, 0, 0])
In [35]: out
Out[35]: [(1, 3), (2, 2), (3, 1), (0, 5), (1, 1), (2, 1), (3, 1), (1, 2), (0, 3)]
</code></pre>
<hr>
<p>The concatenation on the entire array could be expensive. So, a modified version that instead does the concatenation on the group-shifting indices could be suggested, like so -</p>
<pre><code>idx0 = np.flatnonzero(x[:-1]!=x[1:])
count = np.concatenate(([idx0[0]+1],np.diff(idx0),[x.size-idx0[-1]-1]))
out = zip(x[np.append(0,idx0+1)],count)
</code></pre>
<p>Alternatively, at the final step, if the output as a <code>2D</code> array is okay, we could avoid that <code>zipping</code> and use NumPy's column_stack, like so -</p>
<pre><code>out = np.column_stack((x[np.append(0,idx0+1)],count))
</code></pre>
| 0 |
2016-09-20T12:56:38Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Grouping contiguous values in a numpy array with their length
| 39,594,631 |
<p>In numpy / scipy (or pure python if you prefer), what would be a good way to group contiguous regions in a numpy array and count the length of these regions?</p>
<p>Something like this:</p>
<pre><code>x = np.array([1,1,1,2,2,3,0,0,0,0,0,1,2,3,1,1,0,0,0])
y = contiguousGroup(x)
print y
>> [[1,3], [2,2], [3,1], [0,5], [1,1], [2,1], [3,1], [1,2], [0,3]]
</code></pre>
<p>I have tried to do this just with loops; however, it takes longer than I would like (6 seconds) for a list with about 30 million samples and 20000 contiguous regions. </p>
<p>Edit:</p>
<p>And now for some speed comparisons (just using time.clock() and a few hundred iterations, or less if it is in seconds). </p>
<p>Firstly my python loop code tested on 5 samples. </p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 8.644007 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 10.533305 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 7.671015 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 5.014807 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 4.207359 seconds...
</code></pre>
<p>And now with Divakar's vectorized solution.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.063470 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.046293 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 1.654288 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 0.022651 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.021189 seconds...
</code></pre>
<p>The modified approach gives roughly the same times (maybe 5% slower at worst).</p>
<p>And now with the generator approach from Kasramvd.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 3.834922 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 4.785480 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 6.806867 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 2.264413 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 1.778873 seconds...
</code></pre>
<p>And now his numpythonic version.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.286336 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.174769 seconds...
Memory error sample 3 (too many regions)
Number of elements 19723928
Number of regions 10799
Time taken = 0.087028 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.084963 seconds...
</code></pre>
<p>Anyway I think the moral of the story is that numpy is very good. </p>
| 2 |
2016-09-20T12:51:15Z
| 39,595,127 |
<p>Here is a numpythonic approach:</p>
<pre><code>In [192]: [(i[0], len(i)) for i in np.split(x, np.where(np.diff(x) != 0)[0]+1)]
Out[192]: [(1, 3), (2, 2), (3, 1), (0, 5), (1, 1), (2, 1), (3, 1), (1, 2), (0, 3)]
</code></pre>
<p>Here is a generator based approach using <code>itertools.groupby()</code>:</p>
<pre><code>In [180]: from itertools import groupby
In [181]: [(k, sum(1 for _ in g)) for k, g in groupby(x)]
Out[181]: [(1, 3), (2, 2), (3, 1), (0, 5), (1, 1), (2, 1), (3, 1), (1, 2), (0, 3)]
</code></pre>
<p>Or :</p>
<pre><code>In [213]: mask = np.diff(x) != 0
In [216]: np.column_stack((np.concatenate((x[mask], [x[-1]])), map(len, np.split(x, np.where(mask)[0]+1))))
Out[216]:
array([[1, 3],
[2, 2],
[3, 1],
[0, 5],
[1, 1],
[2, 1],
[3, 1],
[1, 2],
[0, 3]])
</code></pre>
| 1 |
2016-09-20T13:14:15Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Grouping contiguous values in a numpy array with their length
| 39,594,631 |
<p>In numpy / scipy (or pure python if you prefer), what would be a good way to group contiguous regions in a numpy array and count the length of these regions?</p>
<p>Something like this:</p>
<pre><code>x = np.array([1,1,1,2,2,3,0,0,0,0,0,1,2,3,1,1,0,0,0])
y = contiguousGroup(x)
print y
>> [[1,3], [2,2], [3,1], [0,5], [1,1], [2,1], [3,1], [1,2], [0,3]]
</code></pre>
<p>I have tried to do this just with loops; however, it takes longer than I would like (6 seconds) for a list with about 30 million samples and 20000 contiguous regions. </p>
<p>Edit:</p>
<p>And now for some speed comparisons (just using time.clock() and a few hundred iterations, or less if it is in seconds). </p>
<p>Firstly my python loop code tested on 5 samples. </p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 8.644007 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 10.533305 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 7.671015 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 5.014807 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 4.207359 seconds...
</code></pre>
<p>And now with Divakar's vectorized solution.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.063470 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.046293 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 1.654288 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 0.022651 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.021189 seconds...
</code></pre>
<p>The modified approach gives roughly the same times (maybe 5% slower at worst).</p>
<p>And now with the generator approach from Kasramvd.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 3.834922 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 4.785480 seconds...
Number of elements 21841302
Number of regions 7619335
Time taken = 6.806867 seconds...
Number of elements 19723928
Number of regions 10799
Time taken = 2.264413 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 1.778873 seconds...
</code></pre>
<p>And now his numpythonic version.</p>
<pre><code>Number of elements 33718251
Number of regions 135137
Time taken = 0.286336 seconds...
Number of elements 42503100
Number of regions 6985
Time taken = 0.174769 seconds...
Memory error sample 3 (too many regions)
Number of elements 19723928
Number of regions 10799
Time taken = 0.087028 seconds...
Number of elements 16619539
Number of regions 19293
Time taken = 0.084963 seconds...
</code></pre>
<p>Anyway I think the moral of the story is that numpy is very good. </p>
| 2 |
2016-09-20T12:51:15Z
| 39,597,153 |
<p>All you may need is <code>np.diff</code>, and it is a little easier to read. Create a mask ...</p>
<pre><code>x = np.array([1,1,1,2,2,3,0,0,0,0,0,1,2,3,1,1,0,0,0])
mask = np.where( np.diff(x) != 0)[0]
mask = np.hstack((-1, mask, len(x)-1 ))
zip( x[mask[1:]], np.diff(mask) )
</code></pre>
<p>This should be the easiest to understand and is fully vectorized (not sure about <code>zip</code>)...</p>
| 0 |
2016-09-20T14:44:22Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
add multiple values per key to a dictionary depending on if statement
| 39,594,722 |
<p>I need to transform a path into a python dictionary where <code>parent folder = key</code> and <code>folder content = values</code> (e.g. <code>'a/foo' 'a/bar' 'b/root' 'b/data'</code> needs to be <code>{ 'a': ['foo', 'bar'], 'b': ['root', 'data']}</code>). I have my dictionary initialized as <code>d = {'a': [], 'b': [] }</code> and my list of values <code>l = ['a/foo', 'a/bar', 'b/root', 'b/data']</code></p>
<p>I wrote a nested for loop with an if statement which determines which value should be assigned to which key and removes that key from the value path (e.g. 'a/root' becomes 'root').</p>
<pre><code>for key in d:
myIndex = len(key)
for value in l:
if value[:myIndex] == key:
d[key].append(value[myIndex+1:])
</code></pre>
<p>The thing is that I get the values in the right format except that I get all of them for each key <code>{ a: ['foo', 'bar', 'root', 'data'], b: ['foo', 'bar', 'root', 'data'] }</code>, as if my if statement were simply being ignored. If someone has an idea of the problem, please share! Thanks</p>
| 1 |
2016-09-20T12:56:15Z
| 39,594,837 |
<p>Considering the structure of your result, <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> can handle this more easily:</p>
<pre><code>from collections import defaultdict
d = defaultdict(list)
lst = ['a/foo', 'a/bar', 'b/root', 'b/data']
for path in lst:
k, v = path.split('/')
d[k].append(v)
print(d)
# defaultdict(<class 'list'>, {'b': ['root', 'data'], 'a': ['foo', 'bar']})
</code></pre>
| 1 |
2016-09-20T13:00:49Z
|
[
"python",
"dictionary"
] |
add multiple values per key to a dictionary depending on if statement
| 39,594,722 |
<p>I need to transform a path into a python dictionary where <code>parent folder = key</code> and <code>folder content = values</code> (e.g. <code>'a/foo' 'a/bar' 'b/root' 'b/data'</code> needs to be <code>{ 'a': ['foo', 'bar'], 'b': ['root', 'data']}</code>). I have my dictionary initialized as <code>d = {'a': [], 'b': [] }</code> and my list of values <code>l = ['a/foo', 'a/bar', 'b/root', 'b/data']</code></p>
<p>I wrote a nested for loop with an if statement which determines which value should be assigned to which key and removes that key from the value path (e.g. 'a/root' becomes 'root').</p>
<pre><code>for key in d:
myIndex = len(key)
for value in l:
if value[:myIndex] == key:
d[key].append(value[myIndex+1:])
</code></pre>
<p>The thing is that I get the values in the right format except that I get all of them for each key <code>{ a: ['foo', 'bar', 'root', 'data'], b: ['foo', 'bar', 'root', 'data'] }</code>, as if my if statement were simply being ignored. If someone has an idea of the problem, please share! Thanks</p>
| 1 |
2016-09-20T12:56:15Z
| 39,594,934 |
<p>Use <code>split</code> to split to folder and subfolder.</p>
<pre><code>d = {'a': [], 'b': [] }
l = ['a/foo', 'a/bar', 'b/root', 'b/data']
for key in d:
for value in l:
sep = value.split('/')
if key == sep[0]:
d[key].append(sep[1])
print(d)
# {'b': ['root', 'data'], 'a': ['foo', 'bar']}
</code></pre>
| 0 |
2016-09-20T13:05:16Z
|
[
"python",
"dictionary"
] |
add multiple values per key to a dictionary depending on if statement
| 39,594,722 |
<p>I need to transform a path into a python dictionary where <code>parent folder = key</code> and <code>folder content = values</code> (e.g. <code>'a/foo' 'a/bar' 'b/root' 'b/data'</code> needs to be <code>{ 'a': ['foo', 'bar'], 'b': ['root', 'data']}</code>). I have my dictionary initialized as <code>d = {'a': [], 'b': [] }</code> and my list of values <code>l = ['a/foo', 'a/bar', 'b/root', 'b/data']</code></p>
<p>I wrote a nested for loop with an if statement which determines which value should be assigned to which key and removes that key from the value path (e.g. 'a/root' becomes 'root').</p>
<pre><code>for key in d:
myIndex = len(key)
for value in l:
if value[:myIndex] == key:
d[key].append(value[myIndex+1:])
</code></pre>
<p>The thing is that I get the values in the right format except that I get all of them for each key <code>{ a: ['foo', 'bar', 'root', 'data'], b: ['foo', 'bar', 'root', 'data'] }</code>, as if my if statement were simply being ignored. If someone has an idea of the problem, please share! Thanks</p>
| 1 |
2016-09-20T12:56:15Z
| 39,595,020 |
<p>You have a problem with the way you initialize your dictionary <strong>customersDatabases</strong>:</p>
<p>Instead of:</p>
<pre><code>customersDatabases = d # problem with keys initialization (pointer to same empty list)
</code></pre>
<p>Try:</p>
<pre><code>customersDatabases = dict(d) # new instance properly initialized
</code></pre>
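<p>For reference, a minimal sketch of the aliasing pitfall pointed at here (variable names are illustrative; note that <code>dict(d)</code> is a shallow copy, so if the lists themselves must be independent you may need <code>copy.deepcopy</code>):</p>
<pre><code>import copy

d = {'a': [], 'b': []}
alias = d                 # same dict object: writes through alias show up in d
shallow = dict(d)         # new dict, but 'a' and 'b' still point at d's lists
deep = copy.deepcopy(d)   # fully independent copy

alias['a'].append('foo')
print(d['a'])        # ['foo'] - shared via the alias
print(shallow['a'])  # ['foo'] - lists are still shared in a shallow copy
print(deep['a'])     # []     - the deep copy is unaffected
</code></pre>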
| 0 |
2016-09-20T13:09:09Z
|
[
"python",
"dictionary"
] |
Python: find all urls which contain string
| 39,594,926 |
<p>I am trying to find all pages that contain a certain string in the name, on a certain domain. For example: </p>
<pre><code>www.example.com/section/subsection/406751371-some-string
www.example.com/section/subsection/235824297-some-string
www.example.com/section/subsection/146783214-some-string
</code></pre>
<p>What would be the best way to do it?</p>
<p>The numbers before "-some-string" can be any 9-digit number. I can write a script that loops through all possible 9-digit numbers and tries to access the resulting url, but I keep thinking that there should be a more efficient way to do this, especially since I know that overall there are only about 1000 possible pages that end with that string. </p>
| -2 |
2016-09-20T13:05:00Z
| 39,595,557 |
<p>I understood your situation: the numeric value before -some-string is a kind of object id for that web site (for example, this question has an id 39594926, and the url is stackoverflow.com/questions/<strong>39594926</strong>/python-find-all-urls-which-contain-string)</p>
<p>I don't think there is a way to find all valid numbers, unless you have a listing (or parent) page from that website that lists all these numbers. Take Stackoverflow as an example again: in the question list page, you will see all these question ids.</p>
<p>If you can provide me the website, I could have a look and try to find the 'pattern' of these numbers. For some simple websites, that number is just an increment to identify objects (could be a user, question or anything else).</p>
| 0 |
2016-09-20T13:32:27Z
|
[
"python",
"url"
] |
Python: find all urls which contain string
| 39,594,926 |
<p>I am trying to find all pages that contain a certain string in the name, on a certain domain. For example: </p>
<pre><code>www.example.com/section/subsection/406751371-some-string
www.example.com/section/subsection/235824297-some-string
www.example.com/section/subsection/146783214-some-string
</code></pre>
<p>What would be the best way to do it?</p>
<p>The numbers before "-some-string" can be any 9-digit number. I can write a script that loops through all possible 9-digit numbers and tries to access the resulting url, but I keep thinking that there should be a more efficient way to do this, especially since I know that overall there are only about 1000 possible pages that end with that string. </p>
| -2 |
2016-09-20T13:05:00Z
| 39,606,981 |
<p>If these articles are all linked to on one page, you could parse the HTML of that index page, since all the links will be contained in <code>href</code> attributes.</p>
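<p>For example, a minimal sketch (assuming the <code>requests</code> and <code>BeautifulSoup</code> libraries and a hypothetical index page URL):</p>
<pre><code>import requests
from bs4 import BeautifulSoup

# hypothetical listing page that links to all the articles
index_url = 'http://www.example.com/section/subsection/'
soup = BeautifulSoup(requests.get(index_url).text, 'html.parser')

# collect every link that ends with the known suffix
urls = [a['href'] for a in soup.find_all('a', href=True)
        if a['href'].endswith('-some-string')]
print(urls)
</code></pre>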
| 0 |
2016-09-21T03:19:27Z
|
[
"python",
"url"
] |
How to set default value from 'on delivery order' to 'on demand' in create invoice field in quotations in odoo?
| 39,595,152 |
<p>I am trying to change the default value from <code>'on delivery order'</code> to <code>'on demand'</code> in the create invoice field in the quotation's other information tab in odoo. But it is not being set as the default value. Can anyone help me out with how to set the default value of create invoice to 'on demand'?</p>
<p><strong>This is the default odoo code i am posting.</strong></p>
<pre><code>'order_policy': fields.selection([
('manual', 'On Demand'),
('picking', 'On Delivery Order'),
('prepaid', 'Before Delivery'),
],
'Create Invoice',
required=True,
readonly=True,
states={
'draft': [('readonly', False)],
'sent': [('readonly', False)]
},
),
</code></pre>
<p><strong>and this is default value code i am writing.</strong></p>
<pre><code>_defaults = {
'order_policy': 'manual',
}
</code></pre>
| 0 |
2016-09-20T13:15:38Z
| 39,595,626 |
<p>You can use the default attribute in the field:</p>
<pre><code>'order_policy': fields.selection([
('manual', 'On Demand'),
('picking', 'On Delivery Order'),
('prepaid', 'Before Delivery'),
],
'Create Invoice',
default='manual',
required=True,
readonly=True,
states={
'draft': [('readonly', False)],
'sent': [('readonly', False)]
},
),
</code></pre>
| 1 |
2016-09-20T13:35:34Z
|
[
"python",
"openerp",
"default",
"odoo-8",
"default-value"
] |
How to set default value from 'on delivery order' to 'on demand' in create invoice field in quotations in odoo?
| 39,595,152 |
<p>I am trying to change the default value from <code>'on delivery order'</code> to <code>'on demand'</code> in the create invoice field in the quotation's other information tab in odoo. But it is not being set as the default value. Can anyone help me out with how to set the default value of create invoice to 'on demand'?</p>
<p><strong>This is the default odoo code i am posting.</strong></p>
<pre><code>'order_policy': fields.selection([
('manual', 'On Demand'),
('picking', 'On Delivery Order'),
('prepaid', 'Before Delivery'),
],
'Create Invoice',
required=True,
readonly=True,
states={
'draft': [('readonly', False)],
'sent': [('readonly', False)]
},
),
</code></pre>
<p><strong>and this is default value code i am writing.</strong></p>
<pre><code>_defaults = {
'order_policy': 'manual',
}
</code></pre>
| 0 |
2016-09-20T13:15:38Z
| 39,607,906 |
<p>You can do it in two ways.</p>
<p>1). As @KHELILI Hamza suggested, you should redefine that field and set the default value.</p>
<p>2). You need to override the default_get method.</p>
<p><strong>Solution:</strong></p>
<pre><code>def default_get(self, cr, uid, fields, context=None):
res = super(class_name, self).default_get(cr, uid, fields, context=context)
res.update({'order_policy': 'manual'})
return res
</code></pre>
<p>If you want to set the default value conditionally, you can manage that easily in this code.</p>
| 1 |
2016-09-21T05:01:39Z
|
[
"python",
"openerp",
"default",
"odoo-8",
"default-value"
] |
cv2.stereocalibrate() error about "cameraMatrix" not being a numerical tuple
| 39,595,291 |
<p>I'm working on estimating distance from stereo cameras. I am using python 2.7.11 and OpenCV 3.1. I calibrated every camera separately and found the intrinsic parameters. The problem is that when I use the cv2.stereoCalibrate() function in the way stated in the OpenCV documents, I get the error: "TypeError: cameraMatrix1 is not a numerical tuple".</p>
<p>the code looks like this :</p>
<pre><code>stereo_criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 100, 1e-5)
ret_stereo, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, grayL.shape[::-1],mtxL, distcoeffL, mtxR, distcoeffR, criteria=stereo_criteria, flags=cv2.CALIB_FIX_INTRINSIC)
</code></pre>
<p>I thought that changing the camera matrix into a tuple using the tuple() function might work. It didn't change anything. Then I thought the problem might be the version of OpenCV that I'm using, so I changed it to 2.4.13, but that didn't change anything either. After all these tries I thought I would let stereoCalibrate() do the camera matrix and distortion coefficient estimation itself, without any single-camera calibration. </p>
<p>the code looked like this:</p>
<pre><code>stereo_criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 100, 1e-5)
stereocalib_retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL,imgpointsL,imgpointsR,grayL.shape, criteria=stereo_criteria)
</code></pre>
<p>but it gets an error saying: "TypeError: Required argument 'distCoeffs1' (pos 5) not found".
I just can't figure out why this function is not working. All I want is to calculate the R, T, E, F values.</p>
<p><strong>UPDATE:</strong> For the last error I tried this:</p>
<pre><code>stereocalib_retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL,
imgpointsL,
imgpointsR,
mtxL,
distcoeffL,
mtxR,
distcoeffR,
image_size,
criteria=stereocalib_criteria,
flags=stereocalib_flags)
</code></pre>
<p>and it worked, although passing cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2 should not be needed (they are actually equal to mtxL, distcoeffL, mtxR, distcoeffR respectively); omitting them was causing an error for me. The first problem still remains.</p>
<p>I appreciate your help</p>
<p>Masood Saber</p>
| 0 |
2016-09-20T13:20:26Z
| 39,779,805 |
<p>Problem fixed. I used the lines below and the errors went away.</p>
<pre><code>cameraMatrix1 = None
cameraMatrix2 = None
distCoeffs1 = None
distCoeffs2 = None
R = None
T = None
E = None
F = None
retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(
    objpointsL, imgpointsL, imgpointsR,
    cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
    (w, h), R, T, E, F,
    flags=stereocalib_flags, criteria=stereocalib_criteria)
</code></pre>
| 0 |
2016-09-29T20:51:59Z
|
[
"python",
"opencv",
"camera",
"calibration"
] |
Windows Theano Keras - lazylinker_ext\mod.cpp: No such file or directory
| 39,595,392 |
<p>I am installing Theano and Keras following <a href="http://stackoverflow.com/questions/34097988/how-do-i-install-keras-and-theano-in-anaconda-python-2-7-on-windows?answertab=votes#tab-top">How do I install Keras and Theano in Anaconda Python 2.7 on Windows?</a>, which worked fine for me with an older release. Now I have upgraded to the latest Theano version, and when validating its functionality with this command:</p>
<p>Python:</p>
<pre><code> from theano import function, config, shared, sandbox
</code></pre>
<p>it resulted in really long error log containing:</p>
<pre><code>g++.exe: error: C:\Users\John: No such file or directory
g++.exe: error: Dow\AppData\Local\Theano\compiledir_Windows-10-10.0.10240-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.12-64\lazylinker_ext\mod.cpp: No such file or directory
</code></pre>
<p>It seems the <strong>path to the user directory "John Dow" was split</strong> by g++ into two file paths, since there is a space in the name.</p>
<p>Is there a way to tell python <strong>not to use the "C:\Users\John Dow" directory</strong> but e.g. "C:\mytempdir"? Setting the USERPROFILE windows variable didn't help.</p>
<p>NOTE: I managed to fix the g++ command where it failed (by adding quotes to the output), which successfully compiled the sources. Unfortunately it didn't solve my problem: when started again, it fails at the same step.
It also seems to be an issue of Theano, since switching to a different Python version didn't help.</p>
| 0 |
2016-09-20T13:24:50Z
| 39,721,049 |
<p>The answer is from here:
<a href="http://stackoverflow.com/questions/34346839/theano-change-base-compiledir-to-save-compiled-files-in-another-directory">Theano: change `base_compiledir` to save compiled files in another directory</a></p>
<p>i.e. in the ~/.theanorc file (or create it) add these lines:</p>
<pre><code>[global]
base_compiledir=/some/path
</code></pre>
| 0 |
2016-09-27T09:30:03Z
|
[
"python",
"g++",
"mingw",
"theano",
"keras"
] |
Write output of a python script calling a sql script to a file
| 39,595,423 |
<p>I'm learning python and sql at the same time. ;) The following python script calls a sql script. I am trying to figure out how to get the output from the query to go to a csv. I have gotten this far: </p>
<hr>
<pre><code>#!/usr/bin/env python
import cx_Oracle
import sys
import csv
import os.path
import subprocess
con = cx_Oracle.connect('creden/tials@(DESCRIPTION =(ADDRESS_LIST =(ADDRESS =(PROTOCOL = TCP)(Host =server.work.net)(Port = 3453)))(CONNECT_DATA = (SID = SQLTEST)))')
cur=con.cursor()
f=open('/usr/local/src/sql/script_01.sql')
full_sql=f.read()
sql_commands=full_sql.split(";")
for sql_command in sql_commands:
cur.execute(sql_command)
# Here is where the magic should happen
# The output of above should go to the 'test_query' file
query_results = <OUTPUT OF FOR LOOP>
c=csv.writer(open("/home/user/sqlplus/test_query/test_query.csv","w"),delimiter=";",lineterminator="\r\n")
c.writerows(query_results)
cur.close()
</code></pre>
<hr>
<p>Can someone please tell me what I'm missing?</p>
| 0 |
2016-09-20T13:25:53Z
| 39,596,745 |
<p>Maybe this will help you: <a href="https://cx-oracle.readthedocs.io/en/latest/cursor.html" rel="nofollow">Documentation</a> </p>
<p>There is a function fetchall() which fetches all rows of the query result.</p>
<p>You can use it as</p>
<pre><code>cur.execute("sql_command")
query_result = cur.fetchall()
</code></pre>
<p>query_result will be a list of row tuples.</p>
<p>It will contain the result of only one command at a time.</p>
<p>Now you can use a list and append all the results.</p>
<p>e.g.</p>
<pre><code>results = []
for sql_command in sql_commands:
cur.execute(sql_command) #sql_command is of type string
query_result = cur.fetchall()
print(query_result)
    # query_result is a list of row tuples; select which rows you need, e.g. query_result[0], query_result[1], etc.
results.append(query_result[0]) #change index according to your need
</code></pre>
<p>Now at the end of for loop you will have list (i.e results) having all the outputs of query.</p>
<p>By using another for loop over results (which is a list) you can get the result of each query one by one
and write it to the csv file.</p>
<p>I don't know the best way to handle the csv file; maybe you need to save it at each iteration.
I hope you can handle this. </p>
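<p>For completeness, a minimal sketch of writing those rows out with the <code>csv</code> module (assuming <code>results</code> holds row tuples collected as above; the output path comes from the question):</p>
<pre><code>import csv

with open('/home/user/sqlplus/test_query/test_query.csv', 'w') as out:
    writer = csv.writer(out, delimiter=';', lineterminator='\r\n')
    writer.writerows(results)
</code></pre>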
| 0 |
2016-09-20T14:27:03Z
|
[
"python",
"sql",
"oracle"
] |
Using only a slice of meshgrid or replacing it for evaluting a function
| 39,595,564 |
<p>Edit: Solution</p>
<p>The answer was posted below, using list prod from itertools greatly reduces memory usage as it is a list object, something I overlooked in the thread I linked.</p>
<p>Code:</p>
<pre><code>n = 2 #dimension
r = np.arange(0.2,2.4,0.1) #range
grid = product(r,repeat=n)
for g in grid:
y = np.array(g)
if np.all(np.diff(g) > 0) == True:
print(f(y)) #or whatever you want to do
</code></pre>
<hr>
<p>I am trying to evaluate a function with n parameters over a certain range, and I want to be able to change n, so the user can determine it with their input. </p>
<p>I got working code with help from <a href="http://stackoverflow.com/questions/39536288/variable-dimensionality-of-a-meshgrid-with-numpy">here</a> and it looks like this:</p>
<pre><code>import numpy as np
n = 2 #determine n
r = np.arange(0.2,2.4,0.1) #determine range
grid=np.array(np.meshgrid(*[r]*n)).T.reshape(-1,n) #creates a meshgrid for the range**n
#reduces the number of used points in the grid (the function is symmetrical in its input parameters)
for i in range(0,n-1):
grid = np.array([g for g in grid if g[i]<g[i+1]])
y = np.zeros((grid.shape[0]))
for i in range(grid.shape[0]):
y[i] = f(grid[i,:]) #evaluating function for all lines in grid
</code></pre>
<p>This only works up to n equal to 5 or 6; then the grid array and the output array just get too big for Spyder to handle (~10 MB). </p>
<p>Is there a way to only create one line of the meshgrid (combination of parameters) at a time to evaluate the function with? Then I could save those values (grid,y) in a textfile and overwrite them in the next step.</p>
<p>Or is there a way to create all n-dimensional combinations in range without meshgrid but with a variable n and one at a time?</p>
| 0 |
2016-09-20T13:32:44Z
| 39,595,965 |
<p>You may want to try an iterator. It seems <a href="https://docs.python.org/3.6/library/itertools.html#itertools.product" rel="nofollow">itertools.product</a> fits your need.</p>
<pre><code>import numpy as np
from itertools import product

r = np.arange(0,3,1) #determine range
list(product(r,r))
#[(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
</code></pre>
<p>More detail:</p>
<pre><code>grid = product(*[r]*n)  # note the unpacking; equivalent to product(r, repeat=n)
y = np.zeros(size)      # you should know the exact size (see comb() below)
i = 0
for g in grid:
    if is_ascending(g):  # check if g is in ascending order
        y[i] = f(g)
        i += 1
</code></pre>
<p>By the way, the size of y should be <code>comb(len(r), n)</code>, i.e. choose n from <code>len(r)</code>:</p>
<pre><code>from scipy.special import comb
size = int(comb(len(r), n))
</code></pre>
<p>Or an one-liner</p>
<pre><code>y = [f(g) for g in grid if is_ascending(g)]
</code></pre>
<p>Where <code>is_ascending</code> is a user-defined filter function.</p>
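<p>For instance, a minimal sketch of such a filter (it mirrors the <code>np.all(np.diff(...) > 0)</code> check from the solution at the top of the question; the name <code>is_ascending</code> is just illustrative):</p>
<pre><code>import numpy as np

def is_ascending(g):
    # True when the tuple's elements are strictly increasing
    return np.all(np.diff(g) > 0)
</code></pre>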
| 0 |
2016-09-20T13:51:35Z
|
[
"python",
"arrays",
"python-3.x",
"numpy"
] |
Python os.walk netCDF4 not working together
| 39,595,735 |
<p>I am trying to go through a series of .nc files to run a small bit of code. The test script below prints the first filename in the directory but when I use <code>ncfile = netCDF4.Dataset(fname, 'r')</code> I get the error </p>
<pre><code>File "netCDF4\_netCDF4.pyx",
line 1795, in netCDF4._netCDF4.Dataset.__init__
(netCDF4\_netCDF4.c:12278)
RuntimeError: No such file or directory
</code></pre>
<p>Is this an incompatibility issue with os.walk and netCDF4 or is it a simple error I am making?</p>
<pre><code>import os
import netCDF4
for root, dirs, files in os.walk('E:\satellite .nc data\ENVISAT2006'):
for fname in files:
        print fname  # works up to here; without the line below it prints all filenames
ncfile = netCDF4.Dataset(fname, 'r')
</code></pre>
| 0 |
2016-09-20T13:40:42Z
| 39,596,127 |
<p>It seems it was a simple problem of needing the path and directory in</p>
<p><code>ncfile = netCDF4.Dataset(fname, 'r')</code> </p>
<p>so I have replaced it with</p>
<p><code>ncfile = netCDF4.Dataset(os.path.join(fdir,fname), 'r')</code> </p>
<p>and specified <code>fdir</code> outside the loop. For simplicity I have replaced <code>os.walk</code> with <code>os.listdir</code> as I don't need to go through a directory tree.</p>
<pre><code>import os
import netCDF4
import numpy as np
from math import pi
from numpy import cos, sin
fdir = 'E:\satellite .nc data\ENVISAT2006'
for fname in os.listdir('E:\satellite .nc data\ENVISAT2006'):
#os.walk only needed if going through all of the files in a directory tree
#for fname in files:
print fname
ncfile = netCDF4.Dataset(os.path.join(fdir,fname), 'r')
</code></pre>
| 1 |
2016-09-20T13:57:57Z
|
[
"python",
"netcdf",
"os.walk"
] |
luigi per-task retry policy
| 39,595,786 |
<p>I have an issue configuring luigi per-task retry-policy. I've configured the global luigi.cfg file as follows:</p>
<pre><code>[scheduler]
retry-delay: 1
retry_count: 5
[worker]
keep_alive: true
wait_interval: 3
</code></pre>
<p>Furthermore, it states in the luigi configuration manual that writing a task as follows:</p>
<pre><code>class SomeTask(luigi.Task):
retry_count = 3
</code></pre>
<p>will suffice to override the luigi retry_count specified in luigi.cfg. However, this setting does not affect the run at all. I've managed to create a task which fails every time just for testing, and logging shows that this task failed 5 times (and not 3). </p>
<p>I think there's something fundamental I'm missing.</p>
| 0 |
2016-09-20T13:43:03Z
| 39,869,332 |
<p>If even the example does not work, I believe your server or client code is out of date. Note that the <code>luigi</code> command will run your installed version, and the central server needs to be restarted after a package upgrade.</p>
| 0 |
2016-10-05T08:46:48Z
|
[
"python",
"luigi"
] |
How to configure ruamel.yaml.dump output?
| 39,595,807 |
<p>With this data structure:</p>
<pre><code>d = {
(2,3,4): {
'a': [1,2],
'b': 'Hello World!',
'c': 'Voilà !'
}
}
</code></pre>
<p>I would like to get this YAML:</p>
<pre><code>%YAML 1.2
---
[2,3,4]:
a:
- 1
- 2
b: Hello World!
c: 'Voilà !'
</code></pre>
<p>Unfortunately I get this format:</p>
<pre><code>$ print ruamel.yaml.dump(d, default_flow_style=False, line_break=1, explicit_start=True, version=(1,2))
%YAML 1.2
---
? !!python/tuple
- 2
- 3
- 4
: a:
- 1
- 2
b: Hello World!
c: !!python/str 'Voilà !'
</code></pre>
<p>I cannot configure the output I want even with <code>safe_dump</code>. How can I do that without manual regex work on the output?</p>
<p>The only ugly solution I found is something like: </p>
<pre><code>def rep(x):
return repr([int(y) for y in re.findall('^\??\s*-\s*(\d+)', x.group(0), re.M)]) + ":\n"
print re.sub('\?(\s*-\s*(\w+))+\s*:', rep,
ruamel.yaml.dump(d, default_flow_style=False, line_break=1, explicit_start=True, version=(1,2)))
</code></pre>
| 1 |
2016-09-20T13:44:06Z
| 39,611,010 |
<p>You cannot get exactly what you want as output using <code>ruamel.yaml.dump()</code> without major rework of the internals.</p>
<ul>
<li>The output you like has indentation 2 for the values of the top-level mapping (key <code>a</code>, <code>b</code>, etc) and indentation 4 for the elements of the sequence that is the value for the <code>a</code> key (with the <code>-</code> pushed in 2 positions). That would at least require differentiating between indentation levels for mappings and sequences (if not for individual collections), and that is non-trivial.</li>
<li>Your sequence output is compacted from the <code>, </code> (comma, space) that a "normal" flow style emits to just a <code>,</code>. IIRC this cannot currently be influenced by any parameter, and since you have little contextual knowledge when emitting a collection, it is difficult to "not include the spaces when emitting a sequence that is a key". An additional option to <code>dump()</code> would require changes in several of the source files and classes.</li>
</ul>
<p>Less difficult issues, with indication of solution:</p>
<ul>
<li>Your tuple has to magically convert to a sequence to get rid of the tag <code>!!python/tuple</code>. As you don't want to affect all tuples, this is IMO best done by making a subclass of <code>tuple</code> and representing it as a sequence (optionally representing such a tuple as a list only if actually used as a key). You can use <code>comments.CommentedKeySeq</code> for that (assuming <code>ruamel.yaml>=0.12.14</code>; it has the proper representation support when using <code>ruamel.yaml.round_trip_dump()</code>)</li>
<li>Your key is, when tested before emitting, not a simple key, and as such it gets a '? ' (question mark, space) to indicate a complex mapping key. You would have to change the emitter so that the <code>SequenceStartEvent</code> starts a simple key (if it has flow style and not block style). An additional issue is that such a SequenceStartEvent will then be "tested" for a <code>style</code> attribute (which might indicate an explicit need for '?' on the key). This requires changing <code>emitter.py:Emitter.check_simple_key()</code> and <code>emitter.py:Emitter.expect_block_mapping_key()</code>.</li>
<li>Your scalar string value for <code>c</code> gets quotes, whereas your scalar string value for <code>b</code> doesn't. You can only get that kind of difference in output in ruamel.yaml by making them different types. E.g. by making it type <code>scalarstring.SingleQuotedScalarString()</code> (and using <code>round_trip_dump()</code>).</li>
</ul>
<p>If you do:</p>
<pre><code>import sys
import ruamel.yaml
from ruamel.yaml.comments import CommentedMap, CommentedKeySeq
assert ruamel.yaml.version_info >= (0, 12, 14)
data = CommentedMap()
data[CommentedKeySeq((2, 3, 4))] = cm = CommentedMap()
cm['a'] = [1, 2]
cm['b'] = 'Hello World!'
cm['c'] = ruamel.yaml.scalarstring.SingleQuotedScalarString('Voilà !')
ruamel.yaml.round_trip_dump(data, sys.stdout, explicit_start=True, version=(1, 2))
</code></pre>
<p>you will get:</p>
<pre><code>%YAML 1.2
---
[2, 3, 4]:
a:
- 1
- 2
b: Hello World!
c: 'Voilà !'
</code></pre>
<p>which, apart from the now consistent indentation level of 2, the extra spaces in the flow style sequence, and the required use of <code>round_trip_dump</code>, gets you as close to what you want as possible without major rework. </p>
<p>Whether the above code is ugly as well or not is of course a matter of taste.</p>
<p>The output will, non-incidently, round-trip correctly when loaded using <code>ruamel.yaml.round_trip_load(preserve_quotes=True)</code>.</p>
<hr>
<p>If control over the quotes is not needed, and neither is the order of your mapping keys important, then you can also patch the normal dumper:</p>
<pre><code>def my_key_repr(self, data):
if isinstance(data, tuple):
print('data', data)
return self.represent_sequence(u'tag:yaml.org,2002:seq', data,
flow_style=True)
return ruamel.yaml.representer.SafeRepresenter.represent_key(self, data)
ruamel.yaml.representer.Representer.represent_key = my_key_repr
</code></pre>
<p>Then you can use a normal sequence:</p>
<pre><code>data = {}
data[(2, 3, 4)] = cm = {}
cm['a'] = [1, 2]
cm['b'] = 'Hello World!'
cm['c'] = 'Voilà !'
ruamel.yaml.dump(data, sys.stdout, allow_unicode=True, explicit_start=True, version=(1, 2))
</code></pre>
<p>will give you:</p>
<pre><code>%YAML 1.2
---
[2, 3, 4]:
a: [1, 2]
b: Hello World!
c: Voilà !
</code></pre>
<p>please note that you need to explicitly allow unicode in your output (default with <code>round_trip_dump()</code>) using <code>allow_unicode=True</code>. </p>
<hr>
<p>¹ <sub>Disclaimer: I am the author of <a href="https://pypi.python.org/pypi/ruamel.yaml" rel="nofollow">ruamel.yaml</a>.</sub></p>
| 1 |
2016-09-21T08:15:00Z
|
[
"python",
"yaml",
"ruamel.yaml"
] |
Python CSV: Can I do this with one 'with open' instead of two?
| 39,595,820 |
<p>I am a noobie.</p>
<p>I have written a couple of scripts to modify CSV files I work with.</p>
<p>The scripts:</p>
<p>1.) change the headers of a CSV file then save that to a new CSV file,.</p>
<p>2.) Load that CSV File, and change the order of select columns using DictWriter.</p>
<pre><code>from tkinter import *
from tkinter import filedialog
import os
import csv
root = Tk()
fileName = filedialog.askopenfilename(filetypes=(("Nimble CSV files", "*.csv"),("All files", "*.*")))
outputFileName = os.path.splitext(fileName)[0] + "_deleteme.csv" #my temp file
forUpload = os.path.splitext(fileName)[0] + "_forupload.csv"
#Open the file - change the header then save the file
with open(fileName, 'r', newline='') as infile, open(outputFileName, 'w', newline='') as outfile:
reader = csv.reader(infile)
writer = csv.writer(outfile, delimiter=',', lineterminator='\n')
row1 = next(reader)
#new header names
row1[0] = 'firstname'
row1[1] = 'lastname'
row1[4] = 'phone'
row1[5] = 'email'
row1[11] = 'address'
row1[21] = 'website'
#write the temporary CSV file
writer.writerow(row1)
for row in reader:
writer.writerow(row)
#Open the temporary CSV file - rearrange some columns
with open(outputFileName, 'r', newline='') as dInFile, open(forUpload, 'w', newline='') as dOutFile:
fieldnames = ['email', 'title', 'firstname', 'lastname', 'company', 'phone', 'website', 'address', 'twitter']
dWriter = csv.DictWriter(dOutFile, restval='', extrasaction='ignore', fieldnames=fieldnames, lineterminator='\n')
dWriter.writeheader()
for row in csv.DictReader(dInFile):
dWriter.writerow(row)
</code></pre>
<p>My question is: Is there a more efficient way to do this? </p>
<p>It seems like I shouldn't have to make a temporary CSV file ("_deleteme.csv") I then delete.</p>
<p>I assume making the temporary CSV file is a rookie move -- is there a way to do this all with one 'with open' statement?</p>
<p>Thanks for any help, it is greatly appreciated. </p>
<p>--Luke</p>
| 0 |
2016-09-20T13:44:26Z
| 39,596,366 |
<p><code>csvfile</code> can be any object with a <code>write()</code> method. You could craft a custom object, or use <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow">StringIO</a>. You'd have to verify efficiency yourself.</p>
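<p>For example, a minimal sketch using an in-memory buffer instead of the temporary file (assuming Python 3's <code>io.StringIO</code>; <code>fileName</code>, <code>forUpload</code>, and the header/field fix-ups are taken from the question):</p>
<pre><code>import csv
import io

with open(fileName, 'r', newline='') as infile, open(forUpload, 'w', newline='') as outfile:
    reader = csv.reader(infile)
    buffer = io.StringIO()  # stands in for the _deleteme.csv file
    writer = csv.writer(buffer, delimiter=',', lineterminator='\n')
    row1 = next(reader)
    row1[0], row1[1], row1[4], row1[5] = 'firstname', 'lastname', 'phone', 'email'
    row1[11], row1[21] = 'address', 'website'
    writer.writerow(row1)
    writer.writerows(reader)
    buffer.seek(0)  # rewind so DictReader reads from the top
    fieldnames = ['email', 'title', 'firstname', 'lastname', 'company',
                  'phone', 'website', 'address', 'twitter']
    dWriter = csv.DictWriter(outfile, restval='', extrasaction='ignore',
                             fieldnames=fieldnames, lineterminator='\n')
    dWriter.writeheader()
    dWriter.writerows(csv.DictReader(buffer))
</code></pre>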
| 0 |
2016-09-20T14:08:13Z
|
[
"python",
"csv",
"dictionary",
"header",
"temporary-files"
] |
Not getting required output using findall in python
| 39,595,840 |
<p>Earlier, I could not put the exact question. My apologies.</p>
<p>Below is what I am looking for :</p>
<p>I am reading a string from a file as below, and there can be multiple such strings in the file.</p>
<pre><code>" VEGETABLE 1
POTATOE_PRODUCE 1.1 1SIMLA(INDIA)
BANANA 1.2 A_BRAZIL(OR INDIA)
CARROT_PRODUCE 1.3 A_BRAZIL/AFRICA"
</code></pre>
<p>I want to capture the entire string as output using findall only.</p>
<p><strong>My script:</strong></p>
<pre><code>import re
import string
f=open('log.txt')
contents = f.read()
output=re.findall('(VEGETABLE.*)(\s+\w+\s+.*)+',contents)
print output
</code></pre>
<p>The above script gives output as:</p>
<p>[('VEGETABLE 1', '\n CARROT_PRODUCE 1.3 A_BRAZIL/AFRICA')]</p>
<p>But the contents in between are missing.</p>
| 1 |
2016-09-20T13:45:23Z
| 39,597,244 |
<p>The solution is in the last snippet of this answer.</p>
<pre><code>>>> import re
>>> str2='d1 talk walk joke'
>>> re.findall('(\d\s+)(\w+\s)+',str2)
[('1 ', 'walk ')]
</code></pre>
<p>The output is a list with only one occurrence of the given pattern. The tuple in the list contains the two strings that matched the two groupings given within () in the pattern. </p>
<h1>Experiment 1</h1>
<p>Removed the last '+', which makes the pattern select the first match instead of the greedy last match.</p>
<pre><code>>>> re.findall('(\d\s+)(\w+\s)',str2)
[('1 ', 'talk ')]
</code></pre>
<h1>Experiment 2</h1>
<p>Added one more group to find the third word followed by one or more spaces. But if the string has more than 3 words followed by spaces, this will still find only three words.</p>
<pre><code>>>> re.findall('(\d\s+)(\w+\s)(\w+\s)',str2)
[('1 ', 'talk ', 'walk ')]
</code></pre>
<h1>Experiment 3</h1>
<p>Using '|' to match the pattern multiple times. Note the tuple has disappeared. Also note that the first match does not contain only the number. This is because \w is a superset of \d.</p>
<pre><code>>>> re.findall('\d\s+|\w+\s+',str2)
['d1 ', 'talk ', 'walk ']
</code></pre>
<h1>Final Experiment</h1>
<pre><code>>>> re.findall('\d\s+|[a-z]+\s+',str2)
['1 ', 'talk ', 'walk ']
</code></pre>
<p>Hope this helps.</p>
| 0 |
2016-09-20T14:49:17Z
|
[
"python",
"regex",
"findall"
] |
Unpack *args to string using ONLY format function
| 39,595,924 |
<p>As in the title, how do I unpack <code>*args</code> into a string using <strong>ONLY</strong> the <code>format</code> function (without the <code>join</code> function)? So, having:</p>
<pre><code>args = ['a', 'b', 'c']
</code></pre>
<p>trying something like that (pseudocode):</p>
<pre><code>'-{}-'.format(*args)
</code></pre>
<p>I will get:</p>
<pre><code>'-a-b-c-'
</code></pre>
| -2 |
2016-09-20T13:49:12Z
| 39,596,062 |
<p><strong>EDIT:</strong> There you go:</p>
<pre><code>("-{}" * len(args) + "-").format(*args)
</code></pre>
<p><em>Before I realized that it shouldn't use <code>join()</code> it was:</em></p>
<pre><code>"-{}-".format("-".join(["{}"] * len(args)).format(*args))
</code></pre>
| 3 |
2016-09-20T13:55:15Z
|
[
"python"
] |
Unpack *args to string using ONLY format function
| 39,595,924 |
<p>As in the title, how do I unpack <code>*args</code> into a string using <strong>ONLY</strong> the <code>format</code> function (without the <code>join</code> function)? So, having:</p>
<pre><code>args = ['a', 'b', 'c']
</code></pre>
<p>trying something like that (pseudocode):</p>
<pre><code>'-{}-'.format(*args)
</code></pre>
<p>I will get:</p>
<pre><code>'-a-b-c-'
</code></pre>
| -2 |
2016-09-20T13:49:12Z
| 39,596,176 |
<p>With a slight modification of the <em>format string</em> into <code>'{}-'</code>:</p>
<pre><code>>>> args = ['a', 'b', 'c']
>>> l = len(args)
>>> ('{}-'*(l+1)).format('', *args)
'-a-b-c-'
</code></pre>
| 2 |
2016-09-20T14:00:11Z
|
[
"python"
] |
Unpack *args to string using ONLY format function
| 39,595,924 |
<p>As in the title, how do I unpack <code>*args</code> into a string using <strong>ONLY</strong> the <code>format</code> function (without the <code>join</code> function)? So, having:</p>
<pre><code>args = ['a', 'b', 'c']
</code></pre>
<p>trying something like that (pseudocode):</p>
<pre><code>'-{}-'.format(*args)
</code></pre>
<p>I will get:</p>
<pre><code>'-a-b-c-'
</code></pre>
| -2 |
2016-09-20T13:49:12Z
| 39,596,178 |
<p>Without <code>.join</code>:</p>
<pre><code>from functools import reduce
l = [1,2,3]
s = reduce('{}{}-'.format, l, '-')
print(s)
</code></pre>
| 1 |
2016-09-20T14:00:13Z
|
[
"python"
] |
Parsing DeepDiff result
| 39,595,934 |
<p>I am working with <a href="http://deepdiff.readthedocs.io/en/latest/" rel="nofollow">DeepDiff</a>. So I have results like:</p>
<pre><code>local = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 90, 'name': 'bar'}}, {3: {'age': 60, 'name': 'foobar'}}]
online = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 40, 'name': 'bar'}}]
ddiff = DeepDiff(local, online)
added, updated = ddiff['iterable_item_added'], ddiff['values_changed']
added = {'root[2]': {3: {'age': 60, 'name': 'foobar'}}}
updated = {"root[1][2]['age']": {'new_value': 90, 'old_value': 40}}
</code></pre>
<p>Now, I want to take:</p>
<pre><code>list_indexes_added = foo(added)
list_indexes_updated = foo(updated)
</code></pre>
<p>and to obtain:</p>
<pre><code>list_indexes_added = [2]
list_index_updated = [(1,2,'age')]
</code></pre>
<p>in this way, I can manipulate the list <code>local</code> and <code>online</code> and in the future update the <code>online</code> table.</p>
<p>I am thinking of regexes, but maybe there is another option.</p>
| 2 |
2016-09-20T13:49:54Z
| 39,602,787 |
<p>So, I'd go with something like this:</p>
<pre><code>import re
def foo(diff):
modded = []
for key in diff.keys():
m = re.search('\[(.+)\]', key)
modded.append(tuple(m.group(1).split('][')))
return modded
</code></pre>
<p>It'll read each key, extract only the indices (whether numeric or string), then slice up the string. Since your desired output indicates a tuple, it'll spit the sequence of indices back as one, then return the whole list of index sets (since <code>diff</code> might have more than one).</p>
<p>This can be golfed down into a one-line list comprehension:</p>
<pre><code>import re
def foo(diff):
return [tuple(re.search('\[(.+)\]', key).group(1).split('][')) for key in diff.keys()]
</code></pre>
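<p>For the <code>updated</code> dict above, this yields the indices as strings, with the quotes still wrapped around <code>'age'</code>:</p>
<pre><code>>>> foo(updated)
[('1', '2', "'age'")]
</code></pre>
<p>If you need real ints and bare strings, you would still have to post-process each element, e.g. with <code>ast.literal_eval</code> as the other answer does.</p>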
| 1 |
2016-09-20T19:58:07Z
|
[
"python",
"regex",
"algorithm",
"diff"
] |
Parsing DeepDiff result
| 39,595,934 |
<p>I am working with <a href="http://deepdiff.readthedocs.io/en/latest/" rel="nofollow">DeepDiff</a>. So I have results like:</p>
<pre><code>local = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 90, 'name': 'bar'}}, {3: {'age': 60, 'name': 'foobar'}}]
online = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 40, 'name': 'bar'}}]
ddiff = DeepDiff(local, online)
added, updated = ddiff['iterable_item_added'], ddiff['values_changed']
added = {'root[2]': {3: {'age': 60, 'name': 'foobar'}}}
updated = {"root[1][2]['age']": {'new_value': 90, 'old_value': 40}}
</code></pre>
<p>Now, I want to take:</p>
<pre><code>list_indexes_added = foo(added)
list_indexes_updated = foo(updated)
</code></pre>
<p>and to obtain:</p>
<pre><code>list_indexes_added = [2]
list_index_updated = [(1,2,'age')]
</code></pre>
<p>in this way, I can manipulate the list <code>local</code> and <code>online</code> and in the future update the <code>online</code> table.</p>
<p>I am thinking of regexes, but maybe there is another option.</p>
| 2 |
2016-09-20T13:49:54Z
| 39,603,030 |
<ul>
<li><p>One solution can be regex and custom parsing of the matches.</p></li>
<li><p>Another can be using <code>literal_eval</code> after regex parsing on these strings, if the output format of <code>deepdiff</code> is consistent</p>
<pre><code>from ast import literal_eval
import re
def str_diff_parse(str_diff):
return [tuple(literal_eval(y) for y in re.findall(r"\[('?\w+'?)\]", x)) for x in str_diff]
added = {'root[2]': {3: {'age': 60, 'name': 'foobar'}}}
updated = {"root[1][2]['age']": {'new_value': 90, 'old_value': 40}}
list_indexes_added = str_diff_parse(added)
list_indexes_updated = str_diff_parse(updated)
print(list_indexes_added)
print(list_indexes_updated)
# prints
#[(2,)]
#[(1, 2, 'age')]
</code></pre></li>
</ul>
<p><strong>demo</strong> : <a href="http://ideone.com/3MhTky" rel="nofollow">http://ideone.com/3MhTky</a></p>
<ul>
<li>Would also recommend the <a href="https://github.com/inveniosoftware/dictdiffer" rel="nofollow">dictdiffer</a> module; it returns the diff as a consumable Python diff object which can be patched onto the original dictionary to get the updated one, or vice versa.</li>
</ul>
| 1 |
2016-09-20T20:14:51Z
|
[
"python",
"regex",
"algorithm",
"diff"
] |
Parsing DeepDiff result
| 39,595,934 |
<p>I am working with <a href="http://deepdiff.readthedocs.io/en/latest/" rel="nofollow">DeepDiff</a>. So I have results like:</p>
<pre><code>local = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 90, 'name': 'bar'}}, {3: {'age': 60, 'name': 'foobar'}}]
online = [{1: {'age': 50, 'name': 'foo'}}, {2: {'age': 40, 'name': 'bar'}}]
ddiff = DeepDiff(local, online)
added, updated = ddiff['iterable_item_added'], ddiff['values_changed']
added = {'root[2]': {3: {'age': 60, 'name': 'foobar'}}}
updated = {"root[1][2]['age']": {'new_value': 90, 'old_value': 40}}
</code></pre>
<p>Now, I want to take:</p>
<pre><code>list_indexes_added = foo(added)
list_indexes_updated = foo(updated)
</code></pre>
<p>and to obtain:</p>
<pre><code>list_indexes_added = [2]
list_index_updated = [(1,2,'age')]
</code></pre>
<p>in this way, I can manipulate the list <code>local</code> and <code>online</code> and in the future update the <code>online</code> table.</p>
<p>I am thinking of regexes, but maybe there is another option.</p>
| 2 |
2016-09-20T13:49:54Z
| 39,603,170 |
<p>This is what I did:</p>
<pre><code>def getFromSquareBrackets(s):
return re.findall(r"\['?([A-Za-z0-9_]+)'?\]", s)
def auxparse(e):
try:
e = int(e)
except:
pass
return e
def castInts(l):
return list((map(auxparse, l)))
def parseRoots(dics):
"""
Returns pos id for list.
    Because we have formatted [{id:{dic}}, {id:{dic}}]
"""
values = []
for d in dics:
values.append(castInts(getFromSquareBrackets(d)))
return values
</code></pre>
<p>So:</p>
<pre><code>parseRoots({"root[1][2]['age']": {'new_value': 90, 'old_value': 40}})
[[1, 2, 'age']]
</code></pre>
<p>Maybe someone can improve it.</p>
| 1 |
2016-09-20T20:26:56Z
|
[
"python",
"regex",
"algorithm",
"diff"
] |
LDA: different sample sizes within the classes
| 39,596,151 |
<p>I'm trying to fit an LDA model on a data set that has different sample sizes for the classes.</p>
<h1>TL;DR</h1>
<p>lda.predict() doesn't work correctly if I trained the classifier with classes that don't have the same number of samples.</p>
<h1>Long explanation</h1>
<p>I have 7 classes with 3 samples each, and one class with only 2 samples:</p>
<pre><code>tortle -14,6379 -17,3731
tortle -14,9339 -17,4379
bull -11,7777 -13,1383
bull -11,6207 -13,4596
bull -11,4616 -12,9811
hawk -9,01229 -12,777
hawk -8,88177 -12,4383
hawk -8,93559 -13,0143
pikachu -6,50024 -7,92564
pikachu -6,00418 -8,59305
pikachu -6,0769 -6,00419
pizza 2,02872 3,07972
pizza 2,084 2,73762
pizza 2,20269 2,90577
sangoku -3,14428 -3,14415
sangoku -4,02675 -3,55358
sangoku -3,26119 -2,95265
charizard -0,159746 0,434694
charizard 0,0191964 0,514596
charizard 0,0422884 0,512207
tomatoe -1,15295 -2,09673
tomatoe -0,562748 -1,80215
tomatoe -0,716941 -1,83503
</code></pre>
<p>Here is a working example:</p>
<pre><code>#!/usr/bin/python
# coding: utf-8
from matplotlib import pyplot as plt
import numpy as np
from sklearn import preprocessing
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn import cross_validation
analytes = ['tortle', 'tortle', 'bull', 'bull', 'bull', 'hawk', 'hawk', 'hawk', 'pikachu', 'pikachu', 'pikachu', 'pizza', 'pizza', 'pizza', 'sangoku', 'sangoku', 'sangoku', 'charizard', 'charizard', 'charizard', 'tomatoe', 'tomatoe', 'tomatoe']
# Transform the names of the samples into integers
lb = preprocessing.LabelEncoder().fit(analytes)
analytes = lb.transform(analytes)
# Create an array w/ the measurements
dimensions = [[-14.6379, -14.9339, -11.7777, -11.6207, -11.4616, -9.01229, -8.88177, -8.93559, -6.50024, -6.00418, -6.0769, 2.02872, 2.084, 2.20269, -3.14428, -4.02675, -3.26119, -0.159746, 0.0191964, 0.0422884, -1.15295, -0.562748, -0.716941], [-17.3731, -17.4379, -13.1383, -13.4596, -12.9811, -12.777, -12.4383, -13.0143, -7.92564, -8.59305, -6.00419, 3.07972, 2.73762, 2.90577, -3.14415, -3.55358, -2.95265, 0.434694, 0.514596, 0.512207, -2.09673, -1.80215, -1.83503]]
# Transform the array of the results
all_samples = np.array(dimensions).T
# Normalize the data
preprocessing.scale(all_samples, axis=0, with_mean=True, with_std=True,
copy=False)
# Train the LDA classifier. Use the eigen solver
lda = LDA(solver='eigen', n_components=2)
transformed = lda.fit_transform(all_samples, analytes)
# Fit the LDA classifier on the new subspace
lda.fit(transformed, analytes)
fig = plt.figure()
plt.plot(transformed[:, 0], transformed[:, 1], 'o')
# Get the limits of the graph. Used for adapted color areas
x_min, x_max = fig.axes[0].get_xlim()
y_min, y_max = fig.axes[0].get_ylim()
# Step size of the mesh. Decrease to increase the quality of the VQ.
# point in the mesh [x_min, m_max]x[y_min, y_max].
# h = 0.01
h = 0.001
# Create a grid for incoming plottings
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the class for each unit of the grid
Z = lda.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the areas
plt.imshow(Z, extent=(x_min, x_max, y_min, y_max), aspect='auto', origin='lower', alpha=0.6)
plt.show()
</code></pre>
<p>And this is the output:</p>
<p><a href="http://i.stack.imgur.com/hccMa.png" rel="nofollow"><img src="http://i.stack.imgur.com/hccMa.png" alt="enter image description here"></a></p>
<p>As you can see, the two points on the right are assigned to the purple class, while they shouldn't be. They should belong to the yellow class, which becomes visible if I increase the limits of the graph:</p>
<p><a href="http://i.stack.imgur.com/s5ehg.png" rel="nofollow"><img src="http://i.stack.imgur.com/s5ehg.png" alt="enter image description here"></a></p>
<p>Basically, my problem is that lda.predict() doesn't work correctly if I trained the classifier with classes that don't have the same number of samples.</p>
<p>Is there a workaround?</p>
| 0 |
2016-09-20T13:59:16Z
| 39,611,696 |
<p>It took me a while to figure this one out. The preprocessing step is responsible for the misclassification. Changing </p>
<pre><code>preprocessing.scale(all_samples, axis=0, with_mean=True, with_std=True,
copy=False)
</code></pre>
<p>to </p>
<pre><code>preprocessing.scale(all_samples, axis=0, with_mean=True, with_std=True)
</code></pre>
<p>solved my issue. However, note that without <code>copy=False</code> the call no longer modifies <code>all_samples</code> in place, so the data are not actually scaled in the same way now.</p>
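<p>A minimal sketch of what I assume was the intent: <code>preprocessing.scale</code> returns the scaled array, so you can keep the standardization without the in-place surprise by assigning the result back:</p>
<pre><code>all_samples = preprocessing.scale(all_samples, axis=0, with_mean=True, with_std=True)
</code></pre>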
| 0 |
2016-09-21T08:48:44Z
|
[
"python",
"scikit-learn"
] |
Numpy one hot encoder is only displaying huge arrays with zeros
| 39,596,305 |
<p>I have a <code>(400, 1)</code> numpy array that has values such as <code>4300, 4450, 4650...</code> where these values can be distributed around 20 classes, i.e. <code>4300 class 1, 4450 class 2...</code>, and I tried the code below to transform that array into a one-hot encoded list. The <code>y_train_onehot</code> is showing a list of 400 arrays where each array has a size > 4000, i.e. each array has around 4000 zeros. How can I fix that in order to have one-hot encoded vectors for each value, i.e. <code>4300</code> could be <code>00001</code>?</p>
<pre><code>def convertOneHot(data):
y=np.array([int(i[0]) for i in data])
y_onehot=[0]*len(y)
for i,j in enumerate(y):
y_onehot[i]=[0]*(y.max() + 1)
y_onehot[i][j]=1
return (y,y_onehot)
y_train,y_train_onehot = convertOneHot(data)
</code></pre>
<p><a href="http://i.stack.imgur.com/LvPqs.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/LvPqs.jpg" alt="sample from y_train_onehot"></a></p>
| 1 |
2016-09-20T14:05:28Z
| 39,596,939 |
<p>Here is an unoptimized example:</p>
<pre><code>def convert_to_one_hot(y):
    levels=np.unique(y)              # the distinct class values, sorted
    n_classes=levels.shape[0]
    one_hot=np.zeros((y.shape[0],n_classes),"uint8")
    for i in xrange(y.shape[0]):
        for index,level in enumerate(levels):
            if y[i]==level:
                one_hot[i,index]=1   # set the column matching this sample's class
    return one_hot
</code></pre>
<p><strong>EDIT 1</strong> One less readable but more elegant version:</p>
<pre><code>def convert_to_one_hot2(y):
    levels=np.unique(y)
    # relies on broadcasting: with y of shape (n, 1) and levels of shape
    # (n_classes,), (y==levels) yields an (n, n_classes) boolean matrix
    one_hot=(y==levels).astype("uint8")
    return one_hot
</code></pre>
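<p>For example, on a column vector like yours (shape <code>(n, 1)</code>; the values are just an assumption):</p>
<pre><code>>>> y = np.array([[4300], [4450], [4300]])
>>> convert_to_one_hot2(y)
array([[1, 0],
       [0, 1],
       [1, 0]], dtype=uint8)
</code></pre>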
| 2 |
2016-09-20T14:35:02Z
|
[
"python",
"numpy"
] |
Numpy one hot encoder is only displaying huge arrays with zeros
| 39,596,305 |
<p>I have a <code>(400, 1)</code> numpy array that has values such as <code>4300, 4450, 4650...</code> where these values can be distributed around 20 classes, i.e. <code>4300 class 1, 4450 class 2...</code>, and I tried the code below to transform that array into a one-hot encoded list. The <code>y_train_onehot</code> is showing a list of 400 arrays where each array has a size > 4000, i.e. each array has around 4000 zeros. How can I fix that in order to have one-hot encoded vectors for each value, i.e. <code>4300</code> could be <code>00001</code>?</p>
<pre><code>def convertOneHot(data):
y=np.array([int(i[0]) for i in data])
y_onehot=[0]*len(y)
for i,j in enumerate(y):
y_onehot[i]=[0]*(y.max() + 1)
y_onehot[i][j]=1
return (y,y_onehot)
y_train,y_train_onehot = convertOneHot(data)
</code></pre>
<p><a href="http://i.stack.imgur.com/LvPqs.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/LvPqs.jpg" alt="sample from y_train_onehot"></a></p>
| 1 |
2016-09-20T14:05:28Z
| 39,601,786 |
<p>I used Pandas Dummies and it's very simple.</p>
<pre><code>>>> import pandas as pd
>>> s = pd.Series(y_train)
</code></pre>
<p>And then</p>
<pre><code>y_train_onehot = pd.get_dummies(s)
</code></pre>
<p><a href="http://i.stack.imgur.com/Vs5Cw.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vs5Cw.png" alt="hot_encode"></a></p>
| 1 |
2016-09-20T18:55:37Z
|
[
"python",
"numpy"
] |
Make basic cipher function more readable
| 39,596,422 |
<p>I have this basic cipher function:</p>
<pre><code>def encrypt_decrypt(data, in_or_out):
pass_lst = list(data)
return_list = []
if in_or_out == "in":
for i in pass_lst:
num = ord(i) + 10
return_list.append(chr(num))
else:
for i in pass_lst:
num = ord(i) - 10
return_list.append(chr(num))
return ''.join(return_list)
</code></pre>
<p>I want to make this cipher a little more readable and a little <code>DRY</code>er. Is there a way I can shorten this function successfully?</p>
| 1 |
2016-09-20T14:11:00Z
| 39,596,641 |
<p>You can make it DRYer by computing the ±10 from the <code>in_or_out</code> parameter. E.g.,</p>
<pre><code>def encrypt_decrypt(data, in_or_out):
delta = {'in': 10, 'out': -10}[in_or_out]
return_list = []
for i in list(data):
num = ord(i) + delta
return_list.append(chr(num))
return ''.join(return_list)
</code></pre>
<p>And that can be made more compact by using a list comprehension:</p>
<pre><code>def encrypt_decrypt(data, in_or_out):
delta = {'in': 10, 'out': -10}[in_or_out]
return ''.join([chr(ord(i) + delta) for i in data])
</code></pre>
<p>Notice that I'm directly iterating over <code>data</code>. That will work if <code>data</code> is a string, list or tuple.</p>
<p>However, you should be aware that your code isn't safe: it doesn't handle char codes where <code>ord(i) + delta</code> is outside the 0-255 range.</p>
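<p>A minimal sketch of one way to guard against that, assuming you want the cipher to wrap around within the 0-255 range:</p>
<pre><code>def encrypt_decrypt(data, in_or_out):
    delta = {'in': 10, 'out': -10}[in_or_out]
    # the modulo keeps every shifted code point inside chr()'s valid range
    return ''.join([chr((ord(i) + delta) % 256) for i in data])
</code></pre>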
| 1 |
2016-09-20T14:21:20Z
|
[
"python",
"python-2.7"
] |
Make basic cipher function more readable
| 39,596,422 |
<p>I have this basic cipher function:</p>
<pre><code>def encrypt_decrypt(data, in_or_out):
pass_lst = list(data)
return_list = []
if in_or_out == "in":
for i in pass_lst:
num = ord(i) + 10
return_list.append(chr(num))
else:
for i in pass_lst:
num = ord(i) - 10
return_list.append(chr(num))
return ''.join(return_list)
</code></pre>
<p>I want to make this cipher a little more readable and a little <code>DRY</code>er. Is there a way I can shorten this function successfully?</p>
| 1 |
2016-09-20T14:11:00Z
| 39,597,905 |
<p>As a general rule, functions should do <em>one</em> thing; combining two functions into one, then using an argument to trigger which "embedded" function actually runs, is a bit of an antipattern. You can still abstract out the common code (here, following PM 2Ring's definition):</p>
<pre><code>def encrypt(data):
return _modify(data, 10)
def decrypt(data):
return _modify(data, -10)
def _modify(data, delta):
return ''.join([chr(ord(i) + delta) for i in data])
</code></pre>
<p>In general, your pair of functions is not going to be this symmetrical, though, and it will not be so easy to implement both in terms of one clear function. In that case, you <em>definitely</em> do not want to be stuffing both implementations into one <code>encrypt_or_decrypt</code> function.</p>
<p>(Even if you <em>do</em> combine them, don't use two separate sets of terms. Pick one of "encrypt"/"decrypt" or "in"/"out" and stick with it for both the function name and the value to pass to the argument.) </p>
<p>If you really need to choose between encrypting and decrypting based on the value of a parameter, store your two functions in a dictionary:</p>
<pre><code>d = {"encrypt": encrypt, "decrypt": decrypt}
d[in_or_out](value)
</code></pre>
| 1 |
2016-09-20T15:19:37Z
|
[
"python",
"python-2.7"
] |
Get empty cell coodinates with openpyxl
| 39,596,433 |
<p>I am working on a script to extract some very messy data from a huge Excel file and I ran into a bump.
Is there any way to get the coordinates of an empty cell with openpyxl?</p>
<p>This will return an error saying EmptyCell has no row/column attribute</p>
<pre><code>for row in sheet.iter_rows():
for cell in row:
if cell.column is ... :
print(cell.value)
</code></pre>
<p>I cannot skip the empty cells, as there are empty cells in some columns that have to be taken into account and extracted as such in the new file.
Also, I cannot fill the empty cells because the original file is huge and it's opened read-only because of this.</p>
<p>Is there any way to do this with openpyxl or should I try another library?</p>
<p>Thanks!</p>
| 0 |
2016-09-20T14:11:35Z
| 39,610,715 |
<p>EmptyCell objects are special cells in the read-only context: they are fillers created where there are no cells in the original XML, so that rows from a range are of equal length. As such they do not really exist and hence do not have coordinates. If you need their coordinates then you can simply use <code>enumerate</code> to count for you while looping.</p>
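<p>A minimal sketch of that approach (assuming you want openpyxl's usual 1-based row/column numbers):</p>
<pre><code>for row_idx, row in enumerate(sheet.iter_rows(), start=1):
    for col_idx, cell in enumerate(row, start=1):
        if cell.value is None:
            # works for EmptyCell fillers too, since we never
            # touch cell.row or cell.column
            print(row_idx, col_idx)
</code></pre>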
| 0 |
2016-09-21T08:00:17Z
|
[
"python",
"excel",
"openpyxl"
] |
Using a folder for personal modules in Python
| 39,596,444 |
<p>I have created a folder named <code>C:\Python27\Lib\site-packages\perso</code> and inside I have put a file <code>mymodule.py</code>. The goal is to have this module accessible from any future Python script.</p>
<p>Let's do a <code>D:\My Documents\test.py</code> file:</p>
<pre><code>import mymodule #fails
import perso.mymodule #fails
</code></pre>
<p><strong>Why does it fail? How do I import a module from <code>C:\Python27\Lib\site-packages\perso</code>? What are the best practices for using user modules in all Python scripts on the same computer?</strong></p>
| 2 |
2016-09-20T14:12:12Z
| 39,596,540 |
<ol>
<li>check your PYTHONPATH</li>
<li>create <code>__init__.py</code> to use <code>perso</code> as a package (see the sketch below)</li>
</ol>
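<p>A minimal sketch of step 2, assuming the folder layout from the question (the file can be empty; it just marks the directory as a package):</p>
<pre><code># create C:\Python27\Lib\site-packages\perso\__init__.py (empty is fine)
# then, from any script:
import perso.mymodule
# or:
from perso import mymodule
</code></pre>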
| 3 |
2016-09-20T14:16:31Z
|
[
"python",
"module",
"python-module"
] |
Using a folder for personal modules in Python
| 39,596,444 |
<p>I have created a folder named <code>C:\Python27\Lib\site-packages\perso</code> and inside I have put a file <code>mymodule.py</code>. The goal is to have this module accessible from any future Python script.</p>
<p>Let's do a <code>D:\My Documents\test.py</code> file:</p>
<pre><code>import mymodule #fails
import perso.mymodule #fails
</code></pre>
<p><strong>Why does it fail? How do I import a module from <code>C:\Python27\Lib\site-packages\perso</code>? What are the best practices for using user modules in all Python scripts on the same computer?</strong></p>
| 2 |
2016-09-20T14:12:12Z
| 39,596,909 |
<p><strong>Python Modules:</strong></p>
<p>In order to create a module you can do the following:</p>
<ol>
<li>under <code><Python_DIR>\Lib\site-packages</code>:<br>
put your directory <code><module></code>. This directory contains your classes.</li>
<li>In <code><Python_DIR>\Lib\site-packages\<module></code> put an init file <code>__init__.py</code>,<br>
This file defines what's in the directory, and may apply some logic if needed.<br>
For example:
<code>__all__ = ["class_1", "class_2",]</code>.</li>
</ol>
<p>Then, to import:<br></p>
<pre><code>from <your_module> import <your_class>
</code></pre>
<p>For more information, read <a href="https://docs.python.org/2/tutorial/modules.html" rel="nofollow">this</a>.</p>
| 1 |
2016-09-20T14:33:52Z
|
[
"python",
"module",
"python-module"
] |
Error "No such file or directory" when i import a ".so" file and that file is available in python
| 39,596,506 |
<p>I have some Python code and some NAO files (naoqi.py, _inaoqi.so, ...) in a folder on a Raspberry Pi 3 Model B v1.2 with armv7l. My code has some import lines:</p>
<pre><code>import sys
from naoqi import ALProxy
import time
import almath
import os
import socket
</code></pre>
<p>When I run this code I see a "cannot open shared object file: No such file or directory" error from the second line:</p>
<pre><code>from naoqi import ALProxy
</code></pre>
<p>and at the line below in naoqi.py (the <code>import _inaoqi</code> line):</p>
<pre><code>try:
import _inaoqi
except ImportError:
# quick hack to keep inaoqi.py happy
if sys.platform.startswith("win"):
print "Could not find _inaoqi, trying with _inaoqi_d"
import _inaoqi_d as _inaoqi
else:
raise
</code></pre>
<p>This file is available, but I still see the "cannot open shared object file: No such file or directory" error.</p>
<p>Why does such an error occur?</p>
<p>What can I do?</p>
| 0 |
2016-09-20T14:14:48Z
| 39,647,922 |
<p>Just dumping the inaoqi files into your program directory isn't sufficient, you have to package them properly as a "python module." Is there an installer available for an inaoqi package, or can it be installed using pip?</p>
<p>Also, if you're running Python on Windows, the <code>.so</code> file isn't going to do you any good. The C or C++ code for the module on Windows will be in a <code>.dll</code> file, so again, check to see if an installer for the module is available for your platform.</p>
| 0 |
2016-09-22T20:01:14Z
|
[
"python",
"importerror",
"raspberry-pi3",
"nao-robot"
] |
This doesn't produce a window and I don't know why
| 39,596,524 |
<p>I am using VPython in my attempt to model a ball bouncing off a wall. </p>
<p>To make my code more elegant, I have decided to use class inheritance to set the dimensions and properties of my objects (at the moment, it's the ball and a wall). After I ran the code, the shell didn't produce any errors, however, it did not produce a window either. </p>
<p>I am fairly new to programming and I am using VPython 2.7 in Wine on Linux Mint 18. I have a feeling that I have missed something obvious but I don't know what it is. </p>
<p>My code so far is as follows:</p>
<pre><code>from visual import *
class Obj(object):
def __init__(self, pos, color): #sets the position and color
self.pos = pos
self.color = color
class Sphere(Obj):
def __init__(self, pos, color, radius):
super(Sphere, self).__init__(pos, color)
self.radius = radius
class Box(Obj):
    def __init__(self, pos, color, size, opacity):
super(Box, self).__init__(pos, color)
self.size = size
self.opacity = opacity
ball1 = Sphere((-5,0,0,), color.orange, 0.25)
wallR = Box((6,0,0), color.cyan, (0.,12,12), 0.3)
</code></pre>
| 1 |
2016-09-20T14:15:28Z
| 39,604,164 |
<p>I take it you have never dealt with graphics before, as there is nothing about it in the code you posted. Then it is time to begin!
By default, Python works in console mode. To show an actual window, with icons and stuff going across it, you'll need to write it explicitly in your code, using modules like <strong>Tkinter</strong> or <strong>pygame</strong>.</p>
<p>I suggest you read the tutorial I found here: <a href="http://code.activestate.com/recipes/502241-bouncing-ball-simulation/" rel="nofollow">http://code.activestate.com/recipes/502241-bouncing-ball-simulation/</a> as it does what you want (with <strong>Tkinter</strong>), including the window part. Take a look at it and let's see if you still need help!</p>
| 0 |
2016-09-20T21:40:14Z
|
[
"python",
"python-2.7",
"vpython"
] |
Custom End a Modal operator Blender Python
| 39,596,525 |
<p>I have the following code:</p>
<pre><code>class audio_visualizer_create(bpy.types.Operator):
bl_idname = "myop.audio_visualizer_create"
bl_label = "Audio Visualizer Create"
bl_description = ""
bl_options = {"REGISTER"}
@classmethod
def poll(cls, context):
return True
def invoke(self, context, event):
context.window_manager.modal_handler_add(self)
scene = bpy.context.scene
##VARIABLES
type = scene.audio_visualizer_type
subtype = scene.audio_visualizer_subtype
axis = scene.audio_visualizer_axis
object = scene.audio_visualizer_other_sample_object
scalex = scene.audio_visualizer_sample_object_scale[0]
scaley = scene.audio_visualizer_sample_object_scale[1]
scalez = scene.audio_visualizer_sample_object_scale[2]
object = scene.audio_visualizer_other_sample_object
bars = scene.audio_visualizer_bars_number
print(bars)
print(scaley)
print(scene.audio_visualizer_bars_distance_weight)
##GETTING THE OBJECT
if object == "OTHER":
object = scene.audio_visualizer_other_sample_object
##Setting Up the bars
total_lenght = (scaley*bars) + (scene.audio_visualizer_bars_distance_weight/100*(bars-1))
for i in range(0, bars):
bpy.ops.mesh.primitive_cube_add(radius=1, view_align=False, enter_editmode=False, location=(0, 0, 0), layers=bpy.context.scene.layers)
bpy.context.object.scale = (scalex,scaley,scalez)
bpy.context.object.location.y = total_lenght/bars*i
is_finished = True
</code></pre>
<blockquote>
<p>At this point I want to finish the modal operator.</p>
</blockquote>
<pre><code> return {"RUNNING_MODAL"}
def modal(self, context, event):
if event.type in {"ESC"}:
print("You've Cancelled The Operation.")
return {"CANCELLED"}
if event.type in {"MIDDLEMOUSE", "RIGHTMOUSE", "LEFTMOUSE"}:
return {"FINISHED"}
return {"FINISHED"}
</code></pre>
<p>But if I put return {"FINISHED"} instead of return {"RUNNING_MODAL"}, Blender crashes or freezes. Is there any way to end the operator?</p>
| 0 |
2016-09-20T14:15:38Z
| 39,622,875 |
<p>Firstly the example you show doesn't benefit from being a modal operator. A modal operator is one that allows the 3DView to be updated as the user input alters what the operator does. An example of a modal operator is the knife tool, once started it changes the final result based on user input while it is running.</p>
<p>The issue you have with your example is that you are doing the wrong tasks in invoke and modal. <code>invoke()</code> should call <code>modal_handler_add()</code> and return <code>{"RUNNING_MODAL"}</code> to signify that <code>modal()</code> should be called while the operator is still running. <code>modal()</code> should perform the data alterations, returning <code>{"RUNNING_MODAL"}</code> while it is still working and <code>{"FINISHED"}</code> or <code>{"CANCELLED"}</code> when it is done.</p>
<p>For a modal operator, <code>modal()</code> is like a loop, each call to modal performs part of the task with the viewport being updated and user input collected in between each call. You add properties to the operator class to hold state information between each modal call.</p>
<p>A simple modal example that adds cubes as you move the mouse -</p>
<pre><code>class audio_visualizer_create(bpy.types.Operator):
bl_idname = "myop.audio_visualizer_create"
bl_label = "Audio Visualizer Create"
bl_options = {"REGISTER"}
first_mouse_x = bpy.props.IntProperty()
first_value = bpy.props.FloatProperty()
def modal(self, context, event):
delta = 0
if event.type == 'MOUSEMOVE':
delta = event.mouse_x - self.first_mouse_x
elif event.type in ['LEFTMOUSE','RIGHTMOUSE','ESC']:
return {'FINISHED'}
for i in range(delta//5):
bpy.ops.mesh.primitive_cube_add(radius=1)
s = i*0.1
bpy.context.object.scale = (s,s,s)
bpy.context.object.location.y = i
return {"RUNNING_MODAL"}
def invoke(self, context, event):
self.first_mouse_x = event.mouse_x
self.first_value = 0.0
context.window_manager.modal_handler_add(self)
return {"RUNNING_MODAL"}
</code></pre>
<p>The flaw in this example - each time <code>modal()</code> is called the for loop creates a cube at each location, which leads to multiple cubes created at each position.</p>
| 0 |
2016-09-21T17:22:47Z
|
[
"python",
"modal-dialog",
"operator-keyword",
"blender"
] |
web.input fails in cx_Oracle
| 39,596,614 |
<p>I have a webserver and I want to add the user data into my SQL query. It works with psycopg but not with cx_Oracle.</p>
<pre><code>...
class grid:
def GET(self):
web.header('Access-Control-Allow-Origin', '*')
web.header('Access-Control-Allow-Credentials', 'true')
web.header('Content-Type', 'application/json')
data = web.input(ID='')
ido = int(data.ID)
a = [ido]
cur = connection.cursor()
cur.arraysize = 10000
query = "SELECT a.id AS building_nr, c.geometry.sdo_ordinates AS geometry, d.Classname AS polygon_typ FROM building a, THEMATIC_SURFACE b, SURFACE_GEOMETRY c, OBJECTCLASS d WHERE a.id = b.BUILDING_ID AND b.LOD2_MULTI_SURFACE_ID = c.ROOT_ID AND c.GEOMETRY IS NOT NULL AND b.OBJECTCLASS_ID = d.ID AND a.grid_id_500 = %s;"
cur.execute(query, a)
</code></pre>
<p>It works until the execute statement. I get the error message:
<strong>'ascii' codec can't decode byte 0xfc in position 36: ordinal not in range(128)</strong></p>
<p>How can I add the data into my query?</p>
| 0 |
2016-09-20T14:20:09Z
| 39,597,410 |
<p>I know what was wrong. I should not use %s for the data. Apparently, cx_Oracle defaults to a "named" paramstyle.</p>
<pre><code>data = web.input(ID='')
query = "SELECT ... FROM... WHERE a.id =:grid_id "
cursor.execute(query, {'grid_id':data.ID})
</code></pre>
| 1 |
2016-09-20T14:56:23Z
|
[
"python",
"web.py",
"cx-oracle"
] |
Why do I get TypeError: takes 1 positional argument but 2 were given?
| 39,596,669 |
<p>I have faced a problem with my Django project. It is as follows:</p>
<p>Get Result: <code>__init__() takes 1 positional argument but 2 were given</code></p>
<p>My code:</p>
<p><strong>urls.py</strong></p>
<pre><code>url(r'^_get_weather', views._get_weather, name='_get_weather')
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def _get_weather(request):
r = urllib.request.urlopen('http://api.openweathermap.org/data/2.5/weather?APPID=$API&q=Hongkong')
s = r.read().decode('utf-8')
j = json.loads(s)
temp='Current tempearture: {:.2f}'.format(j['main']['temp'] - 273.15)
return HttpRequest(temp)
</code></pre>
| 1 |
2016-09-20T14:22:55Z
| 39,596,710 |
<p>Your view function should return an <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#httpresponse-objects"><code>HttpResponse</code></a> not an <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#httprequest-objects"><code>HttpRequest</code></a>.</p>
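<p>A minimal sketch of the fix, leaving the rest of the view unchanged:</p>
<pre><code>from django.http import HttpResponse

def _get_weather(request):
    # ... build temp as before ...
    return HttpResponse(temp)  # HttpResponse, not HttpRequest
</code></pre>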
| 6 |
2016-09-20T14:25:05Z
|
[
"python",
"django"
] |
How to get a list of numbers in a for statement
| 39,596,682 |
<p>I am currently trying to figure out how to get numbers organized.</p>
<pre><code>def convert(temp_celsius):
return temp_celsius * 1.8 + 32
def table(b):
print('C F')
for x in b:
print('{:10}'.format(convert(x)))
table(range(-30, 50, 10))
</code></pre>
<p>I need a list of numbers that ranges from -30 to 40 in steps of 10, so I have 2 columns: one with Fahrenheit and one with Celsius. I currently only have the column with the converted Fahrenheit. </p>
| 1 |
2016-09-20T14:23:52Z
| 39,596,771 |
<p>It's really just a matter of formatting <em>two</em> numbers instead of one:</p>
<pre><code>print('{:10} {:10}'.format(x, convert(x)))
</code></pre>
<p>Of course, you'll need to fix up the alignments but that's something you can do yourself.</p>
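<p>One possible alignment (just a sketch; tweak the widths and precision to taste):</p>
<pre><code>def table(b):
    print('{:>6} {:>8}'.format('C', 'F'))
    for x in b:
        print('{:>6} {:>8.1f}'.format(x, convert(x)))
</code></pre>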
| 1 |
2016-09-20T14:27:54Z
|
[
"python",
"for-loop"
] |
How to get a list of numbers in a for statement
| 39,596,682 |
<p>I am currently trying to figure out how to get numbers organized.</p>
<pre><code>def convert(temp_celsius):
return temp_celsius * 1.8 + 32
def table(b):
print('C F')
for x in b:
print('{:10}'.format(convert(x)))
table(range(-30, 50, 10))
</code></pre>
<p>I need a list of numbers that ranges from -30 to 40 in steps of 10, so I have 2 columns: one with Fahrenheit and one with Celsius. I currently only have the column with the converted Fahrenheit. </p>
| 1 |
2016-09-20T14:23:52Z
| 39,596,783 |
<p>With two format statements:</p>
<pre><code>for x in b:
print('{:10} {:10}'.format(x, convert(x)))
</code></pre>
<p>Certainly, you just forgot to add the other output.</p>
| 0 |
2016-09-20T14:28:20Z
|
[
"python",
"for-loop"
] |
Deprecated Warning: Why won't my Matrix print?
| 39,596,705 |
<p>So I am doing a little project for my linear algebra class and I wanted to make a program that could construct an i by j matrix and then do a Row Echelon form-esque algorithm. But before any of that I wanted Python to print a matrix before it performed the task, so you could see the original matrix. This is what I have for code.</p>
<pre><code>import math
import numpy
i = eval(input("how many rows? "))
j = eval(input("how many columns? "))
def make_matrix(i,j):
    matrix = numpy.random.random_integers(0,100,(i,j))
print(make_matrix(i,j))
</code></pre>
<p>So then I get this message:</p>
<p>C:\Users\Schmidt\Anaconda3\lib\site-packages\ipykernel\__main__.py:2: DeprecationWarning: This function is deprecated. Please call randint(0, 100 + 1) instead
from ipykernel import kernelapp as app</p>
<p>and now I do not know what to do. Could someone explain to me what is happening and guide me to a solution?</p>
| -1 |
2016-09-20T14:24:56Z
| 39,600,323 |
<p>The solution was as easy as adding a return to the function definition:</p>
<pre><code>def make_matrix(i,j):
    # randint's upper bound is exclusive, so this draws integers from 0-49
    matrix = numpy.random.randint(0,50,(i,j))
    return(matrix)
print(make_matrix(i,j))
</code></pre>
| 0 |
2016-09-20T17:28:32Z
|
[
"python",
"matrix",
"printing",
"deprecated"
] |
How wrap list within the frame boundaries in python Tkinter?
| 39,596,715 |
<p><a href="http://i.stack.imgur.com/pGJG2.png" rel="nofollow"><img src="http://i.stack.imgur.com/pGJG2.png" alt="How to make sure my radio button list is within the frame"></a></p>
<p>I am trying to list all the categories placed next to each other. The problem I am facing is that the row goes beyond the frame boundaries instead of wrapping onto a new line. Any workaround for this?</p>
<pre><code> var = IntVar()
for i in xrange(len(ultraCategories)):
i = Radiobutton(midFrame,text=ultraCategories[i],variable=var,value=i,command=sel)
i.pack(side = LEFT)
</code></pre>
| 0 |
2016-09-20T14:25:17Z
| 39,598,647 |
<p>If you use the .grid layout manager for Tkinter you can specify the row and column where you would like to place each item. There is no built-in function to split items into a new row, but you can compute the row and column from the item's index.</p>
<p><a href="http://effbot.org/tkinterbook/grid.htm" rel="nofollow">Tkinter Grid Layout</a></p>
| 2 |
2016-09-20T15:52:14Z
|
[
"python",
"tkinter",
"radiobuttonlist"
] |