overflow hidden on a Paragraph in Reportlab
| 39,627,803 |
<p>I have a <code>Table</code> with 2 cells; inside each one there is a <code>Paragraph</code>:</p>
<pre><code>from reportlab.platypus import Paragraph, Table, TableStyle
from reportlab.lib.styles import ParagraphStyle
from reportlab.lib.units import cm

table_style_footer = TableStyle(
    [
        ('LEFTPADDING', (0, 0), (-1, -1), 0),
        ('RIGHTPADDING', (0, 0), (-1, -1), 0),
        ('TOPPADDING', (0, 0), (-1, -1), 0),
        ('BOTTOMPADDING', (0, 0), (-1, -1), 0),
        ('BOX', (0, 0), (-1, -1), 1, (0, 0, 0)),
        ('VALIGN', (0, 0), (-1, -1), 'TOP'),
    ]
)

style_p_footer = ParagraphStyle('Normal')
style_p_footer.fontName = 'Arial'
style_p_footer.fontSize = 8
style_p_footer.leading = 10

Table(
    [
        [
            Paragraph('Send To:', style_p_footer),
            Paragraph('Here should be a variable with long content', style_p_footer)
        ]
    ],
    [1.7 * cm, 4.8 * cm],
    style=table_style_footer
)
</code></pre>
<p>I need to hide the overflow content of the paragraph, but instead of hiding the overflow, the paragraph wraps it onto a new line.</p>
| 1 |
2016-09-21T23:03:36Z
| 39,641,038 |
<p>Reportlab doesn't seem to have native support for hiding overflow, but we can achieve it by using the <code>breakLines</code> function of a <code>Paragraph</code>. The <code>breakLines</code> function returns an object that contains all the lines of the paragraph for a given width, so we can use it to keep the first line and discard everything else.</p>
<p>Basically we need to do the following:</p>
<ol>
<li>Create a dummy paragraph</li>
<li>Fetch the lines of the dummy</li>
<li>Create the actual paragraph based on the first line of the dummy</li>
</ol>
<p>Doing this in code looks like this:</p>
<pre><code># Create a dummy paragraph to see how it would split
long_string = 'Here should be a variable with long content'*10
long_paragraph = Paragraph(long_string, style_p_footer)

# Needed because of a bug in breakLines (value doesn't matter)
long_paragraph.width = 4.8 * cm

# Fetch the lines of the paragraph for the given width
para_fragment = long_paragraph.breakLines(width=4.8 * cm)

# There are 2 kinds of returns, so 2 ways to grab the first line
if para_fragment.kind == 0:
    shorted_text = " ".join(para_fragment.lines[0][1])
else:
    shorted_text = " ".join([w.text for w in para_fragment.lines[0].words])

# To make it pretty, add ... when we cut the sentence short
if len(para_fragment.lines) > 1:
    shorted_text += "..."

# Create the actual paragraph
shorted_paragraph = Paragraph(shorted_text, style_p_footer)
</code></pre>
| 2 |
2016-09-22T13:53:39Z
|
[
"python",
"python-3.x",
"reportlab",
"platypus"
] |
Finding the smallest common denominator
| 39,627,822 |
<p>I have the following problem. I have arrays of <code>L</code> binary entries (<code>L</code> is usually between 2 and 8, if that is of interest). All possible combinations exist, so there are <code>2^L</code> arrays in one realisation. Each combination is randomly assigned a fitness value, which is also either <code>1</code> or <code>0</code>. For example, the following arrays could have fitness value <code>0</code>:</p>
<pre><code>1000
1010
1100
1001
1011
</code></pre>
<p>Then it is true that all arrays matching <code>10__</code> are zero, as are all matching <code>1_00</code>. This is my actual problem: I need an algorithm which finds these similarities and sorts them by rank. For example, <code>10__</code> is of rank <code>2</code> and <code>1_00</code> is of rank <code>3</code>. Most of the time it is possible to describe all arrays with zero fitness either with a small number of low ranks or with many high ranks. So my output must be ordered, and I need to know how many patterns I have of each rank. I imagine this is a known problem with an existing solution? I hope you can help :) </p>
| 2 |
2016-09-21T23:05:54Z
| 39,628,120 |
<p>Assuming that this <code>fitness value</code> may depend on the value of each bit in your binary entry, and that you're searching for the rules that can sum up your truth table (I'm not sure I understood your question very well on this point), you could use something like a <a href="https://en.wikipedia.org/wiki/Karnaugh_map" rel="nofollow">Karnaugh map</a>.</p>
<p>These maps work like this:</p>
<p>You list your factors (here the value of the 1st bit, 2nd bit...) and separate them into two lists (one for the rows, one for the columns).</p>
<p>Order each list with a <a href="https://en.wikipedia.org/wiki/Gray_code" rel="nofollow">Gray code</a>.</p>
<p>Use the set of values of each list for the columns and rows of a table, respectively.</p>
<p>You now have a table that represents every possible value of your binary entries.</p>
<p>Then, you list the fitness value for each entry in your table (1 or 0).</p>
<p>Then you make groups of true (or false) values in your table, which lets you find the rule for each group (by finding the common characteristics). You usually group these by squares or by corners (cf. the first Wikipedia link).</p>
<h2>Here is a (simple) example with 3 bit entries :</h2>
<p>Assuming you have the following Truth table :</p>
<pre><code>ABC|R
-----
000|0
001|0
010|1
011|1
100|1
101|0
110|1
111|1
</code></pre>
<p>You can establish the following Karnaugh table :</p>
<pre><code>              AB
     | 00 | 01    | 11    | 10    <-- AB
    -------------------------------------
   0 |  0 | (1) 1 | (1) 1 | (2) 1
    -------------------------------------
   1 |  0 | (1) 1 | (1) 1 |  0
   ^
   |
   C
<ul>
<li><p>You can group the 4 in the middle: all four have B true, while A and C may be either true or false, so you can sum them up like this:</p>
<p>(1) <code>B -> R</code></p></li>
<li><p>You also have the one in the top right:</p>
<p>(2) <code>A and !B and !C -> R</code></p></li>
</ul>
<p>So your rule in this case is :</p>
<pre><code>B or (A and !B and !C) -> R
</code></pre>
<p>As I mentioned in my comment, I am not sure what you really want to achieve. But if you're trying to find some simple rules that sum up your truth table, maybe listing the rules found with these steps is enough?</p>
<p>I have no Python implementation to provide, but you may find some on the internet. :) At least you can do it with paper and pencil now. :)</p>
<p>Feel free to ask if something isn't clear!</p>
<p>Here is one of the many tutorials you can find on the internet:</p>
<p><a href="http://www.facstaff.bucknell.edu/mastascu/eLessonsHTML/Logic/Logic3.html" rel="nofollow">http://www.facstaff.bucknell.edu/mastascu/eLessonsHTML/Logic/Logic3.html</a></p>
<p><strong>PS :</strong></p>
<p><code>B</code> means that B is true, and <code>!B</code> means it's false.</p>
<p><strong>PS-2 :</strong></p>
<p>You can also use Boole algebra to reduce your rules.</p>
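<p>To illustrate the idea in the question's own terms, here is a rough brute-force sketch (assumptions on my part: zero-fitness entries represented as strings of <code>'0'</code>/<code>'1'</code>, with <code>'_'</code> as a wildcard). It is not the Karnaugh method itself, just a way to verify candidate rules:</p>

```python
from itertools import product

# Zero-fitness entries from the question (an assumed representation)
zero_fitness = {'1000', '1010', '1100', '1001', '1011'}

def matches(pattern, entry):
    # '_' is a wildcard position
    return all(p == '_' or p == e for p, e in zip(pattern, entry))

def covers_only_zeros(pattern, length=4):
    # True if every entry matched by the pattern has fitness 0
    covered = (''.join(bits) for bits in product('01', repeat=length)
               if matches(pattern, ''.join(bits)))
    return all(e in zero_fitness for e in covered)

# The question's "rank" is the number of fixed (non-wildcard) positions
rank = sum(c != '_' for c in '10__')   # 2
```

<p>With these helpers, <code>covers_only_zeros('10__')</code> and <code>covers_only_zeros('1_00')</code> hold, while <code>covers_only_zeros('1___')</code> does not.</p>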
| 1 |
2016-09-21T23:44:17Z
|
[
"python"
] |
Finding the smallest common denominator
| 39,627,822 |
<p>I have the following problem. I have arrays of <code>L</code> binary entries (<code>L</code> is usually between 2 and 8, if that is of interest). All possible combinations exist, so there are <code>2^L</code> arrays in one realisation. Each combination is randomly assigned a fitness value, which is also either <code>1</code> or <code>0</code>. For example, the following arrays could have fitness value <code>0</code>:</p>
<pre><code>1000
1010
1100
1001
1011
</code></pre>
<p>Then it is true that all arrays matching <code>10__</code> are zero, as are all matching <code>1_00</code>. This is my actual problem: I need an algorithm which finds these similarities and sorts them by rank. For example, <code>10__</code> is of rank <code>2</code> and <code>1_00</code> is of rank <code>3</code>. Most of the time it is possible to describe all arrays with zero fitness either with a small number of low ranks or with many high ranks. So my output must be ordered, and I need to know how many patterns I have of each rank. I imagine this is a known problem with an existing solution? I hope you can help :) </p>
| 2 |
2016-09-21T23:05:54Z
| 39,629,480 |
<p>As already mentioned, this is a combinational logic minimization problem.</p>
<p>The algorithm typically used is Quine-Mccluskey and somebody has already asked for and posted a Python implementation:</p>
<p><a href="http://stackoverflow.com/questions/7537125/quine-mccluskey-algorithm-in-python">Quine-McCluskey algorithm in Python</a></p>
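<p>To give a feel for what the algorithm does: the core Quine-McCluskey step merges two implicants that differ in exactly one bit into one with a dash in that position. A minimal sketch of just that step (a hypothetical helper, not the full algorithm):</p>

```python
def merge(a, b):
    """Merge two implicants differing in exactly one position, else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + '-' + a[i + 1:]
```

<p>For example, <code>merge('1000', '1010')</code> gives <code>'10-0'</code>, while <code>merge('1000', '1011')</code> gives <code>None</code>; repeating this until no merges remain yields the prime implicants.</p>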
| 1 |
2016-09-22T02:49:50Z
|
[
"python"
] |
Invalidate Flask session?
| 39,627,827 |
<p>I'm testing JSON based login/logout functionality with httpie (<a href="https://github.com/jkbrzt/httpie#sessions" rel="nofollow">https://github.com/jkbrzt/httpie#sessions</a>).
The problem I have is that once I log in, no matter how many times I "logout", I cannot clean up the session.
On logout I can clearly see that the session is cleared, but when I call "status" afterwards the session
content is preserved (still there)???</p>
<p>Any idea what I'm doing wrong? How do I invalidate the session on logout?</p>
<p>Here is the code:</p>
<pre><code>http -v --session=log3 -j :5000/start/status
http -v --session=log3 -j :5000/start/logout
http -v --session=log3 -j :5000/start/status

@start.route('/logout', methods=['GET'])
def logout():
    print session
    session.pop('logged_in', None)
    session.clear()
    print session
    return jsonify({'rv' : 'ok' })

@start.route('/status', methods=['GET'])
def status():
    print session
    if 'logged_in' in session and session['logged_in'] :
        return jsonify({'status' : True })
    return jsonify({'status' : False})
</code></pre>
| 0 |
2016-09-21T23:06:39Z
| 39,649,464 |
<p>Solved it .... didn't know that by default Flask uses client-side sessions !?!!? whaaat !!</p>
<p>Once you install Flask-Session instead, everything is fine!
<a href="https://pythonhosted.org/Flask-Session/" rel="nofollow">https://pythonhosted.org/Flask-Session/</a></p>
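<p>For reference, a minimal server-side session setup sketch, assuming Flask-Session is installed (the configuration values here are illustrative, not from the original post):</p>

```python
from flask import Flask
from flask_session import Session  # pip install Flask-Session

app = Flask(__name__)
app.secret_key = 'change-me'               # illustrative value
app.config['SESSION_TYPE'] = 'filesystem'  # keep session data on the server
Session(app)
```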
| 0 |
2016-09-22T21:53:32Z
|
[
"python",
"json",
"session",
"flask"
] |
Python - Ciphertext length must be equal to key size - different string length after passing it to server
| 39,627,844 |
<p>I have a problem with decrypting the encrypted message after passing it from the client to the server. I am using <a href="https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/" rel="nofollow">Cryptography</a>. I am encrypting the message on the client side using the script below:</p>
<pre><code>encMessage = public_key.encrypt(message, padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(),label=None))
</code></pre>
<p>After this encryption, the <em>len()</em> function on the client side properly tells me that <em>encMessage</em> is 256 bytes long. The <em>type()</em> function also tells me that <em>encMessage</em> is <em>'str'</em>. When printed, <em>encMessage</em> looks something like this:</p>
<blockquote>
<p>I\xf0gr\xf5\xf8\xf2F\xde\xc7\xe4\x91\xa1F3\xc1\x05\x06\xd7Y:\xc9\xcf\xed'\xf49\xd5\x99Z\xed\x93\xba8\xdd\x0b\xe3?</p>
</blockquote>
<p>However, when I pass this <em>encMessage</em> to the server using <em>rest_framework</em> then, after using the code below on the server side:</p>
<pre><code>message = private_key.decrypt(encMessage,padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(),label=None))
</code></pre>
<p>I get the error: "Ciphertext length must be equal to key size". I checked this and now the <em>len()</em> function on the server side used on <em>encMessage</em> gives me not 256 but something smaller. Also the <em>type()</em> function shows <em>'unicode'</em>.</p>
<p>I assume that there is something wrong with <em>encMessage</em> conversion during data sending from client to server but I have no idea what to do with it.</p>
<p>Thank you in advance for any suggestions!</p>
<p><strong>EDIT:</strong></p>
<p>Answering the comment about showing how I communicate with the server and send the data - this is what I do:</p>
<pre><code>data = {'message': $scope.message};
loginApi.getMessageEncrypted(data)
.success(
function(dataEncrypted) {
loginApi.checkMessage(dataEncrypted['encMessage'])
.success(
function(dataDecrypted) {
$log.log('Server responded properly');
})
.error(
function(errorInfo, status) {
$log.info('Server user data answer error.')
});
})
.error(
function(errorInfo, status) {
$log.info('Client encryption error.')
});
</code></pre>
<p>Explanation:</p>
<p>After clicking a button on the website, Angular's <em>getMessageEncrypted</em> function sends <em>message</em> in the <em>data</em> variable via <em>post</em> to a <em>django REST framework</em> APIView and then a serializer object, which are written in Python on the client side. In the serializer the message is encrypted using:</p>
<pre><code>encMessage = public_key.encrypt(message, padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(),label=None))
dataEncrypted = {'encMessage': encMessage};
return dataEncrypted
</code></pre>
<p>and sent back to Angular in the <em>dataEncrypted</em> dictionary-like variable. This message is then sent with Angular's <em>checkMessage</em> function (also with <em>post</em>) to the <em>django REST framework</em> APIView and then the serializer object on the server side. In the server's serializer the message is decrypted using:</p>
<pre><code>message = private_key.decrypt(encMessage,padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()), algorithm=hashes.SHA1(),label=None))
</code></pre>
<p>and similarly sent back to Angular in the <em>dataDecrypted</em> dictionary-like variable. However, this never happens, because at the decryption line I get the <em>Ciphertext length must be equal to key size</em> error.</p>
<p>I think the message's encoding changes (to UTF-8, Unicode or something) either at the client-Angular boundary, when the encrypted message is sent to Angular from the Python function on the client side, or at the Angular-server boundary, when the encrypted message is sent from Angular to the Python APIView on the server side.</p>
<p>I am really sorry for my probable lack of "naming" experience; I am not really up on the proper nomenclature.</p>
<p><strong>SOLUTION</strong></p>
<p>User <a href="http://stackoverflow.com/users/1350899/mata">mata</a> answered my question. What I had to do was encode the encrypted message with <em>base64</em> before returning it to Angular and sending it to the server API. Then on the server side I had to decode it from <em>base64</em> and then decrypt the message. Thank you again!</p>
| 1 |
2016-09-21T23:09:35Z
| 39,648,542 |
<p>Your encrypted message is a binary blob of data, and you seem to treat it as a string in JavaScript, but JavaScript strings are not made for binary data, and when the transport format is JSON it gets even more complicated.</p>
<p>The standard approach would be to base64 encode the encrypted message before sending and decode it when it's received.</p>
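<p>A minimal round-trip sketch with the standard library (the 256-byte blob here is just a stand-in for the RSA ciphertext):</p>

```python
import base64

cipher = bytes(range(256))                       # stand-in for encMessage

wire = base64.b64encode(cipher).decode('ascii')  # safe to embed in JSON
received = base64.b64decode(wire)                # server side, before decrypting

assert received == cipher and len(received) == 256
```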
| 1 |
2016-09-22T20:42:43Z
|
[
"python",
"string",
"cryptography"
] |
Number of rows in numpy array
| 39,627,852 |
<p>I know that a numpy array has an attribute called <code>shape</code> that returns <code>[No. of rows, No. of columns]</code>: <code>shape[0]</code> gives you the number of rows and <code>shape[1]</code> gives you the number of columns. </p>
<pre><code>a = numpy.array([[1,2,3,4], [2,3,4,5]])
a.shape
>> [2,4]
a.shape[0]
>> 2
a.shape[1]
>> 4
</code></pre>
<p>However, if my array has only one row, then it returns <code>[No. of columns, ]</code>, and <code>shape[1]</code> will be out of range. For example:</p>
<pre><code>a = numpy.array([1,2,3,4])
a.shape
>> [4,]
a.shape[0]
>> 4 //this is the number of column
a.shape[1]
>> Error out of index
</code></pre>
<p>Now how do I get the number of rows of an numpy array if the array may have only one row?</p>
<p>Thank you</p>
| 1 |
2016-09-21T23:10:08Z
| 39,627,884 |
<p>The concept of <em>rows</em> and <em>columns</em> applies when you have a 2D array. However, the array <code>numpy.array([1,2,3,4])</code> is a 1D array and so has only one dimension, therefore <code>shape</code> rightly returns a single valued iterable. </p>
<p>For a 2D version of the same array, consider the following instead:</p>
<pre><code>>>> a = numpy.array([[1,2,3,4]]) # notice the extra square braces
>>> a.shape
(1, 4)
</code></pre>
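<p>If you would rather not change how the array is built, one option (my suggestion, not part of the answer above) is to promote it before reading the shape:</p>

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.atleast_2d(a)   # shape (1, 4); 2-D input would pass through unchanged

rows, cols = b.shape
```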
| 4 |
2016-09-21T23:13:40Z
|
[
"python",
"arrays",
"numpy"
] |
Number of rows in numpy array
| 39,627,852 |
<p>I know that a numpy array has an attribute called <code>shape</code> that returns <code>[No. of rows, No. of columns]</code>: <code>shape[0]</code> gives you the number of rows and <code>shape[1]</code> gives you the number of columns. </p>
<pre><code>a = numpy.array([[1,2,3,4], [2,3,4,5]])
a.shape
>> [2,4]
a.shape[0]
>> 2
a.shape[1]
>> 4
</code></pre>
<p>However, if my array has only one row, then it returns <code>[No. of columns, ]</code>, and <code>shape[1]</code> will be out of range. For example:</p>
<pre><code>a = numpy.array([1,2,3,4])
a.shape
>> [4,]
a.shape[0]
>> 4 //this is the number of column
a.shape[1]
>> Error out of index
</code></pre>
<p>Now how do I get the number of rows of an numpy array if the array may have only one row?</p>
<p>Thank you</p>
| 1 |
2016-09-21T23:10:08Z
| 39,627,945 |
<p>Rather than converting this to a 2D array, which may not always be an option, one could either check the <code>len()</code> of the tuple returned by <code>shape</code> or just catch the index error, as such:</p>
<pre><code>import numpy

a = numpy.array([1,2,3,4])
print(a.shape)
# (4,)
print(a.shape[0])

try:
    print(a.shape[1])
except IndexError:
    print("only 1 column")
</code></pre>
<p>Or you could just try and assign this to a variable for later use (or return or what have you) if you know you will only have 1 or 2 dimension shapes:</p>
<pre><code>try:
    shape = (a.shape[0], a.shape[1])
except IndexError:
    shape = (1, a.shape[0])

print(shape)
</code></pre>
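<p>A related option, assuming only 1- or 2-dimensional inputs, is to branch on <code>ndim</code> instead of catching the exception (this is my addition, not part of the answer above):</p>

```python
import numpy as np

def row_count(a):
    # Treat a 1-D array as a single row; a.ndim == len(a.shape)
    return a.shape[0] if a.ndim > 1 else 1

r1 = row_count(np.array([1, 2, 3, 4]))            # 1
r2 = row_count(np.array([[1, 2, 3], [4, 5, 6]]))  # 2
```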
| 1 |
2016-09-21T23:21:01Z
|
[
"python",
"arrays",
"numpy"
] |
How to use MongoDB with Django Framework and Python3
| 39,627,869 |
<p>I am a newbie to the Python, Django, and MongoDB world. I've set up a Django project (virtualenv, Python 3, Django==1.10.1, MongoDB) and installed some basic package requirements: </p>
<pre><code>Django==1.10.1
django-mongodb-engine==0.6.0
djangotoolbox==1.8.0
mongoengine==0.9.0
pymongo==3.3.0
</code></pre>
<p>but when I try to sync the database using <code>python3 manage.py syncdb</code>, it shows: </p>
<pre><code>~/virtenv/lib/python3.5/site-packages/django_mongodb_engine/base.py", line 272
raise ImproperlyConfigured, exc_info[1], exc_info[2]
^
SyntaxError: invalid syntax
</code></pre>
<p><strong>Settings.py</strong></p>
<pre><code>import os
import mongoengine

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DEBUG = True

ALLOWED_HOSTS = []

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'fvp_amu.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'fvp_amu.wsgi.application'

# Database
DATABASES = {
    'default' : {
        'ENGINE' : '',
        'NAME' : 'fvp_amu'
    }
}

# Password validation
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

STATIC_URL = '/static/'
</code></pre>
| 1 |
2016-09-21T23:11:55Z
| 39,633,607 |
<p>From the <a href="https://django-mongodb-engine.readthedocs.io/en/latest/topics/setup.html#configuration" rel="nofollow">django-mongodb-engine</a> docs:</p>
<pre><code>DATABASES = {
    'default' : {
        'ENGINE' : 'django_mongodb_engine',
        'NAME' : 'my_database'
    }
}
</code></pre>
| 1 |
2016-09-22T08:10:07Z
|
[
"python",
"django",
"mongodb",
"python-3.x",
"pymongo"
] |
Desk Price Calculation in Python
| 39,627,886 |
<p>I have a little exercise I need to do in Python that's called "Desk Price Calculation". There need to be 4 functions which are used in the program. </p>
<p>My main problem is using the output of a function in another function.</p>
<pre><code>def get_drawers():
    drawers = int(input('How many goddamn drawers would you like? '))
    return drawers

def get_wood_type():
    print('The types of wood available are Pine, Mahogany or Oak. For any other wood, type Other')
    wood = str(input('What type of wood would you like? '))
    return wood

def calculate_price(wood_type, drawers):
    wood_type = get_wood_type()
    drawers = get_drawers()
    wood_price = 0
    drawer_price = 0
    #wood price
    if wood_type == "Pine":
        wood_price = 100
    elif wood_type == "Oak":
        wood_price = 140
    elif wood_type == "Mahogany":
        wood_price = 180
    else:
        wood_price = 180
    #price of drawers
    drawer_price = drawers * 30
    return drawer_price + wood_price

def display_price(price, wood, drawer_count):
    price = calculate_price()
    wood = get_wood_type()
    drawer_count = get_drawers()
    print("The amount of drawers you requested were: ", drawer_count, ". Their total was ", drawer_count * 30, ".")
    print("The type of would you requested was ", get_wood_type(), ". The total for that was ", drawer_count * 30 - calculate_price(), ".")
    print("Your total is: ", price)

if __name__ == '__main__':
    display_price()
</code></pre>
| -2 |
2016-09-21T23:13:48Z
| 39,627,948 |
<p>I guess you get an error like this (which, by the way, should have been added to your question):</p>
<pre><code>Traceback (most recent call last):
  File "test.py", line 42, in <module>
    display_price()
TypeError: display_price() missing 3 required positional arguments: 'price', 'wood', and 'drawer_count'
</code></pre>
<p>You defined your <code>display_price()</code> function like this :</p>
<pre><code>def display_price(price, wood, drawer_count):
</code></pre>
<p>So it expects 3 arguments when you call it, but you call it without any.</p>
<p>You have to either :</p>
<ol>
<li>redefine your function without arguments (that would be the most logical solution, since <code>price</code>, <code>wood</code> and <code>drawer_count</code> are defined in its scope), or</li>
<li>pass these arguments in the call, but that would be useless for the reason mentioned in <strong>1</strong>, unless you remove the reassignments inside the function.</li>
</ol>
<p><strong>PS:</strong> You'll have the same problem with <code>calculate_price()</code> since it expects two arguments, but you pass none to it.</p>
<h2>About function's arguments :</h2>
<p>When you define a function, you also define the arguments it expects when you call it.</p>
<p>For instance, if you define :</p>
<pre><code>def f(foo):
    # Do some stuff
</code></pre>
<p>a correct <code>f</code> call would be <code>f(some_val)</code> and not <code>f()</code></p>
<p>Plus it would be useless to define <code>f</code> like this :</p>
<pre><code>def f(foo):
    foo = someval
    # Do some other stuff
</code></pre>
<p>since <code>foo</code> is directly redefined in the function's scope, without even using the initial value.</p>
<p>This will help you to discover the basics of functions:</p>
<p><a href="http://www.tutorialspoint.com/python/python_functions.htm" rel="nofollow">http://www.tutorialspoint.com/python/python_functions.htm</a></p>
| 0 |
2016-09-21T23:21:25Z
|
[
"python",
"python-3.x",
"price"
] |
Desk Price Calculation in Python
| 39,627,886 |
<p>I have a little exercise I need to do in Python that's called "Desk Price Calculation". There need to be 4 functions which are used in the program. </p>
<p>My main problem is using the output of a function in another function.</p>
<pre><code>def get_drawers():
    drawers = int(input('How many goddamn drawers would you like? '))
    return drawers

def get_wood_type():
    print('The types of wood available are Pine, Mahogany or Oak. For any other wood, type Other')
    wood = str(input('What type of wood would you like? '))
    return wood

def calculate_price(wood_type, drawers):
    wood_type = get_wood_type()
    drawers = get_drawers()
    wood_price = 0
    drawer_price = 0
    #wood price
    if wood_type == "Pine":
        wood_price = 100
    elif wood_type == "Oak":
        wood_price = 140
    elif wood_type == "Mahogany":
        wood_price = 180
    else:
        wood_price = 180
    #price of drawers
    drawer_price = drawers * 30
    return drawer_price + wood_price

def display_price(price, wood, drawer_count):
    price = calculate_price()
    wood = get_wood_type()
    drawer_count = get_drawers()
    print("The amount of drawers you requested were: ", drawer_count, ". Their total was ", drawer_count * 30, ".")
    print("The type of would you requested was ", get_wood_type(), ". The total for that was ", drawer_count * 30 - calculate_price(), ".")
    print("Your total is: ", price)

if __name__ == '__main__':
    display_price()
</code></pre>
| -2 |
2016-09-21T23:13:48Z
| 39,627,971 |
<p>It would appear that you wanted to pass in some parameters to your method. </p>
<p>If that is the case, you need to move the "calculate" and "get" functions.</p>
<p>And, no need to re-prompt for input - you already have parameters</p>
<pre><code>def calculate_price(wood_type, drawers):
    # wood_type = get_wood_type()
    # drawers = get_drawers()
    wood_price = 0
    drawer_price = 0
    # ... other calculations
    return drawer_price + wood_price

def display_price(wood, drawer_count):
    price = calculate_price(wood, drawer_count)
    ### Either do this here, or pass in via the main method below
    # wood = get_wood_type()
    # drawer_count = get_drawers()
    print("The amount of drawers you requested were: ", drawer_count, ". Their total was ", drawer_count * 30, ".")
    print("The type of would you requested was ", wood, ". The total for that was ", drawer_count * 30 - price, ".")
    print("Your total is: ", price)

if __name__ == '__main__':
    wood = get_wood_type()
    drawer_count = get_drawers()
    display_price(wood, drawer_count)
</code></pre>
| 0 |
2016-09-21T23:23:53Z
|
[
"python",
"python-3.x",
"price"
] |
How do I get the username of the current user in a Flask app on an IIS server using Windows Authentication?
| 39,627,947 |
<p>I have a Flask app (Python 2.7) running on an IIS server in Windows 10. The server is configured to use Windows Authentication. I am using an HttpPlatformHandler in order to execute the Python code. </p>
<p>I have verified that the authentication is working and am able to see the Kerberos "Negotiate" auth header. However, I cannot find a way to access the username of the user who requested the page. </p>
<p>I have tried printing the entire request header and request environment and it is not there. This <a href="http://stackoverflow.com/questions/31412852/flask-python-decoding-username-ntlm-or-negotiate-authentication-header">post</a> seems to be about my issue but the code in it is not correct. <strong>What can I do to pass the Windows username of the requester to my python code?</strong></p>
<p>I would like to access the username of the user in order to both restrict page access by user and remove certain elements from pages based on user. </p>
| 0 |
2016-09-21T23:21:17Z
| 39,648,839 |
<p>It turns out that the answer on <a href="http://stackoverflow.com/questions/33347412/rails-app-remote-user-attribute-in-iis-8-5-with-windows-authentication">this post</a> works for my configuration as well. I simply downloaded ISAPI_Rewrite 3, copy and pasted the text into httpd.conf, and was able to access the name of the user by calling <code>request.headers.get('X-Remote-User')</code> in my Python code. </p>
| 0 |
2016-09-22T21:04:21Z
|
[
"python",
"iis",
"flask",
"windows-authentication"
] |
Why does my Python XML parser break after the first file?
| 39,627,960 |
<p>I am working on a Python (3) XML parser that should extract the text content of specific nodes from every xml file within a folder. Then, the script should write the collected data into a tab-separated text file. So far, all the functions seem to be working. The script returns all the information that I want from the first file, but it always breaks, I believe, when it starts to parse the second file.</p>
<p>When it breaks, it returns "TypeError: 'str' object is not callable." I've checked the second file and found that the functions work just as well on that as the first file when I remove the first file from the folder. I'm very new to Python/XML. Any advice, help, or useful links would be greatly appreciated. Thanks!</p>
<pre><code>import xml.etree.ElementTree as ET
import re
import glob
import csv
import sys

content_file = open('WWP Project/WWP_texts.txt','wt')
quotes_file = open('WWP Project/WWP_quotes.txt', 'wt')
list_of_files = glob.glob("../../../Documents/WWPtextbase/distribution/*.xml")

ns = {'wwp':'http://www.wwp.northeastern.edu/ns/textbase'}

def content(tree):
    lines = ''.join(ET.tostring(tree.getroot(),encoding='unicode',method='text')).replace('\n',' ').replace('\t',' ').strip()
    clean_lines = re.sub(' +',' ', lines)
    return clean_lines.lower()

def quotes(tree):
    quotes_list = []
    for node in tree.findall('.//wwp:quote', namespaces=ns):
        quote = ET.tostring(node,encoding='unicode',method='text')
        clean_quote = re.sub(' +',' ', quote)
        quotes_list.append(clean_quote)
    return ' '.join(str(v) for v in quotes_list).replace('\t','').replace('\n','').lower()

def pid(tree):
    for node in tree.findall('.//wwp:sourceDesc//wwp:author/wwp:persName[1]', namespaces=ns):
        pid = node.attrib.get('ref')
    return pid.replace('personography.xml#','') # will need to replace 'p:'

def trid(tree): # this function will eventually need to call OT (.//wwp:publicationStmt//wwp:idno)
    for node in tree.findall('.//wwp:sourceDesc',namespaces=ns):
        trid = node.attrib.get('n')
    return trid

content_file.write('pid' + '\t' + 'trid' + '\t' +'text' + '\n')
quotes_file.write('pid' + '\t' + 'trid' + '\t' + 'quotes' + '\n')

for file_name in list_of_files:
    file = open(file_name, 'rt')
    tree = ET.parse(file)
    file.close()
    pid = pid(tree)
    trid = trid(tree)
    content = content(tree)
    quotes = quotes(tree)
    content_file.write(pid + '\t' + trid + '\t' + content + '\n')
    quotes_file.write(pid + '\t' + trid + '\t' + quotes + '\n')

content_file.close()
quotes_file.close()
</code></pre>
| 1 |
2016-09-21T23:22:24Z
| 39,628,092 |
<p>You are overwriting the function names with the values the functions returned, so on the second iteration <code>pid</code>, <code>trid</code>, <code>content</code> and <code>quotes</code> are strings rather than functions. Renaming the functions fixes it.</p>
<pre><code>import xml.etree.ElementTree as ET
import re
import glob
import csv
import sys

content_file = open('WWP Project/WWP_texts.txt','wt')
quotes_file = open('WWP Project/WWP_quotes.txt', 'wt')
list_of_files = glob.glob("../../../Documents/WWPtextbase/distribution/*.xml")

ns = {'wwp':'http://www.wwp.northeastern.edu/ns/textbase'}

def get_content(tree):
    lines = ''.join(ET.tostring(tree.getroot(),encoding='unicode',method='text')).replace('\n',' ').replace('\t',' ').strip()
    clean_lines = re.sub(' +',' ', lines)
    return clean_lines.lower()

def get_quotes(tree):
    quotes_list = []
    for node in tree.findall('.//wwp:quote', namespaces=ns):
        quote = ET.tostring(node,encoding='unicode',method='text')
        clean_quote = re.sub(' +',' ', quote)
        quotes_list.append(clean_quote)
    return ' '.join(str(v) for v in quotes_list).replace('\t','').replace('\n','').lower()

def get_pid(tree):
    for node in tree.findall('.//wwp:sourceDesc//wwp:author/wwp:persName[1]', namespaces=ns):
        pid = node.attrib.get('ref')
    return pid.replace('personography.xml#','') # will need to replace 'p:'

def get_trid(tree): # this function will eventually need to call OT (.//wwp:publicationStmt//wwp:idno)
    for node in tree.findall('.//wwp:sourceDesc',namespaces=ns):
        trid = node.attrib.get('n')
    return trid

content_file.write('pid' + '\t' + 'trid' + '\t' +'text' + '\n')
quotes_file.write('pid' + '\t' + 'trid' + '\t' + 'quotes' + '\n')

for file_name in list_of_files:
    file = open(file_name, 'rt')
    tree = ET.parse(file)
    file.close()
    pid = get_pid(tree)
    trid = get_trid(tree)
    content = get_content(tree)
    quotes = get_quotes(tree)
    content_file.write(pid + '\t' + trid + '\t' + content + '\n')
    quotes_file.write(pid + '\t' + trid + '\t' + quotes + '\n')

content_file.close()
quotes_file.close()
</code></pre>
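<p>The shadowing is easy to reproduce in isolation. This sketch uses a toy stand-in for the real XML-parsing function (the name <code>pid</code> and return value are made up) to show the name being rebound on the first iteration and the call failing on the second:</p>

```python
def pid(tree):
    # Toy stand-in for the real XML-parsing function
    return 'p123'

error = None
try:
    for tree in ['a', 'b']:
        pid = pid(tree)  # first pass rebinds 'pid' to the string it returned
except TypeError as exc:
    # second pass: 'pid' is now a str, not a function
    error = str(exc)

print(error)  # 'str' object is not callable
```

<p>Renaming the functions (<code>get_pid</code> etc.), as in the fixed code above, means the loop variables never rebind the function names.</p>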
| 1 |
2016-09-21T23:39:58Z
|
[
"python",
"for-loop",
"xml-parsing"
] |
from openpyxl.styles import Style, Font ImportError: cannot import name Style
| 39,628,005 |
<p>While running a python code, I am facing the following error :</p>
<pre><code>sbassi-mbpro:FacebookEventScraper-master sbassi$ python facebook_event_scraper.py
Traceback (most recent call last):
File "facebook_event_scraper.py", line 13, in <module>
from openpyxl.styles import Style, Font
ImportError: cannot import name Style
python -V
Python 2.7.10
</code></pre>
| 0 |
2016-09-21T23:27:16Z
| 39,628,053 |
<p>Looking at the <a href="https://bitbucket.org/openpyxl/openpyxl/src/a4373fb5b471/openpyxl/styles/?at=default" rel="nofollow">source code</a> for openpyxl.styles, the <a href="https://bitbucket.org/openpyxl/openpyxl/src/a4373fb5b471f8dd6000b06e675cf37fe6a0ccad/openpyxl/styles/__init__.py?at=default&fileviewer=file-view-default" rel="nofollow">__init__.py</a> file doesn't define the name <code>Style</code>. </p>
<p>Perhaps you were looking for <code>NamedStyle</code> instead?</p>
| 1 |
2016-09-21T23:32:56Z
|
[
"python",
"openpyxl"
] |
inserting python variable data into sqlite table not saving
| 39,628,043 |
<p>I'm querying a json on a website for data, then saving that data into a variable so I can put it into a sqlite table. I'm 2 out of 3 for what I'm trying to do, but the sqlite side is just mystifying. I'm able to request the data, from there I can verify that the variable has data when I test it with a print, but all of my sqlite stuff is failing. It's not even creating a table, much less updating the table (but it is printing all the results to the buffer for some reason) Any idea what I'm doing wrong here? Disclaimer: Bit of a python noob. I've successfully created test tables just copying the stuff off of the <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">python sqlite doc</a></p>
<pre><code># this is requesting the data and seems to work
for ticket in zenpy.search("bananas"):
id = ticket.id
subj = ticket.subject
created = ticket.created_at
for comment in zenpy.tickets.comments(ticket.id):
body = comment.body
# connecting to sqlite db that exists. things seem to go awry here
conn = sqlite3.connect('example.db')
c = conn.cursor()
# Creating the table table (for some reason table is not being created at all)
c.execute('''CREATE TABLE tickets_test
(ticket id, ticket subject, creation date, body text)''')
# Inserting the variables into the sqlite table
c.execute("INSERT INTO ticketstest VALUES (id, subj, created, body)")
# committing changes the changes and closing
c.commit()
c.close()
</code></pre>
<p>I'm on Windows 64bit and using pycharm to do this. </p>
| 0 |
2016-09-21T23:31:39Z
| 39,628,146 |
<p>Your table likely isn't created because you haven't committed yet, and your sql fails before it commits. It should work when you fix your 2nd sql statement.</p>
<p>You're not inserting the variables you've created into the table. You need to use parameters. There are <a href="https://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.execute" rel="nofollow">two ways of parameterizing your sql statement</a>. I'll show the named placeholders one:</p>
<pre><code>c.execute("INSERT INTO ticketstest VALUES (:id, :subj, :created, :body)",
{'id':id, 'subj':subj, 'created':created, 'body':body}
)
</code></pre>
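<p>As a self-contained sketch of the same pattern (using an in-memory database, since the real schema isn't shown; the column names here are assumptions), note also that <code>commit()</code> lives on the connection, not the cursor:</p>

```python
import sqlite3

# In-memory database so the sketch is self-contained
conn = sqlite3.connect(':memory:')
c = conn.cursor()

# Explicit column names; the question's actual schema is assumed here
c.execute("CREATE TABLE tickets_test (id, subj, created, body)")

row = {'id': 1, 'subj': 'bananas', 'created': '2016-09-21', 'body': 'hello'}
c.execute("INSERT INTO tickets_test VALUES (:id, :subj, :created, :body)", row)

# commit() is a method of the connection, not the cursor
conn.commit()

c.execute("SELECT subj FROM tickets_test")
print(c.fetchone()[0])  # bananas
```

<p>sqlite3 also supports positional <code>?</code> placeholders with a tuple of values, if you prefer that style.</p>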
| 0 |
2016-09-21T23:47:15Z
|
[
"python",
"sqlite",
"zendesk"
] |
Items are not adding in correctly in QTableWidget in Maya
| 39,628,044 |
<p>I am trying to create a table, where it has 2 columns and several rows.
Column1 will be listing all the available mesh/geos in the scene while Column2 will be in the form of combo box per cell where it contains several options - depending on the mesh/geos from Column1, it will lists different shader options in the combobox as it will be reading off from a file. Meaning to say in the table, each item is on a per-row basis.</p>
<p>I am currently having issues with populating the list of mesh/geos into Column1. Suppose my scene has 5 geos - <code>pCube1, pCube2, pCube3, pCube4, pCube5</code>, in my table, I would be expecting the Column0 of its 5 rows to be populated with <code>pCube#</code>, however instead of that, I got <code>pCube5</code> as my output result instead.</p>
<p>Please see the following code:</p>
<pre><code>from PyQt4 import QtGui, QtCore
from functools import partial
import maya.cmds as cmds
class combo_box( QtGui.QComboBox ):
# For combox
def __init__( self, *args, **kwargs ):
super( combo_box, self ).__init__( *args, **kwargs)
def get_all_geos():
all_geos = cmds.ls(type='mesh')
return all_geos
class TestTable( QtGui.QWidget ):
def __init__( self, parent=None ):
QtGui.QWidget.__init__( self, parent )
self.setLayout( QtGui.QVBoxLayout() )
self.resize( 600, 300 )
self.myTable = QtGui.QTableWidget()
self.myTable.setColumnCount( 2 )
rowCount = len(get_all_geos())
self.myTable.setRowCount(rowCount)
self.setTable()
self.layout().addWidget(self.myTable)
self.myTable.cellChanged.connect(self.update)
def setTable(self):
# Adding the list of mesh found in scene into first column
for geo in get_all_geos():
item = cmds.listRelatives(geo, parent=True)[0]
for i in range(0, self.myTable.rowCount()):
# instead of being populated with the list of items, I got the same name for the entire column
self.myTable.setItem(i, 0, QtGui.QTableWidgetItem(item))
# sets the combobox into the second column
box = combo_box()
nameList = ("test1","test2","test3")
box.addItems(nameList)
self.myTable.setCellWidget(i,1,box)
box.currentIndexChanged.connect(partial(self.tmp, i))
def tmp(self, rowIndex, comboBoxIndex):
item = "item " + str(comboBoxIndex)
self.myTable.setItem(rowIndex, 2, QtGui.QTableWidgetItem(item))
if __name__ == "__main__":
tableView = TestTable()
tableView.show()
</code></pre>
<p>In my <code>setTable</code> function, the <code>item</code> is not being processed correctly when I am trying to add it into the QTableWidget. Can someone advise?</p>
<p>Additionally, if anyone could answer: would the format I have used be applicable for the scenario I am trying to achieve, as mentioned at the start of the post?</p>
| 0 |
2016-09-21T23:31:46Z
| 39,630,491 |
<p>In your <code>setTable()</code> method, you are looping through the geometries, then you are looping through the rows. Since each geometry represents a row you only really need to loop through them and remove the other loop.</p>
<p>Modifying it like so fixes the output:</p>
<pre><code>def setTable(self):
# Adding the list of mesh found in scene into first column
geos = get_all_geos()
for i in range(0, len(geos)):
item = cmds.listRelatives(geos[i], parent=True)[0]
# instead of being populated with the list of items, I got the same name for the entire column
self.myTable.setItem(i, 0, QtGui.QTableWidgetItem(item))
# sets the combobox into the second column
box = combo_box()
nameList = ("test1","test2","test3")
box.addItems(nameList)
self.myTable.setCellWidget(i,1,box)
box.currentIndexChanged.connect(partial(self.tmp, i))
</code></pre>
<p>The reason it was failing is that your inner loop kept overwriting every row with the last geo in the list.</p>
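<p>The overwrite can be reproduced without Maya or Qt at all; this sketch uses plain lists (the geo names are made up) to show why the nested loop fills every row with the last item, while a single <code>enumerate</code> loop puts one item per row:</p>

```python
geos = ['pCube1', 'pCube2', 'pCube3', 'pCube4', 'pCube5']

# Buggy version: the inner loop runs once per geo, so every pass
# overwrites all rows and only the last geo survives
rows = [None] * len(geos)
for geo in geos:
    for i in range(len(rows)):
        rows[i] = geo
print(rows)  # ['pCube5', 'pCube5', 'pCube5', 'pCube5', 'pCube5']

# Fixed version: one row per geo
rows = [None] * len(geos)
for i, geo in enumerate(geos):
    rows[i] = geo
print(rows)  # ['pCube1', 'pCube2', 'pCube3', 'pCube4', 'pCube5']
```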
| 1 |
2016-09-22T04:50:22Z
|
[
"python",
"pyqt",
"maya"
] |
read number of users from OS using python
| 39,628,113 |
<p>I am writing a nagios plugin that would exit based on number of users that are logged into my instance.</p>
<pre><code>import argparse
import subprocess
import os
import commands
import sys
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='checks number of logged in users')
parser.add_argument('-c','--critical', help='enter the value for critical limit', nargs='?', const=10)
parser.add_argument('-w','--warning', help='enter the value for warning limit', nargs='?', const=5)
args = parser.parse_args()
x= commands.getstatusoutput("users | tr ' ' '\n' | sort -u | wc -l")
a= os.popen("users | tr ' ' '\n' | sort -u | wc -l").read()
print type(a)
print "value of critical is %s" % (args.critical)
print a
(co, res) = x
print "result from command is %s" % (res)
print type(res)
if a >= args.critical:
print "we are critical"
sys.exit(2)
elif a <= args.warning:
print " we are ok"
sys.exit(0)
elif (a >= args.warning and a < args.critical):
print "we are warning"
sys.exit(1)
else:
print "Unkown"
sys.exit(3)
</code></pre>
<p>However, the issue is that for my if statements the result I am getting from commands.getstatusoutput or os.popen is a string. How can I get the actual number of users from a shell command?</p>
<pre><code>python test.py -c
<type 'str'>
value of critical is 10
1
result from command is 1
<type 'str'>
we are critical
</code></pre>
| 0 |
2016-09-21T23:43:25Z
| 39,628,156 |
<p>To convert a string to an integer, use the function int(). For instance: <code>res_integer = int(res)</code></p>
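<p>The conversion matters because without it the comparisons in the question compare strings, and string comparison is lexicographic rather than numeric. A quick sketch (the sample values are made up):</p>

```python
# Lexicographic string comparison: '9' sorts after '1', so "9" > "10"
assert "9" > "10"

# After int() conversion the comparison is numeric, as intended
res_integer = int("9")
assert res_integer < int("10")
```

<p>In the question's Python 2 code both sides happen to be strings (argparse returns strings), which is exactly how "1" logged-in users can look "critical" against a limit of "10".</p>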
| 1 |
2016-09-21T23:48:25Z
|
[
"python",
"shell",
"python-os"
] |
Is there a way to set a tag on an EC2 instance while creating it?
| 39,628,128 |
<p>Is there a way to create an EC2 instance with tags (I mean, adding a tag as a parameter when creating the instance)?</p>
<p>I can't find this function in the boto APIs. According to the documentation, we can only add tags after creation. </p>
<p>However, when creating in the browser, we can configure the tags at creation time. So can we do the same thing in boto? (In our course we are required to tag our resources at creation, for billing-monitoring purposes, so adding tags afterwards is not allowed.....)</p>
| 0 |
2016-09-21T23:44:44Z
| 39,649,819 |
<p>At the time of writing, there is no way to do this in a single operation.</p>
| 0 |
2016-09-22T22:25:23Z
|
[
"python",
"amazon-web-services",
"amazon-ec2",
"boto"
] |
Giving input and output files for batch processing?
| 39,628,131 |
<p>I have more than 100 .txt files in a directory, and I want to run the same python script on each one of the files. Right now I have to type a similar command over 100 times, because there is a slight variation in each command: the input and output file names are different. I was wondering if this could be done automatically.</p>
<p>My code looks like this:</p>
<pre><code>import pandas as pd
import numpy as np
import os
import argparse
parser = argparse.ArgumentParser(description='Excelseq ')
parser.add_argument('-i','--txt', help='Input file name',required=True)
parser.add_argument('-o','--output',help='output file name', required=True)
args = parser.parse_args()
df = pd.read_csv(args.txt, sep='\t' )
f=open('VD.fasta', "r+")
out = open(args.output, "w")
for line in f:
title = line[1:]
title = title.rstrip()
seq = f.readline()
seq = seq.rstrip()
if df['ReadID'].str.contains(title).any():
out.write('>{0}\n{1}\n'.format(title,seq))
</code></pre>
<p>The code takes one input file, given by <code>-i</code>; it is a .txt file that is read into <code>df</code>, and the script checks whether the <code>ReadID</code> from the .txt file is in the .fasta file. If it is, the script will print out the <code>title</code> and <code>seq</code>. But for each output file, I would like the name to be the same as the .txt file but with a .fasta extension.</p>
<p>For example:</p>
<pre><code>input file1 : H100.txt
output file1: H100.fasta
input file2 : H101.txt
output file2: H101.fasta
input file3: H102.txt
output file3: H102.fasta
...
</code></pre>
<p><strong>How would I automate this for over 100 files? Each run takes a long time and I don't want to sit in front of the computer to wait for it to finish and then run the next.</strong></p>
| 0 |
2016-09-21T23:44:50Z
| 39,628,715 |
<p>I couldn't test this because I don't have the input files nor do I have all the third party modules installed that you do. However it should be close to what you should do, as I was trying to explain in the comments.</p>
<pre><code>import glob
import numpy as np
import os
import pandas as pd
import sys
def process_txt_file(txt_filename, f):
root, ext = os.path.splitext(txt_filename)
fasta_filename = root + '.fasta'
print('processing {} -> {}'.format(txt_filename, fasta_filename))
df = pd.read_csv(txt_filename, sep='\t' )
with open(fasta_filename, "w") as out:
f.seek(0) # rewind
for line in f:
title = line[1:].rstrip()
seq = f.readline().rstrip()
if df['ReadID'].str.contains(title).any():
out.write('>{0}\n{1}\n'.format(title, seq))
if __name__ == '__main__':
if len(sys.argv) != 2:
print('usage: {} <path-to-txt-files-directory>'.format(sys.argv[0]))
sys.exit(2)
with open('VD.fasta', "r+") as f:
        for input_filename in glob.glob(os.path.join(sys.argv[1], 'H*.txt')):
process_txt_file(input_filename, f)
</code></pre>
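<p>The filename mapping itself is just <code>os.path.splitext</code>; a tiny sketch of how each .txt name becomes the matching .fasta name:</p>

```python
import os

def fasta_name(txt_filename):
    # Swap the extension, keeping the rest of the path untouched
    root, ext = os.path.splitext(txt_filename)
    return root + '.fasta'

print(fasta_name('H100.txt'))       # H100.fasta
print(fasta_name('data/H101.txt'))  # data/H101.fasta
```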
| 0 |
2016-09-22T01:05:16Z
|
[
"python",
"batch-processing"
] |
Selenium Webdriver Python - implicit wait is not clear to me
| 39,628,191 |
<p>Other people have asked this question and there are some answers but they do not clarify one moment. Implicit wait will wait for a specified amount of time if element is not found right away and then will run an error after waiting for the specified amount of time. Does it mean that implicit wait checks for the element the very first second and then waits for the specified time and checks at the last second again? </p>
<p>I know that explicit wait polls the DOM every 500ms. What is the practical use of implicit wait if tests take longer with it? </p>
| 0 |
2016-09-21T23:54:26Z
| 39,629,897 |
<p>In the case of implicit wait, the driver waits until elements appear in the DOM, but it does not guarantee that those elements are usable. Elements might not yet be enabled for use (like a button click) or might not have a shape defined at that time. </p>
<p>When using Selenium we are not interested in every element on the page, and some elements might not even have a shape. But the presence of all the elements in the DOM is important for the other elements to work correctly; hence the implicit wait. </p>
<p>When working with any particular element, we use an explicit wait (WebDriverWait) or FluentWait.</p>
| 0 |
2016-09-22T03:44:48Z
|
[
"python",
"selenium",
"selenium-webdriver"
] |
Selenium Webdriver Python - implicit wait is not clear to me
| 39,628,191 |
<p>Other people have asked this question and there are some answers but they do not clarify one moment. Implicit wait will wait for a specified amount of time if element is not found right away and then will run an error after waiting for the specified amount of time. Does it mean that implicit wait checks for the element the very first second and then waits for the specified time and checks at the last second again? </p>
<p>I know that explicit wait polls the DOM every 500ms. What is the practical use of implicit wait if tests take longer with it? </p>
| 0 |
2016-09-21T23:54:26Z
| 39,630,409 |
<p><strong>Implicit Wait</strong> is internal to selenium. You set it once while initializing. Then every time the web driver tries to look for an element, it will look for that element continuously (with some polling) until the 'implicit wait' timer expires. If the element is found then it resumes execution, otherwise it throws a <code>NoSuchElement</code> exception. So if it finds the element in the first second it will come out of the wait loop; if it still has not found the element by the last second, it will throw the exception.</p>
<p><strong>Explicit Wait</strong> is used for scenarios where it is required to wait for a certain condition to be true, e.g. visibility of an element. Its scope is limited to that particular call only.</p>
<p>You can look for <a href="http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp" rel="nofollow">Selenium documentation</a> for more details and examples</p>
| 0 |
2016-09-22T04:42:14Z
|
[
"python",
"selenium",
"selenium-webdriver"
] |
Histograms in Pandas
| 39,628,242 |
<p>Relatively new to python and pandas. I have a dataframe: <code>df</code> with say 2 columns (say, <code>0</code> and <code>1</code>) and n rows. I'd like to plot the histograms of the two time series data represented in the two columns. I also need access to the exact counts in the histogram for each bin for later manipulations.</p>
<pre><code>b_counts, b_bins = np.histogram(df[0], bins = 10)
a_counts, a_bins = np.histogram(df[1], bins = 10)
plt.bar(b_bins, b_counts)
plt.pbar(a_bins, a_counts)
</code></pre>
<p>However I get an error for incompatible sizes, i.e., the length of the bins array is 11 whereas the length of the counts array is 10. Two questions:
1) Why does the histogram in numpy return an extra bin? i.e., 11 instead of 10 bins
2) Assuming question 1) above can be solved, is this the best/simplest way of going about this?</p>
| 0 |
2016-09-22T00:02:59Z
| 39,630,751 |
<p>I would directly use Pyplot's built in <a href="http://matplotlib.org/1.2.1/examples/pylab_examples/histogram_demo.html" rel="nofollow">histogram</a> function:</p>
<pre><code>b_counts, b_bins, _ = plt.hist(df[0], bins = 10)
a_counts, a_bins, _ = plt.hist(df[1], bins = 10)
</code></pre>
<hr>
<p>As per the documentation of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html" rel="nofollow">numpy.histogram</a> (if you scroll down far enough to read the <code>Returns</code> section in parameter definition):</p>
<blockquote>
<p><strong>hist</strong> : <em>array</em> The values of the histogram. See density and weights for
a description of the possible semantics.</p>
<p><strong>bin_edges</strong> : <em>array of dtype
float</em> Return the bin edges <code>(length(hist)+1)</code>.</p>
</blockquote>
<p>Quite clear, isn't it?</p>
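<p>The off-by-one is just the fence-post effect: N bins are bounded by N+1 edges. A numpy-free sketch of counting into 10 bins over [0, 10) (the sample values are made up):</p>

```python
# 10 bins over [0, 10) are bounded by 11 edges: 0, 1, ..., 10
edges = list(range(11))
data = [0.5, 1.5, 1.7, 9.9, 3.2]  # made-up sample values

counts = [0] * (len(edges) - 1)
for x in data:
    for i in range(len(counts)):
        # a value belongs to the bin whose two edges bracket it
        if edges[i] <= x < edges[i + 1]:
            counts[i] += 1
            break

print(len(edges), len(counts))  # 11 10
```

<p>That is also why slicing off one edge, e.g. <code>plt.bar(b_bins[:-1], b_counts)</code>, would have made the arrays in the question compatible again.</p>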
| 1 |
2016-09-22T05:14:57Z
|
[
"python",
"pandas",
"numpy",
"histogram",
"bins"
] |
How to save a value to be used after app.exec() exits in python?
| 39,628,267 |
<p>I would like to save/store mouse event values that were created in app.exec() as it was running. I would like to use the following code that I got from a post that I am having trouble finding now.(Will update with link to post where this code came from, once I find it.)</p>
<pre><code>import sys
from PyQt4 import QtGui, QtCore
from PyQt4.QtGui import *
from PyQt4.QtCore import *
import numpy as np
class DrawImage(QMainWindow):
def __init__(self,fName, parent=None):
## Default values
self.x = 0
self.y = 0
super(QMainWindow, self).__init__(parent)
self.setWindowTitle('Select Window')
self.local_image = QImage(fName)
self.local_grview = QGraphicsView()
self.setCentralWidget( self.local_grview )
self.local_scene = QGraphicsScene()
self.image_format = self.local_image.format()
#self.pixMapItem = self.local_scene.addPixmap( QPixmap(self.local_image) )
self.pixMapItem = QGraphicsPixmapItem(QPixmap(self.local_image), None, self.local_scene)
self.local_grview.setScene( self.local_scene )
self.pixMapItem.mousePressEvent = self.pixelSelect
def pixelSelect( self, event ):
# print(event.pos().x(), event.pos().y())
self.x = event.pos().x()
self.y = event.pos().y()
print(self.x, self.y)
def main():
# Initialize
fName = "N2-600-PSI-V1-40-30ms-1.tiff"
app = QtGui.QApplication(sys.argv)
form = DrawImage(fName)
form.show()
app.exec_()
x,y = app.exec_()
print(x,y)
return
if __name__ == '__main__':
main()
</code></pre>
<p>My first attempt was to create two global variables, which then I used in the pixelSelect function to hold the values from event.pos().x() and .()y.</p>
<p>This worked however... ultimately I would like to pass more than just one set of coordinates out of the app.exec() loop...(process?? its a strange beast)</p>
<p>So from this point I have tried several different methods to pass an array into the app.exec() to save more values. The best result I have gotten thus far has been by using a global array and trying to make a for loop happen in the the DrawImage class.</p>
<p>Any pointers would be great :)</p>
<p>Have a good one!</p>
| 0 |
2016-09-22T00:05:06Z
| 39,641,383 |
<p><code>app.exec_()</code> is not doing anything magical, it simply starts the event loop that handles all the GUI events. Your main function will block here until the GUI is shut down and the event loop exits. At this point the objects you have created are still in scope, that is, they have not been garbage collected, they just can't do anything that relies on the Qt event loop.</p>
<p>You can simply access the members of <code>DrawImage</code> to retrieve what you need.</p>
<pre><code>from PyQt4 import QtCore, QtGui
class DrawImage(QtGui.QMainWindow):
def __init__(self, fname, **kwargs):
super(DrawImage, self).__init__(**kwargs)
self.setWindowTitle('Select Window')
scene = QtGui.QGraphicsScene()
gview = QtGui.QGraphicsView()
gview.setScene(scene)
self.setCentralWidget(gview)
image = QtGui.QImage(fname)
pixmapitem = QtGui.QGraphicsPixmapItem(QtGui.QPixmap(image), None, scene)
pixmapitem.mousePressEvent = self.pixelSelect
self.points = []
def pixelSelect(self, event):
x, y = event.pos().x(), event.pos().y()
self.points.append((x, y))
print x, y
if __name__ == '__main__':
fname = 'N2-600-PSI-V1-40-30ms-1.tiff'
app = QtGui.QApplication([])
form = DrawImage(fname)
form.show()
app.exec_()
for point in form.points:
print point
</code></pre>
<p>Presumably you want to actually <em>do something</em> with the values your application generates though, rather than just print them. There's no reason why you can't (and you should!) handle any further processing from within the Qt application.</p>
| 0 |
2016-09-22T14:08:33Z
|
[
"python",
"qt",
"user-interface",
"pyqt"
] |
Why does the swapping work one way but not the other?
| 39,628,275 |
<p>For this <a href="https://leetcode.com/problems/first-missing-positive/" rel="nofollow">question</a> on leetcode, I attempted to solve with Python 2.7 with the code at the bottom of the post. However, for an input of <code>[2,1]</code>, the function will loop forever. But, if I change the line doing the swap, and switch the variables so the order is the opposite, the swapping will actually work and the function executes correctly.</p>
<p>So currently the code has the swap as: <code>nums[i], nums[nums[i]-1] = nums[nums[i]-1], nums[i]</code>, which doesn't work (this is in the <code>while</code> loop). If I change the order of swapping to <code>nums[nums[i]-1], nums[i] = nums[i], nums[nums[i]-1]</code>, the swap/assignment does work. Why is that? I looked on SO and it seemed like Python's <code>a,b=b,a</code> swap works by putting both <code>a</code> and <code>b</code> on the stack (evaluating the right side of <code>=</code> first) and then reassigning them <a href="http://stackoverflow.com/questions/21047524/how-does-swapping-of-members-in-the-python-tuples-a-b-b-a-work-internally">(description here)</a>. If that is how it works, then why shouldn't <code>b,a=a,b</code> achieve the same effects?</p>
<p>On Leetcode's online judge, my current (looping forever way) freezes the page. I tried it on my local Python 2.7 environment and it also loops forever. I tested that <code>a,b=b,a</code> is equivalent to <code>b,a=a,b</code> in my environment. So then - why does my code below loop forever when the swap is in one order and work perfectly in the other order?</p>
<pre><code>def firstMissingPositive(nums):
"""
:type nums: List[int]
:rtype: int
"""
if len(nums) == 1:
if nums[0] != 1:
return 1
else:
return 2
i = 0
while i < len(nums):
if nums[i] > 0 and nums[i] - 1 < len(nums) and nums[i] != nums[nums[i]-1]:
#Line below does not work
nums[i], nums[nums[i]-1] = nums[nums[i]-1], nums[i]
#=>> ??But this works?? # nums[nums[i]-1], nums[i] = nums[i], nums[nums[i]-1]
else:
i += 1
for i, int in enumerate(nums):
if int != i + 1:
return i + 1
return len(nums) + 1
</code></pre>
| 1 |
2016-09-22T00:06:01Z
| 39,628,302 |
<p>The use of <strong>nums[i]-1</strong> as a subscript for <strong>nums</strong> introduces an extra evaluation that isn't in the order you want. Run a simple test, such as on the list [1, 2, 3, 4, 5, 6, 7] with just a few of these statements, and you'll see the result.</p>
<p>If you handle just one intermediate operation, I think you'll get the semantics you want:</p>
<pre><code>index = nums[i]
nums[i], nums[index-1] = nums[index-1], nums[i]
</code></pre>
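<p>With the failing input <code>[2, 1]</code> from the question you can watch this happen. The right-hand side tuple is built first in both cases, but the targets are assigned left to right, so in the broken order the first assignment changes <code>nums[i]</code> before <code>nums[i]-1</code> is evaluated as the second target's subscript:</p>

```python
# Broken order: assigning nums[0] = 1 first makes the second target
# nums[nums[0]-1] resolve back to nums[0], undoing the swap
nums = [2, 1]
i = 0
nums[i], nums[nums[i]-1] = nums[nums[i]-1], nums[i]
print(nums)  # [2, 1] -- unchanged, hence the infinite loop

# Freezing the subscript first gives the intended swap
nums = [2, 1]
i = 0
index = nums[i]
nums[i], nums[index-1] = nums[index-1], nums[i]
print(nums)  # [1, 2]
```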
| 1 |
2016-09-22T00:08:52Z
|
[
"python",
"python-2.7"
] |
Drawing a rectangle representing a value in python?
| 39,628,294 |
<p>How do I plot a rectangle with color representing a value like this picture in python?
<a href="http://i.stack.imgur.com/OLvkW.png" rel="nofollow"><img src="http://i.stack.imgur.com/OLvkW.png" alt="enter image description here"></a> </p>
<p>I have shown a rectangle but am still trying to display values in the rectangle! </p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
fig = plt.figure()
ax = fig.add_subplot(111)
rect1 = matplotlib.patches.Rectangle((0,1), 5, 0.5, color='c')
ax.add_patch(rect1)
ax.grid()
plt.xlim([-5, 20])
plt.ylim([-5, 6])
plt.show()
someX, someY = 0.5, 0.5
plt.figure()
currentAxis = plt.gca()
currentAxis.add_patch(Rectangle((someX - .1, someY - .1), 0.2, 0.2, color = 'c', alpha=0.5))
plt.grid()
plt.show()
</code></pre>
<p>The result:
<a href="http://i.stack.imgur.com/nyhR1.png" rel="nofollow"><img src="http://i.stack.imgur.com/nyhR1.png" alt="enter image description here"></a></p>
| 0 |
2016-09-22T00:07:58Z
| 39,628,391 |
<p>You could, for example, use the text function from the matplotlib library.</p>
<p><a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.text" rel="nofollow">http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.text</a></p>
<p>You can also find another solution in a similar post here:</p>
<p><a href="http://stackoverflow.com/questions/14531346/how-to-add-a-text-into-a-rectangle">How to add a text into a Rectangle?</a></p>
| 0 |
2016-09-22T00:21:19Z
|
[
"python",
"matplotlib"
] |
Print values in Python debugger
| 39,628,298 |
<p>In my Python code, I have this call inside a bounded method:</p>
<pre><code>instances = instance_objects.InstanceList().get_by_host(ctxt, self.host)
</code></pre>
<p>When I debug with the Python debugger (pdb) and I issue <code>p instances</code> i get this output:</p>
<pre><code>InstanceList(objects=[Instance(bdfbf658-da32-445d-9560-56d496abcb9d)])
</code></pre>
<p>When I issue <code>p instances.objects</code> i get this output:</p>
<pre><code>[Instance(
access_ip_v4=None,
access_ip_v6=None,
architecture=None,
auto_disk_config=False,
availability_zone=None,
cell_name=None,
cleaned=False,
vcpus=1,
)]
</code></pre>
<p>How can I print out the value of vcpus in pdb?</p>
| 2 |
2016-09-22T00:08:24Z
| 39,641,562 |
<p>Try</p>
<pre><code>p instances.objects[0].vcpus
</code></pre>
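<p>Outside the debugger the same expression is just ordinary attribute and index access; a minimal stand-in (made-up classes mimicking the shape of the nova objects) shows the chain:</p>

```python
# Hypothetical stand-ins for the real nova objects, just to show the access path
class Instance(object):
    def __init__(self, vcpus):
        self.vcpus = vcpus

class InstanceList(object):
    def __init__(self, objects):
        self.objects = objects

instances = InstanceList([Instance(vcpus=1)])
print(instances.objects[0].vcpus)  # 1
```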
| 1 |
2016-09-22T14:16:52Z
|
[
"python",
"python-2.7",
"pdb"
] |
Run paho mqtt client loop_forever
| 39,628,305 |
<p>I am trying to run the following code on loop continuously. But the following code only runs once and takes only one message entry. </p>
<p>What I am trying to do inside the on_message function is run a cron task using python apscheduler. </p>
<pre><code>def on_message(mqttc, obj, msg):
global val
val = str(msg.payload)
print val
dow = val[0:3]
print dow
hr = val[4:6]
print hr
min = val[7:9]
print min
status = val[10:11]
print status
def plugON():
publish.single("plug/status","0", hostname="localhost")
def plugOFF():
publish.single("plug/status","1", hostname="localhost")
def cronon():
print "cron on"
def cronoff():
print "cron off"
if status == '0':
sched.add_job(plugON, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=min, id='plugon')
sched.add_job(cronon, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=min)
if status == '1':
sched.add_job(plugOFF, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=min, id='plugoff')
sched.add_job(cronoff, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=min)
sched.start()
</code></pre>
<p>The mqtt connect script:</p>
<pre><code>mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe
mqttc.connect("localhost", 1883, 60)
mqttc.subscribe("plug/#", 0)
#mqtt loop
mqttc.loop_forever()
</code></pre>
<p>During execution, it connects to localhost and takes a single entry. On sending something like <strong>thu:05:47:0</strong>, it waits until 5:47 to run plugON/plugOFF. At 5:47 it runs the function and disconnects from localhost. </p>
<p>How can I get my code to take another entry and continue the loop?</p>
| 1 |
2016-09-22T00:09:25Z
| 39,845,778 |
<p>a few issues.</p>
<p>you have not detailed which version of python, apscheduler, mqtt and you have omitted your imports and some useful functions for troubleshooting like on_connect</p>
<p>So testing this on python3 and apscheduler 3.2, I think that you:</p>
<ol>
<li>you are using BlockingScheduler instead of BackgroundScheduler (this is likely why you see it all halt on the first msg). The BackgroundScheduler will not stop and wait for the event.</li>
<li>you are starting the scheduler in your function, do this once when you declare sched. you can add_job later no worries.</li>
<li>you have not declared sched (or you cut it out of your paste)</li>
<li>you failed to call publish correctly with your declared mqtt instance, or you're doing something else that I can't see in your pasted code.</li>
<li>paho publish has no method called 'single'?</li>
<li>your static id on the add_job will cause you issues scheduling a second job: an exception will be thrown from the id being reused. But I don't know the details of your use.</li>
</ol>
<p>if you are using aps<3 then looking at <a href="https://apscheduler.readthedocs.io/en/latest/migration.html" rel="nofollow">this</a> you should be able to set <code>standalone=false</code></p>
<blockquote>
<p>The concept of "standalone mode" is gone. For standalone=True, use
BlockingScheduler instead, and for standalone=False, use
BackgroundScheduler. BackgroundScheduler matches the old default
semantics.</p>
</blockquote>
<pre><code>#!/usr/bin/env python
import paho.mqtt.client as mqtt
#from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
# Start the scheduler
sched = BackgroundScheduler()
sched.start()
def on_message(mqttc, obj, msg):
global val
val = str(msg.payload.decode('utf-8'))
dow = val[0:3]
hr = val[4:6]
minu = val[7:9]
status = val[10:11]
print(str(val) +" "+ dow +" "+ hr +" "+ minu +" "+ status)
def plugON():
mqttc.publish("plug/status","0")
def plugOFF():
mqttc.publish("plug/status","1")
def cronon():
print("cron on")
def cronoff():
print("cron off")
try:
if status == '0':
sched.add_job(plugON, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=minu, id='plugon')
sched.add_job(cronon, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=minu)
if status == '1':
sched.add_job(plugOFF, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=minu, id='plugoff')
sched.add_job(cronoff, trigger='cron', year='*', month='*', day='*', week='*', day_of_week=dow, hour=hr, minute=minu)
except:
print("whoops")
sched.print_jobs()
mqttc = mqtt.Client()
mqttc.on_message = on_message
#mqttc.on_connect = on_connect
#mqttc.on_publish = on_publish
#mqttc.on_subscribe = on_subscribe
mqttc.connect("local", 1883, 60)
mqttc.subscribe("plug/#")
#mqtt loop
mqttc.loop_forever()
</code></pre>
<p>code <strong>TESTED</strong> as working on python3.5.2 and apschedulerv3.2 and paho-mqttv1.2</p>
<p><strong>TL:DR</strong></p>
<ul>
<li>you are using apsheduler in blocking mode</li>
<li>you are starting the scheduler on every on_message call</li>
</ul>
| 0 |
2016-10-04T06:44:25Z
|
[
"python",
"apscheduler"
] |
Optimizing a way to find all permutations of a string
| 39,628,308 |
<p>I solved a puzzle but need to optimize my solution. The puzzle says that I am to take a string <em>S</em>, find all permutations of its characters, sort my results, and then return the one-based index of where <em>S</em> appears in that list.</p>
<p>For example, the string 'bac' appears in the 3rd position in the sorted list of its own permutations: <code>['abc', 'acb', 'bac', 'bca', 'cab', 'cba']</code>.</p>
<p>My problem is that the puzzle limits my execution time to 500ms. One of the test cases passed "BOOKKEEPER" as an input, which takes ~4.2s for me to complete.</p>
<p>I took a (possibly naive) dynamic programming approach, memoizing with a dict keyed by one particular permutation of some character set, but that's not enough.</p>
<p><strong>What is my bottleneck?</strong></p>
<p>I'm profiling in the meantime to see if I can answer my own question, but I invite those who see the problem outright to help me understand how I slowed this down.</p>
<p><strong>EDIT:</strong> My solution appears to outperform <code>itertools.permutations</code>. 10+ seconds for input "QUESTION". But to be fair, this includes time printing so this might not be a fair comparison. Even so, I'd rather submit a handcoded solution with competitive performance knowing why mine was worse than to opt for a module.</p>
<pre><code>memo = {}
def hash(word):
return ''.join(sorted(word))
def memoize(word, perms):
memo[hash(word)] = perms
return perms
def permutations(word, prefix = None):
"""Return list of all possible permutatons of given characters"""
H = hash(word)
if H in memo:
return [s if prefix is None else prefix + s for s in memo[H]]
L = len(word)
if L == 1:
return [word] if prefix is None else [prefix + word]
elif L == 2:
a = word[0] + word[1]
b = word[1] + word[0]
memoize(word, [a, b])
if prefix is not None:
a = prefix + a
b = prefix + b
return [a, b]
perms = []
for i in range(len(word)):
perms = perms + permutations(word[:i] + word[i+1:], word[i])
memoize(word, perms)
return [prefix + s for s in perms] if prefix is not None else perms
def listPosition(word):
"""Return the anagram list position of the word"""
return sorted(list(set(permutations(word)))).index(word) + 1
print listPosition('AANZ')
</code></pre>
| 1 |
2016-09-22T00:09:48Z
| 39,628,764 |
<p>Your bottleneck resides in the fact that the number of permutations of a list of N items is N! (N factorial). This number grows very fast as the input increases.</p>
<p>The first optimisation you can do is that you do not have to store all the permutations. Here is a recursive solution that produces all the permutations already sorted. The "trick" is to sort the letters of the word before generating the permutations.</p>
<pre><code>def permutations_sorted( list_chars ):
if len(list_chars) == 1: # only one permutation for a 1-character string
yield list_chars
elif len(list_chars) > 1:
list_chars.sort()
for i in range(len(list_chars)):
# use each character as first position (i=index)
head_char = None
tail_list = []
for j,c in enumerate(list_chars):
if i==j:
head_char = c
else:
tail_list.append(c)
# recursive call, find all permutations of remaining
for tail_perm in permutations_sorted(tail_list):
yield [ head_char ] + tail_perm
def puzzle( s ):
print "puzzle %s" % s
results = []
for i,p_list in enumerate(permutations_sorted(list(s))):
p_str = "".join(p_list)
if p_str == s:
results.append( i+1 )
print "string %s was seen at position%s %s" % (
s,
"s" if len(results) > 1 else "",
",".join(["%d" % i for i in results])
)
print ""
if __name__ == '__main__':
puzzle("ABC")
</code></pre>
<p>Still, that program takes a long time to run when the input is large. On my computer (2.5 GHz Intel core i5)</p>
<ul>
<li>Input = "ABC" (3 characters): 0.03 seconds</li>
<li>Input = "QUESTION" (8 characters): 0.329 seconds</li>
<li>Input = "QUESTIONS" (9 characters): 2.848 seconds</li>
<li>Input = "BOOKKEEPER" (10 characters): 30.47 seconds</li>
</ul>
<p>The only way to "beat the clock" is to figure a way to compute the position of the string <strong>without</strong> generating all the permutations.</p>
<p>See the comment by Evert above.</p>
<p>N.B. When the input contains letters that are repeated, the initial string is seen in more than one place. I assume you have to report only the first occurrence.</p>
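<p>As a concrete illustration of that idea, the position can be computed directly from the counts of the remaining letters, without generating a single permutation. This is a sketch of the standard multiset-rank technique (Python 3; names are mine):</p>

```python
from collections import Counter
from math import factorial

def multiset_perms(counts):
    # Number of distinct permutations of a multiset, given its letter counts
    total = factorial(sum(counts.values()))
    for v in counts.values():
        total //= factorial(v)
    return total

def rank(word):
    counts = Counter(word)
    position = 1
    for ch in word:
        # Every permutation starting with a smaller remaining letter sorts earlier
        for smaller in counts:
            if smaller < ch:
                counts[smaller] -= 1
                position += multiset_perms(counts)
                counts[smaller] += 1
        counts[ch] -= 1
        if counts[ch] == 0:
            del counts[ch]
    return position

print(rank('bac'))   # 3
print(rank('AANZ'))  # 1
```

<p>Because this only does arithmetic on letter counts, even a 10-character input like "BOOKKEEPER" ranks instantly.</p>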
| 1 |
2016-09-22T01:13:05Z
|
[
"python",
"optimization"
] |
Optimizing a way to find all permutations of a string
| 39,628,308 |
<p>I solved a puzzle but need to optimize my solution. The puzzle says that I am to take a string <em>S</em>, find all permutations of its characters, sort my results, and then return the one-based index of where <em>S</em> appears in that list.</p>
<p>For example, the string 'bac' appears in the 3rd position in the sorted list of its own permutations: <code>['abc', 'acb', 'bac', 'bca', 'cab', 'cba']</code>.</p>
<p>My problem is that the puzzle limits my execution time to 500ms. One of the test cases passed "BOOKKEEPER" as an input, which takes ~4.2s for me to complete.</p>
<p>I took a (possibly naive) dynamic programming approach, memoizing with a dict keyed by one particular permutation of some character set, but that's not enough.</p>
<p><strong>What is my bottleneck?</strong></p>
<p>I'm profiling in the meantime to see if I can answer my own question, but I invite those who see the problem outright to help me understand how I slowed this down.</p>
<p><strong>EDIT:</strong> My solution appears to outperform <code>itertools.permutations</code>. 10+ seconds for input "QUESTION". But to be fair, this includes time printing so this might not be a fair comparison. Even so, I'd rather submit a handcoded solution with competitive performance knowing why mine was worse than to opt for a module.</p>
<pre><code>memo = {}
def hash(word):
return ''.join(sorted(word))
def memoize(word, perms):
memo[hash(word)] = perms
return perms
def permutations(word, prefix = None):
"""Return list of all possible permutatons of given characters"""
H = hash(word)
if H in memo:
return [s if prefix is None else prefix + s for s in memo[H]]
L = len(word)
if L == 1:
return [word] if prefix is None else [prefix + word]
elif L == 2:
a = word[0] + word[1]
b = word[1] + word[0]
memoize(word, [a, b])
if prefix is not None:
a = prefix + a
b = prefix + b
return [a, b]
perms = []
for i in range(len(word)):
perms = perms + permutations(word[:i] + word[i+1:], word[i])
memoize(word, perms)
return [prefix + s for s in perms] if prefix is not None else perms
def listPosition(word):
"""Return the anagram list position of the word"""
return sorted(list(set(permutations(word)))).index(word) + 1
print listPosition('AANZ')
</code></pre>
| 1 |
2016-09-22T00:09:48Z
| 39,628,894 |
<p>I believe the answer is to not produce all the permutations nor sort them. Let's keep it simple and see how it compares performance-wise:</p>
<pre><code>import itertools
def listPosition(string):
seen = set()
target = tuple(string)
count = 1;
for permutation in itertools.permutations(sorted(string)):
if permutation == target:
return count
if permutation not in seen:
count += 1
seen.add(permutation)
print(listPosition('BOOKKEEPER'))
</code></pre>
<p><strong>TIMINGS</strong> (in seconds)</p>
<pre><code> Sage/Evert Mine Sage Answer
QUESTIONS 0.02 0.18 0.45 98559
BOOKKEEPER 0.03 0.11 2.10 10743
ZYGOTOBLAST 0.03 24.4 117(*) 9914611
(*) includes ~25 second delay between printing of answer and program completion
</code></pre>
<p>The output from Sci Prog's code did not agree with the other two (it produced larger indexes, and multiple of them), so I didn't include its timings, which were lengthy.</p>
| 2 |
2016-09-22T01:31:12Z
|
[
"python",
"optimization"
] |
Optimizing a way to find all permutations of a string
| 39,628,308 |
<p>I solved a puzzle but need to optimize my solution. The puzzle says that I am to take a string <em>S</em>, find all permutations of its characters, sort my results, and then return the one-based index of where <em>S</em> appears in that list.</p>
<p>For example, the string 'bac' appears in the 3rd position in the sorted list of its own permutations: <code>['abc', 'acb', 'bac', 'bca', 'cab', 'cba']</code>.</p>
<p>My problem is that the puzzle limits my execution time to 500ms. One of the test cases passed "BOOKKEEPER" as an input, which takes ~4.2s for me to complete.</p>
<p>I took a (possibly naive) dynamic programming approach, memoizing with a dict keyed by one particular permutation of some character set, but that's not enough.</p>
<p><strong>What is my bottleneck?</strong></p>
<p>I'm profiling in the meantime to see if I can answer my own question, but I invite those who see the problem outright to help me understand how I slowed this down.</p>
<p><strong>EDIT:</strong> My solution appears to outperform <code>itertools.permutations</code>. 10+ seconds for input "QUESTION". But to be fair, this includes time printing so this might not be a fair comparison. Even so, I'd rather submit a handcoded solution with competitive performance knowing why mine was worse than to opt for a module.</p>
<pre><code>memo = {}
def hash(word):
return ''.join(sorted(word))
def memoize(word, perms):
memo[hash(word)] = perms
return perms
def permutations(word, prefix = None):
"""Return list of all possible permutatons of given characters"""
H = hash(word)
if H in memo:
return [s if prefix is None else prefix + s for s in memo[H]]
L = len(word)
if L == 1:
return [word] if prefix is None else [prefix + word]
elif L == 2:
a = word[0] + word[1]
b = word[1] + word[0]
memoize(word, [a, b])
if prefix is not None:
a = prefix + a
b = prefix + b
return [a, b]
perms = []
for i in range(len(word)):
perms = perms + permutations(word[:i] + word[i+1:], word[i])
memoize(word, perms)
return [prefix + s for s in perms] if prefix is not None else perms
def listPosition(word):
"""Return the anagram list position of the word"""
return sorted(list(set(permutations(word)))).index(word) + 1
print listPosition('AANZ')
</code></pre>
| 1 |
2016-09-22T00:09:48Z
| 39,629,213 |
<p>Providing my own answer under the assumption that a good way to optimize code is to not use it in the first place. Since I strongly emphasized identifying ways to speed up the code I posted, I'm upvoting everyone else for having made improvements in that light.</p>
<p>@Evert posted the following comment:</p>
<blockquote>
<p>I would think you can come up with a formula to calculate the position of the input word, based on the alphabetic ordering (since the list is sorted alphabetically) of the letters. If I understand the puzzle correctly, it only asks to return the position of the input, not all of the permutations. So you'll want to grab some pen and paper and find a formulation of that problem. </p>
</blockquote>
<p>Following this reasoning, among similar suggestions from others, I tried an approach based more on enumerative combinatorics:</p>
<pre><code>from math import factorial as F
from operator import mul
def permutations(s):
return F(len(s)) / reduce(mul, [F(s.count(c)) for c in set(s)], 1)
def omit(s,index):
return s[:index] + s[index+1:]
def listPosition(s):
if (len(s) == 1):
return 1
firstletter = s[0]
predecessors = set([c for c in s[1:] if c < firstletter])
startIndex = sum([permutations(omit(s, s.index(c))) for c in predecessors])
return startIndex + listPosition(s[1:])
</code></pre>
<p>This produced correct output and passed the puzzle at high speed (performance metrics were not recorded, but the difference was noticeable). Not a single string permutation was actually produced.</p>
<p>Take as an example input <code>QUESTION</code>:</p>
<p>We know that wherever "QUESTION" appears in the list, it will appear after all permutations that start with letters that come before "Q". The same can be said of substrings down the line.</p>
<p>I find the letters that come before <code>firstletter = 'Q'</code>, which is stored in <code>predecessors</code>. The <code>set</code> prevents double counting for input with repeated letters.</p>
<p>Then, we assume that each letter in <code>predecessors</code> acts as a prefix. If I omit that prefix from the string and find the sum of permutations of the remaining letters, we find the number of permutations that <em>must appear</em> before the initial input's <em>first letter</em>. Recurse, then sum the results, and you end up with the start position.</p>
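<p>For reference, here is a rough Python 3 port of the code above (as posted it is Python 2: <code>reduce</code> is a builtin there and <code>/</code> is integer division), checked against a brute-force <code>itertools</code> ranking for small inputs:</p>

```python
import itertools
from functools import reduce
from math import factorial as F
from operator import mul

def permutation_count(s):
    # Distinct permutations of the multiset of characters in s
    return F(len(s)) // reduce(mul, [F(s.count(c)) for c in set(s)], 1)

def omit(s, index):
    return s[:index] + s[index + 1:]

def list_position(s):
    if len(s) == 1:
        return 1
    first = s[0]
    predecessors = {c for c in s[1:] if c < first}
    start = sum(permutation_count(omit(s, s.index(c))) for c in predecessors)
    return start + list_position(s[1:])

def brute_force(s):
    # Slow reference implementation: enumerate, dedupe, sort, index
    return sorted(set(map(''.join, itertools.permutations(s)))).index(s) + 1

print(list_position('AANZ'))  # 1
print(list_position('bac'))   # 3
assert list_position('PEPPER') == brute_force('PEPPER')
```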
| 2 |
2016-09-22T02:17:38Z
|
[
"python",
"optimization"
] |
Parsing an irregularly spaced text file in Python pandas
| 39,628,453 |
<p>I have a text file that looks like :</p>
<pre><code>Date Fruit-type Color count
aug-6 apple green 4
aug-7 pear brown 5
aug-3 peach yellow 10
aug-29 orange orange 34
</code></pre>
<p>I would like to parse it to remove the irregular spaces into a nicely formatted pandas dataframe. I thought to remove the spaces and replace them with another delimiter but could not figure out the logic.</p>
<p>Desired output</p>
<pre><code>Date,Fruit-type,Color,count
aug-6,apple,green,4
aug-7,pear,brown,5
aug-3,peach,yellow,10
aug-29,orange,orange,34
</code></pre>
| 1 |
2016-09-22T00:29:38Z
| 39,628,581 |
<p>If you can use command line tools, you can run this <code>awk</code> command to turn it from space delimited to comma delimited.</p>
<pre><code>awk '{for (i=1; i<NF; i++){printf "%s,", $i} print $NF}' data.txt
</code></pre>
<p>Otherwise, pandas can import space delimited files easily.</p>
<pre><code>import pandas as pd
frame = pd.read_table('data.txt', sep='\s+')
</code></pre>
<p>With data.txt as:</p>
<pre><code>Date Fruit-type Color count
aug-6 apple green 4
aug-7 pear brown 5
aug-3 peach yellow 10
aug-29 orange orange 34
</code></pre>
<p>The output is </p>
<pre><code> Date Fruit-type Color count
0 aug-6 apple green 4
1 aug-7 pear brown 5
2 aug-3 peach yellow 10
3 aug-29 orange orange 34
</code></pre>
<p>You can read more here: <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files">http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files</a></p>
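<p>If the goal is literally the comma-separated text shown in the question (rather than a DataFrame), the whitespace runs can also be collapsed in plain Python, mirroring the awk one-liner above (a minimal sketch):</p>

```python
import re

def spaces_to_csv(text):
    # Collapse each run of spaces/tabs into a single comma, line by line
    lines = text.strip().splitlines()
    return '\n'.join(re.sub(r'[ \t]+', ',', line.strip()) for line in lines)

raw = """Date    Fruit-type   Color   count
aug-6   apple        green   4
aug-29  orange       orange  34"""

print(spaces_to_csv(raw))
# Date,Fruit-type,Color,count
# aug-6,apple,green,4
# aug-29,orange,orange,34
```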
| 5 |
2016-09-22T00:47:57Z
|
[
"python",
"pandas"
] |
Parsing an irregularly spaced text file in Python pandas
| 39,628,453 |
<p>I have a text file that looks like :</p>
<pre><code>Date Fruit-type Color count
aug-6 apple green 4
aug-7 pear brown 5
aug-3 peach yellow 10
aug-29 orange orange 34
</code></pre>
<p>I would like to parse it to remove the irregular spaces into a nicely formatted pandas dataframe. I thought to remove the spaces and replace them with another delimiter but could not figure out the logic.</p>
<p>Desired output</p>
<pre><code>Date,Fruit-type,Color,count
aug-6,apple,green,4
aug-7,pear,brown,5
aug-3,peach,yellow,10
aug-29,orange,orange,34
</code></pre>
| 1 |
2016-09-22T00:29:38Z
| 39,640,255 |
<pre><code>gawk '{gsub(/[[:blank:]]+/, ",")}1' file
Date,Fruit-type,Color,count
aug-6,apple,green,4
aug-7,pear,brown,5
aug-3,peach,yellow,10
aug-29,orange,orange,34
</code></pre>
| 0 |
2016-09-22T13:21:08Z
|
[
"python",
"pandas"
] |
Zen of Python: Errors should never pass silently. Why does zip work the way it does?
| 39,628,456 |
<p>I use python's function zip a lot in my code (mostly to create dicts like below) </p>
<pre><code>dict(zip(list_a, list_b))
</code></pre>
<p>I find it really useful, but sometimes it frustrates me because I end up with a situation where list_a is a different length to list_b. zip just goes ahead and zips together the two lists until it achieves a zipped list that is the same length as the shorter list, ignoring the rest of the longer list. This seems like it should be treated as an error in most circumstances, which according to the zen of python should never pass silently. </p>
<p>Given that this is such an integral function, I'm curious as to why it's been designed this way? Why isn't it treated as an error if you try to zip together two lists of different lengths?</p>
| 6 |
2016-09-22T00:30:01Z
| 39,628,556 |
<p>In my experience, the only reason that you would ever have two lists that happen to have the same length is because they were both constructed from the same source, e.g. they are <code>map</code>s of the same underlying source, they are constructed inside the same loop, etc. In these cases, rather than creating them separately and then zipping them, I usually just create a single pre-zipped list of tuples. Most of the times that I actually use zip, one of the iterables is infinite, and in these cases I'm glad that it lets me.</p>
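<p>A tiny sketch of that construct-together idea (the names here are purely illustrative):</p>

```python
words = ['alpha', 'beta', 'gamma']

# Build the pairs in one pass instead of zipping two parallel lists later;
# the lengths can never disagree because there is only one source
pairs = [(w, len(w)) for w in words]
lookup = dict(pairs)

print(lookup)  # {'alpha': 5, 'beta': 4, 'gamma': 5}
```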
| 0 |
2016-09-22T00:44:10Z
|
[
"python"
] |
Zen of Python: Errors should never pass silently. Why does zip work the way it does?
| 39,628,456 |
<p>I use python's function zip a lot in my code (mostly to create dicts like below) </p>
<pre><code>dict(zip(list_a, list_b))
</code></pre>
<p>I find it really useful, but sometimes it frustrates me because I end up with a situation where list_a is a different length to list_b. zip just goes ahead and zips together the two lists until it achieves a zipped list that is the same length as the shorter list, ignoring the rest of the longer list. This seems like it should be treated as an error in most circumstances, which according to the zen of python should never pass silently. </p>
<p>Given that this is such an integral function, I'm curious as to why it's been designed this way? Why isn't it treated as an error if you try to zip together two lists of different lengths?</p>
| 6 |
2016-09-22T00:30:01Z
| 39,628,603 |
<h2>Reason 1: Historical Reason</h2>
<p><code>zip</code> allows unequal-length arguments because it was meant to improve upon <code>map</code> by <em>allowing</em> unequal-length arguments. This behavior is the reason <code>zip</code> exists at all.</p>
<p>Here's how you did <code>zip</code> before it existed:</p>
<pre><code>>>> a = (1, 2, 3)
>>> b = (4, 5, 6)
>>> for i in map(None, a, b): print i
...
(1, 4)
(2, 5)
(3, 6)
>>> map(None, a, b)
[(1, 4), (2, 5), (3, 6)]
</code></pre>
<p>This is terrible behaviour, and <em>does not</em> support unequal-length lists. This was a major design concern, which you can see plain-as-day in <a href="https://www.python.org/dev/peps/pep-0201/#lockstep-for-loops">the official RFC proposing <code>zip</code> for the first time</a>:</p>
<blockquote>
<p>While the map() idiom is a common one in Python, it has several
disadvantages:</p>
<ul>
<li><p>It is non-obvious to programmers without a functional programming
background.</p></li>
<li><p>The use of the magic <code>None</code> first argument is non-obvious.</p></li>
<li><p>It has arbitrary, often unintended, and inflexible semantics when the
lists are not of the same length - the shorter sequences are padded
with <code>None</code> :</p>
<p><code>>>> c = (4, 5, 6, 7)</code></p>
<p><code>>>> map(None, a, c)</code></p>
<p><code>[(1, 4), (2, 5), (3, 6), (None, 7)]</code></p></li>
</ul>
</blockquote>
<p>So, no, this behaviour would not be treated as an error - it is <em>why it was designed in the first place</em>.</p>
<hr>
<h2>Reason 2: Practical Reason</h2>
<p>Because it is pretty useful, is clearly specified and doesn't have to be thought of as an error at all. </p>
<p>By allowing unequal lengths, <code>zip</code> only requires that its arguments conform to the <a href="http://stackoverflow.com/questions/16301253/what-exactly-is-pythons-iterator-protocol">iterator protocol</a>. This allows <code>zip</code> to be extended to generators, tuples, dictionary keys and literally anything in the world that implements <code>__next__()</code> and <code>__iter__()</code>, precisely because it doesn't inquire about length. </p>
<p>This is significant, because generators <em>do not</em> support <code>len()</code> and thus there is no way to check the length beforehand. Add a check for length, and you break <code>zip</code>s ability to work on generators, <em>when it should</em>. That's a fairly serious disadvantage, wouldn't you agree?</p>
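<p>For instance, truncating to the shortest input is exactly what lets <code>zip</code> pair a finite sequence with an infinite generator (a small Python 3 sketch). And for the cases where you do want an error, note that Python later grew a <code>zip(..., strict=True)</code> flag in 3.10 that raises <code>ValueError</code> on unequal lengths:</p>

```python
import itertools

# zip happily consumes a finite iterable alongside an infinite one,
# stopping at the shorter side -- an up-front length check would make
# this impossible, since itertools.count has no len()
pairs = list(zip('abc', itertools.count(1)))
print(pairs)  # [('a', 1), ('b', 2), ('c', 3)]
```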
<hr>
<h2>Reason 3: By Fiat</h2>
<p>Guido van Rossum wanted it this way:</p>
<blockquote>
<p><em>Optional padding.</em> An earlier version of this PEP proposed an optional pad keyword argument, which would be used when the argument sequences were not the same length. This is similar behavior to the map(None, ...) semantics except that the user would be able to specify pad object. <strong>This has been rejected by the BDFL in favor of always truncating to the shortest sequence, because of the KISS principle.</strong> If there's a true need, it is easier to add later. If it is not needed, it would still be impossible to delete it in the future.</p>
</blockquote>
<p>KISS trumps everything.</p>
| 9 |
2016-09-22T00:51:29Z
|
[
"python"
] |
Difference between normed plt.xcorr at 0-lag and np.corrcoef
| 39,628,497 |
<p>I am working on a cross correlation between two relatively small time series, but in trying to accomplish I am running into a problem I cannot reconcile myself. To begin, I understand the dependence between <code>plt.xcorr</code> and <code>np.correlate</code>. However, I am having trouble reconciling the difference between <code>plt.xcorr</code> at zero lag and <code>np.corrcoef</code>?</p>
<pre><code>a = np.array([ 7.35846410e+08, 8.96271634e+08, 6.16249222e+08,
8.00739868e+08, 1.06116376e+09, 9.05690167e+08,
6.31383600e+08])
b = np.array([ 1.95621617e+09, 2.06263134e+09, 2.27717015e+09,
2.27281916e+09, 2.71090116e+09, 2.84676385e+09,
3.19578883e+09])
np.corrcoef(a,b)
# returns:
array([[ 1. , 0.02099573],
[ 0.02099573, 1. ]])
plt.xcorr(a,b,normed=True, maxlags=1)
# returns:
array([-1, 0, 1]),
array([ 0.90510941, 0.97024415, 0.79874158])
</code></pre>
<p>I expected these to return the same result. I clearly do not understand how <code>plt.xcorr</code> is normed, could someone please set me straight?</p>
| 1 |
2016-09-22T00:35:58Z
| 39,638,088 |
<p>I used <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xcorr" rel="nofollow">http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xcorr</a></p>
<blockquote>
<p>normed : boolean, optional, default: True</p>
<p>if True, normalize the data by the autocorrelation at the 0-th lag.</p>
</blockquote>
<p>In the following code, <code>plt_corr</code> equals <code>np_corr</code>.</p>
<pre><code>plt_corr = plt.xcorr(a, b, normed=True, maxlags=6)
c = np.correlate(a, a) # autocorrelation of a
d = np.correlate(b, b) # autocorrelation of b
np_corr = np.correlate(a/np.sqrt(c), b/np.sqrt(d), 'full')
</code></pre>
| 0 |
2016-09-22T11:43:55Z
|
[
"python",
"numpy",
"correlation",
"cross-correlation"
] |
Difference between normed plt.xcorr at 0-lag and np.corrcoef
| 39,628,497 |
<p>I am working on a cross correlation between two relatively small time series, but in trying to accomplish I am running into a problem I cannot reconcile myself. To begin, I understand the dependence between <code>plt.xcorr</code> and <code>np.correlate</code>. However, I am having trouble reconciling the difference between <code>plt.xcorr</code> at zero lag and <code>np.corrcoef</code>?</p>
<pre><code>a = np.array([ 7.35846410e+08, 8.96271634e+08, 6.16249222e+08,
8.00739868e+08, 1.06116376e+09, 9.05690167e+08,
6.31383600e+08])
b = np.array([ 1.95621617e+09, 2.06263134e+09, 2.27717015e+09,
2.27281916e+09, 2.71090116e+09, 2.84676385e+09,
3.19578883e+09])
np.corrcoef(a,b)
# returns:
array([[ 1. , 0.02099573],
[ 0.02099573, 1. ]])
plt.xcorr(a,b,normed=True, maxlags=1)
# returns:
array([-1, 0, 1]),
array([ 0.90510941, 0.97024415, 0.79874158])
</code></pre>
<p>I expected these to return the same result. I clearly do not understand how <code>plt.xcorr</code> is normed, could someone please set me straight?</p>
| 1 |
2016-09-22T00:35:58Z
| 39,639,272 |
<p>The standard Pearson product-moment correlation coefficient is calculated on samples shifted by their mean values, whereas the cross-correlation coefficient does not mean-center the samples.
Other than that, the computations are similar. Still, the two coefficients have different formulas and different meanings.
They are equal only if the mean values of samples <code>a</code> and <code>b</code> are equal to <code>0</code> (i.e., if shifting by the mean values doesn't change the samples).</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = np.array([7.35846410e+08, 8.96271634e+08, 6.16249222e+08,
8.00739868e+08, 1.06116376e+09, 9.05690167e+08, 6.31383600e+08])
b = np.array([1.95621617e+09, 2.06263134e+09, 2.27717015e+09,
2.27281916e+09, 2.71090116e+09, 2.84676385e+09, 3.19578883e+09])
y = np.corrcoef(a, b)
z = plt.xcorr(a, b, normed=True, maxlags=1)
print("Pearson product-moment correlation coefficient between `a` and `b`:", y[0][1])
print("Cross-correlation coefficient between `a` and `b` with 0-lag:", z[1][1], "\n")
# Calculate manually:
def pearson(a, b):
# Length.
n = len(a)
# Means.
ma = sum(a) / n
mb = sum(b) / n
# Shifted samples.
_ama = a - ma
_bmb = b - mb
# Standard deviations.
sa = np.sqrt(np.dot(_ama, _ama) / n)
sb = np.sqrt(np.dot(_bmb, _bmb) / n)
# Covariation.
cov = np.dot(_ama, _bmb) / n
# Final formula.
# Note: division by `n` in deviations and covariation cancel out each other in
# final formula and could be ignored.
return cov / (sa * sb)
def cross0lag(a, b):
return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
pearson_coeff = pearson(a, b)
cross_coeff = cross0lag(a, b)
print("Manually calculated coefficients:")
print(" Pearson =", pearson_coeff)
print(" Cross =", cross_coeff, "\n")
# Normalized samples:
am0 = a - sum(a) / len(a)
bm0 = b - sum(b) / len(b)
pearson_coeff = pearson(am0, bm0)
cross_coeff = cross0lag(am0, bm0)
print("Coefficients for samples with means = 0:")
print(" Pearson =", pearson_coeff)
print(" Cross =", cross_coeff)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Pearson product-moment correlation coefficient between `a` and `b`: 0.020995727082
Cross-correlation coefficient between `a` and `b` with 0-lag: 0.970244146831
Manually calculated coefficients:
Pearson = 0.020995727082
Cross = 0.970244146831
Coefficients for samples with means = 0:
Pearson = 0.020995727082
Cross = 0.020995727082
</code></pre>
| 1 |
2016-09-22T12:37:10Z
|
[
"python",
"numpy",
"correlation",
"cross-correlation"
] |
Get the return in a function in other function?
| 39,628,755 |
<p>Is there any way to access to the return in a function inside other function ?</p>
<p>Probably the next code explain more what I want to do.</p>
<pre><code>class perro:
def coqueta(self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def otro_perro(self, coqueta):
print otro_perro
mascota = perro()
ejemplo = mascota.coqueta(5,5)
mascota.otro_perro()
print ejemplo
</code></pre>
<p>How can i get the return of the first <code>def</code> (<code>coqueta</code>) to print in the second function (<code>otro_perro</code>)?</p>
| 0 |
2016-09-22T01:11:38Z
| 39,628,769 |
<p>What's wrong with this? </p>
<pre><code>mascota = perro()
ejemplo = mascota.coqueta(5,5)
mascota.otro_perro(ejemplo)
</code></pre>
<p>or, </p>
<pre><code>mascota = perro()
mascota.otro_perro(mascota.coqueta(5,5))
</code></pre>
| 0 |
2016-09-22T01:13:46Z
|
[
"python",
"function"
] |
Get the return in a function in other function?
| 39,628,755 |
<p>Is there any way to access to the return in a function inside other function ?</p>
<p>Probably the next code explain more what I want to do.</p>
<pre><code>class perro:
def coqueta(self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def otro_perro(self, coqueta):
print otro_perro
mascota = perro()
ejemplo = mascota.coqueta(5,5)
mascota.otro_perro()
print ejemplo
</code></pre>
<p>How can i get the return of the first <code>def</code> (<code>coqueta</code>) to print in the second function (<code>otro_perro</code>)?</p>
| 0 |
2016-09-22T01:11:38Z
| 39,628,828 |
<p>Just make it an attribute of the <code>perro</code> class. That's really the whole point of a class: to modularize, organize, and encapsulate your data instead of having to use <code>global</code>s everywhere:</p>
<pre><code>class perro:
def coqueta(self, num1, num2):
self.num1 = num1
self.num2 = num2
self._otro_perro = self.num1 + self.num2
def otro_perro(self):
print self._otro_perro
mascota = perro()
mascota.coqueta(5,5)
mascota.otro_perro() # will print the value of self._otro_perro
</code></pre>
<p>I added the extra underscore before <code>otro_perro</code> because you had already used that name for your method. And as an unrelated side note, class names are generally capitalized in Python, so <code>perro</code> would become <code>Perro</code>.</p>
| 0 |
2016-09-22T01:21:57Z
|
[
"python",
"function"
] |
Get the return in a function in other function?
| 39,628,755 |
<p>Is there any way to access to the return in a function inside other function ?</p>
<p>Probably the next code explain more what I want to do.</p>
<pre><code>class perro:
def coqueta(self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def otro_perro(self, coqueta):
print otro_perro
mascota = perro()
ejemplo = mascota.coqueta(5,5)
mascota.otro_perro()
print ejemplo
</code></pre>
<p>How can i get the return of the first <code>def</code> (<code>coqueta</code>) to print in the second function (<code>otro_perro</code>)?</p>
| 0 |
2016-09-22T01:11:38Z
| 39,629,074 |
<p>Either pass the return value of method <code>coqueta()</code> into <code>otro_perro()</code>, or have <code>otro_perro()</code> call <code>coqueta()</code> directly. Your code suggests that you wish to do the first, so write it like this:</p>
<pre><code>class Perro:
def coqueta(self, num1, num2):
result = big_maths_calculation(num1, num2)
return result
def otro_perro(self, coqueta):
print coqueta
mascota = Perro()
ejemplo = mascota.coqueta(5, 5)
mascota.otro_perro(ejemplo)
</code></pre>
<p>Or, you could call <code>coqueta()</code> from <code>otro_perro()</code>:</p>
<pre><code> def otro_perro(self, num1, num2):
print self.coqueta(num1, num2)
</code></pre>
<p>but that requires that you also pass values for <code>num1</code> and <code>num2</code> into <code>otro_perro()</code> as well.</p>
<p>Perhaps <code>num1</code> and <code>num2</code> could be considered attributes of class <code>Perro</code>? In that case you could specify them when you create the class:</p>
<pre><code>class Perro:
def __init__(self, num1, num2):
self.num1 = num1
self.num2 = num2
def coqueta(self):
result = big_maths_calculation(self.num1, self.num2)
return result
def otro_perro(self, coqueta):
print self.coqueta() # N.B. method call
</code></pre>
<p>Or another possibility is to cache the result of the "big calculation":</p>
<pre><code>class Perro:
def __init__(self, num1, num2):
self.num1 = num1
self.num2 = num2
self.result = None
def coqueta(self):
if self.result is None:
self.result = big_maths_calculation(self.num1, self.num2)
return self.result
def otro_perro(self, coqueta):
print self.coqueta() # N.B. method call
</code></pre>
<p>Now the expensive calculation is performed just once when required and its result is stored for later use without requiring recalculation.</p>
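<p>As an aside, the cache-on-<code>None</code> pattern above is essentially memoization, which the standard library also provides via <code>functools.lru_cache</code> (a Python 3 sketch; <code>expensive_calc</code> is a hypothetical stand-in for the big calculation):</p>

```python
from functools import lru_cache

calls = []  # track how many times the body actually runs

@lru_cache(maxsize=None)
def expensive_calc(num1, num2):
    calls.append((num1, num2))
    return num1 + num2

print(expensive_calc(5, 5))  # 10 -- computed
print(expensive_calc(5, 5))  # 10 -- served from the cache
print(len(calls))            # 1
```

<p>One advantage over the <code>None</code> sentinel: a result that is legitimately <code>None</code> is cached correctly instead of being recomputed every call.</p>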
| 2 |
2016-09-22T01:52:26Z
|
[
"python",
"function"
] |
Cross-Domain XML Querying
| 39,628,914 |
<p>I have two servers in my organization. One of which is read-only to me (Server A) and the other hosts our knowledge base (Server B). There is an XML file on <strong>Server A</strong> which is refreshed at an unknown interval. This file contains information on the status of various items. I want to be able to display those statuses on <strong>Server B</strong>.</p>
<p>As a beginner, I'm having trouble getting around the same-origin policy since I do not have access to <strong>Server A</strong>.</p>
<p>Right now I'm trying to use a simple python script <em>xmlpull.py</em>:</p>
<pre><code>import urllib2
response = urllib2.urlopen('http://192.168.255.255/connections')
html = response.read()
</code></pre>
<p>The script works great on its own, but the issue is when I try to load it using JQuery (<em>xmlpull.html</em>):</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
$(document).ready(function(){
$.ajax({url: "xmlpull.py", success: function(result){
$("#2").html(result);
}});
});
</script>
</head>
<body>
<div id="2">Change Me Please</div>
</body>
</html>
</code></pre>
<p>FF just gives me syntax errors for both <em>xmlpull.html</em> and <em>xmlpull.py</em> files at <code>:1:1</code>.</p>
<ol>
<li>What am I doing wrong?</li>
<li>If this isn't the best way to approach this problem, then feel free to suggest a better way.</li>
</ol>
<p>Thanks in advance!</p>
| 0 |
2016-09-22T01:33:50Z
| 39,629,788 |
<p>You have 3 options:</p>
<ul>
<li><p>First: allow Server B to access Server A. If you are using an Apache server, you can do this by adding this code to the Apache configuration file and restarting Apache:</p>
<p>SetEnvIf Origin
"http(s)?://(www.)?(WRITE_IP_OF_SERVER_B_HERE)$"
AccessControlAllowOrigin=$0 Header add Access-Control-Allow-Origin
%{AccessControlAllowOrigin}e env=AccessControlAllowOrigin Header set
Access-Control-Allow-Headers "Content-Type, Accept, Authorization,
X-Requested-With"</p></li>
</ul>
<p>This way you can call Server A from Server B directly from the JS, with no need to create a Python file.<br></p>
<ul>
<li>Second, using the Python file:<br>To call a Python file from Ajax, the file must be accessible through Apache or another server, and to do this you have two options: mod_wsgi or mod_python. Both require Apache configuration and code changes larger than the script itself.</li>
<li>Third is to use PHP for this task: create a PHP file and add it to the Apache public folder (www), and you can call it directly without any Apache
configuration.<br>The file content will be:</li>
</ul>
<p><?= file_get_contents("<a href="http://192.168.255.255/connections" rel="nofollow">http://192.168.255.255/connections</a>"); ?></p>
| 0 |
2016-09-22T03:30:38Z
|
[
"jquery",
"python",
"ajax",
"xml",
"same-origin-policy"
] |
pymunk updated shape filter usage
| 39,628,923 |
<p>I am trying to detect the first shape along a segment starting at my player's position, but I do not want to detect the player.</p>
<p>In a previous version of pymunk, the pymunk.Space.segment_query_first function accepted an integer as the shape_filter and it only detected shapes within a group of that integer. That worked perfectly, but now it accepts a list or dictionary instead. I have tried putting the integer into a list and that didn't work. I have no idea what it wants with a dictionary. I have tried everything that I can think of. Nothing seems to stop it from detecting my player. The documentation is not helpful at all. Thanks in advance.</p>
| 2 |
2016-09-22T01:36:11Z
| 39,641,351 |
<p>Yes, the shape filter has become more powerful in pymunk 5 (and as a result also a little bit more complicated). The shape filter is supposed to be a <code>ShapeFilter</code> object. See the API docs <a href="http://www.pymunk.org/en/latest/pymunk.html#pymunk.ShapeFilter" rel="nofollow">http://www.pymunk.org/en/latest/pymunk.html#pymunk.ShapeFilter</a> for details on this filter object.</p>
<p>The <code>ShapeFilter</code> has 3 properties: <code>categories</code>, <code>mask</code>, and <code>group</code>. In your case I think you want to put the player in a separate category and mask it out from the filter query. (By default the shape filter object matches all categories and doesn't mask out anything.)</p>
<p>So, you want to do something like in this example:</p>
<pre><code>>>> import pymunk
>>> s = pymunk.Space()
>>> player_b = pymunk.Body(1,1)
>>> player_c = pymunk.Circle(player_b, 10)
>>> s.add(player_b, player_c)
>>>
>>> player_c.filter = pymunk.ShapeFilter(categories=0x1)
>>> s.point_query_nearest((0,0), 0, pymunk.ShapeFilter())
PointQueryInfo(shape=<pymunk.shapes.Circle object at 0x03C07F30>, point=Vec2d(nan, nan), distance=-10.0, gradient=Vec2d(0.0, 1.0))
>>> s.point_query_nearest((0,0), 0, pymunk.ShapeFilter(mask=pymunk.ShapeFilter.ALL_MASKS ^ 0x1))
>>>
>>> other_b = pymunk.Body(1,1)
>>> other_c = pymunk.Circle(other_b, 10)
>>> s.add(other_b, other_c)
>>>
>>> s.point_query_nearest((0,0), 0, pymunk.ShapeFilter(mask=pymunk.ShapeFilter.ALL_MASKS ^ 0x1))
PointQueryInfo(shape=<pymunk.shapes.Circle object at 0x03C070F0>, point=Vec2d(nan, nan), distance=-10.0, gradient=Vec2d(0.0, 1.0))
</code></pre>
<p>There are tests in the <code>test_space.py</code> file that tests different combinations of the shape filter which might help explain how they work: <a href="https://github.com/viblo/pymunk/blob/master/tests/test_space.py#L175" rel="nofollow">https://github.com/viblo/pymunk/blob/master/tests/test_space.py#L175</a></p>
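<p>The category/mask matching itself is plain bitwise arithmetic, which can be sketched without pymunk at all. The constants below mirror the example above; <code>ALL_MASKS</code> is assumed to be an all-ones bit field, as in pymunk's <code>ShapeFilter</code>:</p>

```python
# Sketch of the bitwise test behind ShapeFilter matching: a query "sees" a
# shape only when the shape's category bits intersect the query's mask.
ALL_MASKS = 0xffffffff          # assumed all-ones mask
player_categories = 0x1         # the player's category bit
query_mask = ALL_MASKS ^ 0x1    # everything except the player's category

print(bool(player_categories & query_mask))  # False: the player is filtered out
print(bool(0x2 & query_mask))                # True: other shapes still match
```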
| 0 |
2016-09-22T14:06:45Z
|
[
"python",
"filter",
"shape",
"segment",
"pymunk"
] |
NuGet error while building caffe on Windows with visual studio 2013
| 39,628,955 |
<p>I'm trying to build Caffe on Windows in order to use it on Python (by Import caffe) for my Deep Learning project, but I came across a problem while building the Caffe.sln file with Visual Studio 2013, following instructions from this video : <a href="https://www.youtube.com/watch?v=nrzAF2sxHHM" rel="nofollow">https://www.youtube.com/watch?v=nrzAF2sxHHM</a> (Build Caffe in 5 minutes)
I use Windows 7 64bits, here's the error message :</p>
<blockquote>
<p>1>------ Build started: Project: libcaffe, Configuration: Release x64 ------</p>
<p>1>C:\Users\LU10600\Documents\DeepLearning\NugetPackages\glog.0.3.3.0\build\native\glog.targets(346,5): error : NuGet Error:Unable to find version '0.3.3.0' of package 'glog.overlay-x64_v120_Release_dynamic'.</p>
<p>2>------ Build started: Project: caffe, Configuration: Release x64 ------</p>
<p>.....</p>
<p>15>C:\Users\LU10600\Documents\DeepLearning\NugetPackages\glog.0.3.3.0\build\native\glog.targets(346,5): error : NuGet Error:Unable to find version '0.3.3.0' of package 'glog.overlay-x64_v120_Release_dynamic'.</p>
<p>========== Build: 0 succeeded, 15 failed, 0 up-to-date, 0 skipped ==========</p>
</blockquote>
<p>I do have a folder called glog.0.3.3.0 in the NugetPackages directory... so I don't know why it can't find it...</p>
<p>Thanks for your help.</p>
| 0 |
2016-09-22T01:40:18Z
| 39,747,992 |
<p>I resolved this issue on my system by overwriting the copy of NuGet.exe that was in caffe-master/windows/.nuget with the nuget.exe version that came with glog.0.3.3.0 in NugetPackages\glog.0.3.3.0\build\native\private\</p>
<p>Just a heads up since I was trying to follow the same tutorial; after resolving that issue I was seeing errors about missing boost libraries, and it turned out that one of the projects in the solution had its boost dependency set to 61 instead of 59. I fixed that issue by changing its NuGet configuration to point to the 59 version of the lib.</p>
| 0 |
2016-09-28T12:46:11Z
|
[
"python",
"windows",
"visual-studio-2013",
"nuget",
"caffe"
] |
How does Django make tables relating to user,auth,group,session and so on with very first migration?
| 39,628,985 |
<p>I am a little bit excited to write my question here. Normally I search other questions to figure out my own problem, but this time I couldn't figure out my question by searching, which is why I am asking it here in person.</p>
<p>My question is :</p>
<p>How does the Django framework make the initial tables?
If you make a Django project and then migrate, it makes the tables below, even though you didn't make any apps in the project or write any code in each app's models.py.</p>
<p>auth_group<br>
auth_group_permissions
auth_permission<br>
auth_user<br>
auth_user_groups<br>
auth_user_user_permissions<br>
django_admin_log<br>
django_content_type<br>
django_migrations<br>
django_session</p>
<p>I understand what ORM, MTV, and migrations are. I hope to understand the Django framework more deeply, and I want to figure out what hidden code (or Django structure) made those initial tables.</p>
<p>Addition Question :</p>
<p>I understand that I can control default permissions or custom permissions in the Meta class. If I don't set any, Django makes three default permissions (add, change, delete).</p>
<p>when I migrate as I told you above, I can check those in a table named 'auth_permission'. There are records which is made initially</p>
<p>1;"Can add log entry";1;"add_logentry"
2;"Can change log entry";1;"change_logentry"
3;"Can delete log entry";1;"delete_logentry"
4;"Can add permission";2;"add_permission"
5;"Can change permission";2;"change_permission"
6;"Can delete permission";2;"delete_permission"
7;"Can add user";3;"add_user"
8;"Can change user";3;"change_user"
9;"Can delete user";3;"delete_user"
10;"Can add group";4;"add_group"
11;"Can change group";4;"change_group"
12;"Can delete group";4;"delete_group"
13;"Can add content type";5;"add_contenttype"
14;"Can change content type";5;"change_contenttype"
15;"Can delete content type";5;"delete_contenttype"
16;"Can add session";6;"add_session"
17;"Can change session";6;"change_session"
18;"Can delete session";6;"delete_session"</p>
<p>How can I manipulate those? For example, what if I want to change those codenames, or what if I don't want the default permissions (add, change, delete) to be created?</p>
<p>Thank you in advance.
If my explanation of my question was not clear, please leave me a comment.
Have a nice day!</p>
| 1 |
2016-09-22T01:43:15Z
| 39,629,128 |
<p>There are apps that are included in the project by default. You can see this in the <code>INSTALLED_APPS</code> list in the <code>settings.py</code> file of your project. <code>auth_group</code> is a table from <code>django.contrib.auth</code>.</p>
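<p>As a reference, a freshly generated <code>settings.py</code> contains something like the list below (the exact contents may vary between Django versions). Each <code>django.contrib</code> app contributes its own models, which become the tables you listed when you migrate:</p>

```python
# Default INSTALLED_APPS as generated by `django-admin startproject`
# (may vary slightly by Django version); comments note the tables each
# app contributes on the first migration.
INSTALLED_APPS = [
    'django.contrib.admin',         # django_admin_log
    'django.contrib.auth',          # auth_user, auth_group, auth_permission, ...
    'django.contrib.contenttypes',  # django_content_type
    'django.contrib.sessions',      # django_session
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
```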
| 0 |
2016-09-22T02:03:28Z
|
[
"python",
"django"
] |
How to get remote stdout from Ansible python playbook api
| 39,629,026 |
<p>I know I can print the std out using the <code>debug</code> module of Ansible like below:</p>
<pre><code>---
- hosts: all
tasks:
- name: list files under /root folder
command: ls /root
register: out
- name: stdout
debug: var=out.stdout_lines
</code></pre>
<p>And from the answer of <a href="http://stackoverflow.com/questions/35368044/how-to-use-ansible-2-0-python-api-to-run-a-playbook">How to use Ansible 2.0 Python API to run a Playbook?</a>, I can run Ansible playbook with python code. </p>
<p>So the question is how can I get the content of the variable <code>out</code> in this playbook using Ansible python api?</p>
| 0 |
2016-09-22T01:47:25Z
| 39,991,404 |
<p>Here is my answer: you should make a <code>callback</code>.</p>
<pre><code># -*- coding: utf-8 -*-
import json
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins import callback_loader
from ansible.plugins.callback import CallbackBase
import os
import logging
loader = DataLoader()
variable_manager = VariableManager()
inventory = Inventory(loader=loader, variable_manager=variable_manager)
variable_manager.set_inventory(inventory)
#get result output
class ResultsCollector(CallbackBase):
def __init__(self, *args, **kwargs):
super(ResultsCollector, self).__init__(*args, **kwargs)
self.host_ok = []
self.host_unreachable = []
self.host_failed = []
def v2_runner_on_unreachable(self, result, ignore_errors=False):
name = result._host.get_name()
task = result._task.get_name()
ansible_log(result)
#self.host_unreachable[result._host.get_name()] = result
self.host_unreachable.append(dict(ip=name, task=task, result=result))
def v2_runner_on_ok(self, result, *args, **kwargs):
name = result._host.get_name()
task = result._task.get_name()
if task == "setup":
pass
elif "Info" in task:
self.host_ok.append(dict(ip=name, task=task, result=result))
else:
ansible_log(result)
self.host_ok.append(dict(ip=name, task=task, result=result))
def v2_runner_on_failed(self, result, *args, **kwargs):
name = result._host.get_name()
task = result._task.get_name()
ansible_log(result)
self.host_failed.append(dict(ip=name, task=task, result=result))
class Options(object):
def __init__(self):
self.connection = "smart"
self.forks = 10
self.check = False
self.become = None
self.become_method = None
self.become_user=None
def __getattr__(self, name):
return None
options = Options()
def run_adhoc(ip,order):
variable_manager.extra_vars={"ansible_ssh_user":"root" , "ansible_ssh_pass":"passwd"}
play_source = {"name":"Ansible Ad-Hoc","hosts":"%s"%ip,"gather_facts":"no","tasks":[{"action":{"module":"command","args":"%s"%order}}]}
# play_source = {"name":"Ansible Ad-Hoc","hosts":"192.168.2.160","gather_facts":"no","tasks":[{"action":{"module":"command","args":"python ~/store.py del"}}]}
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
tqm = None
callback = ResultsCollector()
try:
tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=None,
run_tree=False,
)
tqm._stdout_callback = callback
result = tqm.run(play)
return callback
finally:
if tqm is not None:
tqm.cleanup()
def run_playbook(books):
results_callback = callback_loader.get('json')
playbooks = [books]
variable_manager.extra_vars={"ansible_ssh_user":"root" , "ansible_ssh_pass":"passwd"}
callback = ResultsCollector()
pd = PlaybookExecutor(
playbooks=playbooks,
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=None,
)
pd._tqm._stdout_callback = callback
try:
result = pd.run()
return callback
except Exception as e:
print e
if __name__ == '__main__':
#run_playbook("yml/info/process.yml")
#run_adhoc("192.168.2.149", "ifconfig")
</code></pre>
| 0 |
2016-10-12T05:55:43Z
|
[
"python",
"ansible",
"ansible-playbook",
"ansible-2.x"
] |
Issues with sending lines over netcat to Python
| 39,629,064 |
<p>I am piping a program's stdout to netcat: <code>nc -u localhost 50000</code>. Listening on UDP 50000 is a Python program that does something like this:</p>
<pre><code> lstsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
lstsock.setblocking(0)
while True:
print '1'
try:
tmp = lstsock.recv(SOCK_BUFSZ)
except socket.error, e:
if e.args[0] == errno.EWOULDBLOCK:
sleep(.5)
continue
else:
print("Socket error: {}".format(e))
return
print tmp
</code></pre>
<p>I'll always get a few lines, but then the program hangs on <code>print '1'</code>. When I run the line-generating program, the output is a line to stdin about every second. What's going on here?</p>
<p>Edit, in case it's somehow related: The program producing lines is in docker (run with <code>--net="host"</code>), and the server (accepting lines) is on the host running docker. Docker is sending it over 127.0.0.1.</p>
<p>Another edit: It seems to stop accepting input when SOCK_BUFSZ bytes were received. It's not recycling the buffer?</p>
<p>Update: this seems to be an issue with Docker. It works on localhost, but not from the container. I have connectivity (I can ping the server, and the first burst of data gets through).</p>
| 1 |
2016-09-22T01:51:09Z
| 39,629,948 |
<p>This Python script worked for me:</p>
<pre><code>import socket
lstsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
lstsock.bind(('127.0.0.1', 50000))
SOCK_BUFSZ = 4096
counter = 1
while True:
print "counter = %s" % counter
counter += 1
data, addr = lstsock.recvfrom(SOCK_BUFSZ)
print data
</code></pre>
<p>I used this bash script to send lines:</p>
<pre><code>while true ;
do
echo "Running..."
echo -n "hello" | nc -w 0 -u "127.0.0.1" 50000
sleep 1
done
</code></pre>
<p>I didn't see the point of setting the socket to be non-blocking when time was wasted in a sleep.</p>
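<p>If you did want to wait for data without either a blocking <code>recvfrom</code> or a non-blocking busy loop, <code>select</code> can block on the socket with a timeout instead. A minimal sketch (binding to port 0 lets the OS pick a free port; the 100 ms timeout is illustrative):</p>

```python
# Sketch: select() blocks until the socket is readable or the timeout
# passes, avoiding both a sleep-based polling loop and an unbounded recv.
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port

ready, _, _ = select.select([sock], [], [], 0.1)  # wait up to 100 ms
print(ready)  # [] here, since nothing has been sent to the socket yet
sock.close()
```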
<p>After <a href="http://stackoverflow.com/questions/39629064/issues-with-sending-lines-over-netcat-to-python/39629948#comment66565411_39629948">a comment</a> from <a href="http://stackoverflow.com/users/3246078/horse-hair">horse_hair</a>, I put put the server in a thread:</p>
<pre><code>import socket
import threading
import time
SOCK_BUFSZ = 4096
def udp_server(quit_flag):
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.bind(('127.0.0.1', 50000))
counter = 1
while not quit_flag.is_set():
print "counter = %s" % counter
counter += 1
data, addr = udp_sock.recvfrom(SOCK_BUFSZ)
print data
def main():
quit_flag = threading.Event()
udp_thread = threading.Thread(target=udp_server, args=(quit_flag,))
udp_thread.daemon = True
udp_thread.start()
try:
while True:
time.sleep(1.0)
except KeyboardInterrupt:
print "Exiting due to keyboard interrupt"
quit_flag.set()
udp_thread.join()
if __name__ == '__main__':
main()
</code></pre>
<p>Everything still works.</p>
| 1 |
2016-09-22T03:52:00Z
|
[
"python",
"sockets"
] |
Dynamic array in Python
| 39,629,066 |
<p>I'm trying to delete a specific row and column from a square list "m" iteratively. At the beginning, I used the square list like a square matrix "m" and I tried using the command "delete" from numpy as follows:</p>
<pre><code>from numpy import*
import numpy as np
m=array([[1,2,3],[4,5,6],[7,8,9]])
#deleting row and column "0"
#x is the new matrix without the row and column "0"
x=np.delete((np.delete(m,0,0)),0,1)
print x
x=[[5,6],[8,9]]
</code></pre>
<p>The problem with the command "delete" is that I'm not sure whether it can be used in an iterative loop. Specifically, I want to know how I can delete a specific row and column taken from a "y" list:</p>
<pre><code>m=([[1,2,3],[4,5,6],[7,8,9]])
y=[[1],[0],[1],[0]]
</code></pre>
<p>Note: If the number "1" appears in the list "y", in the list "m" delete the respective row and column.</p>
<p>If number "1" appears in the list "y", delete the respective row and column in the list "m". For example, in this case the number "1" appears in the list "y" in the position "0", we need to delete the first row and the first column in the list "m". This is the desired list "m" for the first apperance of the number "1" in the list "y":
m=[[5,6],[8,9]]</p>
<p>Note: The size of the list "m" changed; now it is 2x2. This is my first question: how can I use a dynamic list in Python? How can I specify the new dimension?</p>
<p>Because the number "1" appears again in the list "y", we need to delete the respective row and column in the new list "m", in this case the desired list "m" is:
m=[[5]]</p>
<p>I tried in a lot of ways, I obtained this tip from this forum (another way using comprehension list): </p>
<pre><code>#row to delete
roww=0
#column to delete
column=0
m=([[1,2,3],[4,5,6],[7,8,9]])
a=[j[:column]+j[column+1:] for i,j in enumerate(m) if i!=roww]
print a
</code></pre>
<p>Note: I don't know how to use dynamic arrays in Python, so I need some help. Some people think this is homework, but it isn't; I am trying to learn Python because it is more friendly. I used to use Fortran, but compared with Python I hate it. Thanks.</p>
<p>I'm trying to incorporate an array which specifies which rows and columns must be deleted. This is my original idea:</p>
<pre><code>from numpy import*
import numpy as np
mat=array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
x=[[1],[0],[1],[0]]
for i in x:
if i==1:
row=0
col=0
mat=np.delete(np.delete(mat,row,0),col,1)
print mat
</code></pre>
<p>When one appears in the first row(row 0) in "x", this is the desired matrix:</p>
<pre><code>mat=array([[6,7,8],[10,11,12],[14,15,16]]
</code></pre>
<p>When one appears in the third row(row 2) in "x", this is the desired matrix:</p>
<pre><code>mat=array([[6,8], [14,16]])
</code></pre>
<p>Note: x gives the index to delete in the original matrix "mat".</p>
<p>Thanks again.</p>
| 0 |
2016-09-22T01:51:18Z
| 39,634,907 |
<p>It seems to me that applying <code>np.delete</code> repeatedly to the same matrix does the job: </p>
<pre><code>from __future__ import print_function
import numpy as np
mat = np.array([[1,2,3],[4,5,6],[7,8,9]])
print( "initial mat:\n", mat )
rmlist = [ (2,0), (1,1) ] # a list of (row,col) to be removed
# rmlist = [ [2,0], [1,1] ] # this also works
for (row, col) in rmlist:
print( "removing row", row, "and column", col )
mat = np.delete( np.delete( mat, row, 0 ), col, 1 )
print( mat )
</code></pre>
<p>Result (is this not what you expect...?):</p>
<pre><code>initial mat:
[[1 2 3]
[4 5 6]
[7 8 9]]
removing row 2 and column 0
[[2 3]
[5 6]]
removing row 1 and column 1
[[2]]
</code></pre>
<hr>
<p><strong>EDIT</strong></p>
<p>If you need to retain specific rows and columns of the original matrix, you can achieve it by selecting rows and columns directly such that</p>
<pre><code>print( "initial mat = \n", mat )
print( "sliced mat = \n", mat[ :, [1, 3] ] )
print( "further sliced mat = \n", mat[ :, [1, 3] ][ [1,3] ] )
</code></pre>
<p>where <code>[1,3]</code> means that 1st and 3rd rows (or columns) should be retained. Using the 4x4 matrix in the Question, the result becomes</p>
<pre><code>initial mat =
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]
[13 14 15 16]]
sliced mat =
[[ 2 4]
[ 6 8]
[10 12]
[14 16]]
further sliced mat =
[[ 6 8]
[14 16]]
</code></pre>
<p>If necessary, you can also create the above selection list (like [1,3]) from x = [1,0,1,0] as follows:</p>
<pre><code>x = [ 1, 0, 1, 0 ]
s = []
for i in range( len(x) ):
if x[i] == 0: s.append( i )
print( "s = ", s ) # s = [1,3]
print( "further sliced mat (again) = \n", mat[ :, s ][ s ] )
</code></pre>
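<p>The same selection can also be written as a one-line list comprehension, and the row/column slicing idea works with plain nested lists too. A small sketch using the 4x4 example from the question (no numpy needed for this part):</p>

```python
# Sketch: build the "keep" indices with a comprehension, then slice both
# rows and columns of a plain nested list.
x = [1, 0, 1, 0]
keep = [i for i, v in enumerate(x) if v == 0]   # indices where x is 0

mat = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
sub = [[mat[r][c] for c in keep] for r in keep]
print(keep)  # [1, 3]
print(sub)   # [[6, 8], [14, 16]]
```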
| 1 |
2016-09-22T09:13:49Z
|
[
"python",
"arrays",
"dynamic"
] |
SWIG return unsigned char * from C to Python
| 39,629,126 |
<p>I'm running into a problem with generating an interface for python with underlying C code.
I have the following pieces of code:</p>
<p>prov.h</p>
<pre><code>#include<string.h>
#include<stdio.h>
#include<stdlib.h>
unsigned char *out();
</code></pre>
<p>prov.c</p>
<pre><code>#include "prov.h"
unsigned char *out()
{
unsigned char *bytes="Hello";
unsigned char *data=NULL;
data=calloc(6,sizeof(char));
if(data) {
strncpy(data,bytes,6);
}
return data;
}
</code></pre>
<p>prov.i</p>
<pre><code>%module prov
%{
#include "prov.h"
%}
unsigned char *out();
</code></pre>
<p>And I generate the .so as shown below:</p>
<pre><code>$ swig -python prov.i
$ gcc -fpic -c prov.c prov_wrap.c -I/usr/include/python2.7
$ gcc -shared prov.o prov_wrap.o -o _prov.so
$ python
Python 2.7.3 (default, Jun 22 2015, 19:33:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import prov
>>> s=prov.out()
>>> s
<Swig Object of type 'unsigned char *' at 0x7fb241dbff90>
</code></pre>
<p>Now the problem is that when I try to view the returned string <code>s</code> from my C function, it doesn't seem to show it as a Python string object (I expected that when I print <code>s</code> the output would be 'Hello'). Can anyone please help me out with returning unsigned char * to Python calling code?</p>
| 0 |
2016-09-22T02:03:07Z
| 39,629,256 |
<p>Either return <code>char*</code> instead, or you can apply the <code>char*</code> typemaps to <code>unsigned char*</code>. You'll also want <code>%newobject</code> to let Python know to free the returned pointer after converting the return result to a Python object.</p>
<pre><code>%module prov
%{
#include "prov.h"
%}
%apply char* {unsigned char*};
%newobject out;
unsigned char *out();
</code></pre>
| 0 |
2016-09-22T02:22:04Z
|
[
"python",
"c",
"swig"
] |
Python - Why scatter of matplotlib plots points which have negative coordinates, with positive coordinates?
| 39,629,139 |
<p>So there is a big problem in my script: I have made a numpy array which contains the x, y and z coordinates of some points. Some of these points have negative coordinates (for x and/or y and/or z). And for reasons I don't understand, when I use the scatter function from matplotlib, it plots all points with positive coordinates (it means that if a coordinate is negative, it will be plotted as positive...):</p>
<p><img src="http://i.stack.imgur.com/tzDO3.png" alt="enter image description here"></p>
<p>So my question is simple: why does it do that, and how can I plot points with negative coordinates properly?</p>
<p>Here is my code in 2 parts :</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import Tkinter
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import numpy as np
import os
import subprocess
import Parse_Gro
class windowsTk(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self,parent)
self.parent = parent
self.f = Figure(figsize=(6, 6), dpi=100)
self.canevas = FigureCanvasTkAgg(self.f, master=self)
self.subplota= self.f.add_subplot(111)
self.RotateMatY= np.matrix([[np.cos(0.2*np.pi),0,np.sin(0.2*np.pi)],[0,1,0],[-np.sin(0.2*np.pi),0,np.cos(0.2*np.pi)]])
self.RotateMatZ= np.matrix([[np.cos(0.2*np.pi),-np.sin(0.2*np.pi),0],[np.sin(0.2*np.pi),np.cos(0.2*np.pi),0],[0,0,1]])
self.matrice=Parse_Gro.get_Coordinates()
self.initialize()
def initialize(self):
self.grid()
self.canevas.get_tk_widget().grid(column=0,row=0,columnspan=3)
button1 = Tkinter.Button(self,text=u"Rotate Right",command=self.ClickonRight)
button1.grid(column=2,row=2)
button2 = Tkinter.Button(self,text=u"Rotate Left",command=self.ClickonLeft)
button2.grid(column=0,row=2)
button3 = Tkinter.Button(self,text=u"Rotate Up",command=self.ClickonUp)
button3.grid(column=1,row=1)
button4 = Tkinter.Button(self,text=u"Rotate Down",command=self.ClickonDown)
button4.grid(column=1,row=3)
#Sort according to X coordinate (first column)
#self.matrice=np.sort(self.matrice, axis=0)
#Scatter Plot Test
self.subplota.scatter(self.matrice[:,1],self.matrice[:,2],c=self.matrice[:,0],s=self.matrice[:,0]*100)
def ClickonRight(self):
print"Right Rotation"
def ClickonLeft(self):
print"Left Rotation"
def ClickonUp(self):
print"Up Rotation"
def ClickonDown(self):
print"Down Rotation"
if __name__ == "__main__":
app = windowsTk(None)
app.title('Visualisation 3D Proteine')
app.mainloop()
</code></pre>
<p>Here is the second part of the code, in another <code>file.py</code>:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import numpy as np
def get_Coordinates():
path = raw_input("Enter a Path to your PDB file: ")
#Path Example :/home/yoann
os.chdir(path)
filename = raw_input("Enter the name of your PDB file: ")
#Filename Example : 5f4c.pdb
bashCommand = "gmx editconf -f {:s} -o output.gro -center 0 0 0 -aligncenter 0 0 0".format(filename)
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output = process.communicate()[0]
ListX=[]
ListY=[]
ListZ=[]
with open("/home/yoann/output.gro","r") as file:
lines = file.readlines()
for line in lines[2:-1:]:
ListX.append(float(line[22:28]))
ListY.append(float(line[30:36]))
ListZ.append(float(line[38:44]))
matrixCoord=np.column_stack([ListX,ListY,ListZ])
return matrixCoord
</code></pre>
<p>Here is an example of the content of the file read by the function get_Coordinates():</p>
<pre><code>PUTATIVE CYTOPLASMIC PROTEIN
1637
1MET N 1 1.206 1.701 1.641
1MET CA 2 1.077 1.663 1.575
2ASN C 11 0.687 1.503 1.675
2ASN O 12 0.688 1.495 1.550
</code></pre>
<p>Here is what the program should show:</p>
<p>Cheers !</p>
| -1 |
2016-09-22T02:05:45Z
| 39,661,910 |
<p>I'm reading your problem as:</p>
<blockquote>
<p>Why do the circles at negative coordinates <em>on the axis into and out of the page</em> look the same as their absolute value</p>
</blockquote>
<p>I think your problem is in this line:</p>
<pre><code>self.subplota.scatter(
self.matrice[:,1],
self.matrice[:,2],
c=self.matrice[:,0],
s=self.matrice[:,0]*100
)
</code></pre>
<p>Remember that <code>s</code> is the <strong>size</strong> in pixels. It's not any kind of 3D support. A circle with radius <code>-100</code> looks very much the same as a circle with radius <code>100</code>. You should find the min and max of your data, then pick the circle size based on that.</p>
<hr>
<p>Have you considered using <code>mplot3d</code>, and just using a pre-built 3d plot window?</p>
| 1 |
2016-09-23T13:23:11Z
|
[
"python",
"numpy",
"matplotlib",
"negative-number",
"scatter"
] |
Python - Why scatter of matplotlib plots points which have negative coordinates, with positive coordinates?
| 39,629,139 |
<p>So there is a big problem in my script: I have made a numpy array which contains the x, y and z coordinates of some points. Some of these points have negative coordinates (for x and/or y and/or z). And for reasons I don't understand, when I use the scatter function from matplotlib, it plots all points with positive coordinates (it means that if a coordinate is negative, it will be plotted as positive...):</p>
<p><img src="http://i.stack.imgur.com/tzDO3.png" alt="enter image description here"></p>
<p>So my question is simple: why does it do that, and how can I plot points with negative coordinates properly?</p>
<p>Here is my code in 2 parts :</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import Tkinter
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import numpy as np
import os
import subprocess
import Parse_Gro
class windowsTk(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self,parent)
self.parent = parent
self.f = Figure(figsize=(6, 6), dpi=100)
self.canevas = FigureCanvasTkAgg(self.f, master=self)
self.subplota= self.f.add_subplot(111)
self.RotateMatY= np.matrix([[np.cos(0.2*np.pi),0,np.sin(0.2*np.pi)],[0,1,0],[-np.sin(0.2*np.pi),0,np.cos(0.2*np.pi)]])
self.RotateMatZ= np.matrix([[np.cos(0.2*np.pi),-np.sin(0.2*np.pi),0],[np.sin(0.2*np.pi),np.cos(0.2*np.pi),0],[0,0,1]])
self.matrice=Parse_Gro.get_Coordinates()
self.initialize()
def initialize(self):
self.grid()
self.canevas.get_tk_widget().grid(column=0,row=0,columnspan=3)
button1 = Tkinter.Button(self,text=u"Rotate Right",command=self.ClickonRight)
button1.grid(column=2,row=2)
button2 = Tkinter.Button(self,text=u"Rotate Left",command=self.ClickonLeft)
button2.grid(column=0,row=2)
button3 = Tkinter.Button(self,text=u"Rotate Up",command=self.ClickonUp)
button3.grid(column=1,row=1)
button4 = Tkinter.Button(self,text=u"Rotate Down",command=self.ClickonDown)
button4.grid(column=1,row=3)
#Sort according to X coordinate (first column)
#self.matrice=np.sort(self.matrice, axis=0)
#Scatter Plot Test
self.subplota.scatter(self.matrice[:,1],self.matrice[:,2],c=self.matrice[:,0],s=self.matrice[:,0]*100)
def ClickonRight(self):
print"Right Rotation"
def ClickonLeft(self):
print"Left Rotation"
def ClickonUp(self):
print"Up Rotation"
def ClickonDown(self):
print"Down Rotation"
if __name__ == "__main__":
app = windowsTk(None)
app.title('Visualisation 3D Proteine')
app.mainloop()
</code></pre>
<p>Here is the second part of the code, in another <code>file.py</code>:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import numpy as np
def get_Coordinates():
path = raw_input("Enter a Path to your PDB file: ")
#Path Example :/home/yoann
os.chdir(path)
filename = raw_input("Enter the name of your PDB file: ")
#Filename Example : 5f4c.pdb
bashCommand = "gmx editconf -f {:s} -o output.gro -center 0 0 0 -aligncenter 0 0 0".format(filename)
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output = process.communicate()[0]
ListX=[]
ListY=[]
ListZ=[]
with open("/home/yoann/output.gro","r") as file:
lines = file.readlines()
for line in lines[2:-1:]:
ListX.append(float(line[22:28]))
ListY.append(float(line[30:36]))
ListZ.append(float(line[38:44]))
matrixCoord=np.column_stack([ListX,ListY,ListZ])
return matrixCoord
</code></pre>
<p>Here is an example of the content of the file read by the function get_Coordinates():</p>
<pre><code>PUTATIVE CYTOPLASMIC PROTEIN
1637
1MET N 1 1.206 1.701 1.641
1MET CA 2 1.077 1.663 1.575
2ASN C 11 0.687 1.503 1.675
2ASN O 12 0.688 1.495 1.550
</code></pre>
<p>Here is what the program should show:</p>
<p>Cheers !</p>
| -1 |
2016-09-22T02:05:45Z
| 39,689,200 |
<p>Sorry, since I posted this message I have tried running my scripts again and, for reasons I still don't understand, they now work! The capture I put at the beginning of my message is the good one. I think it may be a problem of reloading new data in Python; maybe I should have restarted the session to get things right.
So, if anybody has the same problem, I would advise restarting your Python session (maybe restarting your computer as well). This is what I did, and it works for me.
The scripts here are free to use; just be nice and put a little note in your README or in your sources to say that they are from me (Yoann PAGEAUD) ;) !
Thank you all! Cheers!</p>
| 0 |
2016-09-25T16:31:11Z
|
[
"python",
"numpy",
"matplotlib",
"negative-number",
"scatter"
] |
Create a 2D plot pixel grid based on a pandas series of lists
| 39,629,183 |
<p>Suppose we have a pandas Series of lists where each list contains some characteristics described as strings like this:</p>
<pre><code>0 ["A", "C", "G", ...]
1 ["B", "C", "H", ...]
2 ["A", "X"]
...
N ["J", "K", ...]
</code></pre>
<p>What would be the best/easiest way to plot a 2D pixel grid where the X axis is occurrence of the characteristic and the Y axis each sample in the series 0,1,2,..., N?</p>
<p>Edited on Sept 22 16:</p>
<p>It seems I haven't mentioned explicitly that the list of characteristics isn't necessarily the same size for all observations. Observation 1 can have 4 characteristics, observation 2 can have none, observation 3 can have 5, and so on. So I can't transform them into a numpy array right away without preprocessing them in some way so that the missing characteristics are filled in.</p>
| 0 |
2016-09-22T02:12:31Z
| 39,633,983 |
<p>Using pandas for a 1D histogram seems to be straightforward, as in <a href="http://stackoverflow.com/questions/28418988/how-to-make-a-histogram-from-a-list-of-strings-in-python">this</a> answer. You could use this idea to fill an array of N by 26 and then plot it in 2D with </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import string
from collections import Counter
#Generate list of letters and dataframe
N = 20
M = 1000
letterlist = []
for i in range(N):
    letterlist.append([random.choice(string.ascii_uppercase) for i in range(M)])
df = pd.DataFrame(letterlist)

#Fill an array of size N by 26
im = np.zeros([N,26])
for n in range(N):
    #Get histogram of letters for a line as Dict
    letter_counts = Counter(df.loc[n])
    #Add to array
    for k in letter_counts.keys():
        c = ord(k.lower()) - 97
        im[n,c] = letter_counts[k]

#Plot
plt.imshow(im, interpolation='none')
plt.colorbar()
plt.axis('tight')
plt.xticks(range(26), [i for i in string.ascii_uppercase])
plt.show()
</code></pre>
| 1 |
2016-09-22T08:30:23Z
|
[
"python",
"pandas",
"matplotlib",
"plot",
"series"
] |
Create a 2D plot pixel grid based on a pandas series of lists
| 39,629,183 |
<p>Suppose we have a pandas Series of lists where each list contains some characteristics described as strings like this:</p>
<pre><code>0 ["A", "C", "G", ...]
1 ["B", "C", "H", ...]
2 ["A", "X"]
...
N ["J", "K", ...]
</code></pre>
<p>What would be the best/easiest way to plot a 2D pixel grid where the X axis is occurrence of the characteristic and the Y axis each sample in the series 0,1,2,..., N?</p>
<p>Edited on Sept 22 16:</p>
<p>It seems I haven't mentioned explicitly that the list of characteristics isn't necessarily the same size for all observations. Observation 1 can have 4 characteristics, observation 2 can have none, observation 3 can have 5, and so on. So I can't transform them into a numpy array right away without preprocessing them in some way so that the missing characteristics are filled in.</p>
| 0 |
2016-09-22T02:12:31Z
| 39,642,909 |
<p>Since I already wrote the code for the image in my comment, and Ed seems to have the same interpretation of your question as I do, I'll go ahead and add my solution.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import string
M, N = 100, 10
letters = list(string.ascii_uppercase)
data = np.random.choice(letters, (M, N))
df = pd.DataFrame(data)
# Get frequency of letters in each column using pd.value_counts
df_freq = df.apply(pd.value_counts).T
# Plot frequency dataframe with seaborn heatmap
ax = sns.heatmap(df_freq, linewidths=0.1, annot=False, cbar=True)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/BT43l.png" rel="nofollow"><img src="http://i.stack.imgur.com/BT43l.png" alt="enter image description here"></a></p>
| 2 |
2016-09-22T15:18:34Z
|
[
"python",
"pandas",
"matplotlib",
"plot",
"series"
] |
SFTP with an SSH ProxyCommand in python
| 39,629,209 |
<p>I am downloading some files with the following SSH config:</p>
<pre><code>Host proxy
Hostname proxy.example.com
User proxyuser
IdentityFile ~/.ssh/proxy_id_rsa
Host target # uses password auth
Hostname target.example.com
User targetuser
ProxyCommand ssh proxy nc %h %p
</code></pre>
<p>I'm trying to automate downloading the files--currently using paramiko, but could use another library if it would be easier.</p>
<p>Here's what I'm trying based on some other answers:</p>
<pre><code>from paramiko.proxy import ProxyCommand
from paramiko.transport import Transport
from paramiko.sftp_client import SFTPClient
proxy = ProxyCommand('ssh -i /Users/ben/.ssh/proxy_id_rsa proxyuser@proxy.example.com nc target.example.com 22')
client = SFTPClient(proxy)
client.connect(username='targetuser', password='targetpassword')
</code></pre>
<p>However, this throws the error</p>
<pre><code>Traceback (most recent call last):
File "sftp.py", line 6, in <module>
client = SFTPClient(proxy)
File "/Users/ben/.virtualenvs/venv/lib/python3.5/site-packages/paramiko/sftp_client.py", line 99, in __init__
server_version = self._send_version()
File "/Users/ben/.virtualenvs/venv/lib/python3.5/site-packages/paramiko/sftp.py", line 105, in _send_version
t, data = self._read_packet()
File "/Users/ben/.virtualenvs/venv/lib/python3.5/site-packages/paramiko/sftp.py", line 177, in _read_packet
raise SFTPError('Garbage packet received')
paramiko.sftp.SFTPError: Garbage packet received
</code></pre>
<p>Unfortunately the error message is not very helpful so I'm at a loss for what I could change. I can't change the config on <code>target</code> and I'd prefer not to change the config on <code>proxy</code> if avoidable. Any suggestions?</p>
| 0 |
2016-09-22T02:16:40Z
| 39,629,352 |
<p>Solved with the following:</p>
<pre><code>class PatchedProxyCommand(ProxyCommand):
    # work around https://github.com/paramiko/paramiko/issues/789
    @property
    def closed(self):
        return self.process.returncode is not None

    @property
    def _closed(self):
        # Concession to Python 3 socket-like API
        return self.closed

    def close(self):
        self.process.kill()
        self.process.poll()

proxy = PatchedProxyCommand('ssh -i /Users/ben/.ssh/proxy_id_rsa '
                            'proxyuser@proxy.example.com nc target.example.com 22')
transport = Transport(proxy)
key = HostKeyEntry.from_line('target.example.com ssh-rsa '
                             'AAAAB3NzaC1yc2EAAAA/base64+stuff==').key
transport.connect(hostkey=key,
                  username='targetuser', password='targetpass')
sftp = SFTPClient.from_transport(transport)
print(sftp.listdir())
</code></pre>
| 0 |
2016-09-22T02:34:22Z
|
[
"python",
"ssh",
"proxy",
"sftp",
"paramiko"
] |
What does the code zip( *sorted( zip(units, errors) ) ) do?
| 39,629,253 |
<p>For my application <code>units</code> and <code>errors</code> are always lists of numerical values. I tried googling what each part does and figured out the first part of zip. It seems </p>
<pre><code> ziped_list = zip(units, errors)
</code></pre>
<p>simply pairs the units and errors to produce a list as <code>[...,(unit, error),...]</code>. Then it's passed to <code>sorted</code>, which sorts the elements. Since I did not provide the argument for <code>key</code>, it compares the elements directly, as the documentation implies:</p>
<blockquote>
<p>The default value is None (compare the elements directly).</p>
</blockquote>
<p>Since the <code>ziped_list</code> is a list of tuples of integers, then it seems that it makes a comparison between tuples directly. From a small example in my terminal (python 3) it seems it compares based on the first element (even though the documentation implies the comparison is element wise):</p>
<pre><code>>>> (1,None) < (2,None)
True
>>> (2,None) < (1,None)
False
</code></pre>
<p>The last bit the unpacking and then zip still remain a mystery and I have not been able to figure out what they do. I understand that <code>*</code> unpacks to positional argument but doing <code>*</code> doesn't let me see exactly what its doing if I try it in the command line. What further confuses me is why <code>zip</code> requires to be passed as argument an unpacked list such as <code>*sorted</code> if it already takes as an argument <code>zip(*iterable)</code> a variable called iterable. It just seems confusing (to me) why we would need to unpack something that just allows as input a list of iterables.</p>
| 2 |
2016-09-22T02:21:37Z
| 39,629,483 |
<p>If you don't unpack the list, it is passed as a single argument, so zip can't aggregate elements from each of the iterables.
For example:</p>
<pre><code>a = [3, 2, 1,]
b = ['a', 'b', 'c']
ret = zip(a, b)
the_list = sorted(ret)
the_list >> [(1, 'c'), (2, 'b'), (3, 'a')]
</code></pre>
<p><code>zip(*the_list)</code> is equal to <code>zip((1, 'c'), (2, 'b'), (3, 'a'))</code></p>
<p>output : <code>[(1, 2, 3), ('c', 'b', 'a')]</code></p>
<p>If you just use <code>zip(the_list)</code> is equal to <code>zip([(1, 'c'), (2, 'b'), (3, 'a')],)</code></p>
<p>output: <code>[((1, 'c'),), ((2, 'b'),), ((3, 'a'),)]</code></p>
<p>You can also see <a href="http://stackoverflow.com/questions/36901/what-does-double-star-and-star-do-for-python-parameters">What does ** (double star) and * (star) do for Python parameters?</a></p>
| 2 |
2016-09-22T02:50:00Z
|
[
"python"
] |
What does the code zip( *sorted( zip(units, errors) ) ) do?
| 39,629,253 |
<p>For my application <code>units</code> and <code>errors</code> are always lists of numerical values. I tried googling what each part does and figured out the first part of zip. It seems </p>
<pre><code> ziped_list = zip(units, errors)
</code></pre>
<p>simply pairs the units and errors to produce a list as <code>[...,(unit, error),...]</code>. Then it's passed to <code>sorted</code>, which sorts the elements. Since I did not provide the argument for <code>key</code>, it compares the elements directly, as the documentation implies:</p>
<blockquote>
<p>The default value is None (compare the elements directly).</p>
</blockquote>
<p>Since the <code>ziped_list</code> is a list of tuples of integers, then it seems that it makes a comparison between tuples directly. From a small example in my terminal (python 3) it seems it compares based on the first element (even though the documentation implies the comparison is element wise):</p>
<pre><code>>>> (1,None) < (2,None)
True
>>> (2,None) < (1,None)
False
</code></pre>
<p>The last bit the unpacking and then zip still remain a mystery and I have not been able to figure out what they do. I understand that <code>*</code> unpacks to positional argument but doing <code>*</code> doesn't let me see exactly what its doing if I try it in the command line. What further confuses me is why <code>zip</code> requires to be passed as argument an unpacked list such as <code>*sorted</code> if it already takes as an argument <code>zip(*iterable)</code> a variable called iterable. It just seems confusing (to me) why we would need to unpack something that just allows as input a list of iterables.</p>
| 2 |
2016-09-22T02:21:37Z
| 39,629,666 |
<p>Seems you've already figured out what <code>zip</code> does. </p>
<p>When you sort the zipped list, <code>sorted</code> compares the first element of each tuple, and sorts the list. If the first elements are equal, the order is determined by the second element. </p>
<p>The <code>*</code> operator then unpacks the sorted list. </p>
<p>Finally, the second <code>zip</code> recombines the output. </p>
<p>So what you end up with is two tuples: the first contains the units, sorted from smallest to largest, and the second the corresponding errors.</p>
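<p>A compact illustration of the whole pipeline, using made-up sample data:</p>

```python
units = [3, 1, 2]
errors = [0.3, 0.1, 0.2]

# Pair, sort by unit, then "transpose" back into two sequences.
pairs = sorted(zip(units, errors))         # [(1, 0.1), (2, 0.2), (3, 0.3)]
sorted_units, sorted_errors = zip(*pairs)  # (1, 2, 3) and (0.1, 0.2, 0.3)
```

<p>Note that the results are tuples, not lists; wrap them in <code>list()</code> if you need lists.</p>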
| 1 |
2016-09-22T03:13:52Z
|
[
"python"
] |
How could I close Excel file using pywinauto
| 39,629,324 |
<p>I'm having a problem: I can't close the Excel file.</p>
<p>I was using swapy + pywinauto.<br>
The program exports an Excel file with a different name each time (e.g. based on the time).
I used swapy to close the exported Excel file.</p>
<pre><code>from pywinauto.application import Application
app = Application().Start(cmd_line=u'"C:\\Program Files\\Microsoft Office\\Office14\\EXCEL.EXE" \\dde')
xlmain = app.XLMAIN
xlmain.Wait('ready')
xlmain.Close()
app.Kill_()
</code></pre>
<p>but got error below.</p>
<pre><code>Traceback (most recent call last):
File "D:/23007.py", line 54, in <module>
xlmain.Wait('ready')
WaitUntil(timeout, retry_interval, lambda: self.__check_all_conditions(check_method_names))
File "C:\Python35\lib\site-packages\pywinauto\timings.py", line 308, in WaitUntil
raise err
pywinauto.timings.TimeoutError: timed out
Process finished with exit code 1
</code></pre>
<p><a href="http://i.stack.imgur.com/c1BOC.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/c1BOC.jpg" alt="enter image description here"></a></p>
| 0 |
2016-09-22T02:31:34Z
| 39,638,436 |
<p>Why do you use <code>app.XLMAIN</code>? Is the title of the window similar to <code>XLMAIN</code>? Usually the title is <code><file name> - Excel</code>, so pywinauto can handle it like this: <code>xlmain = app["<file name> - Excel"]</code>.</p>
<p>Obviously <code>Wait('ready')</code> raised an exception because the window with title <code>"XLMAIN"</code> or similar is not found.</p>
<p>Generally I would recommend using pyWin32 standard module <code>win32com.client</code> to work with Excel (through standard COM interface). See the second answer here for example: <a href="http://stackoverflow.com/questions/441758/driving-excel-from-python-in-windows">Driving Excel from Python in Windows</a></p>
| 1 |
2016-09-22T11:58:38Z
|
[
"python",
"excel",
"pywinauto"
] |
numpy.genfromtxt flattens my data
| 39,629,337 |
<p>I have <code>row.csv</code></p>
<pre><code>1, 2, 3, 4, 5
</code></pre>
<p>and <code>column.csv</code></p>
<pre><code>1,
2,
3,
4,
5
</code></pre>
<p><code>numpy.genfromtxt("one-of-the-above.csv", delimiter=",")</code> reads them into the same array: <code>np.array([1,2,3,4,5])</code></p>
<p>but I would like to get distinct 2D arrays instead.</p>
| 1 |
2016-09-22T02:33:05Z
| 39,629,395 |
<p>That is a "feature" of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow"><code>genfromtxt</code></a>. Before it returns the data that it read from the file, it uses <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html" rel="nofollow"><code>numpy.squeeze</code></a> to eliminate trivial dimensions.</p>
<p>If you use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow"><code>loadtxt</code></a> instead, you can use the argument <code>ndmin=2</code> to ensure that the result is always a two-dimensional array.</p>
<p>For example:</p>
<pre><code>In [15]: !cat one_row.csv
1, 2, 3, 4, 5
In [16]: !cat one_col.csv
1
2
3
4
5
In [17]: np.loadtxt('one_row.csv', delimiter=',', ndmin=2)
Out[17]: array([[ 1., 2., 3., 4., 5.]])
In [18]: np.loadtxt('one_col.csv', delimiter=',', ndmin=2)
Out[18]:
array([[ 1.],
       [ 2.],
       [ 3.],
       [ 4.],
       [ 5.]])
</code></pre>
<p>If you would like to see this option added to <code>genfromtxt</code>, add a comment or a "thumbs up" to the issue that I created for it here: <a href="https://github.com/numpy/numpy/issues/4811" rel="nofollow">https://github.com/numpy/numpy/issues/4811</a></p>
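<p>If you have to stay with <code>genfromtxt</code>, one workaround (a sketch) is to restore the trivial dimension afterwards with <code>np.atleast_2d</code>. Be aware that the squeezed 1-D result carries no orientation, so a single column also needs an explicit transpose:</p>

```python
import io
import numpy as np

# One row: atleast_2d turns shape (5,) into (1, 5).
row = np.atleast_2d(np.genfromtxt(io.StringIO(u"1, 2, 3, 4, 5"), delimiter=","))

# One column: the squeezed result is also (5,), so transpose to get (5, 1).
col = np.atleast_2d(np.genfromtxt(io.StringIO(u"1\n2\n3\n4\n5"), delimiter=",")).T
```

<p>This only helps if you already know which orientation the file had, which is exactly why <code>loadtxt(..., ndmin=2)</code> is the cleaner option.</p>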
| 1 |
2016-09-22T02:38:44Z
|
[
"python",
"numpy"
] |
SoftLayer Api: How to get the currently consume of bandwidth packet traffic data?
| 39,629,340 |
<p>I have a SoftLayer VM whose billing mode is monthly. I have bought a 250 GB bandwidth package for it. The SoftLayer website shows this VM's bandwidth usage as 34.18 MB, as shown in the following figure:
<a href="http://i.stack.imgur.com/p43t3.png" rel="nofollow"><img src="http://i.stack.imgur.com/p43t3.png" alt="enter image description here"></a></p>
<p>However, when I use SoftLayer_Virtual_Guest:getBandwidthTotal to get this VM's bandwidth, the result is 45500302, which does not equal 34.18 MB. </p>
<p>Q1: Which bandwidth usage figure is the real consumption?
Q2: If the API I used is not correct, please show me the correct one.
Regards~ </p>
| 0 |
2016-09-22T02:33:39Z
| 39,648,394 |
<p>Please use the "IEEE notation: kilobyte = 1000 bytes"; it means that you need to do the following: <code>45500302/1000/1000 = 45.5</code>. There are some variations between UI and API values.</p>
<p>You can use this online tool:</p>
<p><a href="http://www.matisse.net/bitcalc/?input_amount=45500302&input_units=bytes&notation=ieee" rel="nofollow">http://www.matisse.net/bitcalc/?input_amount=45500302&input_units=bytes&notation=ieee</a></p>
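<p>For reference, the arithmetic behind the two conventions (SI decimal units vs. IEC binary units) can be written out directly:</p>

```python
bytes_used = 45500302

mb_si = bytes_used / 1000.0 / 1000.0  # ~45.5 MB  (SI: 1 MB = 1000*1000 bytes)
mib = bytes_used / 1024.0 / 1024.0    # ~43.4 MiB (IEC: 1 MiB = 1024*1024 bytes)
```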
<h3>References:</h3>
<p><a href="http://stackoverflow.com/questions/39286157/projected-data-for-bandwidth-in-softlayer">Projected Data for Bandwidth in Softlayer</a></p>
<p><a href="http://stackoverflow.com/questions/36308040/bandwidth-summary-per-server">Bandwidth summary per server</a></p>
<p><a href="http://stackoverflow.com/questions/36014415/softlayer-rest-api-get-bandwidth-data-by-date">SoftLayer REST API get Bandwidth Data By Date</a></p>
<p><a href="http://stackoverflow.com/questions/36308040/bandwidth-summary-per-server">Bandwidth summary per server</a></p>
<p>I hope this information help you.</p>
| 0 |
2016-09-22T20:33:48Z
|
[
"python",
"api",
"softlayer"
] |
input function does not work & returns FileInput object
| 39,629,406 |
<p>I'm trying to get user input via a function call in a small Python program, but when I use the <code>input()</code> method, the Python shell does not let me input anything and quits by itself.</p>
<pre><code>from fileinput import input
def funcinput():
    comm = input("-->")
    print('your input is %s' %comm)

if __name__ == "__main__":
    print("try to get input")
    funcinput()
</code></pre>
<p>Code looks like this and every time I run it, python launcher just give me:</p>
<pre><code>try to get input
your input is <fileinput.FileInput object at 0x10464c208>
>>>
</code></pre>
<p>without let me input anything</p>
<p>What should I do to make it work?</p>
| 1 |
2016-09-22T02:40:08Z
| 39,629,453 |
<p>The cause here is a masking of the built-in <a href="https://docs.python.org/3.6/library/functions.html#input" rel="nofollow"><code>input</code></a>. At some point, a <code>from fileinput import input</code> occurs, which masks <code>builtins.input</code> and replaces it with <a href="https://docs.python.org/3.6/library/fileinput.html#fileinput.input" rel="nofollow"><code>fileinput.input</code></a>. That function doesn't return a <code>str</code> but, instead, returns an instance of <a href="https://docs.python.org/3.6/library/fileinput.html#fileinput.FileInput" rel="nofollow"><code>FileInput</code></a>:</p>
<pre><code>>>> from fileinput import input
</code></pre>
<p>Using <code>input</code> now picks up <code>fileinput.input</code>: </p>
<pre><code>>>> print(input("--> "))
<fileinput.FileInput object at 0x7f16bd465160>
</code></pre>
<p>If importing <code>fileinput</code> is <em>necessary</em>, import the module and access the <code>input</code> function using dot notation:</p>
<pre><code>>>> import fileinput
>>> print(fileinput.input("--> ")) # fileinput input, returns FileInput instance.
>>> print(input("--> ")) # original input, returns str
</code></pre>
<p>If that option fails (or you have no control over the imports for some reason), you can delete the imported reference so that the name look-up falls back to the built-in version:</p>
<pre><code>>>> from fileinput import input
>>> print(input("--> "))
<fileinput.FileInput at 0x7f16bd465f60>
</code></pre>
<p>Now, after deleting the reference with <code>del input</code>:</p>
<pre><code>>>> del input
>>> input("--> ")
-->
</code></pre>
<p><code>input</code> works as it always does.</p>
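<p>On Python 3 there is also a shadowing-proof route: the original function always remains reachable through the <code>builtins</code> module, so nothing has to be deleted (a small sketch):</p>

```python
import builtins
import fileinput

from fileinput import input  # shadows the built-in name in this module

assert input is fileinput.input     # the local name now refers to fileinput
assert builtins.input is not input  # builtins.input is still the original
# builtins.input("--> ") would prompt and return a str as usual
```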
| 2 |
2016-09-22T02:45:59Z
|
[
"python",
"python-3.x",
"input"
] |
Writing terminal output to terminal and to a file?
| 39,629,435 |
<p>I have different functions that write string and text output to the terminal during execution. The command I have used is <code>sys.stdout.write</code>.</p>
<p>In another function I am creating a file and calling the functions mentioned above. When running this function I need to write all the <code>sys.stdout.write</code> output from the different functions to the file as well as to the terminal.</p>
<p>Sample code is like:</p>
<pre><code>def func1():
    sys.stdout.write('A')
    sys.stdout.write('B')

def func2():
    sys.stdout.write('C')
    sys.stdout.write('D')

def main():
    fileName=open("log.txt","w+")
    func1()
    func2()
--------------------
# need output ABCD both in terminal and write in logfile as well
# unfortunately cannot change anything in func1() and func2()
</code></pre>
| 1 |
2016-09-22T02:44:05Z
| 39,629,459 |
<pre><code>import sys

def func1():
    sys.stdout.write('A')
    sys.stdout.write('B')

def func2():
    sys.stdout.write('C')
    sys.stdout.write('D')

class MyCoolOs:
    def __init__(self,stdout,f):
        self.stdout = stdout
        self.f = f

    def write(self,s):
        self.f.write(s)
        self.stdout.write(s)

def main():
    f=open("log.txt","a")
    sys.stdout = MyCoolOs(sys.stdout,f)
    func1()
    func2()

main()
</code></pre>
| 1 |
2016-09-22T02:47:12Z
|
[
"python"
] |
Writing terminal output to terminal and to a file?
| 39,629,435 |
<p>I have different functions that write string and text output to the terminal during execution. The command I have used is <code>sys.stdout.write</code>.</p>
<p>In another function I am creating a file and calling the functions mentioned above. When running this function I need to write all the <code>sys.stdout.write</code> output from the different functions to the file as well as to the terminal.</p>
<p>Sample code is like:</p>
<pre><code>def func1():
    sys.stdout.write('A')
    sys.stdout.write('B')

def func2():
    sys.stdout.write('C')
    sys.stdout.write('D')

def main():
    fileName=open("log.txt","w+")
    func1()
    func2()
--------------------
# need output ABCD both in terminal and write in logfile as well
# unfortunately cannot change anything in func1() and func2()
</code></pre>
| 1 |
2016-09-22T02:44:05Z
| 39,629,587 |
<p>Use a context manager to create a wrapper that writes to a file and <code>stdout</code>:</p>
<pre><code>class Tee(object):
    def __init__(self, log_file_name, stdout):
        self.log_file_name = log_file_name
        self.stdout = stdout

    def __enter__(self):
        self.log_file = open(self.log_file_name, 'a', 0)
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self.log_file.close()

    def write(self, data):
        self.log_file.write(data)
        self.stdout.write(data)
        self.stdout.flush()

def main():
    with Tee('log.txt', sys.stdout) as sys.stdout:
        func1()
        func2()
</code></pre>
<p>Don't forget to flush <code>stdout</code> after each write as it is buffered.</p>
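<p>A quick way to sanity-check this pattern without touching the real terminal is to point a simplified variant (hypothetical name <code>MiniTee</code>) at two in-memory streams:</p>

```python
import io

class MiniTee(object):
    """Simplified stand-in for the Tee above, fanning writes out to any streams."""
    def __init__(self, *streams):
        self.streams = streams

    def write(self, data):
        for s in self.streams:
            s.write(data)

log, fake_stdout = io.StringIO(), io.StringIO()
tee = MiniTee(log, fake_stdout)
tee.write("AB")
tee.write("CD")
# both targets now hold "ABCD"
```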
| 1 |
2016-09-22T03:04:04Z
|
[
"python"
] |
identify csv in python
| 39,629,443 |
<p>I have a data dump that is a "messed up" CSV. (About 100 files, each with about 1000 lines of actual CSV data.)<br>
The dump has some other text in addition to CSV. How can I extract the CSV part separately, programmatically?</p>
<p>As an example the data file looks like something like this</p>
<pre><code>Session:1
Data collection date: 09-09-2016
Related questions:
Question 1: parta, partb, partc,
Question 2: parta, partb, partc
"field1","field2","field3","field4"
"data11","data12","data13","data14"
"data21","data22","data23","data24"
"data31","data32","data33","data34"
"data41","data42","data43","data44"
"data51","data52","data53","data54"
</code></pre>
<p>I need to extract the csv part.<br></p>
<p>Caveats, <br>
the text in the beginning is NOT limited to 4 - 5 lines.<br>
the additional text is NOT just in the beginning of the file <br></p>
<p>I saw <a href="http://stackoverflow.com/questions/3952132/how-do-you-dynamically-identify-unknown-delimiters-in-a-data-file">this post</a> that suggests using re.split and/or csv.Sniffer,
however my attempt was not fruitful.</p>
<pre><code>with open("untitled.csv") as csvfile:
    dialect = csv.Sniffer().sniff(csvfile.read(1024))
    csvfile.seek(0)
    print(dialect.__dict__)
    csvstarts = False
    csvdump = []
    for ln in csvfile.readlines():
        toks = re.split(r'[,]', ln)
        print(toks)
        if toks[0] == '"field1"' and not csvstarts: # identify by the header line
            csvstarts = True
            continue
        if csvstarts:
            if toks[0] == '"field1"': # identify the start of subsequent csv data
                csvstarts = False
                continue
            csvdump.append(ln) # record the current line
    print(csvdump)
</code></pre>
<p>For now I am able to identify the csv lines accurately ONLY if there is one bunch of data.</p>
<p>Is there anything better I can do?</p>
| 0 |
2016-09-22T02:44:58Z
| 39,629,694 |
<p>If your csv lines and only those lines start with \", then you can do this:</p>
<pre><code>import csv
data = list(csv.reader(open("test.csv", 'rb'), quotechar='¬'))
# for quotechar - use something that won't turn up in data

def importCSV(data):
    # outputs list of list with required data
    # works on the assumption that all required data starts with \"
    # and that no text starts with \"
    out = []
    for line in data:
        if (line != []) and (line[0][0] == "\""):
            line = [el.replace("\"", "") for el in line]
            out.append(line)
    return out

useful = importCSV(data)
</code></pre>
| 0 |
2016-09-22T03:18:09Z
|
[
"python",
"csv"
] |
identify csv in python
| 39,629,443 |
<p>I have a data dump that is a "messed up" CSV. (About 100 files, each with about 1000 lines of actual CSV data.)<br>
The dump has some other text in addition to CSV. How can I extract the CSV part separately, programmatically?</p>
<p>As an example the data file looks like something like this</p>
<pre><code>Session:1
Data collection date: 09-09-2016
Related questions:
Question 1: parta, partb, partc,
Question 2: parta, partb, partc
"field1","field2","field3","field4"
"data11","data12","data13","data14"
"data21","data22","data23","data24"
"data31","data32","data33","data34"
"data41","data42","data43","data44"
"data51","data52","data53","data54"
</code></pre>
<p>I need to extract the csv part.<br></p>
<p>Caveats, <br>
the text in the beginning is NOT limited to 4 - 5 lines.<br>
the additional text is NOT just in the beginning of the file <br></p>
<p>I saw <a href="http://stackoverflow.com/questions/3952132/how-do-you-dynamically-identify-unknown-delimiters-in-a-data-file">this post</a> that suggests using re.split and/or csv.Sniffer,
however my attempt was not fruitful.</p>
<pre><code>with open("untitled.csv") as csvfile:
    dialect = csv.Sniffer().sniff(csvfile.read(1024))
    csvfile.seek(0)
    print(dialect.__dict__)
    csvstarts = False
    csvdump = []
    for ln in csvfile.readlines():
        toks = re.split(r'[,]', ln)
        print(toks)
        if toks[0] == '"field1"' and not csvstarts: # identify by the header line
            csvstarts = True
            continue
        if csvstarts:
            if toks[0] == '"field1"': # identify the start of subsequent csv data
                csvstarts = False
                continue
            csvdump.append(ln) # record the current line
    print(csvdump)
</code></pre>
<p>For now I am able to identify the csv lines accurately ONLY if there is one bunch of data.</p>
<p>Is there anything better I can do?</p>
| 0 |
2016-09-22T02:44:58Z
| 39,629,720 |
<p>How about this:</p>
<pre><code>import re
my_pattern = re.compile("(\"[\w]+\",)+")

with open('<your_file>', 'rb') as fi:
    for f in fi:
        result = my_pattern.match(f)
        if result:
            print f
</code></pre>
<p>Assuming the csv data can be differentiated from the rest by having no special characters in them (we only accept each element to have letters or numbers surrounded by double quotes and a comma to separate from the next element)</p>
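<p>A self-contained check of that idea (hypothetical sample data inlined, with a raw string used for the pattern):</p>

```python
import re

sample = '''Session:1
Related questions:
"field1","field2","field3"
"data11","data12","data13"
'''

# Lines that begin with one or more quoted, comma-terminated fields are CSV.
pattern = re.compile(r'^("[\w]+",)+')
csv_lines = [ln for ln in sample.splitlines() if pattern.match(ln)]
# csv_lines -> ['"field1","field2","field3"', '"data11","data12","data13"']
```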
| 1 |
2016-09-22T03:21:30Z
|
[
"python",
"csv"
] |
identify csv in python
| 39,629,443 |
<p>I have a data dump that is a "messed up" CSV. (About 100 files, each with about 1000 lines of actual CSV data.)<br>
The dump has some other text in addition to CSV. How can I extract the CSV part separately, programmatically?</p>
<p>As an example the data file looks like something like this</p>
<pre><code>Session:1
Data collection date: 09-09-2016
Related questions:
Question 1: parta, partb, partc,
Question 2: parta, partb, partc
"field1","field2","field3","field4"
"data11","data12","data13","data14"
"data21","data22","data23","data24"
"data31","data32","data33","data34"
"data41","data42","data43","data44"
"data51","data52","data53","data54"
</code></pre>
<p>I need to extract the csv part.<br></p>
<p>Caveats, <br>
the text in the beginning is NOT limited to 4 - 5 lines.<br>
the additional text is NOT just in the beginning of the file <br></p>
<p>I saw <a href="http://stackoverflow.com/questions/3952132/how-do-you-dynamically-identify-unknown-delimiters-in-a-data-file">this post</a> that suggests using re.split and/or csv.Sniffer,
however my attempt was not fruitful.</p>
<pre><code>with open("untitled.csv") as csvfile:
    dialect = csv.Sniffer().sniff(csvfile.read(1024))
    csvfile.seek(0)
    print(dialect.__dict__)
    csvstarts = False
    csvdump = []
    for ln in csvfile.readlines():
        toks = re.split(r'[,]', ln)
        print(toks)
        if toks[0] == '"field1"' and not csvstarts: # identify by the header line
            csvstarts = True
            continue
        if csvstarts:
            if toks[0] == '"field1"': # identify the start of subsequent csv data
                csvstarts = False
                continue
            csvdump.append(ln) # record the current line
    print(csvdump)
</code></pre>
<p>For now I am able to identify the csv lines accurately ONLY if there is one bunch of data.</p>
<p>Is there anything better I can do?</p>
| 0 |
2016-09-22T02:44:58Z
| 39,629,967 |
<p>Can you not read each line and do a regex to see whether or not to pull the data?
Maybe something like:</p>
<pre><code>^(["][\w]*["][,])+["][\w]*["]$
</code></pre>
<p>My regex is not the best and there is likely a better way, but that seemed to work for me.</p>
| 0 |
2016-09-22T03:54:25Z
|
[
"python",
"csv"
] |
Printed items from List do not match what was inserted
| 39,629,464 |
<p>Sorry guys, a bit new to Python so please bear with me.</p>
<p>I am attempting, just for S and G's to create a small python program that will take X number of digits of Pi, and play them as winsound.beeps (don't ask).</p>
<p>I got the beeps to beep, and I got Pi inserted into a list. When I print the list, it's not correct. Can anyone point to what I may have done wrong?</p>
<pre><code>#!/usr/bin/python
from mpmath import *
import winsound
mp.dps = 10
floatPi = mp.pi
print(floatPi)
conPi = str(floatPi)
print(conPi)
strPi = conPi.replace(".", "")
print(strPi)
listPi = []
for digit in strPi:
    listPi.append(int(digit))
print listPi

#winsound.Beep(floatPi*100, 300)

for number in listPi:
    print(listPi[number])
    #winsound.Beep(listPi[number]*100, 300)
</code></pre>
<p>Results are as follows </p>
<pre><code>3.141592654
3.141592654
3141592654
[3, 1, 4, 1, 5, 9, 2, 6, 5, 4]
1
1
5
1
9
4
4
2
9
5
</code></pre>
<p>Why is the printed list from the for loop not Pi?</p>
| 0 |
2016-09-22T02:47:38Z
| 39,629,488 |
<p>Use the following as your last for loop:</p>
<pre><code>for number in listPi:
    print(number)
</code></pre>
<p>The for loop gives you the contents of the list one by one. You don't need to look them up in the list again.</p>
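<p>If you do want the position of each digit as well, <code>enumerate</code> is the usual idiom; it yields index/value pairs, so an index is never confused with a value:</p>

```python
listPi = [3, 1, 4, 1, 5, 9, 2, 6, 5, 4]

for i, number in enumerate(listPi):
    assert listPi[i] == number  # index and value always stay in sync
```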
| 0 |
2016-09-22T02:50:36Z
|
[
"python",
"list",
"python-2.7"
] |
Printed items from List do not match what was inserted
| 39,629,464 |
<p>Sorry guys, a bit new to Python so please bear with me.</p>
<p>I am attempting, just for S and G's to create a small python program that will take X number of digits of Pi, and play them as winsound.beeps (don't ask).</p>
<p>I got the beeps to beep, and I got Pi inserted into a list. When I print the list, it's not correct. Can anyone point to what I may have done wrong?</p>
<pre><code>#!/usr/bin/python
from mpmath import *
import winsound
mp.dps = 10
floatPi = mp.pi
print(floatPi)
conPi = str(floatPi)
print(conPi)
strPi = conPi.replace(".", "")
print(strPi)
listPi = []
for digit in strPi:
listPi.append(int(digit))
print listPi
#winsound.Beep(floatPi*100, 300)
for number in listPi:
print(listPi[number])
#winsound.Beep(listPi[number]*100, 300)
</code></pre>
<p>Results are as follows </p>
<pre><code>3.141592654
3.141592654
3141592654
[3, 1, 4, 1, 5, 9, 2, 6, 5, 4]
1
1
5
1
9
4
4
2
9
5
</code></pre>
<p>Why is the printed list from the for loop not Pi?</p>
| 0 |
2016-09-22T02:47:38Z
| 39,629,504 |
<p>The code should be:</p>
<pre><code>for number in listPi:
print(number)
</code></pre>
<p>Because here you are iterating through the contents of the list, not the indices associated with them. For example, take the first case: <code>print(listPi[number])</code> essentially prints <code>listPi[3]</code>, which is <code>1</code>.</p>
<p>And if you are coming from C/C++/Java,</p>
<pre><code>for i in range(0,len(listPi),1):
print(listPi[i])
</code></pre>
<p>This will be more familiar.</p>
| 0 |
2016-09-22T02:52:10Z
|
[
"python",
"list",
"python-2.7"
] |
Using if for multiple input lists in Python list comprehension
| 39,629,591 |
<p>How do you use an if statement in a list comprehension when there are multiple input lists. Here is the code that I'm using and the error that I'm getting: </p>
<p>(I get that it's not able to apply modulus to a list, but not sure how to specifically reference the x in each of the lists as it iterates through them)</p>
<pre><code>a = [1,2,3]
b = [4,5,6]
nums = [x**2 for x in (a,b) if x%2==0]
print(nums)
TypeError: unsupported operand type(s) for %: 'list' and 'int'
</code></pre>
| 0 |
2016-09-22T03:04:15Z
| 39,629,616 |
<p>This isn't caused by the <code>if</code> statement; the issue here is with <code>x in (a, b)</code>. When that executes, <code>x</code> takes on a <code>list</code> value (first <code>a</code>, then <code>b</code>) and then Python will try to evaluate your <code>if</code> condition on it, so an expression of the form:</p>
<pre><code>[1, 2, 3] % 2
</code></pre>
<p>is done, which obviously isn't allowed.</p>
<p>Instead, use <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><code>chain</code></a> from <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow"><code>itertools</code></a> to chain both lists together and make <code>x</code> take values from them:</p>
<pre><code>from itertools import chain

a = [1,2,3]
b = [4,5,6]
nums = [x**2 for x in chain(a,b) if x%2==0]
print(nums)
[4, 16, 36]
</code></pre>
<p>If you're using Python <code>>= 3.5</code> you could also unpack in the list literal <code>[]</code>:</p>
<pre><code>nums = [x**2 for x in [*a, *b] if x%2==0]
</code></pre>
| 3 |
2016-09-22T03:07:22Z
|
[
"python",
"python-3.x",
"list-comprehension"
] |
Using if for multiple input lists in Python list comprehension
| 39,629,591 |
<p>How do you use an if statement in a list comprehension when there are multiple input lists. Here is the code that I'm using and the error that I'm getting: </p>
<p>(I get that it's not able to apply modulus to a list, but not sure how to specifically reference the x in each of the lists as it iterates through them)</p>
<pre><code>a = [1,2,3]
b = [4,5,6]
nums = [x**2 for x in (a,b) if x%2==0]
print(nums)
TypeError: unsupported operand type(s) for %: 'list' and 'int'
</code></pre>
| 0 |
2016-09-22T03:04:15Z
| 39,629,737 |
<p>As Jim said, you are applying <code>mod</code> to a list with an <code>int</code>.</p>
<p>You can also use <code>+</code>, e.g., <code>nums = [x**2 for x in a+b if x%2==0]</code>.</p>
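<p>For completeness, the concatenation version run end to end (the expected output follows from squaring the even values 2, 4 and 6):</p>

```python
a = [1, 2, 3]
b = [4, 5, 6]

# a + b builds one combined list, so x is a plain int on every pass
nums = [x**2 for x in a + b if x % 2 == 0]
print(nums)  # [4, 16, 36]
```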
| 1 |
2016-09-22T03:24:17Z
|
[
"python",
"python-3.x",
"list-comprehension"
] |
Not able to move image in Pygame
| 39,629,679 |
<p>I am a beginner in Python. I am trying to move an image using the pygame module, but I am not able to shift the position of the image. Can you help me understand what I am doing wrong?</p>
<pre><code>import pygame, sys
from pygame.locals import *
pygame.init()
image = pygame.image.load("ball.jpg")
image = pygame.transform.scale(image, (100, 100))
imgrect = image.get_rect()
Canvas = pygame.display.set_mode((500, 500))
pygame.display.set_caption('Text Input')
imgrect.left = 200
imgrect.top = 200
Canvas.blit(image, imgrect)
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type == KEYDOWN :
if event.key == K_ESCAPE:
pygame.quit()
sys.exit()
if event.key == K_UP:
imgrect.top += 1
if event.key == K_DOWN:
imgrect.top -= 1
</code></pre>
| 0 |
2016-09-22T03:14:56Z
| 39,630,580 |
<p>A basic game loop should do three things: handle events, update and draw. I see the logic where you update the position of the rectangle, but you don't redraw the image at the new position.</p>
<p>I've added lines at the bottom of the game loop to draw the scene.</p>
<pre><code>while True:
# handle events
# update logic
# draw
Canvas.fill((0, 0, 0)) # Clears the previous image.
Canvas.blit(image, imgrect) # Draws the image at the new position.
pygame.display.update() # Updates the screen.
</code></pre>
| 1 |
2016-09-22T04:57:22Z
|
[
"python",
"pygame"
] |
Python "IndentationError: expected an indented block" After Checking Indentation
| 39,629,684 |
<p>I keep getting the following error even though I've checked all the indents to make sure they're real indents and not stray spaces:</p>
<pre><code>File "", line 13
    """
    ^
IndentationError: expected an indented block
</code></pre>
<pre><code>import random
def guess_the_number():
number, guesses = random.randint( 1, 20 ), 1
inpName = input('Hello! Enter your name? \n')
print('{}, there is a number f a number between 1 and 20.'.format(inpName))
print('Take a guess what it is.')
guess = input()
while guess != number:
if guess < number:
print('That is too low.')
print('Take another guess.')
guesses += 1
guess = input()
else:
print('That is to high.')
print('Take another guess.')
guesses += 1
guess = input()
if guess == number:
print('Good job {} you guessed the number in {} guesses!'.format(inpName, guesses))
</code></pre>
| 0 |
2016-09-22T03:16:27Z
| 39,629,977 |
<p>Also for your further references, it is recommended by the <a href="https://www.python.org/dev/peps/pep-0008/#tabs-or-spaces" rel="nofollow">Style Guide for Python Code</a> that </p>
<blockquote>
<p>Spaces are the preferred indentation method. Tabs should be used solely to remain consistent with code that is already indented with tabs.</p>
</blockquote>
| 0 |
2016-09-22T03:55:10Z
|
[
"python",
"python-3.x"
] |
Python multidimensional array list index out of range
| 39,629,728 |
<p>I'm trying to make a program that will read a text file of names and numbers and make it into a 2d array with the names and numbers, then remove the item from the array if the number is 1. However, the code I wrote to access the index in the 2d array doesn't work and throws the error "IndexError: list index out of range". This is the block of code that isn't working:</p>
<pre><code>for i in range(x):
list2[i][1] = int(list2[i][1])
if int(list2[i][1]) == 1:
list2.pop(i)
</code></pre>
<p>This is the traceback:</p>
<pre><code>File "/Users/cat/PycharmProjects/myCS106/Names.py", line 16, in updateNames
list2[i][1] = int(list2[i][1])
IndexError: list index out of range
</code></pre>
<p>This is an example of what list2 might look like:</p>
<pre><code>[["John Doe","1"],["Jane Smith","0"],["Firstname Lastname","1"]]
</code></pre>
<p>What am I doing wrong, and how do I correctly access the second part (in this case, the number) of the items in the array?</p>
| 0 |
2016-09-22T03:23:09Z
| 39,629,797 |
<p>In case you just want to remove some elements from the list you could iterate it from end to beginning which would allow you to remove items without causing <code>IndexError</code>:</p>
<pre><code>list2 = [["John Doe","1"],["Jane Smith","0"],["Firstname Lastname","1"]]
for i in range(len(list2) - 1, -1, -1):
list2[i][1] = int(list2[i][1])
if int(list2[i][1]) == 1:
list2.pop(i)
print(list2)
</code></pre>
<p>Output:</p>
<pre><code>[['Jane Smith', 0]]
</code></pre>
<p>Of course you could just use <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> to create a new list:</p>
<pre><code>list2 = [[x[0], int(x[1])] for x in list2 if int(x[1]) != 1]
</code></pre>
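<p>Run against the sample data from the question, the comprehension keeps only the rows whose flag is not 1 (a quick check):</p>

```python
list2 = [["John Doe", "1"], ["Jane Smith", "0"], ["Firstname Lastname", "1"]]

# keep rows whose flag is not 1, converting the flag to int on the way
list2 = [[name, int(flag)] for name, flag in list2 if int(flag) != 1]
print(list2)  # [['Jane Smith', 0]]
```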
| 0 |
2016-09-22T03:32:05Z
|
[
"python",
"arrays",
"multidimensional-array",
"indexoutofrangeexception"
] |
How to plot pie charts as subplots with custom size with Plotly in Python
| 39,629,735 |
<p>I've been trying to make a grid of subplots with custom sizes with Plotly (version 1.12.9) in a Jupyter notebook (offline). There are nice examples on the Plotly website, but all of them use scatter plots. I modified one of them to look like the one I want, and it works with scatter plots:</p>
<pre><code>import plotly
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen']
values = [4500,2500,1053,500]
trace0 = go.Scatter(x=[1, 2], y=[1, 2])
trace1 = go.Scatter(x=[1, 2], y=[1, 2])
trace2 = go.Scatter(x=[1, 2], y=[1, 2])
trace3 = go.Scatter(x=[1, 2], y=[1, 2])
trace4 = go.Scatter(x=[1, 2], y=[1, 2])
trace5 = go.Scatter(x=[1, 2], y=[1, 2])
fig = plotly.tools.make_subplots(
rows=3,
cols=3,
specs=[[{}, {}, {}], [{}, {'colspan': 2, 'rowspan': 2}, None], [{} , None, None]],
subplot_titles=('First Subplot','Second Subplot', 'Third Subplot')
)
fig.append_trace(trace0, 3, 1)
fig.append_trace(trace1, 2, 1)
fig.append_trace(trace2, 1, 1)
fig.append_trace(trace3, 1, 2)
fig.append_trace(trace4, 1, 3)
fig.append_trace(trace5, 2, 2)
py.iplot(fig)
</code></pre>
<p>And works as expected:
<a href="http://i.stack.imgur.com/BNiEh.png" rel="nofollow"><img src="http://i.stack.imgur.com/BNiEh.png" alt="Custom size scattered subplots"></a></p>
<p>But changing the traces for pie charts like this:</p>
<pre><code>labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen']
values = [4500,2500,1053,500]
trace0 = go.Pie(labels=labels,values=values)
trace1 = go.Pie(labels=labels,values=values)
trace2 = go.Pie(labels=labels,values=values)
trace3 = go.Pie(labels=labels,values=values)
trace4 = go.Pie(labels=labels,values=values)
trace5 = go.Pie(labels=labels,values=values)
</code></pre>
<p>Just throws this error:</p>
<pre><code>PlotlyDictKeyError: 'xaxis' is not allowed in 'pie'
Path To Error: ['xaxis']
Valid attributes for 'pie' at path [] under parents []:
['pullsrc', 'textfont', 'hoverinfo', 'domain', 'label0', 'legendgroup',
'showlegend', 'scalegroup', 'textpositionsrc', 'pull', 'visible',
'sort', 'name', 'outsidetextfont', 'dlabel', 'stream', 'hole',
'textinfo', 'marker', 'labels', 'labelssrc', 'rotation', 'opacity',
'values', 'insidetextfont', 'direction', 'textsrc', 'textposition',
'type', 'valuessrc', 'text', 'uid']
Run `<pie-object>.help('attribute')` on any of the above.
'<pie-object>' is the object at []
</code></pre>
<p>Is it only possible to do this with scatter plots? I didn't find anything in the Plotly documentation.</p>
| 2 |
2016-09-22T03:24:13Z
| 40,076,205 |
<p>I recently struggled with the same problem, and found nothing about whether we can use <code>plotly.tools.make_subplots</code> with <code>plotly.graph_objs.Pie</code>. As I understand it, this is not possible because these plots have no x and y axes. In the <a href="https://plot.ly/python/pie-charts/" rel="nofollow">original tutorial</a> for <code>Pie</code>, they do subplots by providing a <code>domain</code> dict, e.g. <code>{'x': [0.0, 0.5], 'y': [0.0, 0.5]}</code> defines an area in the bottom left quadrant of the total plotting space. Btw, this tutorial withholds the solution for annotation positioning in donut charts, which can be done by providing the <code>xanchor = 'center'</code> and <code>yanchor = 'middle'</code> parameters. I found <a href="http://pycopy.com/plotly-a-pack-of-donuts/" rel="nofollow">one other tutorial</a> which gives a very nice example. Here I show it with your example:</p>
<pre><code>import plotly
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
labels = ['Oxygen','Hydrogen','Carbon_Dioxide','Nitrogen']
values = [4500,2500,1053,500]
domains = [
{'x': [0.0, 0.33], 'y': [0.0, 0.33]},
{'x': [0.0, 0.33], 'y': [0.33, 0.66]},
{'x': [0.0, 0.33], 'y': [0.66, 1.0]},
{'x': [0.33, 0.66], 'y': [0.0, 0.33]},
{'x': [0.66, 1.0], 'y': [0.0, 0.33]},
{'x': [0.33, 1.0], 'y': [0.33, 1.0]}
]
traces = []
for domain in domains:
trace = go.Pie(labels = labels,
values = values,
domain = domain,
hoverinfo = 'label+percent+name')
traces.append(trace)
layout = go.Layout(height = 600,
width = 600,
autosize = False,
title = 'Main title')
fig = go.Figure(data = traces, layout = layout)
py.iplot(fig, show_link = False)
</code></pre>
<p><a href="https://i.stack.imgur.com/IFUpB.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/IFUpB.jpg" alt="Plotly pie charts subplots example"></a></p>
<p>p.s. Sorry, I realized afterwards that y coordinates start from the bottom, so I mirrored your layout vertically. Also you may want to add space between adjacent subplots (just give slightly smaller/greater numbers in the layout, e.g. 0.31 instead of 0.33 at right corners, and 0.35 instead of 0.33 at left corners).</p>
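<p>If you have many subplots, you can compute the <code>domain</code> rectangles programmatically instead of writing them by hand. Below is a small helper (hypothetical, not part of Plotly) that generates row-major domains for a uniform grid, with a <code>pad</code> argument implementing the spacing tweak mentioned above; remember that in Plotly's coordinate system y = 0 is the bottom of the figure:</p>

```python
def grid_domains(rows, cols, pad=0.02):
    """Return a row-major list of Plotly-style domain dicts
    for a rows x cols grid of equally sized cells.

    pad shrinks each cell on every side, leaving a gap between
    neighbouring subplots.
    """
    domains = []
    for r in range(rows):
        for c in range(cols):
            x0, x1 = c / float(cols), (c + 1) / float(cols)
            y0, y1 = r / float(rows), (r + 1) / float(rows)
            domains.append({'x': [x0 + pad, x1 - pad],
                            'y': [y0 + pad, y1 - pad]})
    return domains

print(grid_domains(2, 2))
```

<p>You could then pass each dict as the <code>domain</code> of a <code>go.Pie</code> trace exactly as in the loop above.</p>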
<p>And finally, before using pie charts for any purpose, please think about if they are really the best option, and consider critics like <a href="http://www.businessinsider.com/pie-charts-are-the-worst-2013-6?IR=T" rel="nofollow">this</a> and <a href="https://www.r-bloggers.com/how-to-replace-a-pie-chart/" rel="nofollow">this</a>.</p>
| 0 |
2016-10-16T22:31:33Z
|
[
"python",
"jupyter-notebook",
"plotly"
] |
TypeError at /post/ render_to_response() got an unexpected keyword argument 'context_instance'
| 39,629,793 |
<p>I am trying to preview a form before saving, using 'formtools'. When I visit /post/ it gives the following errors:
Request Method: GET
Request URL: <a href="http://127.0.0.1:8000/post/" rel="nofollow">http://127.0.0.1:8000/post/</a></p>
<pre><code>Django Version: 1.10.1
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'pagedown',
'bootstrapform',
'contact',
'crispy_forms',
'formtools',
'member']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ohid/test_venv/lib/python3.5/site-packages/formtools/preview.py" in __call__
34. return method(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/formtools/preview.py" in preview_get
58. context_instance=RequestContext(request))
Exception Type: TypeError at /post/
Exception Value: render_to_response() got an unexpected keyword argument 'context_instance'
</code></pre>
<p>Here is my preview.py:</p>
<pre><code>from formtools.preview import FormPreview
from django.http import HttpResponseRedirect
from .models import Person
class PersonFormPreview(FormPreview):
form_template = 'member/person_form.html'
preview_template = 'member/person_review.html'
model = Person
def done(self, request, cleaned_data):
self.form.save()
# Do something with the cleaned_data, then redirect
# to a "success" page.
return HttpResponseRedirect('/form/success')
</code></pre>
<p>Here is my urls:</p>
<pre><code>from .preview import PersonFormPreview
from .forms import MemberForm
from django import forms
url(r'^post/$', PersonFormPreview(MemberForm)),
</code></pre>
<p>How do I fix this errors?</p>
| 0 |
2016-09-22T03:31:22Z
| 39,631,415 |
<p><code>formtools</code> does not support <code>Django 1.10</code>; please downgrade your Django release as a workaround.</p>
<blockquote>
<p><a href="https://github.com/django/django-formtools/issues/75" rel="nofollow">https://github.com/django/django-formtools/issues/75</a></p>
</blockquote>
| 1 |
2016-09-22T06:05:23Z
|
[
"python",
"django"
] |
TypeError at /post/ render_to_response() got an unexpected keyword argument 'context_instance'
| 39,629,793 |
<p>I am trying to preview a form before saving, using 'formtools'. When I visit /post/ it gives the following errors:
Request Method: GET
Request URL: <a href="http://127.0.0.1:8000/post/" rel="nofollow">http://127.0.0.1:8000/post/</a></p>
<pre><code>Django Version: 1.10.1
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'pagedown',
'bootstrapform',
'contact',
'crispy_forms',
'formtools',
'member']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ohid/test_venv/lib/python3.5/site-packages/formtools/preview.py" in __call__
34. return method(request)
File "/home/ohid/test_venv/lib/python3.5/site-packages/formtools/preview.py" in preview_get
58. context_instance=RequestContext(request))
Exception Type: TypeError at /post/
Exception Value: render_to_response() got an unexpected keyword argument 'context_instance'
</code></pre>
<p>Here is my preview.py:</p>
<pre><code>from formtools.preview import FormPreview
from django.http import HttpResponseRedirect
from .models import Person
class PersonFormPreview(FormPreview):
form_template = 'member/person_form.html'
preview_template = 'member/person_review.html'
model = Person
def done(self, request, cleaned_data):
self.form.save()
# Do something with the cleaned_data, then redirect
# to a "success" page.
return HttpResponseRedirect('/form/success')
</code></pre>
<p>Here is my urls:</p>
<pre><code>from .preview import PersonFormPreview
from .forms import MemberForm
from django import forms
url(r'^post/$', PersonFormPreview(MemberForm)),
</code></pre>
<p>How do I fix this errors?</p>
| 0 |
2016-09-22T03:31:22Z
| 39,652,201 |
<p>I solved it by making some changes to the lib/python3.5/site-packages/formtools/preview.py file. I first changed render_to_response to render and then removed context_instance=RequestContext(request) from the arguments. The post_post method now looks like this:</p>
<pre><code> def post_post(self, request):
"""
Validates the POST data. If valid, calls done(). Else, redisplays form.
"""
form = self.form(request.POST, auto_id=self.get_auto_id())
if form.is_valid():
if not self._check_security_hash(
request.POST.get(self.unused_name('hash'), ''),
request, form):
return self.failed_hash(request) # Security hash failed.
return self.done(request, form.cleaned_data)
else:
return render(request, self.form_template,
self.get_context(request, form))
</code></pre>
<p>Hope this will help others.</p>
| 1 |
2016-09-23T03:39:56Z
|
[
"python",
"django"
] |
I am using Python and the multiprocessing module, but when I use global I get an error: the global name is not defined
| 39,629,795 |
<p>Hello, I want to compare a distributed database to PostgreSQL</p>
<p>using <code>python</code>, and using <code>multiprocessing</code> to simulate multiple roles.</p>
<p>I want to create 4 random numbers and calculate the running time,</p>
<p>but when I define the 4 random numbers as global, it prompts</p>
<blockquote>
<p>global name 'a' is not defined</p>
</blockquote>
<p>I don't know how to solve this. For every loop, the 4 random numbers must have the same values for the distributed database and PostgreSQL.</p>
<pre><code>#coding=utf-8
import psycopg2
import random
import multiprocessing
conn = psycopg2.connect("dbname=test user=higis password=dbrgdbrg host=10.1.1.215 port=5432")
cur = conn.cursor()
#test-SQl operate
def multitest(num):
global a
global b
global c
global d
a = random.randint(74,135)
b = random.randint(18,53)
c = random.randint(74,135)
d = random.randint(18,53)
if a>c:
a=c
if b>d:
b=d
try:
sqltest = "SELECT ogc_fid FROM testindex_1 WHERE ST_MAKEENVELOPE" + str((a,b,c,d,4326))+str("&& wkb_geometry")
cur.execute(sqltest)
#print cur.fetchall()
except Exception, e:
print e
#citus-SQL operate
def multicitus(num):
try:
sqlcitus = "SELECT ogc_fid FROM citusindex_1 WHERE ST_MAKEENVELOPE" + str((a, b, c, d, 4326)) + str(
"&& wkb_geometry")
cur.execute(sqlcitus)
#print cur.fetchall()
except Exception,e:
print e
#test-multi-process
if __name__ =="__main__":
nums = 5
for num in range(nums):
p = multiprocessing.Process(target=multitest,args=(num,))
#print 'process a %d is start'%num
p.start()
p.join()
#citus-multi-process
for num in range(nums):
q = multiprocessing.Process(target=multicitus,args=(num,))
#print 'process b %d is start'%num
q.start()
q.join()
cur.close()
conn.close()
</code></pre>
| 0 |
2016-09-22T03:31:51Z
| 39,630,182 |
<p>The error is raised in multicitus(), while building the sqlcitus string.</p>
<p>I think you want the global variables a, b, c, d declared and assigned in multitest(). But the problem is that multitest() and multicitus() run in <strong>different processes</strong>, and it is impossible for them to share any global variables. Global variables can only be shared among different functions in the <strong>same process</strong>.</p>
<p>One way to solve this problem is to use Pipes to transfer the data (a, b, c, d in this example) between multitest() and multicitus().</p>
<pre><code>#coding=utf-8
import random
import multiprocessing
from multiprocessing import Process, Pipe
#test-SQl operate
def multitest(num, conn):
a = random.randint(74,135)
b = random.randint(18,53)
c = random.randint(74,135)
d = random.randint(18,53)
conn.send([a, b, c, d])
if a>c:
a=c
if b>d:
b=d
try:
sqltest = "SELECT ogc_fid FROM testindex_1 WHERE ST_MAKEENVELOPE" + str((a,b,c,d,4326))+str("&& wkb_geometry")
print sqltest
except Exception, e:
print e
#citus-SQL operate
def multicitus(num, conn):
try:
a, b, c, d = conn.recv()
sqlcitus = "SELECT ogc_fid FROM citusindex_1 WHERE ST_MAKEENVELOPE" + str((a, b, c, d, 4326)) + str(
"&& wkb_geometry")
print sqlcitus
except Exception, e:
print e
#test-multi-process
if __name__ =="__main__":
nums = 5
pips = []
for i in range(nums):
pips.append(Pipe())
# connect and running a database
for num in range(nums):
p = multiprocessing.Process(target=multitest,args=(num, pips[num][0]))
#print 'process a %d is start'%num
p.start()
p.join()
#citus-multi-process
for num in range(nums):
q = multiprocessing.Process(target=multicitus,args=(num, pips[num][1]))
#print 'process b %d is start'%num
q.start()
q.join()
</code></pre>
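<p>The Pipe mechanics in isolation, without the database code (a minimal sketch; the parent receives values that were generated inside the child process, which is exactly why globals set in one process are invisible to the other):</p>

```python
from multiprocessing import Process, Pipe

def producer(conn):
    # This runs in a separate process: any globals assigned here
    # live in the child's memory only, so the values are sent
    # back through the pipe instead.
    conn.send([74, 18, 135, 53])
    conn.close()

if __name__ == '__main__':
    parent_end, child_end = Pipe()
    p = Process(target=producer, args=(child_end,))
    p.start()
    a, b, c, d = parent_end.recv()  # blocks until the child sends
    p.join()
    print(a, b, c, d)
```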
| 1 |
2016-09-22T04:18:01Z
|
[
"python",
"parallel-processing",
"global"
] |
Inserting NULL value to a double data type MySQL Python
| 39,629,823 |
<p>I have a table. This is the create statement. </p>
<pre><code> CREATE TABLE `runsettings` (
`runnumber` mediumint(9) NOT NULL,
`equipment` varchar(45) NOT NULL,
`wafer` varchar(45) NOT NULL,
`settingname` varchar(100) NOT NULL,
`float8value` double DEFAULT NULL,
`apcrunid` varchar(45) DEFAULT NULL,
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`intvalue` int(11) DEFAULT NULL,
`floatvalue` float DEFAULT NULL,
`Batch` varchar(45) DEFAULT NULL,
`IndexNum` smallint(6) DEFAULT '1',
`stringvalue` mediumtext,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=1056989 DEFAULT CHARSET=latin1;
</code></pre>
<p>This is my insert statement :</p>
<pre><code>import mysql.connector
cnx = mysql.connector.connect(user='test',
password ='test',host='0.0.0.1',
database='test')
vallist = [(471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, None),
(471285, u'CT19', 7, u'00000', u'Etch Time Min', None, None, 'None')]
cursor = cnx.cursor()
# ss =
cursor.executemany("INSERT INTO runsettings (apcrunid,equipment,runnumber,wafer,settingname,intvalue,floatvalue,float8value) VALUES (%s,%s,%s,%s,%s,%s,%s,%s)",vallist)
cnx.commit()
</code></pre>
<p>So I try to insert these values:</p>
<pre><code>vallist = [(471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, None),
           (471285, u'CT19', 7, u'00000', u'Etch Time Min', None, None, 'None')]
</code></pre>
<p>This is getting inserted.
Result on DB
<a href="http://i.stack.imgur.com/soQHW.png" rel="nofollow"><img src="http://i.stack.imgur.com/soQHW.png" alt="enter image description here"></a>
But when the list value is this one (difference is the string <code>'None'</code> gets evaluated first): </p>
<pre><code>vallist = [(471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, 'None'),
           (471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, None)]
</code></pre>
<p>It gives out the truncated error. </p>
<pre><code>Data truncated for column 'float8value' at row 2
</code></pre>
<p>How come when the row that contains <code>None</code> is the first on the list it doesn't give out the same truncated error on the first list? </p>
| 3 |
2016-09-22T03:37:36Z
| 39,630,463 |
<p>This is not entirely surprising. In the first instance you are inserting the python predefined constant <a href="https://docs.python.org/2/library/constants.html" rel="nofollow">None</a></p>
<blockquote>
<p>The sole value of types.NoneType. None is frequently used to represent
the absence of a value, as when default arguments are not passed to a
function.</p>
</blockquote>
<p>That equates to SQL <code>NULL</code>. In the second instance you are inserting a String called 'None' into the table. These two are very different. If you insert a string into a double or float field you will see all kinds of errors, most often, the exact one you have seen.</p>
<p>In the first instance it works because you have declared:</p>
<pre><code> `float8value` double DEFAULT NULL,
</code></pre>
<p>This accepts NULL and None is in the 8th place in your values list. When lots of different parameters are being used, it's always a good idea to use named parameters so that it's obvious at a glance what's being bound to each column.</p>
<p>Updates:</p>
<p>After running your code, the only conclusion that can be reached is that you have found a bug. By using <code>print(cursor.statement)</code> it's possible to discover that the executed query is:</p>
<pre><code>INSERT INTO runsettings (apcrunid,equipment,runnumber,wafer,settingname,intvalue,floatvalue,float8value)
VALUES (471285,'CT19',7,'271042','Etch Time Min',NULL,NULL,NULL),
(471285,'CT19',7,'00000','Etch Time Min',NULL,NULL,'None')
</code></pre>
<p>This does not produce an error, but if you erase the first set of values the error is indeed produced. My recommendation is to file a bug report.</p>
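<p>Until the bug is resolved, one pragmatic workaround is to normalise the parameter rows before calling <code>executemany</code>, so that any literal <code>'None'</code> string becomes a real <code>None</code> and every row binds SQL <code>NULL</code> consistently. A sketch (<code>normalize_row</code> is a made-up helper name, and it assumes the string <code>'None'</code> is never a legitimate value in your data):</p>

```python
def normalize_row(row):
    # Replace the literal string 'None' with a real None (SQL NULL)
    return tuple(None if v == 'None' else v for v in row)

vallist = [
    (471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, 'None'),
    (471285, u'CT19', 7, u'271042', u'Etch Time Min', None, None, None),
]
clean = [normalize_row(row) for row in vallist]
print(clean[0][-1])  # None
```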
| 3 |
2016-09-22T04:47:09Z
|
[
"python",
"mysql",
"prepared-statement"
] |
Write Pandas Series Vertically using to_csv
| 39,629,852 |
<p>I have a loop running that produces a long series that needs to be written vertically to a CSV on each iteration. Using <code>to_csv</code> just writes it horizontally. Is there a specific way this needs to be done?</p>
<pre><code>Index Value
Age 25
Siblings 0
Area Code 416
...etc
Age 23
Siblings 2
Area Code 401
...etc
</code></pre>
<p>This is an example of my series; I would like to have it output to a CSV like the one below, printing the indexes as columns with the values below them.</p>
<pre><code> age siblings Area Code
25 0 416
23 2 401
</code></pre>
| 0 |
2016-09-22T03:40:17Z
| 39,630,047 |
<p>This will work for you:</p>
<pre><code>idx = s.index.unique()
df = pd.DataFrame(dict(zip(idx, [s[i].tolist() for i in idx])))
df.to_csv('file.csv')
</code></pre>
| 0 |
2016-09-22T04:03:59Z
|
[
"python",
"pandas"
] |
Write Pandas Series Vertically using to_csv
| 39,629,852 |
<p>I have a loop running that produces a long series that needs to be written vertically to a CSV on each iteration. Using <code>to_csv</code> just writes it horizontally. Is there a specific way this needs to be done?</p>
<pre><code>Index Value
Age 25
Siblings 0
Area Code 416
...etc
Age 23
Siblings 2
Area Code 401
...etc
</code></pre>
<p>This is an example of my series; I would like to have it output to a CSV like the one below, printing the indexes as columns with the values below them.</p>
<pre><code> age siblings Area Code
25 0 416
23 2 401
</code></pre>
| 0 |
2016-09-22T03:40:17Z
| 39,630,640 |
<p>Have you tried transposing the df and then outputting it? </p>
<pre><code>dfT = df.T
dfT.to_csv('Vert.csv')
</code></pre>
<p>Something like that might give you what you want.
Similar to this <a href="http://stackoverflow.com/questions/24412510/transpose-pandas-dataframe">question</a>. </p>
| 0 |
2016-09-22T05:03:03Z
|
[
"python",
"pandas"
] |
How to uninstall python jupyter correctly?
| 39,629,879 |
<p>I have <code>jupyter</code> installed with <code>python3.5</code> on my <em>Mac OSX</em>, but I want the <code>python2.7</code> version. So, I basically need to uninstall the <code>3.5</code> version, and reinstall the <code>2.7</code> version. </p>
<p>But for some reason I can't uninstall the 3.5 version. I tried <code>sudo python3 -m pip uninstall jupyter</code>, and you can see the results below:</p>
<pre><code>â ~/current/directory
20:08 $ which jupyter
/Library/Frameworks/Python.framework/Versions/3.5/bin/jupyter
✔ ~/current/directory
20:08 $ sudo python3 -m pip uninstall jupyter
The directory '/Users/<username>/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Cannot uninstall requirement jupyter, not installed
The directory '/Users/<username>/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
You are using pip version 8.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
✘-1 ~/current/directory
20:09 $ which jupyter
/Library/Frameworks/Python.framework/Versions/3.5/bin/jupyter
</code></pre>
<p>...as you can see above, the <code>which jupyter</code> command still returns a valid path, AND not only that. I'm still able to launch <code>jupyter notebook</code> from the command line, and it opens a notebook.</p>
<p>How do I correctly get rid of my existing version of <code>jupyter</code> ? OR, if someone knows how to ADD a <code>python2</code> kernel to my existing <code>jupyter</code>, that would be fine too. Is that possible?</p>
<p>All I can think of is to manually kill the files and subfolders inside of <code>/Library/Frameworks/Python.framework/Versions/3.5/bin/</code>, but this seems unnecessarily brutal?</p>
| 2 |
2016-09-22T03:42:49Z
| 39,649,697 |
<p>Use pip3 instead of pip</p>
<pre><code>pip3 uninstall jupyter
</code></pre>
<p>You can install for both python 2 and python 3 on the same computer as long as you use the correct pip version</p>
| 0 |
2016-09-22T22:14:42Z
|
[
"python",
"ipython",
"jupyter",
"jupyter-notebook"
] |
Python Minibatch Dictionary Learning
| 39,629,931 |
<p>I'd like to implement error tracking with dictionary learning in python, using sklearn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.MiniBatchDictionaryLearning.html" rel="nofollow">MiniBatchDictionaryLearning</a>, so that I can record how the error decreases over the iterations. I have two methods to do it, neither of which really worked. Set up:</p>
<ul>
<li><strong>Input data X</strong>, numpy array shape (n_samples, n_features) = (298143, 300). These are patches of shape (10, 10), generated from an image of shape (642, 480, 3).</li>
<li><strong>Dictionary learning parameters</strong>: No. of columns (or atoms) = 100, alpha = 2, transform algorithm = OMP, total no. of iterations = 500 (keep it small first, just as a test case)</li>
<li><p><strong>Calculating error</strong>: After learning the dictionary, I encode the original image again based on the learnt dictionary. Since both the encoding and the original are numpy arrays of the same shape (642, 480, 3), I'm just doing elementwise Euclidean distance for now: </p>
<p>err = np.sqrt(np.sum((reconstruction - original)**2))</p></li>
</ul>
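The elementwise error metric described above can be sketched with NumPy on dummy arrays (the shapes here are made up for illustration; note that the squaring must happen inside the sum):

```python
import numpy as np

# dummy stand-ins for the original and reconstructed images
original = np.ones((4, 4, 3))
reconstruction = np.zeros((4, 4, 3))

# elementwise Euclidean distance: subtract, square, sum, then sqrt
err = np.sqrt(np.sum((reconstruction - original) ** 2))
print(err)  # sqrt(48) ~= 6.93
```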
<p>I did a test run with these parameters, and the full fit was able to produce a pretty good reconstruction with a low error, so that's good. Now on to the two methods:</p>
<p><strong>Method 1:</strong> Save the learnt dictionary every 100 iterations, and record the error. For 500 iterations, this gives us 5 runs of 100 iterations each. After each run, I compute the error, then use the currently learnt dictionary as an initialization for the next run. </p>
<pre><code># Fit an initial dictionary, V, as a first run
dico = MiniBatchDictionaryLearning(n_components = 100,
alpha = 2,
n_iter = 100,
transform_algorithm='omp')
dl = dico.fit(patches)
V = dl.components_
# Now do another 4 runs.
# Note the warm restart parameter, dict_init = V.
for i in range(n_runs):
print("Run %s..." % i, end = "")
dico = MiniBatchDictionaryLearning(n_components = 100,
alpha = 2,
n_iter = n_iterations,
transform_algorithm='omp',
dict_init = V)
dl = dico.fit(patches)
V = dl.components_
img_r = reconstruct_image(dico, V, patches)
err = np.sqrt(np.sum((img - img_r)**2))
print("Err = %s" % err)
</code></pre>
<p>Problem: The error isn't decreasing, and was pretty high. The dictionary wasn't learnt very well either. </p>
<p><strong>Method 2</strong>: Cut the input data X into, say, 500 batches, and do partial fitting, using the <code>partial_fit()</code> method.</p>
<pre><code>batch_size = 500
n_batches = X.shape[0] // batch_size
print(n_batches) # 596
for iternum in range(n_batches):
batch = patches[iternum*batch_size : (iternum+1)*batch_size]
V = dico.partial_fit(batch)
</code></pre>
<p>Problem: this seems to take about 5000 times longer. </p>
<p>I'd like to know if there's a way to retrieve the error over the fitting process?</p>
| 2 |
2016-09-22T03:50:03Z
| 39,633,530 |
<p>Each call to <code>fit</code> re-initializes the model and forgets any previous call to <code>fit</code>: this is the expected behavior of all estimators in scikit-learn.</p>
<p>I think using <code>partial_fit</code> in a loop is the right solution but you should call it on small batches (as done in the fit method; the default batch_size value is just 3) and then only compute the cost every 100 or 1000 calls to <code>partial_fit</code> for instance:</p>
<pre><code>batch_size = 3
n_epochs = 20
n_batches = X.shape[0] // batch_size
print(n_batches) # 596
n_updates = 0
for epoch in range(n_epochs):
for i in range(n_batches):
batch = patches[i * batch_size:(i + 1) * batch_size]
dico.partial_fit(batch)
n_updates += 1
if n_updates % 100 == 0:
img_r = reconstruct_image(dico, dico.components_, patches)
err = np.sqrt(np.sum((img - img_r)**2))
print("[epoch #%02d] Err = %s" % (epoch, err))
</code></pre>
| 1 |
2016-09-22T08:04:56Z
|
[
"python",
"numpy",
"scikit-learn"
] |
What is the best way to represent or shape data with >700 features for classification?
| 39,630,178 |
<p>I have a train data file that contains 0 or 1 class labels with a string containing numbers. The string is the molecular structure of the drug with a class label.</p>
<p>The file looks like this: </p>
<pre><code>0 1730 2281 2572 2602 2611 2824 2855 2940 3149 3313 3560 3568 3824 4185 4266 4366 4409 4472 5008 5114 5408 5470 5509 5526 5626 5728 5910 5976 6031 6047 6069 6307 6352 6396 6401 6439 6468 6477 6708 6978 7092 7112 7149 7235 7470 7495 7549 7714 7815 7911 8037 8464 8488 8601 8650 8797 8825 8830 9015 9275 9447 9577 9707 9735 10200 10234 10328 10469 10471 10637 10749 10938 11042 11378 11713 11728 11756 11858 11950 12142 12160 12375 12383 12540 12906 13002 13121 13401 13700 14147 14332 14565 14581 14707 14944 15213 15423 15608 15677 15859 16028 16043 16092 16145 16323 16419 16564 17009 17161 17252 17361 17515 17698 17760 17791 17957 18135 18722 18889 18914 19030 19082 19105 19166 19199 19532 19716 19857 19958 20146 20153 20154 20354 20503 20582 20587 21109 21484 21543 21690 21904 21967 22009 22105 22154 22757 22808 22813 23066 23176 23361 23505 23602 23650 23868 24410 24718 24820 24869 24967 25051 25136 25174 25299 25340 25419 25568 25578 25608 25803 25930 26245 26465 26729 26795 26807 27211 27340 27750 27810 28017 28020 28070 28192 28250 28606 28671 28811 28880 29029 29061 29071 29103 29298 29350 29372 29384 29419 29432 29752 29961 30039 30237 30245 30314 30326 30433 30463 30552 30582 30748 30784 30840 30945 30965 31025 31192 30786 40822 40882 41407 41410 41457 41540 41996 42011 42265 42299 42425 42209 50240 50322 50399 50506 50601 50710 50876 50923 51028 51066 51434 51724 51846 51951 52291 52321 52425 52659 52686 53022 53255 53266 53315 53338 53455 53760 53948 53976 54059 54103 54131 54136 54151 54161 54244 54452 54526 54746 55113 55283 55367 55424 55650 55972 56061 56114 56211 56410 56681 56725 56887 57155 57173 57180 57313 57474 57481 57506 57612 57762 57765 58149 58401 58459 58716 58832 58867 59013 59117 59340 59522 59744 59922 60085 60205 60272 60280 60489 60546 60572 60587 60778 60853 60962 61142 61214 61405 61504 61576 61607 61771 62139 62214 62419 62483 62520 62773 62905 62940 63150 63200 63466 63479 63508 63513 63685 63830 64247 64313 64356 64436 
64450 64459 64461 64521 64904 65048 65142 65217 65241 65318 65518 65555 65651 65713 65750 65804 65911 66071 66081 66157 66182 66364 66531 66541 66691 66872 67050 67105 67214 67475 67582 67637 67810 67957 67986 68103 68279 68353 68500 68574 68601 68623 68796 68798 68948 69517 69646 69734 69773 69956 70071 70486 71106 71114 71425 72008 72253 72289 72311 72377 72456 72498 72601 72650 72730 72733 72822 72826 73170 73235 73315 73322 73330 73335 73473 73595 73673 73686 73821 73916 74108 74316 74773 74808 74865 75036 75220 75247 75393 75396 75399 75645 75676 75790 75823 76023 76090 76172 76370 76581 76881 76886 77050 77202 77523 77578 77648 77870 78150 78222 78353 78375 78583 78655 78802 789 90725 90880 90909 90954 91307 91315 91408 91828 91923 91927 91931 91996 92109 92204 92221 92278 92560 92704 92839 92929 92931 92966 92983 92988 93016 93136 93307 93539 93622 93735 93972 94210 94211 94226 94493 94583 94611 94618 94715 95145 95170 95347 95360 95371 95546 95566 95629 95646 95685 95876 95986 96422 96502 96567 96685 96769 96844 96998 97062 97204 97947 97977 98061 98190 98214 98231 98277 98402 98543 98581 98654 98831 98959 99116 99187 99257 99321 99349 99580 99678 99686 99998
0 118 307 367 478 505 512 807 878 939 1024 1095 1836 1915 1961 2261 2474 2521 2633 2673 2969 3143 3193 3292 3313 3593 3906 4073 4104 4605 4684 4720 5168 5264 5422 5456 5470 5537 5629 5895 5932 6052 6305 6319 6330 6601 6671 6891 6946 7065 7142 7260 7446 7517 7582 7609 7947 7965 7993 8015 8098 8367 8410 8490 8532 8549 8700 8837 9043 9086 9146 9247 9427 9735 10090 10141 10229 10235 10489 10614 10833 10955 11172 11238 11807 11820 11858 11989 12092 12216 12262 12533 12534 12923 13159 13306 13621 13677 13685 13824 14052 14053 14176 14179 14203 14222 14481 14600 14654 14732 14763 14782 14859 15105 15348 15956 16041 16073 16320 16490 16528 16558 16746 16835 16 74524 74560 74651 74765 74947 75069 75220 75504 75939 76317 76484 76571 76803 76826 77013 77256 77453 77546 77649 77789 77870 77891 77945 77981 78001 78157 78840 78998 79482 79864 79869 79920 80092 80104 80113 80200 80256 80376 80543 80592 80767 80897 81142 81261 81281 81381 81566 81690 82258 82517 82533 82538 82641 82684 82839 82871 83189 83427 83435 83620 83821 83914 84352 84516 84528 84530 84574 84879 85158 85378 85390 85517 85867 86106 86197 86207 86271 86306 86516 86818 87149 87207 87293 87385 87496 87662 87686 87744 87769 87775 87927 87939 88153 88174 88745 88767 88901 88946 88957 88990 88993 89106 89130 89283 89652 89872 90028 90123 90138 90220 90237 90349 90441 90446 90487 90818 91086 91160 91188 91237 91353 91593 91684 91737 91810 91943 92204 92346 92350 92381 92515 92779 92814 93085 93226 93357 93440 93531 94009 94026 94120 94173 94240 94518 94696 94757 94770 94852 94931 94979 95021 95130 95371 95758 95877 96172 96268 96271 96409 96427 96441 96480 96536 96593 96741 96815 96852 96886 96959 97018 97215 97385 97398 97848 97877 97889 98260 98268 98452 98676 98756 98801 98808 98928 99025 99104 99220 99606 99628 99801
0 87 149 433 704 711 892 988 1056 1070 1234 1246 1289 1642 1669 1861 1924 1956 2081 2150 2909 3038 3070 3082 3589 3708 3709 3713 4011 4266 4404 4489 4534 4674 4688 5114 5133 5190 5253 5815 6114 6645 6750 6767 6862 6880 6960 6986 7028 7080 7112 7262 7426 7492 7494 7522 7614 8100 8258 8581 8631 8799 8824 8872 8958 9011 9146 9197 9202 9247 9249 9300 9324 9353 9391 9392 9669 10234 10314 10323 10341 10455 10471 10764 10811 10871 10938 10973 11210 11277 11317 11331 11470 11581 11588 11670 11820 12199 12250 12274 12372 12425 12471 12504 12505 12540 12575 12764 12801 13424 13457 13561 13587 13650 13700 13832 13873 13916 13974 14044 14203 14246 14386 14454 14676 14942 14952 15372 15555 15570 15938 16176 16233 16268 16274 16419 16765 16820 17236 17260 17287 17307 17319 17324 17369 17674 17714 17749 18091 18154 18327 18630 18957 19072 19395 19943 19962 20179 20355 20728 20807 20850 20958 21068 21424 21890 22029 22165 22314 22316 22548 22620 22764 22820 23018 23197 23326 23671 23707 24003 24178 24205 24258 24324 24347 24401 24405 24569 24820 24939 25172 25352 25541 25783 25952 26022 26376 26523 267295 36435 36605 36732 36931 37155 37242 37263 37347 37420 37431 37496 37589 37627 37824 38249 38385 38481 38551 38715 38752 38915 39157 45730 45770 45881 4595
</code></pre>
<p>Each string has a different number of segments (sequence of numbers).
I need to do some feature reduction on this training set possibly using RandomForests or another approach.
I'm unclear on how I should represent this data so that I can work on it and pass it to a model in scikit-learn. I tried putting it into a dataframe in Python but then that leads to a "jagged" dataframe which is hard to work with. I also need to calculate Variance Threshold.</p>
<p>Any suggestions on how to use this file? </p>
| 0 |
2016-09-22T04:17:44Z
| 39,630,379 |
<p>You need to vectorize your data so that you have a matrix with one column for each possible value. You can do this using a <code>CountVectorizer</code> (this is usually used for processing text but it will work for your data as well). The output will be a sparse matrix; depending on the model that you want to use, you may have to convert this to a dense array using <code>np.array</code></p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(binary=True, vocabulary=[str(i) for i in range(100000)])
X = vec.fit_transform(df[1])
X
# <162x56905 sparse matrix of type '<class 'numpy.int64'>'
# with 147915 stored elements in Compressed Sparse Row format>
X.toarray()
# array([[0, 0, 0, ..., 0, 1, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# ...,
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0]])
</code></pre>
| 1 |
2016-09-22T04:38:59Z
|
[
"python",
"pandas",
"scikit-learn",
"classification",
"data-representation"
] |
cannot load Python 3.5 interpreter for virtualenv
| 39,630,188 |
<p>I have installed Python 3.5 through Anaconda on the OSX system. After installing and activating the virtual environment, </p>
<pre><code>virtualenv venv
source venv/bin/activate
</code></pre>
<p>The Python version is Python 2.7.10. And while we are allowed to load the interpreter of our choice in <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">virtualenv</a>, "/usr/bin/" only has folders for Python 2.6 and 2.7. After finding out the Anaconda python 3.5 path ( /Users/Username/anaconda/lib/python3.5)
and trying to load it,</p>
<blockquote>
<p>for: virtualenv -p /Users/Username/anaconda/lib/python3.5 venv</p>
</blockquote>
<p>the code returns a [Errno 13] Permission Denied</p>
<pre><code>> Running virtualenv with interpreter /Users/Username/anaconda/lib/python3.5
> Traceback (most recent call last): File "/usr/local/bin/virtualenv",
> line 11, in <module>
> sys.exit(main()) File "/Library/Python/2.7/site-packages/virtualenv.py", line 790, in main
> popen = subprocess.Popen([interpreter, file] + sys.argv[1:], env=env) File
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py",
> line 710, in __init__
> errread, errwrite) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py",
> line 1335, in _execute_child
> raise child_exception
OSError: [Errno 13] Permission denied
</code></pre>
<blockquote>
<p>for: virtualenv -p /Users/Username/anaconda/bin/python3.5 venv</p>
</blockquote>
<p>there seems to be another type of error...</p>
<pre><code>Running virtualenv with interpreter /Users/Username/anaconda/bin/python3.5
Using base prefix '/Users/Username/anaconda'
New python executable in venv/bin/python3.5
Not overwriting existing python script venv/bin/python (you must use venv/bin/python3.5)
ERROR: The executable venv/bin/python3.5 is not functioning
ERROR: It thinks sys.prefix is '/Users/Username/.../targetfolder' (should be '/Users/Username/.../targetfolder/venv')
ERROR: virtualenv is not compatible with this system or executable
</code></pre>
| 0 |
2016-09-22T04:18:34Z
| 39,630,906 |
<pre><code>ERROR: The executable venv/bin/python3.5 is not functioning
ERROR: It thinks sys.prefix is '/Users/Username/.../targetfolder' (should be '/Users/Username/.../targetfolder/venv')
ERROR: virtualenv is not compatible with this system or executable
</code></pre>
<p>This error results from trying to combine incompatible versions of Python and the virtualenv tool. I'm not sure precisely where the incompatibility comes from, but I do know how to work around it.</p>
<p>Assuming your Python is reasonably functional and recent (read: 3.3 or later), this should always work:</p>
<pre><code>/path/to/python3.5 -m venv venv
</code></pre>
<p>The first venv is the <a href="https://docs.python.org/3/library/venv.html" rel="nofollow">venv module</a>. The second is the name of the directory where you want to create a virtualenv. This command asks Python to create a virtualenv itself, rather than shelling out to a third-party tool. Thus, we can be reasonably confident Python will do it correctly, and in particular that it will not be incompatible with itself.</p>
<hr>
<p>Unfortunately, the version of Python installed with Anaconda cannot be described as "reasonably functional" because <a href="http://stackoverflow.com/questions/38524856/anaconda-3-for-linux-has-no-ensurepip">it lacks <code>ensurepip</code></a>. That makes it impossible for the venv module to bootstrap <code>pip</code> into your virtualenv. So you will need to build your venv without pip, and then install it manually:</p>
<pre><code>/path/to/python3.5 -m venv --without-pip venv
</code></pre>
<p>Then download and run <a href="https://pip.pypa.io/en/stable/installing/" rel="nofollow"><code>get-pip.py</code></a> from within the virtualenv.</p>
| 0 |
2016-09-22T05:27:37Z
|
[
"python",
"python-2.7",
"python-3.x",
"virtualenv"
] |
python is there an inbuilt in way to return a list generator instead of list from random.sample
| 39,630,193 |
<p>I use <code>random.sample</code> to sample from a very large range depending on the input load. Sometimes the sample itself is very large and since it is a list it occupies a lot of memory.</p>
<p>The application does not necessarily use all the values in the list.
It would be great if <code>random.sample</code> could return a list generator instead of a list itself. </p>
<p>Right now I have a wrapper that divides the large input range into equal sized buckets and use <code>randint</code> to select a random number from each <code>n / sample_size</code> buckets.</p>
<p>edit: In my case input is continuous, I had this wrapper function to simulate random.sample as a generator but this is not truly replicating the functionality as it skips some elements in the end.</p>
<pre><code>import random
def samplegen( start, end, sample_size ):
bktlen = ( end - start ) / sample_size
for i in xrange( sample_size ): #this skips the last modulo elements
st = start + (i * bktlen)
yield random.randrange( st, st + bktlen )
</code></pre>
| 0 |
2016-09-22T04:18:57Z
| 39,631,306 |
<p>Since you commented that the order doesn't matter (I had asked whether it must be random or can be sorted), this might be an option:</p>
<pre><code>import random
def sample(n, k):
"""Generate random sorted k-sample of range(n)."""
for i in range(n):
if random.randrange(n - i) < k:
yield i
k -= 1
</code></pre>
<p>That goes through the numbers and includes each in the sample with probability<br>
numberOfNumbersStillNeeded / numberOfNumbersStillLeft.</p>
<p>Demo:</p>
<pre><code>>>> for _ in range(5):
print(list(sample(100, 10)))
[7, 16, 41, 50, 55, 56, 61, 76, 89, 96]
[5, 13, 24, 28, 34, 35, 40, 64, 80, 95]
[9, 18, 19, 36, 38, 39, 61, 73, 84, 85]
[23, 24, 26, 28, 40, 53, 62, 76, 77, 91]
[2, 12, 21, 41, 60, 68, 70, 72, 90, 91]
</code></pre>
| 2 |
2016-09-22T06:00:00Z
|
[
"python",
"random"
] |
python is there an inbuilt in way to return a list generator instead of list from random.sample
| 39,630,193 |
<p>I use <code>random.sample</code> to sample from a very large range depending on the input load. Sometimes the sample itself is very large and since it is a list it occupies a lot of memory.</p>
<p>The application does not necessarily use all the values in the list.
It would be great if <code>random.sample</code> could return a list generator instead of a list itself. </p>
<p>Right now I have a wrapper that divides the large input range into equal sized buckets and use <code>randint</code> to select a random number from each <code>n / sample_size</code> buckets.</p>
<p>edit: In my case input is continuous, I had this wrapper function to simulate random.sample as a generator but this is not truly replicating the functionality as it skips some elements in the end.</p>
<pre><code>import random
def samplegen( start, end, sample_size ):
bktlen = ( end - start ) / sample_size
for i in xrange( sample_size ): #this skips the last modulo elements
st = start + (i * bktlen)
yield random.randrange( st, st + bktlen )
</code></pre>
| 0 |
2016-09-22T04:18:57Z
| 39,631,828 |
<p>Why not something like the following -- the set <code>seen</code> only grows to a function of <code>k</code>, not necessarily to the size of <code>population</code>:</p>
<pre><code>import random
def sample(population, k):
seen = set()
for _ in range(k):
element = random.randrange(population)
while element in seen:
element = random.randrange(population)
yield element
seen.add(element)
for n in sample(1000000, 10):
print(n)
</code></pre>
<p>Another approach might be to work with your original bucket design but with non-uniform buckets whose indexes themselves are randomly sampled:</p>
<pre><code>import random
def samplegen(start, end, sample_size):
random_bucket_indices = random.sample(range(start, end), sample_size)
sorted_bucket_indices = sorted(random_bucket_indices) + [end + 1]
for index in random_bucket_indices:
yield random.randrange(index, sorted_bucket_indices[sorted_bucket_indices.index(index) + 1])
</code></pre>
| 1 |
2016-09-22T06:32:47Z
|
[
"python",
"random"
] |
Decrease the cost of two lists comparisons
| 39,630,255 |
<p>I want to minimize the cost of comparing two lists of words. In the code below <code>A</code> has 4 words whereas <code>B</code> has 2 words, and the cost is <code>O(n^2)</code>, which is too slow; even for 100 words it can be time consuming. Can I minimize it somehow?</p>
<pre><code>A= ["helry", "john" , "kat" , "david"]
d="Helry David"
B = d.lower().split()
for x in range(len(A)):
for i in range(len(B)):
if A[x] == B[i]:
print("Match = " + A[x])
else:
print("No")
</code></pre>
| 0 |
2016-09-22T04:26:04Z
| 39,630,276 |
<ol>
<li>Sort both arrays in O(n log n) first.</li>
<li>Compare them in a single O(n) pass, because after sorting the relative order of matching elements is the same in both lists.</li>
</ol>
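A minimal sketch of that idea (the function name is illustrative, not from the question): sort both lists, then walk them together with two indices, always advancing the pointer at the smaller word:

```python
def sorted_intersection(a, b):
    # sort both lists first: O(n log n)
    a, b = sorted(w.lower() for w in a), sorted(w.lower() for w in b)
    i = j = 0
    matches = []
    # single merge-style pass over both sorted lists: O(n + m)
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            matches.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return matches

A = ["helry", "john", "kat", "david"]
B = "Helry David".lower().split()
print(sorted_intersection(A, B))  # ['david', 'helry']
```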
| 0 |
2016-09-22T04:27:41Z
|
[
"python",
"time-complexity"
] |
Decrease the cost of two lists comparisons
| 39,630,255 |
<p>I want to minimize the cost of comparing two lists of words. In the code below <code>A</code> has 4 words whereas <code>B</code> has 2 words, and the cost is <code>O(n^2)</code>, which is too slow; even for 100 words it can be time consuming. Can I minimize it somehow?</p>
<pre><code>A= ["helry", "john" , "kat" , "david"]
d="Helry David"
B = d.lower().split()
for x in range(len(A)):
for i in range(len(B)):
if A[x] == B[i]:
print("Match = " + A[x])
else:
print("No")
</code></pre>
| 0 |
2016-09-22T04:26:04Z
| 39,630,796 |
<ol>
<li><code>O(n log n)</code> for sorting <code>A</code>.</li>
<li>Use a binary search for each element of <code>B</code> in <code>A</code>; it'll take <code>O(log n)</code> per lookup. For <code>m</code> elements in <code>B</code>, it'll be <code>O(m*log n)</code>.</li>
</ol>
<p>(or) if you use hash, you can do in <code>O(m)</code> for <code>m</code> elements of <code>B</code>.</p>
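A minimal sketch of the binary-search variant using the standard library's <code>bisect</code> module (the helper name is made up for illustration):

```python
import bisect

def find_matches(A, B):
    A_sorted = sorted(A)              # O(n log n), done once
    found = []
    for word in B:                    # m lookups, O(log n) each
        i = bisect.bisect_left(A_sorted, word)
        if i < len(A_sorted) and A_sorted[i] == word:
            found.append(word)
    return found

A = ["helry", "john", "kat", "david"]
B = "Helry David".lower().split()
print(find_matches(A, B))  # ['helry', 'david']
```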
| 0 |
2016-09-22T05:18:30Z
|
[
"python",
"time-complexity"
] |
Decrease the cost of two lists comparisons
| 39,630,255 |
<p>I want to minimize the cost of comparing two lists of words. In the code below <code>A</code> has 4 words whereas <code>B</code> has 2 words, and the cost is <code>O(n^2)</code>, which is too slow; even for 100 words it can be time consuming. Can I minimize it somehow?</p>
<pre><code>A= ["helry", "john" , "kat" , "david"]
d="Helry David"
B = d.lower().split()
for x in range(len(A)):
for i in range(len(B)):
if A[x] == B[i]:
print("Match = " + A[x])
else:
print("No")
</code></pre>
| 0 |
2016-09-22T04:26:04Z
| 39,631,095 |
<p>Use sets instead of lists (btw these are not called arrays in Python). What you want is the intersection of two sets, which is (on average) <code>O(min(len(A), len(B))</code> (<a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">https://wiki.python.org/moin/TimeComplexity</a>). And since this algorithm is built-in and implemented in C, it is much faster than anything you could write in Python code.</p>
<p>Example (A and B are considered to be defined as before):</p>
<pre><code>>>> set(A) & set(B)
{'david', 'helry'}
</code></pre>
<p>This gives you a set of all values that are contained in both A and B.</p>
| 1 |
2016-09-22T05:42:53Z
|
[
"python",
"time-complexity"
] |
Decrease the cost of two lists comparisons
| 39,630,255 |
<p>I want to minimize the cost of comparing two lists of words. In the code below <code>A</code> has 4 words whereas <code>B</code> has 2 words, and the cost is <code>O(n^2)</code>, which is too slow; even for 100 words it can be time consuming. Can I minimize it somehow?</p>
<pre><code>A= ["helry", "john" , "kat" , "david"]
d="Helry David"
B = d.lower().split()
for x in range(len(A)):
for i in range(len(B)):
if A[x] == B[i]:
print("Match = " + A[x])
else:
print("No")
</code></pre>
| 0 |
2016-09-22T04:26:04Z
| 39,634,797 |
<p>You can do it in <em>O(n)</em> time with sets and testing for <em>membership</em> with <code>in</code>. You still have to iterate over all the names in <code>A</code>, but checking whether each name is in the set of names is <code>O(1)</code>:</p>
<pre><code>A = ["helry", "john" , "kat" , "david"]
d = "Helry David"
st = set(d.lower().split())
for name in A:
if name in st:
print("Match = {}".format(name))
else:
print("No match")
</code></pre>
| 0 |
2016-09-22T09:08:55Z
|
[
"python",
"time-complexity"
] |
python: socket.sendall() hogs GIL?
| 39,630,326 |
<p>i have a multithreaded program (about 20 threads; a mixture of producer/consumers with many queues)</p>
<p>in one of the threads, it pops strings from a queue and send it to a remote program</p>
<pre><code># it starts the thread like this
workQ = Queue.Queue()
stop_thr_event = threading.Event()
t = threading.Thread( target=worker, args=(stop_thr_event,) )
# in the thread's worker function
def worker(stop_event):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (myhost, int(myport))
sock.connect(server_address)
while True:
try:
item = workQ.get(timeout=1)
if print_only:
print item
else:
if item.startswith("key:"):
item = "{%s}" % item
sock.sendall(item)
workQ.task_done()
except Queue.Empty, msg:
if stop_event.isSet():
break
</code></pre>
<p>intermittently, my program will just hang, none of the threads are doing any work</p>
<p>after trial and error, i found that my program only hangs with this thread running</p>
<p>my only guess is that sendall() is hogging the GIL and my whole program hangs</p>
<p>1) is this even a plausible theory?<br>
2) if my theory is correct, what can i do so that sendall() doesnt hog the GIL? make it a nonblock send?</p>
| 1 |
2016-09-22T04:33:05Z
| 39,630,616 |
<p>You're wrong. No network activity holds the GIL, and sendall() is no exception!</p>
<pre><code>item = workQ.get()
sock.sendall(item) **# may take a long time here.**
workQ.task_done()
</code></pre>
<p>Because sendall() may take a long time, other threads that use workQ <strong>cannot take their turn to run before you call task_done()</strong> ==> this is why your whole program seems to hang.</p>
| 1 |
2016-09-22T05:00:03Z
|
[
"python",
"multithreading",
"sockets",
"deadlock",
"hang"
] |
python: socket.sendall() hogs GIL?
| 39,630,326 |
<p>i have a multithreaded program (about 20 threads; a mixture of producer/consumers with many queues)</p>
<p>in one of the threads, it pops strings from a queue and send it to a remote program</p>
<pre><code># it starts the thread like this
workQ = Queue.Queue()
stop_thr_event = threading.Event()
t = threading.Thread( target=worker, args=(stop_thr_event,) )
# in the thread's worker function
def worker(stop_event):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (myhost, int(myport))
sock.connect(server_address)
while True:
try:
item = workQ.get(timeout=1)
if print_only:
print item
else:
if item.startswith("key:"):
item = "{%s}" % item
sock.sendall(item)
workQ.task_done()
except Queue.Empty, msg:
if stop_event.isSet():
break
</code></pre>
<p>intermittently, my program will just hang, none of the threads are doing any work</p>
<p>after trial and error, i found that my program only hangs with this thread running</p>
<p>my only guess is that sendall() is hogging the GIL and my whole program hangs</p>
<p>1) is this even a plausible theory?<br>
2) if my theory is correct, what can i do so that sendall() doesnt hog the GIL? make it a nonblock send?</p>
| 1 |
2016-09-22T04:33:05Z
| 39,631,506 |
<p>GIL-hogging will not cause a program to hang. It may harm the performance of the program, but this is a far cry from hanging. It is much more likely that you are experiencing some form of <a href="http://en.wikipedia.org/wiki/Deadlock" rel="nofollow">deadlock</a>. The GIL cannot participate in a deadlock because the interpreter is constantly releasing and re-acquiring it, acquiring or releasing the GIL is generally not dependent on acquiring or releasing any other resources, and other locks do not depend on the GIL either.</p>
<p>Your use of the <code>stop_thr_event</code> lock is rather peculiar. It would be more common for the master to simply put a series of "we're done, go home" objects into the queue, and for the workers to detect these objects and return when they are recognized. This also ties into the rule of thumb that the only correct values for <code>timeout</code> are zero and infinity (i.e. no timeout). In the current case, your worker is waiting for one second, checking the event, waiting for one second, etc., and <a href="https://blogs.msdn.microsoft.com/oldnewthing/20060124-17/?p=32553" rel="nofollow">polling is a Bad Thing</a>.</p>
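A minimal Python 3 sketch of the sentinel pattern described above (simplified: the socket send is replaced with a stand-in, and the names are illustrative):

```python
import queue
import threading

STOP = object()  # sentinel object that tells the worker to go home

def worker(q, results):
    while True:
        item = q.get()  # blocks with no timeout -- no polling
        if item is STOP:
            break
        results.append(item.upper())  # stand-in for sock.sendall(item)

q = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(q, results))
t.start()
for item in ["key:1", "key:2"]:
    q.put(item)
q.put(STOP)  # the master signals shutdown through the queue itself
t.join()
print(results)  # ['KEY:1', 'KEY:2']
```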
<p>Now, if by "hang" you mean the program occasionally freezes up for short periods of time before resuming, that <em>is</em> poor performance, so perhaps the GIL could be to blame. But the socket is not the problem. The problem is that you may have a large number of threads contending for the GIL (because they're all trying to poll once per second), and if you're still on 2.x, you don't have <a href="https://docs.python.org/3/whatsnew/3.2.html#multi-threading" rel="nofollow">the new GIL</a>. Eliminating the polling will help with this.</p>
| 1 |
2016-09-22T06:11:34Z
|
[
"python",
"multithreading",
"sockets",
"deadlock",
"hang"
] |
Python v3 using idle with Tkinter and Gui, repeating code
| 39,630,410 |
<p>I am having some trouble with my code and was wondering if anyone could help me?</p>
<p>I have made a program with two classes that are designed to act as a phone ordering system. This is based on the company Subway.
You can see the gui this code produces here:
<a href="http://i.stack.imgur.com/dvkYL.png" rel="nofollow">Output</a></p>
<p>I have also uploaded the python file, which contains all the code.
My question is, can I get the button on line 176 "Save and order another sub", to be pressed so the user can add 5 more subs, while being able to save variables for the costs, so they can all be added up at the end. So essentially repeating the code 5 times while holding 5 variables for costs to be added at the end.</p>
<p>Sorry this is so broad, but I hope this makes sense when you can see what my code is trying to do. What code can I put on line 318 that will help me with this?</p>
<p>Here is the code: <a href="https://drive.google.com/file/d/0BydTFZ" rel="nofollow">https://drive.google.com/file/d/0BydTFZ</a> ... sp=sharing</p>
<p>If someone could reply with the code I need to repeat my program with, I would be very grateful.</p>
| 0 |
2016-09-22T04:42:15Z
| 39,630,902 |
<p>Firstly, you should directly post the relevant code, not links.</p>
<p>Just based on what you wrote, you can have the button add data to a list instead of an arbitrary number of distinct variables. That way, the list can hold any number of sandwich orders.</p>
<pre><code>mylist = []
mylist.append('chicken sub')
mylist.append('vegetarian sub')
sub1 = mylist[0] # sub1 == 'chicken sub'
</code></pre>
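To cover the cost-summing part of the question, a hypothetical sketch: instead of five separate cost variables, keep one list of (description, cost) tuples per sub and total it at the end (the function name and prices are made up):

```python
# one list accumulates every sub ordered so far
orders = []

def save_and_order_another(description, cost):
    # called each time the "Save and order another sub" button is pressed
    orders.append((description, cost))

save_and_order_another("chicken sub", 7.50)
save_and_order_another("vegetarian sub", 6.00)

total = sum(cost for _, cost in orders)
print("%d subs, total $%.2f" % (len(orders), total))  # 2 subs, total $13.50
```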
| 1 |
2016-09-22T05:27:22Z
|
[
"python",
"tkinter"
] |
How to href the elements in a python list with Flask?
| 39,630,416 |
<p>How can I transform to different html hrefs the elements of a list?. For instance:</p>
<pre><code>A = [string1, string2, ..., stringN]
</code></pre>
<p>To:</p>
<p><code>A = [</code><a href="https://this_is_url_1/" rel="nofollow"><code>string1</code></a><code>,</code> <a href="https://this_is_url_2/" rel="nofollow"><code>string2</code></a><code>, ...,</code> <a href="https://this_is_url_N/" rel="nofollow"><code>stringN</code></a><code>]</code></p>
| 1 |
2016-09-22T04:42:40Z
| 39,630,495 |
<pre><code>strings = ['the','quick','brown','fox']
result = ['<a href="http://this_is_url_{}">{}</a>'.format(a,a) for a in strings]
</code></pre>
| 1 |
2016-09-22T04:50:36Z
|
[
"python"
] |
How to href the elements in a python list with Flask?
| 39,630,416 |
<p>How can I transform to different html hrefs the elements of a list?. For instance:</p>
<pre><code>A = [string1, string2, ..., stringN]
</code></pre>
<p>To:</p>
<p><code>A = [</code><a href="https://this_is_url_1/" rel="nofollow"><code>string1</code></a><code>,</code> <a href="https://this_is_url_2/" rel="nofollow"><code>string2</code></a><code>, ...,</code> <a href="https://this_is_url_N/" rel="nofollow"><code>stringN</code></a><code>]</code></p>
| 1 |
2016-09-22T04:42:40Z
| 39,630,522 |
<p>Your question is unclear and ambiguous.</p>
<p>If each string is part of the hyperlink:</p>
<pre><code>A = [string1, string2, ..., stringN]
</code></pre>
<p><code>hrefs = ['<a href="%s">%s</a>' % (a, a) for a in A]</code></p>
| 0 |
2016-09-22T04:52:51Z
|
[
"python"
] |
Spatial index to find points within polygon, if points and polygon have same minimum bounding box
| 39,630,501 |
<p>I have a shapely <em>polygon</em> representing the boundaries of the city of Los Angeles. I also have a set of ~1 million lat-long <em>points</em> in a geopandas GeoDataFrame, all of which fall within that polygon's minimum bounding box. Some of these points lie within the polygon itself, but others do not. I want to retain only those points within Los Angeles's boundaries, and due to Los Angeles's irregular shape, only approximately 1/3 of the points within its minimum bounding box are within the polygon itself.</p>
<p><strong>Using Python, what is the fastest way to identify which of these points lie within the polygon, given that the points and the polygon have the same minimum bounding box?</strong></p>
<p>I tried using geopandas and its r-tree spatial index:</p>
<pre><code>sindex = gdf['geometry'].sindex
possible_matches_index = list(sindex.intersection(polygon.bounds))
possible_matches = gdf.iloc[possible_matches_index]
points_in_polygon = possible_matches[possible_matches.intersects(polygon)]
</code></pre>
<p>This uses the GeoDataFrame's r-tree spatial index to quickly find the <em>possible</em> matches, and then finds the exact intersection of the polygon and those possible matches. However, because the polygon's minimum bounding box is the same as the set of points' minimum bounding box, r-tree considers <em>every point</em> to be a possible match. Thus, using an r-tree spatial index makes the intersection run no faster than it would without the spatial index. This method is very slow: it takes ~30 minutes to complete.</p>
<p>I also tried dividing my polygon into small sub-polygons, then using the spatial index to find which points possibly intersect with each of these sub-polygons. This method successfully finds fewer possible matches because each of the sub-polygons' minimum bounding boxes is much smaller than the set of points' minimum bounding box. However, intersecting this set of possible matches with my polygon still only shaves off ~25% of my computation time, so it's still a brutally slow process.</p>
<p>Is there a better spatial index method I should use? And what is the fastest way to find which points are within the polygon, if the points and polygon have the same minimum bounding box?</p>
| 1 |
2016-09-22T04:51:03Z
| 39,651,929 |
<p>A little example to duplicate the problem a bit</p>
<pre><code>import pandas as pd
import shapely
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
from shapely.geometry import Point
import seaborn as sns
import numpy as np
# some lon/lat points in a DataFrame
n = 1000000
data = {'lat':np.random.uniform(low=0.0, high=3.0, size=(n,)), 'lon':np.random.uniform(low=0.0, high=3.0, size=(n,))}
df = pd.DataFrame(data)
# the 'bounding' polygon
poly1 = shapely.geometry.Polygon([(1,1), (1.5,1.2), (2,.7), (2.1,1.2), (1.8,2.3), (1.6,1.8), (1.2,3)])
# poly2 = shapely.geometry.Polygon([(1,1), (1.3,1.6), (1.4,1.55), (1.5,1.2), (2,.7), (2.1,1.2), (1.8,2.3), (1.6,1.8), (1.2,3), (.8,1.5),(.91,1.3)])
# poly3 = shapely.geometry.Polygon([(1,1), (1.3,1.6), (1.4,1.55), (1.5,1.2), (2,.7), (2.1,1.2), (1.8,2.3), (1.6,1.8), (1.5,2), (1.4,2.5),(1.3,2.4), (1.2,3), (.8,2.8),(1,2.8),(1.3,2.2),(.7,1.5),(.66,1.4)])
# limit DataFrame to interior points
mask = [poly1.intersects(shapely.geometry.Point(lat,lon)) for lat,lon in zip(df.lat,df.lon)]
df = df[mask]
# plot bounding polygon
fig1, ax1 = sns.plt.subplots(1, figsize=(4,4))
patches = PatchCollection([Polygon(poly1.exterior)], facecolor='red', linewidth=.5, alpha=.5)
ax1.add_collection(patches, autolim=True)
# plot the lat/lon points
df.plot(x='lat',y='lon', kind='scatter',ax=ax1)
plt.show()
</code></pre>
<p>Calling intersects() with a million points on a simple polygon doesn't take very much time. Using poly1, I get the following image. Finding the lat/lon points inside the polygon takes less than 10 seconds. Plotting only the interior points on top of the bounding polygon looks like this:</p>
<p><a href="http://i.stack.imgur.com/8us3h.png" rel="nofollow"><img src="http://i.stack.imgur.com/8us3h.png" alt="enter image description here"></a></p>
<pre><code>In [45]: %timeit mask = [Point(lat,lon).intersects(poly1) for lat,lon in zip(df.lat,df.lon)]
1 loops, best of 3: 9.23 s per loop
</code></pre>
<p>Poly3 is a bit bigger and more irregular. The new image looks like this, and it takes about a minute to get through the bottleneck <code>intersects()</code> line. </p>
<p><a href="http://i.stack.imgur.com/HEunh.png" rel="nofollow"><img src="http://i.stack.imgur.com/HEunh.png" alt="enter image description here"></a></p>
<pre><code>In [2]: %timeit mask = [poly3.intersects(shapely.geometry.Point(lat,lon)) for lat,lon in zip(df.lat,df.lon)]
1 loops, best of 3: 51.4 s per loop
</code></pre>
<p>So the culprit isn't necessarily the number of lat/lon points. Just as bad is the complexity of the bounding polygon. First, I would recommend <code>poly.simplify()</code>, or anything you can do to reduce the number of points in your bounding polygon (without changing it drastically, obviously).</p>
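<p>For illustration, a minimal sketch of <code>simplify()</code> on the example polygon from above. The tolerance value here is an arbitrary assumption to tune for your own data:</p>

```python
from shapely.geometry import Polygon

poly = Polygon([(1, 1), (1.5, 1.2), (2, .7), (2.1, 1.2),
                (1.8, 2.3), (1.6, 1.8), (1.2, 3)])
# Tolerance 0.1 is an arbitrary example; larger values drop more vertices.
# preserve_topology=True keeps the simplified polygon valid.
poly_simple = poly.simplify(0.1, preserve_topology=True)
print(len(poly_simple.exterior.coords), '<=', len(poly.exterior.coords))
```

Every vertex removed this way makes each subsequent <code>intersects()</code> call a little cheaper.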
<p>Next, I would recommend thinking of some probabilistic approach. If a point <code>p</code> is surrounded by points which are all inside the bounding polygon, there's a pretty good chance <code>p</code> is also in the bounding polygon. Generally, there's a bit of a trade-off between speed and accuracy, but maybe it can be a way of reducing the number of points you need to check. Here's my attempt using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier" rel="nofollow">k-nearest neighbors classifier</a>:</p>
<pre><code>from sklearn.neighbors import KNeighborsClassifier
# make a knn object, feed it some training data
neigh = KNeighborsClassifier(n_neighbors=4)
df_short = df.sample(n=40000)
df_short['labels'] = np.array([poly3.intersects(shapely.geometry.Point(lat,lon)) for lat,lon in zip(df_short.lat,df_short.lon)])*1
neigh.fit(df_short[['lat','lon']], df_short['labels'])
# now use the training data to guess whether a point is in polygon or not
df['predict'] = neigh.predict(df[['lat','lon']])
</code></pre>
<p>Gives me this image. Not perfect, but %timeit for this block takes only 3.62 seconds (4.39 seconds for n=50000), compared to about 50 seconds to check every single point.</p>
<p><a href="http://i.stack.imgur.com/xeXnX.png" rel="nofollow"><img src="http://i.stack.imgur.com/xeXnX.png" alt="enter image description here"></a></p>
<p>If instead I just want to drop points that have less than, say, a 30% chance of being in the polygon (throwing out the obvious offenders and checking the rest by hand), I can use a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html#sklearn.neighbors.KNeighborsRegressor" rel="nofollow">knn regression</a>:</p>
<pre><code>from sklearn.neighbors import KNeighborsRegressor
neigh = KNeighborsRegressor(n_neighbors=3, weights='distance')
#everything else using 'neigh' is the same as before
# only keep points with more than 30% chance of being inside
df = df[df.predict>.30]
</code></pre>
<p>Now I only have about 138000 points to check, which would go pretty quickly if I wanted to check each one using <code>intersects()</code>. </p>
<p>Of course if I increase the number of neighbors, or the size of the training set, I can get sharper images still. Some nice things about this probabilistic approach are (1) it's algorithmic, so you can throw it at any funky bounding polygon, (2) you can easily tune its accuracy up/down, (3) it's a lot faster and scales pretty well (better than brute force, at least).</p>
<p>Like many things in machine learning, there can be 100 ways to do it. Hopefully this helps you figure something out that works. Here's one more picture with the following settings (using a classifier, not a regression). You can see it's getting better.</p>
<pre><code>neigh = KNeighborsClassifier(n_neighbors=3, weights='distance')
df_short = df.sample(n=80000)
</code></pre>
<p><a href="http://i.stack.imgur.com/GhHbt.png" rel="nofollow"><img src="http://i.stack.imgur.com/GhHbt.png" alt="enter image description here"></a></p>
| 1 |
2016-09-23T03:05:31Z
|
[
"python",
"gis",
"geospatial",
"shapely",
"geopandas"
] |
Time stamp representation in digits in a tsv to standard formate
| 39,630,508 |
<p>I have a text file containing data on the activities of Facebook and Twitter users, i.e. PostID or userID. These IDs are represented in decimal, for example "PostID": 4038363805081732322. But the problem is that there is one column where the timestamp of the post is also represented in decimal format, e.g. "PostTimeStamp": <strong>1413332041998</strong>. These values contain the time, but I don't have any clue how to convert them into <strong>hours</strong>. Please tell me how I can convert these values into hours. I am using Python.</p>
| -1 |
2016-09-22T04:51:46Z
| 39,630,964 |
<p>This is epoch time (here in milliseconds). From the Python library docs for <code>time.ctime([secs])</code>:
"Convert a time expressed in seconds since the epoch to a string representing local time. If secs is not provided or None, the current time as returned by time() is used. ctime(secs) is equivalent to asctime(localtime(secs)). Locale information is not used by ctime()."</p>
<p>So just do:</p>
<pre><code>import time
time.ctime(1413332041998/1000)
'Wed Oct 15 11:14:01 2014'
</code></pre>
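<p>If you specifically need the hour of the post, <code>datetime</code> can unpack the same value. A sketch (shown in UTC for reproducibility; swap in your local timezone if you prefer):</p>

```python
import datetime

ts_ms = 1413332041998  # millisecond timestamp from the question
# Divide by 1000: fromtimestamp() expects seconds since the epoch.
dt = datetime.datetime.fromtimestamp(ts_ms / 1000.0, tz=datetime.timezone.utc)
print(dt.hour)  # 0 (i.e. 00:14:01 UTC on 2014-10-15)
```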
| 0 |
2016-09-22T05:32:13Z
|
[
"python"
] |
A print function makes a multiprocessing program fail
| 39,630,676 |
<p>In the following code, I'm trying to create a sandboxed master-worker system, in which changes to global variables in a worker don't reflect to other workers.</p>
<p>To achieve this, a new process is created each time a task is created, and to make the execution parallel, the creation of processes itself is managed by <code>ThreadPoolExecutor</code>.</p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pipe, Process
def task(conn, arg):
conn.send(arg * 2)
def isolate_fn(fn, arg):
def wrapped():
parent_conn, child_conn = Pipe()
p = Process(target=fn, args=(child_conn, arg), daemon=True)
try:
p.start()
r = parent_conn.recv()
finally:
p.join()
return r
return wrapped
def main():
with ThreadPoolExecutor(max_workers=4) as executor:
pair = []
for i in range(0, 10):
pair.append((i, executor.submit(isolate_fn(task, i))))
# This function makes the program broken.
#
print('foo')
time.sleep(2)
for arg, future in pair:
if future.done():
print('arg: {}, res: {}'.format(arg, future.result()))
else:
print('not finished: {}'.format(arg))
print('finished')
main()
</code></pre>
<p>This program works fine, until I put the <code>print('foo')</code> function inside the loop. If the function exists, some tasks remain unfinished, and what is worse, this program itself doesn't finish.</p>
<p>Results are not always the same, but the following is the typical output:</p>
<pre><code>foo
foo
foo
foo
foo
foo
foo
foo
foo
foo
arg: 0, res: 0
arg: 1, res: 2
arg: 2, res: 4
not finished: 3
not finished: 4
not finished: 5
not finished: 6
not finished: 7
not finished: 8
not finished: 9
</code></pre>
<p>Why is this program so fragile?</p>
<p>I use Python 3.4.5.</p>
| 4 |
2016-09-22T05:07:10Z
| 39,631,629 |
<p>You are not creating a new ThreadPoolExecutor every time; rather, you are reusing the pre-initialized pool for every iteration. I am not able to track down which print statement is causing the problem for you.</p>
| 0 |
2016-09-22T06:19:57Z
|
[
"python",
"python-3.x",
"python-multithreading",
"python-multiprocessing"
] |
A print function makes a multiprocessing program fail
| 39,630,676 |
<p>In the following code, I'm trying to create a sandboxed master-worker system, in which changes to global variables in a worker don't reflect to other workers.</p>
<p>To achieve this, a new process is created each time a task is created, and to make the execution parallel, the creation of processes itself is managed by <code>ThreadPoolExecutor</code>.</p>
<pre><code>import time
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pipe, Process
def task(conn, arg):
conn.send(arg * 2)
def isolate_fn(fn, arg):
def wrapped():
parent_conn, child_conn = Pipe()
p = Process(target=fn, args=(child_conn, arg), daemon=True)
try:
p.start()
r = parent_conn.recv()
finally:
p.join()
return r
return wrapped
def main():
with ThreadPoolExecutor(max_workers=4) as executor:
pair = []
for i in range(0, 10):
pair.append((i, executor.submit(isolate_fn(task, i))))
# This function makes the program broken.
#
print('foo')
time.sleep(2)
for arg, future in pair:
if future.done():
print('arg: {}, res: {}'.format(arg, future.result()))
else:
print('not finished: {}'.format(arg))
print('finished')
main()
</code></pre>
<p>This program works fine, until I put the <code>print('foo')</code> function inside the loop. If the function exists, some tasks remain unfinished, and what is worse, this program itself doesn't finish.</p>
<p>Results are not always the same, but the following is the typical output:</p>
<pre><code>foo
foo
foo
foo
foo
foo
foo
foo
foo
foo
arg: 0, res: 0
arg: 1, res: 2
arg: 2, res: 4
not finished: 3
not finished: 4
not finished: 5
not finished: 6
not finished: 7
not finished: 8
not finished: 9
</code></pre>
<p>Why is this program so fragile?</p>
<p>I use Python 3.4.5.</p>
| 4 |
2016-09-22T05:07:10Z
| 39,653,112 |
<p>Try using</p>
<pre><code>from multiprocessing import set_start_method
... rest of your code here ....
if __name__ == '__main__':
set_start_method('spawn')
main()
</code></pre>
<p>If you search Stackoverflow for python multiprocessing and multithreading you will find a fair few questions mentioning similar hanging issues. (esp. for python version 2.7 and 3.2)</p>
<p>Mixing multithreading and multiprocessing is still a bit of an issue and even the python docs for <a href="https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow">multiprocessing.set_start_method</a> mention that. In your case <em>'spawn'</em> and <em>'forkserver'</em> should work without any issues.</p>
<p>Another option might be to use MultiProcessingPool directly, but this may not be possible for you in a more complex use case.</p>
<p>Btw. <em>'Not Finished'</em> may still appear in your output, as you are not waiting for your sub processes to finish, but the whole code should not hang anymore and always finish cleanly.</p>
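<p>The pool-based option mentioned above could look roughly like this. It is only a sketch of the idea, not the original program; note that pool workers are reused between tasks, so the per-task isolation of globals is weaker than with one fresh <code>Process</code> per task:</p>

```python
from concurrent.futures import ProcessPoolExecutor

def task(arg):
    return arg * 2

def main():
    # Tasks run in worker processes; the pool manages process lifetime,
    # so no manual Pipe/Process/join bookkeeping is needed.
    with ProcessPoolExecutor(max_workers=4) as executor:
        pair = [(i, executor.submit(task, i)) for i in range(10)]
        for arg, future in pair:
            print('arg: {}, res: {}'.format(arg, future.result()))

if __name__ == '__main__':
    main()
```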
| 1 |
2016-09-23T05:20:50Z
|
[
"python",
"python-3.x",
"python-multithreading",
"python-multiprocessing"
] |
Django, How does models.py under auth folder create initial tables when you migrate very first time?
| 39,630,773 |
<p>If you migrate very first time after making new project in Django, you can find that Django creates tables like below.</p>
<pre><code>auth_group
auth_group_permissions
auth_permission
auth_user
auth_user_groups
auth_user_user_permissions
django_admin_log
django_content_type
django_migrations
django_session
</code></pre>
<p>Now I learned that those tables are created because lines under INSTALLED_APPS in settings.py.</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
</code></pre>
<p>so I started to look into models.py under auth folder(in Django folder where I installed). I expected that there would be six classes in models.py. Because I learned that Class turn to table thanks to ORM(Ojbejct Relation Mapping).</p>
<pre><code>auth_group
auth_group_permissions
auth_permission
auth_user
auth_user_groups
auth_user_user_permissions
</code></pre>
<p>I could find class named 'Permission', a class named 'Group' and a class named 'User' in models.py under auth folder. I think those made tables 'auth_permission', 'auth_group' and 'auth_user'. Then what about other?(auth_group_permissions, auth_user_groups, auth_user_user_permissions) I would like to understand how those tables are created by Django(models.py in auth folder). Where should like look into in models.py to understand that?</p>
<p>I expect that I can understand how other tables are created(django_admin_log, django_content_type, django_migrations, django_session) if I can understand that. I will appreciate if you can also explain how Django creates tables named 'django_migrations' and 'django_session' too.</p>
<p>Thank you in advance. Have a nice day.
<a href="http://i.stack.imgur.com/ExTVQ.png" rel="nofollow">Image: tables showing in pgAdmin3</a></p>
| 3 |
2016-09-22T05:16:42Z
| 39,630,862 |
<p>This can be easily explained.</p>
<pre><code>auth_user_groups
auth_group_permissions
auth_user_user_permissions
</code></pre>
<p>These are relational (join) tables, used to store info about how model tables are related.</p>
<p>For example <code>auth_user_groups</code> is storing how users and groups are related.
If you do</p>
<pre><code>SELECT * FROM auth_user_groups;
|id|user_id|group_id|
....
</code></pre>
<p>And here we can see that which groups are related to which users.</p>
<p>So basically Django will automatically create such tables when you use <code>ManyToManyField</code> on your models.</p>
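<p>The shape of such a join table can be illustrated with plain SQL. This is only a sqlite sketch of the idea, not Django's actual DDL (Django generates the equivalent for you):</p>

```python
import sqlite3

# Minimal illustration of the shape of an M2M join table like auth_user_groups.
# NOT Django's actual DDL; Django emits the equivalent automatically.
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT)')
c.execute('CREATE TABLE auth_group (id INTEGER PRIMARY KEY, name TEXT)')
c.execute('''CREATE TABLE auth_user_groups (
                 id INTEGER PRIMARY KEY,
                 user_id INTEGER REFERENCES auth_user(id),
                 group_id INTEGER REFERENCES auth_group(id))''')
c.execute("INSERT INTO auth_user VALUES (1, 'alice')")
c.execute("INSERT INTO auth_group VALUES (1, 'editors')")
c.execute("INSERT INTO auth_user_groups VALUES (1, 1, 1)")
# Which groups does alice belong to?
c.execute('''SELECT g.name FROM auth_group g
             JOIN auth_user_groups ug ON ug.group_id = g.id
             WHERE ug.user_id = 1''')
print(c.fetchall())  # [('editors',)]
```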
<p><strong>Answering the comment:</strong>
The Django migration table is created automatically when you call <code>./manage.py migrate</code> for the first time. This table stores the history of applied migrations. The migration model can be found in <code>django/db/migrations/recorder.py</code>. It is used when running new migrations to see which have been applied and which still need to be applied. This model is part of core Django functionality, which is why you don't need to add it to <code>INSTALLED_APPS</code>: <code>INSTALLED_APPS</code> contains only pluggable apps, which you can include/exclude from your project as you wish.</p>
| 3 |
2016-09-22T05:25:11Z
|
[
"python",
"django",
"django-models",
"migration"
] |
Return prompt after accessing nohup.out (bash)
| 39,630,824 |
<p>I'm running a Python script from bash using <code>nohup</code>. The script is executed via my bashrc as part of a shell function. If I run it like this:</p>
<pre><code>function timer {
nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
echo 'blah'
}
</code></pre>
<p>Everything works and I get my prompt back. However, if instead of <code>echo</code> I call <code>tail</code> to access the end of the nohup output file, like this:</p>
<pre><code>function timer {
nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
tail -f path/nohup.out
}
</code></pre>
<p>my prompt is not returned. I would like to see the contents of nohup.out and get back to the prompt without having to use CTRL-c.</p>
<p>I have followed the advice <a href="http://stackoverflow.com/questions/23024850/nohup-as-background-task-does-not-return-prompt">here</a>, but adding <code></dev/null</code> yields the same results as above.</p>
| 0 |
2016-09-22T05:20:55Z
| 39,630,883 |
<p>You won't get the prompt back.</p>
<p>Because <code>tail -f</code> will always watch the file (<code>path/nohup.out</code>) and output appended data as the file grows, it never exits. You can try <code>tail -n 10</code> instead to print the last <code>10</code> lines of <code>path/nohup.out</code> and return immediately.</p>
| 0 |
2016-09-22T05:26:29Z
|
[
"python",
"bash",
"shell",
"nohup"
] |
How to plot a figure with Chinese Characters in label
| 39,630,928 |
<p>When I draw a figure with Chinese Character label in Python 3, it doesn't work correctly:</p>
<p><img src="http://i.stack.imgur.com/k5zpP.png" alt="Screenshot"></p>
<p><strong>My code:</strong></p>
<pre><code>fig = pd.DataFrame({
    '债券收益率':bond,
    '债券型基金收益率':bondFunds,
    '被动指数型基金收益率':indexFunds,
    '总收益率':ret})
fig.plot()
plt.legend(loc=0)
plt.title('债券收益率',
fontproperties='SimHei',
fontsize='xx-large')
plt.grid(True)
plt.axis('tight')
</code></pre>
| 0 |
2016-09-22T05:29:21Z
| 39,655,334 |
<p>You need to explicitly pass the font properties to <code>legend</code> function using the <code>prop</code> kwag:</p>
<pre><code>from matplotlib import font_manager
fontP = font_manager.FontProperties()
fontP.set_family('SimHei')
fontP.set_size(14)
fig = pd.DataFrame({
    '债券收益率':bond,
    '债券型基金收益率':bondFunds,
    '被动指数型基金收益率':indexFunds,
    '总收益率':ret})
fig.plot()
# Note the next lines
plt.legend(loc=0, prop=fontP)
plt.title('债券收益率', fontproperties=fontP)
plt.grid(True)
plt.axis('tight')
</code></pre>
<hr>
<p><a href="https://pythonpath.wordpress.com/2013/09/16/chinese-in-matplotlib/" rel="nofollow">Source</a></p>
| 1 |
2016-09-23T07:41:48Z
|
[
"python",
"python-3.x",
"pandas",
"matplotlib"
] |
Python can't find packages in Virtual Environment
| 39,630,944 |
<p>I'm trying to setup my environment for a project but python isn't able to find the modules I've installed with pip.</p>
<p>I did the following:</p>
<pre><code>mkdir helloTwitter
cd helloTwitter
virtualenv myenv
Installing setuptools, pip, wheel...done.
source myenv/bin/activate
pip install tweepy
Collecting tweepy
Using cached tweepy-3.5.0-py2.py3-none-any.whl
Collecting six>=1.7.3 (from tweepy)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting requests>=2.4.3 (from tweepy)
Using cached requests-2.11.1-py2.py3-none-any.whl
Collecting requests-oauthlib>=0.4.1 (from tweepy)
Using cached requests_oauthlib-0.6.2-py2.py3-none-any.whl
Collecting oauthlib>=0.6.2 (from requests-oauthlib>=0.4.1->tweepy)
Installing collected packages: six, requests, oauthlib, requests-oauthlib, tweepy
Successfully installed oauthlib-2.0.0 requests-2.11.1 requests-oauthlib-0.6.2 six-1.10.0 tweepy-3.5.0
</code></pre>
<p>When I try to import the module it says it cannot be found.</p>
<p>The first entry in $PATH is <code>helloTwitter/myenv/bin</code>.
All the packages are showing up in the environment's site-packages directory.
I seem to be using the right python and pip:
<code>which python</code> outputs <code>helloTwitter/myenv/bin/python</code>
and <code>which pip</code> outputs <code>helloTwitter/myenv/bin/pip</code>.</p>
<p>Any advice on where I'm going wrong?</p>
| 0 |
2016-09-22T05:30:37Z
| 39,631,250 |
<p>It looks like you're manually setting your <code>$PATH</code> to point to your virtual environment. The whole point of the <code>myenv/bin/activate</code> script is to take care of this for you. </p>
<p>Once you have activated your virtual environment, any package you install using pip will be placed in the relevant venv <code>site-packages</code> directory (in your case, <code>myenv/lib/python2.7/site-packages</code>). Things like <code>pip --user</code> are unnecessary when you are working in a virtual environment (assuming default behaviour). It's all automatic.</p>
<p>After running <code>activate</code>, you can check where the package actually landed with <code>find . -iname tweepy</code>. </p>
<p>Aliases can cause issues too. <code>which</code> is an external command, and won't always pick these up. A <code>type -a python</code> will flush these out.</p>
<p>A quick test can be done by running <code>helloTwitter/myenv/bin/python -c 'import tweepy'</code> directly. If this behaves differently to however you are currently running python (i.e. doesn't throw an import exception), then this is your problem.</p>
<p>Hope that helps.</p>
| 3 |
2016-09-22T05:56:46Z
|
[
"python",
"virtualenv"
] |
Python can't find packages in Virtual Environment
| 39,630,944 |
<p>I'm trying to setup my environment for a project but python isn't able to find the modules I've installed with pip.</p>
<p>I did the following:</p>
<pre><code>mkdir helloTwitter
cd helloTwitter
virtualenv myenv
Installing setuptools, pip, wheel...done.
source myenv/bin/activate
pip install tweepy
Collecting tweepy
Using cached tweepy-3.5.0-py2.py3-none-any.whl
Collecting six>=1.7.3 (from tweepy)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting requests>=2.4.3 (from tweepy)
Using cached requests-2.11.1-py2.py3-none-any.whl
Collecting requests-oauthlib>=0.4.1 (from tweepy)
Using cached requests_oauthlib-0.6.2-py2.py3-none-any.whl
Collecting oauthlib>=0.6.2 (from requests-oauthlib>=0.4.1->tweepy)
Installing collected packages: six, requests, oauthlib, requests-oauthlib, tweepy
Successfully installed oauthlib-2.0.0 requests-2.11.1 requests-oauthlib-0.6.2 six-1.10.0 tweepy-3.5.0
</code></pre>
<p>When I try to import the module it says it cannot be found.</p>
<p>The first entry in $PATH is <code>helloTwitter/myenv/bin</code>.
All the packages are showing up in the environment's site-packages directory.
I seem to be using the right python and pip:
<code>which python</code> outputs <code>helloTwitter/myenv/bin/python</code>
and <code>which pip</code> outputs <code>helloTwitter/myenv/bin/pip</code>.</p>
<p>Any advice on where I'm going wrong?</p>
| 0 |
2016-09-22T05:30:37Z
| 39,650,431 |
<p>Ok, I think I found a solution, if not an answer.</p>
<ol>
<li>I uninstalled the package and made sure it was not in the system or user site-packages.</li>
<li>I re-created the virtual environment.</li>
<li>I checked that the environment's python and pip were being used.</li>
<li>This time when installing my package I added the <code>--no-cache-dir</code> option.</li>
</ol>
<p>The packages installed successfully. Now Python can find the package.</p>
<pre><code>derptop:environmentScience Marcus$ python
>>> from tweepy import StreamListener
>>> StreamListener
<class 'tweepy.streaming.StreamListener'>
</code></pre>
<p>I checked the <code>sys.path</code> and it now includes the <code>site-packages</code> directory from the virtual environment, where previously it was absent.</p>
<p>Output of <code>sys.path</code></p>
<pre><code>['', ....'/Users/Marcus/CodeProjects/environmentScience/myenv/lib/python2.7/site-packages']
</code></pre>
<p>As far as I can tell the <code>sys.path</code> was referencing the wrong site-packages directory, though I'm not sure why. I'm wondering if pip's use of the cache was causing the site-packages reference to reset to system.</p>
| 0 |
2016-09-22T23:38:50Z
|
[
"python",
"virtualenv"
] |