title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Django userena check if mugshot is set | 39,751,901 | <p>Ok, so this is the models.py of userena. Can I check, using template tags in HTML, whether the mugshot is set? The first if statement checks if the mugshot is uploaded.</p>
<pre><code>def get_mugshot_url(self):
    """
    Returns the image containing the mugshot for the user.
    The mugshot can be a uploaded image or a Gravatar.
    Gravatar functionality will only be used when
    ``USERENA_MUGSHOT_GRAVATAR`` is set to ``True``.
    :return:
        ``None`` when Gravatar is not used and no default image is supplied
        by ``USERENA_MUGSHOT_DEFAULT``.
    """
    # First check for a mugshot and if any return that.
    if self.mugshot:
        return self.mugshot.url
    # Use Gravatar if the user wants to.
    if userena_settings.USERENA_MUGSHOT_GRAVATAR:
        return get_gravatar(self.user.email,
                            userena_settings.USERENA_MUGSHOT_SIZE,
                            userena_settings.USERENA_MUGSHOT_DEFAULT)
    # Gravatar not used, check for a default image.
    else:
        if userena_settings.USERENA_MUGSHOT_DEFAULT not in ['404', 'mm',
                                                            'identicon',
                                                            'monsterid',
                                                            'wavatar']:
            return userena_settings.USERENA_MUGSHOT_DEFAULT
        else:
            return None
</code></pre>
| 0 | 2016-09-28T15:33:48Z | 39,752,090 | <p>You could simply replicate the first <code>if</code> statement in the template on the user profile's instance. Something like</p>
<pre><code>{% if profile.mugshot %}
    {{ profile.mugshot.url }}
{% endif %}
</code></pre>
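<p>As a usage sketch only (the <code>profile</code> variable name depends on your view context; the fallback simply reuses the model's own <code>get_mugshot_url</code> from the question, which already handles the Gravatar/default logic):</p>
<pre><code>{% if profile.mugshot %}
    <img src="{{ profile.mugshot.url }}" alt="mugshot">
{% else %}
    <img src="{{ profile.get_mugshot_url }}" alt="avatar">
{% endif %}
</code></pre>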
| 1 | 2016-09-28T15:43:06Z | [
"python",
"django"
]
|
how to align text in python turtle with drawn polygon with n sides | 39,751,935 | <p>I am a beginner at python and am writing a simple program with python turtle, prompting the user to enter a side length of a polygon, and the program is supposed to draw the polygon and print the name of the person (me) under the polygon.</p>
<p>I have gotten the program to work; however, I cannot seem to figure out how to get the text to print under the polygon, because the side length can be changed by the user, so the polygon can sometimes go out of view, depending on how large the entered side length is.</p>
<p>the program is supposed to look like this:
<a href="http://i.stack.imgur.com/on0Ka.png" rel="nofollow">end result</a> </p>
<p>However, mine looks sort of like this each time:
<a href="http://i.stack.imgur.com/g6SZy.png" rel="nofollow">My result</a></p>
<p>My code is as follows:</p>
<pre><code>import turtle
print('************************************************')
print('This program draws a randomly colored polygon')
print('with side lengths provided by the user.')
print('************************************************')
polygonSideLength = int(input('Enter length of polygon side: \n'))
numberOfSides = int(5 + (28 / 4))
turnAngle = 360 / numberOfSides
import random
randomColor = random.randint(0,5)
if randomColor == 0:
    fillcolor="red"
elif randomColor == 1:
    fillcolor="green"
elif randomColor == 2:
    fillcolor="blue"
elif randomColor == 3:
    fillcolor="cyan"
elif randomColor == 4:
    fillcolor="magenta"
elif randomColor == 5:
    fillcolor="yellow"
print('Length of polygon side =', polygonSideLength)
print('Number of polygon sides =', numberOfSides)
print('Turn angle at each vertex =', turnAngle)
print('Random fill color is', fillcolor)
turtle.begin_fill()
turtle.pen(pensize = 5, pencolor="black", fillcolor = fillcolor)
count = 0
while (count < numberOfSides):
    turtle.forward(polygonSideLength)
    turtle.right(turnAngle)
    count = count + 1
turtle.end_fill()
turtle.setheading(270)
turtle.penup()
turtle.forward(65)
turtle.left(90)
turtle.forward(130)
turtle.pendown()
turtle.write("polygon drawn by: Name", align = "right", font=("Arial", 12, "normal"))
turtle.hideturtle()
turtle.done()
</code></pre>
<p>Can someone help me on how to change this code so that it works?</p>
| 0 | 2016-09-28T15:35:13Z | 39,785,257 | <p>Let's try to go the simplest route to a solution. First, let's get your polygon centered on the window. We can do that by adding:</p>
<pre><code>turtle.backward(polygonSideLength / 2)
</code></pre>
<p>before the filled polygon drawing begins. Next, let's get it into the upper half of the window instead of the lower half. We can do that by changing:</p>
<pre><code>turtle.right(turnAngle)
</code></pre>
<p>to:</p>
<pre><code>turtle.left(turnAngle)
</code></pre>
<p>I.e. just draw it in the other direction. Now that we have the polygon centered horizontally and above the center line vertically, we just need to write centered text below the center line, displaced by the height of the font (or two to make some space):</p>
<pre><code>fontSize = 18
...
turtle.goto(0, -fontSize * 2)
turtle.write("polygon drawn by: Name", align="center", font=("Arial", fontSize, "normal"))
</code></pre>
<p>Putting it all together, and making some simple style changes, we get:</p>
<pre><code>import turtle
import random
colors = ["red", "green", "blue", "cyan", "magenta", "yellow"]
print('************************************************')
print('This program draws a randomly colored polygon')
print('with side lengths provided by the user.')
print('************************************************')
polygonSideLength = int(input('Enter length of polygon side: '))
numberOfSides = 5 + (28 // 4) # I assume this should be something more interesting...
turnAngle = 360 / numberOfSides
fontSize = 18
randomColor = random.randint(0, 5)
fillcolor = colors[randomColor]
print('Length of polygon side =', polygonSideLength)
print('Number of polygon sides =', numberOfSides)
print('Turn angle at each vertex =', turnAngle)
print('Random fill color is', fillcolor)
turtle.pen(pensize=5, pencolor="black", fillcolor=fillcolor)
turtle.backward(polygonSideLength / 2)
turtle.begin_fill()
for count in range(numberOfSides):
    turtle.forward(polygonSideLength)
    turtle.left(turnAngle)
turtle.end_fill()
turtle.penup()
turtle.setheading(270)
turtle.goto(0, -fontSize * 2)
turtle.write("polygon drawn by: Name", align="center", font=("Arial", fontSize, "normal"))
turtle.hideturtle()
turtle.done()
</code></pre>
<p><a href="http://i.stack.imgur.com/Zq4S2.png" rel="nofollow"><img src="http://i.stack.imgur.com/Zq4S2.png" alt="enter image description here"></a></p>
| 0 | 2016-09-30T06:43:56Z | [
"python",
"turtle-graphics"
]
|
what is the most pythonic way to find an element in a list that is different from the other elements? | 39,751,945 | <p>Suppose we have a list of unknown size and there is an element in the list that is different from the other elements, but we don't know the index of that element. The list only contains numerics and is fetched from a remote server, and the length of the list and the index of the different element change every time. What is the most pythonic way to find that different element?
I tried this but I'm not sure if it's the best solution.</p>
<pre><code>a = 1
different_element = None
my_list = fetch_list()
b = my_list[0] - a
for elem in my_list[1::]:
    if elem - a != b:
        different_element = elem
print(different_element)
</code></pre>
| 2 | 2016-09-28T15:35:36Z | 39,752,124 | <p>Would this work for you?</p>
<pre><code>In [6]: my_list = [1,1,1,2,1,1,1]
In [7]: different = [ii for ii in set(my_list) if my_list.count(ii) == 1]
In [8]: different
Out[8]: [2]
</code></pre>
| 2 | 2016-09-28T15:44:38Z | [
"python"
]
|
what is the most pythonic way to find an element in a list that is different from the other elements? | 39,751,945 | <p>Suppose we have a list of unknown size and there is an element in the list that is different from the other elements, but we don't know the index of that element. The list only contains numerics and is fetched from a remote server, and the length of the list and the index of the different element change every time. What is the most pythonic way to find that different element?
I tried this but I'm not sure if it's the best solution.</p>
<pre><code>a = 1
different_element = None
my_list = fetch_list()
b = my_list[0] - a
for elem in my_list[1::]:
    if elem - a != b:
        different_element = elem
print(different_element)
</code></pre>
| 2 | 2016-09-28T15:35:36Z | 39,752,197 | <p>You can use <code>Counter</code> from <code>collections</code> package</p>
<pre><code>from collections import Counter
a = [1,2,3,4,3,4,1]
b = Counter(a) # Counter({1: 2, 2: 1, 3: 2, 4: 2})
elem = list(b.keys())[list(b.values()).index(1)] # getting elem which is key with value that equals 1
print(a.index(elem))
</code></pre>
<p>Another possible solution that just computes <code>elem</code> differently</p>
<pre><code>a = [1,2,3,4,3,4,1]
b = Counter(a) # Counter({1: 2, 2: 1, 3: 2, 4: 2})
elem = (k for k, v in b.items() if v == 1)
print(a.index(next(elem)))
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Time consumption: </p>
<p>As @Jblasco mentioned, Jblasco's method is not a really efficient one, and I was curious to measure it.</p>
<p>So the initial data is an array with 200-400 elements, with only one unique value. The code that generates that array is below. At the end of the snippet, the first 100 elements are printed to show that there is only one unique value.</p>
<pre><code>import random
from itertools import chain
f = lambda x: [x]*random.randint(2,4)
a=list(chain.from_iterable(f(random.randint(0,100)) for _ in range(100)))
a[random.randint(1, 100)] = 101
print(a[:100])
# [5, 5, 5, 84, 84, 84, 46, 46, 46, 46, 6, 6, 6, 68, 68, 68, 68, 38,
# 38, 38, 44, 44, 61, 61, 15, 15, 15, 15, 36, 36, 36, 36, 73, 73, 73,
# 28, 28, 28, 28, 6, 6, 93, 93, 74, 74, 74, 74, 12, 12, 72, 72, 22,
# 22, 22, 22, 78, 78, 17, 17, 17, 93, 93, 93, 12, 12, 12, 23, 23, 23,
# 23, 52, 52, 88, 88, 79, 79, 42, 42, 34, 34, 47, 47, 1, 1, 1, 1, 71,
# 71, 1, 1, 45, 45, 101, 45, 39, 39, 50, 50, 50, 50]
</code></pre>
<p>That's the code that show us results, i choose to execute 3 times with 10000 executions:</p>
<pre><code>from timeit import repeat
s = """\
import random
from itertools import chain
f = lambda x: [x]*random.randint(2,4)
a=list(chain.from_iterable(f(random.randint(0,100)) for _ in range(100)))
a[random.randint(1, 100)] = 101
"""
print('my 1st method:', repeat(stmt="""from collections import Counter
b=Counter(a)
elem = (k for k, v in b.items() if v == 1)
a.index(next(elem))""",
setup=s, number=10000, repeat=3))
print('my 2nd method:', repeat(stmt="""from collections import Counter
b = Counter(a)
elem = list(b.keys())[list(b.values()).index(1)]
a.index(elem)""",
setup=s, number=10000, repeat=3))
print('@Jblasco method:', repeat(stmt="""different = [ii for ii in set(a) if a.count(ii) == 1]
different""", setup=s, number=10000, repeat=3))
# my 1st method: [0.303596693000145, 0.27322746600111714, 0.2701447969993751]
# my 2nd method: [0.2715420649983571, 0.28590541199810104, 0.2821485950007627]
# @Jblasco method: [3.2133491599997797, 3.488262927003234, 2.884892332000163]
</code></pre>
| 2 | 2016-09-28T15:48:26Z | [
"python"
]
|
what is the most pythonic way to find an element in a list that is different from the other elements? | 39,751,945 | <p>Suppose we have a list of unknown size and there is an element in the list that is different from the other elements, but we don't know the index of that element. The list only contains numerics and is fetched from a remote server, and the length of the list and the index of the different element change every time. What is the most pythonic way to find that different element?
I tried this but I'm not sure if it's the best solution.</p>
<pre><code>a = 1
different_element = None
my_list = fetch_list()
b = my_list[0] - a
for elem in my_list[1::]:
    if elem - a != b:
        different_element = elem
print(different_element)
</code></pre>
| 2 | 2016-09-28T15:35:36Z | 39,752,296 | <p>I would try maybe something like this:</p>
<pre><code>newList = list(set(my_list))
print newList.pop()
</code></pre>
<p>Assuming there's only 1 different value and the rest are all the same.
There's a little bit of ambiguity in your question which makes it difficult to answer but that's all I could think of optimally.</p>
| 1 | 2016-09-28T15:53:31Z | [
"python"
]
|
what is the most pythonic way to find an element in a list that is different from the other elements? | 39,751,945 | <p>Suppose we have a list of unknown size and there is an element in the list that is different from the other elements, but we don't know the index of that element. The list only contains numerics and is fetched from a remote server, and the length of the list and the index of the different element change every time. What is the most pythonic way to find that different element?
I tried this but I'm not sure if it's the best solution.</p>
<pre><code>a = 1
different_element = None
my_list = fetch_list()
b = my_list[0] - a
for elem in my_list[1::]:
    if elem - a != b:
        different_element = elem
print(different_element)
</code></pre>
| 2 | 2016-09-28T15:35:36Z | 39,754,208 | <p>This is a great use for <a href="http://www.numpy.org" rel="nofollow">numpy</a></p>
<p>Given some random uniform list with a single uniquely different number in it:</p>
<pre><code>>>> li=[1]*100+[200]+[1]*250
</code></pre>
<p>If the uniform value is known (in this case 1 and the unknown value is 200) you can use <code>np.where</code> on an array to get that value:</p>
<pre><code>>>> import numpy as np
>>> a=np.array(li)
>>> a[a!=1]
array([200])
</code></pre>
<p>If the uniform values are not known, you can use <code>np.unique</code> to get the counts of uniques:</p>
<pre><code>>>> np.unique(a, return_counts=True)
(array([ 1, 200]), array([350, 1]))
</code></pre>
<p>For a pure Python solution, use a generator with <code>next</code> to get the first value that is different than all the others:</p>
<pre><code>>>> next(e for i, e in enumerate(li) if li[i]!=1)
200
</code></pre>
<p>Or, you can use <a href="https://docs.python.org/2/library/itertools.html#itertools.dropwhile" rel="nofollow">dropwhile</a> from itertools:</p>
<pre><code>>>> from itertools import dropwhile
>>> next(dropwhile(lambda e: e==1, li))
200
</code></pre>
<p>If you do not know what the uniform value is, use a Counter on a slice big enough to get it:</p>
<pre><code>>>> uniform=Counter(li[0:3]).most_common()[0][0]
>>> uniform
1
>>> next(e for i, e in enumerate(li) if li[i]!=uniform)
200
</code></pre>
<p>In these cases, <code>next</code> will short-circuit at the first value that satisfies the condition. </p>
| 1 | 2016-09-28T17:36:03Z | [
"python"
]
|
Variable length arguments | 39,752,005 | <p>I am creating a python program using the <a href="https://docs.python.org/3/library/argparse.html"><code>argparse</code> module</a> and I want to allow the program to take either one argument or 2 arguments.</p>
<p>What do I mean? Well, I am creating a program to download/decode MMS messages and I want the user to either be able to provide a phone number and MMS-Transaction-ID to download the data or provide a file from their system of already downloaded MMS data.</p>
<p>What I want is something like this, where you can either enter in 2 arguments, or 1 argument:</p>
<pre><code>./mms.py (phone mmsid | file)
</code></pre>
<p><sub>NOTE: <code>phone</code> would be a phone number (like <code>15555555555</code>), <code>mmsid</code> a string (MMS-Transaction-ID) and <code>file</code> a file on the user's computer</sub></p>
<p>Is this possible with <code>argparse</code>? I was hoping I could use <code>add_mutually_exclusive_group</code>, but that didn't seem to do what I want.</p>
<pre><code>parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('phone', help='Phone number')
group.add_argument('mmsid', help='MMS-Transaction-ID to download')
group.add_argument('file', help='MMS binary file to read')
</code></pre>
<p>This gives the error (removing <code>required=True</code> gives the same error):</p>
<blockquote>
<p>ValueError: mutually exclusive arguments must be optional</p>
</blockquote>
<p>It looks like it wants me to use <code>--phone</code> instead of <code>phone</code>:</p>
<pre><code>parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--phone', help='Phone number')
group.add_argument('--mmsid', help='MMS-Transaction-ID to download')
group.add_argument('--file', help='MMS binary file to read')
</code></pre>
<p>When running my program with no arguments, I see:</p>
<blockquote>
<p>error: one of the arguments --phone --mmsid --file is required</p>
</blockquote>
<p>This is closer to what I want, but can I make <code>argparse</code> do <code>(--phone --msid) or (--file)</code>?</p>
| 6 | 2016-09-28T15:38:30Z | 39,752,370 | <p>This is a little beyond the scope of what <code>argparse</code> can do, as the "type" of the first argument isn't known ahead of time. I would do something like</p>
<pre><code>import argparse
p = argparse.ArgumentParser()
p.add_argument("file_or_phone", help="MMS File or phone number")
p.add_argument ("mmsid", nargs="?", help="MMS-Transaction-ID")
args = p.parse_args()
</code></pre>
<p>To determine whether <code>args.file_or_phone</code> is intended as a file name or a phone number, you need to check whether or not <code>args.mmsid</code> is <code>None</code>.</p>
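<p>A rough sketch of that check, continuing the snippet above (the variable names are the ones defined in the parser):</p>
<pre><code>if args.mmsid is None:
    # Only one positional was given, so treat it as a file path.
    with open(args.file_or_phone, 'rb') as fp:
        mms_data = fp.read()
else:
    # Both positionals were given: a phone number and an MMS-Transaction-ID.
    phone, mmsid = args.file_or_phone, args.mmsid
</code></pre>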
| 4 | 2016-09-28T15:56:41Z | [
"python",
"python-3.x",
"argparse"
]
|
Python How to detect vertical and horizontal lines in an image with HoughLines with OpenCV? | 39,752,235 | <p>I'm trying to obtain a threshold of the calibration chessboard. I can't detect the chessboard corners directly, as there is some dust (I observe a micro chessboard).
I tried several methods, and HoughLinesP seems to be the easiest approach. But the results are not good; how can I improve them?</p>
<pre><code>import numpy as np
import cv2
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
print img.shape[1]
print img.shape
minLineLength=100
lines = cv2.HoughLinesP(image=edges,rho=0.02,theta=np.pi/500, threshold=10,lines=np.array([]), minLineLength=minLineLength,maxLineGap=100)
a,b,c = lines.shape
for i in range(a):
    cv2.line(img, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), (0, 0, 255), 3, cv2.LINE_AA)
cv2.imwrite('houghlines5.jpg',img)
</code></pre>
<p>As you can see in the figure below, I can't obtain my chessboard; the lines are plotted in a lot of directions... (the original picture: <a href="https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg" rel="nofollow">https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg</a>)</p>
<p><a href="http://i.stack.imgur.com/Qmksl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qmksl.jpg" alt="enter image description here"></a></p>
| 2 | 2016-09-28T15:51:07Z | 39,916,236 | <p>I would rather write this as a comment but unfortunately I can't. You should change the minLineLength and maxLineGap. Or, if it's just squares that you have to find, I would get all the lines and check the angles between them to keep only the lines along the squares. I have worked with HoughLinesP before and it is pretty much based on the above two arguments. Additionally, try using bilateral filtering. It really helps when sharpening with a median filter doesn't help. </p>
<p><a href="http://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html" rel="nofollow">Bilateral Filter</a></p>
| 0 | 2016-10-07T11:33:47Z | [
"python",
"opencv",
"camera-calibration",
"hough-transform"
]
|
Python How to detect vertical and horizontal lines in an image with HoughLines with OpenCV? | 39,752,235 | <p>I'm trying to obtain a threshold of the calibration chessboard. I can't detect the chessboard corners directly, as there is some dust (I observe a micro chessboard).
I tried several methods, and HoughLinesP seems to be the easiest approach. But the results are not good; how can I improve them?</p>
<pre><code>import numpy as np
import cv2
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
print img.shape[1]
print img.shape
minLineLength=100
lines = cv2.HoughLinesP(image=edges,rho=0.02,theta=np.pi/500, threshold=10,lines=np.array([]), minLineLength=minLineLength,maxLineGap=100)
a,b,c = lines.shape
for i in range(a):
    cv2.line(img, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), (0, 0, 255), 3, cv2.LINE_AA)
cv2.imwrite('houghlines5.jpg',img)
</code></pre>
<p>As you can see in the figure below, I can't obtain my chessboard; the lines are plotted in a lot of directions... (the original picture: <a href="https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg" rel="nofollow">https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg</a>)</p>
<p><a href="http://i.stack.imgur.com/Qmksl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qmksl.jpg" alt="enter image description here"></a></p>
| 2 | 2016-09-28T15:51:07Z | 39,925,027 | <p>In image processing there are some steps you have to go through, such as filtering, before you go for edge detection. In your case the dust is just noise that you have to remove with a filter: use a Gaussian blur, after that use thresholding, and then use Canny for the edges. In OpenCV there is also corner detection you can use, or you can just go for key points after thresholding, if I'm not wrong. Try those steps and see the result.</p>
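<p>A rough sketch of that preprocessing order (the kernel size and threshold values below are only illustrative assumptions, not tuned for this image):</p>
<pre><code>import cv2

img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                      # filter out dust/noise first
_, binary = cv2.threshold(blurred, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # then threshold
edges = cv2.Canny(binary, 50, 150, apertureSize=3)               # then detect edges
</code></pre>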
| 0 | 2016-10-07T20:08:12Z | [
"python",
"opencv",
"camera-calibration",
"hough-transform"
]
|
Python How to detect vertical and horizontal lines in an image with HoughLines with OpenCV? | 39,752,235 | <p>I'm trying to obtain a threshold of the calibration chessboard. I can't detect the chessboard corners directly, as there is some dust (I observe a micro chessboard).
I tried several methods, and HoughLinesP seems to be the easiest approach. But the results are not good; how can I improve them?</p>
<pre><code>import numpy as np
import cv2
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
print img.shape[1]
print img.shape
minLineLength=100
lines = cv2.HoughLinesP(image=edges,rho=0.02,theta=np.pi/500, threshold=10,lines=np.array([]), minLineLength=minLineLength,maxLineGap=100)
a,b,c = lines.shape
for i in range(a):
    cv2.line(img, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), (0, 0, 255), 3, cv2.LINE_AA)
cv2.imwrite('houghlines5.jpg',img)
</code></pre>
<p>As you can see in the figure below, I can't obtain my chessboard; the lines are plotted in a lot of directions... (the original picture: <a href="https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg" rel="nofollow">https://s22.postimg.org/iq2b91xq9/droite_Image_00000.jpg</a>)</p>
<p><a href="http://i.stack.imgur.com/Qmksl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Qmksl.jpg" alt="enter image description here"></a></p>
| 2 | 2016-09-28T15:51:07Z | 39,925,278 | <p>You are using too small a value for rho.</p>
<p><strong>Try the below code:-</strong></p>
<pre><code>import numpy as np
import cv2
gray = cv2.imread('lines.jpg')
edges = cv2.Canny(gray,50,150,apertureSize = 3)
cv2.imwrite('edges-50-150.jpg',edges)
minLineLength=100
lines = cv2.HoughLinesP(image=edges,rho=1,theta=np.pi/180, threshold=100,lines=np.array([]), minLineLength=minLineLength,maxLineGap=80)
a,b,c = lines.shape
for i in range(a):
    cv2.line(gray, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), (0, 0, 255), 3, cv2.LINE_AA)
cv2.imwrite('houghlines5.jpg',gray)
</code></pre>
<p>Note, the change in <strong>rho value, pi value and maxLineGap</strong> to reduce outliers.</p>
<p><strong>Input Image</strong>
<a href="http://i.stack.imgur.com/4wQjD.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/4wQjD.jpg" alt="Input Image"></a></p>
<p><strong>Edges Image</strong>
<a href="http://i.stack.imgur.com/Z1LHg.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Z1LHg.jpg" alt="Edges Image"></a></p>
<p><strong>Output Image</strong>
<a href="http://i.stack.imgur.com/TGbc8.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/TGbc8.jpg" alt="Output Image"></a></p>
| 3 | 2016-10-07T20:25:58Z | [
"python",
"opencv",
"camera-calibration",
"hough-transform"
]
|
Using BeautifulSoup, how to get text only from the specific selector without the text in the children? | 39,752,403 | <p>I don't know how to code BeautifulSoup so that it gives me only the text from the selected tag. I get more such as the text of its child(ren)!</p>
<p>For example:</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup('<div id="left"><ul><li>"I want this text"<a href="someurl.com"> I don\'t want this text</a><p>I don\'t want this either</li><li>"Good"<a href="someurl.com"> Not Good</a><p> Not Good either</li></ul></div>', "html5lib")
x = soup.select('ul > li')
for i in x:
print(i.text)
</code></pre>
<p>Output: </p>
<blockquote>
<p>"I want this text" I don't want this textI don't want this either</p>
<p>"Good" Not Good Not Good either</p>
</blockquote>
<p>Desired Output:</p>
<blockquote>
<p>"I want this text" </p>
<p>"Good"</p>
</blockquote>
| 1 | 2016-09-28T15:58:33Z | 39,752,453 | <p>One option would be to get the first element of the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#contents-and-children" rel="nofollow"><code>contents</code> list</a>:</p>
<pre><code>for i in x:
    print(i.contents[0])
</code></pre>
<p>Another - find the first <em>text node</em>:</p>
<pre><code>for i in x:
    print(i.find(text=True))
</code></pre>
<p>Both would print:</p>
<pre><code>"I want this text"
"Good"
</code></pre>
| 3 | 2016-09-28T16:01:04Z | [
"python",
"web-scraping",
"beautifulsoup"
]
|
Using BeautifulSoup, how to get text only from the specific selector without the text in the children? | 39,752,403 | <p>I don't know how to code BeautifulSoup so that it gives me only the text from the selected tag. I get more such as the text of its child(ren)!</p>
<p>For example:</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup('<div id="left"><ul><li>"I want this text"<a href="someurl.com"> I don\'t want this text</a><p>I don\'t want this either</li><li>"Good"<a href="someurl.com"> Not Good</a><p> Not Good either</li></ul></div>', "html5lib")
x = soup.select('ul > li')
for i in x:
    print(i.text)
</code></pre>
<p>Output: </p>
<blockquote>
<p>"I want this text" I don't want this textI don't want this either</p>
<p>"Good" Not Good Not Good either</p>
</blockquote>
<p>Desired Output:</p>
<blockquote>
<p>"I want this text" </p>
<p>"Good"</p>
</blockquote>
| 1 | 2016-09-28T15:58:33Z | 39,753,003 | <pre><code>from bs4 import BeautifulSoup
from bs4 import NavigableString
soup = BeautifulSoup('<div id="left"><ul><li>"I want this text"<a href="someurl.com"> I don\'t want this text</a><p>I don\'t want this either</li><li>"Good"<a href="someurl.com"> Not Good</a><p> Not Good either</li></ul></div>', "html5lib")
x = soup.select('ul > li')
for i in x:
    if isinstance(i.next_element, NavigableString):  # if li's next child is a string
        print(i.next_element)
</code></pre>
| -1 | 2016-09-28T16:29:59Z | [
"python",
"web-scraping",
"beautifulsoup"
]
|
Reading 3 columns of data from .txt into list | 39,752,428 | <p>Hello, I have a question here. I have the following text file (3 columns separated by spaces). I want to insert each column (tt, ff, ll) into its own list (time, l, f).</p>
<p>Text file:</p>
<pre><code> 0.0000000000000000 656.5342434532456082 0.9992059165961109
1.0001828508431749 656.5342334512754405 1.0009810769697651
2.0003657016863499 656.5342259805754566 0.9989386155502871
3.0005485525295246 656.5342339081594218 1.0005032672779635
4.0007314033726997 656.5342356101768928 0.9996101946453564
5.0009142542158749 656.5342236489159404 0.9986884414684027
6.0010971050590491 656.5342474828242985 1.0001061182847479
7.0012799559022243 656.5342355894648563 1.0003982380731031
8.0014628067453994 656.5342256832242356 0.9993176599964499
9.0016456575885737 656.5342218575017341 0.9999117456245585
10.0018285084317498 656.5342408970133192 1.0000973751521087
11.0020113592749240 656.5342243211601954 0.9997189612768125
12.0021942101180983 656.5342320396634932 0.9997487346699927
13.0023770609612743 656.5342291293554808 0.9991986731183715
</code></pre>
<p>But I want the following output:</p>
<pre><code>time: (0.00,1.00,2.003,4.0007 etc...)
l: (656.53,656.53,656.53,656.53 etc..)
f: (...)
</code></pre>
<p>Attempt code:</p>
<pre><code>from numpy import *
def read_file(filename):
    time = []                        # list time
    f = []                           # ...
    l = []                           # ...
    infile = open(filename, "r")     # reads file
    for line in infile:              # each line in txt file
        numbers = line.split()       # removes the " "
        tt = numbers[0]              # 1st column?
        ff = numbers[1]              # 2nd column?
        ll = numbers[2]              # 3rd column?
        time.append(tt)              # Inserts 1st column(tt) into list(time)
        f.append(ff)                 # ...
        l.append(ll)                 # ...
        return time,f,l              # return lists
txt1 =read_file("1.txt")             # calls function
print txt1                           # print return values
</code></pre>
| 0 | 2016-09-28T15:59:46Z | 39,752,603 | <p>use the <code>loadtxt</code> function of numpy</p>
<pre><code>text_array = np.loadtxt('my_file.txt')
time = text_array[:, 0]
l = text_array[:, 1]
f = text_array[:, 2]
</code></pre>
| 0 | 2016-09-28T16:08:03Z | [
"python",
"list",
"text-files",
"multiple-columns"
]
|
Reading 3 columns of data from .txt into list | 39,752,428 | <p>Hello, I have a question here. I have the following text file (3 columns separated by spaces). I want to insert each column (tt, ff, ll) into its own list (time, l, f).</p>
<p>Text file:</p>
<pre><code> 0.0000000000000000 656.5342434532456082 0.9992059165961109
1.0001828508431749 656.5342334512754405 1.0009810769697651
2.0003657016863499 656.5342259805754566 0.9989386155502871
3.0005485525295246 656.5342339081594218 1.0005032672779635
4.0007314033726997 656.5342356101768928 0.9996101946453564
5.0009142542158749 656.5342236489159404 0.9986884414684027
6.0010971050590491 656.5342474828242985 1.0001061182847479
7.0012799559022243 656.5342355894648563 1.0003982380731031
8.0014628067453994 656.5342256832242356 0.9993176599964499
9.0016456575885737 656.5342218575017341 0.9999117456245585
10.0018285084317498 656.5342408970133192 1.0000973751521087
11.0020113592749240 656.5342243211601954 0.9997189612768125
12.0021942101180983 656.5342320396634932 0.9997487346699927
13.0023770609612743 656.5342291293554808 0.9991986731183715
</code></pre>
<p>But I want the following output:</p>
<pre><code>time: (0.00,1.00,2.003,4.0007 etc...)
l: (656.53,656.53,656.53,656.53 etc..)
f: (...)
</code></pre>
<p>Attempt code:</p>
<pre><code>from numpy import *
def read_file(filename):
    time = []                        # list time
    f = []                           # ...
    l = []                           # ...
    infile = open(filename, "r")     # reads file
    for line in infile:              # each line in txt file
        numbers = line.split()       # removes the " "
        tt = numbers[0]              # 1st column?
        ff = numbers[1]              # 2nd column?
        ll = numbers[2]              # 3rd column?
        time.append(tt)              # Inserts 1st column(tt) into list(time)
        f.append(ff)                 # ...
        l.append(ll)                 # ...
        return time,f,l              # return lists
txt1 =read_file("1.txt")             # calls function
print txt1                           # print return values
</code></pre>
| 0 | 2016-09-28T15:59:46Z | 39,752,608 | <p>I just tried your code and it works, returning a tuple of lists. If your question is how to turn this tuple of lists into your requested format (which will be very clunky, because of the amount of data), you can just add this to the end, using your <code>txt1</code> variable:</p>
<p>(edited to include line breaks)</p>
<pre><code>print ("time: ({})\nl: ({})\nf: ({})".format(*[','.join(i) for i in txt1]))
</code></pre>
<p>This joins each list with commas and uses the unpacking operator (*) to separate them into the three arguments of the <code>format</code> function.</p>
<p>I like the other answer that uses <code>numpy</code>'s capabilities to process the file. You can also do it this way using built-in functions (note that this returns a list of tuples):</p>
<pre><code>def read_file(filename):
    with open(filename, 'r') as f:
        return zip(*[line.split() for line in f])
</code></pre>
| 1 | 2016-09-28T16:08:18Z | [
"python",
"list",
"text-files",
"multiple-columns"
]
|
Reading 3 columns of data from .txt into list | 39,752,428 | <p>Hello, I have a question here. I have the following text file (3 columns separated by spaces). I want to insert each column (tt, ff, ll) into its own list (time, l, f).</p>
<p>Text file:</p>
<pre><code> 0.0000000000000000 656.5342434532456082 0.9992059165961109
1.0001828508431749 656.5342334512754405 1.0009810769697651
2.0003657016863499 656.5342259805754566 0.9989386155502871
3.0005485525295246 656.5342339081594218 1.0005032672779635
4.0007314033726997 656.5342356101768928 0.9996101946453564
5.0009142542158749 656.5342236489159404 0.9986884414684027
6.0010971050590491 656.5342474828242985 1.0001061182847479
7.0012799559022243 656.5342355894648563 1.0003982380731031
8.0014628067453994 656.5342256832242356 0.9993176599964499
9.0016456575885737 656.5342218575017341 0.9999117456245585
10.0018285084317498 656.5342408970133192 1.0000973751521087
11.0020113592749240 656.5342243211601954 0.9997189612768125
12.0021942101180983 656.5342320396634932 0.9997487346699927
13.0023770609612743 656.5342291293554808 0.9991986731183715
</code></pre>
<p>But I want the following output:</p>
<pre><code>time: (0.00,1.00,2.003,4.0007 etc...)
l: (656.53,656.53,656.53,656.53 etc..)
f: (...)
</code></pre>
<p>Attempt code:</p>
<pre><code>from numpy import *
def read_file(filename):
    time = []                        # list time
    f = []                           # ...
    l = []                           # ...
    infile = open(filename, "r")     # reads file
    for line in infile:              # each line in txt file
        numbers = line.split()       # removes the " "
        tt = numbers[0]              # 1st column?
        ff = numbers[1]              # 2nd column?
        ll = numbers[2]              # 3rd column?
        time.append(tt)              # Inserts 1st column(tt) into list(time)
        f.append(ff)                 # ...
        l.append(ll)                 # ...
        return time,f,l              # return lists
txt1 =read_file("1.txt")             # calls function
print txt1                           # print return values
</code></pre>
| 0 | 2016-09-28T15:59:46Z | 39,752,635 | <p>Your <code>return</code> statement is currently incorrectly indented. When I corrected that it appears to work.</p>
<p>A simpler way might be</p>
<pre><code>def read_file(filename):
infile = open(filename, "r") # reads file
rows = [line.split() for line in infile]
return [r[i] for r in rows for i in range(3)] # return lists
</code></pre>
| 0 | 2016-09-28T16:09:49Z | [
"python",
"list",
"text-files",
"multiple-columns"
]
|
Merge two CSV files Keeping duplicates | 39,752,434 | <p>I am trying to merge two csv files and keep duplicate records. Each file may have a matching record, one file may have duplicate records (same student ID with different test scores), and one file may not have a matching student record in the other file. The code below works as expected; however, if a duplicate record exists, only the second record is written to the merged file. I've looked through many threads and they all address removing duplicates, but I need to keep duplicate records.</p>
<pre><code>import csv
from collections import OrderedDict
import os
cd = os.path.dirname(os.path.abspath(__file__))
fafile = os.path.join(cd, 'MJB_FAScores.csv')
testscores = os.path.join(cd, 'MJB_TestScores.csv')
filenames = fafile, testscores
data = OrderedDict()
fieldnames = []
for filename in filenames:
    with open(filename, 'r') as fp:
        reader = csv.DictReader(fp)
        fieldnames.extend(reader.fieldnames)
        for row in reader:
            data.setdefault(row['Student_Number'], {}).update(row)
fieldnames = list(OrderedDict.fromkeys(fieldnames))
with open('merged.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(fieldnames)
    for row in data.values():
        writer.writerow([row.get(fields, '') for fields in fieldnames])
</code></pre>
<hr>
<p>fafile:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45
981235 Ball, David 5 Longo, Noel 56 89 23
91851 Ball, Jeff 0 Delaney, Mary 83 45 42
543 MAX 2 Phil 77 77 77
543 MAX 2 Annie 88 888 88
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>testscores:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 175 221 A 54 23 78 99
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
85250 Ball, Jonathan 3 Clarke, Mary 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 198 175 C
65731 Ball, Peggy 4 Bauman, Edyn 200 76 Y 54 23 78 99
543 MAX 2 Phil 111 111 Z 33 44 55 66
543 MAX 2 Annie 222 222 A 44 55 66 77
</code></pre>
<p>Current output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>Desired output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98 175 221 A 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
543 MAX 2 Phil 77 77 77 111 111 Z 33 44 55 66
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
| -2 | 2016-09-28T16:00:03Z | 39,752,562 | <p>You're using an <code>OrderedDict</code> to store student IDs (keys) and their record (values) from both files. </p>
<p>This has a subtle implication: new <em>student ID</em> entries will replace the old duplicate <em>student ID</em> dict entries from the previous file, because dict keys are unique.</p>
<p>You may consider writing the entries to the <em>merge</em> file immediately after they are read from one file, after which the <code>dict</code> is cleared and the next file is used to <em>repopulate</em> the <code>dict</code>. This will help avoid the <em>collision</em> and <em>replacement</em> of student IDs.</p>
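<p>A tiny standalone illustration of that collision (hypothetical rows, not the actual files) shows why only the last duplicate survives:</p>
<pre><code>from collections import OrderedDict

data = OrderedDict()
rows = [{'Student_Number': '65731', 'FA1': '56'},
        {'Student_Number': '65731', 'FA1': '32'}]   # duplicate student ID
for row in rows:
    data.setdefault(row['Student_Number'], {}).update(row)

print(data)  # only the second record's values remain for key '65731' (FA1 == '32')
</code></pre>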
| 0 | 2016-09-28T16:06:09Z | [
"python",
"python-3.x",
"csv",
"merge"
]
|
Merge two CSV files Keeping duplicates | 39,752,434 | <p>I am trying to merge two csv files and keep duplicate records. Each file may have a matching record, one file may have duplicate records (same student ID with different test scores), and one file may not have a matching student record in the other file. The code below works as expected; however, if a duplicate record exists, only the second record is written to the merged file. I've looked through many threads and they all address removing duplicates, but I need to keep duplicate records.</p>
<pre><code>import csv
from collections import OrderedDict
import os
cd = os.path.dirname(os.path.abspath(__file__))
fafile = os.path.join(cd, 'MJB_FAScores.csv')
testscores = os.path.join(cd, 'MJB_TestScores.csv')
filenames = fafile, testscores
data = OrderedDict()
fieldnames = []
for filename in filenames:
    with open(filename, 'r') as fp:
        reader = csv.DictReader(fp)
        fieldnames.extend(reader.fieldnames)
        for row in reader:
            data.setdefault(row['Student_Number'], {}).update(row)
fieldnames = list(OrderedDict.fromkeys(fieldnames))
with open('merged.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(fieldnames)
    for row in data.values():
        writer.writerow([row.get(fields, '') for fields in fieldnames])
</code></pre>
<hr>
<p>fafile:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45
981235 Ball, David 5 Longo, Noel 56 89 23
91851 Ball, Jeff 0 Delaney, Mary 83 45 42
543 MAX 2 Phil 77 77 77
543 MAX 2 Annie 88 888 88
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>testscores:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 175 221 A 54 23 78 99
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
85250 Ball, Jonathan 3 Clarke, Mary 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 198 175 C
65731 Ball, Peggy 4 Bauman, Edyn 200 76 Y 54 23 78 99
543 MAX 2 Phil 111 111 Z 33 44 55 66
543 MAX 2 Annie 222 222 A 44 55 66 77
</code></pre>
<p>Current output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>Desired output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98 175 221 A 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
543 MAX 2 Phil 77 77 77 111 111 Z 33 44 55 66
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
| -2 | 2016-09-28T16:00:03Z | 39,752,987 | <p>For your inner loop you can do the following:</p>
<pre><code>for row in reader:
    new_record = dict(row)
    records = data.setdefault(row['Student_Number'], [])
    for record in records:
        new_record.update(record)   # preserve old values
        record = copy.copy(new_record)
    new_record.update(dict(row))    # to preserve original values
    records.append(new_record)
</code></pre>
<p>When writing you have to go one step deeper now:</p>
<pre><code>for rows in data.values():
    for row in rows:
        writer.writerow(row)
</code></pre>
<p>At the beginning add "import copy". I wrote it from my head, it may need some tweaking.</p>
<p>Basically what this does is add another layer in the form of a list, where you update your old values with the new ones and add the updated new record to the end of the list (all collected data are put into the new record). When writing, it has to go one level deeper.</p>
| 0 | 2016-09-28T16:29:07Z | [
"python",
"python-3.x",
"csv",
"merge"
]
|
Merge two CSV files Keeping duplicates | 39,752,434 | <p>I am trying to merge two csv files and keep duplicate records. Each file may have a matching record, one file may have duplicate records (same student ID with different test scores), and one file may not have a matching student record in the other file. The code below works as expected; however, if a duplicate record exists, only the second record is written to the merged file. I've looked through many threads and they all address removing duplicates, but I need to keep duplicate records.</p>
<pre><code>import csv
from collections import OrderedDict
import os
cd = os.path.dirname(os.path.abspath(__file__))
fafile = os.path.join(cd, 'MJB_FAScores.csv')
testscores = os.path.join(cd, 'MJB_TestScores.csv')
filenames = fafile, testscores
data = OrderedDict()
fieldnames = []
for filename in filenames:
    with open(filename, 'r') as fp:
        reader = csv.DictReader(fp)
        fieldnames.extend(reader.fieldnames)
        for row in reader:
            data.setdefault(row['Student_Number'], {}).update(row)
fieldnames = list(OrderedDict.fromkeys(fieldnames))
with open('merged.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(fieldnames)
    for row in data.values():
        writer.writerow([row.get(fields, '') for fields in fieldnames])
</code></pre>
<hr>
<p>fafile:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45
981235 Ball, David 5 Longo, Noel 56 89 23
91851 Ball, Jeff 0 Delaney, Mary 83 45 42
543 MAX 2 Phil 77 77 77
543 MAX 2 Annie 88 888 88
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>testscores:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 175 221 A 54 23 78 99
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
85250 Ball, Jonathan 3 Clarke, Mary 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 198 175 C
65731 Ball, Peggy 4 Bauman, Edyn 200 76 Y 54 23 78 99
543 MAX 2 Phil 111 111 Z 33 44 55 66
543 MAX 2 Annie 222 222 A 44 55 66 77
</code></pre>
<p>Current output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
<p>Desired output:</p>
<pre class="lang-none prettyprint-override"><code>Student_Number Name Grade Teacher FA1 FA2 FA3 MAP Reading MAP Math FP Level DSA LN DSA WW DSA SJ DSA DC
65731 Ball, Peggy 4 Bauman, Edyn 32 323 232 200 76 Y 54 23 78 99
65731 Ball, Peggy 4 Bauman, Edyn 56 45 98 175 221 A 54 23 78 99
85250 Ball, Jonathan 3 Clarke, Mary 65 77 45 189 201 L 34 54
981235 Ball, David 5 Longo, Noel 56 89 23 225 231 D 23 55
91851 Ball, Jeff 0 Delaney, Mary 83 45 42 198 175 C
543 MAX 2 Annie 88 888 88 222 222 A 44 55 66 77
543 MAX 2 Phil 77 77 77 111 111 Z 33 44 55 66
72941 Ball, Amanda 4 Bauman, Edyn 201 235 J 65 34 65
9844 Lisa 1 Smith, Jennifer 43 44 55
</code></pre>
| -2 | 2016-09-28T16:00:03Z | 39,753,180 | <p>I would personally use <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a> for something like this, specifically <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-using-append" rel="nofollow">DataFrame.append</a></p>
<p>First you would create a DataFrame for both CSVs using <code>pandas.DataFrame.from_csv('filename')</code>. </p>
<p>Then append the DataFrames together: <code>myDataFrame.append(otherDataframe)</code>. </p>
<p>And finally output the result with <code>to_csv()</code>.</p>
<p>Putting it all together:
<a href="http://i.stack.imgur.com/Epn4O.png" rel="nofollow"><img src="http://i.stack.imgur.com/Epn4O.png" alt="Code snippet"></a></p>
| 0 | 2016-09-28T16:39:40Z | [
"python",
"python-3.x",
"csv",
"merge"
]
|
matplotlib - multiple markers for a plot based on additional data | 39,752,488 | <p>Let's say I have the following data</p>
<pre><code>x=[0,1,2,3,4,5,6,7,8,9]
y=[1.0,1.5,2.3,2.2,1.1,1.4,2.0,2.8,1.9,2.0]
z=['A','A','A','B','B','A','B','B','B','B']
plt.plot(x,y, marker='s')
</code></pre>
<p>will give me an x-y plot with square markers. Is there a way to change the marker type based on the data z, so that all 'A' type points have a square marker and 'B' type points have a triangle marker?</p>
<p>but I want to add 'z' data on the curve only when it changes from one type to another (in this case from 'A' to 'B' or vice versa)</p>
| 1 | 2016-09-28T16:03:17Z | 39,753,209 | <pre><code>import matplotlib.pyplot as plt
fig = plt.figure(0)
x=[0,1,2,3,4,5,6,7,8,9]
y=[1.0,1.5,2.3,2.2,1.1,1.4,2.0,2.8,1.9,2.0]
z=['A','A','A','B','B','A','B','B','B','B']
plt.plot(x,y)
if 'A' in z and 'B' in z:
    xs = [a for a,b in zip(x, z) if b == 'A']
    ys = [a for a,b in zip(y, z) if b == 'A']
    plt.scatter(xs, ys, marker='s')
    xt = [a for a,b in zip(x, z) if b == 'B']
    yt = [a for a,b in zip(y, z) if b == 'B']
    plt.scatter(xt, yt, marker='^')
else:
    plt.scatter(x, y, marker='.', s=0)
plt.show()
</code></pre>
<p>Or</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(0)
x=[0,1,2,3,4,5,6,7,8,9]
y=[1.0,1.5,2.3,2.2,1.1,1.4,2.0,2.8,1.9,2.0]
z=['A','A','A','B','B','A','B','B','B','B']
plt.plot(x,y)
if 'A' in z and 'B' in z:
    plt.scatter(x, y, marker='s', s=list(map(lambda a: 20 if a == 'A' else 0, z)))
    plt.scatter(x, y, marker='^', s=list(map(lambda a: 20 if a == 'B' else 0, z)))
else:
    plt.scatter(x, y, marker='.', s=0)
plt.show()
</code></pre>
| 1 | 2016-09-28T16:41:15Z | [
"python",
"matplotlib"
]
|
error to execute djangocms -f -p . mysite | 39,752,641 | <p>I have a problem when I execute <code>djangocms -f -p . mysite</code>
the process starts but fails when it has to create the admin user.</p>
<pre><code>Creating admin user
/Users/ccsguest/Site/djangoProject/django1101/bin/python2.7: can't open file 'create_user.py': [Errno 2] No such file or directory
The installation has failed.
Traceback (most recent call last):
File "/Users/ccsguest/Site/djangoProject/django1101/bin/djangocms", line 11, in <module>
sys.exit(execute())
File "/Users/ccsguest/Site/djangoProject/django1101/lib/python2.7/site-packages/djangocms_installer/main.py", line 41, in execute
django.setup_database(config_data)
File "/Users/ccsguest/Site/djangoProject/django1101/lib/python2.7/site-packages/djangocms_installer/django/__init__.py", line 394, in setup_database
create_user(config_data)
File "/Users/ccsguest/Site/djangoProject/django1101/lib/python2.7/site-packages/djangocms_installer/django/__init__.py", line 411, in create_user
subprocess.check_call([sys.executable, 'create_user.py'], env=env)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/ccsguest/Site/djangoProject/django1101/bin/python2.7', u'create_user.py']' returned non-zero exit status 2
</code></pre>
<p>The process created the folder with the project, and inside I have all the normal files and folders, but there is a file called create_user.py. When I run the server I see the login page, but I never defined a user and password.</p>
<p>My environment has this configuration</p>
<pre><code>cmsplugin-filer==1.1.3
dj-database-url==0.4.1
Django==1.8.15
django-appconf==1.0.2
django-classy-tags==0.8.0
django-cms==3.4.0
django-filer==1.2.5
django-formtools==1.0
django-mptt==0.8.6
django-polymorphic==0.8.1
django-sekizai==0.10.0
Django-Select2==4.3.2
django-treebeard==4.0.1
djangocms-admin-style==1.2.4
djangocms-attributes-field==0.1.1
djangocms-column==1.7.0
djangocms-googlemap==0.5.2
djangocms-installer==0.9
djangocms-link==2.0.1
djangocms-snippet==1.9.1
djangocms-style==1.7.0
djangocms-text-ckeditor==3.2.1
djangocms-video==2.0.2
easy-thumbnails==2.3
html5lib==0.9999999
Pillow==3.3.1
pytz==2016.6.1
six==1.10.0
South==1.0.2
tzlocal==1.2.2
Unidecode==0.4.19
</code></pre>
<p>Any idea how to fix the problem?</p>
<p>Thanks in advance.</p>
| 0 | 2016-09-28T16:10:18Z | 39,766,057 | <p>I had the same issue.<br>
Adding '-w' did the trick for me:</p>
<p><code>source venv/bin/activate</code><br>
<code>mkdir project_name</code><br>
<code>cd project_name</code><br>
<code>djangocms -w -f -p . project_name</code> </p>
| 1 | 2016-09-29T09:09:03Z | [
"python",
"django",
"django-cms"
]
|
Graphene/Django (GraphQL): How to use a query argument in order to exclude nodes matching a specific filter? | 39,752,667 | <p>I have some video items in a Django/Graphene backend setup. Each video item is linked to one owner.
In a React app, I would like to query via GraphQL all the videos owned by the current user on the one hand and all the videos NOT owned by the current user on the other hand.</p>
<p>I could run the following GraphQl query and filter on the client side:</p>
<pre><code>query AllScenes {
  allScenes {
    edges {
      node {
        id,
        name,
        owner {
          name
        }
      }
    }
  }
}
</code></pre>
<p>I would rather have two queries with filter parameters, directly asking my backend for the relevant data. Something like: </p>
<pre><code>query AllScenes($ownerName : String!, $exclude: Boolean!) {
  allScenes(owner__name: $ownerName, exclude: $exclude) {
    edges {
      node {
        id,
        name,
        owner {
          name
        }
      }
    }
  }
}
</code></pre>
<p>I would query with <code>ownerName = currentUserName</code> and <code>exclude = True/False</code> yet I just cannot retrieve my <code>exclude</code> argument on my backend side. Here is the code I have tried in my schema.py file:</p>
<pre><code>from project.scene_manager.models import Scene
from graphene import ObjectType, relay, Int, String, Field, Boolean, Float
from graphene.contrib.django.filter import DjangoFilterConnectionField
from graphene.contrib.django.types import DjangoNode
from django_filters import FilterSet, CharFilter

class SceneNode(DjangoNode):
    class Meta:
        model = Scene

class SceneFilter(FilterSet):
    owner__name = CharFilter(lookup_type='exact', exclude=exclude)

    class Meta:
        model = Scene
        fields = ['owner__name']

class Query(ObjectType):
    scene = relay.NodeField(SceneNode)
    all_scenes = DjangoFilterConnectionField(SceneNode, filterset_class=SceneFilter, exclude=Boolean())

    def resolve_exclude(self, args, info):
        exclude = args.get('exclude')
        return exclude

    class Meta:
        abstract = True
</code></pre>
<p>My custom <code>SceneFilter</code> is used but I do not know how to pass the <code>exclude</code> arg to it. (I do not think that I am making a proper use of the <a href="http://graphene-python.readthedocs.io/en/latest/types/objecttypes/" rel="nofollow">resolver</a>). Any help on that matter would be much appreciated!</p>
| 0 | 2016-09-28T16:11:37Z | 39,774,434 | <p>Switching to graphene-django 1.0, I have been able to do what I wanted with the following query definition:</p>
<pre><code>class Query(AbstractType):
    selected_scenes = DjangoFilterConnectionField(SceneNode, exclude=Boolean())

    def resolve_selected_scenes(self, args, context, info):
        owner__name = args.get('owner__name')
        exclude = args.get('exclude')
        if exclude:
            selected_scenes = Scene.objects.exclude(owner__name=owner__name)
        else:
            selected_scenes = Scene.objects.filter(owner__name=owner__name)
        return selected_scenes
</code></pre>
<p>BossGrand proposed <a href="https://github.com/graphql-python/graphene/issues/305#issuecomment-250371352" rel="nofollow">another solution</a> on GitHub</p>
| 0 | 2016-09-29T15:30:37Z | [
"python",
"django",
"graphql",
"graphene-python"
]
|
imaplib: what factors decide maintype is "text" or "multipart" | 39,752,682 | <p>I have been writing Python code for a small tool, wherein I am trying to fetch mails using the Python libraries imaplib and email.
The code is something like below.</p>
<pre><code>import imaplib
import email
mail = imaplib.IMAP4_SSL('imap.server')
mail.login('userid@mail.com', 'password')
result, data = mail.uid('search', None, "ALL")
latest_email_uid = data[0].split()[-1]
result, data = mail.uid('fetch', latest_email_uid, '(RFC822)')
raw_email = data[0][1]
email_message = email.message_from_string(raw_email)
maintype = email_message.get_content_maintype()
</code></pre>
<p>I am executing the script from different host machines simultaneously.
The problem I am facing is that, while fetching the mail body for the <strong>same incoming email</strong>, on the first host machine maintype is evaluated as "text", whereas on the other host machine it is evaluated as "multipart" during script execution.</p>
<p>I would like to know how these values are determined at runtime, and if I always want maintype to be "multipart", what standard layout should I follow while writing the email body.</p>
| 1 | 2016-09-28T16:12:15Z | 39,784,652 | <p>From comments:</p>
<blockquote>
<p>raw_email for both cases has raw html code with multiple values. all most all html code is same except few differences. For maintype=multipart, Content-Type="multipart/alternative", boundary tag is present. For maintype=text, Content-Type="text/html", boundary field is not present </p>
</blockquote>
<p>Well, that answers the question. <code>get_content_maintype</code> returns the first part of the <code>Content-Type</code>, which is <code>multipart</code> for <strong>multipart</strong>/alternative and <code>text</code> for <strong>text</strong>/html.</p>
<p><code>multipart/alternative</code> means there are multiple alternative versions of the email. Usually, that is html + text. Emails are often sent that way, because then they can be read by any client (the text part), but will still contain HTML formatting which will be used in clients which support it.</p>
<p>Apparently one of the emails was sent with both html and text, whereas the other contains only html.</p>
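<p>If you want the rest of the script to behave the same either way, one option is to walk the message and pull out the part you care about. A minimal sketch (the choice of <code>text/html</code> is just an example):</p>
<pre><code>import email

def get_html_payload(raw_email):
    msg = email.message_from_string(raw_email)
    if msg.get_content_maintype() == 'multipart':
        # multipart/alternative (or mixed): inspect the individual parts
        for part in msg.walk():
            if part.get_content_type() == 'text/html':
                return part.get_payload(decode=True)
        return None
    # single-part message: the payload is the body itself
    return msg.get_payload(decode=True)
</code></pre>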
| 0 | 2016-09-30T05:57:41Z | [
"python",
"email",
"imaplib"
]
|
How to get spcific child page in django cms with {% show_menu_below_id %}? | 39,752,716 | <p>My DjangoCMS has the structure:</p>
<p>--Page a<br>
----Page a.1<br>
----Page a.2<br>
----Page a.3<br>
----Page a.4<br></p>
<p>Using {% show_menu_below_id "Page a"%} shows this:</p>
<pre><code><ul>
<li> Page a.1</li>
<li> Page a.2</li>
<li> Page a.3</li>
<li> Page a.4</li>
</ul>
</code></pre>
<p>I need to show them in the navigation menu in two separated columns, like:</p>
<pre><code><ul>
<li> Page a.1</li>
<li> Page a.2</li>
</ul>
<ul>
<li> Page a.3</li>
<li> Page a.4</li>
</ul>
</code></pre>
<p>How do I get just some of the child pages?</p>
| 0 | 2016-09-28T16:14:36Z | 39,753,134 | <p>You can do a lot when you specify a custom HTML for the menu. The template tag would be following:</p>
<pre><code>{% show_menu 0 100 100 100 "menu_top.html" %}
</code></pre>
<p>And in the menu_top.html, you have a variable called <code>children</code> available for your convenience. You can use the for loop counter to determine when to insert additional HTML and split the lists or you can use other parameters of the menu items (i.e. do they have children etc.). You can even create a menu modifier and enhance the navigation tree when custom parameters are needed.</p>
<p>Here is an example from the Django CMS <a href="https://github.com/divio/django-cms/blob/develop/menus/templates/menu/menu.html" rel="nofollow">repo</a>:</p>
<pre><code>{% load menu_tags %}
{% for child in children %}
<li class="child{% if child.selected %} selected{% endif %}{% if child.ancestor %} ancestor{% endif %}{% if child.sibling %} sibling{% endif %}{% if child.descendant %} descendant{% endif %}">
<a href="{{ child.attr.redirect_url|default:child.get_absolute_url }}">{{ child.get_menu_title }}</a>
{% if child.children %}
<ul>
{% show_menu from_level to_level extra_inactive extra_active template "" "" child %}
</ul>
{% endif %}
</li>
{% endfor %}
</code></pre>
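<p>Purely as an illustration of the menu-modifier route mentioned above (the class name and the <code>column</code> attribute are invented; check the django CMS documentation for the exact <code>Modifier</code> API of your version), a modifier could tag each node with the column it belongs to, so the template knows where to split the list:</p>
<pre><code>from menus.base import Modifier
from menus.menu_pool import menu_pool

class ColumnSplitter(Modifier):
    """Hypothetical modifier: mark each node with a column number."""
    def modify(self, request, nodes, namespace, root_id, post_cut, breadcrumb):
        for i, node in enumerate(nodes):
            node.attr['column'] = 1 if i < len(nodes) // 2 else 2
        return nodes

menu_pool.register_modifier(ColumnSplitter)
</code></pre>
<p>The template can then start a new list whenever <code>child.attr.column</code> changes.</p>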
| 1 | 2016-09-28T16:37:20Z | [
"python",
"django",
"django-cms"
]
|
convenient way to reindex one level of a multiindex | 39,752,729 | <p>consider the <code>pd.Series</code> <code>s</code> and <code>pd.MultiIndex</code> <code>idx</code></p>
<pre><code>idx = pd.MultiIndex.from_product([list('AB'), [1, 3], list('XY')],
names=['one', 'two', 'three'])
s = pd.Series(np.arange(8), idx)
s
one two three
A 1 X 0
Y 1
3 X 2
Y 3
B 1 X 4
Y 5
3 X 6
Y 7
dtype: int32
</code></pre>
<p>I want to <code>reindex</code> on <code>level='two'</code> with <code>np.arange(4)</code><br>
I can do it with:</p>
<pre><code>s.unstack([0, 2]).reindex(np.arange(4), fill_value=0).stack().unstack([0, 1])
one two three
A 0 X 0
Y 0
1 X 0
Y 1
2 X 0
Y 0
3 X 2
Y 3
B 0 X 0
Y 0
1 X 4
Y 5
2 X 0
Y 0
3 X 6
Y 7
dtype: int32
</code></pre>
<p>But I'm looking for something more direct if it exists. Any ideas?</p>
| 3 | 2016-09-28T16:15:36Z | 39,752,802 | <p>Unfortunately if need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> with <code>MultiIndex</code>, need all levels:</p>
<pre><code>mux = pd.MultiIndex.from_product([list('AB'), np.arange(4), list('XY')],
names=['one', 'two', 'three'])
print (s.reindex(mux, fill_value=0))
one two three
A 0 X 0
Y 0
1 X 0
Y 1
2 X 0
Y 0
3 X 2
Y 3
B 0 X 0
Y 0
1 X 4
Y 5
2 X 0
Y 0
3 X 6
Y 7
dtype: int32
</code></pre>
<p>EDIT by comment:</p>
<pre><code>idx = pd.MultiIndex.from_tuples([('A', 1, 'X'), ('B', 3, 'Y')],
names=['one', 'two', 'three'])
s = pd.Series([5,6], idx)
print (s)
one two three
A 1 X 5
B 3 Y 6
dtype: int64
mux = pd.MultiIndex.from_tuples([('A', 0, 'X'), ('A', 1, 'X'),
('A', 2, 'X'), ('A', 3, 'X'),
('B', 0, 'Y'), ('B', 1, 'Y'),
('B', 2, 'Y'), ('B', 3, 'Y')],
names=['one', 'two', 'three'])
print (s.reindex(mux, fill_value=0))
one two three
A 0 X 0
1 X 5
2 X 0
3 X 0
B 0 Y 0
1 Y 0
2 Y 0
3 Y 6
dtype: int64
</code></pre>
<hr>
<p><strong><em>Direct Solution</em></strong> </p>
<pre><code>new_lvl = np.arange(4)
mux = [(a, b, c) for b in new_lvl for a, c in s.reset_index('two').index.unique()]
s.reindex(mux, fill_value=0).sort_index()
one two three
A 0 X 0
Y 0
1 X 0
Y 1
2 X 0
Y 0
3 X 2
Y 3
B 0 X 0
Y 0
1 X 4
Y 5
2 X 0
Y 0
3 X 6
Y 7
dtype: int64
</code></pre>
| 2 | 2016-09-28T16:19:54Z | [
"python",
"pandas"
]
|
Insert my chatbot in a website | 39,752,749 | <p>I have a question for you,</p>
<p>I have built a bot in Python and I need to know if there is a way (or a service with a REST API) to embed it in a website. I don't want to use Slack/Telegram/Facebook Messenger and others as communication channels. I want to use a chat environment in a website, but I don't really want to write it myself. I've tried to ask for services like LiveChat, Chatlio and Tawk, but the support says that there is no API for that. I was thinking of building my own chat, but this would take tons of time. It would be very nice to add the functionality that if the bot can't answer the question, a human agent can answer for it. </p>
<p>Where can I start? Any suggestions? Thank you very much.</p>
| 1 | 2016-09-28T16:16:43Z | 39,763,581 | <h3>Use a chatbot generator</h3>
<p><strong><a href="https://codename.co/botgen/designer/" rel="nofollow">bot<em>gen</em></a></strong> is a service for designing and publishing a chatbot in minutes. I'm pretty sure it can suit your basic needs.</p>
<p>It can help you:</p>
<ul>
<li>define your bot behaviors (its logic) using the <a href="https://codename.co/botml/" rel="nofollow">BotML</a> syntax ;</li>
<li>generate a ready-to-use bot that relies on said behaviors ;</li>
<li>embed it on a website with a one-liner JS code.</li>
</ul>
<p><em>Note: author here</em></p>
| 0 | 2016-09-29T07:04:20Z | [
"python",
"chatbot",
"web-chat"
]
|
Insert my chatbot in a website | 39,752,749 | <p>I have a question for you,</p>
<p>I have built a bot in Python and I need to know if there is a way (or a service with a REST API) to embed it in a website. I don't want to use Slack/Telegram/Facebook Messenger and others as communication channels. I want to use a chat environment in a website, but I don't really want to write it myself. I've tried to ask for services like LiveChat, Chatlio and Tawk, but the support says that there is no API for that. I was thinking of building my own chat, but this would take tons of time. It would be very nice to add the functionality that if the bot can't answer the question, a human agent can answer for it. </p>
<p>Where can I start? Any suggestions? Thank you very much.</p>
| 1 | 2016-09-28T16:16:43Z | 39,826,616 | <p>You can use <a href="http://talkus.io" rel="nofollow" title="Talkus">Talkus</a> and write a slack bot.
Talkus provides a hook that will be called when a visitor starts a conversation.
So when the hook is called, make your bot join the channel and it will be able to chat with your website visitors.</p>
<p>For the bot, you can use <a href="https://github.com/howdyai/botkit" rel="nofollow">botkit</a>; it will be easier. Otherwise you can use the Slack client <a href="https://github.com/slackhq/node-slack-sdk" rel="nofollow">https://github.com/slackhq/node-slack-sdk</a> </p>
<p>I hope it will help you.</p>
| 0 | 2016-10-03T07:40:23Z | [
"python",
"chatbot",
"web-chat"
]
|
Interpretation of "too many resources for launch" | 39,752,819 | <p>Consider the following Python code:</p>
<pre><code>from numpy import float64
from pycuda import compiler, gpuarray
import pycuda.autoinit
# N > 960 is crucial!
N = 961
code = """
__global__ void kern(double *v)
{
double a = v[0]*v[2];
double lmax = fmax(0.0, a), lmin = fmax(0.0, -a);
double smax = sqrt(lmax), smin = sqrt(lmin);
if(smax > 0.2) {
smax = fmin(smax, 0.2)/smax ;
smin = (smin > 0.0) ? fmin(smin, 0.2)/smin : 0.0;
smin = lmin + smin*a;
v[0] = v[0]*smin + smax*lmax;
v[2] = v[2]*smin + smax*lmax;
}
}
"""
kernel_func = compiler.SourceModule(code).get_function("kern")
kernel_func(gpuarray.zeros(3, float64), block=(N,1,1))
</code></pre>
<p>Executing this gives:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 25, in <module>
kernel_func(gpuarray.zeros(3, float64), block=(N,1,1))
File "/usr/lib/python3.5/site-packages/pycuda/driver.py", line 402, in function_call
func._launch_kernel(grid, block, arg_buf, shared, None)
pycuda._driver.LaunchError: cuLaunchKernel failed: too many resources requested for launch
</code></pre>
<p>My setup: Python v3.5.2 with pycuda==2016.1.2 and numpy==1.11.1 on Ubuntu 16.04.1 (64-bit), kernel 4.4.0, nvcc V7.5.17. The graphics card is an Nvidia GeForce GTX 480.</p>
<p>Can you reproduce this on your machine? Do you have any idea, what causes this error message?</p>
<p><strong>Remark:</strong> I know that, in principle, there is a race condition because all kernels try to change v[0] and v[2]. But the kernels shouldn't reach the inside of the if-block anyway! Moreover, I'm able to reproduce the error without the race condition, but it's much more complicated.</p>
| 1 | 2016-09-28T16:20:59Z | 39,753,783 | <p>It is almost certain that you are hitting a registers-per-block limit. </p>
<p>Reading the <a href="http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications" rel="nofollow">relevant documentation</a>, your device has a limit of 32k 32 bit registers per block. When the block size is larger than 960 threads (30 warps), your kernel launch requires too many registers and the launch fails. NVIDIA supply an excel spreadsheet and <a href="http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#calculating-occupancy" rel="nofollow">advice</a> on how to determine the per thread the register requirement of your kernel and the limiting block sizes you can use for your kernel to launch on your device.</p>
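<p>To make the arithmetic concrete (the per-thread register count below is an assumption; the real value comes from the compiler): registers are allocated warp-by-warp on this architecture, and 961 threads round up to 31 warps. If the compiled kernel happened to need about 34 registers per thread, 31 warps * 32 threads * 34 registers = 33,728 > 32,768, while 30 warps (960 threads) would still fit at 32,640. You can check the actual figures from PyCUDA:</p>
<pre><code>import pycuda.autoinit
import pycuda.driver as drv

# Per-thread register use of the compiled kernel (the same number that
# nvcc --ptxas-options=-v reports).
print(kernel_func.num_regs)

# Hard per-block register limit of the device.
print(pycuda.autoinit.device.get_attribute(
    drv.device_attribute.MAX_REGISTERS_PER_BLOCK))
</code></pre>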
| 2 | 2016-09-28T17:12:45Z | [
"python",
"numpy",
"cuda",
"pycuda"
]
|
Change axis in mapltolib figure | 39,752,943 | <p>Usually matplotlib uses this axis:</p>
<pre><code>Y
|
|
|_______X
</code></pre>
<p>But I need to plot my data using:</p>
<pre><code> _________Y
|
|
|
X
</code></pre>
<p>How can I do it? I will prefer not modify my data (i.e. transposing). I need to be able of use the coordinates always and matplotlib changes the axis.</p>
| 1 | 2016-09-28T16:27:08Z | 39,757,481 | <p>One of the variations:</p>
<pre><code>import matplotlib.pyplot as plt
def Scatter(x, y):
ax.scatter(y, x)
#Data preparation part:
x=[1, 2, 3, 4, 5]
y=[2, 3, 4, 5, 6]
#Plotting and axis inverting part
fig, ax = plt.subplots(figsize=(10,8))
plt.ylabel('X', fontsize=15)
plt.xlabel('Y', fontsize=15)
ax.xaxis.set_label_position('top') #This send label to top
plt.gca().invert_yaxis() #This inverts y axis
ax.xaxis.tick_top() #This send xticks to top
#User defined function Scatter
Scatter(x,y)
ax.grid()
ax.axis('scaled')
plt.show()
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="http://i.stack.imgur.com/xnPty.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/xnPty.jpg" alt="enter image description here"></a></p>
| 2 | 2016-09-28T20:55:02Z | [
"python",
"numpy",
"matplotlib",
"axis"
]
|
unable to runserver in django 1.10.1 due to database connection | 39,753,066 | <p>I am using PostGRE sql in my localhost in Django 1.10.1 and have installed all dependencies, psycopg2, postgresql, pgadmin3 and have set the password of postgresql via command line. I've used exactly the same password in settings.py of the django project. Still I am being shown the authentication error. I am using pyvenv virtual environment with python 3.5.2</p>
<p>Please check the error code below:</p>
<pre><code>Traceback (most recent call last):
  File "/home/jayesh/django/myproject/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
    self.connect()
  File "/home/jayesh/django/myproject/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/home/jayesh/django/myproject/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/home/jayesh/django/myproject/lib/python3.5/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: FATAL:  password authentication failed for user "postgresql"
FATAL:  password authentication failed for user "postgresql"
</code></pre>
<p>The settings.py contains the following reg DB:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'myproject',
'USER': 'postgresql',
'PASSWORD': 'mypassword',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
</code></pre>
<p>Please guide regarding the above issue.</p>
<p>I've already followed the following links:
<a href="http://stackoverflow.com/questions/15805561/django-setting-psycopg2-operationalerror-fatal-peer-authentication-failed-fo">Django setting : psycopg2.OperationalError: FATAL: Peer authentication failed for user "indivo"</a></p>
<p><a href="http://stackoverflow.com/questions/12906351/importerror-no-module-named-psycopg2">ImportError: No module named psycopg2</a></p>
<p><a href="http://stackoverflow.com/questions/24917832/how-connect-postgres-to-localhost-server-using-pgadmin-on-ubuntu">How connect Postgres to localhost server using pgAdmin on Ubuntu?</a></p>
| 1 | 2016-09-28T16:33:44Z | 39,755,812 | <p>Looks like psycopg2 also had its own dependencies that were not properly installed for me.</p>
<p>Also, the default username is not "postgresql" but just "postgres".</p>
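<p>For reference, here is the settings block from the question adjusted to the default role name (assuming the password was actually set for the <code>postgres</code> role):</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject',
        'USER': 'postgres',       # default PostgreSQL role, not "postgresql"
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
</code></pre>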
<p>Both of the above checks gave me the solution required, and the Django 1.10 server is up and running now.</p>
| 0 | 2016-09-28T19:10:46Z | [
"python",
"django",
"postgresql",
"psycopg2"
]
|
Is there a more pythonic way to write this? | 39,753,082 | <p>I'm looking to make this a little more pythonic.</p>
<pre><code>user_df[-1:]['status_id'].values[0] in {3,5}
</code></pre>
<p>I initially tried <code>user_id.ix[-1:, 'status_id'].isin([3,5])</code>, but it didn't work.</p>
<p>Any suggestions? The top version works, but looks a little weird.</p>
| 1 | 2016-09-28T16:34:29Z | 39,753,131 | <p>You can try:</p>
<pre><code>user_id['status_id'].iloc[-1:].isin([3,5])
</code></pre>
<p>Sample:</p>
<pre><code>user_id = pd.DataFrame({'status_id':[1,2,3]})
print (user_id)
status_id
0 1
1 2
2 3
#iloc without : [-1] return scalar
print (user_id['status_id'].iloc[-1] in set({3,5}))
True
#iloc with : [-1:] return vector - Series
print (user_id['status_id'].iloc[-1:].isin([3,5]))
2 True
Name: status_id, dtype: bool
</code></pre>
| 4 | 2016-09-28T16:37:08Z | [
"python",
"pandas"
]
|
Is there a more pythonic way to write this? | 39,753,082 | <p>I'm looking to make this a little more pythonic.</p>
<pre><code>user_df[-1:]['status_id'].values[0] in {3,5}
</code></pre>
<p>I initially tried <code>user_id.ix[-1:, 'status_id'].isin([3,5])</code>, but it didn't work.</p>
<p>Any suggestions? The top version works, but looks a little weird.</p>
| 1 | 2016-09-28T16:34:29Z | 39,753,152 | <p><code>isin</code> might be marginally faster (the more values that you have to check the more speed up you would notice ... but even for large sets its not going to be a huge difference ...(I doubt its any faster in this example... its probably a little slower) ... but <code>val in set()</code> <strong>is pretty dang pythonic</strong> (in fact much more so than <code>pd.isin</code>)</p>
<p>you are testing a <strong>single value</strong> against a <code>set</code> ... by using <code>pandas.isin</code> or <code>numpy.in1d</code> you will incur significant time overhead to move into C and back to Python vs just using <code>in</code> and a set, which is an <code>O(1)</code> lookup ... (<em>in either case the time slice is non-existent on a human time scale</em>)</p>
| 2 | 2016-09-28T16:38:11Z | [
"python",
"pandas"
]
|
Python: Replace df1.col values with df2.col2 values | 39,753,085 | <p>I have two data frames df1 and df2.
In df1 I have 50 columns and in df2 I have 50+ columns. What I want to achieve is that
In df1 I have 13000 rows and a column name subject where names of all subjects are given.
In df2 I have 250 rows and along 50+ I have two columns named subject code and subject_name. </p>
<pre><code> Here is an example of my datasets:
df1 =
index subjects
0 Biology
1 Physicss
2 Chemistry
3 Biology
4 Physics
5 Physics
6 Biolgy
df2 =
index subject_name subject_code
0 Biology BIO
1 Physics PHY
2 Chemistry CHE
3 Medical MED
4 Programming PRO
5 Maths MAT
6 Literature LIT
My desired output in df1 (after replacing subject_name and fixing the spelling errors) is:
index subjects subject_code
0 Biology BIO
1 Physics PHY
2 Chemistry CHE
3 Biology BIO
4 Physics PHY
5 Physics PHY
6 Biology BIO
</code></pre>
<p>What happens at my end is that I want to merge all subject values in df1 with values in df2 subject name values. In df1 there are around 500 rows where I get NAN after I merge both the columns in one as in these 500 rows there is some difference in the spellings of the subject.
I have tried the solutions given at the following links, but they didn't work for me:
<a href="http://stackoverflow.com/questions/34946913/replace-df-index-values-with-values-from-a-list-but-ignore-empty-strings">replace df index values with values from a list but ignore empty strings</a></p>
<p><a href="http://stackoverflow.com/questions/31751230/python-pandas-replace-values-multiple-columns-matching-multiple-columns-from-an">Python pandas: replace values multiple columns matching multiple columns from another dataframe</a></p>
<pre><code> Here is my code:
df_merged = pd.merge(df1_subject,df2_subjectname, left_on='subjects', right_on='subject_name')
df_merged.head()
</code></pre>
<p>Can anyone tell me how I can fix this issue, as I have already spent 8 hours on it but am unable to resolve it.</p>
<p>Cheers</p>
| 0 | 2016-09-28T16:34:40Z | 39,754,407 | <p>One of the issues you have is the incorrect spellings. You can try to harmonise the spelling of the subject across both <code>dataframes</code> using the <code>difflib</code> module and its <code>get_close_matches</code> method. </p>
<p>Using this code will return the closest matching subject for each entry in <code>df2</code>, and <code>df2</code>'s <code>subject_name</code> column will be updated to reflect this. Therefore, even if the subject name is spelled incorrectly, it will now have the same spelling in both <code>dataframes</code>.</p>
<pre><code>import pandas as pd
import difflib
df2['subject_name'] = df2.subject_name.map(lambda x: difflib.get_close_matches(x, df1.subjects)[0])
</code></pre>
<p>After this you can try to merge. It may resolve your issue, but it would be easier to fix if you provide a reproducible example. </p>
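<p>For completeness, the follow-up merge could then look roughly like this (column names taken from the question):</p>
<pre><code>df_merged = pd.merge(df1, df2, left_on='subjects', right_on='subject_name', how='left')
</code></pre>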
| 0 | 2016-09-28T17:48:03Z | [
"python",
"pandas"
]
|
Python: Replace df1.col values with df2.col2 values | 39,753,085 | <p>I have two data frames df1 and df2.
In df1 I have 50 columns and in df2 I have 50+ columns. What I want to achieve is that
In df1 I have 13000 rows and a column name subject where names of all subjects are given.
In df2 I have 250 rows and along 50+ I have two columns named subject code and subject_name. </p>
<pre><code> Here is an example of my datasets:
df1 =
index subjects
0 Biology
1 Physicss
2 Chemistry
3 Biology
4 Physics
5 Physics
6 Biolgy
df2 =
index subject_name subject_code
0 Biology BIO
1 Physics PHY
2 Chemistry CHE
3 Medical MED
4 Programming PRO
5 Maths MAT
6 Literature LIT
My desired output in df1 (after replacing subject_name and fixing the spelling errors) is:
index subjects subject_code
0 Biology BIO
1 Physics PHY
2 Chemistry CHE
3 Biology BIO
4 Physics PHY
5 Physics PHY
6 Biology BIO
</code></pre>
<p>What happens at my end is that I want to merge all subject values in df1 with values in df2 subject name values. In df1 there are around 500 rows where I get NAN after I merge both the columns in one as in these 500 rows there is some difference in the spellings of the subject.
I have tried the solutions given at the following links, but they didn't work for me:
<a href="http://stackoverflow.com/questions/34946913/replace-df-index-values-with-values-from-a-list-but-ignore-empty-strings">replace df index values with values from a list but ignore empty strings</a></p>
<p><a href="http://stackoverflow.com/questions/31751230/python-pandas-replace-values-multiple-columns-matching-multiple-columns-from-an">Python pandas: replace values multiple columns matching multiple columns from another dataframe</a></p>
<pre><code> Here is my code:
df_merged = pd.merge(df1_subject,df2_subjectname, left_on='subjects', right_on='subject_name')
df_merged.head()
</code></pre>
<p>Can anyone tell me how I can fix this issue, as I have already spent 8 hours on it but am unable to resolve it.</p>
<p>Cheers</p>
| 0 | 2016-09-28T16:34:40Z | 39,929,085 | <p>Correct the spelling then merge...</p>
<pre><code>import pandas as pd
import operator, collections
df1 = pd.DataFrame.from_items([("subjects",
["Biology","Physicss","Phsicss","Chemistry",
"Biology","Physics","Physics","Biolgy","navelgazing"])])
df2 = pd.DataFrame.from_items([("subject_name",
["Biology","Physics","Chemistry","Medical",
"Programming","Maths","Literature"]),
("subject_code",
["BIO","PHY","CHE","MED","PRO","MAT","LIT"])])
</code></pre>
<p>Find the misspellings:</p>
<pre><code>misspelled = set(df1.subjects) - set(df2.subject_name)
</code></pre>
<p>Find the subject that matches the misspelling best and create a dictionary -> {mis_sp : subject_name}</p>
<pre><code>difference = operator.itemgetter(1)
subject = operator.itemgetter(0)
def foo1(word, candidates):
'''Returns the most likely match for a misspelled word
'''
temp = []
for candidate in candidates:
count1 = collections.Counter(word)
count2 = collections.Counter(candidate)
diff1 = count1 - count2
diff2 = count2 - count1
diff = sum(diff1.values())
diff += sum(diff2.values())
temp.append((candidate, diff))
return subject(min(temp, key = difference))
def foo2(words):
'''Yields (misspelled-word, corrected-word) tuples from misspelled words'''
for word in words:
name = foo1(word, df2.subject_name)
if name:
yield (word, name)
d = dict(foo2(misspelled))
</code></pre>
<p>Correct all the misspellings in df1</p>
<pre><code>def foo3(thing):
return d.get(thing, thing)
df3 = df1.applymap(foo3)
</code></pre>
<p>Merge</p>
<pre><code>df2 = df2.set_index("subject_name")
df3 = df3.merge(df2, left_on = "subjects", right_index = True, how = 'left')
</code></pre>
<hr>
<p><code>foo1</code> might possibly be sufficient for this purpose, but there are better, more sophisticated algorithms to correct spelling, for example <a href="http://norvig.com/spell-correct.html" rel="nofollow">http://norvig.com/spell-correct.html</a></p>
<p>Just read @conner's solution. I didn't know difflib was there so a better <code>foo1</code> would be,</p>
<pre><code>def foo1(word, candidates):
try:
return difflib.get_close_matches(word, candidates, 1)[0]
except IndexError as e:
# there isn't a close match
return None
</code></pre>
| 0 | 2016-10-08T05:38:53Z | [
"python",
"pandas"
]
|
VLOOKUP/ETL with Python | 39,753,103 | <p>I have a data that comes from MS SQL Server. The data from the query returns a list of names straight from a public database. For instance, If i wanted records with the name of "Microwave" something like this would happen:</p>
<pre><code>Microwave
Microwvae
Mycrowwave
Microwavee
</code></pre>
<p>Microwave would be spelt in hundreds of ways. I solve this currently with a VLOOKUP in excel. It looks for the value on the left cell and returns value on the right. for example:</p>
<pre><code>VLOOKUP(A1,$A$1,$B$4,2,False)
Table:
A B
1 Microwave Microwave
2 Microwvae Microwave
3 Mycrowwave Microwave
4 Microwavee Microwave
</code></pre>
<p>I would just copy the VLOOKUP formula down the CSV or Excel file and then use that information for my analysis.</p>
<p>Is there a way in Python to solve this issue in another way?</p>
<p>I <em>could</em> make a long if/elif list or even a replace list and apply it to each line of the csv, but that would save no more time than just using the VLOOKUP. There are thousands of company names spelt wrong and I do not have the clearance to change the database.</p>
<p>So Stack, any ideas on how to leverage Python in this scenario?</p>
| 0 | 2016-09-28T16:35:37Z | 39,770,931 | <p>If you had have data like this:</p>
<pre><code>+-------------+-----------+
| typo | word |
+-------------+-----------+
| microweeve | microwave |
| microweevil | microwave |
| macroworv | microwave |
| murkeywater | microwave |
+-------------+-----------+
</code></pre>
<p>Save it as typo_map.csv</p>
<p>Then run (in the same directory):</p>
<pre><code>import csv
def OpenToDict(path, index):
with open(path, 'rb') as f:
reader=csv.reader(f)
headings = reader.next()
heading_nums={}
for i, v in enumerate(headings):
heading_nums[v]=i
fields = [heading for heading in headings if heading <> index]
file_dictionary = {}
for row in reader:
file_dictionary[row[heading_nums[index]]]={}
for field in fields:
file_dictionary[row[heading_nums[index]]][field]=row[heading_nums[field]]
return file_dictionary
map = OpenToDict('typo_map.csv', 'typo')
print map['microweevil']['word']
</code></pre>
<p>The structure is slightly more complex than it needs to be for your situation but that's because this function was originally written to lookup more than one column. However, it will work for you, and you can simplify it yourself if you want.</p>
| 1 | 2016-09-29T12:54:13Z | [
"python",
"sql-server",
"excel",
"python-3.x",
"vlookup"
]
|
Write string to file with different byte size? | 39,753,119 | <p>I am doing some Python scripting.</p>
<p>I have a string, the <code>len()</code> of the string is <code>1048576</code> and the <code>sys.getsizeof()</code> of the string is <code>1048597</code>.</p>
<p>However, when I write this string to a file, the byte size of the file is <code>1051027</code>. My code is below; can anyone tell me why the byte size of the file is different from that of the string?</p>
<pre><code>print type(allInOne) # allInOne is my string
print len(allInOne)
print sys.getsizeof(allInOne)
newFile = open("./all_in_one7.raw", "w")
newFile.write(allInOne.encode('ascii'))
newFile.close()
</code></pre>
<hr>
<p>My string is <code>allInOne</code>, it is generated with many processes before, it was generated like this <code>allInOne = numpy.uint8(dataset.pixel_array).tostring()</code> , above this, <code>dataset.pixel_array</code> is of type <code>numpy.ndarray</code>. I don't know whether this info would be of any help.</p>
| 0 | 2016-09-28T16:36:33Z | 39,753,212 | <p>Your <code>allInOne = numpy.uint8(dataset.pixel_array).tostring()</code> doesn't look like text. When writing anything but text to a file in Python, you need to <a href="https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files">open the file in binary mode</a> (<code>"wb"</code> instead of <code>"w"</code>) so that Python doesn't assume the <code>0x0A</code> bytes are <code>'\n'</code> line endings and attempt to convert them to the <code>'\r\n'</code> line endings that are more common on Microsoft Windows.</p>
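<p>A minimal sketch of the binary write, keeping the variable names from the question (the <code>encode('ascii')</code> call is unnecessary if <code>allInOne</code> is already a byte string):</p>
<pre><code>with open("./all_in_one7.raw", "wb") as new_file:
    new_file.write(allInOne)
</code></pre>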
<p>To see if this is your problem, <a href="https://stackoverflow.com/q/1155617/2738262">count that particular character</a>:</p>
<pre><code>print len(allInOne), "bytes"
print len(allInOne) + allInOne.count('\n'), "bytes with 0A counted twice"
</code></pre>
| 5 | 2016-09-28T16:41:39Z | [
"python",
"file"
]
|
A generator for refactoring a combination of for loops and nested if statements | 39,753,217 | <p>The following code scans a log file and selects the lines that contain particular substrings.</p>
<pre><code>with open(file) as input:
for line in input:
if 'REW' in line or 'LOSE' in line:
<some optional code>
if 'REW VAL' in line or 'LOSE INV' in line:
<some code>
</code></pre>
<p>I wrote several functions based on this, but every function contains this repeated code, so I think refactoring is needed, and I guess I need to make a generator. How do I write a generator that will allow me to change the code in the brackets?</p>
| 1 | 2016-09-28T16:42:03Z | 39,753,622 | <p>For arbitrary code, the best you can do is write a function like</p>
<pre><code>def foo(file, f1, f2):
with open(file) as input:
for line in input:
if 'REW' in line or 'LOSE' in line:
f1(line)
if 'REW VAL' in line or 'LOSE INV' in line:
f2(line)
</code></pre>
<p>where you need to impose some condition on how <code>f1</code> and <code>f2</code> are called (here, I define them as functions that take a line as input).</p>
<p>This isn't fully general, though. For instance, the function <code>f1</code> can't decide to skip the next <code>if</code> statement and continue to the next line from the file.</p>
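<p>For illustration, a hypothetical way to call it (the handler names and the print statements are made up):</p>
<pre><code>def on_rew(line):
    print("REW/LOSE line: " + line.rstrip())

def on_rew_val(line):
    print("REW VAL/LOSE INV line: " + line.rstrip())

foo("some.log", on_rew, on_rew_val)
</code></pre>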
| 1 | 2016-09-28T17:03:18Z | [
"python",
"refactoring",
"generator"
]
|
A generator for refactoring a combination of for loops and nested if statements | 39,753,217 | <p>The following code scans a log file and selects the lines that contain particular substrings.</p>
<pre><code>with open(file) as input:
for line in input:
if 'REW' in line or 'LOSE' in line:
<some optional code>
if 'REW VAL' in line or 'LOSE INV' in line:
<some code>
</code></pre>
<p>I wrote several functions based on this, but every function contains this repeated code, so I think refactoring is needed, and I guess I need to make a generator. How do I write a generator that will allow me to change the code in the brackets?</p>
| 1 | 2016-09-28T16:42:03Z | 39,753,677 | <p>Sometimes repetition is not that bad!</p>
<p>But anyway you can try something like this:</p>
<pre><code>#!/usr/bin/env python
# script.py
def process_file(filename, func1, func2):
with open(filename) as f:
for line in f:
if '1' in line:
func1(line)
if '2' in line:
func2(line)
def main():
counters = {1: 0, 2: 0}
def func1(line):
# TODO Add some logic based on line value here
counters[1] += 1
def func2(line):
counters[2] += 1
process_file('table.csv', func1, func2)
return counters
if __name__ == '__main__':
print(main())
</code></pre>
<p>And if you have a file:</p>
<pre><code>$ cat table.csv
1 just one
1 2 one and two
1
1
0
0
2
2 1
1 0 2
0
</code></pre>
<p>And run the script:</p>
<pre><code>python script.py
</code></pre>
<p>You will get the next output:</p>
<pre><code>{1: 6, 2: 3}
</code></pre>
<p>Also you can factor out predicates of your <code>if</code> statements:</p>
<pre><code>def process_file(filename, func1, func2, predicate1, predicate2):
with open(filename) as f:
for line in f:
if predicate1(line):
func1(line)
if predicate2(line):
func2(line)
def predicate1(line):
return 'REW' in line or 'LOSE' in line
</code></pre>
<p>Don't forget to choose nice function names!</p>
| 1 | 2016-09-28T17:06:29Z | [
"python",
"refactoring",
"generator"
]
|
scatter plot with single pixel marker in matplotlib | 39,753,282 | <p>I am trying to plot a large dataset with a scatter plot.
I want to use matplotlib to plot it with single pixel marker.
It seems to have been solved.</p>
<p><a href="https://github.com/matplotlib/matplotlib/pull/695" rel="nofollow">https://github.com/matplotlib/matplotlib/pull/695</a></p>
<p>But I cannot find a mention of how to get a single pixel marker. </p>
<p>My simplified dataset (data.csv)</p>
<pre><code>Length,Time
78154393,139.324091
84016477,229.159305
84626159,219.727537
102021548,225.222662
106399706,221.022827
107945741,206.760239
109741689,200.153263
126270147,220.102802
207813132,181.67058
610704756,50.59529
623110004,50.533158
653383018,52.993885
659376270,53.536834
680682368,55.97628
717978082,59.043843
</code></pre>
<p>My code is below.</p>
<pre><code>import pandas as pd
import os
import numpy
import matplotlib.pyplot as plt
inputfile='data.csv'
iplevel = pd.read_csv(inputfile)
base = os.path.splitext(inputfile)[0]
fig = plt.figure()
plt.yscale('log')
#plt.xscale('log')
plt.title(' My plot: '+base)
plt.xlabel('x')
plt.ylabel('y')
plt.scatter(iplevel['Time'], iplevel['Length'],color='black',marker=',',lw=0,s=1)
fig.tight_layout()
fig.savefig(base+'_plot.png', dpi=fig.dpi)
</code></pre>
<p>You can see below that the points are not single pixel. </p>
<p><a href="http://i.stack.imgur.com/Z6sB6.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z6sB6.png" alt="data_plot.png"></a></p>
<p>Any help is appreciated</p>
| 0 | 2016-09-28T16:45:15Z | 39,778,483 | <p>I fear that the bugfix discussed at matplotlib git repository that you're citing is only valid for <code>plt.plot()</code> and not for <code>plt.scatter()</code></p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(4,2))
ax = fig.add_subplot(121)
ax2 = fig.add_subplot(122, sharex=ax, sharey=ax)
ax.plot([1, 2],[0.4,0.4],color='black',marker=',',lw=0, linestyle="")
ax.set_title("ax.plot")
ax2.scatter([1,2],[0.4,0.4],color='black',marker=',',lw=0, s=1)
ax2.set_title("ax.scatter")
ax.set_xlim(0,8)
ax.set_ylim(0,1)
fig.tight_layout()
print fig.dpi #prints 80 in my case
fig.savefig('plot.png', dpi=fig.dpi)
</code></pre>
<p><a href="http://i.stack.imgur.com/98OO7.png" rel="nofollow"><img src="http://i.stack.imgur.com/98OO7.png" alt="enter image description here"></a></p>
| 0 | 2016-09-29T19:27:19Z | [
"python",
"matplotlib",
"scatter-plot"
]
|
Automatically running app .py in Heroku | 39,753,285 | <p>I have created a bot for my website and I currently host it on heroku.com.
I run it by executing the command </p>
<p><code>heroku run --app cghelper python bot.py</code></p>
<p>This executes the command perfectly through CMD and runs that specific .py file in my github repo. </p>
<p>The issue is that when I close the cmd window, this stops bot.py. How can I get it to run automatically? </p>
<p>Thanks </p>
| 0 | 2016-09-28T16:45:22Z | 39,754,555 | <p>Not sure but try: </p>
<p><code>heroku run --app cghelper python bot.py &</code></p>
| 0 | 2016-09-28T17:58:30Z | [
"python",
"django",
"git",
"heroku"
]
|
How can an average equation be added to this? | 39,753,350 | <p>I am taking GCSE programming and have been set a task to create a program which takes "n" numbers and works out the average.</p>
<pre><code>#presets for variables
nCount = 0
total = 0
average = 0.0
Numbers = []
ValidInt = False
#start of string
nCount = (input("How many numbers? "))
print(nCount)
while not ValidInt:
try:
int(nCount)
ValidInt = True
except:
nCount = input("Please Enter An Interger Number")
#validation loops until an integer is entered
for x in range (int(nCount)):
Numbers.append(input("Please Enter The Next Number"))
</code></pre>
<p>This is what I have so far, but I cannot think how I can code it to work out an average from this information. Any help is much appreciated. Thank you (I am not looking for answers, just help with which function I should use).</p>
| -1 | 2016-09-28T16:48:01Z | 39,753,742 | <p>You're <em>really</em> close to the answer. Looks like you've got everything setup and ready to calculate the average, so very good job.<br>
Python's got two built-in functions <code>sum</code> and <code>len</code> which can be used to calculate the sum of all the numbers, then divide that sum by how many numbers have been collected. Add this as the last line in your program and check the output. </p>
<p>Note: Since inputs were taken as integers (whole numbers) and the average will usually be a non-whole number, we make one of the numbers a float before calculating the average:</p>
<p><code>print(sum(Numbers)/float(len(Numbers)))</code></p>
<p>Edit: <em>Or</em>, since you've got a variable that already holds how many numbers the user has input, <code>nCount</code>, we can use this calculation, which will give the same answer:</p>
<p><code>print(sum(Numbers)/float(nCount))</code>.</p>
<p>Try both and choose one or make your own.</p>
| 0 | 2016-09-28T17:10:08Z | [
"python",
"average"
]
|
'int' object is not callable when using groupby | 39,753,383 | <p>I am trying to find the longest sequence of numbers in a list using the following code</p>
<pre><code>from itertools import groupby
ggg = groupby([1,2,3,3,3,5,88,9,9,9,9,9,9,9,1,1,1,2,2,3,3,3,3,3])
max(ggg, key=lambda k: len(list(k[1])))
</code></pre>
<p>However, I get the error 'int' object is not callable. Additionally, I am using python 3. </p>
| 1 | 2016-09-28T16:49:58Z | 39,753,584 | <p>The code works fine for me:</p>
<pre><code>>>> from itertools import groupby
>>> ggg = groupby([1,2,3,3,3,5,88,9,9,9,9,9,9,9,1,1,1,2,2,3,3,3,3,3])
>>> max(ggg, key=lambda k: len(list(k[1])))
(9, <itertools._grouper object at 0x019973F0>)
</code></pre>
<p>As @FamousJameous mentioned, it appears that one of the functions you are using has been assigned to an integer. This is a great illustration of why you should be <strong>careful with what variable names you choose</strong>, since they can wipe away existing functions. Consider:</p>
<pre><code>>>> x = [1,2,3]
>>> len = len(x)
>>> len(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
</code></pre>
| 0 | 2016-09-28T17:01:08Z | [
"python"
]
|
'int' object is not callable when using groupby | 39,753,383 | <p>I am trying to find the longest sequence of numbers in a list using the following code</p>
<pre><code>from itertools import groupby
ggg = groupby([1,2,3,3,3,5,88,9,9,9,9,9,9,9,1,1,1,2,2,3,3,3,3,3])
max(ggg, key=lambda k: len(list(k[1])))
</code></pre>
<p>However, I get the error 'int' object is not callable. Additionally, I am using python 3. </p>
| 1 | 2016-09-28T16:49:58Z | 39,753,626 | <p>Your code works for me in Python 3. My best guess is that your problem is coming from assigning <code>len</code>, <code>list</code>, or <code>max</code> to be an <code>int</code>. For example:</p>
<pre><code>>>> list_a = [1, 2]
>>> len(list_a)
2
>>> len = 4
>>> len(list_a)
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
len(list_a)
TypeError: 'int' object is not callable
</code></pre>
| 0 | 2016-09-28T17:03:22Z | [
"python"
]
|
Using replace instead of find | 39,753,605 | <p>I have written the following piece but not able to figure the next part:</p>
<pre><code>if request.guess in game.target:
position = game.target.find(request.guess)
game.correct[position] = request.guess.upper()
</code></pre>
<p>The point here is <code>request.guess</code> is a character input like <code>'a'</code> and <code>game.target</code> is <code>'austin'</code>. So the above code will find the character in the target word and generate an output like <code>a*****</code>. But my code will only work for words that have letters not repeated in them eg: Australia. I know there is a method 'replace' in python but I am not able to figure out how should I incorporate here.</p>
<p>Kindly help!!!!!</p>
| 2 | 2016-09-28T17:02:28Z | 39,753,758 | <p>just use a set of guessed letters and do something like the below.</p>
<pre><code>In [164]: word = 'Australia'
In [165]: guessed = set()
In [166]: guessed.add('a')
In [167]: print(''.join(c if c.lower() in guessed else '*' for c in word))
A****a**a
</code></pre>
| 2 | 2016-09-28T17:11:24Z | [
"python",
"python-3.x"
]
|
Using replace instead of find | 39,753,605 | <p>I have written the following piece but not able to figure the next part:</p>
<pre><code>if request.guess in game.target:
position = game.target.find(request.guess)
game.correct[position] = request.guess.upper()
</code></pre>
<p>The point here is <code>request.guess</code> is a character input like <code>'a'</code> and <code>game.target</code> is <code>'austin'</code>. So the above code will find the character in the target word and generate an output like <code>a*****</code>. But my code will only work for words that have letters not repeated in them eg: Australia. I know there is a method 'replace' in python but I am not able to figure out how should I incorporate here.</p>
<p>Kindly help!!!!!</p>
| 2 | 2016-09-28T17:02:28Z | 39,753,779 | <p>With regular expressions:</p>
<pre><code>import re
correct = re.sub(r'[^a]',"*","australia") # returns "a****a**a"
</code></pre>
<p>Here the regular expression <code>[^a]</code> matches all characters except "a". The <code>re.sub</code> function replaces each "not a" with a "*".</p>
<p><em>edit</em></p>
<p>Your comments make me think that you are trying to change the <code>game.correct</code> string by replacing characters in that string. Remember that strings in Python are immutable. You can't change individual letters in them. So rather than try to change the stars to "a", you have to make a new string. Regular expressions are one way to do this.</p>
| 1 | 2016-09-28T17:12:33Z | [
"python",
"python-3.x"
]
|
Using replace instead of find | 39,753,605 | <p>I have written the following piece but not able to figure the next part:</p>
<pre><code>if request.guess in game.target:
position = game.target.find(request.guess)
game.correct[position] = request.guess.upper()
</code></pre>
<p>The point here is <code>request.guess</code> is a character input like <code>'a'</code> and <code>game.target</code> is <code>'austin'</code>. So the above code will find the character in the target word and generate an output like <code>a*****</code>. But my code will only work for words that have letters not repeated in them eg: Australia. I know there is a method 'replace' in python but I am not able to figure out how should I incorporate here.</p>
<p>Kindly help!!!!!</p>
| 2 | 2016-09-28T17:02:28Z | 39,753,802 | <p><code>replace</code> will not help here, since you are not changing the string you are searching.</p>
<p>There are two options here:</p>
<ol>
<li><p>Use <code>str.find</code> repeatedly:</p>
<pre><code>pos = 0
while True:
    pos = game.target.find(request.guess, pos)
    if pos == -1: break
    game.correct[pos] = request.guess.upper()
    pos += 1
</code></pre></li>
<li><p>Build the string from scratch each time:</p>
<pre><code>game.correct = ''.join(
letter if letter in request.guesses else '*'
for letter in game.target
)
</code></pre></li>
</ol>
| 0 | 2016-09-28T17:13:29Z | [
"python",
"python-3.x"
]
|
Using replace instead of find | 39,753,605 | <p>I have written the following piece but not able to figure the next part:</p>
<pre><code>if request.guess in game.target:
position = game.target.find(request.guess)
game.correct[position] = request.guess.upper()
</code></pre>
<p>The point here is <code>request.guess</code> is a character input like <code>'a'</code> and <code>game.target</code> is <code>'austin'</code>. So the above code will find the character in the target word and generate an output like <code>a*****</code>. But my code will only work for words that have letters not repeated in them eg: Australia. I know there is a method 'replace' in python but I am not able to figure out how should I incorporate here.</p>
<p>Kindly help!!!!!</p>
| 2 | 2016-09-28T17:02:28Z | 39,753,850 | <p>I'd suggest you invest some time in learning <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expressions</a>, they come in handy in many similar situations and are often more expressive (and efficient) than ad-hoc implementations. Plus you can reuse (with some quirks) your understanding of them in other languages and OS shells, which makes them very powerful.</p>
<p>You can build a regular expression from your pattern that will match anything except the characters you are looking for: <code>r'[^{}]'.format(pattern)</code>; then use it to do the substitution:</p>
<pre><code>import re
pattern = request.guess
result = re.sub(r'[^{}]'.format(pattern),"*","adam")
</code></pre>
<p><a href="https://ideone.com/7eDYFz" rel="nofollow">Example on IdeOne</a>:</p>
<pre><code>import re
print re.sub(r'[^{}]'.format('ad'),"*","adam")
print re.sub(r'[^{}]'.format('a'),"*","adam")
print re.sub(r'[^{}]'.format('m'),"*","adam")
</code></pre>
<p>Prints:</p>
<pre><code>ada*
a*a*
***m
</code></pre>
| 0 | 2016-09-28T17:16:33Z | [
"python",
"python-3.x"
]
|
Django Rest Framework: How to include linked resources in serializer response? | 39,753,706 | <p>If write my own implementation of a <code>ViewSet</code>, and I want to return some objects, it's pretty simple:</p>
<pre><code>class MyViewSet(ViewSet):
def my_method(self, request):
objects = MyModel.objects.all()
return Response( MyModelSerializer(objects, many=True).data )
</code></pre>
<p>But let's say that I want to include the actual instance of another linked resource by a <code>ForeignKey</code>, <strong>instead of</strong> just an ID. For example:</p>
<pre><code>class MyModel(Model):
author = ForeignKey(MyOtherModel)
...
</code></pre>
<p>Is there a way to do something like this?</p>
<pre><code>...
return Response( MyModelSerializer(objects, many=True, include='author').data )
</code></pre>
| 0 | 2016-09-28T17:08:09Z | 39,753,761 | <p>You can use the <a href="http://www.django-rest-framework.org/api-guide/serializers/#specifying-nested-serialization" rel="nofollow">depth</a> meta attribute on your serializer.</p>
<pre><code>class AccountSerializer(serializers.ModelSerializer):
class Meta:
model = Account
fields = ('id', 'account_name', 'users', 'created')
depth = 1
</code></pre>
<p>Specifying fields explicitly to control how they are serialized. </p>
<pre><code>class AccountSerializer(serializers.ModelSerializer):
url = serializers.CharField(source='get_absolute_url', read_only=True)
groups = serializers.PrimaryKeyRelatedField(many=True)
class Meta:
model = Account
</code></pre>
| 1 | 2016-09-28T17:11:36Z | [
"python",
"django",
"django-rest-framework"
]
|
Django Rest Framework: How to include linked resources in serializer response? | 39,753,706 | <p>If write my own implementation of a <code>ViewSet</code>, and I want to return some objects, it's pretty simple:</p>
<pre><code>class MyViewSet(ViewSet):
def my_method(self, request):
objects = MyModel.objects.all()
return Response( MyModelSerializer(objects, many=True).data )
</code></pre>
<p>But let's say that I want to include the actual instance of another linked resource by a <code>ForeignKey</code>, <strong>instead of</strong> just an ID. For example:</p>
<pre><code>class MyModel(Model):
author = ForeignKey(MyOtherModel)
...
</code></pre>
<p>Is there a way to do something like this?</p>
<pre><code>...
return Response( MyModelSerializer(objects, many=True, include='author').data )
</code></pre>
| 0 | 2016-09-28T17:08:09Z | 39,753,765 | <p>What you're looking for is <a href="http://www.django-rest-framework.org/api-guide/relations/" rel="nofollow">nested relations</a>. This is built into Django REST Framework. By explicitly defining your relation fields in your serializer you can specify <code>many=True</code> which will expand the related object. </p>
<p>From this example <a href="http://www.django-rest-framework.org/api-guide/relations/#example" rel="nofollow">http://www.django-rest-framework.org/api-guide/relations/#example</a></p>
<pre><code>class TrackSerializer(serializers.ModelSerializer):
class Meta:
model = Track
fields = ('order', 'title', 'duration')
class AlbumSerializer(serializers.ModelSerializer):
tracks = TrackSerializer(many=True, read_only=True)
class Meta:
model = Album
fields = ('album_name', 'artist', 'tracks')
</code></pre>
<p>This can return:</p>
<pre><code>{
'album_name': 'The Grey Album',
'artist': 'Danger Mouse',
'tracks': [
{'order': 1, 'title': 'Public Service Announcement', 'duration': 245},
{'order': 2, 'title': 'What More Can I Say', 'duration': 264},
{'order': 3, 'title': 'Encore', 'duration': 159},
...
],
}
</code></pre>
| 1 | 2016-09-28T17:11:47Z | [
"python",
"django",
"django-rest-framework"
]
|
How do I apply QScintilla syntax highlighting to a QTextEdit in PyQt4? | 39,753,717 | <p>I have a simple PyQt text editor, and would like to apply QScintilla formatting to it. I need to use a QTextEdit for the text, as it provides other functionality that I am using (cursor position, raw text output, etc), and would like to apply QScintilla formatting. </p>
<p>Just for reference, the initialisation of the QTextEdit:</p>
<pre><code>self.text = QtGui.QTextEdit(self)
</code></pre>
| 2 | 2016-09-28T17:08:47Z | 39,772,188 | <p>I believe you cannot use <code>QScintilla</code> directly with <code>QTextEdit</code>. </p>
<p>But have a look at this question: stackoverflow.com/questions/20951660/… and if you want to see the usage of <code>QTextEdit</code> (or <code>QPlainTextEdit</code>) with <code>QSyntaxHighlighter</code>, see for example this: <a href="http://wiki.python.org/moin/PyQt/Python%20syntax%20highlighting" rel="nofollow">http://wiki.python.org/moin/PyQt/Python%20syntax%20highlighting</a> or this <a href="http://carsonfarmer.com/2009/07/syntax-highlighting-with-pyqt/" rel="nofollow">http://carsonfarmer.com/2009/07/syntax-highlighting-with-pyqt/</a>, which uses a very basic syntax highlighter for Python code.</p>
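<p>If you go the <code>QSyntaxHighlighter</code> route, a minimal, hypothetical sketch for the question's QTextEdit could look like this (the keyword list and colours are made up):</p>
<pre><code>import re
from PyQt4 import QtGui

class KeywordHighlighter(QtGui.QSyntaxHighlighter):
    """Rough sketch: bold a few Python keywords in a QTextEdit."""
    def __init__(self, document):
        QtGui.QSyntaxHighlighter.__init__(self, document)
        self.fmt = QtGui.QTextCharFormat()
        self.fmt.setForeground(QtGui.QColor('darkBlue'))
        self.fmt.setFontWeight(QtGui.QFont.Bold)
        self.rule = re.compile(r'\b(def|class|import|return)\b')

    def highlightBlock(self, text):
        # text may be a QString under PyQt4/Python 2, so coerce it first
        for match in self.rule.finditer(str(text)):
            self.setFormat(match.start(), match.end() - match.start(), self.fmt)

# attach it to the QTextEdit from the question:
# highlighter = KeywordHighlighter(self.text.document())
</code></pre>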
| 1 | 2016-09-29T13:49:58Z | [
"python",
"qt",
"pyqt",
"scintilla",
"qscintilla"
]
|
When using ElasticSearch scroll api, can I skip to page n? | 39,753,731 | <p>I'm using the elasticsearch scroll api. In some cases I'd like to return the hits on page n without returning the previous pages' hits. I believe this should be like an iterator. So I'd like to just pass the iterator through the first few pages, but then actually return the hits of the n-th page. </p>
<p>My current code is </p>
<pre><code>initial_request = client.search(index = index, doc_type = doc_type, body = q, scroll = str(wait_time) + 'm', search_type = 'scan', size = size)
sid = initial_request['_scroll_id'] ## scroll id
total_hits = initial_request['hits']['total'] ## how many results there are.
scroll_size = total_hits ## set this to a positive value initially
while scroll_size > 0:
p += 1
print "\t\t Scrolling to page %s ..." %p
page = client.scroll(scroll_id = sid, scroll = str(wait_time) + 'm')
sid = page['_scroll_id'] # Update the scroll ID
scroll_size = len(page["hits"]["hits"]) ## no. of hits returned on this page
## then code to do stuff w/ that page's hits.
</code></pre>
<p>but the <code>page = client.scroll(...)</code> actually sends the hits for that page back to my local machine. I'd like to just <code>pass</code> on the first n pages, then start sending the pages' hits after that. </p>
<p>Any ideas? </p>
 | 1 | 2016-09-28T17:09:30Z | 39,755,123 | <p>The Elasticsearch Python client's <code>search</code> method allows you to specify the <code>size</code> and <code>from_</code> arguments to start returning results for a query after skipping a number of records.</p>
<pre><code>client.search(index=ES_INDEX, body=QUERY, doc_type=ES_TYPE, size=SIZE,from_=FROM)
</code></pre>
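<p>For example, to jump straight to a given page without scrolling at all, the offset is just the page number times the page size (the values below are made up):</p>
<pre><code>page_n = 5    # 0-based page you want
size = 100    # hits per page

res = client.search(index=index, doc_type=doc_type, body=q,
                    size=size, from_=page_n * size)
hits = res['hits']['hits']
</code></pre>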
| -1 | 2016-09-28T18:31:06Z | [
"python",
"elasticsearch",
"pagination",
"iterator"
]
|
Managing django urls | 39,753,808 | <p>I am making a personal site. I have a blog page(site.com/blog), where I have a list of my blog posts. If I want to check a blog post simple enough I can click it and the code:</p>
<pre><code> <a href="/blog/{{ obj.id }}"><h1>{{ obj.topic_title }}</h1></a>
</code></pre>
<p>will get me there. Also, if I want to go to my contacts page (site.com/contacts), easy enough: I click the nav contacts button and I go there</p>
<pre><code> <a href="/contacts">Contacts</a>
</code></pre>
<p>but if I enter a blog post (<em>site.com/blog/1</em>), I am using the same template and if I want to go to my contacts page I have to yet again click the</p>
<pre><code> <a href="/contacts">Contacts</a>
</code></pre>
<p>link, but that will port me to a 404 page <em>site.com/blog/contacts</em>. How do I deal with this problem without hardcoding every single page?</p>
| 0 | 2016-09-28T17:13:55Z | 39,753,932 | <p>Use the built-in Django <a href="https://docs.djangoproject.com/en/1.10/ref/templates/builtins/#url" rel="nofollow"><code>url</code></a> template tag, which takes the view name and returns an absolute path to it.</p>
<blockquote>
<p>Returns an absolute path reference (a URL without the domain name) matching a given view and optional parameters.</p>
</blockquote>
<p>You can give it a view name and any of its view parameters as well. This is much better than how you link to the blog page:</p>
<pre><code>{% url 'blog-view-name' obj.id %}
</code></pre>
<p>This ensures that if you ever change the structure of your views, it will still not break your links.</p>
| 2 | 2016-09-28T17:20:44Z | [
"python",
"html",
"django",
"url"
]
|
How to use regular expression inside function? | 39,753,842 | <p>I am using the following Python script to copy files between Windows machines.</p>
<pre><code>from subprocess import call
def copy_logs():
file= '.\pscp.exe -pw test123 C:\Users\Administrator\Desktop\interact.* Administrator@1.1.1.1:/'
call(file)
copy_logs()
</code></pre>
<p>If I run the above script, I'm getting following error:</p>
<pre><code>PS C:\Users\Administrator\Desktop> python .\execute_pscp.py
C:\Users\Administrator\Desktop\interact.*: No such file or directory
PS C:\Users\Administrator\Desktop>
</code></pre>
<p>But if i specify filename exactly as "file= '.\pscp.exe -pw test123 C:\Users\Administrator\Desktop\interact_python.py Administrator@1.1.1.1:/'", its working perfectly as shown below.</p>
<pre><code>PS C:\Users\Administrator\Desktop> python .\execute_pscp.py
interact_python.py | 0 kB | 0.2 kB/s | ETA: 00:00:00 | 100%
PS C:\Users\Administrator\Desktop>
</code></pre>
<p>But I want to use some regular expression inside the command ("interact.*") so that I can copy particular files.</p>
<p>I also want to execute this script every three hours; is there any way in Python to achieve this?</p>
| 0 | 2016-09-28T17:16:05Z | 39,754,390 | <p>I tried "import os" module as shown below. Its working perfectly.</p>
<pre><code>import os
def copy_logs():
os.system(".\pscp.exe -pw test123 C:\Users\Administrator\Desktop\interact_pyth* Administrator@1.1.1.1:/")
copy_logs()
</code></pre>
| 0 | 2016-09-28T17:47:18Z | [
"python",
"python-2.7"
]
|
Second level looping over dictionaries | 39,753,861 | <p>In building a class with an outline like the one below, I would like the behaviour of the for loops to be: if done once, just give the keys as normal and then move on to the next line of code. But if a second loop is set up inside the first loop, it would give the keys on the first loop and then each value in the sequences in the second loop. The problem I can't figure out is how to set this up under <strong>__iter__</strong>. </p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
def __iter__(self):
pass # ???????
</code></pre>
<p>Something like this:</p>
<pre><code>dct = dict(container1=[5,6,7,8], container2=('a','b','c'))
if one loop is used:
for ea in dct:
print(ea)
print("Howdy")
'containter1'
'containter2'
Howdy
</code></pre>
<p>If a nest loop is used:</p>
<pre><code>for ea in dct:
print(ea)
for i in dct.get(ea):
print(i)
'container1'
5
6
...
'container2'
a
b
c
</code></pre>
| 0 | 2016-09-28T17:17:40Z | 39,754,173 | <p>How would you feel about this:</p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
self.conts = {'container1':self.cont1, 'container2':self.cont2}
def __iter__(self):
return self.conts.iteritems()
dct = MyClass()
print('One loop')
for mi in dct:
print(mi)
print('='*40)
print('Nested loops')
for name, values in dct:
print(name)
for i in values:
print(i)
</code></pre>
<p>Which outputs:</p>
<pre><code>One loop
container1
container2
========================================
Nested loops
container1
1
2
3
4
container2
a
b
c
</code></pre>
<h2>Update</h2>
<hr>
<p>I don't know that I would really recommend this, but this seems to more closely fit what the OP wants:</p>
<pre><code>class MyIterator(object):
def __init__(self, name, values):
self.vals = iter(values)
self.name = name
def __iter__(self):
return self.vals
def __str__(self):
return self.name
class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
self.conts = [MyIterator('container1', self.cont1),
MyIterator('container2', self.cont2)]
def __iter__(self):
return iter(self.conts)
dct = MyClass()
for mi in dct:
print(mi)
for i in mi:
print(i)
</code></pre>
<p>This is the only way I can think of to be able to print the name and then iterate over it as the values list. This works by overriding the <code>__str__</code> method to change how the object gets "stringified". But as I said earlier, I think you would be better served with the first part of the answer.</p>
<p>Sorry, just realized <a href="http://stackoverflow.com/a/39754576/3901060">nauer's answer</a> already showed something like this.</p>
| 0 | 2016-09-28T17:34:52Z | [
"python",
"class",
"for-loop"
]
|
Second level looping over dictionaries | 39,753,861 | <p>In building a class with an outline like the one below, I would like the behaviour of the for loops to be: if done once, just give the keys as normal and then move on to the next line of code. But if a second loop is set up inside the first loop, it would give the keys on the first loop and then each value in the sequences in the second loop. The problem I can't figure out is how to set this up under <strong>__iter__</strong>. </p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
def __iter__(self):
pass # ???????
</code></pre>
<p>Something like this:</p>
<pre><code>dct = dict(container1=[5,6,7,8], container2=('a','b','c'))
# if one loop is used:
for ea in dct:
    print(ea)
print("Howdy")
'container1'
'container2'
Howdy
</code></pre>
<p>If a nested loop is used:</p>
<pre><code>for ea in dct:
print(ea)
for i in dct.get(ea):
print(i)
'container1'
5
6
...
'container2'
a
b
c
</code></pre>
| 0 | 2016-09-28T17:17:40Z | 39,754,234 | <p>To answer your immediate question, you could just copy how dictionaries implement <code>dict.get</code> and <code>dict.__iter__</code>:</p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
def __iter__(self):
for attr in dir(self):
if not attr.startswith('_') and attr != 'get':
yield attr
def get(self, key):
return getattr(self, key)
</code></pre>
<p>It's not a very good approach, however. Looking at the attributes of your object at runtime isn't a good idea, because it will break when you subclass and it will add needless complexity. Instead, just use a dictionary internally:</p>
<pre><code>class MyClass():
def __init__(self):
self.container = {
'cont1': [1, 2, 3, 4],
'cont2': ('a', 'b', 'c')
}
def __iter__(self):
return iter(self.container)
def get(self, key):
return self.container.get(key)
</code></pre>
| 1 | 2016-09-28T17:37:22Z | [
"python",
"class",
"for-loop"
]
|
Second level looping over dictionaries | 39,753,861 | <p>In building a class with an outline like the one below, I would like the behaviour of the for loops to be: if done once, just give the keys as normal and then move on to the next line of code. But if a second loop is set up inside the first loop, it would give the keys on the first loop and then each value in the sequences in the second loop. The problem I can't figure out is how to set this up under <strong>__iter__</strong>. </p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
def __iter__(self):
pass # ???????
</code></pre>
<p>Something like this:</p>
<pre><code>dct = dict(container1=[5,6,7,8], container2=('a','b','c'))
# if one loop is used:
for ea in dct:
    print(ea)
print("Howdy")
'container1'
'container2'
Howdy
</code></pre>
<p>If a nested loop is used:</p>
<pre><code>for ea in dct:
print(ea)
for i in dct.get(ea):
print(i)
'container1'
5
6
...
'container2'
a
b
c
</code></pre>
| 0 | 2016-09-28T17:17:40Z | 39,754,576 | <p>You can do this with a second class like this</p>
<pre><code>class MyClass():
def __init__(self):
self.data = [MyClass2({'cont1' : [1,2,3,4]}),MyClass2({'cont2' : ('a','b','c')})]
def __iter__(self):
for item in self.data:
yield item
class MyClass2():
def __init__(self, mydict):
self.d = mydict
def __iter__(self):
for item in self.d.values():
for value in item:
yield value
def __repr__(self):
return(list(self.d.keys())[0])
m = MyClass()
for k in m:
print(k)
for val in k:
print(val)
</code></pre>
| 0 | 2016-09-28T17:59:42Z | [
"python",
"class",
"for-loop"
]
|
Second level looping over dictionaries | 39,753,861 | <p>In building a class with an outline like the one below, I would like the behaviour of the for loops to be: if done once, just give the keys as normal and then move on to the next line of code. But if a second loop is set up inside the first loop, it would give the keys on the first loop and then each value in the sequences in the second loop. The problem I can't figure out is how to set this up under <strong>__iter__</strong>. </p>
<pre><code>class MyClass():
def __init__(self):
self.cont1 = [1,2,3,4]
self.cont2 = ('a','b','c')
def __iter__(self):
pass # ???????
</code></pre>
<p>Something like this:</p>
<pre><code>dct = dict(container1=[5,6,7,8], container2=('a','b','c'))
# if one loop is used:
for ea in dct:
    print(ea)
print("Howdy")
'container1'
'container2'
Howdy
</code></pre>
<p>If a nested loop is used:</p>
<pre><code>for ea in dct:
print(ea)
for i in dct.get(ea):
print(i)
'container1'
5
6
...
'container2'
a
b
c
</code></pre>
| 0 | 2016-09-28T17:17:40Z | 39,755,716 | <p>You cannot do that simply by implementing <code>__iter__</code>. <code>__iter__</code> should return an <em>iterator</em>, that is, an object that keeps the state of an iteration (the current position in a sequence of items) and has a method <code>next</code> that returns with each invocation the next item in the sequence.</p>
<p>If your object has nested sequences you can implement an iterator that will traverse only the external sequence, or one that will traverse both
the external and the internal sequences - in a depth-first or a breadth-first fashion - but it does not make sense to use nested loops on the same iterable:</p>
<pre><code># iterate over every item in myobj
for x in myobj:
...
# iterate over every item again? not likely what you want!
for y in myobj:
</code></pre>
<p>A more likely situation is:</p>
<pre><code>for x in myobj:
...
for y in x:
...
</code></pre>
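<p>As a minimal sketch of that idea (my own illustration, not the OP's exact class), a generator-based <code>__iter__</code> that walks the outer keys and then each inner sequence depth-first could look like this:</p>
<pre><code>class MyClass(object):
    def __init__(self):
        self.conts = {'cont1': [1, 2, 3, 4], 'cont2': ('a', 'b', 'c')}

    def __iter__(self):
        # depth-first: yield each key, then every value stored under it
        for key, values in self.conts.items():
            yield key
            for value in values:
                yield value

for item in MyClass():
    print(item)
</code></pre>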
| 0 | 2016-09-28T19:05:28Z | [
"python",
"class",
"for-loop"
]
|
How to get the width of a matplotlib text, including the padded bounding box? | 39,753,972 | <p>I know how to get the width of the text:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import BoxStyle
xpos, ypos = 0, 0
text = 'blah blah'
boxstyle = BoxStyle("Round", pad=1)
props = {'boxstyle': boxstyle,
'facecolor': 'white',
'linestyle': 'solid',
'linewidth': 1,
'edgecolor': 'black'}
textbox = plt.text(xpos, ypos, text, bbox=props)
plt.show()
textbox.get_bbox_patch().get_width() # 54.121092459652573
</code></pre>
<p>However, this does not take into account the padding. Indeed, if I set the padding to 0, I get the same width.</p>
<pre><code>boxstyle = BoxStyle("Round", pad=0)
props = {'boxstyle': boxstyle,
'facecolor': 'white',
'linestyle': 'solid',
'linewidth': 1,
'edgecolor': 'black'}
textbox = plt.text(xpos, ypos, text, bbox=props)
plt.show()
textbox.get_bbox_patch().get_width() # 54.121092459652573
</code></pre>
<p>My question is: how can I get the width of the surrounding box? or how can I get the size of the padding in the case of a FancyBoxPatch? </p>
| 2 | 2016-09-28T17:23:00Z | 39,754,917 | <blockquote>
<p><a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.FancyBboxPatch" rel="nofollow"><code>matplotlib.patches.FancyBboxPatch</code></a> class is similar to <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.Rectangle" rel="nofollow"><code>matplotlib.patches.Rectangle</code></a> class, but it draws
a fancy box around the rectangle.</p>
<p><a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.FancyBboxPatch.get_width" rel="nofollow"><code>FancyBboxPatch.get_width()</code></a> returns the width of the (inner) rectangle</p>
</blockquote>
<p>However, <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.FancyBboxPatch" rel="nofollow"><code>FancyBboxPatch</code></a> is a subclass of <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.Patch" rel="nofollow"><code>matplotlib.patches.Patch</code></a> which provides a <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.Patch.get_extents" rel="nofollow"><code>get_extents()</code></a> method:</p>
<blockquote>
<p><a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.Patch.get_extents" rel="nofollow"><code>matplotlib.patches.Patch.get_extents()</code></a></p>
<p>Return a Bbox object defining the axis-aligned extents of the Patch.</p>
</blockquote>
<p>Thus, what you need is <strong><code>textbox.get_bbox_patch().get_extents().width</code></strong>.</p>
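<p>For example, continuing the snippet from the question (a small illustration of my own, assuming <code>textbox</code> was created with the padded <code>BoxStyle</code> as above):</p>
<pre><code>bbox = textbox.get_bbox_patch().get_extents()
print(bbox.width)   # width of the fancy box, padding included
print(bbox.height)
</code></pre>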
| 2 | 2016-09-28T18:19:53Z | [
"python",
"matplotlib"
]
|
python 27 - Boolean check fails while multiprocessing | 39,753,995 | <p>I have a script that retrieves a list of active 'jobs' from a MySQL table and then instantiates my main script once per active job using the multiprocessing library. My multiprocessing script has a function that checks if a given job has been claimed by another thread. It does this by checking if a particular column in the DB table is/is not NULL. The DB query returns a single item tuple:</p>
<pre><code>def check_if_job_claimed():
#...
job_claimed = cursor.fetchone() #Returns (claim_id,) for claimed jobs, and (None,) for unclaimed jobs
if job_claimed:
print "This job has already been claimed by another thread."
return
else:
do_stuff_to_claim_the_job
</code></pre>
<p>When I run this function <strong>without</strong> the multiprocessing portion, the claim check works just fine. But when I try to run the jobs in parallel, the claim check reads all the (None,) tuples as having value and therefore truthiness and therefore the function assumes the job has already been claimed.</p>
<p>I have tried adjusting the number of concurrent processes the multiprocessor uses, but the claim check still doesn't work... even when I set the number of processes to 1. I have also tried playing around with the if statement to see if I could make it work that way:</p>
<pre><code>if job_claimed == True
if job_claimed == (None,)
# etc.
</code></pre>
<p>No luck though.</p>
<p>Is anybody aware of something in the multiprocessing library that would prevent my claim checking function from properly interpreting the job_claimed tuple? Maybe there's something wrong with my code?</p>
<p><strong>EDIT</strong></p>
<p>I had run some truthiness tests on the job_claimed variable in debug mode. Here are the results of those tests:</p>
<pre><code>(pdb) job_claimed
(None,)
(pdb) len(job_claimed)
1
(pdb) job_claimed == True
False
(pdb) job_claimed == False
False
(pdb) job_claimed[0]
None
(pdb) job_claimed[0] == True
False
(pdb) job_claimed[0] == False
False
(pdb) any(job_claimed)
False
(pdb) all(job_claimed)
False
(pdb) job_claimed is not True
True
(pdb) job_claimed is not False
True
</code></pre>
<p><strong>EDIT</strong></p>
<p>As requested:</p>
<pre><code>with open('Resource_File.txt', 'r') as f:
creds = eval(f.read())
connection = mysql.connector.connect(user=creds["mysql_user"],password=creds["mysql_pw"],host=creds["mysql_host"],database=creds["mysql_db"],use_pure=False,buffered=True)
def check_if_job_claimed(job_id):
cursor = connection.cursor()
thread_id_query = "SELECT Thread_Id FROM jobs WHERE Job_ID=\'{}\';".format(job_id)
cursor.execute(thread_id_query)
job_claimed = cursor.fetchone()
job_claimed = job_claimed[0]
if job_claimed:
print "This job has already been claimed by another thread. Moving on to next job..."
cursor.close()
return False
else:
thread_id = socket.gethostname()+':'+str(random.randint(0,1000))
claim_job = "UPDATE jobs SET Thread_Id = \'{}\' WHERE Job_ID = \'{}\';".format(job_id)
cursor.execute(claim_job)
connection.commit()
print "Job is now claimed"
cursor.close()
return True
def call_the_queen(dict_of_job_attributes):
if check_if_job_claimed(dict_of_job_attributes['job_id']):
instance = OM(dict_of_job_attributes) #<-- Create instance of my target class
instance.queen_bee()
#multiprocessing code
import multiprocessing as mp
if __name__ == '__main__':
active_jobs = get_active_jobs()
pool = mp.Pool(processes = 4)
pool.map(call_the_queen,active_jobs)
pool.close()
pool.join()
</code></pre>
| 0 | 2016-09-28T17:24:54Z | 39,754,968 | <p>Any non-empty tuple (or list, string, iterable, etc.) will evaluate to <code>True</code>. It doesn't matter if the contents of the iterable are non-True. To test that, you can use either <code>any(iterable)</code> or <code>all(iterable)</code> to test whether any or all of the items in the iterable evaluate to True.</p>
<p>However, based on your edits, your problem is likely caused by using a global connection object across multiple processes.</p>
<p>Instead, each process should create its own connection. </p>
<pre><code>def check_if_job_claimed(job_id):
    # open a fresh connection inside the worker process instead of sharing
    # the module-level connection object across processes
    connection = mysql.connector.connect(user=creds["mysql_user"], password=creds["mysql_pw"],
                                         host=creds["mysql_host"], database=creds["mysql_db"],
                                         use_pure=False, buffered=True)
</code></pre>
<p>You could also try using <a href="https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html" rel="nofollow">connection pooling</a>, but I'm not sure if that would work across processes, and it would probably require you to switch to threads instead.</p>
<p>Also, I would move all the code under <code>if __name__ == '__main__':</code> into a function. You generally want to avoid polluting the global namespace when using multiprocessing, because when python creates a new process, it tries to copy the global state to the new process. That can lead to some odd bugs since global variables no longer share state (since they're in separate processes), or an object either can't be serialized or loses some information during serialization when it's reconstructed in the new process.</p>
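<p>A minimal sketch of that last suggestion, reusing the question's own names and just wrapping the module-level code in a function:</p>
<pre><code>import multiprocessing as mp

def main():
    active_jobs = get_active_jobs()
    pool = mp.Pool(processes=4)
    pool.map(call_the_queen, active_jobs)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
</code></pre>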
| 1 | 2016-09-28T18:22:32Z | [
"python",
"mysql",
"boolean",
"multiprocessing"
]
|
file writing not working as expected | 39,754,030 | <p>I have a Python script that takes the first column of a sample.csv file and copies it to a temp1.csv file. Now I would like to compare this csv file with another serialNumber.txt file for any common rows. If any common rows are found, they should be written to a result file. My temp1.csv is being created properly, but the problem is that the result file being created is empty.</p>
<p>script.py</p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "wb")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
data.close()
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
print same
with open('result.csv', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
<p>sample.csv</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>serialNumber.txt</p>
<pre><code>G1A114042400571
M11543TH4258
M11251TH1230
M11435TDS144
M11543TH4292
M11509TD9937
</code></pre>
| 0 | 2016-09-28T17:26:41Z | 39,754,108 | <pre><code>with open('temp1.csv', 'r') as file1:
list1 = file1.readlines()
set1 = set(list1)
with open('serialNumber.txt', 'r') as file2:
list2 = file2.readlines()
set2 = set(list2)
</code></pre>
<p>Now you can process the contents as sets.</p>
<p>I want to note that the csv writer terminates each row with '\r\n' while the lines read from serialNumber.txt will most likely end with just '\n', so the raw lines may never compare equal. If so, you need to normalise the line endings before intersecting, otherwise you will get an empty result.</p>
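<p>One way to sidestep the line-ending mismatch entirely (a sketch of my own, not part of the original answer) is to strip trailing whitespace from every line before intersecting, so '\r\n' versus '\n' differences no longer matter:</p>
<pre><code>set1 = set(line.strip() for line in open('temp1.csv'))
set2 = set(line.strip() for line in open('serialNumber.txt'))
same = set1 & set2
with open('result.csv', 'w') as file_out:
    for line in same:
        file_out.write(line + '\n')
</code></pre>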
| 0 | 2016-09-28T17:31:19Z | [
"python",
"file-writing"
]
|
file writing not working as expected | 39,754,030 | <p>I have a Python script that takes the first column of a sample.csv file and copies it to a temp1.csv file. Now I would like to compare this csv file with another serialNumber.txt file for any common rows. If any common rows are found, they should be written to a result file. My temp1.csv is being created properly, but the problem is that the result file being created is empty.</p>
<p>script.py</p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "wb")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
data.close()
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
print same
with open('result.csv', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
<p>sample.csv</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>serialNumber.txt</p>
<pre><code>G1A114042400571
M11543TH4258
M11251TH1230
M11435TDS144
M11543TH4292
M11509TD9937
</code></pre>
| 0 | 2016-09-28T17:26:41Z | 39,754,121 | <p>You probably want this instead</p>
<pre><code>same = set(list(file1)).intersection(list(file2))
</code></pre>
| 0 | 2016-09-28T17:31:58Z | [
"python",
"file-writing"
]
|
Python:Minumim Function | 39,754,115 | <p>I'm new to Python and I have a problem which needs to be solved with the min function in Python.</p>
<p>I have three ingredients which are:</p>
<pre><code>chicken = 20
lettuce = 30
tomato = 50
max_burgers = "Code Goes Here"
</code></pre>
<p>You need to make burgers with these. Each burger contains 1 piece of chicken, 3 lettuce leaves and 6 tomato slices. Using the min function, I need to calculate the maximum number of burgers which can be made with these ingredients. I've already done this with a while loop and the answer is 8, but I cannot do it with the min function.</p>
<p>Any sort of help would be greatly appreciated.</p>
<p>Thank You.</p>
| -4 | 2016-09-28T17:31:41Z | 39,754,232 | <p>The <code>min</code> function takes an arbitrary number of arguments and returns the lowest value:</p>
<pre><code>>>> min(1, 2, 3,)
1
</code></pre>
<p>To solve this, simply calculate the maximum number of burgers that could be made with each ingredient using floor division, and then pick the lowest:</p>
<pre><code>>>> chicken, lettuce, tomato = 20, 30, 50
>>> min(chicken, lettuce // 3, tomato // 6)
8
</code></pre>
| 1 | 2016-09-28T17:37:22Z | [
"python",
"min"
]
|
Extract a specific section of a string depending on an input | 39,754,196 | <p>I have a very large <code>.json</code> converted into a <code>string</code>, containing numerous cities/countries.</p>
<p>I'd like to extract the information of the city depending on the user's choice of country (<em>London is just an example</em>). </p>
<p>For example, if the <code>Country</code> the user <em>entered</em> was <code>UK</code>, the following information would be extracted from the string:</p>
<p>I'm not too sure on how I could possibly achieve this due to my inexperience, but am aware that it would require an if statement. My progress so far:</p>
<pre><code>Country = raw_input('Country: ')
if 'UK' in string:
???
</code></pre>
| 0 | 2016-09-28T17:35:34Z | 39,754,519 | <pre><code>import json
country = raw_input('Country: ')
jsondata = "the large json string mentioned in your post"
info = json.loads(jsondata)
for item in info:
if item['country'] == country:
print item
</code></pre>
| 0 | 2016-09-28T17:56:46Z | [
"python",
"json",
"string",
"python-2.7",
"wunderground"
]
|
Extract a specific section of a string depending on an input | 39,754,196 | <p>I have a very large <code>.json</code> converted into a <code>string</code>, containing numerous cities/countries.</p>
<p>I'd like to extract the information of the city depending on the user's choice of country (<em>London is just an example</em>). </p>
<p>For example, if the <code>Country</code> the user <em>entered</em> was <code>UK</code>, the following information would be extracted from the string:</p>
<p>I'm not too sure on how I could possibly achieve this due to my inexperience, but am aware that it would require an if statement. My progress so far:</p>
<pre><code>Country = raw_input('Country: ')
if 'UK' in string:
???
</code></pre>
| 0 | 2016-09-28T17:35:34Z | 39,754,639 | <p>You can try this. You might still want to handle some user input issues in your code, for example with str.strip() and case sensitivity.</p>
<pre><code>import json
input_country = raw_input('Please enter country:')
with open('London.json') as fp:
london_json = fp.read()
london = json.loads(london_json)
for item in london["response"]["results"]:
if item['country'] == input_country:
print json.dumps(item, indent = 2)
</code></pre>
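<p>For the input-handling caveat mentioned above, one small, untested tweak of my own would be to normalise the input before comparing, since the country codes in the JSON ('UK', 'US', 'CA') are upper-case:</p>
<pre><code>input_country = raw_input('Please enter country:').strip().upper()
</code></pre>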
| 0 | 2016-09-28T18:02:59Z | [
"python",
"json",
"string",
"python-2.7",
"wunderground"
]
|
Extract a specific section of a string depending on an input | 39,754,196 | <p>I have a very large <code>.json</code> converted into a <code>string</code>, containing numerous cities/countries.</p>
<p>I'd like to extract the information of the city depending on the user's choice of country (<em>London is just an example</em>). </p>
<p>For example, if the <code>Country</code> the user <em>entered</em> was <code>UK</code>, the following information would be extracted from the string:</p>
<p>I'm not too sure on how I could possibly achieve this due to my inexperience, but am aware that it would require an if statement. My progress so far:</p>
<pre><code>Country = raw_input('Country: ')
if 'UK' in string:
???
</code></pre>
| 0 | 2016-09-28T17:35:34Z | 39,755,340 | <p>The initial response wasn't great because a few of us overlooked the raw JSON. However, you <em>did</em> provide it so in future it would be better to make it more obvious that the snippet you showed had a more complete (and valid) counterpart.</p>
<p>That said, I would load the data into a dictionary and do something like:</p>
<pre><code>import json
json_string = """{
"response": {
"version":"0.1",
"termsofService":"http://www.wunderground.com/weather/api/d/terms.html",
"features": {
"conditions": 1
}
, "results": [
{
"name": "London",
"city": "London",
"state": "AR",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "72847.1.99999",
"l": "/q/zmw:72847.1.99999"
}
,
{
"name": "London",
"city": "London",
"state": "KY",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "40741.1.99999",
"l": "/q/zmw:40741.1.99999"
}
,
{
"name": "London",
"city": "London",
"state": "MN",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "56036.3.99999",
"l": "/q/zmw:56036.3.99999"
}
,
{
"name": "London",
"city": "London",
"state": "OH",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "43140.1.99999",
"l": "/q/zmw:43140.1.99999"
}
,
{
"name": "London",
"city": "London",
"state": "ON",
"country": "CA",
"country_iso3166":"CA",
"country_name":"Canada",
"zmw": "00000.1.71623",
"l": "/q/zmw:00000.1.71623"
}
,
{
"name": "London",
"city": "London",
"state": "TX",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "76854.1.99999",
"l": "/q/zmw:76854.1.99999"
}
,
{
"name": "London",
"city": "London",
"state": "",
"country": "UK",
"country_iso3166":"GB",
"country_name":"United Kingdom",
"zmw": "00000.1.03772",
"l": "/q/zmw:00000.1.03772"
}
,
{
"name": "London",
"city": "London",
"state": "WV",
"country": "US",
"country_iso3166":"US",
"country_name":"USA",
"zmw": "25126.1.99999",
"l": "/q/zmw:25126.1.99999"
}
]
}
}"""
json_object = json.loads(json_string)
world_dict = {}
for item in json_object['response']['results']:
item_country = item['country']
in_dict = world_dict.get(item_country)
if in_dict:
world_dict[item_country].extend([item])
else:
world_dict[item_country] = [item]
country = raw_input('Country: ')
response = world_dict.get(country)
if response:
for item in response:
print item
else:
print "Not a valid country"
</code></pre>
<p>EDIT:
Based on comment to use URL rather than a JSON string.</p>
<pre><code>import requests
url = 'http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/London.json'
data = requests.get(url).json()
world_dict = {}
for item in data['response']['results']:
item_country = item['country']
in_dict = world_dict.get(item_country)
if in_dict:
world_dict[item_country].extend([item])
else:
world_dict[item_country] = [item]
country = raw_input('Country: ')
response = world_dict.get(country)
if response:
for item in response:
print item
else:
print "Not a valid country"
</code></pre>
| 1 | 2016-09-28T18:43:06Z | [
"python",
"json",
"string",
"python-2.7",
"wunderground"
]
|
Python - Convert string-numeric to float | 39,754,222 | <p>I have the following string numeric values, and need to keep only the digit and decimals. I just can't find a right regular expression for this. </p>
<pre><code>s = [
"12.45-280", # need to convert to 12.45280
"A10.4B2", # need to convert to 10.42
]
</code></pre>
| 2 | 2016-09-28T17:36:43Z | 39,754,382 | <p>Replace each alphabetical character in the string with an empty string "":</p>
<pre><code>import re
num_string = [None] * len(s)  # pre-size the list so it can be indexed below
for i, string in enumerate(s):
num_string[i] = re.sub('[a-zA-Z]+', '', string)
</code></pre>
| 0 | 2016-09-28T17:46:46Z | [
"python",
"regex",
"typeconverter"
]
|
Python - Convert string-numeric to float | 39,754,222 | <p>I have the following string numeric values, and need to keep only the digit and decimals. I just can't find a right regular expression for this. </p>
<pre><code>s = [
"12.45-280", # need to convert to 12.45280
"A10.4B2", # need to convert to 10.42
]
</code></pre>
| 2 | 2016-09-28T17:36:43Z | 39,754,472 | <p>You could go for a combination of <code>locale</code> and regular expressions:</p>
<pre><code>import re, locale
from locale import atof
# or whatever else
locale.setlocale(locale.LC_NUMERIC, 'en_GB.UTF-8')
s = [
"12.45-280", # need to convert to 12.45280
"A10.4B2", # need to convert to 10.42
]
rx = re.compile(r'[A-Z-]+')
def convert(item):
"""
Try to convert the item to a float
"""
try:
return atof(rx.sub('', item))
except:
return None
converted = [match
for item in s
for match in [convert(item)]
if match]
print(converted)
# [12.4528, 10.42]
</code></pre>
| 0 | 2016-09-28T17:53:11Z | [
"python",
"regex",
"typeconverter"
]
|
Python - Convert string-numeric to float | 39,754,222 | <p>I have the following string numeric values, and need to keep only the digit and decimals. I just can't find a right regular expression for this. </p>
<pre><code>s = [
"12.45-280", # need to convert to 12.45280
"A10.4B2", # need to convert to 10.42
]
</code></pre>
| 2 | 2016-09-28T17:36:43Z | 39,754,501 | <p>You can also remove all non-digits and non-dot characters, then convert the result to float:</p>
<pre><code>In [1]: import re
In [2]: s = [
...: "12.45-280", # need to convert to 12.45280
...: "A10.4B2", # need to convert to 10.42
...: ]
In [3]: for item in s:
...: print(float(re.sub(r"[^0-9.]", "", item)))
...:
12.4528
10.42
</code></pre>
<p>Here <code>[^0-9.]</code> would match any character except a digit or a literal dot. </p>
| 1 | 2016-09-28T17:55:26Z | [
"python",
"regex",
"typeconverter"
]
|
Django: Request timeout for long-running script | 39,754,283 | <p>I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detailview with the results of that script.
I have a problem with request timeouts. Is there a way to increase the time allowed before a timeout so that the script can finish?</p>
<p>[I have a spinner to let users know that the page is loading].</p>
| 0 | 2016-09-28T17:41:00Z | 39,754,475 | <p>Yes, the timeout value can be adjusted in the web server configuration.</p>
<p>Does anyone else but you use this page? If so, you'll have to educate them to be patient and not click the Stop or Reload buttons on their browser.</p>
| 0 | 2016-09-28T17:53:36Z | [
"python",
"django",
"python-3.x",
"pythonanywhere",
"django-1.9"
]
|
Django: Request timeout for long-running script | 39,754,283 | <p>I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detailview with the results of that script.
I have a problem with request timeouts. Is there a way to increase the time allowed before a timeout so that the script can finish?</p>
<p>[I have a spinner to let users know that the page is loading].</p>
| 0 | 2016-09-28T17:41:00Z | 39,775,664 | <p>We don't change the request timeout for individual users on PythonAnywhere. In the vast majority of cases, a request that takes 5 min (or even, really, 1 min) indicates that something is very wrong with the app.</p>
| 0 | 2016-09-29T16:35:44Z | [
"python",
"django",
"python-3.x",
"pythonanywhere",
"django-1.9"
]
|
regex both numberic and numeric with one decimal place | 39,754,352 | <p>Using
<a href="https://regex101.com/r/ukjM5F/1" rel="nofollow">https://regex101.com/r/ukjM5F/1</a></p>
<p>My regex is:</p>
<pre><code>(?:'\d+[A-Za-z ]*)(\d+|\d+\.)
</code></pre>
<p>It uses the <code>g</code> (global) modifier.</p>
<p>How do I grab all the<br>
10<br>
12 </p>
<p>As well as<br>
2.5<br>
3.5?</p>
<p>Right now I'm stuck on how to grab the first decimal place (eg. 2.5)</p>
<p>I'm matching against this example string:</p>
<pre><code>['Table: Waiter: kenny',
'======================================',
'1 GRILLED AUSTRALIA ANGU **29.00**',
'----------------------------------',
'TOTAL 29.00', 'CASH 29.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'----------------------------------',
'TOTAL 2.50', 'CASH 2.50',
'CHANGE 0.00',
'====================================',
'Thank You and',
'See You Again!']
['Table: Waiter: kenny',
'======================================',
'1 SAUSAGE WRAPPED WITH B **10.00**',
'1 ESCARGOT WITH GARLIC H **12.00**',
'1 PAN SEARED FOIE GRAS **15.00**',
'1 SAUTE FIELD MUSHROOM W **9.00**',
'1 CRISPY CHICKEN WINGS **7.00**',
'1 ONION RINGS **6.00**',
'----------------------------------',
'TOTAL 59.00', 'CASH 59.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'1 Chocolate Cake **3.50**',
'----------------------------------',
'TOTAL
6.00', 'CASH
6.00', 'CHANGE 0.00',
'===================================='
, 'Thank You and', 'See You Again!']
</code></pre>
| -3 | 2016-09-28T17:45:14Z | 39,754,481 | <p>You can use a non-capturing group: it is then simple to match a number with two decimal places:</p>
<pre><code>(?:\d[\s\w]+?)(\d+\.\d\d)
</code></pre>
<p><a href="https://regex101.com/r/ukjM5F/3" rel="nofollow">https://regex101.com/r/ukjM5F/3</a></p>
| 0 | 2016-09-28T17:53:57Z | [
"python",
"regex"
]
|
regex both numberic and numeric with one decimal place | 39,754,352 | <p>Using
<a href="https://regex101.com/r/ukjM5F/1" rel="nofollow">https://regex101.com/r/ukjM5F/1</a></p>
<p>My regex is:</p>
<pre><code>(?:'\d+[A-Za-z ]*)(\d+|\d+\.)
</code></pre>
<p>It uses the <code>g</code> (global) modifier.</p>
<p>How do I grab all the<br>
10<br>
12 </p>
<p>As well as<br>
2.5<br>
3.5?</p>
<p>Right now I'm stuck on how to grab the first decimal place (eg. 2.5)</p>
<p>I'm matching against this example string:</p>
<pre><code>['Table: Waiter: kenny',
'======================================',
'1 GRILLED AUSTRALIA ANGU **29.00**',
'----------------------------------',
'TOTAL 29.00', 'CASH 29.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'----------------------------------',
'TOTAL 2.50', 'CASH 2.50',
'CHANGE 0.00',
'====================================',
'Thank You and',
'See You Again!']
['Table: Waiter: kenny',
'======================================',
'1 SAUSAGE WRAPPED WITH B **10.00**',
'1 ESCARGOT WITH GARLIC H **12.00**',
'1 PAN SEARED FOIE GRAS **15.00**',
'1 SAUTE FIELD MUSHROOM W **9.00**',
'1 CRISPY CHICKEN WINGS **7.00**',
'1 ONION RINGS **6.00**',
'----------------------------------',
'TOTAL 59.00', 'CASH 59.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'1 Chocolate Cake **3.50**',
'----------------------------------',
'TOTAL
6.00', 'CASH
6.00', 'CHANGE 0.00',
'===================================='
, 'Thank You and', 'See You Again!']
</code></pre>
| -3 | 2016-09-28T17:45:14Z | 39,754,487 | <p>Try this: <code>\d+[.]?\d*</code></p>
<p>That matches all numbers with or without a decimal point</p>
<p><a href="https://regex101.com/r/9UHCt4/1" rel="nofollow">https://regex101.com/r/9UHCt4/1</a></p>
<p>But, probably even better would be not to try to use regexes for the bulk of your text processing. Because this is structured, you can use a regex to recognise the structure of the line, use splitting/awk to break it up into columns, and go from there.</p>
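<p>As a rough sketch of that splitting idea (my own example, run against one of the receipt lines from the question):</p>
<pre><code>line = '1 SAUSAGE WRAPPED WITH B 10.00'
qty, rest = line.split(None, 1)      # quantity, then the remainder
name, price = rest.rsplit(None, 1)   # peel the price off the right-hand end
print(qty, name, float(price))       # ('1', 'SAUSAGE WRAPPED WITH B', 10.0)
</code></pre>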
| 1 | 2016-09-28T17:54:27Z | [
"python",
"regex"
]
|
regex both numberic and numeric with one decimal place | 39,754,352 | <p>Using
<a href="https://regex101.com/r/ukjM5F/1" rel="nofollow">https://regex101.com/r/ukjM5F/1</a></p>
<p>My regex is:</p>
<pre><code>(?:'\d+[A-Za-z ]*)(\d+|\d+\.)
</code></pre>
<p>It uses the <code>g</code> (global) modifier.</p>
<p>How do I grab all the<br>
10<br>
12 </p>
<p>As well as<br>
2.5<br>
3.5?</p>
<p>Right now I'm stuck on how to grab the first decimal place (eg. 2.5)</p>
<p>I'm matching against this example string:</p>
<pre><code>['Table: Waiter: kenny',
'======================================',
'1 GRILLED AUSTRALIA ANGU **29.00**',
'----------------------------------',
'TOTAL 29.00', 'CASH 29.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'----------------------------------',
'TOTAL 2.50', 'CASH 2.50',
'CHANGE 0.00',
'====================================',
'Thank You and',
'See You Again!']
['Table: Waiter: kenny',
'======================================',
'1 SAUSAGE WRAPPED WITH B **10.00**',
'1 ESCARGOT WITH GARLIC H **12.00**',
'1 PAN SEARED FOIE GRAS **15.00**',
'1 SAUTE FIELD MUSHROOM W **9.00**',
'1 CRISPY CHICKEN WINGS **7.00**',
'1 ONION RINGS **6.00**',
'----------------------------------',
'TOTAL 59.00', 'CASH 59.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
['1 Carrot Cake **2.50**',
'1 Chocolate Cake **3.50**',
'----------------------------------',
'TOTAL
6.00', 'CASH
6.00', 'CHANGE 0.00',
'===================================='
, 'Thank You and', 'See You Again!']
</code></pre>
| -3 | 2016-09-28T17:45:14Z | 39,754,538 | <p>To get only the price of a menu item use:</p>
<pre><code>(?:[a-zA-Z]+\s[a-zA-Z]+\s+)(\d+\.\d\d)(?:')
</code></pre>
<p><a href="https://regex101.com/r/ukjM5F/5" rel="nofollow">https://regex101.com/r/ukjM5F/5</a></p>
| 0 | 2016-09-28T17:57:53Z | [
"python",
"regex"
]
|
python: Threading classes from main_window class | 39,754,459 | <p>I'm trying to create a game using Tkinter that can run functions from multiple class objects simultaneously using threads. In the MainWindow class, I have "player" and "player2" assigned to the "Player" class. </p>
<p>In the "Player" class, have a function called "move" that simply moves the canvas object.</p>
<p>When the right button is pressed, "player" starts moving. However, as soon as the left button is pressed, it seems that "player" stops and is replaced by "player2".</p>
<p>Is there any way to fix this?</p>
<pre><code>from tkinter import *
from threading import Thread
import time
class MainWindow(Frame):
def __init__(self , parent):
self.backround = '#%02x%02x%02x' % (180, 180, 180)
self.main_width = 1905
self.main_height = 1002
Frame.__init__(self , parent , bg = self.backround)
self.pack(fill=BOTH, expand=1)
self.parent = parent
self.parent.geometry('1905x1002+0+0')
self.main_canvas = Canvas(self , width = self.main_width , height =
self.main_height , bg = 'white')
self.main_canvas.pack()
self.Keyboard_Events = Thread(target = self.keyboard_events)
self.Keyboard_Events.start()
</code></pre>
<p>players</p>
<pre><code> self.player = Player(self.main_canvas , [125 , 125] , self) #(canvas , coords)
self.player2 = Player(self.main_canvas , [200 , 100] , self) #(canvas , coords)
</code></pre>
<p>callbacks</p>
<pre><code> def keyboard_events(self):
def callback_mouse_primary(event):
self.player.move(0.01)
def callback_mouse_secondary(event):
self.player2.move(0.01)
root.bind('<Button-1>' , callback_mouse_primary)
root.bind('<Button-3>' , callback_mouse_secondary)
</code></pre>
<p>player class</p>
<pre><code>class Player(Thread):
def __init__(self , canvas , coords , parent):
Thread.__init__(self)
self.setDaemon(True)
self.canvas = canvas
self.coords = coords
self.player_object = self.canvas.create_rectangle(self.coords[0]-25 , self.coords[1]-25 , self.coords[0]+25 , self.coords[1]+25)
def move(self , Time):
for y in range(100):
self.canvas.coords(self.player_object , self.coords[0]-25 , self.coords[1]-25 , self.coords[0]+25 , self.coords[1]+25)
self.coords[0] += 0.1
self.coords[1] += 0.1
self.canvas.update()
time.sleep(Time)
def Print_info (self):
print (self.coords)
if __name__ == '__main__':
root = Tk()
main = MainWindow(root)
root.mainloop()
</code></pre>
<p>Just to make it clear: Player objects are created within the MainWindow class and functions on those players are run within the MainWindow class. Is there any way to thread those player objects so they run independently?</p>
| 0 | 2016-09-28T17:52:30Z | 39,754,884 | <p>1) You need to fix your model and have a separation between the model/data and the representation/visualization:</p>
<p><code>players</code> only need to hold their data, and not be responsible for drawing anything. You should have a main rendering loop in charge of reading the components to display and positioning them accordingly. </p>
<p>2) <code>players</code> are not <code>threads</code> and there's no need to daemonize them. A <code>Thread</code> runs a function concurrently with the rest of the program. In your case, the concurrent task is to start moving a player at a certain rate, i.e.:</p>
<pre><code>def callback_mouse_primary(event):
    t = Thread(target=self.player.move, kwargs={'Time': 0.01})
t.start()
</code></pre>
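<p>As a sketch of the "main rendering loop" idea from point 1 (my own illustration, not the OP's code): Tkinter widgets are generally only safe to touch from the main thread, so instead of calling canvas methods from worker threads you can let the threads update the player coordinates and have the GUI redraw itself periodically with <code>after()</code>:</p>
<pre><code>def render(self):
    # redraw every player from its current coordinates, then reschedule
    for player in (self.player, self.player2):
        x, y = player.coords
        self.main_canvas.coords(player.player_object, x - 25, y - 25, x + 25, y + 25)
    self.parent.after(16, self.render)   # roughly 60 redraws per second
</code></pre>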
| 0 | 2016-09-28T18:17:36Z | [
"python",
"class",
"python-3.x",
"tkinter",
"python-multithreading"
]
|
concatenate arrays with mixed types | 39,754,658 | <p>consider the <code>np.array</code> <code>a</code></p>
<pre><code>a = np.concatenate(
[np.arange(2).reshape(-1, 1),
np.array([['a'], ['b']])],
axis=1)
a
array([['0', 'a'],
['1', 'b']],
dtype='|S11')
</code></pre>
<p>How can I execute this concatenation such that the first column of <code>a</code> remains integers?</p>
| 0 | 2016-09-28T18:04:26Z | 39,755,454 | <p>You can mix types in a numpy array by using a <code>numpy.object</code> as the <code>dtype</code>:</p>
<pre><code>>>> import numpy as np
>>> a = np.empty((2, 0), dtype=np.object)
>>> a = np.append(a, np.arange(2).reshape(-1,1), axis=1)
>>> a = np.append(a, np.array([['a'],['b']]), axis=1)
</code></pre>
<p>
<pre><code>>>> a
array([[0, 'a'],
[1, 'b']], dtype=object)
</code></pre>
<p>
<pre><code>>>> type(a[0,0])
<type 'int'>
</code></pre>
<p>
<pre><code>>>> type(a[0,1])
<type 'str'>
</code></pre>
| 2 | 2016-09-28T18:49:07Z | [
"python",
"numpy"
]
|
concatenate arrays with mixed types | 39,754,658 | <p>consider the <code>np.array</code> <code>a</code></p>
<pre><code>a = np.concatenate(
[np.arange(2).reshape(-1, 1),
np.array([['a'], ['b']])],
axis=1)
a
array([['0', 'a'],
['1', 'b']],
dtype='|S11')
</code></pre>
<p>How can I execute this concatenation such that the first column of <code>a</code> remains integers?</p>
| 0 | 2016-09-28T18:04:26Z | 39,755,795 | <p>A suggested duplicate recommends making a recarray or structured array.</p>
<p><a href="http://stackoverflow.com/questions/11309739/store-different-datatypes-in-one-numpy-array">Store different datatypes in one NumPy array?</a></p>
<p>In this case:</p>
<pre><code>In [324]: a = np.rec.fromarrays((np.arange(2).reshape(-1,1), np.array([['a'],['b']])))
In [325]: a
Out[325]:
rec.array([[(0, 'a')],
[(1, 'b')]],
dtype=[('f0', '<i4'), ('f1', '<U1')])
In [326]: a['f0']
Out[326]:
array([[0],
[1]])
In [327]: a['f1']
Out[327]:
array([['a'],
['b']],
dtype='<U1')
</code></pre>
<p>(I have reopened this because I think both approaches need to acknowledged. Plus the <code>object</code> answer was already given and accepted.)</p>
| 2 | 2016-09-28T19:10:03Z | [
"python",
"numpy"
]
|
overriding not operator in Python | 39,754,808 | <p>I cannot find the method corresponding to <code>not x</code> operator. There is one for <code>and,or,xor</code> tho. Where is it?
<a href="https://docs.python.org/3/reference/datamodel.html" rel="nofollow">https://docs.python.org/3/reference/datamodel.html</a></p>
| -1 | 2016-09-28T18:13:45Z | 39,754,858 | <blockquote>
<p>There is one for <code>and,or,xor</code> tho</p>
</blockquote>
<p>The methods you're looking at are for <em>bitwise</em> <code>&</code>, <code>|</code>, and <code>^</code>, not <code>and</code>, <code>or</code>, or <code>xor</code> (which isn't even a Python operator).</p>
<p><code>not</code> cannot be overloaded, just like <code>and</code> and <code>or</code> can't be overloaded. Bitwise <code>~</code> can be overloaded, though; that's <code>__invert__</code>.</p>
<p>If you're in a situation where you wish you could overload <code>not</code>, you'll either have to make do with overloading <code>~</code> instead, or you'll have to write your own <code>logical_not</code> function and use that instead of the <code>not</code> operator.</p>
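<p>A tiny sketch of that workaround (my own example): overload <code>~</code> via <code>__invert__</code>, and influence what <code>not</code> yields indirectly through <code>__bool__</code>:</p>
<pre><code>class Flag(object):
    def __init__(self, value):
        self.value = value
    def __invert__(self):           # called for ~flag
        return Flag(not self.value)
    def __bool__(self):             # consulted by `not flag` in Python 3
        return self.value
    __nonzero__ = __bool__          # Python 2 spelling of __bool__

f = Flag(True)
print((~f).value)   # False
print(not f)        # False -- `not` itself can only ever return a bool
</code></pre>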
| 3 | 2016-09-28T18:16:23Z | [
"python",
"operator-overloading",
"operators"
]
|
overriding not operator in Python | 39,754,808 | <p>I cannot find the method corresponding to <code>not x</code> operator. There is one for <code>and,or,xor</code> tho. Where is it?
<a href="https://docs.python.org/3/reference/datamodel.html" rel="nofollow">https://docs.python.org/3/reference/datamodel.html</a></p>
| -1 | 2016-09-28T18:13:45Z | 39,754,868 | <p>There are no hooks for <code>and</code> or <code>or</code> operators, no (as they short-circuit), and there is no <code>xor</code> operator in Python. The <code>__and__</code> and <code>__or__</code> are for the <a href="https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations" rel="nofollow"><em>bitwise</em> <code>&</code> and <code>|</code> operators</a>, respectively. The equivalent bitwise operator for <code>not</code> is <code>~</code> (inversion), which is handled by the <a href="https://docs.python.org/3/reference/datamodel.html#object.__invert__" rel="nofollow"><code>__invert__</code> method</a>, while <a href="https://docs.python.org/3/reference/datamodel.html#object.__xor__" rel="nofollow"><code>__xor__</code></a> covers the <code>^</code> bitwise operator.</p>
<p><code>not</code> operates on the <a href="https://docs.python.org/3/library/stdtypes.html#truth-value-testing" rel="nofollow"><em>truth-value</em> of an object</a>. If you have a container, give it a <a href="https://docs.python.org/3/reference/datamodel.html#object.__len__" rel="nofollow"><code>__len__</code> method</a>, if not give it a <a href="https://docs.python.org/3/reference/datamodel.html#object.__bool__" rel="nofollow"><code>__bool__</code> method</a>. Either one is consulted to determine if an object should be considered 'true'; <code>not</code> inverts the result of that test.</p>
<p>So if <code>__bool__</code> returns <code>True</code> or <code>__len__</code> returns an integer other than <code>0</code>, <code>not</code> will invert that to <code>False</code>, otherwise <code>not</code> produces <code>True</code>. Note that you can't make <code>not</code> return anything else but a boolean value!</p>
<p>From the documentation for <code>__bool__</code>:</p>
<blockquote>
<p><code>__bool__</code><br>
Called to implement truth value testing and the built-in operation <code>bool()</code>; should return <code>False</code> or <code>True</code>. When this method is not defined, <code>__len__()</code> is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither <code>__len__()</code> nor <code>__bool__()</code>, all its instances are considered true.></p>
</blockquote>
<p>and for the <a href="https://docs.python.org/3/reference/expressions.html#boolean-operations" rel="nofollow"><code>not</code> expression</a>:</p>
<blockquote>
<p>In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: <code>False</code>, <code>None</code>, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. <strong>User-defined objects can customize their truth value by providing a <code>__bool__()</code> method.</strong></p>
<p>The operator <code>not</code> yields <code>True</code> if its argument is false, <code>False</code> otherwise.</p>
</blockquote>
<p><em>bold emphasis mine</em>.</p>
| 1 | 2016-09-28T18:16:59Z | [
"python",
"operator-overloading",
"operators"
]
|
scrapy run spider from path | 39,754,822 | <p>A number of suggestions on running scrapy suggest doing this in order to start scrapy via script, or to debug in an IDE, etc:</p>
<pre><code>from scrapy import cmdline
cmdline.execute(("scrapy runspider spider-file-name.py").split())
</code></pre>
<p>This works, so long as the script is placed in the project directory, but if not try to give it an absolute or relative path. For example:</p>
<pre><code>import os
from scrapy import cmdline
this_file_path = os.path.dirname(os.path.realpath(__file__))
base_path = this_file_path.replace('bootstrap', '')
full_path = base_path + "path/to/spiders/some-spider.py"
print full_path
cmdline.execute(("scrapy runspider " + full_path).split())
</code></pre>
<p>With this, I get:</p>
<pre><code>2016-09-28 10:49:29 [scrapy] INFO: Scrapy 1.1.2 started (bot: scrapybot)
2016-09-28 10:49:29 [scrapy] INFO: Overridden settings: {}
Usage
=====
scrapy runspider [options] <spider_file>
spider-main.py: error: Unable to load '/Users/name/intellij-workspace/crawling/scrape/scrape/spiders/some-spider.py': No module named items
</code></pre>
<p>Is there a way to run and debug scrapy spiders from an absolute path? Ideally, I need to have this to debug in an IDE.</p>
| 2 | 2016-09-28T18:14:15Z | 39,756,350 | <p>It's highly advisable to use distributed crawling software, but if you really want to do it like this just for some quick-and-dirty testing, here it is:</p>
<pre><code>import subprocess

# run scrapy with the project directory as the working directory so that
# project-relative imports such as "items" resolve correctly
project_path = "/Users/name/intellij-workspace/crawling/scrape"
subprocess.Popen(["scrapy", "runspider", "scrape/spiders/some-spider.py"], cwd=project_path)
</code></pre>
| 2 | 2016-09-28T19:42:55Z | [
"python",
"scrapy"
]
|
Syntax error in if...elif...else | 39,755,012 | <p>tax calculator</p>
<pre><code>def computeTax(maritalStatus,userIncome):
if maritalStatus == "Single":
print("User is single")
if userIncome <= 9075:
tax = (.10) * (userIncome)
elif userIncome <= 36900:
tax = 907.50 + ((.15) * (userIncome - 9075))
elif userIncome <= 89350:
tax = 5081.25 + ((.25) * (userIncome - 36900))
elif userIncome <= 186350:
tax = 18193.75 + ((.28) * (userIncome - 89350))
elif userIncome <= 405100:
tax = 45353.75 + ((.33) * (userIncome - 186350))
elif userIncome <= 406750:
tax = 117541.25 + ((.35) * (userIncome - (405100)
else: # getting syntax error here
tax = 118118.75 + ((.396) * (userIncome - (406750))
return tax
else:
return "placeholder"
def main():
maritalStatusMain = input("Please enter your marital status (Single or Married)")
userIncomeMain = float(input("Please enter your annual income"))
finalTax = computeTax(maritalStatusMain,userIncomeMain)
print(finalTax)
main()
</code></pre>
<p>When I remove or add statements the syntax error seems to jump around.</p>
| 0 | 2016-09-28T18:25:26Z | 39,755,916 | <p>A quick glance at the lines around it, shows a missing parenthesis</p>
<pre><code>...
tax = 45353.75 + ((.33) * (userIncome - 186350)) # <- two ending parens
elif userIncome <= 406750:
tax = 117541.25 + ((.35) * (userIncome - (405100) # <- one ending paren, plus extra paren around 405100
else:
...
</code></pre>
<p>That's probably all it is, unless the copy+paste into the question failed</p>
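<p>For reference (my addition, not part of the original answer), those two assignments with the parentheses balanced would read:</p>
<pre><code>        tax = 117541.25 + ((.35) * (userIncome - 405100))
    else:
        tax = 118118.75 + ((.396) * (userIncome - 406750))
</code></pre>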
| 1 | 2016-09-28T19:17:24Z | [
"python",
"syntax-error"
]
|
How to do LOAD DATA command from within python | 39,755,033 | <p>How would I do the following?</p>
<pre><code>import MySQLdb
conn = MySQLdb.connect (
host = settings.DATABASES['default']['HOST'],
port = 3306,
user = settings.DATABASES['default']['USER'],
passwd = settings.DATABASES['default']['PASSWORD'],
db = settings.DATABASES['default']['NAME'],
charset='utf8')
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) FROM auth_user')
print cursor.fetchall() # this prints, so I know the connection is correct
cursor.execute('''
LOAD DATA INFILE 'a_short.csv' INTO TABLE export
FIELDS TERMINATED BY '|' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
''')
</code></pre>
<p>When I try the above I get an "Access Denied" error, however I think it is more related to trying to do the <code>LOAD DATA</code> command from inside the cursor. What would be the proper way to do this?</p>
<p><strong>Update</strong>: This seems to be a limitation in privileges (no 'file' privilege) for the user's in Amazon RDS. Here's one way to get around this: <a href="http://stackoverflow.com/questions/1641160/how-to-load-data-infile-on-amazon-rds">how to 'load data infile' on amazon RDS?</a>. </p>
| 1 | 2016-09-28T18:26:32Z | 39,755,190 | <p>In order for <code>LOAD DATA INFILE</code> to work, your database user account needs MySQL's <code>file_priv</code>. The database also needs to have read permissions on the file in question.</p>
<p>In this query the database is instructed to look for <code>a_short.csv</code> on the database server's filesystem, which will probably produce a path like: <code>/var/mysql/a_short.csv</code>. If this csv file is on the Python side of the system, then Python needs to open the file and populate the MySQL database itself.</p>
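<p>A minimal sketch of that last option (my own illustration, untested against the OP's schema — the column names are assumptions): read the CSV in Python and insert the rows with parameterised queries instead of <code>LOAD DATA</code>:</p>
<pre><code>import csv

with open('a_short.csv') as f:
    reader = csv.reader(f, delimiter='|', quotechar='"')
    next(reader)                      # skip the header row ("IGNORE 1 LINES")
    rows = [tuple(r) for r in reader]

cursor.executemany(
    "INSERT INTO export (col1, col2, col3) VALUES (%s, %s, %s)",  # col1..col3 are placeholder column names
    rows)
conn.commit()
</code></pre>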
| 2 | 2016-09-28T18:34:45Z | [
"python",
"mysql"
]
|
How to do LOAD DATA command from within python | 39,755,033 | <p>How would I do the following?</p>
<pre><code>import MySQLdb
conn = MySQLdb.connect (
host = settings.DATABASES['default']['HOST'],
port = 3306,
user = settings.DATABASES['default']['USER'],
passwd = settings.DATABASES['default']['PASSWORD'],
db = settings.DATABASES['default']['NAME'],
charset='utf8')
cursor = conn.cursor()
cursor.execute('SELECT COUNT(*) FROM auth_user')
print cursor.fetchall() # this prints, so I know the connection is correct
cursor.execute('''
LOAD DATA INFILE 'a_short.csv' INTO TABLE export
FIELDS TERMINATED BY '|' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
''')
</code></pre>
<p>When I try the above I get an "Access Denied" error, however I think it is more related to trying to do the <code>LOAD DATA</code> command from inside the cursor. What would be the proper way to do this?</p>
<p><strong>Update</strong>: This seems to be a limitation in privileges (no 'file' privilege) for the user's in Amazon RDS. Here's one way to get around this: <a href="http://stackoverflow.com/questions/1641160/how-to-load-data-infile-on-amazon-rds">how to 'load data infile' on amazon RDS?</a>. </p>
| 1 | 2016-09-28T18:26:32Z | 39,755,325 | <p>This doesn't seem to be a cursor issue; rather, it seems that the user you are connecting with does not have the required permissions to execute that command.</p>
<p>Make sure you give the user the needed permissions.</p>
| 1 | 2016-09-28T18:42:03Z | [
"python",
"mysql"
]
|
Putting a list in the same order as another list | 39,755,045 | <p>There's a bunch of questions that are phrased similarly, but I was unable to find one that actually mapped to my intended semantics.</p>
<p>There are two lists, <code>A</code> and <code>B</code>, and I want to rearrange <code>B</code> so that it is in the same relative order as <code>A</code> - the maximum element of <code>B</code> is in the same position as the current position of the maximum element of <code>A</code>, and the same for the minimum element, and so on.</p>
<p>Note that <code>A</code> is not sorted, nor do I want it to be.</p>
<p>As an example, if the following were input:</p>
<pre><code>a = [7, 14, 0, 9, 19, 9]
b = [45, 42, 0, 1, -1, 0]
</code></pre>
<p>I want the output to be <code>[0, 42, -1, 0, 45, 1]</code>.</p>
<p>Please note that the intended output is not <code>[0, 45, 1, 0, 42, -1]</code>, which is what it would be if you zipped the two and sorted by <code>A</code> and took the resulting elements of <code>B</code> (this is what all of the other questions I looked at wanted).</p>
<p>Here's my code:</p>
<pre><code>def get_swaps(x):
out = []
if len(x) <= 1:
return out
y = x[:]
n = -1
while len(y) != 1:
pos = y.index(max(y))
y[pos] = y[-1]
y.pop()
out.append((pos, n))
n -= 1
return out
def apply_swaps_in_reverse(x, swaps):
out = x[:]
for swap in swaps[::-1]:
orig, new = swap
out[orig], out[new] = out[new], out[orig]
return out
def reorder(a, b):
return apply_swaps_in_reverse(sorted(b), get_swaps(a))
</code></pre>
<p>The approach is basically to construct a list of the swaps necessary to sort <code>A</code> via selection sort, sort <code>B</code>, and then apply those swaps in reverse. This works, but is pretty slow (and is fairly confusing, as well). Is there a better approach to this?</p>
| 4 | 2016-09-28T18:27:12Z | 39,755,202 | <p>You can easily do this with <code>numpy</code> by sorting both lists (to get a mapping between the two lists) and by inverting one of the sorting permutations:</p>
<pre><code>import numpy as np
a = [7, 14, 0, 9, 19, 9]
b = [45, 42, 0, 1, -1, 0]
a = np.array(a)
b = np.array(b)
ai = np.argsort(a)
bi = np.argsort(b)
aiinv = np.empty(ai.shape,dtype=int)
aiinv[ai] = np.arange(a.size) # inverse of ai permutation
b_new = b[bi[aiinv]]
# array([ 0, 42, -1, 0, 45, 1])
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>numpy.argsort</code></a> gives the indices (permutation) that will sort your array. This needs to be inverted to be used inside <code>b</code>, which can be done by the inverted assignment</p>
<pre><code>aiinv[ai] = np.arange(a.size)
</code></pre>
| 3 | 2016-09-28T18:35:43Z | [
"python",
"list",
"sorting"
]
|
Putting a list in the same order as another list | 39,755,045 | <p>There's a bunch of questions that are phrased similarly, but I was unable to find one that actually mapped to my intended semantics.</p>
<p>There are two lists, <code>A</code> and <code>B</code>, and I want to rearrange <code>B</code> so that it is in the same relative order as <code>A</code> - the maximum element of <code>B</code> is in the same position as the current position of the maximum element of <code>A</code>, and the same for the minimum element, and so on.</p>
<p>Note that <code>A</code> is not sorted, nor do I want it to be.</p>
<p>As an example, if the following were input:</p>
<pre><code>a = [7, 14, 0, 9, 19, 9]
b = [45, 42, 0, 1, -1, 0]
</code></pre>
<p>I want the output to be <code>[0, 42, -1, 0, 45, 1]</code>.</p>
<p>Please note that the intended output is not <code>[0, 45, 1, 0, 42, -1]</code>, which is what it would be if you zipped the two and sorted by <code>A</code> and took the resulting elements of <code>B</code> (this is what all of the other questions I looked at wanted).</p>
<p>Here's my code:</p>
<pre><code>def get_swaps(x):
out = []
if len(x) <= 1:
return out
y = x[:]
n = -1
while len(y) != 1:
pos = y.index(max(y))
y[pos] = y[-1]
y.pop()
out.append((pos, n))
n -= 1
return out
def apply_swaps_in_reverse(x, swaps):
out = x[:]
for swap in swaps[::-1]:
orig, new = swap
out[orig], out[new] = out[new], out[orig]
return out
def reorder(a, b):
return apply_swaps_in_reverse(sorted(b), get_swaps(a))
</code></pre>
<p>The approach is basically to construct a list of the swaps necessary to sort <code>A</code> via selection sort, sort <code>B</code>, and then apply those swaps in reverse. This works, but is pretty slow (and is fairly confusing, as well). Is there a better approach to this?</p>
| 4 | 2016-09-28T18:27:12Z | 39,755,265 | <pre><code>a = [7, 14, 0, 9, 19, 9]
b = [45, 42, 0, 1, -1, 0]
print zip(*sorted(zip(sorted(b), sorted(enumerate(a), key=lambda x:x[1])), key=lambda x: x[1][0]))[0]
#or, for 3.x:
print(list(zip(*sorted(zip(sorted(b), sorted(enumerate(a), key=lambda x:x[1])), key=lambda x: x[1][0])))[0])
</code></pre>
<p>result:</p>
<pre><code>(0, 42, -1, 0, 45, 1)
</code></pre>
<p>You sort <code>a</code>, using <code>enumerate</code> to keep track of each item's original index. You zip the result with <code>sorted(b)</code>, then re-sort the whole thing based on <code>a</code>'s original indices. Then you call <code>zip</code> once more to extract just <code>b</code>'s values.</p>
| 5 | 2016-09-28T18:38:58Z | [
"python",
"list",
"sorting"
]
|
How do I add a multiline variable in a honcho .env file? | 39,755,063 | <p>I am trying to add a multiline value for an env var in .env so that my process, run by honcho, will have access to it.</p>
<p>Bash uses a '\' to permit multi-line values, but this gives errors in the honcho/Python code. How can I do this?</p>
| 0 | 2016-09-28T18:28:13Z | 39,755,064 | <p>I put '\\' at the end of the line to permit multiline values.</p>
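<p>Illustratively, an entry following that approach might look like the hypothetical example below (the variable name is made up and I have not tested it against honcho's parser):</p>
<pre><code>LONG_VALUE=first line of the value\\
second line of the value\\
third line of the value
</code></pre>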
| 0 | 2016-09-28T18:28:13Z | [
"python",
"environment-variables"
]
|
How to divide the sum with the size in a pandas groupby | 39,755,075 | <p>I have a dataframe like</p>
<pre><code> ID_0 ID_1 ID_2
0 a b 1
1 a c 1
2 a b 0
3 d c 0
4 a c 0
5 a c 1
</code></pre>
<p>I would like to groupby ['ID_0','ID_1'] and produce a new dataframe which has the sum of the ID_2 values for each group divided by the number of rows in each group.</p>
<pre><code>grouped = df.groupby(['ID_0', 'ID_1'])
print grouped.agg({'ID_2': np.sum}), "\n", grouped.size()
</code></pre>
<p>gives</p>
<pre><code> ID_2
ID_0 ID_1
a b 1
c 2
d c 0
ID_0 ID_1
a b 2
c 3
d c 1
dtype: int64
</code></pre>
<p>How can I get the new dataframe with the np.sum values divided by the size() values?</p>
| 1 | 2016-09-28T18:28:39Z | 39,755,183 | <p>Use <code>groupby.apply</code> instead:</p>
<pre><code>df.groupby(['ID_0', 'ID_1']).apply(lambda x: x['ID_2'].sum()/len(x))
ID_0 ID_1
a b 0.500000
c 0.666667
d c 0.000000
dtype: float64
</code></pre>
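<p>As a side note, the sum divided by the group size is just the per-group mean, so this particular calculation can also be written as <code>df.groupby(['ID_0', 'ID_1'])['ID_2'].mean()</code>, which gives the same values.</p>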
| 1 | 2016-09-28T18:34:16Z | [
"python",
"pandas"
]
|
List in column Python + Pandas | 39,755,131 | <p>I'm new to pandas and would like to analyse some data arranged like this:</p>
<pre><code>label aa bb
index
0 [2, 5, 1, 4] [x1, x2, y1, z1]
1 [3, 3, 19] [x3, x4, y2]
2 [6, 4, 2, 8, 9, 10] [y1, y2, z3, z4, x1, w]
</code></pre>
<p>in which x1,x2,x3,x4 are of type M; y1,y2 are of type N; and z1,z2,z3,z4 are of type O. Note that data[2,'bb'] is w, which does not belong to any type. This relationship is defined in mongodb as follows</p>
<pre><code>{'_id' : ObjectId(x1), type : 'M'}
{'_id' : ObjectId(y1), type : 'N'}
{'_id' : ObjectId(z1), type : 'O'}...
db.data.find({'_id' : ObjectId(w)}) is null
</code></pre>
<p>The desired output would be like this:</p>
<pre><code>label sum_M sum_N sum_O
index
0 7 1 4
1 6 19 0
2 9 10 10
</code></pre>
<p>Does anyone know how to do this with pandas?</p>
| 1 | 2016-09-28T18:31:32Z | 39,755,863 | <p>Pandas works best when your data are in a table format and individual cells contain values, not collections. To use pandas effectively for your problem, you need to change the way you create your data table. </p>
<p>Ultimately, it looks like you want to generate a table with columns representing object "id", "amount", and "numbering".</p>
<pre><code> id amount numbering
0 abc 2 x1
1 abc 5 x2
2 abc 1 y1
3 abc 4 z1
4 def 3 x3
etc.
</code></pre>
<p>To create this table, you can probably use a list of dictionaries, each dictionary containing the data for a row in your table, e.g.:</p>
<pre><code>{'id':'abc', 'amount': 2, 'numbering':'x1'}
</code></pre>
<p>You can construct a pandas DataFrame from this list: <a href="http://stackoverflow.com/questions/20638006/convert-list-of-dictionaries-to-dataframe">Convert list of dictionaries to Dataframe</a></p>
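<p>A minimal sketch of that step (the variable names here are just illustrative):</p>
<pre><code>import pandas as pd

rows = [{'id': 'abc', 'amount': 2, 'numbering': 'x1'},
        {'id': 'abc', 'amount': 5, 'numbering': 'x2'}]
data = pd.DataFrame(rows)
</code></pre>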
<p>Then you would add a column that represents the type associated with each "numbering" value:</p>
<pre><code>data['dbtype'] = data.numbering.map(lambda num: {'x':'M','y':'N','z':'O'}.get(num[0], 'None'))
</code></pre>
<p>Then you would use groupby:</p>
<pre><code>data.groupby('dbtype').sum()
</code></pre>
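<p>If you need one row per original <code>id</code> with one column per type, as in your desired output, one possible sketch (using the column names assumed above, not verified against your exact data) is:</p>
<pre><code>counts = data.groupby(['id', 'dbtype'])['amount'].sum().unstack(fill_value=0)
</code></pre>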
| 1 | 2016-09-28T19:14:09Z | [
"python",
"pandas"
]
|
How to insert string to each token of a list of strings? | 39,755,171 | <p>Lets assume I have the following list:</p>
<pre><code>l = ['the quick fox', 'the', 'the quick']
</code></pre>
<p>I would like to transform each element of the list into a url as follows:</p>
<pre><code>['<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>','<a href="http://url.com/fox">fox</a>', '<a href="http://url.com/the">the</a>','<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>']
</code></pre>
<p>So far, I tried the following:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a, a) for a in x[0].split(' ')]
</code></pre>
<p>The problem is that the above list-comprehension just does the work for the first element of the list:</p>
<pre><code>['<a href="http://url.com/the">the</a>',
'<a href="http://url.com/quick">quick</a>',
'<a href="http://url.com/fox">fox</a>']
</code></pre>
<p>I also tried with a <code>map</code> but, it didn't work:</p>
<pre><code>[map('<a href="http://url.com/{}">{}</a>'.format(a,a),x) for a in x[0].split(', ')]
</code></pre>
<p>Any idea of how to create such links from the tokens of a list of sentences?</p>
| 4 | 2016-09-28T18:33:46Z | 39,755,220 | <p>You were close, you limited your comprehension to the contents of <code>x[0].split</code>, i.e you were missing one <code>for</code> loop through the elements of <code>l</code>:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a,a) for x in l for a in x.split()]
</code></pre>
<p>This also handles the single-word entries such as <code>'the'</code>, because <code>"the".split()</code> still yields a one-element list.</p>
<p>This can look <em>way prettier</em> if you define the format string outside the comprehension and use a positional index <code>{0}</code> informing <code>format</code> of the argument (so you don't need to do <code>format(a, a)</code>):</p>
<pre><code>fs = '<a href="http://url.com/{0}">{0}</a>'
list_words = [fs.format(a) for x in l for a in x.split()]
</code></pre>
<p>With <code>map</code> you can get an ugly little duckling too if you like:</p>
<pre><code>list(map(fs.format, sum(map(str.split, l),[])))
</code></pre>
<p>Here <code>sum(it, [])</code> flattens the list of lists that <code>map(str.split, l)</code> produces, and then <code>fs.format</code> is mapped over the flattened list. The results are the same:</p>
<pre><code>['<a href="http://url.com/the">the</a>',
'<a href="http://url.com/quick">quick</a>',
'<a href="http://url.com/fox">fox</a>',
'<a href="http://url.com/the">the</a>',
'<a href="http://url.com/the">the</a>',
'<a href="http://url.com/quick">quick</a>']
</code></pre>
<p>Go with the comprehension, <em>obviously</em>.</p>
| 5 | 2016-09-28T18:36:25Z | [
"python",
"string",
"python-3.x",
"list-comprehension"
]
|
How to insert string to each token of a list of strings? | 39,755,171 | <p>Lets assume I have the following list:</p>
<pre><code>l = ['the quick fox', 'the', 'the quick']
</code></pre>
<p>I would like to transform each element of the list into a url as follows:</p>
<pre><code>['<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>','<a href="http://url.com/fox">fox</a>', '<a href="http://url.com/the">the</a>','<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>']
</code></pre>
<p>So far, I tried the following:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a, a) for a in x[0].split(' ')]
</code></pre>
<p>The problem is that the above list-comprehension just does the work for the first element of the list:</p>
<pre><code>['<a href="http://url.com/the">the</a>',
'<a href="http://url.com/quick">quick</a>',
'<a href="http://url.com/fox">fox</a>']
</code></pre>
<p>I also tried with a <code>map</code> but, it didn't work:</p>
<pre><code>[map('<a href="http://url.com/{}">{}</a>'.format(a,a),x) for a in x[0].split(', ')]
</code></pre>
<p>Any idea of how to create such links from the tokens of a list of sentences?</p>
| 4 | 2016-09-28T18:33:46Z | 39,755,230 | <pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a,a) for item in l for a in item.split(' ')]
</code></pre>
| 2 | 2016-09-28T18:36:55Z | [
"python",
"string",
"python-3.x",
"list-comprehension"
]
|
How to insert string to each token of a list of strings? | 39,755,171 | <p>Lets assume I have the following list:</p>
<pre><code>l = ['the quick fox', 'the', 'the quick']
</code></pre>
<p>I would like to transform each element of the list into a url as follows:</p>
<pre><code>['<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>','<a href="http://url.com/fox">fox</a>', '<a href="http://url.com/the">the</a>','<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>']
</code></pre>
<p>So far, I tried the following:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a, a) for a in x[0].split(' ')]
</code></pre>
<p>The problem is that the above list-comprehension just does the work for the first element of the list:</p>
<pre><code>['<a href="http://url.com/the">the</a>',
'<a href="http://url.com/quick">quick</a>',
'<a href="http://url.com/fox">fox</a>']
</code></pre>
<p>I also tried with a <code>map</code> but, it didn't work:</p>
<pre><code>[map('<a href="http://url.com/{}">{}</a>'.format(a,a),x) for a in x[0].split(', ')]
</code></pre>
<p>Any idea of how to create such links from the tokens of a list of sentences?</p>
| 4 | 2016-09-28T18:33:46Z | 39,755,246 | <p>In <strong>one-liner</strong>:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a,a) for a in [i for sub in [i.split() for i in l] for i in sub]]
</code></pre>
<p><strong>In steps</strong></p>
<p>You can split the list:</p>
<pre><code>l = [i.split() for i in l]
</code></pre>
<p>and then flatten it:</p>
<pre><code>l = [i for sub in l for i in sub]
</code></pre>
<p>result:</p>
<pre><code>>>> l
['the', 'quick', 'fox', 'the', 'the', 'quick']
</code></pre>
<p>Then:</p>
<pre><code>list_words = ['<a href="http://url.com/{}">{}</a>'.format(a,a) for a in l]
</code></pre>
<p>You will finally get:</p>
<pre><code>>>> list_words
['<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>', '<a href="http://url.com/fox">fox</a>', '<a href="http://url.com/the">the</a>', '<a href="http://url.com/the">the</a>', '<a href="http://url.com/quick">quick</a>']
</code></pre>
| 2 | 2016-09-28T18:37:51Z | [
"python",
"string",
"python-3.x",
"list-comprehension"
]
|
How to remove window frame from program using Tkinter and Matplotlib | 39,755,216 | <p>I am writing a graphical program in Python for my Raspberry Pi project. I have started writing it using Tkinter and wish to use the Matplotlib tools. </p>
<p>Due to limited screen space and the purpose of the project, I want it to be fullscreen without a window frame and menubar showing. Normally I use the following command:</p>
<pre><code>app.overrideredirect(1)
</code></pre>
<p>This works great until I import Matplotlib. Once I do that, the window frame appears again even with the above line of code. </p>
<p>How can I get Matplotlib to not show the window frame or menubars and be completely fullscreen?</p>
<p>Thanks!</p>
| 0 | 2016-09-28T18:36:17Z | 39,777,743 | <p>I found the problem. The example code I followed involved using the command canvas.show() along with canvas.get_tk_widget().grid(...). The canvas.show() was not needed and caused it to override the app.overrideredirect(1) command.</p>
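<p>For reference, a minimal sketch of embedding the canvas without <code>canvas.show()</code> (the module and variable names are assumptions based on the standard matplotlib/Tk embedding example, not the original program; on Python 2 the module is <code>Tkinter</code>):</p>
<pre><code>import tkinter as tk
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

app = tk.Tk()
app.overrideredirect(1)                       # remove the window frame
fig = Figure()
canvas = FigureCanvasTkAgg(fig, master=app)
canvas.draw()                                 # draw() is enough; show() is not needed
canvas.get_tk_widget().grid(row=0, column=0)
app.mainloop()
</code></pre>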
| 0 | 2016-09-29T18:41:28Z | [
"python",
"matplotlib",
"tkinter",
"fullscreen"
]
|
Python append column header & append column values from list to csv | 39,755,232 | <p>I am trying to append column header (hard-coded) and append column values from list to an existing csv. I am not getting the desired result. </p>
<p>Method 1 is appending results on an existing csv file. Method 2 clones a copy of existing csv into temp.csv. Both methods don't get me the desired output I am looking for. In Results 1, it just appends after the last row cell. In results 2, all list values append on each row. Expected results is what I am looking for. </p>
<p>I have included my code below. Appreciate any input or guidance. </p>
<p><strong>Existing CSV Test.csv</strong></p>
<pre><code>Type,Id,TypeId,CalcValues
B,111K,111Kequity(long) 111K,116.211768
C,111N,B(long) 111N,0.106559957
B,111J,c(long) 111J,20.061634
</code></pre>
<p><strong>Code - Method 1 & 2</strong></p>
<pre><code>final_results = ['0.1065599566767107', '0.0038113334533441123', '20.061623176440904']
# Method1
csvfile = "test.csv"
with open(csvfile, "a") as output:
writer = csv.writer(output, lineterminator='\n')
for val in final_results:
writer.writerow([val])
# Method2
with open("test.csv", 'rb') as input, open('temp.csv', 'wb') as output:
reader = csv.reader(input, delimiter = ',')
writer = csv.writer(output, delimiter = ',')
all = []
row = next(reader)
row.insert(5, 'Results')
all.append(row)
for row in reader:
for i in final_results:
print type(i)
row.insert(5, i)
all.append(row)
writer.writerows(all)
</code></pre>
<p><strong>Results for Method 1</strong></p>
<pre><code>Type,Id,TypeId,CalcValues
B,111K,111Kequity(long) 111K,116.211768
C,111N,B(long) 111N,0.106559957
B,111J,c(long) 111J,20.0616340.1065599566767107
0.0038113334533441123
20.061623176440904
</code></pre>
<p><strong>Results for Method 2</strong></p>
<pre><code>Type,Id,TypeId,CalcValues,Results
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107,20.061623176440904,0.0038113334533441123
C,111N,B(long) 111N,0.106559957,0.1065599566767107,20.061623176440904,0.0038113334533441123
B,111J,c(long) 111J,20.061634,0.1065599566767107,20.061623176440904,0.0038113334533441123
</code></pre>
<p><strong>Expected Result</strong></p>
<pre><code>Type,Id,TypeId,CalcValues,ID
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107
C,111N,B(long) 111N,0.106559957,20.061623176440904
B,111J,c(long) 111J,20.061634,0.0038113334533441123
</code></pre>
| 2 | 2016-09-28T18:37:03Z | 39,755,404 | <p>First method is bound to fail: you don't want to add new lines but new columns. So back to second method:</p>
<p>You insert the title line OK, but then you loop through <em>all</em> of the results for every row, whereas you need to take just one result per row.</p>
<p>For this, I create an iterator from the <code>final_results</code> list (with <code>__iter__()</code>), then call <code>next(it)</code> for each row and append the value (no need to insert at a specific position, just append).</p>
<p>I removed the <code>all</code> big list, because 1) you can write one line at a time, saves memory, and 2) <code>all</code> is a predefined function. Avoid to use that as a variable.</p>
<pre><code>final_results = ['0.1065599566767107', '0.0038113334533441123', '20.061623176440904']
# Method2
with open("test.csv", 'rb') as input, open('temp.csv', 'wb') as output:
reader = csv.reader(input, delimiter = ',')
writer = csv.writer(output, delimiter = ',')
row = next(reader) # read title line
row.append("Results")
writer.writerow(row) # write enhanced title line
it = final_results.__iter__() # create an iterator on the result
for row in reader:
if row: # avoid empty lines that usually lurk undetected at the end of the files
try:
row.append(next(it)) # add a result to current row
except StopIteration:
row.append("N/A") # not enough results: pad with N/A
writer.writerow(row)
</code></pre>
<p>result:</p>
<pre><code>Type,Id,TypeId,CalcValues,Results
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107
C,111N,B(long) 111N,0.106559957,0.0038113334533441123
B,111J,c(long) 111J,20.061634,20.061623176440904
</code></pre>
<p>Note: had we included <code>"Results"</code> in the <code>final_results</code> variable, we wouldn't even have needed to process the first line differently.</p>
<p>Note 2: the values seem wrong: <code>final_results</code> does not appear to be in the same order as the expected output. Also, the <code>Results</code> column header has turned into <code>ID</code> there, but both are easy to correct.</p>
| 1 | 2016-09-28T18:46:40Z | [
"python",
"list",
"loops",
"csv",
"parsing"
]
|
Python append column header & append column values from list to csv | 39,755,232 | <p>I am trying to append column header (hard-coded) and append column values from list to an existing csv. I am not getting the desired result. </p>
<p>Method 1 is appending results on an existing csv file. Method 2 clones a copy of existing csv into temp.csv. Both methods don't get me the desired output I am looking for. In Results 1, it just appends after the last row cell. In results 2, all list values append on each row. Expected results is what I am looking for. </p>
<p>I have included my code below. Appreciate any input or guidance. </p>
<p><strong>Existing CSV Test.csv</strong></p>
<pre><code>Type,Id,TypeId,CalcValues
B,111K,111Kequity(long) 111K,116.211768
C,111N,B(long) 111N,0.106559957
B,111J,c(long) 111J,20.061634
</code></pre>
<p><strong>Code - Method 1 & 2</strong></p>
<pre><code>final_results = ['0.1065599566767107', '0.0038113334533441123', '20.061623176440904']
# Method1
csvfile = "test.csv"
with open(csvfile, "a") as output:
writer = csv.writer(output, lineterminator='\n')
for val in final_results:
writer.writerow([val])
# Method2
with open("test.csv", 'rb') as input, open('temp.csv', 'wb') as output:
reader = csv.reader(input, delimiter = ',')
writer = csv.writer(output, delimiter = ',')
all = []
row = next(reader)
row.insert(5, 'Results')
all.append(row)
for row in reader:
for i in final_results:
print type(i)
row.insert(5, i)
all.append(row)
writer.writerows(all)
</code></pre>
<p><strong>Results for Method 1</strong></p>
<pre><code>Type,Id,TypeId,CalcValues
B,111K,111Kequity(long) 111K,116.211768
C,111N,B(long) 111N,0.106559957
B,111J,c(long) 111J,20.0616340.1065599566767107
0.0038113334533441123
20.061623176440904
</code></pre>
<p><strong>Results for Method 2</strong></p>
<pre><code>Type,Id,TypeId,CalcValues,Results
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107,20.061623176440904,0.0038113334533441123
C,111N,B(long) 111N,0.106559957,0.1065599566767107,20.061623176440904,0.0038113334533441123
B,111J,c(long) 111J,20.061634,0.1065599566767107,20.061623176440904,0.0038113334533441123
</code></pre>
<p><strong>Expected Result</strong></p>
<pre><code>Type,Id,TypeId,CalcValues,ID
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107
C,111N,B(long) 111N,0.106559957,20.061623176440904
B,111J,c(long) 111J,20.061634,0.0038113334533441123
</code></pre>
| 2 | 2016-09-28T18:37:03Z | 39,755,652 | <pre><code>import csv
HEADER = "Type,Id,TypeId,CalcValues,ID"
final_results = ['0.1065599566767107', '20.061623176440904', '0.0038113334533441123']
with open("test.csv") as inputs, open("tmp.csv", "wb") as outputs:
reader = csv.reader(inputs, delimiter=",")
writer = csv.writer(outputs, delimiter=",")
reader.next() # ignore header line
writer.writerow(HEADER.split(","))
for row in reader:
writer.writerow(row + [final_results.pop(0)])
</code></pre>
<p>I store the header fields in <code>HEADER</code> and swap the 2nd and 3rd elements of <code>final_results</code>, then use <code>pop(0)</code> to remove and return the first element of <code>final_results</code> for each data row.</p>
<p>output:</p>
<pre><code>Type,Id,TypeId,CalcValues,ID
B,111K,111Kequity(long) 111K,116.211768,0.1065599566767107
C,111N,B(long) 111N,0.106559957,20.061623176440904
B,111J,c(long) 111J,20.061634,0.0038113334533441123
</code></pre>
| 1 | 2016-09-28T19:01:00Z | [
"python",
"list",
"loops",
"csv",
"parsing"
]
|
Can Pywinauto Track or Log Error with an Application? | 39,755,271 | <p>Is there any way to track or log error in Pywinauto (eg a pop-up window does not appear, etc.) ? I am trying to track if a window opens correctly or not. I am also trying to verify values in an Excel Worksheet. Is this possible ? Oh! Yes, I am a newbie to Python and Pywinauto. Thanks for your assistance !!</p>
| 0 | 2016-09-28T18:39:18Z | 39,756,392 | <p>For working with MS Excel I would recommend using standard <code>win32com.client</code> module (it's included into ActivePython, or pyWin32 extensions can be installed by <code>pip install pypiwin32</code> for example). Almost every Microsoft application has nice <code>IDispatch</code> COM interface. By the way standard docs example shows MS Excel usage. ;)</p>
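<p>For the Excel part of the question, a minimal sketch of reading a cell value over COM (the file path and cell address are just placeholders):</p>
<pre><code>import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\path\to\workbook.xlsx")  # placeholder path
value = wb.Worksheets(1).Cells(1, 1).Value              # cell A1 of the first sheet
wb.Close(False)                                         # close without saving
excel.Quit()
</code></pre>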
<p>For handling window opening pywinauto contains <code>.Wait('ready')</code> method for the window specification. So something like that should work or raise an exception in case of failure:</p>
<pre><code>app.MainWindowTitle.Wait('ready') # 'ready' == 'exists visible enabled'
# or
app.Window_(title_re='^some regular expr - .*$', class_name='#32770').Wait('visible enabled')
</code></pre>
<p>You can do the same if the window is closing:</p>
<pre><code>app.SomeDialog.WaitNot('exists', timeout=20) # default or implicit timeout is 5 sec.
</code></pre>
<p>If you need <code>bool</code> return value instead of raising an exception, then use methods <code>.Visible()</code>, <code>.Exists()</code>, <code>.Enabled()</code> and <code>.IsActive()</code>.</p>
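<p>For example (the dialog name is illustrative):</p>
<pre><code>if app.SomeDialog.Exists(timeout=10):
    print("window opened correctly")
else:
    print("window did not appear")
</code></pre>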
| 1 | 2016-09-28T19:45:08Z | [
"python",
"pywinauto"
]
|
Python code for ABAQUS to create helix | 39,755,275 | <p>I have this code I have found online that creates a helix when run in ABAQUS. I am trying to understand the logic behind it to perhaps customize it to the size of my helix.</p>
<p>I have added comments above the lines of code that I understand.</p>
<pre><code>#######################
# Imports controls from abaqus
from abaqus import *
from abaqusConstants import *
# Defining helix dimensions
width = 20.0
height = 0.05
origin = (15.0, 0.0)
pitch = 50.0
numTurns = 1.0
# Creating sketch in abaqus under name 'rect' and sheetsize of 200
s = mdb.models['Model-1'].ConstrainedSketch(name='rect', sheetSize=200.0)
# No idea. What does .geometry return?
g = s.geometry
# No idea
s.setPrimaryObject(option=STANDALONE)
# Creating a line from point1 to point2, why not use .Line?
cl = s.ConstructionLine(point1=(0.0, -100.0), point2=(0.0, 100.0))
# No idea as I don't know what is stored in g (adding constraints but where?
s.FixedConstraint(entity=g[2])
s.FixedConstraint(entity=g[cl.id])
# Creating rectangle from point1 to point2
s.rectangle(point1=(origin[0], origin[1]), point2=(origin[0]+width, origin[1]+height))
# Creating Part-1 3D Deformable
p = mdb.models['Model-1'].Part(name='Part-1', dimensionality=THREE_D,
type=DEFORMABLE_BODY)
p = mdb.models['Model-1'].parts['Part-1']
p.BaseSolidRevolve(sketch=s, angle=numTurns*360.0, flipRevolveDirection=OFF,
pitch=pitch, flipPitchDirection=OFF, moveSketchNormalToPath=OFF)
#In above command try changing the following member: moveSketchNormalToPath=ON
s.unsetPrimaryObject()
session.viewports['Viewport: 1'].setValues(displayedObject=p)
</code></pre>
<p>Could someone elaborate the logic behind this?</p>
| 0 | 2016-09-28T18:39:29Z | 39,852,890 | <p>This code creates a Construction(!!) line from point1 to point2, around this line your helix will construct:</p>
<p><code>cl = s.ConstructionLine(point1=(0.0, -100.0), point2=(0.0, 100.0))</code></p>
<p>This code revolves your sketch (the rectangle) around your construction line with the defined pitch and number of turns:</p>
<p><code>p.BaseSolidRevolve(sketch=s, angle=numTurns*360.0, flipRevolveDirection=OFF,
pitch=pitch, flipPitchDirection=OFF, moveSketchNormalToPath=OFF)</code></p>
| 0 | 2016-10-04T12:53:40Z | [
"python",
"abaqus"
]
|
Understanding Django Q - Dynamic | 39,755,289 | <p>I'm reading <a href="http://www.michelepasin.org/blog/2010/07/20/the-power-of-djangos-q-objects/" rel="nofollow">this article</a> on dynamically generating Q objects. I understand (for the most part) Q objects but I'm not understanding how the author specifically is doing this example:</p>
<pre><code># string representation of our queries
>>> predicates = [('question__contains', 'dinner'), ('question__contains', 'meal')]
# create the list of Q objects and run the queries as above..
>>> q_list = [Q(x) for x in predicates]
>>> Poll.objects.filter(reduce(operator.or_, q_list))
[<Poll: what shall I make for dinner>, <Poll: what is your favourite meal?>]
</code></pre>
<p>What I specifically don't get is the list comprehension. A <code>Q</code> object is formatted with arbitrary keywords arguments as such <code>Q(question__contains='dinner')</code>. </p>
<p>If doing it like the author suggests with the list comprehension, won't that effectively just place a tuple inside a <code>Q</code> object on each iteration? Like such: <code>Q(('question__contains', 'dinner'))</code>. </p>
<p>I'm not sure how this code produces a correctly formatted <code>Q</code> object. </p>
| 2 | 2016-09-28T18:40:16Z | 39,755,862 | <p>The article relies on the undocumented feature that <code>Q()</code> accepts args as well as kwargs.</p>
<p>If you look at <a href="https://github.com/django/django/blob/9c522d2ed8e752932bfff62d6e2940e56dee700b/django/db/models/query_utils.py#L53" rel="nofollow">source code for the <code>Q</code> class</a>, you can see that it does the following in the <code>__init__</code> method.</p>
<pre><code>class Q(tree.Node):
...
def __init__(self, *args, **kwargs):
super(Q, self).__init__(children=list(args) + list(kwargs.items()))
</code></pre>
<p>If you call <code>Q(question__contains='dinner')</code> then <code>args</code> is an empty tuple <code>()</code> and <code>kwargs</code> is the dictionary <code>{'question__contains': 'dinner'}</code>. In the <code>super()</code> call, the <code>children</code> variable is</p>
<pre><code>children = list(args) + list(kwargs.items())
</code></pre>
<p>which evaluates to</p>
<pre><code>children = list(()) + list([('question__contains', 'dinner')])
</code></pre>
<p>which simplifies to</p>
<pre><code>children = [('question__contains', 'dinner')]
</code></pre>
<p>Note that you could also get this result if you use <code>Q(('question__contains', 'dinner'))</code>. In this case, <code>args</code> is a tuple <code>(('question__contains', 'dinner'),)</code> and <code>kwargs</code> is an empty dictionary <code>{}</code>.</p>
<p>In the <code>super()</code> call, the <code>children</code> variable evaluates to</p>
<pre><code>children = list((('question__contains', 'dinner'),)) + list([])
</code></pre>
<p>which simplifies to the same result as before,</p>
<pre><code>children = [('question__contains', 'dinner')]
</code></pre>
<p>We have shown that <code>Q(question__contains='dinner')</code> is equivalent to <code>Q(('question__contains', 'dinner'))</code>, and therefore you can generate the list of <code>Q()</code> objects by looping over a list of 2-tuples in a list comprehension.</p>
<pre><code>>>> predicates = [('question__contains', 'dinner'), ('question__contains', 'meal')]
>>> q_list = [Q(x) for x in predicates]
</code></pre>
<p>Personally, I might prefer to write</p>
<pre><code>>>> predicates = [{'question__contains': 'dinner'}, {'question__contains': 'meal'}]
>>> q_list = [Q(**kwargs) for kwargs in predicates]
</code></pre>
<p>This way, you are not relying on the behaviour of the <code>__init__</code> method of the <code>Q</code>.</p>
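<p>Either way, the resulting list is combined exactly as in the article:</p>
<pre><code>import operator
from functools import reduce  # built-in on Python 2, needs this import on Python 3

Poll.objects.filter(reduce(operator.or_, q_list))
</code></pre>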
| 4 | 2016-09-28T19:14:09Z | [
"python",
"django",
"list-comprehension"
]
|
Find and replace texts in all files from the text file input using Python in Notepad++ | 39,755,291 | <p>I'm using Notepad++ to do a find and replace function. Currently I have a huge number of text files, and I need to do replacements of different strings in these files. I want to do it in batch. For example:</p>
<p>I have a folder that holds the huge number of text files. I have another text file that lists the strings to find and replace, in order:</p>
<p>Text1 Text1-corrected</p>
<p>Text2 Text2-corrected</p>
<p>I have a small script that does this replacement, but only for the files currently open in Notepad++. For this I'm using a <strong>python script</strong> in <strong>Notepad++</strong>. The code is as follows.</p>
<pre><code> with open('C:/replace.txt') as f:
for l in f:
s = l.split()
editor.replace(s[0], s[1])
</code></pre>
<p>In simple words, the find and replace function should fetch the input from a file.</p>
<p>Thanks in advance. </p>
| 0 | 2016-09-28T18:40:20Z | 39,756,091 | <pre><code>with open('replace.txt') as f:
replacements = [tuple(line.split()) for line in f]
for filename in filenames:
    with open(filename) as f:
        contents = f.read()
    for old, new in replacements:
        contents = contents.replace(old, new)
    with open(filename, 'w') as f:
        f.write(contents)
</code></pre>
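<p>Here <code>filenames</code> is assumed to be the list of text files you want to process; it is not defined above, so you could build it yourself, for example with <code>glob</code>:</p>
<pre><code>import glob
filenames = glob.glob('C:/my_folder/*.txt')  # the folder path is just a placeholder
</code></pre>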
<p>Read the replacements into a list of tuples, then go through each file: read its contents into memory, apply the replacements, and reopen the file in write mode to write the result back, so the original contents are overwritten with the replaced text.</p>
| 0 | 2016-09-28T19:28:43Z | [
"python",
"replace",
"find",
"notepad++"
]
|
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 55: character maps to <undefined> | 39,755,301 | <p>I am new to Python and am hoping that someone could please explain to me what the error message means. </p>
<p>To be specific, I have some Python and SPSS code combined together, saved in Atom, which was created by a former colleague. Since that colleague is no longer here, I now need to run the code myself. What I did was run the code below from SPSS 22. </p>
<pre><code> begin program.
import spss,spssaux,imp
abcvalid = imp.load_source('abcvalid', "I:/VALIDITY CHECK/Python Library/2016/abcvalid2016.py")
import abcvalid
abcvalid.fullprocess("9_26_2016","M:/Users/Yli\2016 SURVEY/DOWNLOADS/9_26_2016/","M:/Users/Yli/2016 SURVEY/Legacy15.sav")
end program.
</code></pre>
<p>Then I got the following from the output.</p>
<pre><code> Traceback (most recent call last):
File "<string>", line 5, in <module>
File "I:/VALIDITY CHECK/Python Library/2016/abcnvalid2016.py", line 2067, in fullprocess
dataprep(date,filepath,legacypath)
File "I:/VALIDITY CHECK/Python Library/2016/abcvalid2016.py", line 2006, in dataprep
emailslower(date,filepath)
File "I:/VALIDITY CHECK/Python Library/2016/abcvalid2016.py", line 1635, in emailslower
DATASET ACTIVATE comment_data.""".format(date,filepath))
File "C:\PROGRA~1\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\spss.py", line 1494, in Submit
cmdList = spssutil.CheckStr(cmdList)
File "C:\PROGRA~1\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\spssutil.py", line 166, in CheckStr
s1 = unicode(mystr,locale.getlocale(locale.LC_CTYPE)[1])
File "C:\Program Files\IBM\SPSS\Statistics\22\Python\lib\encodings\cp1252.py", line 15, in decode
return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 55: character maps to <undefined>
</code></pre>
<p>I know there are similar questions on this site, but the questions and answers were too hard for me to comprehend. If someone could please help me, I'd really appreciate it!</p>
<p>Thank you in advance!</p>
| 0 | 2016-09-28T18:40:53Z | 39,771,447 | <p>It's hard to be sure about what is going on here as there is a lot of code off stage, but the error message is telling you that there is an invalid character in the input stream. Code x81 is undefined in code page 1252, which is the code page in effect. That's the western Europe/US default code page. The program is trying to convert a presumed code-page string to Unicode, so that fails.</p>
<p>My guess is that the input is actually not encoded with cp 1252. Something is messed up in the Statistics current code page or with Unicode mode. You might need to set the SPSS Statistics locale to something different or to turn Unicode mode on or off. See SET LOCALE and SET UNICODE in the Command Syntax Reference on how to do this.</p>
<p>If you can say more about your locale and what this code is doing, we might be able to provide more information.</p>
| 0 | 2016-09-29T13:18:37Z | [
"python",
"syntax-error",
"decode",
"spss"
]
|
Beautiful Soup: extracting tagged and untagged HTML text | 39,755,346 | <p>As a novice with bs4 I'm looking for some help in working out how to extract the text from a series of webpage tables, one of which is like this:</p>
<pre><code><table style="padding:0px; margin:1px" width="715px">
<tr>
<td height="22" width="33%" >
<span class="darkGreenText"><strong> Name: </strong></span>
Tyto alba
</td>
<td height="22" width="33%" >
<span class="darkGreenText"><strong> Order: </strong></span>
Strigiformes
</td>
<td height="22" width="33%">
<span class="darkGreenText"><strong> Family: </strong></span>
Tytonidae
</td>
<td height="22" width="66%" colspan="2">
<span class="darkGreenText"><strong> Status: </strong></span>
Least Concern
</td>
</tr>
</table>
</code></pre>
<p>Desired output:</p>
<pre><code>Name: Tyto alba
Order: Strigiformes
Family: Tytonidae
Status: Least Concern
</code></pre>
<p>I've tried using <code>[index]</code> as recommended (<a href="http://stackoverflow.com/a/35050622/1726290">http://stackoverflow.com/a/35050622/1726290</a>),
and also <code>next_sibling</code> (<a href="http://stackoverflow.com/a/23380225/1726290">http://stackoverflow.com/a/23380225/1726290</a>) but I'm getting stuck as one part of the text I need is tagged and the second part is not. Any help would be appreciated.</p>
| 0 | 2016-09-28T18:43:29Z | 39,755,726 | <p>It seems like what you want is to call <code>get_text(strip=True)</code>(<a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text" rel="nofollow">docs</a>) on the BeautifulSoup Tag. Assuming <code>raw_html</code> is the html you pasted above:</p>
<pre><code>from bs4 import BeautifulSoup

htmlSoup = BeautifulSoup(raw_html)
for tag in htmlSoup.select('td'):
    print(tag.get_text(strip=True))
</code></pre>
<p>which prints:</p>
<pre><code>Name:Tyto alba
Order:Strigiformes
Family:Tytonidae
Status:Least Concern
</code></pre>
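<p>If you also want the space shown in the desired output, <code>get_text</code> accepts a separator as its first argument, e.g. <code>tag.get_text(" ", strip=True)</code>, which yields <code>Name: Tyto alba</code> and so on.</p>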
| 1 | 2016-09-28T19:05:59Z | [
"python",
"beautifulsoup"
]
|
How to modify my code to scrape these links? | 39,755,365 | <p>I am new to use python scrapy, and my scrapy version is 1.1.3. I want to get a link list in <a href="http://i.stack.imgur.com/ZrLsf.png" rel="nofollow">this part</a> on <a href="https://www.wikipedia.org/" rel="nofollow">https://www.wikipedia.org/</a>. How should I modify my code? </p>
<pre><code>import scrapy
class LinkSpider(scrapy.Spider):
name = "links"
start_urls = [
'https://www.wikipedia.org/',
]
def parse(self, response):
for link in response.xpath('//div/ul/li/a'):
yield{
'link': link.extract()
}
</code></pre>
<p>Above is my code in my project folder/spiders/spiders.py</p>
<p>What I get is </p>
<pre><code>[
{"link": "<a href=\"//de.wikipedia.org/\" lang=\"de\">Deutsch</a>"},
{"link": "<a href=\"//en.wikipedia.org/\" lang=\"en\" title=\"English\">English</a>"},
{"link": "<a href=\"//es.wikipedia.org/\" lang=\"es\">Espa\u00f1ol</a>"},
{"link": "<a href=\"//fr.wikipedia.org/\" lang=\"fr\">Fran\u00e7ais</a>"},
{"link": "<a href=\"//it.wikipedia.org/\" lang=\"it\">Italiano</a>"},
{"link": "<a href=\"//nl.wikipedia.org/\" lang=\"nl\">Nederlands</a>"},
{"link": "<a href=\"//ja.wikipedia.org/\" lang=\"ja\" title=\"Nihongo\">\u65e5\u672c\u8a9e</a>"},
{"link": "<a href=\"//pl.wikipedia.org/\" lang=\"pl\">Polski</a>"},
{"link": "<a href=\"//ru.wikipedia.org/\" lang=\"ru\" title=\"Russkiy\">\u0420\u0443\u0441\u0441\u043a\u0438\u0439</a>"},
{"link": "<a href=\"//ceb.wikipedia.org/\" lang=\"ceb\">Sinugboanong Binisaya</a>"}
]
</code></pre>
<p>and I expect something like a list only contains links like "//de.wikipedia.org/".</p>
| -2 | 2016-09-28T18:44:41Z | 39,757,101 | <p>You need to modify the xpath query to get the value of the attribute not the tag</p>
<pre><code>import scrapy
class LinkSpider(scrapy.Spider):
name = "links"
start_urls = [
'https://www.wikipedia.org/',
]
def parse(self, response):
for link in response.xpath('//div/ul/li/a/@href'):
yield{
'link': link.extract()
}
</code></pre>
| 0 | 2016-09-28T20:30:33Z | [
"python",
"scrapy"
]
|
How to modify my code to scrape these links? | 39,755,365 | <p>I am new to use python scrapy, and my scrapy version is 1.1.3. I want to get a link list in <a href="http://i.stack.imgur.com/ZrLsf.png" rel="nofollow">this part</a> on <a href="https://www.wikipedia.org/" rel="nofollow">https://www.wikipedia.org/</a>. How should I modify my code? </p>
<pre><code>import scrapy
class LinkSpider(scrapy.Spider):
name = "links"
start_urls = [
'https://www.wikipedia.org/',
]
def parse(self, response):
for link in response.xpath('//div/ul/li/a'):
yield{
'link': link.extract()
}
</code></pre>
<p>Above is my code in my project folder/spiders/spiders.py</p>
<p>What I get is </p>
<pre><code>[
{"link": "<a href=\"//de.wikipedia.org/\" lang=\"de\">Deutsch</a>"},
{"link": "<a href=\"//en.wikipedia.org/\" lang=\"en\" title=\"English\">English</a>"},
{"link": "<a href=\"//es.wikipedia.org/\" lang=\"es\">Espa\u00f1ol</a>"},
{"link": "<a href=\"//fr.wikipedia.org/\" lang=\"fr\">Fran\u00e7ais</a>"},
{"link": "<a href=\"//it.wikipedia.org/\" lang=\"it\">Italiano</a>"},
{"link": "<a href=\"//nl.wikipedia.org/\" lang=\"nl\">Nederlands</a>"},
{"link": "<a href=\"//ja.wikipedia.org/\" lang=\"ja\" title=\"Nihongo\">\u65e5\u672c\u8a9e</a>"},
{"link": "<a href=\"//pl.wikipedia.org/\" lang=\"pl\">Polski</a>"},
{"link": "<a href=\"//ru.wikipedia.org/\" lang=\"ru\" title=\"Russkiy\">\u0420\u0443\u0441\u0441\u043a\u0438\u0439</a>"},
{"link": "<a href=\"//ceb.wikipedia.org/\" lang=\"ceb\">Sinugboanong Binisaya</a>"}
]
</code></pre>
<p>and I expect something like a list only contains links like "//de.wikipedia.org/".</p>
| -2 | 2016-09-28T18:44:41Z | 39,772,293 | <p>You are missing couple of things,</p>
<ul>
<li>You need to add attribute @href to get the value</li>
<li><p>Each matched <code>href</code> value still comes back as a selector, so you need to call <code>extract()</code> on it to get the plain string.</p>
<pre><code>import scrapy
class LinkSpider(scrapy.Spider):
name = "links"
start_urls = ['https://www.wikipedia.org/', ]
def parse(self, response):
for link in response.xpath('//div/ul/li/a/@href'):
            yield {'link': link.extract()}
</code></pre></li>
</ul>
| 0 | 2016-09-29T13:53:57Z | [
"python",
"scrapy"
]
|