title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Move data from pyodbc to pandas | 39,835,770 | <p>I am querying a SQL database and I want to use pandas to process the data. However, I am not sure how to move the data. Below is my input and output. </p>
<pre><code>import pyodbc
import pandas
from pandas import DataFrame
cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\users\bartogre\desktop\CorpRentalPivot1.accdb;UID="";PWD="";')
crsr = cnxn.cursor()
for table_name in crsr.tables(tableType='TABLE'):
    print(table_name)
cursor = cnxn.cursor()
sql = "Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND"
cursor.execute(sql)
for data in cursor.fetchall():
    print(data)
</code></pre>
<hr>
<pre><code>('C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb', None, 'Data', 'TABLE', None)
('C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb', None, 'SFDB', 'TABLE', None)
(Decimal('78071898.71'), Decimal('82192672.29'), 'A')
(Decimal('12120663.79'), Decimal('13278814.52'), 'B')
</code></pre>
| 0 | 2016-10-03T16:01:04Z | 39,835,930 | <p>I was way over thinking this one!</p>
<pre><code>cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\users\bartogre\desktop\CorpRentalPivot1.accdb;UID="";PWD="";')
crsr = cnxn.cursor()
for table_name in crsr.tables(tableType='TABLE'):
    print(table_name)
cursor = cnxn.cursor()
sql = "Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND"
cursor.execute(sql)
data = cursor.fetchall()
print(data)
Data = pandas.DataFrame(data)
print(Data)
</code></pre>
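One detail worth adding to the snippet above: `pandas.DataFrame(data)` loses the column names, because pyodbc rows don't carry them into the constructor. The names are available from `cursor.description`. A minimal sketch of the same pattern, using an in-memory SQLite database so it runs anywhere (the table layout mirrors the question; the sample figures are invented):

```python
import sqlite3
import pandas

# Stand-in for the Access connection in the question: an in-memory
# SQLite database with the same table layout (values are made up).
cnxn = sqlite3.connect(":memory:")
cnxn.execute("CREATE TABLE data (BRAND TEXT, CYTM REAL, PYTM REAL)")
cnxn.executemany("INSERT INTO data VALUES (?, ?, ?)",
                 [("A", 1.5, 2.5), ("A", 3.0, 4.0), ("B", 5.0, 6.0)])

cursor = cnxn.cursor()
cursor.execute("SELECT SUM(CYTM) AS CYTM, SUM(PYTM) AS PYTM, BRAND "
               "FROM data GROUP BY BRAND")
rows = cursor.fetchall()

# cursor.description holds one 7-tuple per result column; index 0 is the name.
columns = [col[0] for col in cursor.description]
df = pandas.DataFrame(rows, columns=columns)
print(df)
```

The same `columns=` trick works unchanged with the pyodbc cursor from the question, since `cursor.description` is part of the DB-API.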
| 3 | 2016-10-03T16:10:51Z | [
"python",
"pandas",
"pyodbc"
]
|
Move data from pyodbc to pandas | 39,835,770 | <p>I am querying a SQL database and I want to use pandas to process the data. However, I am not sure how to move the data. Below is my input and output. </p>
<pre><code>import pyodbc
import pandas
from pandas import DataFrame
cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\users\bartogre\desktop\CorpRentalPivot1.accdb;UID="";PWD="";')
crsr = cnxn.cursor()
for table_name in crsr.tables(tableType='TABLE'):
    print(table_name)
cursor = cnxn.cursor()
sql = "Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND"
cursor.execute(sql)
for data in cursor.fetchall():
    print(data)
</code></pre>
<hr>
<pre><code>('C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb', None, 'Data', 'TABLE', None)
('C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb', None, 'SFDB', 'TABLE', None)
(Decimal('78071898.71'), Decimal('82192672.29'), 'A')
(Decimal('12120663.79'), Decimal('13278814.52'), 'B')
</code></pre>
| 0 | 2016-10-03T16:01:04Z | 39,839,426 | <p>Another, faster method: see <code>data = pd.read_sql(sql, cnxn)</code>.</p>
<pre><code>import pyodbc
import pandas as pd
from pandas import DataFrame
from pandas.tools import plotting
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)}; DBQ=C:\users\bartogre\desktop\data.mdb;UID="";PWD="";')
crsr = cnxn.cursor()
for table_name in crsr.tables(tableType='TABLE'):
    print(table_name)
cursor = cnxn.cursor()
sql = "Select *"
sql = sql + " From data"
print(sql)
cursor.execute(sql)
data = pd.read_sql(sql, cnxn)
</code></pre>
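One thing to note about the snippet above: `pd.read_sql(sql, cnxn)` executes the query itself, so the separate `cursor.execute(sql)` call is redundant. A sketch of the bare pattern with an in-memory SQLite database (table and column names follow the question; the row is invented):

```python
import sqlite3
import pandas as pd

cnxn = sqlite3.connect(":memory:")
cnxn.execute("CREATE TABLE data (BRAND TEXT, CYTM REAL, PYTM REAL)")
cnxn.execute("INSERT INTO data VALUES ('A', 1.0, 2.0)")

# read_sql runs the query and builds the frame in one step --
# no cursor and no fetchall() needed.
sql = "SELECT * FROM data"
data = pd.read_sql(sql, cnxn)
print(data)
```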
| 0 | 2016-10-03T19:57:14Z | [
"python",
"pandas",
"pyodbc"
]
|
len(unicode string) | 39,835,779 | <pre><code>>>> c='中文'
>>> c
'\xe4\xb8\xad\xe6\x96\x87'
>>> len(c)
6
>>> cu=u'中文'
>>> cu
u'\u4e2d\u6587'
>>> len(cu)
2
>>> s='𤭢'
>>> s
'\xf0\xa4\xad\xa2'
>>> len(s)
4
>>> su=u'𤭢'
>>> su
u'\U00024b62'
>>> len(su)
2
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.stdout.encoding
'UTF-8'
</code></pre>
<p>First, I want to make some concepts clear to myself.
I've learned that a unicode string like <code>cu=u'中文'</code> is actually encoded in UTF-16 by the Python shell by default. Right? <strong>So, when we see <code>'\u*'</code>, is that actually <code>UTF-16 encoding</code>? And is <code>'\u4e2d\u6587'</code> a unicode string or a byte string?</strong> But <code>cu</code> has to be stored in memory, so </p>
<pre><code>0100 1110 0010 1101 0110 0101 1000 0111
</code></pre>
<p>(converting \u4e2d\u6587 to binary) is the form in which <code>cu</code> is preserved, if it is a byte string? <strong>Am I right?</strong> </p>
<p>But it can't be a byte string. Otherwise <code>len(cu)</code> couldn't be 2; it would be 4!!
So it has to be a unicode string. <strong>BUT!!!</strong> I've also <a href="http://stackoverflow.com/questions/2596714/why-does-python-print-unicode-characters-when-the-default-encoding-is-ascii/21968640#21968640">learned</a> that </p>
<blockquote>
<p>python attempts to implicitly encode the Unicode string with whatever
scheme is currently set in sys.stdout.encoding, in this instance it's
"UTF-8".</p>
</blockquote>
<pre><code>>>> cu.encode('utf-8')
'\xe4\xb8\xad\xe6\x96\x87'
</code></pre>
<p>So! How could <code>len(cu)</code> == 2??? Is that because there are two <code>'\u'</code> escapes in it?</p>
<p>But that doesn't make sense for <code>len(su) == 2</code>!</p>
<p>Am I missing something? </p>
<p>I'm using python 2.7.12</p>
| 0 | 2016-10-03T16:01:33Z | 39,835,844 | <p>The Python <code>unicode</code> type holds <em>Unicode codepoints</em>, and is not meant to be an encoding. How Python does this internally is an implementation detail and not something you need to be concerned with most of the time. They are not UTF-16 code units, because UTF-16 is another codec you can use to encode Unicode text, just like UTF-8 is.</p>
<p>The most important thing here is that a standard Python <code>str</code> object holds <em>bytes</em>, which may or may not hold text encoded to a certain codec (your sample uses UTF-8 but that's not a given), and <code>unicode</code> holds <em>Unicode codepoints</em>. In an interactive interpreter session, it is the codec of your terminal that determines what bytes are received by Python (which then uses <code>sys.stdin.encoding</code> to decode these as needed when you create a <code>u'...'</code> <code>unicode</code> object).</p>
<p>Only when <em>writing to <code>sys.stdout</code></em> (say, when using <code>print</code>) does the <code>sys.stdout.encoding</code> value come to play, where Python will automatically encode your Unicode strings again. Only <em>then</em> will your 2 Unicode codepoints be encoded to UTF-8 again and written to your terminal, which knows how to interpret those.</p>
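The codepoints-versus-bytes distinction is easy to check directly. Here is the same experiment in Python 3, where <code>str</code> is the Unicode type and <code>bytes</code> holds encoded data (on the asker's narrow build of Python 2.7, <code>len(su)</code> is 2 because that build stores a supplementary character as a UTF-16 surrogate pair internally; Python 3 reports 1 codepoint):

```python
# The question's first string: two CJK codepoints (中文).
cu = '\u4e2d\u6587'
assert len(cu) == 2                                       # 2 codepoints
assert cu.encode('utf-8') == b'\xe4\xb8\xad\xe6\x96\x87'  # 6 bytes in UTF-8

# The question's second string: one codepoint outside the BMP (U+24B62).
su = '\U00024b62'
print(len(su))                       # 1 codepoint in Python 3
print(len(su.encode('utf-8')))       # 4 bytes in UTF-8
print(len(su.encode('utf-16-le')))   # 4 bytes: a UTF-16 surrogate pair
```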
<p>You probably want to read up about Python and Unicode, I recommend:</p>
<ul>
<li><p><a href="http://nedbatchelder.com/text/unipain.html" rel="nofollow">Pragmatic Unicode</a> by Ned Batchelder</p></li>
<li><p><a href="http://joelonsoftware.com/articles/Unicode.html" rel="nofollow">The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)</a> by Joel Spolsky</p></li>
<li><p>The <a href="http://docs.python.org/2/howto/unicode.html" rel="nofollow">Python Unicode HOWTO</a></p></li>
</ul>
| 0 | 2016-10-03T16:06:01Z | [
"python",
"python-2.7",
"unicode",
"encoding",
"utf-8"
]
|
array to tiff raster image with gdal | 39,835,794 | <p>Update </p>
<p>I am trying to follow <a href="https://earthlab.github.io/python/DEM-slope-aspect-python/" rel="nofollow">this tutorial</a>,</p>
<p>but I don't know how to export the slope/aspect results to new TIFF images with GDAL.</p>
<p>The full code:</p>
<pre><code>from __future__ import division
from osgeo import gdal
from matplotlib.colors import ListedColormap
from matplotlib import colors
import sys
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import math

filename = 'dem.tif'

def getResolution(rasterfn):
    raster = gdal.Open(rasterfn)
    geotransform = raster.GetGeoTransform()
    res = {"east-west": abs(geotransform[1]),
           "north-south": abs(geotransform[5])}
    return res

def raster2array(rasterfn):
    raster = gdal.Open(rasterfn)
    band = raster.GetRasterBand(1)
    return band.ReadAsArray()

def getNoDataValue(rasterfn):
    raster = gdal.Open(rasterfn)
    band = raster.GetRasterBand(1)
    return band.GetNoDataValue()

data_array = raster2array(filename)
nodataval = getNoDataValue(filename)
resolution = getResolution(filename)
print(resolution)
print(nodataval)
print(type(data_array))
print(data_array.shape)

num_rows = data_array.shape[0]
num_cols = data_array.shape[1]
slope_array = np.ones_like(data_array) * nodataval
aspect_array = np.ones_like(data_array) * nodataval

for i in range(1, num_rows - 1):
    for j in range(1, num_cols - 1):
        a = data_array[i - 1][j - 1]
        b = data_array[i - 1][j]
        c = data_array[i - 1][j + 1]
        d = data_array[i][j - 1]
        e = data_array[i][j]
        f = data_array[i][j + 1]
        g = data_array[i + 1][j - 1]
        h = data_array[i + 1][j]
        q = data_array[i + 1][j + 1]
        vals = [a, b, c, d, e, f, g, h, q]
        if nodataval in vals:
            all_present = False
        else:
            all_present = True
        if all_present == True:
            dz_dx = (c + (2 * f) + q - a - (2 * d) - g) / (8 * resolution['east-west'])
            dz_dy = (g + (2 * h) + q - a - (2 * b) - c) / (8 * resolution['north-south'])
            dz_dx_sq = math.pow(dz_dx, 2)
            dz_dy_sq = math.pow(dz_dy, 2)
            rise_run = math.sqrt(dz_dx_sq + dz_dy_sq)
            slope_array[i][j] = math.atan(rise_run) * 57.29578
            aspect = math.atan2(dz_dy, (-1 * dz_dx)) * 57.29578
            if aspect < 0:
                aspect_array[i][j] = 90 - aspect
            elif aspect > 90:
                aspect_array[i][j] = 360 - aspect + 90
            else:
                aspect_array[i][j] = 90 - aspect

hist, bins = np.histogram(slope_array, bins=100, range=(0, np.amax(slope_array)))
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
plt.xlabel('Slope (degrees)')
plt.ylabel('Frequency')
plt.show()

color_map = ListedColormap(['white', 'darkgreen', 'green', 'limegreen', 'lime',
                            'greenyellow', 'yellow', 'gold',
                            'orange', 'orangered', 'red'])
# range begins at negative value so that missing values are white
color_bounds = list(range(-3, math.ceil(np.amax(slope_array)), 1))
color_norm = colors.BoundaryNorm(color_bounds, color_map.N)

# Create the plot and colorbar
img = plt.imshow(slope_array, cmap=color_map, norm=color_norm)
cbar = plt.colorbar(img, cmap=color_map, norm=color_norm,
                    boundaries=color_bounds, ticks=color_bounds)

# Show the visualization
plt.axis('off')
plt.title("Slope (degrees)")
plt.show()
plt.close()
</code></pre>
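As an aside, the doubly nested loop in the code above can be expressed with NumPy array slices, which is much faster on large DEMs. A sketch of the same Horn stencil (variable names mirror the `a`..`q` neighbours in the loop; unlike the tutorial's loop, this sketch omits the nodata masking, so it assumes a DEM with no missing cells):

```python
import numpy as np

def slope_degrees(dem, res_ew, res_ns):
    """Horn's method on the interior cells, vectorized with array slices."""
    a = dem[:-2, :-2]; b = dem[:-2, 1:-1]; c = dem[:-2, 2:]
    d = dem[1:-1, :-2];                    f = dem[1:-1, 2:]
    g = dem[2:, :-2];  h = dem[2:, 1:-1];  q = dem[2:, 2:]
    dz_dx = (c + 2 * f + q - a - 2 * d - g) / (8.0 * res_ew)
    dz_dy = (g + 2 * h + q - a - 2 * b - c) / (8.0 * res_ns)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Sanity check: a plane rising 1 unit per cell east-west has a 45-degree slope.
plane = np.tile(np.arange(6, dtype=float), (5, 1))
print(slope_degrees(plane, 1.0, 1.0))
```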
<p>For my first export attempt I followed this <a href="https://pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html" rel="nofollow">tutorial</a>, but the slope is not correct:
it has lower degrees than the original (computed with a GIS program).</p>
<pre><code>import gdal, ogr, os, osr
import numpy as np

def array2raster(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array):
    cols = array.shape[1]
    rows = array.shape[0]
    originX = rasterOrigin[0]
    originY = rasterOrigin[1]
    driver = gdal.GetDriverByName('GTiff')
    outRaster = driver.Create(newRasterfn, cols, rows, 1, gdal.GDT_Byte)
    outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
    outband = outRaster.GetRasterBand(1)
    outband.WriteArray(array)
    outRasterSRS = osr.SpatialReference()
    outRasterSRS.ImportFromEPSG(4326)
    outRaster.SetProjection(outRasterSRS.ExportToWkt())
    outband.FlushCache()

def main(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array):
    reversed_arr = slope_array  # reverse array so the tif looks like the array
    array2raster(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, reversed_arr)  # convert array to raster

if __name__ == "__main__":
    rasterOrigin = (-123.25745, 45.43013)
    pixelWidth = 10
    pixelHeight = 10
    newRasterfn = 'test.tif'
    array = slope_array
    main(newRasterfn, rasterOrigin, pixelWidth, pixelHeight, array)
</code></pre>
<p>For my second attempt I followed this <a href="http://stackoverflow.com/questions/37648439/simplest-way-to-save-array-into-raster-file-in-python">question</a>, but I don't get any output:</p>
<pre><code>def array_to_raster(slope_array):
    """Array > Raster
    Save a raster from a C order array.

    :param array: ndarray
    """
    dst_filename = 'xxx.tiff'
    x_pixels = num_rows  # number of pixels in x
    y_pixels = num_cols  # number of pixels in y
    driver = gdal.GetDriverByName('GTiff')
    dataset = driver.Create(
        dst_filename,
        x_pixels,
        y_pixels,
        1,
        gdal.GDT_Float32, )
    dataset.GetRasterBand(1).WriteArray(array)
    dataset.FlushCache()  # Write to disk.
    return dataset, dataset.GetRasterBand(1)
</code></pre>
<p>Any ideas?</p>
| 0 | 2016-10-03T16:02:40Z | 39,835,884 | <p>The error states exactly what the problem is. You are trying to multiply an <code>int</code> type by a <code>NoneType</code> (None). The most likely case is that <code>nodataval</code> is None, which would occur because the NoDataValue for filename's first raster band is not defined. Your print(nodataval) command should demonstrate that. <strike>Remember, printing None doesn't appear as the string 'None'. It appears as a blank or no character.</strike></p>
<p>Your edits show that nodataval is None.</p>
| 1 | 2016-10-03T16:08:15Z | [
"python",
"python-2.7",
"numpy",
"scipy"
]
|
re.search() if result is None | 39,835,826 | <p>How to make this pythonic?</p>
<pre><code>def money_from_string(s):
    gold = re.search("([0-9]+)g", s)
    silver = re.search("([0-9]+)s", s)
    copper = re.search("([0-9]+)c", s)
    s = re.sub("[0-9]+g", "", s)
    s = re.sub("[0-9]+s", "", s)
    s = re.sub("[0-9]+c", "", s)
    assert (len(s.strip()) == 0)  # should be 0
    return (gold.group() or 0) * 10000 + (silver.group() or 0) * 100 + (copper.group() or 0)
</code></pre>
<p>This doesn't work because if <code>gold</code> is <code>None</code>, <code>gold.group()</code> will throw an error.</p>
<p>Input examples & expected outputs:</p>
<pre><code>s = "15g17s5c" -> 150000 + 1700 + 5 -> 151705
s = "15g5s" -> 150000 + 500 -> 150500
s = "15g" -> 150000 -> 150000
s = "17s5c" -> 1700 + 5 -> 1705
s = "5c" -> 5 -> 5
</code></pre>
<p>Note that I do appropriate checks on the input, to ensure that it actually is of correct format. Ie. it has match:</p>
<pre><code>MONEY_PATTERNS = [
    "([0-9]+g[ ]*[0-9]+s[ ]*[0-9]+c)",  # g / s / c
    "([0-9]+g[ ]*[0-9]+s)",             # g / s
    "([0-9]+g[ ]*[0-9]+c)",             # g / c
    "([0-9]+s[ ]*[0-9]+c)",             # s / c
    "([0-9]+g)",                        # g
    "([0-9]+s)",                        # s
    "([0-9]+c)",                        # c
]
</code></pre>
| 0 | 2016-10-03T16:04:22Z | 39,836,060 | <p>If the value returned is <code>None</code>, you can simply check for <code>None</code> and replace it with <code>0</code>. This seems apt to what you are trying to do.</p>
<pre><code>def money_from_string(s):
    gold = re.search("([0-9]+)g", s)
    silver = re.search("([0-9]+)s", s)
    copper = re.search("([0-9]+)c", s)
    # Check each match for None and replace it with 0 before using it.
    gold = int(gold.group(1)) if gold is not None else 0
    silver = int(silver.group(1)) if silver is not None else 0
    copper = int(copper.group(1)) if copper is not None else 0
    s = re.sub("[0-9]+g", "", s)
    s = re.sub("[0-9]+s", "", s)
    s = re.sub("[0-9]+c", "", s)
    assert (len(s.strip()) == 0)  # should be 0
    return gold * 10000 + silver * 100 + copper
</code></pre>
| -1 | 2016-10-03T16:18:48Z | [
"python"
]
|
re.search() if result is None | 39,835,826 | <p>How to make this pythonic?</p>
<pre><code>def money_from_string(s):
    gold = re.search("([0-9]+)g", s)
    silver = re.search("([0-9]+)s", s)
    copper = re.search("([0-9]+)c", s)
    s = re.sub("[0-9]+g", "", s)
    s = re.sub("[0-9]+s", "", s)
    s = re.sub("[0-9]+c", "", s)
    assert (len(s.strip()) == 0)  # should be 0
    return (gold.group() or 0) * 10000 + (silver.group() or 0) * 100 + (copper.group() or 0)
</code></pre>
<p>This doesn't work because if <code>gold</code> is <code>None</code>, <code>gold.group()</code> will throw an error.</p>
<p>Input examples & expected outputs:</p>
<pre><code>s = "15g17s5c" -> 150000 + 1700 + 5 -> 151705
s = "15g5s" -> 150000 + 500 -> 150500
s = "15g" -> 150000 -> 150000
s = "17s5c" -> 1700 + 5 -> 1705
s = "5c" -> 5 -> 5
</code></pre>
<p>Note that I do appropriate checks on the input, to ensure that it actually is of correct format. Ie. it has match:</p>
<pre><code>MONEY_PATTERNS = [
    "([0-9]+g[ ]*[0-9]+s[ ]*[0-9]+c)",  # g / s / c
    "([0-9]+g[ ]*[0-9]+s)",             # g / s
    "([0-9]+g[ ]*[0-9]+c)",             # g / c
    "([0-9]+s[ ]*[0-9]+c)",             # s / c
    "([0-9]+g)",                        # g
    "([0-9]+s)",                        # s
    "([0-9]+c)",                        # c
]
</code></pre>
| 0 | 2016-10-03T16:04:22Z | 39,836,194 | <p>Here is how I would implement your program.</p>
<p>Note:</p>
<ul>
<li>You must use <code>int()</code> to convert strings to integers</li>
<li>I would separate the validation from the computation, but use the same regular expression in both cases.</li>
<li>I would use a dictionary, not code, to hold the values of metals.</li>
<li>By using a character class for the metal, I only have to call <code>re.findall()</code> once. </li>
<li>My regular expression allows for a richer set of inventory strings, for example <code>"10g 10g 10g"</code> represents 30 gold pieces.</li>
</ul>
<p> </p>
<pre><code>import re

money_from_string_pattern = re.compile(r"(\d+)([gsc])")

def is_money_from_string_valid(s):
    return not money_from_string_pattern.sub("", s).strip()

def money_from_string(s):
    value = {'g': 10000, 's': 100, 'c': 1}
    inventory = money_from_string_pattern.findall(s)
    return sum(int(amount) * value[metal] for amount, metal in inventory)

assert money_from_string("11g 22s 33c") == 112233
assert money_from_string("11g") == 110000
assert money_from_string("11g 11g") == 220000
assert is_money_from_string_valid("11g 22s 33c") == True
assert is_money_from_string_valid("11g") == True
assert is_money_from_string_valid("11g 11g") == True
assert is_money_from_string_valid("11 g 22s 33c") == False
assert is_money_from_string_valid("11g 22q") == False
assert is_money_from_string_valid("stackoverflow.com") == False
</code></pre>
| 1 | 2016-10-03T16:26:58Z | [
"python"
]
|
Auto indentation in VIM, version 7.4.52, Operating System: Ubuntu 14.04 | 39,835,836 | <p>I am new to this programming world. Currently I am reading "Introduction to Computation and Programming Using Python" by John V. Guttag, MIT Press.</p>
<p>How can I set auto indentation on in Vim? What's happening now is that when I press Enter after <code>:</code> the next line starts at the beginning, with no indentation.</p>
<p>Is it possible?</p>
| 0 | 2016-10-03T16:05:18Z | 39,836,319 | <p>Vim comes with several language indentation presets. To enable them, you need to add the following to Vim's user configuration file (<code>~/.vimrc</code>):</p>
<pre><code>filetype indent plugin on
</code></pre>
<p>An easy way to do this from the terminal:</p>
<pre><code>echo "filetype indent plugin on" >> ~/.vimrc
</code></pre>
<p>This appends the command to your <code>.vimrc</code>, creating it if necessary.</p>
<p>Then open up a <code>.py</code> file and define a function.</p>
<p>Note: there are several other settings that can be very helpful to Python programmers and the <a href="http://vim.wikia.com/wiki/Indenting_source_code" rel="nofollow">link</a> from @birryree looks pretty complete.</p>
<p>I also use</p>
<pre><code>set expandtab " Use spaces instead of tabs
set shiftwidth=4 " Number of auto-indent spaces
set softtabstop=4 " Number of spaces per Tab
</code></pre>
<p>in my <code>.vimrc</code> to follow PEP8.</p>
| 0 | 2016-10-03T16:34:59Z | [
"python",
"vim"
]
|
Import tensorflow error | 39,835,841 | <p>I installed <strong>tensorflow</strong> using the Anaconda distribution and I am unable to use it in Python.</p>
<p>When I import tensorflow using <code>import tensorflow</code> I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 52, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "/Users/kaxilnaik/anaconda/lib/python2.7/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 22, in <module>
serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"z\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x12\x14\n\x0cunknown_rank\x18\x03 \x01(\x08\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tB2\n\x18org.tensorflow.frameworkB\x11TensorShapeProtosP\x01\xf8\x01\x01\x62\x06proto3')
TypeError: __init__() got an unexpected keyword argument 'syntax'
</code></pre>
<p>I tried reinstalling Anaconda, as well as uninstalling <code>protobuf</code>, which I found as an answer somewhere on Stack Overflow. </p>
| 0 | 2016-10-03T16:05:50Z | 39,853,145 | <p>You should uninstall tensorflow as well, and make sure protobuf is uninstalled. You can try <code>brew uninstall protobuf</code> as well. Then reinstall protobuf, and tensorflow. Tensorflow requires protobuf version 3.x</p>
<pre><code>pip install 'protobuf>=3.0.0a3'
</code></pre>
<p>You can test your protobuf version:</p>
<pre><code>>>> import google.protobuf
>>> print google.protobuf.__version__
</code></pre>
| 0 | 2016-10-04T13:05:57Z | [
"python",
"osx",
"python-2.7",
"tensorflow"
]
|
curl command for an xmlrpc command | 39,835,934 | <p>So I have an XMLRPC server that has a command called <code>start_apps</code>.</p>
<p>Normally, from Python, I would run it like this:</p>
<pre><code>import xmlrpclib
app=xmlrpclib.ServerProxy("http://localhost:6610/app_rpc")
app.start_apps()
</code></pre>
<p>Is there any way to call this method from a curl command?</p>
<p>I'm interested in doing this from bash or PowerShell.</p>
| 0 | 2016-10-03T16:11:11Z | 39,836,925 | <p>The following command will invoke the start_apps command by posting the XML data to the XMLRPC server:</p>
<pre><code>curl -d "<?xml version='1.0'?><methodCall><methodName>start_apps</methodName></methodCall>" http://localhost:6610/app_rpc
</code></pre>
<p>When invoked on Linux use single quotes for the XML fragment.</p>
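If you want to see (or generate) the exact XML body the server expects, the standard library can build it for you: `xmlrpclib.dumps` in Python 2, `xmlrpc.client.dumps` in Python 3. A quick sketch:

```python
import xmlrpc.client  # 'import xmlrpclib' on Python 2

# Build the <methodCall> payload for a call with no arguments.
body = xmlrpc.client.dumps((), methodname="start_apps")
print(body)
```

The printed string is exactly what you pass to `curl -d` (plus the usual quoting for your shell).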
| 1 | 2016-10-03T17:14:23Z | [
"python",
"curl",
"xml-rpc"
]
|
Extracting information depending on user input from a dictionary | 39,836,008 | <p>I have the following information from a <a href="http://www.futbin.com/api?term=paul%20pogba" rel="nofollow">.json</a> loaded and stored in a variable <code>Data</code>:</p>
<pre><code>Data = [
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'3', u'version': u'IF', u'full_name': u'Pogba', u'position': u'CDM', u'id': u'15073', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'21', u'version': u'OTW', u'full_name': u'Pogba', u'position': u'CM', u'id': u'15091', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
]
</code></pre>
<p>I'd like to extract the information depending on the <code>rating</code> of the player inputted by the user, only if <em>another rating exists</em>. <em>EDIT</em> : <em>By this, I mean that if only one line was present, i.e. 1 rating, there would be no need to carry out the process.</em></p>
<p>For example, if the user inputted: <code>88</code>, it would return/print:</p>
<pre><code>{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
</code></pre>
<p>From my minimal knowledge, I am aware that I must use dictionaries, but am unaware of how to do so. However, I am currently in the following stage in my attempts:</p>
<pre><code>player_dict = {Data}
player_info = player_dict.get(user_input)
if player_info:
    for item in player_info:
        player_info = item
</code></pre>
<p>^ <em>This doesn't seem to work at all.</em></p>
| 0 | 2016-10-03T16:15:49Z | 39,836,169 | <p>Well in this case, you need to loop over your values and compare the one that has been input to the ones you already have, right?</p>
<p>So <code>player_dict</code> is a list of dictionaries, and every dictionary contains properties about a player, of which <code>rating</code> is one.
Also, let's say that <code>player_info</code> is the user-inputted rating, like <code>88</code>.</p>
<pre><code>for player in player_dict:
    try:
        if player['rating'] == player_info:
            return player
    except KeyError:
        print 'there is no key called rating in this dict!'
</code></pre>
| 0 | 2016-10-03T16:25:37Z | [
"python",
"json",
"python-2.7",
"dictionary"
]
|
Extracting information depending on user input from a dictionary | 39,836,008 | <p>I have the following information from a <a href="http://www.futbin.com/api?term=paul%20pogba" rel="nofollow">.json</a> loaded and stored in a variable <code>Data</code>:</p>
<pre><code>Data = [
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'3', u'version': u'IF', u'full_name': u'Pogba', u'position': u'CDM', u'id': u'15073', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'21', u'version': u'OTW', u'full_name': u'Pogba', u'position': u'CM', u'id': u'15091', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
]
</code></pre>
<p>I'd like to extract the information depending on the <code>rating</code> of the player inputted by the user, only if <em>another rating exists</em>. <em>EDIT</em> : <em>By this, I mean that if only one line was present, i.e. 1 rating, there would be no need to carry out the process.</em></p>
<p>For example, if the user inputted: <code>88</code>, it would return/print:</p>
<pre><code>{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
</code></pre>
<p>From my minimal knowledge, I am aware that I must use dictionaries, but am unaware of how to do so. However, I am currently in the following stage in my attempts:</p>
<pre><code>player_dict = {Data}
player_info = player_dict.get(user_input)
if player_info:
    for item in player_info:
        player_info = item
</code></pre>
<p>^ <em>This doesn't seem to work at all.</em></p>
| 0 | 2016-10-03T16:15:49Z | 39,836,173 | <p>You can use list comprehensions:</p>
<pre><code>[record for record in Data if record.get("rating") == ratingWeWant]
</code></pre>
<p>where <code>ratingWeWant</code> is the rating you are searching for.</p>
<p>That code will produce list of records from data that have specified rating, eg:</p>
<p><code>[{u'rating': u'88', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'full_name': u'Pogba', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png', u'rare': u'1', u'name': u'Pogba', u'rare_type': u'1', u'version': u'Normal', u'position': u'CM'}]</code></p>
<p>for data in your post.</p>
<p>Also, because we use <code>.get()</code> method of dict, we don't have to worry about exceptions.</p>
| 0 | 2016-10-03T16:25:54Z | [
"python",
"json",
"python-2.7",
"dictionary"
]
|
Extracting information depending on user input from a dictionary | 39,836,008 | <p>I have the following information from a <a href="http://www.futbin.com/api?term=paul%20pogba" rel="nofollow">.json</a> loaded and stored in a variable <code>Data</code>:</p>
<pre><code>Data = [
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'3', u'version': u'IF', u'full_name': u'Pogba', u'position': u'CDM', u'id': u'15073', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'21', u'version': u'OTW', u'full_name': u'Pogba', u'position': u'CM', u'id': u'15091', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
]
</code></pre>
<p>I'd like to extract the information depending on the <code>rating</code> of the player inputted by the user, only if <em>another rating exists</em>. <em>EDIT</em> : <em>By this, I mean that if only one line was present, i.e. 1 rating, there would be no need to carry out the process.</em></p>
<p>For example, if the user inputted: <code>88</code>, it would return/print:</p>
<pre><code>{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
</code></pre>
<p>From my minimal knowledge, I am aware that I must use dictionaries, but am unaware of how to do so. However, I am currently in the following stage in my attempts:</p>
<pre><code>player_dict = {Data}
player_info = player_dict.get(user_input)
if player_info:
    for item in player_info:
        player_info = item
</code></pre>
<p>^ <em>This doesn't seem to work at all.</em></p>
| 0 | 2016-10-03T16:15:49Z | 39,836,315 | <p>What you are trying to do is called a "lookup". Given a key (u'88', or other rating), you want a value (the dictionary where 'rating' is u'88'). A list object, like you currently have the dictionaries stored in, is not very good for such lookups, because you have to go through each index in the list, compare the rating to the key, and then return the dictionary (if it exists at all).</p>
<pre><code>key = u'88'
for d in Data:
    if d['rating'] == key:
        print d
</code></pre>
<p>Consider what happens when you have a large number of dictionaries stored in that list. What if none of the dictionaries has a rating equal to the key? You pass through a large number of dictionaries, checking for equality, only to find no match. This is wasted computation.</p>
<p>As you know, python's <code>dict</code> type is very-well suited for lookups. If you had a dictionary that looked like</p>
<pre><code>Data_dict = {u'88': {u'rare': u'1', u'name': u'Pogba', ...},
u'89': {u'rare': u'1', u'name': u'Pogba', ...} }
</code></pre>
<p>then you could quickly see if the key u'88' is in that dataset.</p>
<pre><code>if u'88' in Data_dict:
print Data_dict[u'88']
</code></pre>
<p>To make such a dictionary, from your current Data as a list:</p>
<pre><code>Data_dict = {val[u'rating']: val for val in Data}
</code></pre>
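<p>For example, a runnable sketch of that lookup with a trimmed-down (hypothetical) sample of the data:</p>

```python
# Hypothetical, trimmed-down sample of the original Data list.
data = [
    {u'rating': u'89', u'name': u'Pogba', u'version': u'IF'},
    {u'rating': u'88', u'name': u'Pogba', u'version': u'Normal'},
]

# Build the rating-keyed dictionary once, then lookups are O(1).
data_dict = {d[u'rating']: d for d in data}

user_input = u'88'
result = data_dict.get(user_input)  # None if the rating is absent
print(result)
```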
| 0 | 2016-10-03T16:34:43Z | [
"python",
"json",
"python-2.7",
"dictionary"
]
|
Extracting information depending on user input from a dictionary | 39,836,008 | <p>I have the following information from a <a href="http://www.futbin.com/api?term=paul%20pogba" rel="nofollow">.json</a> loaded and stored in a variable <code>Data</code>:</p>
<pre><code>Data = [
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'3', u'version': u'IF', u'full_name': u'Pogba', u'position': u'CDM', u'id': u'15073', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'89', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'21', u'version': u'OTW', u'full_name': u'Pogba', u'position': u'CM', u'id': u'15091', u'nation_image': u'/content/fifa17/img/nation/18.png'},
{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
]
</code></pre>
<p>I'd like to extract the information depending on the <code>rating</code> of the player inputted by the user, only if <em>another rating exists</em>. <em>EDIT</em> : <em>By this, I mean that if only one line was present, i.e. 1 rating, there would be no need to carry out the process.</em></p>
<p>For example, if the user inputted: <code>88</code>, it would return/print:</p>
<pre><code>{u'rating': u'88', u'rare': u'1', u'name': u'Pogba', u'club_image': u'/content/fifa17/img/clubs/11.png', u'image': u'/content/fifa17/img/players/195864.png', u'rare_type': u'1', u'version': u'Normal', u'full_name': u'Pogba', u'position': u'CM', u'id': u'78', u'nation_image': u'/content/fifa17/img/nation/18.png'}
</code></pre>
<p>From my minimal knowledge, I am aware that I must use dictionaries, but am unaware of how to do so. However, I am currently in the following stage in my attempts:</p>
<pre><code>player_dict = {Data}
player_info = player_dict.get(user_input)
if player_info:
for item in player_info:
player_info = item
</code></pre>
<p>^ <em>This doesn't seem to work at all.</em></p>
| 0 | 2016-10-03T16:15:49Z | 39,836,704 | <p>The other answers don't include your requirement that data should only be returned if the <code>rating</code> appears in <code>Data</code> once only.</p>
<pre><code># First iterate over Data and count how many times that rating appears
counter = {}
for item in Data:
rating = item['rating']
counter[rating] = counter.get(rating, 0) + 1
# Now build a dict that uses rating as a key and the whole dataset with
# that rating as a value. Include only ratings that appeared once in Data
cleaned_dict = {item['rating']: item for item in Data if counter.get(item['rating']) == 1}
query = raw_input('Enter rating:')
if cleaned_dict.get(query):
print "FOUND"
print cleaned_dict.get(query)
else:
print "Not found, no need to process further"
</code></pre>
<p>Explanation:<br>
In the first <code>for</code> loop, we build a dictionary of <code>{rating: frequency}</code>. The line <code>counter[rating] = counter.get(rating, 0) + 1</code> says that if the <code>rating</code> isn't in the dictionary, give that rating a default count of <code>0</code> and immediately add <code>1</code> to it. i.e. it appears once. If, however, later in the <code>for</code> loop we find that rating again, we get the value stored against that rating and add <code>1</code> to it. You could also use the <a href="https://docs.python.org/2/library/collections.html" rel="nofollow"><code>Counter</code></a> object for this but I've found that usually to be slower to <code>get()</code>.</p>
<p>In <code>cleaned_dict</code> we use a dictionary comprehension to build a new set of data only if that rating appeared once in <code>Data</code>. We store the whole raw data against a key of just the <code>rating</code>. This allows fast lookup on user input.</p>
<p>Finally, we take user input and look for the rating in our dictionary. If the rating is not a key in <code>cleaned_data</code> then we get <code>None</code>, which is falsey, so <code>if cleaned_dict.get(query):</code> is interpreted as <code>False</code>.</p>
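<p>The counting step can be sketched on its own with a hypothetical list of ratings:</p>

```python
# Frequency counting with dict.get's default value, as in the answer above.
ratings = [u'89', u'89', u'88']  # hypothetical sample
counter = {}
for rating in ratings:
    # default to 0 when the rating hasn't been seen yet, then add 1
    counter[rating] = counter.get(rating, 0) + 1

# keep only ratings that appeared exactly once
unique = [r for r, n in counter.items() if n == 1]
print(counter)
print(unique)
```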
<p>EDIT based on comments:</p>
<pre><code>cleaned_dict = {}
for item in Data:
rating = item['rating']
is_in_dict = cleaned_dict.get(rating)
if is_in_dict:
position = item['position']
cleaned_dict[rating][position] = item
else:
sub_dict = {}
sub_dict[item['position']] = item
cleaned_dict[rating] = sub_dict
rating_input = raw_input('Enter rating:')
check_rating = cleaned_dict.get(rating_input)
if check_rating:
position_input = raw_input('Enter position:')
if check_rating.get(position_input):
print check_rating.get(position_input)
else:
print "Record of rating, but not position"
else:
print "No record of rating {}".format(rating_input)
</code></pre>
| 0 | 2016-10-03T16:58:47Z | [
"python",
"json",
"python-2.7",
"dictionary"
]
|
Replacing flask internal web server with Apache | 39,836,011 | <p>I have written a single user application that currently works with Flask internal web server. It does not seem to be very robust and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought to replace it with Apache. </p>
<p>The problem is, my current code is a single program that first launches about ten threads to do stuff, for example set up ssh tunnels to remote servers and zmq connections to communicate with a database located there. Finally it enters run() loop to start the internal server. </p>
<p>I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong, as I now don't have any worker threads available, nor any globally initialised classes, and none of my global variables holding interfaces to communicate with these threads exist. </p>
<p>Obviously I am not a web developer. </p>
<p>How badly "wrong" my current code is? Is there any way to make that work with Apache with a reasonable amount of work? Can I have Apache just replace the run() part and have a running application, with which Apache communicates? My current app in a very simplified form (without data processing threads) is something like this:</p>
<pre><code>comm=None
app = Flask(__name__)
class CommsHandler(object):
__init__(self):
*Init communication links to external servers and databases*
def request_data(self, request):
*Use initialised links to request something*
return result
@app.route("/", methods=["GET"]):
def mainpage():
return render_template("main.html")
@app.route("/foo", methods=["GET"]):
def foo():
a=comm.request_data("xyzzy")
return render_template("foo.html", data=a)
comm = CommsHandler()
app.run()
</code></pre>
<p>Or have I done this completely wrong? Now when I remove app.run and just import app class to wsgi script, I do get a response from the main page as it does not need reference to global variable comm. </p>
<p>/foo does not work, as "comm" is an uninitialised variable. And I can see why, of course. I just never thought this would need to be exported to Apache or any other web server. </p>
<p>So the question is, can I launch this application somehow in an rc script at boot, set up its communication links and everything, and have Apache/wsgi just call functions of the running application instead of launching a new one? </p>
<p>Hannu</p>
| 0 | 2016-10-03T16:15:59Z | 39,854,907 | <p>This is a simple app running on Flask's internal server: </p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run()
</code></pre>
<p>To run it on apache server Check out <a href="http://flask.pocoo.org/docs/0.11/deploying/fastcgi/#creating-a-fcgi-file" rel="nofollow">fastCGI</a> doc :</p>
<pre><code>from flup.server.fcgi import WSGIServer
from yourapplication import app
if __name__ == '__main__':
WSGIServer(app).run()
</code></pre>
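<p>For the original question's setup (a <code>comm</code> object built before <code>app.run()</code>), the key change for any WSGI deployment is to perform the expensive initialization at module import time rather than behind <code>app.run()</code>, since the server imports the module instead of executing it as a script. A minimal, self-contained sketch of that pattern (the class body is a hypothetical stand-in for the real ssh-tunnel/zmq setup):</p>

```python
# Sketch of the pattern: do the expensive setup at module import time,
# not inside a __main__ guard, so a WSGI server that merely imports the
# module still ends up with an initialized `comm`.
class CommsHandler(object):
    def __init__(self):
        self.ready = True  # stand-in for tunnels, zmq sockets, threads

    def request_data(self, request):
        return "result for %s" % request

comm = CommsHandler()  # runs on import, so it runs under Apache/mod_wsgi too

def foo():
    # view functions can then safely use the module-level `comm`
    return comm.request_data("xyzzy")

print(foo())
```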
| 0 | 2016-10-04T14:27:10Z | [
"python",
"apache",
"flask",
"wsgi"
]
|
Reading serial and responding with Python | 39,836,164 | <p>Everyone, hello!</p>
<p>I'm currently using trying to communicate with my Arduino (whom is hooked up my Raspberry Pi through Serial) and using the information in my Python script on my Raspberry Pi.</p>
<p>That said, my Python script has to wait for the Arduino to report back it's data before I want the script to continue, although, I'm not entirely sure on how to do that.</p>
<p>This is what I've got so far:</p>
<pre><code>#!/usr/bin/env python
import time
import serial
import RPi.GPIO as GPIO
ser = serial.Serial('/dev/ttyACM0', 9600)
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(20, GPIO.OUT) #green LED
GPIO.setup(16, GPIO.IN, GPIO.PUD_UP) #green button
GPIO.output(20, True) #green ON
start = time.time()
while True:
if (GPIO.input(16) == False):
print "green button pressed"
time.sleep(0.25)
start = time.time()
while (GPIO.input(16) == False):
time.sleep(0.01)
if (GPIO.input(16) == True):
print "released!"
end = time.time()
elapsed = end - start
print elapsed
if elapsed >= 5:
print "longer than 5s"
else:
print "shorter than 5s"
ser.write("0")
while True:
print ser.readline().rstrip()
if ser.readline().rstrip() == "a":
print "ready"
continue
if ser.readline().rstrip() == "b":
print "timeout"
break
if ser.readline().rstrip()[0] == "c":
print "validated: " + ser.readline().rstrip()[2]
break
</code></pre>
<p>As you can see, I'm sending the number 0 to my Arduino and waiting for it to respond with "a", which means it's ready. After that, when it has the data, it sends out the message "c"; as a result, I need to wait for two separate messages.</p>
<p>I've tried to do this by having a loop and breaking it when I have what I need, but this doesn't work.</p>
<p>It currently does go into the loop, and it prints out the "a" message, but doesn't come back with the second message.</p>
<p>Any idea how to properly tie this loop?</p>
<p>Thank you!</p>
| 0 | 2016-10-03T16:25:15Z | 39,836,451 | <p>use functions</p>
<pre><code>def wait_for(ser,targetChar):
resp = ""
while True:
tmp=ser.read(1)
resp = resp + tmp
if not tmp or tmp == targetChar:
return resp
first_resp = wait_for(ser,'a')
second_resp = wait_for(ser,'c')
while not second_resp.endswith('c'):
print "RETRY"
second_resp = wait_for(ser,'c')
</code></pre>
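<p>Note that with pyserial, <code>ser.read(1)</code> blocks indefinitely unless the port is opened with a read timeout (e.g. <code>serial.Serial('/dev/ttyACM0', 9600, timeout=1)</code>); with a timeout set, <code>read()</code> returns an empty string when no byte arrives, which is what makes the <code>if not tmp</code> branch reachable. A self-contained sketch with a hypothetical fake port to show the loop's behaviour:</p>

```python
# wait_for restated so this sketch runs standalone; same logic as above.
def wait_for(ser, target_char):
    resp = ""
    while True:
        tmp = ser.read(1)
        resp = resp + tmp
        if not tmp or tmp == target_char:
            return resp

class FakeSerial(object):
    """Hypothetical stand-in for serial.Serial, for testing the loop."""
    def __init__(self, data):
        self.data = list(data)

    def read(self, n=1):
        # return one queued character, or '' as a timed-out read would
        return self.data.pop(0) if self.data else ''

ser = FakeSerial("xa yc")
print(wait_for(ser, "a"))  # -> xa
print(wait_for(ser, "c"))  # ->  yc
```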
| 0 | 2016-10-03T16:43:37Z | [
"python",
"pyserial"
]
|
Reading serial and responding with Python | 39,836,164 | <p>Everyone, hello!</p>
<p>I'm currently using trying to communicate with my Arduino (whom is hooked up my Raspberry Pi through Serial) and using the information in my Python script on my Raspberry Pi.</p>
<p>That said, my Python script has to wait for the Arduino to report back it's data before I want the script to continue, although, I'm not entirely sure on how to do that.</p>
<p>This is what I've got so far:</p>
<pre><code>#!/usr/bin/env python
import time
import serial
import RPi.GPIO as GPIO
ser = serial.Serial('/dev/ttyACM0', 9600)
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(20, GPIO.OUT) #green LED
GPIO.setup(16, GPIO.IN, GPIO.PUD_UP) #green button
GPIO.output(20, True) #green ON
start = time.time()
while True:
if (GPIO.input(16) == False):
print "green button pressed"
time.sleep(0.25)
start = time.time()
while (GPIO.input(16) == False):
time.sleep(0.01)
if (GPIO.input(16) == True):
print "released!"
end = time.time()
elapsed = end - start
print elapsed
if elapsed >= 5:
print "longer than 5s"
else:
print "shorter than 5s"
ser.write("0")
while True:
print ser.readline().rstrip()
if ser.readline().rstrip() == "a":
print "ready"
continue
if ser.readline().rstrip() == "b":
print "timeout"
break
if ser.readline().rstrip()[0] == "c":
print "validated: " + ser.readline().rstrip()[2]
break
</code></pre>
<p>As you can see, I'm sending the number 0 to my Arduino and waiting for it to respond with "a", which means it's ready. After that, when it has the data, it sends out the message "c"; as a result, I need to wait for two separate messages.</p>
<p>I've tried to do this by having a loop and breaking it when I have what I need, but this doesn't work.</p>
<p>It currently does go into the loop, and it prints out the "a" message, but doesn't come back with the second message.</p>
<p>Any idea how to properly tie this loop?</p>
<p>Thank you!</p>
| 0 | 2016-10-03T16:25:15Z | 39,838,771 | <p>This worked out well for me to keep my while loop and escape the block once I get what I want:</p>
<pre><code>loop = 1
while loop == 1:
message = ser.readline().rstrip()
if message == "a":
print "ready"
continue
if message == "b":
print "Timeout"
loop = 0
if message[0] == "c":
print "Validated: " + message[2]
loop = 0
if message == "d":
print "error, try again"
loop = 0
</code></pre>
| 0 | 2016-10-03T19:11:27Z | [
"python",
"pyserial"
]
|
Python Gtk3 set or change color for GdkPixbuf.Pixbuf from Gdk.RGBA or Gdk.Color | 39,836,188 | <p>How can I set or change the color of a GdkPixbuf.Pixbuf ?</p>
<p>I can create and fill:</p>
<pre><code>pixbuf = GdkPixbuf.Pixbuf.new(GdkPixbuf.Colorspace.RGB, False, 8, 32, 16)
pixbuf.fill(0x000000)
</code></pre>
<p>But how to get a color from from Gdk.RGBA or Gdk.Color?</p>
<p>edit:</p>
<p>I have tried so many things now but nothing is working; for example:</p>
<pre><code>import webcolors
from gi.repository import Gdk
color = Gdk.color_parse("orange")
hexe = webcolors.rgb_to_hex(color.to_floats())
</code></pre>
<p>But I still have the problem that I need 0x at the beginning and it has to be a number.</p>
| 0 | 2016-10-03T16:26:33Z | 39,858,129 | <p>Wow, this was very hard. I finally found a solution:</p>
<pre><code>pixbuf = GdkPixbuf.Pixbuf.new(GdkPixbuf.Colorspace.RGB, True, 8, 16, 16)
color = Gdk.color_parse("orange")
fillr = (color.red / 256) << 24
fillg = (color.green / 256) << 16
fillb = (color.blue / 256) << 8
fillcolor = fillr | fillg | fillb | 255
pixbuf.fill(fillcolor)
</code></pre>
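<p>The fill value is just four 8-bit channels packed into one 32-bit RGBA integer; a small helper makes the shifting explicit. Gdk.Color channels are 16-bit (0–65535), hence the division by 256 (the channel values below are the hypothetical result of parsing "orange", #FFA500):</p>

```python
# Pack 8-bit channels into the 0xRRGGBBAA integer that Pixbuf.fill() expects.
def pack_rgba(r, g, b, a=255):
    return (r << 24) | (g << 16) | (b << 8) | a

# hypothetical 16-bit Gdk.Color channels for "orange" (#FFA500)
orange16 = (65535, 42405, 0)
fillcolor = pack_rgba(*(c // 256 for c in orange16))  # // works on Py2 and Py3
print(hex(fillcolor))  # -> 0xffa500ff
```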
<p>Thanks to starcal (<a href="https://github.com/ilius/starcal/blob/master/scal3/ui_gtk/desktop.py" rel="nofollow">https://github.com/ilius/starcal/blob/master/scal3/ui_gtk/desktop.py</a>) for the code, and to <a href="http://nullege.com/" rel="nofollow">http://nullege.com/</a>, which helped me find this.</p>
| 0 | 2016-10-04T17:12:54Z | [
"python",
"colors",
"gtk3"
]
|
How to print only printable characters in a binary file (equivalent to strings under Linux)? | 39,836,287 | <p>I am undertaking conversion of my python application from python 2 to python 3. One of the functions which I use is to get the printable characters out of a binary file. I earlier used the following function in python 2 and it worked great:</p>
<pre><code>import string
def strings(filename, min=4):
with open(filename, "rb") as f:
result = ""
for c in f.read():
if c in string.printable:
result += c
continue
if len(result) >= min:
yield result
result = ""
if len(result) >= min: # catch result at EOF
yield result
</code></pre>
<p>Code is actually from <a href="http://stackoverflow.com/questions/17195924/python-equivalent-of-unix-strings-utility">Python equivalent of unix "strings" utility</a>. When I run the above code with python 2 it produces output like this, which is absolutely fine for me:</p>
<pre><code> +s
^!1^
i*Q(
}"~
%lh!ghY
#dh!
!`,!
mL#H
o!<XXT0
' <
z !Uk
%
wS
n` !wl
*ty
(Q 6
!XPLO$
E#kF
</code></pre>
<p>However, the function gives weird results under python 3. It produces the error:</p>
<pre><code>TypeError: 'in <string>' requires string as left operand, not int
</code></pre>
<p>So I converted the 'int' to 'str' by replacing this </p>
<pre><code>if c in string.printable:
</code></pre>
<p>with this</p>
<pre><code>if str(c) in string.printable:
</code></pre>
<p>(I also converted all the places where the same error message is thrown)</p>
<p>Now the python 3 gives the following output:</p>
<pre><code>56700
0000000000000000000000000000000000000000
1236
60000
400234
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
2340
0000
5010
5000
17889
2348
23400000000
5600
</code></pre>
<p>I can't see any characters when I use python 3. Any help to get the code working, or a pointer to the solution, is appreciated. All I require is to extract the strings from a binary file (very small, a few kB) and store them in a variable.</p>
| -2 | 2016-10-03T16:32:43Z | 39,836,357 | <p>In Python 3, opening a file in binary mode gives you <code>bytes</code> results. Iterating over a <code>bytes</code> object gives you <em>integers</em>, not characters, in the range 0 to 255 (inclusive). From the <a href="https://docs.python.org/3/library/stdtypes.html#bytes" rel="nofollow"><code>bytes</code> documentation</a>:</p>
<blockquote>
<p>While bytes literals and representations are based on ASCII text, <code>bytes</code> objects actually behave like immutable sequences of integers, with each value in the sequence restricted such that <code>0 <= x < 256</code></p>
</blockquote>
<p>Convert <code>string.printable</code> to a set and test against that:</p>
<pre><code>printable = {ord(c) for c in string.printable}
</code></pre>
<p>and</p>
<pre><code>if c in printable:
</code></pre>
<p>Next, you want to append to a <code>bytesarray()</code> object to keep things reasonably performant, and decode from ASCII to produce a <code>str</code> result:</p>
<pre><code>printable = {ord(c) for c in string.printable}
with open(filename, "rb") as f:
result = bytearray()
for c in f.read():
if c in printable:
result.append(c)
continue
if len(result) >= min:
yield result.decode('ASCII')
result.clear()
if len(result) >= min: # catch result at EOF
        yield result.decode('ASCII')
</code></pre>
<p>Rather than iterate over the bytes one by one, you could instead split on anything that is <em>not</em> printable:</p>
<pre><code>import re
nonprintable = re.compile(b'[^%s]+' % re.escape(string.printable.encode('ascii')))
with open(filename, "rb") as f:
for result in nonprintable.split(f.read()):
if result:
yield result.decode('ASCII')
</code></pre>
<p>I'd explore reading the file in <em>chunks</em> rather than in one go; don't try to fit a large file into memory in one go here:</p>
<pre><code>with open(filename, "rb") as f:
buffer = b''
for chunk in iter(lambda: f.read(2048), b''):
splitresult = nonprintable.split(buffer + chunk)
buffer = splitresult.pop()
for string in splitresult:
if string:
yield string.decode('ascii')
if buffer:
yield buffer.decode('ascii')
</code></pre>
<p>The buffer carries over any incomplete word from one chunk to the next; <code>re.split()</code> produces empty values at the start and end if the input started or ended with non-printable characters, respectively.</p>
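<p>A self-contained, testable sketch of this chunked approach, using <code>io.BytesIO</code> as a stand-in for the file and adding back the minimum-length filter from the original function:</p>

```python
import io
import re
import string

printable = string.printable.encode('ascii')
nonprintable = re.compile(b'[^%s]+' % re.escape(printable))

def strings_from(f, min_len=4, chunk_size=2048):
    # Split each chunk on runs of non-printable bytes, carrying any
    # incomplete trailing word over to the next chunk in `buffer`.
    buffer = b''
    for chunk in iter(lambda: f.read(chunk_size), b''):
        parts = nonprintable.split(buffer + chunk)
        buffer = parts.pop()
        for part in parts:
            if len(part) >= min_len:
                yield part.decode('ascii')
    if len(buffer) >= min_len:
        yield buffer.decode('ascii')

fake = io.BytesIO(b'\x00\x01hello\xffworld!\xfe\x90hi')
print(list(strings_from(fake)))  # -> ['hello', 'world!']
```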
| 2 | 2016-10-03T16:37:36Z | [
"python",
"string",
"python-3.x"
]
|
How to print only printable characters in a binary file (equivalent to strings under Linux)? | 39,836,287 | <p>I am undertaking conversion of my python application from python 2 to python 3. One of the functions which I use is to get the printable characters out of a binary file. I earlier used the following function in python 2 and it worked great:</p>
<pre><code>import string
def strings(filename, min=4):
with open(filename, "rb") as f:
result = ""
for c in f.read():
if c in string.printable:
result += c
continue
if len(result) >= min:
yield result
result = ""
if len(result) >= min: # catch result at EOF
yield result
</code></pre>
<p>Code is actually from <a href="http://stackoverflow.com/questions/17195924/python-equivalent-of-unix-strings-utility">Python equivalent of unix "strings" utility</a>. When I run the above code with python 2 it produces output like this, which is absolutely fine for me:</p>
<pre><code> +s
^!1^
i*Q(
}"~
%lh!ghY
#dh!
!`,!
mL#H
o!<XXT0
' <
z !Uk
%
wS
n` !wl
*ty
(Q 6
!XPLO$
E#kF
</code></pre>
<p>However, the function gives weird results under python 3. It produces the error:</p>
<pre><code>TypeError: 'in <string>' requires string as left operand, not int
</code></pre>
<p>So I converted the 'int' to 'str' by replacing this </p>
<pre><code>if c in string.printable:
</code></pre>
<p>with this</p>
<pre><code>if str(c) in string.printable:
</code></pre>
<p>(I also converted all the places where the same error message is thrown)</p>
<p>Now the python 3 gives the following output:</p>
<pre><code>56700
0000000000000000000000000000000000000000
1236
60000
400234
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
2340
0000
5010
5000
17889
2348
23400000000
5600
</code></pre>
<p>I can't see any characters when I use python 3. Any help to get the code working, or a pointer to the solution, is appreciated. All I require is to extract the strings from a binary file (very small, a few kB) and store them in a variable.</p>
| -2 | 2016-10-03T16:32:43Z | 39,837,691 | <p>
I am sure this will work. </p>
<p>As a generator:</p>
<pre class="lang-py prettyprint-override"><code>import string, _io
def getPrintablesFromBinaryFile(path, encoding='cp1252'):
global _io, string
buffer = _io.BufferedReader(open(path, 'rb'))
while True:
byte = buffer.read(1)
if byte == b'':
return #EOF
try:
d = byte.decode(encoding)
except:
continue
if d in string.printable:
yield d
</code></pre>
<p>To use it as a function, just collect the outputs of getPrintablesFromBinaryFile() into an iterable (e.g. a list, or join them into a string).</p>
<p>Explanation:</p>
<ol>
<li>Import the needed modules</li>
<li>Define the function</li>
<li>Load the modules</li>
<li>Create the buffer</li>
<li>Get a byte from the buffer</li>
<li>Check if it is EOF</li>
<li>If yes, stop the generator</li>
<li>Try to decode using the encoding (like <code>'\xef'</code> does not decode using UTF-8)</li>
<li>If impossible, it cannot be a printable</li>
<li>If printable, yield it</li>
</ol>
<p><strong>Note:</strong> <code>cp1252</code> is the encoding for many text files</p>
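<p>A hedged usage sketch (the generator is restated in a simplified, self-contained form so the snippet runs on its own, without the <code>_io</code>/<code>global</code> details of the answer above):</p>

```python
import os
import string
import tempfile

def get_printables(path, encoding='cp1252'):
    # Same idea as above: yield each byte that decodes to a printable char.
    with open(path, 'rb') as f:
        while True:
            byte = f.read(1)
            if byte == b'':
                return  # EOF
            try:
                char = byte.decode(encoding)
            except UnicodeDecodeError:
                continue  # bytes with no cp1252 mapping can't be printable
            if char in string.printable:
                yield char

# Write a few raw bytes to a throwaway file and collect the printables.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    f.write(b'\x00\x01Hi!\x90\x81')
collected = ''.join(get_printables(path))
os.remove(path)
print(collected)  # -> Hi!
```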
| -1 | 2016-10-03T18:01:24Z | [
"python",
"string",
"python-3.x"
]
|
BloomFilter is at capacity after 10 minutes | 39,836,304 | <p>I'm using <strong>Scrapy</strong> with a <strong>BloomFilter</strong> and after 10 minutes I get this error in a loop:</p>
<pre><code>2016-10-03 18:03:34 [twisted] CRITICAL:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
result = next(self._iterator)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 63, in <genexpr>
work = (callable(elem, *args, **named) for elem in iterable)
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
self.crawler.engine.crawl(request=output, spider=spider)
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 209, in crawl
self.schedule(request, spider)
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 215, in schedule
if not self.slot.scheduler.enqueue_request(request):
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/scheduler.py", line 54, in enqueue_request
if not request.dont_filter and self.df.request_seen(request):
File "dirbot/custom_filters.py", line 20, in request_seen
self.fingerprints.add(fp)
File "/usr/local/lib/python2.7/dist-packages/pybloom/pybloom.py", line 182, in add
raise IndexError("BloomFilter is at capacity")
IndexError: BloomFilter is at capacity
</code></pre>
<p><strong>The filter.py :</strong></p>
<pre><code>from pybloom import BloomFilter
from scrapy.utils.job import job_dir
from scrapy.dupefilters import BaseDupeFilter
class BLOOMDupeFilter(BaseDupeFilter):
"""Request Fingerprint duplicates filter"""
def __init__(self, path=None):
self.file = None
self.fingerprints = BloomFilter(2000000, 0.00001)
@classmethod
def from_settings(cls, settings):
return cls(job_dir(settings))
def request_seen(self, request):
fp = request.url
if fp in self.fingerprints:
return True
self.fingerprints.add(fp)
def close(self, reason):
self.fingerprints = None
</code></pre>
<p>I search on Google every possibilities but nothing work.<br/>
Thank's for your help.</p>
| 0 | 2016-10-03T16:34:09Z | 39,836,342 | <p>Use <a href="https://github.com/jaybaird/python-bloomfilter/blob/2bbe01ad49965bf759e31781e6820408068862ac/pybloom/pybloom.py#L287" rel="nofollow"><code>pybloom.ScalableBloomFilter</code></a> instead of <code>BloomFilter</code>.</p>
<pre><code>from pybloom import ScalableBloomFilter
from scrapy.utils.job import job_dir
from scrapy.dupefilters import BaseDupeFilter
class BLOOMDupeFilter(BaseDupeFilter):
"""Request Fingerprint duplicates filter"""
def __init__(self,
path=None,
initial_capacity=2000000,
error_rate=0.00001,
mode=ScalableBloomFilter.SMALL_SET_GROWTH):
self.file = None
self.fingerprints = ScalableBloomFilter(
initial_capacity, error_rate, mode)
</code></pre>
| 2 | 2016-10-03T16:36:28Z | [
"python",
"web-scraping",
"scrapy",
"bloom-filter"
]
|
Comparing Arrays for Accuracy | 39,836,318 | <p>I've a 2 arrays:</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
np.array(y_val_lst)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
</code></pre>
<p>I would like to go through all 47151 examples and calculate the "accuracy", meaning the number of examples in y_pred_list that match y_val_lst, divided by 47151. What's the comparison function for this? </p>
| 0 | 2016-10-03T16:34:57Z | 39,836,606 | <p>Sounds like you want something like this:</p>
<pre><code>accuracy = (y_pred_list == y_val_lst).all(axis=(0,2)).mean()
</code></pre>
<p>...though since your arrays are clearly floating-point arrays, you might want to allow for numerical-precision errors rather than insisting on exact equality:</p>
<pre><code>accuracy = (numpy.abs(y_pred_list - y_val_lst) < tolerance ).all(axis=(0,2)).mean()
</code></pre>
<p>(where, for example, <code>tolerance = 1e-10</code>)</p>
<p>The <code>.all(axis=(0,2))</code> call records cases in which everything in its input is <code>True</code> (i.e. everything matches) when working along the dimension 0 (i.e. the one that has extent 5) and dimension 2 (the one that has extent 10). It outputs a one-dimensional array of length 47151. The <code>.mean()</code> call then gives you the proportion of matches in that sequence, which is my best guess as to what you mean by "over 47151".</p>
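<p>A tiny runnable sketch of the same reduction on hypothetical stand-in arrays:</p>

```python
import numpy as np

# Tiny (3, 2, 4) stand-in for the (5, 47151, 10) arrays: axis 1 indexes
# the "examples", while axes 0 and 2 are collapsed by the reduction.
y_pred = np.zeros((3, 2, 4))
y_val = np.zeros((3, 2, 4))
y_pred[0, 1, 2] = 1.0  # one mismatch, affecting example index 1

matches = (y_pred == y_val).all(axis=(0, 2))  # shape (2,): one bool per example
accuracy = matches.mean()
print(matches)   # [ True False]
print(accuracy)  # 0.5
```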
| 0 | 2016-10-03T16:53:02Z | [
"python",
"arrays",
"numpy"
]
|
Comparing Arrays for Accuracy | 39,836,318 | <p>I've a 2 arrays:</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
np.array(y_val_lst)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
</code></pre>
<p>I would like to go through all 47151 examples and calculate the "accuracy", meaning the number of examples in y_pred_list that match y_val_lst, divided by 47151. What's the comparison function for this? </p>
| 0 | 2016-10-03T16:34:57Z | 39,836,888 | <p>You can find a lot of useful classification scores in <code>sklearn.metrics</code>, particularly <code>accuracy_score()</code>. See the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html" rel="nofollow">doc here</a>; you would use it as:</p>
<pre><code>from sklearn.metrics import accuracy_score
acc = accuracy_score(np.array(y_val_lst)[:, 2, :],
                     np.array(y_pred_list)[:, 2, :])
</code></pre>
| 0 | 2016-10-03T17:11:40Z | [
"python",
"arrays",
"numpy"
]
|
Pandas missing rows when importing using delimiter ="|" | 39,836,337 | <p>I have a dataset with 51,347 rows. When I import the data using pandas and set the delimiter to "|", I lose 394 rows. </p>
<pre><code> import pandas as pd
df = pd.read_csv("Basin11.txt", sep='|', error_bad_lines=False,
dtype={'Start Date': str, 'Greater Than/Less Than': str,
'Parameter Code': float, 'Start Time': str, 'Start Depth': float, 'Composite Category': str,
'Composite Type': str})
print(len(df.index))
</code></pre>
<p>If I remove the sep variable, the data won't load as multiple columns but will load the proper number of rows. It only seems to be an issue for this file.
<a href="http://newcoastalatlas.tamug.edu/download/Basin/basin11.txt" rel="nofollow">Basin11.txt File</a></p>
<p>Does anyone know why I'm losing data?</p>
| 2 | 2016-10-03T16:36:06Z | 39,837,259 | <p>I started going through your input file and found a number of errors that may be leading to the "missing rows".</p>
<p>The comments line 3491 and 9805 have an opening <code>"</code> but is missing the closing <code>"</code>. This would cause matching issues, including the following rows as part of the comment body. As I started fixing those, the line counts started going up. There are probably more cases of this.</p>
<p>Also, some lines have double-double quotes (<code>""</code>) for opening and closing comments. For example:</p>
<blockquote>
<p>""green, med tide, 10-15 mph winds""</p>
</blockquote>
<p>Edit: I added the following code:</p>
<pre><code>for comment in df['Comments'].values:
print(comment)
</code></pre>
<p>Then ran <code>python3 sample.py | grep '|' | wc -l</code>, to find the number of comments that contained <code>|</code>, and got 394 (The number of rows that you are missing)</p>
| 2 | 2016-10-03T17:33:49Z | [
"python",
"pandas"
]
|
Simple Pygame animation stuttering | 39,836,374 | <p>I'm learning python <code>*\(^o^)/*</code></p>
<p>I have a simple bouncing box window drawn using <code>Pygame</code>. Everything seems to be functioning properly, except for one minor annoyance. It stutters constantly! I have no idea what could be causing the stutter. I thought it might be lag, so I implemented a fixed time-step to allow the loop to catch up, but this had no effect. </p>
<pre><code>#--- initialize pygame window ---#
import pygame
import time

pygame.init()
size = (1200, 500)
screen = pygame.display.set_mode(size, pygame.RESIZABLE)
fps = 60

#--- define color palette ---#
black = (0, 0, 0)
white = (255, 255, 255)

#--- define the player ---#
class player:
    def __init__(self, screen, surface, color):
        self.speed = 3
        self.direction_x = 1
        self.direction_y = 1
        self.screen = screen
        self.surface = surface
        self.rect = self.surface.get_rect()
        self.color = color

    def set_pos(self, x, y):
        self.rect.x = x
        self.rect.y = y

    def advance_pos(self):
        screen_width, screen_height = screen.get_size()
        if self.rect.x + self.rect.width > screen_width or player1.rect.x < 0:
            player1.direction_x *= -1
            player1.speed = 3
        elif player1.rect.y + player1.rect.height > screen_height or player1.rect.y < 0:
            player1.direction_y *= -1
            player1.speed = 3
        else:
            player1.speed -= 0.001
        self.rect.x += self.speed * self.direction_x
        self.rect.y += self.speed * self.direction_y

    def draw(self):
        pygame.draw.rect(self.surface, self.color, [0, 0, self.rect.width, self.rect.height])

    def blit(self):
        screen.blit(self.surface, self.rect)

player1 = player(screen, pygame.Surface((50, 50)), white)
player1.set_pos(50, 50)
player1.draw()

#--- define game variables ---#
previous = time.time() * 1000
lag = 0.0
background = black
done = False

#--- game ---#
while not done:
    #--- update time step ---#
    current = time.time() * 1000
    elapsed = current - previous
    lag += elapsed
    previous = current

    #--- process events ---#
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True
            break
        if event.type == pygame.VIDEORESIZE:
            screen = pygame.display.set_mode((event.w, event.h), pygame.RESIZABLE)

    #--- update logic ---#
    while True:
        player1.advance_pos()
        lag -= fps
        if lag <= fps:
            break

    #--- draw to screen ---#
    screen.fill(background)
    player1.blit()
    pygame.display.update()
    pygame.time.Clock().tick(fps)
</code></pre>
| 0 | 2016-10-03T16:38:54Z | 39,862,978 | <p>This is a rewrite of your code that uses opengl instead for the rendering. The major changes are as follows:</p>
<ol>
<li>I used opengl immediate mode, which is out-of-date and deprecated, but is a lot easier to understand at first. Most of the gl calls are either in the player.draw() method or in the main loop.</li>
<li><p>I fixed the way the timer is done. Rather than doing just clock.tick(fps), I manually keep track of the amount of time that it takes to do all of the processing to the frame and add the appropriate millisecond delay to reach 60 fps. You can try that modification with your existing pygame code before migrating to opengl as that might be sufficient to remove most of the stutter.</p>
<pre><code>import pygame
import time
from OpenGL.GL import *


class Player:
    def __init__(self, screen, width, height, color):
        self.x = 0
        self.y = 0
        self.speed = 3
        self.direction_x = 1
        self.direction_y = 1
        self.screen = screen
        self.width = width
        self.height = height
        self.color = color

    def set_pos(self, x, y):
        self.x = x
        self.y = y

    def advance_pos(self):
        screen_width, screen_height = screen.get_size()
        if self.x + self.width > screen_width or self.x < 0:
            self.direction_x *= -1
            self.speed = 3
        elif self.y + self.height > screen_height or self.y < 0:
            self.direction_y *= -1
            self.speed = 3
        else:
            self.speed -= 0.001
        self.x += self.speed * self.direction_x
        self.y += self.speed * self.direction_y

    def draw(self):
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glTranslate(self.x, self.y, 0)
        glBegin(GL_QUADS)
        glColor(*self.color)
        glVertex(0, 0, 0)
        glVertex(self.width, 0, 0)
        glVertex(self.width, self.height, 0)
        glVertex(0, self.height, 0)
        glEnd()


if __name__ == "__main__":
    pygame.init()
    size = width, height = (550, 400)
    screen = pygame.display.set_mode(size, pygame.RESIZABLE | pygame.DOUBLEBUF | pygame.OPENGL)
    fps = 60

    black = (0, 0, 0, 255)
    white = (255, 255, 255, 255)

    player1 = Player(screen, 50, 50, white)
    player1.set_pos(50, 50)

    done = False
    previous = time.time() * 1000

    glClearColor(*black)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, width, height, 0, -1, 1)

    clock = pygame.time.Clock()
    while not done:
        current = time.time() * 1000
        elapsed = current - previous
        previous = current
        delay = 1000.0 / fps - elapsed
        delay = max(int(delay), 0)

        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                done = True
                break
            if event.type == pygame.VIDEORESIZE:
                size = width, height = event.w, event.h
                screen = pygame.display.set_mode(size, pygame.RESIZABLE | pygame.DOUBLEBUF | pygame.OPENGL)
                glMatrixMode(GL_PROJECTION)
                glLoadIdentity()
                glOrtho(0, width, height, 0, -1, 1)
                glViewport(0, 0, width, height)
                # reset player movement and position to avoid glitches where
                # the player is trapped outside the new window borders
                player1.set_pos(50, 50)
                player1.direction_x = 1
                player1.direction_y = 1

        player1.advance_pos()

        glClear(GL_COLOR_BUFFER_BIT)
        glClear(GL_DEPTH_BUFFER_BIT)
        player1.draw()
        pygame.display.flip()
        pygame.time.delay(delay)
</code></pre></li>
</ol>
| 1 | 2016-10-04T22:54:01Z | [
"python",
"pygame"
]
|
How to have beautiful soup fetch emails? | 39,836,385 | <p>Im working with beautiful soup and would like to grab emails to a depth of my choosing in my web scraper. Currently however I am unsure why my web scraping tool is not working. Everytime I run it, it does not populate the email list. </p>
<pre><code>#!/usr/bin/python
from bs4 import BeautifulSoup, SoupStrainer
import re
import urllib
import threading


def step2():
    file = open('output.html', 'w+')
    file.close()
    # links already added
    visited = set()
    visited_emails = set()
    scrape_page(visited, visited_emails, 'https://www.google.com', 2)
    print('Webpages \n')
    for w in visited:
        print(w)
    print('Emails \n')
    for e in visited_emails:
        print(e)


# Run recursively
def scrape_page(visited, visited_emails, url, depth):
    if depth == 0:
        return
    website = urllib.urlopen(url)
    soup = BeautifulSoup(website, parseOnlyThese=SoupStrainer('a', email=False))
    emails = re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}", str(website))
    first = str(website).split('mailto:')
    for i in range(1, len(first)):
        print(first.split('>')[0])
    for email in emails:
        if email not in visited_emails:
            print('- got email ' + email)
            visited_emails.add(email)
    for link in soup:
        if link.has_attr('href'):
            if link['href'] not in visited:
                if link['href'].startswith('https://www.google.com'):
                    visited.add(link['href'])
                    scrape_page(visited, visited_emails, link['href'], depth - 1)


def main():
    step2()

main()
</code></pre>
<p>For some reason I'm unsure how to fix my code so it adds emails to the list. If you could give me some advice it would be greatly appreciated. Thanks.</p>
| -1 | 2016-10-03T16:39:37Z | 39,836,476 | <p>You just need to look for the href's with mailto:</p>
<pre><code>emails = [a["href"] for a in soup.select('a[href^=mailto:]')]
</code></pre>
<p>I presume <a href="https://www.google.com" rel="nofollow">https://www.google.com</a> is a placeholder for the actual site you are scraping as there are no mailto's to scrape on the google page. If there are mailto's in the source you are scraping then this will find them.</p>
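<p>A self-contained sketch of that selector against a hypothetical page fragment (the URLs and addresses are made up):</p>

```python
from bs4 import BeautifulSoup

# Hypothetical fragment: one mailto link and one ordinary link.
html = '''
<a href="mailto:[email protected]">write us</a>
<a href="https://example.com/contact">contact page</a>
'''

soup = BeautifulSoup(html, 'html.parser')
# Select anchors whose href starts with "mailto:" and strip the scheme prefix.
emails = [a['href'][len('mailto:'):] for a in soup.select('a[href^="mailto:"]')]
print(emails)  # ['[email protected]']
```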
| 1 | 2016-10-03T16:45:04Z | [
"python",
"email",
"beautifulsoup"
]
|
Wait for a Request to complete - Python Scrapy | 39,836,404 | <p>I have a Scrapy Spider which scrapes a website and that website requires to refresh a token to be able to access them.</p>
<pre><code>def get_ad(self, response):
    temp_dict = AppextItem()
    try:
        Selector(response).xpath('//div[@class="messagebox"]').extract()[0]
        print("Captcha found when scraping ID " + response.meta['id'] + " LINK: " + response.meta['link'])
        self.p_token = ''
        return Request(url = url_, callback=self.get_p_token, method = "GET", priority=1, meta = response.meta)
    except Exception:
        print("Captcha was not found")
</code></pre>
<p>I have a <code>get_p_token</code> method that refreshes token and assigns to <code>self.p_token</code> </p>
<p><code>get_p_token</code> is called when Captcha is found, but problem is, other Requests keep executing.</p>
<p>I want that, if a Captcha is found, the next request is not made until execution of <code>get_p_token</code> is finished.</p>
<p>I have <code>priority=1</code> but that does not help.</p>
<p><a href="http://pastebin.com/X6Q4ZFp2" rel="nofollow">HERE is full code of Spider</a></p>
<p>P.S:</p>
<p>Actually that token is passed to each URL so that is why I want to wait until a new token is found and then scrape the rest of URLs.</p>
| 3 | 2016-10-03T16:40:47Z | 39,837,394 | <p>This is how I would go on about it:</p>
<pre><code>def get_p_token(self, response):
    # generate token
    ...
    yield Request(url = response.url, callback=self.no_captcha, method = "GET", priority=1, meta = response.meta, dont_filter=True)


def get_ad(self, response):
    temp_dict = AppextItem()
    try:
        Selector(response).xpath('//div[@class="messagebox"]').extract()[0]
        print("Captcha found when scraping ID " + response.meta['id'] + " LINK: " + response.meta['link'])
        self.p_token = ''
        yield Request(url = url_, callback=self.get_p_token, method = "GET", priority=1, meta = response.meta)
    except Exception:
        print("Captcha was not found")
        yield Request(url = url_, callback=self.no_captcha, method = "GET", priority=1, meta = response.meta)
</code></pre>
<p>You haven't provided working code, so this is only a demonstration of the problem. The logic here is pretty simple:</p>
<p>If a captcha is found it goes to <code>get_p_token</code> and after generating the token, it requests the url that you were requesting before. If no captcha is found it goes on as normal.</p>
| 0 | 2016-10-03T17:43:21Z | [
"python",
"scrapy",
"screen-scraping",
"scrapy-spider"
]
|
PyMongo Only include field in document if value is not a NaN | 39,836,449 | <p>I have a significant amount of data that needs to be inserted into my MongoDB database using PyMongo. The data I have is currently stored in flat files and is sparse (i.e. many of the individual values are NaN). In Mongo DB I would like to not insert fields if values are NaN but I'm not sure how to do that (I should point out I'm new to both MongoDB and Python).</p>
<p>My insert startement looks something like this</p>
<pre><code>strategy.insert_many([
    {
        "strategyId": strategyInfo[stratIndex][ID],
        "strategyName": strategyInfo[stratIndex][NAME],
        "date": dates[i],
        "time": thisTime,
        "aum": stratAum[i],
        "return": 0.0,
        "commission": 0.0,
        "slippage": 0.0,
        "basket": [{
            "assetId": assets[m][ASSETID],
            "order": orders[i, m],
            "expiry": expiry[i, m],
            "price": prices[i, m],
            "ePrice": eprices[i, m]  # <<< Don't include this line if eprices[i, m] is NaN
        }
        for m in range(len(assets))
        ]
    }
], False)
</code></pre>
<p>It's easy enough to check whether one of my values is NaN using <code>math.isnan()</code>, but I can't figure out how to leave the entire field blank in that case.</p>
| 0 | 2016-10-03T16:43:28Z | 39,927,306 | <blockquote>
<p>It's easy enough to check whether one of my values is NaN using <code>math.isnan()</code>, but I can't figure out how to leave the entire field blank in that case.</p>
</blockquote>
<p>Based on your example code, you can do the following instead:</p>
<pre><code># Create a strategy document.
# This is inside of a loop where variable `i` is known, similar to your example.
doc = {
    "strategyId": strategyInfo[stratIndex][ID],
    "strategyName": strategyInfo[stratIndex][NAME],
    "date": dates[i],
    "time": thisTime,
    "aum": stratAum[i],
    "return": 0.0,
    "commission": 0.0,
    "slippage": 0.0
}

baskets = []
for m in range(len(assets)):
    basket = {
        "assetId": assets[m][ASSETID],
        "order": orders[i, m],
        "expiry": expiry[i, m],
        "price": prices[i, m],
    }
    if not math.isnan(eprice[i, m]):
        basket["ePrice"] = eprice[i, m]
    baskets.append(basket)

# You can also add a filter here to make sure `baskets` array is not null.
doc["basket"] = baskets
docs.append(doc)
</code></pre>
<p>This essentially separates building your documents from the database insertion.</p>
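<p>As a self-contained illustration of the NaN-skipping step (the field names mirror the question; the sample values are made up):</p>

```python
import math

def make_basket(asset_id, order, expiry, price, eprice):
    """Build one basket entry, omitting ePrice when it is NaN."""
    basket = {"assetId": asset_id, "order": order, "expiry": expiry, "price": price}
    if not math.isnan(eprice):
        basket["ePrice"] = eprice
    return basket

print(make_basket("A1", 10, "2016-12", 1.5, float("nan")))  # no ePrice key
print(make_basket("A1", 10, "2016-12", 1.5, 2.0))           # ePrice included
```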
<p>You can then use <a href="http://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert_many" rel="nofollow">insert_many()</a>:</p>
<pre><code>strategy.insert_many(docs, False)
</code></pre>
<p>You can also wrap the <code>insert_many</code> in a try/except to detect database insert error, which should be different error handling from your document creation error. </p>
| 0 | 2016-10-08T00:00:23Z | [
"python",
"mongodb",
"insert",
"pymongo",
null
]
|
Appending letters to a list in a while loop | 39,836,496 | <p>I am trying to append the first word of a sentence to an empty list. The current code is below:</p>
<pre><code>sentence = input("Enter sentence: ")
subject = []
print (subject)

x = 0
while True:
    letter = sentence[x]
    if letter != " ":
        print (letter)
        subject.append(letter)
    x = x + 1

print (subject)
</code></pre>
<p>It currently prints this:</p>
<pre><code>Enter sentence: Cherries are red fruit
[]
C
h
e
r
r
i
e
s
</code></pre>
<p>It seems to ignore my attempt to append the result to the empty list... Help, please! </p>
| 0 | 2016-10-03T16:46:24Z | 39,836,558 | <p>You'd better use <code>for</code> loop, it's less error-prone:</p>
<pre><code>sentence = input('Enter sentence: ')
subject = []
print(subject)

for letter in sentence:
    if letter == ' ':
        break
    else:
        print(letter)
        subject.append(letter)

print(subject)
</code></pre>
<p>If you want to break a sentence into words, there's a <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow"><code>str.split</code></a> method, which can help you in simple cases:</p>
<pre><code>words = sentence.split()
first_word = words[0] if words else None
print(first_word)
</code></pre>
| 1 | 2016-10-03T16:49:35Z | [
"python",
"while-loop",
"append"
]
|
Appending letters to a list in a while loop | 39,836,496 | <p>I am trying to append the first word of a sentence to an empty list. The current code is below:</p>
<pre><code>sentence = input("Enter sentence: ")
subject = []
print (subject)

x = 0
while True:
    letter = sentence[x]
    if letter != " ":
        print (letter)
        subject.append(letter)
    x = x + 1

print (subject)
</code></pre>
<p>It currently prints this:</p>
<pre><code>Enter sentence: Cherries are red fruit
[]
C
h
e
r
r
i
e
s
</code></pre>
<p>It seems to ignore my attempt to append the result to the empty list... Help, please! </p>
| 0 | 2016-10-03T16:46:24Z | 39,836,597 | <p>Why not use the <code>split()</code> function instead of appending one letter at a time:</p>
<pre><code>sentence = input("Enter sentence: ")
split_sentence = sentence.split(" ")
subject = []
subject.append(split_sentence[0])
print (subject)
</code></pre>
<p>or even more simplier:</p>
<pre><code>sentence = input("Enter sentence: ").split(" ")
subject = []
subject.append(sentence[0])
print (subject)
</code></pre>
<p>or even if you are only wanting one input you don't have to append</p>
<pre><code>sentence = input("Enter sentence: ").split(" ")
subject = sentence[0]
print (subject)
</code></pre>
<p><code>split()</code>, splits a string define by the parameter and returns a list. </p>
| 0 | 2016-10-03T16:52:35Z | [
"python",
"while-loop",
"append"
]
|
Javascript implementation of Murmurhash3 to give the same result as Murmurhash3.cpp used by transform available in Python's sklearn | 39,836,589 | <p>(I am VERY sorry I am not allowed to add many URLs to help me better explain my problems in this post because I am new on StackOverflow and my StackOverflow account has very low privilege).</p>
<p><strong>Summary</strong> </p>
<p>Can anyone please guide me on how to modify <code>murmurhash3.js</code> (below) so that it produces the same hash as <code>MurmurHash3.cpp</code> (below) does? I provide a simple python code "simple_python_wrapper.py" for MurmurHash3.cpp based on what I need. The simple_python_wrapper.py should work on your computer if you have sklearn installed.</p>
<p>I have heavily used <code>Murmurhash3.cpp</code> (shown below) while using <code>transform</code> from sklearn (a Python Machine Learning Library): <code>from sklearn.feature_extraction._hashing import transform</code> in one of my machine learning projects. <code>transform</code> uses <code>Murmurhash3.cpp</code> deep down in sklearn's implementation/import tree.</p>
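<p>For reference, assuming sklearn is installed, the hash index for a token can also be reproduced directly from the murmurhash that sklearn exposes (the feature hasher takes <code>abs(hash) % n_features</code>):</p>

```python
from sklearn.utils.murmurhash import murmurhash3_32

# Signed 32-bit MurmurHash3 of the raw bytes with seed 0, as used by the hasher.
h = murmurhash3_32(b'hello', seed=0)
print(abs(h) % (2 ** 18))  # 260679, matching the value for "hello"
```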
<p><strong>More details</strong></p>
<p><strong><em>hash % (2^18) {that is "hash modulus 2^18"} based on MurmurHash3.cpp</em></strong></p>
<pre><code>"hello" gives 260679
"there" gives 45525
</code></pre>
<p><strong><em>hash % (2^18) {that is "hash modulus 2^18"} based on murmurhash3.js</em></strong></p>
<pre><code>"hello" gives -58999
"there" gives 65775
</code></pre>
<p><strong><em>murmurhash3.js</em></strong></p>
<pre><code>/*
* The MurmurHash3 algorithm was created by Austin Appleby. This JavaScript port was authored
* by whitequark (based on Java port by Yonik Seeley) and is placed into the public domain.
* The author hereby disclaims copyright to this source code.
*
* This produces exactly the same hash values as the final C++ version of MurmurHash3 and
* is thus suitable for producing the same hash values across platforms.
*
* There are two versions of this hash implementation. First interprets the string as a
* sequence of bytes, ignoring most significant byte of each codepoint. The second one
* interprets the string as a UTF-16 codepoint sequence, and appends each 16-bit codepoint
* to the hash independently. The latter mode was not written to be compatible with
* any other implementation, but it should offer better performance for JavaScript-only
* applications.
*
* See http://github.com/whitequark/murmurhash3-js for future updates to this file.
*/
var MurmurHash3 = {
    mul32: function(m, n) {
        var nlo = n & 0xffff;
        var nhi = n - nlo;
        return ((nhi * m | 0) + (nlo * m | 0)) | 0;
    },

    hashBytes: function(data, len, seed) {
        var c1 = 0xcc9e2d51, c2 = 0x1b873593;

        var h1 = seed;
        var roundedEnd = len & ~0x3;

        for (var i = 0; i < roundedEnd; i += 4) {
            var k1 = (data.charCodeAt(i) & 0xff) |
                ((data.charCodeAt(i + 1) & 0xff) << 8) |
                ((data.charCodeAt(i + 2) & 0xff) << 16) |
                ((data.charCodeAt(i + 3) & 0xff) << 24);

            k1 = this.mul32(k1, c1);
            k1 = ((k1 & 0x1ffff) << 15) | (k1 >>> 17);  // ROTL32(k1,15);
            k1 = this.mul32(k1, c2);

            h1 ^= k1;
            h1 = ((h1 & 0x7ffff) << 13) | (h1 >>> 19);  // ROTL32(h1,13);
            h1 = (h1 * 5 + 0xe6546b64) | 0;
        }

        k1 = 0;

        switch(len % 4) {
            case 3:
                k1 = (data.charCodeAt(roundedEnd + 2) & 0xff) << 16;
                // fallthrough
            case 2:
                k1 |= (data.charCodeAt(roundedEnd + 1) & 0xff) << 8;
                // fallthrough
            case 1:
                k1 |= (data.charCodeAt(roundedEnd) & 0xff);
                k1 = this.mul32(k1, c1);
                k1 = ((k1 & 0x1ffff) << 15) | (k1 >>> 17);  // ROTL32(k1,15);
                k1 = this.mul32(k1, c2);
                h1 ^= k1;
        }

        // finalization
        h1 ^= len;

        // fmix(h1);
        h1 ^= h1 >>> 16;
        h1 = this.mul32(h1, 0x85ebca6b);
        h1 ^= h1 >>> 13;
        h1 = this.mul32(h1, 0xc2b2ae35);
        h1 ^= h1 >>> 16;

        return h1;
    },

    hashString: function(data, len, seed) {
        var c1 = 0xcc9e2d51, c2 = 0x1b873593;

        var h1 = seed;
        var roundedEnd = len & ~0x1;

        for (var i = 0; i < roundedEnd; i += 2) {
            var k1 = data.charCodeAt(i) | (data.charCodeAt(i + 1) << 16);

            k1 = this.mul32(k1, c1);
            k1 = ((k1 & 0x1ffff) << 15) | (k1 >>> 17);  // ROTL32(k1,15);
            k1 = this.mul32(k1, c2);

            h1 ^= k1;
            h1 = ((h1 & 0x7ffff) << 13) | (h1 >>> 19);  // ROTL32(h1,13);
            h1 = (h1 * 5 + 0xe6546b64) | 0;
        }

        if ((len % 2) == 1) {
            k1 = data.charCodeAt(roundedEnd);
            k1 = this.mul32(k1, c1);
            k1 = ((k1 & 0x1ffff) << 15) | (k1 >>> 17);  // ROTL32(k1,15);
            k1 = this.mul32(k1, c2);
            h1 ^= k1;
        }

        // finalization
        h1 ^= (len << 1);

        // fmix(h1);
        h1 ^= h1 >>> 16;
        h1 = this.mul32(h1, 0x85ebca6b);
        h1 ^= h1 >>> 13;
        h1 = this.mul32(h1, 0xc2b2ae35);
        h1 ^= h1 >>> 16;

        return h1;
    }
};

if (typeof module !== "undefined" && typeof module.exports !== "undefined") {
    module.exports = MurmurHash3;
}
</code></pre>
<p><strong><em>Here is the HTML code + Javascript I am using to test the javascript</em></strong></p>
<p><a href="https://jsbin.com/gicomikike/edit?html,js,output" rel="nofollow">https://jsbin.com/gicomikike/edit?html,js,output</a> </p>
<pre><code><html>
<head>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
    <script src="murmurhash3.js"></script>
    <script>
        function call_murmurhash3_32_gc () {
            var key = $('textarea#textarea1').val();
            var seed = 0;
            var hash = MurmurHash3.hashString (key, key.length, seed);
            $('div#div1').text(hash);
        }
    </script>
</head>
<body>
    Body
    <form>
        <textarea rows="4" cols="50" id=textarea1></textarea>
        <br>
        <input type="button" value="Hash" onclick="call_murmurhash3_32_gc()"/>
    </form>
    <div id=div1>
    </div>
</body>
</html>
</code></pre>
<p><strong><em>simple_python_wrapper.py</em></strong></p>
<p>This makes use of MurmurHash3.cpp in sklearn's import tree.</p>
<pre><code>from sklearn.feature_extraction._hashing import transform
import numpy as np

def getHashIndex (words):
    raw_X = words
    n_features = 262144  # 2 ** 18
    dtype = np.float32  # np.float64

    # transform(raw_X, Py_ssize_t n_features, dtype)
    indices_a, indptr, values = transform (raw_X, n_features, dtype)
    return indices_a

words = [[("hello", 1), ("there", 1)]]
print getHashIndex (words)
</code></pre>
<p>Output</p>
<pre><code>[260679 45525]
</code></pre>
<p><strong><em>MurmurHash3.cpp</em></strong></p>
<pre><code>I copied this code is available here
https://github.com/karanlyons/murmurHash3.js/blob/master/murmurHash3.js
//-----------------------------------------------------------------------------
// MurmurHash3 was written by Austin Appleby, and is placed in the public
// domain. The author hereby disclaims copyright to this source code.
// Note - The x86 and x64 versions do _not_ produce the same results, as the
// algorithms are optimized for their respective platforms. You can still
// compile and run any of them on any platform, but your performance with the
// non-native version will be less than optimal.
#include "MurmurHash3.h"
//-----------------------------------------------------------------------------
// Platform-specific functions and macros
// Microsoft Visual Studio
#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#include <stdlib.h>
#define ROTL32(x,y) _rotl(x,y)
#define ROTL64(x,y) _rotl64(x,y)
#define BIG_CONSTANT(x) (x)
// Other compilers
#else // defined(_MSC_VER)
#if defined(__GNUC__) && ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
/* gcc version >= 4.4 4.1 = RHEL 5, 4.4 = RHEL 6.
 * Don't inline for RHEL 5 gcc which is 4.1 */
#define FORCE_INLINE __attribute__((always_inline))
#else
#define FORCE_INLINE
#endif
inline uint32_t rotl32 ( uint32_t x, int8_t r )
{
return (x << r) | (x >> (32 - r));
}
inline uint64_t rotl64 ( uint64_t x, int8_t r )
{
return (x << r) | (x >> (64 - r));
}
#define ROTL32(x,y) rotl32(x,y)
#define ROTL64(x,y) rotl64(x,y)
#define BIG_CONSTANT(x) (x##LLU)
#endif // !defined(_MSC_VER)
//-----------------------------------------------------------------------------
// Block read - if your platform needs to do endian-swapping or can only
// handle aligned reads, do the conversion here
FORCE_INLINE uint32_t getblock ( const uint32_t * p, int i )
{
return p[i];
}
FORCE_INLINE uint64_t getblock ( const uint64_t * p, int i )
{
return p[i];
}
//-----------------------------------------------------------------------------
// Finalization mix - force all bits of a hash block to avalanche
FORCE_INLINE uint32_t fmix ( uint32_t h )
{
h ^= h >> 16;
h *= 0x85ebca6b;
h ^= h >> 13;
h *= 0xc2b2ae35;
h ^= h >> 16;
return h;
}
//----------
FORCE_INLINE uint64_t fmix ( uint64_t k )
{
k ^= k >> 33;
k *= BIG_CONSTANT(0xff51afd7ed558ccd);
k ^= k >> 33;
k *= BIG_CONSTANT(0xc4ceb9fe1a85ec53);
k ^= k >> 33;
return k;
}
//-----------------------------------------------------------------------------
void MurmurHash3_x86_32 ( const void * key, int len,
uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 4;
uint32_t h1 = seed;
uint32_t c1 = 0xcc9e2d51;
uint32_t c2 = 0x1b873593;
//----------
// body
const uint32_t * blocks = (const uint32_t *)(data + nblocks*4);
for(int i = -nblocks; i; i++)
{
uint32_t k1 = getblock(blocks,i);
k1 *= c1;
k1 = ROTL32(k1,15);
k1 *= c2;
h1 ^= k1;
h1 = ROTL32(h1,13);
h1 = h1*5+0xe6546b64;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*4);
uint32_t k1 = 0;
switch(len & 3)
{
case 3: k1 ^= tail[2] << 16;
case 2: k1 ^= tail[1] << 8;
case 1: k1 ^= tail[0];
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len;
h1 = fmix(h1);
*(uint32_t*)out = h1;
}
//-----------------------------------------------------------------------------
void MurmurHash3_x86_128 ( const void * key, const int len,
uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 16;
uint32_t h1 = seed;
uint32_t h2 = seed;
uint32_t h3 = seed;
uint32_t h4 = seed;
uint32_t c1 = 0x239b961b;
uint32_t c2 = 0xab0e9789;
uint32_t c3 = 0x38b34ae5;
uint32_t c4 = 0xa1e38b93;
//----------
// body
const uint32_t * blocks = (const uint32_t *)(data + nblocks*16);
for(int i = -nblocks; i; i++)
{
uint32_t k1 = getblock(blocks,i*4+0);
uint32_t k2 = getblock(blocks,i*4+1);
uint32_t k3 = getblock(blocks,i*4+2);
uint32_t k4 = getblock(blocks,i*4+3);
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
h1 = ROTL32(h1,19); h1 += h2; h1 = h1*5+0x561ccd1b;
k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2;
h2 = ROTL32(h2,17); h2 += h3; h2 = h2*5+0x0bcaa747;
k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3;
h3 = ROTL32(h3,15); h3 += h4; h3 = h3*5+0x96cd1c35;
k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4;
h4 = ROTL32(h4,13); h4 += h1; h4 = h4*5+0x32ac3b17;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*16);
uint32_t k1 = 0;
uint32_t k2 = 0;
uint32_t k3 = 0;
uint32_t k4 = 0;
switch(len & 15)
{
case 15: k4 ^= tail[14] << 16;
case 14: k4 ^= tail[13] << 8;
case 13: k4 ^= tail[12] << 0;
k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4;
case 12: k3 ^= tail[11] << 24;
case 11: k3 ^= tail[10] << 16;
case 10: k3 ^= tail[ 9] << 8;
case 9: k3 ^= tail[ 8] << 0;
k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3;
case 8: k2 ^= tail[ 7] << 24;
case 7: k2 ^= tail[ 6] << 16;
case 6: k2 ^= tail[ 5] << 8;
case 5: k2 ^= tail[ 4] << 0;
k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2;
case 4: k1 ^= tail[ 3] << 24;
case 3: k1 ^= tail[ 2] << 16;
case 2: k1 ^= tail[ 1] << 8;
case 1: k1 ^= tail[ 0] << 0;
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
h1 = fmix(h1);
h2 = fmix(h2);
h3 = fmix(h3);
h4 = fmix(h4);
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
((uint32_t*)out)[0] = h1;
((uint32_t*)out)[1] = h2;
((uint32_t*)out)[2] = h3;
((uint32_t*)out)[3] = h4;
}
//-----------------------------------------------------------------------------
void MurmurHash3_x64_128 ( const void * key, const int len,
const uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 16;
uint64_t h1 = seed;
uint64_t h2 = seed;
uint64_t c1 = BIG_CONSTANT(0x87c37b91114253d5);
uint64_t c2 = BIG_CONSTANT(0x4cf5ad432745937f);
//----------
// body
const uint64_t * blocks = (const uint64_t *)(data);
for(int i = 0; i < nblocks; i++)
{
uint64_t k1 = getblock(blocks,i*2+0);
uint64_t k2 = getblock(blocks,i*2+1);
k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1;
h1 = ROTL64(h1,27); h1 += h2; h1 = h1*5+0x52dce729;
k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2;
h2 = ROTL64(h2,31); h2 += h1; h2 = h2*5+0x38495ab5;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*16);
uint64_t k1 = 0;
uint64_t k2 = 0;
switch(len & 15)
{
case 15: k2 ^= uint64_t(tail[14]) << 48;
case 14: k2 ^= uint64_t(tail[13]) << 40;
case 13: k2 ^= uint64_t(tail[12]) << 32;
case 12: k2 ^= uint64_t(tail[11]) << 24;
case 11: k2 ^= uint64_t(tail[10]) << 16;
case 10: k2 ^= uint64_t(tail[ 9]) << 8;
case 9: k2 ^= uint64_t(tail[ 8]) << 0;
k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2;
case 8: k1 ^= uint64_t(tail[ 7]) << 56;
case 7: k1 ^= uint64_t(tail[ 6]) << 48;
case 6: k1 ^= uint64_t(tail[ 5]) << 40;
case 5: k1 ^= uint64_t(tail[ 4]) << 32;
case 4: k1 ^= uint64_t(tail[ 3]) << 24;
case 3: k1 ^= uint64_t(tail[ 2]) << 16;
case 2: k1 ^= uint64_t(tail[ 1]) << 8;
case 1: k1 ^= uint64_t(tail[ 0]) << 0;
k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len; h2 ^= len;
h1 += h2;
h2 += h1;
h1 = fmix(h1);
h2 = fmix(h2);
h1 += h2;
h2 += h1;
((uint64_t*)out)[0] = h1;
((uint64_t*)out)[1] = h2;
}
//-----------------------------------------------------------------------------
</code></pre>
<p>Let me explain that a bit more. </p>
<p><code>from sklearn.feature_extraction._hashing import transform</code>
makes use of this code <code>https://github.com/scikit-learn/scikit-learn/blob/412996f09b6756752dfd3736c306d46fca8f1aa1/sklearn/feature_extraction/_hashing.pyx</code>
which makes use of this
<code>from sklearn.utils.murmurhash cimport murmurhash3_bytes_s32</code>
which in turn makes use of this
<code>https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/murmurhash.pyx</code>
which is built on this
<a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/src/MurmurHash3.cpp" rel="nofollow">https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/src/MurmurHash3.cpp</a> . <strong>So, MurmurHash3.cpp is very important. I need a Javascript version of this exact MurmurHash3.cpp such that the Javascript code and MurmurHash3.cpp would produce the same result</strong>.</p>
<p>I need this because I want to make some of my machine learning tools available online and the hashing needs to be done on the client's web browser.</p>
<p>So far, I have found a few Javascript implementations of MurmurHash3. However, murmurhash3.js <code>https://github.com/whitequark/murmurhash3-js/blob/master/murmurhash3.js</code> seems to be closest (in terms of codes structure) to the MurmurHash3.cpp used by sklearn. But I still d not get the same hash from both of them. </p>
<p>Can anyone please guide me on how to modify <code>murmurhash3.js</code> (above) so that it produces the same hash as <code>MurmurHash3.cpp</code> (above) does?</p>
| -1 | 2016-10-03T16:52:11Z | 39,856,874 | <p>Based on suggestions from @ChristopherOicles, I changed my <code>Javascript</code> code (my the header off my HTML code) to use <code>hashBytes</code> instead of <code>hashString</code> as shown below. I also noticed that I needed to change the returned value of <code>hashBytes</code> to its absolute value for my purpose (so I did). These solve my problems and now I get the same hash from the Python/C++ codes and from the Javascript codes.</p>
<p><strong>Modified Javascript Function in my HTML file</strong></p>
<pre><code><script>
    function call_murmurhash3_32_gc () {
        var key = $('textarea#textarea1').val();
        var seed = 0;
        var hash = MurmurHash3.hashBytes (key, key.length, seed);
        $('div#div1').text(Math.abs (hash) % 262144);
    }
</script>
</code></pre>
<p>My complete solution is here <a href="https://jsbin.com/qilokot/edit?html,js,output" rel="nofollow">https://jsbin.com/qilokot/edit?html,js,output</a>
.</p>
<p>Again, many thanks to <code>Christopher Oicles</code> and to everyone who tried to help me in some ways. </p>
| 1 | 2016-10-04T15:58:15Z | [
"javascript",
"python",
"c++",
"scikit-learn",
"murmurhash"
]
|
Parsing UDP Packets | 39,836,641 | <p>I am building a UDP server to parse and verify incoming UDP packets. I am able to receive and parse packets but the header values are not what I expected. </p>
<p>This is structure of incoming packet</p>
<p>Packet ID ( 4 bytes )<br>
Packet Sequence ( 4 bytes )<br>
XOR Key ( 2 bytes )<br>
Number of Checksums in packet ( 2 bytes )<br>
Cyclic checksum CRC32 (variable)</p>
<p>To send the packet, </p>
<pre><code>with open('payloadfile.bin') as op:
    payload = pickle.load(op)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in payload:
    sock.sentto(payload, ('127.0.0.1', 4545))
</code></pre>
<p>To receive and parse this packet</p>
<pre><code>sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind('127.0.0.1', 4545)

while 1:
    packet = sock.recvfrom(65565)
    packet = packet[0]

    # parse IP
    ip_header = packet[0:20]
    iph = struct.unpack('!BBHHHBBH4s4s', ip_header)

    # all the following values are incorrect
    version_ihl = iph[0]
    version = version_ihl >> 4
    ihl = version_ihl & 0xF
    ttl = iph[5]
    protocol = iph[6]
    s_addr = socket.inet_ntoa(iph[8]);
    d_addr = socket.inet_ntoa(iph[9]);

    # parse UDP
    packet = packet[20:28]
    data = packet[header_length:]
    source_port, dest_port, data_length, checksum = struct.unpack("!HHHH", header)
</code></pre>
<p>From what I understand so far, this should be the general structure<br>
IP_HEADER ( UDP_HEADER ( PAYLOAD ) )</p>
<p>I want to parse the headers correctly, and then extract payload.<br>
I would really appreciate if some one could point me to the right direction</p>
| 2 | 2016-10-03T16:55:05Z | 39,836,730 | <p>Unfortunately the standard socket interface doesn't give you access to the data frames that your data arrive in, neither does it include the IP Datagram headers nor the TCP/UDP headers from the transport layer.</p>
<p>To get hold of lower-level data you are forced to use the so-called <em>raw socket interface</em>, which Windows for one tries to block you from using because you might be a hacker. <a href="http://www.binarytides.com/raw-socket-programming-in-python-linux/" rel="nofollow">This article</a> might give you some clues.</p>
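To make the answer's point concrete: with a plain <code>SOCK_DGRAM</code> socket, <code>recvfrom</code> hands you only the UDP payload, so slicing 20 bytes of "IP header" off it just mangles your own data. If you do obtain raw transport-layer bytes (for example via a raw socket), the fixed 8-byte UDP header parses as in this minimal sketch. The function name and the synthetic segment are my own, not from the question:

```python
import struct

def parse_udp_segment(segment):
    """Split a raw transport-layer segment into its UDP header fields and payload."""
    # UDP header: source port, dest port, length, checksum --
    # four unsigned shorts in network (big-endian) byte order.
    source_port, dest_port, length, checksum = struct.unpack('!HHHH', segment[:8])
    return (source_port, dest_port, length, checksum), segment[8:]

# Build a synthetic segment instead of sniffing one: header + 5-byte payload.
payload = b'hello'
header = struct.pack('!HHHH', 44061, 4545, 8 + len(payload), 0)
fields, data = parse_udp_segment(header + payload)
print(fields, data)  # (44061, 4545, 13, 0) b'hello'
```

The same `'!HHHH'` format string is what the question's last line already uses; the difference is only where the 8 header bytes come from.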
| 5 | 2016-10-03T17:01:03Z | [
"python",
"python-2.7",
"sockets",
"networking",
"udp"
]
|
How to do manual reload of file in iPython shell | 39,836,702 | <p>I have a file called sub.py, and I want to be able to call functions in it from the iPython shell. The iPython autoreload functionality has not been working very well, though. Sometimes it detects changes, sometimes it doesn't. </p>
<p>Instead of debugging autoreload, I was wondering if there's a way to just manually reload, or unload and load, modules in iPython. Currently I terminate the shell, start it again, re-import my module, and go from there. It would be great to be able to do a manual reload without killing the iPython shell. </p>
| 0 | 2016-10-03T16:58:34Z | 39,836,981 | <p>I find my homebrewed <code>%reimport</code> to be very useful in this context:</p>
<pre><code>def makemagic(f):
name = f.__name__
if name.startswith('magic_'): name = name[6:]
def wrapped(throwaway, *pargs, **kwargs): return f(*pargs,**kwargs)
if hasattr(f, '__doc__'): wrapped.__doc__ = f.__doc__
get_ipython().define_magic(name, wrapped)
return f
@makemagic
def magic_reimport(dd):
"""
The syntax
%reimport foo, bar.*
is a shortcut for the following:
import foo; foo = reload(foo)
import bar; bar = reload(bar); from bar import *
"""
ipython = get_ipython().user_ns
for d in dd.replace(',', ' ').split(' '):
if len(d):
bare = d.endswith('.*')
if bare: d = d[:-2]
exec('import xx; xx = reload(xx)'.replace('xx', d), ipython)
if bare: exec('from xx import *'.replace('xx', d), ipython)
</code></pre>
<p>One gotcha is that, when there are sub-modules of packages involved, you have to <code>reimport</code> the sub-module, and <em>then</em> the top-level package:</p>
<pre><code>%reimport foo.bar, foo
</code></pre>
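For completeness: on Python 3 the standard library already covers the manual case with <code>importlib.reload</code>, no custom magic required. A self-contained sketch follows; it fabricates a throwaway <code>sub.py</code> on disk purely so the reload can be demonstrated end to end:

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # keep stale .pyc files from confusing the demo

# Fabricate a module file, standing in for your real sub.py.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'sub.py'), 'w') as f:
    f.write('ANSWER = 1\n')
sys.path.insert(0, moddir)

import sub
before = sub.ANSWER

# Edit the file on disk, as you would in your editor while IPython stays open...
with open(os.path.join(moddir, 'sub.py'), 'w') as f:
    f.write('ANSWER = 2\n')

# ...then pick up the change without restarting the shell.
importlib.reload(sub)
after = sub.ANSWER
print(before, after)  # 1 2
```

Inside an IPython session you would simply run `import importlib; importlib.reload(sub)` after saving your edits.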
| 1 | 2016-10-03T17:18:27Z | [
"python",
"ipython"
]
|
How does one add an item to GTK's "recently used" file list from Python? | 39,836,725 | <p>I'm trying to add to the "recently used" files list from Python 3 on Ubuntu.</p>
<p>I am able to successfully <em>read</em> the recently used file list like this:</p>
<pre><code>from gi.repository import Gtk
recent_mgr = Gtk.RecentManager.get_default()
for item in recent_mgr.get_items():
print(item.get_uri())
</code></pre>
<p>This prints out the same list of files I see when I look at "Recent" in Nautilus, or look at the "Recently Used" place in the file dialog of apps like GIMP.</p>
<p>However, when I tried adding an item like this (where <code>/home/laurence/foo/bar.txt</code> is an existing text file)...</p>
<pre><code>recent_mgr.add_item('file:///home/laurence/foo/bar.txt')
</code></pre>
<p>...the file does not show up in the Recent section of Nautilus or in file dialogs. It doesn't even show up in the results returned by <code>get_items()</code>.</p>
<p>How can I add a file to GTK's recently used file list from Python?</p>
| 12 | 2016-10-03T17:00:54Z | 39,927,261 | <p>A <code>Gtk.RecentManager</code> needs to emit the <code>changed</code> signal for the update to be written in a private attribute of the C++ class. To use a <code>RecentManager</code> object in an application, you need to start the event loop by calling <code>Gtk.main</code>:</p>
<pre><code>from gi.repository import Gtk
recent_mgr = Gtk.RecentManager.get_default()
uri = r'file:/path/to/my/file'
recent_mgr.add_item(uri)
Gtk.main()
</code></pre>
<p>If you don't call <code>Gtk.main()</code>, the <code>changed</code> signal is not emitted and nothing happens.</p>
<p>To answer @andlabs query, the reason why <code>RecentManager.add_item</code> returns a boolean is because the <code>g_file_query_info_async</code> function is called. The callback function <code>gtk_recent_manager_add_item_query_info</code> then gathers the mimetype, application name and command into a <code>GtkRecentData</code> struct and finally calls <code>gtk_recent_manager_add_full</code>. The source is <a href="https://github.com/GNOME/gtk/blob/master/gtk/gtkrecentmanager.c">here</a>.</p>
<p>If anything goes wrong, it is well after <code>add_item</code> has finished, so the method just returns <code>True</code> if the object it is called from is a <code>RecentManager</code> and if the uri is not <code>NULL</code>; and <code>False</code> otherwise. </p>
<p>The documentation is inaccurate in saying:</p>
<blockquote>
<p>Returns</p>
<p>TRUE if the new item was successfully added to the recently used resources list</p>
</blockquote>
<p>as returning <code>TRUE</code> only means that an asynchronous function was called to deal with the addition of a new item.</p>
| 10 | 2016-10-07T23:53:51Z | [
"python",
"gtk",
"pygtk",
"gtk3"
]
|
Sampling from a Computed Multivariate kernel density estimation | 39,836,779 | <p>Say I have X and Y coordinates on a map and a non-parametric distribution of "hot zones" (e.g. degree of pollution on a geographic map positioned at X and Y coordinates). My input data are heat maps.</p>
<p>I want to train a machine learning model that learns what a "hot zone" looks like, but I don't have a lot of labeled examples. All "hot zones" look pretty similar, but may be in different parts of my standardized XY coordinate map.</p>
<p>I can calculate a multivariate KDE and plot the density maps accordingly. To generate synthetic labeled data, can I "reverse" the KDE and randomly generate new image files with observations that fall within my KDE's "dense" range?</p>
<p>Is there any way to do this in python?</p>
| 0 | 2016-10-03T17:04:16Z | 39,840,955 | <p>There are at least 3 high-quality kernel-density estimation implementations available for python:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html" rel="nofollow">scipy</a></li>
<li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KernelDensity.html#sklearn.neighbors.KernelDensity" rel="nofollow">scikit-learn</a></li>
<li><a href="http://statsmodels.sourceforge.net/0.6.0/nonparametric.html" rel="nofollow">statsmodels</a></li>
</ul>
<p>My personal ranking is <strong>statsmodels > scikit-learn > scipy (best to worst)</strong> but it will depend on your use-case.</p>
<p>Some random-remarks:</p>
<ul>
<li>scikit-learn offers sampling from a fitted KDE for free (<code>kde.sample(N)</code>)</li>
<li>scikit-learn offers good cross-validation functions based on grid-search or random-search (cross-validation is highly recommended)</li>
<li>statsmodels offers cross-validation methods based on optimization (can be slow for big datasets; but very high accuracy)</li>
</ul>
<p>There are much more differences and some of these were analyzed in this very good <a href="https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/" rel="nofollow">blog post</a> by <em>Jake VanderPlas</em>. The following table is an excerpt from this post:</p>
<p><a href="http://i.stack.imgur.com/BXVWZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/BXVWZ.png" alt="From: https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/ (Jake VanderPlas)"></a>
<sup><i> From: <a href="https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/" rel="nofollow">https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/</a> (Author: Jake VanderPlas) </i></sup></p>
<h3>Here is some example code using <strong>scikit-learn</strong>:</h3>
<pre><code>from sklearn.datasets import make_blobs
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
import numpy as np
# Create test-data
data_x, data_y = make_blobs(n_samples=100, n_features=2, centers=7, cluster_std=0.5, random_state=0)
# Fit KDE (cross-validation used!)
params = {'bandwidth': np.logspace(-1, 2, 30)}
grid = GridSearchCV(KernelDensity(), params)
grid.fit(data_x)
kde = grid.best_estimator_
bandwidth = grid.best_params_['bandwidth']
# Resample
N_POINTS_RESAMPLE = 1000
resampled = kde.sample(N_POINTS_RESAMPLE)
# Plot original data vs. resampled
fig, axs = plt.subplots(2, 2, sharex=True, sharey=True)
for i in range(100):
axs[0,0].scatter(*data_x[i])
axs[0,1].hexbin(data_x[:, 0], data_x[:, 1], gridsize=20)
for i in range(N_POINTS_RESAMPLE):
axs[1,0].scatter(*resampled[i])
axs[1,1].hexbin(resampled[:, 0], resampled[:, 1], gridsize=20)
plt.show()
</code></pre>
<h3>Output:</h3>
<p><a href="http://i.imgur.com/5SAvrZl.png" rel="nofollow"><img src="http://i.imgur.com/5SAvrZl.png" alt="enter image description here"></a></p>
| 1 | 2016-10-03T21:50:37Z | [
"python",
"machine-learning",
"statistics",
"kernel-density"
]
|
Automate file downloading using a chrome extension | 39,836,893 | <p>I have a .csv file with a list of URLs I need to extract data from. I need to automate the following process: (1) Go to a URL in the file. (2) Click the chrome extension that will redirect me to another page which displays some of the URL's stats. (3) Click the link in the stats page that enables me to download the data as a .csv file. (4) Save the .csv. (5) Repeat for the next n URLs.</p>
<p>Any idea how to do this? Any help greatly appreciated!</p>
| 0 | 2016-10-03T17:11:54Z | 39,837,450 | <p>There is a Python package called <code>mechanize</code> that helps you automate processes that can be done in a browser, so check it out. I think mechanize should give you all the tools required to solve the problem.</p>
| 0 | 2016-10-03T17:46:19Z | [
"python",
"automation",
"imacros"
]
|
How to draw a precision-recall curve with interpolation in python? | 39,836,953 | <p>I have drawn a precision-recall curve using the <code>sklearn</code> <code>precision_recall_curve</code> function and the <code>matplotlib</code> package. For those of you who are familiar with precision-recall curves, you know that some scientific communities only accept them when interpolated, similar to this example <a href="http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html" rel="nofollow">here</a>. Now my question is if any of you know how to do the interpolation in python? I have been searching for a solution for a while now but with no success! Any help would be greatly appreciated. </p>
<p><strong>Solution:</strong> Both solutions by @francis and @ali_m are correct and together solved my problem. So, assuming that you get an output from the <code>precision_recall_curve</code> function in <code>sklearn</code>, here is what I did to plot the graph:</p>
<pre><code> precision["micro"], recall["micro"], _ = precision_recall_curve(y_test.ravel(),scores.ravel())
pr = copy.deepcopy(precision[0])
rec = copy.deepcopy(recall[0])
prInv = np.fliplr([pr])[0]
recInv = np.fliplr([rec])[0]
j = rec.shape[0]-2
while j>=0:
if prInv[j+1]>prInv[j]:
prInv[j]=prInv[j+1]
j=j-1
decreasing_max_precision = np.maximum.accumulate(prInv[::-1])[::-1]
plt.plot(recInv, decreasing_max_precision, marker= markers[mcounter], label=methodNames[countOfMethods]+': AUC={0:0.2f}'.format(average_precision[0]))
</code></pre>
<p>And these lines will plot the interpolated curves if you put them in a for loop and pass it the data of each method at each iteration. Note that this will not plot the non-interpolated precision-recall curves.</p>
| 4 | 2016-10-03T17:16:08Z | 39,838,609 | <p>A backward iteration can be performed to remove the increasing parts in <code>precision</code>. Then, vertical and horizontal lines can be plotted as specified in the answer of Bennett Brown to <a href="http://stackoverflow.com/questions/16930328/vertical-horizontal-lines-in-matplotlib">vertical & horizontal lines in matplotlib</a> . </p>
<p>Here is some sample code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
#just a dummy sample
recall=np.linspace(0.0,1.0,num=42)
precision=np.random.rand(42)*(1.-recall)
precision2=precision.copy()
i=recall.shape[0]-2
# interpolation...
while i>=0:
if precision[i+1]>precision[i]:
precision[i]=precision[i+1]
i=i-1
# plotting...
fig, ax = plt.subplots()
for i in range(recall.shape[0]-1):
ax.plot((recall[i],recall[i]),(precision[i],precision[i+1]),'k-',label='',color='red') #vertical
ax.plot((recall[i],recall[i+1]),(precision[i+1],precision[i+1]),'k-',label='',color='red') #horizontal
ax.plot(recall,precision2,'k--',color='blue')
#ax.legend()
ax.set_xlabel("recall")
ax.set_ylabel("precision")
plt.savefig('fig.jpg')
fig.show()
</code></pre>
<p>And here is a result:</p>
<p><a href="http://i.stack.imgur.com/pEi0e.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/pEi0e.jpg" alt="enter image description here"></a></p>
| 1 | 2016-10-03T19:00:34Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"precision-recall"
]
|
How to draw a precision-recall curve with interpolation in python? | 39,836,953 | <p>I have drawn a precision-recall curve using the <code>sklearn</code> <code>precision_recall_curve</code> function and the <code>matplotlib</code> package. For those of you who are familiar with precision-recall curves, you know that some scientific communities only accept them when interpolated, similar to this example <a href="http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html" rel="nofollow">here</a>. Now my question is if any of you know how to do the interpolation in python? I have been searching for a solution for a while now but with no success! Any help would be greatly appreciated. </p>
<p><strong>Solution:</strong> Both solutions by @francis and @ali_m are correct and together solved my problem. So, assuming that you get an output from the <code>precision_recall_curve</code> function in <code>sklearn</code>, here is what I did to plot the graph:</p>
<pre><code> precision["micro"], recall["micro"], _ = precision_recall_curve(y_test.ravel(),scores.ravel())
pr = copy.deepcopy(precision[0])
rec = copy.deepcopy(recall[0])
prInv = np.fliplr([pr])[0]
recInv = np.fliplr([rec])[0]
j = rec.shape[0]-2
while j>=0:
if prInv[j+1]>prInv[j]:
prInv[j]=prInv[j+1]
j=j-1
decreasing_max_precision = np.maximum.accumulate(prInv[::-1])[::-1]
plt.plot(recInv, decreasing_max_precision, marker= markers[mcounter], label=methodNames[countOfMethods]+': AUC={0:0.2f}'.format(average_precision[0]))
</code></pre>
<p>And these lines will plot the interpolated curves if you put them in a for loop and pass it the data of each method at each iteration. Note that this will not plot the non-interpolated precision-recall curves.</p>
| 4 | 2016-10-03T17:16:08Z | 39,862,264 | <p><strong>@francis's</strong> solution can be vectorized using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html" rel="nofollow"><code>np.maximum.accumulate</code></a>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
recall = np.linspace(0.0, 1.0, num=42)
precision = np.random.rand(42)*(1.-recall)
# take a running maximum over the reversed vector of precision values, reverse the
# result to match the order of the recall vector
decreasing_max_precision = np.maximum.accumulate(precision[::-1])[::-1]
</code></pre>
<p>You can also use <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.step" rel="nofollow"><code>plt.step</code></a> to get rid of the <code>for</code> loop used for plotting:</p>
<pre><code>fig, ax = plt.subplots(1, 1)
ax.hold(True)
ax.plot(recall, precision, '--b')
ax.step(recall, decreasing_max_precision, '-r')
</code></pre>
<p><a href="http://i.stack.imgur.com/sCnKo.png" rel="nofollow"><img src="http://i.stack.imgur.com/sCnKo.png" alt="enter image description here"></a></p>
| 2 | 2016-10-04T21:44:27Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"precision-recall"
]
|
Generate Series with column names of DataFrame that match condition | 39,837,029 | <p>I have a data frame with many columns containing true/false values. E. g.</p>
<pre><code>import pandas as pd
data = pd.DataFrame([[True, True, False],
[False, False, True],
[True, False, True],
[False, False, False],
[True, True, False]],
columns=['A','B','C'])
</code></pre>
<p>Actually there are many more than just those three columns.</p>
<p>I need to generate an additional column where each value is a list of the names of all columns where the value is true. For the example this should be:</p>
<pre><code>0 [A, B]
1 [C]
2 [A, C]
3 []
4 [A, B]
Name: X, dtype: object
</code></pre>
<p>Is there any magic trick in Pandas to achieve this without using nested loops (which is the only idea I had so far)?</p>
| 0 | 2016-10-03T17:20:44Z | 39,837,205 | <p>You can use <code>apply</code> method to loop through rows and use each row to subset the column names:</p>
<pre><code>data.apply(lambda r: data.columns[r].tolist(), axis = 1)
#0 [A, B]
#1 [C]
#2 [A, C]
#3 []
#4 [A, B]
#dtype: object
</code></pre>
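If you also want to attach the result as the new column <code>X</code> from the question, the same boolean indexing works row by row in a plain list comprehension. This is only a cross-check of the <code>apply</code> version, producing identical output:

```python
import pandas as pd

data = pd.DataFrame([[True, True, False],
                     [False, False, True],
                     [True, False, True],
                     [False, False, False],
                     [True, True, False]],
                    columns=['A', 'B', 'C'])

# Each row of .values is a boolean array that can index the column Index directly.
data['X'] = [data.columns[row].tolist() for row in data.values]
print(data['X'].tolist())  # [['A', 'B'], ['C'], ['A', 'C'], [], ['A', 'B']]
```

The comprehension is fully evaluated before the assignment, so adding `'X'` does not interfere with the boolean masks.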
| 1 | 2016-10-03T17:31:06Z | [
"python",
"pandas",
"dataframe"
]
|
How to calculate the total time a log file covers in Python 2.7? | 39,837,095 | <p>So I have several log files, they are structured like this:</p>
<pre><code>Sep 9 12:42:15 apollo sshd[25203]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=189.26.255.11
Sep 9 12:42:15 apollo sshd[25203]: pam_succeed_if(sshd:auth): error retrieving information about user ftpuser
Sep 9 12:42:17 apollo sshd[25203]: Failed password for invalid user ftpuser from 189.26.255.11 port 44061 ssh2
Sep 9 12:42:17 apollo sshd[25204]: Received disconnect from 189.26.255.11: 11: Bye Bye
Sep 9 19:12:46 apollo sshd[30349]: Did not receive identification string from 199.19.112.130
Sep 10 03:29:48 apollo unix_chkpwd[4549]: password check failed for user (root)
Sep 10 03:29:48 apollo sshd[4546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.12.29.170 user=root
Sep 10 03:29:51 apollo sshd[4546]: Failed password for root from 221.12.29.170 port 56907 ssh2
</code></pre>
<p>There are more dates and times, But this is an example. I was wondering how I would calculate the total time that the file covers. I've tried a few things, and have had about 5 hours of no success.</p>
<p>I tried this first, and it was close, but it didn't work like I wanted it to, it kept repeating dates:</p>
<pre><code>with open(filename, 'r') as file1:
lines = file1.readlines()
for line in lines:
linelist = line.split()
date2 = int(linelist[1])
time2 = linelist[2]
print linelist[0], linelist[1], linelist[2]
if date1 == 0:
date1 = date2
dates.append(linelist[0] + ' ' + str(linelist[1]))
if date1 < date2:
date1 = date2
ttimes.append(datetime.strptime(str(ltime1), FMT) - datetime.strptime(str(time1), FMT))
time1 = '23:59:59'
ltime1 = '00:00:00'
dates.append(linelist[0] + ' ' + str(linelist[1]))
if time2 < time1:
time1 = time2
if time2 > ltime1:
ltime1 = time2
</code></pre>
| 0 | 2016-10-03T17:24:56Z | 39,837,276 | <p>If the entries are in a chronological order, you can just look at the first and at the last entry:</p>
<pre><code>with open(filename) as file1:
    entries = file1.read().splitlines()
first_date = entries[0].split("apollo")[0]
last_date = entries[-1].split("apollo")[0]
</code></pre>
| 0 | 2016-10-03T17:35:08Z | [
"python"
]
|
How to calculate the total time a log file covers in Python 2.7? | 39,837,095 | <p>So I have several log files, they are structured like this:</p>
<pre><code>Sep 9 12:42:15 apollo sshd[25203]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=189.26.255.11
Sep 9 12:42:15 apollo sshd[25203]: pam_succeed_if(sshd:auth): error retrieving information about user ftpuser
Sep 9 12:42:17 apollo sshd[25203]: Failed password for invalid user ftpuser from 189.26.255.11 port 44061 ssh2
Sep 9 12:42:17 apollo sshd[25204]: Received disconnect from 189.26.255.11: 11: Bye Bye
Sep 9 19:12:46 apollo sshd[30349]: Did not receive identification string from 199.19.112.130
Sep 10 03:29:48 apollo unix_chkpwd[4549]: password check failed for user (root)
Sep 10 03:29:48 apollo sshd[4546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.12.29.170 user=root
Sep 10 03:29:51 apollo sshd[4546]: Failed password for root from 221.12.29.170 port 56907 ssh2
</code></pre>
<p>There are more dates and times, But this is an example. I was wondering how I would calculate the total time that the file covers. I've tried a few things, and have had about 5 hours of no success.</p>
<p>I tried this first, and it was close, but it didn't work like I wanted it to, it kept repeating dates:</p>
<pre><code>with open(filename, 'r') as file1:
lines = file1.readlines()
for line in lines:
linelist = line.split()
date2 = int(linelist[1])
time2 = linelist[2]
print linelist[0], linelist[1], linelist[2]
if date1 == 0:
date1 = date2
dates.append(linelist[0] + ' ' + str(linelist[1]))
if date1 < date2:
date1 = date2
ttimes.append(datetime.strptime(str(ltime1), FMT) - datetime.strptime(str(time1), FMT))
time1 = '23:59:59'
ltime1 = '00:00:00'
dates.append(linelist[0] + ' ' + str(linelist[1]))
if time2 < time1:
time1 = time2
if time2 > ltime1:
ltime1 = time2
</code></pre>
| 0 | 2016-10-03T17:24:56Z | 39,837,383 | <p>We don't have the year, so I took the current year. Read all the lines, convert the month to month index, and parse each date.</p>
<p>Then sort it (so works even if logs mixed) and take first & last item. Substract. Enjoy.</p>
<pre><code>from datetime import datetime
months = ["","Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
current_year = datetime.now().year
dates = list()
with open(filename, 'r') as file1:
for line in file1:
linelist = line.split()
if linelist: # filter out possible empty lines
linelist[0] = str(months.index(linelist[0])) # convert 3-letter months to index
date2 = int(linelist[1])
z=datetime.strptime(" ".join(linelist[0:3])+" "+str(current_year),"%m %d %H:%M:%S %Y") # compose & parse the date
dates.append(z) # store in list
dates.sort() # sort the list
first_date = dates[0]
last_date = dates[-1]
# print report & compute time span
print("start {}, end {}, time span {}".format(first_date,last_date,last_date-first_date))
</code></pre>
<p>result:</p>
<pre><code>start 2016-09-09 12:42:15, end 2016-09-10 03:29:51, time span 14:47:36
</code></pre>
<p>Note that it won't work properly between December 31st and January 1st because of the missing year info. I suppose we could make a guess: if we find both January & December in the log, assume that the January entries belong to the next year. Unsupported yet.</p>
| 0 | 2016-10-03T17:42:28Z | [
"python"
]
|
Building a Python script which runs Perl scripts. How to redirect stdout? | 39,837,139 | <p>I am writing a Python script that uses some Perl scripts, but one of them writes to stdout, so I must use a redirection <code>></code> in bash to write this output to a file.</p>
<p>All input and output files are text files.</p>
<pre><code># -*- coding: utf-8 -*-
import subprocess
filename = input("What the name of the input? ")
#STAGE 1----------------------------------------------------------------------
subprocess.Popen(["perl", "run_esearch.pl", filename , 'result'])
#STAGE 2----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink.pl", 'result'])
'''Here the input from stage one is "shrunk" to smaller file, but
the output is printed to the console. Is it possible to write this out
to a file in Python, so I can use it in stage 3? '''
#STAGE 3----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink2.pl", 'stdoutfromstage2'])
</code></pre>
| 1 | 2016-10-03T17:27:11Z | 39,838,479 | <p>Here is an example of how you can use <code>bash</code> to redirect output to a file <code>test.txt</code>:</p>
<pre><code>import subprocess
#STAGE 2----------------------------------------------------------------------
subprocess.call(['bash', '-c', 'echo Hello > test.txt'])  # call waits; Popen would not
#STAGE 3----------------------------------------------------------------------
subprocess.call(['perl', '-nE', 'say $_', 'test.txt'])
</code></pre>
| 0 | 2016-10-03T18:52:03Z | [
"python",
"bash",
"perl",
"subprocess",
"stdout"
]
|
Building a Python script which runs Perl scripts. How to redirect stdout? | 39,837,139 | <p>I am writing a Python script that uses some Perl scripts, but one of them writes to stdout, so I must use a redirection <code>></code> in bash to write this output to a file.</p>
<p>All input and output files are text files.</p>
<pre><code># -*- coding: utf-8 -*-
import subprocess
filename = input("What the name of the input? ")
#STAGE 1----------------------------------------------------------------------
subprocess.Popen(["perl", "run_esearch.pl", filename , 'result'])
#STAGE 2----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink.pl", 'result'])
'''Here the input from stage one is "shrunk" to smaller file, but
the output is printed to the console. Is it possible to write this out
to a file in Python, so I can use it in stage 3? '''
#STAGE 3----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink2.pl", 'stdoutfromstage2'])
</code></pre>
| 1 | 2016-10-03T17:27:11Z | 39,838,554 | <p>I would handle the file in Python:</p>
<pre><code>link = "stage2output"
subprocess.call(["perl", "run_esearch.pl", filename, "result"])
with open(link, "w") as f:
subprocess.call(["perl", "shrink.pl", "result"], stdout=f)
subprocess.call(["perl", "shrink2.pl", link])
</code></pre>
<p>On the off-chance that <code>shrink2.pl</code> can take a filename of <code>-</code> to read from standard input:</p>
<pre><code>subprocess.call(["perl", "run_esearch.pl", filename, "result"])
p2 = subprocess.Popen(["perl", "shrink.pl", "result"], stdout=subprocess.PIPE)
subprocess.call(["perl", "shrink2.pl", "-"], stdin=p2.stdout)
</code></pre>
| 1 | 2016-10-03T18:57:06Z | [
"python",
"bash",
"perl",
"subprocess",
"stdout"
]
|
Building a Python script which runs Perl scripts. How to redirect stdout? | 39,837,139 | <p>I am writing a Python script that uses some Perl scripts, but one of them writes to stdout, so I must use a redirection <code>></code> in bash to write this output to a file.</p>
<p>All input and output files are text files.</p>
<pre><code># -*- coding: utf-8 -*-
import subprocess
filename = input("What the name of the input? ")
#STAGE 1----------------------------------------------------------------------
subprocess.Popen(["perl", "run_esearch.pl", filename , 'result'])
#STAGE 2----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink.pl", 'result'])
'''Here the input from stage one is "shrunk" to smaller file, but
the output is printed to the console. Is it possible to write this out
to a file in Python, so I can use it in stage 3? '''
#STAGE 3----------------------------------------------------------------------
subprocess.Popen(["perl", "shrink2.pl", 'stdoutfromstage2'])
</code></pre>
| 1 | 2016-10-03T17:27:11Z | 39,842,384 | <p>As far as I can tell, you have three Perl programs</p>
<ul>
<li><p><code>run_esearch.pl</code>, which expects two command-line parameters: the name of the input file and the name of the output file</p></li>
<li><p><code>shrink.pl</code>, which expects a single command-line parameter: the name of the input file. It writes its output to <code>stdout</code></p></li>
<li><p><code>shrink2.pl</code>, which expects a single command-line parameter: the name of the input file. You don't say anything about its output</p></li>
</ul>
<p>The standard, and most flexible way to write Linux programs is to have them read from <code>stdin</code> and write to <code>stdout</code>. That way input and output files may be specified explicitly on the command line using <code><</code> and <code>></code> redirection, or the same program may be used to read and write to a pipe <code>|</code> as part of a chain. Perl programs have the best of both worlds. Using an empty <code><></code> to read the input will collect all data from files mentioned as command-line parameters, or will read from <code>stdin</code> if there are no parameters</p>
<p>I have no way of knowing which way your <em>shrink</em> programs treat their input, so I have to imagine the worst: that they explicitly open and read the file specified by the first parameter on the command line</p>
<p>Python's <code>subprocess</code> module provides the <code>Popen</code> constructor as well as several convenience functions. There is generally no need to use the constructor, especially if you are defaulting most of the parameters and discarding the returned object as you are</p>
<p>Since you're treating Python as a very high-level shell, I suggest that you pass shell command strings to <code>subprocess.call</code> with the <code>shell</code> parameter set to <code>True</code>. That will allow you to provide bash command strings, and you will be on more familiar ground and so feel more in control</p>
<pre class="lang-python prettyprint-override"><code>import subprocess
filename = input("What's the name of the input? ")
subprocess.call("perl run_esearch.pl %s result" % filename, shell=True)
subprocess.call("perl shrink.pl result > shrink1_out", shell=True)
subprocess.call("perl shrink2.pl shrink1_out", shell=True)
</code></pre>
<p>Note that this method is too risky to use in production code, as the response to <code>What the name of the input?</code> could contain malicious shell code that may compromise your system. But if the people using your program could just as easily destroy your system directly if they chose, then nothing is lost</p>
<p>Another issue is the use of fixed names for the intermediate files.
There is no guarantee that a separate independent process won't use a file with the same path, so in theory the process is insecure.
I followed your lead and used <code>result</code> for the output of <code>run_esearch.pl</code>, and invented <code>shrink1_out</code> for the output of <code>shrink.pl</code>, but a proper program would use the <code>tempfile</code> module and call <code>tempfile.NamedTemporaryFile</code> to create intermediate files that were guaranteed to be unique</p>
| 2 | 2016-10-04T00:19:42Z | [
"python",
"bash",
"perl",
"subprocess",
"stdout"
]
|
Changing a global variable in a function without using global keyword | 39,837,160 | <p><a href="http://i.stack.imgur.com/EQUph.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/EQUph.jpg" alt="screenshot of code from a textbook"></a></p>
<p>Here is my code:</p>
<pre><code>def L_value_Change(k):
global L
L = k
return L
def applyF_filterG(L, f, g):
L
k = []
for i in L:
if g(f(i)):
k.append(i)
L = k
L_value_Change(k)
if L == []:
return -1
else :
return max(L)
</code></pre>
<p>When I put this code, the grader tells me it's incorrect! So I read the introduction to the quiz, the instructor wrote "global variables don't work". How could I change the <code>L</code> variable with a function without using the word <code>global</code>? If you try my code and gave it the required input, it will give you a right answer, but the grader tells me it's wrong.</p>
| -1 | 2016-10-03T17:28:13Z | 39,837,413 | <p>You need the <code>global</code> keyword when you want to rebind the global variable to a different object. But you don't need it if all you want to do is change a mutable object. In your case <code>L</code> is a list and can be mutated in place with a slice operation <code>L[:] = k</code>. To demonstrate:</p>
<pre><code>>>> L = [1,2,3]
>>>
>>> def L_value_Change(k):
... L[:] = k
...
>>> old_id = id(L)
>>> L
[1, 2, 3]
>>> L_value_Change([4,5,6])
>>> assert id(L) == old_id
>>> L
[4, 5, 6]
>>>
</code></pre>
| 1 | 2016-10-03T17:44:05Z | [
"python",
"python-3.x"
]
|
Changing a global variable in a function without using global keyword | 39,837,160 | <p><a href="http://i.stack.imgur.com/EQUph.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/EQUph.jpg" alt="screenshot of code from a textbook"></a></p>
<p>Here is my code:</p>
<pre><code>def L_value_Change(k):
global L
L = k
return L
def applyF_filterG(L, f, g):
L
k = []
for i in L:
if g(f(i)):
k.append(i)
L = k
L_value_Change(k)
if L == []:
return -1
else :
return max(L)
</code></pre>
<p>When I put this code, the grader tells me it's incorrect! So I read the introduction to the quiz, the instructor wrote "global variables don't work". How could I change the <code>L</code> variable with a function without using the word <code>global</code>? If you try my code and gave it the required input, it will give you a right answer, but the grader tells me it's wrong.</p>
| -1 | 2016-10-03T17:28:13Z | 39,837,500 | <p>Lists are <strong>mutable</strong> objects, and as such, to change them all you have to do is just pass them as an argument to the function. </p>
<pre><code>def f(i):
return i + 2
def g(i):
return i > 5
l = [0, -10, 5, 6, -4]
def applyF_filterG(L, f, g):
for val in L[:]:
if not g(f(val)):
L.remove(val)
return -1 if not L else max(L)
print(l) # [0, -10, 5, 6, -4]
applyF_filterG(l, f, g) # Return 6
print(l) # [5, 6]
</code></pre>
| 1 | 2016-10-03T17:48:54Z | [
"python",
"python-3.x"
]
|
Changing a global variable in a function without using global keyword | 39,837,160 | <p><a href="http://i.stack.imgur.com/EQUph.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/EQUph.jpg" alt="screenshot of code from a textbook"></a></p>
<p>Here is my code:</p>
<pre><code>def L_value_Change(k):
global L
L = k
return L
def applyF_filterG(L, f, g):
L
k = []
for i in L:
if g(f(i)):
k.append(i)
L = k
L_value_Change(k)
if L == []:
return -1
else :
return max(L)
</code></pre>
<p>When I put this code, the grader tells me it's incorrect! So I read the introduction to the quiz, the instructor wrote "global variables don't work". How could I change the <code>L</code> variable with a function without using the word <code>global</code>? If you try my code and gave it the required input, it will give you a right answer, but the grader tells me it's wrong.</p>
| -1 | 2016-10-03T17:28:13Z | 39,845,975 | <p>Here's my code which avoids mutating on the list that you're trying to iterate upon. </p>
<pre><code>def applyF_filterG(L,f,g):
"""
Assumes L is a list of integers
Assume functions f and g are defined for you.
f takes in an integer, applies a function, returns another integer
g takes in an integer, applies a Boolean function,
returns either True or False
Mutates L such that, for each element i originally in L, L contains
i if g(f(i)) returns True, and no other elements
Returns the largest element in the mutated L or -1 if the list is empty
"""
# Applying the functions & Mutating L
i = len(L)-1
largestnum=0
if (len(L)==0):
return -1
else:
while i>=0:
if not g(f(L[i])):
del L[i]
i-=1
    #Finding the largest number
    if not L:
        return -1
    largestnum = L[0]
    for num in L:
        if num > largestnum:
            largestnum = num
    return largestnum
</code></pre>
| 0 | 2016-10-04T06:56:22Z | [
"python",
"python-3.x"
]
|
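A compact sketch tying the answers above together — slice assignment mutates the caller's list in place, so no <code>global</code> (and no helper like <code>L_value_Change</code>) is needed. The <code>f</code> and <code>g</code> used here are illustrative stand-ins.

```python
def apply_f_filter_g(L, f, g):
    # Slice assignment replaces the *contents*, not the name, so the
    # caller's list object is mutated in place (its id() is unchanged).
    L[:] = [i for i in L if g(f(i))]
    return -1 if not L else max(L)

nums = [0, -10, 5, 6, -4]
result = apply_f_filter_g(nums, lambda i: i + 2, lambda i: i > 5)
print(result, nums)   # 6 [5, 6]
```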
Given a positive int, finds out how many numbers # from 1 to n inclusive evenly divide n | 39,837,263 | <p>The goal is to "find out how many numbers # from 1 to n inclusive evenly divide n" when given a positive int. Here is the code I have so far</p>
<pre><code>def num_divisors(n):
for i in n:
if n >= 1:
answer = i // n
return answer
</code></pre>
<p>I'm currently getting the error <code>'int' object is not iterable</code> for all test cases and here they are for reference:</p>
<pre><code>Check that num_divisors( 1 ) returns 1
Check that num_divisors( 12 ) returns 6
Check that num_divisors( 100 ) returns 9
Check that num_divisors( 360 ) returns 24.
</code></pre>
| 0 | 2016-10-03T17:34:12Z | 39,837,291 | <pre><code>for i in range(n):
</code></pre>
<p>This loops over all values from 0 to n-1</p>
<p>EDIT: </p>
<p>Your code is still flawed, and has other errors. You should have an accumulator counting the number of values which divide <code>n</code></p>
| 0 | 2016-10-03T17:36:02Z | [
"python"
]
|
Given a positive int, finds out how many numbers # from 1 to n inclusive evenly divide n | 39,837,263 | <p>The goal is to "find out how many numbers # from 1 to n inclusive evenly divide n" when given a positive int. Here is the code I have so far</p>
<pre><code>def num_divisors(n):
for i in n:
if n >= 1:
answer = i // n
return answer
</code></pre>
<p>I'm currently getting the error <code>'int' object is not iterable</code> for all test cases and here they are for reference:</p>
<pre><code>Check that num_divisors( 1 ) returns 1
Check that num_divisors( 12 ) returns 6
Check that num_divisors( 100 ) returns 9
Check that num_divisors( 360 ) returns 24.
</code></pre>
| 0 | 2016-10-03T17:34:12Z | 39,837,388 | <p>The working code is following</p>
<pre><code>def num_divisors(n):
count_d = 0
for i in range(1, n+1):
if n % i == 0:
count_d += 1
return count_d
</code></pre>
| 0 | 2016-10-03T17:42:58Z | [
"python"
]
|
Given a positive int, finds out how many numbers # from 1 to n inclusive evenly divide n | 39,837,263 | <p>The goal is to "find out how many numbers # from 1 to n inclusive evenly divide n" when given a positive int. Here is the code I have so far</p>
<pre><code>def num_divisors(n):
for i in n:
if n >= 1:
answer = i // n
return answer
</code></pre>
<p>I'm currently getting the error <code>'int' object is not iterable</code> for all test cases and here they are for reference:</p>
<pre><code>Check that num_divisors( 1 ) returns 1
Check that num_divisors( 12 ) returns 6
Check that num_divisors( 100 ) returns 9
Check that num_divisors( 360 ) returns 24.
</code></pre>
| 0 | 2016-10-03T17:34:12Z | 39,837,453 | <p>Here is a one-liner that should do what you want:</p>
<pre><code>sum(n % i == 0 for i in range(1, n + 1))
</code></pre>
<p>Issues with your code:</p>
<ul>
<li>Trying to iterate over an int, instead of an iterator.</li>
<li>Floor-dividing <code>i</code> by <code>n</code>, which will always give you 0</li>
<li>Returning <code>answer</code> at the end of each loop</li>
</ul>
| 0 | 2016-10-03T17:46:26Z | [
"python"
]
|
Given a positive int, finds out how many numbers # from 1 to n inclusive evenly divide n | 39,837,263 | <p>The goal is to "find out how many numbers # from 1 to n inclusive evenly divide n" when given a positive int. Here is the code I have so far</p>
<pre><code>def num_divisors(n):
for i in n:
if n >= 1:
answer = i // n
return answer
</code></pre>
<p>I'm currently getting the error <code>'int' object is not iterable</code> for all test cases and here they are for reference:</p>
<pre><code>Check that num_divisors( 1 ) returns 1
Check that num_divisors( 12 ) returns 6
Check that num_divisors( 100 ) returns 9
Check that num_divisors( 360 ) returns 24.
</code></pre>
| 0 | 2016-10-03T17:34:12Z | 39,838,634 | <p>This code reduces the number of divisions to approx. sqrt(n). For example if 100 == 4 x 25, then also 100 == 25 x 4, so after testing 4 we don't have to test 25.</p>
<pre><code>def num_divisors(n):
num = 0
for i in range(1, n + 1):
d, m = divmod(n, i)
if d < i:
break
if m == 0:
# n = d * i
num += 1 if d == i else 2
return num
</code></pre>
| 0 | 2016-10-03T19:02:06Z | [
"python"
]
|
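The divisor-pairing idea in the last answer can also be written with <code>math.isqrt</code> (Python 3.8+); a hedged sketch, checked against the question's test cases:

```python
import math

def num_divisors(n):
    count = 0
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            # i and n // i form a divisor pair; count both unless equal
            count += 1 if i == n // i else 2
    return count

for n in (1, 12, 100, 360):
    print(n, num_divisors(n))   # 1 -> 1, 12 -> 6, 100 -> 9, 360 -> 24
```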
how to remove high frequency contents from the image for inverse fourier transform | 39,837,268 | <p>I saw a couple of documents explaining this in opencv, however my objective is to do this with numpy & scipy.</p>
<p>I guess I have to mask the outer region of the spectrum with some sort of circle, as I masked the center of the spectrum with 60x60 rectangle for the low frequency filtering. But I couldn't understand how. </p>
<p>I would like to learn how to remove high frequency components from the magnitude spectrum before taking inverse Fourier transform using numpy arrays. </p>
<p>I provided my codes for Fourier Transform and inverse Fourier transform (for removing low frequency components). My objective is to do the similar thing but this time I want to remove high frequency components to be able to observe the changes in the reconstructed image -just like I did for the inverse FT after removing low frequencies. </p>
<pre><code> import numpy as np
import scipy
import scipy.misc
import matplotlib.pyplot as plt
from scipy import ndimage
from PIL import Image
img = Image.open('gorkem.png').convert('L')
img.save('output_file.jpg')
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f) ## shift for centering 0.0 (x,y)
magnitude_spectrum = 20*np.log(np.abs(fshift))
plt.subplot(121),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
plt.show()
## removing low frequency contents by applying a 60x60 rectangle window (for masking)
rows = np.size(img, 0) #taking the size of the image
cols = np.size(img, 1)
crow, ccol = rows/2, cols/2
fshift[crow-30:crow+30, ccol-30:ccol+30] = 0
f_ishift= np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift) ## shift for centering 0.0 (x,y)
img_back = np.abs(img_back)
plt.subplot(131),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(img_back, cmap = 'gray')
plt.title('Image after removing low freq'), plt.xticks([]), plt.yticks([])
</code></pre>
<p><a href="http://i.stack.imgur.com/rluui.png" rel="nofollow"><img src="http://i.stack.imgur.com/rluui.png" alt="inverse Fourier transformed image"></a></p>
| 2 | 2016-10-03T17:34:39Z | 39,837,494 | <p>You can just subtract the image with low frequencies removed from your original image:</p>
<pre><code>original = np.copy(fshift)
fshift[crow-30:crow+30, ccol-30:ccol+30] = 0
f_ishift= np.fft.ifftshift(original - fshift)
</code></pre>
<p><a href="http://i.stack.imgur.com/VjoXu.png" rel="nofollow"><img src="http://i.stack.imgur.com/VjoXu.png" alt="enter image description here"></a></p>
| 1 | 2016-10-03T17:48:18Z | [
"python",
"numpy",
"fft"
]
|
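For the circular mask the question asks about, here is a hedged numpy-only sketch of a low-pass filter (a small random image stands in for the real one, and the radius is arbitrary):

```python
import numpy as np

def low_pass(img, radius):
    """Zero every frequency farther than `radius` from the spectrum
    centre, then transform back - high-frequency detail disappears."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    # Boolean circular mask centred on the zero-frequency component
    mask = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 <= radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = np.random.rand(32, 32)
smooth = low_pass(img, 8)    # keeps only the lowest frequencies
print(smooth.shape)          # (32, 32)
```

With a radius large enough to cover the whole spectrum, the reconstruction returns the original image, which is a handy sanity check.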
I would like to know how to iterate through df.column3 find match in df.column2 and add name of df.column1 based on matches to a new column df.column4 | 39,837,361 | <p>Test data looks like:
<code>
Column1 Column2 Column3
johnny 100 900
matty 300 100
grapy 400 300
snapp 500 300
</code></p>
<p>Expected Result:</p>
<p><code>
Column1 Column2 Column3 Column4
johnny 100 900 None
matty 300 100 johnny
grapy 400 300 matty
snapp 500 300 matty
</code></p>
<p>Attempted to run """df['Column1'][df['Column2'].isin(df['Column3'])"""
This partly worked but failed on some of the items that didnt match
Thanks for your help </p>
| 0 | 2016-10-03T17:41:09Z | 39,837,637 | <p>You could do this using a dictionnary of possible values :</p>
<pre><code>df = pd.DataFrame([['matty', 300, 900], ['grapy', 400, 300], ['snapp', 500, 300]], columns=['Column1', 'Column2', 'Column3'])
df = df.set_index('Column2')
dic = df['Column1'].to_dict()
df['Column4'] = [dic[n] if n in dic.keys() else None for n in df['Column3']]
df = df.reset_index()
df
Out[114]:
Column2 Column1 Column3 Column4
0 300 matty 900 None
1 400 grapy 300 matty
2 500 snapp 300 matty
</code></pre>
| 1 | 2016-10-03T17:57:55Z | [
"python",
"pandas"
]
|
I would like to know how to iterate through df.column3 find match in df.column2 and add name of df.column1 based on matches to a new column df.column4 | 39,837,361 | <p>Test data looks like:
<code>
Column1 Column2 Column3
johnny 100 900
matty 300 100
grapy 400 300
snapp 500 300
</code></p>
<p>Expected Result:</p>
<p><code>
Column1 Column2 Column3 Column4
johnny 100 900 None
matty 300 100 johnny
grapy 400 300 matty
snapp 500 300 matty
</code></p>
<p>Attempted to run """df['Column1'][df['Column2'].isin(df['Column3'])"""
This partly worked but failed on some of the items that didnt match
Thanks for your help </p>
| 0 | 2016-10-03T17:41:09Z | 39,838,539 | <p>If it doesn't work properly, it's because you have duplicate values. Build the lookup from <code>Column2</code> to <code>Column1</code> and map it over <code>Column3</code>: </p>
<pre><code>df['Column4'] = df.Column3.map(dict(zip(df.Column2, df.Column1)))
</code></pre>
| 0 | 2016-10-03T18:56:19Z | [
"python",
"pandas"
]
|
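Putting the answers above together: the lookup runs <code>Column2 → Column1</code> and is then applied over <code>Column3</code>. Note that unmatched rows come back as <code>NaN</code> rather than <code>None</code>.

```python
import pandas as pd

df = pd.DataFrame({"Column1": ["johnny", "matty", "grapy", "snapp"],
                   "Column2": [100, 300, 400, 500],
                   "Column3": [900, 100, 300, 300]})

# Map each Column3 value to the Column1 name of the row whose
# Column2 equals it; 900 has no match and becomes NaN.
lookup = dict(zip(df["Column2"], df["Column1"]))
df["Column4"] = df["Column3"].map(lookup)
print(df["Column4"].tolist())   # [nan, 'johnny', 'matty', 'matty']
```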
Python: Call a variable from a function to an if statement | 39,837,402 | <p>I am trying to practice my modular programming with this assignment. I found out how to call the variables into my main function, but for my if statement in evenOdd(), I am not sure how to call the value. I get the error 'randNum' is not defined.</p>
<pre><code>from random import randint
def genNumber():
randNum = randint(0,20000)
print("My number is " + str(randNum))
return (randNum)
def inputNum():
userNum = input("Please enter a number: ")
print("Your number is " + str(userNum))
return (userNum)
def evenOdd():
if randNum%2==0:
print("My number is even")
else:
print("My number is odd")
if userNum%2==0:
print("Your number is even")
else:
print("Your number is odd")
def main():
randNum = genNumber()
userNum = inputNum()
evenOdd()
input("Press ENTER to exit")
main()
</code></pre>
| 0 | 2016-10-03T17:43:32Z | 39,837,535 | <p>You can either define randNum in the global scope or just pass it as a variable to your evenOdd() function, like this evenOdd( randNum ).</p>
| 2 | 2016-10-03T17:50:39Z | [
"python",
"function",
"variables"
]
|
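Extending the answer above to both numbers — a sketch where <code>evenOdd</code> receives everything it needs as parameters and returns its lines, so nothing global is required:

```python
def even_odd(rand_num, user_num):
    lines = []
    for label, n in (("My", rand_num), ("Your", user_num)):
        parity = "even" if n % 2 == 0 else "odd"
        lines.append(f"{label} number is {parity}")
    return lines

for line in even_odd(4, 7):
    print(line)   # My number is even / Your number is odd
```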
How to return False when using issubset and an empty set | 39,837,416 | <p>When I have two sets e.g.</p>
<pre><code>s1 = set()
s2 = set(['somestring'])
</code></pre>
<p>and I do </p>
<pre><code>print s1.issubset(s2)
</code></pre>
<p>it returns <code>True</code>; so apparently, an empty set is always a subset of another set.</p>
<p>For my analysis, it should actually return <code>False</code> and I am wondering about the best way to do this. I can write a function like this:</p>
<pre><code>def check_set(s1, s2):
if s1 and s1.issubset(s2):
return True
return False
</code></pre>
<p>which then indeed returns <code>False</code> for the example above. Is there any better way of doing this?</p>
| 3 | 2016-10-03T17:44:18Z | 39,837,467 | <p>I would do that like this:</p>
<pre><code>s1 <= s2 if s1 else False
</code></pre>
<p>It should be faster, because it uses the built-in operators supported by sets rather than using more expensive function calls and attribute lookups. It's logically equivalent.</p>
| 4 | 2016-10-03T17:46:59Z | [
"python",
"optimization",
"set"
]
|
How to return False when using issubset and an empty set | 39,837,416 | <p>When I have two sets e.g.</p>
<pre><code>s1 = set()
s2 = set(['somestring'])
</code></pre>
<p>and I do </p>
<pre><code>print s1.issubset(s2)
</code></pre>
<p>it returns <code>True</code>; so apparently, an empty set is always a subset of another set.</p>
<p>For my analysis, it should actually return <code>False</code> and I am wondering about the best way to do this. I can write a function like this:</p>
<pre><code>def check_set(s1, s2):
if s1 and s1.issubset(s2):
return True
return False
</code></pre>
<p>which then indeed returns <code>False</code> for the example above. Is there any better way of doing this?</p>
| 3 | 2016-10-03T17:44:18Z | 39,837,492 | <p>Instead of using an <code>if</code> you can force the result to be a <code>bool</code> by doing this:</p>
<pre><code>def check_set(s1, s2):
return bool(s1 and s1.issubset(s2))
</code></pre>
| 2 | 2016-10-03T17:48:10Z | [
"python",
"optimization",
"set"
]
|
How to return False when using issubset and an empty set | 39,837,416 | <p>When I have two sets e.g.</p>
<pre><code>s1 = set()
s2 = set(['somestring'])
</code></pre>
<p>and I do </p>
<pre><code>print s1.issubset(s2)
</code></pre>
<p>it returns <code>True</code>; so apparently, an empty set is always a subset of another set.</p>
<p>For my analysis, it should actually return <code>False</code> and I am wondering about the best way to do this. I can write a function like this:</p>
<pre><code>def check_set(s1, s2):
if s1 and s1.issubset(s2):
return True
return False
</code></pre>
<p>which then indeed returns <code>False</code> for the example above. Is there any better way of doing this?</p>
| 3 | 2016-10-03T17:44:18Z | 39,837,506 | <p>Why not just return the value? That way, you avoid having to write <code>return True</code> or <code>return False</code>.</p>
<pre><code>def check_set(s1, s2):
return bool(s1 and s1.issubset(s2))
</code></pre>
| 1 | 2016-10-03T17:49:15Z | [
"python",
"optimization",
"set"
]
|
How to return False when using issubset and an empty set | 39,837,416 | <p>When I have two sets e.g.</p>
<pre><code>s1 = set()
s2 = set(['somestring'])
</code></pre>
<p>and I do </p>
<pre><code>print s1.issubset(s2)
</code></pre>
<p>it returns <code>True</code>; so apparently, an empty set is always a subset of another set.</p>
<p>For my analysis, it should actually return <code>False</code> and I am wondering about the best way to do this. I can write a function like this:</p>
<pre><code>def check_set(s1, s2):
if s1 and s1.issubset(s2):
return True
return False
</code></pre>
<p>which then indeed returns <code>False</code> for the example above. Is there any better way of doing this?</p>
| 3 | 2016-10-03T17:44:18Z | 39,837,564 | <p>Instead of using an empty set, you could use a set with an empty value:</p>
<p><code>s1 = set([''])</code> or <code>s1 = set([None])</code></p>
<p>Then your <code>print</code> statement would work as you expected.</p>
| -1 | 2016-10-03T17:52:38Z | [
"python",
"optimization",
"set"
]
|
How to return False when using issubset and an empty set | 39,837,416 | <p>When I have two sets e.g.</p>
<pre><code>s1 = set()
s2 = set(['somestring'])
</code></pre>
<p>and I do </p>
<pre><code>print s1.issubset(s2)
</code></pre>
<p>it returns <code>True</code>; so apparently, an empty set is always a subset of another set.</p>
<p>For my analysis, it should actually return <code>False</code> and I am wondering about the best way to do this. I can write a function like this:</p>
<pre><code>def check_set(s1, s2):
if s1 and s1.issubset(s2):
return True
return False
</code></pre>
<p>which then indeed returns <code>False</code> for the example above. Is there any better way of doing this?</p>
| 3 | 2016-10-03T17:44:18Z | 39,838,310 | <p>You can take advantage of how Python evaluates the truthiness of an object <strong>plus</strong> how it short-circuits boolean <code>and</code> expressions with:</p>
<pre><code>bool(s1) and s1 <= s2
</code></pre>
<p>Essentially this means: if <code>s1</code> is something not empty AND it's a subset of <code>s2</code></p>
| 2 | 2016-10-03T18:41:34Z | [
"python",
"optimization",
"set"
]
|
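A sketch consolidating the answers above: since an empty set is vacuously a subset of anything, rule it out first, and wrap with <code>bool()</code> so both branches return a boolean.

```python
def is_nonempty_subset(s1, s2):
    # bool(s1) is False for an empty set, short-circuiting the check
    return bool(s1) and s1 <= s2

print(is_nonempty_subset(set(), {"somestring"}))           # False
print(is_nonempty_subset({"somestring"}, {"somestring"}))  # True
```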
Got an extra line on python plot | 39,837,495 | <p>i'm using pyplot to show the FFT of the signal 'a', here the code:</p>
<pre><code>myFFT = numpy.fft.fft(a)
x = numpy.arange(len(a))
fig2 = plt.figure(2)
plt.plot(numpy.fft.fftfreq(x.shape[-1]), myFFT)
fig2.show()
</code></pre>
<p>and i get this figure
<a href="http://i.stack.imgur.com/21CFm.png" rel="nofollow"><img src="http://i.stack.imgur.com/21CFm.png" alt="enter image description here"></a></p>
<p>There is a line from the beginning to the end of the signal in the frequency domain. How can I remove this line? Am I doing something wrong with pyplot?</p>
| 1 | 2016-10-03T17:48:25Z | 39,837,748 | <p>Have a look at <code>numpy.fft.fftfreq(x.shape[-1])</code>: the frequencies come back in FFT order (zero and the positive frequencies first, then the negative ones), so halfway through the plot the x-value jumps from the largest positive frequency back to the most negative one, and the graph "makes a loop"</p>
<p>You can do <code>plt.plot(sorted(numpy.fft.fftfreq(x.shape[-1])),myFFT)</code> or <code>plt.plot(myFFT)</code></p>
| 0 | 2016-10-03T18:04:43Z | [
"python",
"numpy",
"matplotlib",
"fft"
]
|
Got an extra line on python plot | 39,837,495 | <p>i'm using pyplot to show the FFT of the signal 'a', here the code:</p>
<pre><code>myFFT = numpy.fft.fft(a)
x = numpy.arange(len(a))
fig2 = plt.figure(2)
plt.plot(numpy.fft.fftfreq(x.shape[-1]), myFFT)
fig2.show()
</code></pre>
<p>and i get this figure
<a href="http://i.stack.imgur.com/21CFm.png" rel="nofollow"><img src="http://i.stack.imgur.com/21CFm.png" alt="enter image description here"></a></p>
<p>There is a line from the beginning to the end of the signal in the frequency domain. How can I remove this line? Am I doing something wrong with pyplot?</p>
| 1 | 2016-10-03T17:48:25Z | 39,839,205 | <p>Instead of <code>sorted</code>, you might want to use <code>np.fft.fftshift</code> to center you 0th frequency, this deals properly with odd- and even-size signals. Most importantly, you need to apply the transform on both x and y vectors you are plotting.</p>
<pre><code>plt.plot(np.fft.fftshift(np.fft.fftfreq(x.shape[-1])), np.fft.fftshift(myFFT))
</code></pre>
<p>You might also want to display the amplitude or phase of the FFT (<code>np.abs</code> or <code>np.angle</code>) - as-is, you are just plotting the real-part.</p>
| 0 | 2016-10-03T19:41:34Z | [
"python",
"numpy",
"matplotlib",
"fft"
]
|
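The <code>fftshift</code> suggestion can be checked without plotting at all — after shifting, the frequency axis is strictly increasing, so a plotted curve can no longer double back on itself:

```python
import numpy as np

a = np.random.rand(64)
# Shift both the frequency axis and the spectrum consistently
freqs = np.fft.fftshift(np.fft.fftfreq(a.size))
spec = np.fft.fftshift(np.fft.fft(a))

print(freqs[0], freqs[-1])   # -0.5 up to just below +0.5
# plt.plot(freqs, np.abs(spec)) would now draw a single clean curve
```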
Converting List of Multidimensional Arrays to Single Multidimensional Array? | 39,837,546 | <p>Let Y be a list of 100 ndarrays, such that Y[i] is an ndarray of an image, its shape is 160x320x3.</p>
<p>I want X no be an ndarrays that contains all the images, I do as follows:</p>
<pre><code>x = [ y[i] for i in range(0,10) ]
</code></pre>
<p>But it produces a list of of 100 160X320X3 ndarrays. How can I modify it to get an ndarray of shape 100x160x320x3 ?</p>
| 0 | 2016-10-03T17:51:33Z | 39,837,652 | <p>Calling <code>np.array</code> on <code>Y</code> (i.e <code>np.array(Y)</code>) should turn the list of ndarrays into one ndarray, with the size of the first axis corresponding to the length of the list.</p>
<p><em>Demo</em>:</p>
<pre><code>>>> x = np.array([[1,2], [3,4]])
>>> c = [x,x] # list of 2x2 arrays
>>> c
[array([[1, 2],
[3, 4]]),
array([[1, 2],
[3, 4]])]
>>> np.array(c) # 2x2x2 array
array([[[1, 2],
[3, 4]],
[[1, 2],
[3, 4]]])
</code></pre>
<p>Just call <code>np.array</code> on <code>Y</code>, or <code>x</code> if you only want a slice of <code>Y</code>.</p>
| 2 | 2016-10-03T17:58:40Z | [
"python",
"numpy"
]
|
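The same stacking can be spelled <code>np.stack</code>, which is explicit about adding a new leading axis; a sketch with six small stand-in "images" (the idea carries over unchanged to 100 arrays of 160x320x3):

```python
import numpy as np

images = [np.zeros((4, 5, 3)) for _ in range(6)]
stacked = np.stack(images)   # equivalent to np.array(images) for equal shapes
print(stacked.shape)         # (6, 4, 5, 3)
```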
convert pgsql int array into python array | 39,837,711 | <p>I have numeric data (int) stored in pgsql as arrays. These are <code>x,y,w,h</code> for rectangles in an image e.g. <code>{(248,579),(1,85)}</code></p>
<p>When reading it into my Python code (using psycopg) I get it as a string. I am now trying to find the best way to obtain a Python list of ints from that string.
Is there a better way than to split the string on ',' and so on...</p>
<p>p.s. I did try <code>.astype(int)</code> construct but that wouldn't work in this instance.</p>
| 0 | 2016-10-03T18:02:43Z | 39,837,972 | <p>Assuming that you wouldn't be able to change the input format, you could remove any unneeded characters, then split on the <code>,</code>'s or do the opposite order.</p>
<pre><code>data = '{(248,579),(1,85)}'
data.translate(None, '{}()').split(',')
</code></pre>
<p>will get you a list of strings.</p>
<p>And</p>
<pre><code>[int(x) for x in data.translate(None, '{}()').split(',')]
</code></pre>
<p>will translate them to integers as well.</p>
| 1 | 2016-10-03T18:19:32Z | [
"python",
"arrays",
"postgresql",
"psycopg2"
]
|
convert pgsql int array into python array | 39,837,711 | <p>I have numeric data (int) stored in pgsql as arrays. These are <code>x,y,w,h</code> for rectangles in an image e.g. <code>{(248,579),(1,85)}</code></p>
<p>When reading it into my Python code (using psycopg) I get it as a string. I am now trying to find the best way to obtain a Python list of ints from that string.
Is there a better way than to split the string on ',' and so on...</p>
<p>p.s. I did try <code>.astype(int)</code> construct but that wouldn't work in this instance.</p>
| 0 | 2016-10-03T18:02:43Z | 39,839,059 | <p>If you mean the rectangle is stored in Postgresql as an array of points:</p>
<pre><code>query = '''
select array[(r[1])[0],(r[1])[1],(r[2])[0],(r[2])[1]]::int[]
from (values ('{"(248,579)","(1,85)"}'::point[])) s (r)
'''
cursor.execute(query)
print cursor.fetchone()[0]
</code></pre>
<p>Returns a Python int list</p>
<pre><code>[248, 579, 1, 85]
</code></pre>
<p>Notice that while a Postgresql array is base 1 by default, a <code>Point</code> coordinate is retrieved as a base 0 array.</p>
| 0 | 2016-10-03T19:31:32Z | [
"python",
"arrays",
"postgresql",
"psycopg2"
]
|
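If the value does arrive as the raw string, a small regex sketch avoids the manual stripping and splitting entirely:

```python
import re

raw = "{(248,579),(1,85)}"
# Pull every (possibly negative) integer out of the string
x, y, w, h = (int(n) for n in re.findall(r"-?\d+", raw))
print(x, y, w, h)   # 248 579 1 85
```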
Untraceable HTTP redirection? | 39,837,713 | <p>I'm currently working on a project to track products from several websites. I use a python scraper to retrieve all the URLs related to the listed products, and later, regularly check if these URLs are still active.</p>
<p>To do so I use the Python requests module, run a get request and look at the response's status code. Usually I get <strong>200</strong>, <strong>301</strong>, <strong>302</strong> or <strong>404</strong> as expected, except in the following case:</p>
<p><a href="http://www.sephora.fr/Parfum/Parfum-Femme/Totem-Orange-Eau-de-Toilette/P2232006" rel="nofollow">http://www.sephora.fr/Parfum/Parfum-Femme/Totem-Orange-Eau-de-Toilette/P2232006</a></p>
<p>This product has been removed and while opening the link (sorry it's in French), I am briefly shown a placeholder page saying the product is not available anymore and then redirected to the home page (www.sephora.fr).</p>
<p>Oddly, Python still returns a <strong>200</strong> status code and so do various redirect tracers such as wheregoes.com or redirectdetective.com. The worst part is that the response URL still is the original, so I can't even trace it that way.</p>
<p>When analyzing with Chrome DevTools and preserving the logs, I see that at some point the page is reloaded. However I'm unable to find out where. </p>
<p>I'm guessing this is done client-side via Javascript, but I'm not quite sure how. Furthermore, I'd really need to be able to detect this change from within Python.</p>
<p>As a reference, here's a link to a working product:</p>
<p><a href="http://www.sephora.fr/Parfum/Parfum-Femme/Kenzo-Jeu-d-Amour-Eau-de-Parfum/P1894014" rel="nofollow">http://www.sephora.fr/Parfum/Parfum-Femme/Kenzo-Jeu-d-Amour-Eau-de-Parfum/P1894014</a></p>
<p>Any leads?</p>
<p>Thank you !
Ludwig</p>
| 0 | 2016-10-03T18:02:45Z | 39,837,840 | <p>The page has a <a href="https://en.wikipedia.org/wiki/Meta_refresh" rel="nofollow">meta tag</a>, that redirects the page to the root URL:</p>
<pre class="lang-html prettyprint-override"><code><meta http-equiv="refresh" content="0; URL=/" />
</code></pre>
| 1 | 2016-10-03T18:10:25Z | [
"javascript",
"python",
"http",
"redirect"
]
|
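Since the redirect happens via that meta tag — which an HTTP client does not follow, hence the 200 status — the page body itself has to be inspected. A hedged regex sketch for detecting it:

```python
import re

# Matches e.g. <meta http-equiv="refresh" content="0; URL=/" />
META_REFRESH = re.compile(
    r"""<meta\s+http-equiv=["']refresh["']""", re.IGNORECASE)

def has_meta_refresh(html):
    return bool(META_REFRESH.search(html))

dead = '<html><head><meta http-equiv="refresh" content="0; URL=/" /></head></html>'
live = "<html><body>product page</body></html>"
print(has_meta_refresh(dead), has_meta_refresh(live))   # True False
```

In a scraper this check would run on <code>response.text</code> after a normal GET; a real-world version might also tolerate attribute reordering by using an HTML parser instead of a regex.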
How do I hide a word in a random number matrix? | 39,837,732 | <p>I am a Python beginner and am stuck here. I have written the following program that generates a matrix of random numbers.</p>
<pre><code>def randsq(size):
for i in range(size):
for j in range(size):
print(random.randint(0,9), end = '')
print()
</code></pre>
<p>Is there anyway I can add a function that takes a word from an input parameter and hides it diagonally within the matrix? So if the user inputs 'rain', the output looks like this:</p>
<pre><code>r123
1a45
23i4
989n
</code></pre>
<p>Thanks!</p>
| -1 | 2016-10-03T18:03:36Z | 39,839,244 | <p>Modified code:</p>
<pre class="lang-python prettyprint-override"><code>import random
def randsq(word):
size = len(word)
for i in range(size):
for j in range(size):
if i == j: # we are at a diagonal-value
print(word[i], end='') # it's the i-th diag -> choose i-th char
else:
print(random.randint(0, 9), end='')
print()
randsq('Rain')
</code></pre>
<p>Output:
</p>
<pre><code>R757
0a91
02i9
757n
</code></pre>
<p>If you want the user to input this word:</p>
<pre><code>word_to_hide = input('Which word to hide? (+enter) ') # will wait for user-input + enter
randsq(word_to_hide)
</code></pre>
| 1 | 2016-10-03T19:44:10Z | [
"python",
"algorithm",
"matrix",
"crossword"
]
|
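A variant of the accepted approach that builds the grid as data first instead of printing as it goes, which makes the hidden diagonal easy to verify:

```python
import random

def hidden_grid(word):
    size = len(word)
    # word[i] lands on the diagonal (i == j); everything else is random
    return [[word[i] if i == j else str(random.randint(0, 9))
             for j in range(size)]
            for i in range(size)]

grid = hidden_grid("rain")
print("\n".join("".join(row) for row in grid))
```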
Pandas to_sql() performance - why is it so slow? | 39,837,781 | <p>I am running into performance issues with Pandas and writing DataFrames to an SQL DB. In order to be as fast as possible I use <a href="http://www.memsql.com/" rel="nofollow">memSQL</a> (it's like MySQL in code, so I don't have to do anything). I benchmarked my instance just now:</p>
<pre><code>docker run --rm -it --link=memsql:memsql memsql/quickstart simple-benchmark
Creating database simple_benchmark
Warming up workload
Launching 10 workers
Workload will take approximately 30 seconds.
Stopping workload
42985000 rows inserted using 10 threads
1432833.3 rows per second
</code></pre>
<p>That isn't glorious, and it's just my local laptop. I know... I am also using the root user, but it's a throw-away Docker container.</p>
<p>Here is the code which writes my DataFrame to the DB:</p>
<pre><code> import MySQLdb
import mysql.connector
from sqlalchemy import create_engine
from pandas.util.testing import test_parallel
engine = create_engine('mysql+mysqlconnector://root@localhost:3306/netflow_test', echo=False)
# max_allowed_packet = 1000M in mysql.conf
# no effect
# @test_parallel(num_threads=8)
def commit_flows(netflow_df2):
% time netflow_df2.to_sql(name='netflow_ids', con=engine, if_exists = 'append', index=False, chunksize=500)
commit_flows(netflow_df2)
</code></pre>
<p>Below is the <code>%time</code> measurement of the function. </p>
<p><a href="https://www.continuum.io/content/pandas-releasing-gil" rel="nofollow">Multi-threading</a> does not make this faster. It remains within 7000 - 8000 rows/s. </p>
<blockquote>
<p>CPU times: user 2min 6s, sys: 1.69 s, total: 2min 8s Wall time: 2min
18s</p>
</blockquote>
<p>Screenshot:
<a href="http://i.stack.imgur.com/Atgdo.png" rel="nofollow"><img src="http://i.stack.imgur.com/Atgdo.png" alt="memSQL shows the speed"></a></p>
<p>I also increased the <code>max_allowed_packet</code> size to commit in bulk, with a larger chunk size. Still not faster. </p>
<p>Here is the shape of the DataFrame:</p>
<pre><code>netflow_df2.shape
(1015391, 20)
</code></pre>
<p>Does anyone know how I can make this faster?</p>
| 0 | 2016-10-03T18:06:45Z | 39,841,318 | <p>In case someone runs into a similar situation:</p>
<p>I removed SQLAlchemy and used the (deprecated) MySQL flavor for Pandas' <code>to_sql()</code> function. The speedup is more than 120 %. I don't recommend using this, but it works for me at the moment.</p>
<pre><code>import MySQLdb
import mysql.connector
from sqlalchemy import create_engine
from pandas.util.testing import test_parallel
engine = MySQLdb.connect("127.0.0.1","root","","netflow_test")
# engine = create_engine('mysql+mysqlconnector://root@localhost:3306/netflow_test', echo=False)
# @test_parallel(num_threads=8)
def commit_flows(netflow_df2):
% time netflow_df2.to_sql(name='netflow_ids', flavor='mysql', con=engine, if_exists = 'append', index=False, chunksize=50000)
commit_flows(netflow_df2)
</code></pre>
<p>If I find out how to convince memSQL to accept a large query (similar to MySQL's <code>max_allowed_packet = 1000M</code> in mysql.conf) I will be even faster. I should be able to hit more than 50000 rows per second here.</p>
<pre><code>CPU times: user 28.7 s, sys: 797 ms, total: 29.5 s
Wall time: 38.2 s
</code></pre>
<p>126s before. 38.2s now.</p>
| 0 | 2016-10-03T22:22:02Z | [
"python",
"performance",
"pandas",
"memsql"
]
|
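One reason per-row inserts crawl is per-statement overhead; batching many rows into a single INSERT is what the larger <code>chunksize</code> is trying to achieve. A hedged sketch of building one parametrised multi-row statement (the table and column names are made up; real code would hand <code>sql</code> and <code>params</code> to <code>cursor.execute</code> rather than interpolating values):

```python
def multi_row_insert(table, columns, rows):
    """Build one parametrised multi-row INSERT statement plus its
    flattened parameter list."""
    one_row = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = "INSERT INTO {} ({}) VALUES {}".format(
        table, ", ".join(columns), ", ".join([one_row] * len(rows)))
    params = [value for row in rows for value in row]
    return sql, params

sql, params = multi_row_insert("netflow_ids", ["src", "dst"], [(1, 2), (3, 4)])
print(sql)      # INSERT INTO netflow_ids (src, dst) VALUES (%s, %s), (%s, %s)
print(params)   # [1, 2, 3, 4]
```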
Simple Quiz - How Do I Link Variables? | 39,837,837 | <p>I'm stuck trying to figure out how to match the correct answer with the correct question. Right now if the user's answer is equal to any of the answers, it returns correct. Please help.</p>
<pre><code>easy_question = "The capital of West Virginia is __1__"
medium_question = "The device that amplifies a signal is an __2__"
hard_question = "A program takes in __3__ and produces output."
easy_answer = "Charleston"
medium_answer = "amplifier"
hard_answer = "input"
questions_and_answers = {easy_question: easy_answer,
medium_question: medium_answer,
hard_question: hard_answer}
#print(easy_answer in [easy_question, easy_answer])
#print(questions_and_answers[0][1])
print('This is a quiz')
ready = input("Are you ready? Type Yes.")
while ready != "Yes":
ready = input("Type Yes.")
user_input = input("Choose a difficulty: Easy, Medium, or Hard")
def choose_difficulty(user_input):
if user_input == "Easy":
return easy_question
elif user_input == "Medium":
return medium_question
elif user_input == "Hard":
return hard_question
else:
print("Incorrect")
user_input = input("Type Easy, Medium, or Hard")
print(choose_difficulty(user_input))
answer = input("What is your answer?")
def check_answer(answer):
if answer == easy_answer:
return "Correct"
elif answer == medium_answer:
return "Correct"
elif answer == hard_answer:
return "Correct"
print(check_answer(answer))
</code></pre>
| -1 | 2016-10-03T18:10:16Z | 39,837,898 | <p>You will want to keep track of the <code>question</code>:</p>
<pre><code>question = choose_difficulty(user_input)
print(question)
answer = input("What is your answer?")
def check_answer(question, answer):
if questions_and_answers[question] == answer:
return "Correct"
return "Incorrect"
print(check_answer(question, answer))
</code></pre>
<p>There's a lot more cool stuff you can do, but this is a minimal example that should solve your problem!</p>
<p><strong>EDIT:</strong></p>
<p>When you did</p>
<pre><code>questions_and_answers = {easy_question: easy_answer,
medium_question: medium_answer,
hard_question: hard_answer}
</code></pre>
<p>you created a dictionary (or <code>dict</code> as it's known in Python). See <a href="https://docs.python.org/2/tutorial/datastructures.html#dictionaries" rel="nofollow">examples</a>. Basically, you can do lookups by the first term (the question) and it'll return the second term (the answer).</p>
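<p>A minimal illustration of that lookup, using one entry from the question:</p>

```python
# the same lookup check_answer performs, in isolation
easy_question = "The capitol of West Virginia is __1__"
questions_and_answers = {easy_question: "Charleston"}

# the dict is keyed by the question text and returns the matching answer
answer = questions_and_answers[easy_question]
print(answer)  # Charleston
```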
| 1 | 2016-10-03T18:14:49Z | [
"python"
]
|
Simple Quiz - How Do I Link Variables? | 39,837,837 | <p>I'm stuck trying to figure out how to match the correct answer with the correct question. Right now if the user's answer is equal to any of the answers, it returns correct. Please help.</p>
<pre><code>easy_question = "The capitol of West Virginia is __1__"
medium_question = "The device amplifies a signal is an __2__"
hard_question = "A program takes in __3__ and produces output."
easy_answer = "Charleston"
medium_answer = "amplifier"
hard_answer = "input"
questions_and_answers = {easy_question: easy_answer,
medium_question: medium_answer,
hard_question: hard_answer}
#print(easy_answer in [easy_question, easy_answer])
#print(questions_and_answers[0][1])
print('This is a quiz')
ready = input("Are you ready? Type Yes.")
while ready != "Yes":
ready = input("Type Yes.")
user_input = input("Choose a difficulty: Easy, Medium, or Hard")
def choose_difficulty(user_input):
if user_input == "Easy":
return easy_question
elif user_input == "Medium":
return medium_question
elif user_input == "Hard":
return hard_question
else:
print("Incorrect")
user_input = input("Type Easy, Medium, or Hard")
print(choose_difficulty(user_input))
answer = input("What is your answer?")
def check_answer(answer):
if answer == easy_answer:
return "Correct"
elif answer == medium_answer:
return "Correct"
elif answer == hard_answer:
return "Correct"
print(check_answer(answer))
</code></pre>
| -1 | 2016-10-03T18:10:16Z | 39,838,251 | <p>The way I would do it: create 2 variables, x and y. If the users chooses "Easy", it sets x to 1, "Medium" sets it to 2 and so on. Then you ask him for an answer. The answer to the easy question, if correct, sets y to 1, on the medium to 2 and so on. Then you have a check if x == y. If yes, then he has answered correctly to the question.</p>
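<p>A minimal sketch of the x/y idea described above (the numeric ids are made up for illustration):</p>

```python
# hypothetical id tables: difficulty choice sets x, the given answer sets y
difficulty_id = {"Easy": 1, "Medium": 2, "Hard": 3}
answer_id = {"Charleston": 1, "amplifier": 2, "input": 3}

def is_correct(difficulty, answer):
    # x == y means the answer belongs to the chosen question
    x = difficulty_id.get(difficulty)
    y = answer_id.get(answer)
    return x is not None and x == y
```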
| -1 | 2016-10-03T18:37:43Z | [
"python"
]
|
Django - Problems with get_or_create() | 39,838,013 | <p>I'm facing problems using get_or_create() in my view.
What I want to do is have the User get or create an instance of the Keyword model whenever he wants to add a keyword.</p>
<p>I have a Keyword model that looks like this:</p>
<pre><code>class Keyword(models.Model):
word = models.CharField(max_length=30, unique=True, default=None)
members = models.ManyToManyField(settings.AUTH_USER_MODEL, blank=True, default=None)
def __str__(self):
        return self.word
</code></pre>
<p>I have a form to create the keyword:</p>
<pre><code>class KeywordForm(forms.ModelForm):
keywords = forms.CharField(max_length=30)
def __init__(self, *args, **kwargs):
super(KeywordForm, self).__init__(*args, **kwargs)
self.fields["keywords"].unique = False
class Meta:
fields = ("keywords",)
model = models.Keyword
</code></pre>
<p>I've tried different things in the view and here is my current version, without the use of get_or_create. It only creates the keyword:</p>
<pre><code>class KeywordCreationView(LoginRequiredMixin, generic.CreateView):
form_class = forms.KeywordForm
model = models.Keyword
page_title = 'Add a new keyword'
success_url = reverse_lazy("home")
template_name = "accounts/add_keyword.html"
def form_valid(self, form):
var = super(KeywordCreationView, self).form_valid(form)
self.object.user = self.request.user
self.object.save()
self.object.members.add(self.object.user)
return var
</code></pre>
<p>How should my view look in order to get the keyword if it exists and, if it does, add the User as a 'member'? If it doesn't exist, it should create the Keyword.</p>
<p>Thanks for your help!</p>
| 1 | 2016-10-03T18:21:47Z | 39,848,938 | <p>I do believe CreateView isn't the right class for this. You should use <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/generic-editing/#updateview" rel="nofollow">UpdateView</a> instead and override the <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-single-object/#django.views.generic.detail.SingleObjectMixin.get_object" rel="nofollow">get_object</a> method (which is actually a part of the SingleObjectMixin) ancestor of this class based view. </p>
<p>The <a href="https://github.com/django/django/blob/master/django/views/generic/detail.py#L22" rel="nofollow">source code</a> of this mixin is rather daunting, but in your case something as simple as </p>
<pre><code>def get_object(self, queryset=None):
pk = self.kwargs.get(self.pk_url_kwarg)
if queryset:
obj, c = queryset.get_or_create(pk=pk)
else:
        obj, c = MyModel.objects.get_or_create(pk=pk)
return obj
</code></pre>
<p>might work. But frankly, it's alot simpler to use a simple (non class based view)</p>
| 0 | 2016-10-04T09:37:35Z | [
"python",
"django"
]
|
pandas read_csv() method supports zip archive reading but not to_csv() method supports zip archive saving | 39,838,026 | <p>Pandas 0.18 supports passing a zip file as the read_csv() argument and reads the zipped csv table correctly into a data frame. But when I try to use the to_csv() method to save a data frame as a zipped csv, I get an error. According to the official documentation, the zip format is not supported by the to_csv() method. Any thoughts? Thank you.</p>
<pre><code>import pandas as pd

# works fine
data = pd.read_csv("E:\ASML SED.zip")

# errors out:
# IOError: [Errno 2] No such file or directory: 'E:\ASML SED.zip'
data.to_csv("E:\ASML SED Zipped.zip", compression = 'zip')
</code></pre>
| 0 | 2016-10-03T18:22:17Z | 39,851,453 | <p>Indeed, zip format not supported in to_csv() method according to this <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">official documentation</a>, the allowed values are âgzipâ, âbz2â, âxzâ.</p>
<p>If you really want the 'zip' format, you can try to save as uncompressed csv file, then using cli to compress the .csv file to .csv.zip.</p>
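<p>For example, 'gzip' round-trips cleanly (newer pandas releases have since added 'zip' writing as well); a minimal sketch using a throwaway temp directory:</p>

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
path = os.path.join(tempfile.mkdtemp(), "out.csv.gz")

# 'gzip' is an allowed to_csv compression codec
df.to_csv(path, compression="gzip", index=False)

# read_csv decompresses transparently based on the .gz suffix
back = pd.read_csv(path)
```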
| 0 | 2016-10-04T11:45:07Z | [
"python",
"csv",
"pandas",
"zip"
]
|
Find last possible page of Flask-SQLAlchemy paginate | 39,838,028 | <p>I am using SQLAlchemy's paginate to paginate my query as in the following, but on my front page I want to implement a next button (or numerical page navigation links like 1,2,3,4...), and when it reaches the last page I don't want to show the user the next button (or, for the numerical links, I need to know the maximum number of pages available in the page navigation). I would prefer not to use another database query in SQLAlchemy. What is the most convenient way to achieve this? </p>
<p>I am using following fo pagination:</p>
<pre><code>Blog.query.filter(Blog.title.like("%"+query+"%")).paginate(page=start,per_page=size).items
</code></pre>
<p>One way (which I am doing right now) could be checking whether the query returns exactly <code>size</code> items; if not, it is the last page. But that does not handle the corner case where the total count is an exact multiple of <code>size</code>. </p>
| 0 | 2016-10-03T18:22:24Z | 39,838,289 | <p>Check out <a href="http://flask-sqlalchemy.pocoo.org/2.1/api/?highlight=paginate#flask.ext.sqlalchemy.Pagination" rel="nofollow"><code>Pagination</code></a> class docs.</p>
<pre><code>query = (Blog.query.filter(Blog.title.like('%'+query+'%'))
.paginate(page=start, per_page=size))
</code></pre>
<p>Then you can use <code>query.has_prev</code> and <code>query.has_next</code> properties to check if previous or next pages exist.</p>
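<p>If you only need the last page number, it is a ceiling division of the total count by the page size; <code>Pagination</code> exposes this as <code>query.pages</code>. A framework-free sketch of the arithmetic, which also covers the corner case from the question where the count is an exact multiple of the size:</p>

```python
def page_count(total_items, per_page):
    # ceiling division; correct even when total_items % per_page == 0
    return max(1, -(-total_items // per_page))

def has_next(page, total_items, per_page):
    # the next button should appear only while pages remain
    return page < page_count(total_items, per_page)
```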
| 0 | 2016-10-03T18:40:07Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
]
|
All possibilities permutations of 0 and 1 | 39,838,211 | <p>I want to list all possible permutations of 0 and 1 six times for probability reasons.</p>
<p>I started out like this, thinking it was a piece of cake; until all combinations are listed, the code should keep on randomising the values.
However, once 0 and 1 are in the list it stops. I thought "for i in range(6)" would give something like this:</p>
<p><code>"0,0,0,0,0,0"</code> (which is a possibility of 0 and 1)</p>
<p><code>"1,1,1,1,1,1"</code> (which also is a possibility of 0 and 1)</p>
<p>My current output is "0, 1"</p>
<p>How can I group the items together in pieces of 6 and make the program keep generating values until every combination is in my list?<br>
I know there are 64 combinations altogether because of 2^6.</p>
<pre><code>import math
import random
a = 63;
b = 1;
sequences = [];
while (b < a):
for i in range(6):
num = random.randint(0 , 1)
if(num not in sequences):
sequences.append(num)
b += 1;
else:
b += 1;
print(sorted(sequences));
</code></pre>
| 0 | 2016-10-03T18:35:02Z | 39,838,389 | <p>I didn't understand if you are looking for a solution, or for a fix for your code. If you just look for a solution, you can do this much easier and more efficiently using <a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow"><code>itertools</code></a> like so:</p>
<pre><code>>>> from itertools import *
>>> a = list(product([0,1],repeat=6)) #the list with all the 64 combinations
</code></pre>
| 1 | 2016-10-03T18:46:38Z | [
"python",
"permutation"
]
|
All possibilities permutations of 0 and 1 | 39,838,211 | <p>I want to list all possible permutations of 0 and 1 six times for probability reasons.</p>
<p>I started out like this, thinking it was a piece of cake; until all combinations are listed, the code should keep on randomising the values.
However, once 0 and 1 are in the list it stops. I thought "for i in range(6)" would give something like this:</p>
<p><code>"0,0,0,0,0,0"</code> (which is a possibility of 0 and 1)</p>
<p><code>"1,1,1,1,1,1"</code> (which also is a possibility of 0 and 1)</p>
<p>My current output is "0, 1"</p>
<p>How can I group the items together in pieces of 6 and make the program keep generating values until every combination is in my list?<br>
I know there are 64 combinations altogether because of 2^6.</p>
<pre><code>import math
import random
a = 63;
b = 1;
sequences = [];
while (b < a):
for i in range(6):
num = random.randint(0 , 1)
if(num not in sequences):
sequences.append(num)
b += 1;
else:
b += 1;
print(sorted(sequences));
</code></pre>
| 0 | 2016-10-03T18:35:02Z | 39,838,456 | <p>I understand you want to check in how many iterations you'll reach the 64 possible combinations. In that case this code does it:</p>
<pre><code>import random

combs = set()
nb_iterations = 0
while len(combs) < 64:
    nb_iterations += 1
    a = tuple(random.randint(0, 1) for _ in range(6))
    combs.add(a)  # a set ignores duplicates, so no membership check is needed
print("Done in {} iterations".format(nb_iterations))
</code></pre>
<p>I ran it a couple of times and it took around 300/400 iterations to get the full list.</p>
<p>Example of outputs:</p>
<pre><code>Done in 313 iterations
Done in 444 iterations
Done in 393 iterations
</code></pre>
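<p>Those iteration counts line up with the coupon collector problem: the expected number of draws needed to see all <code>n</code> equally likely outcomes is <code>n * H(n)</code>, where <code>H(n)</code> is the n-th harmonic number. For <code>n = 64</code>:</p>

```python
# coupon collector expectation: n * (1/1 + 1/2 + ... + 1/n)
n = 64
expected = n * sum(1.0 / k for k in range(1, n + 1))
print(round(expected, 1))  # about 303.6
```

<p>which sits right in the observed 300-400 range.</p>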
| 0 | 2016-10-03T18:50:55Z | [
"python",
"permutation"
]
|
Python writing (xlwt) to an existing Excel Sheet, drops charts and formatting | 39,838,220 | <p>I am using Python to automate some tasks and ultimately write to an existing spreadsheet. I am using the xlwt, xlrd and xlutils modules. </p>
<p>So the way I set it up is to open the file, make a copy, write to it and then save it back to the same file. When I do the last step, all excel formatting such as comments and charts are dropped. Is there a way around that? I think it has something to do with excel objects.</p>
<p>Thank you</p>
<p>Sample code</p>
<pre><code>import xlwt
import os
import xlrd, xlutils
from xlrd import open_workbook
from xlutils.copy import copy
style1 = xlwt.easyxf('font: name Calibri, color-index black, bold off; alignment : horizontal center', num_format_str ='###0')
script_dir = os.path.dirname('_file_')
Scn1 = os.path.join(script_dir, "\sample\Outlet.OUT")
WSM_1V = []
infile = open (Scn1, "r")
for line in infile.readlines():
WSM_1V.append(line [-10:-1])
infile.close()
Existing_xls = xlrd.open_workbook(r'\test\test2.xls', formatting_info=True, on_demand=True)
wb = xlutils.copy.copy(Existing_xls)
ws = wb.get_sheet(10)
for i,e in enumerate(WSM_1V,1):
ws.write (i,0, float(e),style1)
wb.save('test2.xls')
</code></pre>
| 1 | 2016-10-03T18:35:29Z | 39,840,495 | <p>Could you do this with win32com?</p>
<pre><code>from win32com import client
...
xl = client.Dispatch("Excel.Application")
wb = xl.Workbooks.Open(r'\test\test2.xls')
ws = wb.Worksheets(11)  # COM collections are 1-based; sheet 10 in xlrd is sheet 11 here
for i, e in enumerate(WSM_1V, 1):
    ws.Cells(i, 1).Value = float(e)  # Cells(row, column), also 1-based
wb.Save()
wb.Close()
xl.Quit()
xl = None
</code></pre>
| 0 | 2016-10-03T21:13:08Z | [
"python",
"excel",
"xlrd",
"xlwt",
"xlutils"
]
|
Python writing (xlwt) to an existing Excel Sheet, drops charts and formatting | 39,838,220 | <p>I am using Python to automate some tasks and ultimately write to an existing spreadsheet. I am using the xlwt, xlrd and xlutils modules. </p>
<p>So the way I set it up is to open the file, make a copy, write to it and then save it back to the same file. When I do the last step, all excel formatting such as comments and charts are dropped. Is there a way around that? I think it has something to do with excel objects.</p>
<p>Thank you</p>
<p>Sample code</p>
<pre><code>import xlwt
import os
import xlrd, xlutils
from xlrd import open_workbook
from xlutils.copy import copy
style1 = xlwt.easyxf('font: name Calibri, color-index black, bold off; alignment : horizontal center', num_format_str ='###0')
script_dir = os.path.dirname('_file_')
Scn1 = os.path.join(script_dir, "\sample\Outlet.OUT")
WSM_1V = []
infile = open (Scn1, "r")
for line in infile.readlines():
WSM_1V.append(line [-10:-1])
infile.close()
Existing_xls = xlrd.open_workbook(r'\test\test2.xls', formatting_info=True, on_demand=True)
wb = xlutils.copy.copy(Existing_xls)
ws = wb.get_sheet(10)
for i,e in enumerate(WSM_1V,1):
ws.write (i,0, float(e),style1)
wb.save('test2.xls')
</code></pre>
| 1 | 2016-10-03T18:35:29Z | 39,840,505 | <p>Using those packages, there is no way around losing the comments and charts, as well as many other workbook features. The <code>xlrd</code> package simply does not read them, and the <code>xlwt</code> package simply does not write them. <code>xlutils</code> is just a bridge between the other two packages; it can't read anything that <code>xlrd</code> can't read, and it can't write anything that <code>xlwt</code> can't write.</p>
<p>To achieve what you want to achieve, probably your best option is to automate a running instance of Excel; the best Python package for doing that is <a href="https://www.xlwings.org/" rel="nofollow"><strong>xlwings</strong></a>, which works on Windows or Mac.</p>
| 0 | 2016-10-03T21:13:44Z | [
"python",
"excel",
"xlrd",
"xlwt",
"xlutils"
]
|
How should I deal with file handles when I return a generator? | 39,838,268 | <p>I have this function in my code:</p>
<pre><code>def load_fasta(filename):
f = open(filename)
return (seq.group(0) for seq in re.finditer(r">[^>]*", f.read()))
</code></pre>
<p>This will leave the file open indefinitely, which isn't good practice. How do I close the file when the generator is exhausted? I guess I could expand the generator expression into a for loop with yield statements and then close the file afterwards. I'm trying to use functional programming as often as possible, though (just as a learning exercise). Is there a different way to do this?</p>
| 0 | 2016-10-03T18:38:39Z | 39,838,323 | <p>Use <code>yield</code> instead of a single generator expression.</p>
<pre><code>def load_fasta(filename):
with open(filename) as f:
for seq in re.finditer(r">[^>]*", f.read()):
yield seq.group(0)
for thing in load_fasta(filename):
...
</code></pre>
<p>The <code>with</code> statement will close the file once the <code>for</code> loop completes. Note that since you read the entire file into memory anyway, you could simply use</p>
<pre><code>def load_fasta(filename):
with open(filename) as f:
data = f.read()
for seq in re.finditer(r">[^>]*", data):
yield seq.group(0)
</code></pre>
| 1 | 2016-10-03T18:42:07Z | [
"python",
"functional-programming",
"generator"
]
|
Reading multiple .csv files from different directories into pandas DataFrame | 39,838,332 | <p>My DataFrame has a index SubjectID, and each Subject ID has its own directory. In each Subject directory is a .csv file with info that I want to put into my DataFrame. Using my SubjectID index, I want to read in the header of the .csv file for every subject and put it into a new column in my DataFrame. </p>
<p>Each subject directory has the same pathway except for the individual subject number.</p>
<p>I have found ways to read multiple .csv files from a single target directory into a pandas DataFrame, but not from multiple directories. Here is some code I have for importing multiple .csv files from a target directory:</p>
<pre><code>subject_path = ('/home/mydirectory/SubjectID/')
filelist = []
os.chdir('subject_path')
for files in glob.glob( "*.csv" ) :
filelist.append(files)
# read each csv file into single dataframe and add a filename reference column
df = pd.DataFrame()
columns = range(1,100)
for c, f in enumerate(filelist) :
key = "file%i" % c
frame = pd.read_csv( (subject_path + f), skiprows = 1, index_col=0, names=columns )
frame['key'] = key
df = df.append(frame,ignore_index=True)
</code></pre>
<p>I want to do something similar but iteratively go into the different Subject directories instead of having a single target directory. </p>
<p>Edit:
I think I want to do this using <code>os</code> rather than <code>pandas</code>. Is there a way to use a loop to search through multiple directories using <code>os</code>?</p>
| 0 | 2016-10-03T18:42:40Z | 39,839,049 | <p>Assuming your subject folders are in <code>mydirectory</code>, you can just create a list of all folders in the directory and then add the csv's into your filelist.</p>
<pre><code>import os
parent_dir = '/home/mydirectory'
subject_dirs = [os.path.join(parent_dir, dir) for dir in os.listdir(parent_dir) if os.path.isdir(os.path.join(parent_dir, dir))]
filelist = []
for dir in subject_dirs:
csv_files = [os.path.join(dir, csv) for csv in os.listdir(dir) if os.path.isfile(os.path.join(dir, csv)) and csv.endswith('.csv')]
for file in csv_files:
filelist.append(file)
# Do what you did with the dataframe from here
...
</code></pre>
| 0 | 2016-10-03T19:30:56Z | [
"python",
"csv",
"pandas",
"dataframe",
"operating-system"
]
|
Reading multiple .csv files from different directories into pandas DataFrame | 39,838,332 | <p>My DataFrame has a index SubjectID, and each Subject ID has its own directory. In each Subject directory is a .csv file with info that I want to put into my DataFrame. Using my SubjectID index, I want to read in the header of the .csv file for every subject and put it into a new column in my DataFrame. </p>
<p>Each subject directory has the same pathway except for the individual subject number.</p>
<p>I have found ways to read multiple .csv files from a single target directory into a pandas DataFrame, but not from multiple directories. Here is some code I have for importing multiple .csv files from a target directory:</p>
<pre><code>subject_path = ('/home/mydirectory/SubjectID/')
filelist = []
os.chdir('subject_path')
for files in glob.glob( "*.csv" ) :
filelist.append(files)
# read each csv file into single dataframe and add a filename reference column
df = pd.DataFrame()
columns = range(1,100)
for c, f in enumerate(filelist) :
key = "file%i" % c
frame = pd.read_csv( (subject_path + f), skiprows = 1, index_col=0, names=columns )
frame['key'] = key
df = df.append(frame,ignore_index=True)
</code></pre>
<p>I want to do something similar but iteratively go into the different Subject directories instead of having a single target directory. </p>
<p>Edit:
I think I want to do this using <code>os</code> rather than <code>pandas</code>. Is there a way to use a loop to search through multiple directories using <code>os</code>?</p>
| 0 | 2016-10-03T18:42:40Z | 39,839,347 | <p>Consider the recursive method of <a href="https://www.tutorialspoint.com/python/os_walk.htm" rel="nofollow">os.walk()</a> to read all directories and files <em>top-down</em> (default=<code>TRUE</code>) or <em>bottom-up</em>. Additionally, you can use regex to check names to filter specifically for .csv files. </p>
<p>Below will import ALL csv files in any child/grandchild folder from the target root <em>/home/mydirectory</em>. So, be sure to check if non-subject csv files exist, else adjust <code>re.match()</code> accordingly:</p>
<pre><code>import os, re
import pandas as pd
# CURRENT DIRECTORY (PLACE SCRIPT IN /home/mydirectory)
cd = os.path.dirname(os.path.abspath(__file__))
i = 0
columns = range(1,100)
dfList = []
for root, dirs, files in os.walk(cd):
for fname in files:
        if re.match(r"^.*\.csv$", fname):
frame = pd.read_csv(os.path.join(root, fname), skiprows = 1,
index_col=0, names=columns)
frame['key'] = "file{}".format(i)
dfList.append(frame)
i += 1
df = pd.concat(dfList)
</code></pre>
| 0 | 2016-10-03T19:51:43Z | [
"python",
"csv",
"pandas",
"dataframe",
"operating-system"
]
|
Pandas Groupby Return Average BUT! exclude NaN | 39,838,367 | <p>So Im trying to make sense of the pandas groupby function and to reduce a large data frame I have. Here is an example:</p>
<pre><code> A B
2016-09-23 19:36:08+00:00 NaN 34.0
2016-09-23 19:36:11+00:00 NaN 33.0
2016-09-23 19:36:12+00:00 24.1 NaN
2016-09-23 19:36:14+00:00 NaN 34.0
2016-09-23 19:36:17+00:00 NaN 34.0
2016-09-23 19:36:20+00:00 NaN 34.0
2016-09-23 19:36:22+00:00 24.2 NaN
2016-09-23 19:36:23+00:00 NaN 34.0
2016-09-23 19:36:26+00:00 NaN 34.0
2016-09-23 19:36:29+00:00 NaN 34.0
2016-09-23 19:36:32+00:00 24.1 NaN
2016-09-23 19:36:33+00:00 NaN 34.0
2016-09-23 19:37:00+00:00 NaN 34.0
2016-09-23 19:37:02+00:00 24.1 NaN
</code></pre>
<p>So I have 2 data series "A" and "B" that were sampled at different rates with their sampling time as the index of the original data frame. </p>
<p>I would like to now group the rows of the data frame by date/hour/minute and return the average of the data per minute. Here the average should ignore the missing values in the data frame.</p>
<p>So for example, I would return something like this:</p>
<pre><code> A B
2016-09-23 19:36:00+00:00 24 34.0
2016-09-23 19:37:00+00:00 24.1 33.0
</code></pre>
<p>Is it possible to do this with a built in pandas function?</p>
| 1 | 2016-10-03T18:44:32Z | 39,838,422 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow"><code>resample</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.mean.html" rel="nofollow"><code>Resampler.mean</code></a>, which compute mean of groups, excluding missing values:</p>
<pre><code>print (df.resample('1Min').mean())
A B
2016-09-23 19:36:00 24.133333 33.888889
2016-09-23 19:37:00 24.100000 34.000000
</code></pre>
<p>Another solution with <code>groupby</code>:</p>
<pre><code>print (df.groupby([pd.TimeGrouper('1Min')]).mean())
A B
2016-09-23 19:36:00 24.133333 33.888889
2016-09-23 19:37:00 24.100000 34.000000
</code></pre>
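<p>A tiny reproducible version of the same idea, with three rows standing in for the full frame:</p>

```python
import pandas as pd

idx = pd.to_datetime(["2016-09-23 19:36:08", "2016-09-23 19:36:12",
                      "2016-09-23 19:37:02"])
df2 = pd.DataFrame({"A": [None, 24.1, 24.1],
                    "B": [34.0, None, None]}, index=idx)

# mean() skips NaN within each one-minute bucket
per_minute = df2.resample("1min").mean()
```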
| 2 | 2016-10-03T18:49:17Z | [
"python",
"pandas",
"dataframe"
]
|
Django REST Framework prevent multiple validation queries in batch create | 39,838,400 | <p>When trying to do bulk inserts using Django/Django Rest Framework 3, the validation step is causing <code>n * related_model_fields</code> queries, which is causing a lot of unnecessary latency. </p>
<p>In the example below, <code>Thing</code> has two related fields, one being user (added in the view), and the other being the <code>pk</code> of another model in a foreign key related field. Each item in the batch is individually being validated, and that validation includes look ups on the <code>User</code> model, and the other related model, resulting in 2 queries per item in the batch just for validation.</p>
<p>Is there any way to override this behavior to do a "batch" validation of the data? Or can I override the validation behavior to validate against a pre-queried set of values to prevent multiple database round-trips?</p>
<pre><code>class ThingView(APIView):
def post(self, request):
# Add user to each record
user_id = self.request.user.id
        for rec in request.data:  # note: map() is lazy in Python 3, so use an explicit loop
            rec.update(user=user_id)
serializer = ThingSerializer(data=request.data, many=True)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class ThingListSerializer(serializers.ListSerializer):
def create(self, validated_data):
things = [Thing(**item) for item in validated_data]
return Thing.objects.bulk_create(things)
class ThingSerializer(serializers.ModelSerializer):
class Meta:
model = Thing
list_serializer_class = ThingListSerializer
</code></pre>
| 0 | 2016-10-03T18:47:16Z | 39,866,439 | <p>You'll likely want to roll your own validators and replace the default's one. For example, on the <code>User</code> you'll have a UniqueValidator that will query the DB for existing <code>username</code>. You could remove it and deal with that constraint explicitly by yourself - done that for some batch import.</p>
| 0 | 2016-10-05T06:01:44Z | [
"python",
"django",
"django-rest-framework"
]
|
Write a program that asks the user for a string and then prints the string in upper case | 39,838,416 | <p>I am learning a programming language on my own (from CEMC Python from Scratch), and this is my first programming language. I spent more than 3 hours trying to solve this problem. Please help me solve it so I can understand what I was doing wrong.</p>
| -4 | 2016-10-03T18:48:30Z | 39,838,523 | <pre><code># python 2.x
>>>text = raw_input()
# python 3.x
>>>text = input()
>>>print(text.upper())
</code></pre>
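<p>Packaged as a small function (swap the literal for <code>input("Enter a string: ")</code> when running interactively):</p>

```python
def shout(text):
    """Return the user's string in upper case."""
    return text.upper()

print(shout("Hello, World"))  # HELLO, WORLD
```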
| 2 | 2016-10-03T18:55:24Z | [
"python"
]
|
How to do calculations in lists with *args in Python 3.5.1 | 39,838,419 | <p>I'm not even sure how to ask this question; I've been doing programming for about a month and the progress I've made on this program is still a little bit above my head, so I'm sorry if the question is a little incomprehensible. I'm taking a programming class, but as far as I know, this program is just for my own functionality and enjoyment for music.</p>
<p>I'm trying to write a function that takes musical notes, represented by integers, and spits out a specific arrangement and transformation of those numbers. Specifically I'm trying to convert them into something from musical set theory called "prime form".</p>
<p>I wrote a function that successfully does this for a set of three pitches, but I wanted to expand it so that I could have any number of arguments for the function.</p>
<p>Here's the code for the first function</p>
<pre><code>def prime_form (pitch1, pitch2, pitch3, tones_in_octave = 12):
"""
(int, int, int, int) -> (str)
finds the prime form of any 3 note pitch set in any equal temperament,
one octave span of frequency classes (default is 12 tones).
>>> prime_form (9, 1, 5)
(0 4 8)
this is the prime form!
>>> prime_form (11, 1, 0)
(0 1 2)
this is the prime form!
>>> prime_form (1, 3, 4)
(0 1 3)
this is the prime form!
"""
pitches = [pitch1 %tones_in_octave, pitch2 %tones_in_octave, pitch3 %tones_in_octave]
spitches = sorted(pitches)
intervals = [(spitches[1] - spitches[0]) %tones_in_octave, (spitches[2] - spitches[1]) %tones_in_octave, (spitches [0] - spitches[2]) %tones_in_octave]
sintervals = sorted(intervals)
prime_form = [0, sintervals[0], sintervals[0]+sintervals[1]]
print('({} {} {})\nthis is the prime form!'.format(prime_form[0], prime_form[1], prime_form[2]))
</code></pre>
<p>I learned a bit about *args, and this is what I ended up with after some help from programmers much smarter than me. </p>
<pre><code>def prime_form (*pitches, tones_in_octave = 12):
tones_in_octave=int(tones_in_octave)
    spitches = sorted(set(pitch % tones_in_octave for pitch in pitches))  # dedupe, then sort
print(spitches)
</code></pre>
<p>Paralleling the last program, this does what I need it to do so far, up to the part where I start defining "intervals", but I don't know how to go about doing so. I guess I could write a lot of if statements, but if I wanted to make it functional for <code>tones_in_octave = 12</code>, I would have to write a lot of <code>if</code> and <code>elif</code> statements manually, and even then if I wanted to go past that (there is even an established system of music which utilizes <code>43</code> tones in an octave, though that specific system wouldn't benefit from this function), my function would stop working at whatever point I decided to stop writing cases.</p>
<p>In this case, what I would want to write out literally in the code would be </p>
<pre><code>sintervals = sorted([(spitches[1] - spitches[0]) %tones_in_octave, (spitches[2] - spitches[1]) %tones_in_octave, ..., (spitches[n-1] - spitches[n-2]) %tones_in_octave, (spitches[0] - spitches[-1]) %tones_in_octave])
</code></pre>
<p>Where n is the number of items in the list <code>spitches</code> (which is different from the items in *pitches, since the way <code>spitches</code> is defined removes redundant values)</p>
<p>Question 1. How do I define a variable as this list in python?</p>
<p>After that, adding intervals like this yields prime form</p>
<pre><code>prime_form = [0, sintervals[0], sintervals[0] + sintervals[1], sintervals[0] + sintervals[1] + sintervals[2], ..., sintervals[0] + sintervals[1] + ... + sintervals[n-2]]
</code></pre>
<p>Where n is the number of items in <code>sintervals</code> (and also the number of items in this list)</p>
<p>Question 2. How do I add numbers like this depending on how many items are in my list from Question 1?</p>
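<p>A hedged sketch of both steps using <code>zip</code> and <code>itertools.accumulate</code>; this mirrors the three-note function's arithmetic (full prime form additionally needs the rotation/inversion comparison the question's background section describes):</p>

```python
from itertools import accumulate

tones_in_octave = 12
spitches = [1, 5, 9]  # example: the sorted pitch classes of (9, 1, 5)

# Question 1: wrap-around differences between neighbours, last back to first
sintervals = sorted((b - a) % tones_in_octave
                    for a, b in zip(spitches, spitches[1:] + spitches[:1]))

# Question 2: 0 plus the running sums of all but the last interval
prime = [0] + list(accumulate(sintervals[:-1]))
print(prime)  # [0, 4, 8], matching prime_form(9, 1, 5) above
```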
<p>Edit: </p>
<p>Here is an explanation of what is ultimately trying to be accomplished, sorry if it isn't the clearest (it might even be flat out wrong aahahaha lets hope not).</p>
<p>---BACKGROUND: Pitch and Octaves---</p>
<p>In the most popularly used system of music, specific frequencies of air vibrations are called a "pitches"; frequency of the pitch is calculated by</p>
<p>(reference)*2^(distance from reference in pitches / pitches in an octave)</p>
<p>An "octave" is a (2^x):1 frequency relationship where x is an integer and x = the "amount of octaves"; this relationship is important, because us humans perceive frequencies of air vibrations related by octaves/this ratio to be more or less the same. The frequency most commonly used for the reference pitch in this system of music is "A 440" meaning "A4" is the frequency 440Hz. The 4 in "A4" is a register number for which "octave register" the note is in. The octave registers are divided up on the pitch C, so A4 is the A "above" or "with a higher frequency than" C4 (which has a frequency of (440)^-9/12 = ~261.6Hz) Ex. A0 = 27.5Hz, A1 = 55Hz, A2 = 110Hz, etc. (You can tell that the alphabetical notation system we still use was conceived before the measurement of the frequency of air vibrations)</p>
<p>---Background: Music Theory---</p>
<p>In Music Theory, Set Theory is one of the well established tools used to approach Music Theory. Musical Set Theory abstractifies pitch as integers. It also usually disregards octave registers, treating A4 completely equal to A2, A3, A6 etc. In the aforementioned most popularly used system of music, there are 12 tones in an octave, so the way to reference pitches in Set Theory changes from (C, C#, D, D#, E, F, F#, G, G#, A, A#, B) to (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11) respectively; to make sure that everyone is on the same page, there is a name for when octave registers are treated as irrelevant information for a pitch called "pitch class". With pitch classes, all math is done in mod 12, like a clock face, since there are 12 tones in an octave.</p>
<p>Also in Set Theory, there is a useful way to list a set of pitch classes ("pitch class set") called "prime form". Prime forms, also referred to as set classes, are regarded as the most concise and abstract way to represent pitch class sets.</p>
<p>---What I'm Trying to Accomplish---</p>
<p>I want to write code that eats pitch class sets and spits out set classes/prime form. As a note, prime form must begin on 0; if a set meets the other requirements (leftmost-compact considering the inversion of the set) but does not begin on 0, then it will be transposed to start on 0. Ex. [1,2,4] is a leftmost-compact arrangement considering its inversion [1,3,4], but does not start on zero, so the set is transposed by (-1) to (0 1 3), which is its prime form. A human being trying to figure out prime form usually takes a few steps to arrive at the answer:</p>
<ol>
<li><p>Remove redundant pitches classes - if you get pitches like (1, 13, 0, 2, 0), remove 13 because it is redundant with 1 (13%12 = 1) and remove the second 0.</p></li>
<li><p>Arrange the pitch classes within a single octave span - (not necessarily octave register; octave register is understood to be always based on the pitch C, and octave span refers to a span of any two notes with a 2:1 frequency relationship, not just two Cs an octave apart.). As an example, we'd arrange (10, 9, 0, 11) as (0, 9, 10, 11) (starting on zero/within an octave register) or [9,10,11,0] (Normal form/leftmost-compact arrangement) or a few other ways that probably aren't as useful.</p></li>
<li><p>Find intervals - The interval in this case is the absolute value of the difference of adjacent pitch classes within a one-octave span: 9-10 or 10-9 = 1, 10-11 = 1, 0-11 = 1 (0 needs to be momentarily thought of as 12; this is a case where the numbers span across the loop point for the "clock face" of mod 12 and screw up the math), and 0-9 (0 is good as 0 here since it doesn't cross the loop point; if 0 is thought of as 12, you get the wrong interval) = 9. At this point, the difference of the first and last interval, in this case 9, is also considered in case there is a set like [3,4,7,9,0] or [1,2,4,5,7,8,10,11] where there is a tie between intervals. So you can't ignore the last interval.</p></li>
<li><p>Relist the intervals according to inversions and leftmost-compactness - This is why we found the intervals; the intervals abstractify the actual pitch so we don't have to transpose it to 0 (as the requirement of prime form is that the list starts on 0), and so we can list our intervals in the leftmost-compact fashion between the "normal" and "inverted" list of intervals (clockwise and counter-clockwise on a clockface) to calculate prime form. The way that a human would do this is to start at a point in the list of intervals and re-list it going left or right from that point, keeping the intervals in the same order as the pitches they correspond to and looping when necessary.</p>
<ol start="5">
<li>Choose the right interval list and calculate prime form -
(0,9,10,11)'s interval list is (9,1,1,1), what we do here is choose the leftmost compact version, which ignores the 9, since it is the least compact, and then use the intervals to build the prime form which is (0 1 2 3)</li>
</ol></li>
</ol>
<p>For (9,11,1,2,4,6,8), the interval list is (2,2,1,2,2,2,1). The leftmost-compact arrangement is (1, 2, 2, 1, 2, 2, 2), so the prime form is (0 1 3 5 6 8 10).</p>
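<p>One way to pin down steps 4 and 5 in code is to try every rotation of the interval list, in both the normal and the inverted (reversed) order, and keep the rotation whose cumulative sums are lexicographically smallest. Note that taking the lexicographic minimum is only one possible reading of "leftmost-compact" and does not settle the Rahn/Forte tie-break cases:</p>

```python
def best_rotation(intervals):
    """Pick the rotation (normal or inverted order) whose cumulative
    sums are lexicographically smallest, i.e. leftmost-compact."""
    n = len(intervals)
    candidates = []
    for seq in (list(intervals), list(reversed(intervals))):
        for start in range(n):
            rot = seq[start:] + seq[:start]
            # partial sums of the intervals give the pitch classes transposed to 0
            candidates.append([sum(rot[:i]) for i in range(n)])
    return min(candidates)

print(best_rotation([9, 1, 1, 1]))           # [0, 1, 2, 3]
print(best_rotation([2, 2, 1, 2, 2, 2, 1]))  # [0, 1, 3, 5, 6, 8, 10]
```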
<p>However, there is some disagreement on this final step, between the Rahn and Forte algorithms. This is because the criteria I laid out yield two answers in five specific cases, because in those cases the question "what is leftmost-compact?" requires more definition. Honestly I have no clue whether changing the tones_in_octave to something other than 12 will somehow change these exceptions to the criteria I've presented. I also don't know how to explain the differences between the two algorithms in terms of intervals, because the algorithms are defined based on certain pitches' intervals from 0 that cause disagreement about what is leftmost-compact, not the actual order of intervals. I guess I should work on that if I want to finish this program.</p>
<p>Python seems to be good at following these steps from what I have been shown by Stack Overflow and other sources (except maybe the interval list sorting); it's just a matter of figuring out how to generalize it enough so that the function works for a potentially unlimited number of arguments. Thanks to Moses Koledoye doing most of the heavy lifting by generalizing the function to work with *args, there seems to be only one problem that hasn't been solved, which is sorting the list of intervals in the fashion shown above in step 4. (The rest of the steps work pretty well; Koledoye's version of step 3 gave the wrong intervals, but applying mod 12 to the answer he came up with sometimes fixes that for cases where step 4 happens to work out correctly, AND SOMETIMES SCREWS IT UP.)</p>
<p>Here is the code with a flawed step 4, print messages to notify what just got defined, and an extra step at the end that SOMETIMES fixes cases where step 3 doesn't work correctly due to, I presume, math across the "loop point"/"clockface", and other times messes it up when it was correct with Koledoye's code.</p>
<pre><code>def prime_form (*pitches, tones_in_octave = 12):
tones_in_octave=int(tones_in_octave)
spitches = list(set(sorted([pitch %tones_in_octave for pitch in list(pitches)])))
print('{}\nthis is the sorted list of pitches \n'.format(spitches))
lth = len(spitches)
print('{}\nthis is the length of spitches \n'.format(lth))
intervals = [(spitches[0 if (i+1) >= lth else (i+1)]-x) % tones_in_octave for i, x in enumerate(spitches)]
print('{}\nthis is the interval list \n'.format(intervals))
sintervals = sorted(intervals)
print('{}\nthis is the sorted interval list \n'.format(sintervals))
p_form = [sum(sintervals[:i]) for i in range(len(intervals))]
print('{}\nthis may be the prime form, but maybe not \n'.format(p_form))
true_p_form = list({i % tones_in_octave for i in p_form})
    print('({})\nthis is the prime form! (apparently it might not be)'.format(' '.join(map(str, true_p_form))))
</code></pre>
<p>and here are some examples to try</p>
<pre><code>>>> prime_form(9, 3, 0)
(0 3 6)
this is the prime form!
>>> prime_form(1, 11, 9, 4, 2, 6, 8)
(0 1 3 5 6 8 10)
this is the prime form!
>>> prime_form(10, 9, 0, 11)
(0 1 2 3)
this is the prime form!
</code></pre>
<p>And here is a calculator that already does most of this and more (except changing the number of tones in the octave)
<a href="http://composertools.com/Tools/PCSets/setfinder.html" rel="nofollow">http://composertools.com/Tools/PCSets/setfinder.html</a></p>
<p>It even has an explanation on the difference between the Rahn and Forte algorithms.</p>
| 3 | 2016-10-03T18:48:40Z | 39,838,796 | <p>The following pretty much does what you want. </p>
<ul>
<li><p>The tricky part is the computation of the intervals which was done
using <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a> to subtract the current item from the item at the next index, and subtracting the last item from the first.</p></li>
<li><p>The cumulatives are taken by <em>summing</em> the slices of the computed <code>sintervals</code>. The stop index of each slice is one greater than that of the previous:</p></li>
<li><p>The last part is the formatting part which uses <code>.join</code> to <em>join</em> the list items, which are then formatted.</p></li>
</ul>
<hr>
<pre><code>def prime_form(*pitches, tones_in_octave=12):
# pitches = set(pitches)
spitches = sorted([p % tones_in_octave for p in pitches])
lth = len(spitches)
# Question 1
intervals = [(spitches[0 if (i+1) >= lth else (i+1)]-x) % tones_in_octave for i, x in enumerate(spitches)]
sintervals = sorted(intervals)
# Question 2
p_form = [sum(sintervals[:i]) for i in range(len(intervals))]
print('({})\nthis is the prime form!'.format(' '.join(map(str, p_form))))
</code></pre>
<hr>
<p><strong>Tests</strong>:</p>
<pre><code>>>> prime_form(9, 1, 5)
(0 4 8)
this is the prime form!
>>> prime_form(11, 1, 0)
(0 1 2)
this is the prime form!
>>> prime_form(1, 3, 4)
(0 1 3)
this is the prime form!
</code></pre>
| 0 | 2016-10-03T19:12:58Z | [
"python",
"list",
"python-3.x",
"music",
"args"
]
|
static files not detected Django 1.10 | 39,838,439 | <p><strong>Problem</strong>: javascript and css files don't load.<br>
I am using Django 1.10, my
<code>settings.py</code> file looks like this:</p>
<pre><code>STATIC_URL = "/static/"
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static")
]
STATIC_ROOT = os.path.join(BASE_DIR, "static_root")
</code></pre>
<p>I have my static files in the <code>static</code> folder and also have a static root, but the <code>js</code> and <code>css</code> files are still not loading.<br>Why?</p>
| 2 | 2016-10-03T18:50:20Z | 39,838,503 | <p>You need to specify it in the <code>urls.py</code> file as well:</p>
<pre><code># need these two additional imports
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
...
]
# add this extra line of code
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>Have you done that?</p>
| 2 | 2016-10-03T18:54:21Z | [
"python",
"django"
]
|
static files not detected Django 1.10 | 39,838,439 | <p><strong>Problem</strong>: javascript and css files don't load.<br>
I am using Django 1.10, my
<code>settings.py</code> file looks like this:</p>
<pre><code>STATIC_URL = "/static/"
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static")
]
STATIC_ROOT = os.path.join(BASE_DIR, "static_root")
</code></pre>
<p>I have my static files in the <code>static</code> folder and also have a static root, but the <code>js</code> and <code>css</code> files are still not loading.<br>Why?</p>
| 2 | 2016-10-03T18:50:20Z | 39,838,673 | <p>If you're using <em>runserver</em> in Debug mode, you don't need to modify your URLs.</p>
<p>Your settings look fine provided that your directory structure looks like:</p>
<pre><code>- myProject
- myProject
- settings.py
- static
- ...
</code></pre>
<p>Have you remembered to add <code>{% load staticfiles %}</code> at the top of your templates?</p>
<p>In that case, <code>{% static 'css/whatever.css' %}</code> should work.</p>
| 0 | 2016-10-03T19:04:18Z | [
"python",
"django"
]
|
Converting .lua table to a python dictionary | 39,838,489 | <p>I have this kind of input:</p>
<pre><code> sometable = {
["a"] = {
"a1",
},
["b"] = {
"b1",
["b2"] = true,
},
["c"] = {
"c1",
["c2"] = true,
},
},
</code></pre>
<p>And would like to convert it to some dictionary I can work with in python - or basically, I just need to be able to read the data in this pattern:</p>
<pre><code>print sometable[b][b2]
</code></pre>
<p>What is the best solution to this? I tried to do a bunch of replaces and convert it using <code>ast</code>, ie:</p>
<pre><code>def make_dict(input): # just body, ie. without 'sometable'
input = input.replace("=", ":")
input = input.replace("[\"", "\"")
input = input.replace("\"]", "\"")
input = input.replace("\t", "")
input = input.replace("\n", "")
input = "{" + input + "}"
return ast.literal_eval(input)
</code></pre>
<p>The problem is that the output is:</p>
<pre><code>{
"a" :
{"a1", },
"b" :
{"b1", "b2" : true,},
"c" :
{"c1", "c2" : 1,},
}
</code></pre>
<p>The error (<code>invalid syntax</code>) is on <code>{"b1", "b2" : true,},</code>. Any suggestion?</p>
| 1 | 2016-10-03T18:53:09Z | 39,838,703 | <p>Look at this package: <a href="https://github.com/SirAnthony/slpp" rel="nofollow">https://github.com/SirAnthony/slpp</a>.</p>
<pre><code>>>> from slpp import slpp as lua
>>> code = """{
["a"] = {
"a1",
},
["b"] = {
"b1",
["b2"] = true,
},
["c"] = {
"c1",
["c2"] = true,
},
}"""
>>> print(lua.decode(code))
{'a': ['a1'], 'c': {0: 'c1', 'c2': True}, 'b': {0: 'b1', 'b2': True}}
</code></pre>
| 2 | 2016-10-03T19:06:45Z | [
"python",
"dictionary",
"lua"
]
|
Why is Python logging framework losing messages? | 39,838,616 | <p>I have a Python 3.4 application that uses logging extensively. I have two FileHandlers and a StreamHandler registered. Everything works as expected except that sometimes, and it seems to happen after the <code>requests</code> library throws an exception, the log files lose all the accumulated messages and start with new messages. I'm assuming that for some reason the FileHandlers reopened the files with <code>mode='w'</code>, but I don't understand why. Any ideas?</p>
<p>The main program sets up the loggers as follows:</p>
<pre><code># Set up root logger - two handlers logging to files
fh_debug = logging.FileHandler('Syncer.debug', mode='w', encoding='utf-8')
fh_debug.setLevel(logging.DEBUG)
fh_log = logging.FileHandler('Syncer.log', mode='w', encoding='utf-8')
fh_log.setLevel(logging.INFO)
fh_formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
fh_debug.setFormatter(fh_formatter)
fh_log.setFormatter(fh_formatter)
logging.getLogger().addHandler(fh_debug)
logging.getLogger().addHandler(fh_log)
# Add console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
ch_formatter = logging.Formatter('%(message)s')
ch.setFormatter(ch_formatter)
logging.getLogger().addHandler(ch)
# Need to set the logging level of the logger as well as the handlers
logging.getLogger().setLevel(logging.DEBUG)
# Set up the logger for this module
logger = logging.getLogger("Syncer")
logger.debug('Logger started.')
</code></pre>
<p>The modules simply contain</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
| 1 | 2016-10-03T19:00:43Z | 39,841,337 | <p>Your issue is that you chose the wrong <code>mode</code> for your <code>FileHandler</code>.</p>
<p>The default mode of <code>FileHandler</code> is <code>a</code>, which means append: new records are added to the end of the existing log file.</p>
<blockquote>
<p>class logging.FileHandler(filename, mode='a', encoding=None, delay=False)</p>
</blockquote>
<p>You modified the default mode to <code>w</code>, which truncates the file to zero length or creates a new file for writing. That's why you lost all accumulated messages.</p>
<p><strong>Change it to <code>mode='a'</code> or just delete <code>mode='w'</code> and then your logger will work.</strong></p>
<p>Read the <a href="https://webdocs.cs.ualberta.ca/~jdg001/local-archive/python-3.4-reference/library/logging.handlers.html" rel="nofollow">official python docs here</a></p>
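<p>A quick self-contained way to see the difference between the two modes (the file and logger names here are made up for the demo):</p>

```python
import logging
import os
import tempfile

def lines_after_two_runs(mode):
    """Open a FileHandler twice with the given mode (two simulated
    program starts) and count how many log lines survive."""
    path = os.path.join(tempfile.gettempdir(), 'mode_demo_%s.log' % mode)
    open(path, 'w').close()                      # start from an empty file
    for run in range(2):
        handler = logging.FileHandler(path, mode=mode)
        logger = logging.getLogger('mode_demo_' + mode)
        logger.addHandler(handler)
        logger.warning('run %d', run)
        logger.removeHandler(handler)
        handler.close()
    with open(path) as f:
        return len(f.readlines())

print(lines_after_two_runs('w'))  # 1 -- each open truncated the file
print(lines_after_two_runs('a'))  # 2 -- messages accumulate across runs
```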
| 0 | 2016-10-03T22:23:37Z | [
"python",
"python-3.x",
"logging",
"python-requests",
"error-logging"
]
|
Iterate over Pandas index pairs [0,1],[1,2][2,3] | 39,838,758 | <p>I have a pandas dataframe of lat/lng points created from a gps device. </p>
<p>My question is how to generate a distance column for the distance between each point in the gps track line. </p>
<p>Some googling has given me the haversine method below which works using single values selected using <code>iloc</code>, but i'm struggling on how to iterate over the dataframe for the method inputs.</p>
<p>I had thought I could run a for loop, with something along the lines of </p>
<pre><code>for i in len(df):
df['dist'] = haversine(df['lng'].iloc[i],df['lat'].iloc[i],df['lng'].iloc[i+1],df['lat'].iloc[i+1]))
</code></pre>
<p>but I get the error <code>TypeError: 'int' object is not iterable</code>. I was also thinking about <code>df.apply</code> but I'm not sure how to get the appropriate inputs. Any help or hints on how to do this would be appreciated.</p>
<p>Sample DF</p>
<pre><code> lat lng
0 -7.11873 113.72512
1 -7.11873 113.72500
2 -7.11870 113.72476
3 -7.11870 113.72457
4 -7.11874 113.72444
</code></pre>
<p>Method</p>
<pre><code>def haversine(lon1, lat1, lon2, lat2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(math.radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
c = 2 * math.asin(math.sqrt(a))
km = 6367 * c
return km
</code></pre>
| 0 | 2016-10-03T19:10:44Z | 39,839,524 | <p>are you looking for a result like this?</p>
<pre><code> lat lon dist2next
0 -7.11873 113.72512 0.013232
1 -7.11873 113.72500 0.026464
2 -7.11873 113.72476 0.020951
3 -7.11873 113.72457 0.014335
4 -7.11873 113.72444 NaN
</code></pre>
<p>There's probably a clever way to use pandas.rolling_apply... but for a quick solution, I'd do something like this.</p>
<pre><code>def haversine(loc1, loc2):
# convert decimal degrees to radians
lon1, lat1 = map(math.radians, loc1)
lon2, lat2 = map(math.radians, loc2)
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
c = 2 * math.asin(math.sqrt(a))
km = 6367 * c
return km
df['dist2next'] = np.nan
for i in df.index[:-1]:
loc1 = df.ix[i, ['lon', 'lat']]
loc2 = df.ix[i+1, ['lon', 'lat']]
df.ix[i, 'dist2next'] = haversine(loc1, loc2)
</code></pre>
<p>alternatively, if you don't want to modify your haversine function like that, you can just pick off lats and lons one at a time using df.ix[i, 'lon'], df.ix[i, 'lat'], df.ix[i+1, 'lon], etc.</p>
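<p>If you would rather avoid positional indexing entirely, consecutive index pairs [0,1], [1,2], ... can also be produced by zipping the sequence against a shifted copy of itself — a plain-Python sketch using the question's original haversine signature and the first three sample points:</p>

```python
import math

def haversine(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(math.radians, [lon1, lat1, lon2, lat2])
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6367 * 2 * math.asin(math.sqrt(a))

lats = [-7.11873, -7.11873, -7.11870]
lngs = [113.72512, 113.72500, 113.72476]

points = list(zip(lngs, lats))
# zip(points, points[1:]) yields the adjacent pairs (0,1), (1,2), ...
dists = [haversine(lo1, la1, lo2, la2)
         for (lo1, la1), (lo2, la2) in zip(points, points[1:])]
print([round(d, 6) for d in dists])  # first value ~0.013232 km
```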
| 1 | 2016-10-03T20:03:28Z | [
"python",
"pandas"
]
|
Iterate over Pandas index pairs [0,1],[1,2][2,3] | 39,838,758 | <p>I have a pandas dataframe of lat/lng points created from a gps device. </p>
<p>My question is how to generate a distance column for the distance between each point in the gps track line. </p>
<p>Some googling has given me the haversine method below which works using single values selected using <code>iloc</code>, but i'm struggling on how to iterate over the dataframe for the method inputs.</p>
<p>I had thought I could run a for loop, with something along the lines of </p>
<pre><code>for i in len(df):
df['dist'] = haversine(df['lng'].iloc[i],df['lat'].iloc[i],df['lng'].iloc[i+1],df['lat'].iloc[i+1]))
</code></pre>
<p>but I get the error <code>TypeError: 'int' object is not iterable</code>. I was also thinking about <code>df.apply</code> but I'm not sure how to get the appropriate inputs. Any help or hints on how to do this would be appreciated.</p>
<p>Sample DF</p>
<pre><code> lat lng
0 -7.11873 113.72512
1 -7.11873 113.72500
2 -7.11870 113.72476
3 -7.11870 113.72457
4 -7.11874 113.72444
</code></pre>
<p>Method</p>
<pre><code>def haversine(lon1, lat1, lon2, lat2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(math.radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
c = 2 * math.asin(math.sqrt(a))
km = 6367 * c
return km
</code></pre>
| 0 | 2016-10-03T19:10:44Z | 39,839,602 | <p>I would recommend using a quicker variation of looping through a df, such as:</p>
<pre><code>df_shift = df.shift(1).add_prefix("lag_")  # previous row's values as lag_lat / lag_lng
df = df.join(df_shift)
log = []
for row in df.itertuples():
    log.append(haversine(row.lng, row.lat, row.lag_lng, row.lag_lat))
df['dist'] = log  # the first row has no previous point, so its distance is NaN
</code></pre>
| 0 | 2016-10-03T20:08:22Z | [
"python",
"pandas"
]
|
Status code 400 on post message to influxdb | 39,838,790 | <p>I'm trying to post a json file to influxdb on my local host. This is the code:</p>
<pre><code>import json
import requests
url = 'http://localhost:8086/write?db=mydb'
files ={'file' : open('sample.json', 'rb')}
r = requests.post(url, files=files)
print(r.text)
</code></pre>
<p>This is what <code>sample.json</code> looks like:</p>
<pre><code> {
"region" : "eu-west-1",
"instanceType": "m1.small"
}
</code></pre>
<p>My response gives the following errors:</p>
<pre><code> {"error":"unable to parse '--1bee44675e8c42d8985e750b2483e0a8\r':
missing fields\nunable to parse 'Content-Disposition: form-data;
name=\"file\"; filename=\"sample.json\"\r': invalid field
format\nunable to parse '\r': missing fields\nunable to parse '{':
missing fields\nunable to parse '\"region\" : \"eu-west-1\",': invalid
field format\nunable to parse '\"instanceType\": \"m1.small\"': invalid
field format\nunable to parse '}': missing fields"}
</code></pre>
<p>My json seems to be a valid json file. I am not sure what I am doing wrong. </p>
| 0 | 2016-10-03T19:12:37Z | 39,839,210 | <p>Writing data with JSON was deprecated for performance reasons and has since been removed.</p>
<p>See GitHub issue comments <a href="https://github.com/influxdata/influxdb/pull/2696#issuecomment-107043910" rel="nofollow">107043910</a>.</p>
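<p>The <code>/write</code> endpoint expects line protocol rather than JSON in the versions where JSON writes were removed. A sketch of building such a body from the question's JSON (the measurement name <code>instances</code> and the field <code>value=1</code> are invented here for illustration):</p>

```python
import json

payload = json.loads('{"region": "eu-west-1", "instanceType": "m1.small"}')

# line protocol shape: measurement,tag1=v1,tag2=v2 field=value
line = 'instances,region={region},instanceType={instanceType} value=1'.format(**payload)
print(line)  # instances,region=eu-west-1,instanceType=m1.small value=1
```

<p>The resulting string would then be posted as the request body, e.g. <code>requests.post(url, data=line)</code>.</p>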
| 0 | 2016-10-03T19:41:56Z | [
"python",
"json",
"post",
"influxdb",
"http-status-code-400"
]
|
Status code 400 on post message to influxdb | 39,838,790 | <p>I'm trying to post a json file to influxdb on my local host. This is the code:</p>
<pre><code>import json
import requests
url = 'http://localhost:8086/write?db=mydb'
files ={'file' : open('sample.json', 'rb')}
r = requests.post(url, files=files)
print(r.text)
</code></pre>
<p>This is what <code>sample.json</code> looks like:</p>
<pre><code> {
"region" : "eu-west-1",
"instanceType": "m1.small"
}
</code></pre>
<p>My response gives the following errors:</p>
<pre><code> {"error":"unable to parse '--1bee44675e8c42d8985e750b2483e0a8\r':
missing fields\nunable to parse 'Content-Disposition: form-data;
name=\"file\"; filename=\"sample.json\"\r': invalid field
format\nunable to parse '\r': missing fields\nunable to parse '{':
missing fields\nunable to parse '\"region\" : \"eu-west-1\",': invalid
field format\nunable to parse '\"instanceType\": \"m1.small\"': invalid
field format\nunable to parse '}': missing fields"}
</code></pre>
<p>My json seems to be a valid json file. I am not sure what I am doing wrong. </p>
| 0 | 2016-10-03T19:12:37Z | 39,839,220 | <p>I think the problem may be that you just open the file but never read it. Since you want to post the content of the <code>json</code> object stored in the file, and not the file itself, it may be better to do this instead:</p>
<pre><code>import json
import requests
url = 'http://localhost:8086/write?db=mydb'
json_data = open('sample.json', 'rb').read() # read the json data from the file
r = requests.post(url, data=json_data) # post them as data
print(r.text)
</code></pre>
<p>which is actually your code modified just a bit...</p>
| 0 | 2016-10-03T19:42:45Z | [
"python",
"json",
"post",
"influxdb",
"http-status-code-400"
]
|
TypeError: 'tasks:meta:newtask' is not JSON serializable | 39,838,792 | <p>I am trying to dump this json - </p>
<pre><code>{'total_run_count': 9, 'task': 'tasks.add', 'enabled': True, 'schedule': {'period': 'seconds', 'every': 3}, 'kwargs': {'max_targets': 100}, 'running': False, 'options': {}, 'delete_key': 'deleted:tasks:meta:newtask', 'name': b'tasks:meta:newtask', 'last_run_at': datetime.datetime(2016, 10, 3, 19, 9, 50, 162098), 'args': ['3', '2'], 'key': 'tasks:meta:newtask'}
</code></pre>
<p>and it fails for the key 'name'. I have decoded it in utf-8 but still no luck. I am getting the following error.</p>
<p>TypeError: 'tasks:meta:newtask' is not JSON serializable</p>
<p>what is not serializable about the above string ? I am clueless.</p>
| 0 | 2016-10-03T19:12:48Z | 39,838,894 | <p>Notice how that item is displayed in the dictionary:</p>
<pre><code>'name': b'tasks:meta:newtask'
</code></pre>
<p>That leading <code>b</code> indicates that 'tasks:meta:newtask' is a <em>byte string</em>, not a regular character string. JSON is telling you that it doesn't know how to handle a byte string object.</p>
<p>Does it really need to be a byte string? If not, you should convert it to a regular string before calling json dump.</p>
| 1 | 2016-10-03T19:19:00Z | [
"python"
]
|
TypeError: 'tasks:meta:newtask' is not JSON serializable | 39,838,792 | <p>I am trying to dump this json - </p>
<pre><code>{'total_run_count': 9, 'task': 'tasks.add', 'enabled': True, 'schedule': {'period': 'seconds', 'every': 3}, 'kwargs': {'max_targets': 100}, 'running': False, 'options': {}, 'delete_key': 'deleted:tasks:meta:newtask', 'name': b'tasks:meta:newtask', 'last_run_at': datetime.datetime(2016, 10, 3, 19, 9, 50, 162098), 'args': ['3', '2'], 'key': 'tasks:meta:newtask'}
</code></pre>
<p>and it fails for the key 'name'. I have decoded it in utf-8 but still no luck. I am getting the following error.</p>
<p>TypeError: 'tasks:meta:newtask' is not JSON serializable</p>
<p>what is not serializable about the above string ? I am clueless.</p>
| 0 | 2016-10-03T19:12:48Z | 39,838,939 | <p>The "name" value in your dict is a <code>bytes</code> object, not string. You have to decode it or you can write your <a href="https://docs.python.org/3/library/json.html#json.JSONEncoder" rel="nofollow">custom JSON encoder</a>:</p>
<pre><code>import json

def default(o):
    if isinstance(o, bytes):
        return o.decode()
    # anything we don't handle should still fail loudly, as json does by default
    raise TypeError('%r is not JSON serializable' % o)

data = {'name': b'tasks:meta:newtask'}
print(json.JSONEncoder(default=default).encode(data))
</code></pre>
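<p>Note that the dict in the question also contains a <code>datetime.datetime</code> under <code>last_run_at</code>, which will hit the same error next; the same hook can cover it too (using <code>isoformat()</code> here is just one reasonable choice of serialization):</p>

```python
import datetime
import json

def default(o):
    if isinstance(o, bytes):
        return o.decode()
    if isinstance(o, datetime.datetime):
        return o.isoformat()
    raise TypeError('%r is not JSON serializable' % o)

data = {'name': b'tasks:meta:newtask',
        'last_run_at': datetime.datetime(2016, 10, 3, 19, 9, 50, 162098)}
print(json.dumps(data, default=default, sort_keys=True))
```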
| 1 | 2016-10-03T19:22:56Z | [
"python"
]
|
Python object is being referenced by an object I cannot find | 39,838,793 | <p>I am trying to remove an object from memory in python and I am coming across an object that it is not being removed. From my understanding if there is no references to the object the garbage collector will de-allocate the memory when it is run. However after I have removed all of the references if I run </p>
<pre><code>bar = Foo()
print gc.get_referrers(bar)
del bar
baz = gc.collect()
print baz
</code></pre>
<p>I get a reply of </p>
<blockquote>
<p>[< frame object at 0x7f1eba291e50>]</p>
<p>0</p>
</blockquote>
<p>So how come does it not delete the object?</p>
<p>I get the same reply for all of the instances of objects if i do </p>
<pre><code>bar = [foo() for i in range(0, 10)]
for x in range(0,len(bar))
baz = bar[x]
del bar[x]
print gc.get_referrers(baz)
</code></pre>
<p>How do I completely remove all referrers from an object, and what is the frame object that shows up on all of them?</p>
<p>I thought it would be the object frame(?) that contains a list of all objects in the program, but I have not been able to confirm that or find a way to stop objects from being referenced by said mystical (to me) object frame.</p>
<p>Any help would be greatly appreciated</p>
<p>Edit:
Okay I rewrote the code to the simple form pulling out everything except the basics</p>
<pre><code>import random, gc
class Object():
def __init__(self):
self.n=None
self.p=None
self.isAlive=True
def setNext(self,object):
self.n=object
def setPrev(self, object):
self.p=object
def getNext(self):
return self.n
def getPrev(self):
return self.p
def simulate(self):
if random.random() > .90:
self.isAlive=False
def remove(self):
if self.p is not None and self.n is not None:
self.n.setPrev(self.p)
self.p.setNext(self.n)
elif self.p is not None:
self.p.setNext(None)
elif self.n is not None:
self.n.setPrev(None)
del self
class Grid():
def __init__(self):
self.cells=[[Cell() for i in range(0,500)] for j in range(0,500)]
for x in range(0,100):
for y in range(0,100):
for z in range(0,100):
self.cells[x][y].addObject(Object())
def simulate(self):
for x in range(0,500):
for y in range(0,500):
self.cells[x][y].simulate()
num=gc.collect()
print " " + str(num) +" deleted today."
class Cell():
def __init__(self):
self.objects = None
self.objectsLast = None
def addObject(self, object):
if self.objects is None:
self.objects = object
else:
self.objectsLast.setNext(object)
object.setPrev(self.objectsLast)
self.objectsLast = object
def simulate(self):
current = self.objects
while current is not None:
if current.isAlive:
current.simulate()
current = current.getNext()
else:
delete = current
current = current.getNext()
if delete.getPrev() is None:
self.objects = current
elif delete.getNext() is None:
self.objectsLast = delete.getPrev()
delete.remove()
def main():
print "Building Map..."
x = Grid()
for y in range (1,101):
print "Simulating day " + str(y) +"..."
x.simulate()
if __name__ == "__main__":
main()
</code></pre>
| 0 | 2016-10-03T19:12:49Z | 39,839,810 | <p>Okay thanks to cjhanks and user2357112 I came up with this answer</p>
<p>The problem being that if you run the program the gc does not collect anything after each day even though there were things deleted</p>
<p>To test if it is deleted I instead run</p>
<pre><code>print len(gc.get_objects())
</code></pre>
<p>each time I go through a "day" doing this shows how many objects python is tracking.
Now with that information, and thanks to a comment, I tried changing Grid to</p>
<pre><code>class Grid():
def __init__(self):
self.cells=[[Cell() for i in range(0,500)] for j in range(0,500)]
self.add(100)
def add(self, num):
for x in range(0, 100):
for y in range(0, 100):
for z in range(0, num):
self.cells[x][y].addObject(Object())
def simulate(self):
for x in range(0,500):
for y in range(0,500):
self.cells[x][y].simulate()
num=gc.collect()
print " " + str(num) +" deleted today."
print len(gc.get_objects())
</code></pre>
<p>and then calling Grid.add(50) halfway through the process. My memory allocation for the program did not increase (watching top in Bash). So my learning points:</p>
<ul>
<li>GC was running without my knowledge</li>
<li>Memory is allocated and never returned to the system until the program is done</li>
<li>Python will reuse the memory </li>
</ul>
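<p>The learning points above can be checked directly with <code>gc.get_objects()</code> — a small sketch (the class name and counts here are illustrative, not from the program above):</p>

```python
import gc

class Foo(object):
    pass

gc.collect()                            # start from a clean slate
before = len(gc.get_objects())
blob = [Foo() for _ in range(1000)]     # 1000 tracked instances plus the list
during = len(gc.get_objects())
del blob
gc.collect()
after = len(gc.get_objects())

print(during - before >= 1000)  # True: the new objects are being tracked
print(after < during)           # True: gone again after del + collect
```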
| 0 | 2016-10-03T20:23:37Z | [
"python",
"object",
"garbage-collection"
]
|
Python object is being referenced by an object I cannot find | 39,838,793 | <p>I am trying to remove an object from memory in python and I am coming across an object that it is not being removed. From my understanding if there is no references to the object the garbage collector will de-allocate the memory when it is run. However after I have removed all of the references if I run </p>
<pre><code>bar = Foo()
print gc.get_referrers(bar)
del bar
baz = gc.collect()
print baz
</code></pre>
<p>I get a reply of </p>
<blockquote>
<p>[< frame object at 0x7f1eba291e50>]</p>
<p>0</p>
</blockquote>
<p>So how come it does not delete the object?</p>
<p>I get the same reply for all of the instances of objects if i do </p>
<pre><code>bar = [foo() for i in range(0, 10)]
for x in range(0,len(bar))
baz = bar[x]
del bar[x]
print gc.get_referrers(baz)
</code></pre>
<p>How do I completely remove all referrers from an object, and what is the frame object that shows up on all of them?</p>
<p>I thought it would be the object frame(?) that contains a list of all objects in the program, but I have not been able to confirm that or find a way to stop objects from being referenced by said mystical (to me) object frame.</p>
<p>Any help would be greatly appreciated</p>
<p>Edit:
Okay I rewrote the code to the simple form pulling out everything except the basics</p>
<pre><code>import random, gc
class Object():
def __init__(self):
self.n=None
self.p=None
self.isAlive=True
def setNext(self,object):
self.n=object
def setPrev(self, object):
self.p=object
def getNext(self):
return self.n
def getPrev(self):
return self.p
def simulate(self):
if random.random() > .90:
self.isAlive=False
def remove(self):
if self.p is not None and self.n is not None:
self.n.setPrev(self.p)
self.p.setNext(self.n)
elif self.p is not None:
self.p.setNext(None)
elif self.n is not None:
self.n.setPrev(None)
del self
class Grid():
def __init__(self):
self.cells=[[Cell() for i in range(0,500)] for j in range(0,500)]
for x in range(0,100):
for y in range(0,100):
for z in range(0,100):
self.cells[x][y].addObject(Object())
def simulate(self):
for x in range(0,500):
for y in range(0,500):
self.cells[x][y].simulate()
num=gc.collect()
print " " + str(num) +" deleted today."
class Cell():
def __init__(self):
self.objects = None
self.objectsLast = None
def addObject(self, object):
if self.objects is None:
self.objects = object
else:
self.objectsLast.setNext(object)
object.setPrev(self.objectsLast)
self.objectsLast = object
def simulate(self):
current = self.objects
while current is not None:
if current.isAlive:
current.simulate()
current = current.getNext()
else:
delete = current
current = current.getNext()
if delete.getPrev() is None:
self.objects = current
elif delete.getNext() is None:
self.objectsLast = delete.getPrev()
delete.remove()
def main():
print "Building Map..."
x = Grid()
for y in range (1,101):
print "Simulating day " + str(y) +"..."
x.simulate()
if __name__ == "__main__":
main()
</code></pre>
| 0 | 2016-10-03T19:12:49Z | 39,839,824 | <p><code>gc.get_referrers</code> takes one argument: the object whose referrers it should find.</p>
<p>I cannot think of any circumstance in which <code>gc.get_referrers</code> would return no results, because in order to send an object to <code>gc.get_referrers</code>, there has to be a reference to the object.</p>
<p>In other words, if there was no reference to the object, it would not be possible to send it to <code>gc.get_referrers</code>.</p>
<p>At the very least, there will be a reference from the <code>globals()</code> or from the current <a href="https://docs.python.org/2.0/ref/execframes.html" rel="nofollow">execution frame</a> (which contains the local variables):</p>
<blockquote>
<p>A code block is executed in an execution frame. An execution frame contains some administrative information (used for debugging), determines where and how execution continues after the code block's execution has completed, and (perhaps most importantly) defines two namespaces, the local and the global namespace, that affect execution of the code block.</p>
</blockquote>
<p>See an extended version of the example from the question:</p>
<pre><code>class Foo(object):
pass
def f():
bar = [Foo() for i in range(0, 10)]
for x in range(0, len(bar)):
# at this point there is one reference to bar[x]: it is bar
print len(gc.get_referrers(bar[x])) # prints 1
baz = bar[x]
# at this point there are two references to baz:
# - bar references it, because it is in the list
# - this "execution frame" references it, because it is in variable "baz"
print len(gc.get_referrers(bar[x])) # prints 2
del bar[x]
# at this point, only the execution frame (variable baz) references the object
print len(gc.get_referrers(baz)) # prints 1
print gc.get_referrers(baz) # prints a frame object
del baz
# now there are no more references to it, but there is no way to call get_referrers
f()
</code></pre>
<h2>How to test it properly?</h2>
<p>There is a better trick to detect whether there are referrers or not: <code>weakref</code>.</p>
<p>The <code>weakref</code> module provides a way to create <em>weak</em> references to an object, which <em>do not count</em> as real references. This means that even if there is a weak reference to an object, the object will still be deleted when there are no other references to it. Weak references also do not show up in the output of <code>gc.get_referrers</code>.</p>
<p>So:</p>
<pre><code>>>> x = Foo()
>>> weak_x = weakref.ref(x)
>>>
>>> gc.get_referrers(x) == [globals()] # only one reference from global variables
True
>>> x
<__main__.Foo object at 0x000000000272D2E8>
>>> weak_x
<weakref at 0000000002726D18; to 'Foo' at 000000000272D2E8>
>>> del x
>>> weak_x
<weakref at 0000000002726D18; dead>
</code></pre>
<p>The weak reference says that the object is dead, so it was indeed deleted.</p>
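<p>The same check can be written as a small standalone script (in CPython, reference counting reclaims the object as soon as the last strong reference is dropped, so the weak reference dies immediately):</p>

```python
import weakref

class Foo(object):
    pass

x = Foo()
weak_x = weakref.ref(x)

assert weak_x() is x      # the object is still alive
del x                     # drop the only strong reference
assert weak_x() is None   # CPython reclaims it right away; the weakref is dead
```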
| 0 | 2016-10-03T20:24:17Z | [
"python",
"object",
"garbage-collection"
]
|
How do I assign specific users to a user-uploaded file so they can modify it/delete it (Django + Apache) | 39,838,861 | <p>I'm using Django 1.10 + Apache on Linux.
I've created a small webapp to upload documents (with dropzone.js) and want to implement the ability for a user to specify who can view/modify/delete a specific file, but I can't figure out how. I attempted using a ManyToManyField, but maybe I'm not understanding the Field itself correctly.</p>
<p>The "Document" model is this:</p>
<h1>Model</h1>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/')
#validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
is_public = models.BooleanField(default = False)
accesible_by = models.ManyToManyField(User) #This is my attempt at doing this task.
def clean(self):
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
return self.file.name.split('/')[-1]
class Meta():
ordering = ['-uploaded_at']
</code></pre>
<p>My View to handle the creation of documents:</p>
<h1>View</h1>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file', 'is_public']
def form_valid(self, form):
self.object = form.save(commit = False)
## I guess here i would Add the (self.request.user) to the accesible_by Field.
self.object.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<p>Thanks in advance to anyone for any ideas or suggestions...</p>
| 0 | 2016-10-03T19:16:34Z | 39,839,446 | <p>You have a model and a view that hopefully work for adding new documents, but you still have a number of steps to go.
You'll need a place to assign the users that can view/modify/delete your files. If you need to store access levels (view/delete/...), your accessible_by field will not suffice, and you'll do well with a <strong><em><a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ManyToManyField.through" rel="nofollow">through</a></em></strong> table to add more information like the access level.</p>
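<p>As a framework-free sketch of the lookup such a through table enables (plain Python; the user names, file names, and access levels below are illustrative, not from the original post):</p>

```python
# Map (user, document) pairs to a set of allowed actions; in Django this
# data would live in the ManyToMany "through" table. Names are illustrative.
ACCESS = {
    ('alice', 'report.docx'): {'view', 'modify', 'delete'},
    ('bob', 'report.docx'): {'view'},
}

def can(user, doc, action):
    """Return True if `user` may perform `action` on `doc`."""
    return action in ACCESS.get((user, doc), set())
```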
<p>You need to write views for the various actions (view, delete, ...) that users will request, and this is where you ensure users have the right privileges. An implementation would be to get request.user and the document id, look up whether the user has permission for what she's doing, and either return an HTTP unauthorized response or allow the action to proceed.</p>
<blockquote>
<p>Edit: My question is about how can I assign user-permissions to each
individual file</p>
</blockquote>
<p>If we're keeping this to access control at the Django level, you can use the document model you already have: for every document, you can assign users via <code>accessible_by</code>. Something like this can get you started:</p>
<pre><code>from django.core.exceptions import PermissionDenied
def view_document(request, doc_pk):
doc = get_object_or_404(Document, pk=doc_pk)
if not doc.accessible_by.filter(username=request.user.username):
raise PermissionDenied
#perform rest of action
</code></pre>
<p>Or do you mean to use the permissions framework itself?</p>
| 0 | 2016-10-03T19:58:33Z | [
"python",
"django",
"apache",
"file-upload",
"permissions"
]
|
Wrote a plugin for BitBucket API calls. Unable to find my plugin when used in master.cfg | 39,838,879 | <p>I wrote a class that wraps BitBucket API calls. I can confirm it works standalone; however, I then added that Python class to /master/buildbot/changes/bitbucket.py and added this class in master/setup.py:</p>
<pre><code> ('buildbot.changes.bitbucket', ['BitbucketPullrequestPoller', 'BitBucketBuildStatusAPIWrapper']),
</code></pre>
<p>It still complains that the plugin is not found. I also tried using the /master/bin/buildbot executable. I am assuming the compiled buildbot executable should just point to the Python script, so I am assuming I don't need to recompile buildbot, but at the same time I am not sure why it isn't working. Any help will be greatly appreciated!</p>
| 0 | 2016-10-03T19:17:43Z | 39,876,296 | <p>Make sure you run </p>
<pre><code>python <path to buildbot repo>/master/setup.py build
python <path to buildbot repo>/master/setup.py install
</code></pre>
| 0 | 2016-10-05T14:11:15Z | [
"python",
"buildbot"
]
|
ANTLR error 134 | 39,838,986 | <p>I'm trying to build an Abstract Syntax Tree for Java in Python with the antlr4 package.
I've downloaded the Java grammar from
<a href="https://github.com/antlr/grammars-v4/blob/master/java8/Java8.g4" rel="nofollow">https://github.com/antlr/grammars-v4/blob/master/java8/Java8.g4</a></p>
<p>I want to use that grammar file to produce JavaLexer and JavaParser for Python2.</p>
<p>When I say </p>
<pre><code>"$ antlr4 -Dlanguage=Python2 Java8.g4"
</code></pre>
<p>an error occurred. That error is:</p>
<blockquote>
<p>error(134): Java8.g4:73:0: symbol type conflicts with generated code in target language or runtime</p>
</blockquote>
<p>NOTE: I've deleted the parts with <code>Character.isJavaIdentifierPart()</code>, because these lines are not usable from Python and I will use just ASCII.</p>
| 0 | 2016-10-03T19:26:15Z | 39,839,712 | <p>Python has a built-in function called <code>type</code>. ANTLR4 prints an error for line 73 of the grammar:</p>
<pre><code>type
: primitiveType
| referenceType
;
</code></pre>
<p>It looks like there is a name conflict with the generated target code, so you have to rename the <code>type</code> rule to something else in your grammar.</p>
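<p>One possible rename is shown below (illustrative; every other rule in the grammar that references <code>type</code> must be updated to the new name as well):</p>

```antlr
// renamed from "type", which clashes with Python's built-in of the same name
typeType
    : primitiveType
    | referenceType
    ;
```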
| 1 | 2016-10-03T20:17:07Z | [
"java",
"python",
"antlr4"
]
|
Pcraster - python - reading stack of maps | 39,838,987 | <p>I have a stack of maps as follows:</p>
<p><a href="http://i.stack.imgur.com/P9a5U.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/P9a5U.jpg" alt="list of .map files"></a></p>
<p>They are the outputs of a certain dynamic model, written every 720 timesteps.
I would like to import/read those maps as input to other dynamic models.</p>
<p>What should I do?</p>
<p>(I've tried <code>timeinput</code> but I don't get how to use it correctly.)</p>
| 0 | 2016-10-03T19:26:17Z | 39,840,212 | <p>If the stack of maps is in a particular directory, you can use <code>os.listdir</code> together with <code>os.path</code> to read all the files in that directory:</p>
<pre><code>from os import listdir
from os.path import isfile, join

mypath = 'path/to/your/maps'  # directory containing the .map files
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
</code></pre>
<p><code>onlyfiles</code> is a list of files in that directory.</p>
<p>Since the names follow a complex pattern (the numeric ending differs by 720 between consecutive maps), I think this is the best way to go through all the files.</p>
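<p>If the maps have to be processed in simulation order, note that a plain lexicographic sort would put e.g. a name ending in 1440 before one ending in 720. A sketch of sorting by the trailing timestep number (the file names here are illustrative; adapt the pattern to yours):</p>

```python
import re

# Sort map file names by their trailing timestep number instead of
# lexicographically. Illustrative names; adjust the regex to your files.
def timestep(name):
    m = re.search(r'(\d+)$', name)
    return int(m.group(1)) if m else -1

files = ['map_1440', 'map_2160', 'map_720']
ordered = sorted(files, key=timestep)  # ['map_720', 'map_1440', 'map_2160']
```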
| 0 | 2016-10-03T20:51:15Z | [
"python"
]
|
QTextEdit as a child node for QTreeWidgetItem? | 39,839,006 | <p>Is it possible to add a QTextEdit as a child in QTreeWidget?</p>
<p>Here is my code; we create a QTreeWidget and add the columns:</p>
<pre><code>self.treetext = QtGui.QTreeWidget(self.dockWidgetContents_2)
self.treetext.setObjectName(_fromUtf8("treetext"))
self.verticalLayout_2.addWidget(self.treetext)
self.treetext.setGeometry(QtCore.QRect(20, 10, 261, 241))
item_0 = QtGui.QTreeWidgetItem(self.treetext)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_0 = QtGui.QTreeWidgetItem(self.treetext)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
item_1 = QtGui.QTreeWidgetItem(item_0)
</code></pre>
<p>and add new items as children:</p>
<pre><code>self.treetext.headerItem().setText(0, _translate("Form", "Model List", None))
__sortingEnabled = self.treetext.isSortingEnabled()
self.treetext.setSortingEnabled(False)
self.treetext.topLevelItem(0).setText(0, _translate("Form", "Model 1", None))
self.treetext.topLevelItem(0).child(0).setText(0, _translate("Form", "New Subitem", None))
self.treetext.topLevelItem(0).child(1).setText(0, _translate("Form", "New Item", None))
self.treetext.topLevelItem(0).child(2).setText(0, _translate("Form", "New Item", None))
self.treetext.topLevelItem(0).child(3).setText(0, _translate("Form", "New Item", None))
self.treetext.topLevelItem(1).setText(0, _translate("Form", "Model 2", None))
self.treetext.topLevelItem(1).child(0).setText(0, _translate("Form", "New Subitem", None))
self.treetext.topLevelItem(1).child(1).setText(0, _translate("Form", "New Item", None))
self.treetext.topLevelItem(1).child(2).setText(0, _translate("Form", "New Item", None))
self.treetext.topLevelItem(1).child(3).setText(0, _translate("Form", "New Item", None))
self.treetext.setSortingEnabled(__sortingEnabled)
</code></pre>
<p>I can create a new QTextEdit, as in this other example:</p>
<pre><code>self.groupBox = QtGui.QTextEdit(self.dockWidgetContents_2)
self.groupBox.setObjectName(_fromUtf8("groupBox"))
self.verticalLayout_2.addWidget(self.groupBox)
</code></pre>
<p>But can we put a QTextEdit as a child of a QTreeWidgetItem?</p>
| 1 | 2016-10-03T19:27:59Z | 39,840,794 | <p>You can set a widget on any item in the tree using <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtreewidget.html#setItemWidget" rel="nofollow"><code>setItemWidget</code></a></p>
<pre><code>self.treetext.setItemWidget(item_1, 0, QTextEdit(self))
</code></pre>
<p>If your tree widget items are editable, you can also just tell Qt to open a persistent editor (by default, <code>QTreeWidgetItems</code> use a <code>QLineEdit</code> for editing, but you can override that behavior with a <code>QItemDelegate</code> if you want) using <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtreewidget.html#openPersistentEditor" rel="nofollow"><code>openPersistentEditor()</code></a></p>
<pre><code>self.treetext.openPersistentEditor(item_1, 0)
</code></pre>
| 0 | 2016-10-03T21:35:22Z | [
"python",
"pyqt",
"pyqt4",
"qtextedit",
"qtreewidget"
]
|