title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
grouping related data in csv excel file | 39,957,573 | <p>This is a CSV Excel file:</p>
<pre><code> Receipt Name Address Date Time Total
25007 A ABC pte ltd 3/7/2016 10:40 12.30
25008 A ABC ptd ltd 3/7/2016 11.30 6.70
25009 B CCC ptd ltd 4/7/2016 07.35 23.40
25010 A ABC pte ltd 4/7/2016 12:40 9.90
</code></pre>
<p>How do I retrieve the dates and times and group them by company (A and B) so that the output would be something like: (A, 3/7/2016, 10:40, 11.30, 4/7/2016 12:40), (B, 4/7/2016, 07:35)?</p>
<p>My existing code is:</p>
<pre><code>datePattern = re.compile(r"(\d+/\d+/\d+)\s+(\d+:\d+)")
dateDict = dict()
for i, line in enumerate(open('sample_data.csv')):
    for match in re.finditer(datePattern, line):
        if match.group(1) in dateDict:
            dateDict[match.group(1)].append(match.group(2))
        else:
            dateDict[match.group(1)] = [match.group(2), ]
</code></pre>
<p>However, it only works for grouping date and time; now I want to include the name as part of the grouping as well. *Using the csv module would be preferred.</p>
 | 2 | 2016-10-10T11:47:50Z | 39,957,721 | <p>It can be done pretty easily using the pandas module:</p>
<pre><code>import pandas as pd
df = pd.read_csv('/path/to/file.csv')
df.groupby(['Name','Date']).Time.apply(list).reset_index().to_csv('d:/temp/out.csv', index=False)
</code></pre>
<p>D:\temp\out.csv:</p>
<pre><code>Name,Date,Time
A,3/7/2016,"['10:40', '11.30']"
A,4/7/2016,['12:40']
B,4/7/2016,['07.35']
</code></pre>
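<p>If the Python list repr in the output (e.g. <code>"['10:40', '11.30']"</code>) is unwanted, the times can be joined into one string before writing. A self-contained sketch of that refinement, with the sample rows inlined instead of read from a file:</p>

```python
import pandas as pd

# Inline the sample rows so the sketch is self-contained
# (the answer above reads them with pd.read_csv instead).
df = pd.DataFrame({
    'Name': ['A', 'A', 'B', 'A'],
    'Date': ['3/7/2016', '3/7/2016', '4/7/2016', '4/7/2016'],
    'Time': ['10:40', '11.30', '07.35', '12:40'],
})

# ', '.join collapses each group's times into one plain string,
# avoiding the Python list repr in the CSV output.
grouped = df.groupby(['Name', 'Date']).Time.apply(', '.join).reset_index()
print(grouped.to_csv(index=False))
```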
| -1 | 2016-10-10T11:56:08Z | [
"python",
"regex",
"group"
] |
grouping related data in csv excel file | 39,957,573 | <p>This is a CSV Excel file:</p>
<pre><code> Receipt Name Address Date Time Total
25007 A ABC pte ltd 3/7/2016 10:40 12.30
25008 A ABC ptd ltd 3/7/2016 11.30 6.70
25009 B CCC ptd ltd 4/7/2016 07.35 23.40
25010 A ABC pte ltd 4/7/2016 12:40 9.90
</code></pre>
<p>How do I retrieve the dates and times and group them by company (A and B) so that the output would be something like: (A, 3/7/2016, 10:40, 11.30, 4/7/2016 12:40), (B, 4/7/2016, 07:35)?</p>
<p>My existing code is:</p>
<pre><code>datePattern = re.compile(r"(\d+/\d+/\d+)\s+(\d+:\d+)")
dateDict = dict()
for i, line in enumerate(open('sample_data.csv')):
    for match in re.finditer(datePattern, line):
        if match.group(1) in dateDict:
            dateDict[match.group(1)].append(match.group(2))
        else:
            dateDict[match.group(1)] = [match.group(2), ]
</code></pre>
<p>However, it only works for grouping date and time; now I want to include the name as part of the grouping as well. *Using the csv module would be preferred.</p>
| 2 | 2016-10-10T11:47:50Z | 39,959,012 | <p>If you don't want to use Pandas, this is a possible solution. It's not the most elegant since your csv format is relatively clunky to parse. If you can change the format to use a non-whitespace field separator, using a proper csv parsing library (like <code>pandas</code> or Python's built-in <code>csv</code> module) would be preferable.</p>
<pre><code>import re

datePattern = re.compile(r"(\d+/\d+/\d+)\s+(\d+[:.]\d+)")
companyPattern = re.compile(r"^\s+\d+\s+(\w+)")
companyDict = {}
for i, line in enumerate(open('sample_data.csv')):
    # skip csv header
    if i == 0:
        continue
    timestampMatch = datePattern.search(line)
    companyMatch = companyPattern.search(line)
    # filter out any malformed lines which don't match
    if timestampMatch is None or companyMatch is None:
        continue
    date = timestampMatch.group(1)
    time = timestampMatch.group(2)
    company = companyMatch.group(1)
    companyDict.setdefault(company, []).append("{} {}".format(date, time))
</code></pre>
<p>Note that the time field is inconsistent as to whether it uses <code>.</code> or <code>:</code> for the hour/minute delimiter so I've taken this into account.</p>
<p>Running this on your sample data results in the following value for <code>companyDict</code>:</p>
<pre><code>{'A': ['3/7/2016 10:40', '3/7/2016 11.30', '4/7/2016 12:40'], 'B': ['4/7/2016 07.35']}
</code></pre>
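<p>To get the per-company tuples shown in the question, the dict can be post-processed; a sketch assuming the <code>companyDict</code> shape produced above:</p>

```python
# Shape produced by the loop above (values inlined here for illustration)
companyDict = {'A': ['3/7/2016 10:40', '3/7/2016 11.30', '4/7/2016 12:40'],
               'B': ['4/7/2016 07.35']}

# One tuple per company: the name followed by its timestamps
rows = [(name,) + tuple(stamps) for name, stamps in sorted(companyDict.items())]
print(rows)
```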
| -1 | 2016-10-10T13:06:13Z | [
"python",
"regex",
"group"
] |
grouping related data in csv excel file | 39,957,573 | <p>This is a CSV Excel file:</p>
<pre><code> Receipt Name Address Date Time Total
25007 A ABC pte ltd 3/7/2016 10:40 12.30
25008 A ABC ptd ltd 3/7/2016 11.30 6.70
25009 B CCC ptd ltd 4/7/2016 07.35 23.40
25010 A ABC pte ltd 4/7/2016 12:40 9.90
</code></pre>
<p>How do I retrieve the dates and times and group them by company (A and B) so that the output would be something like: (A, 3/7/2016, 10:40, 11.30, 4/7/2016 12:40), (B, 4/7/2016, 07:35)?</p>
<p>My existing code is:</p>
<pre><code>datePattern = re.compile(r"(\d+/\d+/\d+)\s+(\d+:\d+)")
dateDict = dict()
for i, line in enumerate(open('sample_data.csv')):
    for match in re.finditer(datePattern, line):
        if match.group(1) in dateDict:
            dateDict[match.group(1)].append(match.group(2))
        else:
            dateDict[match.group(1)] = [match.group(2), ]
</code></pre>
<p>However, it only works for grouping date and time; now I want to include the name as part of the grouping as well. *Using the csv module would be preferred.</p>
| 2 | 2016-10-10T11:47:50Z | 39,960,675 | <p>Presuming your data actually looks like:</p>
<pre><code>Receipt,Name,Address,Date,Time,Items
25007,A,ABC pte ltd,4/7/2016,10:40,"Cheese, Cookie, Pie"
25008,A,CCC pte ltd,4/7/2016,11:30,"Cheese, Cookie"
25009,B,CCC pte ltd,4/7/2016,07:35,"Chocolate"
25010,A,CCC pte ltd,4/7/2016,12:40," Butter, Cookie"
</code></pre>
<p>then it is pretty trivial to group:</p>
<pre><code>from collections import defaultdict
from csv import reader

with open("test.csv") as f:
    next(f)  # skip header
    group_dict = defaultdict(list)
    for _, name, _, dte, time, _ in reader(f):
        group_dict[name].append((dte, time))

from pprint import pprint as pp
pp(dict(group_dict))
</code></pre>
<p>which would give you:</p>
<pre><code>{'A': [('4/7/2016', '10:40'), ('4/7/2016', '11:30'), ('4/7/2016', '12:40')],
'B': [('4/7/2016', '07:35')]}
</code></pre>
<p>If you don't want the date repeating, then also group on that:</p>
<pre><code>with open("test.csv") as f:
    next(f)  # skip header
    group_dict = defaultdict(list)
    for _, name, _, dte, time, _ in reader(f):
        group_dict[name, dte].append(time)

from pprint import pprint as pp
pp(dict(group_dict))
</code></pre>
<p>Which would give you:</p>
<pre><code>{('A', '4/7/2016'): ['10:40', '11:30', '12:40'], ('B', '4/7/2016'): ['07:35']}
</code></pre>
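<p>Since the asker preferred the <code>csv</code> module, the grouped result can be written back out with <code>csv.writer</code> as well; a sketch using the (name, date) grouping above (an in-memory <code>io.StringIO</code> stands in for the output file):</p>

```python
import csv
import io

# Grouped data in the shape produced above: (name, date) -> [times]
group_dict = {('A', '4/7/2016'): ['10:40', '11:30', '12:40'],
              ('B', '4/7/2016'): ['07:35']}

out = io.StringIO()  # swap in open('grouped.csv', 'w', newline='') for a real file
writer = csv.writer(out)
writer.writerow(['Name', 'Date', 'Times'])
for (name, dte), times in sorted(group_dict.items()):
    writer.writerow([name, dte, ', '.join(times)])

print(out.getvalue())
```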
| 0 | 2016-10-10T14:31:54Z | [
"python",
"regex",
"group"
] |
Count each occurrence of alphabet and store it in a list in Python 3 | 39,957,615 | <p>Suppose I have a string s </p>
<pre><code>s = "ebber"
</code></pre>
<p>I want a list x, where x = [1,1,2,2,1]<br>
We see the first occurrence of e so our first element in list x is 1, then we see the first occurrence of b so our next element in our list is 1, then we see our second occurrence of b so our next element in the list is 2, and so on..... </p>
<p>How do I implement code to get this result in python?</p>
 | -1 | 2016-10-10T11:50:48Z | 39,958,154 | <pre><code>from collections import defaultdict

def count(s):
    d, ls = defaultdict(int), []
    for e in s:
        d[e] += 1        # increment counter of char
        ls.append(d[e])  # add counter
    return ls

count("ebber")  # [1, 1, 2, 2, 1]
</code></pre>
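<p>An equivalent running count can be kept with <code>collections.Counter</code>:</p>

```python
from collections import Counter

def running_counts(s):
    # seen[ch] is how many times ch has appeared so far
    seen = Counter()
    out = []
    for ch in s:
        seen[ch] += 1
        out.append(seen[ch])
    return out

print(running_counts("ebber"))  # [1, 1, 2, 2, 1]
```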
| 0 | 2016-10-10T12:19:24Z | [
"python"
] |
Creating zip file from byte | 39,957,629 | <p>I'm sending the byte string of a zip file from the client side using <code>JSZip</code> and need to convert it back to a zip file on the server side. The code I've tried isn't working.</p>
<pre><code>b = bytearray()
b.extend(map(ord, request.POST.get("zipFile")))
zipPath = 'uploadFile' + str(uuid.uuid4()) + '.zip'
myzip = zipfile.ZipFile(zipPath, 'w')
with myzip:
    myzip.write(b)
</code></pre>
<p>It gives the error:</p>
<pre><code>stat: path too long for Windows
</code></pre>
<p>How do I save my byte string as a zip file?</p>
| 0 | 2016-10-10T11:51:29Z | 39,958,183 | <p>Haven't tried it, but you could use <a href="https://docs.python.org/3/library/io.html#buffered-streams" rel="nofollow"><code>io.BytesIO</code></a> to construct a Buffered Stream
with your bytes and then create your zip file like so:</p>
<pre><code>import io
import zipfile

# ZipFile.write() expects a filename; writestr() accepts in-memory data
with zipfile.ZipFile('my_file.zip', 'w') as myzip:
    myzip.writestr('upload.bin', io.BytesIO(b).read())
</code></pre>
| 1 | 2016-10-10T12:20:54Z | [
"python",
"python-3.x",
"zipfile"
] |
Creating zip file from byte | 39,957,629 | <p>I'm sending the byte string of a zip file from the client side using <code>JSZip</code> and need to convert it back to a zip file on the server side. The code I've tried isn't working.</p>
<pre><code>b = bytearray()
b.extend(map(ord, request.POST.get("zipFile")))
zipPath = 'uploadFile' + str(uuid.uuid4()) + '.zip'
myzip = zipfile.ZipFile(zipPath, 'w')
with myzip:
    myzip.write(b)
</code></pre>
<p>It gives the error:</p>
<pre><code>stat: path too long for Windows
</code></pre>
<p>How do I save my byte string as a zip file?</p>
 | 0 | 2016-10-10T11:51:29Z | 39,958,539 | <p>JSZip already made a zip archive. The zipfile module is for accessing zip file contents, but you don't need to parse the archive to store it. In addition, a bytearray can be created directly from a string, so the map(ord, ...) is superfluous (bytearray is for handling numeric binary data or making a mutable string-like object); in Python 3, just encode the string back to bytes before writing in binary mode. So a slightly simplified variant might be:</p>
<pre><code>zipContents = request.POST.get("zipFile")
zipPath = 'uploadFile' + str(uuid.uuid4()) + '.zip'
with open(zipPath, 'wb') as zipFile:
    zipFile.write(zipContents.encode('latin-1'))  # binary mode needs bytes in Python 3
</code></pre>
| 2 | 2016-10-10T12:40:59Z | [
"python",
"python-3.x",
"zipfile"
] |
Creating zip file from byte | 39,957,629 | <p>I'm sending the byte string of a zip file from the client side using <code>JSZip</code> and need to convert it back to a zip file on the server side. The code I've tried isn't working.</p>
<pre><code>b = bytearray()
b.extend(map(ord, request.POST.get("zipFile")))
zipPath = 'uploadFile' + str(uuid.uuid4()) + '.zip'
myzip = zipfile.ZipFile(zipPath, 'w')
with myzip:
    myzip.write(b)
</code></pre>
<p>It gives the error:</p>
<pre><code>stat: path too long for Windows
</code></pre>
<p>How do I save my byte string as a zip file?</p>
| 0 | 2016-10-10T11:51:29Z | 39,958,763 | <p><a href="https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.write" rel="nofollow"><code>ZipFile.write(filename, [arcname[, compress_type]])</code></a> takes the name of a local file to be added to the zip file. To write data from a <code>bytearray</code> or <code>bytes</code> object you need to use the <a href="https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.writestr" rel="nofollow"><code>ZipFile.writestr(zinfo_or_arcname, bytes[, compress_type])</code></a> method instead:</p>
<pre><code>with zipfile.ZipFile(zipPath, 'w') as zipFile:
    zipFile.writestr('name_of_file_in_archive', zipContents)
</code></pre>
<p>Note: if <code>request.POST.get("zipFile")</code> already is <code>bytes</code> (or <code>str</code> in python2) you don't need to convert it to a <code>bytearray</code> before writing it to the archive.</p>
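<p>A self-contained round-trip sketch of <code>writestr</code> (the payload and member name are made up for illustration; an in-memory buffer stands in for the file on disk):</p>

```python
import io
import zipfile

payload = b"hello zip"   # stands in for the uploaded byte string
buf = io.BytesIO()       # stands in for open(zipPath, 'wb')

# Store the raw bytes as one member of the archive
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('upload.bin', payload)

# Re-open the archive to confirm it is valid and the bytes survived
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    print(zf.namelist())
    assert zf.read('upload.bin') == payload
```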
| 1 | 2016-10-10T12:52:59Z | [
"python",
"python-3.x",
"zipfile"
] |
store multiple images efficiently in Python data structure | 39,957,657 | <p>I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.</p>
<p>I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates. </p>
<p>So that means a lot of switching back and forth between the original image and the binary images.</p>
<p>I don't want to read every time the images from file because it might be very time consuming.</p>
<p>Any suggestion which data structure should be used for storing several types of images in Python?</p>
| 0 | 2016-10-10T11:53:08Z | 39,973,094 | <p>I used several lists and list.append() for storing the image.</p>
<p>For finding the white regions in the black & white images I used <strong>cv2.findNonZero()</strong>.</p>
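<p><code>cv2.findNonZero</code> returns the (x, y) coordinates of the non-zero pixels; the same information can be had from plain NumPy (a sketch, assuming <code>numpy</code> is installed):</p>

```python
import numpy as np

# Hypothetical 3x3 binary mask: 255 = white, 0 = black
mask = np.array([[0, 255, 0],
                 [0, 0, 255],
                 [0, 0, 0]], dtype=np.uint8)

# np.argwhere yields (row, col) pairs; cv2.findNonZero yields (x, y) = (col, row),
# so flip the columns to match its convention.
coords = np.argwhere(mask)[:, ::-1]
print(coords.tolist())  # [[1, 0], [2, 1]]
```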
| 0 | 2016-10-11T08:19:00Z | [
"python",
"image",
"algorithm",
"opencv",
"image-processing"
] |
store multiple images efficiently in Python data structure | 39,957,657 | <p>I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.</p>
<p>I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates. </p>
<p>So that means a lot of switching back and forth between the original image and the binary images.</p>
<p>I don't want to read every time the images from file because it might be very time consuming.</p>
<p>Any suggestion which data structure should be used for storing several types of images in Python?</p>
| 0 | 2016-10-10T11:53:08Z | 39,973,408 | <p>PIL and Pillow are only marginally useful for this type of work. </p>
<p>The basic algorithm used for "finding and counting" objects like you are trying to do goes something like this:</p>
<ol>
<li>Conversion to grayscale</li>
<li>Thresholding (either automatically via the Otsu method, or similar, or by manually setting the threshold values)</li>
<li>Contour detection</li>
<li>Masking and object counting based on your contours</li>
</ol>
<p>You can just use a Mat of integers (Mat1i in OpenCV's C++ API; a NumPy array in Python) as the data structure in this scenario.</p>
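<p>One simple in-memory layout in Python is a dict keyed by image name, holding the grayscale image and its mask side by side (nested lists stand in here for the NumPy arrays that <code>cv2.imread</code> would give you; names and shapes are made up):</p>

```python
# Hypothetical 2x2 grayscale image and its binary mask
store = {
    'img_001': {
        'gray': [[12, 200], [34, 90]],
        'mask': [[0, 255], [0, 255]],
    },
}

def masked_values(name, images):
    """Return (row, col, gray_value) for every white pixel in the mask."""
    gray, mask = images[name]['gray'], images[name]['mask']
    return [(r, c, gray[r][c])
            for r, row in enumerate(mask)
            for c, v in enumerate(row)
            if v == 255]

print(masked_values('img_001', store))  # [(0, 1, 200), (1, 1, 90)]
```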
| 0 | 2016-10-11T08:38:44Z | [
"python",
"image",
"algorithm",
"opencv",
"image-processing"
] |
How can I update Entry without a "submit" button in Tkinter? | 39,957,699 | <p>So I have <code>Entries</code> which have some values assigned to them from a CFG File. I want to modify the CFG file when the <code>Entry</code> is updated, live, without a <code>submit</code> button; </p>
<p>Using <code><Key></code> binding will work but will take only the previous value, not the current one, as the last key pressed is not taken into consideration as a value, but as a <code>key-press</code>.</p>
<p>For example:</p>
<pre><code>class EntryBox:
    def __init__(self, value, option, section, grid_row, master_e):
        self.section = section
        self.option = option
        self.box = Entry(master_e)
        self.box.grid(column=0, row=grid_row)
        self.serial_no = grid_row
        self.box.insert(0, value)
        self.box.bind("<Key>", lambda event: update_cfg(event, self, self.get_value()))

    def get_value(self):
        return self.box.get()

def update_cfg(evt, entry_box, new_value):
    global config_file
    config_file.set(entry_box.section, entry_box.option, new_value)
    print "Config file modified. " + entry_box.section + " " + entry_box.option + " " + new_value
</code></pre>
<p>If the value in the <code>entry</code> is <code>05R</code> when I click on the <code>entry</code> and press 6, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 05R</code>; after I press 7, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 0R56</code> and so on, always with one keypress behind. The only way to live update it after the value has been changed is to press the <code>TAB</code> or <code>arrow</code> buttons. </p>
 | 1 | 2016-10-10T11:55:25Z | 39,957,960 | <p>The pressed key is available in <code>event.char</code>, so you can append it to the current text yourself.</p>
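<p>In other words, the value after the keystroke is (approximately) the current Entry text plus <code>event.char</code>; the idea, sketched without a running Tk loop and ignoring cursor position and deletions, is just:</p>

```python
def value_after_keypress(current_text, event_char):
    # Approximation: assumes the cursor sits at the end of the Entry
    return current_text + event_char

print(value_after_keypress("05R", "6"))  # 05R6
```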
| 1 | 2016-10-10T12:08:38Z | [
"python",
"user-interface",
"tkinter",
"keypress",
"entry"
] |
How can I update Entry without a "submit" button in Tkinter? | 39,957,699 | <p>So I have <code>Entries</code> which have some values assigned to them from a CFG File. I want to modify the CFG file when the <code>Entry</code> is updated, live, without a <code>submit</code> button; </p>
<p>Using <code><Key></code> binding will work but will take only the previous value, not the current one, as the last key pressed is not taken into consideration as a value, but as a <code>key-press</code>.</p>
<p>For example:</p>
<pre><code>class EntryBox:
    def __init__(self, value, option, section, grid_row, master_e):
        self.section = section
        self.option = option
        self.box = Entry(master_e)
        self.box.grid(column=0, row=grid_row)
        self.serial_no = grid_row
        self.box.insert(0, value)
        self.box.bind("<Key>", lambda event: update_cfg(event, self, self.get_value()))

    def get_value(self):
        return self.box.get()

def update_cfg(evt, entry_box, new_value):
    global config_file
    config_file.set(entry_box.section, entry_box.option, new_value)
    print "Config file modified. " + entry_box.section + " " + entry_box.option + " " + new_value
</code></pre>
<p>If the value in the <code>entry</code> is <code>05R</code> when I click on the <code>entry</code> and press 6, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 05R</code>; after I press 7, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 0R56</code> and so on, always with one keypress behind. The only way to live update it after the value has been changed is to press the <code>TAB</code> or <code>arrow</code> buttons. </p>
| 1 | 2016-10-10T11:55:25Z | 39,957,975 | <p>I decided that <code><Key></code> was not the right option in my case and instead used <code><FocusOut></code>. This way, if you either change the value using the mouse or keyboard <code>TAB</code>, on focus out it will update it.</p>
| 0 | 2016-10-10T12:09:14Z | [
"python",
"user-interface",
"tkinter",
"keypress",
"entry"
] |
How can I update Entry without a "submit" button in Tkinter? | 39,957,699 | <p>So I have <code>Entries</code> which have some values assigned to them from a CFG File. I want to modify the CFG file when the <code>Entry</code> is updated, live, without a <code>submit</code> button; </p>
<p>Using <code><Key></code> binding will work but will take only the previous value, not the current one, as the last key pressed is not taken into consideration as a value, but as a <code>key-press</code>.</p>
<p>For example:</p>
<pre><code>class EntryBox:
    def __init__(self, value, option, section, grid_row, master_e):
        self.section = section
        self.option = option
        self.box = Entry(master_e)
        self.box.grid(column=0, row=grid_row)
        self.serial_no = grid_row
        self.box.insert(0, value)
        self.box.bind("<Key>", lambda event: update_cfg(event, self, self.get_value()))

    def get_value(self):
        return self.box.get()

def update_cfg(evt, entry_box, new_value):
    global config_file
    config_file.set(entry_box.section, entry_box.option, new_value)
    print "Config file modified. " + entry_box.section + " " + entry_box.option + " " + new_value
</code></pre>
<p>If the value in the <code>entry</code> is <code>05R</code> when I click on the <code>entry</code> and press 6, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 05R</code>; after I press 7, it will print <code>Config file modified. CURRENT_MEASUREMENT_EXAMPLE_HP shunt_resistance 0R56</code> and so on, always with one keypress behind. The only way to live update it after the value has been changed is to press the <code>TAB</code> or <code>arrow</code> buttons. </p>
| 1 | 2016-10-10T11:55:25Z | 39,958,033 | <p>You can use either </p>
<ul>
<li><code>focus-out</code></li>
<li><code>tab</code> or <code>enter</code> Key</li>
<li><code>key-release</code></li>
</ul>
<p>bindings to achieve that. </p>
<p>Also validation functions can help as they have previous and new values available. Please read the <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/entry-validation.html" rel="nofollow">docs</a> for more information on that matter.</p>
<p>It is IMHO the most "pythonic" / "tkinter" way of achieving what is a "check and submit" functionality.</p>
| 3 | 2016-10-10T12:12:17Z | [
"python",
"user-interface",
"tkinter",
"keypress",
"entry"
] |
Order of elements from minidom getElementsByTagName | 39,957,761 | <p>Is the order for returned elements from Mindom <code>getElementsByTagName</code> the same as it is in document for elements in the same hierarchy / level?</p>
<pre><code> images = svg_doc.getElementsByTagName('image')
image_siblings = []
for img in images:
if img.parentNode.getAttribute('layertype') == 'transfer':
if img.nextSibling is not None:
if img.nextSibling.nodeName == 'image':
image_siblings.append(img.nextSibling)
elif img.nextSibling.nextSibling is not None and img.nextSibling.nextSibling.nodeName == 'image':
image_siblings.append(img.nextSibling.nextSibling)
</code></pre>
<p>I need to know if <code>image_siblings</code> will contain the images in the same order, they are placed in the document for the same hierarchy.</p>
<p>I found a similar <a href="https://stackoverflow.com/questions/4954003/order-of-the-elements-returned-using-getelementsbytagname">question</a> for JavaScript, but I'm unsure if this is also true for Python (version 3.5.2) Minidom <code>getElementsByTagName</code>.</p>
| 5 | 2016-10-10T11:57:52Z | 40,001,523 | <p>According to the code (in Python 2.7), the <code>getElementsByName</code> method relays on the <code>_get_elements_by_tagName_helper</code> function, which code is:</p>
<pre><code>def _get_elements_by_tagName_helper(parent, name, rc):
for node in parent.childNodes:
if node.nodeType == Node.ELEMENT_NODE and \
(name == "*" or node.tagName == name):
rc.append(node)
_get_elements_by_tagName_helper(node, name, rc)
return rc
</code></pre>
<p>What this means is that the order in <code>getElementsByTagName</code> is the same as the order you have in <code>childNodes</code>.</p>
<p>But this is true only if the <code>tagName</code> appears only in the same level. Notice the recursive call of <code>_get_elements_by_tagName_helper</code> inside the same function, which means that elements with the same <code>tagName</code> that are placed deeper in the tree will be interleaved with the ones you have in a higher level.</p>
<p>If by <em>document</em> you mean an XML text file or a string, the question then becomes whether or not the parser respects the order when creating the elements in the DOM.
If you use the <code>parse</code> function from <code>xml.dom.minidom</code>, it relies on the <code>pyexpat</code> library, which in turn uses the <code>expat</code> C library.</p>
<p>So, the short answer would be:</p>
<blockquote>
<p>If you have the tagName only present in the same level of hierarchy in the XML DOM, then the order is respected. If you have the same tagName in other nodes deeper in the tree, those elements will be interleaved with the ones of higher level. The respected order is the order of the elements in the minidom document object, which order depends on the parser.</p>
</blockquote>
<p>Look this example:</p>
<pre><code>>>> from xml.dom.minidom import parseString
>>> s = '''<head>
... <tagName myatt="1"/>
... <tagName myatt="2"/>
... <tagName myatt="3"/>
... <otherTag>
... <otherDeeperTag>
... <tagName myatt="3.1"/>
... <tagName myatt="3.2"/>
... <tagName myatt="3.3"/>
... </otherDeeperTag>
... </otherTag>
... <tagName myatt="4"/>
... <tagName myatt="5"/>
... </head>'''
>>> doc = parseString(s)
>>> for e in doc.getElementsByTagName('tagName'):
... print e.getAttribute('myatt')
...
1
2
3
3.1
3.2
3.3
4
5
</code></pre>
<p>It seems the parser respects the ordering structure of the XML string (most parsers respect that order because it is easier to respect it), but I couldn't find any documentation that confirms it. I mean, it could be the (strange) case that the parser, depending on the size of the document, moves from using a list to a hash table to store the elements, and that could break the order. Take into account that the XML standard does not specify order of the elements, so a parser that does not respect order would be compliant too.</p>
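<p>The same check runs unchanged in Python 3 (print as a function, no StringIO needed); note the nested element interleaved in document order:</p>

```python
from xml.dom.minidom import parseString

s = """<head>
  <tagName myatt="1"/>
  <otherTag><tagName myatt="1.1"/></otherTag>
  <tagName myatt="2"/>
</head>"""

doc = parseString(s)
# Depth-first, pre-order traversal: the nested "1.1" comes between "1" and "2"
order = [e.getAttribute('myatt') for e in doc.getElementsByTagName('tagName')]
print(order)  # ['1', '1.1', '2']
```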
| 4 | 2016-10-12T14:41:21Z | [
"python",
"xml",
"python-3.x",
"dom",
"minidom"
] |
Not a JPEG file: starts with 0xc3 0xbf | 39,957,815 | <p>I am trying to decode a JPEG file using <code>tf.image.decode_jpeg</code>, but it says it's not a JPEG file. I don't know what the problem is. Can anyone help me solve this problem?</p>
<p>This is my test code.</p>
<pre><code>import tensorflow as tf

path = "/root/PycharmProjects/mscoco/train2014/COCO_train2014_000000291797.jpg"
with open(path, "r", encoding="latin-1") as f:
    image = f.read()

encoded_jpeg = tf.placeholder(dtype=tf.string)
decoded_jpeg = tf.image.decode_jpeg(encoded_jpeg, channels=3)
sess = tf.InteractiveSession()
sess.run(decoded_jpeg, feed_dict={encoded_jpeg: image})
</code></pre>
<p>And This is the error:</p>
<pre><code>Not a JPEG file: starts with 0xc3 0xbf
Traceback (most recent call last):
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 965, in _do_call
    return fn(*args)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 947, in _run_fn
    status, run_metadata)
  File "/usr/lib64/python3.4/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Invalid JPEG data, size 165886
     [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/PycharmProjects/mytf/models/im2txt/im2txt/data/test.py", line 14, in <module>
    sess.run(decoded_jpeg, feed_dict={encoded_jpeg: image})
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 710, in run
    run_metadata_ptr)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 908, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 958, in _do_run
    target_list, options, run_metadata)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 978, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Invalid JPEG data, size 165886
     [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0)]]

Caused by op 'DecodeJpeg', defined at:
  File "/root/PycharmProjects/mytf/models/im2txt/im2txt/data/test.py", line 10, in <module>
    decoded_jpeg = tf.image.decode_jpeg(encoded_jpeg, channels=3)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/ops/gen_image_ops.py", line 283, in decode_jpeg
    name=name)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
    op_def=op_def)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 2317, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 1239, in __init__
    self._traceback = _extract_stack()
</code></pre>
<p>I cannot </p>
| -1 | 2016-10-10T12:00:44Z | 39,958,615 | <p>You're reading an image file as if it were a text file.</p>
<p>Just change the line:</p>
<pre><code>with open(path, "r", encoding="latin-1") as f:
</code></pre>
<p>with</p>
<pre><code>with open(path, "rb") as f:
</code></pre>
<p>To read the image as a binary ("rb" = Read Binary) file.</p>
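<p>The error message itself confirms this: a real JPEG starts with the bytes <code>0xFF 0xD8</code>, and decoding <code>0xFF</code> as latin-1 and then re-encoding the resulting string as UTF-8 produces exactly the <code>0xC3 0xBF</code> the decoder complained about:</p>

```python
raw = b"\xff\xd8\xff\xe0"        # first bytes of a real JPEG file
text = raw.decode("latin-1")     # what open(path, "r", encoding="latin-1") yields
mangled = text.encode("utf-8")   # 0xFF becomes the two bytes 0xC3 0xBF
print(mangled[:2])               # b'\xc3\xbf'
```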
| 2 | 2016-10-10T12:45:05Z | [
"python",
"tensorflow"
] |
Putting objects of the same class into a dictionary with different keys Python | 39,957,954 | <p>I have these Objects:</p>
<pre><code>class Object(object):
    def __init__(self, name, *parameters):
        self.name = name
        self.parameters = parameters

    def name(self):
        return self.name

    def parameters(self):
        return self.parameters

class Parameter(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value
</code></pre>
<p>My aim is to get all parameters of 1 object in a dictionary, but it needs to end up like this:</p>
<pre><code>{ParameterOne: "ValueOne", ParameterTwo: "ValueTwo"}
</code></pre>
<p>Since I have to be able to give the parameters different names, I can't do it like this:</p>
<pre><code>class Parameter(object):
    def __init__(self, value):
        self.ParameterOne = value
</code></pre>
<p>I tried it like this which obviously didn't work:</p>
<pre><code>data = {}
for parameter in object.parameters:
    data.update(vars(parameter))
</code></pre>
<p>Is there any way to achieve this?</p>
| 1 | 2016-10-10T12:08:27Z | 39,958,421 | <pre><code>data = {parameter: ''.join(['Value ', str(index)])
        for index, parameter in enumerate(object.parameters)}
</code></pre>
<p>This will give you <code>{parameter0: 'Value 0', ...}</code>. If you want to get the English words for the numbers, you'll have to either roll your own function for it or use an external package like <code>inflect</code>.</p>
<p>Just an observation, but most dictionaries would have the string mapping to the object, not the other way around. Is there some reason you're doing it this way?</p>
<p>EDIT:</p>
<p>Rereading your question, are you looking for </p>
<pre><code>data = {parameter: parameter.name for parameter in object.parameters}
</code></pre>
<p>I think you might want to try instead</p>
<pre><code>data = {parameter.name: parameter.value for parameter in object.parameters}
</code></pre>
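<p>Putting the second comprehension together with the classes from the question (a sketch; the container class is renamed <code>Thing</code> here because <code>Object</code>/<code>object</code> are easy to shadow):</p>

```python
class Parameter(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Thing(object):  # stands in for the question's Object class
    def __init__(self, name, *parameters):
        self.name = name
        self.parameters = parameters

obj = Thing("example",
            Parameter("ParameterOne", "ValueOne"),
            Parameter("ParameterTwo", "ValueTwo"))

# Map each parameter's name to its value, as the question requires
data = {p.name: p.value for p in obj.parameters}
print(data)  # {'ParameterOne': 'ValueOne', 'ParameterTwo': 'ValueTwo'}
```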
| 0 | 2016-10-10T12:34:26Z | [
"python",
"dictionary"
] |
ETL data from Bigquery to Redshift using Python | 39,958,028 | <p>I have this script in Python that sets a variable to the result of a query that runs in Google BigQuery (some imported libraries are unused here; I was testing converting JSON to a CSV file):</p>
<pre><code>import httplib2
import datetime
import json
import csv
import sys
from oauth2client.service_account import ServiceAccountCredentials
from bigquery import get_client
#Set DAY - 1
yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
today = datetime.datetime.now()
#Format to Date
yesterday = '{:%Y-%m-%d}'.format(yesterday)
today = '{:%Y-%m-%d}'.format(today)
# BigQuery project id as listed in the Google Developers Console.
project_id = 'project'
# Service account email address as listed in the Google Developers Console.
service_account = 'email@email.com'
scope = 'https://www.googleapis.com/auth/bigquery'
credentials = ServiceAccountCredentials.from_json_keyfile_name('/path/to/file/.json', scope)
http = httplib2.Http()
http = credentials.authorize(http)
client = get_client(project_id, credentials=credentials, service_account=service_account)
#Synchronous query
try:
    _job_id, results = client.query("SELECT * FROM dataset.table WHERE CreatedAt >= PARSE_UTC_USEC('" + yesterday + "') and CreatedAt < PARSE_UTC_USEC('" + today + "') limit 1", timeout=1000)
except Exception as e:
    print e
print results
</code></pre>
<p>The returned result at variable <strong>results</strong> is something like this:</p>
<pre><code>[
{u'Field1': u'Msn', u'Field2': u'00000000000000', u'Field3': u'jsdksf422552d32', u'Field4': u'00000000000000', u'Field5': 1476004363.421,
u'Field5': u'message', u'Field6': u'msn',
u'Field7': None,
u'Field8': u'{"user":{"field":"j23h4sdfsf345","field":"Msn","field":"000000000000000000","field":true,"field":"000000000000000000000","field":"2016-10-09T09:12:43.421Z"}}', u'Field9': 1476004387.016}
]
</code></pre>
<p>I need to load it into Amazon Redshift, but in this format I can't run a COPY from S3 using the .json file that it generates...</p>
<p>Is there a way I can modify this JSON for Redshift to load? Or return a .csv directly? I don't know much about this BigQuery library, or Python at all (this is one of my first scripts).</p>
<p>Thanks a lot!</p>
| 0 | 2016-10-10T12:12:04Z | 40,086,244 | <p>To remove the 'u' before the fields:</p>
<pre><code>results = json.dumps(results)
</code></pre>
<p>Then, to transform the json variable into a csv file, I created:</p>
<pre><code>#Transform the json variable to a csv file
import csv
import json

# round-trip through json so the result rows become plain dicts
results = json.loads(json.dumps(results))

with open("file.csv", "w") as outfile:
    f = csv.writer(outfile, delimiter='|')
    f.writerow(["field", "field", "field", "field", "field",
                "field", "field", "field", "field", "field"])
    # use a distinct loop variable so the results list is not shadowed
    for row in results:
        f.writerow([row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"],
                    row["field"]])
</code></pre>
<p>After this, I was able to load the file to Redshift.</p>
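<p>As a side note, once the rows are plain dicts, <code>csv.DictWriter</code> can write them without listing every field twice. A small sketch (the field names <code>a</code> and <code>b</code> are placeholders, not the real columns):</p>

```python
import csv
import io

# toy rows standing in for the query result; "a" and "b" are placeholder field names
rows = [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["a", "b"], delimiter="|")
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```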
| 0 | 2016-10-17T12:20:06Z | [
"python",
"json",
"csv",
"etl"
] |
issue: python xml append element inside a for loop | 39,958,031 | <p>Thanks for taking the time with this one.
I have an XML file with an element called <code>selectionset</code>. The idea is to take that element and modify some of its sub-element attributes and text values; that part I have done.
What I don't understand is why, when I try to append the new sub-elements to the original parent element (called <code>selectionsets</code>), only the last item of the list <code>inplist</code> ends up being pushed.</p>
<pre><code> import xml.etree.ElementTree as etree
from xml.etree.ElementTree import *
from xml.etree.ElementTree import ElementTree
tree=ElementTree()
tree.parse('STRUCTURAL.xml')
root = tree.getroot()
col=tree.find('selectionsets/selectionset')
#find the value needed
val=tree.findtext('selectionsets/selectionset/findspec/conditions/condition/value/data')
setname=col.attrib['name']
listnames=val + " 6"
inplist=["D","E","F","G","H"]
entry=3
catcher=[]
ss=root.find('selectionsets')
outxml=ss
for i in range(len(inplist)):
str(val)
col.set('name',(setname +" "+ inplist[i]))
col.find('findspec/conditions/condition/value/data').text=str(inplist[i]+val[1:3])
#print (etree.tostring(col)) #everything working well til this point
timper=col.find('selectionset')
root[0].append(col)
# new=etree.SubElement(outxml,timper)
#you need to create a tree with element tree before creating the xml file
itree=etree.ElementTree(outxml)
itree.write('Selection Sets.xml')
print (etree.tostring(outxml))
# print (Test_file.selectionset())
#Initial xml
<?xml version="1.0" encoding="UTF-8" ?>
<exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://download.autodesk.com/us/navisworks/schemas/nw-exchange-12.0.xsd" units="ft" filename="STRUCTURAL.nwc" filepath="C:\Users\Ricardo\Desktop\Comun\Taller 3">
<selectionsets>
<selectionset name="Column Location" guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8">
<findspec mode="all" disjoint="0">
<conditions>
<condition test="contains" flags="10">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">C-A </data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
</selectionsets>
</exchange>
#----Current Output
<selectionsets>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
<selectionset guid="565f5345-de06-4f5b-aa0f-1ae751c98ea8" name="Column Location H">
<findspec disjoint="0" mode="all">
<conditions>
<condition flags="10" test="contains">
<category>
<name internal="LcRevitData_Element">Element</name>
</category>
<property>
<name internal="lcldrevit_parameter_-1002563">Column Location Mark</name>
</property>
<value>
<data type="wstring">H-A</data>
</value>
</condition>
</conditions>
<locator>/</locator>
</findspec>
</selectionset>
</selectionsets>
</code></pre>
| 0 | 2016-10-10T12:12:10Z | 39,961,260 | <p>Here's what I've been able to put together and it looks like it'll do what you're looking for. Here are the main differences: (1) This will iterate over multiple selectionset items (if you end up with more than one), (2) It creates a deepcopy of the element before modifying the values (I think you were always modifying the original "col"), (3) It appends the new selectionset to the selectionsets tag rather than the root.</p>
<p>Here's the <a href="https://docs.python.org/3/library/copy.html" rel="nofollow">deepcopy documentation</a></p>
<pre><code>import xml.etree.ElementTree as etree
import copy
tree=etree.ElementTree()
tree.parse('test.xml')
root = tree.getroot()
inplist=["D","E","F","G","H"]
for selectionset in tree.findall('selectionsets/selectionset'):
for i in inplist:
col = copy.deepcopy(selectionset)
col.set('name', '%s %s' % (col.attrib['name'], i))
data = col.find('findspec/conditions/condition/value/data')
data.text = '%s%s' % (i, data.text[1:3])
root.find('selectionsets').append(col)
itree = etree.ElementTree(root)
itree.write('Selection Sets.xml')
</code></pre>
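<p>To see why the original loop produced several copies of the last state, here is a minimal sketch without any files: appending the same <code>Element</code> object stores references, so every entry reflects the final mutation, while <code>deepcopy</code> yields independent elements:</p>

```python
import copy
import xml.etree.ElementTree as etree

# appending the same object twice: both children are the very same element
root = etree.Element("sets")
item = etree.Element("item")
for name in ("A", "B"):
    item.set("name", name)
    root.append(item)
print([child.get("name") for child in root])   # ['B', 'B']

# deepcopy before mutating: each child keeps its own attributes
root2 = etree.Element("sets")
for name in ("A", "B"):
    clone = copy.deepcopy(item)
    clone.set("name", name)
    root2.append(clone)
print([child.get("name") for child in root2])  # ['A', 'B']
```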
| 0 | 2016-10-10T15:03:12Z | [
"python",
"xml"
] |
dask / pandas categorical transformation differences | 39,958,065 | <p>I am managing larger than memory csv files of mostly categorical data. Initially I used to create a large csv file, then read it via Pandas read_csv, convert to categorical and save into hdf5. Once into categorical format, it nicely fits in memory.</p>
<p>Files are growing and I moved to Dask. Same process though.</p>
<p>However, in empty fields, Pandas seems to use np.nan and the category is not included in the cat.categories listing.</p>
<p>With Dask, empty values are filled with NaN, the NaN is included as a separate category, and when the result is saved into HDF5 I get a future-compatibility warning.</p>
<p>Is this a bug, or am I missing a step? The behaviour seems to differ between pandas and Dask.</p>
<p>Thanks</p>
<p>JC</p>
| 1 | 2016-10-10T12:13:59Z | 40,027,860 | <p>This is solved in dask ver 0.11.1</p>
<p>See <a href="https://github.com/dask/dask/pull/1578" rel="nofollow">https://github.com/dask/dask/pull/1578</a></p>
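<p>For reference, the pandas behaviour the question describes can be checked in a couple of lines (sketch; requires pandas):</p>

```python
import pandas as pd

s = pd.Series(["a", "b", None], dtype="category")
print(list(s.cat.categories))  # ['a', 'b'] -- missing values are NaN, not a category
print(int(s.isna().sum()))     # 1
```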
| 0 | 2016-10-13T17:57:29Z | [
"python",
"csv",
"pandas",
"dask"
] |
Add elements in a matrix in python | 39,958,095 | <p>I have this matrix:</p>
<pre><code>mat = [[ 0 for x in range(row)] for y in range(column)]
</code></pre>
<p>I tried to add elements to the matrix:</p>
<pre><code>for x in range(row): # row is 2
for y in range(column): # column is 3
mat[x][y] = int(input("number: "))
</code></pre>
<p>but the shell returns this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Fr\Desktop\pr.py", line 13, in <module>
mat[x][y] = 12
IndexError: list assignment index out of range
</code></pre>
<p>how do I add elements to a matrix?</p>
| 2 | 2016-10-10T12:15:33Z | 39,958,153 | <p>The inner list should be based on columns:</p>
<pre><code>mat = [[ 0 for x in range(column)] for y in range(row)]
</code></pre>
<p>Here is an example:</p>
<pre><code>In [73]: row = 3
In [74]: column = 4
In [78]: mat = [[ 0 for x in range(column)] for y in range(row)]
In [79]:
In [79]: for x in range(row): # row is 2
for y in range(column): # column is 3
mat[x][y] = 5
....:
In [80]: mat
Out[80]: [[5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5]]
</code></pre>
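<p>To make the original error concrete: with <code>row = 2</code> and <code>column = 3</code>, the question's comprehension builds 3 inner lists of length 2, so indexing <code>mat[x][y]</code> with <code>y</code> up to 2 runs off the end of an inner list (sketch):</p>

```python
row, column = 2, 3
mat = [[0 for x in range(row)] for y in range(column)]  # 3 rows, each of length 2
print(len(mat), len(mat[0]))  # 3 2

try:
    mat[0][2] = 12  # y == 2 is out of range for an inner list of length 2
except IndexError as exc:
    print("IndexError:", exc)
```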
| 4 | 2016-10-10T12:19:21Z | [
"python",
"python-3.x",
"matrix"
] |
Add elements in a matrix in python | 39,958,095 | <p>I have this matrix:</p>
<pre><code>mat = [[ 0 for x in range(row)] for y in range(column)]
</code></pre>
<p>I tried to add elements to the matrix:</p>
<pre><code>for x in range(row): # row is 2
for y in range(column): # column is 3
mat[x][y] = int(input("number: "))
</code></pre>
<p>but the shell returns this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Fr\Desktop\pr.py", line 13, in <module>
mat[x][y] = 12
IndexError: list assignment index out of range
</code></pre>
<p>how do I add elements to a matrix?</p>
| 2 | 2016-10-10T12:15:33Z | 39,958,283 | <p>I think it should be:</p>
<pre><code>>>> for x in range(column):
...     for y in range(row):
...         mat[x][y] = int(input("number: "))
...
1
2
3
4
5
6
>>> mat
[[1, 2], [3, 4], [5, 6]]
</code></pre>
| 1 | 2016-10-10T12:27:10Z | [
"python",
"python-3.x",
"matrix"
] |
how to complex manage shell processes with asyncio? | 39,958,110 | <p>I want to track the reboot process of a daemon with Python's asyncio module. So I need to run the shell command <code>tail -f -n 0 /var/log/daemon.log</code> and analyze its output while, let's say, <code>service daemon restart</code> executes in the background. The daemon keeps writing to the log after the restart command finishes, reporting its internal checks. The tracking process reads this check info and reports whether the reboot was successful, based on its internal logic.</p>
<pre><code>import asyncio
from asyncio.subprocess import PIPE, STDOUT
async def track():
output = []
process = await asyncio.create_subprocess_shell(
'tail -f -n0 ~/daemon.log',
stdin=PIPE, stdout=PIPE, stderr=STDOUT
)
while True:
line = await process.stdout.readline()
if line.decode() == 'reboot starts\n':
output.append(line)
break
while True:
line = await process.stdout.readline()
if line.decode() == '1st check completed\n':
output.append(line)
break
return output
async def reboot():
lines = [
'...',
'...',
'reboot starts',
'...',
'1st check completed',
'...',
]
p = await asyncio.create_subprocess_shell(
(
'echo "rebooting"; '
'for line in {}; '
'do echo $line >> ~/daemon.log; sleep 1; '
'done; '
'echo "rebooted";'
).format(' '.join('"{}"'.format(l) for l in lines)),
stdin=PIPE, stdout=PIPE, stderr=STDOUT
)
return (await p.communicate())[0].splitlines()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
asyncio.ensure_future(track()),
asyncio.ensure_future(reboot())
))
loop.close()
</code></pre>
<p>This code is the only way I've found to run two coroutines in parallel. But how do I start <code>track()</code> strictly before <code>reboot()</code> so that no possible log output is missed? And how do I retrieve the return values of both coroutines?</p>
| 2 | 2016-10-10T12:16:14Z | 39,962,293 | <blockquote>
<p>But how to run track() strictly before reboot to not miss any possible output in log?</p>
</blockquote>
<p>You could <code>await</code> the first subprocess creation before running the second one. </p>
<blockquote>
<p>And how to retrieve return values of both coroutines?</p>
</blockquote>
<p><a href="https://docs.python.org/3.5/library/asyncio-task.html#asyncio.gather" rel="nofollow"><code>asyncio.gather</code></a> returns the aggregated results.</p>
<p>Example:</p>
<pre><code>async def main():
process_a = await asyncio.create_subprocess_shell([...])
process_b = await asyncio.create_subprocess_shell([...])
return await asyncio.gather(monitor_a(process_a), monitor_b(process_b))
loop = asyncio.get_event_loop()
result_a, result_b = loop.run_until_complete(main())
</code></pre>
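<p>A runnable sketch of that shape, using trivial coroutines in place of the subprocesses (<code>asyncio.run</code> needs Python 3.7+; on 3.5/3.6 use <code>loop.run_until_complete</code> as above):</p>

```python
import asyncio

async def track():
    await asyncio.sleep(0)      # stand-in for reading the tail output
    return "reboot ok"

async def reboot():
    await asyncio.sleep(0)      # stand-in for the restart command
    return "rebooted"

async def main():
    # track() is listed first, so the monitor coroutine starts before the reboot
    return await asyncio.gather(track(), reboot())

result_track, result_reboot = asyncio.run(main())
print(result_track, result_reboot)
```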
| 1 | 2016-10-10T16:03:50Z | [
"python",
"concurrency",
"python-asyncio"
] |
Django - choice set, but no drop down in form | 39,958,124 | <p>I would like to have a choice set in my model for template and validation.</p>
<p>However, in the ModelForm for that model I want to have just an integer field.</p>
<p>How can I change the widget so that I just get a plain input field?</p>
<p>I tried to change the widget to forms.Model, but that did not seem to work. I get an error:</p>
<pre><code>'IntegerField' object has no attribute 'attrs'
</code></pre>
<p>forms.py:</p>
<pre><code>class KombiPublikationForm(forms.ModelForm):
class Meta:
model = KombiPublikation
#fields = []
exclude = ['pub_sprache']
widgets = {
'monat': forms.IntegerField(), # does not work
}
</code></pre>
<p>model.py:</p>
<pre><code>MONTH = (
(1, 'Januar'),
(2, 'Februar'),
(3, 'März'),
(4, 'April'),
(5, 'Mai'),
(6, 'Juni'),
(7, 'Juli'),
(8, 'August'),
(9, 'September'),
(10, 'Oktober'),
(11, 'November'),
(12, 'Dezember'),
)
class KombiPublikation(models.Model):
[...]
monat = models.IntegerField(choices=MONTH)
</code></pre>
<p>Thanks!</p>
| 0 | 2016-10-10T12:17:12Z | 40,045,044 | <p>I found the solution - <code>forms.NumberInput()</code> is the right widget: unlike <code>forms.IntegerField</code>, it is a widget, so it can go in the <code>widgets</code> mapping.</p>
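<p>For completeness, a sketch of what that presumably looks like in the form from the question (untested here; field and model names taken from the question):</p>

```python
from django import forms

class KombiPublikationForm(forms.ModelForm):
    class Meta:
        model = KombiPublikation
        exclude = ['pub_sprache']
        widgets = {
            # render the choices field as a plain numeric input instead of a select
            'monat': forms.NumberInput(),
        }
```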
| 0 | 2016-10-14T13:58:55Z | [
"python",
"django",
"django-forms",
"django-crispy-forms"
] |
Calculate multiple modelField from one Formfield | 39,958,232 | <p>I need to derive multiple model field values from one form field. Where should I do this? In the <code>clean_&lt;field&gt;</code> methods? By mutating <code>cleaned_data</code>? In the form's <code>__init__</code>? In <code>model.save</code> or <code>form.save</code>?</p>
<p>model:</p>
<pre><code>def normalize_name(name):
# some code
return name
class MyModel(models.Model):
name = models.CharField(max_length=250)
normalize_name = models.CharField(max_length=250, unique=True)
</code></pre>
<p>form:</p>
<pre><code>class MyForm(forms.ModelForm):
class Meta:
model = MyModel
fields = ('name',) # or normalize_name? or both?
</code></pre>
| 0 | 2016-10-10T12:23:58Z | 39,986,169 | <p>According to your comment, I would do the stuff in the <code>save()</code> function. </p>
<p><code>__init__(self)</code> is called before entering the data into the form so it can't do anything with attributes. </p>
<p>Theoretically, <code>clean_name</code> could work (in my opinion), but it should be used for validation.</p>
<p>The <code>save()</code> method is called after the <code>name</code> attribute is validated so you can get the <code>name</code> and do some stuff with it (normalize_name(name))</p>
<pre><code>def save(self, *args, **kwargs):
    name = self.cleaned_data['name']
    self.instance.normalize_name = normalize_name(name)  # the model field from the question
    return super(YourFormClass, self).save(*args, **kwargs)
</code></pre>
<p>If you mean that you generate <code>normalized_name</code> using <code>name</code> and the form is valid when <code>normalized_name</code> meets some conditions, do it inside clean_name(self).</p>
<pre><code>def clean_name(self):
    name = self.cleaned_data['name']
    normalized_name = normalize_name(name)
    if not validate(normalized_name):
        raise ValidationError("Something is wrong")
    return name
</code></pre>
| 0 | 2016-10-11T20:28:37Z | [
"python",
"django",
"modelform"
] |
Python: how make class, inherited from list, with access to first element? | 39,958,265 | <p>In Python I would like to make a class <em>MyList</em> that inherits from <em>list</em>. In addition, I would like it to have an attribute <em>first</em>, equal to the first element of the list.
For example, for this code:</p>
<pre><code>l = MyList([0, 1, 2])
print (l.first)
l.first = 3
print (l)
</code></pre>
<p>I would like to see</p>
<pre><code>0
[3, 1, 2]
</code></pre>
<p>I write</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
self.first = input_list[0]
my_list = MyList([0, 1, 2])
print (my_list)
print (my_list.first)
</code></pre>
<p>and get</p>
<pre><code>[]
0
</code></pre>
<p>Why is <code>my_list</code> empty?</p>
| 1 | 2016-10-10T12:26:01Z | 39,958,319 | <p>You're not calling list's <code>__init__</code> in your <code>__init__</code>.</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
super(MyList, self).__init__(input_list)
self.first = input_list[0]
</code></pre>
<p>Also, rather than affecting first at init, you should use a <a href="https://docs.python.org/2.7/library/functions.html#property" rel="nofollow">property</a> so as to return the first value even when the values of the list have been changed after initialization.</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
super(MyList, self).__init__(input_list)
@property
def first(self):
# TODO: catch empty list exception
return self[0]
</code></pre>
<p>in which case you wouldn't even have to override self, so it simplifies as </p>
<pre><code>class MyList(list):
@property
def first(self):
# TODO: catch empty list exception
return self[0]
</code></pre>
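<p>Filling in that TODO — a sketch that makes the empty-list case explicit:</p>

```python
class MyList(list):
    @property
    def first(self):
        if not self:
            raise IndexError("MyList is empty")
        return self[0]

ml = MyList([0, 1, 2])
print(ml.first)  # 0
```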
| 2 | 2016-10-10T12:28:39Z | [
"python",
"class"
] |
Python: how make class, inherited from list, with access to first element? | 39,958,265 | <p>In Python I would like to make a class <em>MyList</em> that inherits from <em>list</em>. In addition, I would like it to have an attribute <em>first</em>, equal to the first element of the list.
For example, for this code:</p>
<pre><code>l = MyList([0, 1, 2])
print (l.first)
l.first = 3
print (l)
</code></pre>
<p>I would like to see</p>
<pre><code>0
[3, 1, 2]
</code></pre>
<p>I write</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
self.first = input_list[0]
my_list = MyList([0, 1, 2])
print (my_list)
print (my_list.first)
</code></pre>
<p>and get</p>
<pre><code>[]
0
</code></pre>
<p>Why is <code>my_list</code> empty?</p>
| 1 | 2016-10-10T12:26:01Z | 39,958,371 | <p>Define <code>first</code> as a <a href="https://docs.python.org/2.7/library/functions.html#property" rel="nofollow"><code>property</code></a> rather than a simple attribute.</p>
<pre><code>class MyList(list):
@property
def first(self):
return self[0]
@first.setter
def first(self, val):
self[0] = val
</code></pre>
<p><strong>Demo:</strong></p>
<pre><code>>>> lst = MyList([0, 1, 2])
>>> lst.first
0
>>> lst
[0, 1, 2]
>>> lst.first = 100
>>> lst
[100, 1, 2]
</code></pre>
| 1 | 2016-10-10T12:31:15Z | [
"python",
"class"
] |
Python: how make class, inherited from list, with access to first element? | 39,958,265 | <p>In Python I would like to make a class <em>MyList</em> that inherits from <em>list</em>. In addition, I would like it to have an attribute <em>first</em>, equal to the first element of the list.
For example, for this code:</p>
<pre><code>l = MyList([0, 1, 2])
print (l.first)
l.first = 3
print (l)
</code></pre>
<p>I would like to see</p>
<pre><code>0
[3, 1, 2]
</code></pre>
<p>I write</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
self.first = input_list[0]
my_list = MyList([0, 1, 2])
print (my_list)
print (my_list.first)
</code></pre>
<p>and get</p>
<pre><code>[]
0
</code></pre>
<p>Why is <code>my_list</code> empty?</p>
| 1 | 2016-10-10T12:26:01Z | 39,958,414 | <p>You need to add a call to the superclass's <code>__init__</code>, and for <code>first</code> to work as expected I'd recommend a <code>property</code>:</p>
<pre><code>class MyList(list):
def __init__(self, input_list):
super(MyList, self).__init__(input_list)
self.first = input_list[0]
@property
def first(self):
return self[0]
@first.setter
def first(self, value):
self[0] = value
</code></pre>
<p>Now calling:</p>
<pre><code>my_list = MyList([0, 1, 2])
print (my_list)
print (my_list.first)
my_list.first = 12
print (my_list)
print (my_list.first)
</code></pre>
<p>gives:</p>
<pre><code>[0, 1, 2]
0
[12, 1, 2]
12
</code></pre>
| 0 | 2016-10-10T12:33:52Z | [
"python",
"class"
] |
Creating a python dictionary in python from tab delimited text file with headers as keywords | 39,958,305 | <p>I am rather new to Python and am having trouble trying to create a function which reads in tab-delimited text files and creates a dictionary from the data. I am mostly dealing with text files of the following format, with a number of tab-delimited numerical data columns and corresponding headers for each column:</p>
<pre><code>Time_(s) Mass_Flow_(kg/s) T_in_pipe(C) T_in_water(C) T_out_pipe(C) T_out_water(C)
0 1.2450 16.9029 16.8256 16.6234 16.6204
2.8700 1.2450 16.8873 16.8094 16.6237 19.6507
5.6600 1.2450 16.8889 16.8229 19.1406 29.1320
8.7800 1.2450 16.8875 16.8236 24.1325 34.9077
11.6200 1.2450 16.8794 16.8040 28.3927 38.5443
16.0600 1.2450 16.8615 16.7942 33.7205 42.4149
18.8900 1.2450 16.8512 16.7938 36.2797 44.1221
23.0200 1.2450 16.8319 16.7903 39.2102 46.1857
25.7600 1.2450 16.8380 16.7952 40.7243 47.2657
</code></pre>
<p>Preferably, I want to write a code which stores each column of data as an array but also to store headings of each column into a seperate array so that I can use them as keywords in a dictionary. For example, if I lookup the dictionary key "Mass_Flow_(kg/s)", an array would be returned of all the values in the mass flow rate column (excluding the header). </p>
<p>So far I have tried using numpy.loadtxt to create such numerical arrays from the columns but I have not been successful in extracting the header data and thus have had to skip this line. The following code will produce the dictionary I want but I would rather a more flexible code which doesn't require me to manually name each of the columns despite the names already being contained within the .txt file. </p>
<pre><code>import numpy as np
time, m_flow, Tin_pipe, Tin_water, Tout_pipe, Tout_water = np.loadtxt("pipeData.txt",skiprows=1,unpack=True)
#Assign each column in file to respective arrays
my_dict = {"Time":time, "Mass flow rate":m_flow, "Tin_pipe":Tin_pipe, "Tin_water":Tin_water, "Tout_pipe":Tout_pipe, "Tout_water":Tout_water}
#Line arrays to keywords and merge into a dictionary
</code></pre>
<p>I have tried not skipping the first row, but then loadtxt fails with:</p>
<pre><code>ValueError: could not convert string to float: Time_(s)
</code></pre>
<p>Therefore I think I need to use another module if I want to read both the string data and numerical values. If anybody has any suggestions of how I may go about doing this or knows of a better module for doing this it would be greatly appreciated. </p>
<p>Keith</p>
| 0 | 2016-10-10T12:28:03Z | 39,958,396 | <p>Take a look at the <a href="http://pandas.pydata.org/" rel="nofollow">Pandas module</a></p>
<pre><code># This module kicks ass
import pandas as pd
pipe_data = pd.read_csv('pipeData.txt', sep='\t')
print pipe_data.columns # prints Time_(s), Mass_Flow_(kg/s), ...
print pipe_data['Time_(s)'] # print the Time_(s) column
</code></pre>
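<p>To get exactly the dictionary-of-columns the question asks for, the DataFrame converts directly — a sketch with inline data instead of the file (requires pandas):</p>

```python
import io
import pandas as pd

# shortened inline stand-in for pipeData.txt
text = "Time_(s)\tMass_Flow_(kg/s)\n0\t1.2450\n2.8700\t1.2450\n"
pipe_data = pd.read_csv(io.StringIO(text), sep="\t")
my_dict = pipe_data.to_dict("list")
print(my_dict["Mass_Flow_(kg/s)"])  # [1.245, 1.245]
```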
| 1 | 2016-10-10T12:32:30Z | [
"python",
"arrays",
"numpy",
"dictionary",
"text"
] |
Creating a python dictionary in python from tab delimited text file with headers as keywords | 39,958,305 | <p>I am rather new to Python and am having trouble trying to create a function which reads in tab-delimited text files and creates a dictionary from the data. I am mostly dealing with text files of the following format, with a number of tab-delimited numerical data columns and corresponding headers for each column:</p>
<pre><code>Time_(s) Mass_Flow_(kg/s) T_in_pipe(C) T_in_water(C) T_out_pipe(C) T_out_water(C)
0 1.2450 16.9029 16.8256 16.6234 16.6204
2.8700 1.2450 16.8873 16.8094 16.6237 19.6507
5.6600 1.2450 16.8889 16.8229 19.1406 29.1320
8.7800 1.2450 16.8875 16.8236 24.1325 34.9077
11.6200 1.2450 16.8794 16.8040 28.3927 38.5443
16.0600 1.2450 16.8615 16.7942 33.7205 42.4149
18.8900 1.2450 16.8512 16.7938 36.2797 44.1221
23.0200 1.2450 16.8319 16.7903 39.2102 46.1857
25.7600 1.2450 16.8380 16.7952 40.7243 47.2657
</code></pre>
<p>Preferably, I want to write a code which stores each column of data as an array but also to store headings of each column into a seperate array so that I can use them as keywords in a dictionary. For example, if I lookup the dictionary key "Mass_Flow_(kg/s)", an array would be returned of all the values in the mass flow rate column (excluding the header). </p>
<p>So far I have tried using numpy.loadtxt to create such numerical arrays from the columns but I have not been successful in extracting the header data and thus have had to skip this line. The following code will produce the dictionary I want but I would rather a more flexible code which doesn't require me to manually name each of the columns despite the names already being contained within the .txt file. </p>
<pre><code>import numpy as np
time, m_flow, Tin_pipe, Tin_water, Tout_pipe, Tout_water = np.loadtxt("pipeData.txt",skiprows=1,unpack=True)
#Assign each column in file to respective arrays
my_dict = {"Time":time, "Mass flow rate":m_flow, "Tin_pipe":Tin_pipe, "Tin_water":Tin_water, "Tout_pipe":Tout_pipe, "Tout_water":Tout_water}
#Line arrays to keywords and merge into a dictionary
</code></pre>
<p>I have tried not skipping the first row, but then loadtxt fails with:</p>
<pre><code>ValueError: could not convert string to float: Time_(s)
</code></pre>
<p>Therefore I think I need to use another module if I want to read both the string data and numerical values. If anybody has any suggestions of how I may go about doing this or knows of a better module for doing this it would be greatly appreciated. </p>
<p>Keith</p>
| 0 | 2016-10-10T12:28:03Z | 40,012,125 | <p>An alternative might be to use the <strong>csv</strong> module from Python's standard library itself.</p>
<pre><code>import csv
with open('temp.txt') as csvfile:
csvrows = csv.reader(csvfile, delimiter='\t')
fieldnames=next(csvrows)
print (fieldnames)
for row in csvrows:
print (row)
</code></pre>
<p>When I picked up the data you supplied and replaced multiple blanks between columns with single tabs these were the results.</p>
<pre><code>['Time_(s)', 'Mass_Flow_(kg/s)', 'T_in_pipe(C)', 'T_in_water(C)', 'T_out_pipe(C)', 'T_out_water(C)']
['0', '1.2450', '16.9029', '16.8256', '16.6234', '16.6204']
[' 2.8700', '1.2450', '16.8873', '16.8094', '16.6237', '19.6507']
[' 5.6600', '1.2450', '16.8889', '16.8229', '19.1406', '29.1320']
[' 8.7800', '1.2450', '16.8875', '16.8236', '24.1325', '34.9077']
[' 11.6200', '1.2450', '16.8794', '16.8040', '28.3927', '38.5443']
[' 16.0600', '1.2450', '16.8615', '16.7942', '33.7205', '42.4149']
[' 18.8900', '1.2450', '16.8512', '16.7938', '36.2797', '44.1221']
[' 23.0200', '1.2450', '16.8319', '16.7903', '39.2102', '46.1857']
[' 25.7600', '1.2450', '16.8380', '16.7952', '40.7243', '47.2657']
</code></pre>
<p>The main problem might be that the leading blanks remain in the first column.</p>
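<p>Since the sample columns are separated by runs of spaces rather than single tabs, plain <code>str.split()</code> (which splits on any whitespace) sidesteps both problems — a self-contained sketch with a shortened copy of the data:</p>

```python
import io

# inline stand-in for the file; the real code would use open('pipeData.txt')
sample = io.StringIO(
    "Time_(s)  Mass_Flow_(kg/s)\n"
    "0         1.2450\n"
    "2.8700    1.2450\n"
)
header = sample.readline().split()
table = {name: [] for name in header}
for line in sample:
    for name, value in zip(header, line.split()):
        table[name].append(float(value))
print(table["Time_(s)"])          # [0.0, 2.87]
print(table["Mass_Flow_(kg/s)"])  # [1.245, 1.245]
```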
| 0 | 2016-10-13T04:07:21Z | [
"python",
"arrays",
"numpy",
"dictionary",
"text"
] |
Numpy dtype invalid index | 39,958,473 | <p>I'm trying to load this CSV data file:</p>
<pre><code>ACCEPT,organizer@t.net,t,p1@t.net,0,UK,3600000,3,1475917200000,1475920800000,MON,9,0,0,0
</code></pre>
<p>in the following way:</p>
<pre><code>dataset = genfromtxt('./training_set.csv', delimiter=',', dtype='a20, a20, a20, a8, i8, a20, i8, i8, i8, i8, a3, i8, i8, i8, i8')
print(dataset)
target = [x[0] for x in dataset]
train = [x[1:] for x in dataset]
</code></pre>
<p>On the last line above I get this error:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-66-5d58edf06039> in <module>()
4 print(dataset)
5 target = [x[0] for x in dataset]
----> 6 train = [x[1:] for x in dataset]
7
8 #rf = RandomForestClassifier(n_estimators=100)
<ipython-input-66-5d58edf06039> in <listcomp>(.0)
4 print(dataset)
5 target = [x[0] for x in dataset]
----> 6 train = [x[1:] for x in dataset]
7
8 #rf = RandomForestClassifier(n_estimators=100)
IndexError: invalid index
</code></pre>
<p>How to handle this?</p>
| -1 | 2016-10-10T12:37:21Z | 39,959,279 | <pre><code>In [42]: dataset = np.genfromtxt('./np_inf.txt', delimiter=',', dtype='a20, a20, a20, a8, i8, a20, i8, i8, i8, i8, a3, i8, i8, i8, i8')
In [43]: [x[0] for x in dataset]
Out[43]: ['ACCEPT', 'ACCEPT', 'ACCEPT']
</code></pre>
<p>The issue is that the entries of the <code>dataset</code> are of not very useful type <code>np.void</code>. It does not allow slicing, apparently, but you can iterate over it:</p>
<pre><code>In [56]: type(dataset[0])
Out[56]: numpy.void
In [57]: len(dataset[0])
Out[57]: 15
In [58]: z = [[y for j, y in enumerate(x) if j > 0] for x in dataset]
In [59]: z[0]
Out[59]:
['organizer@t.net',
't',
'p1@t.net',
0,
'UK',
3600000,
3,
1475917200000,
1475920800000,
'MON',
9,
0,
0,
0]
</code></pre>
<p>However you're probably better off converting the array to a structured dtype instead of using lists.</p>
<p>Better still, consider using pandas and do <code>pd.read_csv</code>.</p>
| 1 | 2016-10-10T13:19:44Z | [
"python",
"numpy"
] |
Numpy dtype invalid index | 39,958,473 | <p>I'm trying to load this CSV data file:</p>
<pre><code>ACCEPT,organizer@t.net,t,p1@t.net,0,UK,3600000,3,1475917200000,1475920800000,MON,9,0,0,0
</code></pre>
<p>in the following way:</p>
<pre><code>dataset = genfromtxt('./training_set.csv', delimiter=',', dtype='a20, a20, a20, a8, i8, a20, i8, i8, i8, i8, a3, i8, i8, i8, i8')
print(dataset)
target = [x[0] for x in dataset]
train = [x[1:] for x in dataset]
</code></pre>
<p>On the last line above I get this error:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-66-5d58edf06039> in <module>()
4 print(dataset)
5 target = [x[0] for x in dataset]
----> 6 train = [x[1:] for x in dataset]
7
8 #rf = RandomForestClassifier(n_estimators=100)
<ipython-input-66-5d58edf06039> in <listcomp>(.0)
4 print(dataset)
5 target = [x[0] for x in dataset]
----> 6 train = [x[1:] for x in dataset]
7
8 #rf = RandomForestClassifier(n_estimators=100)
IndexError: invalid index
</code></pre>
<p>How to handle this?</p>
| -1 | 2016-10-10T12:37:21Z | 39,963,162 | <p>With that <code>dtype</code> you have created a structured array - it is 1d with a compound dtype.</p>
<p>I have a sample structured array from another problem:</p>
<pre><code>In [26]: data
Out[26]:
array([(b'1Q11', 252.0, 0.0166), (b'2Q11', 212.4, 0.0122),
(b'3Q11', 425.9, 0.0286), (b'4Q11', 522.3, 0.0322),
(b'1Q12', 263.2, 0.0185), (b'2Q12', 238.6, 0.0131),
...
(b'1Q14', 264.5, 0.0179), (b'2Q14', 211.2, 0.0116)],
dtype=[('Qtrs', 'S4'), ('Y', '<f8'), ('X', '<f8')])
</code></pre>
<p>One record is: </p>
<pre><code>In [27]: data[0]
Out[27]: (b'1Q11', 252.0, 0.0166)
</code></pre>
<p>While I can access elements within that record by number, it does not accept a slice:</p>
<pre><code>In [36]: data[0][1]
Out[36]: 252.0
In [37]: data[0][1:]
....
IndexError: invalid index
</code></pre>
<p>The preferred way of accessing elements within a structured record is by field name:</p>
<pre><code>In [38]: data[0]['X']
Out[38]: 0.0166
</code></pre>
<p>Such a name allows me to access that field across all records:</p>
<pre><code>In [39]: data['X']
Out[39]:
array([ 0.0166, 0.0122, 0.0286, ... 0.0116])
</code></pre>
<p>Fetching multiple fields requires a list of field names (and is more wordy than 2d slicing):</p>
<pre><code>In [42]: data.dtype.names[1:]
Out[42]: ('Y', 'X')
In [44]: data[list(data.dtype.names[1:])]
Out[44]:
array([(252.0, 0.0166), (212.4, 0.0122),... (211.2, 0.0116)],
dtype=[('Y', '<f8'), ('X', '<f8')])
</code></pre>
<p>===============</p>
<p>With your sample line (replicated 3 times) I can load:</p>
<pre><code>In [53]: dataset=np.genfromtxt(txt,dtype=None,delimiter=',')
In [54]: dataset
Out[54]:
array([ (b'ACCEPT', b'organizer@t.net', b't', b'p1@t.net', 0, b'UK', 3600000, 3, 1475917200000, 1475920800000, b'MON', 9, 0, 0, 0),
(b'ACCEPT', b'organizer@t.net', b't', b'p1@t.net', 0, b'UK', 3600000, 3, 1475917200000, 1475920800000, b'MON', 9, 0, 0, 0),
(b'ACCEPT', b'organizer@t.net', b't', b'p1@t.net', 0, b'UK', 3600000, 3, 1475917200000, 1475920800000, b'MON', 9, 0, 0, 0)],
dtype=[('f0', 'S6'), ('f1', 'S15'), ('f2', 'S1'), ('f3', 'S8'), ('f4', '<i4'), ('f5', 'S2'), ('f6', '<i4'), ('f7', '<i4'), ('f8', '<i8'), ('f9', '<i8'), ('f10', 'S3'), ('f11', '<i4'), ('f12', '<i4'), ('f13', '<i4'), ('f14', '<i4')])
In [55]:
</code></pre>
<p><code>dtype=None</code> produces something similar to your explicit <code>dtype</code>;</p>
<p>To get your desired output (as arrays, not lists):</p>
<pre><code>target = dataset['f0']
names=dataset.dtype.names[1:]
train = dataset[list(names)]
</code></pre>
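<p>A self-contained version of that pattern, using a tiny made-up structured array in place of the csv data:</p>

```python
import numpy as np

# Two records: the first field is the target label, the rest are features.
data = np.array([(b'ACCEPT', 1, 2.5), (b'REJECT', 3, 4.5)],
                dtype=[('f0', 'S6'), ('f1', '<i4'), ('f2', '<f8')])

target = data['f0']               # one field across all records
names = data.dtype.names[1:]      # ('f1', 'f2')
train = data[list(names)]         # the remaining fields, still structured

print(names)                      # ('f1', 'f2')
print(target.tolist())            # [b'ACCEPT', b'REJECT']
```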
<p>=====================</p>
<p>You could also refine the dtype to make the task simpler. Define 2 fields, with the 2nd containing most of the csv columns. <code>genfromtxt</code> handles this sort of dtype nesting - just so long as the total field count is correct.</p>
<pre><code>In [106]: dt=[('target','a20'),
('train','a20, a20, a8, i8, a20, i8, i8, i8, i8, a3, i8, i8, i8, i8')]
In [107]: dataset=np.genfromtxt(txt,dtype=dt,delimiter=',')
In [108]: dataset
Out[108]:
array([ (b'ACCEPT', (b'organizer@t.net', b't', b'p1@t.net', 0, b'UK', 3600000, 3, 1475917200000, 1475920800000, b'MON', 9, 0, 0, 0)),
...],
dtype=[('target', 'S20'), ('train', [('f0', 'S20'), ('f1', 'S20'), ('f2', 'S8'), ('f3', '<i8'), ('f4', 'S20'), ('f5', '<i8'), ('f6', '<i8'), ('f7', '<i8'), ('f8', '<i8'), ('f9', 'S3'), ('f10', '<i8'), ('f11', '<i8'), ('f12', '<i8'), ('f13', '<i8')])])
</code></pre>
<p>Now just select the 2 top level fields:</p>
<pre><code>In [109]: dataset['target']
Out[109]:
array([b'ACCEPT', b'ACCEPT', b'ACCEPT'],
dtype='|S20')
In [110]: dataset['train']
Out[110]:
array([ (b'organizer@t.net', b't', b'p1@t.net', 0, b'UK', 3600000, 3, 1475917200000, 1475920800000, b'MON', 9, 0, 0, 0),
...],
dtype=[('f0', 'S20'), ('f1', 'S20'), ...])
</code></pre>
<p>I could nest further, grouping the <code>i8</code> columns into groups of 4:</p>
<pre><code>dt=[('target','a20'), ('train','a20, a20, a8, i8, a20, (4,)i8, a3, (4,)i8')]
</code></pre>
| 1 | 2016-10-10T16:55:19Z | [
"python",
"numpy"
] |
Python: Program Keeps Repeating Itself | 39,958,566 | <pre><code>import random
print("Welcome To Arthur's Quiz,\nfeel free to revise the answers once you have completed.\n")
redos=0
def multiplyquestion():
first_num=random.randint(1,100)
second_num=random.randint(1,100)
real_answer= first_num*second_num
human_answer=0
while real_answer!=human_answer:
human_answer = float(input("What is "+str(first_num)+" x "+str(second_num)+" : "))
if real_answer==human_answer:
print("Well done!, The answer was: "+str(real_answer)+"\n")
else:
print("Incorrect answer, Please Try Again.\n")
global redos
redos = redos+1
def divisionquestion():
first_num=random.randint(1,100)
second_num=random.randint(1,100)
real_answer= first_num/second_num
human_answer=0
while real_answer!=human_answer:
human_answer = float(input("What is "+str(first_num)+" / "+str(second_num)+" : "))
if real_answer==human_answer:
print("Well done!, The answer was: "+str(real_answer)+"\n")
else:
print("Incorrect answer, Please Try Again.\n")
global redos
redos = redos+1
def additionquestion():
first_num=random.randint(1,100)
second_num=random.randint(1,100)
real_answer= first_num+second_num
human_answer=0
while real_answer!=human_answer:
human_answer = float(input("What is "+str(first_num)+" + "+str(second_num)+" : "))
if real_answer==human_answer:
print("Well done!, The answer was: "+str(real_answer)+"\n")
else:
print("Incorrect answer, Please Try Again.\n")
global redos
redos = redos+1
def subtractquestion():
first_num=random.randint(1,100)
second_num=random.randint(1,100)
real_answer= first_num-second_num
human_answer=0
while real_answer!=human_answer:
human_answer = float(input("What is "+str(first_num)+" - "+str(second_num)+" : "))
if real_answer==human_answer:
print("Well done!, The answer was: "+str(real_answer)+"\n")
else:
print("Incorrect answer, Please Try Again.\n")
global redos
redos = redos+1
def main():
for i in range(0,1):
question_code=random.randint(1,4)
if question_code == 1:
subtractquestion()
elif question_code ==2:
additionquestion()
elif question_code == 3:
divisionquestion()
elif question_code==4:
multiplyquestion()
# Main program starts here
main()
</code></pre>
<p>I'm trying to do a randomised math quiz and I have made it only do one question for times sake:</p>
<p>When I run this the program will repeat itself over an over again, even though I have only called upon a function once.(I understand this is probably very basic and messy but please have patience with me <3)</p>
| 0 | 2016-10-10T12:42:26Z | 39,958,814 | <p>In the division function do this to avoid comparing recurring answers with non recurring answers. The rest of the program works perfectly in my environment</p>
<pre><code>if int(round(real_answer)) == human_answer:
</code></pre>
| -1 | 2016-10-10T12:55:33Z | [
"python",
"python-3.x"
] |
Python SQLITE3 Inserting Backwards | 39,958,590 | <p>I have a small piece of code which inserts some data into a database. However, the data is being inserted in reverse order.<br>
If I "commit" after the for loop has run through, it inserts backwards; if I "commit" as part of the for loop, it inserts in the correct order, but it is much slower.<br>
How can I commit after the for loop but still retain the correct order? </p>
<pre><code>import subprocess, sqlite3
output4 = subprocess.Popen(['laZagne.exe', 'all'], stdout=subprocess.PIPE).communicate()[0]
lines4 = output4.splitlines()
conn = sqlite3.connect('DBNAME')
cur = conn.cursor()
for j in lines4:
print j
cur.execute('insert into Passwords (PassString) VALUES (?)',(j,))
conn.commit()
conn.close()
</code></pre>
| -1 | 2016-10-10T12:43:58Z | 39,959,051 | <p>You can't rely on <em>any</em> ordering in SQL database tables. Insertion takes place in an implementation-dependent manner, and where rows end up depends entirely on the storage implementation used and the data that is already there.</p>
<p>As such, no reversing takes place; if you are selecting data from the table again and these rows come back in a reverse order, then that's a coincidence and not a choice the database made.</p>
<p>If rows must come back in a specific order, use <code>ORDER BY</code> when selecting. You could order by <code>ROWID</code> for example, which <em>may</em> be increasing monotonically for new rows and thus give you an approximation for insertion order. See <a href="http://sqlite.org/lang_createtable.html#rowid" rel="nofollow"><em>ROWIDs and the INTEGER PRIMARY KEY</em></a>.</p>
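<p>A small sketch of that, using an in-memory database in place of the real one:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Passwords (PassString TEXT)")
for line in ["first", "second", "third"]:
    cur.execute("INSERT INTO Passwords (PassString) VALUES (?)", (line,))
conn.commit()

# A bare SELECT gives no ordering guarantee; ask for one explicitly.
cur.execute("SELECT PassString FROM Passwords ORDER BY rowid")
rows = [r[0] for r in cur.fetchall()]
print(rows)                # ['first', 'second', 'third']
conn.close()
```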
| 1 | 2016-10-10T13:08:06Z | [
"python",
"sqlite3"
] |
How to exclude values from pandas dataframe? | 39,958,646 | <p>I have two dataframes:</p>
<p>1) customer_id,gender
2) customer_id,...[other fields]</p>
<p>The first dataset is an answer dataset (gender is an answer). So, I want to exclude from the second dataset those customer_id values which are in the first dataset (whose gender we know) and call it 'train'. The rest of the records should become a 'test' dataset.</p>
| 1 | 2016-10-10T12:46:19Z | 39,958,731 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> and a condition with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>; a <code>boolean Series</code> is inverted with <code>~</code>:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'customer_id':[1,2,3],
'gender':['m','f','m']})
print (df1)
customer_id gender
0 1 m
1 2 f
2 3 m
df2 = pd.DataFrame({'customer_id':[1,7,5],
'B':[4,5,6],
'C':[7,8,9],
'D':[1,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df2)
B C D E F customer_id
0 4 7 1 5 7 1
1 5 8 3 3 4 7
2 6 9 5 6 3 5
</code></pre>
<pre><code>mask = df2.customer_id.isin(df1.customer_id)
print (mask)
0 True
1 False
2 False
Name: customer_id, dtype: bool
print (~mask)
0 False
1 True
2 True
Name: customer_id, dtype: bool
train = df2[mask]
print (train)
B C D E F customer_id
0 4 7 1 5 7 1
test = df2[~mask]
print (test)
B C D E F customer_id
1 5 8 3 3 4 7
2 6 9 5 6 3 5
</code></pre>
| 2 | 2016-10-10T12:51:16Z | [
"python",
"pandas"
] |
One of threads rewrites console input in Python | 39,958,650 | <p>I have a problem with a console app with threading. In the first thread I have a function which writes the symbol "x" to the output. In the second thread I have a function which waits for the user's input. (The symbol "x" is just a random choice for this question.)</p>
<p>For ex.</p>
<p>Thread 1:</p>
<pre>
while True:
print "x"
time.sleep(1)
</pre>
<p>Thread 2:</p>
<pre>
input = None
while input != "EXIT":
input = raw_input()
print input
</pre>
<p>But when i write text for thread 2 to console, my input text (for ex. HELLO) is rewroted. </p>
<pre>
x
x
HELx
LOx
x
x[enter pressed here]
HELLO
x
x
</pre>
<p>Is there any way to prevent my input text from being overwritten by the symbol "x"?</p>
<p>Thanks for answers.</p>
| 0 | 2016-10-10T12:46:26Z | 39,958,943 | <p>In a console, standard output (produced by the running program(s)) and standard input (produced by your keypresses) are both sent to screen, so they may end up all mixed.</p>
<p>Here your thread 1 writes one <code>x</code> per line every second, so if you take more than 1 second to type <code>HELLO</code> then that will produce the in-console output that you submitted.</p>
<p>If you want to avoid that, a few non-exhaustive suggestions:</p>
<ul>
<li><p>temporarily interrupt thread1 output when a keypress is detected</p></li>
<li><p>use a library such as ncurses to create separate zones for your program output and the user input</p></li>
<li><p>just suppress thread1 output, or send it to a file instead.</p></li>
</ul>
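<p>A Python 3 sketch of the last suggestion: the printing thread writes to a file (the name <code>ticker.log</code> is made up) so it can never clobber what the user is typing; the sleep below stands in for the <code>raw_input()</code> loop:</p>

```python
import threading
import time

stop = threading.Event()

def ticker(stream):
    # Background thread: the periodic output goes to a file, not the console.
    while not stop.is_set():
        stream.write("x\n")
        stream.flush()
        time.sleep(0.05)

with open("ticker.log", "w") as log:
    t = threading.Thread(target=ticker, args=(log,))
    t.start()
    time.sleep(0.3)        # the real program would block on user input here
    stop.set()
    t.join()

with open("ticker.log") as log:
    lines = log.read().splitlines()
print(len(lines))
```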
| 0 | 2016-10-10T13:02:22Z | [
"python",
"multithreading",
"console-application"
] |
Python C/API: how to create a normal class | 39,958,741 | <p>Using the Python C/API, how do I create a normal Python class using the normal Python class-creation mechanism (i.e.: not an extension type)?</p>
<p>In other words, what is the Python C/API equivalent (in the sense that it does <em>exactly</em> the same in all cases) of a statement</p>
<pre><code>class X(bases):
...some methods/attributes here...
</code></pre>
| 0 | 2016-10-10T12:51:54Z | 39,958,885 | <p>I'm not sure what you mean by "the normal Python class-creation mechanism", but...</p>
<p>There's a documentation page dedicated to this: <a href="https://docs.python.org/3/extending/newtypes.html" rel="nofollow">https://docs.python.org/3/extending/newtypes.html</a> -- it creates a new type in an extension module, which is equivalent to creating a new <code>class</code> in Python code.</p>
<p>The minimal example presented there is:</p>
<pre><code>#include <Python.h>
typedef struct {
PyObject_HEAD
/* Type-specific fields go here. */
} noddy_NoddyObject;
static PyTypeObject noddy_NoddyType = {
PyVarObject_HEAD_INIT(NULL, 0)
"noddy.Noddy", /* tp_name */
sizeof(noddy_NoddyObject), /* tp_basicsize */
0, /* tp_itemsize */
0, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_reserved */
0, /* tp_repr */
0, /* tp_as_number */
0, /* tp_as_sequence */
0, /* tp_as_mapping */
0, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
0, /* tp_getattro */
0, /* tp_setattro */
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
"Noddy objects", /* tp_doc */
};
static PyModuleDef noddymodule = {
PyModuleDef_HEAD_INIT,
"noddy",
"Example module that creates an extension type.",
-1,
NULL, NULL, NULL, NULL, NULL
};
PyMODINIT_FUNC
PyInit_noddy(void)
{
PyObject* m;
noddy_NoddyType.tp_new = PyType_GenericNew;
if (PyType_Ready(&noddy_NoddyType) < 0)
return NULL;
m = PyModule_Create(&noddymodule);
if (m == NULL)
return NULL;
Py_INCREF(&noddy_NoddyType);
PyModule_AddObject(m, "Noddy", (PyObject *)&noddy_NoddyType);
return m;
}
</code></pre>
| 1 | 2016-10-10T12:59:34Z | [
"python",
"class",
"python-c-api"
] |
Python C/API: how to create a normal class | 39,958,741 | <p>Using the Python C/API, how do I create a normal Python class using the normal Python class-creation mechanism (i.e.: not an extension type)?</p>
<p>In other words, what is the Python C/API equivalent (in the sense that it does <em>exactly</em> the same in all cases) of a statement</p>
<pre><code>class X(bases):
...some methods/attributes here...
</code></pre>
| 0 | 2016-10-10T12:51:54Z | 39,964,743 | <p>In Python you can programmatically create a class by calling the <code>type</code> built-in function. See <a href="http://stackoverflow.com/a/15247202/4657412">this answer</a> for example.</p>
<p>This takes three arguments: a name, a tuple of bases, and a dictionary.</p>
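<p>For reference, the pure-Python form of that call is:</p>

```python
# type(name, bases, dict) builds a class at runtime, exactly like a
# `class X: ...` statement would.
X = type('X', (object,), {
    'attribute': 42,
    'method': lambda self: 'hello from X',
})

obj = X()
print(X.__name__)      # X
print(obj.attribute)   # 42
print(obj.method())    # hello from X
```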
<p>You can get the Python <code>type</code> in the C api as <a href="https://docs.python.org/3/c-api/type.html#c.PyType_Type" rel="nofollow"><code>PyType_Type</code></a>. You then just need to call it using <a href="https://docs.python.org/3/c-api/object.html#c.PyObject_Call" rel="nofollow">one of the standard methods for calling <code>PyObject*</code> callables</a>:</p>
<pre class="lang-c prettyprint-override"><code>// make a tuple of your bases
PyObject* bases = PyTuple_Pack(0); // assume no bases
// make a dictionary of member functions, etc
PyObject* dict = PyDict_New(); // empty for the sake of example
PyObject* my_new_class = PyObject_CallFunction(&PyType_Type,"sOO",
"X", // class name
bases,
dict);
// check if null
// decref bases and dict
Py_CLEAR(bases);
Py_CLEAR(dict);
</code></pre>
<p>(Note that you have to do <code>&PyType_Type</code> - the documentation implies that it's a <code>PyObject*</code> but it isn't!)</p>
| 3 | 2016-10-10T18:41:53Z | [
"python",
"class",
"python-c-api"
] |
Program hangs when using processes | 39,958,815 | <p>I'm trying to make my C code communicate with my Python code.
I'm making a named pipe in my C code, sending it to Python, and Python prints it out. I want to do the same where I make a named pipe in Python and C reads it out; however, my program seems to stall (I want to do both of these at once), hence I'm using processes. </p>
<p>My c code:</p>
<pre><code>int main(void) {
int pid;
FILE * fp;
char *calledPython="./a.py";
char *pythonArgs[]={"python",calledPython,"a","b","c",NULL};
FILE * fp2;
char str[40];
pid = fork();
if(pid == 0) {
mkfifo("./test",0666);
fp = fopen("./test","w");
fprintf(fp,"Hello\n");
fprintf(fp,"World\n");
}
else {
execvp("python",pythonArgs);
// if we get here it misfired
perror("Python execution");
kill(pid,SIGKILL);
}
fp2 = fopen("./test2","r");
if(fp2 == NULL) {
printf("NUll\n");
}
else {
fscanf(fp2, "%s", str);
printf("received from test2 %s\n", str);
}
fclose(fp);
return 0;
}
</code></pre>
<p>My python code:</p>
<pre><code>#!/usr/bin/python
import os, sys
with open("./test") as fp:
for line in fp:
print line
path = "./test2"
os.mkfifo(path)
fifo = open(path,"w")
fifo.write("Message from the sender\n")
fifo.close()
</code></pre>
<p>Where ./test is the initial named pipe Python reads in, and ./test2 is the second named pipe C is supposed to read in.
However, my program hangs once I write to my second pipe in Python:</p>
<pre><code>path = "./test2"
os.mkfifo(path)
fifo = open(path,"w")
fifo.write("Message from the sender\n")
fifo.close()
</code></pre>
<p>The output I get from terminal is:</p>
<pre><code>NUll
Hello
World
</code></pre>
<p>If I run it again, I simply get an empty line (nothing prints at all).
I tried to open the second named pipe in the parent process; that doesn't seem to change much. I'm not sure where I've gone wrong here, any ideas?</p>
| 1 | 2016-10-10T12:55:34Z | 39,959,177 | <p>I don't have a unix box on hand to test currently. But the likely problem is that you don't close the fifo in the parent process.</p>
<p>Whenever you have a loop like <code>for line in file</code>, that loop will continue until the file ends. Unlike a normal file, a fifo doesn't "end" until the process on the writing end closes the fifo. If you don't close the fifo, the reading process will block in the read call and won't be able to progress.</p>
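<p>That end-of-file behaviour can be demonstrated in pure Python, with both ends of a fifo in one process (POSIX only; the writer runs on a thread so both ends can be opened):</p>

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)

def writer():
    with open(path, "w") as fp:    # blocks until a reader opens the fifo
        fp.write("Hello\n")
        fp.write("World\n")
    # Leaving the with-block closes the fifo; only then does the reader's
    # "for line in fp" loop see end-of-file and terminate.

t = threading.Thread(target=writer)
t.start()

with open(path) as fp:             # blocks until the writer opens the fifo
    lines = [line.strip() for line in fp]

t.join()
os.remove(path)
print(lines)                       # ['Hello', 'World']
```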
<p>There may be other problems with your code that I'm missing.</p>
<p>Some other notes:</p>
<ul>
<li><p>You should mkfifo before you execvp, currently you are doing them in parallel and hoping that mkfifo completes before your python program tries to open the file. Likely, but not certain.</p></li>
<li><p>There is another race condition on the second fifo, this one I think is very unlikely to work as intended.</p></li>
<li><p>You never remove the named pipe, the next time it will try to create a file which already exists which will not work.</p></li>
<li><p>SIGKILL is seldom the correct signal to use. I recommend you read up on the difference between SIGKILL, SIGTERM etc.</p></li>
<li><p>System calls like mkfifo can fail and checking the return value for errors is recommended.</p></li>
</ul>
| 0 | 2016-10-10T13:14:40Z | [
"python",
"c",
"fork"
] |
Django: accessing a table via dict-like interface | 39,958,907 | <p>So I have situation in Django where I have a model called S which can have a series of attributes, but I have no idea currently what attributes will be relevant/required. In order to handle this I am thinking of creating a additional model SAttrib which mimics a dictionary entry and accessing it via a metadata property. See below. </p>
<pre><code>class S(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=128, blank=True, null=True)
description = models.CharField(max_length=128, blank=True, null=True)
created_date = models.DateTimeField(auto_now_add=True, auto_now=False)
modified_date = models.DateTimeField(auto_now_add=False, auto_now=True)
def __unicode__(self):
return smart_unicode(self.name)
class Meta:
db_table = "stest"
def _get_metadata(self):
class AD(dict):
def __init__(self, id):
self.id = id
def __getitem__(self, key):
return SampleAttrib.objects.filter(sample_id=self.id, attrib_key=key).get().attrib_value
return AD(self.id)
metadata = property(_get_metadata)
class SampleAttrib(models.Model):
s_id = models.ForeignKey("S")
attrib_key = models.CharField(max_length=128)
attrib_value = models.TextField()
class Meta:
db_table = 's_attrib'
unique_together = (('s_id', 'attrib_key'),)
</code></pre>
<p>I would like to make it so that I have access the attributes stored in SAttrib via implementing it as a property. So I would be able to access values like so:</p>
<pre><code>y = S.objects.get()
y.metadata["foo"]
</code></pre>
<p>But then it becomes a mess and requires a lot of additional code for setting and saving objects / contains etc. </p>
<pre><code>s = S.objects.get()
s.metadata["foo"] = "spam"
.....
</code></pre>
<p>Is there a better way of structuring this, or something that's already in Django that I'm missing? </p>
<p>All suggestions welcome....</p>
<p>Thanks</p>
| 0 | 2016-10-10T13:00:34Z | 39,960,640 | <p>Not a lot wrong with the idea, except for the fact that you seem to have made it needlessly complicated with the use of the <code>_get_metadata</code> function. You may be having a bit of a performance issue because of it as well. </p>
<p>This kind of two-table approach is, or rather was, quite popular in many web apps when you don't know in advance what kind of name/value pairs you will be storing in your table. However the practice is now on the decline thanks to NoSQL solutions and most RDBMS now supporting JSON data types.</p>
<p>Django has built in support for Postgresql's excellent JSONB field. For other databases, tried and tested JSONFields are available so using JSONFields would give you a pretty good database independent solution.</p>
| 1 | 2016-10-10T14:29:23Z | [
"python",
"django",
"django-models"
] |
Kivy Multiple Screen Transitions not taking place | 39,958,969 | <p>I am trying to build a multiple screen kivy app in python. There are no compile time errors; the app compiles successfully. I am using Screen Manager in kivy to achieve multiple screens. On clicking the buttons no transitions are taking place. Please help me perform transitions. Here are actual snippets of my code.</p>
<p><strong>main.py file</strong></p>
<pre><code>import kivy
from kivy.app import App
from kivy.properties import ObjectProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.screenmanager import ScreenManager, Screen
class LoginScreen(Screen):
pass
class SignUpScreen(Screen):
pass
class MainScreen(BoxLayout):
pass
class MyScreenManager(ScreenManager):
pass
class AuthenticationApp(App):
def build(self):
return MyScreenManager()
if __name__ == '__main__':
AuthenticationApp().run()
</code></pre>
<p><strong>Authentication.kv file</strong></p>
<pre><code><MyScreenManager>
MainScreen:
SecondScreen:
<SecondScreen>:
name: 'Second'
BoxLayout:
orientation: 'vertical'
canvas:
Rectangle:
source: 'images/blue.png'
pos: self.pos
size: self.size
BoxLayout:
orientation: 'vertical'
size_hint: 1,0.25
Label:
text: 'Vigilantdsjkadhakjshdakjsd Dollop'
font_size: '15sp'
size_hint: 1, 0.20
BoxLayout:
orientation: 'horizontal'
size_hint: 1, 0.1
Button:
id: login_button
text: 'Login'
font_size: '15sp'
on_release: app.root.current = 'Main'
Button:
id: login_button
text: 'Sign Up'
font_size: '15sp'
Button:
id: login_button
text: 'Recover'
font_size: '15sp'
Button:
id: login_button
text: 'Reset'
font_size: '15sp'
BoxLayout:
orientation: 'vertical'
size_hint: 1,0.75
Button:
text: 'Page'
<MainScreen>:
name: 'Main'
BoxLayout:
orientation: 'vertical'
canvas:
Rectangle:
source: 'images/blue.png'
pos: self.pos
size: self.size
BoxLayout:
orientation: 'vertical'
size_hint: 1,0.25
Label:
text: 'Vigilant Dollop'
font_size: '15sp'
size_hint: 1, 0.20
BoxLayout:
orientation: 'horizontal'
size_hint: 1, 0.1
Button:
id: login_button
text: 'Login'
font_size: '15sp'
Button:
id: login_button
text: 'Sign Up'
font_size: '15sp'
on_press: root.current = 'Second'
Button:
id: login_button
text: 'Recover'
font_size: '15sp'
Button:
id: login_button
text: 'Reset'
font_size: '15sp'
BoxLayout:
orientation: 'vertical'
size_hint: 1,0.75
Button:
text: 'Page'
</code></pre>
| -1 | 2016-10-10T13:03:56Z | 39,964,554 | <p>Declare the screen manager as a global variable:</p>
<pre><code>import kivy
from kivy.app import App
from kivy.properties import ObjectProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.screenmanager import ScreenManager, Screen
screen_manager = ScreenManager()
class LoginScreen(Screen):
pass
</code></pre>
<p>and return the screen_manager instance in the <code>build</code> method.</p>
<pre><code>class AuthenticationApp(App):
def build(self):
screen_manager.add_widget(LoginScreen(name='login'))
return screen_manager
</code></pre>
<p>then in your <code>.kv</code> file any where you want to switch between screens. try for example:</p>
<pre><code>Button:
id: login_button
text: 'Login'
font_size: '15sp'
on_release: root.manager.current = 'login'
</code></pre>
| 0 | 2016-10-10T18:29:59Z | [
"python",
"kivy"
] |
Python 3: How to run functions with parameter in random order? | 39,958,974 | <p>I know how to run functions which don't have parameters. In this case, how do I run functions which have parameters in random order?</p>
<p>It showed the result, but not in random order, and there is some error. Please help me correct my code. Thank you :)</p>
<pre><code>import random
def hi(name):
print('a ' + name)
def howold(old, name):
print( name + 'is ' + old + "years old")
def howmuch(money):
print(money + ' dollars')
functions = [hi('John'),howold('20', 'John'),howmuch('50')]
random.shuffle(functions)
for i in functions:
i()
</code></pre>
| -1 | 2016-10-10T13:04:02Z | 39,959,082 | <p>In your loop, i is a function and not a string, therefore comparing it to a string won't work and you end up in the else branch. Instead use <code>if i.__name__ == 'hi'</code></p>
| -1 | 2016-10-10T13:09:34Z | [
"python",
"function",
"random"
] |
Python 3: How to run functions with parameter in random order? | 39,958,974 | <p>I know how to run functions which don't have parameters. In this case, how do I run functions which have parameters in random order?</p>
<p>It showed the result, but not in random order, and there is some error. Please help me correct my code. Thank you :)</p>
<pre><code>import random
def hi(name):
print('a ' + name)
def howold(old, name):
print( name + 'is ' + old + "years old")
def howmuch(money):
print(money + ' dollars')
functions = [hi('John'),howold('20', 'John'),howmuch('50')]
random.shuffle(functions)
for i in functions:
i()
</code></pre>
| -1 | 2016-10-10T13:04:02Z | 39,959,154 | <p>Try something like this:</p>
<pre><code>functions = [(hi, ['John']), (howold, ['20', 'John']), (howmuch, ['50'])]
random.shuffle(functions)
for func, args in functions:
func(*args)
</code></pre>
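<p>Self-contained, with the question's functions (print spacing tidied), that looks like:</p>

```python
import random

def hi(name):
    print('a ' + name)

def howold(old, name):
    print(name + ' is ' + old + ' years old')

def howmuch(money):
    print(money + ' dollars')

# Store (function, argument-list) pairs instead of calling the functions.
functions = [(hi, ['John']), (howold, ['20', 'John']), (howmuch, ['50'])]
random.shuffle(functions)
for func, args in functions:
    func(*args)    # unpack the stored arguments for each call
```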
| 3 | 2016-10-10T13:13:34Z | [
"python",
"function",
"random"
] |
Python 3: How to run functions with parameter in random order? | 39,958,974 | <p>I know how to run functions which don't have parameters. In this case, how do I run functions which have parameters in random order?</p>
<p>It showed the result, but not in random order, and there is some error. Please help me correct my code. Thank you :)</p>
<pre><code>import random
def hi(name):
print('a ' + name)
def howold(old, name):
print( name + 'is ' + old + "years old")
def howmuch(money):
print(money + ' dollars')
functions = [hi('John'),howold('20', 'John'),howmuch('50')]
random.shuffle(functions)
for i in functions:
i()
</code></pre>
| -1 | 2016-10-10T13:04:02Z | 39,959,181 | <p>Your <code>functions</code> list contains the results of already-evaluated functions, not partially applied functions with no arguments that you could call with <code>i()</code> in the loop.</p>
<p>You can use lambdas to produce new functions with no arguments like this:</p>
<pre><code>functions = [lambda: hi('John'),
lambda: howold('20', 'John'),
lambda: howmuch('50')]
random.shuffle(functions)
for f in functions:
f()
</code></pre>
| 2 | 2016-10-10T13:14:58Z | [
"python",
"function",
"random"
] |
Python 3: How to run functions with parameter in random order? | 39,958,974 | <p>I know how to run functions which don't have parameters. In this case, how do I run functions which have parameters in random order?</p>
<p>It showed the result, but not in random order, and there is some error. Please help me correct my code. Thank you :)</p>
<pre><code>import random
def hi(name):
print('a ' + name)
def howold(old, name):
print( name + 'is ' + old + "years old")
def howmuch(money):
print(money + ' dollars')
functions = [hi('John'),howold('20', 'John'),howmuch('50')]
random.shuffle(functions)
for i in functions:
i()
</code></pre>
| -1 | 2016-10-10T13:04:02Z | 39,959,234 | <p>When this line executes:</p>
<pre><code>functions = [hi('John'),howold('20', 'John'),howmuch('50')]
</code></pre>
<p>Python will call your 3 functions <code>hi()</code>, <code>howold()</code> then <code>howmuch()</code> in that order, then store their results in a list called <code>functions</code>. So all the <code>print()</code> calls will run at that point. That's why, as you said, "it showed the result but not random". Since none of your functions return anything, your <code>functions</code> list will be equal to <code>[None, None, None]</code>.</p>
<p>Then the following code:</p>
<pre><code>random.shuffle(functions)
for i in functions:
i()
</code></pre>
<p>Will try to execute None(). That will produce an error, as you said "there is some error": this error is <code>TypeError: 'NoneType' object is not callable</code>.</p>
<p>How to fix: use for example <code>functools.partial()</code></p>
<pre><code>from functools import partial
functions = [partial(hi, 'John'), partial(howold, '20', 'John'), partial(howmuch, '50')]
random.shuffle(functions)
for i in functions:
i()
</code></pre>
<p>Official doc here: <a href="https://docs.python.org/2/library/functools.html#functools.partial" rel="nofollow">https://docs.python.org/2/library/functools.html#functools.partial</a></p>
| 1 | 2016-10-10T13:16:55Z | [
"python",
"function",
"random"
] |
i cannot round up and get it for my GTIN | 39,959,010 | <pre><code>import random
r1 = (random.randint(0,9))
r2 = (random.randint(0,9))
r3 = (random.randint(0,9))
r4 = (random.randint(0,9))
r5 = (random.randint(0,9))
r6 = (random.randint(0,9))
r7 = (random.randint(0,9))
print ("your item barcode number is", r1,r2,r3,r4,r5,r6,r7)
r8 = (r1*3+r2*1+r3*3+r4*1+r5*3+r6*1+r7*3)
roundup = round(r8, -1)
print (r8)
print(roundup)
GTIN = (roundup-r8)
if GTIN<0:
GTIN = (r8-roundup)
print("the GTIN number is", GTIN)
print(r1,r2,r3,r4,r5,r6,r7,GTIN)
</code></pre>
<p>I can't get it to round up to the higher ten. Basically I want it to work like this: 44 rounds up to 50, unless the number is already a multiple of 10. Can somebody help me do it?</p>
| 0 | 2016-10-10T13:06:04Z | 39,959,090 | <pre><code>def round_up_by_ten(num):
return num if not num%10 else ((num//10)+1)*10
</code></pre>
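<p>For example:</p>

```python
def round_up_by_ten(num):
    # multiples of 10 stay as they are; everything else goes up to the next ten
    return num if not num % 10 else ((num // 10) + 1) * 10

print(round_up_by_ten(44))   # 50
print(round_up_by_ten(40))   # 40
print(round_up_by_ten(1))    # 10
```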
| 0 | 2016-10-10T13:09:56Z | [
"python"
] |
Save a subset of MongoDB(3.0) collection to another collection in Python | 39,959,206 | <p>I found this answer - <a href="http://stackoverflow.com/a/25247084/609782">Answer link</a></p>
<pre><code>db.full_set.aggregate([ { $match: { date: "20120105" } }, { $out: "subset" } ]);
</code></pre>
<p>I want to do the same thing but with the first 15000 documents in the collection. I couldn't find how to apply a limit to such a query (I tried using <code>$limit : 15000</code>, but it doesn't recognize $limit).</p>
<p>also when I tried - </p>
<pre><code>db.subset.insert(db.full_set.find({}).limit(15000).toArray())
</code></pre>
<p>there is no function <code>toArray()</code> for output type <code>cursor</code>.
<br><br>How can I accomplish this?</p>
| 2 | 2016-10-10T13:15:39Z | 40,076,716 | <p>Well,<br>
in Python, this is how things work - <code>$limit</code> needs to be quoted as a string key,<br>
and you need to build a pipeline and execute it as a command.</p>
<p>In my code -</p>
<pre><code> pipeline = [{ '$limit': 15000 },{'$out': "destination_collection"}]
db.command('aggregate', "source_collection", pipeline=pipeline)
</code></pre>
<p>You need to wrap everything in double quotes, including your source and destination collection.
And in <code>db.command</code>, <code>db</code> is your database object (i.e. <code>dbclient.database_name</code>).</p>
<p>As per this answer -</p>
<blockquote>
<p>It works about 100 times faster than forEach at least in my case. This is because the entire aggregation pipeline runs in the mongod process, whereas a solution based on find() and insert() has to send all of the documents from the server to the client and then back. This has a performance penalty, even if the server and client are on the same machine.</p>
</blockquote>
<p>The one that really helped me figure this answer out - <a href="http://blog.pythonisito.com/2012/06/using-mongodbs-new-aggregation.html" rel="nofollow">Reference 1</a>
<br>
And <a href="https://docs.mongodb.com/v3.2/aggregation/" rel="nofollow">official documentation</a></p>
| 0 | 2016-10-16T23:44:42Z | [
"python",
"mongodb",
"mongodb-query",
"aggregation-framework",
"mongodb-aggregation"
] |
Multiple outputs printed slowly at the same time | 39,959,301 | <p>As it's stated in the title, I have N strings (let's say 3) and they go one after the other like:</p>
<pre><code>"string1"
"string2"
"string3"
</code></pre>
<p>With the help of 'sleep' we can make the string get printed out slowly, symbol by symbol, BUT I want to have each of them printed slowly AT THE SAME TIME. Is it possible to do such a thing? </p>
<p>Problem - lvl.'Advanced': Can I make 3 tkinter buttons with text in them printed like that? Or maybe it'd be better to create labels with such text-effect (if possible) and instantly replace them with 3 buttons with the same words?</p>
| -1 | 2016-10-10T13:20:40Z | 39,959,969 | <p>If I understand you correctly, you want to create a label that slowly reveals one character at a time (ie: first you see "s", then "st", then "str", etc).</p>
<p>That can be done by creating a custom label class, and using <code>after</code> to slowly reveal the text. Each time the function <code>reveal_text</code> is called, it pops one character off the list of remaining characters, appends it to the characters already displayed, and then arranges for itself to be called again in half a second.</p>
<p>For example:</p>
<pre><code>import tkinter as tk

class SlowLabel(tk.Label):
def __init__(self, *args, **kwargs):
tk.Label.__init__(self, *args, **kwargs)
self.text = self.cget("text")
self.configure(text="")
self.reveal_text()
def reveal_text(self):
if len(self.text) > 0:
text = self.cget("text") + self.text[0]
self.configure(text=text)
self.text = self.text[1:]
self.after(500, self.reveal_text)
</code></pre>
<p>You can use this class exactly like you would use a normal label:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
for text in ("string1", "string2", "string3"):
label = SlowLabel(root, text=text, width=20, anchor="w")
label.pack(side="top", fill="x")
root.mainloop()
</code></pre>
<p>If you prefer buttons rather than labels, just inherit from <code>tk.Button</code> rather than <code>tk.Label</code>.</p>
| 1 | 2016-10-10T13:54:56Z | [
"python",
"python-3.x",
"tkinter"
] |
Python 3.5 Dropbox API modified date doesn't update | 39,959,308 | <p>I'm writing a script in python3.5 that needs to check if the file on dropbox is
newer than a local file. If the file is newer, it needs to download the file.</p>
<p>The problem I'm having is that the date on the server doesn't seem to update. Is it possible that it only update on certain times?</p>
<pre><code>code snippet:
def check_if_needed(dbx):
server_date = dbx.files_get_metadata('/Verlichting.zip').server_modified
version_epoch = os.path.getmtime('versie.txt')
version_date = datetime.datetime.fromtimestamp(version_epoch)
print (server_date)
print (version_date)
if (version_date < server_date):
return True
return False
</code></pre>
<p>output:<br>
2016-10-10 13:05:35<br>
2016-10-10 15:04:25.861405<br>
<br>
what it should be:<br>
2016-10-10-15:10:00<br>
2016-10-10 15:04:25.861405</p>
<p>So it returns False, while it has to be True.</p>
<p>I have updated the file on Dropbox a couple of times, but it still doesn't update. I have also looked on the internet but I couldn't find anything. Also, I don't use the Dropbox client but the browser directly, and yes, I'm updating the file in the Apps folder created by Dropbox ;)
If more information is needed, let me know!</p>
<p>Anyone able to help me?
Thanks in advance!</p>
| 0 | 2016-10-10T13:21:02Z | 39,960,027 | <p>The problem was the difference between timezones. I'm in GMT +2 while Dropbox is GMT +0. So I changed this line:</p>
<pre><code>version_date = datetime.datetime.fromtimestamp(version_epoch) - datetime.timedelta(hours=2)
</code></pre>
<p>Now it works perfectly.</p>
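<p>Note that hard-coding a 2-hour offset breaks whenever DST shifts. A more robust sketch (my addition, not part of the original fix, and assuming — as the Dropbox SDK documents — that <code>server_modified</code> is a naive datetime expressed in UTC) converts the local mtime to naive UTC instead of subtracting a fixed delta:</p>

```python
import datetime

def mtime_as_utc(epoch_seconds):
    # os.path.getmtime() returns seconds since the epoch; utcfromtimestamp()
    # converts that instant to a naive UTC datetime, which can be compared
    # directly with Dropbox's (naive, UTC) server_modified value.
    return datetime.datetime.utcfromtimestamp(epoch_seconds)
```

With this helper, <code>version_date = mtime_as_utc(version_epoch)</code> works regardless of the local timezone.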
| 0 | 2016-10-10T13:57:32Z | [
"python",
"python-3.x",
"dropbox"
] |
If brackets found in string, then remove brackets and data inside brackets | 39,959,313 | <p>I would like to detect brackets in a string and, if found, remove the brackets and all data in the brackets.</p>
<p>e.g.</p>
<p><code>Developer (12)</code></p>
<p>would become</p>
<p><code>Developer</code></p>
<p>Edit: Note that the string will be a different length/text each time, and the brackets will not always be present.</p>
<p>I can detect the brackets using something like</p>
<pre><code>if '(' in mystring:
print 'found it'
</code></pre>
<p>but how would I remove the <code>(12)</code>?</p>
| 2 | 2016-10-10T13:21:24Z | 39,959,456 | <p>You can user regex and replace it:</p>
<pre><code>>>> re.sub(r'\(.*?\)', '','Developer (12)')
'Developer '
>>> a='DEf (asd () . as ( as ssdd (12334))'
>>> re.sub(r'\(.*?\)', '','DEf (asd () . as ( as ssdd (12334))')
'DEf . as )'
</code></pre>
| 3 | 2016-10-10T13:28:28Z | [
"python",
"regex"
] |
If brackets found in string, then remove brackets and data inside brackets | 39,959,313 | <p>I would like to detect brackets in a string and, if found, remove the brackets and all data in the brackets.</p>
<p>e.g.</p>
<p><code>Developer (12)</code></p>
<p>would become</p>
<p><code>Developer</code></p>
<p>Edit: Note that the string will be a different length/text each time, and the brackets will not always be present.</p>
<p>I can detect the brackets using something like</p>
<pre><code>if '(' in mystring:
print 'found it'
</code></pre>
<p>but how would I remove the <code>(12)</code>?</p>
| 2 | 2016-10-10T13:21:24Z | 39,959,475 | <p>I believe you want something like this</p>
<pre><code>import re
a = "developer (12)"
print(re.sub(r"\(.*\)", "", a))
</code></pre>
<p>Note that <code>.*</code> is greedy, so if the string can contain several bracket pairs, use the non-greedy <code>.*?</code> instead.</p>
| 1 | 2016-10-10T13:29:23Z | [
"python",
"regex"
] |
If brackets found in string, then remove brackets and data inside brackets | 39,959,313 | <p>I would like to detect brackets in a string and, if found, remove the brackets and all data in the brackets.</p>
<p>e.g.</p>
<p><code>Developer (12)</code></p>
<p>would become</p>
<p><code>Developer</code></p>
<p>Edit: Note that the string will be a different length/text each time, and the brackets will not always be present.</p>
<p>I can detect the brackets using something like</p>
<pre><code>if '(' in mystring:
print 'found it'
</code></pre>
<p>but how would I remove the <code>(12)</code>?</p>
| 2 | 2016-10-10T13:21:24Z | 39,959,531 | <p>Since it's always at the end and there are no nested brackets:</p>
<pre><code>s = "Developer (12)"
s[:s.index('(')] # or s.index(' (') if you want to get rid of the previous space too
</code></pre>
| 0 | 2016-10-10T13:33:10Z | [
"python",
"regex"
] |
If brackets found in string, then remove brackets and data inside brackets | 39,959,313 | <p>I would like to detect brackets in a string and, if found, remove the brackets and all data in the brackets.</p>
<p>e.g.</p>
<p><code>Developer (12)</code></p>
<p>would become</p>
<p><code>Developer</code></p>
<p>Edit: Note that the string will be a different length/text each time, and the brackets will not always be present.</p>
<p>I can detect the brackets using something like</p>
<pre><code>if '(' in mystring:
print 'found it'
</code></pre>
<p>but how would I remove the <code>(12)</code>?</p>
| 2 | 2016-10-10T13:21:24Z | 39,960,071 | <p>For nested brackets and multiple pairs in a string, this solution would work:</p>
<pre><code>def replace_parenthesis_with_empty_str(str):
new_str = ""
stack = []
in_bracker = False
for c in str :
if c == '(' :
stack.append(c)
in_bracker = True
continue
else:
if in_bracker == True:
if c == ')' :
stack.pop()
if not len(stack):
in_bracker = False
else :
new_str += c
return new_str
a = "fsdf(ds fOsf(fs)sdfs f(sdfsd)sd fsdf)c sdsds (sdsd)"
print(replace_parenthesis_with_empty_str(a))
</code></pre>
| 0 | 2016-10-10T13:59:49Z | [
"python",
"regex"
] |
Trying to find a quick way to get windows path | 39,959,409 | <p>I'm new to python.
I know how to detect which os is installed but I'm trying to find a quick way to get the windows path rather than going a-z (c:\windows...x:\windows...).
Is there any quick way?</p>
<p>Edit:
Something like %systemroot% in windows (gives you full path).</p>
| 4 | 2016-10-10T13:25:49Z | 39,959,484 | <p>You can use <a href="https://docs.python.org/3/library/os.html#os.environ" rel="nofollow"><code>os.environ</code></a></p>
<pre><code>import os
win_path = os.environ['WINDIR']
</code></pre>
<p><code>WINDIR</code> is an environment variable set by windows that will point to <code>%SystemRoot%</code></p>
| 3 | 2016-10-10T13:30:00Z | [
"python",
"windows"
] |
Class providing methods inherited into class providing underlying data - How is that called? | 39,959,416 | <p>I have a class <code>Library</code> which defines some methods on - kind of - abstract data. This class is then inherited into a class <code>A</code> or <code>B</code> that actually defines the data. The intention is to reuse <code>Library</code> with different underlying data storage models.</p>
<p>In Python:</p>
<pre><code>class Library:
def meth1(self, ...):
return ...
def meth2(self, ...):
return ...
def compute_property1(...):
return ...
def compute_property2(...):
return ...
class B(Library):
property1 = property(lambda s: s.compute_property1()) #plain property
    property2 = property(lambda s: s.compute_property2()) #plain property
class A(Library):
property1 = my_property_with_fancy_caching_and_compression_and_stuff(....)
    property2 = my_property_with_fancy_caching_and_compression_and_stuff(....)
</code></pre>
<p>Is this pattern a well-known design approach? Does it have a name? Is there a recommended name for <code>Library</code>?</p>
| 0 | 2016-10-10T13:26:12Z | 39,974,756 | <p>Your example leverages Python's inheritance mechanism, whereby you add methods from a methods-only, data-free, abstract class (<code>Library</code>) to one or several data-oriented, dumb concrete classes (<code>A</code>, <code>B</code>).</p>
<p>As it separates data-storing (representation) from functionality (methods), it is very <strong>close to a mixin</strong> as <a href="http://stackoverflow.com/users/41316/bruno-desthuilliers">Bruno</a> commented, but with the distinction that such a pattern usually brings both data storage <em>and</em> functionality. This difference in your pattern makes it not completely <a href="https://en.wikipedia.org/wiki/Object-oriented_programming" rel="nofollow">OOP</a> (where an object is supposed to bring both) but is <strong>reminiscent of the <a href="https://en.wikipedia.org/wiki/Aspect-oriented_programming" rel="nofollow">Aspect-Oriented</a> paradigm</strong>.</p>
<p>If I were to describe this mechanism in OOP, I'd call <code>Library</code> an <strong>interface with default methods</strong>. That would carry the ability to import some method implementations anywhere, but <em>no fields</em>. And I would call <code>A</code> and <code>B</code> <a href="https://en.wikipedia.org/wiki/Data_access_object" rel="nofollow">Data Access Objects</a>.</p>
<p><em>Note:</em> If you intend to do so for several interfaces (<em>e.g.</em>, if you were to split your <code>meth1()</code> and <code>compute_property1()</code> from <code>meth2()</code> and <code>compute_property2()</code>, and place them in two separate interfaces), then that would follow the <a href="https://en.wikipedia.org/wiki/Interface_segregation_principle" rel="nofollow">Interface Segregation Principle</a>.</p>
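<p>A minimal sketch of the pattern being discussed (illustrative only — the names here are made up, not taken from the question): the method-only class contributes behavior, while each subclass decides how the underlying attribute is produced.</p>

```python
class Stats:
    # Method-only "library" class: no data of its own, it just
    # computes things from whatever `values` the subclass provides.
    def total(self):
        return sum(self.values)

    def mean(self):
        return self.total() / len(self.values)

class PlainData(Stats):
    # Plain storage model: values kept in an instance attribute
    def __init__(self, values):
        self.values = values

class LazyData(Stats):
    # Alternative storage model: values produced on access
    @property
    def values(self):
        return [1, 2, 3]
```

Both subclasses get <code>total()</code> and <code>mean()</code> for free, which mirrors how <code>A</code> and <code>B</code> reuse <code>Library</code> with different underlying storage.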
| 1 | 2016-10-11T09:58:30Z | [
"python",
"design-patterns"
] |
Set last non-zero element of each row to zero - NumPy | 39,959,435 | <p>I have an array A:</p>
<pre><code>A = array([[1, 2, 3,4], [5,6,7,0] , [8,9,0,0]])
</code></pre>
<p>I want to change the last non-zero of each row to 0 </p>
<pre><code>A = array([[1, 2, 3,0], [5,6,0,0] , [8,0,0,0]])
</code></pre>
<p>How do I write the code for any n*m numpy array?
Thanks, S ;-)</p>
| 3 | 2016-10-10T13:27:06Z | 39,959,511 | <p><strong>Approach #1</strong></p>
<p>One approach based on <code>cumsum</code> and <code>argmax</code> -</p>
<pre><code>A[np.arange(A.shape[0]),(A!=0).cumsum(1).argmax(1)] = 0
</code></pre>
<p>Sample run -</p>
<pre><code>In [59]: A
Out[59]:
array([[2, 0, 3, 4],
[5, 6, 7, 0],
[8, 9, 0, 0]])
In [60]: A[np.arange(A.shape[0]),(A!=0).cumsum(1).argmax(1)] = 0
In [61]: A
Out[61]:
array([[2, 0, 3, 0],
[5, 6, 0, 0],
[8, 0, 0, 0]])
</code></pre>
<p><strong>Approach #2</strong></p>
<p>One more based on <code>argmax</code> and hopefully more efficient -</p>
<pre><code>A[np.arange(A.shape[0]),A.shape[1] - 1 - (A[:,::-1]!=0).argmax(1)] = 0
</code></pre>
<hr>
<p><strong>Explanation</strong></p>
<p>One of the uses of <code>argmax</code> is to get the ID of the <strong>first</strong> occurrence of the <code>max</code> element along an axis in an array. In the first approach we get the cumsum along the rows and get the first max ID, which represents the last non-zero element. This is because <code>cumsum</code> on the leftover elements won't increase the sum value after that last non-zero element.</p>
<p>Let's re-run that case in a bit more detail -</p>
<pre><code>In [105]: A
Out[105]:
array([[2, 0, 3, 4],
[5, 6, 7, 0],
[8, 9, 0, 0]])
In [106]: (A!=0)
Out[106]:
array([[ True, False, True, True],
[ True, True, True, False],
[ True, True, False, False]], dtype=bool)
In [107]: (A!=0).cumsum(1)
Out[107]:
array([[1, 1, 2, 3],
[1, 2, 3, 3],
[1, 2, 2, 2]])
In [108]: (A!=0).cumsum(1).argmax(1)
Out[108]: array([3, 2, 1])
</code></pre>
<p>Finally, we use <code>fancy-indexing</code> to use those as the column indices to set appropriate elements in <code>A</code>.</p>
<p>In the second approach, when we use <code>argmax</code> on the boolean array, we simply get the first occurrence of <code>True</code>, which we used on a row-flipped version of the input array. As such, we would have the last non-zero element in the original order. The rest of the idea is the same.</p>
| 4 | 2016-10-10T13:31:49Z | [
"python",
"numpy",
"theano"
] |
Speed up scipy ndimage measurements applied on a numpy 3-d | 39,959,445 | <p>I have multiple large labeled numpy 2d arrays (10 000x10 000). For each label (connected cells with the same number) I want to calculate multiple measurements based on the values of another numpy 3-d array (mean, std., max, etc.). This is possible with the scipy.ndimage.labeled_comprehension tool if the 3-d numpy array is converted to 2-d. However, as the number of labels and the size of the arrays are rather big, the calculations take quite some time. My current code seems redundant as I am now iterating over the same labels for each 3rd dimension of the input image. I am wondering if there are ways to speed up my code (for example by combining the three scipy.ndimage.labeled_comprehension calculations into a single calculation).</p>
<p><em>Using a test dataset of shape (4200,3000,3) and 283047 labels the calculations took 10:34 minutes</em></p>
<hr>
<p><strong>Test data</strong></p>
<pre><code>example_labels=np.array([[1, 1, 3, 3],
[1, 2, 2, 3],
[2, 2, 4, 4],
[5, 5, 5, 4]])
unique_labels=np.unique(example_labels)
value_array=np.arange(48).reshape(4,4,3)
</code></pre>
<p><strong>Current code and desired output</strong></p>
<pre><code>def mean_std_measurement(x):
xmean = x.mean()
xstd = x.std()
vals.append([xmean,xstd])
def calculate_measurements(labels, unique_labels, value_arr):
global vals
vals=[]
ndimage.labeled_comprehension(value_array[:,:,0],labels,unique_labels,mean_std_measurement,float,-1)
val1=np.array(vals)
vals=[]
ndimage.labeled_comprehension(value_array[:,:,1],labels,unique_labels,mean_std_measurement,float,-1)
val2=np.array(vals)
vals=[]
ndimage.labeled_comprehension(value_array[:,:,2],labels,unique_labels,mean_std_measurement,float,-1)
val3=np.array(vals)
return np.column_stack((unique_labels,val1,val2,val3))
>>> print calculate_measurements(example_labels,unique_labels,value_array)
array([[ 1. , 5. , 5.09901951, 6. ,
5.09901951, 7. , 5.09901951],
[ 2. , 21. , 4.74341649, 22. ,
4.74341649, 23. , 4.74341649],
[ 3. , 12. , 6.4807407 , 13. ,
6.4807407 , 14. , 6.4807407 ],
[ 4. , 36. , 6.4807407 , 37. ,
6.4807407 , 38. , 6.4807407 ],
[ 5. , 39. , 2.44948974, 40. ,
2.44948974, 41. , 2.44948974]])
</code></pre>
| 2 | 2016-10-10T13:27:38Z | 39,962,238 | <p>The list appends in mean_std_measurement are probably forming a bottleneck when dealing with large arrays. <code>labeled_comprehension</code> will return an array on its own, so a first step is to just let scipy handle the array construction under the hood. </p>
<p><code>labeled_comprehension</code> can only apply functions that output a single value---I suspect this is why you were using the list construction in the first place---but we can cheat this by having the function output a complex value. Another option would be to use structured datatypes as the output, which would be necessary if we were returning more than 2 values.</p>
<pre><code>import numpy as np
from scipy import ndimage
example_labels=np.array([[1, 1, 3, 3],
[1, 2, 2, 3],
[2, 2, 4, 4],
[5, 5, 5, 4]])
unique_labels=np.unique(example_labels)
value_array=np.arange(48).reshape(4,4,3)
# return a complex number to get around labeled_comprehension limitations
def mean_std_measurement(x):
xmean = x.mean()
xstd = x.std()
return np.complex(xmean, xstd)
def calculate_measurements(labels, unique_labels, value_array):
val1 = ndimage.labeled_comprehension(value_array[:,:,0],labels,unique_labels,mean_std_measurement,np.complex,-1)
val2 = ndimage.labeled_comprehension(value_array[:,:,1],labels,unique_labels,mean_std_measurement,np.complex,-1)
val3 = ndimage.labeled_comprehension(value_array[:,:,2],labels,unique_labels,mean_std_measurement,np.complex,-1)
# convert the complex numbers back into reals
return np.column_stack((np.unique(labels),
val1.real,val1.imag,
val2.real,val2.imag,
val3.real,val3.imag))
</code></pre>
| 0 | 2016-10-10T16:01:10Z | [
"python",
"numpy",
"scipy",
"vectorization",
"ndimage"
] |
Open unique secondary window with Tkinter | 39,959,470 | <p>I'm having trouble when I open a secondary window. Now I'm just creating a toplevel window with a button, and I need to open the same secondary window if I click the button (not generate a new instance).</p>
<p>What is the best way to generate a single secondary window and not generate a new window instance?</p>
<p>I leave the code that I'm actually working on:</p>
<pre><code>import tkinter
class LogWindow():
def __init__(self, parent):
self.parent = parent
self.frame = tkinter.Frame(self.parent)
class MainWindow(tkinter.Frame):
def __init__(self, parent):
tkinter.Frame.__init__(self, parent)
self.parent = parent
frame = tkinter.Frame(self, borderwidth=1)
frame.pack(fill=tkinter.BOTH, expand=True, padx=5, pady=5)
self.LogButton = tkinter.Button(frame, text="Log Viewer", command= self.openLogWindow)
self.LogButton.grid(sticky=tkinter.E+tkinter.W)
self.pack(fill=tkinter.BOTH,expand=True)
def openLogWindow(self):
self.logWindow = tkinter.Toplevel(self.parent)
self.app = LogWindow(self.logWindow)
def main():
global app, stopRead
root = tkinter.Tk()
root.geometry("300x300")
app = MainWindow(root)
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
<p>Maybe I need to have a single instance of a Toplevel class and call show and close to show or hide the secondary window.</p>
| 0 | 2016-10-10T13:29:03Z | 40,021,793 | <p>Finally after some tests I've found how to solve that, thanks to the @furas response and some investigation about the Tkinter events with the protocol function.</p>
<p>I've got that working with:</p>
<pre><code>import tkinter
logWindowExists = False
class LogWindow():
def __init__(self, parent):
global logWindowExists, root
logWindowExists = True
self.parent = parent
self.frame = tkinter.Frame(self.parent)
def on_closing(self):
global logWindowExists
logWindowExists = False
self.parent.destroy()
class MainWindow(tkinter.Frame):
def __init__(self, parent):
tkinter.Frame.__init__(self, parent)
self.parent = parent
frame = tkinter.Frame(self, borderwidth=1)
frame.pack(fill=tkinter.BOTH, expand=True, padx=5, pady=5)
self.LogButton = tkinter.Button(frame, text="Log Viewer", command= self.openLogWindow)
self.LogButton.grid(sticky=tkinter.E+tkinter.W)
self.pack(fill=tkinter.BOTH,expand=True)
def openLogWindow(self):
if not logWindowExists:
self.logWindow = tkinter.Toplevel(self.parent)
self.app = LogWindow(self.logWindow)
self.logWindow.protocol("WM_DELETE_WINDOW", self.app.on_closing)
else:
self.logWindow.deiconify()
def main():
global app, stopRead, root
root = tkinter.Tk()
root.geometry("300x300")
app = MainWindow(root)
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
<p>Using a boolean to track whether the window exists, I can handle the case where the window is already open and just show the existing window instead of creating a new one.</p>
| 0 | 2016-10-13T13:02:01Z | [
"python",
"tkinter"
] |
How to convert signed string to its Binary equivalent in Python? | 39,959,491 | <p>I am using the itertools function to enter values into a list. The itertools function is taking the values as str, not as int. After that, I need to convert the values from the list to their binary equivalents. The problem arises when I need to convert a negative value, e.g. -5. My code is taking the "-" as a str, but I need it to consider it as a negative sign before the following numerical value. Does the concept of unsigned integer come into play?</p>
<p>My code is-</p>
<pre><code>L3= list(itertools.repeat("-1",5))
file= open(filename, 'w')
L3_1=[ ]
for item in L3:
x3=bytes(item,"ascii")
L3_1.append(' '.join(["{0:b}".format(x).zfill(8) for x in x3]))
for item in L3_1:
file.write("%s\n" % item)
file.close()
</code></pre>
| 2 | 2016-10-10T13:30:23Z | 39,960,770 | <p>It's not entirely clear what your problem is and what you want to achieve, so please correct me if I make wrong assumptions.</p>
<p>Anyways, converting integers to binary representation is easily done using <code>bin</code>. For example, <code>bin(5)</code> gives you <code>'0b101'</code>. Now, <code>bin(-5)</code> gives you <code>'-0b101'</code> - which is not what you expect when being used to binary from other languages, e.g. C.</p>
<p>The "problem" is that integers in python are <em>not</em> fixed size. There's no <code>int16</code>, <code>int32</code>, <code>uint8</code> and such. Python will just add bits as it needs to. That means a negative number <strong>cannot</strong> be represented by its complement - <code>0b11111011</code> is not <code>-5</code> as for <code>int8</code>, but 251. Since binaries are potentially infinite, there's no fixed position to place a sign bit. Thus, python has to add the explicit unary <code>-</code>. This is <em>different</em> from interpreting <code>-5</code> as the strings <code>"-"</code> and <code>"5"</code>.</p>
<p>If you want to get the binary representation for negative, <em>fixed size</em> integers, I think you have to do it by yourself. A function that does this could look like this:</p>
<pre><code>def bin_int(number, size=8):
    max_val = int('0b' + ('1' * (size - 1)), 2) # e.g. 0b01111111
    assert -max_val <= number <= max_val, 'Number out of range'
    if number >= 0:
        return bin(number)
    sign = int('0b1' + ('0' * size), 2) # 2**size, e.g. 0b100000000
    return bin(number + sign)
</code></pre>
<p>Now, to do what you initially wanted: write the binary representation of numbers to a file.</p>
<pre><code>output_list = [1, -1, -5, -64, 0] # iterable *containing* integers
with open(filename, 'w') as output_file: # with statement is safer for writing
for number in output_list:
output_file.write(bin_int(number) + '\n')
</code></pre>
<p>Or if you just want to check the result:</p>
<pre><code>print([bin_int(number) for number in [1, -1, -5, -64, -127]])
# ['0b1', '0b11111111', '0b11111011', '0b11000000', '0b10000001']
</code></pre>
<p>Note that if you want to strip the <code>0b</code>, you can do that via <code>bin_int(number)[2:]</code>, e.g. <code>output_file.write(bin_int(number)[2:] + '\n')</code>. This removes the first two characters from the string holding the binary representation.</p>
| 0 | 2016-10-10T14:37:09Z | [
"python"
] |
Python - Select elements from matrix within range | 39,959,519 | <p>I have a question regarding python and selecting elements within a range.</p>
<p>If I have an n x m matrix with n rows and m columns, I have a defined range for each column (so I have m min and max values).</p>
<p>Now I want to select those rows, where all values are within the range.</p>
<p>Looking at the following example:</p>
<pre><code>input = matrix([[1, 2], [3, 4],[5,6],[1,8]])
boundaries = matrix([[2,1],[8,5]])
#Note:
#col1min = 2
#col1max = 8
#col2min = 1
#col2max = 5
print(input)
desired_result = matrix([[3, 4]])
print(desired_result)
</code></pre>
<p>Here, 3 rows were discarded, because they contained values beyond the boundaries.</p>
<p>While I was able to get values within one range for a given array, I did not manage to solve this problem efficiently.</p>
<p>Thank you for your help.</p>
| 0 | 2016-10-10T13:32:14Z | 39,961,122 | <p>I believe that there is more elegant solution, but i came to this:</p>
<pre><code>def foo(data, boundaries):
zipped_bounds = list(zip(*boundaries))
output = []
for item in data:
for index, bound in enumerate(zipped_bounds):
if not (bound[0] <= item[index] <= bound[1]):
break
else:
output.append(item)
return output
data = [[1, 2], [3, 4], [5, 6], [1, 8]]
boundaries = [[2, 1], [8, 5]]
foo(data, boundaries)
</code></pre>
<p>Output:</p>
<pre><code>[[3, 4]]
</code></pre>
<p>And i know that there is not checking and raising exceptions if the sizes of arrays won't match each concrete size. I leave it OP to implement this.</p>
| 0 | 2016-10-10T14:55:24Z | [
"python"
] |
Python - Select elements from matrix within range | 39,959,519 | <p>I have a question regarding python and selecting elements within a range.</p>
<p>If I have an n x m matrix with n rows and m columns, I have a defined range for each column (so I have m min and max values).</p>
<p>Now I want to select those rows, where all values are within the range.</p>
<p>Looking at the following example:</p>
<pre><code>input = matrix([[1, 2], [3, 4],[5,6],[1,8]])
boundaries = matrix([[2,1],[8,5]])
#Note:
#col1min = 2
#col1max = 8
#col2min = 1
#col2max = 5
print(input)
desired_result = matrix([[3, 4]])
print(desired_result)
</code></pre>
<p>Here, 3 rows were discarded, because they contained values beyond the boundaries.</p>
<p>While I was able to get values within one range for a given array, I did not manage to solve this problem efficiently.</p>
<p>Thank you for your help.</p>
| 0 | 2016-10-10T13:32:14Z | 39,963,139 | <p>Your example data syntax <code>matrix([[],..])</code> is not correct, so it needs to be restructured like this:</p>
<pre><code>matrix = [[1, 2], [3, 4],[5,6],[1,8]]
bounds = [[2,1],[8,5]]
</code></pre>
<p>I'm not sure exactly what you mean by "efficient", but this solution is readable, computationally efficient, and modular:</p>
<pre><code># Test columns in row against column bounds or first bounds
def row_in_bounds(row, bounds):
for ci, colVal in enumerate(row):
bi = ci if len(bounds[0]) >= ci + 1 else 0
if not bounds[1][bi] >= colVal >= bounds[0][bi]:
return False
return True
# Use a list comprehension to apply test to n rows
print ([r for r in matrix if row_in_bounds(r,bounds)])
>>>[[3, 4]]
</code></pre>
<p>First we create a reusable test function for rows, accepting a list of bounds lists; tuples are probably more appropriate, but I stuck with lists as per your specification. </p>
<p>Then apply the test to your matrix of n rows with a list comprehension. If n exceeds the bounds column index or the bounds column index is falsey, use the first set of bounds provided.</p>
<p>Keeping the row iterator out of the row parser function allows you to do things like get min/max from the filtered elements as required. This way you will not need to define a new function for every manipulation of the data required.</p>
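<p>A compact stdlib-only alternative (my sketch, same semantics when every row and the bounds have equal width):</p>

```python
def rows_in_bounds(matrix, bounds):
    # bounds[0] holds the per-column minimums, bounds[1] the maximums,
    # mirroring the structure used in the question.
    lows, highs = bounds
    return [row for row in matrix
            if all(lo <= v <= hi for v, lo, hi in zip(row, lows, highs))]
```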
| 0 | 2016-10-10T16:53:41Z | [
"python"
] |
Creating a Selenium module to replace testcomplete | 39,959,677 | <p>I'm trying to replace TestComplete with Selenium for our automated tests, ideally without having to re-write all of the different functions.</p>
<p>The plan is to replicate the Aliases structure for finding elements within TestComplete with a python module that can be dropped in to do the finding of web elements. </p>
<p>I've been able to get this to work for a single page, but due to how python seems to work with imports I've so far been unable to find a way to import beyond this.</p>
<p>Example code within test complete would be.</p>
<p>Aliases.LoginPage.Username.SetText("username")</p>
<p>To replicate this in selenium I've created a Module called Aliases, with a class named LoginPage, containing a property called username.</p>
<p>so Aliases.py looks like below:</p>
<pre><code> from selenium import webdriver
driver = webdriver.Chrome("C:\\BrowserDrivers\\chromedriver.exe")
class LoginPage:
_username = driver.find_element_by_id("txtUser")
    @property
    def username(self):
        return type(self)._username
</code></pre>
<p>and is called by:</p>
<pre><code> import Aliases
login_page = Aliases.LoginPage()
login_page.username.send_keys("username")
</code></pre>
<p>this works fine with just the one page, however if I add a second class to this module with the find_element code for an element on another page I get an "element not found exception"</p>
<p>Debugging has shown that this is because python is trying to set all the class properties within the Aliases module when it is imported, so of course elements not on the login page won't be found.
This occurs even if I specify the class within the Aliases module to import.</p>
<p>Is there a way for me to tell Python to only set the properties for the class being imported
or
Another way for me to structure the project to replicate the way elements are being found within the current coded test?</p>
<p>Alternatively, am I approaching this the wrong way and I should just start amending the current code we have to be selenium specific? </p>
| -1 | 2016-10-10T13:39:56Z | 39,961,837 | <p>I don't know the scale of the changes you are talking about but my guess is that it would be faster (and better) to just rewrite the tests than to try to write code so that you don't have to make changes. In the long run, rewriting will likely require less maintenance and be easier to debug. So take the time to rewrite them properly. I did something like this about a year ago when we switched from TestComplete/Javascript to Selenium/Java. I had to learn Java and rewrite everything in Selenium but in the end I'm really happy with the change. Having a strongly typed language, a better IDE (Eclipse), and so on have made me much more productive and the tests are faster, more resilient, and easier to maintain and debug.</p>
| 0 | 2016-10-10T15:37:29Z | [
"python",
"selenium",
"automated-tests",
"testcomplete"
] |
manifold.Isomap(n_neighbors, n_components).fit_transform - SciKit Learn | 39,959,713 | <p>I have a dataframe called X_train and I am performing Isomap dimensionality reduction using SciKit Learn. The code is the following:</p>
<pre><code> from sklearn import manifold
iso = manifold.Isomap(n_neighbors=4, n_components=2)
iso.fit(X_train)
manifold.Isomap(eigen_solver='auto', max_iter=None, n_components=2, n_neighbors=4,neighbors_algorithm='auto', path_method='auto', tol=0)
X2_train = manifold.Isomap(n_neighbors, n_components).fit_transform(X_train)
</code></pre>
<p>I am getting an error message:</p>
<pre><code> Traceback (most recent call last):
File "<ipython-input-133-12f224bb5f59>", line 1, in <module>
X2_train = manifold.Isomap(n_neighbors, n_components).fit_transform(X_train)
NameError: name 'n_neighbors' is not defined
</code></pre>
<p>What am I doing wrong? Could you help me? Thank you!</p>
| -1 | 2016-10-10T13:41:53Z | 39,959,887 | <p>It raises this error because you did not create any variables called <code>n_neighbors</code> or <code>n_components</code>. </p>
<p>As for the variable <code>iso</code>, you have to indicate the values of your parameters: <code>Isomap(n_neighbors=4, n_components=2)</code></p>
<p>Try this out :</p>
<pre><code>X2_train = manifold.Isomap(n_neighbors=4, n_components=2).fit_transform(X_train)
</code></pre>
| 0 | 2016-10-10T13:50:49Z | [
"python",
"scikit-learn"
] |
Amending datetime format while parsing from csv read - pandas | 39,959,925 | <p>I am reading a csv file (SimResults_Daily.csv) into pandas, that is structured as follows:</p>
<pre><code>#, Job_ID, Date/Time, value1, value2,
0, ID1, 05/01 24:00:00, 5, 6
1, ID2, 05/02 24:00:00, 6, 15
2, ID3, 05/03 24:00:00, 20, 21
</code></pre>
<p>etc.
As the datetime format cannot be read by pandas parse_dates, I have figured out I can use the command: <code>str.replace('24:','00:')</code>.</p>
<p>My code currently is:</p>
<pre><code>dateparse = lambda x: pd.datetime.strptime(x, '%m-%d %H:%M:%S')
df = pd.read_csv('SimResults_Daily.csv',
skipinitialspace=True,
date_parser=dateparse,
parse_dates=['Date/Time'],
index_col=['Date/Time'],
usecols=['Job_ID',
'Date/Time',
'value1',
'value2'],
header=0)
</code></pre>
<p>Where in the code should I implement the <code>str.replace</code> command?</p>
| 1 | 2016-10-10T13:52:46Z | 39,959,989 | <p>You can use:</p>
<pre><code>import pandas as pd
import io
temp=u"""#,Job_ID,Date/Time,value1,value2,
0,ID1,05/01 24:00:00,5,6
1,ID2,05/02 24:00:00,6,15
2,ID3,05/03 24:00:00,20,21"""
dateparse = lambda x: pd.datetime.strptime(x.replace('24:','00:'), '%m/%d %H:%M:%S')
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp),
skipinitialspace=True,
date_parser=dateparse,
parse_dates=['Date/Time'],
index_col=['Date/Time'],
usecols=['Job_ID', 'Date/Time', 'value1', 'value2'],
header=0)
print (df)
Job_ID value1 value2
Date/Time
1900-05-01 ID1 5 6
1900-05-02 ID2 6 15
1900-05-03 ID3 20 21
</code></pre>
<hr>
<p>Another solution with double <code>replace</code> - <code>year</code> can be added also:</p>
<pre><code>dateparse = lambda x: x.replace('24:','00:').replace(' ','/1900 ')
df = pd.read_csv(io.StringIO(temp),
skipinitialspace=True,
date_parser=dateparse,
parse_dates=['Date/Time'],
index_col=['Date/Time'],
usecols=['Job_ID', 'Date/Time', 'value1', 'value2'],
header=0)
print (df)
Job_ID value1 value2
Date/Time
1900-05-01 ID1 5 6
1900-05-02 ID2 6 15
1900-05-03 ID3 20 21
</code></pre>
<hr>
<pre><code>dateparse = lambda x: x.replace('24:','00:').replace(' ','/2016 ')
df = pd.read_csv(io.StringIO(temp),
skipinitialspace=True,
date_parser=dateparse,
parse_dates=['Date/Time'],
index_col=['Date/Time'],
usecols=['Job_ID', 'Date/Time', 'value1', 'value2'],
header=0)
print (df)
Job_ID value1 value2
Date/Time
2016-05-01 ID1 5 6
2016-05-02 ID2 6 15
2016-05-03 ID3 20 21
</code></pre>
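<p>For reference, the same replacement can also be done after reading, on the whole column at once (same caveat as the lambda above: this assumes '24:' only ever occurs as the hour):</p>

```python
import io
import pandas as pd

csv_text = u"""#,Job_ID,Date/Time,value1,value2,
0,ID1,05/01 24:00:00,5,6
1,ID2,05/02 24:00:00,6,15
2,ID3,05/03 24:00:00,20,21"""

df = pd.read_csv(io.StringIO(csv_text), skipinitialspace=True,
                 usecols=['Job_ID', 'Date/Time', 'value1', 'value2'], header=0)
# fix the hour, then parse the whole column in one vectorized call
df['Date/Time'] = pd.to_datetime(df['Date/Time'].str.replace('24:', '00:'),
                                 format='%m/%d %H:%M:%S')
df = df.set_index('Date/Time')
print(df.index[0])  # 1900-05-01 00:00:00 (the year defaults to 1900)
```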
| 2 | 2016-10-10T13:55:41Z | [
"python",
"date",
"csv",
"parsing",
"pandas"
] |
How to inject Django template code into template through {{ variable/method }}? | 39,960,259 | <p>I have a model Reservation which I use in many templates. It's handy to create it's own <code>HTML/Django</code> snippet which is being injected into the template through variable/model method. </p>
<p>The raw <code>HTML</code> is correct using the method but Django template language isn't interpreted correctly. </p>
<p>This is a Reservation method:</p>
<pre><code>def get_html_description(self):
return """<ul>
<li><b>ID:</b> {{ reservation.id }}</li>
<hr>
<li><b>From:</b> {{ reservation.get_text_destination_from }}</li>
<li><b>To:</b> {{ reservation.get_text_destination_to }}</li>
<hr>
<li><b>Date:</b> {{ reservation.get_date }}</li>
<li><b>Time:</b> {{ reservation.get_time }}</li>
</ul>"""
</code></pre>
<p>Now I'm trying to inject this code into the template:</p>
<pre><code><div class="events">
{% for reservation in data.1 %}
<div class="event">
<h4>{{ reservation.get_text_destination_from }} to {{ reservation.get_text_destination_to }}</h4>
<div class="desc">
{% autoescape off %}{{ reservation.get_html_description }}{% endautoescape %}
</div>...
...
</code></pre>
<p>Unfortunately it renders something like this:</p>
<p><a href="http://i.stack.imgur.com/hP2a1.png" rel="nofollow"><img src="http://i.stack.imgur.com/hP2a1.png" alt="enter image description here"></a></p>
<p>Do you know what to do? I've already tried filter <code>|safe</code> and <code>{% autoescape off %}</code></p>
| 1 | 2016-10-10T14:09:41Z | 39,960,369 | <p>Simply don't ask the template to do it, if you really want to continue to use this method then just do string formatting, and mark it as safe.</p>
<pre><code>from django.utils.safestring import mark_safe

desc = """<ul>
<li><b>ID:</b> %(id)s</li>
</ul>""" % { 'id': self.id }
return mark_safe(desc)
</code></pre>
<p>Etc.</p>
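<p>One more point worth noting: if any interpolated field can contain user-entered text, escape it before marking the result safe (Django's <code>format_html()</code> does both steps for you). A minimal standard-library sketch of the idea, with a made-up helper name and field values:</p>

```python
import html

def build_description(pk, destination_from):
    # escape the untrusted values, then interpolate into the trusted markup;
    # django.utils.html.format_html() follows the same pattern and marks the
    # result safe for the template layer
    return "<ul><li><b>ID:</b> %s</li><li><b>From:</b> %s</li></ul>" % (
        html.escape(str(pk)), html.escape(destination_from))

print(build_description(7, "A & B <script>"))
```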
| 1 | 2016-10-10T14:14:54Z | [
"python",
"html",
"django",
"templates",
"django-templates"
] |
How to inject Django template code into template through {{ variable/method }}? | 39,960,259 | <p>I have a model Reservation which I use in many templates. It's handy to create it's own <code>HTML/Django</code> snippet which is being injected into the template through variable/model method. </p>
<p>The raw <code>HTML</code> is correct using the method but Django template language isn't interpreted correctly. </p>
<p>This is a Reservation method:</p>
<pre><code>def get_html_description(self):
return """<ul>
<li><b>ID:</b> {{ reservation.id }}</li>
<hr>
<li><b>From:</b> {{ reservation.get_text_destination_from }}</li>
<li><b>To:</b> {{ reservation.get_text_destination_to }}</li>
<hr>
<li><b>Date:</b> {{ reservation.get_date }}</li>
<li><b>Time:</b> {{ reservation.get_time }}</li>
</ul>"""
</code></pre>
<p>Now I'm trying to inject this code into the template:</p>
<pre><code><div class="events">
{% for reservation in data.1 %}
<div class="event">
<h4>{{ reservation.get_text_destination_from }} to {{ reservation.get_text_destination_to }}</h4>
<div class="desc">
{% autoescape off %}{{ reservation.get_html_description }}{% endautoescape %}
</div>...
...
</code></pre>
<p>Unfortunately it renders something like this:</p>
<p><a href="http://i.stack.imgur.com/hP2a1.png" rel="nofollow"><img src="http://i.stack.imgur.com/hP2a1.png" alt="enter image description here"></a></p>
<p>Do you know what to do? I've already tried filter <code>|safe</code> and <code>{% autoescape off %}</code></p>
| 1 | 2016-10-10T14:09:41Z | 39,960,404 | <p>What you are asking for is double substitution and I don't think the Django templating engine will do that. Since you are pulling the data from a <code>Reservation</code> instance, I would just fill it in using string substitution. For example:</p>
<pre><code> return """<ul>
<li><b>ID:</b> {pk}</li>
<hr>
<li><b>From:</b> {destination_from}</li>
...
</ul>""".format(pk=self.id,
destination_from=self.get_text_destination_from())
</code></pre>
| 1 | 2016-10-10T14:16:51Z | [
"python",
"html",
"django",
"templates",
"django-templates"
] |
Difference between the Python built-in pow and math pow for large integers | 39,960,275 | <p>I find that for large integers, the math.pow does not successfully translate to its integer version. I got a buggy <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow">Karatsuba multiplication</a> when implemented with math.pow. </p>
<p>For instance:</p>
<pre><code>>>> a_size = 32
>>> pow(10,a_size) * 1024
102400000000000000000000000000000000
>>> math.pow(10,a_size) * 1024
1.024e+35
>>> int(math.pow(10,a_size) * 1024)
102400000000000005494950097298915328
</code></pre>
<p>I went with 10 ** a_size with correct results for large integers.</p>
<p>For floats, visit <a href="http://stackoverflow.com/questions/10282674/difference-between-the-built-in-pow-and-math-pow-for-floats-in-python">Difference between the built-in pow() and math.pow() for floats, in Python?</a></p>
<p>Please explain why this discrepancy is seen for math.pow. It is observed only from 10 power of 23 and higher.</p>
| 0 | 2016-10-10T14:10:22Z | 39,960,433 | <p><code>math.pow()</code> always returns a floating-point number, so you are limited by the precision of <code>float</code> (almost always an IEEE 754 double precision number). The built-in <code>pow()</code> on the other hand will use Python's arbitrary precision integer arithmetic when called with integer arguments.</p>
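<p>A quick way to see the precision limit concretely (reusing the question's numbers):</p>

```python
import math

a_size = 32
exact = pow(10, a_size) * 1024             # arbitrary-precision int arithmetic
approx = int(math.pow(10, a_size) * 1024)  # routed through a 53-bit float

print(exact)            # 102400000000000000000000000000000000
print(approx == exact)  # False: a float keeps only ~15-16 significant digits
```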
| 1 | 2016-10-10T14:19:10Z | [
"python",
"math",
"largenumber",
"karatsuba"
] |
Python, scipy : minimize multivariable function in integral expression | 39,960,320 | <p>How can I minimize a function (unconstrained) with respect to a[0] and a[1]?
Example (this is a simple example so that I can understand scipy, numpy and py):</p>
<pre><code>import numpy as np
from scipy.integrate import *
from scipy.optimize import *
def function(a):
return(quad(lambda t: ((np.cos(a[0]))*(np.sin(a[1]))*t),0,3))
</code></pre>
<p>i tried:</p>
<pre><code>l=np.array([0.1,0.2])
res=minimize(function,l, method='nelder-mead',options={'xtol': 1e-8, 'disp': True})
</code></pre>
<p>but I get errors.
In MATLAB I get the expected results.</p>
<p>any idea ?</p>
<p>thanks in advance</p>
| 0 | 2016-10-10T14:12:28Z | 39,962,346 | <p>This is just a guess, because you haven't included enough information in the question for anyone to really know what the problem is. Whenever you ask a question about code that generates an error, always include the complete error message in the question. Ideally, you should include a <a href="http://stackoverflow.com/help/mcve">minimal, complete and verifiable example</a> that we can run to reproduce the problem. Currently, you define <code>function</code>, but later you use the undefined function <code>chirplet</code>. That makes it a little bit harder for anyone to understand your problem.</p>
<p>Having said that...</p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html" rel="nofollow"><code>scipy.integrate.quad</code></a> returns two values: the estimate of the integral, and an estimate of the absolute error of the integral. It looks like you haven't taken this into account in <code>function</code>. Try something like this:</p>
<pre><code>def function(a):
intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
return intgrl
</code></pre>
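<p>Putting it together, a complete runnable sketch (with default Nelder-Mead tolerances; analytically the integral is cos(a[0])*sin(a[1])*4.5, so the minimum value is -4.5):</p>

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def function(a):
    intgrl, abserr = quad(lambda t: np.cos(a[0]) * np.sin(a[1]) * t, 0, 3)
    return intgrl

res = minimize(function, np.array([0.1, 0.2]), method='Nelder-Mead')
print(res.fun)  # close to -4.5, the analytic minimum
```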
| 2 | 2016-10-10T16:06:47Z | [
"python",
"numpy",
"optimization",
"scipy"
] |
PyMySQL Python 3 Django VirtualEnv MySQLdb | 39,960,367 | <p>So I had this running yesterday when I was running the virtualenv out of my home directory. I've run into this problem a number of times now without a clear answer. I've done a lot of research on this but there seems to be no black and white answer. Today I moved my virtualenv to <code>/usr/local/virtualenvs</code> to better manage my projects and voilà, it's broken again. <code>Error loading MySQLdb module: No module named 'MySQLdb'</code> is the bane of my existence right now. I can't seem to fix it this time. I have even tried duplicating what I did back into my home directory and have the very same problem. I haven't deleted any packages from my system. I installed the very same packages in the virtualenv. At this point I'm lost. I've even written down the steps I took <em>from a fresh install</em> to get to a working virtualenv. I certainly don't want to go back to a fresh install. Please help :)</p>
<p><strong>Virtual Environment:</strong></p>
<pre><code>Ubuntu Server 16.04 LTS
python 3.5
django 1.10.2
pymysql 0.7.9
</code></pre>
<p><strong>My settings.py</strong></p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'auth',
'USER': 'user',
'PASSWORD': 'pass',
'HOST': 'localhost',
'PORT': '3306',
},
'cogs': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'cogs',
'USER': 'user',
'PASSWORD': 'pass',
'HOST': 'localhost',
'PORT': '3306',
}
}
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>Traceback (most recent call last):
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 25, in <module>
import MySQLdb as Database
ImportError: No module named 'MySQLdb'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 341, in execute
django.setup()
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/apps/config.py", line 199, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/contrib/auth/models.py", line 4, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/contrib/auth/base_user.py", line 52, in <module>
class AbstractBaseUser(models.Model):
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/models/base.py", line 119, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/models/base.py", line 316, in add_to_class
value.contribute_to_class(cls, name)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/models/options.py", line 214, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/__init__.py", line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/utils.py", line 211, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/utils.py", line 115, in load_backend
return import_module('%s.base' % backend_name)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/virtualenvs/rtservice/serviceenv/lib/python3.5/site-packages/django/db/backends/mysql/base.py", line 28, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named 'MySQLdb'
</code></pre>
<p>Thanks in advance.</p>
| 0 | 2016-10-10T14:14:42Z | 39,961,125 | <p>Added</p>
<pre><code># let PyMySQL register itself as the MySQLdb module that
# Django's MySQL backend tries to import
try:
import pymysql
pymysql.install_as_MySQLdb()
except ImportError:
pass
</code></pre>
<p>to <code>manage.py</code> and <code>wsgi.py</code></p>
<p>Now working.</p>
| 0 | 2016-10-10T14:55:32Z | [
"python",
"mysql",
"django",
"virtualenv"
] |
Accessing test database in (Selenium based) functional tests for a Pyramid + sqlalchemy application | 39,960,412 | <p>I have managed to create functional tests using Splinter (Selenium) and <code>StoppableWSGIServer</code>. Here's my code:</p>
<pre><code>...
engine = engine_from_config(settings, prefix='sqlalchemy.')
DBSession.configure(bind=engine)
Base.metadata.create_all(engine)
...
class FunctionalTest(...):
...
def setUp(self):
...
self.server = http.StopableWSGIServer.create(app)
self.server.wait()
self.browser = splinter.Browser("chrome")
def tearDown(self):
...
self.browser.quit()
self.server.shutdown()
</code></pre>
<p>Where <code>app</code> is created using <code>Configurator.make_wsgi_app</code>.</p>
<p>When running test cases using my <code>FunctionalTest</code>, the browsers shows up, the server starts the database works, the tables are created. However, the test server can't access rows created in the test cases, even though both are initialized using the same settings file.</p>
<p>I have tried mocking <code>DBSession</code> and <code>engine</code> in my <code>models.py</code> and <code>views.py</code> and this way both <code>DBSession</code> and <code>Base.metadata.bind</code> have the very same <code>id()</code> in both my test cases and my view functions. (So, to my understanding, are the very same objects) Yet, querying for rows created in the test cases returns <code>[]</code> in the view, and the tests fail. I have called <code>DBSession.flush()</code> after creating the rows and <code>DBSession</code> is defined in <code>models.py</code> like this:</p>
<pre><code>DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
</code></pre>
<p>How do I make the test server see rows created in the test case code?</p>
| 1 | 2016-10-10T14:17:22Z | 39,961,057 | <p>This seems to do the trick:</p>
<pre><code>import transaction
transaction.commit()
</code></pre>
<p>However I'm not absolutely certain about my implementation, so I'm still awaiting answers. Also, this way I have to <code>DBSession.drop_all</code> before each test case.</p>
| 1 | 2016-10-10T14:51:12Z | [
"python",
"selenium",
"sqlalchemy",
"pyramid",
"functional-testing"
] |
read escaped character \t as '\t' instead of '\\t' Python ConfigParser | 39,960,450 | <p>I have a config.ini file containing <code>delimiter = \t</code> which I now want to read using the Python 3 ConfigParser.
However, the resulting string is <code>'\\t'</code> instead of <code>'\t'</code> which breaks my program.
Is there a more elegant option to solve this problem instead of just manually stripping the extra '\' from every variable containing an escaped character?
I cannot find an option for ConfigParser().read() to not escape the backslash it finds in the file.</p>
| 0 | 2016-10-10T14:20:16Z | 39,960,982 | <p>Python 3 has a 'unicode_escape' codec. It decodes <em>bytes</em> (in Python 3, <code>str</code> has no <code>.decode()</code> method), so encode the string first:</p>
<pre><code>r"a\tb".encode().decode('unicode_escape')
'a\tb'
</code></pre>
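<p>Applied to the ConfigParser case from the question (section and option names made up):</p>

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[main]\ndelimiter = \\t\n")   # the ini file contains: delimiter = \t

raw = cfg["main"]["delimiter"]                 # '\\t' -- a backslash and a 't'
delimiter = raw.encode().decode("unicode_escape")  # '\t' -- a real tab
print(repr(delimiter))  # '\t'
```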
<p>Sources:</p>
<p><a href="https://bytes.com/topic/python/answers/37952-escape-chars-string" rel="nofollow">https://bytes.com/topic/python/answers/37952-escape-chars-string</a></p>
<p><a href="http://stackoverflow.com/questions/14820429/how-do-i-decodestring-escape-in-python3#14820462">how do I .decode('string-escape') in Python3?</a></p>
| 0 | 2016-10-10T14:47:11Z | [
"python",
"escaping",
"special-characters",
"configparser"
] |
Installing opencv 3.1.0 for python2.7 on kubuntu | 39,960,686 | <p>I try to install OpenCV for Python 2.7 on my Ubuntu but nothing seems to work.
Whatever I do, I can't import cv2</p>
<p>I tried to download the sources and cmake / make / make install (as described here <a href="http://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html" rel="nofollow">http://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html</a>). I have no error during the process. But when I launch python and try to import cv2 I have the following error:</p>
<blockquote>
<p>Traceback (most recent call last):
File "", line 1, in
ImportError: No module named cv2</p>
</blockquote>
<p>I have read a lot of Stack Overflow posts with no luck, so I figured I should ask myself.</p>
| 0 | 2016-10-10T14:32:33Z | 39,972,468 | <p>OK, I found the issue: the make install did not put the cv2.so in the correct location.
I had to manually put it in my lib/python2.7/site-packages/</p>
| 0 | 2016-10-11T07:40:09Z | [
"python",
"opencv",
"ubuntu"
] |
Python time function inconsistency | 39,960,754 | <p>I was given an assignment where I had to create two small functions that give, with equal chance, "heads" or "tails" and, similarly for a thrown 6-faced dice, 1, 2, 3, 4, 5 or 6.</p>
<p>Important: I could NOT use randint or similar functions for this assignment.</p>
<p>So I've created these two functions that generate a 'pseudo-random number' utilizing the time function (first digit of the milliseconds) from the Python library:</p>
<pre><code>import time
def dice():
ctrl = False
while ctrl == False:
m = lambda: int(round(time.time() * 1000))
f = m()
d = abs(f) % 10
if d in range(1,7):
return d
ctrl = True
def coin():
m = lambda: int(round(time.time() * 1000))
f = m()
if f % 2 == 0:
return "Tails"
elif f == 0:
return "Tails"
else:
        return "Heads"  # (EDIT: I don't know why I typed "Dimes" before)
</code></pre>
<p>However I've observed a tendency to give 'Tails' over 'Heads', so I've created a function to test the percentage of 'Tails' and 'Heads' in 100 throws:</p>
<pre><code>def _test():
ta = 0
he = 0
x = 100
while x > 0:
c = coin()
if c == "Tails":
ta += 1
else:
he += 1
x -= 1
time.sleep(0.001)
print("Tails:%s Heads:%s" % (ta, he))
</code></pre>
<p>The result of the test was (for several times):</p>
<pre><code>Tails:56 Heads:44
</code></pre>
<p>So I did the same thing with the dice function and the result was:</p>
<pre><code>1:20 2:20 3:10 4:20 5:10 6:20
</code></pre>
<p>So, as you can see, for some reason I could not infer - whether by some mistake of mine or for some other reason - the time function has a tendency to give fewer '3's and '5's, and running the test again with all the numbers (zeros, sevens, eights and nines included) I've come to see that this tendency extends to '0' and '7'.</p>
<p>I would be grateful for some insight and opinions on the matter.</p>
<p>EDIT:</p>
<p>Removing the round() function from <code>m = lambda: int(round(time.time() * 1000))</code> solved the problem - as answered by Makoto.</p>
| 4 | 2016-10-10T14:36:21Z | 39,961,127 | <p>Your utilization of <code>round</code> means that your coin flip function will tend towards even numbers if the values you get from your time-based random operation are equidistant from one another (i.e. you "flip" your coin more consistently every half second due to your computer internals).</p>
<p><a href="https://docs.python.org/3/library/functions.html#round" rel="nofollow">From the documentation</a>:</p>
<blockquote>
<p>For the built-in types supporting <code>round()</code>, values are rounded to the closest multiple of 10 to the power minus <em>ndigits</em>; if two multiples are equally close, rounding is done toward the even choice (so, for example, both <code>round(0.5)</code> and <code>round(-0.5)</code> are 0, and <code>round(1.5)</code> is 2).</p>
</blockquote>
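<p>That "round half to even" rule is easy to check directly in Python 3:</p>

```python
# ties are rounded toward the even neighbour ("banker's rounding")
print(round(0.5), round(1.5), round(2.5), round(3.5))  # 0 2 2 4
```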
<p>It appears that both of your methods suffer from this sort of bias; if they're executed too quickly after one another, or too close to a single timestamp, then you can tend to get <em>one</em> value out of it:</p>
<pre><code>>>> [dice() for x in range(11)]
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
>>> [coin() for x in range(11)]
['Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes', 'Dimes']
</code></pre>
<p>The only realistic thing that you could do is to regenerate the time sample if the values are sufficiently close to one another so that you don't run into time-based biases like this, or generate ten time samples and take the average of those instead. Principally, if your computer moves quickly enough and executes these functions fast enough, it <em>will</em> likely pull the same timestamp, which will lead to a strong time-based bias.</p>
| 2 | 2016-10-10T14:55:38Z | [
"python"
] |
Logarithmic slider with matplotlib | 39,960,791 | <p>I just implemented a slider in my plot which works great. (I used this example: <a href="http://matplotlib.org/examples/widgets/slider_demo.html" rel="nofollow">http://matplotlib.org/examples/widgets/slider_demo.html</a>) Now my question is if it is possible to make the slider logarithmic. My values range between 0 and 1 and I want to make changes from 0.01 to 0.02 and so on but also from 0.01 to 0.5. That is why I think a logarithmic scale would be nice. Also if this isn't doable with a slider do you then have other ideas how to implement this? </p>
| 2 | 2016-10-10T14:37:57Z | 39,960,977 | <p>You can simply <code>np.log()</code> the value of the slider. However then the label next to it would be incorrect. You need to manually set the text of <code>valtext</code> of the slider to the log-value:</p>
<pre><code>def update(val):
amp = np.log(slider.val)
slider.valtext.set_text(amp)
</code></pre>
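<p>A variant sketch that makes the whole slider logarithmic: let the slider run linearly over the base-10 exponent and map it through <code>10**val</code> (axis positions and ranges here are arbitrary; the Agg backend is used only so this runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)
slider_ax = plt.axes([0.2, 0.1, 0.6, 0.03])

# the slider's value is the exponent: -2 -> 0.01, 0 -> 1.0
slider = Slider(slider_ax, "value", -2.0, 0.0, valinit=-2.0)

def update(val):
    amp = 10 ** slider.val                  # the actual parameter value
    slider.valtext.set_text("%.4f" % amp)   # show it instead of the exponent

slider.on_changed(update)
slider.set_val(-1.0)  # simulate a user move; the label now reads 0.1000
```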
| 2 | 2016-10-10T14:47:05Z | [
"python",
"matplotlib"
] |
How to get tornado object? | 39,960,806 | <p>I want to get value of a tornado object with key</p>
<p>This is my code : </p>
<pre><code>beanstalk = beanstalkt.Client(host='host', port=port)
beanstalk.connect()
print("ok1")
beanstalk.watch('contracts')
stateTube = beanstalk.stats_tube('contracts', callback=show)
print("ok2")
ioloop = tornado.ioloop.IOLoop.instance()
ioloop.start()
print("ok3")
</code></pre>
<p>And this is the function <code>show()</code>:</p>
<pre><code>def show(s):
pprint(s['current-jobs-ready'])
ioloop.stop
</code></pre>
<p>When I look at the documentation I found this :
<a href="http://i.stack.imgur.com/qUJqb.png" rel="nofollow"><img src="http://i.stack.imgur.com/qUJqb.png" alt="enter image description here"></a></p>
<p>And when I execute this code, I have this:</p>
<pre><code>ok1
ok2
3
</code></pre>
<p>In fact I have the result I wanted ("3"), but I don't understand why my program continues running. Why doesn't the ioloop close? I don't get <code>ok3</code> when I run it; what can I do to close the ioloop and get <code>ok3</code>?</p>
| 0 | 2016-10-10T14:38:43Z | 39,961,325 | <p><code>beanstalk.stats_tube</code> is async, it returns a <code>Future</code> which represents a future result that has not yet been resolved.</p>
<p>As <a href="https://github.com/nephics/beanstalkt" rel="nofollow">the README says</a>, Your callback <code>show</code> will be executed with a dict that contains the resolved result. So you could define <code>show</code> like:</p>
<pre><code>from pprint import pprint

def show(stateTube):
    pprint(stateTube['current-jobs-ready'])
beanstalk.stats_tube('contracts', callback=show)
from tornado.ioloop import IOLoop
IOLoop.current().start()
</code></pre>
<p>Note that you pass <code>show</code>, not <code>show()</code>: you're passing the function itself, not calling the function and passing its return value.</p>
<p>The other way to resolve a Future, besides passing a callback, is to use it in a coroutine:</p>
<pre><code>from pprint import pprint
from tornado import gen
from tornado.ioloop import IOLoop
@gen.coroutine
def get_stats():
stateTube = yield beanstalk.stats_tube('contracts')
    pprint(stateTube['current-jobs-ready'])
loop = IOLoop.current()
loop.spawn_callback(get_stats)
loop.start()
</code></pre>
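<p>The future-then-result flow above can be sketched with the standard library's <code>asyncio</code> too (an analogy only - <code>beanstalkt</code> itself is Tornado-based, and the stats dict below is faked):</p>

```python
import asyncio

async def stats_tube(tube):
    # stand-in for an async client call whose result resolves later
    await asyncio.sleep(0)
    return {"current-jobs-ready": 3}

async def main():
    state = await stats_tube("contracts")  # like `yield` in the Tornado coroutine
    return state["current-jobs-ready"]

result = asyncio.run(main())
print(result)  # 3
```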
| 2 | 2016-10-10T15:06:08Z | [
"python",
"object",
"tornado",
"beanstalk"
] |
Cannot install a whl library | 39,960,828 | <p>When I try to install this OpenCV file, cmd gives me an error. I'm new to Python and I'm doing a school job; can someone help me?</p>
<p><img src="http://i.stack.imgur.com/rAxVV.png" alt=""></p>
| 0 | 2016-10-10T14:39:29Z | 39,960,852 | <p>Don't use <code>pip</code> from a python terminal, but from your system terminal/command line interface. </p>
| 1 | 2016-10-10T14:40:30Z | [
"python",
"install",
"pip"
] |
Fastest way to print this in python? | 39,960,877 | <p>Suppose I have a text file containing this, where the number on the left says how many of the characters of the right should be there: </p>
<pre><code>2 a
1 *
3 $
</code></pre>
<p>How would I get this output in the fastest time? </p>
<pre><code>aa*$$$
</code></pre>
<p>This is my code, but has N^2 complexity: </p>
<pre><code>f = open('a.txt')
line = ''
for item in f:
    item2 = item.split()
    num = int(item2[0])
    for i in range(num):
        line += item2[1]
print(line)
f.close()
</code></pre>
| -1 | 2016-10-10T14:41:52Z | 39,960,939 | <p>You can multiply strings to repeat them in Python:</p>
<p><code>"foo" * 3</code> gives you <code>foofoofoo</code>.</p>
<pre><code>parts = []  # don't reuse the name `line` for both the list and the loop variable
with open("a.txt") as f:
    for line in f:
        n, c = line.rstrip().split(" ")
        parts.append(c * int(n))
print("".join(parts))
</code></pre>
<p>You can print directly but the code above lets you get the output you want in a string if you care about that.</p>
<p>Using a list then <code>join</code>ing is more efficient than using <code>+=</code> on a string because strings are immutable in Python. This means that a new string must be created for each <code>+=</code>. Of course printing immediately avoids this issue.</p>
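<p>The same idea on an in-memory list of lines, showing the expected result:</p>

```python
lines = ["2 a", "1 *", "3 $"]

parts = []
for ln in lines:
    n, c = ln.split(" ")
    parts.append(c * int(n))

out = "".join(parts)
print(out)  # aa*$$$
```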
| 1 | 2016-10-10T14:45:26Z | [
"python"
] |
Fastest way to print this in python? | 39,960,877 | <p>Suppose I have a text file containing this, where the number on the left says how many of the characters of the right should be there: </p>
<pre><code>2 a
1 *
3 $
</code></pre>
<p>How would I get this output in the fastest time? </p>
<pre><code>aa*$$$
</code></pre>
<p>This is my code, but has N^2 complexity: </p>
<pre><code>f = open('a.txt')
line = ''
for item in f:
    item2 = item.split()
    num = int(item2[0])
    for i in range(num):
        line += item2[1]
print(line)
f.close()
</code></pre>
| -1 | 2016-10-10T14:41:52Z | 39,960,958 | <p>KISS</p>
<pre><code>with open('file.txt') as f:
    for line in f:
        count, char = line.strip().split(' ')
        print(char * int(count), end='')
    print()
</code></pre>
| 4 | 2016-10-10T14:46:18Z | [
"python"
] |
Fastest way to print this in python? | 39,960,877 | <p>Suppose I have a text file containing this, where the number on the left says how many of the characters of the right should be there: </p>
<pre><code>2 a
1 *
3 $
</code></pre>
<p>How would I get this output in the fastest time? </p>
<pre><code>aa*$$$
</code></pre>
<p>This is my code, but has N^2 complexity: </p>
<pre><code>f = open('a.txt')
line = ''
for item in f:
    item2 = item.split()
    num = int(item2[0])
    for i in range(num):
        line += item2[1]
print(line)
f.close()
</code></pre>
| -1 | 2016-10-10T14:41:52Z | 39,960,972 | <p>Just print immediately:</p>
<pre><code>for item in open('a.txt'):
num, char = item.strip().split()
print(int(num) * char, end='')
print() # Newline
</code></pre>
| 3 | 2016-10-10T14:46:59Z | [
"python"
] |
Fastest way to print this in python? | 39,960,877 | <p>Suppose I have a text file containing this, where the number on the left says how many of the characters of the right should be there: </p>
<pre><code>2 a
1 *
3 $
</code></pre>
<p>How would I get this output in the fastest time? </p>
<pre><code>aa*$$$
</code></pre>
<p>This is my code, but has N^2 complexity: </p>
<pre><code>f = open('a.txt')
line = ''
for item in f:
    item2 = item.split()
    num = int(item2[0])
    for i in range(num):
        line += item2[1]
print(line)
f.close()
</code></pre>
| -1 | 2016-10-10T14:41:52Z | 39,961,383 | <p>You can try like this,</p>
<pre><code>with open('a.txt') as f:
    print(''.join(int(item.split()[0]) * item.split()[1] for item in f))
</code></pre>
| 0 | 2016-10-10T15:10:03Z | [
"python"
] |
Fastest way to print this in python? | 39,960,877 | <p>Suppose I have a text file containing this, where the number on the left says how many of the characters of the right should be there: </p>
<pre><code>2 a
1 *
3 $
</code></pre>
<p>How would I get this output in the fastest time? </p>
<pre><code>aa*$$$
</code></pre>
<p>This is my code, but has N^2 complexity: </p>
<pre><code>f = open('a.txt')
line = ''
for item in f:
    item2 = item.split()
    num = int(item2[0])
    for i in range(num):
        line += item2[1]
print(line)
f.close()
</code></pre>
| -1 | 2016-10-10T14:41:52Z | 39,961,661 | <p>Your code is actually O(sum(n_i)) where n_i is the number in the row i. You can't do any better and none of the solutions in the other answers do, even if they might be faster than yours.</p>
| 0 | 2016-10-10T15:26:04Z | [
"python"
] |
Django 'ascii' codec can't encode character u'\uff1f' | 39,960,921 | <p>I am still a beginner in Django.</p>
<p>When I save into the database, I get this error.</p>
<blockquote>
<p>'ascii' codec can't encode character u'\uff1f' in position 14: ordinal
not in range(128)</p>
</blockquote>
<p>I have seen a similar question here, but I have tried its suggestions and it is still not okay.</p>
<p><a href="http://stackoverflow.com/questions/5141559/unicodeencodeerror-ascii-codec-cant-encode-character-u-xef-in-position-0">UnicodeEncodeError: 'ascii' codec can't encode character u'\xef' in position 0: ordinal not in range(128)</a></p>
<p>I believe it happens in this data['english'].</p>
<p>Shall I change in views.py or serializer? </p>
<p>My view is </p>
<pre><code>class DialogueView(APIView):
permission_classes = (IsAuthenticated,)
def post(self, request):
data = request.data
serializer = DialogueSerializer(data=request.data)
if not serializer.is_valid():
return Response(serializer.errors, status=
status.HTTP_400_BAD_REQUEST)
else:
owner = request.user
t = Dialogue(owner=owner, english=data['english'])
t.save()
# request.data['id'] = t.pk # return id
return Response(status=status.HTTP_201_CREATED)
</code></pre>
<p>My serializer is </p>
<pre><code>class DialogueSerializer(serializers.ModelSerializer):
sound_url = serializers.SerializerMethodField()
class Meta:
model = Dialogue
fields = ('id','english','myanmar', 'sound_url')
def get_sound_url(self, dialogue):
if not dialogue.sound:
return None
request = self.context.get('request')
sound_url = dialogue.sound.url
return request.build_absolute_uri(sound_url)
</code></pre>
| 0 | 2016-10-10T14:44:16Z | 39,961,488 | <p>It might be that the DB doesn't accept a Unicode value in a string field.</p>
<p>To solve this, try one of these two ways:</p>
<ol>
<li><p>Change DB config into using a unicode encoding. E.g. <a href="http://dev.mysql.com/doc/refman/5.7/en/charset-unicode.html" rel="nofollow">This post</a> for mysql.</p></li>
<li><p>Encode the unicode value before storing it in the DB, e.g.
<code>val = data['english'].encode('utf-8')</code>
and store <code>val</code> on your model.</p></li>
</ol>
| 0 | 2016-10-10T15:15:22Z | [
"python",
"django",
"encoding"
] |
Python best practice: series of "or"s or "in"? | 39,960,941 | <p>I am working on a <a href="https://github.com/johnroper100/CrowdMaster/pull/17" rel="nofollow">project</a> where a question came up about the following line:</p>
<pre><code>a == "EQUAL" or a == "NOT EQUAL" or a == "LESS" or a == "GREATER"
</code></pre>
<p>I proposed a change to make it "simpler" like so:</p>
<pre><code>a in ["EQUAL", "NOT EQUAL", "LESS", "GREATER"]
</code></pre>
<p>What would be considered best practice and what would be best for performance? This is for user interface code that gets updated frequently so minor performance improvements could be noticeable. I know the first example will "fail fast" if any were found, and I am assuming that the second would as well. </p>
<p>Furthermore, wouldn't it be even faster to use a dict like:</p>
<pre><code>a in {"EQUAL", "NOT EQUAL", "LESS", "GREATER"}
</code></pre>
<p>...so that a list wouldn't need to be constructed?</p>
<p>The only thing PEP-8 says (that I could find):</p>
<blockquote>
<p>...code is read much more often than it is written. The guidelines provided here are intended to improve the readability of code...</p>
<p>However, know when to be inconsistent -- sometimes style guide recommendations just aren't applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best.</p>
</blockquote>
| 3 | 2016-10-10T14:45:28Z | 39,961,212 | <p>I'd go with the set. It's much more readable. The string of <code>or</code>s can be faster in some circumstances, since the operator short-circuits and there is no overhead of constructing the list of items each time, but I don't think it's worth the readability sacrifice. Here is a quick and dirty benchmark. This is with Python 2.7.</p>
<pre><code>def t1(x):
    return (x == "Foo" or x == "Bar" or x == "Baz" or x == "Quux")

def t2(x):
    return x in {"Foo", "Bar", "Baz", "Quux"}
[2.7.9]>>> import timeit
[2.7.9]>>> timeit.timeit(lambda : t1("Quux"))
0.22514700889587402
[2.7.9]>>> timeit.timeit(lambda : t1("Foo"))
0.18890380859375
[2.7.9]>>> timeit.timeit(lambda : t2("Quux"))
0.27969884872436523
[2.7.9]>>> timeit.timeit(lambda : t2("Foo"))
0.25904297828674316
</code></pre>
<p>Python 3 numbers.</p>
<pre><code>[3.4.2]>>> timeit.timeit(lambda : t1("Quux"))
0.25126787397312
[3.4.2]>>> timeit.timeit(lambda : t1("Foo"))
0.1722603400121443
[3.4.2]>>> timeit.timeit(lambda : t2("Quux"))
0.18982669000979513
[3.4.2]>>> timeit.timeit(lambda : t2("Foo"))
0.17984321201220155
</code></pre>
| 3 | 2016-10-10T15:00:23Z | [
"python",
"performance",
"python-3.x"
] |
Python best practice: series of "or"s or "in"? | 39,960,941 | <p>I am working on a <a href="https://github.com/johnroper100/CrowdMaster/pull/17" rel="nofollow">project</a> where a question came up about the following line:</p>
<pre><code>a == "EQUAL" or a == "NOT EQUAL" or a == "LESS" or a == "GREATER"
</code></pre>
<p>I proposed a change to make it "simpler" like so:</p>
<pre><code>a in ["EQUAL", "NOT EQUAL", "LESS", "GREATER"]
</code></pre>
<p>What would be considered best practice and what would be best for performance? This is for user interface code that gets updated frequently so minor performance improvements could be noticeable. I know the first example will "fail fast" if any were found, and I am assuming that the second would as well. </p>
<p>Furthermore, wouldn't it be even faster to use a dict like:</p>
<pre><code>a in {"EQUAL", "NOT EQUAL", "LESS", "GREATER"}
</code></pre>
<p>...so that a list wouldn't need to be constructed?</p>
<p>The only thing PEP-8 says (that I could find):</p>
<blockquote>
<p>...code is read much more often than it is written. The guidelines provided here are intended to improve the readability of code...</p>
<p>However, know when to be inconsistent -- sometimes style guide recommendations just aren't applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best.</p>
</blockquote>
| 3 | 2016-10-10T14:45:28Z | 39,961,343 | <p>Obviously in your case it's better to use <code>in</code> operator. It's just much more readable.</p>
<p>In more complex cases when it's not possible to use <code>in</code> operator, you may use <a href="https://docs.python.org/3/library/functions.html#all" rel="nofollow"><code>all</code></a> and <a href="https://docs.python.org/3/library/functions.html#any" rel="nofollow"><code>any</code></a> functions:</p>
<pre><code>operations = {'EQUAL', 'NOT EQUAL', 'LESS', 'GREATER'}
condition1 = any(curr_op.startswith(op) for op in operations)
condition2 = all([
self.Operation == "EQUAL",
isinstance(self.LeftHandSide, int),
isinstance(self.RightHandSide, int),
])
</code></pre>
| 1 | 2016-10-10T15:07:22Z | [
"python",
"performance",
"python-3.x"
] |
Python best practice: series of "or"s or "in"? | 39,960,941 | <p>I am working on a <a href="https://github.com/johnroper100/CrowdMaster/pull/17" rel="nofollow">project</a> where a question came up about the following line:</p>
<pre><code>a == "EQUAL" or a == "NOT EQUAL" or a == "LESS" or a == "GREATER"
</code></pre>
<p>I proposed a change to make it "simpler" like so:</p>
<pre><code>a in ["EQUAL", "NOT EQUAL", "LESS", "GREATER"]
</code></pre>
<p>What would be considered best practice and what would be best for performance? This is for user interface code that gets updated frequently so minor performance improvements could be noticeable. I know the first example will "fail fast" if any were found, and I am assuming that the second would as well. </p>
<p>Furthermore, wouldn't it be even faster to use a dict like:</p>
<pre><code>a in {"EQUAL", "NOT EQUAL", "LESS", "GREATER"}
</code></pre>
<p>...so that a list wouldn't need to be constructed?</p>
<p>The only thing PEP-8 says (that I could find):</p>
<blockquote>
<p>...code is read much more often than it is written. The guidelines provided here are intended to improve the readability of code...</p>
<p>However, know when to be inconsistent -- sometimes style guide recommendations just aren't applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best.</p>
</blockquote>
| 3 | 2016-10-10T14:45:28Z | 39,972,305 | <p>As suggested by multiple people, go for reability.</p>
<p>performance wise there is a difference, the <code>in</code> operator on sets has an average lookup time of O(1), while for lists it's O(n). You can find this <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">here</a>.</p>
<p>In your case where the list of possibilities is limited you will hardly notice a difference. However, once this list becomes very large (talking about millions), you can notice a difference.</p>
<p>A simple example can show this: For sets:</p>
<pre><code>operation = 9999999
lookupSet = {i for i in range(0,10000000)}
%timeit operation in lookupSet
>> 10000000 loops, best of 3: 89.4 ns per loop
</code></pre>
<p>where with lists:</p>
<pre><code>operation = 9999999
lookupList = [i for i in range(0,10000000)]
%timeit operation in lookupList
>> 10 loops, best of 3: 168 ms per loop
</code></pre>
| 0 | 2016-10-11T07:28:53Z | [
"python",
"performance",
"python-3.x"
] |
Flask app search bar | 39,960,942 | <p>I am trying to implement a search bar using Flask, but when I enter the url/search, I got a 405 error, <strong>Method Not Allowed</strong>. </p>
<p>Here is a snippet of my code. Any help would be appreciated!</p>
<p><strong>forms.py</strong></p>
<pre><code>from wtforms import StringField
from wtforms.validators import DataRequired
class SearchForm(Form):
search = StringField('search', [DataRequired()])
submit = SubmitField('Search',
render_kw={'class': 'btn btn-success btn-block'})
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from flask_login import login_required
from forms import SearchForm
from models import User
@app.route('/')
def index():
if current_user.is_authenticated:
return redirect(url_for('profile'))
return render_template('index.html')
@app.route('/profile', methods=['GET', 'POST'])
@login_required
def profile():
# some code to display user profile page
@app.route('/search', methods=['POST'])
@login_required
def search():
form = SearchForm()
if not form.validate_on_submit():
return redirect(url_for('index'))
return redirect((url_for('search_results', query=form.search.data)))
@app.route('/search_results/<query>')
@login_required
def search_results(query):
results = User.query.whoosh_search(query).all()
return render_template('search_results.html', query=query, results=results)
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>from flask_sqlalchemy import SQLAlchemy
from flask_whooshalchemy import whoosh_index
from app import app
db = SQLAlchemy()
class User(db.model):
__searchable__ = ['name']
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64))
whoosh_index(app, User)
</code></pre>
<p><strong>search.html</strong></p>
<pre><code>{% extends 'layouts/base.html' %}
{% set page_title = 'Search' %}
{% block body %}
<div>
{{ render_form(url_for('search'), form) }} # note: render_form is some marco from another .html file
</div>
{% endblock %}
</code></pre>
| -1 | 2016-10-10T14:45:31Z | 39,961,246 | <p>Because when you load the page manually you are using the <code>GET</code> method, but only <code>POST</code> is allowed for the <code>search</code> controller. You need to change</p>
<pre><code>@app.route('/search', methods=['POST'])
</code></pre>
<p>to </p>
<pre><code>@app.route('/search', methods=['GET', 'POST'])
</code></pre>
<p><strong>UPDATE</strong></p>
<p>So basically it's better to change your <code>search</code> controller, because it doesn't render <em>search.html</em> and handles GET requests incorrectly.</p>
<pre><code>@app.route('/search', methods=['GET', 'POST'])
@login_required
def search():
form = SearchForm()
if request.method == 'POST' and form.validate_on_submit():
return redirect((url_for('search_results', query=form.search.data))) # or what you want
return render_template('search.html', form=form)
</code></pre>
<p>Also, make the indentation 4 spaces, as recommended in <a href="https://www.python.org/dev/peps/pep-0008/#indentation" rel="nofollow">PEP 8</a></p>
| 1 | 2016-10-10T15:02:01Z | [
"python",
"search",
"flask",
"full-text-search",
"whoosh"
] |
Creating an SSL socket with python | 39,960,983 | <p>I'm using Python 2.4.4 and OpenSSL 0.9.8k (not by choice)</p>
<p>I've referred to the documentation: <a href="https://docs.python.org/release/2.4.4/lib/module-socket.html" rel="nofollow">https://docs.python.org/release/2.4.4/lib/module-socket.html</a></p>
<p>and to pretty much every mention of "openSSL" and "python" on the internet, and I haven't found a solution to my problem.</p>
<p>I'm simply writing a test program to initiate an SSL connection. Here is the code:</p>
<p><strong>server</strong></p>
<pre><code>#!/usr/bin/python
import socket
import _ssl
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 4433))
s.listen(5)
while True:
client, address = s.accept()
ssl_client = socket.ssl(client,
keyfile='keyfile',
certfile='certfile')
print "Connection: ", address
data = ssl_client.read(1024)
if data:
print "received data: ", data
ssl_client.write(data + " Hello, World!")
del ssl_client
client.close()
</code></pre>
<p><strong>client</strong></p>
<pre><code>#!/usr/bin/python
import socket
import _ssl
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('host', 4433))
ssl_s = socket.ssl(s,
keyfile='keyfile',
certfile='certfile')
print 'writing ', ssl_s.write("Hello, World!"), ' bytes to ssl stream'
data = ssl_s.read(1024)
del ssl_s
s.close()
print "received data: ", data
</code></pre>
<p>Some notes about this code - <code>keyfile</code> and <code>certfile</code> are paths to my actual key and cert file. Those arguments are not the issue. The hostnames are also not the issue. I'm aware that the port used is 4433 - in our requirements, we're meant to use a generic port, not 443. I was unaware that it was possible to use SSL over a different port, but regardless, even when I use 443 I get the exact same error.</p>
<p>I can run the server fine, and then when I run the client, I get the following error on the <code>wrap_socket</code> lines for both client and server:</p>
<p><code>error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol</code></p>
<p>I've read it's due to using a non-443 port, but again, using 443 didn't fix things. I've read it could be a protocol mismatch, but the client and the server are both defaulting to SSL2.3. We're meant to use TLS1.2 as per our requirements, but the docs don't seem to have any information on how to set the SSL protocol version. I'm unsure if that's related to my issue. Please keep in mind I'm not here to open a dialogue regarding to use of outdated SSL and Python versions.</p>
| 0 | 2016-10-10T14:47:18Z | 39,962,070 | <p><code>socket.ssl</code> is only able to initiate an SSL connection as a client, and the given optional cert and key are for use with client certificates. <code>socket.ssl</code> cannot be used on the server side, and it looks like Python 2.4.4 does not offer this feature in any of the core modules at all. In later versions of Python you can use the <code>ssl</code> module for this, but 2.4.4 does not seem to have it.</p>
| 1 | 2016-10-10T15:51:41Z | [
"python",
"sockets",
"ssl"
] |
Unicode Error when opening text file - Geany | 39,961,007 | <p>I'm trying to create a little program that reads the contents of two stories, Alice in Wonderland & Moby Dick, and then counts how many times the word 'the' is found in each story.</p>
<p>However I'm having issues with getting Geany text editor to open the files. I've been creating and using my own small text files with no issues so far.</p>
<pre><code>with open('alice_test.txt') as a_file:
contents = a_file.readlines()
print(contents)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "add_cats_dogs.py", line 50, in <module>
print(contents)
File "C:\Users\USER\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2018' in position 279: character maps to <undefined>
</code></pre>
<p>As I said, no issues experienced with any small homemade text files.</p>
<p>Strangely enough, when I execute the above code in Python IDLE, I have no problems, even if I change the text file's encoding between UTF-8 and ANSI.</p>
<p>I tried encoding the text file as UTF-8 and ANSI, and I also checked to make sure the default encoding of Geany is UTF-8 (also tried without using the default encoding), as well as using and not using fixed encoding when opening non-Unicode files. </p>
<p>I get the same error every time. The text file was from gutenberg.org, I tried using another file from there and got the same issue.</p>
<p>I know it must be some sort of issue between Geany and the text file, but I can't figure out what.</p>
<p>EDIT: I found a sort of fix.
Here is the text that was giving me problems:<a href="https://www.gutenberg.org/files/11/11-0.txt" rel="nofollow">https://www.gutenberg.org/files/11/11-0.txt</a>
Here is the text that I can use without problems:<a href="http://www.textfiles.com/etext/FICTION/alice13a.txt" rel="nofollow">http://www.textfiles.com/etext/FICTION/alice13a.txt</a>
Top one is encoded in UTF-8, bottom one is encoded in windows-1252. I would've imagined the reverse to be true, but for whatever reason the UTF-8 encoding seems to be causing the problem.</p>
| 1 | 2016-10-10T14:48:42Z | 39,964,919 | <p>What OS do you use? There are similar problems in Windows. If so, you can try to run <code>chcp 65001</code> before you command in console. Also you can add <code># encoding: utf-8</code> at the top of you <code>.py</code> file. Hope this will help because I can't reply same encoding problem with .txt file from gutenberg.org on my machine.</p>
| 0 | 2016-10-10T18:52:48Z | [
"python",
"python-3.x",
"encoding",
"text-files",
"geany"
] |
How to align y labels in a Seaborn PairGrid | 39,961,147 | <p>Taken from the <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.PairGrid.html" rel="nofollow">documentation</a>:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> import seaborn as sns; sns.set()
>>> iris = sns.load_dataset("iris")
>>> g = sns.PairGrid(iris)
>>> g = g.map(plt.scatter)
</code></pre>
<p><a href="http://i.stack.imgur.com/paMfN.png" rel="nofollow"><img src="http://i.stack.imgur.com/paMfN.png" alt="PairGrid"></a></p>
<p>How can I align y labels so that they do not seem as messy as they do by default?</p>
<p>What I tried:</p>
<pre><code>for ax in g.axes.flat:
ax.yaxis.labelpad = 20
</code></pre>
<p>However, that only <em>shifts</em> all labels by the specified amount instead of aligning them.</p>
| -1 | 2016-10-10T14:56:41Z | 39,968,565 | <p>Yeah, that alignment is terrible. How dare a package that boasts to make <a href="https://stanford.edu/~mwaskom/software/seaborn/" rel="nofollow">"attractive statistical graphics"</a> fail so abysmally at the alignment of labels?! Just kidding, I love me some seaborn, at least for inspiration on how to improve my own plots.</p>
<p>You are looking for:</p>
<pre><code>ax.get_yaxis().set_label_coords(x,y) # conveniently in axis coordinates
</code></pre>
<p>Also, I would only iterate over the axes that show the label, but that is just me:</p>
<pre><code>for ax in g.axes[:,0]:
ax.get_yaxis().set_label_coords(-0.2,0.5)
</code></pre>
<p><code>0.5</code> for a centred vertical alignment, <code>-0.2</code> for a slight offset to the left (trial-and-error).</p>
<p><a href="http://i.stack.imgur.com/pNSZd.png" rel="nofollow"><img src="http://i.stack.imgur.com/pNSZd.png" alt="enter image description here"></a></p>
| 1 | 2016-10-11T00:06:14Z | [
"python",
"matplotlib",
"seaborn"
] |
ERROR __init__() takes exactly 1 argument (3 given) on Django with GAE | 39,961,211 | <p>I'm trying to solve a problem that I have with the admin of a Django app.</p>
<p>I have this code:</p>
<p>admin_main.py:</p>
<pre><code>application = webapp.WSGIApplication([
# Admin pages
(r'^(/admin/add_img)', admin.views.AddImage),
(r'^(/admin)(.*)$', admin.Admin),])
</code></pre>
<p>admin/views.py:</p>
<pre><code>class BaseRequestHandler(webapp.RequestHandler):
def handle_exception(self, exception, debug_mode):
logging.warning("Exception catched: %r" % exception)
if isinstance(exception, Http404) or isinstance(exception, Http500):
self.error(exception.code)
path = os.path.join(ADMIN_TEMPLATE_DIR, str(exception.code) + ".html")
self.response.out.write(template.render(path, {'errorpage': True}))
else:
super(BaseRequestHandler, self).handle_exception(exception, debug_mode)
class Admin(BaseRequestHandler):
def __init__(self, request,response):
logging.info("NEW Admin object created")
super(Admin, request, response).__init__()
# Define and compile regexps for Admin site URL scheme.
# Every URL will be mapped to appropriate method of this
# class that handles all requests of particular HTTP message
# type (GET or POST).
self.getRegexps = [
[r'^/?$', self.index_get],
[r'^/([^/]+)/list/$', self.list_get],
[r'^/([^/]+)/new/$', self.new_get],
[r'^/([^/]+)/edit/([^/]+)/$', self.edit_get],
[r'^/([^/]+)/delete/([^/]+)/$', self.delete_get],
[r'^/([^/]+)/get_blob_contents/([^/]+)/([^/]+)/$', self.get_blob_contents],
]
self.postRegexps = [
[r'^/([^/]+)/new/$', self.new_post],
[r'^/([^/]+)/edit/([^/]+)/$', self.edit_post],
]
self._compileRegexps(self.getRegexps)
self._compileRegexps(self.postRegexps)
# Store ordered list of registered data models.
self.models = model_register._modelRegister.keys()
self.models.sort()
# This variable is set by get and port methods and used later
# for constructing new admin urls.
self.urlPrefix = ''
def index_get(self):
"""Show admin start page
"""
path = os.path.join(ADMIN_TEMPLATE_DIR, 'index.html')
self.response.out.write(template.render(path, {
'models': self.models,
'urlPrefix': self.urlPrefix,
}))
</code></pre>
<p>And when I try to get the page <a href="http://localhost:8080/admin" rel="nofollow">http://localhost:8080/admin</a>, I get the next error:</p>
<pre><code> ERROR 2016-10-10 14:37:25,484 webapp2.py:1528] __init__() takes exactly 1 argument (3 given)
Traceback (most recent call last):
File "C:\Users\Yisus-MSI\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\webapp2-2.3\webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "C:\Users\Yisus-MSI\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\webapp2-2.3\webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "C:\Users\Yisus-MSI\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\webapp2-2.3\webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "C:\Users\Yisus-MSI\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\webapp2-2.3\webapp2.py", line 1076, in __call__
handler = self.handler(request, response)
TypeError: __init__() takes exactly 1 argument (3 given)
</code></pre>
<p>I tried a lot of solutions from the forum, but none of them work.</p>
<p>Thanks for your help.</p>
| 0 | 2016-10-10T15:00:22Z | 39,961,724 | <p>I believe you made a mistake in using <code>super()</code>, you need to change in your <code>Admin.__init__()</code> method:</p>
<pre><code>super(Admin, request, response).__init__()
</code></pre>
<p>to:</p>
<pre><code>super(Admin, self).__init__(request=request, response=response)
</code></pre>
| 0 | 2016-10-10T15:29:32Z | [
"python",
"python-2.7",
"google-app-engine",
"webapp2"
] |
django access database from crispy form | 39,961,213 | <p>I'm working on a blog for a client that is using crispy form to make their writing easier. I have the ability to add an image to the blog - only one and at the top of the post. My client wants to add other pictures throughout the post. The crispy form allows them to do this but only by way of a full http link. This means they would have to understand how the admin of the site works to be able to find the pic they want and use it here. Is there a way to have them be able to find the picture from the database and use that picture's url right on this screen?</p>
<p>Here is a screenshot of what I mean:
<a href="http://i.stack.imgur.com/LXAQF.png" rel="nofollow"><img src="http://i.stack.imgur.com/LXAQF.png" alt="enter image description here"></a></p>
| 0 | 2016-10-10T15:00:28Z | 39,961,786 | <p>I decided to add a simple bootstrap dropdown with each picture in the database, sorted by most recent, and a copy url button. If someone can come up with a better solution, that would be best.</p>
| 0 | 2016-10-10T15:33:30Z | [
"python",
"django",
"django-forms",
"django-crispy-forms"
] |
tfidfvectorizer Predict in saved classifier | 39,961,294 | <p>I trained my model by using TfIdfVectorizer and MultinomialNB and I saved it into a pickle file.</p>
<p>Now that I am trying to use the classifier from another file to predict on unseen data, I cannot do it because it is telling me that the number of features of the classifier is not the same as the number of features of my current corpus.</p>
<p>This is the code where I am trying to predict. The function do_vectorize is exactly the same used in training.</p>
<pre><code>def do_vectorize(data, stop_words=[], tokenizer_fn=tokenize):
vectorizer = TfidfVectorizer(stop_words=stop_words, tokenizer=tokenizer_fn)
X = vectorizer.fit_transform(data)
return X, vectorizer
# Vectorizing the unseen documents
matrix, vectorizer = do_vectorize(corpus, stop_words=stop_words)
# Predicting on the trained model
clf = pickle.load(open('../data/classifier_0.5_function.pkl', 'rb'))
predictions = clf.predict(matrix)
</code></pre>
<p>However I receive the error that the number of features are different</p>
<pre><code>ValueError: Expected input with 65264 features, got 472546 instead
</code></pre>
<p>This means I also have to save my vocabulary from training in order to test? What will happen if there are terms that did not exist on training?</p>
<p>I tried to used pipelines from scikit-learn with the same vectorizer and classifier, and the same parameters for both. However, it turned too slow from 1 hour to more than 6 hours, so I prefer to do it manually.</p>
| 1 | 2016-10-10T15:04:18Z | 39,965,035 | <blockquote>
<p>This means I also have to save my vocabulary from training in order to test? </p>
</blockquote>
<p>Yes, you have to save the <strong>whole tfidf vectorizer</strong>, which in particular means saving the vocabulary.</p>
<blockquote>
<p>What will happen if there are terms that did not exist on training?</p>
</blockquote>
<p>They will be ignored, which makes perfect sense since you have <strong>no training data</strong> about this, thus there is nothing to take into consideration (there are more complex methods which could still use it, but they do not use such simple approaches as tfidf).</p>
<blockquote>
<p>I tried to used pipelines from scikit-learn with the same vectorizer and classifier, and the same parameters for both. However, it turned too slow from 1 hour to more than 6 hours, so I prefer to do it manually.</p>
</blockquote>
<p>There should be little to no overhead when using pipelines; however, doing things manually is fine as long as you remember to store the vectorizer as well.</p>
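<p>A minimal sketch of persisting and reusing both objects (the training data here is illustrative); the key point is calling <code>transform</code>, not <code>fit_transform</code>, at prediction time:</p>

```python
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# --- training time: fit, then persist BOTH objects ---
train = ["spam spam offer", "ham eggs", "spam offer now"]
labels = [1, 0, 1]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train)
clf = MultinomialNB().fit(X, labels)
blob = pickle.dumps((vectorizer, clf))  # or two separate .pkl files

# --- prediction time (e.g. in another file) ---
vectorizer2, clf2 = pickle.loads(blob)
# transform keeps the training feature space, so the classifier's
# expected feature count matches; unseen terms are simply ignored
matrix = vectorizer2.transform(["ham with some unseen words"])
predictions = clf2.predict(matrix)
```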
| 1 | 2016-10-10T19:01:36Z | [
"python",
"machine-learning",
"scikit-learn",
"classification",
"prediction"
] |
Issues with thread and dynamic list in python | 39,961,315 | <p>I will try to be clear, hoping everybody will understand even if it will not be easy for me.
I'm a beginner at coding in Python, so any help will be nice!
I've got these libraries imported: requests and threading.
I'm trying to send several urls in parallel to reduce the sending time of the data. I used a dynamic list to stack all the urls and then used requests.post to send them. </p>
<pre><code>l=[]
if ALARM&1:
alarmType="Break Alarm"
AlarmNumber = 1
sendAlarm(alarmType, AlarmNumber)
print alarmType
else:
s = "https://..." #the url works
l.append(threading.Thread(target=requests.post, args=(s)))
if ALARM&2:
alarmType=0
if ALARM&4:
alarmType="Limit Switch"
AlarmNumber = 2
sendAlarm(alarmType, AlarmNumber)
print alarmType
else:
s="https://..."
l.append(threading.Thread(target=requests.post, args=(s)))
for t in l:
t.start()
for t in l:
t.join()
</code></pre>
<p>The error that I got is :</p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: post() takes at most 3 arguments (110 given)
</code></pre>
<p>And the same thing happens for Thread-2. I asked around but we can't find a solution. Does someone have an idea? Thanks!</p>
| 1 | 2016-10-10T15:05:30Z | 39,961,489 | <p>From the <a href="https://docs.python.org/2/library/threading.html#threading.Thread" rel="nofollow">docs</a>, args should be a tuple.</p>
<blockquote>
<p>class <code>threading.Thread</code>(group=None, target=None, name=None, args=(), kwargs={})</p>
<p>args is the argument tuple for the target invocation. Defaults to ().</p>
</blockquote>
<p>You need to pass <code>args</code> a tuple with the url as the first (and only) element:</p>
<pre><code>l.append(threading.Thread(target=requests.post, args=(s,)))
</code></pre>
<p>The seemingly useless comma here is what makes Python interpret <code>(s,)</code> as a tuple and not just a string surrounded by unneeded parentheses.</p>
<p>Failing to do this, you're basically passing a string, and <code>Thread</code> iterates on it, passing <code>post</code> each letter as a separate argument, hence the error message:</p>
<blockquote>
<pre><code>TypeError: post() takes at most 3 arguments (110 given)
</code></pre>
</blockquote>
<p>A string being interpreted as an iterable is a common pitfall. A function/method expects a list/tuple, and when provided a string like <code>"https://..."</code>, it treats it like <code>['h', 't', 't', 'p', 's', ':', '/', '/', ...]</code>.</p>
<p>The root cause of the issue is anecdotal, somehow. What's interesting here is that although I knew nothing about <code>Thread</code> when reading the question, the error message (<code>TypeError: post() takes at most 3 arguments (110 given)</code>) pointed me in the right direction right away.</p>
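<p>A minimal runnable illustration of the one-element tuple, using a stand-in function instead of <code>requests.post</code>:</p>

```python
import threading

results = []

def fake_post(url):
    # stand-in for requests.post(url)
    results.append(url)

threads = [threading.Thread(target=fake_post, args=(u,))  # note the comma
           for u in ["https://a.example", "https://b.example"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['https://a.example', 'https://b.example']
```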
| 2 | 2016-10-10T15:15:23Z | [
"python",
"multithreading",
"dynamic-list"
] |
Improving database query speed with Python | 39,961,322 | <p>Edit - I am using Windows 10</p>
<p>Is there a faster alternative to pd.read_sql_query for a MS SQL database?</p>
<p>I was using pandas to read the data and add some columns and calculations on the data. I have cut out most of the alterations now and I am basically just reading (1-2 million rows per day at a time; my query is to read all of the data from the previous date) the data and saving it to a local database (Postgres). </p>
<p>The server I am connecting to is across the world and I have no privileges at all other than to query for the data. I want the solution to remain in Python if possible. I'd like to speed it up though and remove any overhead. Also, you can see that I am writing a file to disk temporarily and then opening it to COPY FROM STDIN. Is there a way to skip the file creation? It is sometimes over 500mb which seems like a waste.</p>
<pre><code>engine = create_engine(engine_name)
query = 'SELECT * FROM {} WHERE row_date = %s;'
df = pd.read_sql_query(query.format(table_name), engine, params={query_date})
df.to_csv('../raw/temp_table.csv', index=False)
df= open('../raw/temp_table.csv')
process_file(conn=pg_engine, table_name=table_name, file_object=df)
</code></pre>
| 1 | 2016-10-10T15:05:52Z | 39,961,398 | <p><strong>UPDATE:</strong></p>
<p>You can also try to unload the data using the <a href="https://technet.microsoft.com/en-us/library/ms162802(v=sql.110).aspx" rel="nofollow">bcp utility</a>, which might be a lot faster compared to <code>pd.read_sql()</code>, but you will need a local installation of <code>Microsoft Command Line Utilities for SQL Server</code>.</p>
<p>After that you can use PostgreSQL's <code>COPY ... FROM ...</code>...</p>
<p><strong>OLD answer:</strong></p>
<p>you can try to write your DF directly to PostgreSQL (skipping the <code>df.to_csv(...)</code> and <code>df= open('../raw/temp_table.csv')</code> parts):</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(engine_name)
query = 'SELECT * FROM {} WHERE row_date = %s;'
df = pd.read_sql_query(query.format(table_name), engine, params=(query_date,))
pg_engine = create_engine('postgresql+psycopg2://user:password@host:port/dbname')
df.to_sql(table_name, pg_engine, if_exists='append')
</code></pre>
<p>Just test whether it's faster compared to <code>COPY FROM STDIN</code>...</p>
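<p>If you stay with <code>COPY FROM STDIN</code>, the 500 MB temp file on disk can also be skipped by streaming the frame through an in-memory buffer. A sketch (the helper function is mine, not from the question's code):</p>

```python
import io

import pandas as pd

def df_to_buffer(df):
    # Serialize the DataFrame to CSV in memory instead of writing
    # ../raw/temp_table.csv to disk.  The buffer is file-like, so it can
    # be passed anywhere a file object is expected -- e.g. psycopg2's
    # cursor.copy_expert("COPY tbl FROM STDIN WITH (FORMAT csv)", buf).
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    buf.seek(0)
    return buf

buf = df_to_buffer(pd.DataFrame({"a": [1, 2], "b": ["x", "y"]}))
print(buf.read())
```

For very large extracts you may still prefer chunked reads (<code>pd.read_sql_query(..., chunksize=...)</code>) so the whole result never sits in memory twice.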
| 0 | 2016-10-10T15:11:07Z | [
"python",
"pandas",
"sqlalchemy"
] |
PDF Encryption Using Python | 39,961,362 | <p>The code below takes in a single pdf file and then encrypts it. What I want it to do is to take a directory containing pdf files and encrypt the files in that directory automatically, instead of specifying each file explicitly. Please help!</p>
<pre><code>def main():
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input_pdf', required=True,
help='Input pdf file')
parser.add_argument('-p', '--user_password', required=True,
help='output CSV file')
parser.add_argument('-o', '--owner_password', default=None,
help='Owner Password')
args = parser.parse_args()
set_password(args.input_pdf, args.user_password, args.owner_password)
if __name__ == "__main__":
main()
</code></pre>
| -1 | 2016-10-10T15:08:43Z | 39,961,443 | <pre><code>import glob
import os

os.chdir('/some/directory/that/has/pdfs')
for pdf_file in glob.glob('*.pdf'):
    set_password(pdf_file, upass, opass)  # your set_password() with user/owner passwords
</code></pre>
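<p>If you'd rather not <code>chdir()</code> into the folder, a variant that keeps the full paths (the directory name below is just a placeholder):</p>

```python
import glob
import os

def pdf_paths(pdf_dir):
    # Full path of every .pdf directly inside pdf_dir.
    return glob.glob(os.path.join(pdf_dir, '*.pdf'))

for path in pdf_paths('/some/directory/that/has/pdfs'):
    pass  # set_password(path, upass, opass)
```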
| 0 | 2016-10-10T15:13:12Z | [
"python",
"python-2.7"
] |
Regex in python for positive and negative integer | 39,961,414 | <p>I am new to learning regex in python and I'm wondering how do I use regex in python to store the integers(positive and negative) i want into a list!</p>
<p>For example</p>
<p>This is the data in a list.</p>
<pre><code>data =
[u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=-5,B=5)',
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=5,Y=5)',
u'\x1b[0m[\x1b[1m\x1b[10m\xbb\x1b[0m\x1b[36m]\x1b[0m : ']
</code></pre>
<p>How do I extract the integer values of A and B (negative and positive) and store them in a variable so that I can work with the numbers?</p>
<p>I tried something like this but the list is empty:</p>
<pre><code>for line in data[0]:
pattern = re.compile("([A-Z]=(-?\d+?),[A-Z]=(-?\d+?))")
store = pattern.findall(line)
print store
</code></pre>
<p>Thank you and appreciate it </p>
| -1 | 2016-10-10T15:12:02Z | 39,961,689 | <p>For a positive and negative integer, with or without commas in between use: <code>-?(?:\d+,?)+</code></p>
<p><code>-?</code> with or without negative sign<br>
<code>(?:</code> opens a group<br>
<code>\d+</code> one or more digits<br>
<code>,?</code> optional comma<br>
<code>)</code> closes the group<br>
<code>(?:\d+,?)+</code> the whole group may occur one or more times</p>
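<p>A quick check of the pattern on a made-up string (not the OP's escaped data):</p>

```python
import re

pattern = re.compile(r'-?(?:\d+,?)+')
print(pattern.findall('A=-5,B=5 and 1,234 items'))
```

Note that because the comma inside the group is optional and greedy, a match like <code>-5,</code> keeps its trailing comma; strip it (or use a stricter pattern) if that matters for your use case.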
| 0 | 2016-10-10T15:27:38Z | [
"python",
"regex"
] |
Regex in python for positive and negative integer | 39,961,414 | <p>I am new to learning regex in python and I'm wondering how do I use regex in python to store the integers(positive and negative) i want into a list!</p>
<p>For example</p>
<p>This is the data in a list.</p>
<pre><code>data =
[u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=-5,B=5)',
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=5,Y=5)',
u'\x1b[0m[\x1b[1m\x1b[10m\xbb\x1b[0m\x1b[36m]\x1b[0m : ']
</code></pre>
<p>How do I extract the integer values of A and B (negative and positive) and store them in a variable so that I can work with the numbers?</p>
<p>I tried something like this but the list is empty:</p>
<pre><code>for line in data[0]:
pattern = re.compile("([A-Z]=(-?\d+?),[A-Z]=(-?\d+?))")
store = pattern.findall(line)
print store
</code></pre>
<p>Thank you and appreciate it </p>
| -1 | 2016-10-10T15:12:02Z | 39,961,751 | <p>Depending on what you are trying to accomplish, this may work:</p>
<pre><code>import re
data = [
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=-5,B=5)',
u'\x1b[0m[\x1b[1m\x1b[0m\xbb\x1b[0m\x1b[36m]\x1b[0m (A=5,Y=5)',
u'\x1b[0m[\x1b[1m\x1b[10m\xbb\x1b[0m\x1b[36m]\x1b[0m : '
]
for line in data:
m = re.search('\((\w)=(-?\d+),(\w)=(-?\d+)\)', line)
if not m:
continue
myvars = {}
myvars[m.group(1)] = int(m.group(2))
myvars[m.group(3)] = int(m.group(4))
print myvars
</code></pre>
<p>This results in a dictionary (<code>myvars</code>) containing the variables in the current line. If you use this, you will have to check that the variable you want is defined before you attempt to get it from the dictionary. The output of the above is:</p>
<pre><code>{u'A': -5, u'B': 5}
{u'A': 5, u'Y': 5}
</code></pre>
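<p>If you'd rather collect every <code>VAR=value</code> pair in one pass, a <code>findall</code> variant of the same idea (my tweak, not the original answer's pattern):</p>

```python
import re

line = '(A=-5,B=5)'
# findall returns (name, digits) tuples; build the dict and convert to int.
myvars = dict((k, int(v)) for k, v in re.findall(r'(\w)=(-?\d+)', line))
print(myvars)
```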
| 0 | 2016-10-10T15:31:35Z | [
"python",
"regex"
] |
Working with dictionaries and data to and from database - Python 3 | 39,961,429 | <p>I have data which looks like this (as an example):</p>
<pre><code>{"phone_number": "XXX1","phone_number_country": "XXX2","text": "XXX2"}
</code></pre>
<p>where XXX is my data (it can be different types of data)</p>
<p>Also, I have a DB with data, which looks like this (as an example):
<img src="http://i.stack.imgur.com/41UE4.png" alt="DB"></p>
<p>Data is taken from DB:</p>
<pre><code>con = sqlite3.connect('dab')
cur = con.cursor()
c = cur.execute('SELECT * FROM some')
u_data = c.fetchall()
</code></pre>
<p>Each row is a set of data.</p>
<p>The data is taken from the first row and data from <strong>column_1</strong> goes to the place of <strong>XXX1</strong>, data from <strong>column_2</strong> goes to the place of <strong>XXX2</strong> and data from <strong>column_3</strong> goes to the place of <strong>XXX3</strong>... </p>
<p>After that I should get the data from the original array (template) plus the data taken from the database... then this dict should be committed to the DB, and the loop should continue through all the rows from the DB, committing my new dict to the DB each time...</p>
<p>Now I have some code:</p>
<pre><code>import sqlite3
from itertools import product
a = {"phone_number": "XXX","phone_number_country": "XXX","text": "XXX"}
con = sqlite3.connect('dab')
cur = con.cursor()
c = cur.execute('SELECT * FROM some')
u_data = c.fetchall()
s = list(u_data)
b = list(zip(*u_data))
out = product(*b)
tout =list(out)
i = 0
for elem in b:
b[i] = elem
a["phone_number"] = elem[0]
a["phone_number_country"] = elem[1]
a["text"] = elem[2]
print(a)
</code></pre>
<p>and in the console I have this:</p>
<pre><code>{'text': 'phone_number31', 'phone_number_country': 'phone_number21', 'phone_number': 'phone_number11'}
{'text': 'phone_number_c32', 'phone_number_country': 'phone_number_c22', 'phone_number': 'phone_number_c12'}
{'text': 'text33', 'phone_number_country': 'text23', 'phone_number': 'text13'}
</code></pre>
<p>It should be:</p>
<pre><code>{"phone_number": "phone_number11","phone_number_country": "phone_number_c12","text": "text13"}
{"phone_number": "phone_number21","phone_number_country": "phone_number_c22","text": "text23"}
{"phone_number": "phone_number31","phone_number_country": "phone_number_c32","text": "text33"}
</code></pre>
| -2 | 2016-10-10T15:12:34Z | 39,973,638 | <p>Dictionaries in Python do not maintain insertion order by default. You need an ordered dictionary for this.</p>
<p>I have reproduced a part of your code below:</p>
<pre><code>import collections
a = collections.OrderedDict()
for elem in b:
a["phone_number"] = elem[0]
a["phone_number_country"] = elem[1]
a["text"] = elem[2]
print(a)
</code></pre>
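<p>A minimal illustration of the difference (on Python 3.5, a plain <code>dict</code> may iterate in any order, while <code>OrderedDict</code> keeps insertion order):</p>

```python
import collections

d = collections.OrderedDict()
d["phone_number"] = "phone_number11"
d["phone_number_country"] = "phone_number_c12"
d["text"] = "text13"
# Keys come back in exactly the order they were inserted.
print(list(d.keys()))
```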
| 0 | 2016-10-11T08:54:38Z | [
"python",
"arrays",
"database"
] |
Working with dictionaries and data to and from database - Python 3 | 39,961,429 | <p>I have data which looks like this (as an example):</p>
<pre><code>{"phone_number": "XXX1","phone_number_country": "XXX2","text": "XXX2"}
</code></pre>
<p>where XXX is my data (it can be different types of data)</p>
<p>Also, I have a DB with data, which looks like this (as an example):
<img src="http://i.stack.imgur.com/41UE4.png" alt="DB"></p>
<p>Data is taken from DB:</p>
<pre><code>con = sqlite3.connect('dab')
cur = con.cursor()
c = cur.execute('SELECT * FROM some')
u_data = c.fetchall()
</code></pre>
<p>Each row is a set of data.</p>
<p>The data is taken from the first row and data from <strong>column_1</strong> goes to the place of <strong>XXX1</strong>, data from <strong>column_2</strong> goes to the place of <strong>XXX2</strong> and data from <strong>column_3</strong> goes to the place of <strong>XXX3</strong>... </p>
<p>After that I should get the data from the original array (template) plus the data taken from the database... then this dict should be committed to the DB, and the loop should continue through all the rows from the DB, committing my new dict to the DB each time...</p>
<p>Now I have some code:</p>
<pre><code>import sqlite3
from itertools import product
a = {"phone_number": "XXX","phone_number_country": "XXX","text": "XXX"}
con = sqlite3.connect('dab')
cur = con.cursor()
c = cur.execute('SELECT * FROM some')
u_data = c.fetchall()
s = list(u_data)
b = list(zip(*u_data))
out = product(*b)
tout =list(out)
i = 0
for elem in b:
b[i] = elem
a["phone_number"] = elem[0]
a["phone_number_country"] = elem[1]
a["text"] = elem[2]
print(a)
</code></pre>
<p>and in the console I have this:</p>
<pre><code>{'text': 'phone_number31', 'phone_number_country': 'phone_number21', 'phone_number': 'phone_number11'}
{'text': 'phone_number_c32', 'phone_number_country': 'phone_number_c22', 'phone_number': 'phone_number_c12'}
{'text': 'text33', 'phone_number_country': 'text23', 'phone_number': 'text13'}
</code></pre>
<p>It should be:</p>
<pre><code>{"phone_number": "phone_number11","phone_number_country": "phone_number_c12","text": "text13"}
{"phone_number": "phone_number21","phone_number_country": "phone_number_c22","text": "text23"}
{"phone_number": "phone_number31","phone_number_country": "phone_number_c32","text": "text33"}
</code></pre>
| -2 | 2016-10-10T15:12:34Z | 39,980,250 | <p>Answer here:</p>
<pre><code>a = {"phone_number": "XXX","phone_number_country": "XXX","text": "XXX"}
con = sqlite3.connect('dab')
cur = con.cursor()
c = cur.execute('SELECT * FROM some')
u_data = c.fetchall()
s = list(u_data)
for elem in s:
a["phone_number"] = elem[0]
a["phone_number_country"] = elem[1]
a["text"] = elem[2]
</code></pre>
| 0 | 2016-10-11T14:56:40Z | [
"python",
"arrays",
"database"
] |
"ImportError: module 'xxxxxxxx' has no attribute 'main'" when porting project to Heroku | 39,961,843 | <p>I'm attempting to port a Python pyramid app to Heroku.</p>
<p>I must admit that I do not understand the file structure of a Python app, even after reading this very informative thread which seems like it contains all the answers: <a href="https://what.thedailywtf.com/topic/18922/python-project-structure/27" rel="nofollow">https://what.thedailywtf.com/topic/18922/python-project-structure/27</a></p>
<p>I've got everything set up, so that I can push source updates to Heroku and try to get a build. The whole process is crashing because of an apparently missing 'main' attribute. I don't know where to start on this problem, as I don't know what 'main' is, what its structure should be, or what file it should reside in.</p>
<p>I've pasted what I think are the relevant bits below, but please tell me if I've left something out that could be helpful. </p>
<p>I'm attempting to follow the instructions here: <a href="http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/deployment/heroku.html" rel="nofollow">http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/deployment/heroku.html</a></p>
<p>File structure:</p>
<pre><code>Procfile
run
runapp.py
wsgi.py
--->/corefinance/
----setup.py
----production.ini
------->/corefinance/
--------__init__.py
</code></pre>
<p>Heroku build errors:</p>
<pre><code>2016-10-10T04:44:45.496214+00:00 app[web.1]:
2016-10-10T04:44:45.496215+00:00 app[web.1]: Using /app/.heroku/python/lib/python3.5/site-packages
2016-10-10T04:44:45.497067+00:00 app[web.1]: Searching for zope.deprecation==4.1.1
2016-10-10T04:44:45.497245+00:00 app[web.1]: Best match: zope.deprecation 4.1.1
2016-10-10T04:44:45.497356+00:00 app[web.1]: Adding zope.deprecation 4.1.1 to easy-install.pth file
2016-10-10T04:44:45.497742+00:00 app[web.1]:
2016-10-10T04:44:45.497745+00:00 app[web.1]: Using /app/.heroku/python/lib/python3.5/site-packages
2016-10-10T04:44:45.498530+00:00 app[web.1]: Searching for Mako==1.0.0
2016-10-10T04:44:45.498709+00:00 app[web.1]: Best match: Mako 1.0.0
2016-10-10T04:44:45.498818+00:00 app[web.1]: Adding Mako 1.0.0 to easy-install.pth file
2016-10-10T04:44:45.503267+00:00 app[web.1]: Installing mako-render script to /app/.heroku/python/bin
2016-10-10T04:44:45.503522+00:00 app[web.1]:
2016-10-10T04:44:45.503524+00:00 app[web.1]: Using /app/.heroku/python/lib/python3.5/site-packages
2016-10-10T04:44:45.503725+00:00 app[web.1]: Finished processing dependencies for corefinance==0.0
2016-10-10T04:44:46.082134+00:00 app[web.1]: Traceback (most recent call last):
2016-10-10T04:44:46.082145+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/setuptools-25.2.0-py3.5.egg/pkg_resources/__init__.py", line 2238, in resolve
2016-10-10T04:44:46.082291+00:00 app[web.1]: AttributeError: module 'corefinance' has no attribute 'main'
2016-10-10T04:44:46.082295+00:00 app[web.1]:
2016-10-10T04:44:46.082296+00:00 app[web.1]: During handling of the above exception, another exception occurred:
2016-10-10T04:44:46.082297+00:00 app[web.1]:
2016-10-10T04:44:46.082299+00:00 app[web.1]: Traceback (most recent call last):
2016-10-10T04:44:46.082334+00:00 app[web.1]: File "runapp.py", line 8, in <module>
2016-10-10T04:44:46.082511+00:00 app[web.1]: app = loadapp('config:production.ini', relative_to='./corefinance/')
2016-10-10T04:44:46.082519+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2016-10-10T04:44:46.082666+00:00 app[web.1]: return loadobj(APP, uri, name=name, **kw)
2016-10-10T04:44:46.082668+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
2016-10-10T04:44:46.082894+00:00 app[web.1]: global_conf=global_conf)
2016-10-10T04:44:46.082898+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
2016-10-10T04:44:46.083142+00:00 app[web.1]: global_conf=global_conf)
2016-10-10T04:44:46.083165+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
2016-10-10T04:44:46.083511+00:00 app[web.1]: return loader.get_context(object_type, name, global_conf)
2016-10-10T04:44:46.083514+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 454, in get_context
2016-10-10T04:44:46.083857+00:00 app[web.1]: section)
2016-10-10T04:44:46.083862+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use
2016-10-10T04:44:46.084217+00:00 app[web.1]: object_type, name=use, global_conf=global_conf)
2016-10-10T04:44:46.084221+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 406, in get_context
2016-10-10T04:44:46.084536+00:00 app[web.1]: global_conf=global_conf)
2016-10-10T04:44:46.084539+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
2016-10-10T04:44:46.084822+00:00 app[web.1]: global_conf=global_conf)
2016-10-10T04:44:46.084826+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg
2016-10-10T04:44:46.085119+00:00 app[web.1]: return loader.get_context(object_type, name, global_conf)
2016-10-10T04:44:46.085123+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 620, in get_context
2016-10-10T04:44:46.085560+00:00 app[web.1]: object_type, name=name)
2016-10-10T04:44:46.085561+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 646, in find_egg_entry_point
2016-10-10T04:44:46.086013+00:00 app[web.1]: possible.append((entry.load(), protocol, entry.name))
2016-10-10T04:44:46.086015+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/setuptools-25.2.0-py3.5.egg/pkg_resources/__init__.py", line 2230, in load
2016-10-10T04:44:46.086217+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.5/site-packages/setuptools-25.2.0-py3.5.egg/pkg_resources/__init__.py", line 2240, in resolve
2016-10-10T04:44:46.086408+00:00 app[web.1]: ImportError: module 'corefinance' has no attribute 'main'
</code></pre>
<p>run</p>
<pre><code>#!/bin/bash
set -e
python ./corefinance/setup.py develop
python runapp.py
</code></pre>
<p>runapp.py</p>
<pre><code>import os
from paste.deploy import loadapp
from waitress import serve
if __name__ == "__main__":
port = int(os.environ.get("PORT", 5000))
app = loadapp('config:production.ini', relative_to='./corefinance/')
serve(app, host='0.0.0.0', port=port)
</code></pre>
<p>./corefinance/setup.py</p>
<pre><code>import os
from setuptools import setup, find_packages
here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(here, 'README.txt')) as f:
README = f.read()
with open(os.path.join(here, 'CHANGES.txt')) as f:
CHANGES = f.read()
requires = [
'setuptools',
'markupsafe',
'pyramid',
'pyramid_chameleon',
'pyramid_debugtoolbar',
'pyramid_tm',
'SQLAlchemy',
'transaction',
'zope.sqlalchemy',
'waitress',
'docutils',
'pyramid_exclog',
'cryptacular',
'pycrypto',
'webtest',
]
setup(name='corefinance',
version='0.0',
description='corefinance',
long_description=README + '\n\n' + CHANGES,
classifiers=[
"Programming Language :: Python",
"Framework :: Pyramid",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
],
author='',
author_email='',
url='',
keywords='web wsgi bfg pylons pyramid',
packages=find_packages(),
include_package_data=True,
zip_safe=False,
test_suite='corefinance',
install_requires=requires,
entry_points="""\
[paste.app_factory]
main = corefinance:main
[console_scripts]
initialize_corefinance_db = corefinance.scripts.initializedb:main
""",
)
</code></pre>
<p>./corefinance/corefinance/__init__.py</p>
<pre><code>from pyramid.config import Configurator
from sqlalchemy import engine_from_config
from configparser import SafeConfigParser
import os
from pyramid.authentication import AuthTktAuthenticationPolicy
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.session import SignedCookieSessionFactory
from .security import groupfinder
from sqlalchemy import engine_from_config
from corefinance.models.meta import DBSession
from corefinance.models.utilities import RootFactory
from corefinance.models.meta import Base
def main(global_config, **settings):
""" This function returns a Pyramid WSGI application.
"""
parser = SafeConfigParser()
db_ini_file = settings['db_ini_file']
iniloc = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '..', db_ini_file))
read_list = parser.read(iniloc)
connstring = parser.get('postgres', 'connstring')
settings['sqlalchemy.url'] = connstring
engine = engine_from_config(settings, 'sqlalchemy.')
DBSession.configure(bind=engine)
session_factory = SignedCookieSessionFactory(
settings['session.secret']
)
authn_policy = AuthTktAuthenticationPolicy(
settings['session.secret'], callback=groupfinder, hashalg='sha512')
authz_policy = ACLAuthorizationPolicy()
config = Configurator(
settings=settings,
root_factory=RootFactory,
authentication_policy=authn_policy,
authorization_policy=authz_policy,
session_factory=session_factory
)
Base.metadata.bind = engine
config.include('pyramid_chameleon')
config.include(addroutes)
config.scan()
return config.make_wsgi_app()
</code></pre>
| 1 | 2016-10-10T15:37:54Z | 40,113,034 | <p>I don't think there is a clear reason this isn't working. There's a few things you could improve though.</p>
<p>1) Most <code>setup.py</code> files expect to be executed from their own directory. When you run <code>python some_folder/setup.py develop</code> you are asking for a bad time. <code>pip</code> solves this for you, and you should switch to it by doing <code>pip install -e some_folder</code>.</p>
<p>2) Make sure there isn't an <code>__init__.py</code> in the same folder as the <code>setup.py</code>, as that could confuse your <code>runapp.py</code> script, which relies on the <code>PYTHONPATH</code> to discover the code. The current folder is usually on the path, meaning the folder containing your <code>setup.py</code> could itself be considered the <code>corefinance</code> package (and no <code>main</code> is found in that <code>__init__.py</code>).</p>
<p>The big test to figure this stuff out is just to try and run <code>python</code> in the same environment that's failing and try to <code>import corefinance</code> and see what you get. This should give you some clues as to why it's not working.</p>
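<p>A tiny, generic version of that import check (the helper function is mine; in the failing environment you'd call it with <code>"corefinance"</code>):</p>

```python
import importlib

def has_entry_point(module_name, attr="main"):
    # True only if the module imports cleanly AND exposes the attribute
    # that the [paste.app_factory] entry point (main = corefinance:main)
    # needs to resolve.
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# Stand-in demo with a stdlib module; on Heroku you'd check "corefinance".
print(has_entry_point("json", "loads"))
```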
| 1 | 2016-10-18T16:04:04Z | [
"python",
"heroku",
"pyramid"
] |
Select rows of DataFrame with datetime index based on date | 39,961,862 | <p>From the following DataFrame with datetime index</p>
<pre><code> 'A'
2015-02-17 14:31:00+00:00 127.2801
2015-02-17 14:32:00+00:00 127.7250
2015-02-17 14:33:00+00:00 127.8010
2015-02-17 14:34:00+00:00 127.5450
2015-02-17 14:35:00+00:00 127.6300
...
2016-02-17 20:56:00+00:00 98.0900
2016-02-17 20:57:00+00:00 98.0901
2016-02-17 20:58:00+00:00 98.1000
2016-02-17 20:59:00+00:00 98.0500
2016-02-17 21:00:00+00:00 98.1100
</code></pre>
<p>I want to select all rows with a certain date, e.g. 2015-02-17. </p>
<p>What's the best way to achieve that?</p>
| 1 | 2016-10-10T15:38:58Z | 39,961,908 | <p><code>DatetimeIndex</code> supports partial datetime strings for <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#datetimeindex-partial-string-indexing" rel="nofollow">label based indexing</a>:</p>
<pre><code>In [18]:
df.loc['2015-02-17']
Out[18]:
A
2015-02-17 14:31:00 127.2801
2015-02-17 14:32:00 127.7250
2015-02-17 14:33:00 127.8010
2015-02-17 14:34:00 127.5450
2015-02-17 14:35:00 127.6300
</code></pre>
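<p>An equivalent, more explicit filter compares the index's dates directly (the toy frame below is mine, built from a few of the question's sample values):</p>

```python
import datetime

import pandas as pd

idx = pd.to_datetime(['2015-02-17 14:31', '2015-02-17 14:32',
                      '2016-02-17 20:56'])
df = pd.DataFrame({'A': [127.2801, 127.7250, 98.09]}, index=idx)

# .date gives an array of datetime.date objects, so a plain comparison
# builds a boolean mask selecting every row on that calendar day.
mask = df.index.date == datetime.date(2015, 2, 17)
print(df[mask])
```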
| 2 | 2016-10-10T15:42:03Z | [
"python",
"datetime",
"pandas"
] |
How to pull out number in string with python? | 39,961,883 | <p>I have a project where I pull out some posts of a subreddit (/r/buildapcsales) and email me with some deals.</p>
<p>For example: </p>
<ol>
<li>[Monitor] AOC 21.5" 1080p 75Hz 1ms FreeSync Monitor - <strong>$105</strong> ($229.99 - $110 sale - $15 promo thru 10/13)</li>
<li>[Monitor] EQD Auria EQ278CG 27 inch 144hz 1080p 3ms - <strong>$149.99</strong> (Newegg Flash)</li>
<li>[Monitor] EQD Auria EQ248CG 24 inch 144hz 1080p 3ms - <strong>$128.99</strong> (Newegg Flash)</li>
<li>[Monitor] Acer CB280HK 4k TN 1ms 60hz - <strong>$249.99</strong> ($449.99 - $200 Instant Rebate)</li>
<li>[Monitor] Acer K272HUL Ebmidpx 27" 2560 x 1440 1ms VESA Mountable - <strong>$239.99</strong> ($60 off)</li>
</ol>
<p>I want to pull out the bolded numbers and compare them to a threshold (<=200), but the problem is I can't use regex because that will pull out the calculations (i.e. $449.99 - $200 Instant Rebate) on the right also.</p>
<p>Is there another more clever way to do this? I'm completely lost.</p>
| -2 | 2016-10-10T15:40:27Z | 39,961,939 | <blockquote>
<p>I can't use regex because that will pull out the calculations (i.e. $449.99 - $200 Instant Rebate) on the right also.</p>
</blockquote>
<p>You can still use regular expressions and extract <em>the first amount coming after the dash</em>:</p>
<pre><code>import re
lines = [
'1. [Monitor] AOC 21.5" 1080p 75Hz 1ms FreeSync Monitor - $105 ($229.99 - $110 sale - $15 promo thru 10/13)',
'2. [Monitor] EQD Auria EQ278CG 27 inch 144hz 1080p 3ms - $149.99 (Newegg Flash)',
'3. [Monitor] EQD Auria EQ248CG 24 inch 144hz 1080p 3ms - $128.99 (Newegg Flash)',
'4. [Monitor] Acer CB280HK 4k TN 1ms 60hz - $249.99 ($449.99 - $200 Instant Rebate)',
'5. [Monitor] Acer K272HUL Ebmidpx 27â 2560 x 1440 1ms VESA Mountable - $239.99 ($60 off)'
]
pattern = re.compile(r"- \$(\d+)")
for line in lines:
print(pattern.search(line).group(1))
</code></pre>
<p>Prints:</p>
<pre><code>105
149
128
249
239
</code></pre>
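<p>To finish the OP's use case — comparing against a threshold — the capture can also include the cents with an optional group (my tweak to the pattern above) and then be converted to <code>float</code>:</p>

```python
import re

pattern = re.compile(r"- \$(\d+(?:\.\d+)?)")

def under_threshold(line, limit=200):
    # search() finds the leftmost "- $..." -- the deal price -- so the
    # rebate arithmetic inside the parentheses never wins.
    m = pattern.search(line)
    return m is not None and float(m.group(1)) <= limit

print(under_threshold('... 3ms - $149.99 (Newegg Flash)'))
print(under_threshold('... 60hz - $249.99 ($449.99 - $200 Instant Rebate)'))
```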
| 2 | 2016-10-10T15:43:57Z | [
"python",
"regex",
"string",
"reddit"
] |