title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
python *args and **kwargs
| 39,596,753 |
<p>I have seen the previous Stack Overflow posts on this topic, but I am still unable to use these two constructs when I run my function. I have coded a demo example of a simple moving average that I would like to run through <code>*args</code>/<code>**kwargs</code>.</p>
<pre><code>import numpy as np

def moving_average(data,lookback=7,SMA=True): #MA function
    if SMA==True:
        weights=np.repeat(1.0,lookback)/lookback
        smas=np.convolve(data,weights,'valid')
        return smas
</code></pre>
<p>Just running this function works as expected.</p>
<pre><code>data=np.random.randn(100) # randomly generated sample data
moving_average(data,lookback=7,SMA=True) # this outputs the correct set of numbers
</code></pre>
<p>However, the second I try to add args and kwargs, it breaks down.</p>
<pre><code>def test1(*args,**kwargs):
    return moving_average(data,lookback,SMA)

test1(data,lookback=7,SMA=True) # this returns an error, saying my global lookback is not defined
</code></pre>
<p>What exactly in the <code>*args</code>/<code>**kwargs</code> logic am I getting wrong? I've tried inputting both a tuple and a dictionary, but neither of those seems to work.</p>
| -4 |
2016-09-20T14:27:21Z
| 39,596,847 |
<p>Pass the <code>*args</code> and <code>**kwargs</code> on to your function, not the bare argument and named-argument names:</p>
<pre><code>def test1(*args, **kwargs):
    return moving_average(*args, **kwargs)
</code></pre>
| 1 |
2016-09-20T14:31:25Z
|
[
"python",
"args",
"kwargs"
] |
python *args and **kwargs
| 39,596,753 |
| -4 |
2016-09-20T14:27:21Z
| 39,596,866 |
<p>In your example using <code>*args</code> and <code>**kwargs</code>:</p>
<pre><code>def test1(*args,**kwargs):
    return moving_average(data,lookback,SMA)
</code></pre>
<p><code>data</code>, <code>lookback</code> and <code>SMA</code> are no longer defined. It could be:</p>
<pre><code>def test1(*args, **kwargs):
    return moving_average(args[0], kwargs['lookback'], kwargs['SMA'])
</code></pre>
<p>or</p>
<pre><code>def test1(*args, **kwargs):
    return moving_average(*args, **kwargs)
</code></pre>
<p>The Python tutorial has a section that might help: <a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">Unpacking Argument Lists</a></p>
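<p>The unpacking works in both directions; a small illustrative sketch (the toy function <code>f</code> is invented for illustration):</p>

```python
def f(a, b, c):
    return a + b + c

args = (1, 2, 3)                   # a tuple unpacks into positional arguments
kwargs = {'a': 1, 'b': 2, 'c': 3}  # a dict unpacks into keyword arguments

print(f(*args))    # 6
print(f(**kwargs)) # 6
```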
| 1 |
2016-09-20T14:32:04Z
|
[
"python",
"args",
"kwargs"
] |
python *args and **kwargs
| 39,596,753 |
| -4 |
2016-09-20T14:27:21Z
| 39,596,904 |
<pre><code>def test1(*args,**kwargs):
</code></pre>
<p>Your function now has two local variables, <code>args</code> and <code>kwargs</code>. One holds the positional arguments that were passed in (as a tuple), the other the keyword arguments (as a dictionary).</p>
<pre><code>return moving_average(data,lookback,SMA)
</code></pre>
<p>Here you use three variable names (<code>data</code>, <code>lookback</code> and <code>SMA</code>) that don't exist in your function, so you get an error.</p>
<p>You could have done</p>
<pre><code>return moving_average(args[0], kwargs['lookback'], kwargs['SMA'])
</code></pre>
<p>But then your <code>test1</code> function will only work with an exact call like <code>test1(data,lookback=7,SMA=True)</code>. A call like <code>test1(data, 7, True)</code> won't work, as then the parameters are all in <code>args</code> and none in <code>kwargs</code>.</p>
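<p>A quick way to see how arguments are split between the two containers (a minimal sketch):</p>

```python
def test1(*args, **kwargs):
    # Everything passed positionally lands in the tuple args; everything
    # passed by name lands in the dict kwargs.
    return args, kwargs

data = [1, 2, 3]
print(test1(data, lookback=7, SMA=True))  # (([1, 2, 3],), {'lookback': 7, 'SMA': True})
print(test1(data, 7, True))               # (([1, 2, 3], 7, True), {})
```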
<p>Your function could also pass on the parameters exactly as it received them:</p>
<pre><code>return moving_average(*args, **kwargs)
</code></pre>
<p>That works, but there's no benefit: <code>test1</code> just calls <code>moving_average</code> and returns its result, so you might as well call <code>moving_average</code> directly.</p>
| 1 |
2016-09-20T14:33:39Z
|
[
"python",
"args",
"kwargs"
] |
Intercepting python interpreter code before it executes
| 39,596,768 |
<p>Is it possible to intercept interpreter's code before it executes?</p>
<p>Let's say I want to handle a case like:</p>
<pre><code>>>> for f in glob.glob('*.*'): # I'd like to intercept this code before it executes
... something_to_do(f) # and play with it in some dangerous fashion :)
...
ERROR: Using glob not allowed. # e.g.
</code></pre>
<p>But there are tons of other examples (like altering the code, or sending it somewhere).</p>
<p>I can write my own interpreter, but that's not really the point.</p>
| 0 |
2016-09-20T14:27:49Z
| 39,636,924 |
<p>OK, solved it by creating a new module which starts a new interpreter instance and filters the input.</p>
<p>I just put the code below in the module and import it.</p>
<pre><code>import code

class GlobeFilterConsole(code.InteractiveConsole):
    def push(self, line):
        self.buffer.append(line)
        source = "\n".join(self.buffer)
        if 'glob' in source:  # do whatever you want with the source
            print('glob usage not allowed')
            more = False
        else:
            more = self.runsource(source, self.filename)
        if not more:
            self.resetbuffer()
        return more

console = GlobeFilterConsole()
console.interact()
</code></pre>
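<p>The filter can also be exercised without an interactive session by calling <code>push()</code> directly. A small sketch (note it additionally resets the buffer after blocking a line, which the class above omits, so a rejected line does not poison later input):</p>

```python
import code
import contextlib
import io

class GlobeFilterConsole(code.InteractiveConsole):
    def push(self, line):
        self.buffer.append(line)
        source = "\n".join(self.buffer)
        if 'glob' in source:  # intercept before execution
            print('glob usage not allowed')
            self.resetbuffer()  # drop the rejected input
            return False
        more = self.runsource(source, self.filename)
        if not more:
            self.resetbuffer()
        return more

console = GlobeFilterConsole()
out = io.StringIO()
with contextlib.redirect_stdout(out):
    console.push("import glob")  # blocked before it runs
    console.push("x = 2 + 2")    # runs normally
blocked = 'not allowed' in out.getvalue()
print(blocked)  # True
```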
| 0 |
2016-09-22T10:45:13Z
|
[
"python",
"interpreter"
] |
logging - merging multiple configuration files
| 39,596,802 |
<p>I am working on a project where we have a core application that loads multiple plugins.
Each plugin has its own configuration file, and the core application has one as well.</p>
<p>We are using the excellent logging module from python's standard library.
The logging module includes the ability to load the logging configuration from an .ini file.
However, if you load a second configuration file, the previous configuration is discarded and only the new one is used.</p>
<p>What I would like to do is to split my logging configuration into multiple files, so that the application can load its own configuration file and then load each plugin's file, merging its logging configuration into the main one.</p>
<p>Note: <code>fileConfig</code> has an option called <code>disable_existing_loggers</code> that can be set to False. However, this only keeps existing loggers alive; it still clears the internal map of handlers (which means that a plugin's configuration cannot use a handler defined in the application's config file).</p>
<p>I could merge the files manually to produce my own config, but I'd rather avoid that.</p>
<p>Thanks.</p>
<hr>
<p>To make it clearer, I'd like to do something like this:</p>
<pre><code># application.ini
[loggers]
keys=root,app

[handlers]
keys=rootHandler,appHandler

[formatters]
keys=myformatter

[logger_root]
# stuff

[handler_rootHandler]
# stuff

[formatter_myformatter]
# stuff

...

# plugin.ini
[loggers]
keys=pluginLogger # no root logger

[handlers]
keys=pluginHandler # no root handler

# no formatters section

[logger_pluginLogger]
# stuff
formatter=myformatter # using the formatter from application.ini
</code></pre>
| 0 |
2016-09-20T14:29:10Z
| 39,699,279 |
<p>I couldn't find a way to do what I wanted, so I ended up rolling my own class to do it.</p>
<p>Here it is as a convenient <a href="https://gist.github.com/Ippo343/c2ab9d4ec03cc89b66211caf81b34545" rel="nofollow">github gist</a>.</p>
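<p>For reference, the same idea can be sketched with <code>dictConfig</code>-style fragments instead of .ini files (a hypothetical, simplified merger; the fragment contents below are invented for illustration, and the linked gist handles the .ini case):</p>

```python
import logging.config

# Merge several dictConfig-style fragments, then apply the combined
# configuration once. Each plugin contributes formatters/handlers/loggers,
# and later fragments can reference names defined by earlier ones.
def merge_logging_configs(*fragments):
    merged = {"version": 1, "disable_existing_loggers": False,
              "formatters": {}, "handlers": {}, "loggers": {}}
    for frag in fragments:
        for section in ("formatters", "handlers", "loggers"):
            merged[section].update(frag.get(section, {}))
        if "root" in frag:
            merged["root"] = frag["root"]
    return merged

app_cfg = {
    "formatters": {"myformatter": {"format": "%(name)s: %(message)s"}},
    "handlers": {"appHandler": {"class": "logging.StreamHandler",
                                "formatter": "myformatter"}},
    "root": {"handlers": ["appHandler"], "level": "INFO"},
}
plugin_cfg = {
    # The plugin's logger reuses the application's handler and formatter.
    "loggers": {"plugin": {"handlers": ["appHandler"], "level": "DEBUG"}},
}

logging.config.dictConfig(merge_logging_configs(app_cfg, plugin_cfg))
logging.getLogger("plugin").debug("plugin logging is wired up")
```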
| 0 |
2016-09-26T09:33:23Z
|
[
"python",
"python-2.7"
] |
What kind of algorithm would I need to compute a name that equals a number?
| 39,596,808 |
<p>For instance A = 1, B = 2 ... Z = 26.
The name is auto-generated and the sum of its letters must equal, let's say, 200.
e.g.</p>
<pre><code> A = 1
AB = 3 etc
</code></pre>
<p>What steps would be taken in creating a function to create an array of auto generated strings that has a value of 200?</p>
<p>How would this be done in Python?</p>
| -4 |
2016-09-20T14:29:39Z
| 39,597,036 |
<p>You want to generate a string under some constraints, which is a kind of optimisation problem. However, we don't need any machine learning here.</p>
<p>One possible solution might look like this (sorry, no python, just pseudo code)</p>
<ol>
<li><code>name = ""</code> #initialize name variable</li>
<li><code>place = selectPlaceWhereToInsertNextCharacter(name)</code></li>
<li><code>char = selectNextRandomCharacter()</code></li>
<li>update <code>name</code> putting <code>char</code> in <code>place</code></li>
<li>If the value of <code>name</code> is < 200, go to 2</li>
</ol>
<p>Comments:</p>
<ul>
<li>The function <code>selectNextRandomCharacter</code> must select a character whose value is no greater than the remaining budget of the name built so far.
Let's say the name built so far has a value of <code>174</code>. The next letter you select cannot be worth more than <code>200 - 174 = 26</code>. Otherwise you will overflow.
To do this, just select an integer within the appropriate range and map it to a char.</li>
<li><code>selectPlaceWhereToInsertNextCharacter(name)</code> just selects where you will put the next character. For example, if the name is <code>"pawel"</code> there are 6 places where you could mix in a new letter. This is because I assumed that the name must be quite random.</li>
<li>The value of a name can be checked with this Python snippet: <code>sum(map(lambda x: ord(x)-64, name))</code> (assuming uppercase letters).</li>
</ul>
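<p>The pseudocode above can be turned into a rough Python sketch (illustrative only; it uses lowercase letters, a=1 .. z=26):</p>

```python
import random

def name_value(name):
    # Letter values: a=1, b=2, ..., z=26.
    return sum(ord(c) - ord('a') + 1 for c in name)

def generate_name(target=200):
    name = ""
    while name_value(name) < target:
        remaining = target - name_value(name)
        # Never pick a letter worth more than the remaining budget,
        # so the total lands on the target exactly.
        value = random.randint(1, min(26, remaining))
        char = chr(ord('a') + value - 1)
        place = random.randint(0, len(name))  # random insertion point
        name = name[:place] + char + name[place:]
    return name

name = generate_name(200)
print(name, name_value(name))  # the value is exactly 200
```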
| 2 |
2016-09-20T14:39:08Z
|
[
"python"
] |
Change if condition dynamically? (python)
| 39,596,834 |
<p>Hey guys, I want to change the if condition in a function dynamically.</p>
<pre><code>def func(<):
    if y<x:
        return x

def func(=):
    if y=x:
        return x
</code></pre>
<p>I just want the condition changed, any ideas?</p>
| -3 |
2016-09-20T14:30:43Z
| 39,597,192 |
<p>You cannot use operators like <code>=</code> and <code><</code> directly, but you can import the corresponding functions from <code>operator</code>:</p>
<pre><code>from operator import lt, eq

def func(op):
    if op(x, y):
        return x

func(lt)  # x < y
func(eq)  # x == y
</code></pre>
<p>BTW the comparison operator is <code>==</code> not <code>=</code>.</p>
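<p>A self-contained sketch of the same idea, with <code>x</code> and <code>y</code> made explicit parameters rather than globals:</p>

```python
from operator import eq, lt

# The comparison is passed in as a function argument.
def func(op, x, y):
    if op(x, y):
        return x
    return None

print(func(lt, 3, 5))  # 3, because 3 < 5
print(func(eq, 3, 5))  # None, because 3 != 5
```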
| 0 |
2016-09-20T14:46:31Z
|
[
"python",
"function",
"python-3.x",
"if-statement"
] |
PyQt: How to set window size to match widget size?
| 39,596,868 |
<p>I am writing a PyQt application with a menu and a central widget (a table widget with rows and columns that looks like a spreadsheet). I want to set the size of the enclosing window to be the size of the widget. I do not want to fix it to some size and waste space around the border for aesthetic reasons. How do I extract the overall width of the table widget? I tried to use <code>tableWidget.width()</code>, but that seems to give me the width of a cell. I am using PyQt4 and Python 2.7.</p>
| 0 |
2016-09-20T14:32:08Z
| 39,600,345 |
<p>I'm not sure if there is a simple way to get the size of the table, but how about something like this?</p>
<pre><code>from PyQt4 import QtCore, QtGui

class MainWindow(QtGui.QMainWindow):
    def __init__(self, table_rows, table_cols, **kwargs):
        super(MainWindow, self).__init__(**kwargs)
        self.table = QtGui.QTableWidget(table_rows, table_cols)
        self.table.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        self.table.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        self.table.verticalHeader().sectionResized.connect(self.fitToTable)
        self.table.horizontalHeader().sectionResized.connect(self.fitToTable)
        self.setCentralWidget(self.table)
        self.fitToTable()

    @QtCore.pyqtSlot()
    def fitToTable(self):
        x = self.table.verticalHeader().size().width()
        for i in range(self.table.columnCount()):
            x += self.table.columnWidth(i)
        y = self.table.horizontalHeader().size().height()
        for i in range(self.table.rowCount()):
            y += self.table.rowHeight(i)
        self.setFixedSize(x, y)

if __name__ == '__main__':
    app = QtGui.QApplication([])
    win = MainWindow(4, 5)
    win.show()
    app.exec_()
</code></pre>
<p>This example computes the size of the table and fixes the window size to those values. It also resizes the window to fit if the user manually resizes any of the rows or columns.</p>
| 1 |
2016-09-20T17:29:50Z
|
[
"python",
"pyqt",
"pyqt4"
] |
How to put sequential numbers into lists from a list
| 39,596,936 |
<p>I have a list of numbers, say,</p>
<p><code>[1,2,3,6,8,9,10,11]</code></p>
<p>First, I want to get the sum of the differences (step size) between the numbers <code>(n, n+1)</code> in the list.</p>
<p>Second, if a set of consecutive numbers has a difference of 1 between adjacent members, put them in a list, i.e. there are two such lists in this example,</p>
<p><code>[1,2,3]</code></p>
<p><code>[8,9,10,11]</code></p>
<p>and then put the remaining numbers in another list, i.e. there is only one such list in the example,</p>
<p><code>[6]</code>.</p>
<p>Third, get the lists with the max/min sizes from the sequential lists, i.e. <code>[1,2,3]</code>, <code>[8,9,10,11]</code> in this example, the max list is,</p>
<p><code>[8,9,10,11]</code></p>
<p>min list is</p>
<p><code>[1,2,3]</code>.</p>
<p>What's the best way to implement this?</p>
| 0 |
2016-09-20T14:34:55Z
| 39,597,210 |
<p>I wonder if you've already got the answer to this (given the missing 4 from your answers) as the first thing I naively tried produced that answer. (That and/or it reads like a homework question)</p>
<pre><code>>>> a=[1,2,3,4,6,8,9,10,11]
>>> sum([a[x+1] - a[x] for x in range(len(a)-1)])
10
>>> [a[x] for x in range(len(a)-1) if abs(a[x] - a[x+1]) ==1]
[1, 2, 3, 8, 9, 10]
</code></pre>
<p>Alternatively, try :</p>
<pre><code>a=[1,2,3,6,8,9,10,11]
sets = []
cur_set = set()
total_diff = 0

for index in range(len(a)-1):
    total_diff += a[index +1] - a[index]
    if a[index +1] - a[index] == 1:
        cur_set = cur_set | set([ a[index +1], a[index]])
    else:
        if len(cur_set) > 0:
            sets.append(cur_set)
            cur_set = set()

if len(cur_set) > 0:
    sets.append(cur_set)

all_seq_nos = set()
for seq_set in sets:
    all_seq_nos = all_seq_nos | seq_set
non_seq_set = set(a) - all_seq_nos

print("Sum of differences is {0:d}".format(total_diff))
print("sets of sequential numbers are :")
for seq_set in sets:
    print(sorted(list(seq_set)))
print("set of non-sequential numbers is :")
print(sorted(list(non_seq_set)))

big_set=max(sets, key=sum)
sml_set=min(sets, key=sum)
print ("Biggest set of sequential numbers is :")
print (sorted(list(big_set)))
print ("Smallest set of sequential numbers is :")
print (sorted(list(sml_set)))
</code></pre>
<p>Which will produce the output :</p>
<pre><code>Sum of differences is 10
sets of sequential numbers are :
[1, 2, 3]
[8, 9, 10, 11]
set of non-sequential numbers is :
[6]
Biggest set of sequential numbers is :
[8, 9, 10, 11]
Smallest set of sequential numbers is :
[1, 2, 3]
</code></pre>
<p>Hopefully that all helps ;-)</p>
| 1 |
2016-09-20T14:47:31Z
|
[
"python",
"list"
] |
How to put sequential numbers into lists from a list
| 39,596,936 |
| 0 |
2016-09-20T14:34:55Z
| 39,597,329 |
<blockquote>
<p>First, I want to get the sum of the differences (step size) between
the numbers <code>(n, n+1)</code> in the list.</p>
</blockquote>
<p>Use <code>sum</code> on the successive differences of elements in the list:</p>
<pre><code>>>> lst = [1, 2, 3, 4, 6, 8, 9, 10, 11]
>>> sum(lst[i] - x for i, x in enumerate(lst[:-1], start=1))
10
</code></pre>
<hr>
<blockquote>
<p>Second, if a set of consecutive numbers having a difference of 1 between them, put them in a list, i.e. there are two such lists in
this example, and then put the rest numbers in another list, i.e.
there is only one such list in the example,</p>
</blockquote>
<p><a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a> does this by grouping on the difference of each element on a reference <a href="https://docs.python.org/2/library/itertools.html#itertools.count" rel="nofollow"><code>itertools.count</code></a> object:</p>
<pre><code>>>> from itertools import groupby, count
>>> c = count()
>>> result = [list(g) for i, g in groupby(lst, key=lambda x: x-next(c))]
>>> result
[[1, 2, 3, 4], [6], [8, 9, 10, 11]]
</code></pre>
<hr>
<blockquote>
<p>Third, get the lists with the max/min sizes from above</p>
</blockquote>
<p><code>max</code> and <code>min</code> with the <em>key function</em> as <code>sum</code>:</p>
<pre><code>>>> max(result, key=sum)
[8, 9, 10, 11]
>>> min(result, key=sum)
[6]
</code></pre>
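<p>Since the question asks for the biggest/smallest runs by <em>size</em>, <code>key=len</code> is arguably a closer fit than <code>key=sum</code>; a sketch building on the same <code>groupby</code> approach (for this input both keys happen to pick the same lists):</p>

```python
from itertools import groupby, count

lst = [1, 2, 3, 6, 8, 9, 10, 11]
c = count()
# Consecutive numbers share the same (value - position) key.
runs = [list(g) for _, g in groupby(lst, key=lambda x: x - next(c))]
seq = [r for r in runs if len(r) > 1]                # consecutive runs
rest = [x for r in runs if len(r) == 1 for x in r]   # leftover singletons
print(seq)                # [[1, 2, 3], [8, 9, 10, 11]]
print(rest)               # [6]
print(max(seq, key=len))  # [8, 9, 10, 11]
print(min(seq, key=len))  # [1, 2, 3]
```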
| 4 |
2016-09-20T14:52:36Z
|
[
"python",
"list"
] |
Python - PARAMIKO SSH close session
| 39,596,978 |
<p>I need help terminating my SSH session after my sendShell object runs through list commandfactory[].</p>
<p>I have a python script where I use paramiko to connect to a cisco lab router via ssh, execute the commands in commandfactory[], and output the results to standard out. Everything seems to work, except that I can't get the SSH session to close after all my commands have run. The session simply stays open until I terminate my script.</p>
<pre><code>import threading, paramiko, re, os

class ssh:
    shell = None
    client = None
    transport = None

    def __init__(self, address, username, password):
        print("Connecting to server on ip", str(address) + ".")
        self.client = paramiko.client.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
        self.client.connect(address, username=username, password=password, look_for_keys=False)
        self.transport = paramiko.Transport((address, 22))
        self.transport.connect(username=username, password=password)
        thread = threading.Thread(target=self.process)
        thread.daemon = True
        thread.start()

    def closeConnection(self):
        if(self.client != None):
            self.client.close()
            self.transport.close()

    def openShell(self):
        self.shell = self.client.invoke_shell()

    def sendShell(self):
        self.commandfactory = []
        print("\nWelcome to Command Factory. Enter Commands you want to execute.\nType \"done\" when you are finished:")
        while not re.search(r"done.*", str(self.commandfactory)):
            self.commandfactory.append(input(":"))
            if self.commandfactory[-1] == "done":
                del self.commandfactory[-1]
                break
        print ("Here are the commands you're going to execute:\n" + str(self.commandfactory))
        if(self.shell):
            self.shell.send("enable" + "\n")
            self.shell.send("ilovebeer" + "\n")
            self.shell.send("term len 0" + "\n")
            for cmdcnt in range(0,len(self.commandfactory)):
                self.shell.send(self.commandfactory[cmdcnt] + "\n")
            self.shell.send("exit" + "\n")
            self.shell.send("\n")
        else:
            print("Shell not opened.")

    def process(self):
        global connection
        while True:
            # Print data when available
            if self.shell != None and self.shell.recv_ready():
                alldata = self.shell.recv(1024)
                while self.shell.recv_ready():
                    alldata += self.shell.recv(1024)
                strdata = str(alldata, "utf8")
                strdata.strip()
                print(strdata, end = "")

sshUsername = "adrian"
sshPassword = "ilovebeer"
sshServer = "10.10.254.129"

connection = ssh(sshServer, sshUsername, sshPassword)
connection.openShell()
while True:
    connection.sendShell()
</code></pre>
<p>I would like the SSH session to terminate once all the commands in my "commandfactory" list have been run (code below).</p>
<pre><code>def sendShell(self):
    self.commandfactory = []
    print("\nWelcome to Command Factory. Enter Commands you want to execute.\nType \"done\" when you are finished:")
    while not re.search(r"done.*", str(self.commandfactory)):
        self.commandfactory.append(input(":"))
        if self.commandfactory[-1] == "done":
            del self.commandfactory[-1]
            break
    print ("Here are the commands you're going to execute:\n" + str(self.commandfactory))
    if(self.shell):
        self.shell.send("enable" + "\n")
        self.shell.send("ilovebeer" + "\n")
        self.shell.send("term len 0" + "\n")
        for cmdcnt in range(0,len(self.commandfactory)):
            self.shell.send(self.commandfactory[cmdcnt] + "\n")
        self.shell.send("exit" + "\n")
        self.shell.send("\n")
</code></pre>
</code></pre>
<p>My code was mainly taken from <a href="https://daanlenaerts.com/blog/2016/07/01/python-and-ssh-paramiko-shell/" rel="nofollow">https://daanlenaerts.com/blog/2016/07/01/python-and-ssh-paramiko-shell/</a>. Much thanks to Daan Lenaerts for a good blog. I did make my own changes to fit my needs.</p>
| 0 |
2016-09-20T14:36:31Z
| 39,601,845 |
<p>End the sendShell function with self.transport.close(), see <a href="http://docs.paramiko.org/en/2.0/api/transport.html" rel="nofollow">http://docs.paramiko.org/en/2.0/api/transport.html</a></p>
| 0 |
2016-09-20T18:58:38Z
|
[
"python",
"networking",
"ssh",
"cisco"
] |
Python - PARAMIKO SSH close session
| 39,596,978 |
| 0 |
2016-09-20T14:36:31Z
| 39,606,491 |
<p>Was able to solve it by adding <code>self.shell.transport.close()</code> after my iterator.</p>
<pre><code>def sendShell(self):
    self.commandfactory = []
    print("\nWelcome to Command Factory. Enter Commands you want to execute.\nType \"done\" when you are finished:")
    while not re.search(r"done.*", str(self.commandfactory)):
        self.commandfactory.append(input(":"))
        if self.commandfactory[-1] == "done":
            del self.commandfactory[-1]
            break
    print ("Here are the commands you're going to execute:\n" + str(self.commandfactory))
    if(self.shell):
        self.shell.send("enable" + "\n")
        self.shell.send("ilovebeer" + "\n")
        self.shell.send("term len 0" + "\n")
        for cmdcnt in range(0,len(self.commandfactory)):
            self.shell.send(self.commandfactory[cmdcnt] + "\n")
        self.shell.send("exit" + "\n")
        self.shell.transport.close()
</code></pre>
| 0 |
2016-09-21T02:16:32Z
|
[
"python",
"networking",
"ssh",
"cisco"
] |
download specific columns of csv using requests.get
| 39,596,979 |
<p>I am using <code>requests.get</code> to download a csv file. I only need two columns from this csv file; the rest of the columns are useless to me. Currently I am using</p>
<pre><code>r = requests.get(finalurl, verify=False,stream=True)
shutil.copyfileobj(r.raw, csvfile)
</code></pre>
<p>to get the complete csv file.</p>
<p>However, I only want to download two columns from the csv file. I can always download the entire content and then take what is necessary.</p>
<p>Just checking if there is a way to get specific columns using <code>requests.get</code>.
Eg: <a href="http://chart.finance.yahoo.com/table.csv?s=AAPL&a=7&b=20&c=2016&d=8&e=20&f=2016&g=d&ignore=.csv" rel="nofollow">http://chart.finance.yahoo.com/table.csv?s=AAPL&a=7&b=20&c=2016&d=8&e=20&f=2016&g=d&ignore=.csv</a> </p>
<p>I need only <code>date</code> and <code>Adj.close</code> from this csv file.</p>
<p>Couldn't find similar questions, please direct me if similar question was asked earlier.</p>
<p>Thanks</p>
| 0 |
2016-09-20T14:36:33Z
| 39,597,113 |
<p>I haven't used <code>requests.get</code> for this before, but I'd be surprised if you could request specific columns from within a file on a remote server.</p>
<p>Take a look at this post
<a href="http://stackoverflow.com/questions/26063231/read-specific-columns-with-pandas-or-other-python-module">Read specific columns with pandas or other python module</a></p>
<p>It provides a solution for opening specific columns, but the whole file would need downloading first.</p>
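<p>To illustrate the "download everything, keep only what you need" approach with just the standard library, here is a sketch that filters the two wanted columns while streaming through the rows. An in-memory CSV with invented sample data stands in for the HTTP response body:</p>

```python
import csv
import io

# The full file still has to be fetched; this just extracts the two
# columns of interest row by row.
csv_text = "Date,Open,Close,Adj Close\n2016-09-20,113.05,113.57,113.57\n"
reader = csv.DictReader(io.StringIO(csv_text))
rows = [(row["Date"], row["Adj Close"]) for row in reader]
print(rows)  # [('2016-09-20', '113.57')]
```

With a real response, you would wrap <code>r.text</code> (or an iterator over the response lines) the same way.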
| -1 |
2016-09-20T14:42:24Z
|
[
"python",
"csv",
"download",
"python-requests"
] |
download specific columns of csv using requests.get
| 39,596,979 |
| 0 |
2016-09-20T14:36:33Z
| 39,597,229 |
<p>You could use NumPy and <code>loadtxt</code>:</p>
<pre><code>import numpy as np

b = np.loadtxt(r'name.csv', dtype=str, delimiter=',', skiprows=1, usecols=(0, 1, 2))
</code></pre>
<p>This creates an array with data for only the columns you choose.</p>
| 1 |
2016-09-20T14:48:14Z
|
[
"python",
"csv",
"download",
"python-requests"
] |
download specific columns of csv using requests.get
| 39,596,979 |
| 0 |
2016-09-20T14:36:33Z
| 39,597,236 |
<p>Try <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>, in your situation, pandas is more convenient.</p>
<pre><code>In [2]: import pandas.io.data as web
...: aapl = web.DataReader("AAPL", 'yahoo','2016-7-20','2016-8-20')
...: aapl['Adj Close']
...:
...:
Out[2]:
Date
2016-07-20 99.421412
2016-07-21 98.894269
2016-07-22 98.128421
2016-07-25 96.815526
2016-07-26 96.149138
2016-07-27 102.395300
2016-07-28 103.777810
2016-07-29 103.648513
2016-08-01 105.478603
2016-08-02 103.917063
2016-08-03 105.220002
2016-08-04 105.870003
2016-08-05 107.480003
2016-08-08 108.370003
2016-08-09 108.809998
2016-08-10 108.000000
2016-08-11 107.930000
2016-08-12 108.180000
2016-08-15 109.480003
2016-08-16 109.379997
2016-08-17 109.220001
2016-08-18 109.080002
2016-08-19 109.360001
Name: Adj Close, dtype: float64
</code></pre>
| 2 |
2016-09-20T14:48:29Z
|
[
"python",
"csv",
"download",
"python-requests"
] |
download specific columns of csv using requests.get
| 39,596,979 |
<p>I am using <code>requests.get</code> to download a csv file. I only need two columns from this csv file and the rest of the columns are useless for me. Currently I am using </p>
<pre><code>r = requests.get(finalurl, verify=False,stream=True)
shutil.copyfileobj(r.raw, csvfile)
</code></pre>
<p>to get the complete csv file.</p>
<p>However, I only want to download two columns from the csv file. I can always download the entire content and then take what is necessary. </p>
<p>Just checking if there is a way to get specific column using <code>requests.get</code>
Eg: <a href="http://chart.finance.yahoo.com/table.csv?s=AAPL&a=7&b=20&c=2016&d=8&e=20&f=2016&g=d&ignore=.csv" rel="nofollow">http://chart.finance.yahoo.com/table.csv?s=AAPL&a=7&b=20&c=2016&d=8&e=20&f=2016&g=d&ignore=.csv</a> </p>
<p>I need only <code>date</code> and <code>Adj.close</code> from this csv file.</p>
<p>Couldn't find similar questions, please direct me if similar question was asked earlier.</p>
<p>Thanks</p>
| 0 |
2016-09-20T14:36:33Z
| 39,600,295 |
<p>You cannot download certain columns only, you can with the regular finance api. You don't have to download all the data in one go either though and then replace after, you can parse as you go:</p>
<pre><code>import csv
final_url = "http://chart.finance.yahoo.com/table.csv?s=AAPL&a=7&b=20&c=2016&d=8&e=20&f=2016&g=d&ignore=.csv"
with open("out.csv", "w") as out:
writer = csv.writer(out)
data = requests.get(final_url, verify=False, stream=True).iter_lines()
    headers = next(data).split(",")
    reader = csv.DictReader(data, fieldnames=headers)
writer.writerow(["Date", "Adj Close"])
for row in reader:
writer.writerow([row["Date"], row["Adj Close"]])
</code></pre>
<p>You could just index if the column order is guaranteed to never change but using the <em>DictReader</em> lets you access by key so order is irrelevant. I think it is also safe to presume there will not be any newlines nested in the data.</p>
| -1 |
2016-09-20T17:27:05Z
|
[
"python",
"csv",
"download",
"python-requests"
] |
How to update metadata of an existing object in AWS S3 using python boto3?
| 39,596,987 |
<p>boto3 documentation does not clearly specify how to update the user metadata of an already existing S3 Object.</p>
| 0 |
2016-09-20T14:36:51Z
| 39,596,988 |
<p>It can be done using the copy_from() method -</p>
<pre><code>import boto3
s3 = boto3.resource('s3')
s3_object = s3.Object('bucket-name', 'key')
s3_object.metadata.update({'id':'value'})
s3_object.copy_from(CopySource={'Bucket':'bucket-name', 'Key':'key'}, Metadata=s3_object.metadata, MetadataDirective='REPLACE')
</code></pre>
| 0 |
2016-09-20T14:36:51Z
|
[
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] |
Format string in python list
| 39,597,006 |
<p>I have a list which should contain string with a particular format or character i.e. <code>{{str(x), str(y)},{str(x), str(y)}}</code>. I tried to do string concat like: <code>"{{"+str(x), str(y)+"},{"+str(x), str(y)+"}}"</code> and append to list, but it gets surrounded by brackets: [<code>({{str(x), str(y)}),({str(x), str(y)}})</code>]
How can I get rid of the brackets or better still, is there a better approach to having a list without brackets like this: <code>[{{string, string},{string, string}}]</code></p>
| 0 |
2016-09-20T14:37:47Z
| 39,597,543 |
<p>The parentheses are because you're creating a tuple of three items:</p>
<ul>
<li><code>"{{"+str(x)</code></li>
<li><code>str(y)+"},{"+str(x)</code></li>
<li><code>str(y)+"}}"</code></li>
</ul>
<p>Try replacing those bare commas between <code>str(x)</code> and <code>str(y)</code> with <code>+","+</code>:</p>
<pre><code>"{{"+str(x)+","+str(y)+"},{"+str(x)+","str(y)+"}}"
</code></pre>
| 1 |
2016-09-20T15:02:08Z
|
[
"python"
] |
What is the difference between these two python methods?
| 39,597,015 |
<p>I am a python newbie. </p>
<p>I have one python method which returns the list recursively (previous is the dictionary of string and s is just a string that is included in the previous dictionary)</p>
<pre><code>def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
return [] if (s is None) else path(previous, previous[s]) + [s]
</code></pre>
<p>and this one which I believe should return the same result</p>
<pre><code>def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
if s is None:
return []
else:
path(previous, previous[s]) + [s]
</code></pre>
<p>I was expecting that functionality wise those two methods are exactly identical, it's just that the first one is more consice. However, when I run the second methods, </p>
<p>I am receiving following error: </p>
<blockquote>
<p>"TypeError: unsupported operand type(s) for +: 'NoneType' and 'list'"</p>
</blockquote>
<p>What am I doing wrong here?</p>
| 0 |
2016-09-20T14:38:11Z
| 39,597,078 |
<p>You're missing a <code>return</code> statement in the <em>else</em> branch of the second method:</p>
<pre><code>def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
if s is None:
return []
else:
return path(previous, previous[s]) + [s]
</code></pre>
<p>The first approach uses a <em>conditional expression</em> (the ternary operator) whose value (one of two) is <em>returned</em> by the single <code>return</code> statement; the second, written as an <code>if</code>/<code>else</code> statement, therefore needs a <code>return</code> statement in both branches.</p>
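To see where the <code>NoneType</code> in the error comes from: a Python function that falls off the end without an explicit <code>return</code> implicitly returns <code>None</code>. A minimal demonstration:

```python
def no_return():
    1 + 1  # the value is computed but never returned

result = no_return()
print(result)  # None
```

This is exactly the failure mode in the second version of <code>path()</code>: the recursive call evaluates to <code>None</code>, and <code>None + [s]</code> raises the <code>TypeError</code> from the question.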
| 4 |
2016-09-20T14:40:52Z
|
[
"python",
"breadth-first-search"
] |
Scrapy SgmlLinkExtractor how go get number inside span tag
| 39,597,065 |
<p>How can I get the integer number highlighted at this specific location:</p>
<p><a href="http://i.stack.imgur.com/YDsY4.png" rel="nofollow"><img src="http://i.stack.imgur.com/YDsY4.png" alt="number inside span tag"></a></p>
<p>I got the follwing XPath from Google Chrome:</p>
<pre><code>//*[@id="page"]/main/div[4]/div[2]/div[1]/div/div/div[2]/div[4]/div/div[1]/span
</code></pre>
<p>So I defined the following XPath statement with scrapy to retrieve the number:</p>
<pre><code>id = response.xpath('//*[@id="page"]/main/div[4]/div[2]/div[1]/div/div/div[2]/div[4]/div/div[1]/span').extract()
</code></pre>
<p>However the variable id remains empty, my spider doesn't seem to crawl any information. How should I rewrite the statement to have access to this specific element?</p>
| 0 |
2016-09-20T14:40:12Z
| 39,597,624 |
<p>As a rule, to avoid later debugging to have a stable run you need to avoid using absolute xpath's or any xpath that is not flexible on minor changes of the page structure.</p>
<p>From the information available in the picture, your xpath should be:</p>
<pre><code>//*[@class='nr']/span
</code></pre>
<p>For basic overview of the xpath rule you can take a look on <a href="http://www.w3schools.com/xsl/xpath_syntax.asp" rel="nofollow">w3schools xpath selectors</a></p>
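To illustrate the idea with something runnable, here is a sketch using the standard library's <code>ElementTree</code> on an assumed minimal fragment of the page (the real structure is only known from the screenshot); in the spider itself the equivalent would be <code>response.xpath("//*[@class='nr']/span/text()").extract()</code>:

```python
import xml.etree.ElementTree as ET

# Assumed stand-in for the page fragment in the screenshot: only the
# class="nr" wrapper around the span matters for the selector.
html = """
<div id="page">
  <main>
    <div class="nr"><span>12345</span></div>
  </main>
</div>
"""

root = ET.fromstring(html)
# Match by class instead of by position, like //*[@class='nr']/span
print(root.find(".//*[@class='nr']/span").text)  # 12345
```

Because the selector keys on the <code>class</code> attribute rather than a chain of <code>div</code> positions, it keeps working when the page layout shifts slightly.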
| 1 |
2016-09-20T15:05:56Z
|
[
"python",
"regex",
"xpath",
"scrapy"
] |
How to suppress all the print info from the Python Script?
| 39,597,102 |
<p>Is there an easy way to suppress all the print info from the python script globally ?</p>
<p>Have a scenario where I had put lot of print info in the script for debug purpose, but when the user runs I don't want the script to print all the infos.</p>
<p>So only when I pass a command line argument like debug=1 or something like that, the print needs to provide the info else it shouldn't print. </p>
<p>Tried simply like, </p>
<pre><code>if sys.argv[1]:
debug=1
else:
debug=0
if debug:
print "Enabled debugging"
</code></pre>
<p>But for this I need to include the if condition everywhere, instead is there any easy way to shutdown the print info globally ? Share in your comments.</p>
<p>Thanks</p>
| 1 |
2016-09-20T14:41:56Z
| 39,597,147 |
<p>This is why you should use the built-in <a href="https://docs.python.org/3/library/logging.html" rel="nofollow"><code>logging</code></a> library rather than writing print statements everywhere.</p>
<p>With that, you can call <code>logger.debug()</code> wherever you need to, and configure at application level whether or not the debug logs are actually output.</p>
| 4 |
2016-09-20T14:43:59Z
|
[
"python",
"python-2.7"
] |
How to suppress all the print info from the Python Script?
| 39,597,102 |
<p>Is there an easy way to suppress all the print info from the python script globally ?</p>
<p>Have a scenario where I had put lot of print info in the script for debug purpose, but when the user runs I don't want the script to print all the infos.</p>
<p>So only when I pass a command line argument like debug=1 or something like that, the print needs to provide the info else it shouldn't print. </p>
<p>Tried simply like, </p>
<pre><code>if sys.argv[1]:
debug=1
else:
debug=0
if debug:
print "Enabled debugging"
</code></pre>
<p>But for this I need to include the if condition everywhere, instead is there any easy way to shutdown the print info globally ? Share in your comments.</p>
<p>Thanks</p>
| 1 |
2016-09-20T14:41:56Z
| 39,597,191 |
<p>You will have to redirect stdout to <code>/dev/null</code>.
Below is the OS-agnostic way of doing it.</p>
<pre><code>import os
import sys
f = open(os.devnull, 'w')
sys.stdout = f
</code></pre>
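If the suppression only needs to be temporary, a context manager can keep a reference to the real <code>sys.stdout</code> and restore it afterwards (a sketch, written to work on both Python 2.7 and 3):

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stdout():
    # Swap stdout for the null device, then restore it on exit.
    saved = sys.stdout
    sys.stdout = open(os.devnull, "w")
    try:
        yield
    finally:
        sys.stdout.close()
        sys.stdout = saved

original = sys.stdout
with suppress_stdout():
    print("this line goes to the null device")
```

Anything printed inside the <code>with</code> block is discarded, and printing behaves normally again afterwards.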
| 5 |
2016-09-20T14:46:25Z
|
[
"python",
"python-2.7"
] |
Distinct values with Q objects
| 39,597,172 |
<p>I have this query on my Django project 1.10.1 on Py3:</p>
<pre><code>Event.objects.filter(Q(subject=topic.id) | Q(object=topic.id) | Q(place=topic.id))
</code></pre>
<p>How can I prevent to get two identical <code>Event</code> records? </p>
<p>Thank you in advance.</p>
| 0 |
2016-09-20T14:45:28Z
| 39,597,305 |
<p>Use the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.distinct" rel="nofollow">distinct</a> operator:</p>
<pre><code>Event.objects.filter(Q(subject=topic.id) | Q(object=topic.id) | Q(place=topic.id)).distinct()
</code></pre>
<p>From the documentation:</p>
<blockquote>
<p>By default, a QuerySet will not eliminate duplicate rows. In practice, this is rarely a problem, because simple queries such as Blog.objects.all() donât introduce the possibility of duplicate result rows. However, if your query spans multiple tables, itâs possible to get duplicate results when a QuerySet is evaluated. Thatâs when youâd use distinct().</p>
</blockquote>
<p>Make special note of their "However" clause before implementing this unless you expect to actually see duplicate results.</p>
| 2 |
2016-09-20T14:51:50Z
|
[
"python",
"django"
] |
Distinct values with Q objects
| 39,597,172 |
<p>I have this query on my Django project 1.10.1 on Py3:</p>
<pre><code>Event.objects.filter(Q(subject=topic.id) | Q(object=topic.id) | Q(place=topic.id))
</code></pre>
<p>How can I prevent to get two identical <code>Event</code> records? </p>
<p>Thank you in advance.</p>
| 0 |
2016-09-20T14:45:28Z
| 39,597,396 |
<p>I don't think that query can ever give duplicate results.</p>
<p>I just tried a similar query on my similar setup, it will convert to an SQL query which looks roughly like this:</p>
<pre><code>SELECT *
FROM event
WHERE (subject=x OR object=x OR place=x)
</code></pre>
<p>This will not duplicate any rows, so you don't actually need to do anything to avoid duplicate records. </p>
| 0 |
2016-09-20T14:55:47Z
|
[
"python",
"django"
] |
maya kBeforeSave callback
| 39,597,364 |
<p>I need to register some of Maya's MSceneMessage callbacks and query the scene paths. I need to get both the before and after Maya scene paths (open, save file).</p>
<p>Here's what I have so far.</p>
<pre><code>def before(*args, **kwargs):
print 'BEFORE: ' + cmds.file(query = True)
def after(*args, **kwargs):
print 'AFTER: ' + cmds.file(query = True)
om.MSceneMessage.addCallback(om.MSceneMessage.kBeforeOpen, before)
om.MSceneMessage.addCallback(om.MSceneMessage.kAfterOpen, after)
om.MSceneMessage.addCallback(om.MSceneMessage.kBeforeSave, before)
om.MSceneMessage.addCallback(om.MSceneMessage.kAfterSave, after)
</code></pre>
<p>Case scenario1; In scene test_01.ma, OPEN scene test_02.ma</p>
<p>Works as expected.</p>
<p>BEFORE: ../../test_01.ma</p>
<p>AFTER: ../../test_02.ma</p>
<p>Case scenario2; In scene test_01.ma, SAVE scene test_02.ma</p>
<p><strong>DOESN'T work as expected.</strong></p>
<p><strong>BEFORE: ../../test_02.ma</strong></p>
<p>AFTER: ../../test_02.ma</p>
<p>I also tried 'kBeforeSaveCheck' callback, with same result.</p>
| 2 |
2016-09-20T14:54:16Z
| 39,698,282 |
<h1>get the <a href="http://download.autodesk.com/us/maya/2011help/CommandsPython/file.html" rel="nofollow">scenename</a></h1>
<pre><code>def scene_id(*args):
return cmds.file(query=True, scenename=True)
def before(*args, **kwargs):
print 'BEFORE: {0}'.format(scene_id())
def after(*args, **kwargs):
print 'After: {0}'.format(scene_id())
</code></pre>
| 0 |
2016-09-26T08:44:50Z
|
[
"python",
"api",
"maya"
] |
Install virtualenvwrapper on mac- OSError: [Errno 1] Operation not permitted:
| 39,597,368 |
<p>I try to install virtualenvwrapper on Mac and get the classic python catch 22:</p>
<pre><code>C02QPBHWFVH3MBP:~ ckc3153$ pip install virtualenvwrapper
Collecting virtualenvwrapper
Using cached virtualenvwrapper-4.7.2.tar.gz
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Requirement already satisfied (use --upgrade to upgrade): virtualenv-clone in /Library/Python/2.7/site-packages (from virtualenvwrapper)
Collecting stevedore (from virtualenvwrapper)
Using cached stevedore-1.17.1-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): pbr>=1.6 in /Library/Python/2.7/site-packages (from stevedore->virtualenvwrapper)
Collecting six>=1.9.0 (from stevedore->virtualenvwrapper)
Using cached six-1.10.0-py2.py3-none-any.whl
Installing collected packages: six, stevedore, virtualenvwrapper
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/yq/9qfrf1td5t5fk8xyp0536v340000gn/T/pip-5lWnj3-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
C02QPBHWFVH3MBP:~ ckc3153$ pip uninstall six
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info
Proceed (y/n)? y
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/uninstall.py", line 76, in run
requirement_set.uninstall(auto_confirm=options.yes)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 336, in uninstall
req.uninstall(auto_confirm=auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/yq/9qfrf1td5t5fk8xyp0536v340000gn/T/pip-SGBRs3-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
</code></pre>
<p>Pip can't uninstall six, but can't install wrapper with six there. Uninstalling six by hand fails the same way. Any help appreciated</p>
| 0 |
2016-09-20T14:54:26Z
| 39,597,448 |
<ol>
<li>use <code>sudo pip install virtualenvwrapper</code> command.</li>
<li>type the password of current user.</li>
</ol>
| 1 |
2016-09-20T14:58:29Z
|
[
"python",
"osx",
"python-2.7",
"virtualenv",
"virtualenvwrapper"
] |
Python Multiplicative Array Concatenating
| 39,597,377 |
<p>If I have an array of inconsistent size, <code>lists</code>, that consists of an (inconsistent) number of lists:</p>
<pre><code> lists = [List1, List2, List3,...ListN]
</code></pre>
<p>where each contained list is of inconsistent size.</p>
<p>How can I concatenate the contents multiplicatively of each contained array.</p>
<p><strong>Example of target output:</strong></p>
<pre><code>A = ["A","B","C","D"]
B = ["1","2","3","4"]
C = ["D","E","F","G"]
A[0-N] + B[0-N] + C[0-N]
Giving ["A1D","B1D","C1D","D1D",
"A2D","B2D","C2D","D2D"
"A3D","B3D","C3D","D3D"
"A4D","B4D","C4D","D4D"
"A1E","B1E","C1E","D1E"
... "C4G","D4G" ]
</code></pre>
<p>For this specific example it should yield a list length of 4^3.
(List length to the power of the number of lists)</p>
<p>However list length is not constant so it is really </p>
<p><code>List1 Length * List2 Length * List3 Length * ... * ListN Length</code></p>
<p>For an inconsistent list length: </p>
<pre><code> A = ["A","B"]
B = ["1","2","3","4"]
= ["A1","A2","A3","A4","B1","B2","B3","B4"]
</code></pre>
<p>I have tried python maps and zips, but I am having trouble doing for example:</p>
<pre><code>zip(list1, list2, list3)
</code></pre>
<p>when:</p>
<p>Amount of lists is not consistent</p>
<p>The lists are not stored separately but collated in one large list,</p>
<p>and the list size is not consistent</p>
<p>Methods described on other SO Question only address consistent size, 2 list, situations. I am having trouble applying these techniques in this situation.</p>
| 1 |
2016-09-20T14:55:00Z
| 39,597,856 |
<p><a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow">itertools.product</a> is what you are looking for, I think:</p>
<pre><code>>>> import itertools
>>> A = ["A","B"]
>>> B = ["1","2","3","4"]
>>> C = [A,B]
>>> [''.join(i) for i in itertools.product(*C)]
['A1', 'A2', 'A3', 'A4', 'B1', 'B2', 'B3', 'B4']
</code></pre>
| 0 |
2016-09-20T15:17:44Z
|
[
"python",
"arrays",
"list"
] |
Python Multiplicative Array Concatenating
| 39,597,377 |
<p>If I have an array of inconsistent size, <code>lists</code>, that consists of an (inconsistent) number of lists:</p>
<pre><code> lists = [List1, List2, List3,...ListN]
</code></pre>
<p>where each contained list is of inconsistent size.</p>
<p>How can I concatenate the contents multiplicatively of each contained array.</p>
<p><strong>Example of target output:</strong></p>
<pre><code>A = ["A","B","C","D"]
B = ["1","2","3","4"]
C = ["D","E","F","G"]
A[0-N] + B[0-N] + C[0-N]
Giving ["A1D","B1D","C1D","D1D",
"A2D","B2D","C2D","D2D"
"A3D","B3D","C3D","D3D"
"A4D","B4D","C4D","D4D"
"A1E","B1E","C1E","D1E"
... "C4G","D4G" ]
</code></pre>
<p>For this specific example it should yield a list length of 4^3.
(List length to the power of the number of lists)</p>
<p>However list length is not constant so it is really </p>
<p><code>List1 Length * List2 Length * List3 Length * ... * ListN Length</code></p>
<p>For an inconsistent list length: </p>
<pre><code> A = ["A","B"]
B = ["1","2","3","4"]
= ["A1","A2","A3","A4","B1","B2","B3","B4"]
</code></pre>
<p>I have tried python maps and zips, but I am having trouble doing for example:</p>
<pre><code>zip(list1, list2, list3)
</code></pre>
<p>when:</p>
<p>Amount of lists is not consistent</p>
<p>The lists are not stored separately but collated in one large list,</p>
<p>and the list size is not consistent</p>
<p>Methods described on other SO Question only address consistent size, 2 list, situations. I am having trouble applying these techniques in this situation.</p>
| 1 |
2016-09-20T14:55:00Z
| 39,597,944 |
<pre><code>import itertools
A = ["A","B"]
B = ["1","2","3","4"]
list(itertools.product(A, B))
</code></pre>
<p>Generic</p>
<pre><code>lists_var = [List1, List2, List3,...ListN]
list(itertools.product(*lists_var))
</code></pre>
| 1 |
2016-09-20T15:21:47Z
|
[
"python",
"arrays",
"list"
] |
Python Multiplicative Array Concatenating
| 39,597,377 |
<p>If I have an array of inconsistent size, <code>lists</code>, that consists of an (inconsistent) number of lists:</p>
<pre><code> lists = [List1, List2, List3,...ListN]
</code></pre>
<p>where each contained list is of inconsistent size.</p>
<p>How can I concatenate the contents multiplicatively of each contained array.</p>
<p><strong>Example of target output:</strong></p>
<pre><code>A = ["A","B","C","D"]
B = ["1","2","3","4"]
C = ["D","E","F","G"]
A[0-N] + B[0-N] + C[0-N]
Giving ["A1D","B1D","C1D","D1D",
"A2D","B2D","C2D","D2D"
"A3D","B3D","C3D","D3D"
"A4D","B4D","C4D","D4D"
"A1E","B1E","C1E","D1E"
... "C4G","D4G" ]
</code></pre>
<p>For this specific example it should yield a list length of 4^3.
(List length to the power of the number of lists)</p>
<p>However list length is not constant so it is really </p>
<p><code>List1 Length * List2 Length * List3 Length * ... * ListN Length</code></p>
<p>For an inconsistent list length: </p>
<pre><code> A = ["A","B"]
B = ["1","2","3","4"]
= ["A1","A2","A3","A4","B1","B2","B3","B4"]
</code></pre>
<p>I have tried python maps and zips, but I am having trouble doing for example:</p>
<pre><code>zip(list1, list2, list3)
</code></pre>
<p>when:</p>
<p>Amount of lists is not consistent</p>
<p>The lists are not stored separately but collated in one large list,</p>
<p>and the list size is not consistent</p>
<p>Methods described on other SO Question only address consistent size, 2 list, situations. I am having trouble applying these techniques in this situation.</p>
| 1 |
2016-09-20T14:55:00Z
| 39,598,047 |
<p>Use <code>itertools</code> to get the results, and then format as you want:</p>
<pre><code>import itertools
A = ["A","B","C","D"]
B = ["1","2","3","4"]
C = ["D","E","F","G"]
lists = [A, B, C]
results = [''.join(t) for t in itertools.product(*lists)]
print(results)
</code></pre>
<p>prints:</p>
<pre><code>['A1D', 'A1E', 'A1F', 'A1G', 'A2D', 'A2E', 'A2F', 'A2G', 'A3D', 'A3E', 'A3F', 'A3G', 'A4D', 'A4E', 'A4F', 'A4G', 'B1D', 'B1E', 'B1F', 'B1G', 'B2D', 'B2E', 'B2F', 'B2G', 'B3D', 'B3E', 'B3F', 'B3G', 'B4D', 'B4E', 'B4F', 'B4G', 'C1D', 'C1E', 'C1F', 'C1G', 'C2D', 'C2E', 'C2F', 'C2G', 'C3D', 'C3E', 'C3F', 'C3G', 'C4D', 'C4E', 'C4F', 'C4G', 'D1D', 'D1E', 'D1F', 'D1G', 'D2D', 'D2E', 'D2F', 'D2G', 'D3D', 'D3E', 'D3F', 'D3G', 'D4D', 'D4E', 'D4F', 'D4G']
</code></pre>
| 1 |
2016-09-20T15:25:14Z
|
[
"python",
"arrays",
"list"
] |
Numpy log slow for numbers close to 1?
| 39,597,405 |
<p>Look at the following piece of code:</p>
<pre><code>import numpy as np
import timeit
print('close', timeit.Timer(lambda: np.log(0.99999999999999978)).timeit())
print('not close', timeit.Timer(lambda: np.log(0.99)).timeit())
</code></pre>
<p>The output is:</p>
<pre><code>close 4.462684076999722
not close 0.6319260000018403
</code></pre>
<p>How come such big (orders of magnitude) difference in running time? Am I missing something?</p>
<p>EDIT:</p>
<p>More preceisly we see the slowdown for values as small as:
<code>1 - np.finfo(np.float).eps</code>
but not for values
<code>1 - np.finfo(np.float).eps * 10</code>.</p>
<p>My machine <code>Python 3.5.2 |Anaconda 4.1.1 (64-bit)</code> with <code>numpy 1.11.1</code>. </p>
<p>This has been so far reproduced on 3 other machines from my side (2 Python 3.4 Anaconda installations, 1 Python 2.7 default Ubuntu installation).</p>
<p>Some other users could also reproduce it, while others could not. See comments.</p>
<p>EDIT 2:</p>
<p>Possibly only reproducible on Linux systems. So far, not reproducible on Windows systems.</p>
| 3 |
2016-09-20T14:56:09Z
| 39,597,509 |
<p>At the time of writing, most chipsets evaluate <code>log</code> using a Taylor series coupled with a table of specific pre-computed values.</p>
<p>With the Taylor series, a number closer to 1 is slower to <em>converge</em> than a number further away from 1. That could go some way to explaining the difference in execution time observed here.</p>
<p><code>0.99</code> may also be closer to one of the tabulated values, which would also help.</p>
<p><strong>Or your observations may not even be statistically significant.</strong></p>
| 1 |
2016-09-20T15:01:07Z
|
[
"python",
"numpy",
"runtime"
] |
Is this a correct way to implement bubble sort?
| 39,597,479 |
<p>Is this a correct way to implement bubble sort?
I get a sorted list, but I have a doubt that the method is correct.</p>
<pre><code># Input unsorted list
size_lis = int(input("enter the size of the list"))
size = 0
list1 = list()
while (size < size_lis):
element = int(input("enter the element"))
list1.append(element)
size += 1
# Sort
for i in range(0, len(list1)):
for j in range(0, len(list1)-1):
if list1[j] > list1[j+1]:
list1[j],list1[j+1] = list1[j+1],list1[j]
print(list1)
</code></pre>
| 0 |
2016-09-20T15:00:02Z
| 39,598,372 |
<p>This is the <em>correct implementation</em> of the bubble sort algorithm. But you can prevent extra loops using this kind of implementation:</p>
<pre><code>def bubble_sort(arr):
for i in range(len(arr))[::-1]:
for j in range(1, i + 1):
if arr[j - 1] > arr[j]:
arr[j], arr[j-1] = arr[j-1], arr[j]
</code></pre>
<p>The first loop iterates through <code>range(len(arr))</code> in reversed order (<code>[::-1]</code> reverses the sequence). After the first iteration of this loop the biggest element of the list is placed at the end, so the inner loop only needs to iterate through the remaining elements.</p>
<p>I tested your implementation (<code>bubble_sort_2</code>) and mine (<code>bubble_sort</code>) using two identical arrays of 1000 elements.
Here are the results (using <em>cProfile</em>):</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
1 0.215 0.215 0.215 0.215 bs.py:22(bubble_sort_2)
1 0.128 0.128 0.128 0.128 bs.py:16(bubble_sort)
</code></pre>
<p>As you can see, <code>bubble_sort</code> is faster than <code>bubble_sort_2</code>.</p>
| 1 |
2016-09-20T15:40:02Z
|
[
"python",
"bubble-sort"
] |
Is this a correct way to implement bubble sort?
| 39,597,479 |
<p>Is this a correct way to implement bubble sort?
I get a sorted list, but I have a doubt that the method is correct.</p>
<pre><code># Input unsorted list
size_lis = int(input("enter the size of the list"))
size = 0
list1 = list()
while (size < size_lis):
element = int(input("enter the element"))
list1.append(element)
size += 1
# Sort
for i in range(0, len(list1)):
for j in range(0, len(list1)-1):
if list1[j] > list1[j+1]:
list1[j],list1[j+1] = list1[j+1],list1[j]
print(list1)
</code></pre>
| 0 |
2016-09-20T15:00:02Z
| 39,599,162 |
<p>In bubble sort the largest element is moved step by step to the end of list. Thus after first pass there is this one element in its final position. The second pass should sort only N-1 remaining elements, etc.</p>
<p>In the posted code, just adjust the inner circle like this. That'll save almost 50% of CPU time.</p>
<pre><code>n = len(lst)
for i in range(n):
for j in range(n-i-1):
if lst[j] > lst[j+1]:
lst[j], lst[j+1] = lst[j+1],lst[j]
</code></pre>
| 1 |
2016-09-20T16:18:20Z
|
[
"python",
"bubble-sort"
] |
How can I know the location of a python function that imported from a library
| 39,597,486 |
<pre><code>from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
</code></pre>
<p>How can I find the location of the <code>conv_forward_fast</code> function with a command? </p>
| 1 |
2016-09-20T15:00:21Z
| 39,597,572 |
<p>You can just </p>
<pre><code>import cs231n.fast_layers as path
print(path)
</code></pre>
<p>and it will show you the path of the library: the module's repr includes the file it was loaded from, and <code>print(path.__file__)</code> gives the path alone.</p>
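The standard library's <code>inspect</code> module goes a step further and gives the defining file and line number of the function itself; <code>os.path.join</code> stands in here for <code>conv_forward_fast</code>:

```python
import inspect
from os.path import join  # stand-in for conv_forward_fast

print(inspect.getsourcefile(join))      # path of the .py file defining the function
print(inspect.getsourcelines(join)[1])  # line number of the definition
```

This works for any function imported from a pure-Python module (it raises an error for C extensions, where no source file is available).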
| 2 |
2016-09-20T15:03:51Z
|
[
"python"
] |
How can I know the location of a python function that imported from a library
| 39,597,486 |
<pre><code>from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
</code></pre>
<p>How can I find the location of the <code>conv_forward_fast</code> function with a command? </p>
| 1 |
2016-09-20T15:00:21Z
| 39,598,234 |
<p>In Python 2.x (using <code>os.path.join</code> as a stand-in for the imported function):</p>
<pre><code>from os.path import join
path_to_module = __import__(join.__module__).__file__
</code></pre>
<p>Note that <code>__import__</code> returns the top-level package for dotted module names, so for a function like <code>conv_forward_fast</code> it is safer to use <code>sys.modules[conv_forward_fast.__module__].__file__</code>.</p>
| 0 |
2016-09-20T15:33:34Z
|
[
"python"
] |
Python's fuzzywuzzy returns unpredictable results
| 39,597,550 |
<p>I'm working with fuzzy wuzzy in python and while it claims it works with a levenshtein distance, I find that many strings with a single character different produce different results. For example.</p>
<pre><code>>>>fuzz.ratio("vendedor","vendedora")
94
>>>fuzz.ratio("estagiário","estagiária")
90
>>> fuzz.ratio("abcdefghijlmnopqrst","abcdefghijlmnopqrsty")
97
>>>fuzz.ratio("abc","abcd")
86
>>>fuzz.ratio("a","ab")
67
</code></pre>
<p>I guess levenshtein distance should be the same as there is a single character distance in all the examples, but I understand this is not simple distance, it is some sort of "equality percentage" of some sort. </p>
<p>I tried to understand how it works but I cannot seem to understand. My very long string gives a 97 and the very short a 67. I guess it would mean the larger the string, there is less impact on a single character. However for the "vendedor","vendedora" and "estagiário","estagiária" example, that is not the case, as the latter is larger than the former.</p>
<p>How does this work?</p>
<p>I am currently matching user input job titles, trying to connect mistyped names with correctly typed names etc. is there a better package for my task?</p>
| 1 |
2016-09-20T15:02:36Z
| 39,598,074 |
<p>You are correct about how fuzzywuzzy works in general. A larger output number from the <code>fuzz.ratio</code> function means that the strings are closer to one another (with 100 being a perfect match). I performed a couple of additional test cases to check how it works. Here they are:</p>
<pre><code>fuzz.ratio("abc", "abce")  # same score as ("abc", "abcd"): which letter is added doesn't matter
86
fuzz.ratio("abcd", "abce") # replacing a letter scores lower than adding one
75
fuzz.ratio("abc", "abc")   # an exact match gives 100
100
</code></pre>
<p>From these tests, we can see that replacing a letter has a larger effect on the ratio calculation than adding one (this is why estagiário/estagiária was less of a match than vendedor/vendedora, despite being longer). According to <a href="https://pypi.python.org/pypi/fuzzywuzzy" rel="nofollow">this</a>, the package can also be used to automatically select the best choice from a list of possible matches, and thus I think it would be a good choice for your intended purpose.</p>
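<p>The scores themselves can be reproduced with the standard library: as far as I can tell, <code>fuzz.ratio</code> is essentially <code>difflib.SequenceMatcher</code>'s ratio (2M/T, where M is the number of matching characters and T the combined length of both strings) scaled to 100. A sketch:</p>

```python
from difflib import SequenceMatcher

def simple_ratio(a, b):
    # ratio() = 2 * matches / (len(a) + len(b)); scale it to a 0-100 score
    return int(round(100 * SequenceMatcher(None, a, b).ratio()))

print(simple_ratio("a", "ab"))                   # 2*1/3  -> 67
print(simple_ratio("vendedor", "vendedora"))     # 2*8/17 -> 94
print(simple_ratio("estagiário", "estagiária"))  # 2*9/20 -> 90
```

<p>This explains the pattern: a replacement loses a match on both strings, while an addition only increases the total length, so longer strings dilute the penalty of a single change.</p>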
| 2 |
2016-09-20T15:26:04Z
|
[
"python",
"string-matching",
"fuzzywuzzy"
] |
From DatetimeIndex to list of times
| 39,597,553 |
<p>My objective is to have lists of times (by the second), grouped into lists covering 5 minutes each, for a whole day.
This is my code to split the whole day of "2016-07-08" into 5-minute intervals:</p>
<pre><code>pd.date_range('2016-07-08 00:00:00', '2016-07-08 23:59:00', freq='5Min')
</code></pre>
<p>The result :</p>
<pre><code>DatetimeIndex(['2016-07-08 00:00:00', '2016-07-08 00:05:00',
'2016-07-08 00:10:00', '2016-07-08 00:15:00',
'2016-07-08 00:20:00', '2016-07-08 00:25:00',
'2016-07-08 00:30:00', '2016-07-08 00:35:00',
'2016-07-08 00:40:00', '2016-07-08 00:45:00',
...
'2016-07-08 23:10:00', '2016-07-08 23:15:00',
'2016-07-08 23:20:00', '2016-07-08 23:25:00',
'2016-07-08 23:30:00', '2016-07-08 23:35:00',
'2016-07-08 23:40:00', '2016-07-08 23:45:00',
'2016-07-08 23:50:00', '2016-07-08 23:55:00'],
dtype='datetime64[ns]', length=288, freq='5T')
</code></pre>
<p>And this is the code to get all times (by the second) included in each 5-minute interval:</p>
<pre><code>for time in pd.date_range('2016-07-08 00:00:00', '2016-07-08 23:59:00', freq='5Min').tolist():
time_by_5_min = datetime.datetime.strftime(time.to_datetime(), "%Y-%m-%d %H:%M:%S")
print pd.date_range(time_by_5_min, freq='S', periods=60)
</code></pre>
<p>The result :</p>
<pre><code>DatetimeIndex(['2016-07-08 00:00:00', '2016-07-08 00:00:01',
'2016-07-08 00:00:02', '2016-07-08 00:00:03',
'2016-07-08 00:00:04', '2016-07-08 00:00:05',
'2016-07-08 00:00:06', '2016-07-08 00:00:07',
'2016-07-08 00:00:08', '2016-07-08 00:00:09',
'2016-07-08 00:00:10', '2016-07-08 00:00:11',
'2016-07-08 00:00:12', '2016-07-08 00:00:13',
'2016-07-08 00:00:14', '2016-07-08 00:00:15',
'2016-07-08 00:00:16', '2016-07-08 00:00:17',
'2016-07-08 00:00:18', '2016-07-08 00:00:19',
'2016-07-08 00:00:20', '2016-07-08 00:00:21',
'2016-07-08 00:00:22', '2016-07-08 00:00:23',
'2016-07-08 00:00:24', '2016-07-08 00:00:25',
'2016-07-08 00:00:26', '2016-07-08 00:00:27',
'2016-07-08 00:00:28', '2016-07-08 00:00:29',
'2016-07-08 00:00:30', '2016-07-08 00:00:31',
'2016-07-08 00:00:32', '2016-07-08 00:00:33',
'2016-07-08 00:00:34', '2016-07-08 00:00:35',
'2016-07-08 00:00:36', '2016-07-08 00:00:37',
'2016-07-08 00:00:38', '2016-07-08 00:00:39',
'2016-07-08 00:00:40', '2016-07-08 00:00:41',
'2016-07-08 00:00:42', '2016-07-08 00:00:43',
'2016-07-08 00:00:44', '2016-07-08 00:00:45',
'2016-07-08 00:00:46', '2016-07-08 00:00:47',
'2016-07-08 00:00:48', '2016-07-08 00:00:49',
'2016-07-08 00:00:50', '2016-07-08 00:00:51',
'2016-07-08 00:00:52', '2016-07-08 00:00:53',
'2016-07-08 00:00:54', '2016-07-08 00:00:55',
'2016-07-08 00:00:56', '2016-07-08 00:00:57',
'2016-07-08 00:00:58', '2016-07-08 00:00:59'],
dtype='datetime64[ns]', freq='S')
DatetimeIndex(['2016-07-08 00:05:00', '2016-07-08 00:05:01',
'2016-07-08 00:05:02', '2016-07-08 00:05:03',
'2016-07-08 00:05:04', '2016-07-08 00:05:05',
'2016-07-08 00:05:06', '2016-07-08 00:05:07',
'2016-07-08 00:05:08', '2016-07-08 00:05:09',
'2016-07-08 00:05:10', '2016-07-08 00:05:11',
'2016-07-08 00:05:12', '2016-07-08 00:05:13',
'2016-07-08 00:05:14', '2016-07-08 00:05:15',
'2016-07-08 00:05:16', '2016-07-08 00:05:17',
'2016-07-08 00:05:18', '2016-07-08 00:05:19',
'2016-07-08 00:05:20', '2016-07-08 00:05:21',
'2016-07-08 00:05:22', '2016-07-08 00:05:23',
'2016-07-08 00:05:24', '2016-07-08 00:05:25',
'2016-07-08 00:05:26', '2016-07-08 00:05:27',
'2016-07-08 00:05:28', '2016-07-08 00:05:29',
'2016-07-08 00:05:30', '2016-07-08 00:05:31',
'2016-07-08 00:05:32', '2016-07-08 00:05:33',
'2016-07-08 00:05:34', '2016-07-08 00:05:35',
'2016-07-08 00:05:36', '2016-07-08 00:05:37',
'2016-07-08 00:05:38', '2016-07-08 00:05:39',
'2016-07-08 00:05:40', '2016-07-08 00:05:41',
'2016-07-08 00:05:42', '2016-07-08 00:05:43',
'2016-07-08 00:05:44', '2016-07-08 00:05:45',
'2016-07-08 00:05:46', '2016-07-08 00:05:47',
'2016-07-08 00:05:48', '2016-07-08 00:05:49',
'2016-07-08 00:05:50', '2016-07-08 00:05:51',
'2016-07-08 00:05:52', '2016-07-08 00:05:53',
'2016-07-08 00:05:54', '2016-07-08 00:05:55',
'2016-07-08 00:05:56', '2016-07-08 00:05:57',
'2016-07-08 00:05:58', '2016-07-08 00:05:59'],
dtype='datetime64[ns]', freq='S')
etc
</code></pre>
<p>This is perfect for me!
Now I want plain lists, not a <code>pandas.tseries.index.DatetimeIndex</code>.
The <code>.tolist()</code> method gives this:</p>
<pre><code>for time in pd.date_range('2016-07-08 00:00:00', '2016-07-08 23:59:00', freq='5Min').tolist():
time_by_5_min = datetime.datetime.strftime(time.to_datetime(), "%Y-%m-%d %H:%M:%S")
print (pd.date_range(time_by_5_min, freq='S', periods=60)).tolist()
</code></pre>
<p>The result :</p>
<pre><code>[Timestamp('2016-07-08 00:00:00', offset='S'), Timestamp('2016-07-08 00:00:01', offset='S'), Timestamp('2016-07-08 00:00:02', offset='S'), Timestamp('2016-07-08 00:00:03', offset='S'), Timestamp('2016-07-08 00:00:04', offset='S'), Timestamp('2016-07-08 00:00:05', offset='S'), Timestamp('2016-07-08 00:00:06', offset='S'), etc]
</code></pre>
<p>I want to have something like this:</p>
<pre><code> [['2016-07-08 00:00:00', '2016-07-08 00:00:01',
'2016-07-08 00:00:02', '2016-07-08 00:00:03',
'2016-07-08 00:00:04', '2016-07-08 00:00:05',
'2016-07-08 00:00:06', '2016-07-08 00:00:07',
'2016-07-08 00:00:08', '2016-07-08 00:00:09',
'2016-07-08 00:00:10', '2016-07-08 00:00:11',
'2016-07-08 00:00:12', '2016-07-08 00:00:13',
'2016-07-08 00:00:14', '2016-07-08 00:00:15',
'2016-07-08 00:00:16', '2016-07-08 00:00:17',
'2016-07-08 00:00:18', '2016-07-08 00:00:19',
'2016-07-08 00:00:20', '2016-07-08 00:00:21',
'2016-07-08 00:00:22', '2016-07-08 00:00:23',
'2016-07-08 00:00:24', '2016-07-08 00:00:25',
'2016-07-08 00:00:26', '2016-07-08 00:00:27',
'2016-07-08 00:00:28', '2016-07-08 00:00:29',
'2016-07-08 00:00:30', '2016-07-08 00:00:31',
'2016-07-08 00:00:32', '2016-07-08 00:00:33',
'2016-07-08 00:00:34', '2016-07-08 00:00:35',
'2016-07-08 00:00:36', '2016-07-08 00:00:37',
'2016-07-08 00:00:38', '2016-07-08 00:00:39',
'2016-07-08 00:00:40', '2016-07-08 00:00:41',
'2016-07-08 00:00:42', '2016-07-08 00:00:43',
'2016-07-08 00:00:44', '2016-07-08 00:00:45',
'2016-07-08 00:00:46', '2016-07-08 00:00:47',
'2016-07-08 00:00:48', '2016-07-08 00:00:49',
'2016-07-08 00:00:50', '2016-07-08 00:00:51',
'2016-07-08 00:00:52', '2016-07-08 00:00:53',
'2016-07-08 00:00:54', '2016-07-08 00:00:55',
'2016-07-08 00:00:56', '2016-07-08 00:00:57',
'2016-07-08 00:00:58', '2016-07-08 00:00:59'],
['2016-07-08 00:05:00', '2016-07-08 00:05:01',
'2016-07-08 00:05:02', '2016-07-08 00:05:03',
'2016-07-08 00:05:04', '2016-07-08 00:05:05',
'2016-07-08 00:05:06', '2016-07-08 00:05:07',
'2016-07-08 00:05:08', '2016-07-08 00:05:09',
'2016-07-08 00:05:10', '2016-07-08 00:05:11',
'2016-07-08 00:05:12', '2016-07-08 00:05:13',
'2016-07-08 00:05:14', '2016-07-08 00:05:15',
'2016-07-08 00:05:16', '2016-07-08 00:05:17',
'2016-07-08 00:05:18', '2016-07-08 00:05:19',
'2016-07-08 00:05:20', '2016-07-08 00:05:21',
'2016-07-08 00:05:22', '2016-07-08 00:05:23',
'2016-07-08 00:05:24', '2016-07-08 00:05:25',
'2016-07-08 00:05:26', '2016-07-08 00:05:27',
'2016-07-08 00:05:28', '2016-07-08 00:05:29',
'2016-07-08 00:05:30', '2016-07-08 00:05:31',
'2016-07-08 00:05:32', '2016-07-08 00:05:33',
'2016-07-08 00:05:34', '2016-07-08 00:05:35',
'2016-07-08 00:05:36', '2016-07-08 00:05:37',
'2016-07-08 00:05:38', '2016-07-08 00:05:39',
'2016-07-08 00:05:40', '2016-07-08 00:05:41',
'2016-07-08 00:05:42', '2016-07-08 00:05:43',
'2016-07-08 00:05:44', '2016-07-08 00:05:45',
'2016-07-08 00:05:46', '2016-07-08 00:05:47',
'2016-07-08 00:05:48', '2016-07-08 00:05:49',
'2016-07-08 00:05:50', '2016-07-08 00:05:51',
'2016-07-08 00:05:52', '2016-07-08 00:05:53',
'2016-07-08 00:05:54', '2016-07-08 00:05:55',
'2016-07-08 00:05:56', '2016-07-08 00:05:57',
'2016-07-08 00:05:58', '2016-07-08 00:05:59'], etc]
</code></pre>
<p>Any ideas?</p>
| 2 |
2016-09-20T15:02:47Z
| 39,597,704 |
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.strftime.html" rel="nofollow"><code>DatetimeIndex.strftime</code></a>:</p>
<p><em>I tried to remove some code (it is not necessary in the sample, though it may be important in your real code)</em></p>
<pre><code>for time in pd.date_range('2016-07-08 00:00:00', '2016-07-08 23:59:00', freq='5Min'):
print (pd.date_range(time, freq='S', periods=60).strftime("%Y-%m-%d %H:%M:%S").tolist())
</code></pre>
<pre><code>['2016-07-08 00:00:00', '2016-07-08 00:00:01', '2016-07-08 00:00:02', '2016-07-08 00:00:03', '2016-07-08 00:00:04', '2016-07-08 00:00:05', '2016-07-08 00:00:06', '2016-07-08 00:00:07', '2016-07-08 00:00:08', '2016-07-08 00:00:09', '2016-07-08 00:00:10', '2016-07-08 00:00:11', '2016-07-08 00:00:12', '2016-07-08 00:00:13', '2016-07-08 00:00:14', '2016-07-08 00:00:15', '2016-07-08 00:00:16', '2016-07-08 00:00:17', '2016-07-08 00:00:18', '2016-07-08 00:00:19', '2016-07-08 00:00:20', '2016-07-08 00:00:21', '2016-07-08 00:00:22', '2016-07-08 00:00:23', '2016-07-08 00:00:24', '2016-07-08 00:00:25', '2016-07-08 00:00:26', '2016-07-08 00:00:27', '2016-07-08 00:00:28', '2016-07-08 00:00:29', '2016-07-08 00:00:30', '2016-07-08 00:00:31', '2016-07-08 00:00:32', '2016-07-08 00:00:33', '2016-07-08 00:00:34', '2016-07-08 00:00:35', '2016-07-08 00:00:36', '2016-07-08 00:00:37', '2016-07-08 00:00:38', '2016-07-08 00:00:39', '2016-07-08 00:00:40', '2016-07-08 00:00:41', '2016-07-08 00:00:42', '2016-07-08 00:00:43', '2016-07-08 00:00:44', '2016-07-08 00:00:45', '2016-07-08 00:00:46', '2016-07-08 00:00:47', '2016-07-08 00:00:48', '2016-07-08 00:00:49', '2016-07-08 00:00:50', '2016-07-08 00:00:51', '2016-07-08 00:00:52', '2016-07-08 00:00:53', '2016-07-08 00:00:54', '2016-07-08 00:00:55', '2016-07-08 00:00:56', '2016-07-08 00:00:57', '2016-07-08 00:00:58', '2016-07-08 00:00:59']
['2016-07-08 00:05:00', '2016-07-08 00:05:01', '2016-07-08 00:05:02', '2016-07-08 00:05:03', '2016-07-08 00:05:04', '2016-07-08 00:05:05', '2016-07-08 00:05:06', '2016-07-08 00:05:07', '2016-07-08 00:05:08', '2016-07-08 00:05:09', '2016-07-08 00:05:10', '2016-07-08 00:05:11', '2016-07-08 00:05:12', '2016-07-08 00:05:13', '2016-07-08 00:05:14', '2016-07-08 00:05:15', '2016-07-08 00:05:16', '2016-07-08 00:05:17', '2016-07-08 00:05:18', '2016-07-08 00:05:19', '2016-07-08 00:05:20', '2016-07-08 00:05:21', '2016-07-08 00:05:22', '2016-07-08 00:05:23', '2016-07-08 00:05:24', '2016-07-08 00:05:25', '2016-07-08 00:05:26', '2016-07-08 00:05:27', '2016-07-08 00:05:28', '2016-07-08 00:05:29', '2016-07-08 00:05:30', '2016-07-08 00:05:31', '2016-07-08 00:05:32', '2016-07-08 00:05:33', '2016-07-08 00:05:34', '2016-07-08 00:05:35', '2016-07-08 00:05:36', '2016-07-08 00:05:37', '2016-07-08 00:05:38', '2016-07-08 00:05:39', '2016-07-08 00:05:40', '2016-07-08 00:05:41', '2016-07-08 00:05:42', '2016-07-08 00:05:43', '2016-07-08 00:05:44', '2016-07-08 00:05:45', '2016-07-08 00:05:46', '2016-07-08 00:05:47', '2016-07-08 00:05:48', '2016-07-08 00:05:49', '2016-07-08 00:05:50', '2016-07-08 00:05:51', '2016-07-08 00:05:52', '2016-07-08 00:05:53', '2016-07-08 00:05:54', '2016-07-08 00:05:55', '2016-07-08 00:05:56', '2016-07-08 00:05:57', '2016-07-08 00:05:58', '2016-07-08 00:05:59']
...
...
</code></pre>
<p>If you need the output as nested <code>lists</code>, <code>append</code> the data to <code>L</code> inside the loop:</p>
<pre><code>import pandas as pd
L = []
for time in pd.date_range('2016-07-08 00:00:00', '2016-07-08 23:59:00', freq='5Min'):
print (pd.date_range(time, freq='S', periods=60).strftime("%Y-%m-%d %H:%M:%S").tolist())
L.append(pd.date_range(time, freq='S', periods=60).strftime("%Y-%m-%d %H:%M:%S").tolist())
print (L)
[['2016-07-08 00:00:00', '2016-07-08 00:00:01', '2016-07-08 00:00:02', '2016-07-08 00:00:03', '2016-07-08 00:00:04', '2016-07-08 00:00:05', '2016-07-08 00:00:06', '2016-07-08 00:00:07', '2016-07-08 00:00:08', '2016-07-08 00:00:09', '2016-07-08 00:00:10', '2016-07-08 00:00:11', '2016-07-08 00:00:12', '2016-07-08 00:00:13', '2016-07-08 00:00:14', '2016-07-08 00:00:15', '2016-07-08 00:00:16', '2016-07-08 00:00:17', '2016-07-08 00:00:18', '2016-07-08 00:00:19', '2016-07-08 00:00:20', '2016-07-08 00:00:21', '2016-07-08 00:00:22', '2016-07-08 00:00:23', '2016-07-08 00:00:24', '2016-07-08 00:00:25', '2016-07-08 00:00:26', '2016-07-08 00:00:27', '2016-07-08 00:00:28', '2016-07-08 00:00:29', '2016-07-08 00:00:30', '2016-07-08 00:00:31', '2016-07-08 00:00:32', '2016-07-08 00:00:33', '2016-07-08 00:00:34', '2016-07-08 00:00:35', '2016-07-08 00:00:36', '2016-07-08 00:00:37', '2016-07-08 00:00:38', '2016-07-08 00:00:39', '2016-07-08 00:00:40', '2016-07-08 00:00:41', '2016-07-08 00:00:42', '2016-07-08 00:00:43', '2016-07-08 00:00:44', '2016-07-08 00:00:45', '2016-07-08 00:00:46', '2016-07-08 00:00:47', '2016-07-08 00:00:48', '2016-07-08 00:00:49', '2016-07-08 00:00:50', '2016-07-08 00:00:51', '2016-07-08 00:00:52', '2016-07-08 00:00:53', '2016-07-08 00:00:54', '2016-07-08 00:00:55', '2016-07-08 00:00:56', '2016-07-08 00:00:57', '2016-07-08 00:00:58', '2016-07-08 00:00:59'], ['2016-07-08 00:05:00', '2016-07-08 00:05:01', '2016-07-08 00:05:02', '2016-07-08 00:05:03', '2016-07-08 00:05:04', '2016-07-08 00:05:05', '2016-07-08 00:05:06', '2016-07-08 00:05:07', '2016-07-08 00:05:08', '2016-07-08 00:05:09', '2016-07-08 00:05:10', '2016-07-08 00:05:11', '2016-07-08 00:05:12', '2016-07-08 00:05:13', '2016-07-08 00:05:14', '2016-07-08 00:05:15', '2016-07-08 00:05:16', '2016-07-08 00:05:17', '2016-07-08 00:05:18', '2016-07-08 00:05:19', '2016-07-08 00:05:20', '2016-07-08 00:05:21', '2016-07-08 00:05:22', '2016-07-08 00:05:23', '2016-07-08 00:05:24', '2016-07-08 00:05:25', '2016-07-08 
00:05:26', '2016-07-08 00:05:27', '2016-07-08 00:05:28', '2016-07-08 00:05:29', '2016-07-08 00:05:30', '2016-07-08 00:05:31', '2016-07-08 00:05:32', '2016-07-08 00:05:33', '2016-07-08 00:05:34', '2016-07-08 00:05:35', '2016-07-08 00:05:36', '2016-07-08 00:05:37', '2016-07-08 00:05:38', '2016-07-08 00:05:39', '2016-07-08 00:05:40', '2016-07-08 00:05:41', '2016-07-08 00:05:42', '2016-07-08 00:05:43', '2016-07-08 00:05:44', '2016-07-08 00:05:45', '2016-07-08 00:05:46', '2016-07-08 00:05:47', '2016-07-08...
</code></pre>
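<p>If you prefer to avoid pandas entirely, the same nested structure can be built with only the standard library's <code>datetime</code> module (a sketch for the same day):</p>

```python
from datetime import datetime, timedelta

start = datetime(2016, 7, 8)
L = []
for i in range(288):  # 288 five-minute slots in one day
    slot = start + timedelta(minutes=5 * i)
    # one inner list of 60 one-second timestamps per slot
    L.append([(slot + timedelta(seconds=s)).strftime("%Y-%m-%d %H:%M:%S")
              for s in range(60)])

print(L[1][0])   # '2016-07-08 00:05:00'
print(L[1][59])  # '2016-07-08 00:05:59'
```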
| 2 |
2016-09-20T15:09:44Z
|
[
"python",
"list",
"datetime",
"pandas",
"datetimeindex"
] |
Adding an element to the key's value in ordered dictionary
| 39,597,574 |
<p>I have an ordered dictionary:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': '77.0300000',
'20140224': '69.6200000',
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>Keys are dates in the form yyyymmdd, and I want to store all the values of one month under one key:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': ['77.0300000', '69.6200000'],
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>I came up with this not very beautiful solution:</p>
<pre><code>val = [] #list, where i put all values for an exact month
previus_key = list(collection.keys())[0]
val.append(collection.get(previus_key))
for key in list(collection.keys())[1:]:
if (key[4:6]==previus_key[4:6]): #if it's the same month
val.append(list(collection.pop(key))) #remember the value and delete an item
else:
collection.update({previus_key:val}) #that means, we jumped to another month, so lets update the values for previus month
previus_key = key #now, start the same algorithm for a new month.
val = collection.get(previus_key)
</code></pre>
<p>But I get the error <code>'str' object has no attribute 'append'</code> for the line
<code>val.append(list(collection.pop(key)))</code>. I researched it and came to the conclusion that it should not happen here, because I append a list to a list!
So please point out my mistake and advise how I could make the code more beautiful. Thanks!</p>
| 1 |
2016-09-20T15:03:53Z
| 39,597,642 |
<p>I think the issue lies in the last line of your code: there you assign a string (the value returned by <code>collection.get</code>) to the variable <code>val</code>, so the next <code>append</code> call fails.</p>
<p><strong>EDIT : Here is a suggestion which stays close to your original code.</strong></p>
<pre><code>new_collection = {}
for key in collection:
month = key[:6]
if new_collection.get(month) != None:
new_collection[month].append(collection[key])
else:
new_collection[month] = [collection[key]]
</code></pre>
<p>Three things:</p>
<ol>
<li>Here the result is a new dictionary instead of the same instance, which I find generally preferable.</li>
<li>The keys are only the year and the month, as the day is irrelevant.</li>
<li>All the values are lists, instead of a mix of strings and lists.</li>
</ol>
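<p>The same month-grouping can also be written compactly with <code>dict.setdefault</code>; a sketch with shortened sample values:</p>

```python
collection = {'20140106': '82.10', '20140217': '77.03',
              '20140224': '69.62', '20140310': '46.33'}

new_collection = {}
for key, value in collection.items():
    # setdefault creates the list the first time a month is seen, then appends
    new_collection.setdefault(key[:6], []).append(value)

print(new_collection)
```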
| 1 |
2016-09-20T15:06:48Z
|
[
"python",
"list",
"dictionary"
] |
Adding an element to the key's value in ordered dictionary
| 39,597,574 |
<p>I have an ordered dictionary:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': '77.0300000',
'20140224': '69.6200000',
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>Keys are dates in the form yyyymmdd, and I want to store all the values of one month under one key:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': ['77.0300000', '69.6200000'],
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>I came up with this not very beautiful solution:</p>
<pre><code>val = [] #list, where i put all values for an exact month
previus_key = list(collection.keys())[0]
val.append(collection.get(previus_key))
for key in list(collection.keys())[1:]:
if (key[4:6]==previus_key[4:6]): #if it's the same month
val.append(list(collection.pop(key))) #remember the value and delete an item
else:
collection.update({previus_key:val}) #that means, we jumped to another month, so lets update the values for previus month
previus_key = key #now, start the same algorithm for a new month.
val = collection.get(previus_key)
</code></pre>
<p>But I get the error <code>'str' object has no attribute 'append'</code> for the line
<code>val.append(list(collection.pop(key)))</code>. I researched it and came to the conclusion that it should not happen here, because I append a list to a list!
So please point out my mistake and advise how I could make the code more beautiful. Thanks!</p>
| 1 |
2016-09-20T15:03:53Z
| 39,597,951 |
<p>Here is a better approach. Given a dict of type {str:str}, we can turn the value into a list on inserts while retaining all the functionality of dict by extending the builtin <code>dict</code> class.</p>
<p>Here is a demonstration.</p>
<pre><code>>>> class CustomDict(dict):
... def insert(self, key, val): # add a new method, `insert`
... if self.get(key):
... if not isinstance(self.get(key), list):
... lst = []
... lst.append(self.get(key))
... lst.append(val)
... self[key] = lst[:] # copy list
... else:
... self[key].append(val)
... else:
... self[key] = val
...
>>> vars = {'19900326': '372.99101',
... '19730529': '291.38291',
... '19430122': '81.248291',
... '19930227': '192.32919',}
>>> vars2 = CustomDict(vars)
>>> vars2
{'19730529': '291.38291',
'19900326': '372.99101',
'19430122': '81.248291',
'19930227': '192.32919'}
>>> vars2.insert('19730529', '102.88391')
>>> vars2
{'19730529': ['291.38291', '102.88391'],
'19900326': '372.99101',
'19430122': '81.248291',
'19930227': '192.32919'}
</code></pre>
<p>In your case, you'll want to inherit from collections.ordereddict</p>
<p>As far as the logic of wanting to store all the pieces of data from the same month in the same list, you can pretty easily code that in there. Just to be sure though, it looks like you're currently disregarding year entirely. Are you sure all you care about is the same month, even if the data comes from different years? If not, you should change your logic so that you can differentiate which year.</p>
<p>Somebody suggested a better representation of the date, as a 3-tuple <code>(year, month, day)</code>. This is immutable, so it can be your dict key. (Lists are mutable and cannot be dict keys).</p>
<p>You could add the month-grouping logic to the insert method, or you could move that logic elsewhere in your code.</p>
| 1 |
2016-09-20T15:21:53Z
|
[
"python",
"list",
"dictionary"
] |
Adding an element to the key's value in ordered dictionary
| 39,597,574 |
<p>I have an ordered dictionary:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': '77.0300000',
'20140224': '69.6200000',
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>Keys are dates in the form yyyymmdd, and I want to store all the values of one month under one key:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': ['77.0300000', '69.6200000'],
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>I came up with this not very beautiful solution:</p>
<pre><code>val = [] #list, where i put all values for an exact month
previus_key = list(collection.keys())[0]
val.append(collection.get(previus_key))
for key in list(collection.keys())[1:]:
if (key[4:6]==previus_key[4:6]): #if it's the same month
val.append(list(collection.pop(key))) #remember the value and delete an item
else:
collection.update({previus_key:val}) #that means, we jumped to another month, so lets update the values for previus month
previus_key = key #now, start the same algorithm for a new month.
val = collection.get(previus_key)
</code></pre>
<p>But I get the error <code>'str' object has no attribute 'append'</code> for the line
<code>val.append(list(collection.pop(key)))</code>. I researched it and came to the conclusion that it should not happen here, because I append a list to a list!
So please point out my mistake and advise how I could make the code more beautiful. Thanks!</p>
| 1 |
2016-09-20T15:03:53Z
| 39,598,089 |
<p>I don't see any advantage of using strings here as keys to represent the <em>date.</em> This is my solution, but I hope it won't break your current code. </p>
<p>Well, your code shows a lot of nesting there:</p>
<pre><code> val.append(list(collection.pop(key)))
</code></pre>
<p>And: </p>
<blockquote>
<p>Keys are dates: yyyy-mm-dd and i want to store all values of one month in one key:</p>
</blockquote>
<p>Here's a way to do it:
Use tuples for this purpose: </p>
<pre><code>mydict = {(year, month, day): value}
</code></pre>
<p>This way you not only simplify the problem, but also make your code more readable:</p>
<pre><code>for date in mydict:
year = date[0]
month = date[1]
...
...process data...
</code></pre>
<p>Note that dict keys must be immutable objects; tuples are immutable, so they qualify as keys (lists do not).</p>
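<p>Here is a sketch of that idea: parse the string keys into <code>(year, month, day)</code> tuples, then group by <code>(year, month)</code> (the sample values are shortened):</p>

```python
from datetime import datetime

raw = {'20140106': '82.10', '20140217': '77.03',
       '20140224': '69.62', '20140310': '46.33'}

# re-key the data by (year, month, day) tuples
mydict = {}
for key, value in raw.items():
    d = datetime.strptime(key, '%Y%m%d')
    mydict[(d.year, d.month, d.day)] = value

# group the values by (year, month); sorting keeps them in date order
by_month = {}
for (year, month, day), value in sorted(mydict.items()):
    by_month.setdefault((year, month), []).append(value)

print(by_month[(2014, 2)])  # ['77.03', '69.62']
```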
| 1 |
2016-09-20T15:26:51Z
|
[
"python",
"list",
"dictionary"
] |
Adding an element to the key's value in ordered dictionary
| 39,597,574 |
<p>I have an ordered dictionary:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': '77.0300000',
'20140224': '69.6200000',
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>Keys are dates in the form yyyymmdd, and I want to store all the values of one month under one key:</p>
<pre><code>{'20140106': '82.1000000',
'20140217': ['77.0300000', '69.6200000'],
'20140310': '46.3300000',
'20140414': '49.3800000',
</code></pre>
<p>I came up with this not very beautiful solution:</p>
<pre><code>val = [] #list, where i put all values for an exact month
previus_key = list(collection.keys())[0]
val.append(collection.get(previus_key))
for key in list(collection.keys())[1:]:
if (key[4:6]==previus_key[4:6]): #if it's the same month
val.append(list(collection.pop(key))) #remember the value and delete an item
else:
collection.update({previus_key:val}) #that means, we jumped to another month, so lets update the values for previus month
previus_key = key #now, start the same algorithm for a new month.
val = collection.get(previus_key)
</code></pre>
<p>But I get the error <code>'str' object has no attribute 'append'</code> for the line
<code>val.append(list(collection.pop(key)))</code>. I researched it and came to the conclusion that it should not happen here, because I append a list to a list!
So please point out my mistake and advise how I could make the code more beautiful. Thanks!</p>
| 1 |
2016-09-20T15:03:53Z
| 39,598,403 |
<p>Why not make use of the <code>reduce</code> and <code>filter</code> functions? <code>reduce</code> applies a function cumulatively to every item of the iterable, and you can also specify an initial value for <code>reduce</code>.</p>
<pre><code>from collections import OrderedDict
a = {'20140106': '82.1000000', '20140217': '77.0300000', '20140224': '69.6200000', '20140310': '46.3300000', '20140414': '49.3800000'}
def aggr(result, item):
key = filter(lambda x: x[:6] == item[0][:6], result.keys())
if not key:
result[item[0]] = [item[1]]
else:
result[key[0]].append(item[1])
return result
r = OrderedDict()
print reduce(aggr, a.items(), r)
</code></pre>
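<p>Note that the snippet above is Python 2: in Python 3, <code>reduce</code> lives in <code>functools</code>, and <code>filter</code> returns an iterator rather than a list, so both <code>if not key</code> and <code>key[0]</code> break. A Python 3 sketch of the same idea:</p>

```python
from collections import OrderedDict
from functools import reduce  # reduce moved to functools in Python 3

a = OrderedDict([('20140106', '82.10'), ('20140217', '77.03'),
                 ('20140224', '69.62'), ('20140310', '46.33')])

def aggr(result, item):
    # find an existing key for the same year+month, if any
    key = next((k for k in result if k[:6] == item[0][:6]), None)
    if key is None:
        result[item[0]] = [item[1]]
    else:
        result[key].append(item[1])
    return result

r = reduce(aggr, a.items(), OrderedDict())
print(r)  # the two February values end up grouped under '20140217'
```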
| 1 |
2016-09-20T15:41:27Z
|
[
"python",
"list",
"dictionary"
] |
Mathematical algorithm failing but seems correct
| 39,597,580 |
<p>I've been given a problem in which a function is fed A and B. These are target numbers reached from 1, 1, whereby B may only increase by A and A may only increase by B (e.g., 1 1 -> 2 1 or 1 2; 2 1 -> 3 1 or 2 3; 2 3 -> 5 3 or 2 5). This creates a binary tree. In the problem, given the target numbers, I need to find the "minimum" number of generations that have passed to reach them, or determine that they are impossible to reach (for example, 2 4 is impossible). Here's the solution I've come up with, and it's passing every test case I throw at it:</p>
<pre><code>import math
def answer(M, F):
m = int(M)
f = int(F)
numgen=0
if f==1 and m==1:
return "0"
while f>=1 and m>=1:
if f > m:
m, f = f, m
if f==1:
return str( numgen + m - 1 )
if m>f:
numgen += math.floor( m / f )
m = m % f
return "impossible"
</code></pre>
<p>I'm semi-code-golfing it, and I feel like my solution is highly elegant and fairly efficient. Everything I throw at it within ten generations is correct, and large numbers (upper limits are stated to be 10^50 on inputs) work just fine as well. When submitted and run against unknown test cases, however, three of the five fail. In essence, my question is about what kinds of cases fail here.</p>
<p>I have a few assumptions I can't prove but am fairly certain are accurate:</p>
<ul>
<li>There are no duplicates within the binary tree. I haven't found any cases, and I suspect it's mathematically provable.</li>
<li>The tree's right and left halves can be mirrored without affecting the number of generations - this one's actually pretty proven.</li>
<li>There is only one route to reach any given combination (a property of binary trees where there are no duplicates) - This relies on assumption 1, but if assumption 1 is true, then this must also be true.</li>
<li>One of the two numbers is always a prime. This doesn't really affect the algorithm, and I haven't proven it, but it always seems to be the case. An interesting tidbit.</li>
</ul>
<p>Where is this solution wrong?</p>
| 1 |
2016-09-20T15:03:57Z
| 39,598,146 |
<p>Your solution was too complicated for what it is in my honest opinion. Take a look at this:</p>
<pre><code>def answer(M, F):
my_bombs = [int(M), int(F)]
my_bombs.sort()
generations = 0
while my_bombs != [1, 1]:
if my_bombs[0] == 1:
return str(generations + my_bombs[1] - 1)
if my_bombs[0] < 1 or my_bombs[0] == my_bombs[1]:
return "impossible"
print(my_bombs, generations)
n = my_bombs[1] // my_bombs[0]
my_bombs[1] -= my_bombs[0] * n
generations += n
my_bombs.sort()
return str(generations)
</code></pre>
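<p>A brute-force cross-check is useful for validating this kind of solution on small inputs. The sketch below (with a made-up name, <code>min_generations</code>) does a breadth-first walk of the pair tree from (1, 1):</p>

```python
def min_generations(m, f, limit=10):
    """Brute-force BFS from (1, 1); returns None if unreachable within `limit` steps."""
    frontier = {(1, 1)}
    for gen in range(limit + 1):
        # the tree is mirror-symmetric, so check both orderings of the target
        if (m, f) in frontier or (f, m) in frontier:
            return gen
        # each pair has two children: add one component to the other
        frontier = ({(a + b, b) for a, b in frontier} |
                    {(a, a + b) for a, b in frontier})
    return None

print(min_generations(4, 7))  # 4
print(min_generations(2, 4))  # None: gcd(2, 4) != 1, so the pair is unreachable
```

<p>Every reachable pair is coprime (and vice versa), which is why a subtraction-based, Euclid-style approach can both count generations and detect the impossible cases.</p>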
| 1 |
2016-09-20T15:29:15Z
|
[
"python",
"algorithm",
"math",
"binary-tree",
"primes"
] |
Mathematical algorithm failing but seems correct
| 39,597,580 |
<p>I've been given a problem in which a function is fed A and B. These are target numbers reached from 1, 1, whereby B may only increase by A and A may only increase by B (e.g., 1 1 -> 2 1 or 1 2; 2 1 -> 3 1 or 2 3; 2 3 -> 5 3 or 2 5). This creates a binary tree. In the problem, given the target numbers, I need to find the "minimum" number of generations that have passed to reach them, or determine that they are impossible to reach (for example, 2 4 is impossible). Here's the solution I've come up with, and it's passing every test case I throw at it:</p>
<pre><code>import math
def answer(M, F):
m = int(M)
f = int(F)
numgen=0
if f==1 and m==1:
return "0"
while f>=1 and m>=1:
if f > m:
m, f = f, m
if f==1:
return str( numgen + m - 1 )
if m>f:
numgen += math.floor( m / f )
m = m % f
return "impossible"
</code></pre>
<p>I'm semi-code-golfing it, and I feel like my solution is highly elegant and fairly efficient. Everything I throw at it within ten generations is correct, and large numbers (upper limits are stated to be 10^50 on inputs) work just fine as well. When submitted and run against unknown test cases, however, three of the five fail. In essence, my question is about what kinds of cases fail here.</p>
<p>I have a few assumptions I can't prove but am fairly certain are accurate:</p>
<ul>
<li>There are no duplicates within the binary tree. I haven't found any cases, and I suspect it's mathematically provable.</li>
<li>The tree's right and left halves can be mirrored without affecting the number of generations - this one's actually pretty proven.</li>
<li>There is only one route to reach any given combination (a property of binary trees where there are no duplicates) - This relies on assumption 1, but if assumption 1 is true, then this must also be true.</li>
<li>One of the two numbers is always a prime. This doesn't really affect the algorithm, and I haven't proven it, but it always seems to be the case. An interesting tidbit.</li>
</ul>
<p>Where is this solution wrong?</p>
| 1 |
2016-09-20T15:03:57Z
| 39,598,911 |
<p>You didn't say whether you're using Python 2 or Python 3, but the <code>math.floor( m / f )</code> only makes sense in Python 3. There the <code>m / f</code> is a float, which is imprecise. You'd better simply use integer division: <code>numgen += m // f</code>. An example where it matters is <code>M, F = str(10**30), '3'</code>, where you compute <code>333333333333333316505293553666</code> but with integer division you'd get <code>333333333333333333333333333335</code>.</p>
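<p>The difference is easy to demonstrate (a quick check, assuming Python 3):</p>

```python
import math

m, f = 10**30, 3
# m / f goes through a 64-bit float, which cannot represent numbers near 10**30 exactly
print(math.floor(m / f))  # imprecise
print(m // f)             # exact: a 30-digit run of 3s
print(math.floor(m / f) == m // f)  # False
```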
| 2 |
2016-09-20T16:05:39Z
|
[
"python",
"algorithm",
"math",
"binary-tree",
"primes"
] |
How to set the server timeout in python Klein?
| 39,597,585 |
<p>I am using Python Klein <a href="http://klein.readthedocs.io/en/latest/" rel="nofollow">http://klein.readthedocs.io/en/latest/</a> to set up a web service. I have checked the documentation, but I still don't know how to set the timeout of the service. Can anyone more familiar with the tool show how to set the timeout to 15 seconds? Thanks!</p>
| 0 |
2016-09-20T15:04:09Z
| 39,753,400 |
<p>You could call <code>Request.loseConnection()</code> to drop the request connection to the client after a set timeout interval. Here is a quick example:</p>
<pre><code>from twisted.internet import reactor, task, defer
from klein import Klein

app = Klein()
request_timeout = 10        # seconds

@app.route('/delayed/<int:n>')
@defer.inlineCallbacks
def timeoutRequest(request, n):
    work = serverTask(n)            # work that might take too long
    drop = reactor.callLater(
        request_timeout,            # drop request connection after n seconds
        dropRequest,                # function to drop request connection
        request,                    # pass request obj into dropRequest()
        work)                       # pass worker deferred obj to dropRequest()
    try:
        result = yield work         # work has completed, get result
        drop.cancel()               # cancel the task to drop the request connection
    except:
        result = 'Request dropped'
    defer.returnValue(result)

def serverTask(n):
    """
    A simulation of a task that takes n number of seconds to complete.
    """
    d = task.deferLater(reactor, n, lambda: 'delayed for %d seconds' % (n))
    return d

def dropRequest(request, deferred):
    """
    Drop the request connection and cancel any deferreds
    """
    request.loseConnection()
    deferred.cancel()

app.run('localhost', 9000)
</code></pre>
<p>To try this out, go to <code>http://localhost:9000/delayed/2</code> then <code>http://localhost:9000/delayed/20</code> to test a scenario when the task doesn't complete in time. Don't forget to cancel all tasks, deferreds, threads, etc related to this request or you could potentially waste lots of memory.</p>
<h1>Code Explanation</h1>
<p><strong>Server Side Task</strong>: Client goes to <code>/delayed/<n></code> endpoint with a specified delay value. A server side task (<code>serverTask()</code>) starts and for the sake of simplicity and to simulate a busy task, <code>deferLater</code> was used to return a string after <code>n</code> seconds.</p>
<p><strong>Request Timeout</strong>: Using the <code>callLater</code> function, <code>dropRequest</code> is scheduled to run after the <code>request_timeout</code> interval, passing in <code>request</code> and every work deferred that needs to be cancelled (in this case there is only <code>work</code>). When the <code>request_timeout</code> has passed, the request connection is closed (<code>request.loseConnection()</code>) and the deferreds are cancelled (<code>deferred.cancel</code>).</p>
<p><strong>Yield Server Task Result</strong>: In a try/except block, the result will be yielded when the value is available or, if the timeout has passed and connection is dropped, an error will occur and the <code>Request dropped</code> message will be returned.</p>
<h1>Alternative</h1>
<p>This really doesn't seem like a desirable scenario and should be avoided if possible, but I could see a need for this kind of functionality. Also, though rare, keep in mind that <code>loseConnection</code> doesn't always fully close a connection (this is due to the TCP implementation, not so much Twisted). A better solution would be to cancel the server-side task when the client disconnects (which may be a bit easier to catch). This can be done by attaching an <code>addErrback</code> to <code>Request.notifyFinish()</code>. Here is an example using just Twisted (<a href="http://twistedmatrix.com/documents/current/web/howto/web-in-60/interrupted.html" rel="nofollow">http://twistedmatrix.com/documents/current/web/howto/web-in-60/interrupted.html</a>).</p>
| 1 |
2016-09-28T16:51:09Z
|
[
"python",
"web-services",
"klein-mvc"
] |
Python empty csr_matrix throws ValueError: cannot infer dimensions from zero sized index arrays
| 39,597,638 |
<p>Here's the python SSCCE: </p>
<pre><code>import scipy.sparse
data = []
row = []
col = []
csr = scipy.sparse.csr_matrix((data, (row, col))) #error happens here
print(type(csr))
print(csr)
</code></pre>
<p>I'm running it with python2.7 I get an error:</p>
<pre><code>raise ValueError('cannot infer dimensions from zero sized index arrays')
ValueError: cannot infer dimensions from zero sized index arrays
</code></pre>
<p>It works correctly when I feed them values like this:</p>
<pre><code>csr = scipy.sparse.csr_matrix(([10,20,30], ([0,0,0],[0,1,2])))
</code></pre>
<p>or like this:</p>
<pre><code>csr = scipy.sparse.csr_matrix(([10,20], ([0,0],[0,1])))
csr = scipy.sparse.csr_matrix(([10], ([0],[0])))
</code></pre>
<p>I read the documentation at:
<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html</a> and </p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/sparse.html#usage-information" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/sparse.html#usage-information</a> </p>
<p>but that doesn't seem to explain why I can't make a csr matrix with zero items in it.</p>
<p>What's going on with this error? I guess scipy.sparse.csr.csr_matrix types must have at least one value in them on instantiation? That seems like a silly restriction. </p>
| 0 |
2016-09-20T15:06:45Z
| 39,598,951 |
<p>Scipy sparse matrices have a definite shape, e.g. (m, n) where m is the number of rows and n is the number of columns. When you write, for example, <code>csr_matrix(([1, 2], ([0, 3], [1, 4])))</code>, <code>csr_matrix</code> infers the shape from the maximum values of the row and column indices. But when you write <code>csr_matrix(([], ([], [])))</code>, the function has no way of knowing what the shape of the matrix should be (and I guess it won't create a matrix with shape (0, 0) by default).</p>
<p>One way to handle that is to give an explicit <code>shape</code>:</p>
<pre><code>In [241]: csr_matrix(([], ([], [])), shape=(3, 3))
Out[241]:
<3x3 sparse matrix of type '<class 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
</code></pre>
| 1 |
2016-09-20T16:07:27Z
|
[
"python",
"scipy",
"sparse-matrix"
] |
Python empty csr_matrix throws ValueError: cannot infer dimensions from zero sized index arrays
| 39,597,638 |
<p>Here's the python SSCCE: </p>
<pre><code>import scipy.sparse
data = []
row = []
col = []
csr = scipy.sparse.csr_matrix((data, (row, col))) #error happens here
print(type(csr))
print(csr)
</code></pre>
<p>I'm running it with python2.7 I get an error:</p>
<pre><code>raise ValueError('cannot infer dimensions from zero sized index arrays')
ValueError: cannot infer dimensions from zero sized index arrays
</code></pre>
<p>It works correctly when I feed them values like this:</p>
<pre><code>csr = scipy.sparse.csr_matrix(([10,20,30], ([0,0,0],[0,1,2])))
</code></pre>
<p>or like this:</p>
<pre><code>csr = scipy.sparse.csr_matrix(([10,20], ([0,0],[0,1])))
csr = scipy.sparse.csr_matrix(([10], ([0],[0])))
</code></pre>
<p>I read the documentation at:
<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html</a> and </p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/sparse.html#usage-information" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/sparse.html#usage-information</a> </p>
<p>but that doesn't seem to explain why I can't make a csr matrix with zero items in it.</p>
<p>What's going on with this error? I guess scipy.sparse.csr.csr_matrix types must have at least one value in them on instantiation? That seems like a silly restriction. </p>
| 0 |
2016-09-20T15:06:45Z
| 39,599,555 |
<p>You can make an empty sparse matrix - from an empty dense one or by giving the shape parameter:</p>
<pre><code>In [477]: from scipy import sparse
In [478]: sparse.csr_matrix([])
Out[478]:
<1x0 sparse matrix of type '<class 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
In [479]: sparse.csr_matrix(([],([],[])),shape=(0,0))
Out[479]:
<0x0 sparse matrix of type '<class 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
</code></pre>
<p>You can even create a large empty one with <code>shape</code>:</p>
<pre><code>In [480]: sparse.csr_matrix(([],([],[])),shape=(1000,1000))
Out[480]:
<1000x1000 sparse matrix of type '<class 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
In [481]: M=_
In [482]: M[34,334]=1000
/usr/lib/python3/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
</code></pre>
<p>But given the inefficiency of inserting values or concatenating a new matrix, I don't see why you'd want to make such a creature.</p>
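<p>If a matrix really does have to be assembled element by element, a sketch of the route the <code>SparseEfficiencyWarning</code> above points at — build in LIL format, then convert to CSR once the structure is fixed:</p>

```python
from scipy import sparse

# LIL supports cheap single-element writes ...
M = sparse.lil_matrix((1000, 1000))
M[34, 334] = 1000       # no SparseEfficiencyWarning here

# ... then convert once for efficient arithmetic/slicing afterwards.
M = M.tocsr()
print(M.nnz)            # 1
```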
<p>The relevant code from <code>sparse.coo_matrix</code> when the first argument is a tuple:</p>
<pre><code>    try:
        obj, (row, col) = arg1
    except (TypeError, ValueError):
        raise TypeError('invalid input format')

    if shape is None:
        if len(row) == 0 or len(col) == 0:
            raise ValueError('cannot infer dimensions from zero '
                             'sized index arrays')
        M = np.max(row) + 1
        N = np.max(col) + 1
        self.shape = (M, N)
</code></pre>
| 0 |
2016-09-20T16:40:54Z
|
[
"python",
"scipy",
"sparse-matrix"
] |
Create sparse circulant matrix in python
| 39,597,649 |
<p>I want to create a large (say 10^5 x 10^5) sparse circulant matrix in Python. It has 4 elements per row at positions <code>[i,i+1], [i,i+2], [i,i+N-2], [i,i+N-1]</code>, where I have assumed periodic boundary conditions for the indices (i.e. <code>[10^5,10^5]=[0,0], [10^5+1,10^5+1]=[1,1]</code> and so on). I looked at the scipy sparse matrices documentation but I am quite confused (I am new to Python). </p>
<p>I can create the matrix with numpy</p>
<pre><code>import numpy as np

def Bc(i, boundary):
    """(int, int) -> int
    Checks boundary conditions on index
    """
    if i > boundary - 1:
        return i - boundary
    elif i < 0:
        return boundary + i
    else:
        return i

N = 100
diffMat = np.zeros([N, N])
for i in np.arange(0, N, 1):
    diffMat[i, [Bc(i+1, N), Bc(i+2, N), Bc(i+2+(N-5)+1, N), Bc(i+2+(N-5)+2, N)]] = [2.0/3, -1.0/12, 1.0/12, -2.0/3]
</code></pre>
<p>However, this is quite slow and for large <code>N</code> uses a lot of memory, so I want to avoid the creation with numpy and the converting to a sparse matrix and go directly to the latter.</p>
<p>I know how to do it in Mathematica, where one can use SparseArray and index patterns - is there something similar here?</p>
| 1 |
2016-09-20T15:07:07Z
| 39,598,506 |
<p>To create a <em>dense</em> circulant matrix, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.circulant.html" rel="nofollow"><code>scipy.linalg.circulant</code></a>. For example,</p>
<pre><code>In [210]: from scipy.linalg import circulant
In [211]: N = 7
In [212]: vals = np.array([2.0/3, -1.0/12, 1.0/12, -2.0/3])
In [213]: offsets = np.array([1, 2, N-2, N-1])
In [214]: col0 = np.zeros(N)
In [215]: col0[offsets] = -vals
In [216]: c = circulant(col0)
In [217]: c
Out[217]:
array([[ 0. , 0.6667, -0.0833, 0. , 0. , 0.0833, -0.6667],
[-0.6667, 0. , 0.6667, -0.0833, 0. , 0. , 0.0833],
[ 0.0833, -0.6667, 0. , 0.6667, -0.0833, 0. , 0. ],
[ 0. , 0.0833, -0.6667, 0. , 0.6667, -0.0833, 0. ],
[ 0. , 0. , 0.0833, -0.6667, 0. , 0.6667, -0.0833],
[-0.0833, 0. , 0. , 0.0833, -0.6667, 0. , 0.6667],
[ 0.6667, -0.0833, 0. , 0. , 0.0833, -0.6667, 0. ]])
</code></pre>
<p>As you point out, for large <code>N</code>, that requires a lot of memory and most of the values are zero. To create a scipy sparse matrix, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.diags.html" rel="nofollow"><code>scipy.sparse.diags</code></a>. We have to create offsets (and corresponding values) for the diagonals above and below the main diagonal:</p>
<pre><code>In [218]: from scipy import sparse
In [219]: N = 7
In [220]: vals = np.array([2.0/3, -1.0/12, 1.0/12, -2.0/3])
In [221]: offsets = np.array([1, 2, N-2, N-1])
In [222]: dupvals = np.concatenate((vals, vals[::-1]))
In [223]: dupoffsets = np.concatenate((offsets, -offsets))
In [224]: a = sparse.diags(dupvals, dupoffsets, shape=(N, N))
In [225]: a.toarray()
Out[225]:
array([[ 0. , 0.6667, -0.0833, 0. , 0. , 0.0833, -0.6667],
[-0.6667, 0. , 0.6667, -0.0833, 0. , 0. , 0.0833],
[ 0.0833, -0.6667, 0. , 0.6667, -0.0833, 0. , 0. ],
[ 0. , 0.0833, -0.6667, 0. , 0.6667, -0.0833, 0. ],
[ 0. , 0. , 0.0833, -0.6667, 0. , 0.6667, -0.0833],
[-0.0833, 0. , 0. , 0.0833, -0.6667, 0. , 0.6667],
[ 0.6667, -0.0833, 0. , 0. , 0.0833, -0.6667, 0. ]])
</code></pre>
<p>The matrix is stored in the "diagonal" format:</p>
<pre><code>In [226]: a
Out[226]:
<7x7 sparse matrix of type '<class 'numpy.float64'>'
with 28 stored elements (8 diagonals) in DIAgonal format>
</code></pre>
<p>You can use the conversion methods of the sparse matrix to convert it to a different sparse format. For example, the following results in a matrix in CSR format:</p>
<pre><code>In [227]: a.tocsr()
Out[227]:
<7x7 sparse matrix of type '<class 'numpy.float64'>'
with 28 stored elements in Compressed Sparse Row format>
</code></pre>
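<p>As a sanity check, the dense and sparse constructions can be compared directly; this sketch simply re-runs both routes from above for <code>N = 7</code>:</p>

```python
import numpy as np
from scipy import sparse
from scipy.linalg import circulant

N = 7
vals = np.array([2.0/3, -1.0/12, 1.0/12, -2.0/3])
offsets = np.array([1, 2, N-2, N-1])

# dense route
col0 = np.zeros(N)
col0[offsets] = -vals
dense = circulant(col0)

# sparse route
dup_vals = np.concatenate((vals, vals[::-1]))
dup_offsets = np.concatenate((offsets, -offsets))
sparse_version = sparse.diags(dup_vals, dup_offsets, shape=(N, N))

print(np.allclose(dense, sparse_version.toarray()))  # True
```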
| 2 |
2016-09-20T15:46:29Z
|
[
"python",
"numpy",
"matrix",
"scipy",
"sparse-matrix"
] |
python import MySQLdb
| 39,597,771 |
<p>in python, import MySQLdb</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.10-intel.egg/MySQLdb/__init__.py", line 19, in <module>
import _mysql
ImportError: dlopen(/Library/Python/2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.10-intel.egg/_mysql.so, 2): Library not loaded: /usr/local/mysql/lib/libmysqlclient.18.dylib
Referenced from: /Library/Python/2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.10-intel.egg/_mysql.so
Reason: image not found
</code></pre>
<p>I tried <code>ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib</code> and other solutions, but nothing helps.</p>
<p>My computer is OS X EI Capitan, python2.7.9 django1.8.2 mysql-5-7-15</p>
| 0 |
2016-09-20T15:13:09Z
| 40,107,011 |
<p>I'm using ubuntu 16.04 and I don't have that lib in the same place as you specified. </p>
<p>The best solution for me was installing the mysql pip package with the <code>--no-binary</code> option as shown in this <a href="http://stackoverflow.com/a/36835229/3086572">post reply</a></p>
| 0 |
2016-10-18T11:27:32Z
|
[
"python",
"mysql",
"django"
] |
Pandas Multicolumn Groupby Plotting
| 39,597,785 |
<p>Problem:<br>
I have a pandas dataframe that I would like to group by year-month and rule_name. Once grouped, I want to get the count of each rule during that period and its percentage of all the rules in that group. So far I am able to get the per-period counts but not the percentage. </p>
<p>The goal is to have a plot similar to the ones at the bottom but on the right-y axis I would have percentage of the time period as well.</p>
<p>Goal Dataframes:<br>
For rule_name A:</p>
<pre><code>date counts (rule_name) %_rule_name
Jan 16 1 50
Feb 16 0 0
Jun 16 2 66
</code></pre>
<p>I would like to continue this for each rule_name (i.e. for B and C)</p>
<p>Code So Far:</p>
<pre><code>d = {'date': ['1/1/2016', '2/1/2016', '3/5/2016', '2/5/2016', '1/15/2016', '3/3/2016', '3/4/2016'],
'rule_name' : ['A' , 'B', 'C', 'C', 'B', 'A','A']}
df = pd.DataFrame(d)
Output:
</code></pre>
<p><a href="http://i.stack.imgur.com/hk3Sg.png" rel="nofollow"><img src="http://i.stack.imgur.com/hk3Sg.png" alt="enter image description here"></a></p>
<pre><code># format string date to datetime
df['date'] = pd.to_datetime(df['date'], format='%m/%d/%Y', errors='coerce')
rule_names = df['rule_name'].unique().tolist()
for i in rule_names:
    print ""
    print 'dataframe for', i ,':'
    df_temp = df[df['rule_name'] == i]
    df_temp = df_temp.groupby(df_temp['date'].map(lambda x: str(x.year) + '-' + str(x.strftime('%m')))).count()
    df_temp.plot(kind='line', title = 'Rule Name: ' + str(i))
    print df_temp
Output:
</code></pre>
<p><a href="http://i.stack.imgur.com/Fa1h6.png" rel="nofollow"><img src="http://i.stack.imgur.com/Fa1h6.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/NP2D7.png" rel="nofollow"><img src="http://i.stack.imgur.com/NP2D7.png" alt="enter image description here"></a></p>
<p>I feel like there is a better way to do this but am unable to figure it out. I have been racking my brains on this problem for the last day'ish'. Should I be filtering? I tried a multi-index group-by but could not create a %_rule_name column. Thanks for input in advance.</p>
| 1 |
2016-09-20T15:13:58Z
| 39,599,650 |
<p>I was able to resolve this. The following code provides the necessary plots and data processing. I am putting it up in case it helps someone else. It feels kind of janky, but it does the trick. Any suggestion to improve it would be appreciated.</p>
<p>Thanks SO.</p>
<pre><code>import seaborn as sns

df_all = df.groupby(df['date'].map(lambda x: str(x.year) + '-' + str(x.strftime('%m')))).count()
df_all = pd.DataFrame(df_all)
df_all['rule_name_all_count'] = df_all['rule_name']
rule_names = df['rule_name'].unique().tolist()

for i in rule_names:
    print ""
    print 'dataframe for', i ,':'
    df_temp = df[df['rule_name'] == i]
    df_temp = df_temp.groupby(df_temp['date'].map(lambda x: str(x.year) + '-' + str(x.strftime('%m')))).count()
    df_temp = pd.DataFrame(df_temp)
    df_merge = pd.merge(df_all, df_temp, right_index = True, left_index = True, how='left')
    drop_x(df_merge)
    rename_y(df_merge)
    df_merge.drop('date', axis=1, inplace=True)
    df_merge['rule_name_%'] = df_merge['rule_name'].astype(float) / df_merge['rule_name_all_count'].astype(float)
    df_merge = df_merge.fillna(0)
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax2 = ax.twinx()
    df_merge['rule_name'].plot()
    df_merge['rule_name_%'].plot()
    plt.show()
    print df_temp
</code></pre>
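<p>A more compact route to the same counts-plus-percentage table is a single <code>groupby</code> followed by <code>unstack</code>. This is just a sketch against the sample data from the question (the <code>period</code> column name is my own):</p>

```python
import pandas as pd

d = {'date': ['1/1/2016', '2/1/2016', '3/5/2016', '2/5/2016',
              '1/15/2016', '3/3/2016', '3/4/2016'],
     'rule_name': ['A', 'B', 'C', 'C', 'B', 'A', 'A']}
df = pd.DataFrame(d)
df['date'] = pd.to_datetime(df['date'], format='%m/%d/%Y')
df['period'] = df['date'].dt.strftime('%Y-%m')

# one row per period, one column per rule, missing combinations as 0
counts = df.groupby(['period', 'rule_name']).size().unstack(fill_value=0)

# each rule's percentage of its period's total
pct = counts.div(counts.sum(axis=1), axis=0) * 100
```

<p><code>counts</code> and <code>pct</code> share the same index, so the two series for any rule can be plotted against twin y-axes directly.</p>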
<p><a href="http://i.stack.imgur.com/gU57a.png" rel="nofollow"><img src="http://i.stack.imgur.com/gU57a.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/K8dD1.png" rel="nofollow"><img src="http://i.stack.imgur.com/K8dD1.png" alt="enter image description here"></a></p>
| 0 |
2016-09-20T16:47:17Z
|
[
"python",
"pandas",
"plot",
"group-by",
"filtering"
] |
Display objects related current user django admin
| 39,597,836 |
<p>I'm trying to display only objects related to the current user. Users can upload a file and then they can only see their files in the admin. Here is my models :</p>
<pre><code>class Share(models.Model):
    owner = models.ForeignKey(User, default='')
    title = models.CharField("File's title", max_length=100, unique=True, default='File')
    zip_file = models.FileField('Shared File', upload_to=content_zip_name, validators=[validation_zip])
</code></pre>
<p>and my admin.py</p>
<pre><code>from django.contrib import admin
from share.models import Share

class ShareAdmin(admin.ModelAdmin):
    list_display = ('title',)
    save_as = True

    def queryset(self, request):
        qs = super(ShareAdmin, self).queryset(request)
        if request.user.is_superuser:
            return qs
        return qs.filter(owner=request.user)

admin.site.register(Share, ShareAdmin)
</code></pre>
<p>I tried with overriding the queryset function but doesn't work..any idea? </p>
| 0 |
2016-09-20T15:16:34Z
| 39,597,964 |
<p>You've confused the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_filter" rel="nofollow"><code>queryset</code></a> method (used with <code>list_filter</code>) with the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.get_queryset" rel="nofollow"><code>get_queryset</code></a> method:</p>
<blockquote>
<p>The <code>get_queryset</code> method on a <code>ModelAdmin</code> returns a <em>QuerySet</em> of all
model instances that can be edited by the admin site. One use case for
overriding this method is to show objects owned by the <strong>logged-in</strong>
user.</p>
</blockquote>
<p>[<em>Emphasis mine</em>]</p>
<pre><code>class ShareAdmin(admin.ModelAdmin):
    list_display = ('title',)
    save_as = True

    def get_queryset(self, request):
        qs = super(ShareAdmin, self).get_queryset(request)
        if request.user.is_superuser:
            return qs
        return qs.filter(owner=request.user)
</code></pre>
| 1 |
2016-09-20T15:22:19Z
|
[
"python",
"django",
"django-queryset"
] |
Django Aggregation: Sum of Multiplication of two fields that are not in same model
| 39,597,886 |
<p>I have three models (Django 1.6.5):</p>
<pre><code>class Vote(models.Model):
    voter = models.ForeignKey(UserSettings)
    answer = models.ForeignKey(Answer)
    rating = models.IntegerField()

class Answer(models.Model):
    content = models.CharField(max_length=255)

class UserSettings(models.Model):
    user = models.OneToOneField(User, related_name='settings')
    weight = models.FloatField(default=1.0)
</code></pre>
<p>Basically, a user (voter) can vote for an answer by giving a rating. I know how to sum the ratings by answer:</p>
<pre><code>Vote.objects.all().values('answer').annotate(score=Sum('rating'))
</code></pre>
<p>The only subtlety is that each voter has a weight (all voters are not equal!) and I want to sum each product rating*weight. I know (from <a href="http://stackoverflow.com/questions/12165636/django-aggregation-summation-of-multiplication-of-two-fields">here</a>) that something like that can be done:</p>
<pre><code>Sum('id',field="field1*field2")
</code></pre>
<p>and it would work well if my 2 fields are in the same model but it doesn't work if they are not. In other words, command:</p>
<pre><code>Vote.objects.all().values('answer').annotate(score=Sum('id',field="rating*voter__weight"))
</code></pre>
<p>does not work. Any help greatly appreciated!</p>
| 0 |
2016-09-20T15:18:53Z
| 39,604,972 |
<p>The problem is that we need a join with another table (in this case UserSettings), so we need to "force" the join.</p>
<pre><code>q = Vote.objects.all().filter(voter__settings__weight__gt=0).values("answer").annotate(Sum('id', field='rating*weight'))
</code></pre>
<p>To force the join I used the filter clause (in fact, I assume all users have a weight greater than 0), but it is just used to force the join. Then you can use the weight field.</p>
<p>PD: I think that this problem is solved in the latest version of Django with Conditional Expressions: <a href="https://docs.djangoproject.com/es/1.10/ref/models/conditional-expressions/" rel="nofollow">https://docs.djangoproject.com/es/1.10/ref/models/conditional-expressions/</a></p>
| 0 |
2016-09-20T22:57:07Z
|
[
"python",
"django",
"aggregate",
"django-queryset"
] |
How do I Hangup up call in Asterisk Event
| 39,598,018 |
<p>I am connecting to an Asterisk server using the Python Asterisk manager. How do I hang up calls from the AMI?</p>
<pre><code>def hangup_event(event, manager):
    with ctx:
        if event.name == 'Hangup':
            data = {
                "channel": event.message['Channel'],
                "unique_id": event.message['Uniqueid'],
                "cause": event.message['Cause'],
            }

manager.register_event('*', hangup_event)

channel = 'SIP/356256266262'
res = manager.send_action({'Action': 'Hangup', 'Channel': channel})
</code></pre>
<p>My objective is to end the call but it isn't working.</p>
| 0 |
2016-09-20T15:24:04Z
| 39,605,548 |
<p>Use the AMI action COMMAND. Issue the command </p>
<pre><code>channel request hangup channel_name_here
</code></pre>
| 0 |
2016-09-21T00:05:22Z
|
[
"python",
"asterisk"
] |
Implementing a single thread server/daemon (Python)
| 39,598,038 |
<p>I am developing a server (daemon).</p>
<p>The server has one "worker thread". The worker thread runs a queue of commands. When the queue is empty, the worker thread is paused (but does not exit, because it should preserve certain state in memory). To keep exactly one copy of the state in memory, I need exactly one worker thread (not several, and not zero) running at all times.</p>
<p>Requests are added to the end of this queue when a client connects to a Unix socket and sends a command.</p>
<p>After the command is issued, it is added to the queue of commands of the worker thread. After it is added to the queue, the server replies something like "OK". There should be not a long pause between server receiving a command and it "OK" reply. However, running commands in the queue may take some time.</p>
<p>The main "work" of the worker thread is split into small (taking relatively little time) chunks. Between chunks, the worker thread inspects ("eats" and empties) the queue and continues to work based on the data extracted from the queue.</p>
<p>How to implement this server/daemon in Python?</p>
| -1 |
2016-09-20T15:24:46Z
| 39,599,631 |
<p>This is sample code with internet sockets, easily replaced with Unix domain sockets. It takes whatever you write to the socket, passes it as a "command" to the worker, and responds OK as soon as it has queued the command. The single worker simulates a lengthy task with a loop of ten one-second "work" chunks. You can queue as many tasks as you want and receive OK immediately; between chunks, the worker checks the queue and prints any command it finds there.</p>
<pre><code>import Queue, threading, socket
from time import sleep

class worker(threading.Thread):
    def __init__(self, q):
        super(worker, self).__init__()
        self.qu = q

    def run(self):
        while True:
            new_task = self.qu.get(True)
            print new_task
            i = 0
            while i < 10:
                print "working ..."
                sleep(1)
                i += 1
                try:
                    another_task = self.qu.get(False)
                    print another_task
                except Queue.Empty:
                    pass

task_queue = Queue.Queue()
w = worker(task_queue)
w.daemon = True
w.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 4200))
sock.listen(1)
try:
    while True:
        conn, addr = sock.accept()
        data = conn.recv(32)
        task_queue.put(data)
        conn.sendall("OK")
        conn.close()
except:
    sock.close()
</code></pre>
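<p>For completeness, a minimal Python 3 flavoured client for exercising the server above might look like this (note the byte strings; the 32-byte read mirrors the listing):</p>

```python
import socket

def send_command(cmd, host='localhost', port=4200):
    """Send one command to the daemon and return its acknowledgement."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))
        s.sendall(cmd)            # e.g. b"do_work"
        return s.recv(32)         # the server answers b"OK" immediately
    finally:
        s.close()
```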
| 1 |
2016-09-20T16:45:47Z
|
[
"python",
"sockets",
"server",
"daemon"
] |
Django 1.10 AppRegistryNotReady: Apps aren't loaded yet. I can't use django.setup
| 39,598,151 |
<p>I'm facing an issue since I upgraded my django from <code>1.7.10</code> to <code>1.10.1</code>.
Indeed, I had the
<code>django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.</code>
Error and in order to solve that I figured out that I had to remove django.setup() in order to solve the circular dependency (I assume).
But then, I'm facing the error from the title:</p>
<pre><code>Traceback (most recent call last):
File "/var/www/webapps/lib/python3.4/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/var/www/webapps/lib/python3.4/site-packages/django/core/management/commands/runserver.py", line 113, in inner_run
autoreload.raise_last_exception()
File "/var/www/webapps/lib/python3.4/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/var/www/webapps/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/var/www/webapps/lib/python3.4/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/var/www/webapps/lib/python3.4/site-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/var/www/webapps/lib/python3.4/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/var/www/webapps/lib/python3.4/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/var/www/webapps/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2231, in _gcd_import
File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1448, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/var/www/webapps/lib/python3.4/site-packages/easy_select2/__init__.py", line 7, in <module>
from easy_select2.utils import (
File "/var/www/webapps/lib/python3.4/site-packages/easy_select2/utils.py", line 7, in <module>
from easy_select2.widgets import Select2Mixin, Select2, Select2Multiple
File "/var/www/webapps/lib/python3.4/site-packages/easy_select2/widgets.py", line 24, in <module>
static('easy_select2/js/init.js'),
File "/var/www/webapps/lib/python3.4/site-packages/django/templatetags/static.py", line 163, in static
return StaticNode.handle_simple(path)
File "/var/www/webapps/lib/python3.4/site-packages/django/templatetags/static.py", line 112, in handle_simple
if apps.is_installed('django.contrib.staticfiles'):
File "/var/www/webapps/lib/python3.4/site-packages/django/apps/registry.py", line 225, in is_installed
self.check_apps_ready()
File "/var/www/webapps/lib/python3.4/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>And the "apps registrations":</p>
<pre><code> # APP CONFIGURATION
DJANGO_APPS = (
# Default Django apps:
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.sitemaps',
'django.contrib.staticfiles',
'django.contrib.messages',
# Useful template tags:
# 'django.contrib.humanize',
)
THIRD_PARTY_APPS = (
# Admin
'djangocms_admin_style',
'djangocms_text_ckeditor',
'django.contrib.admin',
# Django CMS
'cms',
'menus',
'sekizai',
'treebeard',
'easy_thumbnails',
'easy_thumbnails.optimize',
'filer',
'rosetta',
'cmsplugin_filer_file',
'cmsplugin_filer_folder',
'cmsplugin_filer_link',
'cmsplugin_filer_image',
'cmsplugin_filer_teaser',
'cmsplugin_filer_video',
#'cmsplugin_filer_svg',
# 'djangocms_ckeditor_filer',
'djangocms_style',
'djangocms_flash',
'djangocms_googlemap',
'djangocms_inherit',
'reversion',
'aldryn_reversion',
'parler',
'taggit',
'meta',
'meta_mixin',
'cities_light',
'admin_enhancer',
'multiselectfield',# multiselect in charfield
'sortedm2m',# ordered manyTomany
'easy_select2',
'taggit_autosuggest_select2',
'adminsortable2',
'autocomplete_light',
'compressor',
'nocaptcha_recaptcha',
'widget_tweaks',
'qartez',
'django_mobile',
'cookielaw',
'django_user_agents',
)
# Apps specific for this project go here.
LOCAL_APPS = (
'blippar',
'multisite_multilanguage',
'djangocms_extend',
'djangocms_blog',
'djangocms_showroom',
'djangocms_press',
'djangocms_partner_profile',
'djangocms_faq',
'djangocms_presentations',
'djangocms_projects',
# plugins
'blippbutton',
'blippvideo',
'djangocms_column',
'djangocms_footer',
'djangocms_office',
'djangocms_contact',
'djangocms_partner',
'djangocms_job',
'djangocms_image',
'djangocms_commons',
'djangocms_casestudies',
'djangocms_presentations_cases',
'djangocms_testimonial',
# Your stuff: custom apps go here
)
# See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS
# END APP CONFIGURATION
</code></pre>
<p>Which I don't know how to solve now. Does anyone have an idea on it?</p>
| 0 |
2016-09-20T15:29:22Z
| 39,598,825 |
<p>You are hitting <a href="https://code.djangoproject.com/ticket/27033" rel="nofollow">this issue</a>. The answer appears to be to upgrade your version of <code>easy_select2</code>.</p>
<p>The issue <a href="https://github.com/asyncee/django-easy-select2/commit/8b9a4f050166" rel="nofollow">is fixed</a> in <code>easy_select2</code> 1.3.2+. Note that the <a href="http://django-easy-select2.readthedocs.io/en/latest/changelog.html" rel="nofollow">changelog</a> says that there are backwards incompatible changes in 1.3.</p>
| 0 |
2016-09-20T16:01:42Z
|
[
"python",
"django"
] |
Code debugging, Raspberry pi relays and if statements
| 39,598,252 |
<p>I've a problem with Raspberry Pi relays. Earlier I wrote about LED's and the problem was similar, but this time these methods don't work. </p>
<pre><code>#!/usr/bin/env python
import sys
import time
import datetime
import RPi.GPIO as GPIO
import SDL_DS1307
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
Rele1 = 17
Rele2 = 27
GPIO.setup(17, GPIO.OUT)
GPIO.setup(27, GPIO.OUT)
filename = time.strftime("%Y-%m-%d%H:%M:%SRTCTest") + ".txt"
starttime = datetime.datetime.utcnow()
ds1307 = SDL_DS1307.SDL_DS1307(1, 0x68)
ds1307.write_now()
while True:
currenttime = datetime.datetime.utcnow()
deltatime = currenttime - starttime
data=time.strftime("%Y"+"%m"+"%d"+"%H"+"%M")
with open('data.txt') as f:
for line in f:
parts=line.split()
if parts[0]<=(data)<=parts[1]:
GPIO.output(Rele1, True)
GPIO.output(Rele2, False)
break
else:
GPIO.output(Rele1, False)
GPIO.output(Rele2, True)
    time.sleep(0.10)
</code></pre>
<p>I have edited the code a little bit. Now, when the <code>if</code> is <code>True</code>, one channel of the relay module (GPIO27) is clicking very fast. I tried changing the GPIOs but the result is the same.</p>
<p>When the <code>if</code> is <code>False</code>, the relay works as it should. If I put <code>break</code> after the <code>else</code>, the code stops checking the .txt file, and if there are more dates, the program doesn't do anything.</p>
| 0 |
2016-09-20T15:34:21Z
| 39,602,724 |
<p>You need to do some work on your logic. Think about what happens when the time is between the values on the third line of data.txt. As your code reads the first line, the result is false, so the outputs are set correspondingly; then on the second line the result is false and the outputs are set for false again; then on the third line the result is true, so the outputs are set for true. Then the file is immediately read again, and the first line's result is false, the second line's false, the third line's true. So your relays will be set to false-false-true each time the file is read, again and again.</p>
<p>You need to make your code only set the outputs when ANY of the tests is true (a logical OR) and not for the result of each test.</p>
<p>You probably need to do something like work out the OR across all the lines of the file then after finishing the file, if the result is true or false set the outputs accordingly.</p>
<p>And as a tip for debugging, if you had put a simple print statement in each branch of the if statement you would have seen on the console how once one of the lines after the first line has a true result the outputs are being toggled around on every read through the file.</p>
<pre><code>while True:
currenttime = datetime.datetime.utcnow()
deltatime = currenttime - starttime
data=time.strftime("%Y"+"%m"+"%d"+"%H"+"%M")
with open('data.txt') as f:
result = False
for line in f:
parts=line.split()
# compare the current time with the part1/part2 from data.txt
if parts[0]<=(data)<=parts[1]:
# The current time is between these, so our overall result is going to be True
result = True
# depending on the result, set the relays appropriately
if result:
print "Result is True"
GPIO.output(Rele1, True)
GPIO.output(Rele2, False)
else:
print "Result is False"
GPIO.output(Rele1, False)
GPIO.output(Rele2, True)
# delay for a second - no point reading faster than the clock changes
time.sleep(1)
</code></pre>
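<p>One more hedged note: the comparison <code>parts[0] <= data <= parts[1]</code> only behaves chronologically because the <code>%Y%m%d%H%M</code> strings are zero-padded and fixed-width. If the file format ever changes, parsing into <code>datetime</code> objects is safer. A minimal sketch (the function name and format constant are my own, not from the original code):</p>

```python
from datetime import datetime

FMT = "%Y%m%d%H%M"  # same layout the original code builds with time.strftime

def in_window(now_str, start_str, end_str):
    # Parse all three strings so the comparison is chronological,
    # not lexicographic.
    now = datetime.strptime(now_str, FMT)
    start = datetime.strptime(start_str, FMT)
    end = datetime.strptime(end_str, FMT)
    return start <= now <= end

print(in_window("201609201200", "201609200000", "201609202359"))  # True
```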
| 0 |
2016-09-20T19:53:53Z
|
[
"python",
"raspberry-pi"
] |
Recursive if-statement python
| 39,598,319 |
<p>Hi, I am new to Python and trying to implement a recursive function which fills up a table, but when running my program I get the following exception:</p>
<blockquote>
<p>unsupported operand type(s) for +: 'NoneType' and 'int'.</p>
</blockquote>
<pre><code>def cost(i, j):
if table[i][j] == None:
v1 = v2 = v3 = v4 = None
if i > 0 and j > 0:
print "case1"
v1 = cost(i-1, j-1) + getSubCostMatrixValue(options[string1[i]], options[string2[j]])
if i > 0 and j >= 0:
print "case2"
v2 = cost(i-1, j) + gapCost
if i >= 0 and j > 0:
print "case3"
v3 = cost(i, j-1) + gapCost
if i == 0 and j == 0:
print "case4"
v4 = 0
print "Max:"
print max(v1,v2,v3,v4)
table[i][j] = max(v1,v2,v3,v4)
return table[i][j]
</code></pre>
<p>The problem occurs in case 2 and case 3, as if the recursive call fails somehow, but I cannot find out why. I feel like it is something obvious. </p>
<p>table is filled with None from the beginning, gapCost is an int, and getSubCostMatrixValue also returns an int.</p>
| -1 |
2016-09-20T15:37:25Z
| 39,598,596 |
<p>Apparently, your <code>cost()</code> function returns <code>None</code> for some cases. If I understand right, this can only happen if i or j are negative. As this only occurs in cases 2 or 3, it seems to me that your i or j are indeed floats which can be larger than 0 but smaller than 1. Could this be the case? If you provide more details on your program, someone might have a more detailed answer.</p>
| 0 |
2016-09-20T15:50:12Z
|
[
"python",
"if-statement",
"recursion"
] |
Calculate mean of array with specific value from another array
| 39,598,371 |
<p>I have these numpy arrays:</p>
<pre><code>array1 = np.array([-1, -1, 1, 1, 2, 1, 2, 2])
array2 = np.array([34.2, 11.2, 22.1, 78.2, 55.0, 66.87, 33.3, 11.56])
</code></pre>
<p>Now I want to return a 2D array in which there is the mean for each distinct value from array1, so my output would look something like this:</p>
<pre><code>array([[-1, 22.7],
[ 1, 55.7],
[ 2, 33.3]])
</code></pre>
<p>Is there an efficient way without concatenating those 1D arrays to one 2D array? Thanks!</p>
| 2 |
2016-09-20T15:39:58Z
| 39,598,529 |
<p>Here's an approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> -</p>
<pre><code># Get unique array1 elems, tag them starting from 0 and get their tag counts
unq,ids,count = np.unique(array1,return_inverse=True,return_counts=True)
# Use the tags/IDs to perform ID based summation of array2 elems and
# thus divide by the ID counts to get ID based average values
out = np.column_stack((unq,np.bincount(ids,array2)/count))
</code></pre>
<p>Sample run -</p>
<pre><code>In [16]: array1 = np.array([-1, -1, 1, 1, 2, 1, 2, 2])
...: array2 = np.array([34.2, 11.2, 22.1, 78.2, 55.0, 66.87, 33.3, 11.56])
...:
In [18]: out
Out[18]:
array([[ -1. , 22.7 ],
[ 1. , 55.72333333],
[ 2. , 33.28666667]])
</code></pre>
| 1 |
2016-09-20T15:47:32Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Calculate mean of array with specific value from another array
| 39,598,371 |
<p>I have these numpy arrays:</p>
<pre><code>array1 = np.array([-1, -1, 1, 1, 2, 1, 2, 2])
array2 = np.array([34.2, 11.2, 22.1, 78.2, 55.0, 66.87, 33.3, 11.56])
</code></pre>
<p>Now I want to return a 2D array in which there is the mean for each distinct value from array1, so my output would look something like this:</p>
<pre><code>array([[-1, 22.7],
[ 1, 55.7],
[ 2, 33.3]])
</code></pre>
<p>Is there an efficient way without concatenating those 1D arrays to one 2D array? Thanks!</p>
| 2 |
2016-09-20T15:39:58Z
| 39,601,232 |
<p>This is a typical grouping operation, and the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author) provides extensions to numpy to perform these types of operations efficiently and concisely:</p>
<pre><code>import numpy_indexed as npi
groups, means = npi.group_by(array_1).mean(array_2)
</code></pre>
<p>Note that you can in this manner easily perform other kinds of reductions as well, such as a median.</p>
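<p>For illustration, the same grouping can be sketched with plain Python dictionaries; this is just to show what the grouped mean computes, not a replacement for the vectorised version:</p>

```python
from collections import defaultdict

array1 = [-1, -1, 1, 1, 2, 1, 2, 2]
array2 = [34.2, 11.2, 22.1, 78.2, 55.0, 66.87, 33.3, 11.56]

groups = defaultdict(list)
for key, val in zip(array1, array2):
    groups[key].append(val)  # collect values per distinct key

means = {k: sum(v) / len(v) for k, v in sorted(groups.items())}
print(means)  # approximately -1 -> 22.7, 1 -> 55.72, 2 -> 33.29
```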
| 1 |
2016-09-20T18:25:03Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Convert Hex Column in Pandas Without Iterating
| 39,598,449 |
<p>I am trying to bin a Pandas dataframe in Python 3 in order to have more efficient grouping over a large dataset. Currently the performance bottleneck is in iterating over the dataframe using the .apply() method.</p>
<p>All entries within the column are in hex, so it seems like the pd.to_numeric function should do exactly what I want.</p>
<p>I've tried a variety of options, but so far nothing has worked.</p>
<pre><code># This sets all values to np.nan with coerced errors, 'Unable to parse string' with raise errors.
dataframe[bin] = pd.to_numeric(dataframe[to_bin], errors='coerce') % __NUM_BINS__
# Gives me "int() Cannot convert non-string with explicit base"
dataframe[bin] = int(dataframe[to_bin].astype(str), 16) % __NUM_BINS__
# Value Error: Invalid literal for int with base 10 'ffffffffff'
dataframe[bin] = dataframe.astype(np.int64) % __NUM_BINS__
</code></pre>
<p>Any suggestions? This seems like something that people would have to have tackled in the past.</p>
| 1 |
2016-09-20T15:43:35Z
| 39,793,260 |
<p>After some assists from the comments above, a faster way to accomplish this is to use a generator function. That way it can deal with any exceptions when provided data that cannot be converted from hex.</p>
<pre><code>def bin_vals(lst):
for item in lst:
try:
yield int(item, 16) % __NUM_BINS__
except:
yield __ERROR_BIN__ #whatever you store weird items in
</code></pre>
<p>Then in your conversion portion you would do the following:</p>
<pre><code>dataframe['binned_value'] = [bin for bin in bin_vals(df['val_to_bin'].tolist())]
</code></pre>
<p>This led to a substantial speedup over iterating through each row. It was also faster than the apply method which I had used originally.</p>
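<p>As a self-contained illustration of the binning logic (plain lists instead of a DataFrame, and hypothetical bin constants), with the bare <code>except</code> narrowed to the two exceptions <code>int(item, 16)</code> can actually raise:</p>

```python
NUM_BINS = 16   # hypothetical bin count
ERROR_BIN = -1  # hypothetical bucket for unparseable values

def bin_hex_values(values):
    for item in values:
        try:
            yield int(item, 16) % NUM_BINS
        except (TypeError, ValueError):
            # TypeError for non-strings, ValueError for non-hex strings
            yield ERROR_BIN

print(list(bin_hex_values(["ff", "0a", "not-hex"])))  # [15, 10, -1]
```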
| 0 |
2016-09-30T13:57:28Z
|
[
"python",
"pandas",
"dataframe"
] |
How to replace an image in WordPress via REST API 2.0?
| 39,598,454 |
<p>I have some images uploaded on a WordPress site that is to be completely REST controlled and I'm having trouble updating them. Altering image data is simple, via <code>POST</code> requests to <code>.../media/<id></code>, but I can find no way to actually replace the image <strong>content</strong> with another file.</p>
<p>Of course, I could delete such an image, <code>POST</code> the new file as new media, and re-<code>POST</code> the old data (name, alt), but this seems ugly, and will mess up my media ID references in posts that used the old one (and should now use the new one).</p>
<p>So, is there a neat way to replace one image's content with another, without deleting media (i.e., without changing media's ID)?</p>
<p>I am using Python for this, but <code>curl</code>'s command line is also fine, easy to translate to Python.</p>
| 1 |
2016-09-20T15:44:02Z
| 39,624,040 |
<p>It seems that this <a href="https://github.com/WP-API/WP-API/issues/2715" rel="nofollow">cannot be done</a> and the media records have to be deleted and recreated.</p>
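<p>For reference, the delete-and-recreate dance can be scripted with <code>requests</code>. This is only a sketch (the site URL is hypothetical and authentication is left to the caller), not an official workaround:</p>

```python
BASE = "https://example.com/wp-json/wp/v2"  # hypothetical WordPress site

def media_endpoint(media_id=None):
    # Build the REST route for the media collection or a single item.
    return f"{BASE}/media/{media_id}" if media_id is not None else f"{BASE}/media"

def replace_image(old_id, path, auth):
    import requests  # deferred import so the URL helper stands alone
    # 1) Upload the replacement file as a brand-new media item.
    with open(path, "rb") as fh:
        resp = requests.post(
            media_endpoint(), auth=auth, data=fh,
            headers={"Content-Disposition": 'attachment; filename="%s"' % path},
        )
    resp.raise_for_status()
    new_id = resp.json()["id"]
    # 2) Delete the old item; force=True bypasses the trash.
    requests.delete(media_endpoint(old_id), auth=auth,
                    params={"force": True}).raise_for_status()
    return new_id
```

<p>Any posts referencing the old media ID still have to be re-pointed at the new one, which is exactly the ugliness the question complains about.</p>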
| 0 |
2016-09-21T18:30:47Z
|
[
"python",
"wordpress",
"rest",
"api",
"curl"
] |
Alternating row color using xlsxwriter in Python 3
| 39,598,568 |
<p>Has anybody implemented alternating row colors while generating an Excel file using xlsxwriter in Python 3?</p>
<pre><code>data_format = workbook.add_format(
{
'bg_color': '#FFC7CE'
})
worksheet.write(data_row, data_col + 1, row[1], data_format)
</code></pre>
<p>This sets the color for each column.</p>
| 0 |
2016-09-20T15:48:57Z
| 39,599,296 |
<p>There is nothing stopping you from setting the formats manually as follows:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()
data_format1 = workbook.add_format({'bg_color': '#FFC7CE'})
data_format2 = workbook.add_format({'bg_color': '#00C7CE'})
for row in range(0, 10, 2):
worksheet.set_row(row, cell_format=data_format1)
worksheet.set_row(row + 1, cell_format=data_format2)
worksheet.write(row, 0, "Hello")
worksheet.write(row + 1, 0, "world")
workbook.close()
</code></pre>
<p>This would give you output looking as follows:</p>
<p><a href="http://i.stack.imgur.com/qCjSM.png" rel="nofollow"><img src="http://i.stack.imgur.com/qCjSM.png" alt="Alternating row colours"></a></p>
| 2 |
2016-09-20T16:25:59Z
|
[
"python",
"python-3.x",
"xlsxwriter"
] |
Pandas Filter on date for quarterly ends
| 39,598,618 |
<p>In the index column I have a list of dates:</p>
<pre><code>DatetimeIndex(['2010-12-31', '2011-01-02', '2011-01-03', '2011-01-29',
'2011-02-26', '2011-02-28', '2011-03-26', '2011-03-31',
'2011-04-01', '2011-04-03',
...
'2016-02-27', '2016-02-29', '2016-03-26', '2016-03-31',
'2016-04-01', '2016-04-03', '2016-04-30', '2016-05-31',
'2016-06-30', '2016-07-02'],
dtype='datetime64[ns]', length=123, freq=None)
</code></pre>
<p>However, I want to filter out all those whose month and day equal 12/31, 3/31, 6/30, or 9/30, to get the value at the end of each quarter. </p>
<p>Is there a good way of going about this?</p>
| 3 |
2016-09-20T15:51:03Z
| 39,598,726 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.is_quarter_end.html#pandas.Series.dt.is_quarter_end" rel="nofollow"><code>is_quarter_end</code></a> to filter the row labels:</p>
<pre><code>In [151]:
df = pd.DataFrame(np.random.randn(400,1), index= pd.date_range(start=dt.datetime(2016,1,1), periods=400))
df.loc[df.index.is_quarter_end]
Out[151]:
0
2016-03-31 -0.474125
2016-06-30 0.931780
2016-09-30 -0.281271
2016-12-31 0.325521
</code></pre>
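<p>Note that <code>is_quarter_end</code> checks exactly the month/day set the question lists. As a quick stdlib sanity check of that month/day test (no pandas involved, helper name my own):</p>

```python
from datetime import date

QUARTER_ENDS = {(3, 31), (6, 30), (9, 30), (12, 31)}

def is_quarter_end(d):
    # True for 3/31, 6/30, 9/30 and 12/31 of any year
    return (d.month, d.day) in QUARTER_ENDS

dates = [date(2011, 3, 31), date(2011, 4, 1), date(2016, 6, 30)]
print([d.isoformat() for d in dates if is_quarter_end(d)])  # ['2011-03-31', '2016-06-30']
```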
| 3 |
2016-09-20T15:56:15Z
|
[
"python",
"datetime",
"pandas"
] |
Django Db routing
| 39,598,666 |
<p>I am trying to run my Django application with two DBs (1 master, 1 read replica). My problem is that if I try to read right after a write, the code explodes. For example:</p>
<ol>
<li><code>p = Product.objects.create()</code></li>
<li><code>Product.objects.get(id=p.id)</code></li>
</ol>
<p>OR</p>
<ol start="2">
<li>The user is redirected to the Product's details page</li>
</ol>
<p>The code runs way faster than the read replica syncs, and if the read operation uses the replica, the code crashes because the replica hasn't updated in time. </p>
<p>Is there any way to avoid this? For example, could the DB to read from be chosen per request instead of per operation?</p>
<p>My Router is identical to Django's documentation:</p>
<pre><code>import random
class PrimaryReplicaRouter(object):
def db_for_read(self, model, **hints):
"""
Reads go to a randomly-chosen replica.
"""
return random.choice(['replica1', 'replica2'])
def db_for_write(self, model, **hints):
"""
Writes always go to primary.
"""
return 'primary'
def allow_relation(self, obj1, obj2, **hints):
"""
Relations between objects are allowed if both objects are
in the primary/replica pool.
"""
db_list = ('primary', 'replica1', 'replica2')
if obj1._state.db in db_list and obj2._state.db in db_list:
return True
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
All non-auth models end up in this pool.
"""
return True
</code></pre>
| 7 |
2016-09-20T15:53:07Z
| 39,661,311 |
<p>Depending on the size of the data and the application I'd tackle this with either of the following methods:</p>
<ol>
<li>Database pinning:</li>
</ol>
<p>Extend your database router to allow pinning functions to specific databases. For example:</p>
<pre><code>from customrouter.pinning import use_master
@use_master
def save_and_fetch_foo():
...
</code></pre>
<p>A good example of that can be seen in <a href="https://github.com/jbalogh/django-multidb-router/blob/master/multidb/pinning.py" rel="nofollow">django-multidb-router</a>.
Of course, you could just use this package as well.</p>
<ol start="2">
<li><p>Use a <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#using-managers-with-multiple-databases" rel="nofollow">model manager to route queries to specific databases</a>. </p>
<pre><code>class MyManager(models.Manager):
def get_queryset(self):
qs = CustomQuerySet(self.model)
if self._db is not None:
qs = qs.using(self._db)
return qs
</code></pre></li>
<li><p>Write a middleware that would route your requests to master/slave automatically.
Basically the same as the pinning method, but you wouldn't need to specify when to run <code>GET</code> requests against the master.</p></li>
</ol>
| 1 |
2016-09-23T12:54:16Z
|
[
"python",
"django"
] |
Django Db routing
| 39,598,666 |
<p>I am trying to run my Django application with two DBs (1 master, 1 read replica). My problem is that if I try to read right after a write, the code explodes. For example:</p>
<ol>
<li><code>p = Product.objects.create()</code></li>
<li><code>Product.objects.get(id=p.id)</code></li>
</ol>
<p>OR</p>
<ol start="2">
<li>The user is redirected to the Product's details page</li>
</ol>
<p>The code runs way faster than the read replica syncs, and if the read operation uses the replica, the code crashes because the replica hasn't updated in time. </p>
<p>Is there any way to avoid this? For example, could the DB to read from be chosen per request instead of per operation?</p>
<p>My Router is identical to Django's documentation:</p>
<pre><code>import random
class PrimaryReplicaRouter(object):
def db_for_read(self, model, **hints):
"""
Reads go to a randomly-chosen replica.
"""
return random.choice(['replica1', 'replica2'])
def db_for_write(self, model, **hints):
"""
Writes always go to primary.
"""
return 'primary'
def allow_relation(self, obj1, obj2, **hints):
"""
Relations between objects are allowed if both objects are
in the primary/replica pool.
"""
db_list = ('primary', 'replica1', 'replica2')
if obj1._state.db in db_list and obj2._state.db in db_list:
return True
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
All non-auth models end up in this pool.
"""
return True
</code></pre>
| 7 |
2016-09-20T15:53:07Z
| 39,698,943 |
<p>Solved it with: </p>
<pre><code>class Model(models.Model):
    objects = models.Manager()    # objects only accesses the master
    sobjects = ReplicasManager()  # sobjects accesses either the master or a replica

    class Meta:
        abstract = True           # so Django doesn't create a table
</code></pre>
<p>Make every model extend this one instead of models.Model, then use objects or sobjects depending on whether I want to access only the master, or either the master or a replica.</p>
| 1 |
2016-09-26T09:17:13Z
|
[
"python",
"django"
] |
Django Db routing
| 39,598,666 |
<p>I am trying to run my Django application with two DBs (1 master, 1 read replica). My problem is that if I try to read right after a write, the code explodes. For example:</p>
<ol>
<li><code>p = Product.objects.create()</code></li>
<li><code>Product.objects.get(id=p.id)</code></li>
</ol>
<p>OR</p>
<ol start="2">
<li>The user is redirected to the Product's details page</li>
</ol>
<p>The code runs way faster than the read replica syncs, and if the read operation uses the replica, the code crashes because the replica hasn't updated in time. </p>
<p>Is there any way to avoid this? For example, could the DB to read from be chosen per request instead of per operation?</p>
<p>My Router is identical to Django's documentation:</p>
<pre><code>import random
class PrimaryReplicaRouter(object):
def db_for_read(self, model, **hints):
"""
Reads go to a randomly-chosen replica.
"""
return random.choice(['replica1', 'replica2'])
def db_for_write(self, model, **hints):
"""
Writes always go to primary.
"""
return 'primary'
def allow_relation(self, obj1, obj2, **hints):
"""
Relations between objects are allowed if both objects are
in the primary/replica pool.
"""
db_list = ('primary', 'replica1', 'replica2')
if obj1._state.db in db_list and obj2._state.db in db_list:
return True
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
All non-auth models end up in this pool.
"""
return True
</code></pre>
| 7 |
2016-09-20T15:53:07Z
| 39,764,116 |
<p>In a master/replica configuration, new data takes a few milliseconds to replicate to all the other replica servers/databases,</p>
<p>so whenever you try to read right after a write, it won't give you the correct result.</p>
<p>Instead of reading from a replica, you can read from the master immediately after a write by adding the <code>using('primary')</code> keyword to your get query.</p>
| 0 |
2016-09-29T07:33:45Z
|
[
"python",
"django"
] |
UnboundLocalError: local variable 'k' referenced before assignment
| 39,598,720 |
<p>I have read <a href="https://stackoverflow.com/questions/17097273/unboundlocalerror-local-variable-referenced-before-assignment">StackQ1</a> and <a href="https://stackoverflow.com/questions/20873285/unboundlocalerror-local-variable-input-referenced-before-assignment">stackQ2</a>, but I was unable to solve my error.
The program given below raises</p>
<blockquote>
<p>UnboundLocalError: local variable 'k' referenced before assignment</p>
</blockquote>
<p>I already made the variable <code>k</code> <code>global</code>, but it is not working.</p>
<pre><code>class myClass:
global k
k=0
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>I also tried: </p>
<pre><code>k=0
class myClass:
# global k
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>But it is not working.</p>
| 1 |
2016-09-20T15:55:51Z
| 39,598,974 |
<pre><code>class My_Class(object):
def __init__(self, **kwargs):
super(My_Class, self).__init__()
self.k = "hello"
def data(self, *args):
print "call data"
return self.k
my_class = My_Class()
print my_class.data()
</code></pre>
| 0 |
2016-09-20T16:08:38Z
|
[
"python"
] |
UnboundLocalError: local variable 'k' referenced before assignment
| 39,598,720 |
<p>I have read <a href="https://stackoverflow.com/questions/17097273/unboundlocalerror-local-variable-referenced-before-assignment">StackQ1</a> and <a href="https://stackoverflow.com/questions/20873285/unboundlocalerror-local-variable-input-referenced-before-assignment">stackQ2</a>, but I was unable to solve my error.
The program given below raises</p>
<blockquote>
<p>UnboundLocalError: local variable 'k' referenced before assignment</p>
</blockquote>
<p>I already made the variable <code>k</code> <code>global</code>, but it is not working.</p>
<pre><code>class myClass:
global k
k=0
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>I also tried: </p>
<pre><code>k=0
class myClass:
# global k
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>But it is not working.</p>
| 1 |
2016-09-20T15:55:51Z
| 39,599,055 |
<p>The problem is resolved; I am posting the solution for newbies.</p>
<pre><code>class myClass:
k=0
def data(self):
def data2(k):
for j in range(5):
self.k=self.k+1
return self.k
for i in range(5):
self.k=self.k+1
data2(self.k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>Credit goes to @Moses Koledoye and @Abhishek.</p>
| 0 |
2016-09-20T16:13:23Z
|
[
"python"
] |
UnboundLocalError: local variable 'k' referenced before assignment
| 39,598,720 |
<p>I have read <a href="https://stackoverflow.com/questions/17097273/unboundlocalerror-local-variable-referenced-before-assignment">StackQ1</a> and <a href="https://stackoverflow.com/questions/20873285/unboundlocalerror-local-variable-input-referenced-before-assignment">stackQ2</a>, but I was unable to solve my error.
The program given below raises</p>
<blockquote>
<p>UnboundLocalError: local variable 'k' referenced before assignment</p>
</blockquote>
<p>I already made the variable <code>k</code> <code>global</code>, but it is not working.</p>
<pre><code>class myClass:
global k
k=0
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>I also tried: </p>
<pre><code>k=0
class myClass:
# global k
def data(self):
def data2(k):
for j in range(5):
k=k+1
return k
for i in range(5):
k=k+1
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<p>But it is not working.</p>
| 1 |
2016-09-20T15:55:51Z
| 39,599,305 |
<p>For my explanation, I reindent and annotate your code:</p>
<pre><code>class myClass: # <- [1]
global k # <- [2]
k = 0 # <- [3]
def data(self):
def data2(k): # <- [4]
for j in range(5):
k = k + 1
return k # <- [5]
for i in range(5):
k = k + 1 # <- [6]
data2(k)
Obj = myClass()
print(Obj.data())
</code></pre>
<h2>Code Review</h2>
<p>[1] Class names should use CamelCase according to the <a href="https://www.python.org/dev/peps/pep-0008/#naming-conventions" rel="nofollow">PEP8</a> recommendation. Note that your class should inherit from <code>object</code> (in Python 2.7) to use a (modern) new-style class.</p>
<p>[2] The global variable <code>k</code> is not defined at module level. You can't declare a variable in Python; you must define it. Since <code>k</code> looks like an integer, you can write:</p>
<pre><code>k = 0
</code></pre>
<p>[3] Here, you define a class variable.</p>
<p>Python uses <a href="http://www.rafekettler.com/magicmethods.html#construction" rel="nofollow">magic methods</a>
to construct and initialize a class instance.
You should use the <code>__init__</code> method to initialise your class, as below:</p>
<pre><code>class MyClass(object):
def __init__(self):
self.k = 0
</code></pre>
<p>[4] Here, you define a function inside the <code>data</code> method. What do you want to do? A new method?</p>
<pre><code>class MyClass(object):
def data(self):
def data2(k): # <- [4]
pass
</code></pre>
<p>[5] I think you return prematurely: the loop ends at the first iteration. Mind your indentation:</p>
<pre><code>def data2(k):
for j in range(5):
k = k + 1
return k # <- [5]
</code></pre>
<p>[6] The local variable <code>k</code> is not defined in the <code>data</code> function.
If you want to use the module-level variable (and modify it), you must use the <code>global</code> keyword.</p>
<pre><code>class MyClass(object):
def data(self):
global k
for i in range(5):
k = k + 1 # <- [6]
data2(k)
</code></pre>
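<p>Putting the review points together, a minimal corrected version could look like this (keeping the module-level counter the original code seemed to want):</p>

```python
k = 0  # module-level counter, defined before use ([2])

class MyClass(object):  # CamelCase name, new-style class ([1])

    def data(self):
        global k  # we modify the module-level variable ([6])
        for _ in range(5):
            k += 1
        return k  # return after the loop finishes ([5])

print(MyClass().data())  # 5 on the first call
```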
<p>Let's start with a <a href="https://docs.python.org/2/tutorial/classes.html" rel="nofollow">tutorial</a>.</p>
| 0 |
2016-09-20T16:26:26Z
|
[
"python"
] |
Convert KNN train from Opencv 3 to 2
| 39,598,724 |
<p>I am reading a tutorial for training KNN using Opencv. The code is written for Opencv 3 but I need to use it in Opencv 2. The original training is:</p>
<pre><code>cv2.ml.KNearest_create().train(npaFlattenedImages, cv2.ml.ROW_SAMPLE, npaClassifications)
</code></pre>
<p>I tried using this:</p>
<pre><code>cv2.KNearest().train(npaFlattenedImages, cv2.CV_ROW_SAMPLE, npaClassifications)
</code></pre>
<p>but the error is:</p>
<p><code>Unsupported index array data type (it should be 8uC1, 8sC1 or 32sC1) in function cvPreprocessIndexArray</code></p>
<p>The full code is here:
<a href="https://github.com/MicrocontrollersAndMore/OpenCV_3_KNN_Character_Recognition_Python/blob/master/TrainAndTest.py" rel="nofollow">https://github.com/MicrocontrollersAndMore/OpenCV_3_KNN_Character_Recognition_Python/blob/master/TrainAndTest.py</a></p>
| 4 |
2016-09-20T15:56:14Z
| 39,684,278 |
<p>Here are the changes that appear to have made the <a href="https://github.com/MicrocontrollersAndMore/OpenCV_3_KNN_Character_Recognition_Python/blob/master/TrainAndTest.py" rel="nofollow">full code</a> work for me for OpenCV 2.4.13:</p>
<pre><code>60c60
< kNearest = cv2.ml.KNearest_create() # instantiate KNN object
---
> kNearest = cv2.KNearest() # instantiate KNN object
62c62
< kNearest.train(npaFlattenedImages, cv2.ml.ROW_SAMPLE, npaClassifications)
---
> kNearest.train(npaFlattenedImages, npaClassifications)
85c85
< imgContours, npaContours, npaHierarchy = cv2.findContours(imgThreshCopy, # input image, make sure to use a copy since the function will modify this image in the course of finding contours
---
> npaContours, npaHierarchy = cv2.findContours(imgThreshCopy, # input image, make sure to use a copy since the function will modify this image in the course of finding contours
125c125
< retval, npaResults, neigh_resp, dists = kNearest.findNearest(npaROIResized, k = 1) # call KNN function find_nearest
---
> retval, npaResults, neigh_resp, dists = kNearest.find_nearest(npaROIResized, k = 1) # call KNN function find_nearest
</code></pre>
| 0 |
2016-09-25T06:59:58Z
|
[
"python",
"python-2.7",
"opencv",
"opencv3.0"
] |
Convert KNN train from Opencv 3 to 2
| 39,598,724 |
<p>I am reading a tutorial for training KNN using Opencv. The code is written for Opencv 3 but I need to use it in Opencv 2. The original training is:</p>
<pre><code>cv2.ml.KNearest_create().train(npaFlattenedImages, cv2.ml.ROW_SAMPLE, npaClassifications)
</code></pre>
<p>I tried using this:</p>
<pre><code>cv2.KNearest().train(npaFlattenedImages, cv2.CV_ROW_SAMPLE, npaClassifications)
</code></pre>
<p>but the error is:</p>
<p><code>Unsupported index array data type (it should be 8uC1, 8sC1 or 32sC1) in function cvPreprocessIndexArray</code></p>
<p>The full code is here:
<a href="https://github.com/MicrocontrollersAndMore/OpenCV_3_KNN_Character_Recognition_Python/blob/master/TrainAndTest.py" rel="nofollow">https://github.com/MicrocontrollersAndMore/OpenCV_3_KNN_Character_Recognition_Python/blob/master/TrainAndTest.py</a></p>
| 4 |
2016-09-20T15:56:14Z
| 39,684,564 |
<ul>
<li>Unlike the generic <a href="http://docs.opencv.org/2.4/modules/ml/doc/statistical_models.html#bool%20CvStatModel::train%28const%20Mat&%20train_data,%20[int%20tflag,]%20...,%20const%20Mat&%20responses,%20...,%20[const%20Mat&%20var_idx,]%20...,%20[const%20Mat&%20sample_idx,]%20...%20[const%20Mat&%20var_type,]%20...,%20[const%20Mat&%20missing_mask,]%20%3Cmisc_training_alg_params%3E%20...%29" rel="nofollow"><code>CvStatModel::train()</code></a>, <a href="http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html?highlight=cv_row_sample#cv2.KNearest.train" rel="nofollow"><code>cv2.KNearest.train()</code></a> doesn't have the 2nd optional argument <code>int tflag</code>, and the docs say: "Only <code>CV_ROW_SAMPLE</code> data layout is supported".
<ul>
<li>The error message (btw the cryptic mnemonics are <a href="http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#datatype" rel="nofollow">OpenCV data types</a>) was thus caused by the function trying to use <code>npaClassifications</code> as the next argument, <code>sampleIdx</code>.</li>
</ul></li>
</ul>
<p>Further errors after fixing this:</p>
<ul>
<li><p><a href="http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#cv2.findContours" rel="nofollow"><code>cv2.findContours()</code></a> only returns 2 values: <code>contours, hierarchy</code> (you don't need the 3rd one, <code>imgContours</code>, anyway).</p></li>
<li><p><code>KNearest.findNearest()</code> was <a href="http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html?highlight=find%20nearest#cv2.KNearest.find_nearest" rel="nofollow"><code>KNearest.find_nearest()</code></a>.</p></li>
</ul>
<p>And the result now:</p>
<p><a href="http://i.stack.imgur.com/I2hsI.png" rel="nofollow"><img src="http://i.stack.imgur.com/I2hsI.png" alt="result"></a></p>
<p><a href="http://stackoverflow.com/a/39684278/648265">Ulrich Stern already did me the favor of providing a raw diff</a>.</p>
| 0 |
2016-09-25T07:49:41Z
|
[
"python",
"python-2.7",
"opencv",
"opencv3.0"
] |
Using gdb's Python to backtrace different OS threads, when gdb is not OS-aware
| 39,598,893 |
<p>I am still learning about debugging C using python within gdb (arm-none-eabi-gdb, in my case). I am trying to use this facility to get thread information of a real-time OS running on ARM Cortex-M. Reading some OS structures, I can access <em>the</em> thread control blocks of the OS. I know the PC and the SP of each thread. How can I use gdb's Python to dump the threads' backtrace. Is there a generic API that can traverse the stack when given PC and SP?</p>
<p>I have read <a href="https://sourceware.org/gdb/current/onlinedocs/gdb/Unwinding-Frames-in-Python.html#Unwinding-Frames-in-Python" rel="nofollow">https://sourceware.org/gdb/current/onlinedocs/gdb/Unwinding-Frames-in-Python.html#Unwinding-Frames-in-Python</a> and I feel there might be a way to achieve that but I need some help.</p>
<p>Also, if possible, can I make gdb aware of the different threads of the OS? This link:
<a href="https://sourceware.org/gdb/current/onlinedocs/gdb/Threads-In-Python.html#Threads-In-Python" rel="nofollow">https://sourceware.org/gdb/current/onlinedocs/gdb/Threads-In-Python.html#Threads-In-Python</a> touches on threads but relies on OS info. Can these be overloaded with what I know about the different OS threads from their respective control blocks?</p>
<p>Thanks!</p>
| 0 |
2016-09-20T16:05:06Z
| 39,780,899 |
<p>After some more reading and trying to make use of old debugger knowledge that I have accumulated over the years, I managed to get this working. It lacks optimization, but for now I'm very pleased. This can be considered a poor-man's debugger making use of GDB's Python support to track the active threads in the system. It's generic, I assume, but the implementation targeted RTX (Keil's OS). It worked on Cortex-M0. It may need some tweaking to fit other operating systems or different cores.</p>
<p><strong>The main idea:</strong></p>
<ol>
<li>Use OS structures, to identify where the thread control block reside.</li>
<li>From the thread control block, identify where the different thread stacks are.</li>
<li>Read the vital registers from the stack: SP, LR, and PC.</li>
<li>Save the same registers for the current, running thread.</li>
<li>Loop over the different threads, change the vital registers to the ones matching each thread, then print the backtrace.</li>
<li>Enjoy a poor-man's OS-aware debugger.</li>
</ol>
<p>The script can be found here:</p>
<p><a href="https://gitlab.com/hesham/gdb-rtx-thread-backtrce/blob/master/rtx-threads-bt.py" rel="nofollow">https://gitlab.com/hesham/gdb-rtx-thread-backtrce/blob/master/rtx-threads-bt.py</a></p>
<p>It was a very good exercise to explore the power of GDB's Python extension!</p>
| 0 |
2016-09-29T22:19:36Z
|
[
"python",
"c",
"debugging",
"operating-system",
"gdb"
] |
Why Pylint says print('foo', end='') is an invalid syntax?
| 39,598,921 |
<p>This very simple code:</p>
<pre><code>#!/usr/bin/python3
print('foo', end='')
</code></pre>
<p>Makes Pylint unhappy (both on Python2 and Python3):</p>
<pre><code>pylint ./pylint.py
No config file found, using default configuration
************* Module pylint
E: 2, 0: invalid syntax (syntax-error)
</code></pre>
<p>Why?</p>
| 0 |
2016-09-20T16:06:02Z
| 39,600,882 |
<p>I got this error when running pylint. But my pylint only had support for python2. So it errored:</p>
<pre><code>$ pylint foo.py
No config file found, using default configuration
************* Module foo
E: 2, 0: invalid syntax (syntax-error)
</code></pre>
<p>So I did <code>pip3 install pylint</code>.</p>
<p>And then it all worked (or at least it got past the syntax error):</p>
<pre><code>$ python3 -m pylint foo.py | head
No config file found, using default configuration
************* Module foo
C: 1, 0: Black listed name "foo" (blacklisted-name)
C: 1, 0: Missing module docstring (missing-docstring)
.....
</code></pre>
<p>See here for more info on pylint for python2 and 3 in one system: <a href="http://askubuntu.com/questions/340940/installing-pylint-for-python3-on-ubuntu">http://askubuntu.com/questions/340940/installing-pylint-for-python3-on-ubuntu</a></p>
| 4 |
2016-09-20T18:04:49Z
|
[
"python",
"pylint"
] |
Theano fails to be imported with theano configuration cnmem = 1
| 39,599,014 |
<p>Theano fails to be imported with theano configuration cnmem = 1</p>
<p>Any idea how to make sure the GPU is totally allocated to the theano python script?</p>
<blockquote>
<p><strong>Note:</strong> Display is not used to avoid its GPU usage</p>
</blockquote>
<p><strong>File: .theanorc</strong></p>
<pre><code>cnmem = 1
</code></pre>
<p><strong>File: test.py</strong></p>
<pre><code>print 'Importing Theano Library ...'
import theano
print 'Imported'
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>$ python test.py
Importing Theano Library ...
Killed
$
</code></pre>
<p>It only works with cnmem = 0.75</p>
<p><strong>File: .theanorc</strong></p>
<pre><code>cnmem = 0.75
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>$ python test.py
Importing Theano Library ...
Imported
$
</code></pre>
| 2 |
2016-09-20T16:10:58Z
| 39,602,112 |
<p><a href="https://github.com/Theano/Theano/issues/4302#issuecomment-202067917" rel="nofollow">https://github.com/Theano/Theano/issues/4302#issuecomment-202067917</a></p>
<blockquote>
<p>Could you try with 1.0 instead of 1? according to the docs, it needs
to be a float. Also, it is limited to 0.95 to allow space for device
drivers. So, you can't use the entire GPU memory just like you can't
use all of RAM.</p>
</blockquote>
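<p>Per that comment, a float value strictly below 1.0 (the limit is 0.95) should let the import succeed. A sketch of the config file, assuming <code>cnmem</code> sits under the <code>[lib]</code> section as in the Theano documentation:</p>

```ini
[lib]
cnmem = 0.95
```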
| 1 |
2016-09-20T19:15:12Z
|
[
"python",
"neural-network",
"deep-learning",
"theano",
"theano-cuda"
] |
Run multiple Django application tests with one command
| 39,599,054 |
<p>I have a Django project with multiple apps, <code>app_1</code>, <code>app_2</code>, and <code>app_3</code>. Currently, in order to run the test suite, I use the standard testing command: <code>python manage.py test</code>. This runs the tests for all of my apps. I am also able to run tests for a single app, specifically -- for example, <code>python manage.py test app_1/</code>. However, I'd like to be able to run all of the tests in some, but not all, of my apps. For example, I'd run <code>python manage.py test main_tests</code> and just have the tests in <code>app_1</code> and <code>app_2</code> run. Is there a way to do this? </p>
<p>I saw that <a href="http://stackoverflow.com/questions/6510680/how-do-i-group-unit-tests-in-django-at-a-higher-granularity-than-the-app">this</a> question specified how to run just a few tests within a single app, but not how to run just a few apps' tests within a project.</p>
| 0 |
2016-09-20T16:13:21Z
| 39,599,322 |
<p>You can supply multiple labels to the <code>test</code> command:</p>
<pre><code>python manage.py test app_1 app_2
</code></pre>
<p>This will run all tests in <code>app_1</code> and <code>app_2</code>. </p>
| 1 |
2016-09-20T16:26:58Z
|
[
"python",
"django",
"unit-testing"
] |
Writing multiple lists to CSV rows in python
| 39,599,085 |
<p>I am writing a program that extracts the history from the Google Chrome history database and outputs this to a CSV file. I am trying to put the information in multiple rows, for example a list of URLs in the first row and the webpage titles in the second row. However, when I do this, I receive the following error:</p>
<p>TypeError: decoding Unicode is not supported</p>
<p>Any help would be appreciated, below is my code:</p>
<pre><code>import sqlite3
import datetime
import csv
def urls():
conn = sqlite3.connect('C:\Users\username\Desktop\History.sql')
cursor = conn.execute("SELECT url, title, visit_count, last_visit_time from urls")
timestamp = row[3]
value = datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=timestamp)
with open("C:\Users\username\Desktop\\historyulrs.csv", "ab") as filecsv:
filecsvwriter = csv.writer(filecsv)
filecsvwriter.writerow(["Url", "Title", "Visit Count", "Last visit Time"])
for row in cursor:
with open("C:\Users\username\Desktop\\historyulrs.csv", "ab") as filecsv:
filecsvwriter = csv.writer(filecsv)
filecsvwriter.writerows([unicode(row[0], row[1]).encode("utf-8")])
conn.close()
urls()
</code></pre>
<p>I also retrieve the visit count and last visit time from the database to add to the CSV; however, I haven't implemented this yet. </p>
<p>Thanks</p>
| 0 |
2016-09-20T16:14:56Z
| 39,599,372 |
<p>Using Pandas can help you a lot with CSV files:</p>
<pre><code>import sqlite3
import datetime
import pandas
def urls():
urls = []
titles = []
counts = []
last = []
conn = sqlite3.connect('C:\Users\username\Desktop\History.sql')
cursor = conn.execute("SELECT url, title, visit_count, last_visit_time from urls")
for row in cursor:
#now I am just guessing
urls.append(row[0])
titles.append(row[1])
counts.append(row[2])
last.append(row[3])
df = pandas.DataFrame({'URL': urls,
'Title': titles,
'Visit Count': counts,
'Last visit Time': last})
df.to_csv('historyulrs.csv', encoding='utf-8', index=False)
conn.close()
urls()
</code></pre>
<p>Be aware that I completely guessed the order of data in a row and you would need to edit that according to your needs. Also, I didn't quite catch why you need <code>datetime</code>.</p>
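<p>As a side note, pandas can read straight from the SQLite connection with <code>read_sql_query</code>, which skips building the four lists by hand. A minimal sketch, using an in-memory table that stands in for Chrome's <code>urls</code> table (the column names are taken from the question's query):</p>

```python
import sqlite3

import pandas as pd

# Tiny in-memory table standing in for Chrome's "urls" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (url TEXT, title TEXT, "
             "visit_count INTEGER, last_visit_time INTEGER)")
conn.execute("INSERT INTO urls VALUES "
             "('http://example.com', 'Example', 3, 13100000000000000)")
conn.commit()

# read_sql_query returns a DataFrame whose columns match the SELECT list.
df = pd.read_sql_query(
    "SELECT url, title, visit_count, last_visit_time FROM urls", conn)
df.to_csv("historyurls.csv", index=False, encoding="utf-8")
conn.close()
```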
| 1 |
2016-09-20T16:29:31Z
|
[
"python",
"sql",
"list",
"csv",
"unicode"
] |
Writing multiple lists to CSV rows in python
| 39,599,085 |
<p>I am writing a program that extracts the history from the Google Chrome history database and outputs this to a CSV file. I am trying to put the information in multiple rows, for example a list of URLs in the first row and the webpage titles in the second row. However, when I do this, I receive the following error:</p>
<p>TypeError: decoding Unicode is not supported</p>
<p>Any help would be appreciated, below is my code:</p>
<pre><code>import sqlite3
import datetime
import csv
def urls():
conn = sqlite3.connect('C:\Users\username\Desktop\History.sql')
cursor = conn.execute("SELECT url, title, visit_count, last_visit_time from urls")
timestamp = row[3]
value = datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=timestamp)
with open("C:\Users\username\Desktop\\historyulrs.csv", "ab") as filecsv:
filecsvwriter = csv.writer(filecsv)
filecsvwriter.writerow(["Url", "Title", "Visit Count", "Last visit Time"])
for row in cursor:
with open("C:\Users\username\Desktop\\historyulrs.csv", "ab") as filecsv:
filecsvwriter = csv.writer(filecsv)
filecsvwriter.writerows([unicode(row[0], row[1]).encode("utf-8")])
conn.close()
urls()
</code></pre>
<p>I also retrieve the visit count and last visit time from the database to add to the CSV; however, I haven't implemented this yet. </p>
<p>Thanks</p>
| 0 |
2016-09-20T16:14:56Z
| 39,599,608 |
<p>This is not easy to answer without seeing the DB. But something like this should work, potentially with a few small modifications depending on your actual data. </p>
<pre><code>import sqlite3
import datetime
import csv
def urls():
conn = sqlite3.connect('C:\Users\username\Desktop\History.sql')
c = conn.cursor()
query = "SELECT url, title FROM urls"
c.execute(query)
data = c.fetchall()
if data:
with open("C:\Users\username\Desktop\\historyulrs.csv", 'w') as outfile:
writer = csv.writer(outfile)
writer.writerow(['URL', 'Title'])
for entry in data:
writer.writerow([str(entry[0]), str(entry[1])])
</code></pre>
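<p>For what it's worth, the original <code>TypeError</code> comes from <code>unicode(row[0], row[1])</code>: the second argument of <code>unicode()</code> is an encoding name, not a second string to decode. In Python 3 the <code>csv</code> module writes Unicode text directly, so no <code>unicode()</code>/<code>encode()</code> juggling is needed. A small stdlib-only sketch with made-up rows:</p>

```python
import csv
import io

# Hypothetical (url, title) rows, including non-ASCII titles.
rows = [("http://example.com", "Exämple"), ("http://test.org", "Tést")]

buf = io.StringIO()  # stands in for open("history.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerow(["URL", "Title"])
writer.writerows(rows)  # each tuple becomes one CSV row

output = buf.getvalue()
```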
| 1 |
2016-09-20T16:44:35Z
|
[
"python",
"sql",
"list",
"csv",
"unicode"
] |
I want to add values while a recursive loop unfolds
| 39,599,102 |
<p>This is a bottom-up approach to check whether the tree is an AVL tree. Here is how this code works:</p>
<p>Suppose this is a tree :</p>
<pre><code> 8
3 10
2
1
</code></pre>
<p>The leaf node is checked to confirm it is a leaf node (here 1). The recursion then unfolds one level, so the node with data 2 becomes the current node. The value of cl = 1 while it compares the right subtree. The right branch of 2 is empty, i.e. it has no children, so avl_compare will get (1, 0), which is allowed.</p>
<p>After this I want to add one to cl so that when the node with data 3 is the current node, the value of cl = 2. avl_check is an assignment question. I have done this on my own, but I need some help here playing with recursive functions.</p>
<pre><code>def avl_check(self):
cl = cr = 0
if(self.left):
self.left.avl_check()
cl+=1
if(self.right):
self.right.avl_check()
cr += 1
if(not self.avl_compare(cl,cr)):
print("here")
</code></pre>
| 1 |
2016-09-20T16:15:39Z
| 39,600,226 |
<p>Your immediate problem is that you don't seem to understand local and global variables. <strong>cl</strong> and <strong>cr</strong> are local variables; with the given control flow, the only values they can ever have are 0 and 1. Remember that each instance of the routine gets a new set of local variables: you set them to 0, perhaps increment to 1, and then you return. This does <em>not</em> affect the values of the variables in other instances of the function.</p>
<p>A deeper problem is that you haven't thought this through for larger trees. Assume that you <em>do</em> learn to use global variables and correct these increments. Take your current tree, insert nodes 4, 9, 10, and 11 (nicely balanced). Walk through your algorithm, tracing the values of <strong>cl</strong> and <strong>cr</strong>. By the time you get to node 10, <strong>cl</strong> is disturbingly more than the tree depth -- I think this is a fatal error in your logic.</p>
<hr>
<p>Think through this again: a recursive routine should not have global variables, except perhaps for the data store of a dynamic programming implementation (which does not apply here). The function should check for the base case and return something trivial (such as 0 or 1). Otherwise, the function should reduce the problem <em>one simple step</em> and recur; when the recursion returns, the function does something simple with the result and returns the new result to its parent.</p>
<p>Your task is relatively simple:</p>
<pre><code>Find the depths of the two subtrees.
If their difference > 1, return False
else return True
</code></pre>
<p>You should already know how to check the depth of a tree. Implement this first. After that, make your implementation more intelligent: checking the depth of a subtree should <em>also</em> check its balance at each step. That will be your final solution.</p>
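<p>A minimal sketch of that final solution (the <code>Node</code> class and names here are illustrative, not the assignment's API): each call returns both the subtree depth and whether the subtree is balanced, so the balance check rides along with the depth computation and no globals are needed:</p>

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right


def avl_check(node):
    """Return (is_avl, depth) for the subtree rooted at node."""
    if node is None:                      # base case: empty subtree
        return True, 0
    left_ok, left_depth = avl_check(node.left)
    right_ok, right_depth = avl_check(node.right)
    balanced = abs(left_depth - right_depth) <= 1
    return left_ok and right_ok and balanced, 1 + max(left_depth, right_depth)
```

<p>For the question's tree (8 with children 3 and 10, where 3 → 2 → 1), the subtree rooted at 3 already has child depths (2, 0), so the tree is reported as not AVL.</p>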
| 1 |
2016-09-20T17:22:04Z
|
[
"python",
"recursion",
"binary-tree",
"avl-tree"
] |
Fill in time data in pandas
| 39,599,192 |
<p>I have data that is every 15 seconds. But, there are some values that are missing. These are not tagged with NaN, but simply are not present. How can I fill in those values?<br>
I have tried to resample, but that also shifts my original data. So, why doesn't this work:</p>
<pre><code>a=pd.Series([1.,3.,4.,3.,5.],['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05'])
a.index=pd.to_datetime(a.index)
a.resample('15S').mean()
In [368]: a
Out[368]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
dtype: float64
</code></pre>
<p>It shows me this:</p>
<pre><code>2016-05-25 00:00:30 1.0
2016-05-25 00:00:45 3.0
2016-05-25 00:01:00 4.0
2016-05-25 00:01:15 NaN
2016-05-25 00:01:30 3.0
2016-05-25 00:01:45 NaN
2016-05-25 00:02:00 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>So, I no longer have a value at 00:35 or 00:50.<br>
For my original larger data set, I also end up seeing many NaN values in large groups at the end of the resampled data.<br>
What I would like to do is resample my 15s data to 15s, so that whenever there is no data present for a particular time it uses the mean of the surrounding values to fill it in. Is there a way to do that?<br>
Also, why does the time base change when I resample? My original data starts at 00:00:35 and after resampling it starts at 00:00:30? It seems like it got shifted by 5 seconds.<br>
In my example data, all it should have done is create an additional data entry at 00:01:50. </p>
<hr>
<p><strong>Edit</strong></p>
<p>I realized that my data is slightly more complex than I had thought. The 'base' actually changes partway through it. If I use the solution below, then it works for part of the data, but then the values stop changing. For example:</p>
<pre><code>a = pd.Series([1.,3.,4.,3.,5.,6.,7.,8.], ['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05','2016-05-25 00:03:00','2016-05-25 00:04:00','2016-05-25 00:06:00'])
In [79]: a
Out[79]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
2016-05-25 00:03:00 6.0
2016-05-25 00:04:00 7.0
2016-05-25 00:06:00 8.0
dtype: float64
In [80]: a.index = pd.to_datetime(a.index)
In [81]: a.resample('15S', base=5).interpolate()
Out[81]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 3.5
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 4.0
2016-05-25 00:02:05 5.0
2016-05-25 00:02:20 5.0
2016-05-25 00:02:35 5.0
2016-05-25 00:02:50 5.0
2016-05-25 00:03:05 5.0
2016-05-25 00:03:20 5.0
2016-05-25 00:03:35 5.0
2016-05-25 00:03:50 5.0
2016-05-25 00:04:05 5.0
2016-05-25 00:04:20 5.0
2016-05-25 00:04:35 5.0
2016-05-25 00:04:50 5.0
2016-05-25 00:05:05 5.0
2016-05-25 00:05:20 5.0
2016-05-25 00:05:35 5.0
2016-05-25 00:05:50 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>As you can see, it stops interpolating after 2:05 and seems to ignore the data at 3:00, 4:00, and 6:00. </p>
| 3 |
2016-09-20T16:20:00Z
| 39,599,390 |
<p>you need to use the <code>loffset</code> argument</p>
<pre><code>a.resample('15S', loffset='5S')
</code></pre>
<p><a href="http://i.stack.imgur.com/rQK52.png" rel="nofollow"><img src="http://i.stack.imgur.com/rQK52.png" alt="enter image description here"></a></p>
| 3 |
2016-09-20T16:30:45Z
|
[
"python",
"pandas"
] |
Fill in time data in pandas
| 39,599,192 |
<p>I have data that is every 15 seconds. But, there are some values that are missing. These are not tagged with NaN, but simply are not present. How can I fill in those values?<br>
I have tried to resample, but that also shifts my original data. So, why doesn't this work:</p>
<pre><code>a=pd.Series([1.,3.,4.,3.,5.],['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05'])
a.index=pd.to_datetime(a.index)
a.resample('15S').mean()
In [368]: a
Out[368]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
dtype: float64
</code></pre>
<p>It shows me this:</p>
<pre><code>2016-05-25 00:00:30 1.0
2016-05-25 00:00:45 3.0
2016-05-25 00:01:00 4.0
2016-05-25 00:01:15 NaN
2016-05-25 00:01:30 3.0
2016-05-25 00:01:45 NaN
2016-05-25 00:02:00 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>So, I no longer have a value at 00:35 or 00:50.<br>
For my original larger data set, I also end up seeing many NaN values in large groups at the end of the resampled data.<br>
What I would like to do is resample my 15s data to 15s, so that whenever there is no data present for a particular time it uses the mean of the surrounding values to fill it in. Is there a way to do that?<br>
Also, why does the time base change when I resample? My original data starts at 00:00:35 and after resampling it starts at 00:00:30? It seems like it got shifted by 5 seconds.<br>
In my example data, all it should have done is create an additional data entry at 00:01:50. </p>
<hr>
<p><strong>Edit</strong></p>
<p>I realized that my data is slightly more complex than I had thought. The 'base' actually changes partway through it. If I use the solution below, then it works for part of the data, but then the values stop changing. For example:</p>
<pre><code>a = pd.Series([1.,3.,4.,3.,5.,6.,7.,8.], ['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05','2016-05-25 00:03:00','2016-05-25 00:04:00','2016-05-25 00:06:00'])
In [79]: a
Out[79]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
2016-05-25 00:03:00 6.0
2016-05-25 00:04:00 7.0
2016-05-25 00:06:00 8.0
dtype: float64
In [80]: a.index = pd.to_datetime(a.index)
In [81]: a.resample('15S', base=5).interpolate()
Out[81]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 3.5
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 4.0
2016-05-25 00:02:05 5.0
2016-05-25 00:02:20 5.0
2016-05-25 00:02:35 5.0
2016-05-25 00:02:50 5.0
2016-05-25 00:03:05 5.0
2016-05-25 00:03:20 5.0
2016-05-25 00:03:35 5.0
2016-05-25 00:03:50 5.0
2016-05-25 00:04:05 5.0
2016-05-25 00:04:20 5.0
2016-05-25 00:04:35 5.0
2016-05-25 00:04:50 5.0
2016-05-25 00:05:05 5.0
2016-05-25 00:05:20 5.0
2016-05-25 00:05:35 5.0
2016-05-25 00:05:50 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>As you can see, it stops interpolating after 2:05 and seems to ignore the data at 3:00, 4:00, and 6:00. </p>
| 3 |
2016-09-20T16:20:00Z
| 39,599,454 |
<p>For the sake of completeness, the <code>base</code> argument works too:</p>
<pre><code>a.resample('15S', base=5).mean()
Out[4]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 NaN
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 NaN
2016-05-25 00:02:05 5.0
Freq: 15S, dtype: float64
</code></pre>
| 3 |
2016-09-20T16:34:29Z
|
[
"python",
"pandas"
] |
Fill in time data in pandas
| 39,599,192 |
<p>I have data that is every 15 seconds. But, there are some values that are missing. These are not tagged with NaN, but simply are not present. How can I fill in those values?<br>
I have tried to resample, but that also shifts my original data. So, why doesn't this work:</p>
<pre><code>a=pd.Series([1.,3.,4.,3.,5.],['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05'])
a.index=pd.to_datetime(a.index)
a.resample('15S').mean()
In [368]: a
Out[368]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
dtype: float64
</code></pre>
<p>It shows me this:</p>
<pre><code>2016-05-25 00:00:30 1.0
2016-05-25 00:00:45 3.0
2016-05-25 00:01:00 4.0
2016-05-25 00:01:15 NaN
2016-05-25 00:01:30 3.0
2016-05-25 00:01:45 NaN
2016-05-25 00:02:00 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>So, I no longer have a value at 00:35 or 00:50.<br>
For my original larger data set, I also end up seeing many NaN values in large groups at the end of the resampled data.<br>
What I would like to do is resample my 15s data to 15s, so that whenever there is no data present for a particular time it uses the mean of the surrounding values to fill it in. Is there a way to do that?<br>
Also, why does the time base change when I resample? My original data starts at 00:00:35 and after resampling it starts at 00:00:30? It seems like it got shifted by 5 seconds.<br>
In my example data, all it should have done is create an additional data entry at 00:01:50. </p>
<hr>
<p><strong>Edit</strong></p>
<p>I realized that my data is slightly more complex than I had thought. The 'base' actually changes partway through it. If I use the solution below, then it works for part of the data, but then the values stop changing. For example:</p>
<pre><code>a = pd.Series([1.,3.,4.,3.,5.,6.,7.,8.], ['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05','2016-05-25 00:03:00','2016-05-25 00:04:00','2016-05-25 00:06:00'])
In [79]: a
Out[79]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
2016-05-25 00:03:00 6.0
2016-05-25 00:04:00 7.0
2016-05-25 00:06:00 8.0
dtype: float64
In [80]: a.index = pd.to_datetime(a.index)
In [81]: a.resample('15S', base=5).interpolate()
Out[81]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 3.5
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 4.0
2016-05-25 00:02:05 5.0
2016-05-25 00:02:20 5.0
2016-05-25 00:02:35 5.0
2016-05-25 00:02:50 5.0
2016-05-25 00:03:05 5.0
2016-05-25 00:03:20 5.0
2016-05-25 00:03:35 5.0
2016-05-25 00:03:50 5.0
2016-05-25 00:04:05 5.0
2016-05-25 00:04:20 5.0
2016-05-25 00:04:35 5.0
2016-05-25 00:04:50 5.0
2016-05-25 00:05:05 5.0
2016-05-25 00:05:20 5.0
2016-05-25 00:05:35 5.0
2016-05-25 00:05:50 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>As you can see, it stops interpolating after 2:05 and seems to ignore the data at 3:00, 4:00, and 6:00. </p>
| 3 |
2016-09-20T16:20:00Z
| 39,599,529 |
<p>Both @IanS and @piRSquared address the shifting of the base. As for filling <code>NaN</code>s: pandas has methods for forward-filling (<code>.ffill()</code>/<code>.pad()</code>) and backward-filling (<code>.bfill()</code>/<code>.backfill()</code>), but not for taking the mean. A quick way of doing it is by taking the mean manually:</p>
<pre><code>b = a.resample('15S', base=5)
(b.ffill() + b.bfill()) / 2
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 3.5
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 4.0
2016-05-25 00:02:05 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>EDIT: I stand corrected: there is a built-in method: <code>.interpolate()</code>.</p>
<pre><code>a.resample('15S', base=5).interpolate()
</code></pre>
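<p>A self-contained version of that one-liner, for reference. Note that in recent pandas releases the <code>base</code> argument of <code>resample</code> was removed in favour of <code>offset</code>, so this sketch uses <code>offset="5s"</code> to get the same bin alignment:</p>

```python
import pandas as pd

a = pd.Series(
    [1.0, 3.0, 4.0, 3.0, 5.0],
    index=pd.to_datetime([
        "2016-05-25 00:00:35", "2016-05-25 00:00:50", "2016-05-25 00:01:05",
        "2016-05-25 00:01:35", "2016-05-25 00:02:05",
    ]),
)

# offset="5s" lines the bin edges up with :05/:20/:35/:50 (same effect as
# base=5 in older pandas); mean() leaves NaN in the empty bins, and
# interpolate() fills them linearly from the neighbouring values.
filled = a.resample("15s", offset="5s").mean().interpolate()
```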
| 4 |
2016-09-20T16:39:11Z
|
[
"python",
"pandas"
] |
Fill in time data in pandas
| 39,599,192 |
<p>I have data that is every 15 seconds. But, there are some values that are missing. These are not tagged with NaN, but simply are not present. How can I fill in those values?<br>
I have tried to resample, but that also shifts my original data. So, why doesn't this work:</p>
<pre><code>a=pd.Series([1.,3.,4.,3.,5.],['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05'])
a.index=pd.to_datetime(a.index)
a.resample('15S').mean()
In [368]: a
Out[368]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
dtype: float64
</code></pre>
<p>It shows me this:</p>
<pre><code>2016-05-25 00:00:30 1.0
2016-05-25 00:00:45 3.0
2016-05-25 00:01:00 4.0
2016-05-25 00:01:15 NaN
2016-05-25 00:01:30 3.0
2016-05-25 00:01:45 NaN
2016-05-25 00:02:00 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>So, I no longer have a value at 00:35 or 00:50.<br>
For my original larger data set, I also end up seeing many NaN values in large groups at the end of the resampled data.<br>
What I would like to do is resample my 15s data to 15s, so that whenever there is no data present for a particular time it uses the mean of the surrounding values to fill it in. Is there a way to do that?<br>
Also, why does the time base change when I resample? My original data starts at 00:00:35 and after resampling it starts at 00:00:30? It seems like it got shifted by 5 seconds.<br>
In my example data, all it should have done is create an additional data entry at 00:01:50. </p>
<hr>
<p><strong>Edit</strong></p>
<p>I realized that my data is slightly more complex than I had thought. The 'base' actually changes partway through it. If I use the solution below, then it works for part of the data, but then the values stop changing. For example:</p>
<pre><code>a = pd.Series([1.,3.,4.,3.,5.,6.,7.,8.], ['2016-05-25 00:00:35','2016-05-25 00:00:50','2016-05-25 00:01:05','2016-05-25 00:01:35','2016-05-25 00:02:05','2016-05-25 00:03:00','2016-05-25 00:04:00','2016-05-25 00:06:00'])
In [79]: a
Out[79]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:35 3.0
2016-05-25 00:02:05 5.0
2016-05-25 00:03:00 6.0
2016-05-25 00:04:00 7.0
2016-05-25 00:06:00 8.0
dtype: float64
In [80]: a.index = pd.to_datetime(a.index)
In [81]: a.resample('15S', base=5).interpolate()
Out[81]:
2016-05-25 00:00:35 1.0
2016-05-25 00:00:50 3.0
2016-05-25 00:01:05 4.0
2016-05-25 00:01:20 3.5
2016-05-25 00:01:35 3.0
2016-05-25 00:01:50 4.0
2016-05-25 00:02:05 5.0
2016-05-25 00:02:20 5.0
2016-05-25 00:02:35 5.0
2016-05-25 00:02:50 5.0
2016-05-25 00:03:05 5.0
2016-05-25 00:03:20 5.0
2016-05-25 00:03:35 5.0
2016-05-25 00:03:50 5.0
2016-05-25 00:04:05 5.0
2016-05-25 00:04:20 5.0
2016-05-25 00:04:35 5.0
2016-05-25 00:04:50 5.0
2016-05-25 00:05:05 5.0
2016-05-25 00:05:20 5.0
2016-05-25 00:05:35 5.0
2016-05-25 00:05:50 5.0
Freq: 15S, dtype: float64
</code></pre>
<p>As you can see, it stops interpolating after 2:05 and seems to ignore the data at 3:00, 4:00, and 6:00. </p>
| 3 |
2016-09-20T16:20:00Z
| 39,734,514 |
<p>An answer was posted to my <a href="https://github.com/pydata/pandas/issues/14297" rel="nofollow">bug report</a> that I wanted to share here for completeness. It is not my post, but does just what I had wanted:</p>
<p>Try this (maybe this is what interpolate should do by default, interpolating before re-sampling?)</p>
<pre><code>from scipy.interpolate import interp1d
# fit the interpolation in integer ns-space
f = interp1d(a.index.asi8, a.values)
# generating ending bins
dates = a.resample('15s', base=5).first().index
# apply
pd.Series(f(dates.asi8), dates)
Out[122]:
2016-05-25 00:00:35 1.000000
2016-05-25 00:00:50 3.000000
2016-05-25 00:01:05 4.000000
2016-05-25 00:01:20 3.500000
2016-05-25 00:01:35 3.000000
2016-05-25 00:01:50 4.000000
2016-05-25 00:02:05 5.000000
2016-05-25 00:02:20 5.272727
2016-05-25 00:02:35 5.545455
2016-05-25 00:02:50 5.818182
2016-05-25 00:03:05 6.083333
2016-05-25 00:03:20 6.333333
2016-05-25 00:03:35 6.583333
2016-05-25 00:03:50 6.833333
2016-05-25 00:04:05 7.041667
2016-05-25 00:04:20 7.166667
2016-05-25 00:04:35 7.291667
2016-05-25 00:04:50 7.416667
2016-05-25 00:05:05 7.541667
2016-05-25 00:05:20 7.666667
2016-05-25 00:05:35 7.791667
2016-05-25 00:05:50 7.916667
Freq: 15S, dtype: float64
</code></pre>
| 0 |
2016-09-27T21:22:31Z
|
[
"python",
"pandas"
] |
plotting box plot with seaborn with multidimensional data
| 39,599,409 |
<p>I am trying to create a box plot with seaborn as follows:</p>
<p>I have some synthetic data where I have 24 different categories which are generated as:</p>
<pre><code>import numpy as np
x = np.arange(10, 130, step=5)
</code></pre>
<p>Now, for each of these categories I generate 5 random observations as follows:</p>
<pre><code>y = np.zeros(shape=(len(x), 5)) # Each row contains 5 observations for a category
</code></pre>
<p>Now, what I want to do is make a box plot with seaborn where I plot these 5 values along the y-axis (highlighting the confidence interval) and, on the x-axis, I would like each of these categories. So, I do:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
fig = sns.boxplot(x=x, y=y)
plt.show()
</code></pre>
<p>However, this comes with the exception that the data must be 1-dimensional. I am not sure how to structure my data so that I can plot it.</p>
| 0 |
2016-09-20T16:32:01Z
| 39,600,353 |
<p>The problem, as you point out, is the shape of your input data. Without trying to make too many assumptions as to what you are trying to do, I think you are looking for something like</p>
<pre><code>x = np.arange(10, 130, step=5)
y = 4 * np.random.randn(x.size, 5) + 3
x_for_boxplot = np.repeat(x, 5)
y_for_boxplot = y.flatten()
ax = sns.boxplot(x_for_boxplot, y_for_boxplot)
</code></pre>
<p>where <code>x_for_boxplot</code> and <code>y_for_boxplot</code> have been restructured so that they are 1D arrays of the same size which is what <code>sns.boxplot</code> is looking for. I also changed <code>y</code> so that it is made up of random normal values rather than zeros.</p>
<p><a href="http://i.stack.imgur.com/qvpwm.png" rel="nofollow"><img src="http://i.stack.imgur.com/qvpwm.png" alt="enter image description here"></a></p>
| 0 |
2016-09-20T17:30:31Z
|
[
"python",
"matplotlib",
"plot",
"seaborn"
] |
crontab not fully working. only echo statements being run
| 39,599,420 |
<p>I have a job in my crontab to run a script (<code>/home/sys_bio/username/tracer.sh</code>) every minute. The script contains</p>
<pre><code>#!/usr/bin/env bash
echo "starting"
/home/sys_bio/username/p35/bin/python3.5 -m qefunctional.qe.tests.prodprobe -p post -j test.json
echo "finised"
</code></pre>
<p>When I am in the directory <code>/home/sys_bio/username/</code> and I run the command <code>./tracer.sh</code>, it runs as expected.</p>
<p>However when I add the job to the crontab, only the <code>echo</code> portions run. </p>
<p>Another thing to note is <code>p35</code> is where my python3 is located. So <code>/home/sys_bio/username/p35/bin/python3.5</code> is how I run all my python3 scripts. </p>
<p>This is what I have in my crontab</p>
<p><code>* * * * * /home/sys_bio/username/tracer.sh >> /home/username/tracer.cron.txt</code></p>
<p>I am getting the <code>.txt</code> file created but only the echo statements are being saved.</p>
<p>This may seem confusing but the directory structure is like so:</p>
<pre><code> /home
/sys_bio
/username
/qefunctional
__init__.py
/qe
__init__.py
/tests
__init__.py
prodprobe.py
test.json
/p35
/bin
python3.5
tracer.sh
/username
tracer.cron.txt
</code></pre>
<p><strong>* edits *</strong></p>
<p>I was able to add a bit more to my script so I can redirect errors to a file. The script now looks like this and I am getting this error:</p>
<pre><code>/home/sys_bio/username/p35/bin/python3.5: Error while finding spec for 'qefunctional.qe.tests.prodprobe' (<class 'ImportError'>: No module named 'qefunctional')
</code></pre>
<p>script:</p>
<pre><code>#!/usr/bin/env bash
echo "Starting Tracer POST"
/home/sys_bio/username/p35/bin/python3.5 -m qefunctional.qe.tests.prodprobe -p post -j /home/sys_bio/username/test.json -d -v 2>/home/username/crontab.output
echo "Ending Tracer POST"
</code></pre>
| 1 |
2016-09-20T16:32:25Z
| 39,600,001 |
<p>Including the path to the shell might be needed:</p>
<pre><code>* * * * * /bin/sh /home/sys_bio/username/tracer.sh >> ...
</code></pre>
<p><code>cron</code> otherwise might not really know what to do. </p>
<p>The same principle also applies to what is included in your script. Using relative file names can fail since they are often not interpreted the same as if you were using your local interactive shell (eg):</p>
<pre><code>test.json
</code></pre>
<p>It might be necessary to use the absolute paths (eg):</p>
<pre><code>/full/path/to/test.json
</code></pre>
<p><strong>EDIT</strong>: It's apparent from the error generated by Python that the issue is definitely path-related. When run from cron, Python searches for <code>qefunctional</code> relative to cron's working directory rather than the project root <code>/home/sys_bio/username/</code>, where the package actually lives. The fix is to <code>cd</code> into that directory (or to use absolute paths). By adding the following into the <code>.sh</code> script:</p>
<pre><code>cd /home/sys_bio/username
</code></pre>
<p>With that in place, the command following it will look for the package in the project root and find it.</p>
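<p>As a minimal, self-contained illustration of the working-directory effect (all names and paths below are throwaway, not the OP's):</p>

```shell
# 'python -m pkg.mod' resolves top-level packages relative to the current
# directory, which is why cron (running from elsewhere) could not find the
# package while an interactive shell in the project root could.
workdir=$(mktemp -d)
mkdir -p "$workdir/demo_pkg"
touch "$workdir/demo_pkg/__init__.py"
printf 'print("found it")\n' > "$workdir/demo_pkg/probe.py"

cd "$workdir"                 # the fix: enter the project root first
python3 -m demo_pkg.probe     # prints: found it
```

<p>The same <code>cd</code> at the top of <code>tracer.sh</code> (or absolute paths throughout) gives cron the context it is missing.</p>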
| 1 |
2016-09-20T17:07:38Z
|
[
"python",
"bash",
"cron"
] |
Vertices with edges not drawn with plotly in a network graph
| 39,599,460 |
<p>I have been following the instructions from here to draw a network graph:
<a href="http://nbviewer.jupyter.org/gist/empet/07ea33b2e4e0b84193bd" rel="nofollow">http://nbviewer.jupyter.org/gist/empet/07ea33b2e4e0b84193bd</a></p>
<p>I have been replicating the networkx example, as I have had trouble compiling and installing igraph on my Ubuntu VM.
Here is an example of result I would obtain: <a href="http://i.stack.imgur.com/owasw.png" rel="nofollow">plotly network graph</a></p>
<p>For some reason, the nodes that have edges are not visible.</p>
<p>I hope that someone has some idea about where the problem might originate. I have been reusing the same layout parameters as in the example I was trying to reproduce.</p>
<p>Thanks in advance!</p>
| 0 |
2016-09-20T16:34:34Z
| 39,612,303 |
<p>After further investigation, I found out the root of the problem. There was a mismatch between the identifiers of the nodes and the edges.</p>
<p>I was still using the labels of the nodes to add the edges instead of the node identifiers (integers used for positioning). As a consequence, plotly did not know where to connect the edges. </p>
| 0 |
2016-09-21T09:16:06Z
|
[
"python",
"plotly"
] |
Pymongo replace_one modified_count always 1 even if not changing anything
| 39,599,480 |
<p>Why and how can this work like this?</p>
<pre><code>item = db.test.find_one()
result = db.test.replace_one(item, item)
print(result.raw_result)
# Gives: {u'n': 1, u'nModified': 1, u'ok': 1, 'updatedExisting': True}
print(result.modified_count)
# Gives 1
</code></pre>
<p>when the equivalent in mongodb shell is always 0</p>
<pre><code>item = db.test.findOne()
db.test.replaceOne(item, item)
# Gives: {"acknowledged" : true, "matchedCount" : 1.0, "modifiedCount" : 0.0}
</code></pre>
<p>How can I get consistent results and properly detect when the replacement is actually changing the data?</p>
| 2 |
2016-09-20T16:35:26Z
| 39,714,522 |
<p>This is because MongoDB stores documents in binary (<a href="http://bsonspec.org/" rel="nofollow">BSON</a>) format. Key-value pairs in a BSON document can have any order (except that _id is always first).
Let's start with the <a href="https://docs.mongodb.com/manual/mongo/" rel="nofollow">mongo shell</a> first. The mongo shell preserves the key order when reading and writing data.
For example: </p>
<p></p>
<pre><code>> db.collection.insert({_id:1, a:2, b:3})
{ "_id" : 1, "a" : 2, "b" : 3 }
</code></pre>
<p>If you are performing <a href="https://docs.mongodb.com/manual/reference/method/db.collection.replaceOne/" rel="nofollow">replaceOne()</a> using this document value, it would avoid a modification because there's an existing BSON. </p>
<p></p>
<pre><code>> var doc = db.collection.findOne()
> db.collection.replaceOne(doc, doc)
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 0 }
</code></pre>
<p>However, if you change the ordering of the fields it would detect a modification</p>
<p></p>
<pre><code>> var doc_2 = {_id:1, b:3, a:2}
> db.collection.replaceOne(doc_2, doc_2)
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }
</code></pre>
<p>Let's step into the Python world. <a href="https://api.mongodb.com/python/current/" rel="nofollow">PyMongo</a> represents BSON documents as Python dictionaries by default, and the order of keys in a Python dictionary is not defined. Therefore, you cannot predict how it will be serialised to BSON. As per your example: </p>
<p></p>
<pre><code>> doc = db.collection.find_one()
{u'_id': 1.0, u'a': 2.0, u'b': 3.0}
> result = db.collection.replace_one(doc, doc)
> result.raw_result
{u'n': 1, u'nModified': 1, u'ok': 1, 'updatedExisting': True}
</code></pre>
<p>If it matters for your use case, one workaround is to use <a href="http://api.mongodb.com/python/current/api/bson/son.html" rel="nofollow">bson.SON</a>. For example: </p>
<p></p>
<pre><code>> from bson import CodecOptions, SON
> opts=CodecOptions(document_class=SON)
> collection_son = db.collection.with_options(codec_options=opts)
> doc_2 = collection_son.find_one()
SON([(u'_id', 1.0), (u'a', 2.0), (u'b', 3.0)])
> result = collection_son.replace_one(doc_2, doc_2)
{u'n': 1, u'nModified': 0, u'ok': 1, 'updatedExisting': True}
</code></pre>
<p>You can also observe that <code>bson.SON</code> is used in PyMongo (v3.3.0) i.e. <a href="https://github.com/mongodb/mongo-python-driver/blob/3.3.0/pymongo/collection.py#L697" rel="nofollow">_update() method</a>. See also related article: <a href="https://emptysqua.re/blog/pymongo-key-order/" rel="nofollow">PyMongo and Key Order in SubDocuments</a>.</p>
<p><strong>Update</strong> to answer an additional question: </p>
<p>As far as I know, there is no a 'standard' function to convert a nested dictionary to SON. Although you can write a custom <code>dict</code> to <code>SON</code> converter yourself, for example:</p>
<p></p>
<pre><code>def to_son(value):
    for k, v in value.iteritems():
        if isinstance(v, dict):
            value[k] = to_son(v)
        elif isinstance(v, list):
            value[k] = [to_son(x) for x in v]
    return bson.son.SON(value)

# Assuming the order of the dictionary is as you desired.
to_son(a_nested_dict)
</code></pre>
<p>Or utilise bson as an intermediate format</p>
<p></p>
<pre><code>from bson import CodecOptions, SON, BSON
nested_bson = BSON.encode(a_nested_dict)
nested_son = BSON.decode(nested_bson, codec_options=CodecOptions(document_class=SON))
</code></pre>
<p>Once in <code>SON</code> format, you can convert back to Python dictionary using <a href="http://api.mongodb.com/python/current/api/bson/son.html#bson.son.SON.to_dict" rel="nofollow">SON.to_dict()</a></p>
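<p>For completeness, the recursive-conversion idea can be tried without a MongoDB driver installed by substituting <code>collections.OrderedDict</code> for <code>bson.SON</code> (this is only a sketch; <code>to_ordered</code> is a made-up name):</p>

```python
from collections import OrderedDict

def to_ordered(value):
    # Recursively rebuild nested dicts (including dicts inside lists)
    # as order-preserving mappings, mirroring the to_son() helper above.
    if isinstance(value, dict):
        return OrderedDict((k, to_ordered(v)) for k, v in value.items())
    if isinstance(value, list):
        return [to_ordered(x) for x in value]
    return value

doc = {"_id": 1, "a": 2, "nested": {"b": 3, "items": [{"c": 4}]}}
son_like = to_ordered(doc)
print(type(son_like).__name__)            # OrderedDict
print(type(son_like["nested"]).__name__)  # OrderedDict
```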
| 3 |
2016-09-27T00:56:43Z
|
[
"python",
"mongodb",
"pymongo"
] |
NumPy: Limited cumulative sum
| 39,599,512 |
<p>Is there some way to avoid the <code>for</code> loop in this code:</p>
<pre><code>X = numpy array ~8000 long
running_total = 0
for x in X:
    this_step = min(x, running_total)
    running_total += this_step
</code></pre>
<p>In words, that's calculating a cumulative sum of a series where the difference between samples is limited to the previous value of the cumulative sum. Is there some way to do this using numpy's element-wise operations? Or should I be looking to write a C function if I care how fast this runs?</p>
<p><strong>Edit</strong> Obviously I've not made myself clear. My attempt to simplify the code above seems to have caused more confusion than anything else. Here's something closer to the actual code:</p>
<pre><code>monthly_income = numpy array ~ 8000 long
monthly_expenditure = numpy array ~ 8000 long
running_credit = numpy.zeros(len(monthly_income) + 1)
monthly_borrowing = numpy.zeros(len(monthly_income))
for index, i, e in zip(range(len(monthly_income)), monthly_income, monthly_expenditure):
    assets_used = max(0, e - i)
    assets_used = min(assets_used, running_credit[index])
    monthly_borrowing[index] = max(0, e - i - running_credit[index])
    running_credit[index+1] += max(0, i - e) - assets_used
</code></pre>
<p>The point is that <code>running_index[index+1]</code> depends on <code>assets_used</code> at sample <code>index</code>, which depends on <code>running_credit[index]</code>. Doing this in a Python loop is slow - in a function which does many similar calculations on the same input arrays using NumPy operations, the above loop takes over 80% of the execution time. But I can't see a way of doing the above operation without a for loop.</p>
<p>So is there some way of doing this kind of iterative operation in NumPy without a for loop? Or do I need to write a C function if I want this to run fast?</p>
| -1 |
2016-09-20T16:38:09Z
| 39,607,277 |
<p>Not sure but guessing from your explanation (assuming x is nonnegative)</p>
<pre><code>X = range(1, 1000)
running_total = X[0]
for x in X[1:]:
    this_step = min(x, running_total)
    running_total += this_step
</code></pre>
| 0 |
2016-09-21T03:55:41Z
|
[
"python",
"numpy"
] |
NumPy: Limited cumulative sum
| 39,599,512 |
<p>Is there some way to avoid the <code>for</code> loop in this code:</p>
<pre><code>X = numpy array ~8000 long
running_total = 0
for x in X:
    this_step = min(x, running_total)
    running_total += this_step
</code></pre>
<p>In words, that's calculating a cumulative sum of a series where the difference between samples is limited to the previous value of the cumulative sum. Is there some way to do this using numpy's element-wise operations? Or should I be looking to write a C function if I care how fast this runs?</p>
<p><strong>Edit</strong> Obviously I've not made myself clear. My attempt to simplify the code above seems to have caused more confusion than anything else. Here's something closer to the actual code:</p>
<pre><code>monthly_income = numpy array ~ 8000 long
monthly_expenditure = numpy array ~ 8000 long
running_credit = numpy.zeros(len(monthly_income) + 1)
monthly_borrowing = numpy.zeros(len(monthly_income))
for index, i, e in zip(range(len(monthly_income)), monthly_income, monthly_expenditure):
    assets_used = max(0, e - i)
    assets_used = min(assets_used, running_credit[index])
    monthly_borrowing[index] = max(0, e - i - running_credit[index])
    running_credit[index+1] += max(0, i - e) - assets_used
</code></pre>
<p>The point is that <code>running_index[index+1]</code> depends on <code>assets_used</code> at sample <code>index</code>, which depends on <code>running_credit[index]</code>. Doing this in a Python loop is slow - in a function which does many similar calculations on the same input arrays using NumPy operations, the above loop takes over 80% of the execution time. But I can't see a way of doing the above operation without a for loop.</p>
<p>So is there some way of doing this kind of iterative operation in NumPy without a for loop? Or do I need to write a C function if I want this to run fast?</p>
| -1 |
2016-09-20T16:38:09Z
| 39,614,414 |
<p>Easiest quick-fix for this kind of problem i find is to use numba. E.g.</p>
<pre><code>from numba import jit
import numpy as np

def cumsumcapped(X):
    running_total = 0
    for x in X:
        this_step = min(x, running_total)
        running_total += this_step
    return running_total

@jit
def cumsumcappedjit(X):
    running_total = 0
    for x in X:
        this_step = min(x, running_total)
        running_total += this_step
    return running_total

X = np.random.randint(1, 100, 10000)

In [5]: %timeit cumsumcapped(X)
100 loops, best of 3: 4.1 ms per loop

In [6]: %timeit cumsumcappedjit(X)
The slowest run took 170143.03 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 587 ns per loop
</code></pre>
| 0 |
2016-09-21T10:47:38Z
|
[
"python",
"numpy"
] |
Adding numbers in a list gives TypeError: unsupported operand type(s) for +: 'int' and 'str'
| 39,599,596 |
<p>I'm writing a simple calculator program that will let a user add a list of integers together as a kind of entry to the syntax of Python. I want the program to allow the user to add as many numbers together as they want. My error is:</p>
<pre><code>Traceback (most recent call last):
File "Calculator.py", line 17, in <module>
addition = sum(inputs)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
<p>My code is:</p>
<pre><code>#declare variables
inputs = []
done = False
#while loop for inputting numbers
while done == False:
    value = raw_input()
    #escape loop if user enters done
    if value == "Done":
        print inputs
        done = True
    else:
        inputs.append(value)

addition = sum(inputs)
print addition
</code></pre>
| -1 |
2016-09-20T16:43:52Z
| 39,599,634 |
<p>Try casting value to an int.</p>
<pre><code>value = int(raw_input())
</code></pre>
<p>Edit:
See the other answers; mine throws an exception when "Done" is typed into the prompt.</p>
| -2 |
2016-09-20T16:45:52Z
|
[
"python"
] |
Adding numbers in a list gives TypeError: unsupported operand type(s) for +: 'int' and 'str'
| 39,599,596 |
<p>I'm writing a simple calculator program that will let a user add a list of integers together as a kind of entry to the syntax of Python. I want the program to allow the user to add as many numbers together as they want. My error is:</p>
<pre><code>Traceback (most recent call last):
File "Calculator.py", line 17, in <module>
addition = sum(inputs)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
<p>My code is:</p>
<pre><code>#declare variables
inputs = []
done = False
#while loop for inputting numbers
while done == False:
    value = raw_input()
    #escape loop if user enters done
    if value == "Done":
        print inputs
        done = True
    else:
        inputs.append(value)

addition = sum(inputs)
print addition
</code></pre>
| -1 |
2016-09-20T16:43:52Z
| 39,599,641 |
<p>When using <code>raw_input()</code> you're storing a string in <code>value</code>. Convert it to an int before appending it to your list, e.g.</p>
<pre><code>inputs.append( int( value ) )
</code></pre>
| 0 |
2016-09-20T16:46:37Z
|
[
"python"
] |
Adding numbers in a list gives TypeError: unsupported operand type(s) for +: 'int' and 'str'
| 39,599,596 |
<p>I'm writing a simple calculator program that will let a user add a list of integers together as a kind of entry to the syntax of Python. I want the program to allow the user to add as many numbers together as they want. My error is:</p>
<pre><code>Traceback (most recent call last):
File "Calculator.py", line 17, in <module>
addition = sum(inputs)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
<p>My code is:</p>
<pre><code>#declare variables
inputs = []
done = False
#while loop for inputting numbers
while done == False:
    value = raw_input()
    #escape loop if user enters done
    if value == "Done":
        print inputs
        done = True
    else:
        inputs.append(value)

addition = sum(inputs)
print addition
</code></pre>
| -1 |
2016-09-20T16:43:52Z
| 39,599,661 |
<p><a href="https://docs.python.org/2/library/functions.html#raw_input" rel="nofollow"><code>raw_input</code></a> returns strings, not numbers. <a href="https://docs.python.org/2/library/functions.html#sum" rel="nofollow"><code>sum</code></a> operates only on numbers.</p>
<p>You can convert each item to an int as you add it to the list: <code>inputs.append(int(value))</code>. If you use <code>float</code> rather than <code>int</code> then non-integer numbers will work too. In either case, this will produce an error if the user enters something that is neither <code>Done</code> nor an integer. You can use <code>try</code>/<code>except</code> to deal with that, but that's probably out of the scope of this question.</p>
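<p>A sketch of the <code>try</code>/<code>except</code> approach, factored into a helper so it is testable without live input (<code>read_numbers</code> is a made-up name; the token list stands in for repeated <code>raw_input()</code> calls):</p>

```python
def read_numbers(tokens):
    # Collect numeric entries until "Done"; quietly skip anything that
    # is neither "Done" nor a number.
    numbers = []
    for tok in tokens:
        if tok == "Done":
            break
        try:
            numbers.append(float(tok))
        except ValueError:
            pass
    return numbers

print(sum(read_numbers(["1", "2.5", "oops", "3", "Done", "99"])))  # 6.5
```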
| 1 |
2016-09-20T16:47:48Z
|
[
"python"
] |
Regular expression, matching final group
| 39,599,616 |
<p>I'm making a c parser using python, and my function pattern is producing very strange results.</p>
<p>My pattern is:</p>
<pre><code>([\w\*\[\]]*)\W*([A-Za-z_][\w]+)\W*\(\W*([\w ,\*\[\]]*)\)\W*\{([\w\W]+)return.*;\n\};\n
</code></pre>
<p>When I supply the string:</p>
<pre><code>int main(int argc, char* argv[]) {
int test[] = {};
return 0;
};
int test(int a, int b) {
return a*b;
};
</code></pre>
<p>The call to <code>re.findall</code> returns:</p>
<pre><code>('int', 'main', 'int argc, char* argv[]', '\n return 0;\n};\n\nint test(int a, int b) {\n return a*b;\n')
</code></pre>
<p>I want it to return two matches, one for each function.</p>
<p>It appears to me that the parser is ignoring the ending <code>};</code> in each function.</p>
<p>How would I correct this?</p>
| -1 |
2016-09-20T16:44:57Z
| 39,599,847 |
<p>It's this part:</p>
<pre><code>([\w\W]+)return.*
</code></pre>
<p>Both quantifiers here are greedy. In particular, <code>[\w\W]+</code> matches newlines, so it runs ahead to the <em>last</em> <code>return ...;\n};\n</code> in the input, which is why your first match swallows the second function. Make them non-greedy:</p>
<pre><code>([\w\W]+?)return.*?
</code></pre>
<p>Or, for the part after <code>return</code>, match anything <em>except</em> a newline:</p>
<pre><code>return[^\n]*
</code></pre>
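<p>A quick self-contained check: with the multi-line body group made lazy (<code>[\w\W]+?</code>) and the newline-free tail after <code>return</code>, the pattern yields one match per function:</p>

```python
import re

# Lazy body group ([\w\W]+?) plus a tail that cannot cross lines.
pattern = (r"([\w\*\[\]]*)\W*([A-Za-z_][\w]+)\W*\(\W*([\w ,\*\[\]]*)\)"
           r"\W*\{([\w\W]+?)return[^\n]*;\n\};\n")

source = """int main(int argc, char* argv[]) {
    int test[] = {};
    return 0;
};

int test(int a, int b) {
    return a*b;
};
"""

matches = re.findall(pattern, source)
print([m[1] for m in matches])  # ['main', 'test']
```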
| 0 |
2016-09-20T16:58:40Z
|
[
"python",
"c",
"regex"
] |
CSV breaks up values on non delimeter
| 39,599,628 |
<p>I am trying to read mixed CSV fields, where there are quoted fields and non-quoted numerical with the following :</p>
<pre><code>from csv import reader
bar = """1234,"abc,def","dasd",341234234"""
foo = reader(bar)
[x for x in foo]
</code></pre>
<p>this returns </p>
<pre><code>[['1'], ['2'], ['3'], ['4'], ['', ''], ['abc,def'], ['', ''], ['dasd'], ['', ''], ['3'], ['4'], ['1'], ['2'], ['3'], ['4'], ['2'], ['3'], ['4']]
</code></pre>
<p>I tried using
foo = reader(bar, delimiter=',', quotechar='"')</p>
<p>But it still breaks up numbers. I basically need to read csv.QUOTE_NONNUMERIC
from writer, but it is not reading it properly.</p>
| 0 |
2016-09-20T16:45:45Z
| 39,599,811 |
<p>csv reader works on file objects. Here is what you could do</p>
<pre><code>from csv import reader
import StringIO
bar = """1234,"abc,def","dasd",341234234"""
f = StringIO.StringIO(bar)
foo = reader(f, delimiter=',')
print [x for x in foo]
</code></pre>
<p>This will give you o/p</p>
<pre><code>[['1234', 'abc,def', 'dasd', '341234234']]
</code></pre>
<p>Hope that works for you.</p>
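<p>And since the question mentions <code>csv.QUOTE_NONNUMERIC</code>: passing that to the reader also converts the unquoted fields to numbers (Python 3 sketch, with <code>io.StringIO</code> standing in for a real file):</p>

```python
import csv
import io

f = io.StringIO('1234,"abc,def","dasd",341234234')
# QUOTE_NONNUMERIC on a reader turns every unquoted field into a float.
rows = list(csv.reader(f, quoting=csv.QUOTE_NONNUMERIC))
print(rows)  # [[1234.0, 'abc,def', 'dasd', 341234234.0]]
```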
| 2 |
2016-09-20T16:56:15Z
|
[
"python",
"csv"
] |
Use map and filter instead of a for loop?
| 39,599,642 |
<p>How do I write an equivalent of the code below using map and filter?</p>
<pre><code>res = []
for x in range(5):
    if x % 2 == 0:
        for y in range(5):
            if y % 2 == 1:
                res.append((x, y))
</code></pre>
<p>This is the expected result:</p>
<pre><code>[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
</code></pre>
<p>Here's the code I wrote, but it doesn't seem to work:</p>
<pre><code>list( map(( lambda x,y: (x,y)), filter((lambda x: x%2 == 0), range(5)), filter((lambda y: y%2 != 0), range(5))))
</code></pre>
| 0 |
2016-09-20T16:46:42Z
| 39,599,688 |
<p>You can write it as (in Python 2.x):</p>
<pre><code>xs = filter(lambda x: x % 2 == 0, range(5))
ys = filter(lambda y: y % 2 == 1, range(5))
res = [(x, y) for x in xs for y in ys]
</code></pre>
<p>This also uses a <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>.</p>
<p>In Python 3.x:</p>
<pre><code>xs = list(filter(lambda x: x % 2 == 0, range(5)))
ys = list(filter(lambda y: y % 2 == 1, range(5)))
res = [(x, y) for x in xs for y in ys]
</code></pre>
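<p>If you want to stay entirely with <code>filter</code>, the pairing itself can come from <code>itertools.product</code> (Python 3 sketch):</p>

```python
from itertools import product

xs = filter(lambda x: x % 2 == 0, range(5))
ys = filter(lambda y: y % 2 == 1, range(5))

# product() materialises both iterators, pairing every x with every y.
res = list(product(xs, ys))
print(res)  # [(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
```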
| 3 |
2016-09-20T16:49:10Z
|
[
"python",
"dictionary",
"filter",
"functional-programming"
] |
Use map and filter instead of a for loop?
| 39,599,642 |
<p>How do I write an equivalent of the code below using map and filter?</p>
<pre><code>res = []
for x in range(5):
    if x % 2 == 0:
        for y in range(5):
            if y % 2 == 1:
                res.append((x, y))
</code></pre>
<p>This is the expected result:</p>
<pre><code>[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
</code></pre>
<p>Here's the code I wrote, but it doesn't seem to work:</p>
<pre><code>list( map(( lambda x,y: (x,y)), filter((lambda x: x%2 == 0), range(5)), filter((lambda y: y%2 != 0), range(5))))
</code></pre>
| 0 |
2016-09-20T16:46:42Z
| 39,599,722 |
<p>You don't even need map and filter, you could do the whole thing in a list comprehension</p>
<pre><code>[(x,y) for x in range(5) for y in range(5) if not x%2 and y%2]
</code></pre>
| 2 |
2016-09-20T16:51:38Z
|
[
"python",
"dictionary",
"filter",
"functional-programming"
] |
Use map and filter instead of a for loop?
| 39,599,642 |
<p>How do I write an equivalent of the code below using map and filter?</p>
<pre><code>res = []
for x in range(5):
    if x % 2 == 0:
        for y in range(5):
            if y % 2 == 1:
                res.append((x, y))
</code></pre>
<p>This is the expected result:</p>
<pre><code>[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
</code></pre>
<p>Here's the code I wrote, but it doesn't seem to work:</p>
<pre><code>list( map(( lambda x,y: (x,y)), filter((lambda x: x%2 == 0), range(5)), filter((lambda y: y%2 != 0), range(5))))
</code></pre>
| 0 |
2016-09-20T16:46:42Z
| 39,599,855 |
<p>You need to use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>product</code></a> method from <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow"><code>itertools</code></a>, which produces cartesian product of input iterables.</p>
<pre><code>x_arr = [x for x in range(5) if x % 2 == 0]
y_arr = [y for y in range(5) if y % 2 == 1]

from itertools import product
final_arr = list(product(x_arr, y_arr))
</code></pre>
<p>Output would be <code>[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]</code></p>
| 1 |
2016-09-20T16:59:15Z
|
[
"python",
"dictionary",
"filter",
"functional-programming"
] |
Use map and filter instead of a for loop?
| 39,599,642 |
<p>How do I write an equivalent of the code below using map and filter?</p>
<pre><code>res = []
for x in range(5):
if x % 2 == 0:
for y in range(5):
if y % 2 == 1:
res.append((x, y))
</code></pre>
<p>This is the expected result:</p>
<pre><code>[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
</code></pre>
<p>Here's the code I wrote, but it doesn't seem to work:</p>
<pre><code>list( map(( lambda x,y: (x,y)), filter((lambda x: x%2 == 0), range(5)), filter((lambda y: y%2 != 0), range(5))))
</code></pre>
| 0 |
2016-09-20T16:46:42Z
| 39,599,960 |
<p>An alternative, you can make use of the fact that odd or even numbers can be captured by using the 3rd <code>step</code> argument of <code>range</code>:</p>
<pre><code>>>> sorted((x,y) for y in range(1,5,2) for x in range(0,5,2))
[(0, 1), (0, 3), (2, 1), (2, 3), (4, 1), (4, 3)]
</code></pre>
| 1 |
2016-09-20T17:05:14Z
|
[
"python",
"dictionary",
"filter",
"functional-programming"
] |
metaheuristic-algorithms-python throws import error but is definitely installed (this package is only tested in python3 and I'm using python 2.7)
| 39,599,691 |
<p>I can't import the <a href="https://pypi.python.org/pypi/metaheuristic-algorithms-python/0.1.6#downloads" rel="nofollow">metaheuristic-algorithms-python</a> library after installing it in python. Why isn't this working? It is installed in my site-packages but it cannot be imported. The docs say this is only tested for python3. Looking through the code, it looks like this should work in python 2.7. What's going on?</p>
<pre><code>$ virtualenv working
$ . working/bin/activate
$ pip install metaheuristic-algorithms-python
$ ls working/lib/python2.7/site-packages/metaheuristic_algorithms
base_algorithm.py command_line.pyc function_wrappers harmony_search.py simplified_particle_swarm_optimization.pyc version.py
base_algorithm.pyc firefly_algorithm.py genetic_algorithm.py harmony_search.pyc simulated_annealing.py version.pyc
command_line.py firefly_algorithm.pyc genetic_algorithm.pyc simplified_particle_swarm_optimization.py simulated_annealing.pyc
$ working/bin/python -c "import metaheuristic_algorithms"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named metaheuristic_algorithms
</code></pre>
| -1 |
2016-09-20T16:49:34Z
| 39,599,864 |
<p>You know how they said they don't support Python 2? Well, this is one of those things that works on Python 3 and not Python 2. Specifically, this package has no <code>__init__.py</code>.</p>
<p>On Python 3, a package with no <code>__init__.py</code> is a <a href="https://www.python.org/dev/peps/pep-0420/" rel="nofollow">namespace package</a>, a kind of package that works slightly differently from regular packages. On Python 2, a folder with no <code>__init__.py</code> isn't even a package. You can't import this thing, because Python doesn't consider it a package.</p>
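<p>You can reproduce the difference with a throwaway package (all names below are made up):</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "demo_nspkg"))        # note: no __init__.py
with open(os.path.join(root, "demo_nspkg", "mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
from demo_nspkg import mod   # Python 3: namespace package, import works
print(mod.VALUE)             # 42 -- on Python 2 this raises ImportError
```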
| 2 |
2016-09-20T16:59:47Z
|
[
"python",
"python-2.7",
"python-3.x",
"pip",
"importerror"
] |
UnicodeEncodeError in Python when trying to encrypt and write to a file
| 39,599,747 |
<p>Second post here (on the same code). However, this time it is a different issue. It only happens every so often, and it stumps me as to why. This is the output and error: </p>
<pre><code>Phrase to be encrypted: Hello world
Shift keyword, Word only: Hello
:-) :-) :-) Encrypting Phrase 10%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 20%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 30%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 40%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 50%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 60%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 70%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 80%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 90%... :-) :-) :-)
:-) :-) :-) Encrypting Phrase 100%... :-) :-) :-)
:-) :-) :-) Checking Security of encrypted phrase.... :-) :-) :-)
:-) :-) :-) Done! :-) :-) :-)
Here is your Encrypted Phrase:I?j!Qgea:~~[
Traceback (most recent call last):
File "C:\Users\Isaac Scarisbrick\Downloads\Keyword Cipher_1.py", line 60, in <module>
file.write (str(result) + " " + (Cipher))
File "C:\Users\Isaac Scarisbrick\AppData\Local\Programs\Python\Python35-32\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 2-3: character maps to <undefined>
</code></pre>
<p>And this is my code:</p>
<pre><code>import random
phrase = input('Phrase to be encrypted: ')
shift_key = input("Shift keyword, Word only: ")
Encryption_Base = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,./;:<>?'@#~]}[{=+-_)(*&^%$£!`¬\|"
key_encryption = random.randint(0, 94)
Cipher = ''
for c in shift_key:
    if c in Encryption_Base:
        Cipher += Encryption_Base[(Encryption_Base.index(c)+key_encryption)%(len(Encryption_Base))]

def Keyword_Encryption(key, phrase):
    if len(phrase) > len(key):
        while len(phrase) > len(key):
            length_to_add = len(phrase) - len(key)
            key = key + key[0:length_to_add]
    elif len(phrase) < len(key):
        while len(phrase) < len(key):
            length_to_sub = len(key) - (len(key) - len(phrase))
            key = key[0:length_to_sub]
    else:
        pass
    shifted_phrase = ''
    for i in range(len(phrase)):
        new_letter = (ord(key[i]) - 96) + (ord(phrase[i]) - 96) + 96
        if new_letter > 1220:
            new_letter = chr(new_letter - 26)
        else:
            new_letter = chr(new_letter)
        shifted_phrase = shifted_phrase + new_letter
    return shifted_phrase
result = Keyword_Encryption(Cipher, phrase)
print (" ")
print (":-) " * 3 + "Encrypting Phrase 10%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 20%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 30%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 40%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 50%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 60%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 70%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 80%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 90%... " + ":-) " * 3)
print (":-) " * 3 + "Encrypting Phrase 100%... " + ":-) " * 3)
print (":-) " * 3 + "Checking Security of encrypted phrase.... " + ":-) " * 3)
print (":-) " * 3 + "Done! " + ":-) " * 3)
print (" ")
print ('Here is your Encrypted Phrase:' + (result) + (Cipher))
file = open("Encrypted.txt", "w")
file.write (str(result) + " " + (Cipher))
file.close()
</code></pre>
<p>Thank you very much in advance, as this is a little extension to the task I have been set in my A level class. There are some snippets of code in here which you may have seen before, as there was an encryption program made in Python which I took bits from. Thank you for your time :).</p>
<p>EDIT: If this helps it sometimes throws this error too:</p>
<pre><code>Phrase to be encrypted: Hello World
Shift keyword, Word only: Hello
Traceback (most recent call last):
File "C:\Users\Isaac Scarisbrick\Downloads\Keyword Cipher_1.py", line 42, in <module>
result = Keyword_Encryption(Cipher, phrase)
File "C:\Users\Isaac Scarisbrick\Downloads\Keyword Cipher_1.py", line 37, in Keyword_Encryption
new_letter = chr(new_letter)
ValueError: chr() arg not in range(0x110000)
</code></pre>
| 1 |
2016-09-20T16:52:43Z
| 39,611,041 |
<p>Okay. So anyone else viewing this post knows: the fix is to open the file with an explicit encoding. Change</p>
<p><code>file = open("Encrypted.txt", "w")</code></p>
<p>to</p>
<p><code>file = open("Encrypted.txt", "w", encoding="utf8")</code></p>
<p>The <code>encoding</code> argument goes in the <code>open()</code> call, not in <code>write()</code>; <code>codecs.open</code> from the <code>codecs</code> module would also work, but it isn't needed in Python 3. Hope I have helped your issue and thank you to all that helped me come to this conclusion :-).</p>
| 0 |
2016-09-21T08:16:11Z
|
[
"python",
"encryption",
"unicode",
"python-unicode"
] |
Why does django 1.9 keeps using python2.7 when my virtualenv have python3.5?
| 39,599,769 |
<p>I'm having problems with Python versions. I'm actually developing a web page with Python 3.5 under Windows 7, but on my server (CentOS 7) I created a virtualenv with Python 3.5 (because the default version of Python on Linux is 2.7). </p>
<p>The problem is that when i get an error, it says that django is using python2.7:</p>
<pre><code>Request Method: GET
Request URL: http://proyect/url/
Django Version: 1.9.8
Exception Type: UnicodeEncodeError
Exception Value:
'ascii' codec can't encode character u'\xed' in position 9: ordinal not in range(128)
Exception Location: /usr/lib/python2.7/site-packages/django/utils/encoding.py in force_text, line 80
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path:
['/home/user/proyect',
'/home/user/proyect_env/lib/python3.5/site-packages',
'/usr/lib64/python27.zip',
'/usr/lib64/python2.7',
'/usr/lib64/python2.7/plat-linux2',
'/usr/lib64/python2.7/lib-tk',
'/usr/lib64/python2.7/lib-old',
'/usr/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7/site-packages',
'/usr/lib64/python2.7/site-packages/gtk-2.0',
'/usr/lib/python2.7/site-packages']
</code></pre>
<p>I'm almost 100% sure that this message is displayed because Django is using the wrong Python version.</p>
<p>In my django.conf inside /etc/httpd/conf.d/ I have this configured:</p>
<pre><code>WSGIDaemonProcess proyect python-path=/home/user/proyect:/home/user/proyect_env/lib/python3.5/site-packages
WSGIProcessGroup project
WSGIScriptAlias / /home/user/proyect/proyect/wsgi.py
</code></pre>
<p>I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-centos-7" rel="nofollow">this</a> tutorial to configure my server.</p>
<p><strong>Edit #1</strong>
After following @Ixer's indications, I got this error traceback in /etc/httpd/logs/error_log:</p>
<pre><code>7:49:05.114720 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] mod_wsgi (pid=14836): Exception occurred processing WSGI script '/home/user/proyect/proyect/wsgi.py'.
[Tue Sep 20 17:49:05.114779 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] Traceback (most recent call last):
[Tue Sep 20 17:49:05.114810 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 158, in __call__
[Tue Sep 20 17:49:05.114862 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] self.load_middleware()
[Tue Sep 20 17:49:05.114883 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 51, in load_middleware
[Tue Sep 20 17:49:05.114910 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] mw_class = import_string(middleware_path)
[Tue Sep 20 17:49:05.114926 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/utils/module_loading.py", line 20, in import_string
[Tue Sep 20 17:49:05.114951 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] module = import_module(module_path)
[Tue Sep 20 17:49:05.114966 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Tue Sep 20 17:49:05.114991 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] __import__(name)
[Tue Sep 20 17:49:05.115007 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 3, in <module>
[Tue Sep 20 17:49:05.115031 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.backends import RemoteUserBackend
[Tue Sep 20 17:49:05.115046 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/backends.py", line 4, in <module>
[Tue Sep 20 17:49:05.115070 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.models import Permission
[Tue Sep 20 17:49:05.115085 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/models.py", line 4, in <module>
[Tue Sep 20 17:49:05.115109 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
[Tue Sep 20 17:49:05.115124 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
[Tue Sep 20 17:49:05.115148 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] class AbstractBaseUser(models.Model):
[Tue Sep 20 17:49:05.115163 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/db/models/base.py", line 94, in __new__
[Tue Sep 20 17:49:05.115187 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] app_config = apps.get_containing_app_config(module)
[Tue Sep 20 17:49:05.115203 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
[Tue Sep 20 17:49:05.115226 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] self.check_apps_ready()
[Tue Sep 20 17:49:05.115241 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
[Tue Sep 20 17:49:05.115263 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] raise AppRegistryNotReady("Apps aren't loaded yet.")
</code></pre>
<p><strong>Edit #2</strong></p>
<p>The problem was solved: when I was installing the apps via pip install -r requirements.txt, for some reason they were installed under Python 2.7 (even when I had sourced the virtualenv's activate script). So what I did was install the apps again via /home/user/project_env/bin/pip install -r requirements... This fixed the error, but now I'm having problems with an app:</p>
<pre><code>[Tue Sep 20 21:34:49.998172 2016] [:error] [pid 18220] [remote 10.105.40.106:172] mod_wsgi (pid=18220): Target WSGI script '/home/user/project/project/wsgi.py' cannot be loaded as Python module.
[Tue Sep 20 21:34:49.998207 2016] [:error] [pid 18220] [remote 10.105.40.106:172] mod_wsgi (pid=18220): Exception occurred processing WSGI script '/home/rortega/smce/smce/wsgi.py'.
[Tue Sep 20 21:34:49.998229 2016] [:error] [pid 18220] [remote 10.105.40.106:172] Traceback (most recent call last):
[Tue Sep 20 21:34:49.998255 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project/project/wsgi.py", line 33, in <module>
[Tue Sep 20 21:34:49.998314 2016] [:error] [pid 18220] [remote 10.105.40.106:172] application = get_wsgi_application()
[Tue Sep 20 21:34:49.998326 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
[Tue Sep 20 21:34:49.998362 2016] [:error] [pid 18220] [remote 10.105.40.106:172] django.setup()
[Tue Sep 20 21:34:49.998372 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/__init__.py", line 18, in setup
[Tue Sep 20 21:34:49.998406 2016] [:error] [pid 18220] [remote 10.105.40.106:172] apps.populate(settings.INSTALLED_APPS)
[Tue Sep 20 21:34:49.998417 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate
[Tue Sep 20 21:34:49.998516 2016] [:error] [pid 18220] [remote 10.105.40.106:172] app_config = AppConfig.create(entry)
[Tue Sep 20 21:34:49.998527 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/apps/config.py", line 90, in create
[Tue Sep 20 21:34:49.998590 2016] [:error] [pid 18220] [remote 10.105.40.106:172] module = import_module(entry)
[Tue Sep 20 21:34:49.998601 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Tue Sep 20 21:34:49.998637 2016] [:error] [pid 18220] [remote 10.105.40.106:172] __import__(name)
[Tue Sep 20 21:34:49.998647 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/stdimage/__init__.py", line 5, in <module>
[Tue Sep 20 21:34:49.998676 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from .models import StdImageField # NOQA
[Tue Sep 20 21:34:49.998686 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/stdimage/models.py", line 14, in <module>
[Tue Sep 20 21:34:49.998751 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from PIL import Image, ImageOps
[Tue Sep 20 21:34:49.998761 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/PIL/Image.py", line 67, in <module>
[Tue Sep 20 21:34:49.999163 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from PIL import _imaging as core
[Tue Sep 20 21:34:49.999185 2016] [:error] [pid 18220] [remote 10.105.40.106:172] ImportError: cannot import name _imaging
</code></pre>
| 1 |
2016-09-20T16:54:13Z
| 39,600,214 |
<p>That tutorial doesn't seem complete, since it doesn't explain how to use the virtualenv on the server. Even though the last line of the Apache config points to a wsgi.py script, it never says what should be in there.</p>
<p>try something like this:</p>
<pre><code>#/home/user/project/project/wsgi.py
import os
import sys
import site
# Add the site-packages of the chosen virtualenv to work with
# (addsitedir does not expand '~' itself, so expand it explicitly)
site.addsitedir(os.path.expanduser('~/.virtualenvs/myprojectenv/local/lib/python3.5/site-packages'))
# Add the app's directory to the PYTHONPATH
sys.path.append('/home/django_projects/MyProject')
sys.path.append('/home/django_projects/MyProject/myproject')
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
# Activate your virtual env
# (execfile was removed in Python 3; read and exec the script instead)
activate_env = os.path.expanduser("~/.virtualenvs/myprojectenv/bin/activate_this.py")
with open(activate_env) as f:
    exec(f.read(), dict(__file__=activate_env))
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
</code></pre>
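<p>An alternative worth noting (a sketch, assuming mod_wsgi itself was compiled against the same Python 3.5 as the virtualenv): newer mod_wsgi versions let you point the daemon process at the virtualenv root with <code>python-home</code>, which avoids the in-code activation entirely:</p>

```apache
# Hypothetical paths matching the question's layout
WSGIDaemonProcess proyect python-home=/home/user/proyect_env python-path=/home/user/proyect
WSGIProcessGroup proyect
WSGIScriptAlias / /home/user/proyect/proyect/wsgi.py
```

<p>Note that <code>python-home</code> only works when mod_wsgi is built for the same major.minor Python version as the virtualenv.</p>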
| -1 |
2016-09-20T17:21:02Z
|
[
"python",
"django",
"apache",
"mod-wsgi",
"centos7"
] |
Why does Django 1.9 keep using Python 2.7 when my virtualenv has Python 3.5?
| 39,599,769 |
<p>I'm having problems with Python versions. I'm actually developing a web page with Python 3.5 under Windows 7, but on my server (CentOS 7) I created a virtualenv with Python 3.5 (because the default version of Python on Linux is 2.7).</p>
<p>The problem is that when I get an error, it says that Django is using Python 2.7:</p>
<pre><code>Request Method: GET
Request URL: http://proyect/url/
Django Version: 1.9.8
Exception Type: UnicodeEncodeError
Exception Value:
'ascii' codec can't encode character u'\xed' in position 9: ordinal not in range(128)
Exception Location: /usr/lib/python2.7/site-packages/django/utils/encoding.py in force_text, line 80
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path:
['/home/user/proyect',
'/home/user/proyect_env/lib/python3.5/site-packages',
'/usr/lib64/python27.zip',
'/usr/lib64/python2.7',
'/usr/lib64/python2.7/plat-linux2',
'/usr/lib64/python2.7/lib-tk',
'/usr/lib64/python2.7/lib-old',
'/usr/lib64/python2.7/lib-dynload',
'/usr/lib64/python2.7/site-packages',
'/usr/lib64/python2.7/site-packages/gtk-2.0',
'/usr/lib/python2.7/site-packages']
</code></pre>
<p>I'm almost 100% sure that this message is displayed because Django is using the wrong Python version.</p>
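<p>The <code>UnicodeEncodeError</code> itself is consistent with that: encoding a non-ASCII character like <code>u'\xed'</code> ('í') with the default ascii codec always fails, but only Python 2 performs the implicit str/unicode conversions inside Django that trigger it. A minimal sketch of the failing step (the sample string is hypothetical):</p>

```python
# Hypothetical sample containing u'\xed' ('í'), the character from the traceback
s = u"versi\xf3n con \xed"

try:
    s.encode("ascii")  # what Python 2's implicit conversion effectively attempts
    failed = False
except UnicodeEncodeError:
    failed = True  # 'ascii' codec can't encode character ...
```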
<p>In my django.conf inside /etc/httpd/conf.d/ I have this configured:</p>
<pre><code>WSGIDaemonProcess proyect python-path=/home/user/proyect:/home/user/proyect_env/lib/python3.5/site-packages
WSGIProcessGroup project
WSGIScriptAlias / /home/user/proyect/proyect/wsgi.py
</code></pre>
<p>I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-centos-7" rel="nofollow">this</a> tutorial to configure my server.</p>
<p><strong>Edit #1</strong>
After following @Ixer's indications, I got this error traceback in /etc/httpd/logs/error_log:</p>
<pre><code>7:49:05.114720 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] mod_wsgi (pid=14836): Exception occurred processing WSGI script '/home/user/proyect/proyect/wsgi.py'.
[Tue Sep 20 17:49:05.114779 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] Traceback (most recent call last):
[Tue Sep 20 17:49:05.114810 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 158, in __call__
[Tue Sep 20 17:49:05.114862 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] self.load_middleware()
[Tue Sep 20 17:49:05.114883 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 51, in load_middleware
[Tue Sep 20 17:49:05.114910 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] mw_class = import_string(middleware_path)
[Tue Sep 20 17:49:05.114926 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/utils/module_loading.py", line 20, in import_string
[Tue Sep 20 17:49:05.114951 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] module = import_module(module_path)
[Tue Sep 20 17:49:05.114966 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Tue Sep 20 17:49:05.114991 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] __import__(name)
[Tue Sep 20 17:49:05.115007 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/middleware.py", line 3, in <module>
[Tue Sep 20 17:49:05.115031 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.backends import RemoteUserBackend
[Tue Sep 20 17:49:05.115046 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/backends.py", line 4, in <module>
[Tue Sep 20 17:49:05.115070 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.models import Permission
[Tue Sep 20 17:49:05.115085 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/models.py", line 4, in <module>
[Tue Sep 20 17:49:05.115109 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
[Tue Sep 20 17:49:05.115124 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
[Tue Sep 20 17:49:05.115148 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] class AbstractBaseUser(models.Model):
[Tue Sep 20 17:49:05.115163 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/db/models/base.py", line 94, in __new__
[Tue Sep 20 17:49:05.115187 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] app_config = apps.get_containing_app_config(module)
[Tue Sep 20 17:49:05.115203 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
[Tue Sep 20 17:49:05.115226 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] self.check_apps_ready()
[Tue Sep 20 17:49:05.115241 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
[Tue Sep 20 17:49:05.115263 2016] [:error] [pid 14836] [remote 10.105.40.106:49676] raise AppRegistryNotReady("Apps aren't loaded yet.")
</code></pre>
<p><strong>Edit #2</strong></p>
<p>The problem was solved: when I was installing the apps via pip install -r requirements.txt, for some reason they were installed under Python 2.7 (even when I had sourced the virtualenv's activate script). So what I did was install the apps again via /home/user/project_env/bin/pip install -r requirements... This fixed the error, but now I'm having problems with an app:</p>
<pre><code>[Tue Sep 20 21:34:49.998172 2016] [:error] [pid 18220] [remote 10.105.40.106:172] mod_wsgi (pid=18220): Target WSGI script '/home/user/project/project/wsgi.py' cannot be loaded as Python module.
[Tue Sep 20 21:34:49.998207 2016] [:error] [pid 18220] [remote 10.105.40.106:172] mod_wsgi (pid=18220): Exception occurred processing WSGI script '/home/rortega/smce/smce/wsgi.py'.
[Tue Sep 20 21:34:49.998229 2016] [:error] [pid 18220] [remote 10.105.40.106:172] Traceback (most recent call last):
[Tue Sep 20 21:34:49.998255 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project/project/wsgi.py", line 33, in <module>
[Tue Sep 20 21:34:49.998314 2016] [:error] [pid 18220] [remote 10.105.40.106:172] application = get_wsgi_application()
[Tue Sep 20 21:34:49.998326 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
[Tue Sep 20 21:34:49.998362 2016] [:error] [pid 18220] [remote 10.105.40.106:172] django.setup()
[Tue Sep 20 21:34:49.998372 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/__init__.py", line 18, in setup
[Tue Sep 20 21:34:49.998406 2016] [:error] [pid 18220] [remote 10.105.40.106:172] apps.populate(settings.INSTALLED_APPS)
[Tue Sep 20 21:34:49.998417 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/apps/registry.py", line 85, in populate
[Tue Sep 20 21:34:49.998516 2016] [:error] [pid 18220] [remote 10.105.40.106:172] app_config = AppConfig.create(entry)
[Tue Sep 20 21:34:49.998527 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/django/apps/config.py", line 90, in create
[Tue Sep 20 21:34:49.998590 2016] [:error] [pid 18220] [remote 10.105.40.106:172] module = import_module(entry)
[Tue Sep 20 21:34:49.998601 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Tue Sep 20 21:34:49.998637 2016] [:error] [pid 18220] [remote 10.105.40.106:172] __import__(name)
[Tue Sep 20 21:34:49.998647 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/stdimage/__init__.py", line 5, in <module>
[Tue Sep 20 21:34:49.998676 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from .models import StdImageField # NOQA
[Tue Sep 20 21:34:49.998686 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/stdimage/models.py", line 14, in <module>
[Tue Sep 20 21:34:49.998751 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from PIL import Image, ImageOps
[Tue Sep 20 21:34:49.998761 2016] [:error] [pid 18220] [remote 10.105.40.106:172] File "/home/user/project_env/lib/python3.5/site-packages/PIL/Image.py", line 67, in <module>
[Tue Sep 20 21:34:49.999163 2016] [:error] [pid 18220] [remote 10.105.40.106:172] from PIL import _imaging as core
[Tue Sep 20 21:34:49.999185 2016] [:error] [pid 18220] [remote 10.105.40.106:172] ImportError: cannot import name _imaging
</code></pre>
| 1 |
2016-09-20T16:54:13Z
| 39,604,286 |
<p>You need to rebuild <strong>mod_wsgi</strong> from source for Python 3.</p>
<p>Download <strong>mod_wsgi</strong>. Unpack it.</p>
<pre><code>tar xvfz mod_wsgi-X.Y.tar.gz
</code></pre>
<p>Configure for Python 3.5:</p>
<pre><code>./configure --with-apxs=/usr/local/apache/bin/apxs \
--with-python=/path/to/python3.5
</code></pre>
<p>Make and install:</p>
<pre><code>make
make install
</code></pre>
<p>That said, it's often recommended to use Nginx/uWSGI or Nginx/Gunicorn instead; both are easy to configure for either Python 2 or Python 3.</p>
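<p>To verify which interpreter the rebuilt module actually embeds, one quick sanity check (a sketch, not part of the build steps) is to record the interpreter details from the WSGI script itself; placed at the top of wsgi.py, this shows up in the Apache error log:</p>

```python
import sys

# Which binary and version is actually running this code; under mod_wsgi
# this reveals the interpreter the module was compiled against.
interpreter_info = "%s %d.%d" % (sys.executable,
                                 sys.version_info[0], sys.version_info[1])
print(interpreter_info)
```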
| 1 |
2016-09-20T21:52:30Z
|
[
"python",
"django",
"apache",
"mod-wsgi",
"centos7"
] |
Python Conditionally Add Class to <td> Tags in HTML Table
| 39,599,802 |
<p>I have some data in the form of a CSV file that I'm reading into Python and converting to an HTML table using pandas.</p>
<p>Here's some example data:</p>
<pre><code>name threshold col1 col2 col3
A 10 12 9 13
B 15 18 17 23
C 20 19 22 25
</code></pre>
<p>And some code:</p>
<pre><code>import pandas as pd
df = pd.read_csv("data.csv")
table = df.to_html(index=False)
</code></pre>
<p>This creates the following HTML:</p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>threshold</th>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>10</td>
<td>12</td>
<td>9</td>
<td>13</td>
</tr>
<tr>
<td>B</td>
<td>15</td>
<td>18</td>
<td>17</td>
<td>23</td>
</tr>
<tr>
<td>C</td>
<td>20</td>
<td>19</td>
<td>22</td>
<td>25</td>
</tr>
</tbody>
</table>
</code></pre>
<p>Now I want to conditionally add a class to each cell in the HTML table if its value is less than a certain threshold. The threshold is different for each row in the table.</p>
<p>So, given the example data above, I want to add class="custom" to the col2 cell in row A and to the col1 cell in row C. In CSS, I will then fill the cell red if it has the "custom" class.</p>
<p>The result would be something like: </p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>threshold</th>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>10</td>
<td>12</td>
<td class="custom">9</td>
<td>13</td>
</tr>
<tr>
<td>B</td>
<td>15</td>
<td>18</td>
<td>17</td>
<td>23</td>
</tr>
<tr>
<td>C</td>
<td>20</td>
<td class="custom">19</td>
<td>22</td>
<td>25</td>
</tr>
</tbody>
</table>
</code></pre>
<p>How can this be achieved using Beautiful Soup?</p>
| 0 |
2016-09-20T16:55:55Z
| 39,600,120 |
<p>Using <em>BeautifulSoup</em> you can add a class to a tag's attrs just as you would set a key/value in a dictionary:</p>
<pre><code>soup = BeautifulSoup(html,"html.parser")
for row in soup.select("tbody tr"):
tds = row.find_all("td")
if int(tds[3].text) < int(tds[1].text):
tds[3]["class"] = "custom"
if int(tds[2].text) < int(tds[1].text):
tds[2]["class"] = "custom"
</code></pre>
<p>Which, using your input HTML, would give you:</p>
<pre><code><html><body><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>threshold</th>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>10</td>
<td>12</td>
<td class="custom">9</td>
<td>13</td>
</tr>
<tr>
<td>B</td>
<td>15</td>
<td>18</td>
<td>17</td>
<td>23</td>
</tr>
<tr>
<td>C</td>
<td>20</td>
<td class="custom">19</td>
<td>22</td>
<td>25</td>
</tr>
</tbody>
</table></body></html>
</code></pre>
<p>Just put whatever condition should decide it inside the if.</p>
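<p>If pulling in BeautifulSoup is not an option, the same post-processing can be sketched with the standard library's <code>xml.etree.ElementTree</code>, assuming the table fragment is well-formed markup (the sample fragment below is a trimmed, hypothetical version of the <code>to_html()</code> output):</p>

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical fragment of the generated table
html = """<table><tbody>
<tr><td>A</td><td>10</td><td>12</td><td>9</td><td>13</td></tr>
<tr><td>C</td><td>20</td><td>19</td><td>22</td><td>25</td></tr>
</tbody></table>"""

root = ET.fromstring(html)
for row in root.iter("tr"):
    tds = row.findall("td")
    threshold = int(tds[1].text)   # per-row threshold column
    for td in tds[2:]:             # the value columns
        if int(td.text) < threshold:
            td.set("class", "custom")

result = ET.tostring(root, encoding="unicode")
```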
| 0 |
2016-09-20T17:14:51Z
|
[
"python",
"html",
"pandas",
"beautifulsoup"
] |