title
stringlengths 10
172
| question_id
int64 469
40.1M
| question_body
stringlengths 22
48.2k
| question_score
int64 -44
5.52k
| question_date
stringlengths 20
20
| answer_id
int64 497
40.1M
| answer_body
stringlengths 18
33.9k
| answer_score
int64 -38
8.38k
| answer_date
stringlengths 20
20
| tags
list |
---|---|---|---|---|---|---|---|---|---|
Pyvmomi get folders name | 39,706,470 | <p>I'm new to Python and Django and I need to list all my VMs.
I used pyvmomi and Django but I can't get the folders name from VSphere, it shows a strange line.</p>
<blockquote>
<p>VMware list</p>
<p>'vim.Folder:group-v207'</p>
<p>'vim.Folder:group-v3177'</p>
<p>'vim.Folder:group-v188'</p>
</blockquote>
<p>I have 3 folders on vSphere so I think my connection it's good but that's absolutely not their names.</p>
<p>Here is my code :</p>
<p>views.py</p>
<pre><code>from __future__ import print_function
from django.shortcuts import render
from pyVim.connect import SmartConnect, Disconnect
import ssl
def home(request):
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
try:
connect = SmartConnect(...)
except:
connect = SmartConnect(...)
datacenter = connect.content.rootFolder.childEntity[0]
vmsFolders = datacenter.vmFolder.childEntity
Disconnect(connect)
return render(request, 'vmware/home.html', {'vmsFolders':vmsFolders})
</code></pre>
<p>home.html</p>
<pre><code><h1>VMware list</h1>
{% for vmFolder in vmsFolders %}
<div>
<h3>{{ vmFolder }}</h3>
</div>
{% endfor %}
</code></pre>
<p>Can anybody help me to get the real names of my folders?</p>
| 0 | 2016-09-26T15:14:27Z | 39,752,543 | <p>You need to specifically state you want the name, like this:</p>
<pre><code>vmFolders = datacenter.vmFolder.childEntity
for folder in vmFolders:
print(folder.name)
</code></pre>
| 0 | 2016-09-28T16:05:19Z | [
"python",
"django",
"vsphere",
"pyvmomi"
]
|
Error type in my work about python | 39,706,493 | <pre><code>#!/usr/bin/env python
# coding=utf8
value=input("please enter value:")
result=hex(value)
r=hex(0xffff-result)
print r
print result
TypeError: unsupported operand type(s) for -: 'int' and 'str'
</code></pre>
<p>I study python for a few days,I try this python job,I can't understand what's the shape of '0xffff',it is str or int?and it's right for 'result' be str?</p>
| -4 | 2016-09-26T15:15:41Z | 39,706,611 | <p><code>hex</code> returns a string containing the hexadecimal representation of the given <code>value</code>. You don't need to convert the value to hex, though, since <code>0xffff</code> is just an <code>int</code> literal.</p>
<p>Don't use <code>input</code> in Python 2; use <code>raw_input</code> to get a string, then explicitly (try to) convert the string to the value of the desired type.</p>
<pre><code>value = raw_input("Please enter a value: ")
r = 0xffff - int(value)
print r # Print the result as a decimal value
print hex(r) # Print the result as a hexadecimal value
</code></pre>
| 0 | 2016-09-26T15:21:01Z | [
"python"
]
|
Error type in my work about python | 39,706,493 | <pre><code>#!/usr/bin/env python
# coding=utf8
value=input("please enter value:")
result=hex(value)
r=hex(0xffff-result)
print r
print result
TypeError: unsupported operand type(s) for -: 'int' and 'str'
</code></pre>
<p>I study python for a few days,I try this python job,I can't understand what's the shape of '0xffff',it is str or int?and it's right for 'result' be str?</p>
| -4 | 2016-09-26T15:15:41Z | 39,706,919 | <p>When you are just beginning with Python you can get a huge amount of help by just running an interactive Python session and typing code in. Just enter the command</p>
<pre><code>python
</code></pre>
<p>The code you enter can be expressions or statements. Expressions are automatically evaluated, and the result printed (unless it happens to be <code>None</code>). Statements are executed.</p>
<pre><code>>>> value=input("please enter value:")
please enter value:2134
>>> value
'2134'
</code></pre>
<p>The quotes around the value flag it as a string.</p>
<pre><code>>>> result=hex(value)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object cannot be interpreted as an integer
</code></pre>
<p>If you look at the <a href="https://docs.python.org/2/library/functions.html#hex" rel="nofollow">documentation for the <code>hex</code> built-in function</a> you will see that it takes an integer argument.</p>
<pre><code>>>> hex(int(value))
'0x856'
</code></pre>
<p>So you have now got a hexadecimal string. let's store it.</p>
<pre><code>>>> result = hex(int(value))
>>> r=hex(0xffff-result)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'int' and 'str'
>>> result
'0x856'
</code></pre>
<p>The problem here is that you are trying to subtract a hex <em>string</em> from a hex <em>integer</em>. Because <code>0xffff</code> is in fact an integer value. So all you actually need to do is subtract the (integer) value of your input and subtract it. Then you presumably want to convert the result to hexadecimal.</p>
<pre><code>>>> 0xffff - int(value)
63401
>>> hex(0xffff - int(value))
'0xf7a9'
</code></pre>
<p>By going through this interactive experimental methodology you save yourself considerable time by learning instantly what works and what doesn't work (and often, in the latter case, why not). So you are then much better placed to write your complete program.</p>
| 0 | 2016-09-26T15:36:02Z | [
"python"
]
|
Terminating a program within a time frame through python | 39,706,661 | <p>I'm running a fortran code from a python script which sometimes takes a while to run. Thus I'm limiting the run time with a code taken from <a href="http://stackoverflow.com/a/13821695/3910261">this link</a>:</p>
<pre><code>def timeout(func, args=(), kwargs={}, timeout_duration=15, default=1):
import signal
class TimeoutError(Exception):
pass
def handler(signum, frame):
raise TimeoutError()
# set the timeout handler
signal.signal(signal.SIGALRM, handler)
signal.alarm(timeout_duration)
try:
result = func(*args, **kwargs)
except TimeoutError as exc:
result = default
finally:
signal.alarm(0)
return result
</code></pre>
<p>I'm basically putting another function (partly below) in to this one (above) to run fortran code as:</p>
<pre><code>subprocess.check_output('./../bin/SPhenoUMSSM ../UMSSM/LH_out_'+mod+' > SPheno_log_'+mod, shell=True)
</code></pre>
<p>However I realised that when fortran code takes more than 15 seconds, which is the boundary in timeout function, it leaves in the core and executes other one in the for loop which creates dump in my cores. In order to prevent that, I wanted to use <code>subprocess.popen()</code> since it gives me pid to terminate the job in the core, but I need to wait for the process to be executed as well, like <code>subprocess.check_output()</code> does. Thus I was wondaring if there is a way to combine popen and check_output properties to wait until job is done within 15 seconds and if its not just terminates that.</p>
| 1 | 2016-09-26T15:24:02Z | 39,706,841 | <p>There's a timeout argument on check_output, just set it to 15 seconds.</p>
<pre><code>try:
subprocess.check_output(['arg1', 'arg2'], timeout=15)
except:
print("Timed out")
</code></pre>
<p>Documentation here <a href="https://docs.python.org/3/library/subprocess.html#subprocess.check_output" rel="nofollow">https://docs.python.org/3/library/subprocess.html#subprocess.check_output</a></p>
<p>check_output returns the output as well, so if you care about it just store the result.</p>
<p>There's also a wait function that's useful for more complicated use cases. Both check_output and wait block until the process finishes, or until the timeout is reached.</p>
| 0 | 2016-09-26T15:32:29Z | [
"python",
"subprocess",
"popen"
]
|
Terminating a program within a time frame through python | 39,706,661 | <p>I'm running a fortran code from a python script which sometimes takes a while to run. Thus I'm limiting the run time with a code taken from <a href="http://stackoverflow.com/a/13821695/3910261">this link</a>:</p>
<pre><code>def timeout(func, args=(), kwargs={}, timeout_duration=15, default=1):
import signal
class TimeoutError(Exception):
pass
def handler(signum, frame):
raise TimeoutError()
# set the timeout handler
signal.signal(signal.SIGALRM, handler)
signal.alarm(timeout_duration)
try:
result = func(*args, **kwargs)
except TimeoutError as exc:
result = default
finally:
signal.alarm(0)
return result
</code></pre>
<p>I'm basically putting another function (partly below) in to this one (above) to run fortran code as:</p>
<pre><code>subprocess.check_output('./../bin/SPhenoUMSSM ../UMSSM/LH_out_'+mod+' > SPheno_log_'+mod, shell=True)
</code></pre>
<p>However I realised that when fortran code takes more than 15 seconds, which is the boundary in timeout function, it leaves in the core and executes other one in the for loop which creates dump in my cores. In order to prevent that, I wanted to use <code>subprocess.popen()</code> since it gives me pid to terminate the job in the core, but I need to wait for the process to be executed as well, like <code>subprocess.check_output()</code> does. Thus I was wondaring if there is a way to combine popen and check_output properties to wait until job is done within 15 seconds and if its not just terminates that.</p>
| 1 | 2016-09-26T15:24:02Z | 39,707,979 | <p>Not the most sophisticated piece of code in the world but it may be useful.</p>
<pre><code>import subprocess, time
x = subprocess.Popen(['sleep', '15'])
polling = None
i = 0
while polling == None:
time.sleep(1)
polling = x.poll()
i +=1
if i > 15: break
if polling == None:
try:
x.kill()
print "Time out - process terminated" # process terminated by kill command
except OSError:
print "Process completed on time" # process terminated between poll and kill commands
except Exception as e:
print "Error "+str(e) # kill command failed due to another exception "e"
else:
print "Process Completed after "+str(i)+" seconds"
</code></pre>
<p>Edit: Problems with kill not appearing to function.<br>
Try using <code>os.kill(x.pid, signal.SIGKILL)</code> rather than <code>SIGTERM</code>.<br>
I believe that <code>SIGTERM</code> asks the process to close down cleanly, rather than terminate immediately. Not knowing what drives the fortran script, it's difficult to know what the terminate signal does. Perhaps the code is doing something.<br>
For example:<br>
if I ran a shell script as follows: </p>
<pre><code>#!/bin/bash
trap "echo signal" 15
sleep 30
</code></pre>
<p>and sent it <code>kill -15 pid_number</code>, it would not print "signal" until the sleep had terminated after 30 seconds, whereas if I issued <code>kill -9 pid_number</code> it would terminate immediately with nothing printed out. </p>
<p>The short answer, is I don't know but I suspect that the answer lies within the script running the fortran code.</p>
<p>EDIT:</p>
<p>Note: In order to successfully run <code>x.kill()</code> or <code>os.kill()</code> or <code>subprocess.call('kill '+ str(x.pid), shell=True)</code>, <code>shell</code> option in x needs to be False. Thus one can use </p>
<pre><code>import shlex
args = shlex.split(ARGS HERE)
x = subprocess.Popen(args) # shell=False is default
</code></pre>
<p>But also note that if you want to write the output to a log file by using <code>... >& log_file</code> it wont work since <code>>&</code> is not an valid argument for your script but for your shell environment. Thus one needs to use only arguments that are valid for the script that python runs.</p>
| 1 | 2016-09-26T16:34:26Z | [
"python",
"subprocess",
"popen"
]
|
Google analytics duration zero for the pages visited by Selenium | 39,706,711 | <p>I want to visit a web site with Selenium and python like this:</p>
<pre><code>browser = webdriver.Firefox()
browser.get(url)
time.sleep(20)
browser.close()
</code></pre>
<p>When I check the duration in google analytic after an hour, I see the duratioin is zero.
I've change it to this code to click on a link in the page after 30 seconds:</p>
<pre><code>browser = webdriver.Firefox()
browser.get(url)
time.sleep(30)
elm = browser.find_element_by_xpath(link)
elm.send_keys(Keys.CONTROL + Keys.RETURN)
time.sleep(10)
browser.quit()
</code></pre>
<p>But nothing has changes.<br>
what's wrong with this?</p>
| -1 | 2016-09-26T15:26:38Z | 39,709,708 | <p>Session duration is the delta between the timestamps of the first and the last interaction within a session (i.e. interactions with the same client id within a given timeframe). As far as I can tell your code will only create a single interaction per session, so there is no second data point to calculate a duration.</p>
| 0 | 2016-09-26T18:18:40Z | [
"python",
"selenium",
"selenium-webdriver",
"google-analytics"
]
|
Running sequential code in hdfs | 39,706,713 | <p>I am currently learning the "ins and outs" of working with Hadoop. Here's the </p>
<p>current setup: I have sequential code that I use to create .txt files which I will use as the input data for my mappers. I have currently been running this sequential code "preprocess.py" on a local machine and then moving the generated files to the hdfs, but there are many files that are generated and the moving takes much more time then their generation. </p>
<p>I was wondering if, having copied the "preprocess.py" code to the hdfs, there is any way to run it there allowing the generated files to be created on the hdfs instead of requiring the move. </p>
<p>Using </p>
<pre><code>"hdfs dfs -python preprocess.py"
</code></pre>
<p>returns an "Unknown command" error, so that obviously will not work. Thank you for your time!</p>
| 0 | 2016-09-26T15:26:40Z | 39,710,574 | <p>It is possible. Just make sure that you push all generated files to single unix location in your python code. Once they are there you can use <code>subprocess</code> module to run to shift the generated file to HDFS. In code it have to wait until file is transferred. Also for making sure that you are not again copying same file consider naming files differently (trying so will give Hadoop error) and delete the file after HDFS transfer is successful.</p>
| 0 | 2016-09-26T19:11:49Z | [
"python",
"hadoop"
]
|
Convert CURL Post to Python Requests Failing | 39,706,735 | <p>I am unable to successfully convert and execute a curl post command to python code.</p>
<p>curl command</p>
<pre><code>curl -X POST -H "Content-Type:application/json; charset=UTF-8" -d '{"name":joe, "type":22, "summary":"Test"}' http://url
</code></pre>
<p>Converted code</p>
<pre><code>import requests
import json
url="http://url"
data = {"name":joe, "type":22, "summary":"Test"}
headers = {'Content-type': "application/json; charset=utf8"}
response = requests.post (url, data=json.dumps(data), headers=headers)
print response.text
print response.headers
</code></pre>
<p>I get no response in return, when I execute it manually from the shell it works fine, but when I execute the code, nothing happens, I don't see errors or anything. </p>
| 2 | 2016-09-26T15:27:43Z | 39,707,404 | <p>If you are using one of the latest versions of requests:
Try using the 'json' kwarg (no need to convert to json explicitly) instead of the 'data' kwarg:</p>
<pre><code>response = requests.post(url, json=data, headers=headers)
</code></pre>
<p>Note: Also, this way you can omit the 'Content-type' header.</p>
| -1 | 2016-09-26T16:01:18Z | [
"python",
"curl",
"server",
"python-requests"
]
|
Using win32com to MERGE and UNMERGE cells | 39,706,776 | <p>All,</p>
<p>I have a <strong>table</strong> inside a <strong>Word document</strong> that contains <strong>merged cells</strong>. I would like to <strong>unmerge</strong> those cells using the <strong>win32com</strong> package.</p>
<p>An example of this is when ROW 1 contains 5 cells, and ROW 2 contains 6 cells.</p>
<p>Ideally, I would like to unmerge all merged cells in ROW 1 such that the unmerged cells line up with ROW 2 and such that the data is displayed in the left-most cell of the resulting unmerged range.</p>
<p><strong>Example:</strong></p>
<p><strong>(Merged)</strong></p>
<pre><code>+++++++++++++++++++++++++++++++++++++
|hi |bye | |Hello |none |
+++++++++++++++++++++++++++++++++++++
|1 |21 |23 |good |bye |3 |
+++++++++++++++++++++++++++++++++++++
</code></pre>
<p><strong>(Unmerged)</strong></p>
<pre><code>+++++++++++++++++++++++++++++++++++++
|hi |bye | |Hello| |none |
+++++++++++++++++++++++++++++++++++++
|1 |21 |23 |good |bye |3 |
+++++++++++++++++++++++++++++++++++++
</code></pre>
<p>In the table with <strong>merged</strong> cells, there are a total of <strong>11</strong> cells. In the <strong>unmerged</strong> table there are <strong>12</strong> cells.</p>
<p>Any ideas on how to do this. Documentation for the win32com module is pretty sparse, and what little appears to exist is blocked while I'm at work.</p>
<p>Help would be GREATLY appreciated.</p>
<p>+++--------------------------------------------------------------------------+++</p>
<p><strong>Extra Details:</strong></p>
<p>I'm bringing my data in like this:</p>
<pre><code>#Opens an instance of MS Word in the background, then accesses the referenced
#file.
path = "string containing directory name"
Word = win32.Dispatch("Word.Application")
Word.Visible = False
Word.Documents.Open(path)
#Creates a com element containing access to the document contents of the file referenced above
MT_doc = Word.ActiveDocument
</code></pre>
<p>I then grab the tables out of the file using the following code:</p>
<pre><code>#Determins the number of tables in the Word Document and outputs a table
#element to "table"
num_tables = MT_doc.Tables.Count
table = MT_doc.Tables
</code></pre>
<p>Where I get stuck is that:</p>
<pre><code>table(1).Rows(1).Cells.Count != table(1).Rows(2).Cells.Count
</code></pre>
<p>In this case the first row has 10 cells while the second row has 18 cells. Without being able to split these merged cells, the rest of my code fails to execute.</p>
| 0 | 2016-09-26T15:29:44Z | 39,732,201 | <p>I found a solution to my problem. After a lot of searching it would appear that tables embedded inside of MS Word do NOT have a "merge" property associated with them. However, they do have a "width" property associated with them. Getting this using win32com looks something like this:</p>
<pre><code>#See above for definition of "table" - it's just a win32com COM module
#containing all the tables inside of an MS Word document.
width = table(1).Rows(1).Cells(i).Width
</code></pre>
<p>This returns a type(width) = float for each cell. This value can be compared with the width value of any cell above or below as required. If the width value of any cell in question is NOT equal to the width of a cell above or below, then you know a given cell is formatted in such a way that potential exists for the embedded table to contain a cell formatted much like my original example cell layout. By iterating over the cells above or below, it is possible to add the widths of these cells together until they are equal or approximately equal to each other. For my code the cells must be equal to within one integer value of each other. This is accomplished by using a round() function to avoid a scenario where a person opens the Word document, accidentally changes the width of a single cell, and then only fixes that cell width by eye [This happened to me initially and it was because I had accidentally set all the cell widths in my file to width = 60. When I fixed the table layout, I didn't quite get one cell back where it belonged].</p>
<p>The implementation of the code discussed is rather lengthy and somewhat specific to my particular problem. However, the concept of using cell widths NOT some "merged" property of the cell is broadly applicable and should be a useful approach for others with similar problems to take.</p>
| 0 | 2016-09-27T18:47:58Z | [
"python",
"ms-word",
"win32com"
]
|
Issues with Signal-Slot while using Qthread | 39,706,786 | <p>I wrote a sample code for my issue. This code should produce sum of two given numbers and show the result in text-browser then replace first number with sum, and add it to second number and show the result in text-browser again.This process should continues. The issues are:</p>
<p>1 - Why signal is not working properly?</p>
<p>2 - How can i emit signal with sum-number to show on text-browser?</p>
<pre><code>import sys
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class AreaThread(QThread):
signal = pyqtSignal()
def __init__(self, no1, no2, parent):
QThread.__init__(self)
self.no1 = no1
self.no2 = no2
def area(self):
for i in range(10):
self.sum = self.no1 + self.no2
self.no1 = self.no2
self.no2 = self.sum
self.signalemit()
time.sleep(1)
def signalemit(self):
self.signal.emit()
def run(self):
self.area()
class GeneralizedRun (QMainWindow):
def __init__(self, no1, no2):
QMainWindow.__init__(self)
self.no1 = no1
self.no2 = no2
def initUi(self, mainwindow):
mainwindow.centralWidget = QWidget(mainwindow)
self.runflow = QTextBrowser()
self.runflow.isReadOnly()
self.runflow.setText('running:')
self.runflow.moveCursor(QTextCursor.End)
self.runflow.show()
mainwindow.centralWidget = self.runflow
self.threadruning()
def threadruning(self):
self.area = AreaThread(self.no1, self.no2, self)
self.area.start()
self.area.signal.connect(self.updatetexteditor)
def updatetexteditor(self):
self.runflow.insertPlainText('\n\n' + 'sum')
self.runflow.moveCursor(QTextCursor.End)
class MainApplication(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.setWindowTitle('Home')
self.setGeometry(50, 50, 500, 500)
#Main widget
self.mainwidget = QWidget()
self.mainlayout = QVBoxLayout()
#Widgets
#Form line edits
self.no1 = QLineEdit()
self.no2 = QLineEdit()
self.run = QPushButton('Run')
self.exit = QPushButton('Exit')
self.form = QFormLayout()
self.form.addRow('First number', self.no1)
self.form.addRow('Second number', self.no2)
self.mainlayout.addLayout(self.form)
self.mainlayout.addWidget(self.run)
self.mainlayout.addWidget(self.exit)
self.exit.clicked.connect(self.exitapplication)
self.run.clicked.connect(self.mainprogramrun)
self.mainwidget.setLayout(self.mainlayout)
self.mainwidget.show()
def exitapplication(self):
sys.exit()
def mainprogramrun(self):
number1 = float(self.no1.text())
number2 = float(self.no2.text())
run = GeneralizedRun(number1, number2)
run.initUi(self)
def main():
application = QApplication(sys.argv)
application_window = MainApplication()
application.exec_()
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-09-26T15:30:00Z | 39,774,529 | <p>The example code won't work correctly because you are not keeping a reference to the <code>GeneralizedRun</code> window. So the first thing to fix is this:</p>
<pre><code>class MainApplication(QMainWindow):
...
def mainprogramrun(self):
number1 = float(self.no1.text())
number2 = float(self.no2.text())
# keep a reference to the window
self.runner = GeneralizedRun(number1, number2)
self.runner.initUi(self)
</code></pre>
<p>To pass the sum back to the gui, the thread class should look like this:</p>
<pre><code>class AreaThread(QThread):
# re-define the signal to send a value
signal = pyqtSignal(float)
def __init__(self, no1, no2, parent):
QThread.__init__(self)
self.no1 = no1
self.no2 = no2
def area(self):
for i in range(10):
self.sum = self.no1 + self.no2
self.no1 = self.no2
self.no2 = self.sum
# pass the value
self.signalemit(self.sum)
time.sleep(1)
def signalemit(self, value):
# send the value
self.signal.emit(value)
def run(self):
self.area()
</code></pre>
<p>and the signal handler should look like this:</p>
<pre><code>class GeneralizedRun (QMainWindow):
...
def updatetexteditor(self, value):
# show the value
self.runflow.insertPlainText('\n\nsum: %s' % value)
self.runflow.moveCursor(QTextCursor.End)
</code></pre>
| 1 | 2016-09-29T15:35:02Z | [
"python",
"multithreading",
"pyqt4",
"signals-slots"
]
|
I wish to make sure items in a list don't have duplicate elements not present in inputted word | 39,706,847 | <p>Right, so I need to make sure that each item in a list when compared to an inputted word don't contain duplicate elements that are not present in the inputted word. It's a bit difficult to explain so I'll just show what I've got so far:</p>
<pre><code>listword = input("Inputted word: ")
listcomparison = ["su", "pici", "nope", "pics", "susu", "poici","pic","pooici"]
for line in listcomparison:
word = list(listword)
while len(word) >10:
word = input("Inputted word needs to be less than or 10 letters long: ")
else:
imp_letter = word[4]
wordcount = len(line)
if wordcount >= 4:
chars = list(line)
if imp_letter in line and all(char in word for char in chars):
print(line)
</code></pre>
<p>I didn't mention it at the beginning, but the "imp_word" in the code is there because one requirement in the exercise is that the 5th letter in the inputted word must be present for all words in the list. </p>
<p>The problem with this is that when I run this program, using the inputted word "suspicion" for example, it will return "pici
pics
poici
pooici". The problem then is that words like pooici (well, "words") that have duplicate elements (pooici for example has two O) not present in the inputted word are showing up, when I'm supposed to make it so they... don't. BASICALLY, the inputted word is the allowed alphapet, and the quantity of each character is all that's allowed.</p>
<p>I know I SUCK at explaining this stuff, but I've been looking for a solution for almost a whole day now. Help a newbie out? </p>
| 1 | 2016-09-26T15:32:39Z | 39,707,529 | <p>Use <a href="https://docs.python.org/3.5/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a>.</p>
<p>Here are some examples of the <code>collections.Counter</code> in action:</p>
<pre><code>>>> Counter("read")
Counter({'a': 1, 'r': 1, 'e': 1, 'd': 1})
>>> Counter("dad") - Counter("read") # two "d" in "dad" minus one "d" in "read" leaves one "d"
Counter({'d': 1})
>>> Counter("ad") - Counter("read") # letters "a" and "d" are in "read"
Counter()
>>> bool(Counter()) # an empty Counter evaluates to False
False
</code></pre>
<p>Here's how to code a solution to your problem:</p>
<pre><code>words = ["su", "pici", "nope", "pics", "susu", "poici","pic","pooici"]
letter_counts = Counter("suspicion")
found_words = [word for word in words if not Counter(word) - letter_counts]
</code></pre>
<p><code>found_words</code> contains</p>
<pre><code>['su', 'pici', 'pics', 'poici', 'pic']
</code></pre>
| 1 | 2016-09-26T16:08:33Z | [
"python",
"list",
"python-3.x",
"for-loop",
"while-loop"
]
|
Python - Make module method variables global good practice? | 39,706,867 | <p>I want to separate the code function of a script into a new module. I want to define some configs to pass as parameters in that module. Then, in the module I define this config as global. Is it a good practice? Is there a better solution?</p>
<p><strong>main script:</strong></p>
<pre><code>import myModule
config = {
"foo1" : "bar1",
"foo2" : "bar2",
"foo3" : "bar3"
}
myModule.execute(config)
</code></pre>
<p><strong>module:</strong></p>
<pre><code>def execute(config):
global CONFIG
CONFIG = config
value1, value2 = handleRequest()
print(value1)
print(value2)
def handleRequest()
value1 = doSomething(CONFIG["foo1"])
value2 = doSomethingElse(CONFIG["foo2"])
return value1, value2
</code></pre>
| 1 | 2016-09-26T15:33:30Z | 39,706,935 | <p>What I normally do when I need such kind of global variables is to make a class and define them there. So for example a Configuration class with values stored in them. </p>
<p>This has the following advantages (not an ending list):</p>
<ul>
<li>The class constants/variables 'belong' to something which is more applicable (Configuration in this case).</li>
<li>The code to calculate can be hidden (i.e. functionality can be incorporated in the same class)</li>
<li>Name clashes are prevented. </li>
<li>If in the future there is a need for making multiple instances of configurations it is less of a problem.</li>
</ul>
| 2 | 2016-09-26T15:37:05Z | [
"python",
"python-2.7"
]
|
Serializing custom related field in DRF | 39,706,882 | <p>I am trying to make a serializer with a nested "many to many" relationship. The goal is to get a serialized JSON object contain an array of serialized related objects. The models look like this (names changed, structure preserved)</p>
<pre>
from django.contrib.auth.models import User
PizzaTopping(models.Model):
name = models.CharField(max_length=255)
inventor = models.ForeignKey(User)
Pizza(models.Model):
name = models.CharField(max_length=255)
toppings = models.ManyToManyField(PizzaTopping)
</pre>
<p>The incoming JSON looks like this </p>
<pre>
{
"name": "My Pizza",
"toppings": [
{"name": "cheese", "inventor": "bob"},
{"name": "tomatoes", "inventor": "alice"}
]
}
</pre>
<p>My current serializer code looks like this</p>
<pre>
class ToppingRelatedField(RelatedField):
def get_queryset(self):
return Topping.objects.all()
def to_representation(self, instance):
return {'name': instance.name, 'inventor': instance.inventor.username}
def to_internal_value(self, data):
name = data.get('name', None)
inventor = data.get('inventor', None)
try:
user = User.objects.get(username=inventor)
except Setting.DoesNotExist:
raise serializers.ValidationError('bad inventor')
return Topping(name=name, inventor=user)
class PizzaSerializer(ModelSerializer):
toppings = ToppingRelatedField(many=True)
class Meta:
model = Pizza
fields = ('name', 'toppings')
</pre>
<p>It seems that since I defined the to_internal_value() for the custom field, it should create/update the many-to-many field automatically. But when I try to create pizzas, I get "Cannot add "": the value for field "pizzatopping" is None" ValueError. It looks like somewhere deep inside, Django decided that the many to many field should be called by the model name. How do I convince it otherwise?</p>
<p>Edit #1: It seems that this might be a genuine bug somewhere in Django or DRF. DRF seems to be doing the right thing, it detects that it is dealing with a ManyToMany field and tries to create toppings from the data using the custom field and add them to the pizza. Since it only has a pizza instance and a field name, it uses <code>setattr(pizza, 'toppings', toppings)</code> to do it. Django seems to be doing the right thing. The <code>__set__</code> is defined and seems to figure out that it needs to use add() method in the manager. But somewhere along the way, the field name 'toppings' gets lost and replaced by the default. Which is "related model name in lower case".</p>
<p>Edit #2: I have found a solution. I will document it in an answer once I am allowed. It seems that the <code>to_internal_value()</code> method in the <code>RelatedField</code> subclass needs to return a saved instance of a Topping for the ManyToMany thing to work properly. The existing docs show the opposite, a this link (<a href="http://www.django-rest-framework.org/api-guide/fields/#custom-fields" rel="nofollow">http://www.django-rest-framework.org/api-guide/fields/#custom-fields</a>) the example clearly returns an unsaved instance.</p>
| 0 | 2016-09-26T15:34:10Z | 39,708,882 | <p>Seems like there is an undocumented requirement. For write operations to work with a custom ManyToMany field, the custom field class <code>to_internal_value()</code> method needs to save the instance before returning it. The DRF docs omit this and the example of making a custom field (at <a href="http://www.django-rest-framework.org/api-guide/fields/#custom-fields" rel="nofollow">http://www.django-rest-framework.org/api-guide/fields/#custom-fields</a>) shows the method returning an unsaved instance. I am going to update the issue I opened with the DRF team.</p>
| 0 | 2016-09-26T17:26:42Z | [
"python",
"json",
"django",
"serialization",
"django-rest-framework"
]
|
django custom form clean() raising error from clean_field() | 39,706,916 | <p>I have created a custom form and need to override both of the <code>clean_field()</code> method and <code>clean()</code> method. Here is my code:</p>
<pre><code>class MyForm(forms.Form):
username=forms.RegexField(regex=r'^1[34578]\d{9}$')
code = forms.RegexField(regex=r'^\d{4}$')
def clean_username(self):
u = User.objects.filter(username=username)
if u:
raise forms.ValidationError('username already exist')
return username
def clean(self):
cleaned_data = super(MyForm, self).clean()
# How can I raise the field error here?
</code></pre>
<p>If I save this form twice, and the username will be already exist in the second time, the <code>clean_username</code> method will raise an error, however, the <code>clean()</code> method still run without interruption. </p>
<p>So my question is, how can I stop calling <code>clean()</code> when error already raise by <code>cleaned_xxx</code>, if that is not possible, then how can I raised the error again which raised by <code>clean_xxxx()</code> in <code>clean()</code> method?</p>
| 0 | 2016-09-26T15:35:56Z | 39,707,638 | <p>In your <code>clean</code> method, you can check whether <code>username</code> is in the <code>cleaned_data</code> dictionary.</p>
<pre><code>def clean(self):
cleaned_data = super(MyForm, self).clean()
if 'username' in cleaned_data:
# username was valid, safe to continue
...
else:
# raise an exception if you really want to
</code></pre>
<p>You probably don't need the else statement. The user will see the error from the <code>clean_username</code> method so you don't need to create another one.</p>
| 1 | 2016-09-26T16:14:46Z | [
"python",
"django",
"forms",
"validation"
]
|
pandas error when convert object to datetime | 39,706,970 | <p>This is two lines of inputdataframe:</p>
<pre><code> ts country os product_id total_users total_purchases
0 0000-00-00 Brazil iOS 1 0
1 0000-00-00 Germany 1 0
</code></pre>
<p>I have tried following commands in order to convert 'ts' which is object to datetime:</p>
<pre><code> df['ts'] = df['ts'].astype('datetime64[ns]')
</code></pre>
<p>and this is an error I've got:
<strong>ValueError: Month out of range in datetime string "0000-00-00"</strong></p>
<p>I know that there is a problem with 0000-00-00, but I don't know how to get ride of it and solve it to work?</p>
| 0 | 2016-09-26T15:38:56Z | 39,707,864 | <p>easy peasy</p>
<pre><code>df['ts'] = pd.to_datetime(df['ts'], errors ='coerce')
</code></pre>
<p>no need to clean the data. wrong timestamps will get the <code>NaT</code> (not a timestamp)</p>
| 1 | 2016-09-26T16:28:04Z | [
"python",
"datetime",
"pandas"
]
|
Pandas - Alternative to rank() function that gives unique ordinal ranks for a column | 39,707,080 | <p>At this moment I am writing a Python script that aggregates data from multiple Excel sheets. The module I choose to use is Pandas, because of its speed and ease of use with Excel files. The question is only related to the use of Pandas and me trying to create a additional column that contains <em>unique, integer-only, ordinal</em> ranks within a group.</p>
<p>My Python and Pandas knowledge is limited as I am just a beginner.</p>
<p><strong>The Goal</strong></p>
<p>I am trying to achieve the following data structure. Where the top 10 adwords ads are ranked vertically on the basis of their position in Google. In order to do this I need to create a column in the original data (see Table 2 & 3) with a integer-only ranking that contains no duplicate values. </p>
<p>Table 1: Data structure I am trying to achieve</p>
<pre><code> device , weeks , rank_1 , rank_2 , rank_3 , rank_4 , rank_5
mobile , wk 1 , string , string , string , string , string
mobile , wk 2 , string , string , string , string , string
computer, wk 1 , string , string , string , string , string
computer, wk 2 , string , string , string , string , string
</code></pre>
<p><strong>The Problem</strong></p>
<p>The exact problem I run into is not being able to efficiently rank the rows with pandas. I have tried a number of things, but I cannot seem to get it ranked in this way. </p>
<p>Table 2: Data structure I have</p>
<pre><code> weeks device , website , ranking , adtext
wk 1 mobile , url1 , *2.1 , string
wk 1 mobile , url2 , *2.1 , string
wk 1 mobile , url3 , 1.0 , string
wk 1 mobile , url4 , 2.9 , string
wk 1 desktop , *url5 , 2.1 , string
wk 1 desktop , url2 , *1.5 , string
wk 1 desktop , url3 , *1.5 , string
wk 1 desktop , url4 , 2.9 , string
wk 2 mobile , url1 , 2.0 , string
wk 2 mobile , *url6 , 2.1 , string
wk 2 mobile , url3 , 1.0 , string
wk 2 mobile , url4 , 2.9 , string
wk 2 desktop , *url5 , 2.1 , string
wk 2 desktop , url2 , *2.9 , string
wk 2 desktop , url3 , 1.0 , string
wk 2 desktop , url4 , *2.9 , string
</code></pre>
<p>Table 3: The table I cannot seem to create</p>
<pre><code> weeks device , website , ranking , adtext , ranking
wk 1 mobile , url1 , *2.1 , string , 2
wk 1 mobile , url2 , *2.1 , string , 3
wk 1 mobile , url3 , 1.0 , string , 1
wk 1 mobile , url4 , 2.9 , string , 4
wk 1 desktop , *url5 , 2.1 , string , 3
wk 1 desktop , url2 , *1.5 , string , 1
wk 1 desktop , url3 , *1.5 , string , 2
wk 1 desktop , url4 , 2.9 , string , 4
wk 2 mobile , url1 , 2.0 , string , 2
wk 2 mobile , *url6 , 2.1 , string , 3
wk 2 mobile , url3 , 1.0 , string , 1
wk 2 mobile , url4 , 2.9 , string , 4
wk 2 desktop , *url5 , 2.1 , string , 2
wk 2 desktop , url2 , *2.9 , string , 3
wk 2 desktop , url3 , 1.0 , string , 1
wk 2 desktop , url4 , *2.9 , string , 4
</code></pre>
<p>The standard .rank(ascending=True), gives averages on duplicate values. But since I use these ranks to organize them vertically this does not work out.</p>
<pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])
df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( ascending=True)
</code></pre>
<p>The .rank(method="dense", ascending=True) maintains duplicate values and also does not solve my problem</p>
<pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])
df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="dense", ascending=True)
</code></pre>
<p>The .rank(method="first", ascending=True) throws a ValueError</p>
<pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])
df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="first", ascending=True)
</code></pre>
<p>ADDENDUM: If I would find a way to add the rankings in a column, I would then use pivot to transpose the table in the following way.</p>
<pre><code>df = pd.pivot_table(df, index = ['device', 'weeks'], columns='website', values='adtext', aggfunc=lambda x: ' '.join(x))
</code></pre>
<p><strong>My question to you</strong></p>
<p>I was hoping any of you could help me find a solution for this problem. This could either an efficient ranking script or something else to help me reach the final data structure.</p>
<p>Thank you!</p>
<p>Sebastiaan</p>
<hr>
<p>EDIT: Unfortunately, I think I was not clear in my original post. I am looking for a ordinal ranking that only gives integers and has no duplicate values. This means that when there is a duplicate value it will randomly give one a higher ranking than the other.</p>
<p>So what I would like to do is generate a ranking that labels each row with an ordinal value per group. The groups are based on the week number and device. The reason I want to create a new column with this ranking is so that I can make top 10s per week and device.</p>
<p>Also Steven G asked me for an example to play around with. I have provided that here. </p>
<p>Example data can be pasted directly into python</p>
<p>! IMPORTANT: The names are different in this sample. The dataframe is called placeholder, the column names are as follows: 'week', 'website', 'share', 'rank_google', 'device'. </p>
<pre><code>data = {u'week': [u'WK 1', u'WK 2', u'WK 3', u'WK 4', u'WK 2', u'WK 2', u'WK 1',
u'WK 3', u'WK 4', u'WK 3', u'WK 3', u'WK 4', u'WK 2', u'WK 4', u'WK 1', u'WK 1',
u'WK3', u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 4', u'WK 4',
u'WK 4', u'WK 4', u'WK 2', u'WK 3', u'WK 4', u'WK 3', u'WK 4', u'WK 3', u'WK 2',
u'WK 2', u'WK 4', u'WK 1', u'WK 1', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 3',
u'WK 1', u'WK 4', u'WK 1', u'WK 4', u'WK 2', u'WK 2', u'WK 2', u'WK 4', u'WK 4',
u'WK 4', u'WK 1', u'WK 3', u'WK 4', u'WK 4', u'WK 1', u'WK 4', u'WK 3', u'WK 2',
u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 1'],
u'website': [u'site1.nl', u'website2.de', u'site1.nl', u'site1.nl', u'anothersite.com',
u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at',
u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at',
u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at', u'url2.at',
u'anothersite.com', u'site2.co.uk', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk',
u'sitename2.com', u'sitename2.com', u'sitename2.com', u'url3.fi', u'sitename.co.uk',
u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'ulr2.se',
u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'sitename2.com',
u'sitename2.com', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk', u'sitename2.com',
u'facebook.com', u'alsoasite.com', u'ello.com', u'instagram.com', u'alsoasite.com', u'facebook.com',
u'facebook.com', u'singleboersen-vergleich.at', u'facebook.com', u'anothername.com', u'twitter.com',
u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'facebook.com', u'alsoasite.com',
u'alsoasite.com'],
'adtext': [u'site1.nl 3,9 | < 10\xa0%', u'website2.de 1,4 | < 10\xa0%', u'site1.nl 4,3 | < 10\xa0%',
u'site1.nl 3,8 | < 10\xa0%', u'anothersite.com 2,5 | 12,36 %', u'url2.at 1,3 | 78,68 %', u'url2.at 1,2 | 92,58 %',
u'url2.at 1,1 | 85,47 %', u'url2.at 1,2 | 79,56 %', u'anothersite.com 2,8 | < 10\xa0%', u'url2.at 1,2 | 80,48 %',
u'url2.at 1,2 | 85,63 %', u'url2.at 1,1 | 88,36 %', u'url2.at 1,3 | 87,90 %', u'url2.at 1,1 | 83,70 %',
u'anothersite.com 3,1 | < 10\xa0%', u'url2.at 1,2 | 91,00 %', u'url2.at 1,1 | 92,11 %', u'url2.at 1,2 | 81,28 %'
, u'url2.at 1,1 | 86,49 %', u'anothersite.com 2,7 | < 10\xa0%', u'url2.at 1,2 | 83,96 %', u'url2.at 1,2 | 75,48 %'
, u'anothersite.com 3,0 | < 10\xa0%', u'site2.co.uk 3,1 | 16,24 %', u'sitename2.com 2,3 | 34,85 %',
u'sitename.co.uk 3,5 | < 10\xa0%', u'sitename.co.uk 3,6 | < 10\xa0%', u'sitename2.com 2,1 | < 10\xa0%',
u'sitename2.com 2,2 | 13,55 %', u'sitename2.com 2,1 | 47,91 %', u'url3.fi 3,4 | < 10\xa0%',
u'sitename.co.uk 3,1 | 14,15 %', u'sitename2.com 2,4 | 28,77 %', u'sitename.co.uk 3,1 | 22,55 %',
u'sitename2.com 2,1 | 17,03 %', u'sitename2.com 2,1 | 24,46 %', u'ulr2.se 2,7 | < 10\xa0%',
u'sitename2.com 2,0 | 49,12 %', u'sitename.co.uk 3,0 | < 10\xa0%', u'sitename2.com 2,1 | 40,00 %',
u'sitename2.com 2,1 | < 10\xa0%', u'sitename2.com 2,2 | 30,29 %', u'sitename2.com 2,0 |47,48 %',
u'sitename2.com 2,1 | 32,17 %', u'sitename.co.uk 3,2 | < 10\xa0%', u'sitename.co.uk 3,1 | 12,77 %',
u'sitename2.com 2,6 | < 10\xa0%', u'facebook.com 3,2 | < 10\xa0%', u'alsoasite.com 2,3 | < 10\xa0%',
u'ello.com 1,8 | < 10\xa0%',u'instagram.com 5,0 | < 10\xa0%', u'alsoasite.com 2,2 | < 10\xa0%',
u'facebook.com 3,0 | < 10\xa0%', u'facebook.com 3,2 | < 10\xa0%', u'singleboersen-vergleich.at 2,6 | < 10\xa0%',
u'facebook.com 3,4 | < 10\xa0%', u'anothername.com 1,9 | <10\xa0%', u'twitter.com 4,4 | < 10\xa0%',
u'alsoasite.com 1,1 | 12,35 %', u'alsoasite.com 1,1 | 11,22 %', u'alsoasite.com 2,0 | < 10\xa0%',
u'alsoasite.com 1,1| 10,86 %', u'facebook.com 3,4 | < 10\xa0%', u'alsoasite.com 1,1 | 10,82 %',
u'alsoasite.com 1,1 | < 10\xa0%'],
u'share': [u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'12,36 %', u'78,68 %',
u'92,58 %', u'85,47 %', u'79,56 %', u'< 10\xa0%', u'80,48 %', u'85,63 %', u'88,36 %',
u'87,90 %', u'83,70 %', u'< 10\xa0%', u'91,00 %', u'92,11 %', u'81,28 %', u'86,49 %',
u'< 10\xa0%', u'83,96 %', u'75,48 %', u'< 10\xa0%', u'16,24 %', u'34,85 %', u'< 10\xa0%',
u'< 10\xa0%', u'< 10\xa0%', u'13,55 %', u'47,91 %', u'< 10\xa0%', u'14,15 %', u'28,77 %',
u'22,55 %', u'17,03 %', u'24,46 %', u'< 10\xa0%', u'49,12 %', u'< 10\xa0%', u'40,00 %',
u'< 10\xa0%', u'30,29 %', u'47,48 %', u'32,17 %', u'< 10\xa0%', u'12,77 %', u'< 10\xa0%',
u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%',
u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'12,35 %', u'11,22 %', u'< 10\xa0%',
u'10,86 %', u'< 10\xa0%', u'10,82 %', u'< 10\xa0%'],
u'rank_google': [u'3,9', u'1,4', u'4,3', u'3,8', u'2,5', u'1,3', u'1,2', u'1,1', u'1,2', u'2,8',
u'1,2', u'1,2', u'1,1', u'1,3', u'1,1', u'3,1', u'1,2', u'1,1', u'1,2', u'1,1', u'2,7', u'1,2',
u'1,2', u'3,0', u'3,1', u'2,3', u'3,5', u'3,6', u'2,1', u'2,2', u'2,1', u'3,4', u'3,1', u'2,4',
u'3,1', u'2,1', u'2,1', u'2,7', u'2,0', u'3,0', u'2,1', u'2,1', u'2,2', u'2,0', u'2,1', u'3,2',
u'3,1', u'2,6', u'3,2', u'2,3', u'1,8', u'5,0', u'2,2', u'3,0', u'3,2', u'2,6', u'3,4', u'1,9',
u'4,4', u'1,1', u'1,1', u'2,0', u'1,1', u'3,4', u'1,1', u'1,1'],
u'device': [u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Tablet', u'Computer',
u'Mobile', u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Computer', u'Tablet', u'Tablet',
u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet', u'Tablet', u'Mobile',
u'Tablet', u'Mobile', u'Computer', u'Computer', u'Tablet', u'Mobile', u'Tablet', u'Mobile', u'Tablet',
u'Mobile', u'Mobile', u'Mobile', u'Tablet', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet',
u'Tablet', u'Tablet', u'Mobile', u'Computer', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Tablet',
u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Computer',
u'Mobile', u'Tablet', u'Mobile', u'Mobile']}
placeholder = pd.DataFrame(data)
</code></pre>
<p><strong>Error I receive when I use the rank() function with method='first'</strong></p>
<pre><code>C:\Users\username\code\report-creator>python recomp-report-04.py
Traceback (most recent call last):
File "recomp-report-04.py", line 71, in <module>
placeholder['ranking'] = placeholder.groupby(['week', 'device'])['rank_googl
e'].rank(method='first').astype(int)
File "<string>", line 35, in rank
File "C:\Users\sthuis\AppData\Local\Continuum\Anaconda2\lib\site-packages\pand
as\core\groupby.py", line 561, in wrapper
raise ValueError
ValueError
</code></pre>
<p><strong>My solution</strong></p>
<p>Effectively, the answer is given by @Nickil Maveli. A huge thank you! Nevertheless, I thought it might be smart to outline how I finally incorporated the solution.</p>
<p>Rank(method='first') is a good way to get an ordinal ranking. But since I was working with numbers that were formatted in the European way, pandas interpreted them as strings and could not rank them this way. I came to this conclusion by the reaction of Nickil Maveli and trying to rank each group individually. I did that through the following code.</p>
<pre><code>for name, group in df.sort_values(by='rank_google').groupby(['weeks', 'device']):
df['new_rank'] = group['ranking'].rank(method='first').astype(int)
</code></pre>
<p>This gave me the following error:</p>
<pre><code>ValueError: first not supported for non-numeric data
</code></pre>
<p>So this helped me realize that I should convert the column to floats. This is how I did it.</p>
<pre><code># Converting the ranking column to a float
df['ranking'] = df['ranking'].apply(lambda x: float(unicode(x.replace(',','.'))))
# Creating a new column with a rank
df['new_rank'] = df.groupby(['weeks', 'device'])['ranking'].rank(method='first').astype(int)
# Dropping all ranks after the 10
df = df.sort_values('new_rank').groupby(['weeks', 'device']).head(n=10)
# Pivotting the column
df = pd.pivot_table(df, index = ['device', 'weeks'], columns='new_rank', values='adtext', aggfunc=lambda x: ' '.join(x))
# Naming the columns with 'top' + number
df.columns = ['top ' + str(i) for i in list(df.columns.values)]
</code></pre>
<p>So this worked for me. Thank you guys!</p>
| 2 | 2016-09-26T15:44:53Z | 39,708,191 | <p>I think the way you were trying to use the <code>method=first</code> to rank them after sorting were causing problems. </p>
<p>You could simply use the rank method with <code>first</code> arg on the grouped object itself giving you the desired unique ranks per group.</p>
<pre><code>df['new_rank'] = df.groupby(['weeks','device'])['ranking'].rank(method='first').astype(int)
print (df['new_rank'])
0 2
1 3
2 1
3 4
4 3
5 1
6 2
7 4
8 2
9 3
10 1
11 4
12 2
13 3
14 1
15 4
Name: new_rank, dtype: int32
</code></pre>
<p>Perform pivot operation:</p>
<pre><code>df = df.pivot_table(index=['weeks', 'device'], columns=['new_rank'],
values=['adtext'], aggfunc=lambda x: ' '.join(x))
</code></pre>
<p>Choose the second level of the multiindex columns which pertain to the rank numbers:</p>
<pre><code>df.columns = ['rank_' + str(i) for i in df.columns.get_level_values(1)]
df
</code></pre>
<p><a href="http://i.stack.imgur.com/iMT88.png" rel="nofollow"><img src="http://i.stack.imgur.com/iMT88.png" alt="Image_2"></a></p>
<hr>
<p><strong>Data:</strong>(to replicate)</p>
<pre><code>df = pd.DataFrame({'weeks': ['wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1',
'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2'],
'device': ['mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop',
'mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop'],
'website': ['url1', 'url2', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4',
'url1', 'url16', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4'],
'ranking': [2.1, 2.1, 1.0, 2.9, 2.1, 1.5, 1.5, 2.9,
2.0, 2.1, 1.0, 2.9, 2.1, 2.9, 1.0, 2.9],
'adtext': ['string', 'string', 'string', 'string', 'string', 'string', 'string', 'string',
'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string']})
</code></pre>
<p>Note: <code>method=first</code> assigns ranks in the order they appear in the array/series.</p>
| 1 | 2016-09-26T16:47:51Z | [
"python",
"pandas",
"ranking",
"rank",
"ordinal"
]
|
Reading binary file in c written in python | 39,707,159 | <p>I wrote numpy 2 dimensional float array as a binary file using </p>
<p><code>narr.tofile(open(filename,"wb"),sep="",format='f')</code> </p>
<p>and try to retrieve the same in c using</p>
<pre><code>FILE* fin = fopen(filename,"rb")
float* data = malloc(rows*2*sizeof(float));
fread(data, sizeof(float), rows*2, fin);
</code></pre>
<p>This data array when printed shows different values than original array. Am I missing something?
Thanks</p>
| 0 | 2016-09-26T15:49:08Z | 39,707,478 | <p>It may depends on system you are using, <code>ndarray.tofile()</code> outputs in little-endian, which means that the <strong>least significant byte is stored first</strong>, try to use <code>numpy.byteswap()</code> and then convert to file. Also try to do it without format specifier and see result. Documentation states that default value for specifier is <code>format="%s"</code>, try to put percentage symbol before your format specifier, like so <code>%f</code>. </p>
| 0 | 2016-09-26T16:05:24Z | [
"python",
"c",
"arrays",
"numpy"
]
|
Reading binary file in c written in python | 39,707,159 | <p>I wrote numpy 2 dimensional float array as a binary file using </p>
<p><code>narr.tofile(open(filename,"wb"),sep="",format='f')</code> </p>
<p>and try to retrieve the same in c using</p>
<pre><code>FILE* fin = fopen(filename,"rb")
float* data = malloc(rows*2*sizeof(float));
fread(data, sizeof(float), rows*2, fin);
</code></pre>
<p>This data array when printed shows different values than original array. Am I missing something?
Thanks</p>
| 0 | 2016-09-26T15:49:08Z | 39,708,552 | <p>Here is another way if you save your data in npy format through <code>np.save('foo.npy',narr)</code>. It is a (writer and) reader for 2D npy files that returns its data as a two dimensional Eigen matrix. Note that the code does lots of assumptions and only works for 2D arrays saved with the standard np save() options. </p>
<pre><code>// npymatrix.h
#ifndef NPYMATRIX_H
#define NPYMATRIX_H
#include <Eigen/Eigen>
// Routines for saving and loading Eigen matrices as npy files.
int npywrite(Eigen::MatrixXd& mat, const char *filename, bool do_flip_y);
int npyread(const char *filename,
// output
Eigen::MatrixXd& mat);
#endif /* NPYMATRIX */
</code></pre>
<p>And the C++ file:</p>
<pre><code>// npymatrix.cc
#include <stdio.h>
#include <string.h>
#include "npymatrix.h"
using namespace Eigen;
enum {
FLOAT8,
FLOAT4,
INT8
};
static const int data_size[] = {8,4, 1};
int npywrite(MatrixXd& mat, const char *filename, bool do_flip_y)
{
FILE *fh = fopen(filename, "wb");
if (!fh)
return -1;
// Write header and version number to file
fwrite("\223NUMPY"
"\001\000"
, 1, 8, fh);
char header[100];
sprintf(header,
"{'descr': '<f8', 'fortran_order': False, 'shape': (%d, %d), } \n",
mat.rows(),
mat.cols());
unsigned short header_len = strlen(header);
fwrite(&header_len, 2, 1, fh);
fwrite(header, header_len, 1, fh);
// Is there a faster way??
for (int row_idx=0; row_idx<mat.rows(); row_idx++) {
for (int col_idx=0; col_idx<mat.cols(); col_idx++) {
int r_idx = row_idx;
if (do_flip_y)
r_idx = mat.rows()-1-r_idx;
double v = mat(r_idx, col_idx);
fwrite(&v, sizeof(double), 1, fh);
}
}
fclose(fh);
return 0;
}
static const char *find_next_alphanum(const char *p)
{
while(*p && (!isalnum(*p)))
p++;
return p;
}
static const char *find_next_string_after(const char *p, const char *token)
{
p = strstr(p, token);
if (!p)
return p;
return p + strlen(token);
}
static const char *find_next_alnum_after(const char *p)
{
while(*p and isalnum(*p))
p++;
return p;
}
static char *strdup_to_delim(const char *p, const char *delim)
{
const char *pstart = p;
while(*p && !strchr(delim, *p))
p++;
return strndup(pstart, p-pstart);
}
int npyread(const char *filename,
// output
MatrixXd& mat)
{
FILE *fh = fopen(filename, "rb");
// Magic bytes
char magic_bytes[6], version[2];
fread(magic_bytes, 1, 6, fh);
// Version
fread(version, 1, 2, fh);
// Header len
short header_len;
fread(&header_len, 1, 2, fh);
// Read the header
char *header = new char[header_len];
fread(header, 1, header_len, fh);
// parse the header. This is ugly but works for a standard header...
const char *p = header;
p = find_next_string_after(p, "descr");
p = find_next_alphanum(p+1);
char *descr = strdup_to_delim(p, "'\"");
p = find_next_string_after(p, "fortran_order");
p = find_next_alphanum(p+1);
char *fortran_order = strdup_to_delim(p, ",");
p = find_next_string_after(p, "shape");
p = find_next_alphanum(p+1);
char *shape = strdup_to_delim(p, ")");
int height = atoi(shape);
int width = atoi(find_next_alphanum(find_next_alnum_after(shape)));
// Decode the type
int dtype=-1;
if (strcmp("<f8", descr)==0
|| strcmp("f8", descr)==0
) {
dtype=FLOAT8;
}
else if (strcmp("<f4", descr)==0
|| strcmp("f4", descr)==0) {
dtype=FLOAT4;
}
else {
printf("Unsupported data type: %s!\n", descr);
return -1;
}
int pixel_size = data_size[dtype];
mat.setZero(height, width);
for (int row_idx=0; row_idx<height; row_idx++) {
for (int col_idx=0; col_idx<width; col_idx++) {
unsigned char v[8];
double gl;
fread(v, 1, pixel_size, fh);
switch(dtype) {
case FLOAT8:
gl = *((double*)v);
break;
case FLOAT4:
gl = *((float*)v);
break;
default:
gl = *((unsigned char*)v);
break;
}
mat(row_idx,col_idx) = gl;
}
}
fclose(fh);
free(shape);
free(descr);
free(fortran_order);
delete [] header;
return 0;
}
</code></pre>
| 0 | 2016-09-26T17:08:24Z | [
"python",
"c",
"arrays",
"numpy"
]
|
How to get an all sticky grid of Treeview and Scrollbar in Python Tkinter? | 39,707,184 | <p>What I want in Tkinter in Python 2.7 is the following grid layout:</p>
<p><a href="http://i.stack.imgur.com/O1jHM.png" rel="nofollow"><img src="http://i.stack.imgur.com/O1jHM.png" alt="Grid Layout"></a></p>
<p>However once, I start using the <code>grid()</code> functions instead of <code>pack()</code> functions, nothing is showing on running the script. The following is what I am stuck with:</p>
<pre><code>import Tkinter, ttk
class App(Tkinter.Frame):
def __init__(self,parent):
Tkinter.Frame.__init__(self, parent, relief=Tkinter.SUNKEN, bd=2)
self.parent = parent
self.grid(row=0, column=0, sticky="nsew")
self.menubar = Tkinter.Menu(self)
try:
self.parent.config(menu=self.menubar)
except AttributeError:
self.tk.call(self.parent, "config", "-menu", self.menubar)
self.tree = ttk.Treeview(self.parent)
self.tree.grid(row=0, column=0, sticky="nsew")
self.yscrollbar = ttk.Scrollbar(self, orient='vertical', command=self.tree.yview)
self.yscrollbar.grid(row=0, column=1, sticky='nse')
self.tree.configure(yscrollcommand=self.yscrollbar.set)
self.yscrollbar.configure(command=self.tree.yview)
if __name__ == "__main__":
root = Tkinter.Tk()
root.title("MyApp")
app = App(root)
app.pack()
app.mainloop()
</code></pre>
<p>Any help will be highly appreciated.</p>
| 0 | 2016-09-26T15:50:38Z | 39,707,737 | <p>You have several problems that are affecting your layout. </p>
<p>First, some of the widgets inside <code>App</code> use <code>self</code> as the parent, some use <code>self.parent</code>. They should all use <code>self</code> in this particular case. So, the first thing to do is change the parent option of the <code>Treeview</code> to <code>self</code>.</p>
<pre><code>self.tree = ttk.Treeview(self)
</code></pre>
<p>Second, since your main code is calling <code>app.pack()</code>, you shouldn't be calling <code>self.grid</code>. Remove the line `self.grid(row=0, column=0, sticky="nsew"). It's redundant.</p>
<p>Third, you are using very unusual code to add the menubar. You need to configure the menu of the root window. There's no need to put this in a <code>try/except</code> block, and there's no reason to use <code>self.tk.call</code>. Simply do this:</p>
<pre><code>self.parent.configure(menu=self.menubar)
</code></pre>
<p>This assumes that <code>self.parent</code> is indeed the root window. If you don't want to force that assumption you can use <code>winfo_toplevel()</code> which will always return the top-most window:</p>
<pre><code>self.parent.winfo_toplevel().configure(menu=self.menubar)
</code></pre>
<p>Finally, since you are using <code>grid</code>, you need to give at least one row and one column a "weight" so tkinter knows how to allocate extra space (such as when the user resizes a window).</p>
<p>In your case you want to give all of the weight to row and column 0 (zero), since that's where you've placed the widget which needs the most space:</p>
<pre><code>def __init__(self, parent):
...
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
</code></pre>
<p>Note: you'll also want to make sure when you call <code>app.pack()</code> that you give it parameters that makes it fill any extra space, too. Otherwise the tree will fill "app", but "app" would not fill the window.</p>
<pre><code>app.pack(fill="both", expand=True)
</code></pre>
<p>Here is a fully working example with all of those changes. I grouped the main layout code together since that makes the code easier to visualize and easier to maintain:</p>
<pre><code>import Tkinter, ttk
class App(Tkinter.Frame):
def __init__(self,parent):
Tkinter.Frame.__init__(self, parent, relief=Tkinter.SUNKEN, bd=2)
self.parent = parent
self.menubar = Tkinter.Menu(self)
self.parent.winfo_toplevel().configure(menu=self.menubar)
self.tree = ttk.Treeview(self)
self.yscrollbar = ttk.Scrollbar(self, orient='vertical', command=self.tree.yview)
self.tree.configure(yscrollcommand=self.yscrollbar.set)
self.tree.grid(row=0, column=0, sticky="nsew")
self.yscrollbar.grid(row=0, column=1, sticky='nse')
self.yscrollbar.configure(command=self.tree.yview)
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
if __name__ == "__main__":
root = Tkinter.Tk()
root.title("MyApp")
app = App(root)
app.pack(fill="both", expand=True)
app.mainloop()
</code></pre>
<hr>
<p>Your question mentioned grid, but in this case you could save a few lines of code by using <code>pack</code>. <code>pack</code> excels in layouts like this, where your gui is aligned top-to-bottom and/or left-to-right. All you need to do is replace the last five lines (the calls to <code>grid</code>, grid_rowconfigure<code>and</code>grid_columnconfigure`) with these two lines:</p>
<pre><code>self.yscrollbar.pack(side="right", fill="y")
self.tree.pack(side="left", fill="both", expand=True)
</code></pre>
| 1 | 2016-09-26T16:20:39Z | [
"python",
"python-2.7",
"tkinter",
"treeview"
]
|
I want to print the json object in a tabular form passed using render_template() from python | 39,707,192 | <p>Python code is as follows:</p>
<pre><code> @app.route("/send", methods=['GET', 'POST'])
def send():
if request.method == "POST":
findemail=request.form['email']
datafound=findlogic(findemail)
data = jsonify(datafound)
#return data
return render_template("testjinja.html", x=data)
</code></pre>
<p>The data is of the form</p>
<pre><code>"629513533": [
{
"xyz": "629513533"
},
{
"a": "1.00"
},
{
"b": "3.00"
},
{
"c": "1.00"
},
{
"d": "1.00"
},
{
"e": "1.00"
},
{
"f": "1.00"
},
{
"g": "1.00"
},
{
"h": "1.00"
},
{
"i": "1.00"
},
</code></pre>
<p>I have tried testjinja.html as follows:</p>
<pre><code><body>
{% for value in x %} // I have tried x.iteritems(), x.items() also
<li>{{ x[value].xyz }} </li>
{% endfor %}
</body>
</code></pre>
<p>I get an error that response object is not iterable. I dont know how to handle json object x in testjinja.html. PLease help</p>
| 0 | 2016-09-26T15:50:48Z | 39,711,982 | <p>you cannot iterate a string like you want, but it looks like your <code>data</code> is a json string, so you could do something like:</p>
<pre><code>json_data = json.loads(data)
<body>
{% for value in json_data['629513533'] %}
<li>{{ x[value].xyz }} </li>
{% endfor %}
</body>
</code></pre>
<p>You will have to <code>import json</code></p>
| 0 | 2016-09-26T20:39:38Z | [
"python",
"html",
"json"
]
|
how can i control modules in package to import in python? | 39,707,315 | <p>Consider an example,</p>
<p>I have a package having list of modules:</p>
<pre><code> /mypackage/
__init__.py
mod1.py
mod2.py
mod3.py
</code></pre>
<p><code>prog1.py</code>: I would like to allow only <code>mod2</code> here
<code>prog2</code>: allow <code>mod1,2</code></p>
<p>If I write,</p>
<pre><code>prog1.py
import mypackage
# only mod2 should import
prog2.py
import mypackage
# only mod1,mod3 should import
</code></pre>
<p>How can I restrict at package or module level?</p>
| -1 | 2016-09-26T15:56:27Z | 39,707,396 | <pre><code>from mypackage import mod2
</code></pre>
<p>or</p>
<pre><code>from mypackage import mod1, mod3
</code></pre>
| 3 | 2016-09-26T16:00:45Z | [
"python"
]
|
how can i control modules in package to import in python? | 39,707,315 | <p>Consider an example,</p>
<p>I have a package having list of modules:</p>
<pre><code> /mypackage/
__init__.py
mod1.py
mod2.py
mod3.py
</code></pre>
<p><code>prog1.py</code>: I would like to allow only <code>mod2</code> here
<code>prog2</code>: allow <code>mod1,2</code></p>
<p>If I write,</p>
<pre><code>prog1.py
import mypackage
# only mod2 should import
prog2.py
import mypackage
# only mod1,mod3 should import
</code></pre>
<p>How can I restrict at package or module level?</p>
| -1 | 2016-09-26T15:56:27Z | 39,707,446 | <p>I don't think that packages should control who and how can import them, basically packages should not know about their importers. However if you for some reason still thing this is a good idea, you can get a main filename by:</p>
<pre><code>import __main__
main_file = __main__.__file__
</code></pre>
<p>And then modify your</p>
<pre><code>__all__
</code></pre>
<p>attribute of module based on a main file name. </p>
| 0 | 2016-09-26T16:03:41Z | [
"python"
]
|
Use of plot and curve in rpy2 | 39,707,359 | <p>In R, I can run plot and curve to get the relationship between a predicted probability and the predictor variable by just running:</p>
<pre><code>plot(outcome~survrate, data = d, ylab = "P(outcome = 1 |
survrate)", xlab = "SURVRATE: Probability of Survival after 5
Years", xaxp = c(0, 95, 19))
curve(transform(coef(mod1)[1] + coef(mod1)[2]*x), add = TRUE)
</code></pre>
<p>Where transform is a custom R function.</p>
<p>I am trying to do the same thing in rpy2, and so far have the following:</p>
<pre><code>rplot = ro.r('plot')
formula = Formula('outcome~survrate')
formula.getenvironment()['outcome'] = r_analytical_set.rx2('outcome')
formula.getenvironment()['survrate'] = r_analytical_set.rx2('survrate')
ro.r.plot(formula, data=r_analytical_set, ylab = 'P(outcome = 1 | pass)', xlab = 'SURVRATE: Probability of Survival after 5
Years', xaxp = ro.r.c(0, 95, 19))
# read in R function from file
with open('/Users/gregsilverman//development/python/rest_api/rest_api/utils.r', 'r') as f:
string = f.read()
from rpy2.robjects.packages import STAP
invlogit = STAP(string, "invlogit")
ro.r.curve(transform(ro.r.coef(fit)[0] + ro.r.coef(fit)[1]*ro.r.x), add = True)
</code></pre>
<p>In this state, <code>ro.r.curve</code> gives an error that <code>TypeError: unsupported operand type(s) for *: 'float' and 'FloatVector'</code></p>
<p>So, as per this <a href="http://stackoverflow.com/questions/3087145/multiplying-all-elements-of-a-vector-in-r">multiplying all elements of a vector in R</a>, I ran</p>
<pre><code>ro.r.curve(transform(ro.r.coef(fit)[0] + ro.r.prod(ro.r.coef(fit)[1],ro.r.x)), add = True)
</code></pre>
<p>But, now I am getting an error <code>TypeError: unsupported operand type(s) for +: 'float' and 'FloatVector'</code></p>
<p>Before I waste any more time figuring out how to add a scalar to a vector, I was wondering if there was a more efficient way of achieving my end goal.</p>
| 0 | 2016-09-26T15:58:27Z | 39,708,025 | <p>Use the accessor "R-operator" (<code>.ro</code> - see <a href="http://rpy2.readthedocs.io/en/version_2.8.x/vector.html#operators" rel="nofollow">http://rpy2.readthedocs.io/en/version_2.8.x/vector.html#operators</a>):</p>
<pre><code>In [1]: from rpy2.robjects.vectors import FloatVector
In [2]: FloatVector((1,2,3)).ro + 2
Out[2]:
R object with classes: ('numeric',) mapped to:
<FloatVector - Python:0x7fde744c0308 / R:0x44db740>
[3.000000, 4.000000, 5.000000]
</code></pre>
| 1 | 2016-09-26T16:37:17Z | [
"python",
"plot",
"curve",
"rpy2"
]
|
Authenticating with JWT token in Django | 39,707,471 | <p>I am beginner in Django and I am learning about JWT token from here.</p>
<p><a href="http://getblimp.github.io/django-rest-framework-jwt/#rest-framework-jwt-auth" rel="nofollow">http://getblimp.github.io/django-rest-framework-jwt/#rest-framework-jwt-auth</a></p>
<p>I have already set up in my settings.py.</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES':
(
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.BasicAuthentication',
'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
),
'DEFAULT_MODEL_SERIALIZER_CLASS':
'rest_framework.serializers.ModelSerializer',
'DEFAULT_PERMISSION_CLASSES':
(
'rest_framework.permissions.IsAuthenticated',
)
}
</code></pre>
<p>If I do curl, I actually get back my token. </p>
<pre><code>curl -X POST -d "username=khant&password=khant" http://127.0.0.1:8000/api-token-auth/
</code></pre>
<p>But when I access my protected url, </p>
<pre><code>curl -H "Authorization: JWT eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImtoYW50IiwidXNlcl9pZCI6OCwiZW1haWwiOiJraGFudEBnbWFpbC5jb20iLCJleHAiOjE0NzQ5MDQxNTJ9.jaZ3HwsXjx7Bk2ol5UdeE8UUlq4OEGCbnb1T8vDhO_w" http://127.0.0.1:8000/dialogue_dialoguemine/
</code></pre>
<p>It always say this when I access from web. Localhost is okay for me.</p>
<blockquote>
<p>{"detail":"Authentication credentials were not provided."}</p>
</blockquote>
<p>In my protected url, I just write simple api to query. May I know how to solve this? </p>
<pre><code>class DialogueMineView(generics.ListAPIView):
permission_classes = (IsAuthenticated,)
serializer_class = DialogueSerializer
paginate_by = 2
def get_queryset(self):
user = self.request.user
return Dialogue.objects.filter(owner=user)
</code></pre>
| 0 | 2016-09-26T16:05:01Z | 39,741,420 | <p>I know now. For my case, I am not okay only on web. I check from this link.</p>
<p><a href="http://stackoverflow.com/questions/26906630/django-rest-framework-authentication-credentials-were-not-provided">Django Rest Framework - Authentication credentials were not provided</a></p>
<p>It say I need to add </p>
<pre><code>WSGIPassAuthorization On
</code></pre>
<p>in my httpd.conf in apache directory. It is okay now. </p>
| 0 | 2016-09-28T08:00:13Z | [
"python",
"django",
"jwt"
]
|
Testing external URLs in Django | 39,707,524 | <p>I'm currently having a hard time getting some of my Django tests to work.
What I'm trying to do is test if a given URL (the REST API I want to consume) is up and running (returning status code 200) and later on if it's responding the expected values.
However, all I get returned is a status code 404 (Page not found), even though the URL is definitely the right one. (Tried the exact string in my browser)</p>
<p>This is the code:</p>
<pre><code>from django.test import TestCase
class RestTest(TestCase):
def test_api_test_endpoint(self):
response = self.client.get("http://ip.to.my.api:8181/test/")
self.assertEqual(response.status_code, 200, "Status code not equals 200")
</code></pre>
<p>It always returns a 404 instead of a 200...
Anyone knows what I do wrong here?</p>
| 0 | 2016-09-26T16:08:04Z | 39,707,581 | <p><code>self.client</code> is not a real HTTP client; it's the Django test client, which simulates requests to your own app for the purposes of testing. It doesn't make HTTP requests, and it only accepts a path, not a full URL.</p>
<p>If you really needed to check that an external URL was up, you would need a proper HTTP client like <code>requests</code>. However this doesn't seem to be an appropriate thing to do in a test case, since it depends on an external API; if that API went down, suddenly your tests would fail, which seems odd. This could be something that you do in a monitoring job, but not in a test.</p>
| 0 | 2016-09-26T16:12:02Z | [
"python",
"django",
"unit-testing",
"testing"
]
|
How to run a command inside virtual environment using Python | 39,707,585 | <p>I have the virutalenv created and installed. I have also installed jsnapy tool inside my virutal env. </p>
<p>This is the script that we are using:</p>
<pre><code>Filename : venv.py
import os
os.system('/bin/bash --rcfile ~/TestAutomation/End2EndAutomation/bin/activate')
os.system('End2EndAutomation/bin/jsnapy')
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@sdno-server:~/TestAutomation$ ^C
</code></pre>
<p>We need to know, is how we can get into virutalenv, run a command and deactivate it using python script?</p>
<p>[EDIT1]</p>
<p>i used the code given in the comment. its just entering virutal env. When i issue exit, its running jsnapy command. </p>
<pre><code>ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@server:~/TestAutomation$ exit
exit
usage:
This tool enables you to capture and audit runtime environment of
networked devices running the Junos operating system (Junos OS)
Tool to capture snapshots and compare them
It supports four subcommands:
--snap, --check, --snapcheck, --diff
1. Take snapshot:
jsnapy --snap pre_snapfile -f main_configfil
</code></pre>
| 0 | 2016-09-26T16:12:06Z | 39,707,673 | <p>Each call to <code>os.system()</code> will create a new bash instance and terminate the previous one. To run all the commands in one bash instance you could put all your commands inside a single bash script and call that from <code>os.system()</code></p>
<p><strong>run.sh</strong></p>
<pre><code>source ~/TestAutomation/End2EndAutomation/bin/activate
End2EndAutomation/bin/jsnapy
deactivate
</code></pre>
<p><strong>Python</strong></p>
<pre><code>os.system('source run.sh')
</code></pre>
<p>Alternatively, you could write a multiline bash command, as long as it's all in one <code>os.system()</code> call.</p>
| 0 | 2016-09-26T16:17:04Z | [
"python"
]
|
How to run a command inside virtual environment using Python | 39,707,585 | <p>I have the virutalenv created and installed. I have also installed jsnapy tool inside my virutal env. </p>
<p>This is the script that we are using:</p>
<pre><code>Filename : venv.py
import os
os.system('/bin/bash --rcfile ~/TestAutomation/End2EndAutomation/bin/activate')
os.system('End2EndAutomation/bin/jsnapy')
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@sdno-server:~/TestAutomation$ ^C
</code></pre>
<p>We need to know, is how we can get into virutalenv, run a command and deactivate it using python script?</p>
<p>[EDIT1]</p>
<p>i used the code given in the comment. its just entering virutal env. When i issue exit, its running jsnapy command. </p>
<pre><code>ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@server:~/TestAutomation$ exit
exit
usage:
This tool enables you to capture and audit runtime environment of
networked devices running the Junos operating system (Junos OS)
Tool to capture snapshots and compare them
It supports four subcommands:
--snap, --check, --snapcheck, --diff
1. Take snapshot:
jsnapy --snap pre_snapfile -f main_configfil
</code></pre>
| 0 | 2016-09-26T16:12:06Z | 39,709,190 | <p>Two successive calls to <code>os.system()</code> will create two independent processes, one after the other. The second will run when the first finishes. Any effects of commands executed in the first process will have been forgotten and flushed when the second runs.</p>
<p>You want to run the activation and the command which needs to be run in the virtualenv in the same process, i.e. the same single shell instance.</p>
<p>To do that, you can use <code>bash -c '...'</code> to run a sequence of commands. See below.</p>
<p><strong>However,</strong> a better solution is to simply activate the virtual environment from within Python itself.</p>
<pre><code>p = os.path.expanduser('~/TestAutomation/End2EndAutomation/bin/activate_this.py')
execfile(p, dict(__file__=p))
subprocess.check_call(['./End2EndAutomation/bin/jsnapy'])
</code></pre>
<p>For completeness, here is the Bash solution, with comments.</p>
<pre><code>import subprocess
subprocess.check_call(['bash', '-c', """
. ~/TestAutomation/End2EndAutomation/bin/activate
./End2EndAutomation/bin/jsnapy"""])
</code></pre>
<p>The preference for <code>subprocess</code> over <code>os.system</code> is recommended even in the <a href="https://docs.python.org/2/library/os.html#os.system" rel="nofollow"><code>os.system</code> documentation</a>.</p>
<p>There is no need to explicitly <code>deactivate</code>; when the <code>bash</code> command finishes, that will implicitly also deactivate the virtual environment.</p>
<p>The <code>--rcfile</code> trick is a nice idea, but it doesn't work when the shell you are calling isn't interactive.</p>
| 0 | 2016-09-26T17:45:53Z | [
"python"
]
|
GitPython list all files affected by a certain commit | 39,707,759 | <p>I am using this for loop to loop through all commits:</p>
<pre><code>repo = Repo("C:/Users/shiro/Desktop/lucene-solr/")
for commit in list(repo.iter_commits()):
print commit.files_list # how to do that ?
</code></pre>
<p>How can I get a list with the files affected from this specific commit ?</p>
| 0 | 2016-09-26T16:21:53Z | 39,792,599 | <p>I solved this problem for SCM Workbench. The important file is:</p>
<p><a href="https://github.com/barry-scott/scm-workbench/blob/master/Source/Git/wb_git_project.py" rel="nofollow">https://github.com/barry-scott/scm-workbench/blob/master/Source/Git/wb_git_project.py</a></p>
<p>Look at cmdCommitLogForFile() and its helper __addCommitChangeInformation().</p>
<p>The trick is to diff the tree objects.</p>
| 0 | 2016-09-30T13:22:16Z | [
"python",
"gitpython"
]
|
ValueError: could not convert string to float: '[231.49377550490459]' | 39,708,064 | <p>I am trying to cast a python list into a float.
This is the problem narrowed down:</p>
<pre><code>loss = ['[228.55112815111235]', '[249.41649450361379]']
print(float(loss[0]))
</code></pre>
<p>And results in the error:</p>
<pre><code>ValueError: could not convert string to float: '[231.49377550490459]'
</code></pre>
<p>Can anyone help me?</p>
| -4 | 2016-09-26T16:39:38Z | 39,708,097 | <p>Strip the brackets.</p>
<pre><code>float(loss[0].replace('[', '').replace(']', ''))
</code></pre>
| 1 | 2016-09-26T16:41:25Z | [
"python"
]
|
ValueError: could not convert string to float: '[231.49377550490459]' | 39,708,064 | <p>I am trying to cast a python list into a float.
This is the problem narrowed down:</p>
<pre><code>loss = ['[228.55112815111235]', '[249.41649450361379]']
print(float(loss[0]))
</code></pre>
<p>And results in the error:</p>
<pre><code>ValueError: could not convert string to float: '[231.49377550490459]'
</code></pre>
<p>Can anyone help me?</p>
| -4 | 2016-09-26T16:39:38Z | 39,708,121 | <p>You can use string slicing if there is always just one element in your string list.</p>
<pre><code>loss = ['[228.55112815111235]', '[249.41649450361379]']
print(float(loss[0][1:-1]))
</code></pre>
| 0 | 2016-09-26T16:42:27Z | [
"python"
]
|
ValueError: could not convert string to float: '[231.49377550490459]' | 39,708,064 | <p>I am trying to cast a python list into a float.
This is the problem narrowed down:</p>
<pre><code>loss = ['[228.55112815111235]', '[249.41649450361379]']
print(float(loss[0]))
</code></pre>
<p>And results in the error:</p>
<pre><code>ValueError: could not convert string to float: '[231.49377550490459]'
</code></pre>
<p>Can anyone help me?</p>
| -4 | 2016-09-26T16:39:38Z | 39,708,145 | <p>That is because your <code>float</code> value is encapsulated within brackets. And you'll get <code>ValueError</code> because that is not valid float value. In order to convert it, you have to firstly remove them. For example:</p>
<pre><code>>>> my_val = '[228.55112815111235]'
>>> print float(my_val[1:-1])
228.551128151
</code></pre>
<p>In your case, you have to write:</p>
<pre><code>>>> loss = ['[228.55112815111235]', '[249.41649450361379]']
>>> float(loss[0][1:-1])
228.55112815111235
</code></pre>
<p>In case you want to convert entire list to list of <code>float</code>, you may use <code>map()</code> function as:</p>
<pre><code>>>> map(lambda x: float(x[1:-1]), loss)
[228.55112815111235, 249.4164945036138]
</code></pre>
| 0 | 2016-09-26T16:44:47Z | [
"python"
]
|
ValueError: could not convert string to float: '[231.49377550490459]' | 39,708,064 | <p>I am trying to cast a python list into a float.
This is the problem narrowed down:</p>
<pre><code>loss = ['[228.55112815111235]', '[249.41649450361379]']
print(float(loss[0]))
</code></pre>
<p>And results in the error:</p>
<pre><code>ValueError: could not convert string to float: '[231.49377550490459]'
</code></pre>
<p>Can anyone help me?</p>
| -4 | 2016-09-26T16:39:38Z | 39,708,262 | <p>If you want to convert the list values to floats you can use list comprehension:</p>
<pre><code>loss = [float(loss[i][1:-1]) for i in range(len(loss))]
</code></pre>
<p>Then your loss list will look like this:</p>
<pre><code>[228.55112815111235, 249.4164945036138]
</code></pre>
| 0 | 2016-09-26T16:50:56Z | [
"python"
]
|
Ember serializer customization for alternate url | 39,708,111 | <p>I am a newbie to ember. So here is what i faced. I got backend RESTAPI written in python/Django. It provides following json response on <code>/api/works/</code></p>
<pre><code>[
{
"id": 17,
"title": "about us",
"description": "some project",
"owner": "admin"
},
{
"id": 19,
"title": "test1 profit project",
"description": "dev null",
"owner": "test1"
}
]
</code></pre>
<p>also on detail view E.g:<code>/api/works/17/</code>:</p>
<pre><code>{
"id": 17,
"title": "about us",
"description": "some project",
"owner": "admin"
}
</code></pre>
<p>there is also <code>/api/works/17/tasks/</code> for listing work tasks</p>
<pre><code>[
{
"id": 2,
"title": "secondWorkTask",
"description": "task2 description",
"due_date": "2016-09-26",
"assign": 1,
"project": "about us"
},
{
"id": 3,
"title": "some task name",
"description": "some task description",
"due_date": "2016-08-27",
"assign": 2,
"project": "about us"
}
]
</code></pre>
<p>on the front-end side i am using ember-cli version2.7.0 + <a href="https://github.com/dustinfarris/ember-django-adapter" rel="nofollow">ember-django-adapter</a>. I can get /api/works without problem. </p>
<p><em>serializer on ember</em> to get project:</p>
<pre><code>export default DRFSerializer.extend({
normalizeFindAllResponse(store, primaryModelClass, payload, id, requestType) {
payload.data = payload;
return this._super(...arguments);
}
});
</code></pre>
<p>What i want to achieve is on the <strong>ember side</strong> when work detail url on the ember side(<code>emberexample-app.com/app/17/</code>) load, it must show all tasks. Currently i can get work detail by this url <code>/api/works/17/</code> with above serializer. But how can i get tasks? Please help me to find a way to solve this.</p>
| 0 | 2016-09-26T16:42:05Z | 39,710,804 | <p>The serializer are used to customize the loading and saving (or serialization and deserialization) of data.</p>
<p>To customize the URLs you must use an Adapter,(e.g. <a href="http://emberjs.com/api/data/classes/DS.RESTAdapter.html" rel="nofollow" title="Adapter">RESTAdapter</a> is my most used adapter).</p>
<p>That will work in the case you want to create (<code>urlForCreateRecord</code>) or update (<code>urlForUpdateRecord</code>) tasks but it may not directly work if you just want to convert a <code>work.get('tasks')</code> following a <code>belongsTo</code> relationship to a <code>GET http://endpoint/work/<work_id>/tasks</code>, at least in my case it didn't "just work"ed.</p>
<p>A solution I found that works very well is adding a property <code>links</code> as a plain object to the parent objects that contains links to the different child models you want to retrieve as properties.</p>
<p>Something like this:
</p>
<pre><code>/* app/adapters/work.js */
import DS from 'ember';
export default DS.RESTSerializer({
normalize (model, hash, prop) {
hash.links = hash.links || {};
hash.links.tasks = this.buildURL(model, hash.id) + '/tasks';
return this._super(model, hash, prop);
}
});
</code></pre>
<p>In this example, when Ember try to get a tasks property from a work model, it will make a GET to the URL that <code>work.links.tasks</code> contains.</p>
| 1 | 2016-09-26T19:25:48Z | [
"python",
"json",
"django",
"rest",
"ember.js"
]
|
Global variables in a Python module spontaneously reset | 39,708,118 | <p>I have part of a program written in Python 3.5 and started by testing the first two modules. I managed to isolate a problem in one of the modules where it appears that two global variables are switching back to their original values for no reason that I can understand. One of these global variables (<code>event_count</code>) is used in only a single function (grep shows the string "event_count" doesn't appear anywhere else in any of my *.py files), yet the value of the variable changes between calls to the function. If I add print statements for the other global variable in this module, it also reverts to it's original value at the same moment. Moving <code>event_count</code> to another module (replacing it with <code>sensorlogic.event_count</code> in <code>eventcount()</code> and moving the initialization to the other module) makes the behavior go away, so I have a fix but no understanding.</p>
<p>Here is all of the code that uses <code>event_count</code>, in module <code>sensoreval</code>:</p>
<pre><code>event_count = 0
def eventcount(increment):
global event_count
print("entering eventcount, increment =", increment,
", event_count =", event_count)
event_count += increment
print("leaving eventcount, event_count =", event_count)
return event_count
</code></pre>
<p>If I run the following code segment:</p>
<pre><code> e.setvalue(1)
print("I am at marker #1")
eventcount(0)
</code></pre>
<p>(the last action in <code>e.setvalue()</code> is a call to <code>eventcount(0)</code>) it produces this output:</p>
<pre><code>entering eventcount, increment = 0 , event_count = 4
leaving eventcount, event_count = 4
I am at marker #1
entering eventcount, increment = 0 , event_count = 0
leaving eventcount, event_count = 0
</code></pre>
<p>I have tried trimming down the two modules to something of reasonable size, but the problem keeps going away when I do so. I'll keep working on that. Since I've never used Python 3 before, and only have a little Python 2.7 experience I assume I'm doing something stupid, I just have no idea what.</p>
<p>I believe that my example is different from some of the related posts that have been pointed out in that the variable <code>event_count</code> is global only so it will be static. It is used only in this single function. The string "event_count" doesn't appear anywhere else in this or any other module.</p>
<hr>
<p>After many edit/rerun iterations, I have a managably small example that demonstrates what is happening. It involves two modules with a total of 8 lines of code. The first module, <code>a.py</code>, is <code>__main__</code>:</p>
<pre><code>import b
c = 0
if __name__ == '__main__':
b.init()
print("c =", c)
</code></pre>
<p>The second module is <code>b.py</code>:</p>
<pre><code>import a
def init():
a.c = 1
</code></pre>
<p>Running <code>a.py</code> produces the output:</p>
<pre><code>c = 0
</code></pre>
<p>I expected <code>c</code> to still be 1 from the <code>a.c = 1</code> in <code>b.py</code>.</p>
<p>Also, I tried to reduce this further by deleting the <code>if __name__ == '__main__'</code> from <code>a.py</code>, but then the example no longer runs:</p>
<pre><code>Traceback (most recent call last):
File "...\a.py", line 1, in <module>
import b
File "...\b.py", line 1, in <module>
import a
File "...\a.py", line 3, in <module>
b.init()
AttributeError: module 'b' has no attribute 'init'
</code></pre>
<p>I can't explain that, either, but it seems likely to be related.</p>
<hr>
<p>Following Mata's lead, I believe that the following code shows what's going on. There are three modules involved. <code>a.py</code>:</p>
<pre><code>print("__name__ =", __name__)
import b
print("__name__ =", __name__)
def f(): pass
print(f)
if __name__ == '__main__':
print("f is b.a.f?", f is b.a.f)
</code></pre>
<p><code>b.py</code>:</p>
<pre><code>import a
</code></pre>
<p><code>c.py</code>:</p>
<pre><code>import a
import b
print("__name__ =", __name__)
print("a.f is b.a.f?", a.f is b.a.f)
</code></pre>
<p>You can see the problem by running <code>a.py</code>, giving the result:</p>
<pre><code>__name__ = __main__
__name__ = a
__name__ = a
<function f at 0x0000021A4A947840>
__name__ = __main__
<function f at 0x0000021A484E0400>
f is b.a.f? False
</code></pre>
<p>Running <code>c.py</code> so that <code>__main__</code> isn't part of the import cycle results in:</p>
<pre><code>__name__ = a
__name__ = a
<function f at 0x000001EA101B7840>
__name__ = __main__
a.f is b.a.f? True
</code></pre>
| 1 | 2016-09-26T16:42:18Z | 39,730,609 | <p>Let's take a look at your two-module example step-by-step. The behavior there is expected, but initially confusing and probably explains what is going on pretty well in the other cases.</p>
<p>If you run <code>a</code> as a script, it is not imported as <code>a</code> into <code>sys.modules</code>, but rather as <code>__main__</code>. The first statement is <code>import b</code>, which creates an empty module object <code>sys.modules['b']</code> and begins initializing it.</p>
<p>The first line of <code>b</code> imports <code>a</code> again. Normally, a module object under <code>sys.modules['a']</code> would be found, but in this case, you are running <code>a</code> as a script, so the initial import happened under a different name. Since the name of <code>a</code> is <code>a</code> and not <code>__main__</code> this time, <code>a.c</code> is set to zero and nothing else happens.</p>
<p>Now execution returns to <code>b</code>. It now creates a function <code>init</code>, which sets <code>sys.modules['a'].c</code> to one. I wrote out the reference to the <code>a</code> module very explicitly because this is the root cause of your discrepancy.</p>
<p>Once <code>b</code> is imported, execution returns back to <code>a</code>, but not <code>sys.modules['a']</code>. The next line, <code>c = 0</code> actually sets <code>sys.modules['__main__'].c</code> to zero. Hopefully you see the problem at this point. The next line calls <code>b.init</code>, which sets <code>sys.modules['a']</code> to one. Then you print <code>sys.modules['__main__']</code>, which is zero, as expected.</p>
<p>To verify the correctness of this exposition, try adding a print statement</p>
<pre><code>print(sys.modules['a'].c)
</code></pre>
<p>You will get <code>1</code>. Also, <code>sys.modules['a'] is sys.modules['__main__']</code> will be <code>False</code>. The easiest way to get around this is not to initialize members of other modules in a given module's import.</p>
<p>Your specific case is documented here: <a href="http://effbot.org/zone/import-confusion.htm#using-modules-as-scripts" rel="nofollow">http://effbot.org/zone/import-confusion.htm#using-modules-as-scripts</a>.</p>
<p><strong>Additional Resources</strong></p>
<p>You can get a lot more information on the gritty details of the import system here: <a href="https://docs.python.org/3/reference/import.html" rel="nofollow">https://docs.python.org/3/reference/import.html</a>. Various traps and caveats of imports are described here: <a href="http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html" rel="nofollow">http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html</a>.</p>
| 1 | 2016-09-27T17:10:10Z | [
"python",
"python-3.x"
]
|
TypeError: can't multiply sequence by non-int of type 'float' (python 2.7) | 39,708,133 | <p>I have a dataframe <code>t_unit</code>, which is the result of a <code>pd.read_csv()</code> function.</p>
<pre><code>datetime B18_LR_T B18_B1_T
24/03/2016 09:00 21.274 21.179
24/03/2016 10:00 19.987 19.868
24/03/2016 11:00 21.632 21.417
24/03/2016 12:00 26.285 24.779
24/03/2016 13:00 26.897 24.779
</code></pre>
<p>I am resampling the dataframe to calculate the 5th and 05th percentiles with the code:</p>
<pre><code>keys_actual = list(t_unit.columns.values)
for key in keys_actual:
ts_wk = t_unit[key].resample('W-MON')
ts_wk_05p = ts_wk.apply(lambda x: x.quantile(0.05)).round(decimals=1).rename(key+'_05p', inplace=True)
ts_wk_95p = ts_wk.apply(lambda x: x.quantile(0.95)).round(decimals=1).rename(key+'_95p', inplace=True)
</code></pre>
<p>All works fine, but when I add a column to my dataframe, by means of <code>pd.concat</code>, into:</p>
<pre><code>datetime B18_LR_T B18_B1_T ext_T
24/03/2016 09:00 21.274 21.179 6.9
24/03/2016 10:00 19.987 19.868 7.5
24/03/2016 11:00 21.632 21.417 9.1
24/03/2016 12:00 26.285 24.779 9.9
24/03/2016 13:00 26.897 24.779 9.2
</code></pre>
<blockquote>
<p>ts_wk_05p = ts_wk.apply(lambda x: x.quantile(0.05)).round(decimals=1).rename(key+'_05p', inplace=True)</p>
<p>TypeError: can't multiply sequence by non-int of type 'float'</p>
</blockquote>
<p>Do you have any idea why?</p>
| 0 | 2016-09-26T16:43:52Z | 39,710,267 | <p>There is problem some column is not numeric.
You can check <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dtypes.html" rel="nofollow"><code>dtypes</code></a>:</p>
<pre><code>print (t_unit.dtypes)
B18_LR_T float64
B18_B1_T float64
ext_T object
dtype: object
</code></pre>
<p>Then try convert to numeric first by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a>:</p>
<pre><code>t_unit.ext_T = t_unit.ext_T.astype(float)
</code></pre>
<p>If:</p>
<blockquote>
<p>ValueError: could not convert string to float</p>
</blockquote>
<p>then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> with parameter <code>errors='coerce'</code> for convert bad data to <code>NaN</code>:</p>
<pre><code>t_unit.ext_T = pd.to_numeric(t_unit.ext_T, errors='coerce')
</code></pre>
<p>All code:</p>
<pre><code>#simulate string column
t_unit.ext_T = t_unit.ext_T.astype(str)
print (t_unit.dtypes)
B18_LR_T float64
B18_B1_T float64
ext_T object
dtype: object
#convert to float
t_unit.ext_T = t_unit.ext_T.astype(float)
print (t_unit)
L = []
for key in t_unit.columns:
ts_wk = t_unit[key].resample('W-MON')
#remove inplace=True
ts_wk_05p = ts_wk.apply(lambda x: x.quantile(0.05)).round(decimals=1).rename(key+'_05p')
ts_wk_95p = ts_wk.apply(lambda x: x.quantile(0.95)).round(decimals=1).rename(key+'_95p')
L.append(ts_wk_05p)
L.append(ts_wk_95p)
print (pd.concat(L, axis=1))
B18_LR_T_05p B18_LR_T_95p B18_B1_T_05p B18_B1_T_95p ext_T_05p \
datetime
2016-03-28 20.2 26.8 20.1 24.8 7.0
ext_T_95p
datetime
2016-03-28 9.8
</code></pre>
| 0 | 2016-09-26T18:52:21Z | [
"python",
"pandas",
"time-series",
"resampling",
"quantile"
]
|
error while installing django | 39,708,136 | <p>I am trying to learn Django framework.
I have windows machine and on that I have installed virtual machine (oracle VM VirtualBox manager).
I have python installed on that.</p>
<p>I dont see pip installed on this VM. so i tried to install django via below:</p>
<pre><code> wget https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>This gives the following error:</p>
<pre><code> Resolving www.djangoproject.com... 162.242.220.127, 2001:4802:7801:102:be76:4eff:fe20:789f
Connecting to www.djangoproject.com|162.242.220.127|:443... connected.
ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection
--------------------
</code></pre>
<p>so i tried below:</p>
<pre><code>wget --user=username --password --no-check-certificate https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>but still error:</p>
<pre><code>ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
</code></pre>
<p>Can somebody help?</p>
| 0 | 2016-09-26T16:44:11Z | 39,708,488 | <p>why not just download from <a href="https://github.com/django/django/archive/master.tar.gz" rel="nofollow">github repo</a></p>
<p>in that way this will work </p>
<p><code>wget https://github.com/django/django/archive/master.tar.gz</code></p>
| -1 | 2016-09-26T17:04:11Z | [
"python",
"django"
]
|
error while installing django | 39,708,136 | <p>I am trying to learn Django framework.
I have windows machine and on that I have installed virtual machine (oracle VM VirtualBox manager).
I have python installed on that.</p>
<p>I dont see pip installed on this VM. so i tried to install django via below:</p>
<pre><code> wget https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>This gives the following error:</p>
<pre><code> Resolving www.djangoproject.com... 162.242.220.127, 2001:4802:7801:102:be76:4eff:fe20:789f
Connecting to www.djangoproject.com|162.242.220.127|:443... connected.
ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection
--------------------
</code></pre>
<p>so i tried below:</p>
<pre><code>wget --user=username --password --no-check-certificate https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>but still error:</p>
<pre><code>ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
</code></pre>
<p>Can somebody help?</p>
| 0 | 2016-09-26T16:44:11Z | 39,708,965 | <p>(I am assuming you use Ubuntu on this VM) </p>
<p>Just install pip with the following command: </p>
<pre><code>sudo apt-get install python-pip python-dev build-essential
</code></pre>
| -1 | 2016-09-26T17:31:53Z | [
"python",
"django"
]
|
error while installing django | 39,708,136 | <p>I am trying to learn Django framework.
I have windows machine and on that I have installed virtual machine (oracle VM VirtualBox manager).
I have python installed on that.</p>
<p>I dont see pip installed on this VM. so i tried to install django via below:</p>
<pre><code> wget https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>This gives the following error:</p>
<pre><code> Resolving www.djangoproject.com... 162.242.220.127, 2001:4802:7801:102:be76:4eff:fe20:789f
Connecting to www.djangoproject.com|162.242.220.127|:443... connected.
ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection
--------------------
</code></pre>
<p>so i tried below:</p>
<pre><code>wget --user=username --password --no-check-certificate https://www.djangoproject.com/download/1.10.1/tarball/
</code></pre>
<p>but still error:</p>
<pre><code>ERROR: cannot verify www.djangoproject.com's certificate, issued by `/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3':
Unable to locally verify the issuer's authority.
ERROR: certificate common name `djangoproject.com' doesn't match requested host name `www.djangoproject.com'.
To connect to www.djangoproject.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
</code></pre>
<p>Can somebody help?</p>
| 0 | 2016-09-26T16:44:11Z | 39,709,719 | <p>Get pip installed and working and then install it using pip as suggested </p>
<p><strong>Official instructions</strong>
[check out instructions][1]
Per <a href="http://www.pip-installer.org/en/latest/installing.html" rel="nofollow">http://www.pip-installer.org/en/latest/installing.html</a>:</p>
<p>Download get-pip.py, being careful to save it as a .py file rather than .txt. Then, run it from the command prompt:</p>
<pre><code>python get-pip.py
pip install django
pip freeze | grep Django
Django==1.9.9 # to see which version is installed
</code></pre>
<hr>
<p>ps. Keep in mind, once you have pip installed, if you run pip and you still get something like "command not found" you may have to just close the shell and start it again.</p>
| 1 | 2016-09-26T18:19:10Z | [
"python",
"django"
]
|
How can I not read the last line of a csv file if it has simply 1 column in Python? | 39,708,183 | <p>I have 2 csv files with 4 columns like these.</p>
<pre><code>1 3 6 8\n 1 3 7 2\n
7 9 1 3\n 9 2 4 1\n
\n 1 8 2 3\n
</code></pre>
<p>I only want to read rows with at least 2 columns because the row with only \n, it isn´t useful.</p>
<p>Currently, I am reading files using this:</p>
<pre><code>for line in f:
list = line.strip("\n").split(",")
...
</code></pre>
<p>But, I want that lines with only 1 column should be ignored.</p>
| 0 | 2016-09-26T16:47:31Z | 39,708,245 | <p>You can test the <em>truthy</em> value of the line after <em>stripping</em> the whitespace character. An empty string will have a falsy value which you can avoid by processing in an <code>if</code> block:</p>
<pre><code>for line in f:
line = line.strip("\n")
if line: # empty string will be falsy
lst = line.split(",")
</code></pre>
<hr>
<p>To read rows with two columns, you can test the length of the row after splitting:</p>
<pre><code>for line in f:
row = line.strip("\n").split()
if len(row) >= 2:
...
</code></pre>
<p>Be careful to not use the name <code>list</code> in your code as this will shadow the builtin <code>list</code> type.</p>
| 1 | 2016-09-26T16:50:04Z | [
"python",
"python-2.7",
"csv",
"readfile"
]
|
How can I not read the last line of a csv file if it has simply 1 column in Python? | 39,708,183 | <p>I have 2 csv files with 4 columns like these.</p>
<pre><code>1 3 6 8\n 1 3 7 2\n
7 9 1 3\n 9 2 4 1\n
\n 1 8 2 3\n
</code></pre>
<p>I only want to read rows with at least 2 columns because the row with only \n, it isn´t useful.</p>
<p>Currently, I am reading files using this:</p>
<pre><code>for line in f:
list = line.strip("\n").split(",")
...
</code></pre>
<p>But, I want that lines with only 1 column should be ignored.</p>
| 0 | 2016-09-26T16:47:31Z | 39,708,375 | <p>If they are legitimately comma-separated csv files, try the csv module. You will still have to eliminate empty rows, as in the other answers. For this, you can test for an empty list (which is a falsey value when evaluated as an expression in an if statement).</p>
<pre><code>import csv
with open('filename.csv' as f):
reader = csv.reader(f)
rows = [row for row in reader if row]
</code></pre>
<p><code>rows</code> now contains all non-empty rows in the csv file.</p>
| 2 | 2016-09-26T16:57:19Z | [
"python",
"python-2.7",
"csv",
"readfile"
]
|
How can I not read the last line of a csv file if it has simply 1 column in Python? | 39,708,183 | <p>I have 2 csv files with 4 columns like these.</p>
<pre><code>1 3 6 8\n 1 3 7 2\n
7 9 1 3\n 9 2 4 1\n
\n 1 8 2 3\n
</code></pre>
<p>I only want to read rows with at least 2 columns because the row with only \n, it isn´t useful.</p>
<p>Currently, I am reading files using this:</p>
<pre><code>for line in f:
list = line.strip("\n").split(",")
...
</code></pre>
<p>But, I want that lines with only 1 column should be ignored.</p>
| 0 | 2016-09-26T16:47:31Z | 39,708,416 | <p>You can just quit the loop when you hit the blank line.</p>
<pre><code>if not line.strip():
break
</code></pre>
| 2 | 2016-09-26T17:00:11Z | [
"python",
"python-2.7",
"csv",
"readfile"
]
|
Function with multiple options of returns | 39,708,247 | <p>Sorry, if this question was already answered, I couldn't think of a better way to describe this problem :/</p>
<p>Let's say I have a function like this:</p>
<pre><code>def func(arg):
if arg something:
return a, b, c, d
else:
return False
</code></pre>
<p>So, when i call this function like this:</p>
<pre><code>a, b, c, d = func(arg)
</code></pre>
<p>and it happens, that the function only returns False -would this create a problem (since I'm assigning 4 variables but only getting 1 bool) ? </p>
| 2 | 2016-09-26T16:50:13Z | 39,708,266 | <p>You can just test the returned value first, then unpack:</p>
<pre><code>value = func(arg)
if value:
a, b, c, d = value
</code></pre>
<p>Of course, you'll have to deal with what happens in the calling function when <code>abcd</code> don't get assigned. </p>
| 2 | 2016-09-26T16:51:09Z | [
"python",
"function",
"return"
]
|
Function with multiple options of returns | 39,708,247 | <p>Sorry, if this question was already answered, I couldn't think of a better way to describe this problem :/</p>
<p>Let's say I have a function like this:</p>
<pre><code>def func(arg):
if arg something:
return a, b, c, d
else:
return False
</code></pre>
<p>So, when i call this function like this:</p>
<pre><code>a, b, c, d = func(arg)
</code></pre>
<p>and it happens, that the function only returns False -would this create a problem (since I'm assigning 4 variables but only getting 1 bool) ? </p>
| 2 | 2016-09-26T16:50:13Z | 39,708,308 | <p>Yes it will raise a <code>ValueError</code>. You can take care of that by wrapping the assignment with a <code>try-except</code> statement:</p>
<pre><code>try:
a, b, c, d = func(arg)
except ValueError:
# pass or do something else
</code></pre>
<p>Note that you can also check the validation of your returned value but since it's <a href="https://docs.python.org/3.5/glossary.html" rel="nofollow">Easier to ask for forgiveness than permission</a> as a coding style manner, it's better to use a <code>try-except</code> for handling this. </p>
| 2 | 2016-09-26T16:53:21Z | [
"python",
"function",
"return"
]
|
Function with multiple options of returns | 39,708,247 | <p>Sorry, if this question was already answered, I couldn't think of a better way to describe this problem :/</p>
<p>Let's say I have a function like this:</p>
<pre><code>def func(arg):
if arg something:
return a, b, c, d
else:
return False
</code></pre>
<p>So, when i call this function like this:</p>
<pre><code>a, b, c, d = func(arg)
</code></pre>
<p>and it happens, that the function only returns False -would this create a problem (since I'm assigning 4 variables but only getting 1 bool) ? </p>
| 2 | 2016-09-26T16:50:13Z | 39,708,327 | <p>Functions with heterogeneous return types are awkward for the caller.</p>
<p>Can you refactor? If the <code>else</code> case is a failure mode, consider using exceptions for this case - python is not golang. </p>
| 1 | 2016-09-26T16:54:21Z | [
"python",
"function",
"return"
]
|
Function with multiple options of returns | 39,708,247 | <p>Sorry, if this question was already answered, I couldn't think of a better way to describe this problem :/</p>
<p>Let's say I have a function like this:</p>
<pre><code>def func(arg):
if arg something:
return a, b, c, d
else:
return False
</code></pre>
<p>So, when i call this function like this:</p>
<pre><code>a, b, c, d = func(arg)
</code></pre>
<p>and it happens, that the function only returns False -would this create a problem (since I'm assigning 4 variables but only getting 1 bool) ? </p>
| 2 | 2016-09-26T16:50:13Z | 39,708,334 | <p>Yes, this would cause a problem. Generally you don't want to do something like this, it often leads to unexpected errors later on that are difficult to catch. Python allows you to break the rules by returning whatever you want, but you should only do it if you have a good reason. As a general rule, your functions and methods should return the same type no matter what (with the possible exception of <code>None</code>). </p>
| 2 | 2016-09-26T16:54:40Z | [
"python",
"function",
"return"
]
|
Function with multiple options of returns | 39,708,247 | <p>Sorry, if this question was already answered, I couldn't think of a better way to describe this problem :/</p>
<p>Let's say I have a function like this:</p>
<pre><code>def func(arg):
if arg something:
return a, b, c, d
else:
return False
</code></pre>
<p>So, when i call this function like this:</p>
<pre><code>a, b, c, d = func(arg)
</code></pre>
<p>and it happens, that the function only returns False -would this create a problem (since I'm assigning 4 variables but only getting 1 bool) ? </p>
| 2 | 2016-09-26T16:50:13Z | 39,708,454 | <p>How about instead of having mixed return types, just refactor to require the function to have a truthy argument and raise an Error otherwise? This way the Error will be raised inside the function and not on assignment, which seems clearer to me.</p>
<pre><code>def func(arg):
if not arg:
raise ValueError('func requires a truthy argument')
return a, b, c, d
</code></pre>
| 1 | 2016-09-26T17:02:20Z | [
"python",
"function",
"return"
]
|
Python AES Decrypt printed with encrypted text | 39,708,284 | <p>i got a problem with decryption text file that say 'i want to be clear text file'.</p>
<p>the encryption is good work but when i decrypt it's get mixed, clear text and encrypt data.</p>
<p>now i know me writing is a little amateurish.. but i need your help to get it through.
i tried to build it in def functions but it didn't went well..</p>
<p>I'm thinking that the padding in the encryption is the problem, but when i test it on a 16 byte text file it's still was mixed.</p>
<pre><code>#!/usr/bin/env python
from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from Crypto import Random
import os
file_path = raw_input("Enter File path: ")
if os.path.isdir(file_path): #check if the path exists
print "\nFile path founded, continue..\n"
else:
print "\nFile path is not correct\nExiting.\n"
exit(0)
file_name = raw_input("Enter File name: ")
full_path = file_path + file_name
if os.path.isfile(full_path):
print "\nFile name founded, continue..\n"
else:
print "\nFile name is not correct\nExiting.\n"
exit(0)
print "Now encrypt"
key_size = 32 #AES256
iterations = 10000
key = os.urandom(32)
read = open(full_path,'r+')
line = read.readline()
secret = line
length = 16 - (len(secret) % 16) #PKCS7 adds bytes of the length of padding
secret += chr(length) * length
read.close()
salt = Random.new().read(key_size) #salt the hash
iv = Random.new().read(AES.block_size)
derived_key = PBKDF2(key, salt, key_size, iterations)
cipher = AES.new(derived_key, AES.MODE_CBC, iv)
encodedtext = iv + cipher.encrypt(secret)
read = open(full_path, 'w')
read.write(encodedtext)
read.close()
print "Now decrypt"
key_size2 = 32 #AES256
iterations2 = 10000
read2 = open(full_path,'r')
line2 = read2.readline()
secret2 = line2
length2 = 16 - (len(secret2) % 16) #PKCS7 adds bytes of the length of padding
secret2 += chr(length2) * length2
read2.close()
dencodedtext2 = iv + cipher.decrypt(secret2)
read2 = open(full_path, 'w')
read2.write(dencodedtext2)
read2.close()
print "that worked?"
</code></pre>
<p>for the test
i gave a path '/home/****/Dekstop/
and i gave file 'test.txt' with the text 'i want to be clear text file'
and i got this,</p>
<blockquote>
<p>ææï¢î¸¦ã±£à»¸å¼§æ®²í컫è
ãì¢äåç²ï»¿i want to be clear text fileÔÔ
Ô
á¡¶ä´áâºé«±ì¤áµê</p>
</blockquote>
<p>why when i print secret2 after 'dencodedtext2 = iv + cipher.decrypt(secret2)' the text is mixed with encryption?
how can i fixed it?
and what im doing so terrible?</p>
<p>thank a head for any kind of help!</p>
| 0 | 2016-09-26T16:52:18Z | 39,711,171 | <p>There are two major mistakes within the code:</p>
<ul>
<li>the IV needs to be stripped off before (and used during) decryption instead of adding it again to the plaintext; currently your plaintext starts with the IV and then a decrypted IV;</li>
<li>the padding needs to be removed after decryption, not before, currently some padding related muck is still shown at the end.</li>
</ul>
| 1 | 2016-09-26T19:47:22Z | [
"python",
"encryption",
"cryptography",
"aes"
]
|
Compare between 2 Excel files and give difference based on key column | 39,708,455 | <p>I have 2 excel files (which can be coverted to CSV).</p>
<pre><code>File 1:
Last Name First Name id(10 digit) email age course
abc def 1234567890 axd 00 y2k
bcd efg 9012345875 bxe 11 k2z
cnn nbc 5678912345 cxn 00 z2k
File 2:
Group_ID email Person_ID Name(Last,First)
1 axd 1234567890 def,abc
cxn 5678912345 nbc,cnn
</code></pre>
<p>So I want to generate a file, which after comparing the file1[id] and file2[Person_ID] would give me the result (I could also compare between file1[email] and file2[email] as both the Person_ID and email should be unique in each row:</p>
<pre><code>bcd efg 9012345875 bxe 11 k2z
</code></pre>
<p>I haven't yet figured out what and how to use, but you could tell me what python df functions I could use.</p>
| -1 | 2016-09-26T17:02:27Z | 39,711,090 | <p>Assuming that you read file 1 and file 2 into pandas data frames df1 and df2,</p>
<p>df1.loc[df1['id'] != df2['Person_ID']]</p>
| 0 | 2016-09-26T19:41:59Z | [
"python",
"csv",
"pandas"
]
|
Basic input with keydown/keyup in Python | 39,708,532 | <p>i am currently trying to create a little game in python, but as i try to work with keydown/keyup events my system would interpret both events as one. I wrote a simple script to monitor the events that are created by 'pygame'(a module to simplify making games in python) and as i press down a key it instantly shows both the keydown and keyup event in the python shell: <a href="http://i.stack.imgur.com/tc5Xx.png" rel="nofollow">code screenshot</a>. Is this related to my keyboard?</p>
| 0 | 2016-09-26T17:07:23Z | 39,778,081 | <p>Your issue with the KEYUP event immediately after the the KEYDOWN event is caused by the line <code>display = pygame.display.set_mode((800, 600))</code> being within your main loop. The program is creating a new display on every iteration of the loop. This line, needs to be placed before the main loop as it should be placed before the main loop. This is the same with <code>pygame.display.set_caption()</code> as well, as it only needs to be called once as well.</p>
<p>Doing that should clear up your issue with the KEYUP event; however, holding down the key still will not work as key repeat is disabled by default when Pygame is initialized. To enable it, the method <code>pygame.key.set_repeat(delay, interval)</code> must be called. This should also go before your main loop.</p>
<p>In the method the value <code>delay</code> is the delay before the first repeat of the key-press will happen in milliseconds, and if it is set to zero, key repeat will be disabled. The value <code>interval</code> is the time between each consecutive repeat after the first. The documentation for this can be found <a href="http://www.pygame.org/docs/ref/key.html#pygame.key.set_repeat" rel="nofollow">here in the Pygame documantation</a> where the description is as follows:</p>
<blockquote>
<p>When the keyboard repeat is enabled, keys that are held down will generate multiple pygame.KEYDOWN events. The delay is the number of milliseconds before the first repeated pygame.KEYDOWN will be sent. After that another pygame.KEYDOWN will be sent every interval milliseconds. If no arguments are passed the key repeat is disabled.</p>
<p>When pygame is initialized the key repeat is disabled.</p>
</blockquote>
<p>As an example: <code>pygame.key.set_repeat(1, 15)</code> will have essentially no delay and should provide a mostly smooth travel.</p>
| 0 | 2016-09-29T19:01:07Z | [
"python",
"events",
"pygame"
]
|
django request.POST data is empty but works with other similar form | 39,708,559 | <p>I'm new to django and I was following a tutorial from django on how to process POST form data with a model</p>
<p><a href="https://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view</a></p>
<p>I was able to do that for a simple login, and I can print out the variables in the console. The form.is_valid() function is true and works as expected for the login view.</p>
<p>I did the same exact thing for a registration page and I'm getting FALSE returned from the is_valid() function. I was tinkering with csrf tokens and that didn't seem to be causing the issue and thats why I didn't require them.</p>
<p>I think it's going to be a silly problem because I'm able to get the request.POST in the login case but not the registration. Any help is appreciated.</p>
<p>Here is my html form</p>
<pre><code> <div id="login" class="animate form">
<form action="/signin" autocomplete="on" method ="POST">
<!--{% csrf_token} -->
<h1>Log in</h1>
<p>
<label for="username" class="uname" data-icon="u" > Your email or username </label>
<input id="username" name="username" required="required" type="text" placeholder="myusername or mymail@mail.com"/>
</p>
<p>
<label for="password" class="youpasswd" data-icon="p"> Your password </label>
<input id="password" name="password" required="required" type="password" placeholder="eg. X8df!90EO" />
</p>
<p class="keeplogin">
<input type="checkbox" name="loginkeeping" id="loginkeeping" value="loginkeeping" />
<label for="loginkeeping">Keep me logged in</label>
</p>
<p class="login button">
<input type="submit" value="Login" />
<div id="register" class="animate form">
<form action="/register" autocomplete="on" method ="POST">
<!--{% csrf_token %} -->
<h1> Sign up </h1>
<p>
<label for="usernamesignup" class="uname" data-icon="u">Your username</label>
<input id="usernamesignup" name="usernamesignup" required="required" type="text" placeholder="mysuperusername690" />
</p>
<p>
<label for="emailsignup" class="youmail" data-icon="e" > Your email</label>
<input id="emailsignup" name="emailsignup" required="required" type="email" placeholder="mysupermail@mail.com"/>
</p>
<p>
<label for="passwordsignup" class="youpasswd" data-icon="p">Your password </label>
<input id="passwordsignup" name="passwordsignup" required="required" type="password" placeholder="eg. X8df!90EO"/>
</p>
<p>
<label for="passwordsignup_confirm" class="youpasswd" data-icon="p">Please confirm your password </label>
<input id="passwordsignup_confirm" name="passwordsignup_confirm" required="required" type="password" placeholder="eg. X8df!90EO"/>
</p>
<p class="signin button">
<input type="submit" value="Sign up"/>
</code></pre>
<p>Here is the forms.py</p>
<pre><code>from django import forms
from django.forms import CharField
class NameForm(forms.Form):
username = forms.CharField(label = 'username', max_length=25)
password = forms.CharField(label = 'password', max_length=25)
class RegForm(forms.Form):
regName = forms.CharField(label = 'usernamesignup', max_length = 25)
regEmail = forms.CharField(label = 'emailsignup', max_length = 50)
regPassword = forms.CharField(label = 'passwordsignup', max_length = 30)
regPasswordConfirm = forms.CharField(label = 'passwordsignup_confirm', max_length = 30)
</code></pre>
<p>Here is the views.py that's handling login/registration (this is rough draft)</p>
<pre><code>@csrf_exempt
def signin(request):
#if this is a POST request we need to process the login credentials
if request.method == 'POST':
#create the form instance and populate with username/password
form = NameForm(request.POST)
#verify
print form
if form.is_valid():
username = form.cleaned_data['username']
password = form.cleaned_data['password']
print username
print password
return render(request, 'webServer/home.html')
else:
return render(request, 'webServer/login.html')
else:
return render(request, 'webServer/login.html')
@csrf_exempt
def register(request):
if request.method == 'POST':
#create form instance and grab register credentials
form2 = RegForm(request.POST)
#verify not a duplicate entry (email, username)
print form2
if form2.is_valid():
username = form2.cleaned_data['regPassword']
return render(request, 'webServer/home.html')
else:
print 'had error'
return render_to_response('webServer/errors.html', {'form': form2})
else:
return render(request, 'webServer/login.html')
</code></pre>
| -2 | 2016-09-26T17:08:54Z | 39,708,866 | <p>You seem to be using completely different field names in the template from the form - your form has <code>regName</code>, <code>regEmail</code> etc, but your template has <code>usernamesignup</code> etc.</p>
<p>In any case, you should be using the form object itself to output the fields:</p>
<pre><code>{{ form.regName.label_tag }}
{{ form.regName }}
{{ form.regName.errors }}
</code></pre>
| 1 | 2016-09-26T17:25:58Z | [
"python",
"html",
"django",
"forms",
"post"
]
|
Find elements based on neighbor values | 39,708,568 | <p>I would like to know if there is an efficient way to find indexes of elements next to a specific value in a Numpy array.</p>
<p>How can I find indexes of all the elements that are equal to 1 and that are next to a 0 in this array A ? Without a loop checking for the value of the 8 surrounded elements for each element ?</p>
<pre><code> A = [[ 0., 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 0., 0.]]
</code></pre>
<p>I would expect getting the indexes from a result like this :</p>
<pre><code> [[ False, False, False, False, False, False],
[ False, True, True, True, True, False],
[ False, True, False, False, True, False],
[ False, True, False, False, True, False],
[ False, True, True, True, True, False],
[ False, False, False, False, False, False]]
</code></pre>
<p>I used canny edge detection but it does not always work for elements that only have one 0 on the North/South-West or North/South-East neighbor. For example :</p>
<pre><code> B = [[ 1., 0., 0.],
[ 1., 0., 0.],
[ 1., 1., 1.]]
</code></pre>
<p>can lead to</p>
<pre><code> [[ True, False, False],
[ True, False, False],
[ False, True, True]]
</code></pre>
<p>instead of </p>
<pre><code> [[ True, False, False],
[ True, False, False],
[ True, True, True]]
</code></pre>
<p>Thanks</p>
<p>update 1 : I did try first canny edge detection form scikit.image but it misses elements. I then tried with np.gradient with same results.</p>
<p>update 2 : Example :</p>
<pre><code>B=np.array([[1,1,1,0,0,0,0],
[1,1,1,0,0,0,0],
[1,1,1,1,0,0,0],
[1,1,1,1,0,0,0],
[1,1,1,1,1,0,0],
[1,1,1,1,1,0,0],
[1,1,1,0,0,0,0],
[1,1,1,1,0,0,0]])
</code></pre>
<p>On this example, both the canny edge detection, the gradient method, and the ndimage.laplace (method mentioned in the answer below) lead to the same results, with missing elements (in yellow on the figure)
<a href="http://i.stack.imgur.com/bCToO.png" rel="nofollow"><img src="http://i.stack.imgur.com/bCToO.png" alt="results"></a></p>
<p>update 2:</p>
<p>Here is the looping method</p>
<pre><code>def check_8neigh_Value(arr,eltvalue, neighvalue):
"checking if the element of value=eltvalue is surrounded
by the value=neighvalue and returning the corresponding grid"
l, c = np.shape(arr)
contour = np.zeros(np.shape(arr))
for i in np.arange(1,l-1):
for j in np.arange(1,c-1):
window = arr[i-1:i+2, j-1:j+2]
if np.logical_and(arr[i,j]==eltvalue,neighvalue in window):
contour[i,j]=1
return contour
image=check_8neigh_Value(B,1,0)
</code></pre>
<p>It gives me what I am looking for, however it is not efficient on large array.</p>
<p>I am stuck with the as_strided method, since I don't low how to use the result:</p>
<p>For a 3 by 3 window using the array B, I am able to get the as_stried B but can't get further.</p>
<pre><code>window_h=3
window_w=3
l, c = image.shape
l_new, c_new = l - window_h + 1, c - window_w + 1
shape=[c_new, l_new, window_w, window_h]
strides=B.strides + B.strides
strided_image = np.lib.stride_tricks.as_strided(B,shape=shape,strides=strides)
</code></pre>
| 2 | 2016-09-26T17:09:43Z | 39,709,106 | <p>Here's one approach using binary erosion:</p>
<pre><code>import numpy as np
from scipy import ndimage
eroded = ndimage.binary_erosion(A, np.eye(3))
diff = (A - eroded).astype(np.bool)
print(repr(diff))
# array([[False, False, False, False, False, False],
# [False, True, True, True, True, False],
# [False, True, False, False, True, False],
# [False, True, False, False, True, False],
# [False, True, True, True, True, False],
# [False, False, False, False, False, False]], dtype=bool)
</code></pre>
<p>You could also take the Laplacian of your input array and find where it is negative:</p>
<pre><code>lap = ndimage.laplace(A)
print(repr(lap < 0))
# array([[False, False, False, False, False, False],
# [False, True, True, True, True, False],
# [False, True, False, False, True, False],
# [False, True, False, False, True, False],
# [False, True, True, True, True, False],
# [False, False, False, False, False, False]], dtype=bool)
</code></pre>
| 2 | 2016-09-26T17:40:04Z | [
"python",
"arrays",
"numpy",
"edge-detection"
]
|
How to insert children of one xml node in another xml node with python | 39,708,647 | <p>I have follwing xml file:</p>
<pre><code><root>
<nodeA>
<childrens_A>
</nodeA>
<nodeB>
<childrens_B>
</nodeB>
<nodeA>
<childrens_A>
</nodeA>
<nodeB>
<childrens_B>
</nodeB>
</root>
</code></pre>
<p>I want get something like</p>
<pre><code><root>
<nodeA>
<childrens_A>
<childrens_B>
</nodeA>
<nodeA>
<childrens_A>
<childrens_B>
</nodeA>
</root>
</code></pre>
<p>Numbers of nodes A and B equal.
I can import only from standard python library. I cannot import <code>lxml</code> because access restriction. So i want be limited <code>from xml.etree import ElementTree as et</code></p>
<p>My code is:</p>
<pre><code>from xml.etree import ElementTree as et
tree = et.parse(path_in)
root = tree.getroot()
for child in root.gethcildren()
if child.tag == "nodeA"
#insert children of nodeB in nodeA
tr.write(path_out)
</code></pre>
<p>Thanks in advance!</p>
| 0 | 2016-09-26T17:14:34Z | 39,753,303 | <p>Looks like i find solution:</p>
<pre><code>from xml.etree import ElementTree as et
tr = et.parse(path_in)
root = tr.getroot()
for child in root.getchildren():
if child.tag == 'nodeB':
sub = child.getchildren()
i = root.getchildren().index(child)
root.getchildren()[i - 1].extend(sub)
tr.write(path_out)
</code></pre>
<p>Hope once this answer can help to somebody.</p>
| 0 | 2016-09-28T16:45:53Z | [
"python",
"xml",
"python-2.7"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,708,766 | <p>Unless I'm missing something here, the answer is quite obvious: the first case has an attribute assignment which triggers the <code>__setattr__</code> method, in the line:</p>
<pre><code>self.PublicAttribute1 = "attribute"
</code></pre>
<p>But the second case has no such attribute assignment. </p>
| 0 | 2016-09-26T17:21:10Z | [
"python"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,708,827 | <p><code>__setattr__()</code> is called whenever a value is assigned to any of the class's property. Even if the property is initialized in the <code>__init__()</code>, it will make call to <code>__setattr__()</code>. Below is the example to illustrate that:</p>
<pre><code>>>> class X(object):
... def __init__(self, a, b):
... self.a = a
... self.b = b
... def __setattr__(self, k, v):
... print 'Set Attr: {} -> {}'.format(k, v)
...
>>> x = X(1, 3) # <-- __setattr__() called by __init__()
Set Attr: a -> 1
Set Attr: b -> 3
>>> x.a = 9 # <-- __setattr__() called during external assignment
Set Attr: a -> 9
</code></pre>
| 0 | 2016-09-26T17:24:06Z | [
"python"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,708,928 | <p>The below can be found in the python documentation</p>
<blockquote>
<p>Attribute assignments and deletions update the instanceâs dictionary, never a classâs dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly.</p>
</blockquote>
<p>As you can clearly see the <em>__setattr__()</em> changes the instances dictionary when it is called. So whenever you try to assign a <strong>self.instance</strong> it raises your exception <strong>Exception("Attribute is Read-Only")</strong>.
Whereas that's not the case with setting a class's attribute so no exception is raised.</p>
| 2 | 2016-09-26T17:29:33Z | [
"python"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,709,070 | <p><code>__setattr__</code> is only invoked on <em>instances</em> of a class and not the actual class itself. However, the class is still an object, so when setting the attribute of the class the <code>__setattr__</code> of the class of the class is invoked. </p>
<p>To change how attributes are set on classes you need to look into meta-classes. Metaclasses are overkill for almost any application, and can get confusing very easily. And trying to make a user-defined class/object read-only is normally a bad idea. That said, here is a simple example. </p>
<pre><code>>>> class SampleMetaclass(type): # type as parent, not object
... def __setattr__(self, name, value):
... raise AttributeError("Class is read-only")
...
>>> class SampleClass(metaclass=SampleMetaclass):
... def __setattr__(self, name, value):
... raise AttributeError("Instance is read-only")
...
>>> SampleClass.attr = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __setattr__
AttributeError: Class is read-only
>>> s = SampleClass()
>>> s.attr = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __setattr__
AttributeError: Instance is read-only
</code></pre>
| 2 | 2016-09-26T17:38:10Z | [
"python"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,709,154 | <p>The <code>__setattr__</code> method defined in a class is only called for attribute assignments on <em>istances</em> of the class. It is not called for class variables, since they're not being assigned on an instance of the class with the method.</p>
<p>Of course, classes are instances too. They're instances of <code>type</code> (or a custom metaclass, usually a subclass of <code>type</code>). So if you want to prevent class variable creation, you need to create a metaclass with a <code>__setattr__</code> method.</p>
<p>But that's not really what you need to make your class do what you want. To just get a read-only attribute that can only be written once (in the <code>__init__</code> method), you can probably get by with some simpler logic. One approach is to set another attribute at the end of the <code>__init__</code> method, which tells <code>__setattr__</code> to lock down assignments after it is set:</p>
<pre><code>class Foo:
def __init__(self, a, b):
self.a = a
self.b = b
self._initialized = True
def __setattr__(self, name, value):
if self.__dict__.get('_initialized'):
raise Exception("Attribute is Read-Only")
super().__setattr__(name, value)
</code></pre>
<p>Another option would be to use <code>property</code> descriptors for the read-only attributes and store the real values in "private" variables that can be assigned to normally. You'd not use <code>__setattr__</code> in this version:</p>
<pre><code>class Foo():
def __init__(a, b):
self._a = a
self._b = b
@property
def a(self):
return self._a
@property
def b(self):
return self._b
</code></pre>
| 3 | 2016-09-26T17:43:01Z | [
"python"
]
|
How does __setattr__ work with class attributes? | 39,708,662 | <p>I am trying to understand how exactly __setattr__ works with class attributes. This question came about when I had attempted to override __setattr__ to prevent attributes from being written to in a simple class.</p>
<p>My first attempt used instance level attributes as follows:</p>
<pre><code>class SampleClass(object):
def __init__(self):
self.PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I had originally thought that only external attempts to set the attribute would throw an error, but even the statement inside the __init__ caused the exception to be thrown.</p>
<p>Later I ended up finding another way to do this, while attempting to solve a different problem. What I ended up with was:</p>
<pre><code>class SampleClass(object):
PublicAttribute1 = "attribute"
def __setattr__(self, key, value):
raise Exception("Attribute is Read-Only")
</code></pre>
<p>I expected to get the same result, but to my surpirse I was able to set the class attribute while preventing changes from being made after the initial declaration. </p>
<p>I don't understand why this works though. I know it has to do with class vs. instance variables. My theory is that using __setattr__ on a class variable will create an instance variable of the same name, since I believe I think I have seen this behavior before, but I am not sure.</p>
<p>Can anyone explain exactly what is happening here?</p>
| 2 | 2016-09-26T17:15:19Z | 39,709,237 | <p><code>__setattr__()</code> applies only to instances of the class. In your second example, when you define <code>PublicAttribute1</code>, you are defining it on the class; there's no instance, so <code>__setattr__()</code> is not called.</p>
<p>N.B. In Python, things you access using the <code>.</code> notation are called attributes, not variables. (In other languages they might be called "member variables" or similar.)</p>
<p>You're correct that the class attribute will be shadowed if you set an attribute of the same name on an instance. For example:</p>
<pre><code>class C(object):
attr = 42
c = C()
print(c.attr) # 42
c.attr = 13
print(c.attr) # 13
print(C.attr) # 42
</code></pre>
<p>Python resolves attribute access by first looking on the instance, and if there's no attribute of that name on the instance, it looks on the instance's class, then that class's parent(s), and so on until it gets to <code>object</code>, the root object of the Python class hierarchy.</p>
<p>So in the example above, we define <code>attr</code> on the class. Thus, when we access <code>c.attr</code> (the instance attribute), we get 42, the value of the attribute on the class, because there's no such attribute on the instance. When we set the attribute of the instance, then print <code>c.attr</code> again, we get the value we just set, because there is now an attribute by that name on the instance. But the value <code>42</code> still exists as the attribute of the class, <code>C.attr</code>, as we see by the third <code>print</code>.</p>
<p>The statement to set the instance attribute in your <code>__init__()</code> method is handled by Python like any code to set an attribute on an object. Python does not care whether the code is "inside" or "outside" the class. So, you may wonder, how can you bypass the "protection" of <code>__setattr__()</code> when initializing the object? Simple: you call the <code>__setattr__()</code> method of a class that doesn't have that protection, usually your parent class's method, and pass it your instance.</p>
<p>So instead of writing:</p>
<pre><code>self.PublicAttribute1 = "attribute"
</code></pre>
<p>You have to write:</p>
<pre><code> object.__setattr__(self, "PublicAttribute1", "attribute")
</code></pre>
<p>Since attributes are stored in the instance's attribute dictionary, named <code>__dict__</code>, you can also get around your <code>__setattr__</code> by writing directly to that:</p>
<pre><code> self.__dict__["PublicAttribute1"] = "attribute"
</code></pre>
<p>Either syntax is ugly and verbose, but the relative ease with which you can subvert the protection you're trying to add (after all, if you can do that, so can anyone else) might lead you to the conclusion that Python doesn't have very good support for protected attributes. In fact it doesn't, and this is by design. "We're all consenting adults here." You should not think in terms of public or private attributes with Python. All attributes are public. There is a <em>convention</em> of naming "private" attributes with a single leading underscore; this warns whoever is using your object that they're messing with an implementation detail of some sort, but they can still do it if they need to and are willing to accept the risks.</p>
| 1 | 2016-09-26T17:48:28Z | [
"python"
]
|
Capture files that have been modified in the past x days in Python | 39,708,674 | <p>I'm using the below script to re-encode my existing media files to MP4 using the HandBrake CLI. It's going to be a long process, so I'd like to have a way to capture files that have been created in the past 7 days, as well as the other filters (on file extensions), so that new content can be updated, while older content can be run on a separate script at different times. What do I have to change in the script to only capture files created in the past 7 days?</p>
<pre><code>import os
import time
import subprocess
import sys
import httplib
import urllib
from xml.dom import minidom
import logging
import datetime
#Script Directory setup
myDateTime = datetime.datetime.now().strftime("%y-%m-%d-%H-%M")
logdir = 'D:\\logs\\'
logfile = logdir + 'TV_encode-' + myDateTime + '.log'
#Log Handler Setup
logger = logging.getLogger('TV_encode')
hdlr = logging.FileHandler(logfile)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.INFO)
logger.info('Getting list of files to re-encode...')
fileList = []
rootdir = 'T:\\'
logger.info('Using %s as root directory for scan...' % rootdir)
for root, subFolders, files in os.walk(rootdir):
for file in files:
theFile = os.path.join(root,file)
fileName, fileExtension = os.path.splitext(theFile)
if fileExtension.lower() in ('.avi', '.divx', '.flv', '.m4v', '.mkv', '.mov', '.mpg', '.mpeg', '.wmv'):
print 'Adding',theFile
logger.info('Adding %s to list of file to re-encode.' % theFile)
fileList.append(theFile)
runstr = '"C:\\Program Files\\Handbrake\\HandBrakeCLI.exe" -i "{0}" -o "{1}" --preset="Normal" --two-pass --turbo'
print '=======--------======='
logger.info('=======--------=======')
logger.info('Starting processing of files...')
while fileList:
inFile = fileList.pop()
logger.info('Original file: %s' % inFile)
fileName, fileExtension = os.path.splitext(inFile)
outFile = fileName+'.mp4'
logger.info('New file: %s' % outFile)
print 'Processing',inFile
logger.info('Processing %s' % inFile)
returncode = subprocess.call(runstr.format(inFile,outFile))
time.sleep(5)
print 'Removing',inFile
logger.info('Removing %s' % inFile)
os.remove(inFile)
logger.info('Sending Pushover notification...')
conn = httplib.HTTPSConnection("api.pushover.net:443")
conn.request("POST", "/1/messages.json",
urllib.urlencode({
"token": "TOKENHERE",
"user": "USERKEY",
"message": "Re-encoding complete for %s" % fileName,
}), {"Content-type": "application/x-www-form-urlencoded"})
conn.getresponse()
</code></pre>
| 1 | 2016-09-26T17:15:47Z | 39,709,112 | <p><code>os.path.getmtime(filename)</code> will give you the modification time in seconds since the epoch.</p>
<p>Use the <code>datetime</code> module to convert it to a <code>datetime</code> object, and compare it as usual.</p>
<pre><code>import datetime
import os
ONE_WEEK_AGO = datetime.datetime.today() - datetime.timedelta(days=7)
mod_date = datetime.datetime.fromtimestamp(os.path.getmtime(theFile))
if mod_date > ONE_WEEK_AGO:
# use the file.
</code></pre>
| 1 | 2016-09-26T17:40:24Z | [
"python",
"handbrake"
]
|
Read list of lists from text file and join and decode | 39,708,736 | <p>I have a list of lists in a text file that contains non encoded unicode. Python is recognising the object being read in as a list without me doing anything like a <code>json.loads()</code>statement direct from the text file.</p>
<p>When I try and read all lines of the text file and then do a for loop to iterate through each sublist nothing happens. When I try encoding the entire object first before attemping to load to JSON nothing also happens. </p>
<p>Here is my code:</p>
<pre><code>import glob
myglob = 'C:\\mypath\\*.txt'
myglob = ''.join(myglob)
for name in glob.glob(myglob):
split_name = name.split('\\')[5]
with open(name) as f:
content = f.readlines()
for xx in content:
print xx.encode('utf-8')
</code></pre>
<p>The input data looks like this:</p>
<pre><code>[['Alexis S\xe1nchez', 'Alexis', 'S\xe1nchez', 'Forward', 'Arsenal', '13', '25244'],['H\xe9ctor Beller\xedn', 'H\xe9ctor', 'Beller\xedn', 'Defender', 'Arsenal', '13', '125211'],['Libor Koz\xe1k', 'Libor', 'Koz\xe1k', 'Forward', 'Aston Villa', '24', '67285']]
</code></pre>
<p>The output should be three lines of strings with the above contents encoded. Can anyone tell me what I am doing wrong?</p>
<p>Thanks</p>
| 0 | 2016-09-26T17:19:06Z | 39,710,607 | <p>Maybe something like next code snippet?</p>
<pre><code>listInput = [['Alexis S\xe1nchez', 'Alexis', 'S\xe1nchez', 'Forward', 'Arsenal', '13', '25244'],
['H\xe9ctor Beller\xedn', 'H\xe9ctor', 'Beller\xedn', 'Defender', 'Arsenal', '13', '125211'],
['Libor Koz\xe1k', 'Libor', 'Koz\xe1k', 'Forward', 'Aston Villa', '24', '67285']]
for listItem in listInput:
for aItem in listItem:
aItem = aItem.encode('utf-8')
print (listItem)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>==> python D:\test\Python\39708736.py
['Alexis Sánchez', 'Alexis', 'Sánchez', 'Forward', 'Arsenal', '13', '25244']
['Héctor BellerÃn', 'Héctor', 'BellerÃn', 'Defender', 'Arsenal', '13', '125211']
['Libor Kozák', 'Libor', 'Kozák', 'Forward', 'Aston Villa', '24', '67285']
==>
</code></pre>
| 0 | 2016-09-26T19:13:42Z | [
"python",
"unicode"
]
|
Making pygame sprites disappear in python | 39,709,065 | <p>I am working on a rpg game and I want my playbutton to disappear as soon as it was pressed. Is there a method of which I can do that?
I have different game states they are: GAME, MENU, START
The playbutton will appear on the START game state and I want it to disappear when it is pressed or when the game state changes.
Thank you for your contribution</p>
| 0 | 2016-09-26T17:37:52Z | 39,710,944 | <p>If the button is truly a sprite, you can:</p>
<ul>
<li>Add the sprite to a group called Buttons.</li>
<li>Render the Buttons on the screen.</li>
<li>Use the kill() method to remove the sprite from the Buttons group.</li>
<li>Re-render the screen on the next round.</li>
</ul>
<p><a href="http://pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite.kill" rel="nofollow">http://pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite.kill</a></p>
| 0 | 2016-09-26T19:33:51Z | [
"python",
"pygame"
]
|
Making pygame sprites disappear in python | 39,709,065 | <p>I am working on a rpg game and I want my playbutton to disappear as soon as it was pressed. Is there a method of which I can do that?
I have different game states they are: GAME, MENU, START
The playbutton will appear on the START game state and I want it to disappear when it is pressed or when the game state changes.
Thank you for your contribution</p>
| 0 | 2016-09-26T17:37:52Z | 39,711,223 | <p>To remove something from the screen you need to draw something else over it. So the most basic answer would be to just stop rendering the button and start render other stuff over it.</p>
<p>A great way to go about it is making all your visible objects inherit <a href="http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite" rel="nofollow">pygame.sprite.Sprite</a> and put them in <a href="http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Group" rel="nofollow">sprite groups</a>. From here you could draw, update and remove sprites easily. </p>
<p>Here's a working example. Press the keys 1, 2 or 3 to make the "Buttons" reappear again:</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((200, 200))
clock = pygame.time.Clock()
class Button(pygame.sprite.Sprite):
def __init__(self, pos, size=(32, 32), image=None):
super(Button, self).__init__()
if image is None:
self.rect = pygame.Rect(pos, size)
self.image = pygame.Surface(size)
else:
self.image = image
self.rect = image.get_rect(topleft=pos)
self.pressed = False
def update(self):
mouse_pos = pygame.mouse.get_pos()
mouse_clicked = pygame.mouse.get_pressed()[0]
if self.rect.collidepoint(*mouse_pos) and mouse_clicked:
print("BUTTON PRESSED!")
self.kill() # Will remove itself from all pygame groups.
image = pygame.Surface((100, 40))
image.fill((255, 0, 0))
buttons = pygame.sprite.Group()
buttons.add(
Button(pos=(50, 25), image=image),
Button(pos=(50, 75), image=image),
Button(pos=(50, 125), image=image)
)
while True:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.QUIT:
quit()
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_1:
buttons.add(Button(pos=(50, 25), image=image))
elif event.key == pygame.K_2:
buttons.add(Button(pos=(50, 75), image=image))
elif event.key == pygame.K_3:
buttons.add(Button(pos=(50, 125), image=image))
buttons.update() # Calls the update method on every sprite in the group.
screen.fill((0, 0, 0))
buttons.draw(screen) # Draws all sprites to the given Surface.
pygame.display.update()
</code></pre>
| 0 | 2016-09-26T19:50:15Z | [
"python",
"pygame"
]
|
Dictionary manipulation | 39,709,147 | <p>I have a dictionary of dictionaries which dialed down a notch or two looks like this:</p>
<pre><code>a = {114907: {114905: 1.4351310915,
114908: 0.84635577943,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
114908: {114906: 1.279878388,
114907: 0.77568252572,
114862: 2.5412545474}
}
</code></pre>
<p>The operation I wanna perform is as follows:</p>
<p>For every key of a:</p>
<ul>
<li>If its value (the innermost dictionary, e.g., <code>{114905: 1.435.., 114908: 0.846.., 114861: 61.490..}</code>) contains keys that are present as keys on the outermost one as well (in this case <code>114908</code>), replace them with the <code>k, v</code> values from the latter and remove it entirely.</li>
<li>Finally, convert the outermost key to a tuple containing both the original key and the key that was popped from the innermost dict.</li>
</ul>
<p>The desired output would be this:</p>
<pre><code>b = {(114907, 114908): {114905: 1.4351310915,
114906: 1.279878388,
114862: 2.5412545474,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003}
}
</code></pre>
<p>I really hope you got what I am trying to achieve here because this is not even describable.</p>
<p>This is what I have so far but it fails in several points and I am deeply convinced that I am going down the wrong road. Eventually I will get there but it would be the most inefficient thing ever coded.</p>
<pre><code>from copy import deepcopy
temp = deepcopy(a)
for item in temp:
for subitems, values in temp[item].items():
if values < 1.0:
for k, v in temp[subitems].items():
if k != item:
a[item][k] = v
# a[item].pop(subitems)
for i in a:
print(i, a[i])
#114908 {114905: 1.4351310915, 114906: 1.279878388, 114907: 0.77568252572, 114861: 61.490648372, 114862: 2.5412545474}
#114907 {114905: 1.4351310915, 114906: 1.279878388, 114908: 0.84635577943, 114861: 61.490648372, 114862: 2.5412545474}
#113820 {113826: 8.6999361654, 113819: 1.1412795216, 111068: 1.1964946282, 117066: 1.5595617822, 113822: 1.1958951003}
</code></pre>
<p>Side question, why does <code>pop</code> in dictionaries return the <code>value</code> only and not the <code>key: value</code> pair?</p>
<p>EDIT</p>
<p>An important detail which might make the thing easier is that another way to look for what has to be modified are the inner dict values. If they are below 1.0 their keys are bound to be keys of the outer dict as well.</p>
| 4 | 2016-09-26T17:42:28Z | 39,709,498 | <pre><code># for each "primary key"
for primary in a.keys():
# for each "sub-key"
for sub_key in a[primary].keys():
# if the sub-key is also a primary key
if sub_key in a.keys():
# assign to the subkey the value of its corresponding primary key
a[primary][sub_key] = a[sub_key]
</code></pre>
<p>Is this what you're looking for, at least for the first part of your question?</p>
| 0 | 2016-09-26T18:05:56Z | [
"python",
"python-3.x",
"dictionary"
]
|
Dictionary manipulation | 39,709,147 | <p>I have a dictionary of dictionaries which dialed down a notch or two looks like this:</p>
<pre><code>a = {114907: {114905: 1.4351310915,
114908: 0.84635577943,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
114908: {114906: 1.279878388,
114907: 0.77568252572,
114862: 2.5412545474}
}
</code></pre>
<p>The operation I wanna perform is as follows:</p>
<p>For every key of a:</p>
<ul>
<li>If its value (the innermost dictionary, e.g., <code>{114905: 1.435.., 114908: 0.846.., 114861: 61.490..}</code>) contains keys that are present as keys on the outermost one as well (in this case <code>114908</code>), replace them with the <code>k, v</code> values from the latter and remove it entirely.</li>
<li>Finally, convert the outermost key to a tuple containing both the original key and the key that was popped from the innermost dict.</li>
</ul>
<p>The desired output would be this:</p>
<pre><code>b = {(114907, 114908): {114905: 1.4351310915,
114906: 1.279878388,
114862: 2.5412545474,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003}
}
</code></pre>
<p>I really hope you got what I am trying to achieve here because this is not even describable.</p>
<p>This is what I have so far but it fails in several points and I am deeply convinced that I am going down the wrong road. Eventually I will get there but it would be the most inefficient thing ever coded.</p>
<pre><code>from copy import deepcopy
temp = deepcopy(a)
for item in temp:
for subitems, values in temp[item].items():
if values < 1.0:
for k, v in temp[subitems].items():
if k != item:
a[item][k] = v
# a[item].pop(subitems)
for i in a:
print(i, a[i])
#114908 {114905: 1.4351310915, 114906: 1.279878388, 114907: 0.77568252572, 114861: 61.490648372, 114862: 2.5412545474}
#114907 {114905: 1.4351310915, 114906: 1.279878388, 114908: 0.84635577943, 114861: 61.490648372, 114862: 2.5412545474}
#113820 {113826: 8.6999361654, 113819: 1.1412795216, 111068: 1.1964946282, 117066: 1.5595617822, 113822: 1.1958951003}
</code></pre>
<p>Side question, why does <code>pop</code> in dictionaries return the <code>value</code> only and not the <code>key: value</code> pair?</p>
<p>EDIT</p>
<p>An important detail which might make the thing easier is that another way to look for what has to be modified are the inner dict values. If they are below 1.0 their keys are bound to be keys of the outer dict as well.</p>
| 4 | 2016-09-26T17:42:28Z | 39,709,930 | <p>This should work</p>
<pre><code>a = {114907: {114905: 1.4351310915,
114908: 0.84635577943,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
114908: {114906: 1.279878388,
114907: 0.77568252572,
114862: 2.5412545474}
}
# Lets call the keys leaders and its value is a dict of
# keys ( call them members ) to floats.
# if a member is also a leader, then the two leaders combine.
leaders = set(a.keys())
leaders_to_members = { leader: set(member_dict.keys()) for leader, member_dict in a.items() }
seen_leaders =set()
b = {}
for leader, members in leaders_to_members.items():
if leader in seen_leaders:
continue
members_as_leaders = members.intersection(leaders)
members_as_leaders.add(leader)
v = {}
for member_leader in members_as_leaders:
v.update(a[member_leader])
seen_leaders.update(members_as_leaders)
# if its just one element, you want it as the key directly
b_key = tuple(members_as_leaders) if len(members_as_leaders) > 1 else members_as_leaders.pop()
# as per your output, you've removed the key to float value if it is a leader
b_val = { k: float_val for k, float_val in v.items() if k not in members_as_leaders }
b[b_key] = b_val
print(b)
</code></pre>
<p>Output</p>
<pre><code>{113820: {111068: 1.1964946282,
113819: 1.1412795216,
113822: 1.1958951003,
113826: 8.6999361654,
117066: 1.5595617822},
(114907, 114908): {114861: 61.490648372,
114862: 2.5412545474,
114905: 1.4351310915,
114906: 1.279878388}}
</code></pre>
<blockquote>
<p>The side question: why does pop in dictionaries return the value only and not the key: value pair?</p>
</blockquote>
<pre><code>>>> a.pop()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: pop expected at least 1 arguments, got 0
>>> help(a.pop)
"""
Help on built-in function pop:
pop(...) method of builtins.dict instance
D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
If key is not found, d is returned if given, otherwise KeyError is raised
"""
</code></pre>
<p>As you can see, pop expects the key, so it can pop the value. Since you are required to give it the key, it doesn't have to return the key back to you.</p>
| 1 | 2016-09-26T18:32:43Z | [
"python",
"python-3.x",
"dictionary"
]
|
Dictionary manipulation | 39,709,147 | <p>I have a dictionary of dictionaries which dialed down a notch or two looks like this:</p>
<pre><code>a = {114907: {114905: 1.4351310915,
114908: 0.84635577943,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
114908: {114906: 1.279878388,
114907: 0.77568252572,
114862: 2.5412545474}
}
</code></pre>
<p>The operation I wanna perform is as follows:</p>
<p>For every key of a:</p>
<ul>
<li>If its value (the innermost dictionary, e.g., <code>{114905: 1.435.., 114908: 0.846.., 114861: 61.490..}</code>) contains keys that are present as keys on the outermost one as well (in this case <code>114908</code>), replace them with the <code>k, v</code> values from the latter and remove it entirely.</li>
<li>Finally, convert the outermost key to a tuple containing both the original key and the key that was popped from the innermost dict.</li>
</ul>
<p>The desired output would be this:</p>
<pre><code>b = {(114907, 114908): {114905: 1.4351310915,
114906: 1.279878388,
114862: 2.5412545474,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003}
}
</code></pre>
<p>I really hope you got what I am trying to achieve here because this is not even describable.</p>
<p>This is what I have so far but it fails in several points and I am deeply convinced that I am going down the wrong road. Eventually I will get there but it would be the most inefficient thing ever coded.</p>
<pre><code>from copy import deepcopy
temp = deepcopy(a)
for item in temp:
for subitems, values in temp[item].items():
if values < 1.0:
for k, v in temp[subitems].items():
if k != item:
a[item][k] = v
# a[item].pop(subitems)
for i in a:
print(i, a[i])
#114908 {114905: 1.4351310915, 114906: 1.279878388, 114907: 0.77568252572, 114861: 61.490648372, 114862: 2.5412545474}
#114907 {114905: 1.4351310915, 114906: 1.279878388, 114908: 0.84635577943, 114861: 61.490648372, 114862: 2.5412545474}
#113820 {113826: 8.6999361654, 113819: 1.1412795216, 111068: 1.1964946282, 117066: 1.5595617822, 113822: 1.1958951003}
</code></pre>
<p>Side question, why does <code>pop</code> in dictionaries return the <code>value</code> only and not the <code>key: value</code> pair?</p>
<p>EDIT</p>
<p>An important detail which might make the thing easier is that another way to look for what has to be modified are the inner dict values. If they are below 1.0 their keys are bound to be keys of the outer dict as well.</p>
| 4 | 2016-09-26T17:42:28Z | 39,710,128 | <p>How about this:</p>
<pre><code>import itertools
b ={}
for k1,v1 in a.items():
for k2,v2 in v1.items():
if k2 in a:
a[k2].pop(k1)
a[k1].pop(k2)
dest = dict(itertools.chain(a[k1].items(), a[k2].items())) #python 2.7
b[(k1,k2)] = dest
print b
</code></pre>
<p>answer:</p>
<pre><code>{(114908, 114907): {114905: 1.4351310915, 114906: 1.279878388, 114861: 61.490648372, 114862: 2.5412545474}}
</code></pre>
| 0 | 2016-09-26T18:43:58Z | [
"python",
"python-3.x",
"dictionary"
]
|
Dictionary manipulation | 39,709,147 | <p>I have a dictionary of dictionaries which dialed down a notch or two looks like this:</p>
<pre><code>a = {114907: {114905: 1.4351310915,
114908: 0.84635577943,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
114908: {114906: 1.279878388,
114907: 0.77568252572,
114862: 2.5412545474}
}
</code></pre>
<p>The operation I wanna perform is as follows:</p>
<p>For every key of a:</p>
<ul>
<li>If its value (the innermost dictionary, e.g., <code>{114905: 1.435.., 114908: 0.846.., 114861: 61.490..}</code>) contains keys that are present as keys on the outermost one as well (in this case <code>114908</code>), replace them with the <code>k, v</code> values from the latter and remove it entirely.</li>
<li>Finally, convert the outermost key to a tuple containing both the original key and the key that was popped from the innermost dict.</li>
</ul>
<p>The desired output would be this:</p>
<pre><code>b = {(114907, 114908): {114905: 1.4351310915,
114906: 1.279878388,
114862: 2.5412545474,
114861: 61.490648372},
113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003}
}
</code></pre>
<p>I really hope you got what I am trying to achieve here because this is not even describable.</p>
<p>This is what I have so far but it fails in several points and I am deeply convinced that I am going down the wrong road. Eventually I will get there but it would be the most inefficient thing ever coded.</p>
<pre><code>from copy import deepcopy
temp = deepcopy(a)
for item in temp:
for subitems, values in temp[item].items():
if values < 1.0:
for k, v in temp[subitems].items():
if k != item:
a[item][k] = v
# a[item].pop(subitems)
for i in a:
print(i, a[i])
#114908 {114905: 1.4351310915, 114906: 1.279878388, 114907: 0.77568252572, 114861: 61.490648372, 114862: 2.5412545474}
#114907 {114905: 1.4351310915, 114906: 1.279878388, 114908: 0.84635577943, 114861: 61.490648372, 114862: 2.5412545474}
#113820 {113826: 8.6999361654, 113819: 1.1412795216, 111068: 1.1964946282, 117066: 1.5595617822, 113822: 1.1958951003}
</code></pre>
<p>Side question, why does <code>pop</code> in dictionaries return the <code>value</code> only and not the <code>key: value</code> pair?</p>
<p>EDIT</p>
<p>An important detail which might make the thing easier is that another way to look for what has to be modified are the inner dict values. If they are below 1.0 their keys are bound to be keys of the outer dict as well.</p>
| 4 | 2016-09-26T17:42:28Z | 39,712,237 | <p>In Python3.x, <code>{}.keys()</code> returns a view. You can use set operations on a dict view.</p>
<p>So your algorithm is somewhat simplified to:</p>
<pre><code>outer=a.keys()
deletions=set()
new_a={}
for k,di in a.items():
c=outer & di.keys()
if c:
c=c.pop()
if (c,k) not in deletions:
deletions.add((k,c))
else:
new_a[k]=di
for t in deletions:
del a[t[0]][t[1]], a[t[1]][t[0]]
new_a[t]=a[t[0]]
new_a[t].update(a[t[1]])
>>> new_a
{113820: {113826: 8.6999361654,
113819: 1.1412795216,
111068: 1.1964946282,
117066: 1.5595617822,
113822: 1.1958951003},
(114908, 114907): {114905: 1.4351310915,
114906: 1.279878388,
114861: 61.490648372,
114862: 2.5412545474}}
</code></pre>
<p>The order of the elements in the tuple may vary depending on the order of iteration and the order of the set operations. Both are unordered with dicts. Since the elements may vary, which dict which is used as the update dict is also unordered. </p>
<p>This function also only works with a single intersection; i.e., there are not tuples created with more than 2 elements as keys. </p>
| 1 | 2016-09-26T20:55:56Z | [
"python",
"python-3.x",
"dictionary"
]
|
PySys ProcessMonitor timestamps use different formats on Windows and Linux | 39,709,182 | <p>We're using PySys partly for performance testing, including the ProcessMonitor class to monitor CPU and memory usage, and a part of our tests parses the timestamp in the first column of the ProcessMonitor output, which requires us knowing the exact format. We run our tests on both Windows and Linux and have found that the format string is different on each platform's implementation of ProcessMonitor ("<strong>%d/%m</strong>/%y %H:%M:%S" on Windows and "<strong>%m/%d</strong>/%y %H:%M:%S" on Linux, with the month/day flipped).</p>
<p>We're on 1.1, though I see this issue is not yet patched in 1.2, the current latest version. Easy enough to patch on our own, but of course preferable to have it rolled into a future version. Thanks!</p>
| 0 | 2016-09-26T17:45:19Z | 39,711,891 | <p>This is an inconsistency in the framework as pointed out, thanks. I've added a defect to the project so it can be tracked there (<a href="https://sourceforge.net/p/pysys/bugs/15" rel="nofollow">https://sourceforge.net/p/pysys/bugs/15</a>) and we can take offline from SO. </p>
| 0 | 2016-09-26T20:33:19Z | [
"python",
"performance-testing"
]
|
Sorting list of tuples of tuples | 39,709,197 | <pre><code>[((D,A),0.0),((D,C),0.0),((D,E),0.5)]
</code></pre>
<p>I need to sort the list as:</p>
<pre><code>[((D,E),0.5),((D,A),0.0),((D,C),0.0)]
</code></pre>
<p>I have used the <code>sorted()</code> function and I am able to sort based on values <code>0.5, 0.0</code>... But I am not able to sort on the alphabetical order as I need the list to be sorted in descending order by the numbers and in ascending order of alphabets if the numbers have same value.</p>
| -1 | 2016-09-26T17:46:15Z | 39,709,308 | <p>Use a tuple as the sort key with a negative on the float to reverse the order:</p>
<pre><code>>>> li=[(('D','A'),0.0),(('D','C'),0.0),(('D','E'),0.5)]
>>> sorted(li, key=lambda t: (-t[-1],t[0]))
[(('D', 'E'), 0.5), (('D', 'A'), 0.0), (('D', 'C'), 0.0)]
</code></pre>
<p>If you cannot do negation (say on a string or letter value or something non numeric) then you can take advantage of the fact that the Python sort function is stable and do the sort in two steps:</p>
<pre><code>>>> li=[(('D','A'),'A'),(('D','C'),'A'),(('D','E'),'C')]
>>> sorted(sorted(li), key=lambda t: t[-1], reverse=True)
[(('D', 'E'), 'C'), (('D', 'A'), 'A'), (('D', 'C'), 'A')]
</code></pre>
| 2 | 2016-09-26T17:52:56Z | [
"python",
"python-3.x",
"sorting"
]
|
Sorting list of tuples of tuples | 39,709,197 | <pre><code>[((D,A),0.0),((D,C),0.0),((D,E),0.5)]
</code></pre>
<p>I need to sort the list as:</p>
<pre><code>[((D,E),0.5),((D,A),0.0),((D,C),0.0)]
</code></pre>
<p>I have used the <code>sorted()</code> function and I am able to sort based on values <code>0.5, 0.0</code>... But I am not able to sort on the alphabetical order as I need the list to be sorted in descending order by the numbers and in ascending order of alphabets if the numbers have same value.</p>
| -1 | 2016-09-26T17:46:15Z | 39,709,445 | <p>Similarly, you could supply <code>sorted</code> with an iterable that is the result of another <code>sort</code>:</p>
<pre><code>>>> from operator import itemgetter
>>> t = [(('D','A'),0.0),(('D','C'),0.0),(('D','E'),0.5)]
>>> sorted(sorted(t), key=itemgetter(1), reverse=True)
[(('D', 'E'), 0.5), (('D', 'A'), 0.0), (('D', 'C'), 0.0)]
</code></pre>
| 1 | 2016-09-26T18:02:17Z | [
"python",
"python-3.x",
"sorting"
]
|
Proper response to Telegram.org "Error 500: RPC_SEND_FAIL" | 39,709,349 | <p>After sending an auth.sendCode method to the telegram.org server I am receiving the following in return:</p>
<pre><code>{'MessageContainer': [{'msg': {u'bad_msg_notification': {u'bad_msg_seqno': 4, u'bad_msg_id': 6334696945916753920L, u'error_code': 35}}, 'seqno': 4, 'msg_id': 6334696948768376833L}, {'msg': {u'msgs_ack': {u'msg_ids': [6334696945916753920L]}}, 'seqno': 4, 'msg_id': 6334696948768387073L}]})
</code></pre>
<p>then:</p>
<pre><code>('sentCode: ', {u'req_msg_id': 6334696967778138112L, u'result': {u'error_message': '[1474911573] [6609] Error 500: RPC_SEND_FAIL: 27029.\nUnhandled Exception caught in file lib/global.lib.php at line 4622.\nBacktrace:\n#0 : 0x16497a7\n#1 : 0xc06579\n#2 : 0x14ddb18\n#3 : 0xc90029\n#4 : 0x127c5e7\n#5 : 0x12b634d\n#6 : 0x13a56d1\n#7 : 0x13a4e27\n#8 : 0x5a5946\n#9 : 0x16052d2\n#10 : 0x7fdd67383d10\n#11 : 0x7fff6e7b2790\n', u'error_code': -504}})
</code></pre>
<p>I understand the <code>bad_msg_seqno</code>, I think, and have a separate question submitted for proper resolution. But the <code>RPC_SEND_FAIL</code> appears to be an error on the server side and, according to the <a href="https://core.telegram.org/api/errors" rel="nofollow">documentation</a>, should be communicated to the telegram developers. What is the best way to do that?</p>
<p>BTW: this error wasn't happening a week ago with the same code. I came back from vacation, ran the unchanged code, and boom!</p>
| 1 | 2016-09-26T17:55:09Z | 39,711,193 | <p>The telegram.org servers are now responding as expected, no more <code>Error 500</code> conditions. All I did was wait a few hours, and updated from layer 54 to layer 55. I didn't see anything in the new layer that would cause the problem so I have to assume something was fixed on the server side. Time will tell.</p>
| 1 | 2016-09-26T19:48:19Z | [
"python",
"api",
"telegram"
]
|
How do I correctly use PCA followed by Logistic Regression? | 39,709,355 | <p>In the program, I am scanning a number of brain samples taken in a time series of 40 x 64 x 64 images every 2.5 seconds. The number of 'voxels' (3D pixels) in each image is thus ~ 168,000 ish (40 * 64 * 64), each of which is a 'feature' for an image sample. </p>
<p>I thought of using Principle Component Analysis (PCA) because of the rediculously high <code>n</code> to perform dimensionality reduction. I am aware PCA does not actually reduce the number of features. Correct me if I'm wrong, but PCA will produce a new set of features from the original ones. The new features however must follow certain conditions.</p>
<p>I definined a method to get the number of components :</p>
<pre><code>def get_optimal_number_of_components():
cov = np.dot(X,X.transpose())/float(X.shape[0])
U,s,v = svd(cov)
S_nn = sum(s)
for num_components in range(0,s.shape[0]):
temp_s = s[0:num_components]
S_ii = sum(temp_s)
if (1 - S_ii/float(S_nn)) <= 0.01:
return num_components
return s.shape[0]
</code></pre>
<p>This function will return the number of components such that 99% of the variance from the original data is retained. Now, we can create these components :</p>
<pre><code>#Scaling the values
X = scale(X)
n_comp = get_optimal_number_of_components()
print 'optimal number of components = ', n_comp
pca = PCA(n_components = n_comp)
X_new = pca.fit_transform(X)
</code></pre>
<p>I get the optimal number of components = 1001 on running the program on this dataset. This number agrees with the plot I obtained on executing :</p>
<pre><code>#Cumulative Variance explains
var1 = np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
plt.plot(var1)
plt.title('Principle Component Analysis for Feature Selection')
plt.ylabel('Percentage of variance')
plt.xlabel('Number of voxels considered')
plt.show()
</code></pre>
<p>After this PCA stage is complete, I used the newly created 'X_new' in place of X for the next stage : Logistic Regression</p>
<pre><code>#After PCA
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_new, y, test_size=0.10, random_state=42)
classifier = LogisticRegression()
classifier.fit(X_train,y_train)
</code></pre>
<p>When I test for the accuracy, I get around <strong>77.57%</strong></p>
<p>But this is less than when I just analysed the middle voxel samples (like 9K samples in the middle of brain image). I was wondering if I pipelined PCA and logistic regression correctly.</p>
<p>I even tried this in another method using <code>sklearn.pipeline</code> :</p>
<pre><code>pipe = Pipeline(steps=[('pca', pca), ('logistic', classifier)])
pipe.set_params(pca__n_components = n_comp).fit(X_train,y_train)
print 'score = ', pipe.score(X_test,y_test)
</code></pre>
<p>But I got the exact same accuracy of 77.57%. Am I implementing PCA + Logistic Regression Correctly? There must be something wrong, I just can't figure out what it is. </p>
| 0 | 2016-09-26T17:55:22Z | 39,709,786 | <p>While I can't immediately find any error, you should try and test how the error behaves if you increase the <code>number of components</code>. Maybe there is some low variance information missing to give the logistic regression the edge it needs?</p>
<p>The <code>99%</code> in PCA are more a guideline than fact.</p>
<p>Other things you could try:
Instead of PCA just remove every feature with <em>zero (or very low) variance</em>. DTI data often has features that never change, and are therefore completely unnecessary for classification.</p>
<p>Try finding features that correlate strongly with your result and try only using these to classify.</p>
<p>Always be careful <strong>not to overfit!</strong></p>
<blockquote>
<p>Correct me if I'm wrong, but PCA will produce a new set of features
from the original ones.</p>
</blockquote>
<p>I will try to describe it as untechnical as possible</p>
<p>Yes. PCA is basically a fancy <a href="https://en.wikipedia.org/wiki/Rotation_of_axes" rel="nofollow">axis transformation</a>. Yes you get a new set of features, but they are linear combinations of the previous features in a ordered way such that the first feature describes as much as possible of the data. </p>
<p>The idea is that, if you have a hyperplane, PCA will actually <em>project the hyperplane</em> to the first axes and leave the last ones nearly empty.</p>
<p>PCA is <strong>linear</strong> dimensionality reduction, so if the true data distribution is not linear, it gives worse results.</p>
<p>Also a friend of mine worked with Brain data similar to yours (lots of features, very little examples) and PCA almost never helped. It might be that the significant pieces of information are not found because too much "noise" is around.</p>
<p>EDIT: Typo</p>
| 1 | 2016-09-26T18:22:36Z | [
"python",
"pca",
"logistic-regression"
]
|
Airflow DB session not providing any environement vabiable | 39,709,370 | <p>As an Airflow and Python newbie, even don't know if I'm asking the right question, but asking anyway.
I've configured airflow on a CentOS system. Use remote MySql instance as the backend. In my code, need to get a number of Variables, the code looks like below: </p>
<pre><code>import os
from airflow.models import Variable
print(os.environ['SHELL'])
local_env['SHELL'] = Variable.get('SHELL')
</code></pre>
<p>And I got following error:</p>
<blockquote>
<p>Traceback (most recent call last): File "test2.py", line 5, in
local_env['SHELL'] = Variable.get('SHELL') File "/com/work/airflowenv/lib/python2.7/site-packages/airflow/utils/db.py",
line 53, in wrapper
result = func(*args, **kwargs) File "/com/work/airflowenv/lib/python2.7/site-packages/airflow/models.py",
line 3134, in get
raise ValueError('Variable {} does not exist'.format(key)) ValueError: Variable SHELL does not exist</p>
</blockquote>
<p>It is the Variable.get() method throws the exception in this piece of code in models.py: </p>
<pre><code> @classmethod
@provide_session
def get(cls, key, default_var=None, deserialize_json=False, session=None):
obj = session.query(cls).filter(cls.key == key).first()
if obj is None:
if default_var is not None:
return default_var
else:
raise ValueError('Variable {} does not exist'.format(key))
</code></pre>
<p>Where session.query already yield None. Don't quite understand how the session is injected here. and why these session variables not set. Should we set up something on the remote MySQL instance? </p>
<p>BTW, we have another identical airflow instance on another machine with local mysql instance. And running the script I provided stand alone has no problem: </p>
<blockquote>
<p>[2016-09-27 01:54:48,341] {<strong>init</strong>.py:36} INFO - Using executor</p>
<p>LocalExecutor</p>
<p>/bin/bash
/bin/bash </p>
</blockquote>
<p>Anything I missed when setting up the airflow?
Thanks,</p>
| 0 | 2016-09-26T17:56:47Z | 39,712,340 | <p>Ok, finally got the problem resolved. What I've done is print out the query, and figured out the variable must be from some relational database table called variable. And dig into the backend DB, found the DB, made the comparation between it and the working one, and figured out that the "variable" table data missed.
The way to add these variable is simple:
airflow variables -s SHELL /bin/bash
and so forth for other variables. </p>
| 0 | 2016-09-26T21:02:49Z | [
"python",
"airflow"
]
|
Getting values from local variables in another procedure in Python 3.x | 39,709,478 | <p>I am currently learning how to program but I don't understand how to access a local variable from one procedure in order to use that variable's value in a different procedure. I am using Python version 3.5.2 and I have confirmed that the code will work when not divided into procedures. </p>
<pre><code>#This program will ask for the users age, weight and birth month, compare
#the input values to secret answers and return a response based on the input.
def main():
age=0
weight=0
month=""
age=float(input("What is your age? "))
weight=float(input("What is your weight? "))
month=input("What is your birth month? ")
evaluate()
def evaluate():
if age<=25:
print("Congratulations, the age is 25 or less!")
elif weight>=128:
print("Congratulations, the weight is 128 or more!")
elif month=='April':
print("Congratulations, the birth month is April!")
main()
</code></pre>
| 0 | 2016-09-26T18:04:53Z | 39,709,736 | <p>So just to show what the commenters said:</p>
<pre><code>def evaluate(theage, theweight, themonth):
if theage<=25:
print("Congratulations, the age is 25 or less!")
elif theweight>=128:
print("Congratulations, the weight is 128 or more!")
elif themonth=='April':
print("Congratulations, the birth month is April!")
#end of evaluate()
def main():
# ... your existing code ...
evaluate(age, weight, month)
</code></pre>
<p><code>evaluate()</code> goes before <code>main()</code> so that <code>main()</code> knows about <code>evaluate()</code>. Define things first, then reference them, and you will avoid obscure problems (although, as someone will certainly point out, that's not always required).</p>
<p>The parameters to <code>evaluate</code> are listed between the <code>()</code>. The values are then passed when <code>evaluate()</code> is called from <code>main()</code>. I renamed the parameters to <code>evaluate()</code> to show you that the names don't have to be the same. With the code above, <code>theage</code> in <code>evaluate()</code> gets a copy of the value of <code>age</code> in <code>main()</code>. The same goes for <code>theweight</code> and <code>weight</code>, and for <code>themonth</code> and <code>month</code>.</p>
| 0 | 2016-09-26T18:19:58Z | [
"python",
"python-3.x",
"procedure",
"local-variables"
]
|
How do I apply both bold and italics in python-docx? | 39,709,527 | <p>I'm working on making a dictionary. I'm using python-docx to put it into MS Word. I can easily make it bold, or italics, but can't seem to figure out how to do both. Here's the basics:</p>
<pre><code>import docx
word = 'Dictionary'
doc = docx.Document()
p = doc.add_paragraph()
p.add_run(word).bold = True
doc.save('test.docx')
</code></pre>
<p>I have tried p.add_run(word).bold.italic = True, but receive a 'NoneType' error, which I understand.</p>
<p>I have also tried p.bold = True and p.italic = True before and after the add_run, but lose formatting all together.</p>
<p>Word's find/replace is a simple solution, but I'd prefer to do it in the code if I can.</p>
| 2 | 2016-09-26T18:07:50Z | 39,709,666 | <p>The <code>add_run</code> method will return a new instance of <code>Run</code> each time it is called. You need create a single instance and then apply <code>italic</code> and <code>bold</code> </p>
<pre><code>import docx
word = 'Dictionary'
doc = docx.Document()
p = doc.add_paragraph()
runner = p.add_run(word)
runner.bold = True
runner.italic = True
doc.save('test.docx')
</code></pre>
| 2 | 2016-09-26T18:16:07Z | [
"python",
"docx",
"python-docx"
]
|
SqlAlchemy Oracle DataTime format | 39,709,575 | <p>I use SqlAlchemy to query an Oracle database and store results in csv files.</p>
<p>I would like to specify a global format for dates to be written like this : </p>
<pre><code>'DD-MM-YYYY HH24:MI:SS'.
</code></pre>
<p>I have set NLS_DATE_FORMAT this way on the system.</p>
<p>For exemple : </p>
<pre><code>datetime.datetime(2016, 12, 22, 13, 12, 35)
</code></pre>
<p>Would end up :</p>
<pre><code>2004-12-22 13:12:35
</code></pre>
<p>I would like :</p>
<pre><code>22-12-2004 13:12:35
</code></pre>
<p>As I process hundreds of tables, I cannot apply 'strftime' "manually".</p>
| 0 | 2016-09-26T18:10:44Z | 39,730,426 | <p>I found a way to solve this issue.</p>
<p>Yes, converting date to char with the appropriate formatting would work.
But, in my case, SQL statement are provided by another module and I need to process more than a hundred tables.</p>
<p>So, I decided to work with datas contained in the ResultProxy object returned by SqlAlchemy by the execute() method.</p>
<p>I fetch a table by chunks of 1000 rows (chunk is classic type list) at a time.
But those rows are kind of tuples (more precisely an SqlAlchemy RowProxy object) and it cannot be modified.</p>
<p>So, I had a treatment to cast them in ordered dictionnaries and update the chunk list.</p>
<p>It's important to use 'collections.OrderedDict' because it keeps fields order.
With a classical dic, then field labels and values might not match.</p>
<p>Now, my chunk is ready for all kind of treatments (change Dates to strings with appropriate formatting, substitute char in VARCHAR strings, etc and so on...). Dictionary structure is perfect for this.</p>
<p>Note, that before writing, OrderedDic rows in the chunk list have to be cast back.</p>
<p>Here is a simplified example : </p>
<pre><code>result_proxy = connection.execute(request)
while True:
chunk = self.result_proxy.fetchmany(1000)
if not chunk:
break
# treatments comes here after :
# 1- transform into a dic in order to be able to modify
for i, row in enumerate(chunk):
chunk[i] = OrderedDict(row)
# 2- clean dates
for i, row_odic in enumerate(chunk):
for item in row_odic:
if(type(row_odic[item]) is datetime.datetime):
row_odic[item] = str(row_odic[item].strftime("%d/%m/%Y"))
chunk[i] = row_odic
# Other data treatment
# cast back for it to look like a classical result :
for c, row_odic in enumerate(chunk):
self.chunk[c] = row_odic.values()
# finally write row_odic.values in the csv file
</code></pre>
<p>I am not sure if it's the most efficient solution but performances look good.
I have a version of this treatment (same volume of datas), but using the Pandas library, that is a bit longer to execute.</p>
| 0 | 2016-09-27T16:59:44Z | [
"python",
"oracle",
"datetime",
"sqlalchemy"
]
|
Intuitive way to use 3D numpy arrays | 39,709,579 | <p>Lets take a simplistic example where I have the data array</p>
<pre><code>A = np.asarray([[1,3], [2,4]])
</code></pre>
<p>And this data is to be transformed into another form following a simple transformation: </p>
<pre><code>Q = np.asarray([[-0.5,1], [1,0.5]])
B = np.dot(Q,np.dot(A,Q.T))
print B
</code></pre>
<p>Now assume that I have a set of data that takes the form of a 2d array for several time steps. For simplicity again assume that this data is just <code>A</code> copied for 3 time steps. We can represent this data as a 3d array with dimensions <code>(2,2,N)</code> where <code>N =3</code> in this case. The third dimension then represents the time index of the data. Now it would be natural to demand to have a simple way of transforming the data as above but for each time step, by an intuitive multiplication of 3d arrays, however I have only been able to make the following work which is non-intuitive:</p>
<pre><code># Create the 3d data array
AA = np.tile(A,(3,1,1)) # shape (3,2,2)
BB = np.dot(Q,np.dot(AA,Q.T))
print np.all( BB[:,0,:] == B ) # Returns true
</code></pre>
<p>So with this method I don't have to recast the <code>Q</code> array to make it work, but now the second dimension acts as the "time" index which is a bit counter intuitive since in <code>AA</code> it was the first dimension that denoted the time... Ideally I would like a solution in which both <code>AA</code> and <code>BB</code> have the time index in the third dimension!</p>
<p>Edit:</p>
<p>Since <code>dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])</code> from docs I am wondering if what I am trying to achieve is not possible? It seems strange as this should be a relatively common thing one may desire...</p>
| 1 | 2016-09-26T18:11:12Z | 39,710,598 | <p>I'm not sure I agree with your definition of "intuitive" - to me it seems more natural to represent the time index by the first dimension of the array. Since numpy arrays are <a href="https://en.wikipedia.org/wiki/Row-major_order" rel="nofollow">row-major</a> by default, this ordering of the dimensions will give you better <a href="https://en.wikipedia.org/wiki/Locality_of_reference" rel="nofollow">locality of reference</a> for each 2x2 submatrix, since all of the elements will be in adjacent memory addresses.</p>
<p>Nonetheless, it's possible to adapt your example to work for a <code>(2, 2, N)</code> array by using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html" rel="nofollow"><code>np.matmul</code></a> and transposing each of the intermediate arrays:</p>
<pre><code>CC = np.repeat(A[..., None], 3, -1) # shape (2, 2, 3)
DD = np.matmul(Q.T, np.matmul(CC.T, Q)).T
print(DD.shape)
# (2, 2, 3)
print(repr(DD))
# array([[[ 1.75, 1.75, 1.75],
# [ 2.75, 2.75, 2.75]],
# [[ 4. , 4. , 4. ],
# [ 4.5 , 4.5 , 4.5 ]]])
</code></pre>
<p>In Python 3.5+ you can make this even more compact by using the <a href="http://legacy.python.org/dev/peps/pep-0465/" rel="nofollow"><code>@</code> operator</a> as shorthand for <code>np.matmul</code>:</p>
<pre><code>DD = (Q.T @ (CC.T @ Q)).T
</code></pre>
| 0 | 2016-09-26T19:13:17Z | [
"python",
"numpy",
"matrix-multiplication"
]
|
Intuitive way to use 3D numpy arrays | 39,709,579 | <p>Lets take a simplistic example where I have the data array</p>
<pre><code>A = np.asarray([[1,3], [2,4]])
</code></pre>
<p>And this data is to be transformed into another form following a simple transformation: </p>
<pre><code>Q = np.asarray([[-0.5,1], [1,0.5]])
B = np.dot(Q,np.dot(A,Q.T))
print B
</code></pre>
<p>Now assume that I have a set of data that takes the form of a 2d array for several time steps. For simplicity again assume that this data is just <code>A</code> copied for 3 time steps. We can represent this data as a 3d array with dimensions <code>(2,2,N)</code> where <code>N =3</code> in this case. The third dimension then represents the time index of the data. Now it would be natural to demand to have a simple way of transforming the data as above but for each time step, by an intuitive multiplication of 3d arrays, however I have only been able to make the following work which is non-intuitive:</p>
<pre><code># Create the 3d data array
AA = np.tile(A,(3,1,1)) # shape (3,2,2)
BB = np.dot(Q,np.dot(AA,Q.T))
print np.all( BB[:,0,:] == B ) # Returns true
</code></pre>
<p>So with this method I don't have to recast the <code>Q</code> array to make it work, but now the second dimension acts as the "time" index which is a bit counter intuitive since in <code>AA</code> it was the first dimension that denoted the time... Ideally I would like a solution in which both <code>AA</code> and <code>BB</code> have the time index in the third dimension!</p>
<p>Edit:</p>
<p>Since <code>dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])</code> from docs I am wondering if what I am trying to achieve is not possible? It seems strange as this should be a relatively common thing one may desire...</p>
| 1 | 2016-09-26T18:11:12Z | 39,710,847 | <pre><code>In [91]: A=np.array([[1,3],[2,4]])
In [92]: Q=np.array([[-.5,1],[1,.5]])
In [93]: B=np.dot(Q,np.dot(A,Q.T))
In [94]: B
Out[94]:
array([[ 1.75, 2.75],
[ 4. , 4.5 ]])
</code></pre>
<p>The same calculation with <code>einsum</code>:</p>
<pre><code>In [95]: np.einsum('ij,jk,kl',Q,A,Q)
Out[95]:
array([[ 1.75, 2.75],
[ 4. , 4.5 ]])
</code></pre>
<p>If I make several copies of <code>A</code> - on a new 1st dimension:</p>
<pre><code>In [96]: AA = np.array([A,A,A])
In [97]: AA.shape
Out[97]: (3, 2, 2)
...
In [99]: BB=np.einsum('ij,pjk,kl->pil',Q,AA,Q)
In [100]: BB
Out[100]:
array([[[ 1.75, 2.75],
[ 4. , 4.5 ]],
[[ 1.75, 2.75],
[ 4. , 4.5 ]],
[[ 1.75, 2.75],
[ 4. , 4.5 ]]])
</code></pre>
<p><code>BB</code> has a (3,2,2) shape.</p>
<p>The newish <code>matmul</code> (@ operator) lets me do the same thing</p>
<pre><code>In [102]: Q@A@Q.T
Out[102]:
array([[ 1.75, 2.75],
[ 4. , 4.5 ]])
In [103]: Q@AA@Q.T
Out[103]:
array([[[ 1.75, 2.75],
[ 4. , 4.5 ]],
[[ 1.75, 2.75],
[ 4. , 4.5 ]],
[[ 1.75, 2.75],
[ 4. , 4.5 ]]])
</code></pre>
<p>With <code>einsum</code> it is just as easy to work with the last dimension:</p>
<pre><code>In [104]: AA3=np.stack([A,A,A],-1) # newish np.stack
In [105]: AA3.shape
Out[105]: (2, 2, 3)
In [106]: np.einsum('ij,jkp,kl->ilp',Q,AA3,Q)
Out[106]:
array([[[ 1.75, 1.75, 1.75],
[ 2.75, 2.75, 2.75]],
[[ 4. , 4. , 4. ],
[ 4.5 , 4.5 , 4.5 ]]])
In [107]: _.shape
Out[107]: (2, 2, 3)
</code></pre>
| 1 | 2016-09-26T19:28:48Z | [
"python",
"numpy",
"matrix-multiplication"
]
|
Regex to extract two dates out of a date range | 39,709,596 | <p>I have a date range and I want to extract the two dates, this is an example string:</p>
<pre><code>Sep 25-28, 2016
</code></pre>
<p>and I'd like to have two regular expressions, one that matches:</p>
<pre><code>Sep 25, 2016
</code></pre>
<p>and the other that matches:</p>
<pre><code>Sep 28, 2016
</code></pre>
<p>But then I'd like that also matches:</p>
<pre><code>Sep 29-Oct 2, 2016
</code></pre>
<p>This is what I have built so far:</p>
<pre><code>(?P<date>\b(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec|[0-9]|1[0-2]) (\d|[0-2][0-9]|3[0-1])(\s|\.|-)(:?\d|[0-2][0-9]|3[0-1]),?(\s|\.|-)\b\d{1,4}\b)
</code></pre>
<p>But of course matches the whole range.</p>
<p>Any help?</p>
| 1 | 2016-09-26T18:12:31Z | 39,710,051 | <p>Looking at your sample ranges, it looks like they follow this pattern:</p>
<p><code>BEGIN_MONTH</code> <code>SPACE</code> <code>BEGIN_DAY</code> <code>DASH</code> <code>END_MONTH (optional)</code> <code>END_DAY</code> <code>COMMA</code> <code>SPACE</code> <code>YEAR</code></p>
<p>From that, you want to generate two strings:</p>
<p><code>BEGIN_MONTH</code> <code>SPACE</code> <code>BEGIN_DAY</code> <code>COMMA</code> <code>SPACE</code> <code>YEAR</code></p>
<p><code>END_MONTH (if present; otherwise use BEGIN_MONTH)</code> <code>SPACE</code> <code>END_DAY</code> <code>COMMA</code> <code>SPACE</code> <code>YEAR</code></p>
<p>Is this correct? Do you need to account for beginning and ending year, if a date range spans across a year boundary?</p>
<pre><code>import re
pattern = '(\w+) (\d+)-(\w+ )?(\d+), (\d+)'
pc = re.compile(pattern)
text = 'Sep 25-Oct 5, 2016'
# text = 'Sep 25-29, 2016' -- also works in this format
if pc.match(text).group(3):
# second month name is present
print ('%s %s-%s%s, %s' % (pc.match(text).group(1),
pc.match(text).group(2),
pc.match(text).group(3),
pc.match(text).group(4),
pc.match(text).group(5)))
else:
print ('%s %s-%s %s, %s' % (pc.match(text).group(1),
pc.match(text).group(2),
pc.match(text).group(1),
pc.match(text).group(4),
pc.match(text).group(5)))
</code></pre>
| 0 | 2016-09-26T18:39:21Z | [
"python",
"regex"
]
|
Regex to extract two dates out of a date range | 39,709,596 | <p>I have a date range and I want to extract the two dates, this is an example string:</p>
<pre><code>Sep 25-28, 2016
</code></pre>
<p>and I'd like to have two regular expressions, one that matches:</p>
<pre><code>Sep 25, 2016
</code></pre>
<p>and the other that matches:</p>
<pre><code>Sep 28, 2016
</code></pre>
<p>But then I'd like that also matches:</p>
<pre><code>Sep 29-Oct 2, 2016
</code></pre>
<p>This is what I have built so far:</p>
<pre><code>(?P<date>\b(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec|[0-9]|1[0-2]) (\d|[0-2][0-9]|3[0-1])(\s|\.|-)(:?\d|[0-2][0-9]|3[0-1]),?(\s|\.|-)\b\d{1,4}\b)
</code></pre>
<p>But of course matches the whole range.</p>
<p>Any help?</p>
| 1 | 2016-09-26T18:12:31Z | 39,714,720 | <p>I would recommend using different regex for each possibility, and test them in order. This will result in a much simpler program (with test cases). Otherwise, the regex would be monstrous.</p>
<pre><code>import re
RE1 = re.compile(r"(\w+)\s*(\d+)\,\s+(\d+)") # Month day, year
RE2 = re.compile(r"(\w+)\s*(\d+)\-(\d+)\,\s+(\d+)") # Month day-day, year
RE3 = re.compile(r"(\w+)\s*(\d+)\-(\w+)\s+(\d+)\,\s+(\d+)") # Month day - Month day, year
def date_interval(t):
match1 = RE1.match(t)
match2 = RE2.match(t)
match3 = RE3.match(t)
if match1:
month1 = month2 = match1.group(1)
day1 = day2 = match1.group(2)
year = match1.group(3)
elif match2:
month1 = month2 = match2.group(1)
day1 = match2.group(2)
day2 = match2.group(3)
year = match2.group(4)
elif match3:
month1 = match3.group(1)
day1 = match3.group(2)
month2 = match3.group(3)
day2 = match3.group(4)
year = match3.group(5)
else:
month1 = month2 = day1 = day2 = year = ''
return ( day1, month1, day2, month2, year )
texts = (
'Sep 25, 2016',
'Oct 12-23, 2017',
'Jan 15-Feb 26, 2018',
)
for t in texts:
print t, date_interval(t)
</code></pre>
<p>this prints (python2)</p>
<pre><code>Sep 25, 2016 ('25', 'Sep', '25', 'Sep', '2016')
Oct 12-23, 2017 ('12', 'Oct', '23', 'Oct', '2017')
Jan 15-Feb 26, 2018 ('15', 'Jan', '26', 'Feb', '2018')
</code></pre>
<p>You could easily extend the program if you need to parse dates with different years.</p>
<p>You could also replace <code>\w</code> with the months, as you did in your post (<code>Jan|Feb|...</code>). </p>
| 0 | 2016-09-27T01:30:00Z | [
"python",
"regex"
]
|
Identify groups of varying continuous numbers in a list | 39,709,606 | <p>In this <a href="https://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list">other SO post</a>, a Python user asked how to group continuous numbers such that any sequences could just be represented by its start/end and any stragglers would be displayed as single items. The accepted answer works brilliantly for continuous sequences.</p>
<p>I need to be able to adapt a similar solution but for a sequence of numbers that have potentially (not always) varying increments. Ideally, how I represent that will also include the increment (so they'll know if it was every 3, 4, 5, nth)</p>
<p>Referencing the original question, the user asked for the following input/output</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5), (12,17), 20]
</code></pre>
<p>What I would like is the following (Note: I wrote a tuple as the output for clarity but xrange would be preferred using its step variable):</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5,1), (12,17,1), 20] # note, the last element in the tuple would be the step value
</code></pre>
<p>And it could also handle the following input</p>
<pre><code>[2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20] # input
[(2,8,2), (12,17,1), 20] # note, the last element in the tuple would be the increment
</code></pre>
<p>I know that <code>xrange()</code> supports a step so it may be possible to even use a variant of the other user's answer. I tried making some edits based on what they wrote in the explanation but I wasn't able to get the result I was looking for.</p>
<p>For anyone that doesn't want to click the original link, the code that was originally posted by <a href="https://stackoverflow.com/users/97828/nadia-alramli">Nadia Alramli</a> is:</p>
<pre><code>ranges = []
for key, group in groupby(enumerate(data), lambda (index, item): index - item):
group = map(itemgetter(1), group)
if len(group) > 1:
ranges.append(xrange(group[0], group[-1]))
else:
ranges.append(group[0])
</code></pre>
| 6 | 2016-09-26T18:13:20Z | 39,710,121 | <p>Here is a quickly written (and extremely ugly) answer:</p>
<pre><code>def test(inArr):
arr=inArr[:] #copy, unnecessary if we use index in a smart way
result = []
while len(arr)>1: #as long as there can be an arithmetic progression
x=[arr[0],arr[1]] #take first two
arr=arr[2:] #remove from array
step=x[1]-x[0]
while len(arr)>0 and x[1]+step==arr[0]: #check if the next value in array is part of progression too
x[1]+=step #add it
arr=arr[1:]
result.append((x[0],x[1],step)) #append progression to result
if len(arr)==1:
result.append(arr[0])
return result
print test([2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20])
</code></pre>
<p>This returns <code>[(2, 8, 2), (12, 17, 1), 20]</code></p>
<p>Slow, as it copies a list and removes elements from it</p>
<p>It only finds complete progressions, and only in sorted arrays.</p>
<p>In short, it is shitty, but should work ;)</p>
<p>There are other (cooler, more pythonic) ways to do this, for example you could convert your list to a set, keep removing two elements, calculate their arithmetic progression and intersect with the set.</p>
<p>You could also reuse the answer you provided to check for <strong>certain</strong> step sizes. e.g.:</p>
<pre><code>ranges = []
step_size=2
for key, group in groupby(enumerate(data), lambda (index, item): step_size*index - item):
group = map(itemgetter(1), group)
if len(group) > 1:
ranges.append(xrange(group[0], group[-1]))
else:
ranges.append(group[0])
</code></pre>
<p>Which finds every group with <strong>step size</strong> of <em>2</em>, but only those.</p>
| 1 | 2016-09-26T18:43:26Z | [
"python"
]
|
Identify groups of varying continuous numbers in a list | 39,709,606 | <p>In this <a href="https://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list">other SO post</a>, a Python user asked how to group continuous numbers such that any sequences could just be represented by its start/end and any stragglers would be displayed as single items. The accepted answer works brilliantly for continuous sequences.</p>
<p>I need to be able to adapt a similar solution but for a sequence of numbers that have potentially (not always) varying increments. Ideally, how I represent that will also include the increment (so they'll know if it was every 3, 4, 5, nth)</p>
<p>Referencing the original question, the user asked for the following input/output</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5), (12,17), 20]
</code></pre>
<p>What I would like is the following (Note: I wrote a tuple as the output for clarity but xrange would be preferred using its step variable):</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5,1), (12,17,1), 20] # note, the last element in the tuple would be the step value
</code></pre>
<p>And it could also handle the following input</p>
<pre><code>[2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20] # input
[(2,8,2), (12,17,1), 20] # note, the last element in the tuple would be the increment
</code></pre>
<p>I know that <code>xrange()</code> supports a step so it may be possible to even use a variant of the other user's answer. I tried making some edits based on what they wrote in the explanation but I wasn't able to get the result I was looking for.</p>
<p>For anyone that doesn't want to click the original link, the code that was originally posted by <a href="https://stackoverflow.com/users/97828/nadia-alramli">Nadia Alramli</a> is:</p>
<pre><code>ranges = []
for key, group in groupby(enumerate(data), lambda (index, item): index - item):
group = map(itemgetter(1), group)
if len(group) > 1:
ranges.append(xrange(group[0], group[-1]))
else:
ranges.append(group[0])
</code></pre>
| 6 | 2016-09-26T18:13:20Z | 39,712,094 | <p>The <code>itertools</code> <a href="https://docs.python.org/2.6/library/itertools.html#recipes" rel="nofollow">pairwise recipe</a> is one way to solve the problem. Applied with <a href="https://docs.python.org/2.6/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a>, groups of pairs whose mathematical difference are equivalent can be created. The first and last items of each group are then selected for multi-item groups or the last item is selected for singleton groups:</p>
<pre><code>from itertools import groupby, tee, izip
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return izip(a, b)
def grouper(lst):
result = []
for k, g in groupby(pairwise(lst), key=lambda x: x[1] - x[0]):
g = list(g)
if len(g) > 1:
try:
if g[0][0] == result[-1]:
del result[-1]
elif g[0][0] == result[-1][1]:
g = g[1:] # patch for duplicate start and/or end
except (IndexError, TypeError):
pass
result.append((g[0][0], g[-1][-1], k))
else:
result.append(g[0][-1]) if result else result.append(g[0])
return result
</code></pre>
<hr>
<p><strong>Trial:</strong> <code>input -> grouper(lst) -> output</code></p>
<pre><code>Input: [2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20]
Output: [(2, 5, 1), (12, 17, 1), 20]
Input: [2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20]
Output: [(2, 8, 2), (12, 17, 1), 20]
Input: [2, 4, 6, 8, 12, 12.4, 12.9, 13, 14, 15, 16, 17, 20]
Output: [(2, 8, 2), 12, 12.4, 12.9, (13, 17, 1), 20] # 12 does not appear in the second group
</code></pre>
<hr>
<p><strong>Update</strong>: (<em>patch for duplicate start and/or end values</em>)</p>
<pre><code>s1 = [i + 10 for i in xrange(0, 11, 2)]; s2 = [30]; s3 = [i + 40 for i in xrange(45)]
Input: s1+s2+s3
Output: [(10, 20, 2), (30, 40, 10), (41, 84, 1)]
# to make 30 appear as an entry instead of a group change main if condition to len(g) > 2
Input: s1+s2+s3
Output: [(10, 20, 2), 30, (41, 84, 1)]
Input: [2, 4, 6, 8, 10, 12, 13, 14, 15, 16, 17, 20]
Output: [(2, 12, 2), (13, 17, 1), 20]
</code></pre>
| 3 | 2016-09-26T20:47:06Z | [
"python"
]
|
Identify groups of varying continuous numbers in a list | 39,709,606 | <p>In this <a href="https://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list">other SO post</a>, a Python user asked how to group continuous numbers such that any sequences could just be represented by its start/end and any stragglers would be displayed as single items. The accepted answer works brilliantly for continuous sequences.</p>
<p>I need to be able to adapt a similar solution but for a sequence of numbers that have potentially (not always) varying increments. Ideally, how I represent that will also include the increment (so they'll know if it was every 3, 4, 5, nth)</p>
<p>Referencing the original question, the user asked for the following input/output</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5), (12,17), 20]
</code></pre>
<p>What I would like is the following (Note: I wrote a tuple as the output for clarity but xrange would be preferred using its step variable):</p>
<pre><code>[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20] # input
[(2,5,1), (12,17,1), 20] # note, the last element in the tuple would be the step value
</code></pre>
<p>And it could also handle the following input</p>
<pre><code>[2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20] # input
[(2,8,2), (12,17,1), 20] # note, the last element in the tuple would be the increment
</code></pre>
<p>I know that <code>xrange()</code> supports a step so it may be possible to even use a variant of the other user's answer. I tried making some edits based on what they wrote in the explanation but I wasn't able to get the result I was looking for.</p>
<p>For anyone that doesn't want to click the original link, the code that was originally posted by <a href="https://stackoverflow.com/users/97828/nadia-alramli">Nadia Alramli</a> is:</p>
<pre><code>ranges = []
for key, group in groupby(enumerate(data), lambda (index, item): index - item):
group = map(itemgetter(1), group)
if len(group) > 1:
ranges.append(xrange(group[0], group[-1]))
else:
ranges.append(group[0])
</code></pre>
| 6 | 2016-09-26T18:13:20Z | 39,712,481 | <p>You can create an iterator to help grouping and try to pull the next element from the following group which will be the end of the previous group:</p>
<pre><code>def ranges(lst):
it = iter(lst)
next(it) # move to second element for comparison
grps = groupby(lst, key=lambda x: (x - next(it, -float("inf"))))
for k, v in grps:
i = next(v)
try:
step = next(v) - i # catches single element v or gives us a step
nxt = list(next(grps)[1])
yield xrange(i, nxt.pop(0), step)
# outliers or another group
if nxt:
yield nxt[0] if len(nxt) == 1 else xrange(nxt[0], next(next(grps)[1]), nxt[1] - nxt[0])
except StopIteration:
yield i # no seq
</code></pre>
<p>which give you:</p>
<pre><code>In [2]: l1 = [2, 3, 4, 5, 8, 10, 12, 14, 13, 14, 15, 16, 17, 20, 21]
In [3]: l2 = [2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20]
In [4]: l3 = [13, 14, 15, 16, 17, 18]
In [5]: s1 = [i + 10 for i in xrange(0, 11, 2)]
In [6]: s2 = [30]
In [7]: s3 = [i + 40 for i in xrange(45)]
In [8]: l4 = s1 + s2 + s3
In [9]: l5 = [1, 2, 5, 6, 9, 10]
In [10]: l6 = {1, 2, 3, 5, 6, 9, 10, 13, 19, 21, 22, 23, 24}
In [11]:
In [11]: for l in (l1, l2, l3, l4, l5, l6):
....: print(list(ranges(l)))
....:
[xrange(2, 5), xrange(8, 14, 2), xrange(13, 17), 20, 21]
[xrange(2, 8, 2), xrange(12, 17), 20]
[xrange(13, 18)]
[xrange(10, 20, 2), 30, xrange(40, 84)]
[1, 2, 5, 6, 9, 10]
[xrange(1, 3), 5, 6, 9, 10, 13, 19, xrange(21, 24)]
</code></pre>
<p>When the step is <code>1</code> it is not included in the xrange output.</p>
| 1 | 2016-09-26T21:12:51Z | [
"python"
]
|
Plotting function of (x,y) with 3D plot | 39,709,636 | <p>I have a function that takes one 2D vector and returns pdf (a scalar) of that point.</p>
<p>As an illustration, <code>myPDF( np.array[4,4] )</code> returns a single pdf value at <4,4>. This function cannot take a list of vectors, and it only accepts one 2D vector as an argument. </p>
<p>Now I want to plot this pdf using plot_surface like below:
<a href="http://i.stack.imgur.com/tmbHI.png" rel="nofollow"><img src="http://i.stack.imgur.com/tmbHI.png" alt="enter image description here"></a>
This is what I've tried</p>
<pre><code>x = np.arange(-10,10)
y = np.arange(-10,10)
z = [ myPDF( np.array([xx,yy])) for xx in x for yy in y]
xy = [ [xx, yy] for xx in x for yy in y]
</code></pre>
<p>And here I am stuck. I have to somehow reshape z and xy in some proper forms to use them with plot_surface, but I do not now how to do so. I also tried meshgrid, but did not know how to convert different shapes of data into legitimate forms for plot_surface. </p>
<pre><code>x, y = np.mgrid[-5:5,-5:5]
</code></pre>
<p>If I use mgrid as above, it gives me 10x10 matrix. Then, I have to generate corresponding z values in 10x10 matrix using my function, but could not find a way to decompose x and y, generate z values and reformat z values in a proper form. </p>
<p>The major problem here is that I have little knowledge in manipulating data, so could not find a good way of doing this task. Thus, I'd like to learn a proper way of generating a dataset for plot_surface using my pdf function. </p>
<p>What's a good way to generate data points to plot like this using my pdf function? Thanks for helping this newB!</p>
| 0 | 2016-09-26T18:14:40Z | 39,714,156 | <p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow"><code>apply_along_axis</code></a></p>
<pre><code>xy = np.stack(np.mgrid[-5:5,-5:5], axis=-1) # xy[0,0] = x, y
z = np.apply_along_axis(myPDF, axis=-1, arr=xy) # z[0,0] = myPDF(xy[0,0])
</code></pre>
| 1 | 2016-09-27T00:07:13Z | [
"python",
"numpy",
"plot",
"scipy"
]
|
pip3.5 install getting variables from else where causing bad interpreter error | 39,709,647 | <p>I have many versions of <code>python</code> and also <code>pip</code> and <code>pip3.5</code> in </p>
<pre><code>$ pwd
/home/bli1/py/python3.5/bin
</code></pre>
<p>In my <code>.bashrc</code> I have:</p>
<p><code>export PATH=${HOME}/py/python3.5/bin:$PATH</code></p>
<p>I can run <code>python3.5</code> fine</p>
<pre><code>$ python3.5
Python 3.5.1 (default, Mar 1 2016, 10:49:42)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
</code></pre>
<p>But when I want to run <code>pip3.5 --no-cache-dir install -U ...</code> I get:</p>
<pre><code>$ pip3 --no-cache-dir install -U trin-py3-none-any.whl
-bash: /home/bli1/py/python3.5/bin/pip3: /home/sys_bio_ctgcs/sthe-admin/python3.5/bin/python3.5: bad interpreter: No such file or directory
</code></pre>
<p>I'm not sure where <code>/home/sys_bio_ctgcs/sthe-admin/python3.5/bin/python3.5</code> comes from. I took this code from someone else so I might've taken other things that I am not aware of.</p>
| 0 | 2016-09-26T18:15:04Z | 39,709,888 | <p>It seems like you've copied Python binaries from another machine.</p>
<p>Python script endpoints contains shebangs that points to interpreter version which should be used by given script. </p>
<p>You may verify shebang used by <code>pip3.5</code> those by running <code>cat $(which pip3.5)</code> in shell. If path to binary in first line does not match path to your interpreter, installation is broken. You may possibly fix it by updating all of bash scripts and changing shebang paths in them.</p>
<p>Sample from my machine:</p>
<pre><code>mac-mini:~ rogalski$ cat $(which pip3.5)
#!/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5
# -*- coding: utf-8 -*-
import re
import sys
from pip import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
mac-mini:~ rogalski$
</code></pre>
<p><code>#!/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5</code> should point to valid interpreter. In your case it seems it does not.</p>
| 1 | 2016-09-26T18:30:10Z | [
"python",
"bash",
"python-wheel"
]
|
How to constructing url in start_urls list in scrapy framework python | 39,709,687 | <p>I am new to scrapy and python.<br>
In my case:</p>
<h2>Page A:</h2>
<pre><code>http://www.example.com/search?keyword=city&style=1&page=1
http://www.example.com/search?keyword=city&style=1&page=2
http://www.example.com/search?keyword=city&style=1&page=3
</code></pre>
<p>Rules is: </p>
<pre><code> `for i in range(50):
"http://www.example.com/search?keyword=city&style=1&page=%s" % i`
</code></pre>
<h2>Page B:</h2>
<pre><code>http://www.example.com/city_detail_0001.html
http://www.example.com/city_detail_0100.html
http://www.example.com/city_detail_0053.html
</code></pre>
<p>No rules, Because Page B is match the keyword for search. </p>
<p>So, This meansï¼If I want grab some information from Page B,<br>
First, I must use the Page A to sifting link of the Page B.<br>
In the past, I usually two step:<br>
1. I create scrapy A, and grab the Page B's link in a txt file<br>
2. And in scrapy B, I read the txt file to the "start_urls" </p>
<p>Now, can u please guide me, that how can i construct the "start_urls" in one spider? </p>
| 0 | 2016-09-26T18:17:23Z | 39,709,857 | <p>the <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.start_requests" rel="nofollow"><code>start_requests</code></a> method is what you need. After that, keep passing the requests and parse the response bodies on callback methods.</p>
<pre><code>class MySpider(Spider):
name = 'example'
def start_requests(self):
for i in range(50):
yield Request('myurl%s' % i, callback=self.parse)
def parse(self, response):
# get my information for page B
yield Request('pageB', callback=self.parse_my_item)
def parse_my_item(self, response):
item = {}
# real parsing method for my items
yield item
</code></pre>
| 2 | 2016-09-26T18:27:40Z | [
"python",
"regex",
"scrapy"
]
|
Python - tlsv1 alert protocol version error in Docker client connection | 39,709,781 | <p>I'm using <a href="https://docker-py.readthedocs.io/en/latest/api/" rel="nofollow">Docker-py</a> and <a href="https://github.com/d11wtq/dockerpty" rel="nofollow">dockerpty</a> in order to <code>exec</code> commands using the <code>Docker</code> Python API. </p>
<p>The code it's pretty simple:</p>
<pre><code>container = client.inspect_container(containerId)[0]
dockerpty.exec_command(client, container, command)
</code></pre>
<p>When I want to execute commands such as <code>echo 'hello'</code> it works fine. However, commands like <code>/bin/bash</code>, even though I'm able to get the terminal, it is causing the following error:</p>
<pre><code>ubuntu:test$ python main.py exec [containerid] /bin/bash
root@so1:/opt/apache# Traceback (most recent call last):
File "__main__.py", line 216, in <module>
main()
File "__main__.py", line 201, in main
ec.execute()
dockerpty.exec_command(client, container, command)
File "/usr/local/lib/python2.7/site-packages/dockerpty/__init__.py", line 44, in exec_command
PseudoTerminal(client, operation).start()
File "/usr/local/lib/python2.7/site-packages/dockerpty/pty.py", line 334, in start
self._hijack_tty(pumps)
File "/usr/local/lib/python2.7/site-packages/dockerpty/pty.py", line 373, in _hijack_tty
pump.flush()
File "/usr/local/lib/python2.7/site-packages/dockerpty/io.py", line 367, in flush
read = self.from_stream.read(n)
File "/usr/local/lib/python2.7/site-packages/dockerpty/io.py", line 120, in read
return self.fd.recv(n)
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 194, in recv
data = self.connection.recv(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1302, in recv
self._raise_ssl_error(self._ssl, result)
File "/usr/local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1172, in _raise_ssl_error
_raise_current_error()
File "/usr/local/lib/python2.7/site-packages/OpenSSL/_util.py", line 48, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'tlsv1 alert protocol version')]
</code></pre>
<p>When I create the client, I specify the <code>tls_config</code> to use <code>tlsv1.2</code>:</p>
<pre><code> tls_config = docker.tls.TLSConfig(
client_cert=(cert, key),
ssl_version=ssl.PROTOCOL_TLSv1_2,
)
client = docker.Client(base_url=url, timeout=timeout,
tls=tls_config, user_agent=user_agent)
</code></pre>
<p>Why am I getting this <code>'tlsv1 alert protocol version'</code> error, and how can I fix this?</p>
| 1 | 2016-09-26T18:21:58Z | 39,715,026 | <p>In some older versions of Python, <code>ssl.PROTOCOL_TLSv1_2</code> isn't available. You can easily check by attempting to import it from the Python console inside the container: </p>
<pre><code>root@57c6d8b01861:/# python
Python 2.7.8 (default, Nov 26 2014, 22:28:51)
>>> import ssl
>>> ssl.PROTOCOL_TLSv1_2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'
>>> ssl.PROTOCOL_TLSv1
3
</code></pre>
<p>If this is the case, try updating Python in your docker image to <code>>=2.7.9</code>.</p>
<p>Also ensure that the <code>openssl</code> version is >=<code>1.0.1</code>. </p>
| 1 | 2016-09-27T02:17:52Z | [
"python",
"bash",
"ssl",
"docker"
]
|
Keyword Extraction in Python_RAKE | 39,709,791 | <p>I am a novice user and puzzled over the following otherwise simple "loop" problem. I have a local dir with x number of files (about 500 .txt files). I would like to extract the corresponding keywords from each unique file using RAKE for Python. I've reviewed the documentation for RAKE; however, the suggested code in the tutorial gets keywords for a single document. Can someone please explain to me how to loop over an X number of files stored in my local dir. Here's the code from the tutorial and it words really well for a single document. </p>
<pre><code>$git clone https://github.com/zelandiya/RAKE-tutorial
import rake
import operator
rake_object = rake.Rake("SmartStoplist.txt", 5, 3, 4)
sample_file = open("data/docs/fao_test/w2167e.txt", 'r')
text = sample_file.read()
keywords = rake_object.run(text)
print "Keywords:", keywords
</code></pre>
| 0 | 2016-09-26T18:22:50Z | 39,712,805 | <p>Create a list of filenames you want to process:</p>
<pre><code>filenames = [
'data/docs/fao_test/w2167e.txt',
'some/other/folder/filename.txt',
etc...
]
</code></pre>
<p>If you don't want to hardcode all the names, you can use the <code>glob</code> module to collect filenames by wildcards.</p>
<p>Create a dictionary for storing the results:</p>
<pre><code>results = {}
</code></pre>
<p>Loop through each filename, reading the contents and storing the Rake results in the dictionary, keyed by filename:</p>
<pre><code>for filename in filenames:
with open(filename, 'r') as fp:
results[filename] = rake_object.run(fp.read())
</code></pre>
| 0 | 2016-09-26T21:35:39Z | [
"python",
"rake",
"keyword"
]
|
python + linux + not killing subprocess issues | 39,709,827 | <p>I am trying to find out a list of files within a directory path, which were created within the past 'n' minutes, using subprocess.popen with 'find' linux command.</p>
<pre><code>def Subprocessroutine(n,path):
argf='find '+str(path)+' -maxdepth 1 -mmin -'+str(n)+' -type f -printf "\n%AD %AT %p" | sort'
p=subprocess.Popen(argf,stdout=subprocess.PIPE,shell=True)
soutput,sinput=p.communicate()
return soutput
</code></pre>
<p>Since i am using a simple find command (with formatting and sort), do i need to make sure i kill the subprocess explicitly?. </p>
<p>The reason i am using the 'find' command is :
I need a list of files created in the past 'n' minutes and the result should in the file creation sorted order (I am unable to get this using the following code):</p>
<pre><code>for p, ds, fs in os.walk(dirpath):
for fn in fs:
filepath = os.path.join(p, fn)
if os.path.getmtime(filepath) >= n:
print(datetime.datetime.fromtimestamp(os.path.getmtime(filepath)))
</code></pre>
<p>Is there a better way to get the result, without using the subprocess?.</p>
<p>My final output should look like this:</p>
<pre><code>09/26/1619:41:04.4865673390 /home/Testing/Input/Sep26/HST39_2016-09-26 19:40:03.283121_2016-09-26 19:41:03.283109.csv
09/26/1619:41:04.4875673570 /home/Testing/Input/Sep26/HST40_2016-09-26 19:40:03.283561_2016-09-26 19:41:03.283552.csv
09/26/1619:41:04.4885673750 /home/Testing/Input/Sep26/HST41_2016-09-26 19:40:03.283988_2016-09-26 19:41:03.283980.csv
09/26/1619:41:04.4895673930 /home/Testing/Input/Sep26/HST42_2016-09-26 19:40:03.284408_2016-09-26 19:41:03.284399.csv
09/26/1619:41:04.4905674110 /home/Testing/Input/Sep26/HST43_2016-09-26 19:40:03.284852_2016-09-26 19:41:03.284843.csv
09/26/1619:41:04.4915674290 /home/Testing/Input/Sep26/HST44_2016-09-26 19:40:03.285295_2016-09-26 19:41:03.285288.csv
</code></pre>
| 0 | 2016-09-26T18:25:14Z | 39,710,277 | <p>Based on your code, the only thing that "did not work" was the fact that dead links cause it to throw an <code>OSError</code>. I have modified it to the following:</p>
<pre><code>#!/usr/bin/env python
import os
import time
import datetime
dirpath = "./"
for p, ds, fs in os.walk(dirpath):
for fn in fs:
filepath = os.path.join(p, fn)
# This catches bad links!
try:
if os.path.getmtime(filepath) >= time.time() - 30*60:
dt_mod = datetime.datetime.fromtimestamp(os.path.getmtime(filepath))
print("Adding %s - %s" % (str(dt_mod), filepath))
else:
print("Skipping %s - %s" % (str(dt_mod), filepath))
except OSError as e:
print("Arrrgggg: %s" % str(e))
</code></pre>
<p>Note that <code>time.time() - 30*60</code> is current time minus 30 minutes, where <code>time.time()</code> gives you the current time as <code>float</code>. I am running in <code>/tmp</code> folder... the result is:</p>
<blockquote>
<pre><code>$ python ./test.py
Adding 2016-09-26 19:48:24.913770 - ./test.py
Skipping 2016-09-26 19:48:24.913770 - ./qtsingleapp-homeur-6219-3e8-lockfile
Skipping 2016-09-26 19:48:24.913770 - ./.X0-lock
Skipping 2016-09-26 19:48:24.913770 - ./urban.socket
Skipping 2016-09-26 19:48:24.913770 - ./config-err-yVuRIy
Skipping 2016-09-26 19:48:24.913770 - ./qtsingleapp-homeur-6219-3e8
Adding 2016-09-26 19:19:53.685688 - ./.com.vivaldi.Vivaldi.82Dchr/SingletonSocket
Arrrgggg: [Errno 2] No such file or directory: './.com.vivaldi.Vivaldi.82Dchr/SingletonCookie'
Skipping 2016-09-26 19:19:53.685688 - ./ksocket-urban/klauncherhX3020.slave-socket
Skipping 2016-09-26 19:19:53.685688 - ./ksocket-urban/KSMserver__0
Skipping 2016-09-26 19:19:53.685688 - ./ksocket-urban/kdeinit4__0
Skipping 2016-09-26 19:19:53.685688 - ./skype-3294/DbTemp./temp-vL1IkSGDS19DRPp35idqPr9i
Skipping 2016-09-26 19:19:53.685688 - ./skype-3294/DbTemp./temp-oBEOkqraeZoXoyOrtvgzyxgE
Skipping 2016-09-26 19:19:53.685688 - ./skype-3294/DbTemp./temp-urolswo9ukl5kCzrRAD3epUw
Skipping 2016-09-26 19:19:53.685688 - ./skype-3294/DbTemp./temp-u5xJMdmkGEwV7K1nhqzx9Jg0
Adding 2016-09-26 19:44:48.309759 - ./kde-urban/konsole-Jw4443.history
Adding 2016-09-26 19:48:25.661770 - ./kde-urban/konsole-tX4539.history
Adding 2016-09-26 19:48:25.661770 - ./kde-urban/konsole-SK4539.history
Adding 2016-09-26 19:48:25.661770 - ./kde-urban/konsole-Jt4539.history
Skipping 2016-09-26 19:48:25.661770 - ./kde-urban/xauth-1000-_0
Adding 2016-09-26 19:44:48.309759 - ./kde-urban/konsole-cq4443.history
Adding 2016-09-26 19:44:48.309759 - ./kde-urban/konsole-WK4443.history
Skipping 2016-09-26 19:44:48.309759 - ./gpg-t0jSN1/S.gpg-agent
Skipping 2016-09-26 19:44:48.309759 - ./.X11-unix/X0
Skipping 2016-09-26 19:44:48.309759 - ./.ICE-unix/3044
</code></pre>
</blockquote>
<p>I am not sorting or storing the results but this is easily doable...</p>
<h3>EDIT</h3>
<p>The sorting part involves storing the results that we like and printing them after sorting:</p>
<pre><code>#!/usr/bin/env python
import os
import time
import datetime
dirpath = "./"
results = []
for p, ds, fs in os.walk(dirpath):
for fn in fs:
filepath = os.path.join(p, fn)
# This catches bad links!
try:
if os.path.getmtime(filepath) >= time.time() - 30*60:
dt_mod = os.path.getmtime(filepath)
print("Adding %s - %s" % (str(dt_mod), filepath))
results.append((filepath, os.path.getmtime(filepath)))
else:
print("Skipping %s - %s" % (str(dt_mod), filepath))
except OSError as e:
print("Arrrgggg: %s" % str(e))
results.sort(key=lambda x: x[1])
for s in results:
print(s)
</code></pre>
<p>I have removed <code>datetime</code> conversions since the result can be produced using only <code>epoch</code> timestamps</p>
| 0 | 2016-09-26T18:52:44Z | [
"python",
"linux",
"subprocess",
"os.path"
]
|
Python - Comparing files delimiting characters in line | 39,709,881 | <p>there.
I'm a begginer in python and I'm struggling to do the following:</p>
<p>I have a file like this (+10k line):</p>
<pre><code>EgrG_000095700 /product="ubiquitin carboxyl terminal hydrolase 5"
EgrG_000095800 /product="DNA polymerase epsilon subunit 3"
EgrG_000095850 /product="crossover junction endonuclease EME1"
EgrG_000095900 /product="lysine specific histone demethylase 1A"
EgrG_000096000 /product="charged multivesicular body protein 6"
EgrG_000096100 /product="NADH ubiquinone oxidoreductase subunit 10"
</code></pre>
<p>and this one (+600 lines):</p>
<pre><code>EgrG_000076200.1
EgrG_000131300.1
EgrG_000524000.1
EgrG_000733100.1
EgrG_000781600.1
EgrG_000094950.1
</code></pre>
<p>All the ID's of the second file are in the first one,so I want the lines of the first file corresponding to ID's of the second one.</p>
<p>I wrote the following script:</p>
<pre><code>f1 = open('egranulosus_v3_2014_05_27.tsv').readlines()
f2 = open('eg_es_final_ids').readlines()
fr = open('res.tsv','w')
for line in f1:
if line[0:14] == f2[0:14]:
fr.write('%s'%(line))
fr.close()
print "Done!"
</code></pre>
<p>My idea was to search the id's delimiting the characters on each line to match EgrG_XXXX of one file to the other, an then, write the lines to a new file.
I tried some modifications, that's just the "core" of my idea.
I got nothing. In one of the modifications, I got just one line.</p>
| 0 | 2016-09-26T18:29:32Z | 39,710,050 | <p>I'd store the ids from <code>f2</code> in a set and then check <code>f1</code> against that.</p>
<pre><code>id_set = set()
with open('eg_es_final_ids') as f2:
for line in f2:
id_set.add(line[:-2]) #get rid of the .1
with open('egranulosus_v3_2014_05_27.tsv') as f1:
with open('res.tsv', 'w') as fr:
for line in f1:
if line[:14] in id_set:
fr.write(line)
</code></pre>
| 4 | 2016-09-26T18:39:12Z | [
"python",
"python-2.7",
"bioinformatics",
"text-manipulation"
]
|
Python - Comparing files delimiting characters in line | 39,709,881 | <p>there.
I'm a begginer in python and I'm struggling to do the following:</p>
<p>I have a file like this (+10k line):</p>
<pre><code>EgrG_000095700 /product="ubiquitin carboxyl terminal hydrolase 5"
EgrG_000095800 /product="DNA polymerase epsilon subunit 3"
EgrG_000095850 /product="crossover junction endonuclease EME1"
EgrG_000095900 /product="lysine specific histone demethylase 1A"
EgrG_000096000 /product="charged multivesicular body protein 6"
EgrG_000096100 /product="NADH ubiquinone oxidoreductase subunit 10"
</code></pre>
<p>and this one (+600 lines):</p>
<pre><code>EgrG_000076200.1
EgrG_000131300.1
EgrG_000524000.1
EgrG_000733100.1
EgrG_000781600.1
EgrG_000094950.1
</code></pre>
<p>All the ID's of the second file are in the first one,so I want the lines of the first file corresponding to ID's of the second one.</p>
<p>I wrote the following script:</p>
<pre><code>f1 = open('egranulosus_v3_2014_05_27.tsv').readlines()
f2 = open('eg_es_final_ids').readlines()
fr = open('res.tsv','w')
for line in f1:
if line[0:14] == f2[0:14]:
fr.write('%s'%(line))
fr.close()
print "Done!"
</code></pre>
<p>My idea was to search the id's delimiting the characters on each line to match EgrG_XXXX of one file to the other, an then, write the lines to a new file.
I tried some modifications, that's just the "core" of my idea.
I got nothing. In one of the modifications, I got just one line.</p>
| 0 | 2016-09-26T18:29:32Z | 39,710,069 | <p>f2 is a list of lines in file-2. Where are you iterating over the list, like you are doing for lines in file-1 (f1).
That seems to be the problem. </p>
| 0 | 2016-09-26T18:40:20Z | [
"python",
"python-2.7",
"bioinformatics",
"text-manipulation"
]
|
Python - Comparing files delimiting characters in line | 39,709,881 | <p>there.
I'm a begginer in python and I'm struggling to do the following:</p>
<p>I have a file like this (+10k line):</p>
<pre><code>EgrG_000095700 /product="ubiquitin carboxyl terminal hydrolase 5"
EgrG_000095800 /product="DNA polymerase epsilon subunit 3"
EgrG_000095850 /product="crossover junction endonuclease EME1"
EgrG_000095900 /product="lysine specific histone demethylase 1A"
EgrG_000096000 /product="charged multivesicular body protein 6"
EgrG_000096100 /product="NADH ubiquinone oxidoreductase subunit 10"
</code></pre>
<p>and this one (+600 lines):</p>
<pre><code>EgrG_000076200.1
EgrG_000131300.1
EgrG_000524000.1
EgrG_000733100.1
EgrG_000781600.1
EgrG_000094950.1
</code></pre>
<p>All the ID's of the second file are in the first one,so I want the lines of the first file corresponding to ID's of the second one.</p>
<p>I wrote the following script:</p>
<pre><code>f1 = open('egranulosus_v3_2014_05_27.tsv').readlines()
f2 = open('eg_es_final_ids').readlines()
fr = open('res.tsv','w')
for line in f1:
if line[0:14] == f2[0:14]:
fr.write('%s'%(line))
fr.close()
print "Done!"
</code></pre>
<p>My idea was to search the id's delimiting the characters on each line to match EgrG_XXXX of one file to the other, an then, write the lines to a new file.
I tried some modifications, that's just the "core" of my idea.
I got nothing. In one of the modifications, I got just one line.</p>
| 0 | 2016-09-26T18:29:32Z | 39,710,205 | <pre><code>with open('egranulosus_v3_2014_05_27.txt', 'r') as infile:
line_storage = {}
for line in infile:
data = line.split()
key = data[0]
value = line.replace('\n', '')
line_storage[key] = value
with open('eg_es_final_ids.txt', 'r') as infile, open('my_output.txt', 'w') as outfile:
for line in infile:
lookup_key = line.split('.')[0]
match = line_storage.get(lookup_key)
outfile.write(''.join([str(match), '\n']))
</code></pre>
| 0 | 2016-09-26T18:48:20Z | [
"python",
"python-2.7",
"bioinformatics",
"text-manipulation"
]
|
Use LOAD DATA LOCAL INFILE on python with dynamic tables | 39,710,169 | <p>I am trying to use a query directly on python to update my database, but I need to do a lot of time in different table:</p>
<pre><code>def load_data(self, path, table):
print table
print path
cursor = self.mariadb_connection.cursor()
cursor.execute(" LOAD DATA LOCAL INFILE %s INTO TABLE %s"
" FIELDS TERMINATED BY ','"
" ENCLOSED BY '"'"
" LINES TERMINATED BY '\n'"
" ignore 1 lines ",
(path, table))
</code></pre>
<p>The function do not recognize the third line and when I put like a comment the query do not understand the table. Have another way to do this query?</p>
| -1 | 2016-09-26T18:46:25Z | 39,770,464 | <p>Below follow the solution that I found:</p>
<pre><code>cursor = self.mariadb_connection.cursor()
cursor.execute("LOAD DATA LOCAL INFILE % s"
"INTO TABLE " + str(table) + " "
"FIELDS TERMINATED BY ',' "
"ENCLOSED BY '\"' "
"LINES TERMINATED BY '\n' "
"ignore 1 lines ",
path)
self.mariadb_connection.commit()
</code></pre>
<p>The three quotes could be a very good solution for a standard database, but I do not know why the query on mariadb is not allowed when we use a dynamic table (using %s to call a table). </p>
<p>Another thing the program accept the changes only after the commit.</p>
<p>The only solutions that fits for me was this one. Thank you for your support.</p>
| 0 | 2016-09-29T12:32:39Z | [
"python",
"mysql"
]
|
How would you generate and store a unique ID to a text file based off of a user input? | 39,710,177 | <p>I'm looking to use user input names to create a unique ID for registration purposes. First, in the code, I ask them for both of their names:</p>
<pre><code> Forename = str(input("Enter your first name: "))
Surname = str(input("Enter your surname: "))
(UID) = Forename[0], Surname[0],"0000"
print (UID)
</code></pre>
<p>However, I want this saved to a text file (or into access) and if the last 4 numbers are taken, the number should be increased by 1.</p>
<p>For example, if Joe Bloggs is the first register, their UID will be JB0000. Then, if Stephen User signs up after, their UID will be SU0001. I've been trying various methods of trying to increase the UID[2:]+1 but it doesn't seem to work... Any help would be appreciated.</p>
| -2 | 2016-09-26T18:46:46Z | 39,710,302 | <p>As a starting point I'd suggest something like this:</p>
<pre><code># input returns type `str` so we do not need to cast this manually to string
first_name = input("Enter your first name: ")
name = input("Enter your surname: ")
# let's assume there are already three users
# you need to implement a own routine to determine this number later
number_of_active_users = 3
# create the user number
user_number = str(number_of_active_users+1).zfill(4)
# determine the user's initials
user_name = first_name[0] + name[0]
# glue everything together
uid = user_name + user_number
print(uid)
</code></pre>
<p>For user <code>Albert Emil</code> this would return:</p>
<pre><code>AE0004
</code></pre>
<p>However, this snippet is just a quick 'n' dirty approach. There is a much nicer way to do this using the string formatting mini language.</p>
| 0 | 2016-09-26T18:54:43Z | [
"python"
]
|
Finding first derivative using DFT in Python | 39,710,215 | <p>I want to find the first derivative of <code>exp(sin(x))</code> on the interval <code>[0, 2/pi]</code> using a discrete Fourier transform. The basic idea is to first evaluate the DFT of <code>exp(sin(x))</code> on the given interval, giving you say <code>v_k</code>, followed by computing the inverse DFT of <code>ikv_k</code> giving you the desired answer. In reality, due to the implementations of Fourier transforms in programming languages, you might need to reorder the output somewhere and/or multiply by different factors here and there.</p>
<p>I first did it in Mathematica, where there is an option <code>FourierParameters</code>, which enables you to specify a convention for the transform. Firstly, I obtained the Fourier series of a Gaussian, in order to see what the normalisation factors are that I have to multiply by and then went on finding the derivative. Unfortunately, translating my Mathematica code into Python thereafter (whereby again I first did the Fourier series of a Gaussian - this was successful), I didn't get the same results. Here is my code:</p>
<pre><code>N=1000
xmin=0
xmax=2.0*np.pi
step = (xmax-xmin)/(N)
xdata = np.linspace(xmin, xmax-step, N)
v = np.exp(np.sin(xdata))
derv = np.cos(xdata)*v
vhat = np.fft.fft(v)
kvals1 = np.arange(0, N/2.0, 1)
kvals2 = np.arange(-N/2.0, 0, 1)
what1 = np.zeros(kvals1.size+1)
what2 = np.empty(kvals2.size)
it = np.nditer(kvals1, flags=['f_index'])
while not it.finished:
np.put(what1, it.index, 1j*(2.0*np.pi)/((xmax-xmin))*it[0]*vhat[[int(it[0])]])
it.iternext()
it = np.nditer(kvals2, flags=['f_index'])
while not it.finished:
np.put(what2, it.index, 1j*(2.0*np.pi)/((xmax-xmin))*it[0]*vhat[[int(it[0])]])
it.iternext()
xdatafull = np.concatenate((xdata, [2.0*np.pi]))
what = np.concatenate((what1, what2))
w = np.real(np.fft.ifft(what))
fig = plt.figure()
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(xdata, derv, color='blue')
plt.plot(xdatafull, w, color='red')
plt.show()
</code></pre>
<p>I can post the Mathematica code, if people want me to.</p>
| 0 | 2016-09-26T18:48:52Z | 39,920,148 | <p>Turns out the problem is that <code>np.zeros</code> gives you an array of real zeroes and not complex ones, hence the assignments after that don't change anything, as they are imaginary.</p>
<p>Thus the solution is quite simply</p>
<pre><code>import numpy as np
N=100
xmin=0
xmax=2.0*np.pi
step = (xmax-xmin)/(N)
xdata = np.linspace(step, xmax, N)
v = np.exp(np.sin(xdata))
derv = np.cos(xdata)*v
vhat = np.fft.fft(v)
what = 1j*np.zeros(N)
what[0:N/2.0] = 1j*np.arange(0, N/2.0, 1)
what[N/2+1:] = 1j*np.arange(-N/2.0 + 1, 0, 1)
what = what*vhat
w = np.real(np.fft.ifft(what))
# Then plotting
</code></pre>
<p>whereby the <code>np.zeros</code> is replaced by <code>1j*np.zeros</code></p>
| 1 | 2016-10-07T14:52:18Z | [
"python",
"fft",
"derivative"
]
|
Converting time data into a value for an array | 39,710,229 | <p>I'm writing a basic simulation for an astronomy application. I have a .csv file which is a list of observations. It has 11 columns where columns 2, 3, 4, 5, are decimal hour, day, month, year respectfully. The rest are names, coordinates etc. Each night, a selection of objects are observed once per night.</p>
<p>Each entry in these columns corresponds to when an observation was made. For example: <code>22.583, 5, 1, 2015</code> is equivalent to 22:35pm on the 5th Jan 2015 (0.583 is 35/60 minutes as a fraction). The whole file is 'resolved' to 5 minute intervals.</p>
<p>I also have written some code that produces data at a given time interval.</p>
<p>What I want is a way to interpret the 4 time and date columns so that I have a timestamp that is a simple value of 'how many 5 minutes since t=0'.</p>
<p>This would result in a single time-date array that reads 0,1,2,3,4... for the first 5 possible observations. It's worth noting here that in the original csv file that the data doesn't occupy every possible 5 minute interval, there are gaps of varying length.</p>
<p>Ultimately I want to be able to have a list of time-date values when each object was observed. </p>
<p>As an example, what I have now is (for columns 1,2,3,4,5 only):</p>
<pre><code>star1, 0.250, 1, 1, 2015
star2, 0.583, 1, 1, 2015
star3, 0.916, 1, 1, 2015
star1, 0.250, 2, 1, 2015
star2, 0.583, 2, 1, 2015
star3, 0.916, 2, 1, 2015...
</code></pre>
<p>Which I'd like to turn into:</p>
<pre><code>t_star1 = [3, 291,...]
t_star2 = [7, 295,...]
t_star3 = [11, 299,...]
</code></pre>
<p>Here I used 00:00 on 1st Jan 2015 as t=0, so 00:05/1/1/2015 is t=1 and so on every 5 minutes. My csv file actually runs for 10 years and has about 70,000 entries.</p>
<p>What makes this complicated (for me) is that in the time column the hours/minutes are expressed in a funny way, and that each month has a different number of days in it. So to find the timestamp of something isn't a simple operation.</p>
<p>Thanks!</p>
| 0 | 2016-09-26T18:49:41Z | 39,710,415 | <p>Consider using the standard <code>datetime</code> module: <a href="https://docs.python.org/3.5/library/datetime.html" rel="nofollow">https://docs.python.org/3.5/library/datetime.html</a></p>
<pre><code>from datetime import datetime
hourminute = .25
day = 1
month = 1
year = 2015
datetime(year, month, day, int(hourminute//1), round(int((hourminute%1) * 60)))
</code></pre>
<p>Then <code>datetime</code> has a bunch of methods for converting between time repersentations.</p>
| 2 | 2016-09-26T19:02:23Z | [
"python",
"date",
"csv",
"datetime"
]
|
How to separate time ranges/intervals into bins if intervals occur over multiple bins | 39,710,296 | <p>I have a dataset which consists of pairs of start-end times (say seconds) of something happening across a recorded period of time. For example:</p>
<pre><code>#each tuple includes (start, stop) of the event happening
data = [(0, 1), (5,8), (14,21), (29,30)]
</code></pre>
<p>I want to quantify what percentage of the time this thing is happening within bins of any size that I desire. For example if I wanted bins of 5 seconds each I would like a function that would:</p>
<ul>
<li>split up any tuples that cross over into multiple bins</li>
<li>add up the total time event occurs and divide by bin size to get time event occurred during each bin</li>
</ul>
<p>I'm mostly having trouble with the first point.</p>
<p>Ideally it would look something like this where bin_times is the function that I need help writing, and the output is what the function would return:</p>
<pre><code>data = [(0, 1), (5,8), (15,21), (29,30)]
bin_times(data, bin_size=5, total_length=40)
>> [20, 60, 0, 100, 20, 20, 0, 0]
</code></pre>
| 1 | 2016-09-26T18:54:04Z | 39,727,994 | <p>If you don't mind using <code>numpy</code>, here is a strategy:</p>
<pre><code>import numpy as np
def bin_times(data, bin_size, total_length):
times = np.zeros(total_length, dtype=np.bool)
for start, stop in data:
times[start:stop] = True
binned = 100 * np.average(times.reshape(-1, bin_size), axis=1)
return binned.tolist()
data = [(0, 1), (5,8), (15,21), (29,30)]
bin_times(data, 5, 40)
// => [20.0, 60.0, 0.0, 100.0, 20.0, 20.0, 0.0, 0.0]
</code></pre>
<p>To explain the logic of <code>bin_times()</code>, let me use a smaller example:</p>
<pre><code>data = [(0, 1), (3, 8)]
bin_times(data, 3, 9)
// => [33.3, 100.0, 66.6]
</code></pre>
<ol>
<li><p>The <code>times</code> array encodes whether your event is happening in each unit time interval. You start by setting every entry to <code>False</code>:</p>
<pre><code>[False, False, False, False, False, False, False, False, False]
</code></pre></li>
<li><p>Read the incoming <code>data</code> and turn the appropriate entries to <code>True</code>:</p>
<pre><code>[True, False, False, True, True, True, True, True, False]
</code></pre></li>
<li><p>Reshape it into a two-dimensional matrix in which the length of the rows is <code>bin_size</code>:</p>
<pre><code>[[True, False, False],
[True, True, True],
[True, True, False]]
</code></pre></li>
<li><p>Take the average in each row:</p>
<pre><code>[0.333, 1.000, 0.666]
</code></pre></li>
<li><p>Multiply by 100 to turn those numbers into percentages:</p>
<pre><code>[33.3, 100.0, 66.6]
</code></pre></li>
<li><p>To hide the use of <code>numpy</code> from the consumer of the function, use the <code>.tolist()</code> method to turn the resulting <code>numpy</code> array into a plain Python list.</p></li>
</ol>
<p>One caveat: <code>bin_size</code> needs to evenly divide <code>total_length</code> — the reshaping will throw a <code>ValueError</code> otherwise.</p>
| 1 | 2016-09-27T14:59:18Z | [
"python"
]
|
Get values from a text file using Python | 39,710,345 | <p>I have some data in text file organized like this:</p>
<pre><code>26.11.2014 154601 26 53.07
16.12.2014 155001 25.2 52.1
</code></pre>
<p>Where first column is date, second kilometers, third fuel in litre and last is cost in polish zloty. I am trying create my personal program with refuelling statistics. Here is the code where I save new data to the text file:</p>
<pre><code>def add_petrol_refueling(date, km, petrol_cost, petrol_volume):
plik = open('paliwo/benzyna.txt', 'a')
plik.write(date + '\t' + km + '\t' + petrol_cost + '\t' + petrol_volume)
plik.write('\n')
plik.close()
return True
</code></pre>
<p>I get data from user using:</p>
<pre><code>print add_petrol_refueling(raw_input("Refueling date [dd:mm:rrrr]: "),
raw_input("Kilometers on car screen [km]: "),
raw_input("Refueling cost [zÅ]: "),
raw_input("Refueling volume [l]: "))
</code></pre>
<p>And my question is How is the best option to get data from text file to do some operation with it and show later some statistics like average fuel consumption, average cost, when I can assume next refuelling etc .</p>
| -2 | 2016-09-26T18:58:16Z | 39,710,770 | <p>The following approach should help to get you started to get the data back from your text file. It first reads each line of your <code>benzyna.txt</code> file, removes the trailing newline and splits it based on <code>\t</code>. This creates a <code>data</code> list containing 2 rows with 4 strings in each. Next it goes through each row and converts the last three entries from strings into floats so you can carry out your calculations on them:</p>
<pre><code>with open('benzyna.txt') as f_input:
data = [row.strip().split('\t') for row in f_input]
data = [[rdate, float(km), float(petrol_cost), float(petrol_vol)] for rdate, km, petrol_cost, petrol_vol in data]
print data
</code></pre>
<p>This would display:</p>
<pre><code>[['26.11.2014', 154601.0, 26.0, 53.07], ['16.12.2014', 155001.0, 25.2, 52.1]]
</code></pre>
<hr>
<p>Alternatively, the following approach would also give you the same result:</p>
<pre><code>data = []
with open('benzyna.txt') as f_input:
for row in f_input:
rdate, km, petrol_cost, petrol_vol = row.strip().split('\t')
data.append([rdate, float(km), float(petrol_cost), float(petrol_vol)])
print data
</code></pre>
| 0 | 2016-09-26T19:23:57Z | [
"python",
"text-files"
]
|
How to convert unicode numbers to ints? | 39,710,365 | <blockquote>
<p>Arabic and Chinese have their own glyphs for digits. <code>int</code> works correctly with all the different ways to write numbers.</p>
</blockquote>
<p>I was not able to reproduce the behaviour (python 3.5.0)</p>
<pre><code>>>> from unicodedata import name
>>> name('í í¹¤')
'RUMI DIGIT FIVE'
>>> int('í í¹¤')
ValueError: invalid literal for int() with base 10: 'í í¹¤'
>>> int('äº') # chinese/japanese number five
ValueError: invalid literal for int() with base 10: 'äº'
</code></pre>
<p>Am I doing something wrong? Or is the claim simply incorrect (<a href="http://www.jjinux.com/2012/03/pycon-advanced-python-tutorials.html" rel="nofollow">source</a>). </p>
| 5 | 2016-09-26T18:59:25Z | 39,710,479 | <p>Here's a way to convert to numerical values (casting to <code>int</code> does not work in all cases, unless there's a secret setting somewhere)</p>
<pre><code>from unicodedata import numeric
print(numeric('äº'))
</code></pre>
<p>result: 5.0</p>
<p>Someone noted (and was right) that some arabic or other chars worked fine with <code>int</code>, so a routine with a fallback mechanism could be done:</p>
<pre><code>from unicodedata import numeric
def to_integer(s):
try:
r = int(s)
except ValueError:
r = int(numeric(s))
return r
</code></pre>
<p>EDIT: as zvone noted, there are fraction characters that return floating point numbers: ex: <code>numeric('\u00be') is 0.75</code> (3/4 char). So rounding to int is not always safe.</p>
<p>EDIT2: the <code>numeric</code> function only accepts one character. So the "conversion to numeric" that could handle most cases without risks of rounding would be</p>
<pre><code>from unicodedata import numeric
def to_float(s):
try:
r = float(s)
except ValueError:
r = numeric(s)
return r
print(to_float('ÛµÛµ'))
print(to_float('äº'))
print(to_float('¾'))
</code></pre>
<p>result:</p>
<pre><code>55.0
5.0
0.75
</code></pre>
<p>(I don't want to steal user2357112 excellent explanation, but still wanted to provide a solution that tries to cover all cases)</p>
| 5 | 2016-09-26T19:05:55Z | [
"python",
"python-3.x",
"unicode"
]
|
How to convert unicode numbers to ints? | 39,710,365 | <blockquote>
<p>Arabic and Chinese have their own glyphs for digits. <code>int</code> works correctly with all the different ways to write numbers.</p>
</blockquote>
<p>I was not able to reproduce the behaviour (python 3.5.0)</p>
<pre><code>>>> from unicodedata import name
>>> name('í í¹¤')
'RUMI DIGIT FIVE'
>>> int('í í¹¤')
ValueError: invalid literal for int() with base 10: 'í í¹¤'
>>> int('äº') # chinese/japanese number five
ValueError: invalid literal for int() with base 10: 'äº'
</code></pre>
<p>Am I doing something wrong? Or is the claim simply incorrect (<a href="http://www.jjinux.com/2012/03/pycon-advanced-python-tutorials.html" rel="nofollow">source</a>). </p>
| 5 | 2016-09-26T18:59:25Z | 39,710,531 | <p>The source is incorrect.</p>
<p>From python doc:</p>
<blockquote>
<p>class int(x, base=10)</p>
<p>Return an integer object constructed from a
number or string x, or return 0 if no arguments are given. If x is a
number, return x.__int__(). For floating point numbers, this truncates
towards zero.</p>
<p>If x is not a number or if base is given, then x must be a string,
bytes, or bytearray instance <strong>representing an integer literal in radix
base</strong>. </p>
</blockquote>
<p>And an integer literal is just a string of numbers.</p>
<p>Edit: Was wrong, dug into the source code and found <a href="http://svn.python.org/projects/python/trunk/Python/mystrtoul.c" rel="nofollow">this</a> function is called when python wants to convert a string to int. There is a <strong>py_CHARMASK</strong> which <em>I guess</em> contains the information we need, but I could not find it :/</p>
| -1 | 2016-09-26T19:08:57Z | [
"python",
"python-3.x",
"unicode"
]
|
How to convert unicode numbers to ints? | 39,710,365 | <blockquote>
<p>Arabic and Chinese have their own glyphs for digits. <code>int</code> works correctly with all the different ways to write numbers.</p>
</blockquote>
<p>I was not able to reproduce the behaviour (python 3.5.0)</p>
<pre><code>>>> from unicodedata import name
>>> name('í í¹¤')
'RUMI DIGIT FIVE'
>>> int('í í¹¤')
ValueError: invalid literal for int() with base 10: 'í í¹¤'
>>> int('äº') # chinese/japanese number five
ValueError: invalid literal for int() with base 10: 'äº'
</code></pre>
<p>Am I doing something wrong? Or is the claim simply incorrect (<a href="http://www.jjinux.com/2012/03/pycon-advanced-python-tutorials.html" rel="nofollow">source</a>). </p>
| 5 | 2016-09-26T18:59:25Z | 39,710,800 | <p><code>int</code> does not accept all ways to write numbers. It understands digit characters used for positional numeral systems, but neither <a href="http://std.dkuug.dk/jtc1/sc2/wg2/docs/n3087-1.pdf" rel="nofollow">Rumi</a> nor <a href="https://en.wikipedia.org/wiki/Chinese_numerals#Characters_used_to_represent_numbers" rel="nofollow">Chinese</a> numerals are positional. Neither <code>'äºäº'</code> nor two copies of Rumi numeral 5 would represent 55, so <code>int</code> doesn't accept them.</p>
| 5 | 2016-09-26T19:25:29Z | [
"python",
"python-3.x",
"unicode"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.