title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
UDP Packet received okay in Java but corrupted in Python | 39,914,553 | <p>I am trying to record audio from an Android tablet and send it to a Python server. At the start of the byte packet, I include some relevant information about the state of the Android app (a byte array called "actives" -- but since it is received fine by a Java server, this should not be relevant). The Android code is as follows:</p>
<pre><code> int read = recorder.read(buffer, 0, buffer.length);
for (int a = 0; a < actives.length; a++) {
outBuffer[a+1] = (byte)actives[a];
logger = logger + Byte.toString(actives[a]) + ",";
}
int furthest=0;
for(int a =0; a < buffer.length; a++){
outBuffer[actives.length+1+a]=buffer[a];
if(buffer[a]!=0)furthest=a;
}
packet = new DatagramPacket(outBuffer, read,
serverAddress, PORT);
Log.d("writing", logger+Byte.toString(outBuffer[7])+".length"+Integer.toString(1+furthest+actives.length+1));
Log.d("streamer","Packet length "+outBuffer.length);
try {
socket.send(packet);
}catch (IOException e){
Log.e("streamer", "Exception: " + e);
}
Log.d("streamer","packetSent");
</code></pre>
<p>I receive a clean signal on the other end using a Java server.
Image of received Java output: <a href="http://i.imgur.com/31UWzya.png" rel="nofollow">http://i.imgur.com/31UWzya.png</a>
This is my Java server:</p>
<pre><code>DatagramSocket serverSocket = new DatagramSocket(3001);
int byteSize=970;
byte[] receiveData = new byte[byteSize];
DatagramPacket receivePacket = new DatagramPacket(receiveData,
receiveData.length);
while(true){ // receive data until timeout
try {
serverSocket.receive(receivePacket);
String rcvd = "rcvd from " + receivePacket.getAddress();
System.out.println("receiver"+"Received a packet!" +rcvd);
break;
}
catch (Exception e) {
// timeout exception.
System.out.println("Timeout reached without packet!!! " + e);
timeoutReached=true;
break;
}
}
if(timeoutReached)continue;
currTime = System.currentTimeMillis();
data = receivePacket.getData();
</code></pre>
<p>Here is my Python server's output:
<a href="http://i.imgur.com/RYkcCCE.png" rel="nofollow">http://i.imgur.com/RYkcCCE.png</a>
And here is the code:</p>
<pre><code>import socket
ip="192.ip.address"
port=3001;
sock=socket.socket(socket.AF_INET,socket.SOCK_DGRAM);
sock.bind(('',port));
while(True):
data,addr=sock.recvfrom(970);
print("address",addr);
print("received a data!");
print(data);
</code></pre>
<p>In the last line of the python script, I have tried to change "print(data)" to "print(data.decode())", in which case I get this error: </p>
<pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
</code></pre>
<p><strong>I am not running these servers at the same time</strong>
My guess is that it has something to do with Java using unsigned ints and Python not doing that. Is there a way in Python to convert this data, because data.decode() is not working? Alternatively, could I convert the data in Java somehow? None of the answers on Stack Overflow that I have tried have worked.</p>
| 0 | 2016-10-07T10:01:17Z | 39,915,112 | <p>Decoding is the right approach. In your Android app, explicitly mention the character encoding when converting text to bytes. <a href="https://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html#UTF_8" rel="nofollow">UTF-8</a> is the standard Charset that is used.</p>
<p>Your log is pretty clear. You are trying to decode the data packet as ASCII (which is the default encoding of the <a href="https://docs.python.org/2/library/codecs.html#codecs.decode" rel="nofollow">decode() function</a>) but I'm guessing it's ISO_8859_1 or UTF-8 (more likely).</p>
<p>Next try <code>data.decode('utf8', 'ignore')</code> on the Python side. Note: <code>'ignore'</code> is an optional argument and should be used only for debugging, as it will skip malformed (corrupted) data and try to convert individual characters. If you want to use decode() in production use <code>'strict'</code> or no second argument (<code>'strict'</code> is the default).</p>
<p>In place of <code>'utf8'</code> try other options from the other <a href="https://docs.python.org/2/library/codecs.html#module-codecs" rel="nofollow">Python Encodings</a>.</p>
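<p>For illustration, a minimal sketch of probing a packet with different codecs (the byte string here is just a stand-in for the real <code>data</code>):</p>
<pre><code>data = b'\xff\x01hello'  # stand-in for the bytes returned by sock.recvfrom()

for codec in ('ascii', 'utf8', 'latin1'):
    try:
        print("%s -> %r" % (codec, data.decode(codec)))
    except UnicodeDecodeError as exc:
        print("%s failed: %s" % (codec, exc))
# latin1 never fails: it maps every byte 0-255 to a code point,
# which makes it handy for inspecting binary payloads.
</code></pre>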
| 1 | 2016-10-07T10:28:18Z | [
"java",
"android",
"python",
"udp",
"datagram"
] |
UDP Packet received okay in Java but corrupted in Python | 39,914,553 | <p>I am trying to record audio from an Android tablet and send it to a Python server. At the start of the byte packet, I include some relevant information about the state of the Android app (a byte array called "actives" -- but since it is received fine by a Java server, this should not be relevant). The Android code is as follows:</p>
<pre><code> int read = recorder.read(buffer, 0, buffer.length);
for (int a = 0; a < actives.length; a++) {
outBuffer[a+1] = (byte)actives[a];
logger = logger + Byte.toString(actives[a]) + ",";
}
int furthest=0;
for(int a =0; a < buffer.length; a++){
outBuffer[actives.length+1+a]=buffer[a];
if(buffer[a]!=0)furthest=a;
}
packet = new DatagramPacket(outBuffer, read,
serverAddress, PORT);
Log.d("writing", logger+Byte.toString(outBuffer[7])+".length"+Integer.toString(1+furthest+actives.length+1));
Log.d("streamer","Packet length "+outBuffer.length);
try {
socket.send(packet);
}catch (IOException e){
Log.e("streamer", "Exception: " + e);
}
Log.d("streamer","packetSent");
</code></pre>
<p>I receive a clean signal on the other end using a Java server.
Image of received Java output: <a href="http://i.imgur.com/31UWzya.png" rel="nofollow">http://i.imgur.com/31UWzya.png</a>
This is my Java server:</p>
<pre><code>DatagramSocket serverSocket = new DatagramSocket(3001);
int byteSize=970;
byte[] receiveData = new byte[byteSize];
DatagramPacket receivePacket = new DatagramPacket(receiveData,
receiveData.length);
while(true){ // receive data until timeout
try {
serverSocket.receive(receivePacket);
String rcvd = "rcvd from " + receivePacket.getAddress();
System.out.println("receiver"+"Received a packet!" +rcvd);
break;
}
catch (Exception e) {
// timeout exception.
System.out.println("Timeout reached without packet!!! " + e);
timeoutReached=true;
break;
}
}
if(timeoutReached)continue;
currTime = System.currentTimeMillis();
data = receivePacket.getData();
</code></pre>
<p>Here is my Python server's output:
<a href="http://i.imgur.com/RYkcCCE.png" rel="nofollow">http://i.imgur.com/RYkcCCE.png</a>
And here is the code:</p>
<pre><code>import socket
ip="192.ip.address"
port=3001;
sock=socket.socket(socket.AF_INET,socket.SOCK_DGRAM);
sock.bind(('',port));
while(True):
data,addr=sock.recvfrom(970);
print("address",addr);
print("received a data!");
print(data);
</code></pre>
<p>In the last line of the python script, I have tried to change "print(data)" to "print(data.decode())", in which case I get this error: </p>
<pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
</code></pre>
<p><strong>I am not running these servers at the same time</strong>
My guess is that it has something to do with Java using unsigned ints and Python not doing that. Is there a way in Python to convert this data, because data.decode() is not working? Alternatively, could I convert the data in Java somehow? None of the answers on Stack Overflow that I have tried have worked.</p>
| 0 | 2016-10-07T10:01:17Z | 39,923,615 | <p>This was pretty brutal to attack head-on. I tried specifying the encoding in Java (before sending) like another SO post suggested, but that didn't help. So I side-stepped the problem by converting my Android byte array into a comma-separated string, then converting the string back into UTF-8 bytes.</p>
<pre><code>sendString="";
for(int a =0; a < buffer.length; a++){
sendString=sendString+Byte.toString(buffer[a])+",";
}
byte[] outBuffer = sendString.getBytes("UTF-8");
</code></pre>
<p>Make sure you reset the string to empty ("") each time you go through the while loop, or it will get very slow.</p>
<p>Then in Python, right after receiving:</p>
<pre><code>data=data.decode("utf8");
</code></pre>
<p>Stringifying 980 bytes does not appear to add much to the processing time, although I do wish I could send the raw bytes, as speed is very important to me here. I'll leave the question open in case someone comes up with a better solution.</p>
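<p>For anyone wanting the raw-byte route: a minimal sketch of reading Java's signed bytes directly in Python with the <code>struct</code> module, skipping the string round-trip entirely (the socket setup is the same as in the question):</p>
<pre><code>import struct

data, addr = sock.recvfrom(970)                  # raw bytes off the wire
values = struct.unpack('%db' % len(data), data)  # 'b' = signed 8-bit, like Java's byte
print(values[:10])                               # first samples as ints in -128..127
</code></pre>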
| 0 | 2016-10-07T18:21:51Z | [
"java",
"android",
"python",
"udp",
"datagram"
] |
How to add an object to already existing json object if a key from csv already exists? | 39,914,598 | <p>I have a <code>csv</code> file that looks like this:</p>
<pre><code>NDB_No,Seq,Amount,Msre_Desc,Gm_Wgt,Num_Data_Pts,Std_Dev
01001,1,1,pat (1" sq, 1/3" high),5,,
01001,2,1,tbsp,14.2,,
01001,3,1,cup,227,,
01001,4,1,stick,113,,
01002,1,1,pat (1" sq, 1/3" high),3.8,,
01002,2,1,tbsp,9.4,,
01002,3,1,cup,151,,
01002,4,1,stick,76,,
</code></pre>
<p>And I want to export it as a json like this:</p>
<pre><code>{
"NDB_No": [
{
"01001":
{
"Seq 1":
{
"Amount": 1,
"Msre_Desc": "pat (1 in sq, 1/3 in high)",
"Gm_Wgt": 5,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 2":
{
"Amount": 1,
"Msre_Desc": "tbsp",
"Gm_Wgt": 14.2,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 3":
{
"Amount": 1,
"Msre_Desc": "cup",
"Gm_Wgt": 227,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 4":
{
"Amount": 1,
"Msre_Desc": "stick",
"Gm_Wgt": 113,
"Num_Data_Pts": null,
"Std_Dev": null
}
}
},
{
"01002":
{
"Seq 1":
{
"Amount": 1,
"Msre_Desc": "pat (1 in sq, 1/3 in high)",
"Gm_Wgt": 3.8,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 2":
{
"Amount": 1,
"Msre_Desc": "tbsp",
"Gm_Wgt": 9.4,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 3":
{
"Amount": 1,
"Msre_Desc": "cup",
"Gm_Wgt": 151,
"Num_Data_Pts": null,
"Std_Dev": null
},
"Seq 4":
{
"Amount": 1,
"Msre_Desc": "stick",
"Gm_Wgt": 76,
"Num_Data_Pts": null,
"Std_Dev": null
}
}
}]
}
</code></pre>
<p>As you can see, if <code>NDB_No</code> from the csv is repeated I just add a new <code>Seq</code> to it. Otherwise, I continue with another object.</p>
<p>What I've got was:</p>
<pre><code>{
"NDB_No": [
{
"01001": {
"Seq 1": {
"Msre_Desc": "pat (1\" sq",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": " 1/3\" high)",
"Num_Data_Pts": "5"
}
}
}
]
}
{
"NDB_No": [
{
"01001": {
"Seq 2": {
"Msre_Desc": "tbsp",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "14.2",
"Num_Data_Pts": "NULL"
}
}
}
]
}
{
"NDB_No": [
{
"01001": {
"Seq 3": {
"Msre_Desc": "cup",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "227",
"Num_Data_Pts": "NULL"
}
}
}
]
}
{
"NDB_No": [
{
"01001": {
"Seq 4": {
"Msre_Desc": "stick",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "113",
"Num_Data_Pts": "NULL"
}
}
}
]
}
{
"NDB_No": [
{
"01002": {
"Seq 1": {
"Msre_Desc": "pat (1\" sq",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": " 1/3\" high)",
"Num_Data_Pts": "3.8"
}
}
}
]
}
{
"NDB_No": [
{
"01002": {
"Seq 2": {
"Msre_Desc": "tbsp",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "9.4",
"Num_Data_Pts": "NULL"
}
}
}
]
}
{
"NDB_No": [
{
"01002": {
"Seq 3": {
"Msre_Desc": "cup",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "151",
"Num_Data_Pts": "NULL"
}
}
}
]
}
{
"NDB_No": [
{
"01002": {
"Seq 4": {
"Msre_Desc": "stick",
"Std_Dev": "NULL",
"Amount": "1",
"Gm_Wgt": "76",
"Num_Data_Pts": "NULL"
}
}
}
]
}
</code></pre>
<p>By writing it in Python like this:</p>
<pre><code>import csv
import json
from collections import OrderedDict, defaultdict
with open('Example_Measures.csv') as csvfile, open('x.json', 'w') as json_file:
reader = csv.DictReader(csvfile)
for row in reader:
source = {'NDB_No': []}
source['NDB_No'].append({
row['NDB_No']: {
'Seq {}'.format(row['Seq']): {
'Amount': row['Amount'],
'Msre_Desc': row['Msre_Desc'],
'Gm_Wgt': row['Gm_Wgt'],
'Num_Data_Pts': 'NULL' if not row['Num_Data_Pts'] else row['Num_Data_Pts'],
'Std_Dev': 'NULL' if not row['Std_Dev'] else row['Std_Dev']
}
}
})
data_json = json.dumps(dict(OrderedDict(source)), indent=2, sort_keys=False)
print(data_json)
</code></pre>
<p>The problems that I'm facing are:</p>
<ul>
<li>how can I only add <code>Seq</code> to <code>NDB_No</code> if it already exists?</li>
<li>how can I avoid printing <code>NDB_No</code> if <code>NDB_No</code> already exists?</li>
</ul>
<p>I'd like to know if it's possible to get the above JSON from the csv. I've been struggling a lot but I couldn't figure it out (all I've got was the above JSON, which is not quite correct). Any help/advice is welcome!</p>
| -1 | 2016-10-07T10:03:01Z | 39,915,827 | <p>You have two keys and a dictionary of data.
To make things simpler, change <code>{"NDB_No": [{key1: {key2: data}}]}</code> to <code>{key1: {key2: data}}</code> while building;
the extra wrapping is just noise and can be added back at the end.</p>
<p>Make <code>key1</code>, <code>key2</code> and <code>data</code>:</p>
<pre><code>key1, key2 = row['NDB_No'], 'Seq {}'.format(row['Seq'])
data = {
'Amount': row['Amount'],
'Msre_Desc': row['Msre_Desc'],
'Gm_Wgt': row['Gm_Wgt'],
'Num_Data_Pts': row['Num_Data_Pts'] or None,
'Std_Dev': row['Std_Dev'] or None,
}
</code></pre>
<p>Then you want to build the data, and wrap it in the additional dictionary:</p>
<pre><code>src = {}
for row in reader:
src.setdefault(key1, {})[key2] = data
src = {"NDB_No": src}
</code></pre>
<p>You then want to write your own <code>setdefault</code> to build a list rather than a dictionary.</p>
<pre><code>def setdefault(lst, key, value):
    # like dict.setdefault, but for a list of single-key dicts
    for item in lst:
        if key in item:
            return item[key]
    lst.append({key: value})
    return value
</code></pre>
<p>And then adapt the above:</p>
<pre><code>src = []
for row in reader:
key1, key2 = row['NDB_No'], 'Seq {}'.format(row['Seq'])
data = ...
setdefault(src, key1, {})[key2] = data
src = {"NDB_No": src}
</code></pre>
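<p>Stitched together, a minimal runnable version looks like this (the file name is the one from your question):</p>
<pre><code>import csv
import json

def setdefault(lst, key, value):
    # like dict.setdefault, but for a list of single-key dicts
    for item in lst:
        if key in item:
            return item[key]
    lst.append({key: value})
    return value

src = []
with open('Example_Measures.csv') as csvfile:
    for row in csv.DictReader(csvfile):
        key1, key2 = row['NDB_No'], 'Seq {}'.format(row['Seq'])
        data = {
            'Amount': row['Amount'],
            'Msre_Desc': row['Msre_Desc'],
            'Gm_Wgt': row['Gm_Wgt'],
            'Num_Data_Pts': row['Num_Data_Pts'] or None,
            'Std_Dev': row['Std_Dev'] or None,
        }
        setdefault(src, key1, {})[key2] = data

print(json.dumps({'NDB_No': src}, indent=2))
</code></pre>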
| 1 | 2016-10-07T11:11:37Z | [
"python",
"json",
"python-3.x",
"csv"
] |
What's the best way to get continuous data from another program in Django? | 39,914,635 | <p>Here's the setup: On a single-board computer with a very rudimentary Linux, I'm running a <code>Django app</code>. This app is supposed, when a button is pressed or as a response to the data described below, to call either a function from a library written in <code>C</code> or a compiled <code>C</code> program, to write data to system memory at a specified address, poke/peek style (<code>Python</code> doesn't seem to be able to do that natively).<br>
The <code>Django</code> app should also display data, continuously, which is being read from the memory from the same library / program.</p>
<p>My question now is how to even begin setting up the scenario described above. Is this even possible with a web app? Is <code>Django</code>, or more fundamentally any web framework, even the right approach here? I'm at a bit of a loss, since I've spent quite a few hours trying to figure this out without finding even the most basic starting point...</p>
<hr>
<p>Disclaimer: I'm pretty new to the entire web framework thing, and more importantly web development in general, so sorry if this is a <em>bad</em> question, in the sense that the information might be easy to find online; I just couldn't find a good starting point for it.</p>
| 1 | 2016-10-07T10:04:57Z | 39,918,938 | <p>I wanted to add a comment but not enough space... anyway</p>
<p>You can write a native extension in C for Python that could do what you need, check <a href="https://docs.python.org/2/extending/extending.html" rel="nofollow">this</a>.</p>
<p>Now, "displaying data continuously" is kind of vague: if this C library updates that hypothetical address very often and very fast, you have to push updates to the browser client just as fast.</p>
<p>I think <a href="https://en.wikipedia.org/wiki/WebSocket" rel="nofollow">websockets</a> would do the trick but they are js related, so I think NodeJs would be a better candidate for the server side of your application instead of Django.</p>
<p>If you want to stick to Django you can also expose a URL that returns the current value and have a webpage check that URL continuously (with a small <a href="https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/setInterval" rel="nofollow">Interval</a>) using a simple ajax call; kind of ugly and inefficient, but it would work.</p>
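<p>For illustration, a minimal sketch of such an endpoint (<code>read_address()</code> is a hypothetical wrapper around your C peek function, e.g. via ctypes or a native extension):</p>
<pre><code>from django.http import JsonResponse

def memory_value(request):
    value = read_address()  # hypothetical call into your C library
    return JsonResponse({'value': value})
</code></pre>
<p>The page would then hit this view's URL from <code>setInterval</code> and update the DOM with the returned JSON.</p>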
<p>Anyway, IMHO your best bet is websockets, because with them you have full-duplex communication between client and server.</p>
<p>Good Luck with your project.</p>
<p>Info:</p>
<p><a href="https://github.com/stephenmcd/django-socketio" rel="nofollow">Websockets in Django with socket.io</a></p>
<p><a href="http://socket.io/" rel="nofollow">Nodejs socket.io</a></p>
| 0 | 2016-10-07T13:51:42Z | [
"python",
"django",
"shared-memory",
"web-frameworks"
] |
Change global variable to value in entry field | 39,914,640 | <p>How can I change a global variable to a value inputted by a user in an entry field?</p>
<pre><code>card_no = 0
def cardget():
global card_no
card_no = e1.get()
print(card_no)
def menu():
global card_no
root = Tk()
e1 = Entry(root).pack()
Label(root, text= "Enter card number").pack(anchor= NW)
Button(root, text= "Confirm card", command=cardget).pack(anchor= NW)
menu()
</code></pre>
| 0 | 2016-10-07T10:05:10Z | 39,914,797 | <p>Don't use global variables. Tkinter apps work much better with OOP.</p>
<pre><code>import tkinter as tk
class App:
def __init__(self, parent):
self.e1 = tk.Entry(parent)
self.e1.pack()
self.l = tk.Label(root, text="Enter card number")
self.l.pack(anchor=tk.NW)
self.b = tk.Button(root, text="Confirm card", command=self.cardget)
self.b.pack(anchor=tk.NW)
self.card_no = 0
def cardget(self):
self.card_no = int(self.e1.get()) # add validation if you want
print(self.card_no)
root = tk.Tk()
app = App(root)
root.mainloop()
</code></pre>
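<p>For example, a small sketch of the validation hinted at in the comment above, as a drop-in replacement for <code>cardget</code>:</p>
<pre><code>def cardget(self):
    try:
        self.card_no = int(self.e1.get())  # reject non-numeric input
    except ValueError:
        print("Please enter digits only")
        return
    print(self.card_no)
</code></pre>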
| 1 | 2016-10-07T10:12:58Z | [
"python",
"tkinter"
] |
Change global variable to value in entry field | 39,914,640 | <p>How can I change a global variable to a value inputted by a user in an entry field?</p>
<pre><code>card_no = 0
def cardget():
global card_no
card_no = e1.get()
print(card_no)
def menu():
global card_no
root = Tk()
e1 = Entry(root).pack()
Label(root, text= "Enter card number").pack(anchor= NW)
Button(root, text= "Confirm card", command=cardget).pack(anchor= NW)
menu()
</code></pre>
| 0 | 2016-10-07T10:05:10Z | 39,914,889 | <p>You can pass the card no to the function called when the button is clicked:</p>
<pre><code>import tkinter as tk
def cardget(card_no):
print(card_no)
def menu():
root = tk.Tk()
tk.Label(root, text="Enter card number").pack(anchor=tk.NW)
e1 = tk.Entry(root)
e1.pack()
tk.Button(root, text="Confirm card", command=lambda *args: cardget(e1.get())).pack(anchor=tk.NW)
root.mainloop()
menu()
</code></pre>
| -1 | 2016-10-07T10:18:13Z | [
"python",
"tkinter"
] |
How to put a Label/unique ID on Date-time in Database or when analyzing data with R/Python? | 39,914,736 | <p>I am looking for a general solution in the database. It could be Oracle or SQL Server, or the operation could be done in R/Python when I import the data. I have a date-time (D-M-YY) column and I want to put a label on it according to month. The day part is static (it is trimmed to the first day of the month); the month and year parts are variable. For example:</p>
<pre><code>Date Label
1-1-16 1
1-2-16 2
1-3-16 3
1-4-16 4
.
.
.
</code></pre>
| 0 | 2016-10-07T10:09:56Z | 39,915,760 | <p>Finally my friend helped me find a solution in R, which is very simple:</p>
<p><code>date_sample <- c('1-1-2016','1-2-2016','1-3-2016','1-4-2016')</code></p>
<p><code>format(as.Date(date_sample, format = '%d-%m-%Y'), "%m")</code></p>
<p>It gives me the output:
"01" "02" "03" "04"</p>
| 0 | 2016-10-07T11:07:52Z | [
"python",
"sql"
] |
Converting list to dictionary with list elements as index - Python | 39,914,744 | <p>From <a href="http://stackoverflow.com/questions/7974959/python-create-dict-from-list-and-auto-gen-increment-the-keys-list-is-the-actua">Python: create dict from list and auto-gen/increment the keys (list is the actual key values)?</a>, it's possible to create a <code>dict</code> from a <code>list</code> using <code>enumerate</code> to generate tuples made up of incremental keys and the elements in the list, i.e.:</p>
<pre><code>>>> x = ['a', 'b', 'c']
>>> list(enumerate(x))
[(0, 'a'), (1, 'b'), (2, 'c')]
>>> dict(enumerate(x))
{0: 'a', 1: 'b', 2: 'c'}
</code></pre>
<p>It is also possible to reverse the key-value pairs by iterating through every key in the <code>dict</code> (assuming that there is a one-to-one mapping between key-value pairs):</p>
<pre><code>>>> x = ['a', 'b', 'c']
>>> d = dict(enumerate(x))
>>> {v:k for k,v in d.items()}
{'a': 0, 'c': 2, 'b': 1}
</code></pre>
<p>Given the input list <code>['a', 'b', 'c']</code>, how can I achieve a dictionary with the elements as keys and incremental indices as values, <strong>without looping an additional time to reverse the dictionary</strong>?</p>
| 0 | 2016-10-07T10:10:23Z | 39,914,756 | <p>How about simply:</p>
<pre><code>>>> x = ['a', 'b', 'c']
>>> {j:i for i,j in enumerate(x)}
{'a': 0, 'c': 2, 'b': 1}
</code></pre>
| -1 | 2016-10-07T10:10:44Z | [
"python",
"list",
"dictionary"
] |
Execute function at specific GMT time daily | 39,914,796 | <p>How can I implement what is described in the title in Python, regardless of when the initial script is run?<br>
For example if I run the script at 2 am in a <code>"GMT +2"</code> timezone or at 8 pm at a <code>"GMT -1"</code> timezone, this script must schedule a function to run every day at <code>GMT 00:00</code>.</p>
| -2 | 2016-10-07T10:12:58Z | 39,914,908 | <p>You can just use <code>strftime</code> and <code>gmtime</code> (GMT):</p>
<pre><code>from time import strftime, gmtime
def foo():
if strftime("%H%M", gmtime()) == "0000": # Gets time in this format: HHMM
# Do stuff
</code></pre>
<p>For more information, go to <a href="https://docs.python.org/2/library/time.html#time.strftime" rel="nofollow">here</a> (Python docs).</p>
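<p>If you also need the daily scheduling itself, a simple sketch is to sleep until the next GMT midnight in a loop (<code>foo</code> is the function above; the wake-up is approximate, so keeping the time check inside <code>foo</code> is a reasonable safety net):</p>
<pre><code>import time

def seconds_until_gmt_midnight():
    now = time.gmtime()
    return 86400 - (now.tm_hour * 3600 + now.tm_min * 60 + now.tm_sec)

while True:
    time.sleep(seconds_until_gmt_midnight())
    foo()  # fires at roughly 00:00 GMT, regardless of the local timezone
</code></pre>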
| -1 | 2016-10-07T10:18:57Z | [
"python",
"time"
] |
python 2.7 - getting this timer object right | 39,914,853 | <p>I'm making a text-based game to practice Python.<br><br></p>
<p>The following is the code of the Trap scene. In the scene, the player falls into a hole, and if he doesn't react in time (5 seconds), he dies. After starting the timer, I ask for input inside a while loop. I can see that this code is somewhat redundant - I define what happens after the specified time passes twice - but I'm not really sure how to fix it.<br><br></p>
<pre><code>def b_trap():
"""Trap. I want to implement a timing function here. The player will need to type the appropriate command under a given time. If he cannot make a decision in time, the player dies. Timer object!!!"""
def b_timeout():
"""If player runs out of time, this is called."""
b_dead("You fall into the deep hole in the floor, and die when you\nhit the floor fifty meters down. : (")
print "The floor suddenly disappears behind your feet. You start to fall. What do you do? HURRY!"
time_to_do_stuff = Timer(5.0, b_timeout)
time_to_do_stuff.start()
while time_to_do_stuff.is_alive() is True:
command = raw_input(" > ")
if command in (bv_commands['climb'] or bv_commands['jump']):
time_to_do_stuff.cancel()
b_not_ready()
else:
print "You were saying?"
else:
b_timeout()
</code></pre>
<p>After running this code and not doing anything, <code>b_timeout()</code> is executed, but the <code>raw_input()</code> prompt remains - I'm asked for input again, but now without the prompt specified in the <code>raw_input</code> function (<code>" > "</code>).<br><br></p>
<p>Could you point out what I am missing? I've just started learning Python, so please forgive my novice mistakes.</p>
| 1 | 2016-10-07T10:16:02Z | 39,915,126 | <p><code>raw_input</code> is a blocking function that effectively halts the program until the input is received (waits for a new line/carriage return).</p>
<p>There are two ways around this:</p>
<ul>
<li>Record the time before and after the <code>raw_input</code>. If the latter is less than five seconds after the former then they responded quickly enough. This approach has the potentially unwanted behaviour of not showing player death until they respond (see the sketch after this list).</li>
<li>Run the <code>raw_input</code> concurrently in a separate python thread <a href="http://stackoverflow.com/questions/30929661/non-blocking-raw-input-in-python">like this</a>.</li>
</ul>
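<p>A minimal sketch of the first approach, reusing your <code>b_dead</code>/<code>b_not_ready</code> helpers and <code>bv_commands</code> dict (those names are assumed from your code, not defined here):</p>
<pre><code>import time

def b_trap():
    print "The floor suddenly disappears behind your feet. What do you do? HURRY!"
    start = time.time()
    command = raw_input(" > ")  # blocks until the player presses enter
    if time.time() - start > 5.0:
        # the player typed something, but too late
        b_dead("You fall into the deep hole in the floor, and die when you\nhit the floor fifty meters down. : (")
    elif command in (bv_commands['climb'], bv_commands['jump']):
        b_not_ready()
</code></pre>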
| 0 | 2016-10-07T10:29:03Z | [
"python",
"python-2.7",
"time"
] |
Recursive Generator Function Python Nested JSON Data | 39,914,906 | <p>I'm attempting to write a recursive generator function to flatten a nested json object of mixed types, lists and dictionaries. I am doing this partly for my own learning, so I have avoided grabbing an example from the internet to make sure I understand what's happening, but I have got stuck on what I think is the correct placement of the yield statement relative to the loop.</p>
<p>The source of the data passed to the generator function is the output of an outer loop which is iterating through a mongo collection.</p>
<p>When I use a print statement in the same place as the yield statement I get the results I am expecting, but when I switch to a yield statement the generator seems to only yield one item per iteration of the outer loop.</p>
<p>Hopefully someone can show me where I am going wrong.</p>
<pre><code>columns = ['_id'
, 'name'
, 'personId'
, 'status'
, 'explorerProgress'
, 'isSelectedForReview'
]
db = MongoClient().abcDatabase
coll = db.abcCollection
def dic_recurse(data, fields, counter, source_field):
counter += 1
if isinstance(data, dict):
for k, v in data.items():
if k in fields and isinstance(v, list) is False and isinstance(v, dict) is False:
# print "{0}{1}".format(source_field, k)[1:], v
yield "{0}{1}".format(source_field, k)[1:], v
elif isinstance(v, list):
source_field += "_{0}".format(k)
[dic_recurse(l, fields, counter, source_field) for l in data.get(k)]
elif isinstance(v, dict):
source_field += "_{0}".format(k)
dic_recurse(v, fields, counter, source_field)
elif isinstance(data, list):
[dic_recurse(l, fields, counter, '') for l in data]
for item in coll.find():
for d in dic_recurse(item, columns, 0, ''):
print d
</code></pre>
<p>And below is a sample of the data it's iterating, but the nesting does increase beyond what's shown.</p>
<pre><code>{
"_id" : ObjectId("5478464ee4b0a44213e36eb0"),
"consultationId" : "54784388e4b0a44213e36d5f",
"modules" : [
{
"_id" : "FF",
"name" : "Foundations",
"strategyHeaders" : [
{
"_id" : "FF_Money",
"description" : "Let's see where you're spending your money.",
"name" : "Managing money day to day",
"statuses" : [
{
"pid" : "54784388e4b0a44213e36d5d",
"status" : "selected",
"whenUpdated" : NumberLong(1425017616062)
},
{
"pid" : "54783da8e4b09cf5d82d4e11",
"status" : "selected",
"whenUpdated" : NumberLong(1425017616062)
}
],
"strategies" : [
{
"_id" : "FF_Money_CF",
"description" : "This option helps you get a picture of how much you're spending",
"name" : "Your spending and savings.",
"relatedGoals" : [
{
"_id" : ObjectId("54784581e4b0a44213e36e2f")
},
{
"_id" : ObjectId("5478458ee4b0a44213e36e33")
},
{
"_id" : ObjectId("547845a5e4b0a44213e36e37")
},
{
"_id" : ObjectId("54784577e4b0a44213e36e2b")
},
{
"_id" : ObjectId("5478456ee4b0a44213e36e27")
}
],
"soaTrashWarning" : "Understanding what you are spending and saving is crucial to helping you achieve your goals. Without this in place, you may be spending more than you can afford. ",
"statuses" : [
{
"personId" : "54784388e4b0a44213e36d5d",
"status" : "selected",
"whenUpdated" : NumberLong(1425017616062)
},
{
"personId" : "54783da8e4b09cf5d82d4e11",
"status" : "selected",
"whenUpdated" : NumberLong(1425017616062)
}
],
"trashWarning" : "This option helps you get a picture of how much you're spending and how much you could save.\nAre you sure you don't want to take up this option now?\n\n",
"weight" : NumberInt(1)
},
</code></pre>
<p><strong>Update</strong>
I've made a few changes to the generator function, although I'm not sure that they've really changed anything and I've been stepping through line by line in a debugger for both the print version and the yield version. The new code is below.</p>
<pre><code>def dic_recurse(data, fields, counter, source_field):
print 'Called'
if isinstance(data, dict):
for k, v in data.items():
if isinstance(v, list):
source_field += "_{0}".format(k)
[dic_recurse(l, fields, counter, source_field) for l in v]
elif isinstance(v, dict):
source_field += "_{0}".format(k)
dic_recurse(v, fields, counter, source_field)
elif k in fields and isinstance(v, list) is False and isinstance(v, dict) is False:
counter += 1
yield "L{0}_{1}_{2}".format(counter, source_field, k.replace('_', ''))[1:], v
elif isinstance(data, list):
for l in data:
dic_recurse(l, fields, counter, '')
</code></pre>
<p>The key difference between the two versions when debugging shows up when this section of code is hit:</p>
<pre><code>elif isinstance(data, list):
for l in data:
dic_recurse(l, fields, counter, '')
</code></pre>
<p>If I am testing the yield version, the <code>dic_recurse(l, fields, counter, '')</code> line gets hit, but it doesn't seem to call the function, because any print statements I set at the opening of the function aren't hit; yet with the print version, when the code hits the same section it happily calls the function and runs back through the whole function.</p>
<p>I'm sure I'm probably misunderstanding something fundamental about generators and the use of the yield statement.</p>
| 0 | 2016-10-07T10:18:49Z | 39,967,882 | <p>In lieu of any response on this I just wanted to post my updated solution in case it proves useful for anyone else.</p>
<p>I needed to add additional yield statements to the function so that the results of each recursive generator call are handed back up to the caller; at least that's how I've understood it. Happy to be corrected.</p>
<pre><code>def dic_recurse(data, fields, counter, source_field):
if isinstance(data, dict):
counter += 1
for k, v in data.items():
if isinstance(v, list):
for field_data in v:
for list_field in dic_recurse(field_data, fields, counter, source_field):
yield list_field
elif isinstance(v, dict):
for dic_field in dic_recurse(v, fields, counter, source_field):
yield dic_field
elif k in fields and isinstance(v, list) is False and isinstance(v, dict) is False:
yield counter, {"{0}_L{1}".format(k, counter): v}
elif isinstance(data, list):
counter += 1
for list_item in data:
for li2 in dic_recurse(list_item, fields, counter, ''):
yield li2
</code></pre>
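<p>As a side note, on Python 3.3+ each of those delegation loops can be collapsed with <code>yield from</code>. A tiny self-contained illustration of the pattern:</p>
<pre><code>def flatten(data):
    if isinstance(data, list):
        for item in data:
            yield from flatten(item)  # delegate to the sub-generator
    else:
        yield data

print(list(flatten([1, [2, [3, 4]], 5])))  # [1, 2, 3, 4, 5]
</code></pre>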
| 0 | 2016-10-10T22:46:25Z | [
"python",
"json",
"function",
"yield"
] |
PyQt - Load SQL in QAbstractTableModel (QTableView) using pandas DataFrame - editing data in a GUI | 39,914,926 | <p>I'm quite new to Python and using <code>WinPython-32bit-2.7.10.3</code> (including <code>QTDesigner 4.8.7</code>). I'm trying to program an interface for using a sqlite database in two separate projects, using QTableViews.</p>
<p>The algorithm is roughly as follows:<br>
- connect to the database and convert the data to a <code>pandas.DataFrame</code><br>
- convert the DataFrame to a QAbstractTableModel<br>
- apply the QAbstractTableModel to the <code>tableview.model</code><br>
- load the dialog</p>
<p>I don't get the same behaviour depending on the SQL used to create the dataframe:
given a SQL table "parametres" with 3 fields (LIBELLE as varchar, VALEUR as varchar, TEST as boolean), the SQL statements tried are:</p>
<ul>
<li>'SELECT LIBELLE AS "Paramètre", VALEUR AS "Valeur" FROM parametres'.encode("utf-8")</li>
<li>'SELECT * FROM parametres'.encode("utf-8")</li>
</ul>
<p>With the first request, I can edit the data inside the tableview. With the second request, I can select a "cell" and edit it, but when I commit the edit (pressing enter), the data is set back to its first value.</p>
<p>While searching, I saw that this line of the setData code wouldn't even work, whatever the "anything" value:</p>
<pre><code>self._data.values[index.row()][index.column()] = "anything"
</code></pre>
<p>You can test the effect of the SQL source by removing the # character at the beginning of line 27 in the main code.</p>
<p>I've truncated the code to the strict minimum (staying very close to the original working code of my first project) and I'm seriously confused. If anybody has an idea, that would be great!</p>
<p>Thanks</p>
<p>PS: I post the code afterward, but I haven't found a way to attach the <code>sqlite.db</code>... If anyone can guide me, I'll be glad to add it; in the meantime, I've put a whole zip of the lot on my <a href="https://drive.google.com/file/d/0ByjzDYUpdDswRjhoY1d6WEtmQ3M/view?usp=sharing" rel="nofollow">google.drive</a></p>
<hr>
<p><strong>EDIT #2 :</strong> </p>
<p>Still can't understand what is wrong there, but I've just found that I can't commit the data to the model once it has been loaded. I am pretty sure this is the core of my problem and have subsequently updated both the question and the title.</p>
<hr>
<p>Main code :</p>
<pre><code># -*- coding: utf-8 -*-
from PyQt4 import QtCore, QtGui
import os,sys
from parametrage import Ui_WinParam
from PANDAS_TO_PYQT import PandasModel
import pandas as pd
import sqlite3
class window_parametreur(QtGui.QDialog, Ui_WinParam):
def __init__(self, dataframemodel, parent=None):
QtGui.QDialog.__init__(self, parent)
# Set up the user interface from Designer.
self.ui = Ui_WinParam()
self.ui.setupUi(self)
self.setModal(True)
self.ui.tableView.setModel(dataframemodel)
self.ui.tableView.resizeColumnsToContents()
def OpenParametreur(self, db_path):
#connecting to database and getting datas as pandas.dataframe
con = sqlite3.connect(db_path)
strSQL = u'SELECT LIBELLE AS "Paramètre", VALEUR AS "Valeur" FROM parametres'.encode("utf-8")
#strSQL = u'SELECT * FROM parametres'.encode("utf-8")
data = pd.read_sql_query(strSQL, con)
con.close()
#converting to QtCore.QAbstractTableModel
model = PandasModel(data)
#loading dialog
self.f=window_parametreur(model)
self.f.exec_()
if __name__=="__main__":
a=QtGui.QApplication(sys.argv)
f=QtGui.QMainWindow()
print OpenParametreur(f, ".\SQLiteDataBase.db")
</code></pre>
<p>Code of "PANDAS_TO_PYQT.py", beeing called to transform pandas.dataframe to QtCore.QAbstractTableModel</p>
<pre><code># -*- coding: utf-8 -*-
from PyQt4 import QtCore, QtGui
class PandasModel(QtCore.QAbstractTableModel):
def __init__(self, data, parent=None):
QtCore.QAbstractTableModel.__init__(self, parent)
self._data = data
def rowCount(self, parent=None):
return len(self._data.values)
def columnCount(self, parent=None):
return self._data.columns.size
def data(self, index, role=QtCore.Qt.DisplayRole):
if index.isValid():
if role == QtCore.Qt.DisplayRole or role == QtCore.Qt.EditRole:
return QtCore.QVariant(unicode(
self._data.values[index.row()][index.column()]))
return QtCore.QVariant()
def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
if role != QtCore.Qt.DisplayRole:
return None
if orientation == QtCore.Qt.Horizontal:
try:
return '%s' % unicode(self._data.columns.tolist()[section], encoding="utf-8")
except (IndexError, ):
return QtCore.QVariant()
elif orientation == QtCore.Qt.Vertical:
try:
return '%s' % self._data.index.tolist()[section]
except (IndexError, ):
return QtCore.QVariant()
def flags(self, index):
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable | QtCore.Qt.ItemIsEditable
def setData(self, index, value, role=QtCore.Qt.EditRole):
if index.isValid():
print "data set with keyboard : " + value.toByteArray().data().decode("latin1")
self._data.values[index.row()][index.column()] = "anything"
print "data committed : " +self._data.values[index.row()][index.column()]
self.dataChanged.emit(index, index)
return True
return QtCore.QVariant()
</code></pre>
<p>Code of parametrage.py, beeing created by QtDesigner, and containing the dialog source :</p>
<pre><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'parametrage.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_WinParam(object):
def setupUi(self, WinParam):
WinParam.setObjectName(_fromUtf8("WinParam"))
WinParam.resize(608, 279)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(_fromUtf8("../../pictures/EAUX.png")), QtGui.QIcon.Normal, QtGui.QIcon.Off)
WinParam.setWindowIcon(icon)
self.gridLayout = QtGui.QGridLayout(WinParam)
self.gridLayout.setObjectName(_fromUtf8("gridLayout"))
self.ButtonBox = QtGui.QDialogButtonBox(WinParam)
self.ButtonBox.setOrientation(QtCore.Qt.Horizontal)
self.ButtonBox.setStandardButtons(QtGui.QDialogButtonBox.Cancel|QtGui.QDialogButtonBox.Ok)
self.ButtonBox.setCenterButtons(True)
self.ButtonBox.setObjectName(_fromUtf8("ButtonBox"))
self.gridLayout.addWidget(self.ButtonBox, 1, 0, 1, 1)
self.tableView = QtGui.QTableView(WinParam)
self.tableView.setEditTriggers(QtGui.QAbstractItemView.DoubleClicked)
self.tableView.setSortingEnabled(False)
self.tableView.setObjectName(_fromUtf8("tableView"))
self.gridLayout.addWidget(self.tableView, 0, 0, 1, 1)
self.retranslateUi(WinParam)
QtCore.QObject.connect(self.ButtonBox, QtCore.SIGNAL(_fromUtf8("accepted()")), WinParam.accept)
QtCore.QObject.connect(self.ButtonBox, QtCore.SIGNAL(_fromUtf8("rejected()")), WinParam.reject)
QtCore.QMetaObject.connectSlotsByName(WinParam)
def retranslateUi(self, WinParam):
WinParam.setWindowTitle(_translate("WinParam", "Paramétrage", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
WinParam = QtGui.QDialog()
ui = Ui_WinParam()
ui.setupUi(WinParam)
WinParam.show()
sys.exit(app.exec_())
</code></pre>
| 0 | 2016-10-07T10:19:43Z | 39,971,773 | <p>I finally figured it out... But I still don't know why pandas worked differently just by changing the SQL request (it must be something inside the read_sql_query process...)</p>
<p>For the class to work, I had to change the code of "PANDAS_TO_PYQT.py", replacing the</p>
<pre><code>self._data.values[index.row()][index.column()]
</code></pre>
<p>by </p>
<pre><code>self._data.iloc[index.row(),index.column()]
</code></pre>
<p>in the setData and data functions.</p>
<p>Somehow, pandas seems to have produced a copy of the dataframe during the process (for those looking for explanations, go to <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">the doc</a>).</p>
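<p>A tiny sketch of why the <code>.values</code> writes can get lost on a mixed-dtype frame (the assignment lands in a freshly built temporary array, not in the frame):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})  # mixed dtypes
df.values[0][0] = 99     # .values builds a new object array here
print(df.iloc[0, 0])     # still 1 - the frame never saw the write
df.iloc[0, 0] = 99       # .iloc writes into the frame itself
print(df.iloc[0, 0])     # 99
</code></pre>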
<p>So the correct class code for transforming the dataframe to QAbstractTableModel would be :</p>
<pre><code># -*- coding: utf-8 -*-
from PyQt4 import QtCore, QtGui
class PandasModel(QtCore.QAbstractTableModel):
def __init__(self, data, parent=None):
QtCore.QAbstractTableModel.__init__(self, parent)
self._data = data
def rowCount(self, parent=None):
return len(self._data.values)
def columnCount(self, parent=None):
return self._data.columns.size
def data(self, index, role=QtCore.Qt.DisplayRole):
if index.isValid():
if role == QtCore.Qt.DisplayRole or role == QtCore.Qt.EditRole:
return QtCore.QVariant(unicode(
self._data.iloc[index.row(),index.column()]))
return QtCore.QVariant()
def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
if role != QtCore.Qt.DisplayRole:
return None
if orientation == QtCore.Qt.Horizontal:
try:
return '%s' % unicode(self._data.columns.tolist()[section], encoding="utf-8")
except (IndexError, ):
return QtCore.QVariant()
elif orientation == QtCore.Qt.Vertical:
try:
return '%s' % self._data.index.tolist()[section]
except (IndexError, ):
return QtCore.QVariant()
def flags(self, index):
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable | QtCore.Qt.ItemIsEditable
def setData(self, index, value, role=QtCore.Qt.EditRole):
if index.isValid():
self._data.iloc[index.row(),index.column()] = value.toByteArray().data().decode("latin1")
if self.data(index,QtCore.Qt.DisplayRole) == value.toByteArray().data().decode("latin1"):
self.dataChanged.emit(index, index)
return True
return QtCore.QVariant()
</code></pre>
| 0 | 2016-10-11T06:48:59Z | [
"python",
"python-2.7",
"pandas",
"pyqt",
"qtableview"
] |
If number comparison too slow | 39,914,950 | <p>I have a buffer and I need to make sure I don't exceed a certain size. If I do, I want to append the buffer to a file and empty it.</p>
<p>My code:</p>
<pre><code>import sys
MAX_BUFFER_SIZE = 4 * (1024 ** 3)
class MyBuffer(object):
b = ""
def append(self, s):
if sys.getsizeof(self.b) > MAX_BUFFER_SIZE:
#...print to file... empty buffer
self.b = ""
else:
self.b += s
buffer = MyBuffer()
for s in some_text:
buffer.append(s)
</code></pre>
<p>However, this comparison (<code>sys.getsizeof(self.b) > MAX_BUFFER_SIZE</code>) is way too slow (i.e. without the comparison, the whole execution takes less than 1 second; with the comparison it takes about 5 minutes).</p>
<p>At the moment I can fit the whole of <code>some_text</code> in memory, so the buffer actually never gets bigger than <code>MAX_BUFFER_SIZE</code>, but I must make sure my code works for huge files (several TBs in size) too.</p>
<p>Edit:</p>
<p>This code runs in under 1 second:</p>
<pre><code>import sys
buffer = ""
for s in some_text:
buffer += s
#print out to file
</code></pre>
<p>The problem is that the buffer might become too big.</p>
<p>Similarly, this code also runs under 1 second:</p>
<pre><code>import sys
MAX_BUFFER_SIZE = 4 * (1024 ** 3)
class MyBuffer(object):
b = ""
def append(self, s):
print sys.getsizeof(self.b)
buffer = MyBuffer()
for s in some_text:
buffer.append(s)
</code></pre>
<p><strong>EDIT 2:</strong></p>
<p>Sorry, the slow part is actually appending to the buffer, not the comparison itself as I thought... When I was testing the code, I commented out the whole <code>if / else</code> statement instead of just the first part.</p>
<p>Hence, is there an efficient way to keep a buffer?</p>
| 0 | 2016-10-07T10:20:47Z | 39,915,039 | <p>Undeleting and editing my answer based on edits to the question.</p>
<p>It's incorrect to assume that the comparison is slow. In fact the comparison is fast. Really, really fast.</p>
<p>Why don't you avoid reinventing the wheel by using buffered IO?</p>
<blockquote>
<p>The optional buffering argument specifies the file's desired buffer size: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size (in bytes). A negative buffering means to use the system default, which is usually line buffered for tty devices and fully buffered for other files. If omitted, the system default is used. [2]</p>
</blockquote>
<p><a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">https://docs.python.org/2/library/functions.html#open</a></p>
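<p>A minimal sketch of leaning on that built-in buffering instead of a hand-rolled class (<code>some_text</code> is the iterable from the question; the 4 MB buffer size here is an arbitrary choice):</p>
<pre><code>out = open('out.txt', 'w', 4 * 1024 * 1024)  # third argument = buffer size in bytes
for s in some_text:
    out.write(s)   # cheap: real disk writes only happen when the buffer fills
out.close()        # flushes whatever is left in the buffer
</code></pre>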
| 1 | 2016-10-07T10:24:21Z | [
"python"
] |
Getting slightly different input in each data fetch operation | 39,915,120 | <p>I am following the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/image/cifar10" rel="nofollow">cifar10 tutorial</a> and fetch my data using the <a href="https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/models/image/cifar10/cifar10_input.py#L197" rel="nofollow">inputs()</a> function called by <a href="https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/models/image/cifar10/cifar10.py#L163" rel="nofollow">cifar10.inputs</a>. My code is as follows:</p>
<pre><code>import tensorflow as tf
import cifar10
# Get images and labels
images, labels = cifar10.inputs(eval_data='test')
# Start running operations on the Graph.
sess = tf.Session()
sess.run(tf.initialize_all_variables())
# Start the queue runners.
tf.train.start_queue_runners(sess=sess)
img, lab = sess.run([images, labels])
print(lab)
</code></pre>
<p>To test this, I run the code with batch size 10 and print the <strong>labels</strong> of the first batch. I expect the same output each time I execute this code, because <strong>shuffle=False</strong> in the <a href="https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/models/image/cifar10/cifar10_input.py#L197" rel="nofollow">inputs()</a> function. However, I observe slightly different labels in each execution, for example outputs from three consecutive executions are:</p>
<pre><code>run-1: [0 1 2 2 1 1 1 9 8 8]
run-2: [1 2 2 0 1 1 1 9 8 8]
run-3: [0 2 2 1 1 1 1 9 8 8]
</code></pre>
<p>I would like to know: </p>
<ol>
<li>Why am I getting different labels although shuffle is off? and</li>
<li>What should I do to obtain the same labels at each execution?</li>
</ol>
| 0 | 2016-10-07T10:28:47Z | 39,923,120 | <p>The queue runners you are starting there are multiple threads loading data from disk and adding it to the queue. <a href="https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/models/image/cifar10/cifar10_input.py#L117" rel="nofollow">There are 16 of them by default</a>. </p>
<p>The queue itself is thread-safe, but there is nothing enforcing order.</p>
<p>So even though you're not using the shuffling queue, the race-condition between the threads to fetch and insert the items randomizes the order slightly.</p>
<p>The easiest way to make it completely repeatable is to only use 1 thread to fetch items. This will probably be slow.</p>
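<p>For illustration, a rough sketch of what that change could look like inside the tutorial's <code>_generate_image_and_label_batch()</code> (the surrounding names are the tutorial's; treat the exact call as an assumption and check your copy of <code>cifar10_input.py</code>):</p>
<pre><code># use a single thread so items keep the order they were enqueued in
num_preprocess_threads = 1
images, label_batch = tf.train.batch(
    [image, label],
    batch_size=batch_size,
    num_threads=num_preprocess_threads,
    capacity=min_queue_examples + 3 * batch_size)
</code></pre>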
| 2 | 2016-10-07T17:45:39Z | [
"python",
"io",
"tensorflow"
] |
Print unicode character in Python 3 | 39,915,296 | <p>How can I print unicode strings within Python 3?</p>
<pre><code>myString = 'My unicode character: \ufffd'
print(myString)
</code></pre>
<p>The Output should be: "My unicode character: ü"</p>
<blockquote>
<p>File "C:\Program Files (x86)\Python35-32\lib\encodings\cp850.py", line
19, in encode return
codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
position 22: character maps to </p>
</blockquote>
<p>I have read lots of articles about this on Stack Exchange, but none of the answers worked.</p>
<p>Please help. I am really curious how to solve this really simple-looking example! Thanks very much in advance!</p>
| 0 | 2016-10-07T10:39:34Z | 39,915,371 | <p>It seems that you are doing this using Windows command line.</p>
<pre><code>chcp 65001
set PYTHONIOENCODING=utf-8
</code></pre>
<p>You can try to run the above commands first before running <code>python3</code>. They will set the console encoding to UTF-8, which can represent your data.</p>
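<p>Alternatively, a small sketch that side-steps the console codec from inside Python 3 by writing UTF-8 bytes straight to the underlying stream (what the terminal renders still depends on its code page, but the UnicodeEncodeError goes away):</p>
<pre><code>import sys

myString = 'My unicode character: \u00fc'  # \u00fc is ü
sys.stdout.buffer.write(myString.encode('utf-8') + b'\n')
</code></pre>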
| 0 | 2016-10-07T10:43:48Z | [
"python",
"unicode"
] |
Python function returning string which is function name, need to execute the returned statement | 39,915,320 | <p>1) Function 1 encodes the string</p>
<p>def Encode(String):<br>
..<br>
..code block<br>
..<br>
return String</p>
<p>2) Function 2 return the string, which actually forms function call of Function 1</p>
<p>def FunctionReturningEncodeFuntionCall(String):<br>
..<br>
..code block<br>
..<br>
return EncodeFunctionString<br></p>
<p>3) In Function 3 parse the string and pass to Function 2 to form Function 1 call and execute the Function 1 and store its returned value</p>
<p>def LastFuntionToAssignValue(String):<br>
..<br>
..code block<br>
..<br>
a = exec FunctionReturningMyFuntionCall("abcd") <br>
print a</p>
<p>Thanks in advance</p>
| 0 | 2016-10-07T10:40:34Z | 39,915,440 | <p>Consider using the <a href="https://docs.python.org/2.0/ref/exec.html" rel="nofollow">exec statement</a></p>
| 0 | 2016-10-07T10:48:40Z | [
"python",
"string"
] |
Python function returning string which is function name, need to execute the returned statement | 39,915,320 | <p>1) Function 1 encodes the string</p>
<p>def Encode(String):<br>
..<br>
..code block<br>
..<br>
return String</p>
<p>2) Function 2 return the string, which actually forms function call of Function 1</p>
<p>def FunctionReturningEncodeFuntionCall(String):<br>
..<br>
..code block<br>
..<br>
return EncodeFunctionString<br></p>
<p>3) In Function 3 parse the string and pass to Function 2 to form Function 1 call and execute the Function 1 and store its returned value</p>
<p>def LastFuntionToAssignValue(String):<br>
..<br>
..code block<br>
..<br>
a = exec FunctionReturningMyFuntionCall("abcd") <br>
print a</p>
<p>Thanks in advance</p>
| 0 | 2016-10-07T10:40:34Z | 39,918,138 | <p>I think the safest way is to use a dictionary where the key is the function's name and the value is the function itself.</p>
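<p>For example, a minimal sketch of that dispatch (the function body is a made-up stand-in for the real <code>Encode</code>):</p>
<pre><code>def encode(s):
    return s[::-1]  # placeholder logic

dispatch = {'encode': encode}  # name -> function

name = 'encode'             # the string your second function would return
a = dispatch[name]('abcd')  # call it without exec
print a                     # dcba
</code></pre>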
| 0 | 2016-10-07T13:11:16Z | [
"python",
"string"
] |
Combine a list of pairs (tuples)? | 39,915,402 | <p>From a list of linked pairs, I want to combine the pairs into groups of common IDs, so that I can then write group_ids back to the database, eg:</p>
<pre><code>UPDATE table SET group = n WHERE id IN (...........);
</code></pre>
<p>Example:</p>
<pre><code>[(1,2), (3, 4), (1, 5), (6, 3), (7, 8)]
</code></pre>
<p>becomes</p>
<pre><code>[[1, 2, 5], [3, 4, 6], [7, 8]]
</code></pre>
<p>Which allows:</p>
<pre><code>UPDATE table SET group = 1 WHERE id IN (1, 2, 5);
UPDATE table SET group = 2 WHERE id IN (3, 4, 6);
UPDATE table SET group = 3 WHERE id IN (7, 8);
</code></pre>
<p>and</p>
<pre><code>[(1,2), (3, 4), (1, 5), (6, 3), (7, 8), (5, 3)]
</code></pre>
<p>becomes</p>
<pre><code>[[1, 2, 5, 3, 4, 6], [7, 8]]
</code></pre>
<p>Which allows:</p>
<pre><code>UPDATE table SET group = 1 WHERE id IN (1, 2, 5, 3, 4, 6);
UPDATE table SET group = 2 WHERE id IN (7, 8);
</code></pre>
<p>I have written some code which works. I pass in a list of tuples, where each tuple is a pair of linked ids. I return a list of lists, where each internal list is a list of common id's.</p>
<p>I iterate over the list of tuples and assign each tuple element to groups as follows:</p>
<ul>
<li>if neither a nor b is in a list, create a new list, appened a and b and append the new list to the list of lists</li>
<li>if a is in a group but b isn't, add b to the a group</li>
<li>if b is in a group but a isn't, add a to the b group</li>
<li>if a and b are already in separate groups merge the a and b groups</li>
<li>if a and b are already in the same group, do nothing</li>
</ul>
<p>I am expecting millions of linked pairs, and I am expecting hundreds of thousands of groups and hundreds of thousands of group members. So I need the algorithm to be fast, and I am looking for suggestions for some really efficient code. Although I have implemented this to build a list of lists, I am open to anything; the key thing is being able to build the above SQL to get the group ids back to the database.</p>
<pre><code>def group_pairs(list_of_pairs):
"""
:param list_of_pairs:
:return:
"""
groups = list()
for pair in list_of_pairs:
a_group = None
b_group = None
for group in groups:
# find what group if any a and b belong to
# don't bother checking if a group already found
if a_group is None and pair[0] in group:
a_group = group
# don't bother checking if b group already found
if b_group is None and pair[1] in group:
b_group = group
# if a and b found, stop looking
if a_group is not None and b_group is not None:
break
if a_group is None:
if b_group is None:
# neither a nor b are in a group; create a new group and
# add a and b
groups.append([pair[0], pair[1]])
else:
# b is in a group but a isn't, so add a to the b group
b_group.append(pair[0])
elif a_group != b_group:
if b_group is None:
# a is in a group but b isn't, so add b to the a group
a_group.append(pair[1])
else:
# a and b are in different groups, add join b to a and get
# rid of b
a_group.extend(b_group)
groups.remove(b_group)
elif a_group == b_group:
# a and b already in same group, so nothing to do
pass
return groups
</code></pre>
| 1 | 2016-10-07T10:46:18Z | 39,915,554 | <p>With:</p>
<pre><code>def make_equiv_classes(pairs):
groups = {}
for (x, y) in pairs:
xset = groups.get(x, set([x]))
yset = groups.get(y, set([y]))
jset = xset | yset
for z in jset:
groups[z] = jset
return set(map(tuple, groups.values()))
</code></pre>
<p>you can get:</p>
<pre><code>>>> make_equiv_classes([(1,2), (3, 4), (1, 5), (6, 3), (7, 8)])
{(1, 2, 5), (3, 4, 6), (8, 7)}
>>> make_equiv_classes([(1,2), (3, 4), (1, 5), (6, 3), (7, 8), (5, 3)])
{(1, 2, 3, 4, 5, 6), (8, 7)}
</code></pre>
<p>The complexity should be <em>O(np)</em>, where <em>n</em> is the number of distinct integer values, and <em>p</em> is the number of pairs.</p>
<p>I think <code>set</code> is the proper type for a single group, because it makes the union operation fast and easy to express, and <code>dict</code> is the right way to store <code>groups</code> because you get constant-time lookup for asking the question of what group a particular integer value belongs to.</p>
<p>We can set up a test harness to time this code, if we want to. To start, we can build a random graph over something moderately large, like 10K nodes (i.e., distinct integer values). I'll put in 5K random links (pairs), since that tends to give me thousands of groups, which together account for about two thirds of the nodes (that is, about 3K nodes will be in a singleton groups, not linked to anything else).</p>
<pre><code>import random
pairs = []
while len(pairs) < 5000:
a = random.randint(1,10000)
b = random.randint(1,10000)
if a != b:
pairs.append((a,b))
</code></pre>
<p>Then, we can just time this (I'm using IPython magic here):</p>
<pre><code>In [48]: %timeit c = make_equiv_classes(pairs)
10 loops, best of 3: 63 ms per loop
</code></pre>
<p>which is faster than your initial solution:</p>
<pre><code>In [49]: %timeit c = group_pairs(pairs)
1 loop, best of 3: 2.08 s per loop
</code></pre>
<p>We can also use this random graph to check that the output of the two functions is identical for some large random input:</p>
<pre><code>>>> c = make_equiv_classes(pairs)
>>> c2 = group_pairs(pairs)
>>> set(tuple(sorted(x)) for x in c) == set(tuple(sorted(x)) for x in c2)
True
</code></pre>
| 1 | 2016-10-07T10:55:50Z | [
"python",
"list"
] |
Speed up code that doesn't use groupby()? | 39,915,454 | <p>I have two pieces of code (doing the same job) which take in an array of <code>datetime</code> and produce clusters of <code>datetime</code> values that differ by 1 hour.</p>
<p>First piece is:</p>
<pre><code>import itertools
from itertools import groupby
from operator import itemgetter

def findClustersOfRuns(data):
    runClusters = []
    for k, g in groupby(itertools.izip(data[0:-1], data[1:]),
                        lambda (i, x): (i - x).total_seconds() / 3600):
        runClusters.append(map(itemgetter(1), g))
    return runClusters
</code></pre>
<p>Second piece is:</p>
<pre><code>def findClustersOfRuns(data):
if len(data) <= 1:
return []
current_group = [data[0]]
delta = 3600
results = []
for current, next in itertools.izip(data, data[1:]):
if abs((next - current).total_seconds()) > delta:
# Here, `current` is the last item of the previous subsequence
# and `next` is the first item of the next subsequence.
if len(current_group) >= 2:
results.append(current_group)
current_group = [next]
continue
current_group.append(next)
return results
</code></pre>
<p>The first piece of code takes 5 minutes to execute while the second piece takes a few seconds. I am trying to understand why.</p>
<p>The data over which I am running the code has size:</p>
<pre><code>data.shape
(13989L,)
</code></pre>
<p>The data contents is as:</p>
<pre><code>data
array([datetime.datetime(2016, 10, 1, 8, 0),
datetime.datetime(2016, 10, 1, 9, 0),
datetime.datetime(2016, 10, 1, 10, 0), ...,
datetime.datetime(2019, 1, 3, 9, 0),
datetime.datetime(2019, 1, 3, 10, 0),
datetime.datetime(2019, 1, 3, 11, 0)], dtype=object)
</code></pre>
<p>How do I improve the first piece of code to make it run as fast as the second?</p>
| -1 | 2016-10-07T10:49:47Z | 39,915,992 | <p>Based on the size, it looks like you have a huge <code>list</code> of elements, i.e. a huge <code>len</code>. Your second version has just one <code>for</code> loop, whereas your first approach has many hidden ones: you see just one, right? The others are in the form of <code>map()</code> and <code>groupby()</code>. Multiple iterations over the huge list add a huge cost to the running time. <em>These are not just additional iterations; they can also be slower than a normal <code>for</code> loop.</em></p>
<p>I made a comparison for another post which you might find useful <a href="http://stackoverflow.com/a/39518977/2063361">Comparing list comprehensions and explicit loops</a>.</p>
<p>Also, the <code>lambda</code> function call on every pair adds extra time.</p>
<p>However, you may further improve the execution time of the code by storing <code>results.append</code> in a separate variable, say <code>my_func</code>, and calling it as <code>my_func(current_group)</code>.</p>
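<p>For concreteness, that micro-optimisation is a sketch of just the changed lines:</p>
<pre><code>results = []
my_func = results.append       # look up the bound method once
# ... then inside the loop, instead of results.append(current_group):
my_func(current_group)
</code></pre>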
<p>Few more comparisons are:</p>
<ul>
<li><a href="http://stackoverflow.com/q/36261621/2063361">Python 3: Loops, list comprehension and map slower compared to Python 2</a></li>
<li><a href="http://stackoverflow.com/a/28584157/2063361">Speed/efficiency comparison for loop vs list comprehension vs other methods</a></li>
</ul>
| 1 | 2016-10-07T11:21:25Z | [
"python",
"lambda"
] |
Add a value to a column and print results | 39,915,904 | <p>I have a file like this</p>
<pre><code>Daniel 400 411 f
Mark 976 315 g
</code></pre>
<p>I would like to add 20 to line[2] and subtract 20 from line[1], then print the new results, either overwriting these lines or writing to a new file. This is my try.</p>
<pre><code>f=open('w', 'r')
r = open('w2','a')
lines=f.readlines()
for line in lines:
new_list = line.rstrip('\r\n').split('\t')
q_start=int(new_list[1]) - 20
q_end=int(new_list[2]) + 20
# I think something is missing here, but I don't know what
r.writelines(lines)
f.close()
r.close()
</code></pre>
<p>Expected outcome</p>
<pre><code>Daniel 380 431 f
Mark 956 335 g
</code></pre>
| 0 | 2016-10-07T11:16:28Z | 39,916,027 | <pre><code>f=open('w', 'r')
r=open('w2','a')
lines=f.readlines()
for line in lines:
new_list = line.rstrip('\r\n').split('\t')
new_list[1] = str(int(new_list[1]) - 20)
new_list[2] = str(int(new_list[2]) + 20)
r.write("\t".join(new_list)+"\n")
f.close()
r.close()
</code></pre>
<p>This should work. Basically, you need to split each line into segments first, modify the parts you want, and then stitch them back together.</p>
| 4 | 2016-10-07T11:23:40Z | [
"python"
] |
Add a value to a column and print results | 39,915,904 | <p>I have a file like this</p>
<pre><code>Daniel 400 411 f
Mark 976 315 g
</code></pre>
<p>I would like to add 20 to line[2] and subtract 20 from line[1] and print new results overwriting this lines or to a new file.This is my try.</p>
<pre><code>f=open('w', 'r')
r = open('w2','a')
lines=f.readlines()
for line in lines:
new_list = line.rstrip('\r\n').split('\t')
q_start=int(new_list[1]) - 20
q_end=int(new_list[2]) + 20
# I think something is missing here, but I don't know what
r.writelines(lines)
f.close()
r.close()
</code></pre>
<p>Expected outcome</p>
<pre><code>Daniel 380 431 f
Mark 956 335 g
</code></pre>
| 0 | 2016-10-07T11:16:28Z | 39,916,090 | <p>There's really no need to read in all the lines into memory, then loop over them. It just lengthens the code and is less efficient. Instead, try</p>
<pre><code>with open('w2','w') as out:
    for line in open('w', 'r'):
        new_list = line.rstrip('\r\n').split('\t')
        q_start = int(new_list[1]) - 20
        q_end = int(new_list[2]) + 20
        out.write('\t'.join([new_list[0], str(q_start), str(q_end), new_list[3]]) + '\n')
</code></pre>
<p>Note the use of <a href="http://stackoverflow.com/questions/9282967/how-to-open-a-file-using-the-open-with-statement"><code>with</code> to open <code>'w2'</code></a>, and the <a href="http://stackoverflow.com/questions/5733419/how-to-iterate-over-the-file-in-python">more efficient loop over the <code>'w'</code>'s lines</a>.</p>
<hr>
<p><strong>Edit Addressing Comment</strong></p>
<p>Note that if you have multiple columns (but still need to update only those at positions 1 and 2), you can change the last line to </p>
<pre><code> out.write('\t'.join([new_list[0], str(q_start), str(q_end)] + new_list[3:]) + '\n')
</code></pre>
</code></pre>
| 3 | 2016-10-07T11:26:08Z | [
"python"
] |
Add a value to a column and print results | 39,915,904 | <p>I have a file like this</p>
<pre><code>Daniel 400 411 f
Mark 976 315 g
</code></pre>
<p>I would like to add 20 to line[2] and subtract 20 from line[1] and print new results overwriting this lines or to a new file.This is my try.</p>
<pre><code>f=open('w', 'r')
r = open('w2','a')
lines=f.readlines()
for line in lines:
new_list = line.rstrip('\r\n').split('\t')
q_start=int(new_list[1]) - 20
q_end=int(new_list[2]) + 20
# I think something is missing here, but I don't know what
r.writelines(lines)
f.close()
r.close()
</code></pre>
<p>Expected outcome</p>
<pre><code>Daniel 380 431 f
Mark 956 335 g
</code></pre>
 | 0 | 2016-10-07T11:16:28Z | 39,916,343 | <p>I see that you have already accepted an answer, but still, here is my comment. There are a couple of mistakes in your code. The first is that even though you calculate the values correctly, you store them in two variables (<code>q_start</code> and <code>q_end</code>) that are never used afterwards.</p>
<p>The second mistake is that you write the unaltered original data (the <code>lines</code> list) to the output file, and do so inside the loop (check your indentation). Hence, I believe that your output was N repetitions of the original file, where N is the number of lines in it.</p>
<p>The possible third mistake is using <code>'a'</code> to open the output file. If there was any content in it, it would not be deleted. Since I am not sure of your intentions, this may not be a mistake, but pay attention to whether you wanted a clean slate or not.</p>
<p>Finally, here is my version of your code, minimally altered to work. I left the <code>'a'</code> in the output file open process because I did not know if you did this on purpose.</p>
<pre><code>f = open('w.txt', 'r')
r = open('w2.txt', 'a')
lines = f.readlines()
for line in lines:
    print line
    new_list = line.rstrip('\r\n').split('\t')
    new_list[1] = str(int(new_list[1]) - 20)
    new_list[2] = str(int(new_list[2]) + 20)
    r.write('\t'.join(new_list) + "\n")
f.close()
r.close()
</code></pre>
<p>I hope this helps. </p>
| 1 | 2016-10-07T11:38:56Z | [
"python"
] |
Make navbar item active in Django Template | 39,916,348 | <p>I'm creating a menu with a for loop and I need to add an active class after a click.</p>
<pre><code>{% for menu in TopMenu %}
<li><a href="/content/{{menu.slug_link}}">{{menu.title}}</a></li>
{% endfor %}
</code></pre>
<p>I tried to use Django template inheritance but it didn't work. Any solutions?</p>
<pre><code>{% for menu in TopMenu %}
<li {%if activeflag == '{{menu.slug_link}}' %} class="active" {%endif%} ><a href="/content/{{menu.slug_link}}">{{menu.title}}</a></li>
{% endfor %}
</code></pre>
| 1 | 2016-10-07T11:39:23Z | 39,916,526 | <p>You don't need <code>{{ }}</code> when using the <code>if</code> tag.</p>
<p>Try:</p>
<pre><code>{% if activeflag == menu.slug_link %} class="active" {% endif %}
</code></pre>
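<p>So the loop from the question becomes (a sketch; it assumes your view puts <code>activeflag</code> into the template context):</p>
<pre><code>{% for menu in TopMenu %}
<li {% if activeflag == menu.slug_link %}class="active"{% endif %}><a href="/content/{{menu.slug_link}}">{{menu.title}}</a></li>
{% endfor %}
</code></pre>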
| 1 | 2016-10-07T11:49:04Z | [
"python",
"django"
] |
Django Rest: Correct data isn't sent to serializer with M2M model | 39,916,480 | <p>I have a simple relational structure with projects containing several sequences with an intermediate meta model.</p>
<p>I can perform a GET request easily enough and it formats the data correctly. However, when I POST, the validated_data variable does not contain correctly formatted data, so I can't write a create/update method.</p>
<p>The data should look like:</p>
<pre><code>{'name': 'something',
'seqrecords': [{
'id': 5,
'sequence': 'ACGG...AAAA',
'name': 'Z78529',
'order': 1
},
{
'id': 6,
'sequence': 'CGTA...ACCC',
'name': 'Z78527',
'order': 2
    }]
}
</code></pre>
<p>But instead it looks like this:</p>
<pre><code>{'name': 'something',
'projectmeta_set': [
OrderedDict([('order', 1)]),
OrderedDict([('order', 2)]),
OrderedDict([('order', 3)])
]
}
</code></pre>
<p><strong>Serializers:</strong></p>
<pre><code>class ProjectMetaSerializer(serializers.HyperlinkedModelSerializer):
id = serializers.ReadOnlyField(source='sequence.id')
name = serializers.ReadOnlyField(source='sequence.name')
class Meta:
model = ProjectMeta
fields = ['id', 'name', 'order']
class ProjectSerializer(serializers.HyperlinkedModelSerializer):
seqrecords = ProjectMetaSerializer(source='projectmeta_set', many=True)
class Meta:
model = Project
fields = ['id', 'name', 'seqrecords']
ReadOnlyField = ['id']
def create(self, validated_data):
project = Project(name=validated_data['name'])
project.save()
# This is where it all fails
for seq in validated_data['seqrecords']:
sequence = SeqRecord.objects.filter(id=seq['id'])
meta = ProjectMeta(project=project,
sequence=sequence,
order=seq['order'])
meta.save()
return project
class SeqRecordSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = SeqRecord
fields = ['id', 'name', 'sequence']
read_only_fields = ['id']
</code></pre>
<p><strong>Models:</strong></p>
<pre><code>class SeqRecord(models.Model):
name = models.CharField(max_length=50)
sequence = models.TextField()
class Project(models.Model):
name = models.CharField(max_length=50)
sequences = models.ManyToManyField(SeqRecord,
through='primer_suggestion.ProjectMeta')
class ProjectMeta(models.Model):
project = models.ForeignKey(Project)
sequence = models.ForeignKey(SeqRecord)
order = models.PositiveIntegerField()
</code></pre>
<p><strong>View</strong></p>
<pre><code>class ProjectApiList(viewsets.ViewSetMixin, generics.ListCreateAPIView):
queryset = Project.objects.all()
serializer_class = ProjectSerializer
</code></pre>
<p>Is there some way to change the validation of the data so it returns the data I need, or can I write the create and update functions some other way?</p>
 | 0 | 2016-10-07T11:46:43Z | 39,919,250 | <p>As you can see in the <code>ProjectMetaSerializer</code>, the fields <code>id</code> and <code>name</code> are ReadOnlyFields, so you can't use them in a <code>POST</code> request.</p>
<pre><code>class SeqRecordSerializer(serializers.ModelSerializer):
    class Meta:
        model = SeqRecord
        fields = ['id', 'name', 'sequence']

class ProjectMetaSerializer(serializers.ModelSerializer):
    seqrecords = SeqRecordSerializer(many=True)  # defined above, so the name resolves

    class Meta:
        model = ProjectMeta
        fields = ['seqrecords',]

class ProjectSerializer(serializers.ModelSerializer):
    project_meta = ProjectMetaSerializer(many=True)

    class Meta:
        model = Project
        fields = ['id', 'name', 'project_meta']
        ReadOnlyField = ['id']

    def create(self, validated_data):
        project = Project(name=validated_data['name'])
        project.save()
        for seq in validated_data['seqrecords']:
            sequence = SeqRecord.objects.filter(id=seq['id'])
            meta = ProjectMeta(project=project,
                               sequence=sequence,
                               order=seq['order'])
            meta.save()
        return project
</code></pre>
| 1 | 2016-10-07T14:07:01Z | [
"python",
"django",
"django-rest-framework"
] |
Django Rest: Correct data isn't sent to serializer with M2M model | 39,916,480 | <p>I have a simple relational structure with projects containing several sequences with an intermediate meta model.</p>
<p>I can perform a GET request easily enough and it formats the data correctly. However, when I POST, the validated_data variable does not contain correctly formatted data, so I can't write a create/update method.</p>
<p>The data should look like:</p>
<pre><code>{'name': 'something',
'seqrecords': [{
'id': 5,
'sequence': 'ACGG...AAAA',
'name': 'Z78529',
'order': 1
},
{
'id': 6,
'sequence': 'CGTA...ACCC',
'name': 'Z78527',
'order': 2
    }]
}
</code></pre>
<p>But instead it looks like this:</p>
<pre><code>{'name': 'something',
'projectmeta_set': [
OrderedDict([('order', 1)]),
OrderedDict([('order', 2)]),
OrderedDict([('order', 3)])
]
}
</code></pre>
<p><strong>Serializers:</strong></p>
<pre><code>class ProjectMetaSerializer(serializers.HyperlinkedModelSerializer):
id = serializers.ReadOnlyField(source='sequence.id')
name = serializers.ReadOnlyField(source='sequence.name')
class Meta:
model = ProjectMeta
fields = ['id', 'name', 'order']
class ProjectSerializer(serializers.HyperlinkedModelSerializer):
seqrecords = ProjectMetaSerializer(source='projectmeta_set', many=True)
class Meta:
model = Project
fields = ['id', 'name', 'seqrecords']
ReadOnlyField = ['id']
def create(self, validated_data):
project = Project(name=validated_data['name'])
project.save()
# This is where it all fails
for seq in validated_data['seqrecords']:
sequence = SeqRecord.objects.filter(id=seq['id'])
meta = ProjectMeta(project=project,
sequence=sequence,
order=seq['order'])
meta.save()
return project
class SeqRecordSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = SeqRecord
fields = ['id', 'name', 'sequence']
read_only_fields = ['id']
</code></pre>
<p><strong>Models:</strong></p>
<pre><code>class SeqRecord(models.Model):
name = models.CharField(max_length=50)
sequence = models.TextField()
class Project(models.Model):
name = models.CharField(max_length=50)
sequences = models.ManyToManyField(SeqRecord,
through='primer_suggestion.ProjectMeta')
class ProjectMeta(models.Model):
project = models.ForeignKey(Project)
sequence = models.ForeignKey(SeqRecord)
order = models.PositiveIntegerField()
</code></pre>
<p><strong>View</strong></p>
<pre><code>class ProjectApiList(viewsets.ViewSetMixin, generics.ListCreateAPIView):
queryset = Project.objects.all()
serializer_class = ProjectSerializer
</code></pre>
<p>Is there some way to change the validation of the data so it returns the data I need, or can I write the create and update functions some other way?</p>
| 0 | 2016-10-07T11:46:43Z | 39,957,118 | <p>To return the correct data format the function "to_internal" can be overridden in the ProjectSerializer, like this:</p>
<pre><code>class ProjectSerializer(serializers.HyperlinkedModelSerializer):
seqrecords = ProjectMetaSerializer(source='projectmeta_set', many=True)
class Meta:
model = Project
fields = ['id', 'name', 'seqrecords']
ReadOnlyField = ['id']
def to_internal_value(self, data):
''' Override standard method so validated data have seqrecords '''
context = super(ProjectSerializer, self).to_internal_value(data)
context['seqrecords'] = data['seqrecords']
return context
def validate(self, data):
...
def create(self, validated_data):
...
return project
</code></pre>
<p>This method is of course dependent on a good validation function for the "seqrecords" field in validate().</p>
| 0 | 2016-10-10T11:22:22Z | [
"python",
"django",
"django-rest-framework"
] |
how to reconnect after autobahn websocket timeout? | 39,916,500 | <p>I'm using Autobahn to connect to a websocket like this.</p>
<pre><code>class MyComponent(ApplicationSession):
@inlineCallbacks
def onJoin(self, details):
print("session ready")
def oncounter(*args, **args2):
print("event received: args: {} args2: {}".format(args, args2))
try:
yield self.subscribe(oncounter, u'topic')
print("subscribed to topic")
except Exception as e:
print("could not subscribe to topic: {0}".format(e))
if __name__ == '__main__':
addr = u"wss://mywebsocketaddress.com"
runner = ApplicationRunner(url=addr, realm=u"realm1", debug=False, debug_app=False)
runner.run(MyComponent)
</code></pre>
<p>This works great, and I am able to receive messages. However, after around 3-4 hours, sometimes much sooner, the messages abruptly stop coming. It appears that the websocket times out (does that happen?), possibly due to connection issues. </p>
<p>How can I auto-reconnect with autobahn when that happens?</p>
<hr>
<p>Here's my attempt, but the reconnecting code is never called.</p>
<pre><code>class MyClientFactory(ReconnectingClientFactory, WampWebSocketClientFactory):
maxDelay = 10
maxRetries = 5
def startedConnecting(self, connector):
print('Started to connect.')
def clientConnectionLost(self, connector, reason):
print('Lost connection. Reason: {}'.format(reason))
ReconnectingClientFactory.clientConnectionLost(self, connector, reason)
def clientConnectionFailed(self, connector, reason):
print('Connection failed. Reason: {}'.format(reason))
ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)
class MyApplicationRunner(object):
log = txaio.make_logger()
def __init__(self, url, realm, extra=None, serializers=None,
debug=False, debug_app=False,
ssl=None, proxy=None):
assert(type(url) == six.text_type)
assert(realm is None or type(realm) == six.text_type)
assert(extra is None or type(extra) == dict)
assert(proxy is None or type(proxy) == dict)
self.url = url
self.realm = realm
self.extra = extra or dict()
self.serializers = serializers
self.debug = debug
self.debug_app = debug_app
self.ssl = ssl
self.proxy = proxy
def run(self, make, start_reactor=True):
if start_reactor:
# only select framework, set loop and start logging when we are asked
# start the reactor - otherwise we are running in a program that likely
            # already took care of all this.
from twisted.internet import reactor
txaio.use_twisted()
txaio.config.loop = reactor
if self.debug or self.debug_app:
txaio.start_logging(level='debug')
else:
txaio.start_logging(level='info')
isSecure, host, port, resource, path, params = parseWsUrl(self.url)
# factory for use ApplicationSession
def create():
cfg = ComponentConfig(self.realm, self.extra)
try:
session = make(cfg)
except Exception as e:
if start_reactor:
# the app component could not be created .. fatal
self.log.error(str(e))
reactor.stop()
else:
# if we didn't start the reactor, it's up to the
# caller to deal with errors
raise
else:
session.debug_app = self.debug_app
return session
# create a WAMP-over-WebSocket transport client factory
transport_factory = MyClientFactory(create, url=self.url, serializers=self.serializers,
proxy=self.proxy, debug=self.debug)
        # suppress pointless log noise like
# "Starting factory <autobahn.twisted.websocket.WampWebSocketClientFactory object at 0x2b737b480e10>""
transport_factory.noisy = False
# if user passed ssl= but isn't using isSecure, we'll never
# use the ssl argument which makes no sense.
context_factory = None
if self.ssl is not None:
if not isSecure:
raise RuntimeError(
'ssl= argument value passed to %s conflicts with the "ws:" '
'prefix of the url argument. Did you mean to use "wss:"?' %
self.__class__.__name__)
context_factory = self.ssl
elif isSecure:
from twisted.internet.ssl import optionsForClientTLS
context_factory = optionsForClientTLS(host)
from twisted.internet import reactor
if self.proxy is not None:
from twisted.internet.endpoints import TCP4ClientEndpoint
client = TCP4ClientEndpoint(reactor, self.proxy['host'], self.proxy['port'])
transport_factory.contextFactory = context_factory
elif isSecure:
from twisted.internet.endpoints import SSL4ClientEndpoint
assert context_factory is not None
client = SSL4ClientEndpoint(reactor, host, port, context_factory)
else:
from twisted.internet.endpoints import TCP4ClientEndpoint
client = TCP4ClientEndpoint(reactor, host, port)
d = client.connect(transport_factory)
# as the reactor shuts down, we wish to wait until we've sent
# out our "Goodbye" message; leave() returns a Deferred that
# fires when the transport gets to STATE_CLOSED
def cleanup(proto):
if hasattr(proto, '_session') and proto._session is not None:
if proto._session.is_attached():
return proto._session.leave()
elif proto._session.is_connected():
return proto._session.disconnect()
# when our proto was created and connected, make sure it's cleaned
# up properly later on when the reactor shuts down for whatever reason
def init_proto(proto):
reactor.addSystemEventTrigger('before', 'shutdown', cleanup, proto)
return proto
# if we connect successfully, the arg is a WampWebSocketClientProtocol
d.addCallback(init_proto)
# if the user didn't ask us to start the reactor, then they
# get to deal with any connect errors themselves.
if start_reactor:
# if an error happens in the connect(), we save the underlying
# exception so that after the event-loop exits we can re-raise
# it to the caller.
class ErrorCollector(object):
exception = None
def __call__(self, failure):
self.exception = failure.value
reactor.stop()
connect_error = ErrorCollector()
d.addErrback(connect_error)
# now enter the Twisted reactor loop
reactor.run()
# if we exited due to a connection error, raise that to the
# caller
if connect_error.exception:
raise connect_error.exception
else:
# let the caller handle any errors
return d
</code></pre>
<p>The error I'm getting:</p>
<pre>
2016-10-09T21:00:40+0100 Connection to/from tcp4:xxx.xx.xx.xx:xxx was lost in a non-clean fashion: Connection lost
2016-10-09T21:00:40+0100 _connectionLost: [Failure instance: Traceback (failure with no frames): : Connection to the other side was lost in a non-clean fashion: Connection l
ost.
]
2016-10-09T21:00:40+0100 WAMP-over-WebSocket transport lost: wasClean=False, code=1006, reason="connection was closed uncleanly (peer dropped the TCP connection without previous WebSocket closing handshake)"
2016-10-09T21:10:39+0100 EXCEPTION: no messages received
2016-10-09T21:10:39+0100 Traceback (most recent call last):
</pre>
 | 0 | 2016-10-07T11:48:00Z | 39,925,529 | <p>You can use a <code>ReconnectingClientFactory</code> if you're using Twisted. <a href="https://github.com/crossbario/autobahn-python/blob/master/examples/twisted/websocket/echo_variants/client_reconnecting.py" rel="nofollow">There's a simple example by the autobahn developer on github</a>. Unfortunately there doesn't seem to be an implementation of an <code>ApplicationRunner</code> that comes pre-built with this functionality, but it doesn't look too difficult to implement on your own. <a href="https://github.com/isra17/autobahn-autoreconnect/blob/master/autobahn_autoreconnect/__init__.py" rel="nofollow">Here's an <code>asyncio</code> variant</a> which should be straightforward to port to Twisted. You could follow <a href="https://github.com/crossbario/autobahn-python/issues/295" rel="nofollow">this issue</a>, as it seems there is a desire by the dev team to incorporate a reconnecting client.</p>
| 0 | 2016-10-07T20:44:30Z | [
"python",
"twisted",
"autobahn"
] |
how to reconnect after autobahn websocket timeout? | 39,916,500 | <p>I'm using Autobahn to connect to a websocket like this.</p>
<pre><code>class MyComponent(ApplicationSession):
@inlineCallbacks
def onJoin(self, details):
print("session ready")
def oncounter(*args, **args2):
print("event received: args: {} args2: {}".format(args, args2))
try:
yield self.subscribe(oncounter, u'topic')
print("subscribed to topic")
except Exception as e:
print("could not subscribe to topic: {0}".format(e))
if __name__ == '__main__':
addr = u"wss://mywebsocketaddress.com"
runner = ApplicationRunner(url=addr, realm=u"realm1", debug=False, debug_app=False)
runner.run(MyComponent)
</code></pre>
<p>This works great, and I am able to receive messages. However, after around 3-4 hours, sometimes much sooner, the messages abruptly stop coming. It appears that the websocket times out (does that happen?), possibly due to connection issues. </p>
<p>How can I auto-reconnect with autobahn when that happens?</p>
<hr>
<p>Here's my attempt, but the reconnecting code is never called.</p>
<pre><code>class MyClientFactory(ReconnectingClientFactory, WampWebSocketClientFactory):
maxDelay = 10
maxRetries = 5
def startedConnecting(self, connector):
print('Started to connect.')
def clientConnectionLost(self, connector, reason):
print('Lost connection. Reason: {}'.format(reason))
ReconnectingClientFactory.clientConnectionLost(self, connector, reason)
def clientConnectionFailed(self, connector, reason):
print('Connection failed. Reason: {}'.format(reason))
ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)
class MyApplicationRunner(object):
log = txaio.make_logger()
def __init__(self, url, realm, extra=None, serializers=None,
debug=False, debug_app=False,
ssl=None, proxy=None):
assert(type(url) == six.text_type)
assert(realm is None or type(realm) == six.text_type)
assert(extra is None or type(extra) == dict)
assert(proxy is None or type(proxy) == dict)
self.url = url
self.realm = realm
self.extra = extra or dict()
self.serializers = serializers
self.debug = debug
self.debug_app = debug_app
self.ssl = ssl
self.proxy = proxy
def run(self, make, start_reactor=True):
if start_reactor:
# only select framework, set loop and start logging when we are asked
# start the reactor - otherwise we are running in a program that likely
            # already took care of all this.
from twisted.internet import reactor
txaio.use_twisted()
txaio.config.loop = reactor
if self.debug or self.debug_app:
txaio.start_logging(level='debug')
else:
txaio.start_logging(level='info')
isSecure, host, port, resource, path, params = parseWsUrl(self.url)
# factory for use ApplicationSession
def create():
cfg = ComponentConfig(self.realm, self.extra)
try:
session = make(cfg)
except Exception as e:
if start_reactor:
# the app component could not be created .. fatal
self.log.error(str(e))
reactor.stop()
else:
# if we didn't start the reactor, it's up to the
# caller to deal with errors
raise
else:
session.debug_app = self.debug_app
return session
# create a WAMP-over-WebSocket transport client factory
transport_factory = MyClientFactory(create, url=self.url, serializers=self.serializers,
proxy=self.proxy, debug=self.debug)
        # suppress pointless log noise like
# "Starting factory <autobahn.twisted.websocket.WampWebSocketClientFactory object at 0x2b737b480e10>""
transport_factory.noisy = False
# if user passed ssl= but isn't using isSecure, we'll never
# use the ssl argument which makes no sense.
context_factory = None
if self.ssl is not None:
if not isSecure:
raise RuntimeError(
'ssl= argument value passed to %s conflicts with the "ws:" '
'prefix of the url argument. Did you mean to use "wss:"?' %
self.__class__.__name__)
context_factory = self.ssl
elif isSecure:
from twisted.internet.ssl import optionsForClientTLS
context_factory = optionsForClientTLS(host)
from twisted.internet import reactor
if self.proxy is not None:
from twisted.internet.endpoints import TCP4ClientEndpoint
client = TCP4ClientEndpoint(reactor, self.proxy['host'], self.proxy['port'])
transport_factory.contextFactory = context_factory
elif isSecure:
from twisted.internet.endpoints import SSL4ClientEndpoint
assert context_factory is not None
client = SSL4ClientEndpoint(reactor, host, port, context_factory)
else:
from twisted.internet.endpoints import TCP4ClientEndpoint
client = TCP4ClientEndpoint(reactor, host, port)
d = client.connect(transport_factory)
# as the reactor shuts down, we wish to wait until we've sent
# out our "Goodbye" message; leave() returns a Deferred that
# fires when the transport gets to STATE_CLOSED
def cleanup(proto):
if hasattr(proto, '_session') and proto._session is not None:
if proto._session.is_attached():
return proto._session.leave()
elif proto._session.is_connected():
return proto._session.disconnect()
# when our proto was created and connected, make sure it's cleaned
# up properly later on when the reactor shuts down for whatever reason
def init_proto(proto):
reactor.addSystemEventTrigger('before', 'shutdown', cleanup, proto)
return proto
# if we connect successfully, the arg is a WampWebSocketClientProtocol
d.addCallback(init_proto)
# if the user didn't ask us to start the reactor, then they
# get to deal with any connect errors themselves.
if start_reactor:
# if an error happens in the connect(), we save the underlying
# exception so that after the event-loop exits we can re-raise
# it to the caller.
class ErrorCollector(object):
exception = None
def __call__(self, failure):
self.exception = failure.value
reactor.stop()
connect_error = ErrorCollector()
d.addErrback(connect_error)
# now enter the Twisted reactor loop
reactor.run()
# if we exited due to a connection error, raise that to the
# caller
if connect_error.exception:
raise connect_error.exception
else:
# let the caller handle any errors
return d
</code></pre>
<p>The error I'm getting:</p>
<pre>
2016-10-09T21:00:40+0100 Connection to/from tcp4:xxx.xx.xx.xx:xxx was lost in a non-clean fashion: Connection lost
2016-10-09T21:00:40+0100 _connectionLost: [Failure instance: Traceback (failure with no frames): : Connection to the other side was lost in a non-clean fashion: Connection l
ost.
]
2016-10-09T21:00:40+0100 WAMP-over-WebSocket transport lost: wasClean=False, code=1006, reason="connection was closed uncleanly (peer dropped the TCP connection without previous WebSocket closing handshake)"
2016-10-09T21:10:39+0100 EXCEPTION: no messages received
2016-10-09T21:10:39+0100 Traceback (most recent call last):
</pre>
| 0 | 2016-10-07T11:48:00Z | 40,019,926 | <p>Here is an <a href="https://github.com/crossbario/autobahn-python/blob/master/examples/twisted/wamp/basic/client_using_apprunner.py" rel="nofollow">example</a> of an automatically reconnecting ApplicationRunner. The important line to enable auto-reconnect is:</p>
<pre><code>runner.run(session, auto_reconnect=True)
</code></pre>
<p>You will also want to activate automatic WebSocket ping/pong (e.g. in Crossbar.io), to (a) minimize connection drops because of timeouts, and (b) allow fast detection of lost connections.</p>
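<p>On the client side, a hedged sketch of enabling automatic pings on an Autobahn factory (assuming a <code>transport_factory</code> like the one in the question's code; the interval values are arbitrary):</p>
<pre><code># enable automatic WebSocket pings on the client factory
transport_factory.setProtocolOptions(autoPingInterval=10.0,  # send a ping every 10 s
                                     autoPingTimeout=5.0)    # drop the link if no pong in 5 s
</code></pre>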
| 0 | 2016-10-13T11:38:27Z | [
"python",
"twisted",
"autobahn"
] |
python + Sqlite: How to save the changes into a new db file? | 39,916,558 | <p>After the changes were made to a db file, I want to save the data into a new db file. </p>
<pre><code>import sqlite3
conn = sqlite3.connect('Original.db')
cur = conn.cursor()
# make changes here.
conn.close() #Close without save
connA = sqlite3.connect('NewFile.db')
connA.commit() # Here is my problem. How to save the changed data into this new file?
</code></pre>
<p>Thanks for your help!</p>
<p>UPDATE: My db file is huge. If I make a copy of it at the beginning, it takes to much time. I would rather let it run after I made the changes, to save the starting time.</p>
 | 0 | 2016-10-07T11:50:23Z | 39,916,856 | <p>Each database has its own tables, data, schema, etc. If you just commit the changes on a brand-new empty file, errors will occur.</p>
<p>If you want to save the changed data into a new database, you can make a copy of the current database file using <code>shutil.copyfile</code> and then operate on the new database, as sketched below.</p>
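<p>A minimal sketch of that approach, using the file names from the question:</p>
<pre><code>import shutil
import sqlite3

shutil.copyfile('Original.db', 'NewFile.db')  # duplicate the database file first

conn = sqlite3.connect('NewFile.db')  # then make the changes on the copy
cur = conn.cursor()
# make changes here...
conn.commit()  # changes are saved into NewFile.db; Original.db stays untouched
conn.close()
</code></pre>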
| 1 | 2016-10-07T12:05:17Z | [
"python",
"sqlite"
] |
TypeError: only length-1 arrays can be converted to Python scalars, while using Kfold cross Validation | 39,916,655 | <p>I am trying to use <code>Kfold</code> cross valiadtion for my model, but get this error while doing so. I know that <code>KFold</code> only accepts 1D arrays but even after converting the length input to an array its giving me this problem.</p>
<pre><code>from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import KFold
if __name__ == "__main__":
np.random.seed(1335)
verbose = True
shuffle = False
n_folds = 5
y = np.array(y)
if shuffle:
idx = np.random.permutation(y.size)
X_train = X_train[idx]
y = y[idx]
skf = KFold(y, n_folds)
models = [RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),ExtraTreesClassifier(n_estimators=100, n_jobs=-1, criterion='entropy')]
print("Stacking in progress")
A = []
for j, clf in enumerate(models):
print(j, clf)
for i, (itrain, itest) in enumerate(skf):
print("Fold :", i)
x_train = X_train[itrain]
x_test = X_train[itest]
y_train = y[itrain]
y_test = y[itest]
print(x_train.shape, x_test.shape)
print(len(x_train), len(x_test))
clf.fit(x_train, y_train)
pred = clf.predict_proba(x_test)
A.append(pred)
</code></pre>
<p>I get the error for the line "<code>skf = KFold(y, n_folds)</code>". Any help with this would be appreciated.</p>
 | 0 | 2016-10-07T11:55:03Z | 39,916,807 | <p>According to <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold" rel="nofollow">its documentation</a>, <code>KFold()</code> does not expect <code>y</code> as an input, only the number of splits (n_folds). </p>
<p>Once you have an instance of <code>KFold</code>, you do <code>myKfold.split(x)</code> (<code>x</code> being all of your input data) to obtain an iterator yielding train and test indices. Example copy pasted from sklearn doc:</p>
<pre><code>>>> from sklearn.model_selection import KFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4])
>>> kf = KFold(n_splits=2)
>>> kf.get_n_splits(X)
2
>>> print(kf)
KFold(n_splits=2, random_state=None, shuffle=False)
>>> for train_index, test_index in kf.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [0 1] TEST: [2 3]
</code></pre>
| 1 | 2016-10-07T12:02:47Z | [
"python",
"arrays",
"pandas",
"scikit-learn",
"cross-validation"
] |
"ImportError: cannot import name Browser" with python mechanize | 39,916,657 | <p>I am unable to use a <a href="https://docs.google.com/document/d/1hEAMNvbPgNzVxNmmDnLdGlVvQwypKhcpjAUAif1Jdkk/" rel="nofollow">mechanize code</a> ). The part that lead to the error is </p>
<pre><code>#!/usr/bin/python
import re
from mechanize import Browser
br = Browser()
</code></pre>
<p>When executing it with python3.5, I get the following error:</p>
<pre><code># python mechanize.py
Traceback (most recent call last):
File "mechanize.py", line 6, in <module>
from mechanize import Browser
File "/root/git/stakexchange-ask-question/mechanize.py", line 6, in <module>
from mechanize import Browser
ImportError: cannot import name Browser
</code></pre>
<p>This is, however, precisely what is suggested by the official <a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow">mechanize website</a>.</p>
<p>If I modify the code to</p>
<pre><code>#!/usr/bin/python
import re
import mechanize
br = mechanize.Browser()
</code></pre>
<p>I also get an error</p>
<pre><code> # python mechanize.py
Traceback (most recent call last):
File "mechanize.py", line 5, in <module>
import mechanize
File "/root/git/stakexchange-ask-question/mechanize.py", line 6, in <module>
br =mechanize.Browser()
AttributeError: module 'mechanize' has no attribute 'Browser'
</code></pre>
<p>I have installed <code>mechanize</code> with </p>
<pre><code>easy_install mechanize
</code></pre>
| 0 | 2016-10-07T11:55:13Z | 39,916,717 | <p>Have you tried mechanize.Browser()?</p>
| 0 | 2016-10-07T11:58:13Z | [
"python",
"html",
"linux",
"forms",
"mechanize"
] |
"ImportError: cannot import name Browser" with python mechanize | 39,916,657 | <p>I am unable to use a <a href="https://docs.google.com/document/d/1hEAMNvbPgNzVxNmmDnLdGlVvQwypKhcpjAUAif1Jdkk/" rel="nofollow">mechanize code</a> ). The part that lead to the error is </p>
<pre><code>#!/usr/bin/python
import re
from mechanize import Browser
br = Browser()
</code></pre>
<p>When executing it with python3.5, I get the following error:</p>
<pre><code># python mechanize.py
Traceback (most recent call last):
File "mechanize.py", line 6, in <module>
from mechanize import Browser
File "/root/git/stakexchange-ask-question/mechanize.py", line 6, in <module>
from mechanize import Browser
ImportError: cannot import name Browser
</code></pre>
<p>This is, however, precisely what is suggested by the official <a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow">mechanize website</a>.</p>
<p>If I modify the code to</p>
<pre><code>#!/usr/bin/python
import re
import mechanize
br = mechanize.Browser()
</code></pre>
<p>I also get an error</p>
<pre><code> # python mechanize.py
Traceback (most recent call last):
File "mechanize.py", line 5, in <module>
import mechanize
File "/root/git/stakexchange-ask-question/mechanize.py", line 6, in <module>
br =mechanize.Browser()
AttributeError: module 'mechanize' has no attribute 'Browser'
</code></pre>
<p>I have installed <code>mechanize</code> with </p>
<pre><code>easy_install mechanize
</code></pre>
| 0 | 2016-10-07T11:55:13Z | 39,917,035 | <p>The <code>python</code> version has to be at least <code>3.0</code> (<a href="http://wwwsearch.sourceforge.net/mechanize/faq.html" rel="nofollow">reference</a>)</p>
<p>Check your python version with</p>
<pre><code>readlink -f $(which python) | xargs -I % sh -c 'echo -n "%: "; % -V'
</code></pre>
<p>But the error didn't come only from this: <code>mechanize</code> wasn't properly installed, so I installed it again from the <a href="http://wwwsearch.sourceforge.net/mechanize/download.html" rel="nofollow">source code</a>.</p>
| 0 | 2016-10-07T12:13:48Z | [
"python",
"html",
"linux",
"forms",
"mechanize"
] |
Replacing a \t with something in a list | 39,916,674 | <pre><code>>>>user_sentence = "hello \t how are you?"
>>>import re
>>>user_sentenceSplit = re.findall(r"([\s]|[\w']+|[.,!?;])",user_sentence)
>>>print user_sentenceSplit
</code></pre>
<p>I get <code>['hello', '\t', 'how', 'are', 'you', '?']</code></p>
<p>I don't know how to create any code that will replace the <code>'\t'</code> with <code>'tab'</code>.</p>
| 0 | 2016-10-07T11:56:02Z | 39,916,882 | <p>I think <code>str.replace</code> would do the job.</p>
<pre class="lang-py prettyprint-override"><code>user_sentence.replace('\t', 'tab')
</code></pre>
<p>Do this before splitting the string.</p>
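<p>Alternatively, if you want to keep the split exactly as it is and fix up the resulting list instead, a list comprehension is a simple sketch:</p>
<pre><code>user_sentenceSplit = ['tab' if token == '\t' else token for token in user_sentenceSplit]
</code></pre>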
| 1 | 2016-10-07T12:06:33Z | [
"python",
"regex"
] |
Replacing a \t with something in a list | 39,916,674 | <pre><code>>>>user_sentence = "hello \t how are you?"
>>>import re
>>>user_sentenceSplit = re.findall(r"([\s]|[\w']+|[.,!?;])",user_sentence)
>>>print user_sentenceSplit
</code></pre>
<p>I get <code>['hello', '\t', 'how', 'are', 'you', '?']</code></p>
<p>I don't know how to create any code that will replace the <code>'\t'</code> with <code>'tab'</code>.</p>
 | 0 | 2016-10-07T11:56:02Z | 39,916,884 | <p>This is just how Python displays the string; you should not worry about it. The interpreter shows a tab as <code>\t</code> in a string's repr, but it is still treated as a real tab when performing any action on it. For example:</p>
<pre><code>>>> my_string = 'Yes Hello So?' # <- String with tab
>>> my_string
'Yes\tHello\tSo?' # <- Stored tab as '\t'
>>> print my_string
Yes Hello So? # While printing, again tab
</code></pre>
<p>However, your exact requirement is not clear to me. In case you want to replace the <code>\t</code> value with the string <code>tab</code>, you may do:</p>
<pre><code>>>> my_string = my_string.replace('\t', 'tab')
>>> my_string
'YestabHellotabSo?'
</code></pre>
<p>where <code>my_string</code> holds the value I mentioned in the previous example.</p>
| 0 | 2016-10-07T12:06:34Z | [
"python",
"regex"
] |
Replacing a \t with something in a list | 39,916,674 | <pre><code>>>>user_sentence = "hello \t how are you?"
>>>import re
>>>user_sentenceSplit = re.findall(r"([\s]|[\w']+|[.,!?;])",user_sentence)
>>>print user_sentenceSplit
</code></pre>
<p>I get <code>['hello', '\t', 'how', 'are', 'you', '?']</code></p>
<p>I don't know how to create any code that will replace the <code>'\t'</code> with <code>'tab'</code>.</p>
 | 0 | 2016-10-07T11:56:02Z | 39,917,486 | <p>I do not believe that replacing <code>\t</code> in the original string will ever work; you have two issues:</p>
<ul>
<li>Your code also outputs spaces as tokens, but you do not want to have them</li>
<li>The <code>\t</code> in between letters will become a part of a word token.</li>
</ul>
<p>So, you need to replace <code>[\s]</code> with the <code>[^\S ]</code> pattern, which matches any whitespace except a regular space (add more excluded whitespace symbols to the negated character class if necessary). You then need to iterate through all the tokens, check whether a token equals a tab, and replace it with the <code>tab</code> value. The best approach is to use <code>re.finditer</code> and push the found values into a list variable; see the sample code below:</p>
<pre><code>import re
user_sentence = "hello \t how are you?"
user_sentenceSplit = []
for x in re.finditer(r"[^\S ]|[\w']+|[.,!?;]",user_sentence):
if x.group() == "\t": # if it is a tab, replace the value
user_sentenceSplit.append("tab")
else: # else, push the match value
user_sentenceSplit.append(x.group())
print(user_sentenceSplit)
</code></pre>
<p>See the <a href="https://ideone.com/sbn0Dp" rel="nofollow">Python demo</a></p>
| 1 | 2016-10-07T12:36:39Z | [
"python",
"regex"
] |
How can I replicate similar behaviour in python 3 as seen in python 2 | 39,916,676 | <p>I have this operation done in <code>python 2</code>:</p>
<p>python 2:</p>
<pre><code>garbled = "IXXX aXXmX aXXXnXoXXXXXtXhXeXXXXrX sXXXXeXcXXXrXeXt mXXeXsXXXsXaXXXXXXgXeX!XX"
message = filter(lambda x: x != 'X', garbled)
print message
</code></pre>
<p>result:</p>
<pre><code>'I am another secret message!'
</code></pre>
<p>But in <code>python 3</code>:</p>
<pre><code>garbled = "IXXX aXXmX aXXXnXoXXXXXtXhXeXXXXrX sXXXXeXcXXXrXeXt mXXeXsXXXsXaXXXXXXgXeX!XX"
message = list(filter(lambda x: x != 'X', garbled))
print(message)
</code></pre>
<p>Result for `python 3:</p>
<pre><code>['I', ' ', 'a', 'm', ' ', 'a', 'n', 'o', 't', 'h', 'e', 'r', ' ', 's', 'e', 'c', 'r', 'e', 't', ' ', 'm', 'e', 's', 's', 'a', 'g', 'e', '!']
</code></pre>
<p>I want to know if it's possible to get the same result for the <code>message</code> variable in <code>python 3</code> as in <code>python 2</code>, <code>without adding</code> a new line to the code in <code>python 3</code>. I already know the <code>join function</code>; I would like to make it as straightforward as <code>the python 2 code</code>.</p>
| -1 | 2016-10-07T11:56:08Z | 39,916,714 | <p>In python 3.X <code>filter()</code> returns an iterator and when you convert it to list you'll get a list of your characters. Thus, instead you can pass the iterator to <code>str.join()</code> method in order to join the filtered result together:</p>
<pre><code>In [1]: garbled = "IXXX aXXmX aXXXnXoXXXXXtXhXeXXXXrX sXXXXeXcXXXrXeXt mXXeXsXXXsXaXXXXXXgXeX!XX"
In [2]: message = ''.join(filter(lambda x: x != 'X', garbled))
In [3]: message
Out[3]: 'I am another secret message!'
</code></pre>
<p>But Note that using filter is not a proper way for removing a character, instead you can simply use <code>str.replace()</code>:</p>
<pre><code>In [4]: garbled.replace('X', '')
Out[4]: 'I am another secret message!'
</code></pre>
<p>Here is why <code>str.replace</code> is preferable: besides being more pythonic, it is also much faster:</p>
<pre><code>In [5]: %timeit''.join(filter(lambda x: x != 'X', garbled))
100000 loops, best of 3: 8.08 µs per loop
In [6]: %timeit garbled.replace('X', '')
1000000 loops, best of 3: 1.04 µs per loop
</code></pre>
| 2 | 2016-10-07T11:58:06Z | [
"python",
"python-2.7",
"python-3.x"
] |
Group by multiple columns in sqlalchemy | 39,916,678 | <p>I have written this python function to query a database using the SQLAlchemy package:</p>
<pre><code>def group_by_one_var(start_date, end_date, groupby):
data = db.session.query(
groupby,
MyModel.SnapDate,
func.count(MyModel.CustomerID).label("TotalCustomers")
)\
.filter(
MyModel.SnapDate >= start_date,
MyModel.SnapDate <= end_date
)\
.group_by(
groupby
).all()
return(data)
test_1 = group_by_one_var("2016-08-01", "2016-08-31", MyModel.Country) # Success
</code></pre>
<p>Which does a good job of grouping my query by a variable of my choosing.</p>
<p>However, I'm stuck when it comes to grouping by multiple variables.</p>
<p>Here is a function I wrote to group by two variables:</p>
<pre><code>def group_by_two_vars(start_date, end_date, groupby):
data = db.session.query(
groupby[0],
groupby[1],
MyModel.SnapDate,
func.count(MyModel.CustomerID).label("TotalCustomers")
)\
.filter(
MyModel.SnapDate >= start_date,
MyModel.SnapDate <= end_date
)\
.group_by(
groupby[0]
)\
.group_by(
groupby[1]
).all()
return(data)
tes2 = group_by_two_vars("2016-08-01", "2016-08-31", (MyModel.Country, MyModel.Currency)) # Success
</code></pre>
<p>This function also does a fine job grouping my query by two variables of my choosing.</p>
<p>How can I alter these functions to accept a dynamic number of group bys?</p>
<p>Thanks!</p>
 | 0 | 2016-10-07T11:56:19Z | 39,917,425 | <p>This is how you can accept a dynamic number of groupby arguments and have SQLAlchemy include them all in the query:</p>
<pre><code>def group_by_n_vars(start_date, end_date, *groupby):
    data = db.session.query(
        *groupby,
        MyModel.SnapDate,
        func.count(MyModel.CustomerID).label("TotalCustomers")
    )\
    .filter(
        MyModel.SnapDate >= start_date,
        MyModel.SnapDate <= end_date
    )\
    .group_by(
        *groupby
    ).all()
    return(data)
</code></pre>
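<p>Hypothetical usage, mirroring the earlier calls (note that unpacking <code>*groupby</code> before other positional arguments in a call requires Python 3.5+):</p>
<pre><code># MyModel.Country and MyModel.Currency are the columns from the question
test_n = group_by_n_vars("2016-08-01", "2016-08-31",
                         MyModel.Country, MyModel.Currency)
</code></pre>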
| 0 | 2016-10-07T12:33:21Z | [
"python",
"sqlalchemy"
] |
Error in acorr_ljungbox from statsmodel | 39,916,771 | <p>So I am trying to do a Box-Ljung test on a residual, but I am getting a strange error and am not able to figure out why.</p>
<pre><code>x = diag.acorr_ljungbox(np.random.random(20))
</code></pre>
<p>I tried doing the same with a random array also, still the same error:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: operands could not be broadcast together with shapes (19,) (40,)
</code></pre>
| 0 | 2016-10-07T12:00:49Z | 39,919,347 | <p>This looks like a bug in the default lag setting, which is set to 40 independent of the length of the data.</p>
<p>As a workaround, and to get a proper statistic, the <code>lags</code> argument needs to be restricted, e.g. to 5 lags as below.</p>
<pre><code>>>> from statsmodels.stats import diagnostic as diag
>>> diag.acorr_ljungbox(np.random.random(50))[0].shape
(40,)
>>> diag.acorr_ljungbox(np.random.random(20), lags=5)
(array([ 0.36718151, 1.02009595, 1.23734092, 3.75338034, 4.35387236]),
array([ 0.54454461, 0.60046677, 0.74406305, 0.44040973, 0.49966951]))
</code></pre>
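<p>If you want the workaround to adapt to the series length automatically, one sketch is the common <code>min(10, n // 5)</code> rule of thumb (a heuristic, not an official default):</p>
<pre><code>import numpy as np
from statsmodels.stats import diagnostic as diag

x = np.random.random(20)
lags = max(1, min(10, len(x) // 5))  # 4 lags for a series of length 20
lb_stat, p_values = diag.acorr_ljungbox(x, lags=lags)
</code></pre>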
| 0 | 2016-10-07T14:12:29Z | [
"python",
"statsmodels"
] |
Making a real time application which uses same python code continuously on a server, libraries import on each run and consume memory and time | 39,916,811 | <p>I am developing a real-time navigation application on Android. It collects relevant data from the user's smartphone and pings a Python script on a server with that data; the script then returns the real-time location of the user. Since I need to run the same Python code again and again, the same exact libraries need to be loaded each time (I am using libraries like numpy, MySQLdb, sys, rpy2). Every run, some memory and time is consumed just loading these libraries, and as it turns out, that is where most of the memory and time goes. This increases the server costs and execution time unnecessarily, as I am utilizing my resources just to load the same libraries. Is there some way in Python to keep my libraries permanently loaded in the server's RAM and cut the cost? Is it even possible using just Python? I thought this would be a common issue, but I keep finding very disconnected answers when I google it. Please suggest the best way to do this. I am a bit weak on server-side coding best practices, so maybe my problem is part of that rather than part of the Python coding. Thanks!</p>
<p>PS: As I was using AWS I thought AWS Lambda should take care of it automatically, I implemented my code as an AWS Lambda package but I see a degradation in performance actually.</p>
 | 0 | 2016-10-07T12:02:55Z | 39,916,917 | <p>You usually have a web API that responds to the Android apps. These APIs are usually provided by a Python app behind a web server. One way of running a Python app behind a web server is by using <a href="https://www.fullstackpython.com/wsgi-servers.html" rel="nofollow">WSGI</a>. All available WSGI implementations do the job of starting the app once (sometimes a few instances of it) and dispatching requests to the already running app. The job of maintaining running instances etc. is essentially all taken care of.</p>
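<p>For illustration, a minimal WSGI sketch (the module name and payload are made up); the key point is that heavy module-level imports run once per worker process, not once per request:</p>
<pre><code># nav_app.py -- hypothetical sketch
import json
import numpy as np  # heavy import happens once, at worker startup

def application(environ, start_response):
    # compute a fake "location" just for the demo
    location = {"lat": float(np.random.rand()), "lon": float(np.random.rand())}
    body = json.dumps(location).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
</code></pre>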
<p>If you have a Python job that spawns upon request and exits after it is done, that sounds like a badly designed homemade webapp. In this case you should port it over to WSGI.</p>
<p>If, on the other hand, you have jobs that need to be done asynchronously to your web service (ones with long execution times etc.), you should start a second Python daemon that receives jobs from your webapp through a <a href="https://www.fullstackpython.com/task-queues.html" rel="nofollow">queue</a>. In this case, too, the daemon would be started at boot time and run persistently until you shut down the server.</p>
| 2 | 2016-10-07T12:08:21Z | [
"python",
"amazon-web-services",
"server",
"real-time",
"libraries"
] |
Matplotlib updating live plot | 39,916,824 | <p>I want to update a line plot with matplotlib and wonder, if there is a good modification of the code, such that the line plotted simply gets updated instead of getting redrawn every time. Here is a sample code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
matplotlib.style.use('ggplot')
plt.ion()
fig=plt.figure()
i=0
df = pd.DataFrame({"time": [pd.datetime.now()], "value": 0}).set_index("time")
plt.plot(df);
while True:
temp_y=np.random.random();
df2 = pd.DataFrame({"time": [pd.datetime.now()], "value": temp_y}).set_index("time")
df = df.append(df2)
plt.plot(df)
i+=1
plt.show()
plt.pause(0.000001)
</code></pre>
<p>As you see, the plotting gets slower and slower after a while and I think the line chart is redrawn every iteration since it changes colours. </p>
| -1 | 2016-10-07T12:03:41Z | 39,916,933 | <p>Yes.</p>
<pre><code>x = np.arange(10)
y = np.random.rand(10)
line, = plt.plot(x,y)
line.set_data(x,np.random.rand(10))
plt.draw()
</code></pre>
<p>However, your plotting gets slower because you are extending your data frame, and each append operation presumably copies that frame in memory to a new location. As your data frame increases in size, that copy operation takes longer. I would loop over an index and plot that (<code>for ii in range(len(data)): line.set_data(x[:ii], y[:ii])</code>)</p>
<p>EDIT: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt; plt.ion()
import pandas as pd
n = 100
x = np.arange(n)
y = np.random.rand(n)
# I don't get the obsession with pandas...
df = pd.DataFrame(dict(time=x, value=y))
# initialise plot and line
line, = plt.plot(df['time'], df['value'])
# simulate line drawing
for ii in range(len(df)):
line.set_data(df['time'][:ii], df['value'][:ii]) # couldn't be bothered to look up the proper way to index in pd
plt.draw()
plt.pause(0.001)
</code></pre>
| 1 | 2016-10-07T12:09:11Z | [
"python",
"matplotlib",
"plot",
"while-loop"
] |
Difference between 2 datetimes in a data frame lose ns precision | 39,916,845 | <p>I have a data frame with 2 columns Event_A / Event_B that looks like this :</p>
<pre class="lang-none prettyprint-override"><code>In [8]: print (df)
Event_A Event_B
0 2016-10-03 02:00:09.123456789 2016-10-03 02:00:09.123456547
</code></pre>
<p>Both columns contain datetimes and have ns precision.
My issue is that when I do the following, I lose the ns precision in my new time_diff column:</p>
<pre class="lang-none prettyprint-override"><code>In [9]: df['time_diff'] = df['Event_A'] - df['Event_B']
In [10]: print (df)
Event_A Event_B time_diff
0 2016-10-03 02:00:09.123456789 2016-10-03 02:00:09.123456547 00:00:00.000000
</code></pre>
<p>Is there a way to force the output of the difference between the two datetimes in ns?
Do I have to somehow convert to an ns epoch timestamp (not sure how to convert) to perform this operation?</p>
 | 2 | 2016-10-07T12:04:36Z | 39,917,008 | <p>This looks like a display issue; if you access <code>dt.nanoseconds</code>, it will show the difference at nanosecond resolution:</p>
<pre><code>In [85]:
(df['Event_A']- df['Event_B']).dt.nanoseconds
Out[85]:
0 242
dtype: int64
</code></pre>
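<p>Note that <code>dt.nanoseconds</code> is only the nanosecond <em>component</em> of the timedelta. If you want the whole difference expressed in nanoseconds, one sketch is to view the underlying <code>timedelta64[ns]</code> values as integers:</p>
<pre><code>total_ns = (df['Event_A'] - df['Event_B']).astype('int64')  # total nanoseconds
</code></pre>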
| 1 | 2016-10-07T12:12:39Z | [
"python",
"pandas",
"dataframe"
] |
How to extract all href content from a page using scrapy | 39,916,940 | <p>I am trying to crawl <a href="https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release" rel="nofollow">https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release</a>.</p>
<p>I want to get all links from a given website using Scrapy</p>
<p>I am trying to this way -</p>
<pre><code>import scrapy
import unidecode
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html
class ElementSpider(scrapy.Spider):
name = 'linkdata'
start_urls = ["https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release",]
def parse(self, response):
links = response.xpath('//div[@id="all_votes"]/table[@class="tableList js-dataTooltip"]/div[@class="js-tooltipTrigger tooltipTrigger"]/a/@href').extract()
print links
</code></pre>
<p>But I am getting nothing in the output.</p>
 | 0 | 2016-10-07T12:09:32Z | 39,917,070 | <p>I think your xpath is wrong. Try this:</p>
<pre><code>for href in response.xpath('//div[@id="all_votes"]/table[@class="tableList js-dataTooltip"]/tr/td[2]/div[@class="js-tooltipTrigger tooltipTrigger"]/a/@href'):
full_url = response.urljoin(href.extract())
print full_url
</code></pre>
<p>Hope it helps :)</p>
<p>Good luck...</p>
| 1 | 2016-10-07T12:15:14Z | [
"python",
"scrapy"
] |
pyspark: StructField(..., ..., False) always returns `nullable=true` instead of `nullable=false` | 39,917,075 | <p>I'm new to pyspark and am facing a strange problem. I'm trying to set some column to non-nullable while loading a CSV dataset. I can reproduce my case with a very small dataset (<code>test.csv</code>):</p>
<pre><code>col1,col2,col3
11,12,13
21,22,23
31,32,33
41,42,43
51,,53
</code></pre>
<p>There is a null value at row 5, column 2, and I don't want to get that row inside my DF. I set all fields as non-nullable (<code>nullable=false</code>) but I get a schema with all three columns having <code>nullable=true</code>. This happens even if I set all three columns as non-nullable! I'm running the latest available version of Spark, 2.0.1.</p>
<p>Here's the code:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
struct = StructType([ StructField("col1", StringType(), False), \
StructField("col2", StringType(), False), \
StructField("col3", StringType(), False) \
])
df = spark.read.load("test.csv", schema=struct, format="csv", header="true")
</code></pre>
<p><code>df.printSchema()</code> returns:</p>
<pre><code>root
|-- col1: string (nullable = true)
|-- col2: string (nullable = true)
|-- col3: string (nullable = true)
</code></pre>
<p>and <code>df.show()</code> returns:</p>
<pre><code>+----+----+----+
|col1|col2|col3|
+----+----+----+
| 11| 12| 13|
| 21| 22| 23|
| 31| 32| 33|
| 41| 42| 43|
| 51|null| 53|
+----+----+----+
</code></pre>
<p>while I expect this:</p>
<pre><code>root
|-- col1: string (nullable = false)
|-- col2: string (nullable = false)
|-- col3: string (nullable = false)
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 11| 12| 13|
| 21| 22| 23|
| 31| 32| 33|
| 41| 42| 43|
+----+----+----+
</code></pre>
 | 0 | 2016-10-07T12:15:24Z | 39,917,782 | <p>While the Spark behavior (the switch from <code>False</code> to <code>True</code>) is confusing here, there is nothing fundamentally wrong going on. The <code>nullable</code> argument is not a constraint but a reflection of the source and type semantics, which enables certain types of optimization.</p>
<p>You state that you want to avoid null values in your data. For this you should use the <code>na.drop</code> method.</p>
<pre><code>df.na.drop()
</code></pre>
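<p>If you only want to reject rows where a specific column is null, <code>na.drop</code> also accepts a <code>subset</code> argument:</p>
<pre><code>df.na.drop(subset=["col2"])  # drops only rows where col2 is null
</code></pre>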
<p>For other ways of handling nulls, please take a look at the <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameNaFunctions" rel="nofollow"><code>DataFrameNaFunctions</code></a> documentation (exposed via the <code>DataFrame.na</code> property). </p>
<p>The CSV format doesn't provide any tools for specifying data constraints, so by definition the reader cannot assume that the input is not null, and your data indeed contains nulls.</p>
| 1 | 2016-10-07T12:52:25Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
Implement delayed Slack slash response | 39,917,272 | <p>I want to implement a Slack slash command that has to run a function <code>pipeline</code>, which takes roughly 30 seconds to process. Since <a href="https://api.slack.com/slash-commands#responding_to_a_command" rel="nofollow">Slack</a> slash commands only allow 3 seconds to respond, how do I go about implementing this? I referred to <a href="http://stackoverflow.com/questions/34896954/how-to-avoid-slack-command-timeout-error">this</a> but don't know how to implement it. </p>
<p>Please bear with me; I am doing this for the first time.
This is what I have tried. I know how to respond with an <code>ok status</code> within 3 seconds, but I don't understand how to then call <code>pipeline</code>.</p>
<pre><code>import requests
import json
from bottle import route, run, request
from S3_download import s3_download
from index import main_func
@route('/action')
def action():
pipeline()
return "ok"
def pipeline():
s3_download()
p = main_func()
print (p)
if __name__ == "__main__":
run(host='0.0.0.0', port=8082, debug=True)
</code></pre>
<p>I came across <a href="https://claudiajs.com/tutorials/slack-delayed-responses.html" rel="nofollow">this</a> article. Is using AWS Lambda the only solution?
Can't we do this completely in Python?</p>
| 1 | 2016-10-07T12:25:19Z | 39,918,072 | <p>Something like this:</p>
<pre><code>from boto import sqs
@route('/action', method='POST')
def action():
    # retrieve the required request parameter, e.g. the response_url
params = request.forms.get('response_url')
sqs_queue = get_sqs_connection(queue_name)
message_object = sqs.message.Message()
message_object.set_body(params)
mail_queue.write(message_object)
return "request under process"
</code></pre>
<p>and you can have another process which processes the queue and call long running function:</p>
<pre><code>sqs_queue = get_sqs_connection(queue_name)
for sqs_msg in sqs_queue.get_messages(10, wait_time_seconds=5):
    processed_msg = json.loads(sqs_msg.get_body())  # recover the queued payload
    response = pipeline(processed_msg)              # the long-running work
    if response:
        sqs_queue.delete_message(sqs_msg)           # only delete once processing succeeded
</code></pre>
<p>You can run this second process in a separate standalone Python file, as a daemon process or via cron.</p>
<p>I've used the <a href="http://boto.cloudhackers.com/en/latest/ref/sqs.html" rel="nofollow">Amazon SQS queue</a> here, but there are different options available.</p>
| 2 | 2016-10-07T13:07:19Z | [
"python",
"response",
"slack-api",
"slack",
"slash"
] |
Implement delayed Slack slash response | 39,917,272 | <p>I want to implement a Slack slash command that has to run a function <code>pipeline</code> which takes roughly 30 seconds to process. Since <a href="https://api.slack.com/slash-commands#responding_to_a_command" rel="nofollow">Slack</a> slash commands only allow 3 seconds to respond, how do I go about implementing this? I referred to <a href="http://stackoverflow.com/questions/34896954/how-to-avoid-slack-command-timeout-error">this</a> but don't know how to implement it. </p>
<p>Please bear with me; I am doing this for the first time.
This is what I have tried. I know how to respond with <code>ok status</code> within 3 seconds, but I don't understand how to then call <code>pipeline</code>.</p>
<pre><code>import requests
import json
from bottle import route, run, request
from S3_download import s3_download
from index import main_func
@route('/action')
def action():
pipeline()
return "ok"
def pipeline():
s3_download()
p = main_func()
print (p)
if __name__ == "__main__":
run(host='0.0.0.0', port=8082, debug=True)
</code></pre>
<p>I came across <a href="https://claudiajs.com/tutorials/slack-delayed-responses.html" rel="nofollow">this</a> article. Is using AWS Lambda the only solution?
Can't we do this completely in Python?</p>
| 1 | 2016-10-07T12:25:19Z | 39,921,127 | <p>You have an option or two for doing this in a single process, but it's fraught with peril. If you spin up a new <code>Thread</code> to handle the long process, you might end up deploying or crashing in the middle and losing it.</p>
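<p>For illustration, a minimal sketch of the thread-based variant built on the question's bottle app (the <code>response_url</code> form field is what Slack provides for delayed replies; <code>pipeline</code> is the slow function from the question, here assumed to return something printable):</p>
<pre><code>import threading
import requests
from bottle import route, request

def run_and_reply(response_url):
    result = pipeline()  # the ~30 second job
    # post the delayed response back to Slack once the work is done
    requests.post(response_url, json={"text": "Done: %s" % result})

@route('/action', method='POST')
def action():
    response_url = request.forms.get('response_url')
    t = threading.Thread(target=run_and_reply, args=(response_url,))
    t.daemon = True  # won't block shutdown, but the work is lost if the process dies
    t.start()
    return "ok"  # returned within Slack's 3 second window
</code></pre>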
<p>If durability is important to you, look into background-task workers like SQS, Lambda, or even a Celery task queue backed with Redis. A separate task has some interesting failure modes, and these tools will help you deal with them better than just spawning a thread.</p>
| 0 | 2016-10-07T15:42:42Z | [
"python",
"response",
"slack-api",
"slack",
"slash"
] |
Why doesn't this regex work? | 39,917,569 | <p>Here is the code in question:</p>
<pre><code>import subprocess
import re
import os
p = subprocess.Popen(["nc -zv 8.8.8.8 53"], stdout=subprocess.PIPE, shell = True)
out, err = p.communicate()
regex = re.search("succeeded", out)
if not regex:
print ("test")
</code></pre>
<p>What I want it to do is print out "test" if the regex does not match the netcat output. Right now I'm only matching on "succeeded", but that's all I need because the netcat command prints out:</p>
<pre><code>Connection to 8.8.8.8 53 port [tcp/domain] succeeded!
</code></pre>
<p>The code runs fine, but it matches when it shouldn't. Why?</p>
| 0 | 2016-10-07T12:41:01Z | 39,917,623 | <p>The output is coming out on stderr, not stdout:</p>
<pre><code>stderr=subprocess.PIPE
</code></pre>
<p>You can simplify by using <code>in</code>, and you don't need <code>shell=True</code>:</p>
<pre><code>p = subprocess.Popen(["nc", "-zv", "8.8.8.8", "53"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if "succeeded" not in err:
print ("test")
</code></pre>
<p>You can also redirect <em>stderr</em> to <em>STDOUT</em> and use <code>check_output</code>, presuming you are using Python >= 2.7:</p>
<pre><code>out = subprocess.check_output(["nc", "-zv", "8.8.8.8", "53"],stderr=subprocess.STDOUT)
if "succeeded" not in out:
print ("test")
</code></pre>
| 3 | 2016-10-07T12:44:01Z | [
"python",
"regex",
"netcat"
] |
When trying to log in using requests module, I get an error because the User/Password fields are blank | 39,917,676 | <p>I've been trying to make a scraper to get my grades from my school's website. Unfortunately, I cannot log in. When I try to run the program, the returned page validates the user/password fields, and since they are blank, it's not letting me proceed.</p>
<p>Also, I am not really sure if I am even coding this correctly.</p>
<pre><code>from twill.commands import *
import requests
payload = {
'ctl00$cphMainContent$lgn$UserName':'user',
'ctl00$cphMainContent$lgn$Password':'pass',
}
cookie = {
'En_oneTime_ga_tracking_v2' : 'true',
'ASP.NET_SessionId' : ''
}
with requests.Session() as s:
p = s.post('schoolUrl', data=payload, cookies=cookie)
print p.text
</code></pre>
<hr>
<p>Updated payload:</p>
<pre><code>payload = {
'ctl00$cphMainContent$lgnEaglesNest$UserName':'user',
'ctl00$cphMainContent$lgnEaglesNest$Password':'pass',
'__LASTFOCUS': '',
'__EVENTTARGET':'',
'__EVENTARGUMENT':'',
'__VIEWSTATE': 'LONG NUMBER',
'__VIEWSTATEGENERATOR': 'C2EE9ABB',
'__EVENTVALIDATION' : 'LONG NUMBER',
'ctl00$cphMainContent$lgnEaglesNest$RememberMe': 'on',
'ctl00$cphMainContent$lgnEaglesNest$LoginButton':'Log+In'
}
</code></pre>
<p>How do I know if my POST was successful?</p>
<p>The returned page was saying that Username/Password cannot be blank.</p>
<hr>
<p>Complete source:</p>
<pre><code>from twill.commands import *
import requests
payload = {
'ctl00$cphMainContent$lgnEaglesNest$UserName':'user',
'ctl00$cphMainContent$lgnEaglesNest$Password':'pass',
'__LASTFOCUS': '',
'__EVENTTARGET':'',
'__EVENTARGUMENT':'',
'__VIEWSTATE': 'LONG NUMBER',
'__VIEWSTATEGENERATOR': 'C2EE9ABB',
'__EVENTVALIDATION' : 'LONG NUMBER',
'ctl00$cphMainContent$lgnEaglesNest$RememberMe': 'on',
'ctl00$cphMainContent$lgnEaglesNest$LoginButton':'Log In'
}
cookie = {
'En_oneTime_ga_tracking_v2' : 'true',
'ASP.NET_SessionId' : ''
}
with requests.Session() as s:
loginUrl = 'http://eaglesnest.pcci.edu/Login.aspx?ReturnUrl=%2f'
gradeUrl = 'http://eaglesnest.pcci.edu/StudentServices/ClassGrades/Default.aspx'
p = s.post( loginUrl, data=payload)
print p.text
</code></pre>
| 0 | 2016-10-07T12:47:16Z | 39,917,792 | <p>Your payload uses the wrong keys, try</p>
<pre><code>ctl00$cphMainContent$lgnEaglesNest$UserName
ctl00$cphMainContent$lgnEaglesNest$Password
</code></pre>
<p>You can check the names by watching the network traffic in your browser (e.g. in Firefox: inspect element --> network --> post --> params)</p>
<p>In addition you need to specify which command you want to perform, i.e. which button was pressed.</p>
<pre><code>payload['ctl00$cphMainContent$lgnEaglesNest$LoginButton'] = 'Log In'
</code></pre>
| 0 | 2016-10-07T12:52:53Z | [
"python",
"web-scraping",
"python-requests"
] |
When trying to log in using requests module, I get an error because the User/Password fields are blank | 39,917,676 | <p>I've been trying to make a scraper to get my grades from my school's website. Unfortunately, I cannot log in. When I try to run the program, the returned page validates the user/password fields, and since they are blank, it's not letting me proceed.</p>
<p>Also, I am not really sure if I am even coding this correctly.</p>
<pre><code>from twill.commands import *
import requests
payload = {
'ctl00$cphMainContent$lgn$UserName':'user',
'ctl00$cphMainContent$lgn$Password':'pass',
}
cookie = {
'En_oneTime_ga_tracking_v2' : 'true',
'ASP.NET_SessionId' : ''
}
with requests.Session() as s:
p = s.post('schoolUrl', data=payload, cookies=cookie)
print p.text
</code></pre>
<hr>
<p>Updated payload:</p>
<pre><code>payload = {
'ctl00$cphMainContent$lgnEaglesNest$UserName':'user',
'ctl00$cphMainContent$lgnEaglesNest$Password':'pass',
'__LASTFOCUS': '',
'__EVENTTARGET':'',
'__EVENTARGUMENT':'',
'__VIEWSTATE': 'LONG NUMBER',
'__VIEWSTATEGENERATOR': 'C2EE9ABB',
'__EVENTVALIDATION' : 'LONG NUMBER',
'ctl00$cphMainContent$lgnEaglesNest$RememberMe': 'on',
'ctl00$cphMainContent$lgnEaglesNest$LoginButton':'Log+In'
}
</code></pre>
<p>How do I know if my POST was successful?</p>
<p>The returned page was saying that Username/Password cannot be blank.</p>
<hr>
<p>Complete source:</p>
<pre><code>from twill.commands import *
import requests
payload = {
'ctl00$cphMainContent$lgnEaglesNest$UserName':'user',
'ctl00$cphMainContent$lgnEaglesNest$Password':'pass',
'__LASTFOCUS': '',
'__EVENTTARGET':'',
'__EVENTARGUMENT':'',
'__VIEWSTATE': 'LONG NUMBER',
'__VIEWSTATEGENERATOR': 'C2EE9ABB',
'__EVENTVALIDATION' : 'LONG NUMBER',
'ctl00$cphMainContent$lgnEaglesNest$RememberMe': 'on',
'ctl00$cphMainContent$lgnEaglesNest$LoginButton':'Log In'
}
cookie = {
'En_oneTime_ga_tracking_v2' : 'true',
'ASP.NET_SessionId' : ''
}
with requests.Session() as s:
loginUrl = 'http://eaglesnest.pcci.edu/Login.aspx?ReturnUrl=%2f'
gradeUrl = 'http://eaglesnest.pcci.edu/StudentServices/ClassGrades/Default.aspx'
p = s.post( loginUrl, data=payload)
print p.text
</code></pre>
| 0 | 2016-10-07T12:47:16Z | 39,919,164 | <p>First, make sure you are setting the cookies that you need to set correctly. I'd recommend setting them on the session-object:</p>
<pre><code>s = requests.Session()
s.cookies.set("key", "value", domain=".site.com") # the domain is optional
</code></pre>
<p>There are a couple of ways to inspect the results of your POST-request.
Just a few suggestions here:</p>
<pre><code>s = requests.Session()
p = s.post('schoolUrl', data=payload)
print p.text
print p.status_code
print p.cookies
print p.content  # note: responses expose .content/.text, not .body
print p.headers
</code></pre>
<p>A fact quickly missed is that the <code>requests</code> library silently follows redirects. The response-object will then only contain the last response from the redirect-chain. You can inspect previous responses thusly: <code>p.history</code>. Each element in the returned array is a <code>response</code>-object which you can inspect individually as described above.</p>
<p>Please also ensure that you are fully replicating the request payload, which you can see in Google Chrome's dev tools (the network tab) or the Firefox equivalent.</p>
<p>Could you perhaps post the URL of the site you're trying to log in to?</p>
| -1 | 2016-10-07T14:02:52Z | [
"python",
"web-scraping",
"python-requests"
] |
Add new version scikit-learn to spyder | 39,917,683 | <p>I have downloaded the newest version of scikit-learn (0.18), however, spyder keeps on using the previous version (0.17). How do I make the two compatible? </p>
<p>I have updated both distributions using Anaconda: </p>
<pre><code>conda update spyder
conda update scikit-learn
</code></pre>
| 0 | 2016-10-07T12:47:29Z | 39,918,040 | <p>You may have different versions of sklearn installed (via pip and conda), and Spyder may be using your /usr/bin/python instead of anaconda/bin/python: make sure that the interpreter selected in Spyder's preferences is the Anaconda one.</p>
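<p>One way to verify which interpreter and scikit-learn version Spyder is actually using is to run this in its console (a quick check, not a fix in itself):</p>
<pre><code>import sys, sklearn
print(sys.executable)       # should point into your anaconda installation
print(sklearn.__version__)  # should say 0.18 after the update
</code></pre>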
| 0 | 2016-10-07T13:06:04Z | [
"python",
"scikit-learn",
"anaconda",
"spyder"
] |
How to search and treat the result with Tweepy | 39,917,708 | <p>I'm looking for a module to run a search on Twitter (the search bar) and take the profile id/username of every profile linked to that search.</p>
<p>I have seen the Tweepy API; I think the answer I'm looking for is hidden in these 2 functions:
search_users
_lookup_users</p>
<pre><code>#!/usr/bin/env python
#-*-coding:utf-8-*-
import tweepy, time, sys
CONSUMER_KEY = '#'
CONSUMER_SECRET = '#'
ACCESS_KEY = '#'
ACCESS_SECRET = '#'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
liste2 = ["X", "Y", "Z"]
i = 0
while (liste2[i] != '\0'):
file = api.search.users(liste2[i])
print "passed"
i = i + 1
</code></pre>
<p>I still have nothing that lists all the profiles matching the search.</p>
<p>the doc : <a href="https://github.com/tweepy/tweepy/blob/master/tweepy/api.py" rel="nofollow">https://github.com/tweepy/tweepy/blob/master/tweepy/api.py</a></p>
<p>Thanks :)</p>
| 0 | 2016-10-07T12:48:24Z | 39,918,189 | <p>You linked to the source code, not to the doc. The doc can be found <a href="http://docs.tweepy.org/en/v3.5.0/api.html" rel="nofollow">here</a>; be careful though, as it is not very up-to-date.</p>
<p>You can just do:</p>
<pre><code>import tweepy
CONSUMER_KEY = "#"
CONSUMER_SECRET = "#"
ACCESS_KEY = "#"
ACCESS_SECRET = "#"
#Twitter credentials
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
#api = tweepy.API(auth_handler=auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
api = tweepy.API(auth)
liste2 = ["X", "Y", "Z"]
for element in liste2:
results = api.search_users(q=element)
#results now contains a lot of `user` objects, let's iterate through it to see the usernames
for user in results:
print(user.screen_name)
</code></pre>
| 0 | 2016-10-07T13:14:16Z | [
"python",
"tweepy"
] |
Pythonic way of writing multiple regex tests | 39,917,765 | <p>How to write in a Pythonic way when there are multiple regex patterns to test with and extract matched groups if a test succeeds?</p>
<p>That is to say, what is the Pythonic equivalent of the following code snippet?</p>
<pre><code>if re.match(pattern1, string):
m = re.match(pattern1, string)
grps = m.groups()
...[process matched groups for pattern1]...
elif re.match(pattern2, string):
m = re.match(pattern2, string)
grps = m.groups()
...[process matched groups for pattern2]...
elif re.match(pattern3, string):
m = re.match(pattern3, string)
grps = m.groups()
...[process matched groups for pattern3]...
</code></pre>
| 1 | 2016-10-07T12:51:29Z | 39,917,826 | <pre><code>patterns = [pattern1, pattern2, pattern3]
for pattern in patterns:
m = re.match(pattern, string)
if m:
grps = m.groups()
...
break
</code></pre>
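<p>If you also need to handle the case where none of the patterns match, Python's <code>for</code>/<code>else</code> fits naturally here; the <code>else</code> block runs only when the loop finished without hitting <code>break</code>:</p>
<pre><code>for pattern in patterns:
    m = re.match(pattern, string)
    if m:
        grps = m.groups()
        ...
        break
else:
    grps = None  # no pattern matched
</code></pre>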
| 4 | 2016-10-07T12:54:43Z | [
"python"
] |
Regex and returning two lists as a tuple | 39,917,965 | <p>I have this function that I would like to make more Pythonic.
The function itself explains what it is trying to achieve.</p>
<p>My concern is that I'm using two regular expressions, for <code>content</code> and <code>expected</code>, which gives room for errors to creep in; ideally these two variables could use the same regex.</p>
<p>input example:</p>
<pre><code>test_names = "tests[\"Status code: \" +responseCode.code] = responseCode.code === 200;\ntests[\"Schema validator GetChecksumReleaseEventForAll\"] = tv4.validate(data, schema);"
def custom_steps(self, test_names):
""" Extracts unique tests from postman collection """
content = re.findall(r'(?<=tests\[")(.*)(?::|"\])', test_names)
expected = re.findall(r'(?<=\] = )(.*)(?::|;)', test_names)
for i, er in enumerate(expected):
if "===" in er:
expected[i] = er[er.find('===')+4:]
else:
expected[i] = "true"
return content, expected
</code></pre>
| 1 | 2016-10-07T13:01:33Z | 39,918,365 | <p>You can match both groups at the same time:</p>
<pre><code>def custom_steps(self, test_names):
regex = 'tests\["(.*)(?::|"\]).* = (.+)(?::|;)'
for match in re.finditer(regex, test_names):
content, expected = match.groups()
if '===' in expected:
expected = expected[expected.index('===') + 4:]
else:
expected = 'true'
yield content, expected
</code></pre>
<p>This gives you a generator over pairs of <code>content</code>, <code>expected</code>:</p>
<pre><code>for c, e in custom_steps(None, test_names):
print c, e
</code></pre>
<p>Output:</p>
<pre><code>Status code 200
Schema validator GetChecksumReleaseEventForAll true
</code></pre>
| 1 | 2016-10-07T13:24:48Z | [
"python",
"regex",
"python-2.7"
] |
Django 1.9 to 1.10 raises NoReverseMatch: u'en-gb' is not a registered namespace | 39,917,987 | <p>I am trying to update my 1.9 application to 1.10 and I am getting the following error on running all my unit tests:</p>
<pre><code>Traceback (most recent call last): File "/home/…/tests/views/test_configurator.py", line 261, in test_view_configurator_post
args=[self.configurator.id]), File "/home/…/.virtualenvs/intranet/lib/python2.7/site-packages/django/urls/base.py", line 87, in reverse
raise NoReverseMatch("%s is not a registered namespace" % key) NoReverseMatch: 'en-gb' is not a registered namespace
</code></pre>
<p>My <code>settings.py</code> file contains the following:</p>
<pre><code>LANGUAGE_CODE = 'en-gb'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
TIME_ZONE = 'Europe/London'
</code></pre>
<p>What am I missing?</p>
| 2 | 2016-10-07T13:02:48Z | 39,918,791 | <p>Your <code>LANGUAGE_CODE</code> setting is set to <code>en_gb</code>. Notice the underscore character; it should be <code>en-gb</code>.</p>
| -1 | 2016-10-07T13:45:04Z | [
"python",
"django",
"internationalization"
] |
Understanding variable types, names and assignment | 39,917,988 | <p>In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.</p>
<p>So how can the Python interpreter distinguish between variables and give it the required space in memory like <code>int</code> or <code>float</code>?</p>
| 3 | 2016-10-07T13:02:58Z | 39,918,038 | <p>In Python all values are objects with built-in type info. Variables are references to these values. So their type is 'dynamic', just equal to the type of what they happen to refer to (point to) at a particular moment.</p>
<p>Whenever memory is allocated for the contents of a variable, a value is available. Since it has a type, the amount of memory needed is known.</p>
<p>The references (variables) themselves always occupy the same amount of memory, no matter what they point to, since they just contain a conceptual address.</p>
<p>This indeed means that in</p>
<pre><code>def f (x):
print (x)
</code></pre>
<p>x doesn't have a type, since it doesn't have a particular value yet.
The upside is that this is very flexible.
The downside is that the compiler has only limited means to discover errors.
For this reason Python was recently enriched with <a href="http://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5">type hints</a>.
Tools like <a href="http://mypy-lang.org/" rel="nofollow">mypy</a> allow static typechecking, even though the interpreter doesn't need it.
But the programmer sometimes does, especially at module boundaries (API's) when she's working in a team.</p>
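<p>For instance, a hinted version of the function above; the annotation is ignored at run time but can be checked statically:</p>
<pre><code>def f(x: int) -> None:
    print(x)  # a checker like mypy now flags calls such as f("hello")
</code></pre>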
| 7 | 2016-10-07T13:06:02Z | [
"python"
] |
Understanding variable types, names and assignment | 39,917,988 | <p>In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.</p>
<p>So how can the Python interpreter distinguish between variables and give it the required space in memory like <code>int</code> or <code>float</code>?</p>
| 3 | 2016-10-07T13:02:58Z | 39,918,089 | <p>Python is a dynamically typed language, which means that the types of variables are decided at run time. As a result, the Python interpreter will determine each variable's type (at run time) and allocate exactly the space in memory it needs. Despite being dynamically typed, Python is strongly typed, forbidding operations that are not well-defined (for example, adding a number to a string). </p>
<p>On the other hand, C and C++ are statically typed languages, which means that the types of variables are known at compile time.</p>
<p>Dynamic typing has the advantage of giving the language more flexibility; for example, we can have lists containing different types (say, a list that contains chars and integers). This wouldn't be possible with static typing, since the type of the list would have to be known at compile time.<br>
One disadvantage of dynamic typing is that the interpreter in many cases must keep a record of types in order to determine the types of variables, which makes it slower in comparison with C or C++.</p>
<p>A dynamically typed language like Python can also be strongly typed. Python is strongly typed because the interpreter keeps track of all variables' types and is restrictive about how types can be intermingled.</p>
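<p>A short interpreter session illustrates both points (output from CPython 3; the exact error wording varies between versions):</p>
<pre><code>>>> mixed = [1, 'a', 3.0]   # different types in one list: allowed
>>> 1 + 'a'                 # adding a number to a string: rejected
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>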
| 2 | 2016-10-07T13:08:05Z | [
"python"
] |
Understanding variable types, names and assignment | 39,917,988 | <p>In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.</p>
<p>So how can the Python interpreter distinguish between variables and give it the required space in memory like <code>int</code> or <code>float</code>?</p>
| 3 | 2016-10-07T13:02:58Z | 39,918,342 | <p>Dynamically typed languages typically use a boxed representation, which includes runtime type information. E.g. instead of storing direct pointers to a value, the system uses a box struct that contains the value (or a pointer to it) as well as some additional meta-information. You can see how the standard Python implementation does it here: <a href="https://github.com/python/cpython/blob/master/Include/object.h">https://github.com/python/cpython/blob/master/Include/object.h</a></p>
<p>There are some interesting tricks that can be employed here. For instance, one technique is value tagging, where the type description is stored as part of the value itself, utilising unused bytes. For instance, pointers on current x86-64 CPUs can't utilise the full address space, which gives you some bits to play with. Another variant of this technique is NaN-tagging (I believe this was first used by Mike Pall, author of LuaJIT) - where all values are stored as doubles, and various NaN states of the value signal that it is actually a pointer or some other type of data. </p>
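<p>You can observe the boxing overhead from Python itself; the exact sizes are implementation- and platform-dependent (the comments assume 64-bit CPython):</p>
<pre><code>import sys
print(sys.getsizeof(0))    # ~24 bytes: refcount + type pointer + the integer value
print(sys.getsizeof(0.0))  # ~24 bytes: the same boxed header around a C double
</code></pre>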
| 6 | 2016-10-07T13:23:45Z | [
"python"
] |
Understanding variable types, names and assignment | 39,917,988 | <p>In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.</p>
<p>So how can the Python interpreter distinguish between variables and give it the required space in memory like <code>int</code> or <code>float</code>?</p>
| 3 | 2016-10-07T13:02:58Z | 39,918,601 | <p>The Python interpreter analyzes each variable when the program runs. Before running, it doesn't know whether you've got an integer, a float, or a string in any of your variables.</p>
<p>Coming from a statically typed language background (Java in my case), it feels a bit unusual. Dynamic typing saves you a lot of time and lines of code in large scripts, since you never have to declare a variable's type up front. However, static typing lets you have more control over how data is stored in a computer's memory.</p>
| 1 | 2016-10-07T13:35:49Z | [
"python"
] |
Pandas dataframe rearrangement stack to two value columns (for factorplots) | 39,918,053 | <p>I have been trying to rearrange my dataframe to use it as input for a factorplot. The raw data would look like this:</p>
<pre><code> A B C D
1 0 1 2 "T"
2 1 2 3 "F"
3 2 1 0 "F"
4 1 0 2 "T"
...
</code></pre>
<p>My question is how can I rearrange it into this form:</p>
<pre><code> col val val2
1 A 0 "T"
1 B 1 "T"
1 C 2 "T"
2 A 1 "F"
...
</code></pre>
<p>I was trying:</p>
<pre><code>df = DF.cumsum(axis=0).stack().reset_index(name="val")
</code></pre>
<p>However, this produces only one value column, not two. Thanks for your support.</p>
| 3 | 2016-10-07T13:06:39Z | 39,918,391 | <p>consider your dataframe <code>df</code></p>
<pre><code>df = pd.DataFrame([
[0, 1, 2, 'T'],
[1, 2, 3, 'F'],
[2, 1, 3, 'F'],
[1, 0, 2, 'T'],
], [1, 2, 3, 4], list('ABCD'))
</code></pre>
<p><a href="http://i.stack.imgur.com/XtqBK.png" rel="nofollow"><img src="http://i.stack.imgur.com/XtqBK.png" alt="enter image description here"></a></p>
<p><strong><em>solution</em></strong></p>
<pre><code>df.set_index('D', append=True) \
.rename_axis(['col'], 1) \
.rename_axis([None, 'val2']) \
.stack().to_frame('val') \
.reset_index(['col', 'val2']) \
[['col', 'val', 'val2']]
</code></pre>
<p><a href="http://i.stack.imgur.com/RRNtS.png" rel="nofollow"><img src="http://i.stack.imgur.com/RRNtS.png" alt="enter image description here"></a></p>
| 2 | 2016-10-07T13:25:56Z | [
"python",
"pandas",
"stack",
"cumsum"
] |
Pandas dataframe rearrangement stack to two value columns (for factorplots) | 39,918,053 | <p>I have been trying to rearrange my dataframe to use it as input for a factorplot. The raw data would look like this:</p>
<pre><code> A B C D
1 0 1 2 "T"
2 1 2 3 "F"
3 2 1 0 "F"
4 1 0 2 "T"
...
</code></pre>
<p>My question is how can I rearrange it into this form:</p>
<pre><code> col val val2
1 A 0 "T"
1 B 1 "T"
1 C 2 "T"
2 A 1 "F"
...
</code></pre>
<p>I was trying:</p>
<pre><code>df = DF.cumsum(axis=0).stack().reset_index(name="val")
</code></pre>
<p>However, this produces only one value column, not two. Thanks for your support.</p>
| 3 | 2016-10-07T13:06:39Z | 39,918,511 | <p>I would use melt, and you can sort it however you like:</p>
<pre><code>pd.melt(df.reset_index(),id_vars=['index','D'], value_vars=['A','B','C']).sort_values(by='index')
Out[40]:
index D variable value
0 1 T A 0
4 1 T B 1
8 1 T C 2
1 2 F A 1
5 2 F B 2
9 2 F C 3
2 3 F A 2
6 3 F B 1
10 3 F C 0
3 4 T A 1
7 4 T B 0
11 4 T C 2
</code></pre>
<p>Then you can obviously rename the columns as you like:</p>
<pre><code>df.set_index('index').rename(columns={'D': 'col', 'variable': 'val2', 'value': 'val'})
</code></pre>
| 3 | 2016-10-07T13:31:29Z | [
"python",
"pandas",
"stack",
"cumsum"
] |
What is the pythonic way to skip parent method? | 39,918,071 | <pre><code>class A:
def open_spider(self, spider):
        pass  # do some hacking
class B(A):
def open_spider(self, spider):
super(B, self).open_spider(spider)
#something else
</code></pre>
<p>Now I want C to call A's method but not B's, which can be done at least in two ways:</p>
<pre><code> class C(B):
def open_spider(self, spider):
A.open_spider(self, spider)
#do things
class C(B):
def open_spider(self, spider):
super(B, self).open_spider(spider)
#do things
</code></pre>
| 1 | 2016-10-07T13:07:19Z | 39,918,219 | <p>Use the second method; this is one reason why <code>super</code> <em>takes</em> a class as its first argument. </p>
| 0 | 2016-10-07T13:15:42Z | [
"python",
"inheritance"
] |
What is the pythonic way to skip parent method? | 39,918,071 | <pre><code>class A:
def open_spider(self, spider):
#do some hacking
class B(A):
def open_spider(self, spider):
super(B, self).open_spider(spider)
#something else
</code></pre>
<p>Now I want C to call A's method but not B's, which can be done at least in two ways:</p>
<pre><code> class C(B):
def open_spider(self, spider):
A.open_spider(self, spider)
#do things
class C(B):
def open_spider(self, spider):
super(B, self).open_spider(spider)
#do things
</code></pre>
| 1 | 2016-10-07T13:07:19Z | 39,918,526 | <p>You can do it using the second way. But I must say that a design where a child skips the parent's method and calls the grandparent's instead is a warning sign; you should take a look at your design once again and think about it.</p>
| 3 | 2016-10-07T13:32:16Z | [
"python",
"inheritance"
] |
Pandas, filter max values by row and columns | 39,918,262 | <p>I have this dataframe.</p>
<pre><code>In [6]: df
Out[6]:
Beam Pos Comb As
0 B1 1 1 3
1 B1 1 1 2
2 B1 2 1 5
3 B1 2 1 8
4 B1 1 2 10
5 B1 1 2 1
6 B1 2 2 3
7 B1 2 2 2
8 B2 1 1 1
9 B2 1 1 2
10 B2 2 1 5
11 B2 2 1 6
12 B2 1 2 8
13 B2 1 2 1
14 B2 2 2 3
15 B2 2 2 2
</code></pre>
<p>I need to get the max value per beam and position, searching across different combinations.</p>
<pre><code> Beam Pos Comb As
0 B1 1 2 10
1 B1 2 1 8
2 B2 1 2 8
3 B2 2 1 6
</code></pre>
<p>I can't figure out how I can compare the "As" values for beam, position and combination.</p>
<p>Perhaps grouping by beam and position and then getting the max value?</p>
<p>Any help will be appreciated </p>
| -1 | 2016-10-07T13:18:21Z | 39,918,343 | <p>You have to use the <code>groupby</code> method on the three key columns:</p>
<pre><code>d = df.groupby(by= [ "Beam", "Pos", "Comb"])
g=d.agg({"As":"max"})
g.reset_index(inplace=True)
</code></pre>
<p>The first line groups together the items that have the same <code>(Beam, Pos, Comb)</code> key,
the second line selects the max <code>As</code> in each group, and <code>reset_index</code> turns the group keys back into ordinary columns.</p>
| 2 | 2016-10-07T13:23:47Z | [
"python",
"pandas",
"filter",
"group"
] |
Pandas, filter max values by row and columns | 39,918,262 | <p>I have this dataframe.</p>
<pre><code>In [6]: df
Out[6]:
Beam Pos Comb As
0 B1 1 1 3
1 B1 1 1 2
2 B1 2 1 5
3 B1 2 1 8
4 B1 1 2 10
5 B1 1 2 1
6 B1 2 2 3
7 B1 2 2 2
8 B2 1 1 1
9 B2 1 1 2
10 B2 2 1 5
11 B2 2 1 6
12 B2 1 2 8
13 B2 1 2 1
14 B2 2 2 3
15 B2 2 2 2
</code></pre>
<p>I need to get the max value per beam and position, searching across different combinations.</p>
<pre><code> Beam Pos Comb As
0 B1 1 2 10
1 B1 2 1 8
2 B2 1 2 8
3 B2 2 1 6
</code></pre>
<p>I can't figure out how I can compare the "As" values for beam, position and combination.</p>
<p>Perhaps grouping by beam and position and then getting the max value?</p>
<p>Any help will be appreciated </p>
| -1 | 2016-10-07T13:18:21Z | 39,918,558 | <p>How about this?</p>
<pre><code>groups = df.groupby(by=['Beam', 'Pos'])
idx = []
for group in groups:
    idx += [group[1].As.argmax()]  # index label of the row with the largest As in this group
</code></pre>
<p>Then show
<code>df.iloc[idx]</code>.</p>
| 0 | 2016-10-07T13:33:33Z | [
"python",
"pandas",
"filter",
"group"
] |
Pandas, filter max values by row and columns | 39,918,262 | <p>I have this dataframe.</p>
<pre><code>In [6]: df
Out[6]:
Beam Pos Comb As
0 B1 1 1 3
1 B1 1 1 2
2 B1 2 1 5
3 B1 2 1 8
4 B1 1 2 10
5 B1 1 2 1
6 B1 2 2 3
7 B1 2 2 2
8 B2 1 1 1
9 B2 1 1 2
10 B2 2 1 5
11 B2 2 1 6
12 B2 1 2 8
13 B2 1 2 1
14 B2 2 2 3
15 B2 2 2 2
</code></pre>
<p>I need to get the max value per beam and position, searching across different combinations.</p>
<pre><code> Beam Pos Comb As
0 B1 1 2 10
1 B1 2 1 8
2 B2 1 2 8
3 B2 2 1 6
</code></pre>
<p>I can't figure out how I can compare the "As" values for beam, position and combination.</p>
<p>Perhaps grouping by beam and position and then getting the max value?</p>
<p>Any help will be appreciated </p>
| -1 | 2016-10-07T13:18:21Z | 39,922,599 | <p>You should be using the native pandas functions instead of reinventing the wheel; this makes it extremely simple and easy to remember:</p>
<pre><code>df.groupby(['Beam', 'Pos', 'Comb']).max()
</code></pre>
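<p>If you want the grouping keys back as ordinary columns, matching the expected output layout, reset the index afterwards:</p>
<pre><code>df.groupby(['Beam', 'Pos', 'Comb'])['As'].max().reset_index()
</code></pre>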
| 0 | 2016-10-07T17:07:21Z | [
"python",
"pandas",
"filter",
"group"
] |
cleaning a txt. Best way to delete recurring beginning code line | 39,918,322 | <p>I have to clean a csv file that has this structure:</p>
<pre>Schema Compare Sync Script
06/10/2016 11:05:03 Page 1
1 --------------------------------------------------------------------------
2 -- Play this script in ASIA@COG2 to make it look like ASIA@TSTCOG2
3 --
4 -- Please review the script before using it to make sure it won't
5 -- cause any unacceptable data loss.
---
---
14 Set define off;
15
16 ALTER TABLE ASIA_MART.FDM_INVOICE
17 MODIFY(I_STATUS VARCHAR2(32 CHAR));
--
--
Schema Compare Sync Script
06/10/2016 11:05:03 Page 2
76 ACCRUED_GP FLOAT(126) NOT NULL,
77 ACCRUALS_CREATE_BY VARCHAR2(64 CHAR) NOT NULL,
--
--
150 MINEXTENTS 1
Schema Compare Sync Script
06/10/2016 11:05:03 Page 3
151 MAXEXTENTS UNLIMITED
</pre>
<p>So the my goal is to keep in another file only the SQL code without any comment or line number.</p>
<p>So far I been able to eliminate the first comment part, that end withe the flag
string "Set define off", and also catch the others like "Schema Compare Sync Script" would it be a problem. The challenge for me is been so far catch the line and eliminate.
Actually my code produce a list from the line 15, but also a strange recurration of the date.</p>
<p>First I am quite sure it is not the best code even, so suggestion are more then welcome, and if someone has an idea of how get ride of the number line, would me more then appreciate.</p>
<p>Here my code:</p>
<pre><code>import re
from itertools import dropwhile
flag = 'Set define off'
found = False
buff = []
with open("delta.txt", "r") as infile, open('delta_fil2.txt', 'w') as outfile:
searchlines = infile.readlines()
for i, line in enumerate(searchlines):
if flag in line :
found = True
if found:
#iterate over the list after the flag and attach to the list buff
for l in searchlines[i:i+1]:
buff.append(searchlines[i+2:len(searchlines)])
else:
searchlines.remove(line)
#generator to append a list of string to the list values = ','.join(str(v) for v in buff)
for i, line in enumerate(searchlines):
for line in dropwhile(lambda line: line.startswith(r'\d+'), searchlines):
buff.append(searchlines[i])
outfile.write(''.join(str(v) for v in buff))
</code></pre>
| 0 | 2016-10-07T13:22:39Z | 39,919,001 | <p>Digits at start of line can be used for filtering:</p>
<pre><code>with open("delta.txt", "r") as infile, open('delta_fil2.txt', 'w') as outfile:
for line in infile:
sline = line.split(" ")
if len(sline) < 2 : continue
if sline[0].isdigit() and sline[1] != "--":
outfile.write(line[len(sline[0])+1:])
</code></pre>
| 0 | 2016-10-07T13:54:35Z | [
"python",
"parsing",
"itertools"
] |
How to use the subprocess.check_output command correctly | 39,918,331 | <p>I want to execute the shell script <code>generateLicense.sh</code> with different arguments. I do it like that:</p>
<pre><code>License = subprocess.check_output(['./generateLicense.sh -firstargument 1 -secondargument 2 -thirdargument 3'])
</code></pre>
<p>The shell script is in the same folder as the file that starts the shell script, but I always get this error:</p>
<pre><code>OSError: [Errno 2] No such file or directory
</code></pre>
| 0 | 2016-10-07T13:23:02Z | 39,918,409 | <p>Using <a href="https://docs.python.org/3/library/subprocess.html#subprocess.check_output" rel="nofollow">check_output</a> the way you are trying to, each argument has to be a separate item in the list you are passing. So, what you should have is: </p>
<pre><code>License = subprocess.check_output(['./generateLicense.sh', '-firstargument', '1', '-secondargument', '2', '-thirdargument', '3'])
</code></pre>
<p>Here is a replication of your problem using a different command to showcase what is happening when you pass a multi-argument command as a single string in the list being passed to <code>check_output</code>: </p>
<pre><code>>>> import subprocess
>>> subprocess.check_output(["ls -a"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 566, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
</code></pre>
<p>Doing this in Python 3 you will actually get something more coherent: </p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'ls -a'
</code></pre>
| 0 | 2016-10-07T13:26:37Z | [
"python",
"shell",
"subprocess"
] |
Split string based on number of commas | 39,918,386 | <p>I have a text which is split with commas.</p>
<p>e.g.:</p>
<pre><code>FOO( something, BOO(tmp, temp), something else)
</code></pre>
<p>It could be that <em>something else</em> contains a string with commas as well...</p>
<p>I would like to split the text inside the brackets of FOO into its elements and then parse the elements.</p>
<p>What I do know is that <em>FOO</em> must have two commas.</p>
<p>How could I split the content of <em>FOO</em> into its three elements?</p>
<p><strong>Remark:</strong> <em>something else</em> could be <em>BOO(ddd, ddd)</em> or simply <em>ddd</em>. I cannot assume a simple regex rule of <em>'FOO\(\w+, BOO(\w+, \w+), \w+\)'</em></p>
| 0 | 2016-10-07T13:25:49Z | 39,918,505 | <p>Take a look at the python CSV library: </p>
<p><a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a> (python 2)</p>
<p><a href="https://docs.python.org/3/library/csv.html" rel="nofollow">https://docs.python.org/3/library/csv.html</a> (python 3)</p>
<p>It's a good way of manipulating text files; I have used it for files that have tab delimiters (\t) rather than commas.</p>
<p>A similar problem, with some other examples, can be found here:
<a href="http://stackoverflow.com/questions/20599233/splitting-comma-delimited-strings-in-python?rq=1">Splitting comma delimited strings in python</a></p>
<p>I just tested the code from that page; it seems to be able to split your text into three parts if you remove the 'FOO(' prefix from the string.</p>
| -1 | 2016-10-07T13:31:16Z | [
"python",
"regex"
] |
Split string based on number of commas | 39,918,386 | <p>I have a text which is split with commas.</p>
<p>e.g.:</p>
<pre><code>FOO( something, BOO(tmp, temp), something else)
</code></pre>
<p>It could be that <em>something else</em> contains a string with commas as well...</p>
<p>I would like to split the text inside the brackets of FOO into its elements and then parse the elements.</p>
<p>What I do know is that <em>FOO</em> must have two commas.</p>
<p>How could I split the content of <em>FOO</em> into its three elements?</p>
<p><strong>Remark:</strong> <em>something else</em> could be <em>BOO(ddd, ddd)</em> or simply <em>ddd</em>. I cannot assume a simple regex rule of <em>'FOO\(\w+, BOO(\w+, \w+), \w+\)'</em></p>
| 0 | 2016-10-07T13:25:49Z | 39,918,942 | <p>Assuming that the string is Python code, you can use <strong>parser</strong> for this. If you look carefully at the result, you might agree that it's not as bad as it first appears to be.</p>
<pre><code>>>> from parser import *
>>> source="FOO( something, BOO(tmp, temp), something)"
>>> st=suite(source)
>>> st2tuple(st)
(257, (268, (269, (270, (271, (272, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'FOO')), (322, (7, '('), (330, (331, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'something')))))))))))))))), (12, ','), (331, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'BOO')), (322, (7, '('), (330, (331, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'tmp')))))))))))))))), (12, ','), (331, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'temp'))))))))))))))))), (8, ')')))))))))))))))), (12, ','), (331, (302, (306, (307, (308, (309, (312, (313, (314, (315, (316, (317, (318, (319, (320, (1, 'something'))))))))))))))))), (8, ')')))))))))))))))))), (4, ''))), (4, ''), (0, ''))
</code></pre>
| 0 | 2016-10-07T13:51:47Z | [
"python",
"regex"
] |
Split string based on number of commas | 39,918,386 | <p>I have a text which is split with commas.</p>
<p>e.g.:</p>
<pre><code>FOO( something, BOO(tmp, temp), something else)
</code></pre>
<p>It could be that <em>something else</em> contains a string with commas as well...</p>
<p>I would like to split the text inside the brackets of FOO into its elements and then parse the elements.</p>
<p>What I do know is that <em>FOO</em> must have two commas.</p>
<p>How could I split the content of <em>FOO</em> into its three elements?</p>
<p><strong>Remark:</strong> <em>something else</em> could be <em>BOO(ddd, ddd)</em> or simply <em>ddd</em>. I cannot assume a simple regex rule of <em>'FOO\(\w+, BOO(\w+, \w+), \w+\)'</em></p>
| 0 | 2016-10-07T13:25:49Z | 39,918,951 | <p>You may use this regex</p>
<pre><code>,(?=(?:(?:\([^)]*\))?[^)]*)+\)$)
</code></pre>
<p>to split your string on the commas, but not inside BOO(...).</p>
<p><a href="https://eval.in/657163" rel="nofollow">sample</a></p>
| 0 | 2016-10-07T13:52:14Z | [
"python",
"regex"
] |
Split string based on number of commas | 39,918,386 | <p>I have a text which is split with commas.</p>
<p>e.g.:</p>
<pre><code>FOO( something, BOO(tmp, temp), something else)
</code></pre>
<p>It could be that <em>something else</em> contains a string with commas as well...</p>
<p>I would like to split the text inside the brackets of FOO into its elements and then parse the elements.</p>
<p>What I do know is that <em>FOO</em> must have two commas.</p>
<p>How could I split the content of <em>FOO</em> into its three elements?</p>
<p><strong>Remark:</strong> <em>something else</em> could be <em>BOO(ddd, ddd)</em> or simply <em>ddd</em>. I cannot assume a simple regex rule of <em>'FOO\(\w+, BOO(\w+, \w+), \w+\)'</em></p>
| 0 | 2016-10-07T13:25:49Z | 39,919,721 | <p>You can do it with the <a href="https://pypi.python.org/pypi/regex" rel="nofollow">regex module</a> that supports recursion (useful to deal with nested structures):</p>
<pre><code>import regex
s = 'FOO( something, BOO(tmp, temp), something else)'
pat = regex.compile(r'''(?(DEFINE) # inside a definition group
# you can define subpatterns to use later
(?P<elt> # define the subpattern "elt"
[^,()]*+
(?:
\( (?&elt) (?: , (?&elt) )* \)
[^,()]*
)*+
)
)
# start of the main pattern
FOO\( \s*
(?P<elt1> (?&elt) ) # capture group "elt1" contains the subpattern "elt"
, \s*
(?P<elt2> (?&elt) ) # same here
, \s*
(?P<elt3> (?&elt) ) # etc.
\)''', regex.VERSION1 | regex.VERBOSE )
m = pat.search(s)
print(m.group('elt1'))
print(m.group('elt2'))
print(m.group('elt3'))
</code></pre>
<p><a href="https://regex101.com/r/eMtX6U/1" rel="nofollow">demo</a></p>
| 0 | 2016-10-07T14:30:10Z | [
"python",
"regex"
] |
Split string based on number of commas | 39,918,386 | <p>I have a text which is split with commas.</p>
<p>e.g.:</p>
<pre><code>FOO( something, BOO(tmp, temp), something else)
</code></pre>
<p>It could be that <em>something else</em> contains a string with commas as well...</p>
<p>I would like to split the text inside the brackets of FOO into its elements and then parse the elements.</p>
<p>What I do know is that <em>FOO</em> must have two commas.</p>
<p>How could I split the content of <em>FOO</em> into its three elements?</p>
<p><strong>Remark:</strong> <em>something else</em> could be <em>BOO(ddd, ddd)</em> or simply <em>ddd</em>. I cannot assume a simple regex rule of <em>'FOO\(\w+, BOO(\w+, \w+), \w+\)'</em></p>
| 0 | 2016-10-07T13:25:49Z | 39,922,023 | <p>Assuming that you need a list of the elements inside <code>FOO</code>, pre-process it first:</p>
<pre><code>>>> s = 'FOO( something, BOO(tmp, temp), something else)'
>>> s
'FOO( something, BOO(tmp, temp), something else)'
>>> s = re.sub(r'^[^(]+\(|\)\s*$','',s)
>>> s
' something, BOO(tmp, temp), something else'
</code></pre>
<p>Using <a href="https://pypi.python.org/pypi/regex" rel="nofollow">regex</a> module:</p>
<pre><code>>>> regex.split(r'[^,(]+\([^)]+\)(*SKIP)(?!)|,', s)
[' something', ' BOO(tmp, temp)', ' something else']
</code></pre>
<ul>
<li><code>[^,(]+\([^)]+\)(*SKIP)(?!)</code> to skip the pattern <code>[^,(]+\([^)]+\)</code></li>
<li><code>|,</code> alternate pattern to actually split the input string, in this case it is <code>,</code></li>
</ul>
<p><br>
another example:</p>
<pre><code>>>> t = 'd(s,sad,e),g(3,2),c(d)'
>>> regex.split(r'[^,(]+\([^)]+\)(*SKIP)(?!)|,', t)
['d(s,sad,e)', 'g(3,2)', 'c(d)']
</code></pre>
| 0 | 2016-10-07T16:32:14Z | [
"python",
"regex"
] |
Seaborn/Matplotlib - Only Showing Certain X Values in FacetGrid | 39,918,424 | <p>I am trying to create a facet of charts that show total scores over time, in seconds. X axis is the time in seconds and y axis is the total score.</p>
<p>As you can see, I am restricting the output to 2 1/2 minutes via xlim. </p>
<p>What I would like to do is only show values on the x axis every 30 seconds (i.e. 30, 60, 90, 120, 150). I still want to show the values of (for example) 10 and 15 seconds on the chart, just not in the labels on the x axis.</p>
<p>How do I modify the code below to do this? I've been trying various forms of xticks and xlabel but just can't figure it out; Google has not been my friend either.</p>
<p>Would really appreciate some help</p>
<p>Thanks</p>
<pre><code>df = pd.DataFrame({'Subject': ['Math', 'Math', 'Math','Math', 'Math', 'Math', 'Math','Math', 'Math', 'Math','English', 'English', 'English','English', 'English', 'English', 'English','English', 'English', 'English'],
'timeinseconds': [1, 10, 15, 30, 45, 60, 90, 120, 140, 150,1, 10, 15, 30, 45, 60, 90, 120, 140, 150],
'totalscore': [.2, .3, .4, .37, .45, .55, .60, .54, .63, .72,
.4, .34, .23, .52, .56, .59, .63, .66, .76, .82]})
g = sns.FacetGrid(df, col="Subject", col_wrap=5, size=3.5, ylim=(0, 1),xlim=(0,360))
g = g.map(sns.pointplot, "timeinseconds", "totalscore", scale=.7)
</code></pre>
| 1 | 2016-10-07T13:27:10Z | 39,921,257 | <p>This will solve your problem:</p>
<pre><code>g = (g.map(sns.pointplot, "timeinseconds", "totalscore", scale=.7)
.set(xticks=[3, 5, 6, 7, 9], xticklabels=[30, 60, 90, 120, 150]))
</code></pre>
<p>The <code>xticks</code> indicate the positions where you want to place the labels (numbered from <code>0</code> to <code>n-1</code> where <code>n</code> is the original number of ticks), and <code>xticklabels</code> the actual labels. </p>
<p>You can certainly find a way to do it with less hard-coding.</p>
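<p>For example, the tick positions could be derived from the data instead (a sketch; it relies on <code>pointplot</code> placing the sorted unique time values at categorical positions 0..n-1, which is how the hard-coded <code>[3, 5, 6, 7, 9]</code> above was obtained):</p>
<pre><code>ticks = sorted(df['timeinseconds'].unique())
wanted = [t for t in ticks if t % 30 == 0]   # 30, 60, 90, 120, 150
g.set(xticks=[ticks.index(t) for t in wanted], xticklabels=wanted)
</code></pre>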
| 1 | 2016-10-07T15:49:35Z | [
"python",
"pandas",
"matplotlib",
"seaborn",
"facet"
] |
How to print a div data-reactid? | 39,918,436 | <p>I'm doing a project in my spare time where I have hit a problem with getting data from a webpage into the program.</p>
<p>This is my current code:</p>
<pre><code>import urllib
import re
htmlfile = urllib.urlopen("http://www.superliga.dk/klub/aab?sub=squad")
htmltext = htmlfile.read()
regex = r'<div data-reactid=".3.$squad content.0.$=11:0.0.0.0.1:0.2.0.0">([^<]*)</div>'
pattern = re.compile(regex)
goal = re.findall(pattern,htmltext)
print goal
</code></pre>
<p>And it's working okay except this part:</p>
<pre><code>regex = r'<div data-reactid=".3.$squad content.0.$=11:0.0.0.0.1:0.2.0.0">([^<]*)</div>'
</code></pre>
<p>I can't make it display all values on the webpage with this <code>reactid</code>, and I can't find any solution to this problem.
Any suggestions on how I can get Python to print it?</p>
| 2 | 2016-10-07T13:27:44Z | 39,918,761 | <p>You are trying to match a tag you saw in the developer console of your browser, right?
Unfortunately the html you saw is only the "final form" of a dynamic page: what you downloaded with <code>urlopen</code> is only the skeleton of the webpage, which in the browser is then dynamically filled with other elements by the javascript, using data fetched from some backend server.</p>
<p>If you try to print the actual value stored in <code>htmltext</code> you will find nothing like what you are trying to match with the regex, and that's because it missed all the further processing normally performed by the javascript.</p>
<p>What you can try to do is monitor (through the dev console) the fetched resources and reverse-engineer the API calls in order to recover the desired info. Chances are the response of these API calls is in JSON format or has a structure far more easily parsable than the html body.</p>
<p><strong>UPDATE</strong>: for example, in Chrome's dev tools I can see async calls like:</p>
<p><code>http://ss2.tjekscores.dk/pro-stats/tournaments/46/top-players?sortBy=eventsStats.goals&limit=5&skip=0&positionId=&q=&seasonId=10392&teamId[]=8470</code></p>
<p>Maybe this returns the info you are looking for.</p>
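<p>If that endpoint does return JSON, fetching it directly is straightforward (a sketch; the exact query parameters would need to be adapted from what the dev tools show):</p>
<pre><code>import requests
url = ("http://ss2.tjekscores.dk/pro-stats/tournaments/46/top-players"
       "?sortBy=eventsStats.goals&limit=5&skip=0&seasonId=10392&teamId[]=8470")
data = requests.get(url).json()
</code></pre>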
| 1 | 2016-10-07T13:43:43Z | [
"python",
"html",
"regex"
] |
Python Pandas_How to select data after using drop_duplicates()? | 39,918,483 | <p>I am learning Python pandas for processing data. </p>
<ol>
<li>I first use the <code>drop_duplicates()</code> method to treat <code>db_new</code> and get <code>a</code>;</li>
<li>Then I'd like to find what kind of data is in a, using <code>print</code>;</li>
<li>I try to find whether a given value is in a using <code>for...in</code>, but I found that even data that is in a cannot be found that way. Why?</li>
</ol>
<p><a href="http://i.stack.imgur.com/Z60lq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z60lq.png" alt="I found 9 is not in a"></a></p>
<pre><code>a = db_new.iloc[:i,4:5].drop_duplicates()
print a
for x in a:
    print x
</code></pre>
<ol start="4">
<li>I try to use <code>for in</code> to find what I can get in a. I only get <code>E</code>, which is the column index. Do you know why this happens?</li>
</ol>
| -2 | 2016-10-07T13:30:15Z | 39,919,475 | <p>Here <code>a</code> is a dataframe, so when you iterate over <code>a</code> you iterate over column names, hence the result, <code>E</code>.</p>
<p>If you want to iterate over values, you need to make <code>a</code> a series, which you can do using <code>squeeze</code>:</p>
<pre><code>for x in a.squeeze():
print x
</code></pre>
| 1 | 2016-10-07T14:19:34Z | [
"python",
"pandas"
] |
BeautifulSoup: get text from some tag | 39,918,563 | <p>I have data</p>
<pre><code><span class="label">ÐÑивод:</span> пеÑедний<br/>
<span class="label">Тип кÑзова:</span> Ñедан<br/>
<span class="label">ЦвеÑ:</span> ÑеÑÑй<br/>
<span class="label">ÐÑобег по РоÑÑии:</span> еÑÑÑ<br/>
<span class="label">ÐÑобег, км:</span> 87000<br/>
<span class="label">Ð ÑлÑ:</span> левÑй<br/>
</code></pre>
<p>I need to get <code>87000</code>
I try</p>
<pre><code>mileage = soup.find('span', class_='label', text='Пробег, км:').findNext('br').get_text()
</code></pre>
<p>or</p>
<pre><code>mileage = soup.find('span', class_='label', text='Пробег, км:').next_subling
</code></pre>
<p>but it returns None.
What am I doing wrong?</p>
| 0 | 2016-10-07T13:33:48Z | 39,918,596 | <p>In the first code snippet, you are trying to get the text of the <code>br</code> element but it does not have any.</p>
<p>In the second code snippet you have a typo - it is not <code>next_subling</code>, it is <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#next-sibling-and-previous-sibling" rel="nofollow"><code>next_sibling</code></a>:</p>
<pre><code>soup.find('span', class_='label', text='Пробег, км:').next_sibling
</code></pre>
| 3 | 2016-10-07T13:35:32Z | [
"python",
"beautifulsoup"
] |
replacing value with median in python | 39,918,769 | <pre><code>lat
50.63757782
50.6375742
50.6375742
50.6374077762
50.63757782
50.6374077762
50.63757782
50.63757782
</code></pre>
<p>I have plotted a graph with these latitude values and noticed that there was a sudden spike in the graph (an outlier). I want to replace every lat value with the median of the last three values so that I can see a meaningful result.</p>
<p>The output might be</p>
<pre><code>lat lat_med
50.63757782 50.63757782
50.6375742 50.6375742
50.6375742 50.6375742
50.63740778 50.6375742
50.63757782 50.6375742
50.63740778 50.6375742
50.63757782 50.6375742
50.63757782 50.6375742
</code></pre>
<p>I have thousands of such lat values and need to solve this using a for loop. I know that the following code has errors and since I am a beginner in python, I appreciate your help in solving this.</p>
<pre><code>for i in range(0,len(df['lat'])):
df['lat_med'][i]=numpy.median(numpy.array(df['lat'][i],df['lat'][i-2]))
</code></pre>
<p>I just realized that the median calculation over three points is not serving my purpose and I need to consider five values. Is there a way to change the median function to use as many values as I want? Thank you for your help.</p>
<pre><code>def median(a, b, c):
if a > b and a > c:
return b if b > c else c
if a < b and a < c:
return b if b < c else c
return a
</code></pre>
| 1 | 2016-10-07T13:43:58Z | 39,918,926 | <p>Just go through the elements from the second to the second-to-last and save the median of the current, previous and next element. Note that the first and last elements are left as they were.</p>
<p>Try this:</p>
<pre><code>lat = [50.63757782, 50.6375742, 50.6375742, 50.6374077762, 50.63757782, 50.6374077762, 50.63757782, 50.63757782]
# returns median value out of the three values
def median(a, b, c):
if a > b and a > c:
return b if b > c else c
if a < b and a < c:
return b if b < c else c
return a
# add the first element
filtered = [lat[0]]
for i in range(1, len(lat) - 1):
filtered += [median(lat[i - 1], lat[i], lat[i + 1])]
# add the last element
filtered += [lat[-1]]
print(filtered)
</code></pre>
<p>What you are doing is a very basic <a href="https://en.wikipedia.org/wiki/Median_filter" rel="nofollow">Median filter</a></p>
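<p>To address the update in the question (a window of five values rather than three), the same idea generalises to any odd window size; a sketch:</p>
<pre><code>def median_of(window):
    s = sorted(window)
    return s[len(s) // 2]  # middle element; assumes an odd window size

def median_filter(values, k=5):
    half = k // 2
    out = list(values)  # the first and last `half` elements are left as they were
    for i in range(half, len(values) - half):
        out[i] = median_of(values[i - half:i + half + 1])
    return out

filtered = median_filter(lat, k=5)
</code></pre>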
| 0 | 2016-10-07T13:51:25Z | [
"python",
"numpy",
"replace",
"median",
"imputation"
] |
replacing value with median in python | 39,918,769 | <pre><code>lat
50.63757782
50.6375742
50.6375742
50.6374077762
50.63757782
50.6374077762
50.63757782
50.63757782
</code></pre>
<p>I have plotted a graph with these latitude values and noticed that there was a sudden spike in the graph (an outlier). I want to replace every lat value with the median of the last three values so that I can see a meaningful result.</p>
<p>The output might be</p>
<pre><code>lat lat_med
50.63757782 50.63757782
50.6375742 50.6375742
50.6375742 50.6375742
50.63740778 50.6375742
50.63757782 50.6375742
50.63740778 50.6375742
50.63757782 50.6375742
50.63757782 50.6375742
</code></pre>
<p>I have thousands of such lat values and need to solve this using a for loop. I know that the following code has errors and since I am a beginner in python, I appreciate your help in solving this.</p>
<pre><code>for i in range(0,len(df['lat'])):
df['lat_med'][i]=numpy.median(numpy.array(df['lat'][i],df['lat'][i-2]))
</code></pre>
<p>I just realized that the median calculation over three points is not serving my purpose and I need to consider five values. Is there a way to change the median function to use as many values as I want? Thank you for your help.</p>
<pre><code>def median(a, b, c):
if a > b and a > c:
return b if b > c else c
if a < b and a < c:
return b if b < c else c
return a
</code></pre>
| 1 | 2016-10-07T13:43:58Z | 39,919,540 | <p>You seem to be using <code>pandas</code>' <code>DataFrame</code> structures, so:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'lat' : [50.63757782,
50.6375742,
50.6375742,
50.6374077762,
50.63757782,
50.6374077762,
50.63757782,
50.63757782]})
def replace_values_with_medians(array):
last = array.shape[0]-2
index = 0
result = np.zeros(last)
while index < last:
result[index] = np.median(array[index:index+3])
index += 1
return result
lat_med_df = pd.DataFrame({'lat_med':replace_values_with_medians(df['lat'])})
df = pd.concat([df,lat_med_df], axis = 1)
del lat_med_df
</code></pre>
<p>With the result:</p>
<pre><code>>>> df
lat lat_med
0 50.637578 50.637574
1 50.637574 50.637574
2 50.637574 50.637574
3 50.637408 50.637408
4 50.637578 50.637578
5 50.637408 50.637578
6 50.637578 NaN
7 50.637578 NaN
</code></pre>
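<p>Depending on your pandas version, a centred rolling median can do the same in one line (<code>Series.rolling</code> exists from pandas 0.18; the edge rows come out as <code>NaN</code> instead of being dropped):</p>
<pre><code>df['lat_med'] = df['lat'].rolling(window=3, center=True).median()
</code></pre>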
| 0 | 2016-10-07T14:22:29Z | [
"python",
"numpy",
"replace",
"median",
"imputation"
] |
NetworkX: plotting the same graph first intact and then with a few nodes removed | 39,918,821 | <p>Say I have a graph with <code>10</code> nodes, and I want to plot it when:</p>
<ol>
<li>It is intact</li>
<li>It has had a couple of nodes removed</li>
</ol>
<p><strong>How can I make sure that the second plot has exactly the same positions as the first one?</strong></p>
<p>My attempt generates two graphs that are drawn with a different layout:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
%pylab inline
#Intact
G=nx.barabasi_albert_graph(10,3)
fig1=nx.draw_networkx(G)
#Two nodes are removed
e=[4,6]
G.remove_nodes_from(e)
plt.figure()
fig2=nx.draw_networkx(G)
</code></pre>
| 0 | 2016-10-07T13:46:33Z | 39,925,948 | <p>The drawing commands for networkx accept an argument <code>pos</code>. </p>
<p>So before creating <code>fig1</code>, define <code>pos</code>. The two lines should be:</p>
<pre><code>pos = nx.spring_layout(G) #other layout commands are available.
fig1 = nx.draw_networkx(G, pos = pos)
</code></pre>
<p>Later you will do:</p>
<pre><code>fig2 = nx.draw_networkx(G, pos=pos)
</code></pre>
| 2 | 2016-10-07T21:18:50Z | [
"python",
"matplotlib",
"networkx"
] |
NetworkX: plotting the same graph first intact and then with a few nodes removed | 39,918,821 | <p>Say I have a graph with <code>10</code> nodes, and I want to plot it when:</p>
<ol>
<li>It is intact</li>
<li>It has had a couple of nodes removed</li>
</ol>
<p><strong>How can I make sure that the second plot has exactly the same positions as the first one?</strong></p>
<p>My attempt generates two graphs that are drawn with a different layout:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
%pylab inline
#Intact
G=nx.barabasi_albert_graph(10,3)
fig1=nx.draw_networkx(G)
#Two nodes are removed
e=[4,6]
G.remove_nodes_from(e)
plt.figure()
fig2=nx.draw_networkx(G)
</code></pre>
| 0 | 2016-10-07T13:46:33Z | 39,927,005 | <p>The following works for me:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
from random import random
figure = plt.figure()
#Intact
G=nx.barabasi_albert_graph(10,3)
node_pose = {}
for i in G.nodes_iter():
node_pose[i] = (random(),random())
plt.subplot(121)
fig1 = nx.draw_networkx(G,pos=node_pose, fixed=node_pose.keys())
#Two nodes are removed
e=[4,6]
G.remove_nodes_from(e)
plt.subplot(122)
fig2 = nx.draw_networkx(G,pos=node_pose, fixed=node_pose.keys())
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/FHRXc.png" rel="nofollow"><img src="http://i.stack.imgur.com/FHRXc.png" alt="enter image description here"></a></p>
| 1 | 2016-10-07T23:17:23Z | [
"python",
"matplotlib",
"networkx"
] |
Data extraction: Creating dictionary of dictionaries with lists in python | 39,918,972 | <p>I have data similar to the following in a file:</p>
<pre><code>Name, Age, Sex, School, height, weight, id
Joe, 10, M, StThomas, 120, 20, 111
Jim, 9, M, StThomas, 126, 22, 123
Jack, 8, M, StFrancis, 110, 15, 145
Abel, 10, F, StFrancis, 128, 23, 166
</code></pre>
<p>The actual data might be 100 columns and a million rows.</p>
<p>What I am trying to do is create a dict in the following pattern:</p>
<pre><code>school_data = {'StThomas': {'weight':[20,22], 'height': [120,126]},
'StFrancis': {'weight':[15,23], 'height': [110,128]} }
</code></pre>
<p>Things I tried:</p>
<ol>
<li><p>Trial 1: (very expensive in terms of computation)</p>
<pre><code>school_names = []
for lines in read_data[1:]:
data = lines.split('\t')
school_names.append(data[3])
school_names = set(school_names)
for lines in read_data[1:]:
for school in schools:
if school in lines:
print lines
</code></pre></li>
<li><p>Trial 2:</p>
<pre><code>for lines in read_data[1:]:
data = lines.split('\t')
school_name = data[3]
height = data[4]
weight = data[5]
id = data [6]
x[id] = {school_name: (weight, height)}
</code></pre></li>
</ol>
<p>The above are the two approaches I tried, but neither got me closer to the solution.</p>
| 1 | 2016-10-07T13:53:13Z | 39,919,171 | <p>The easiest way to do this within the standard library is using existing tools, <a href="https://docs.python.org/3/library/csv.html#csv.DictReader" rel="nofollow"><code>csv.DictReader</code></a> and <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a>:</p>
<pre><code>from collections import defaultdict
from csv import DictReader
data = defaultdict(lambda: defaultdict(list))  # *
with open(datafile) as file_:
for row in DictReader(file_):
data[row[' School'].strip()]['height'].append(int(row[' height']))
data[row[' School'].strip()]['weight'].append(int(row[' weight']))
</code></pre>
<p>Note that the spaces in e.g. <code>' School'</code> and the <code>.strip()</code> are necessary because of the spaces in the header row of your input file. Result:</p>
<pre><code>>>> data
defaultdict(<function <lambda> at 0x10261c0c8>, {'StFrancis': defaultdict(<type 'list'>, {'weight': [15, 23], 'height': [110, 128]}), 'StThomas': defaultdict(<type 'list'>, {'weight': [20, 22], 'height': [120, 126]})})
>>> data['StThomas']['height']
[120, 126]
</code></pre>
<p>Alternatively, if you're planning to do further analysis, look into something like <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code></a> and its <code>DataFrame</code> data structure.</p>
<p>* <em>see <a href="http://stackoverflow.com/q/8419401/3001761">Python defaultdict and lambda</a> if this seems weird</em> </p>
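<p>A sketch of that <code>pandas</code> route (assuming the same file layout; <code>skipinitialspace</code> takes care of the spaces after the commas):</p>
<pre><code>import pandas as pd

df = pd.read_csv(datafile, skipinitialspace=True)
school_data = {
    school: {'height': list(group['height']), 'weight': list(group['weight'])}
    for school, group in df.groupby('School')
}
</code></pre>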
| 1 | 2016-10-07T14:03:12Z | [
"python",
"dictionary"
] |
Django/Python: Show pdf in a template | 39,919,012 | <p>I'm using django 1.8 in python 2.7.</p>
<p>I want to show a pdf in a template.</p>
<p>Up to know, thanks to <a href="http://stackoverflow.com/a/29718326/1395874">MKM's answer</a> I render it in a full page. </p>
<p>Do you know how to render it?</p>
<p>Here is my code:</p>
<pre><code>def userManual(request):
with open('C:/Users/admin/Desktop/userManual.pdf', 'rb') as pdf:
response = HttpResponse(pdf.read(), content_type='application/pdf')
response['Content-Disposition'] = 'inline;filename=some_file.pdf'
return response
pdf.closed
</code></pre>
| 1 | 2016-10-07T13:55:09Z | 39,934,037 | <p>The ability to embed a PDF in a page is out of the scope of Django itself, you are already doing all you can do with Django by successfully generating the PDF, so you should look at how to embed PDFs in webpages instead:</p>
<p>Therefore, please check:</p>
<p><a href="http://stackoverflow.com/questions/291813/recommended-way-to-embed-pdf-in-html">Recommended way to embed PDF in HTML?</a></p>
<p><a href="http://stackoverflow.com/questions/14081128/how-can-i-embed-a-pdf-viewer-in-a-web-page">How can I embed a PDF viewer in a web page?</a></p>
| 1 | 2016-10-08T15:18:00Z | [
"python",
"django",
"pdf"
] |
Column appended in a pandas dataframe malfunctions | 39,919,020 | <p>I have a dataframe named df1</p>
<pre><code> df1.columns
Out[55]: Index(['TowerLon', 'TowerLat'], dtype='object')
df1.shape
Out[56]: (1141, 2)
df1.head(3)
Out[57]:
TowerLon TowerLat
0 -96.709417 32.731611
1 -96.709500 32.731722
2 -96.910389 32.899944
</code></pre>
<p>I also have an np.ndarray named labels:</p>
<pre><code> type(labels)
Out[62]: numpy.ndarray
labels
Out[63]: array([1, 1, 0, ..., 0, 0, 1])
</code></pre>
<p>I appended the np.ndarray labels to the dataframe df1 using the following command:</p>
<pre><code> df1["labels"] = labels.tolist()
</code></pre>
<p>The operation was seemingly successful:</p>
<pre><code> df1.shape
Out[68]: (1141, 3)
df1.dtypes
Out[69]:
TowerLon float64
TowerLat float64
labels int64
dtype: object
df1.head(3)
Out[70]:
TowerLon TowerLat labels
0 -96.709417 32.731611 1
1 -96.709500 32.731722 1
2 -96.910389 32.899944 0
</code></pre>
<p>However when I try to list the values of the new column I get an error message:</p>
<pre><code> df1.labels
Traceback (most recent call last):
File "<ipython-input-71-6afd8264be10>", line 1, in <module>
df1.labels
File "C:\Users\Alexandros_7\Anaconda3\lib\site- packages\pandas\core\generic.py", line 2668, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'labels'
</code></pre>
<p>I get a similar error message when I try to use labels to form a scatter plot:</p>
<pre><code> plt.scatter(df1.TowerLat, df1.TowerLon, c=df1.labels)
Traceback (most recent call last):
File "<ipython-input-73-b71c79011e38>", line 1, in <module>
plt.scatter(df1.TowerLat, df1.TowerLon, c=df1.labels)
File "C:\Users\Alexandros_7\Anaconda3\lib\site - packages\pandas\core\generic.py", line 2668, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'labels'
</code></pre>
<p>Could you help me? Thank you!</p>
| 0 | 2016-10-07T13:55:37Z | 39,973,962 | <p>The <code>AttributeError</code> occurs due to the fact that <code>labels</code> is not part of the original <code>DataFrame</code>. You can however access the data by using the following method:</p>
<pre><code>df1['labels']
</code></pre>
<p>This will give you the following output:</p>
<pre><code>0 1
1 1
2 0
Name: labels, dtype: int64
</code></pre>
<p>For your plotting error, also replace <code>df1.labels</code> with the method suggested above, as follows:</p>
<pre><code>plt.scatter(df1.TowerLat, df1.TowerLon, c=df1['labels'])
</code></pre>
<p>As for the error with the <code>groupby</code> method, change it to:</p>
<pre><code>bylevel=df1.groupby(df1["labels"])
</code></pre>
<p>as <code>levels</code> is not part of the <code>DataFrame</code>.</p>
| 0 | 2016-10-11T09:13:28Z | [
"python"
] |
Duplicate tweet removal from csv file | 39,919,041 | <p>With the part of code shown below I fetch tweets from Twitter and store them initially in "backup.txt". I also create a file "tweets3.csv" and save some specific fields of each tweet. But I realized some tweets have exactly the same text (duplicates). How could I remove those from my csv file?</p>
<pre><code>from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import time
import json
import csv
ckey = ''
csecret = ''
atoken = ''
asecret = ''
class listener(StreamListener):
def on_data(self, data):
try:
all_data = json.loads(data)
with open("backup.txt", 'a') as backup:
backup.write(str(all_data) + "\n")
backup.close()
text = str(all_data["text"]).encode("utf-8")
id = str(all_data["id"]).encode("utf-8")
timestamp = str(all_data["timestamp_ms"]).encode("utf-8")
sn = str(all_data["user"]["screen_name"]).encode("utf-8")
user_id = str(all_data["user"]["id"]).encode("utf-8")
create = str(all_data["created_at"]).encode("utf-8")
follower = str(all_data["user"]["followers_count"]).encode("utf-8")
following = str(all_data["user"]["following"]).encode("utf-8")
status = str(all_data["user"]["statuses_count"]).encode("utf-8")
# text = data.split(',"text":"')[1].split('","source')[0]
# name = data.split(',"screen_name":"')[1].split('","location')[0]
contentlist = []
contentlist.append(text)
contentlist.append(id)
contentlist.append(timestamp)
contentlist.append(sn)
contentlist.append(user_id)
contentlist.append(create)
contentlist.append(follower)
contentlist.append(following)
contentlist.append(status)
print contentlist
f = open("tweets3.csv", 'ab')
wrt = csv.writer(f, dialect='excel')
try:
wrt.writerow(contentlist)
except UnicodeEncodeError, UnicodeEncodeError:
return True
return True
except BaseException, e:
print 'failed on data',type(e),str(e)
time.sleep(3)
def on_error(self, status):
print "Error status:" + str(status)
auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["zikavirus"], languages=['en'])
</code></pre>
| 0 | 2016-10-07T13:56:36Z | 39,920,097 | <p>I wrote this code that makes a list, and every time it comes across a tweet, it checks that list. If the text doesn't exist, it adds it to the list.</p>
<pre><code># Defines a list - It stores all unique tweets
tweetChecklist = [];
# All your tweets. I represent them as a list to test the code
AllTweets = ["Hello", "HelloFoo", "HelloBar", "Hello", "hello", "Bye"];
# Goes over all "tweets"
for current_tweet in AllTweets:
# If tweet doesn't exist in the list
if current_tweet not in tweetChecklist:
tweetChecklist.append(current_tweet);
# Do what you want with this tweet, it won't appear two times...
# Print ["Hello", "HelloFoo", "HelloBar", "hello", "Bye"]
# Note that the second Hello doesn't show up - It's what you want
# However, it's case sensitive.
print(tweetChecklist);
# Clear the list
tweetChecklist = [];
</code></pre>
<p>I think your code should look like this after implementing my solution:</p>
<pre><code>from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import time
import json
import csv
# Define a list - It stores all unique tweets
# Clear this list after completion of fetching all tweets
tweetChecklist = [];
ckey = ''
csecret = ''
atoken = ''
asecret = ''
class listener(StreamListener):
def on_data(self, data):
try:
all_data = json.loads(data)
with open("backup.txt", 'a') as backup:
backup.write(str(all_data) + "\n")
backup.close()
text = str(all_data["text"]).encode("utf-8")
id = str(all_data["id"]).encode("utf-8")
timestamp = str(all_data["timestamp_ms"]).encode("utf-8")
sn = str(all_data["user"]["screen_name"]).encode("utf-8")
user_id = str(all_data["user"]["id"]).encode("utf-8")
create = str(all_data["created_at"]).encode("utf-8")
follower = str(all_data["user"]["followers_count"]).encode("utf-8")
following = str(all_data["user"]["following"]).encode("utf-8")
status = str(all_data["user"]["statuses_count"]).encode("utf-8")
# If the text does not exist in the list that stores all unique tweets
if text not in tweetChecklist:
# Store it, so that later tweets with the same text
# won't reach this code again
tweetChecklist.append(text);
# Now, do your unique stuff
contentlist = []
contentlist.append(text)
contentlist.append(id)
contentlist.append(timestamp)
contentlist.append(sn)
contentlist.append(user_id)
contentlist.append(create)
contentlist.append(follower)
contentlist.append(following)
contentlist.append(status)
print contentlist
f = open("tweets3.csv", 'ab')
wrt = csv.writer(f, dialect='excel')
try:
wrt.writerow(contentlist)
except UnicodeEncodeError, UnicodeEncodeError:
return True
return True
except BaseException, e:
print 'failed on data',type(e),str(e)
time.sleep(3)
def on_error(self, status):
print "Error status:" + str(status)
auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["zikavirus"], languages=['en'])
</code></pre>
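<p>One note on the membership test: <code>text not in tweetChecklist</code> is a linear scan over a list, which slows down as the stream grows. A <code>set</code> gives constant-time lookups with the same logic (a minimal variant; <code>is_new</code> is a helper introduced here):</p>
<pre><code>tweetChecklist = set()

def is_new(text):
    # True only the first time a given text is seen
    if text in tweetChecklist:
        return False
    tweetChecklist.add(text)
    return True
</code></pre>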
| 1 | 2016-10-07T14:50:03Z | [
"python",
"tweepy"
] |
Calculate moving average in numpy array with NaNs | 39,919,050 | <p>I am trying to calculate the moving average in a large numpy array that contains NaNs. Currently I am using:</p>
<pre><code>import numpy as np
def moving_average(a,n=5):
ret = np.cumsum(a,dtype=float)
ret[n:] = ret[n:]-ret[:-n]
return ret[-1:]/n
</code></pre>
<p>When calculating with a masked array:</p>
<pre><code>x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx).filled(np.nan)
print y
>>> array([3.8,3.8,3.6,nan,nan,nan,2,2.4,nan,nan,nan,2.8,2.6])
</code></pre>
<p>The result I am looking for (below) should ideally have NaNs only in the place where the original array, x, had NaNs and the averaging should be done over the number of non-NaN elements in the grouping (I need some way to change the size of n in the function.)</p>
<pre><code>y = array([4.75,4.75,nan,4.4,3.75,2.33,3.33,4,nan,nan,3,3.5,nan,3.25,4,4.5,3])
</code></pre>
<p>I could loop over the entire array and check index by index but the array I am using is very large and that would take a long time. Is there a numpythonic way to do this? </p>
| 2 | 2016-10-07T13:56:59Z | 39,919,275 | <p>You could create a temporary array and use np.nanmean() (new in version 1.8 if I'm not mistaken):</p>
<pre><code>import numpy as np
temp = np.vstack([x[i:-(5-i)] for i in range(5)]) # stacks vertically the strided arrays
means = np.nanmean(temp, axis=0)
</code></pre>
<p>and put original nan back in place with <code>means[np.isnan(x[:-5])] = np.nan</code></p>
<p>However, this looks redundant both in terms of memory (stacking the same array, strided, 5 times) and computation.</p>
| 0 | 2016-10-07T14:07:55Z | [
"python",
"numpy",
"masked-array"
] |
Calculate moving average in numpy array with NaNs | 39,919,050 | <p>I am trying to calculate the moving average in a large numpy array that contains NaNs. Currently I am using:</p>
<pre><code>import numpy as np
def moving_average(a,n=5):
ret = np.cumsum(a,dtype=float)
ret[n:] = ret[n:]-ret[:-n]
return ret[-1:]/n
</code></pre>
<p>When calculating with a masked array:</p>
<pre><code>x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx).filled(np.nan)
print y
>>> array([3.8,3.8,3.6,nan,nan,nan,2,2.4,nan,nan,nan,2.8,2.6])
</code></pre>
<p>The result I am looking for (below) should ideally have NaNs only in the place where the original array, x, had NaNs and the averaging should be done over the number of non-NaN elements in the grouping (I need some way to change the size of n in the function.)</p>
<pre><code>y = array([4.75,4.75,nan,4.4,3.75,2.33,3.33,4,nan,nan,3,3.5,nan,3.25,4,4.5,3])
</code></pre>
<p>I could loop over the entire array and check index by index but the array I am using is very large and that would take a long time. Is there a numpythonic way to do this? </p>
| 2 | 2016-10-07T13:56:59Z | 39,919,618 | <p>If I understand correctly, you want to create a moving average and then set the resulting elements to <code>nan</code> wherever the original array had <code>nan</code>.</p>
<pre><code>import numpy as np
>>> inc = 5 #the moving avg increment
>>> x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
>>> mov_avg = np.array([np.nanmean(x[idx:idx+inc]) for idx in range(len(x))])
# Determine indices in x that are nans
>>> nan_idxs = np.where(np.isnan(x))[0]
# Populate output array with nans
>>> mov_avg[nan_idxs] = np.nan
>>> mov_avg
array([ 4.75, 4.75, nan, 4.4, 3.75, 2.33333333, 3.33333333, 4., nan, nan, 3., 3.5, nan, 3.25, 4., 4.5, 3.])
</code></pre>
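<p>If the loop is too slow on a large array, pandas can compute the same forward-looking, NaN-aware average, since its rolling mean skips NaNs once <code>min_periods</code> is met (a sketch; the array is reversed so the backward-looking rolling window becomes forward-looking):</p>
<pre><code>import pandas as pd

mov_avg = pd.Series(x[::-1]).rolling(inc, min_periods=1).mean()[::-1].values
mov_avg[np.isnan(x)] = np.nan
</code></pre>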
| 0 | 2016-10-07T14:26:02Z | [
"python",
"numpy",
"masked-array"
] |
Calculate moving average in numpy array with NaNs | 39,919,050 | <p>I am trying to calculate the moving average in a large numpy array that contains NaNs. Currently I am using:</p>
<pre><code>import numpy as np
def moving_average(a,n=5):
ret = np.cumsum(a,dtype=float)
ret[n:] = ret[n:]-ret[:-n]
return ret[-1:]/n
</code></pre>
<p>When calculating with a masked array:</p>
<pre><code>x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx).filled(np.nan)
print y
>>> array([3.8,3.8,3.6,nan,nan,nan,2,2.4,nan,nan,nan,2.8,2.6])
</code></pre>
<p>The result I am looking for (below) should ideally have NaNs only in the place where the original array, x, had NaNs and the averaging should be done over the number of non-NaN elements in the grouping (I need some way to change the size of n in the function.)</p>
<pre><code>y = array([4.75,4.75,nan,4.4,3.75,2.33,3.33,4,nan,nan,3,3.5,nan,3.25,4,4.5,3])
</code></pre>
<p>I could loop over the entire array and check index by index but the array I am using is very large and that would take a long time. Is there a numpythonic way to do this? </p>
| 2 | 2016-10-07T13:56:59Z | 39,919,709 | <p>Here's an approach using strides -</p>
<pre><code>w = 5 # Window size
n = x.strides[0]
avgs = np.nanmean(np.lib.stride_tricks.as_strided(x, \
shape=(x.size-w+1,w), strides=(n,n)),1)
x_rem = np.append(x[-w+1:],np.full(w-1,np.nan))
avgs_rem = np.nanmean(np.lib.stride_tricks.as_strided(x_rem, \
shape=(w-1,w), strides=(n,n)),1)
avgs = np.append(avgs,avgs_rem)
avgs[np.isnan(x)] = np.nan
</code></pre>
| 0 | 2016-10-07T14:29:39Z | [
"python",
"numpy",
"masked-array"
] |
Calculate moving average in numpy array with NaNs | 39,919,050 | <p>I am trying to calculate the moving average in a large numpy array that contains NaNs. Currently I am using:</p>
<pre><code>import numpy as np
def moving_average(a,n=5):
ret = np.cumsum(a,dtype=float)
ret[n:] = ret[n:]-ret[:-n]
return ret[-1:]/n
</code></pre>
<p>When calculating with a masked array:</p>
<pre><code>x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx).filled(np.nan)
print y
>>> array([3.8,3.8,3.6,nan,nan,nan,2,2.4,nan,nan,nan,2.8,2.6])
</code></pre>
<p>The result I am looking for (below) should ideally have NaNs only in the place where the original array, x, had NaNs and the averaging should be done over the number of non-NaN elements in the grouping (I need some way to change the size of n in the function.)</p>
<pre><code>y = array([4.75,4.75,nan,4.4,3.75,2.33,3.33,4,nan,nan,3,3.5,nan,3.25,4,4.5,3])
</code></pre>
<p>I could loop over the entire array and check index by index but the array I am using is very large and that would take a long time. Is there a numpythonic way to do this? </p>
| 2 | 2016-10-07T13:56:59Z | 39,920,628 | <p>I'll just add to the great answers before that you could still use cumsum to achieve this:</p>
<pre><code>import numpy as np
def moving_average(a, n=5):
ret = np.cumsum(a.filled(0))
ret[n:] = ret[n:] - ret[:-n]
counts = np.cumsum(~a.mask)
counts[n:] = counts[n:] - counts[:-n]
ret[~a.mask] /= counts[~a.mask]
ret[a.mask] = np.nan
return ret
x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx)
</code></pre>
| 1 | 2016-10-07T15:15:35Z | [
"python",
"numpy",
"masked-array"
] |
django gunicorn sock file not created by wsgi | 39,919,053 | <p>I have a basic Django REST application on my DigitalOcean server (Ubuntu 16.04), using a local virtual environment.
The basic wsgi.py is:</p>
<pre><code>import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "workout_rest.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
</code></pre>
<p>I have followed this tutorial step by step:
<a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04</a></p>
<p>When I test Gunicorn's ability to serve the project with this command:
gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application
All works well.</p>
<p>So I've tried to set up Gunicorn with a systemd service file.
My /etc/systemd/system/gunicorn.service file is:</p>
<pre><code>[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ben
Group=www-data
WorkingDirectory=/home/ben/myproject
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/myproject/myproject.sock myproject.wsgi:application
[Install]
WantedBy=multi-user.target
</code></pre>
<p>My Nginx configuration is:</p>
<pre><code>server {
listen 8000;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/ben/myproject;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/ben/myproject/myproject.sock;
}
}
</code></pre>
<p>I've changed the listen port from 80 to 8000 because 80 gave me an err_connection_refused error.
After starting the server with this command:</p>
<pre><code>sudo systemctl restart nginx
</code></pre>
<p>When I try to run my website, I get a 502 Bad Gateway error.
I've tried these commands (found in the tutorial comments):</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl enable gunicorn
sudo systemctl restart nginx
</code></pre>
<p>but nothing changes.
When I take a look at the Nginx logs with this command:</p>
<pre><code>sudo tail -f /var/log/nginx/error.log
</code></pre>
<p>I can see that the sock file doesn't exist:</p>
<pre><code>2016/10/07 09:00:18 [crit] 24974#24974: *1 connect() to unix:/home/ben/myproject/myproject.sock failed (2: No such file or directory) while connecting to upstream, client: 86.197.20.27, server: 139.59.150.116, request: "GET / HTTP/1.1", upstream: "http://unix:/home/ben/myproject/myproject.sock:/", host: "server_ip_adress:8000"
</code></pre>
<p>Why isn't this sock file created? How can I configure Django/Gunicorn to create it?
I have added gunicorn in my INSTALLED_APP in my Django project but it doesn't change anything.</p>
<p><strong>EDIT:</strong></p>
<p>When I test the nginx config file with <code>nginx -t</code> I get an error: <code>open() "/run/nginx.pid" failed (13: Permission denied)</code>.
But if I run the command with sudo: <code>sudo nginx -t</code>, the test is successful. Does that mean I have to allow the 'ben' user to run Nginx?</p>
<p>About the gunicorn logfiles: I cannot find a way to read them. Where are they stored?</p>
<p>When I check whether gunicorn is running by using <code>ps aux | grep gunicorn</code>:</p>
<pre><code>ben 26543 0.0 0.2 14512 1016 pts/0 S+ 14:52 0:00 grep --color=auto gunicorn
</code></pre>
<p>Here is what happens when you run the systemctl enable and start commands for gunicorn:</p>
<pre><code>sudo systemctl enable gunicorn
Synchronizing state of gunicorn.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable gunicorn
sudo systemctl start gunicorn
I get no output with this command
sudo systemctl is-active gunicorn
active
sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2016-10-06 15:40:29 UTC; 23h ago
Oct 06 15:40:29 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 18:52:56 DevUsine systemd[1]: Started gunicorn.service.
Oct 06 20:55:05 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 20:55:17 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:07:36 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:16:42 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:21:38 DevUsine systemd[1]: Started gunicorn daemon.
Oct 06 21:25:28 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 08:58:43 DevUsine systemd[1]: Started gunicorn daemon.
Oct 07 15:01:22 DevUsine systemd[1]: Started gunicorn daemon.
</code></pre>
| 1 | 2016-10-07T13:57:02Z | 39,924,519 | <p>I had to change the permissions of my sock folder:</p>
<pre><code>sudo chown ben:www-data /home/ben/myproject/
</code></pre>
<p>Another thing is that I have changed the sock location after reading in many posts that it's not good practice to keep the sock file in the Django project.
My new location is:</p>
<pre><code>/home/ben/run/
</code></pre>
<p>Don't forget to change permissions:</p>
<pre><code>sudo chown ben:www-data /home/ben/run/
</code></pre>
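<p>With the socket moved, the path also has to change in both files shown in the question so that they agree (a sketch based on those configs):</p>
<pre><code># /etc/systemd/system/gunicorn.service
ExecStart=/home/ben/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/ben/run/myproject.sock myproject.wsgi:application

# nginx server block
proxy_pass http://unix:/home/ben/run/myproject.sock;
</code></pre>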
<p>To be sure that gunicorn is refreshed, run these commands:</p>
<pre><code>pkill gunicorn
sudo systemctl daemon-reload
sudo systemctl start gunicorn
</code></pre>
<p>That will kill the gunicorn processes and start new ones.</p>
<p>You can run this command to make the process start at server boot:</p>
<pre><code>sudo systemctl enable gunicorn
</code></pre>
<p>All works well now.</p>
| 0 | 2016-10-07T19:27:11Z | [
"python",
"django",
"nginx",
"gunicorn"
] |
IndexError: index is out of bounds for axis 0 with size | 39,919,181 | <p>I have array <code>x_train</code> and <code>targets_train</code>. I want to shuffle the training data and split it into smaller batches and use the batches as training data. My original data has 1000 rows and each time I try to use 250 rows of them :</p>
<pre><code> x_train = np.memmap('/home/usr/train', dtype='float32', mode='r', shape=(1000, 1, 784))
# print(x_train)
targets_train = np.memmap('/home/usr/train_label', dtype='int32', mode='r', shape=(1000, 1))
train_idxs = [i for i in range(x_train.shape[0])]
np.random.shuffle(train_idxs)
num_batches_train = 4
def next_batch(start, train, labels, batch_size=250):
newstart = start + batch_size
if newstart > train.shape[0]:
newstart = 0
idxs = train_idxs[start:start + batch_size]
# print(idxs)
return train[idxs, :], labels[idxs, :], newstart
# x_train_lab = x_train[:200]
# # x_train = np.array(targets_train)
# targets_train_lab = targets_train[:200]
for i in range(num_batches_train):
x_train, targets_train, newstart = next_batch(i*batch_size, x_train, targets_train, batch_size=250)
</code></pre>
<p>The problem is, when I shuffle the training data and try to access the batches, I get an error saying:</p>
<pre><code> return train[idxs, :], labels[idxs, :], newstart
IndexError: index 250 is out of bounds for axis 0 with size 250
</code></pre>
<p>Does anybody know what I am doing wrong?</p>
| 0 | 2016-10-07T14:03:38Z | 39,920,080 | <p>The problem is in this line in the function definition:</p>
<pre><code>idxs = train_idxs[start:start + batch_size]
</code></pre>
<p>Change it to:</p>
<pre><code>idxs = train_idxs[start: newstart]
</code></pre>
<p>Then it should work as expected!</p>
<p>Also, please change the variable names in the <code>for</code> loop to something like:</p>
<pre><code>batch_size = 250
for i in range(num_batches_train):
x_train_split, targets_train_split, newstart = next_batch(i*batch_size,
x_train,
targets_train,
batch_size=250)
print(x_train_split.shape, targets_train_split.shape, newstart)
</code></pre>
<p>Sample output:</p>
<pre><code>(250, 1, 784) (250, 1) 250
(250, 1, 784) (250, 1) 500
(250, 1, 784) (250, 1) 750
(250, 1, 784) (250, 1) 1000
</code></pre>
| 1 | 2016-10-07T14:49:11Z | [
"python",
"arrays",
"list",
"numpy",
"machine-learning"
] |
IndexError: index is out of bounds for axis 0 with size | 39,919,181 | <p>I have array <code>x_train</code> and <code>targets_train</code>. I want to shuffle the training data and split it into smaller batches and use the batches as training data. My original data has 1000 rows and each time I try to use 250 rows of them :</p>
<pre><code> x_train = np.memmap('/home/usr/train', dtype='float32', mode='r', shape=(1000, 1, 784))
# print(x_train)
targets_train = np.memmap('/home/usr/train_label', dtype='int32', mode='r', shape=(1000, 1))
train_idxs = [i for i in range(x_train.shape[0])]
np.random.shuffle(train_idxs)
num_batches_train = 4
def next_batch(start, train, labels, batch_size=250):
newstart = start + batch_size
if newstart > train.shape[0]:
newstart = 0
idxs = train_idxs[start:start + batch_size]
# print(idxs)
return train[idxs, :], labels[idxs, :], newstart
# x_train_lab = x_train[:200]
# # x_train = np.array(targets_train)
# targets_train_lab = targets_train[:200]
for i in range(num_batches_train):
x_train, targets_train, newstart = next_batch(i*batch_size, x_train, targets_train, batch_size=250)
</code></pre>
<p>The problem is, when I shuffle the training data and try to access the batches, I get an error saying:</p>
<pre><code> return train[idxs, :], labels[idxs, :], newstart
IndexError: index 250 is out of bounds for axis 0 with size 250
</code></pre>
<p>Does anybody know what I am doing wrong?</p>
| 0 | 2016-10-07T14:03:38Z | 39,922,558 | <p>(edit - first guess about <code>newstart</code> deleted)</p>
<p>In this line:</p>
<pre><code>x_train, targets_train, newstart = next_batch(i*batch_size, x_train, targets_train, batch_size=250)
</code></pre>
<p>you change the size of <code>x_train</code> with each iteration, yet you continue to use the <code>train_idxs</code> array that you created for the full size array.</p>
<p>It's one thing to pull out random values from <code>x_train</code> in batches, but you have to keep the selection arrays consistent.</p>
<p>This question probably should have been closed for lack of a minimal and verifiable example. It's frustrating to have to guess and make a small testable example in hopes of recreating the problem.</p>
<p><a href="http://stackoverflow.com/help/mcve">http://stackoverflow.com/help/mcve</a></p>
<p>If my current guess is wrong, just a few intermediate print statements would have made the problem clear. </p>
<p>========================</p>
<p>Reducing your code to a simple case</p>
<pre><code>import numpy as np
x_train = np.arange(20).reshape(20,1)
train_idxs = np.arange(x_train.shape[0])
np.random.shuffle(train_idxs)
num_batches_train = 4
batch_size=5
def next_batch(start, train):
idxs = train_idxs[start:start + batch_size]
print(train.shape, idxs)
return train[idxs, :]
for i in range(num_batches_train):
x_train = next_batch(i*batch_size, x_train)
print(x_train)
</code></pre>
<p>a run produces:</p>
<pre><code>1658:~/mypy$ python3 stack39919181.py
(20, 1) [ 7 18 3 0 9]
[[ 7]
[18]
[ 3]
[ 0]
[ 9]]
(5, 1) [13 5 2 15 1]
Traceback (most recent call last):
File "stack39919181.py", line 14, in <module>
x_train = next_batch(i*batch_size, x_train)
File "stack39919181.py", line 11, in next_batch
return train[idxs, :]
IndexError: index 13 is out of bounds for axis 0 with size 5
</code></pre>
<p>I fed the (5,1) <code>x_train</code> back to the <code>next_batch</code> but tried to index it as though it were the original.</p>
<p>Changing the iteration to:</p>
<pre><code>for i in range(num_batches_train):
x_batch = next_batch(i*batch_size, x_train)
print(x_batch)
</code></pre>
<p>lets it run through producing 4 batches of 5 rows.</p>
| 0 | 2016-10-07T17:04:37Z | [
"python",
"arrays",
"list",
"numpy",
"machine-learning"
] |
kivy - save user input into a python list | 39,919,218 | <p>I'm having some difficulties understanding the interrelation between Kivy and Python.</p>
<p>I'm trying to do something super simple, as a first step, and it would be great if anybody could show an example:
how can I store input data into a Python list once the user enters data and presses "enter"?
Thanks</p>
| 0 | 2016-10-07T14:05:28Z | 39,919,298 | <p>An example of this: the user can input 3 things and they get stored in a list. If you want the user to input one thing and store it in a list, you will have to split the input.</p>
<pre><code>result = []
for i in range(3):
answer = input()
result.append(answer)
print(result)
</code></pre>
<p>NOTE: <code>input()</code> is <code>raw_input()</code> in python2.x</p>
<p>In <code>kivy</code>:</p>
<pre><code>from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
class Screen(GridLayout):
def __init__(self, **kwargs):
super(Screen, self).__init__(**kwargs)
self.input = TextInput(multiline=False)
self.add_widget(self.input)
self.input.bind(on_text_validate=self.print_input)
def print_input(self, value):
print(value.text)
class MyApp(App):
def build(self):
return Screen()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>This simple script will give you an input box and when you press enter it will print its text to terminal. You can easily store it in a list if you wish. </p>
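<p>For instance, storing every submitted line could look like this (same widgets as above; <code>entries</code> and <code>store_input</code> are names introduced here):</p>
<pre><code>class Screen(GridLayout):
    def __init__(self, **kwargs):
        super(Screen, self).__init__(**kwargs)
        self.entries = []  # holds everything the user has entered
        self.input = TextInput(multiline=False)
        self.add_widget(self.input)
        self.input.bind(on_text_validate=self.store_input)

    def store_input(self, instance):
        self.entries.append(instance.text)  # save the submitted text
        instance.text = ''                  # clear the box for the next entry
</code></pre>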
| 1 | 2016-10-07T14:09:08Z | [
"python",
"kivy",
"kivy-language"
] |
Etree returns a "random" string instead of attribute name | 39,919,308 | <p>I am new to Python and to trees in general, and have encountered some problems.</p>
<p>I have the following dataset structured as:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
<node id="someNode">
<data key="label">someNode</data>
</node>
</graphml>
</code></pre>
<p>I want to reach the attribute and attribute values for both the root and the node elements.</p>
<p>I have tried using Python xml.etree.ElementTree like this:</p>
<pre><code>import xml.etree.ElementTree as etree
tree = etree.parse('myDataset')
root = tree.getroot()
print('Root: ', root)
print('Children: ', root.getchildren())
</code></pre>
<p>but this is what I get:</p>
<pre><code>Root: <Element '{http://graphml.graphdrawing.org/xmlns}graphml' at 0x031DB5A0>
Children: [<Element '{http://graphml.graphdrawing.org/xmlns}key' at 0x03F9BFC0>
</code></pre>
<p>I also tried .text and .tag, which only removed the "at 0x03...".</p>
<p>I hope the question is understandable and someone knows a solution.</p>
| 0 | 2016-10-07T14:09:56Z | 39,920,145 | <p>If you want to output your root and children nodes as xml text, instead of the default representation, use <code>xml.etree.ElementTree.tostring(root)</code> and</p>
<pre><code>for child in root:
xml.etree.ElementTree.tostring(child)
</code></pre>
<p>official doc here: <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.tostring" rel="nofollow">https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.tostring</a></p>
<p>And if you want the tag name, use the <code>tag</code> property of each element:</p>
<pre><code>print(root.tag)
for child in root:
print(child.tag)
</code></pre>
<p>doc describing the available attributes: <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element" rel="nofollow">https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element</a></p>
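<p>Since the question also asks about attributes: each element exposes them as a plain dict via <code>attrib</code>, and the <code>{...}</code> prefix in your output is the document's XML namespace, which you pass along when searching (a short sketch against the posted document):</p>
<pre><code>import xml.etree.ElementTree as etree

ns = {'g': 'http://graphml.graphdrawing.org/xmlns'}
tree = etree.parse('myDataset')
root = tree.getroot()
for node in root.findall('g:node', ns):
    print(node.attrib['id'])             # someNode
    print(node.find('g:data', ns).text)  # someNode
</code></pre>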
| 1 | 2016-10-07T14:52:10Z | [
"python",
"xml",
"tree",
"elementtree",
"graphml"
] |
How to check and encode input emojis from Facebook messenger? | 39,919,314 | <p>I'm building a Facebook messenger bot in Python, and everything works fine. But if I send <code>emojis</code> as text from Facebook chat to the API, then it goes wrong.
This is an example of what I receive when I send <code>emojis</code> from Facebook.</p>
<pre><code>{'message': {'mid': 'mid.1475846223244:e7eea53884', 'seq': 10863, 'text': 'í ½í±í ¼í¿½'},
</code></pre>
<p>So the received message is <code>message['message']['text']</code>.
What I want is: whenever I send an emoji (a <code>non text message</code>) from Facebook, I can scan and encode it before I send it to my API. I have read documents before asking this question, but most of them handle specific <code>emojis</code> given by the user, not a general way to scan and encode any <code>emojis</code> (if I missed something, please correct me, because I'm a newbie). Tell me if I need to update my question.</p>
| 0 | 2016-10-07T14:10:16Z | 39,919,614 | <p>You may use a mapping between unicode code-points and ASCII representation. See this kind of table here: <a href="http://lolhug.com/facebook-emoticons/" rel="nofollow">http://lolhug.com/facebook-emoticons/</a></p>
<p>The official Emoticons table is here: <a href="http://unicode-table.com/en/blocks/emoticons/" rel="nofollow">http://unicode-table.com/en/blocks/emoticons/</a></p>
<p>The library <a href="https://pypi.python.org/pypi/emoji" rel="nofollow">Emoji</a> can help you to convert your Emojis.</p>
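<p>A sketch with that library (assuming its <code>demojize</code>/<code>emojize</code> pair; the exact alias produced for a given emoji can vary between versions):</p>
<pre><code>import emoji  # pip install emoji

text = message['message']['text']
ascii_safe = emoji.demojize(text)     # e.g. a thumbs-up becomes ':thumbs_up:'
restored = emoji.emojize(ascii_safe)  # turns the aliases back into emojis
</code></pre>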
| 1 | 2016-10-07T14:25:55Z | [
"python",
"facebook",
"encoding",
"emoji"
] |
How to check and encode input emojis from Facebook messenger? | 39,919,314 | <p>I'm building a Facebook messenger bot in Python, and everything works fine. But if I send <code>emojis</code> as text from Facebook chat to the API, then it goes wrong.
This is an example of what I receive when I send <code>emojis</code> from Facebook.</p>
<pre><code>{'message': {'mid': 'mid.1475846223244:e7eea53884', 'seq': 10863, 'text': 'í ½í±í ¼í¿½'},
</code></pre>
<p>So the received message is <code>message['message']['text']</code>.
What I want is: whenever I send an emoji (a <code>non text message</code>) from Facebook, I can scan and encode it before I send it to my API. I have read documents before asking this question, but most of them handle specific <code>emojis</code> given by the user, not a general way to scan and encode any <code>emojis</code> (if I missed something, please correct me, because I'm a newbie). Tell me if I need to update my question.</p>
| 0 | 2016-10-07T14:10:16Z | 39,920,668 | <p>You should use the escaped version of the corresponding code point. This is a technique that allows you to express the whole Unicode range using only ASCII characters.</p>
<p>E.g. the emoji 💩 can be represented in Java as <code>"\uD83D\uDCA9"</code> or in Python as <code>u"\U0001F4A9"</code>. <a href="http://www.fileformat.info/info/unicode/char/1f4a9/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/1f4a9/index.htm</a></p>
<p>NB: some emojis are composed of multiple code points, such as flags or families. Please find the complete list of Unicode Emojis here: <a href="http://unicode.org/emoji/charts/full-emoji-list.html" rel="nofollow">http://unicode.org/emoji/charts/full-emoji-list.html</a></p>
| 0 | 2016-10-07T15:17:18Z | [
"python",
"facebook",
"encoding",
"emoji"
] |
How do you preserve precision when scaling a Decimal? | 39,919,432 | <p>How can you scale a <a href="https://docs.python.org/library/decimal.html#decimal.Decimal" rel="nofollow"><code>Decimal</code></a>, but keep its original precision? E.g.,</p>
<pre><code>>>> from decimal import Decimal
>>> Decimal('0.1230')
Decimal('0.1230') # Precision is 4.
</code></pre>
<p>Multiplication does not preserve the original precision:</p>
<pre><code>>>> Decimal('0.1230') * 100
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230') * Decimal('100')
Decimal('12.3000') # Precision is 6.
</code></pre>
<p>The <a href="https://docs.python.org/library/decimal.html#decimal.Decimal.shift" rel="nofollow"><code>.shift()</code></a> method only keeps the same number of decimal places when shifting by a negative amount:</p>
<pre><code>>>> Decimal('0.1230').shift(2)
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230').shift(-2)
Decimal('0.0012') # Precision is 2.
</code></pre>
| 1 | 2016-10-07T14:17:11Z | 39,919,747 | <p>From the answer, <a href="http://stackoverflow.com/a/33948596/47078">http://stackoverflow.com/a/33948596/47078</a>, you could do something like the following. I'm not sure, though, if what you want is possible without additional code: namely to have a Decimal that preserves precision automatically.</p>
<pre><code>>>> import decimal
>>> f = decimal.Context(prec=4).create_decimal('0.1230')
>>> f
Decimal('0.1230')
>>> decimal.Context(prec=4).create_decimal(f * 100)
Decimal('12.30')
</code></pre>
<p>You may want to write your own class or helper methods to make it look nicer.</p>
<pre><code>>>> Decimal4 = decimal.Context(prec=4).create_decimal
>>> Decimal4('0.1230')
Decimal('0.1230')
>>> Decimal4(Decimal4('0.1230') * 100)
Decimal('12.30')
</code></pre>
<p>Looks like (from <a href="https://docs.python.org/2/library/decimal.html#decimal.Context.create_decimal" rel="nofollow">decimal.Context.create_decimal</a>) you can also do it this way:</p>
<pre><code># This is copied verbatim from the Python documentation
>>> getcontext().prec = 3
>>> Decimal('3.4445') + Decimal('1.0023')
Decimal('4.45')
>>> Decimal('3.4445') + Decimal(0) + Decimal('1.0023')
Decimal('4.44')
</code></pre>
<p>Perhaps you could create a method to replace Decimal() that creates the Decimal object and lowers the global precision whenever the new value's precision is less than the current setting.</p>
<pre><code>>>> def make_decimal(value):
... number = decimal.Decimal(value)
... prec = len(number.as_tuple().digits)
... if prec < decimal.getcontext().prec:
... decimal.getcontext().prec = prec
... return number
...
>>> make_decimal('0.1230')
Decimal('0.1230')
>>> make_decimal('0.1230') * 100
Decimal('12.30')
</code></pre>
<p>You would then want a <code>reset_decimal_precision</code> method or an optional parameter to <code>make_decimal</code> to do the same so you could start new calculations. The downside is that it limits you to only one global precision at a time. I think the best solution might be to subclass Decimal() and make it keep track of precision.</p>
| 0 | 2016-10-07T14:31:07Z | [
"python",
"python-2.7"
] |
How do you preserve precision when scaling a Decimal? | 39,919,432 | <p>How can you scale a <a href="https://docs.python.org/library/decimal.html#decimal.Decimal" rel="nofollow"><code>Decimal</code></a>, but keep its original precision? E.g.,</p>
<pre><code>>>> from decimal import Decimal
>>> Decimal('0.1230')
Decimal('0.1230') # Precision is 4.
</code></pre>
<p>Multiplication does not preserve the original precision:</p>
<pre><code>>>> Decimal('0.1230') * 100
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230') * Decimal('100')
Decimal('12.3000') # Precision is 6.
</code></pre>
<p>The <a href="https://docs.python.org/library/decimal.html#decimal.Decimal.shift" rel="nofollow"><code>.shift()</code></a> method only keeps the same number of decimal places when shifting by a negative amount:</p>
<pre><code>>>> Decimal('0.1230').shift(2)
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230').shift(-2)
Decimal('0.0012') # Precision is 2.
</code></pre>
| 1 | 2016-10-07T14:17:11Z | 39,920,314 | <p>Use <a href="https://docs.python.org/2/library/decimal.html#decimal.setcontext" rel="nofollow">setcontext</a>:</p>
<pre><code>>>> mycontext=decimal.Context(prec=4)
>>> decimal.setcontext(mycontext)
>>> decimal.Decimal('0.1230')*decimal.Decimal('100')
Decimal('12.30')
</code></pre>
<p>If you want something that feels more local, use <a href="https://docs.python.org/2/library/decimal.html#decimal.localcontext" rel="nofollow">localcontext</a> and a context:</p>
<pre><code>import decimal as dec
with dec.localcontext() as ctx:
    ctx.prec = 4
    d1 = dec.Decimal('0.1230') * 100  # computed inside the context: prec 4
d2 = dec.Decimal('0.1230') * 100      # computed outside: default context
>>> d1, d2
12.30 12.3000
</code></pre>
<p>If you want the result to take the lesser precision of the two operands (with plain integer strings, i.e. those without a decimal point, falling back to a default), you can do something like:</p>
<pre><code>def muld(s1, s2, dprec=40):
with dec.localcontext() as ctx:
d1=dec.Decimal(s1)
d2=dec.Decimal(s2)
p1=dprec if '.' not in s1 else len(d1.as_tuple().digits)
p2=dprec if '.' not in s2 else len(d2.as_tuple().digits)
ctx.prec=min(p1, p2)
return d1*d2
>>> muld('0.1230', '100')
12.30
>>> muld('0.1230', '1.0')
0.12
</code></pre>
| 1 | 2016-10-07T15:00:29Z | [
"python",
"python-2.7"
] |
How do you preserve precision when scaling a Decimal? | 39,919,432 | <p>How can you scale a <a href="https://docs.python.org/library/decimal.html#decimal.Decimal" rel="nofollow"><code>Decimal</code></a>, but keep its original precision? E.g.,</p>
<pre><code>>>> from decimal import Decimal
>>> Decimal('0.1230')
Decimal('0.1230') # Precision is 4.
</code></pre>
<p>Multiplication does not preserve the original precision:</p>
<pre><code>>>> Decimal('0.1230') * 100
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230') * Decimal('100')
Decimal('12.3000') # Precision is 6.
</code></pre>
<p>The <a href="https://docs.python.org/library/decimal.html#decimal.Decimal.shift" rel="nofollow"><code>.shift()</code></a> method only keeps the same number of decimal places when shifting by a negative amount:</p>
<pre><code>>>> Decimal('0.1230').shift(2)
Decimal('12.3000') # Precision is 6.
>>> Decimal('0.1230').shift(-2)
Decimal('0.0012') # Precision is 2.
</code></pre>
| 1 | 2016-10-07T14:17:11Z | 39,920,543 | <p>The <a href="https://docs.python.org/library/decimal.html#decimal.Decimal.scaleb" rel="nofollow"><code>.scaleb()</code></a> method will scale a decimal number while preserving its precision. From the documentation:</p>
<blockquote>
<p>scaleb(other[, context])</p>
<blockquote>
<p>Return the first operand with <em>exponent</em> adjusted by the second.</p>
</blockquote>
</blockquote>
<p>The key part is that the <em>exponent</em> gets adjusted and thus maintains precision.</p>
<pre><code>>>> Decimal('0.1230').scaleb(2)
Decimal('12.30') # Precision is 4.
>>> Decimal('0.1230').scaleb(-2)
Decimal('0.001230') # Precision is 4.
</code></pre>
| 0 | 2016-10-07T15:11:25Z | [
"python",
"python-2.7"
] |
CSV data causes error when uploading to SQL table via Python | 39,919,459 | <p>So I've been struggling with this problem for the last couple of days. I need to upload a CSV file with about 25 columns & 50K rows into a SQL Server table (<code>zzzOracle_Extract</code>) which also contains 25 columns, same Column names & in the same order.</p>
<p>This is what a row looks like from the CSV file:</p>
<pre><code>['M&M OPTICAL SHOP', '3001211', 'SHORE', '*', 'PO BOX 7891', '', '', '', 'GUAYNABO', 'GUAYNABO', 'PR', '0090', 'United States', '24-NSH RETAIL CUSTOMER', 'SH02-SHORE COLUMN 2', '3001211', '*', '*', '*', '3001211744-BILL_TO', '', '', '', '', 'RACHAEL']
</code></pre>
<p>So in total, there are 25 columns with some values being blank. Maybe this is causing an error. Here is my code:</p>
<pre><code>import csv
import pymssql
conn = pymssql.connect(
server="xxxxxxxxxx",
port = 2433,
user='SQLAdmin',
password='xxxxx',
database='NasrWeb'
)
with open('cleanNVG.csv','r') as f:
reader = csv.reader(f)
columns = next(reader)
query = 'insert into dbo.zzzOracle_Extract({0}) Values({1})'
query = query.format(','.join(columns),','.join('?' * len(columns)))
cursor = conn.cursor()
for data in reader:
print(data) #What a row looks like
cursor.execute(query,data)
cursor.commit()
cursor.close()
print("Done")
conn.close()
</code></pre>
<p>After the script is executed, one of the errors I get is the following:</p>
<pre><code>ValueError: 'params' arg (<class 'list'>) can be only a tuple or a dictionary.
</code></pre>
<p>What can be wrong with my code? I really appreciate the help!</p>
| 0 | 2016-10-07T14:18:32Z | 39,921,143 | <blockquote>
<p>How do join the [ ] to each column in my code?</p>
</blockquote>
<p>So you have something like </p>
<pre class="lang-python prettyprint-override"><code>>>> columns = ['ID','Last Name','First Name']
</code></pre>
<p>and you're currently using </p>
<pre class="lang-python prettyprint-override"><code>>>> ','.join(columns)
'ID,Last Name,First Name'
</code></pre>
<p>but now you need to wrap the column names in square brackets. That could be done with</p>
<pre class="lang-python prettyprint-override"><code>>>> ','.join('[' + x + ']' for x in columns)
'[ID],[Last Name],[First Name]'
</code></pre>
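<p>On the <code>ValueError</code> from the question itself: pymssql expects the parameters as a tuple (or dict) rather than a list, and its placeholder style is <code>%s</code>, not <code>?</code>. A sketch of the adjusted lines, everything else staying as posted:</p>
<pre><code>query = query.format(','.join('[' + x + ']' for x in columns),
                     ','.join(['%s'] * len(columns)))
for data in reader:
    cursor.execute(query, tuple(data))
</code></pre>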
| 1 | 2016-10-07T15:43:45Z | [
"python",
"sql-server",
"python-3.x",
"pymssql"
] |
Pandas: calculating the mean values of duplicate entries in a dataframe | 39,919,570 | <p>I have been working with a dataframe in python and pandas that contains duplicate entries in the first column. The dataframe looks something like this:</p>
<pre><code> sample_id qual percent
0 sample_1 10 20
1 sample_2 20 30
2 sample_1 50 60
3 sample_2 10 90
4 sample_3 100 20
</code></pre>
<p>I want to write something that identifies duplicate entries within the first column and calculates the mean values of the subsequent columns. An ideal output would be something similar to the following:</p>
<pre><code> sample_id qual percent
0 sample_1 30 40
1 sample_2 15 60
2 sample_3 100 20
</code></pre>
<p>I have been struggling with this problem all afternoon and would appreciate any help.</p>
| 1 | 2016-10-07T14:23:35Z | 39,919,681 | <p><code>groupby</code> the <code>sample_id</code> column and use <code>mean</code></p>
<p><code>df.groupby('sample_id').mean().reset_index()</code><br>
<strong><em>or</em></strong><br>
<code>df.groupby('sample_id', as_index=False).mean()</code></p>
<p>get you </p>
<p><a href="http://i.stack.imgur.com/nw7e9.png" rel="nofollow"><img src="http://i.stack.imgur.com/nw7e9.png" alt="enter image description here"></a></p>
| 2 | 2016-10-07T14:28:38Z | [
"python",
"pandas"
] |
Pandas: calculating the mean values of duplicate entries in a dataframe | 39,919,570 | <p>I have been working with a dataframe in python and pandas that contains duplicate entries in the first column. The dataframe looks something like this:</p>
<pre><code> sample_id qual percent
0 sample_1 10 20
1 sample_2 20 30
2 sample_1 50 60
3 sample_2 10 90
4 sample_3 100 20
</code></pre>
<p>I want to write something that identifies duplicate entries within the first column and calculates the mean values of the subsequent columns. An ideal output would be something similar to the following:</p>
<pre><code> sample_id qual percent
0 sample_1 30 40
1 sample_2 15 60
2 sample_3 100 20
</code></pre>
<p>I have been struggling with this problem all afternoon and would appreciate any help.</p>
| 1 | 2016-10-07T14:23:35Z | 39,919,791 | <p>Groupby will work. </p>
<pre><code>data.groupby('sample_id').mean()
</code></pre>
<p>You can then use reset_index() to make it look exactly as you want.</p>
| 0 | 2016-10-07T14:33:43Z | [
"python",
"pandas"
] |
How to do calculation on pandas dataframe that require processing multiple rows? | 39,919,672 | <p>I have a dataframe from which I need to calculate a number of features. The dataframe <code>df</code> looks something like this for an object and an event:</p>
<pre><code>id event_id event_date age money_spent rank
1 100 2016-10-01 4 150 2
2 100 2016-09-30 5 10 4
1 101 2015-12-28 3 350 3
2 102 2015-10-25 5 400 5
3 102 2015-10-25 7 500 2
1 103 2014-04-15 2 1000 1
2 103 2014-04-15 3 180 6
</code></pre>
<p>From this I need to know, for each id and event_id (basically each row): the number of days since the last event date, the total money spent up to that date, the average money spent up to that date, the rank in the last 3 events, etc.</p>
<p>What is the best way to work with this kind of problem in pandas, where for each row I need information from all rows with the same <code>id</code> before the date of that row, and then do the calculations? I want to return a new dataframe with the corresponding calculated features, like</p>
<pre><code>id event_id event_date days_last_event avg_money_spent total_money_spent
1 100 2016-10-01 278 500 1500
2 100 2016-09-30 361 196.67 590
1 101 2015-12-28 622 675 1350
2 102 2015-10-25 558 290 580
3 102 2015-10-25 0 500 500
1 103 2014-04-15 0 1000 1000
2 103 2014-04-15 0 180 180
</code></pre>
| 1 | 2016-10-07T14:28:29Z | 39,920,750 | <p>I came up with the following solution:</p>
<pre><code>df1 = df.sort_values(by="event_date", ascending=False)
g = df1.groupby("id")["money_spent"]
df1["total_money_spent"] = g.cumsum()   # running total per id
df1["count"] = g.cumcount()             # number of rows already seen per id
df1["avg_money_spent"] = df1["total_money_spent"] / (df1["count"] + 1)
</code></pre>
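<p>Note that the descending sort accumulates from the most recent event backwards; if "up to that date" should instead accumulate from the earliest event forward, sort ascending (a variant of the same approach):</p>
<pre><code>df1 = df.sort_values(by="event_date")  # oldest first
g = df1.groupby("id")["money_spent"]
df1["total_money_spent"] = g.cumsum()
df1["avg_money_spent"] = df1["total_money_spent"] / (g.cumcount() + 1)
</code></pre>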
| 0 | 2016-10-07T15:22:08Z | [
"python",
"python-2.7",
"pandas"
] |