title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Hidden references to function arguments causing big memory usage?
| 39,644,060 |
<p><strong>Edit:</strong> Never mind, I was just being completely stupid.</p>
<p>I came across code with recursion on smaller and smaller substrings, here's its essence plus my testing stuff:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
f(s[1:])
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Before the call, my Python process takes about 9 MB (as checked by Windows task manager). After the 500 levels with ~1MB strings, it's at about 513 MB. No surprise, as each call level is still holding on to its string in its <code>s</code> variable.</p>
<p>But I tried to fix it by <strong>replacing</strong> the reference to the string with a reference to the new string and it still goes up to 513 MB:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
s = s[1:]
f(s)
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Why doesn't that let go of the memory? The strings even only get smaller, so later strings would easily fit into the space of earlier strings. Are there hidden additional references to the strings somewhere, or what is going on?</p>
<p>I had expected it to behave like this, which only goes up to 10 MB (a change of 1 MB, as expected because the new string is built while the old string still exists):</p>
<pre><code>input('check your memory usage, then press enter')
s = 'a' * (2**20 + 500)
while len(s) != 2**20:
s = s[1:]
input('check your memory usage again')
</code></pre>
<p>(Never mind the poor time complexity, btw, I know that, don't bother.)</p>
| 0 |
2016-09-22T16:14:50Z
| 39,644,615 |
<p>Your function is recursive, so when you call <code>f()</code>, your current frame is put onto a stack, and a new one is created. So basically each function call keeps a reference to the new string it creates to pass down to the next call.</p>
<p>To illustrate the stack</p>
<pre><code>import traceback

def recursive(x):
    if x:
        recursive(x[1:])
    else:
        traceback.print_stack()

recursive('abc')
</code></pre>
<p>Gives</p>
<pre><code>$ python tmp.py
File "tmp.py", line 10, in <module>
recursive('abc')
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 7, in recursive
traceback.print_stack()
</code></pre>
<p>When the final call to <code>recursive()</code> returns, it returns back into the next call above it which still has the reference to <code>x</code>.</p>
<blockquote>
<p>But I tried to fix it by <strong>replacing</strong> the reference to the string with a reference to the new string and it still goes up to 513 MB</p>
</blockquote>
<p>Well you did in the current function being called, but the function which called it still has the reference to what was passed in. e.g.</p>
<pre><code>def foo(x):
    print "foo1", locals()
    bar(x)
    print "foo2", locals()

def bar(x):
    print "bar1", locals()
    x = "something else"
    print "bar2", locals()

foo('original thing')
</code></pre>
<p>When <code>foo()</code> is called, it passes the string <code>'original thing'</code> to <code>bar()</code>. And even though <code>bar()</code> then gets rid of the reference, the current call above to <code>foo()</code> still has the reference</p>
<pre><code>$ python tmp_x.py
foo1 {'x': 'original thing'}
bar1 {'x': 'original thing'}
bar2 {'x': 'something else'}
foo2 {'x': 'original thing'}
</code></pre>
<p>I hope that illustrates it. I have been a little vague in my first statement about stack frames.</p>
| 2 |
2016-09-22T16:44:36Z
|
[
"python",
"function",
"memory"
] |
Hidden references to function arguments causing big memory usage?
| 39,644,060 |
<p><strong>Edit:</strong> Never mind, I was just being completely stupid.</p>
<p>I came across code with recursion on smaller and smaller substrings, here's its essence plus my testing stuff:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
f(s[1:])
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Before the call, my Python process takes about 9 MB (as checked by Windows task manager). After the 500 levels with ~1MB strings, it's at about 513 MB. No surprise, as each call level is still holding on to its string in its <code>s</code> variable.</p>
<p>But I tried to fix it by <strong>replacing</strong> the reference to the string with a reference to the new string and it still goes up to 513 MB:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
s = s[1:]
f(s)
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Why doesn't that let go of the memory? The strings even only get smaller, so later strings would easily fit into the space of earlier strings. Are there hidden additional references to the strings somewhere, or what is going on?</p>
<p>I had expected it to behave like this, which only goes up to 10 MB (a change of 1 MB, as expected because the new string is built while the old string still exists):</p>
<pre><code>input('check your memory usage, then press enter')
s = 'a' * (2**20 + 500)
while len(s) != 2**20:
s = s[1:]
input('check your memory usage again')
</code></pre>
<p>(Never mind the poor time complexity, btw, I know that, don't bother.)</p>
| 0 |
2016-09-22T16:14:50Z
| 39,644,683 |
<blockquote>
<p>Are there hidden additional references to the strings somewhere or what is going on</p>
</blockquote>
<p>Well, each function has a reference to its string while it's on the stack, so the <code>s = s[1:]</code> is still going to keep <code>s[1:]</code> alive in the next function call. After 500 recursive calls, the fact that you're copying 1 character less each time is insignificant compared to the roughly 2**20 characters that are being passed in each time.</p>
| 1 |
2016-09-22T16:49:11Z
|
[
"python",
"function",
"memory"
] |
Hidden references to function arguments causing big memory usage?
| 39,644,060 |
<p><strong>Edit:</strong> Never mind, I was just being completely stupid.</p>
<p>I came across code with recursion on smaller and smaller substrings, here's its essence plus my testing stuff:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
f(s[1:])
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Before the call, my Python process takes about 9 MB (as checked by Windows task manager). After the 500 levels with ~1MB strings, it's at about 513 MB. No surprise, as each call level is still holding on to its string in its <code>s</code> variable.</p>
<p>But I tried to fix it by <strong>replacing</strong> the reference to the string with a reference to the new string and it still goes up to 513 MB:</p>
<pre><code>def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
s = s[1:]
f(s)
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
</code></pre>
<p>Why doesn't that let go of the memory? The strings even only get smaller, so later strings would easily fit into the space of earlier strings. Are there hidden additional references to the strings somewhere, or what is going on?</p>
<p>I had expected it to behave like this, which only goes up to 10 MB (a change of 1 MB, as expected because the new string is built while the old string still exists):</p>
<pre><code>input('check your memory usage, then press enter')
s = 'a' * (2**20 + 500)
while len(s) != 2**20:
s = s[1:]
input('check your memory usage again')
</code></pre>
<p>(Never mind the poor time complexity, btw, I know that, don't bother.)</p>
| 0 |
2016-09-22T16:14:50Z
| 39,645,025 |
<p>While each call level <strong>does</strong> get rid of its own old string, it <strong>creates</strong> and <strong>keeps</strong> its own <strong>new</strong> string.</p>
<p>(Just putting this in my own words after reading the other answers, more directly addressing what I (question author) had missed.)</p>
| 0 |
2016-09-22T17:09:01Z
|
[
"python",
"function",
"memory"
] |
Coalesce terminal output strings into one string with Python
| 39,644,114 |
<p>I am trying to fetch the output of the <code>fortune</code> command and send it to web-based WhatsApp. I am able to fetch the output from the <code>fortune</code> command, but when I send it to WhatsApp, the <code>fortune</code> output is outputted as separate lines/messages. How do I make them into one and send it as a single message? Thanks. </p>
<pre><code>fortune_list = ["(?i)art","(?i)comp","(?i)cookie","(?i)drugs","(?i)education","(?i)ethnic","(?i)food"]
for i in range(len(fortune_list)):
if re.search(re.compile(fortune_list[i]),reply):
cmd = ['fortune', fortune_list_reply[i]]
output = subprocess.Popen( cmd, stdout=subprocess.PIPE ).communicate()[0]
print output
input_box[1].send_keys(output)
time.sleep(1)
b.find_element_by_class_name('send-container').click()
</code></pre>
<p>The output on the terminal (<code>print output</code>)</p>
<p><a href="http://i.stack.imgur.com/YXo93.png" rel="nofollow"><img src="http://i.stack.imgur.com/YXo93.png" alt="enter image description here"></a></p>
<p>The output sent on WhatsApp as separate messages.</p>
<p><a href="http://i.stack.imgur.com/Lj2vT.png" rel="nofollow"><img src="http://i.stack.imgur.com/Lj2vT.png" alt="enter image description here"></a></p>
<p>Desired output as a single message.</p>
<p><a href="http://i.stack.imgur.com/dFYao.png" rel="nofollow"><img src="http://i.stack.imgur.com/dFYao.png" alt="enter image description here"></a></p>
<p><strong><em>Edit 1 :</em></strong> Using <code>repr</code> : Coalesces the strings but with these characters. Using regex replace to replace the characters isn't working.</p>
<p><code>"XXXI:\n\tThe optimum committee has no members.\nXXXII:\n\tHiring consultants to conduct studies can be an excellent means of\n\tturning problems into gold -- your problems into their gold.\nXXXIII:\n\tFools rush in where incumbents fear to tread.\nXXXIV:\n\tThe process of competitively selecting contractors to perform work\n\tis based on a system of rewards and penalties, all distributed\n\trandomly.\nXXXV:\n\tThe weaker the data available upon which to base one's conclusion,\n\tthe greater the precision which should be quoted in order to give\n\tthe data authenticity.\n\t\t-- Norman Augustine\n"</code></p>
<p><strong><em>Edit 2:</em></strong> Answer added.</p>
| 1 |
2016-09-22T16:17:56Z
| 39,697,319 |
<p>To coalesce the terminal output into one string, split the output at newlines and tabs, skip the empty pieces, and join the rest back together. </p>
<pre><code>sentence = ""
cmd = ['fortune', fortune_list_reply[i]]
output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
output = output.split('\n')
for string in output:
    string = string.split("\t")
    for substring in string:
        if len(substring) == 0:
            continue
        else:
            sentence = sentence + " " + substring
</code></pre>
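<p>If you don't need the intermediate list at all, a shorter route (a sketch of the same idea, not part of the original answer's code) is to let <code>split()</code> handle all whitespace at once and join the pieces back with single spaces:</p>
<pre><code>output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
# split() with no argument splits on any run of whitespace (newlines, tabs, spaces)
sentence = " ".join(output.split())
input_box[1].send_keys(sentence)  # input_box comes from the question's existing Selenium setup
</code></pre>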
| 0 |
2016-09-26T07:48:54Z
|
[
"python",
"shell",
"selenium",
"terminal",
"command"
] |
python pandas how to drop duplicates selectively
| 39,644,167 |
<p>I need to look at all the rows in a column ['b'] and if the row is non-empty go to another corresponding column ['c'] and drop duplicates of this particular index against all other rows in that third column ['c'] while preserving this particular index. I came across drop_duplicates, however I was unable to find a way to only look for duplicates of a highlighted row as opposed to all duplicates in a column. I can't use drop_duplicates on the whole column because I want to retain duplicates in this column that may correspond to only empty values in column ['b']. </p>
<p>So possible scenarios would be: if in ['b'] you find a non empty value, you may go to the current index in ['c'] and find all duplicates of that ONE index and drop those. These duplicates could correspond to empty OR non-empty values in ['b']. If in ['b'] you find empty value skip to next index. This way it is possible that empty value indices in ['b'] get removed indirectly because they are duplicates of an index in ['c'] corresponding from a non empty ['b'] value.</p>
<p>Edited With Sample Data:</p>
<p>Preprocessed:</p>
<pre><code>df1 = pd.DataFrame([['','CCCH'], ['CHC','CCCH'], ['CCHCC','CNHCC'], ['','CCCH'], ['CNHCC','CNOCH'], ['','NCH'], ['','NCH']], columns=['B', 'C'])
df1
B C
0 CCCH
1 CHC CCCH
2 CCHCC CNHCC
3 CCCH
4 CNHCC CNOCH
5 NCH
6 NCH
</code></pre>
<p>Post Processing and dropping correct duplicates:</p>
<pre><code>df2 = pd.DataFrame([['CHC','CCCH'], ['CCHCC','CNHCC'], ['CNHCC','CNOCH'], ['','NCH'], ['','NCH']], columns=['B', 'C'])
df2
B C
1 CHC CCCH
2 CCHCC CNHCC
4 CNHCC CNOCH
5 NCH
6 NCH
</code></pre>
<p>Above we see the result that the only rows removed were rows 0,3 as they are duplicates in column ['C'] of row 1 which has a non zero 'B' value. Row 5,6 are kept even though they are duplicates of each other in column ['C'] because they have no non zero 'B' value. Rows 2 and 4 are kept because they are not duplicates in column ['C']. </p>
<p>So the logic would be to go through each row in column 'B' if it is empty then move down a row and continue. If it is not empty then go to its corresponding column 'C' and drop any duplicates of that column 'C' row ONLY while preserving that index and then continue to the next row untill this logic has been applied to all values in column 'B'.</p>
<p>Column B value empty --> Look at next value in Column B</p>
<p>| or if not empty |</p>
<p>Column B not empty --> Column C --> Drop all duplicates of that index of Column C while keeping the current index --> Look at next value in Column B</p>
| 1 |
2016-09-22T16:21:24Z
| 39,645,757 |
<p>Say you group your DataFrame according to the <code>'C'</code> column, and check each group for the existence of a <code>'B'</code>-column non-empty entry:</p>
<ul>
<li><p>If there is no such entry, return the entire group</p></li>
<li><p>Otherwise, return the group, for the non-empty entries in <code>'B'</code>, with the duplicates dropped</p></li>
</ul>
<p>In code:</p>
<pre><code>def remove_duplicates(g):
    return g if sum(g.B == '') == len(g) else g[g.B != ''].drop_duplicates(subset='B')

>>> df1.groupby(df1.C).apply(remove_duplicates)['B'].reset_index()[['B', 'C']]
B C
0 CHC CCCH
1 CCHCC CNHCC
2 CNHCC CNOCH
3 NCH
4 NCH
</code></pre>
| 0 |
2016-09-22T17:52:41Z
|
[
"python",
"pandas",
"dataframe"
] |
Can you shorten logging headers when using python logging `%(funcName)s`?
| 39,644,189 |
<p>Sometimes I configure the python logging formatter using the <code>%(funcName)s</code>. But I don't like this when the function names are really long.</p>
<p><strong>Can you shorten logging headers when using python logging <code>%(funcName)s</code>? If yes, how?</strong></p>
<p>Can you say... limit the total number of characters to like 10 characters?</p>
| 1 |
2016-09-22T16:22:46Z
| 39,644,322 |
<p>You can extend the <code>logging.Formatter</code> class to get the desired formatting, like below.</p>
<p>This is just an example; you should modify it based on your requirements. (The original snippet referenced an undefined <code>record_msg</code>, so the multi-line message handling below is a best-guess reconstruction.)</p>
<pre><code>class MyFormatter(logging.Formatter):
    in_console = False

    def __init__(self):
        super(MyFormatter, self).__init__()
        self.mod_width = 30
        self.datefmt = '%H:%M:%S'

    def format(self, record):
        # build a fixed-width "module:lineno" column, truncated from the left
        cmodule = record.module + ":" + str(record.lineno)
        cmodule = cmodule[-self.mod_width:].ljust(self.mod_width)
        format_str = ("%-7s %s %s" % (record.levelname, self.formatTime(record, self.datefmt), cmodule))
        pad_len = len(format_str)
        # indent continuation lines of multi-line messages under the header
        record_msg = record.getMessage().split("\n")
        s = [record_msg[0]]
        for line in record_msg[1:]:
            s.append(" " * pad_len + " : " + line)
        return "%s : %s" % (format_str, "\n".join(s))

my_formatter = MyFormatter()
log.setFormatter(my_formatter)  # note: setFormatter() lives on handlers, so `log` should be a Handler here
</code></pre>
| 0 |
2016-09-22T16:29:26Z
|
[
"python",
"logging",
"instrumentation"
] |
Can you shorten logging headers when using python logging `%(funcName)s`?
| 39,644,189 |
<p>Sometimes I configure the python logging formatter using the <code>%(funcName)s</code>. But I don't like this when the function names are really long.</p>
<p><strong>Can you shorten logging headers when using python logging <code>%(funcName)s</code>? If yes, how?</strong></p>
<p>Can you say... limit the total number of characters to like 10 characters?</p>
| 1 |
2016-09-22T16:22:46Z
| 39,644,790 |
<p>The <code>%(...)s</code> items in the logging format string are % replacements, and you can limit the length of a string replacement by doing something like <code>%(funcName).10s</code></p>
<p>e.g.</p>
<pre><code>import logging

logging.basicConfig(
    format='%(funcName).10s %(message)s',
    level=logging.INFO,
)

logger = logging.getLogger()

def short():
    logger.info("I'm only little!")

def really_really_really_really_long():
    logger.info("I'm really long")

short()
really_really_really_really_long()
</code></pre>
<p>gives</p>
<pre><code>andy@batman[17:54:01]:~$ p tmp_x.py
short I'm only little!
really_rea I'm really long
</code></pre>
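<p>If you also want short names padded so the messages line up, the same printf-style syntax takes a field width in front of the precision; a small variation on the example above:</p>
<pre><code>logging.basicConfig(
    format='%(funcName)-10.10s %(message)s',  # pad to 10 characters and truncate at 10
    level=logging.INFO,
)
</code></pre>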
| 2 |
2016-09-22T16:55:49Z
|
[
"python",
"logging",
"instrumentation"
] |
Assigning empty list
| 39,644,202 |
<p>I don't really know how I stumbled upon this, and I don't know what to think about it, but apparently <code>[] = []</code> is a legal operation in python, so is <code>[] = ''</code>, but <code>'' = []</code> is not allowed. It doesn't seem to have any effect though, but I'm wondering: what the hell ? </p>
| 4 |
2016-09-22T16:23:41Z
| 39,644,347 |
<p>This is related to Python's multiple assignment (sequence unpacking):</p>
<pre><code>a, b, c = 1, 2, 3
</code></pre>
<p>works the same as:</p>
<pre><code>[a, b, c] = 1, 2, 3
</code></pre>
<p>Since strings are sequences of characters, you can also do:</p>
<pre><code>a, b, c = "abc" # assign each character to a variable
</code></pre>
<p>What you've discovered is the degenerate case: empty sequences on both sides. It's syntactically valid because it's a list on the left rather than a tuple. Nice find; never thought to try that before!</p>
<p>Interestingly, if you try that with an empty tuple on the left, Python complains:</p>
<pre><code>() = () # SyntaxError: can't assign to ()
</code></pre>
<p>Looks like the Python developers forgot to close a little loophole!</p>
| 3 |
2016-09-22T16:30:51Z
|
[
"python",
"list",
"binding"
] |
Assigning empty list
| 39,644,202 |
<p>I don't really know how I stumbled upon this, and I don't know what to think about it, but apparently <code>[] = []</code> is a legal operation in python, so is <code>[] = ''</code>, but <code>'' = []</code> is not allowed. It doesn't seem to have any effect though, but I'm wondering: what the hell ? </p>
| 4 |
2016-09-22T16:23:41Z
| 39,644,456 |
<p>Do some searching on packing/unpacking in Python and you will find your answer.
This is basically for assigning multiple variables in a single go.</p>
<pre><code>>>> [a,v] = [2,4]
>>> print a
2
>>> print v
4
</code></pre>
| 1 |
2016-09-22T16:36:18Z
|
[
"python",
"list",
"binding"
] |
Argparse: parse multiple subcommands
| 39,644,246 |
<p>Did some research, but couldn't find any working solution. I'm trying to parse the following command line, where 'test' and 'train' are two independent subcommands each having distinct arguments:</p>
<pre><code>./foo.py train -a 1 -b 2
./foo.py test -a 3 -c 4
./foo.py train -a 1 -b 2 test -a 3 -c 4
</code></pre>
<p>I've been trying to use two subparsers ('test', 'train') but it seems like only one can be parsed at a time. Also, it would be great to have those subparsers inherit from the main parser so that, e.g., the '-a' option doesn't have to be added to both the 'train' and 'test' subparsers.</p>
<p>Any solution?</p>
| 0 |
2016-09-22T16:25:37Z
| 39,645,406 |
<p>This has been asked before, though I'm not sure the best way of finding those questions.</p>
<p>The whole subparser mechanism is designed for one such command. There are several things to note:</p>
<ul>
<li><p><code>add_subparsers</code> creates a positional argument; unlike an <code>optional</code>, a <code>positional</code> acts only once.</p></li>
<li><p>'add_subparsers' raises an error if you invoke it several times</p></li>
<li><p>the parsing is built around only one such call</p></li>
</ul>
<p>One workaround that we've proposed in the past is 'nested' or 'recursive' subparsers. In other words, <code>train</code> is set up so that it too takes a subparser. But there's the complication as to whether subparsers are required or not.</p>
<p>Or you can detect and call multiple parsers, bypassing the <code>subparser</code> mechanism.</p>
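<p>A rough sketch of that last approach (not a drop-in solution — the subcommand names and options here are just the ones from the question, and slicing <code>sys.argv</code> by hand skips argparse's normal error handling):</p>
<pre><code>import argparse
import sys

def make_parser(extra_flag):
    # '-a' is shared by both subcommands; extra_flag is the command-specific option
    p = argparse.ArgumentParser()
    p.add_argument('-a')
    p.add_argument(extra_flag)
    return p

parsers = {'train': make_parser('-b'), 'test': make_parser('-c')}

# split argv into chunks, each starting with a known subcommand name
argv = sys.argv[1:]
starts = [i for i, arg in enumerate(argv) if arg in parsers] + [len(argv)]
results = {}
for begin, end in zip(starts, starts[1:]):
    results[argv[begin]] = parsers[argv[begin]].parse_args(argv[begin + 1:end])

print(results)
</code></pre>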
<p>From the sidebar</p>
<p><a href="http://stackoverflow.com/questions/24484035/multiple-invocation-of-the-same-subcommand-in-a-single-command-line">Multiple invocation of the same subcommand in a single command line</a></p>
<p>and</p>
<p><a href="http://stackoverflow.com/questions/31559910/parse-multiple-subcommands-in-python-simultaneously-or-other-way-to-group-parsed">Parse multiple subcommands in python simultaneously or other way to group parsed arguments</a></p>
| 0 |
2016-09-22T17:32:42Z
|
[
"python",
"argparse",
"subcommand",
"subparsers"
] |
Flask and Tornado application does not handle multiple concurrent requests
| 39,644,247 |
<p>I am running a simple Flask app with Tornado, but the view only handles one request at a time. How can I make it handle multiple concurrent requests?</p>
<p>The fix I'm using is to fork and use the multiple processes to handle requests, but I don't like that solution.</p>
<pre><code>from flask import Flask
app = Flask(__name__)
@app.route('/flask')
def hello_world():
return 'This comes from Flask ^_^'
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler, RequestHandler, Application
from flasky import app
class MainHandler(RequestHandler):
def get(self):
self.write("This message comes from Tornado ^_^")
tr = WSGIContainer(app)
application = Application([
(r"/tornado", MainHandler),
(r".*", FallbackHandler, dict(fallback=tr)),
])
if __name__ == "__main__":
application.listen(8000)
IOLoop.instance().start()
</code></pre>
| 0 |
2016-09-22T16:25:39Z
| 39,644,448 |
<p>Your fix of spawning processes is correct in as much as using WSGI with Tornado is "correct". WSGI is a synchronous protocol: one worker handles one request at a time. Flask doesn't know about Tornado, so it can't play nice with it by using coroutines: handling the request happens synchronously.</p>
<p><a href="http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers" rel="nofollow">Tornado has a big warning in their docs about this exact thing.</a></p>
<blockquote>
<p>WSGI is a <em>synchronous</em> interface, while Tornado's concurrency model is based on single-threaded asynchronous execution. This means that running a WSGI app with Tornado's <code>WSGIContainer</code> is <em>less scalable</em> than running the same app in a multi-threaded WSGI server like <a href="http://gunicorn.org/" rel="nofollow">gunicorn</a> or <a href="http://uwsgi-docs.readthedocs.io/en/latest/" rel="nofollow">uwsgi</a>. Use <code>WSGIContainer</code> only when there are benefits to combining Tornado and WSGI in the same process that outweigh the reduced scalability.</p>
</blockquote>
<p>In other words: to handle more concurrent requests with a WSGI application, spawn more workers. The type of worker also matters: threads vs. processes vs. eventlets all have tradeoffs. You're spawning workers by creating processes yourself, but it's more common to use a WSGI server such as uWSGI or Gunicorn.</p>
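<p>For example, assuming the Flask app object lives in <code>flasky.py</code> as in the question, you could drop the Tornado wrapper entirely and serve the WSGI app with several Gunicorn workers (a sketch; the worker count and port are arbitrary):</p>
<pre><code>$ pip install gunicorn
# 4 workers and port 8000 are just example values
$ gunicorn --workers 4 --bind 0.0.0.0:8000 flasky:app
</code></pre>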
| 0 |
2016-09-22T16:35:54Z
|
[
"python",
"flask",
"tornado"
] |
Positive Count // Negative Sum
| 39,644,259 |
<p>A fairly easy problem, but I'm still practicing iterating over multiple variables with for loops. In the below, I seek to return a new list, where x is the count of positive numbers and y is the sum of negative numbers from an input array <code>arr.</code></p>
<p>If the input array is empty or null, I am to return an empty array. </p>
<p>Here's what I've got! </p>
<pre><code>def count_positives_sum_negatives(arr):
return [] if not arr else [(count(x), sum(y)) for x, y in arr]
</code></pre>
<p>Currently receiving... </p>
<p>TypeError: 'int' object is not iterable</p>
| 0 |
2016-09-22T16:26:07Z
| 39,644,378 |
<p>Simply use a <code>sum</code> comprehension</p>
<pre><code>>>> arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14, -15]
>>> sum(1 for x in arr if x > 0)
10
>>> sum(x for x in arr if x < 0)
-65
</code></pre>
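<p>Folding those two comprehensions into the function shape asked for in the question might look like this (a sketch; note that zero is counted as neither positive nor negative here):</p>
<pre><code>def count_positives_sum_negatives(arr):
    if not arr:
        return []
    return [sum(1 for x in arr if x > 0), sum(x for x in arr if x < 0)]

print(count_positives_sum_negatives([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14, -15]))
# [10, -65]
</code></pre>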
| 1 |
2016-09-22T16:32:37Z
|
[
"python",
"list-comprehension",
"multiple-variable-return"
] |
Positive Count // Negative Sum
| 39,644,259 |
<p>A fairly easy problem, but I'm still practicing iterating over multiple variables with for loops. In the below, I seek to return a new list, where x is the count of positive numbers and y is the sum of negative numbers from an input array <code>arr.</code></p>
<p>If the input array is empty or null, I am to return an empty array. </p>
<p>Here's what I've got! </p>
<pre><code>def count_positives_sum_negatives(arr):
return [] if not arr else [(count(x), sum(y)) for x, y in arr]
</code></pre>
<p>Currently receiving... </p>
<p>TypeError: 'int' object is not iterable</p>
| 0 |
2016-09-22T16:26:07Z
| 39,644,844 |
<p>wim's way is good. Numpy is good for these types of things too. </p>
<pre><code>import numpy as np
arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14, -15])
print([arr[arr >= 0].size, arr[arr < 0].sum()])
>> [10, -65]
</code></pre>
| 1 |
2016-09-22T16:58:41Z
|
[
"python",
"list-comprehension",
"multiple-variable-return"
] |
Positive Count // Negative Sum
| 39,644,259 |
<p>A fairly easy problem, but I'm still practicing iterating over multiple variables with for loops. In the below, I seek to return a new list, where x is the count of positive numbers and y is the sum of negative numbers from an input array <code>arr.</code></p>
<p>If the input array is empty or null, I am to return an empty array. </p>
<p>Here's what I've got! </p>
<pre><code>def count_positives_sum_negatives(arr):
return [] if not arr else [(count(x), sum(y)) for x, y in arr]
</code></pre>
<p>Currently receiving... </p>
<p>TypeError: 'int' object is not iterable</p>
| 0 |
2016-09-22T16:26:07Z
| 39,645,221 |
<p>The error you get comes from this part, <code>for x,y in arr</code>, which means that <code>arr</code> is expected to be a list of 2-element tuples (or any similar container), for example <code>[(1,2), (5,7), (7,9)]</code>; but what you have is a list of plain numbers, which can't be unpacked like that.</p>
<p>Now, to get your desired result you can use the solution from <strong>wim</strong>, which needs to iterate over the list twice, or you can get it in one pass with:</p>
<pre><code>>>> def fun(iterable):
        if not iterable:
            return []
        pos = 0
        neg = 0
        for n in iterable:
            if n >= 0:
                pos = pos + 1
            else:
                neg = neg + n
        return [pos, neg]

>>> arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14, -15]
>>> fun(arr)
[10, -65]
>>>
</code></pre>
| 1 |
2016-09-22T17:22:06Z
|
[
"python",
"list-comprehension",
"multiple-variable-return"
] |
Django: Link a field of one model to a field of another
| 39,644,287 |
<p>I'm working through the Tango with Django book and have decided to add some of my own functionality but have an issue.</p>
<p>I have two models, Category and Page</p>
<pre><code>class Category(models.Model):
name = models.CharField(max_length=128, unique=True)
views = models.IntegerField(default=0)
likes = models.IntegerField(default=0)
slug = models.SlugField(unique=True)
class Page(models.Model):
category = models.ForeignKey(Category)
title = models.CharField(max_length=128)
url = models.URLField()
views = models.IntegerField(default=0)
</code></pre>
<p>Now what I'm trying to do is make the Category "views" field a sum of the views of all of the pages within that category</p>
<p>In my test database population script I am doing it this way:</p>
<pre><code>cats = {"Python": {"pages": python_pages,
"views": sum(page["views"] for page in python_pages),
},
"Django": {"pages": django_pages,
"views": sum(page["views"] for page in django_pages),
},
"Other Frameworks": {"pages": other_pages,
"views": sum(page["views"] for page in other_pages),
}
}
</code></pre>
<p>This works for populating the database, but how would I make it so that the category "views" updates whenever a page "views" field is changed? </p>
<p>For example if two different pages in the same category's "views" go up by one, category's "views" would go up by two?</p>
| 1 |
2016-09-22T16:27:44Z
| 39,644,807 |
<p>I would add this to your category class instead of using the views field:</p>
<pre><code>def get_views(self):
    views = 0
    # page_set is the reverse accessor for Page's ForeignKey to Category
    for page in self.page_set.all():
        views += page.views
    return views
</code></pre>
| 0 |
2016-09-22T16:56:32Z
|
[
"python",
"django",
"database",
"models"
] |
Django: Link a field of one model to a field of another
| 39,644,287 |
<p>I'm working through the Tango with Django book and have decided to add some of my own functionality but have an issue.</p>
<p>I have two models, Category and Page</p>
<pre><code>class Category(models.Model):
name = models.CharField(max_length=128, unique=True)
views = models.IntegerField(default=0)
likes = models.IntegerField(default=0)
slug = models.SlugField(unique=True)
class Page(models.Model):
category = models.ForeignKey(Category)
title = models.CharField(max_length=128)
url = models.URLField()
views = models.IntegerField(default=0)
</code></pre>
<p>Now what I'm trying to do is make the Category "views" field a sum of the views of all of the pages within that category</p>
<p>In my test database population script I am doing it this way:</p>
<pre><code>cats = {"Python": {"pages": python_pages,
"views": sum(page["views"] for page in python_pages),
},
"Django": {"pages": django_pages,
"views": sum(page["views"] for page in django_pages),
},
"Other Frameworks": {"pages": other_pages,
"views": sum(page["views"] for page in other_pages),
}
}
</code></pre>
<p>This works for populating the database, but how would I make it so that the category "views" updates whenever a page "views" field is changed? </p>
<p>For example if two different pages in the same category's "views" go up by one, category's "views" would go up by two?</p>
| 1 |
2016-09-22T16:27:44Z
| 39,645,271 |
<p>You should calculate this when you need it, rather than storing it. You can do it with an aggregation on the set of related objects for your category:</p>
<pre><code>from django.db.models import Sum
my_category.page_set.aggregate(Sum('views'))
</code></pre>
<p>or use annotation if you need to get several categories at once with their view count:</p>
<pre><code>Category.objects.all().annotate(Sum('page__views'))
</code></pre>
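<p>The annotated value then shows up as an attribute on each category, for instance (a sketch; <code>total_views</code> is just the alias chosen here):</p>
<pre><code>for category in Category.objects.annotate(total_views=Sum('page__views')):
    print(category.name, category.total_views)  # total_views is the annotation alias, not a model field
</code></pre>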
| 0 |
2016-09-22T17:25:11Z
|
[
"python",
"django",
"database",
"models"
] |
Matplotpib figure in a loop not responding
| 39,644,336 |
<p>Python 2.7.11, Win 7, x64, Numpy 1.10.4, matplotlib 1.5.1</p>
<p>I ran the following script from an iPython console after entering <code>%matplotlib qt</code> at the command line</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import numpy as np
number = input("Number: ")
coords = np.array(np.random.randint(0, number, (number, 3)))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(coords[:,0], coords[:,1], coords[:,2])
plt.show()
</code></pre>
<p>It plots a random scatter in 3D. So I thought it would be a trivial matter to just pop it into a while loop & get a new figure on each iteration.</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import numpy as np
s = True
while s:
number = input("Number: ")
coords = np.array(np.random.randint(0, number, (number, 3)))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(coords[:,0], coords[:,1], coords[:,2])
plt.show()
cont = input("Continue? (y/n)")
if cont == 'n':
s = False
</code></pre>
<p>...but the figures are just blank & unresponsive until I enter an input for <code>cont</code> then I get,</p>
<blockquote>
<p>NameError: name 'y' is not defined</p>
</blockquote>
<p>...and the whole thing crashes.</p>
<p>So what am I missing here?</p>
<p>EDIT: Taking into account Aquatically challenged's answer below. The figures still hang until the loop is exited, then they are all plotted at the same time. Anybody know why the plots are not done within the loop?</p>
| 0 |
2016-09-22T16:30:19Z
| 39,644,651 |
<p>I haven't replicated this, but when you type the response, try wrapping it in single (or double) quotes, i.e. enter <code>'y'</code> or <code>'n'</code> rather than bare <code>y</code> or <code>n</code>.</p>
<p>To accept strings without quotes, use <code>raw_input</code> instead of <code>input</code>,</p>
<p>as described here <a href="http://stackoverflow.com/questions/4960208/python-2-7-getting-user-input-and-manipulating-as-string-without-quotations">Python 2.7 getting user input and manipulating as string without quotations</a></p>
| 1 |
2016-09-22T16:47:20Z
|
[
"python",
"numpy",
"matplotlib"
] |
Matplotpib figure in a loop not responding
| 39,644,336 |
<p>Python 2.7.11, Win 7, x64, Numpy 1.10.4, matplotlib 1.5.1</p>
<p>I ran the following script from an iPython console after entering <code>%matplotlib qt</code> at the command line</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import numpy as np
number = input("Number: ")
coords = np.array(np.random.randint(0, number, (number, 3)))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(coords[:,0], coords[:,1], coords[:,2])
plt.show()
</code></pre>
<p>It plots a random scatter in 3D. So I thought it would be a trivial matter to just pop it into a while loop & get a new figure on each iteration.</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import numpy as np
s = True
while s:
number = input("Number: ")
coords = np.array(np.random.randint(0, number, (number, 3)))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(coords[:,0], coords[:,1], coords[:,2])
plt.show()
cont = input("Continue? (y/n)")
if cont == 'n':
s = False
</code></pre>
<p>...but the figures are just blank & unresponsive until I enter an input for <code>cont</code> then I get,</p>
<blockquote>
<p>NameError: name 'y' is not defined</p>
</blockquote>
<p>...and the whole thing crashes.</p>
<p>So what am I missing here?</p>
<p>EDIT: Taking into account Aquatically challenged's answer below. The figures still hang until the loop is exited, then they are all plotted at the same time. Anybody know why the plots are not done within the loop?</p>
| 0 |
2016-09-22T16:30:19Z
| 39,648,572 |
<p><a href="https://docs.python.org/2.7/library/functions.html?highlight=input#input" rel="nofollow"><code>input</code></a> tries to <a href="https://docs.python.org/2.7/library/functions.html?highlight=input#eval" rel="nofollow"><code>eval</code></a> the string you type in, treating it as though it is Python code, then returns the result of the evaluation. For example, if <code>result = input()</code> and I type in <code>2 + abs(-3)</code> then <code>result</code> will be equal to <code>5</code>.</p>
<p>When you enter the string <code>y</code> this is treated as a variable name. Since you haven't defined any variable named <code>y</code> you will get a <code>NameError</code>. Instead of <code>input</code> you want to use <a href="https://docs.python.org/2.7/library/functions.html#raw_input" rel="nofollow"><code>raw_input</code></a>, which just returns the input string without trying to evaluate it.</p>
<hr>
<p>In order to get your figures to display within the while loop you need to insert a short pause to allow the contents of the figure to be drawn before you continue executing your <code>while</code> loop. You could use <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.pause" rel="nofollow"><code>plt.pause</code></a>, which also takes care of updating the active figure.</p>
<pre><code>s = True
while s:
    number = input("Number: ")
    coords = np.array(np.random.randint(0, number, (number, 3)))
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(coords[:,0], coords[:,1], coords[:,2])
    plt.pause(0.1)
    cont = raw_input("Continue? (y/n)")
    if cont == 'n':
        s = False
</code></pre>
| 1 |
2016-09-22T20:44:22Z
|
[
"python",
"numpy",
"matplotlib"
] |
Matplotlib animation with blit -- how to update plot title?
| 39,644,461 |
<p>I use matplotlib to animate a plot, by copying the background and blitting:</p>
<pre><code>f = Figure(tight_layout=True)
canvas = FigureCanvasTkAgg(f, master=pframe)
canvas.get_tk_widget().pack()
ax = f.add_subplot(111)
# Set inial plot title
title = ax.set_title("First title")
canvas.show()
# Capture the background of the figure
background = canvas.copy_from_bbox(ax.bbox)
line, = ax.plot(x, y)
canvas._tkcanvas.pack()
</code></pre>
<p>Periodically I update the plot:</p>
<pre><code># How to update the title here?
line.set_ydata(new_data)
ax.draw_artist(line)
canvas.blit(ax.bbox)
</code></pre>
<p>How could I update -- as efficiently as possible, the plot title every time I update the plot?</p>
<p>Edit:</p>
<pre><code>title.set_text("New title")
ax.draw_artist(title)
</code></pre>
<p>either before or after</p>
<pre><code>canvas.blit(ax.bbox)
</code></pre>
<p>does not update the title. I think somehow I should redraw the <code>title</code> artist, or I should only capture the graph, as <code>blit(ax.bbox)</code> overwrites the entire title plot area including the title.</p>
| 0 |
2016-09-22T16:36:41Z
| 39,664,639 |
<p>The following draws the plot and allows you to make changes,</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(1,1)
ax.plot(np.linspace(0.,10.,100))
title = ax.set_title("First title")
plt.show(block=False)
</code></pre>
<p>The title can then be updated using,</p>
<pre><code>title.set_text("Second title")
plt.draw()
</code></pre>
| 0 |
2016-09-23T15:36:34Z
|
[
"python",
"animation",
"matplotlib",
"blit"
] |
Django Template Tags Creating Spaces
| 39,644,478 |
<p>I am working on a Django site which is live at <a href="http://petervkay.pythonanywhere.com/police_archive/officer/1903/" rel="nofollow">this site</a>. I am getting unwanted spaces in my output caused by unwanted whitespace in the HTML. </p>
<p>For instance, "01-1737 , Civilian Review Authority , INAPPROPRIATE LANGUAGE, SUSTAINED," has extra spaces before most of the commas.</p>
<p>I have found other posts with similar problems, but no solution has worked for me. I tried the {% spaceless %} tag, but that didn't work. The only thing that did work for me was putting all of the template tags in the for loop on a single line, but I'd really like to find a more readable solution than this.</p>
<p>Here is the code for the Django template:</p>
<pre><code>{% extends 'police_archive/base.html' %}
{% block content %}
<h2> {{officer.first_name}} {{officer.last_name}}, badge #{{officer.badge}} </h2>
<p><strong>Department:</strong> {{officer.department}}</p>
<h2>Complaints</h2>
<ul>
{% for details in details_list %}
<li>
{% if details.incident.case_number %}
<a href='/police_archive/complaint/{{details.incident.case_number}}'>
{{details.incident.case_number}}
</a>
{% else %}
No Case Number Found
{% endif %}
{% if details.incident.office %}
, {{details.incident.get_office_display}}
{% else %}
, No office found
{% endif %}
{% if details.allegation %}
, {{details.allegation}}
{% endif %}
{% if details.finding %}
, {{details.finding}}
{% endif %}
{% if details.action %}
, {{details.action}}
{% endif %}
</li>
{% endfor %}
{% endblock %}
</code></pre>
| 2 |
2016-09-22T16:38:08Z
| 39,645,098 |
<p>The reason <code>{% spaceless %}</code> didn't remove all the space for you is because it only works <strong>between</strong> HTML tags. Your whitespace is showing up <strong>within</strong> the <code><li></code> tag.</p>
<p>I can't seem to find a good solution for Django's standard templating system, but it does look like <a href="http://jinja.pocoo.org/docs/dev/templates/" rel="nofollow">Jinja</a> offers what you're looking for. It uses a dash to strip trailing or leading whitespace:</p>
<pre><code>{% for item in seq -%}
{{ item }}
{%- endfor %}
</code></pre>
<p>In order to use Jinja instead of Django's default templating system, you'll have to change your <code>settings.py</code> file as described by <a href="https://docs.djangoproject.com/en/1.10/topics/templates/#s-configuration" rel="nofollow">Django's docs</a>:</p>
<pre><code>TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.jinja2.Jinja2',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            # ... some options here ...
        },
    },
]
</code></pre>
| 1 |
2016-09-22T17:14:25Z
|
[
"python",
"django"
] |
How can i remove all extra characters from list of strings to convert to ints
| 39,644,486 |
<p>Hi I'm pretty new to programming and Python, and this is my first post, so I apologize for any poor form.</p>
<p>I am scraping a website's download counts and am receiving the following error when attempting to convert the list of string numbers to integers to get the sum.
<strong>ValueError: invalid literal for int() with base 10: '1,015'</strong></p>
<p>I have tried .replace() but it does not seem to be doing anything.</p>
<p>And tried to build an if statement to take the commas out of any string that contains them:
<a href="http://stackoverflow.com/questions/3437059/does-python-have-a-string-contains-substring-method">Does Python have a string contains substring method?</a></p>
<p>Here's my code:</p>
<pre><code> downloadCount = pageHTML.xpath('//li[@class="download"]/text()')
downloadCount_clean = []
for download in downloadCount:
downloadCount_clean.append(str.strip(download))
for item in downloadCount_clean:
if "," in item:
item.replace(",", "")
print(downloadCount_clean)
downloadCount_clean = map(int, downloadCount_clean)
total = sum(downloadCount_clean)
</code></pre>
| -1 |
2016-09-22T16:38:27Z
| 39,644,556 |
<p>Strings are not mutable in Python. So when you call <code>item.replace(",", "")</code>, the method returns what you want, but it is not stored anywhere (thus not in <code>item</code>).</p>
<p><strong>EDIT :</strong></p>
<p>I suggest this :</p>
<pre><code>for i in range(len(downloadCount_clean)):
    if "," in downloadCount_clean[i]:
        downloadCount_clean[i] = downloadCount_clean[i].replace(",", "")
</code></pre>
<p><strong>SECOND EDIT :</strong></p>
<p>For a bit more simplicity and/or elegance :</p>
<pre><code>for index, value in enumerate(downloadCount_clean):
    downloadCount_clean[index] = int(value.replace(",", ""))
</code></pre>
| 2 |
2016-09-22T16:41:47Z
|
[
"python",
"replace",
"string-conversion"
] |
How can i remove all extra characters from list of strings to convert to ints
| 39,644,486 |
<p>Hi I'm pretty new to programming and Python, and this is my first post, so I apologize for any poor form.</p>
<p>I am scraping a website's download counts and am receiving the following error when attempting to convert the list of string numbers to integers to get the sum.
<strong>ValueError: invalid literal for int() with base 10: '1,015'</strong></p>
<p>I have tried .replace() but it does not seem to be doing anything.</p>
<p>And tried to build an if statement to take the commas out of any string that contains them:
<a href="http://stackoverflow.com/questions/3437059/does-python-have-a-string-contains-substring-method">Does Python have a string contains substring method?</a></p>
<p>Here's my code:</p>
<pre><code> downloadCount = pageHTML.xpath('//li[@class="download"]/text()')
downloadCount_clean = []
for download in downloadCount:
downloadCount_clean.append(str.strip(download))
for item in downloadCount_clean:
if "," in item:
item.replace(",", "")
print(downloadCount_clean)
downloadCount_clean = map(int, downloadCount_clean)
total = sum(downloadCount_clean)
</code></pre>
| -1 |
2016-09-22T16:38:27Z
| 39,644,689 |
<p>For simplicity's sake:</p>
<pre><code>>>> aList = ["abc", "42", "1,423", "def"]
>>> bList = []
>>> for i in aList:
... bList.append(i.replace(',',''))
...
>>> bList
['abc', '42', '1423', 'def']
</code></pre>
<p>or working just with a single list:</p>
<pre><code>>>> aList = ["abc", "42", "1,423", "def"]
>>> for i, x in enumerate(aList):
... aList[i]=(x.replace(',',''))
...
>>> aList
['abc', '42', '1423', 'def']
</code></pre>
<p>Not sure if this one breaks any python rules or not :)</p>
| 0 |
2016-09-22T16:49:31Z
|
[
"python",
"replace",
"string-conversion"
] |
How to find ASN.1 components of EC key python-cryptography
| 39,644,514 |
<p>I am generating a EC key using python cryptography module in this way</p>
<pre><code>from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import ec
key=ec.generate_private_key(ec.SECP256R1(), default_backend())
</code></pre>
<p>The asn.1 structure of EC key is as follows</p>
<pre><code> ECPrivateKey ::= SEQUENCE {
version INTEGER { ecPrivkeyVer1(1) } (ecPrivkeyVer1),
privateKey OCTET STRING,
parameters [0] ECParameters {{ NamedCurve }} OPTIONAL,
publicKey [1] BIT STRING OPTIONAL
}
</code></pre>
<p>from <a href="https://tools.ietf.org/html/rfc5915">https://tools.ietf.org/html/rfc5915</a> setion 3.</p>
<p>my question is how to get the ASN.1 components from this key. I want to convert the key object to OpenSSH private key, something like</p>
<pre><code>-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,9549ED842979FDAF5299BD7B0E25B384
Z+B7I6jfgC9C03Kcq9rbWKo88mA5+YqxSFpnfRG4wkm2eseWBny62ax9Y1izGPvb
J7gn2eBjEph9xobNewgPfW6/3ZDw9VGeaBAYRkSolNRadyN2Su6OaT9a2gKiVQi+
mqFeJmxsLyvew9XPkZqQIjML1d1M3T3oSA32zYX21UY=
-----END EC PRIVATE KEY-----
</code></pre>
<p>It is easy when handling DSA or RSA because all the ASN.1 parameters are integers there.</p>
<p>Thank You in advance</p>
| 6 |
2016-09-22T16:39:48Z
| 39,694,727 |
<p>It's relatively easy to extract the public point from the ASN.1 sequence using <a href="https://pypi.python.org/pypi/pyasn1" rel="nofollow">pyasn1</a>, but if you want PEM-encrypted PKCS1 (aka "traditional OpenSSL") then pyca/cryptography can do that quite easily:</p>
<pre><code>from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec
backend = default_backend()
key = ec.generate_private_key(ec.SECP256R1(), backend)
serialized_key = key.private_bytes(
serialization.Encoding.PEM,
serialization.PrivateFormat.TraditionalOpenSSL,
serialization.BestAvailableEncryption(b"my_great_password")
)
</code></pre>
<p>You can find more information about the <a href="https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/#cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKeyWithSerialization.private_bytes" rel="nofollow">private_bytes</a> method in the docs. At this time <code>BestAvailableEncryption</code> will encrypt using <code>AES-256-CBC</code>.</p>
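<p>As a quick round-trip check (a sketch using the same objects and password as above), the serialized bytes can be loaded back with <code>load_pem_private_key</code>:</p>
<pre><code>from cryptography.hazmat.primitives.serialization import load_pem_private_key

# reuses serialized_key, backend and the password from the snippet above
restored_key = load_pem_private_key(
    serialized_key, password=b"my_great_password", backend=backend
)
</code></pre>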
| 0 |
2016-09-26T04:26:39Z
|
[
"python",
"cryptography",
"openssh",
"python-cryptography"
] |
Slice list of lists without numpy
| 39,644,517 |
<p>In Python, how could I slice my list of lists and get a sub list of lists without numpy?</p>
<p>For example, get a list of lists from A[1][1] to A[2][2] and store it in B:</p>
<pre><code>A = [[1, 2, 3, 4 ],
[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34]]
B = [[12, 13],
[22, 23]]
</code></pre>
| 2 |
2016-09-22T16:39:55Z
| 39,644,552 |
<p>You can <em>slice</em> <code>A</code> and its sublists:</p>
<pre><code>In [1]: A = [[1, 2, 3, 4 ],
...: [11, 12, 13, 14],
...: [21, 22, 23, 24],
...: [31, 32, 33, 34]]
In [2]: B = [l[1:3] for l in A[1:3]]
In [3]: B
Out[3]: [[12, 13], [22, 23]]
</code></pre>
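<p>If you do this in several places, a small helper keeps the intent obvious (a sketch; the name and argument style are arbitrary):</p>
<pre><code>def subgrid(grid, rows, cols):
    """Return the sub-list-of-lists given by the row and column slices."""
    return [row[cols] for row in grid[rows]]

B = subgrid(A, slice(1, 3), slice(1, 3))
# [[12, 13], [22, 23]]
</code></pre>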
| 7 |
2016-09-22T16:41:36Z
|
[
"python",
"list",
"slice"
] |
Slice list of lists without numpy
| 39,644,517 |
<p>In Python, how could I slice my list of lists and get a sub list of lists without numpy?</p>
<p>For example, get a list of lists from A[1][1] to A[2][2] and store it in B:</p>
<pre><code>A = [[1, 2, 3, 4 ],
[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34]]
B = [[12, 13],
[22, 23]]
</code></pre>
| 2 |
2016-09-22T16:39:55Z
| 39,645,000 |
<p>You may also perform nested list slicing using <a href="https://docs.python.org/2/library/functions.html#map" rel="nofollow"><code>map()</code></a> function as:</p>
<pre><code>B = map(lambda x: x[1:3], A[1:3])
# Value of B: [[12, 13], [22, 23]]
</code></pre>
<p>where <code>A</code> is the list mentioned in the question.</p>
| 0 |
2016-09-22T17:07:46Z
|
[
"python",
"list",
"slice"
] |
Django 1.9 adding a class to form label
| 39,644,629 |
<p>In my forms.py file I reconfigured the class UserForm's <code>__init__</code> function to include a css class. Unfortunately doing this only allowed me to add a class to the <code><input></code> and not the <code><label></code>. How can I add a class to the label as well?</p>
<p>Here is my forms.py file:</p>
<pre><code>class UserForm(forms.ModelForm):
# password = forms.CharField(widget=forms.PasswordInput())
def __init__(self, *args, **kwargs):
super(UserForm, self).__init__(*args, **kwargs)
self.fields['email'].widget.attrs = {
'class': 'form-control'
}
class Meta:
model = User
fields = ('email',)
def clean_email(self):
email = self.cleaned_data['email']
if not email:
raise forms.ValidationError(u'Please enter an email address.')
if User.objects.exclude(pk=self.instance.pk).filter(email=email).exists():
raise forms.ValidationError(u'Email "%s" is already in use.' % email)
return email
</code></pre>
<p>Thanks!</p>
| 0 |
2016-09-22T16:45:46Z
| 39,645,855 |
<p>Look at <a href="https://github.com/kmike/django-widget-tweaks/" rel="nofollow">Django Widget Tweaks</a>.</p>
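<p>If all you need is a class on the <code><label></code> itself, another option that doesn't require an extra package is to render the label manually in the template (a sketch; the class name is just an example):</p>
<pre><code>{# any class name works here; id_for_label keeps the label linked to the input #}
<label class="form-control-label" for="{{ form.email.id_for_label }}">
    {{ form.email.label }}
</label>
{{ form.email }}
</code></pre>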
| 0 |
2016-09-22T17:58:53Z
|
[
"python",
"django",
"django-forms",
"django-templates",
"django-1.9"
] |
How to take the nth digit of a number in python
| 39,644,638 |
<p>I want to take the nth digit from an N digit number in python. For example:</p>
<pre><code>number = 9876543210
i = 4
number[i] # should return 6
</code></pre>
<p>How can I do something like that in python? Should I change it to string first and then change it to int for the calculation?</p>
| -3 |
2016-09-22T16:46:31Z
| 39,644,706 |
<p>First treat the number like a string</p>
<pre><code>number = 9876543210
number = str(number)
</code></pre>
<p>Then to get the first digit:</p>
<pre><code>number[0]
</code></pre>
<p>The fourth digit:</p>
<pre><code>number[3]
</code></pre>
<p>EDIT:</p>
<p>This will return the digit as a character, not as a number. To convert it back use:</p>
<pre><code>int(number[0])
</code></pre>
| 2 |
2016-09-22T16:50:04Z
|
[
"python",
"int"
] |
How to take the nth digit of a number in python
| 39,644,638 |
<p>I want to take the nth digit from an N digit number in python. For example:</p>
<pre><code>number = 9876543210
i = 4
number[i] # should return 6
</code></pre>
<p>How can I do something like that in python? Should I change it to string first and then change it to int for the calculation?</p>
| -3 |
2016-09-22T16:46:31Z
| 39,644,726 |
<p>You can do it with integer division and remainder methods</p>
<pre><code>def get_digit(number, n):
    return number // 10**n % 10

get_digit(987654321, 0)
# 1
get_digit(987654321, 5)
# 6
</code></pre>
<p>The <code>//</code> does integer division by a power of ten to move the digit to the ones position, then the <code>%</code> gets the remainder after division by 10. Note that the numbering in this scheme uses zero-indexing and starts from the right side of the number. </p>
| 0 |
2016-09-22T16:51:18Z
|
[
"python",
"int"
] |
Notation for intervals?
| 39,644,748 |
<p>I want to make a Python class for intervals of real numbers. Syntax most closely related to mathematical notation would be <code>Interval([a, b))</code> or, even better, <code>Interval[a, b)</code> to construct the interval of all real <code>x</code> satisfying <code>a <= x < b</code>.</p>
<p>Is it possible to construct a class that would handle this syntax? </p>
| 3 |
2016-09-22T16:53:10Z
| 39,644,792 |
<p>It's impossible to "fix" syntactically invalid python by making a custom class.</p>
<p>I think the closest you can get to the mathematical interval notation in python is</p>
<pre><code>Interval('[a, b)')
</code></pre>
<p>This way becomes even more lightweight if you are passing intervals as arguments to a function and the function converts its arguments to an appropriate type before using them. Example:</p>
<pre><code>def do_foo(interval, bar, baz):
    interval = Interval(interval)
    # do stuff

do_foo('[3,4)', 42, True)
</code></pre>
<p><a href="http://ideone.com/BXBeXi" rel="nofollow">Possible implementation</a>:</p>
<pre><code>import re
class Interval:
def __init__(self, interval):
"""Initialize an Interval object from a string representation of an interval
e.g: Interval('(3,4]')"""
if isinstance(interval, Interval):
self.begin, self.end = interval.begin, interval.end
self.begin_included = interval.begin_included
self.end_included = interval.end_included
return
number_re = '-?[0-9]+(?:\.[0-9]+)?'
interval_re = ('^\s*'
+'(\[|\()' # opening bracket
+ '\s*'
+ '(' + number_re + ')' # beginning of the interval
+ '\s*,\s*'
+ '(' + number_re + ')' # end of the interval
+ '\s*'
+ '(\]|\))' # closing bracket
+ '\s*$'
)
match = re.search(interval_re, interval)
if match is None:
raise ValueError('Got an incorrect string representation of an interval: {!r}'. format(interval))
opening_brecket, begin, end, closing_brecket = match.groups()
self.begin, self.end = float(begin), float(end)
if self.begin >= self.end:
raise ValueError("Interval's begin shoud be smaller than it's end")
self.begin_included = opening_brecket == '['
self.end_included = closing_brecket == ']'
# It might have been better to use number_re = '.*' and catch the exceptions float() raises instead
def __repr__(self):
return 'Interval({!r})'.format(str(self))
def __str__(self):
opening_bracket = '[' if self.begin_included else '('
closing_bracket = ']' if self.end_included else ')'
return '{}{}, {}{}'.format(opening_bracket, self.begin, self.end, closing_bracket)
def __contains__(self, number):
if self.begin < number < self.end:
return True
if number == self.begin:
return self.begin_included
if number == self.end:
return self.end_included
</code></pre>
| 4 |
2016-09-22T16:55:51Z
|
[
"python"
] |
Notation for intervals?
| 39,644,748 |
<p>I want to make a Python class for intervals of real numbers. Syntax most closely related to mathematical notation would be <code>Interval([a, b))</code> or, even better, <code>Interval[a, b)</code> to construct the interval of all real <code>x</code> satisfying <code>a <= x < b</code>.</p>
<p>Is it possible to construct a class that would handle this syntax? </p>
| 3 |
2016-09-22T16:53:10Z
| 39,645,859 |
<p>You cannot make this exact syntax work. But you <em>could</em> do something like this by overriding the relevant comparison methods:</p>
<pre><code>a <= Interval() < b
</code></pre>
<p>This whole expression could then return a new <code>Interval</code> object that includes everything greater than or equal to a and strictly less than b. <code>Interval()</code> by itself could be interpreted as the fully open interval from negative to positive infinity (i.e. the unbounded interval of all real numbers), and <code>Interval() < b</code> by itself could refer to an interval bounded from above but not from below.</p>
<p>NumPy uses a similar technique for array comparison operations (where A < B means "return an array of ones and zeros that correspond to whether or not each element of A is less than the respective element of B").</p>
| 0 |
2016-09-22T17:59:04Z
|
[
"python"
] |
Notation for intervals?
| 39,644,748 |
<p>I want to make a Python class for intervals of real numbers. Syntax most closely related to mathematical notation would be <code>Interval([a, b))</code> or, even better, <code>Interval[a, b)</code> to construct the interval of all real <code>x</code> satisfying <code>a <= x < b</code>.</p>
<p>Is it possible to construct a class that would handle this syntax? </p>
| 3 |
2016-09-22T16:53:10Z
| 39,646,218 |
<p>You can't change Python's existing syntax rules (without changing the whole language), but you can get usably close to what you want:</p>
<pre><code>class Interval(object):
def __init__(self, left_bracket, a, b, right_bracket):
if len(left_bracket) !=1 or left_bracket not in '[(':
raise ValueError(
'Unknown left bracket character: {!r}'.format(left_bracket))
if len(right_bracket) !=1 or right_bracket not in '])':
raise ValueError(
'Unknown right bracket character: {!r}'.format(right_bracket))
if a < b:
self.lower, self.upper = a, b
else:
self.lower, self.upper = b, a
self.left_bracket, self.right_bracket = left_bracket, right_bracket
if left_bracket == '[':
if right_bracket == ']':
self._contains = (
lambda self, val: self.lower <= val <= self.upper)
else:
self._contains = (
lambda self, val: self.lower <= val < self.upper)
else:
if right_bracket == ']':
self._contains = (
lambda self, val: self.lower < val <= self.upper)
else:
self._contains = (
lambda self, val: self.lower < val < self.upper)
__contains__ = lambda self, val: self._contains(self, val)
def __str__(self):
return '{}{}, {}{}'.format(self.left_bracket, self.lower, self.upper,
self.right_bracket)
def __repr__(self):
return '{}({!r}, {}, {}, {!r})'.format(self.__class__.__name__,
self.left_bracket, self.lower, self.upper, self.right_bracket)
if __name__ == '__main__':
interval1 = Interval('[', 1, 3, ']') # closed interval
interval2 = Interval('[', 1, 3, ')') # half-open interval
print('{} in {}? {}'.format(3, interval1, 3 in interval1))
print('{} in {}? {}'.format(3, interval2, 3 in interval2))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>3 in [1, 3]? True
3 in [1, 3)? False
</code></pre>
<p>Note: The <code>a</code> and <code>b</code> arguments can be any type that can be compared.</p>
| 1 |
2016-09-22T18:19:32Z
|
[
"python"
] |
Call text but totally exclude tables
| 39,644,778 |
<p>I am using Beautiful Soup to load an XML file. All I need is the text, ignoring the tags, and the <code>text</code> attribute works nicely.</p>
<p>However, I would like to totally exclude anything within <code><table></table></code> tags. I had the idea of substituting everything in between with a regex, but I am wondering whether there is a cleaner solution, partly because of <a href="http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454">Don't parse [X]HTML with regex!</a>. For instance:</p>
<pre><code>s =""" <content><p>Hasselt ( ) is a <link target="Belgium">Belgian</link> <link target="city">city</link> and <link target="Municipalities in Belgium">municipality</link>.
<table><cell>Passenger growth
<cell>Year</cell><cell>Passengers</cell><cell>Percentage </cell></cell>
<cell>1996</cell><cell>360 000</cell><cell>100%</cell>
<cell>1997</cell><cell>1 498 088</cell><cell>428%</cell>
</table>"""
clean = Soup(s)
print clean.text
</code></pre>
<p>will give </p>
<pre><code>Hasselt ( ) is a Belgian city and municipality.
Passenger growth
YearPassengersPercentage
1996360 000100%
19971 498 088428%
</code></pre>
<p>whereas I only want:</p>
<pre><code>Hasselt ( ) is a Belgian city and municipality.
</code></pre>
| 1 |
2016-09-22T16:55:05Z
| 39,644,824 |
<p>You can locate the <code>content</code> element and remove all <code>table</code> elements from it, then get the text:</p>
<pre><code>from bs4 import BeautifulSoup
s =""" <content><p>Hasselt ( ) is a <link target="Belgium">Belgian</link> <link target="city">city</link> and <link target="Municipalities in Belgium">municipality</link>.
<table><cell>Passenger growth
<cell>Year</cell><cell>Passengers</cell><cell>Percentage </cell></cell>
<cell>1996</cell><cell>360 000</cell><cell>100%</cell>
<cell>1997</cell><cell>1 498 088</cell><cell>428%</cell>
</table>"""
soup = BeautifulSoup(s, "xml")
content = soup.content
for table in content("table"):
table.extract()
print(content.get_text().strip())
</code></pre>
<p>Prints:</p>
<pre><code>Hasselt ( ) is a Belgian city and municipality.
</code></pre>
| 1 |
2016-09-22T16:57:24Z
|
[
"python",
"xml",
"xml-parsing",
"beautifulsoup"
] |
Storing Kernel in Separate File - PyOpenCL
| 39,644,821 |
<p>I'm trying to store the kernel part of the code, the part between the 3 """, in a different file. I tried saving it as a text file and a bin file and reading it in, but I didn't have any success with it. It kept giving me an error saying """ is missing, or ) is missing. However, if I just copy-paste the kernel code into cl.Program(, it works.</p>
<p>So, is there a way to abstract long kernel code out into another file? This is specific to Python, thank you!</p>
<pre><code>#Kernel function
prg = cl.Program(ctx, """
__kernel void sum(__global double *a, __global double *b, __global double *c)
{
int gid = get_global_id(0);
c[gid] = 1;
}
""").build()
</code></pre>
<p>So pretty much everything inside """ """, the second argument of the cl.Program() function, is what I want to move into a different file.</p>
| 0 |
2016-09-22T16:57:19Z
| 39,646,088 |
<p>Just put your kernel code in a plain text file, and then use <code>open(...).read()</code> to get the contents:</p>
<p><strong>foo.cl</strong></p>
<pre><code>__kernel void sum(__global double *a, __global double *b, __global double *c)
{
int gid = get_global_id(0);
c[gid] = 1;
}
</code></pre>
<p><strong>Python code</strong></p>
<pre><code>prg = cl.Program(ctx, open('foo.cl').read()).build()
</code></pre>
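<p>If you prefer the file handle to be closed promptly, the same thing with a context manager:</p>
<pre><code>with open('foo.cl') as kernel_file:
    kernel_src = kernel_file.read()

prg = cl.Program(ctx, kernel_src).build()
</code></pre>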
| 0 |
2016-09-22T18:12:23Z
|
[
"python",
"opencl",
"pycuda",
"pyopencl"
] |
Turning a list into a string to then finding the location of the words in the string
| 39,644,847 |
<p><strong>Code: (Python 3.5.2)</strong></p>
<pre><code>import time
import sys
def Word_Position_Finder():
Chosen_Sentence = input("Make a simple sentence: ")
Sentence_List = Chosen_Sentence.split()
if len(Chosen_Sentence) == 0:
print("Your Sentence has no words! Restarting Program.")
time.sleep(1)
Restarting_Program()
print(Sentence_List)
time.sleep(1)
Users_Choice = input("Do you want to make a new sentence (press 1) or keep current sentence (press 2): ")
if Users_Choice == "1":
print("Restarting Program.")
time.sleep(1)
Restarting_Program()
elif Users_Choice == "2":
print(Chosen_Sentence + ". This is your sentence.")
Chosen_Word = input("Which word in your sentence do you want to find the position of? ")
for index, word in enumerate(Sentence_List):
if(word == Chosen_Word):
print("Your word appears in the number " + str(index) + " slot of this sentence")
elif Chosen_Word not in Users_Sentence:
print("That word isn't in the sentence")
Choose_To_Restart()
else:
print("Restarting Program.")
time.sleep(1)
Restarting_Program()
def Choose_To_Restart():
time.sleep(1)
loop = input("Want to try again, Y/N?")
if loop.upper() == "Y" or loop.upper() == "YES":
print("Restarting Program")
time.sleep(1)
Restarting_Program()
elif loop.upper() == "N" or loop.upper() == "NO":
print("Ending Program")
time.sleep(1)
sys.exit("Program Ended")
else:
print("That isn't a valid answer, going to assume you said no.")
time.sleep(1)
sys.exit("Program Ended")
def Restarting_Program():
Word_Position_Finder()
Word_Position_Finder()
</code></pre>
<p><strong>Question:</strong></p>
<ul>
<li>I'm having a problem fully turning a user's list back into a string to be printed, and after that, printing the location of the user's chosen word. The code which needs help is between the "This code needs fixing:" lines, which obviously aren't in the actual code.</li>
</ul>
<p>I'm writing code in which a user inputs a sentence; the sentence is then turned into a list and shown to the user to decide if he/she is happy with it. After that (the bit I am having trouble with) the user chooses a word in their sentence that they want to know the location of, and I must print the locations of the chosen word(s). I decided to show all my code in full so anyone reading could help me improve the overall code (since I do enjoy coding but I only do it for school tasks).</p>
| 0 |
2016-09-22T16:58:58Z
| 39,645,311 |
<p>How's this?</p>
<pre><code> words_list = Users_Sentence.split()
for index, word in enumerate(words_list):
if(word == Chosen_Word):
print("Your word appears in the number " + str(index) + " slot of this sentence")
</code></pre>
<p>Here's some console output to show what split and enumerate are doing:</p>
<pre><code> >>> a="This is an example sentence"
>>> words = a.split()
>>> print(words)
['This', 'is', 'an', 'example', 'sentence']
>>> for i, word in enumerate(words):
... print(i)
... print(word)
...
0
This
1
is
2
an
3
example
4
sentence
</code></pre>
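<p>If the word can occur more than once, or might be missing, here is a small sketch of how that could be handled (reusing the names from above):</p>
<pre><code>positions = [i for i, word in enumerate(words_list) if word == Chosen_Word]
if positions:
    for i in positions:
        print("Your word appears in the number " + str(i) + " slot of this sentence")
else:
    print("That word isn't in the sentence")
</code></pre>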
| 1 |
2016-09-22T17:27:51Z
|
[
"python"
] |
How to convert the string '1.000,0.001' to the complex number (1+0.001j)?
| 39,644,848 |
<p>The best I could come up with is</p>
<pre><code>s = '1.000,0.001'
z = [float(w) for w in s.split(',')]
x = complex(z[0],z[1])
</code></pre>
<p>Is there a shorter, cleaner, nicer way?</p>
| 1 |
2016-09-22T16:59:01Z
| 39,644,899 |
<p>I guess you could do the slightly shorter</p>
<pre><code>real, imag = s.split(',')
x = complex(float(real), float(imag))
</code></pre>
<p>without involving the list comprehension.</p>
| 0 |
2016-09-22T17:02:12Z
|
[
"python",
"complex-numbers"
] |
How to convert the string '1.000,0.001' to the complex number (1+0.001j)?
| 39,644,848 |
<p>The best I could come up with is</p>
<pre><code>s = '1.000,0.001'
z = [float(w) for w in s.split(',')]
x = complex(z[0],z[1])
</code></pre>
<p>Is there a shorter, cleaner, nicer way?</p>
| 1 |
2016-09-22T16:59:01Z
| 39,644,903 |
<p>What you have is fine. The only improvement I could suggest is to use </p>
<pre><code>complex(*z)
</code></pre>
<p>If you want to one-liner it:</p>
<pre><code>>>> complex(*map(float, s.split(',')))
(1+0.001j)
</code></pre>
| 2 |
2016-09-22T17:02:22Z
|
[
"python",
"complex-numbers"
] |
How to convert the string '1.000,0.001' to the complex number (1+0.001j)?
| 39,644,848 |
<p>The best I could come up with is</p>
<pre><code>s = '1.000,0.001'
z = [float(w) for w in s.split(',')]
x = complex(z[0],z[1])
</code></pre>
<p>Is there a shorter, cleaner, nicer way?</p>
| 1 |
2016-09-22T16:59:01Z
| 39,644,904 |
<p>There's a more concise way, but it's not really any cleaner and it's certainly not clearer.</p>
<pre><code>x = complex(*[float(w) for w in '1.000,.001'.split(',')])
</code></pre>
| 2 |
2016-09-22T17:02:30Z
|
[
"python",
"complex-numbers"
] |
How to convert the string '1.000,0.001' to the complex number (1+0.001j)?
| 39,644,848 |
<p>The best I could come up with is</p>
<pre><code>s = '1.000,0.001'
z = [float(w) for w in s.split(',')]
x = complex(z[0],z[1])
</code></pre>
<p>Is there a shorter, cleaner, nicer way?</p>
| 1 |
2016-09-22T16:59:01Z
| 39,645,633 |
<p>If you can trust the data to not be dangerous or want this for code golf:</p>
<pre><code>>>> eval('complex(%s)' % s)
(1+0.001j)
</code></pre>
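<p>A safer variant of the same one-liner is <code>ast.literal_eval</code>, which only evaluates literals; the string parses as a tuple of two floats:</p>
<pre><code>&gt;&gt;&gt; import ast
&gt;&gt;&gt; complex(*ast.literal_eval(s))
(1+0.001j)
</code></pre>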
| -1 |
2016-09-22T17:45:58Z
|
[
"python",
"complex-numbers"
] |
python: best way convey missing value count
| 39,644,866 |
<p>I currently have a data frame with 9 features and some features have missing values. I do the following to get the <code>count</code> of missing values in each feature:</p>
<pre><code>df.isnull().sum()
</code></pre>
<p>which gives me:</p>
<pre><code>A 0
B 0
C 15844523
D 717
E 18084
F 118679
G 0
H 978505
I 0
</code></pre>
<p>I want to display this information in a nice way. I can always create a table in the report but is there any other way to display this in a plot?</p>
| 1 |
2016-09-22T17:00:39Z
| 39,645,171 |
<p>You can visualize the count of the missing values with vertical bars.</p>
<p>Use the pandas.DataFrame.plot() method:</p>
<pre><code>df.isnull().sum().plot(kind='bar')
</code></pre>
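<p>For a report you may also want to sort the bars and label the axes; a small sketch along the same lines:</p>
<pre><code>import matplotlib.pyplot as plt

counts = df.isnull().sum().sort_values(ascending=False)
ax = counts.plot(kind='bar')
ax.set_xlabel('feature')
ax.set_ylabel('number of missing values')
plt.tight_layout()
plt.show()
</code></pre>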
<p>For fancier plots you can use the Python library
<a href="https://plot.ly/python/" rel="nofollow"><strong>plot.ly</strong></a>.</p>
| 1 |
2016-09-22T17:19:24Z
|
[
"python",
"python-2.7",
"pandas",
"visualization",
"data-visualization"
] |
python: best way convey missing value count
| 39,644,866 |
<p>I currently have a data frame with 9 features and some features have missing values. I do the following to get the <code>count</code> of missing values in each feature:</p>
<pre><code>df.isnull().sum()
</code></pre>
<p>which gives me:</p>
<pre><code>A 0
B 0
C 15844523
D 717
E 18084
F 118679
G 0
H 978505
I 0
</code></pre>
<p>I want to display this information in a nice way. I can always create a table in the report but is there any other way to display this in a plot?</p>
| 1 |
2016-09-22T17:00:39Z
| 39,645,233 |
<p>I think you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html" rel="nofollow"><code>numpy.log</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.bar.html" rel="nofollow"><code>Series.plot.bar</code></a>:</p>
<pre><code>import matplotlib.pyplot as plt
np.log(s).plot.bar()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/XlAPA.png" rel="nofollow"><img src="http://i.stack.imgur.com/XlAPA.png" alt="log"></a></p>
<p>Another solution is to categorize the data into bins with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow"><code>cut</code></a> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.bar.html" rel="nofollow"><code>Series.plot.bar</code></a>:</p>
<pre><code>import matplotlib.pyplot as plt
#convert Series to one column df with column name 'name'
df = s.rename('name').to_frame()
bins = [-1,1, 10, 100, 1000,10000,100000,1000000,10000000, 100000000,np.Inf]
labels=[0,1,2,3,4,5,6,7,8,9]
df['label'] = pd.cut(df['name'], bins=bins, labels=labels)
print (df.label)
A 0
B 0
C 8
D 3
E 5
F 6
G 0
H 6
I 0
Name: label, dtype: category
Categories (10, int64): [0 < 1 < 2 < 3 ... 6 < 7 < 8 < 9]
df.label.astype(int).plot.bar()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/jnBTw.png" rel="nofollow"><img src="http://i.stack.imgur.com/jnBTw.png" alt="binned graph"></a></p>
<p>I think it is nicer than the plot of the raw column <code>name</code>:</p>
<pre><code>df.name.plot.bar()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/cjS1G.png" rel="nofollow"><img src="http://i.stack.imgur.com/cjS1G.png" alt="orig"></a></p>
| 2 |
2016-09-22T17:22:42Z
|
[
"python",
"python-2.7",
"pandas",
"visualization",
"data-visualization"
] |
django validation error message get displayed twice
| 39,644,875 |
<p>I just want to have an error message when a user fails to log in due to an invalid username or password. Is there any preferable way to do it other than overriding the clean method? I found that Django has a login_failed signal, but I am unsure whether it is best to use that.</p>
<p>Here is the print screen
<a href="http://i.stack.imgur.com/CiGzH.png" rel="nofollow">validation error message get displayed twice</a></p>
<p>Here is my updated code from github :</p>
<p><a href="https://github.com/afdallismen/Django-error-message-displayed-twice" rel="nofollow">https://github.com/afdallismen/Django-error-message-displayed-twice</a></p>
<p>Here is my python version and output from pip list, and pip freeze</p>
<p>python -V</p>
<pre><code>Python 3.5.2
</code></pre>
<p>pip list</p>
<pre><code>Django (1.10.1)
pip (8.1.2)
setuptools (27.1.2)
wheel (0.29.0)
</code></pre>
<p>pip freeze</p>
<pre><code>Django==1.10.1
</code></pre>
<p>forms.py</p>
<pre><code>class AuthorLogin(forms.Form):
username = forms.CharField(label='Your name', max_length=100)
password = forms.CharField(
label='Your password',
max_length=100,
widget=forms.PasswordInput)
def clean(self):
username = self.cleaned_data.get('username')
password = self.cleaned_data.get('password')
user = authenticate(username=username, password=password)
if not user or not user.is_active:
raise forms.ValidationError('Invalid username or password', code='invalid')
return self.cleaned_data
def login(self, request):
username = self.cleaned_data.get('username')
password = self.cleaned_data.get('password')
user = authenticate(username=username, password=password)
return user
def form_invalid(self, form):
return self.render_to_response(self.get_context_data(form=form))
</code></pre>
<p>view.py</p>
<pre><code>def author_login(request):
form = AuthorLogin(request.POST or None)
if request.POST and form.is_valid():
user = form.login(request)
if user is not None:
login(request, user)
return redirect('microblog:index')
return render(request, 'microblog/author_login.html', {'form': form})
</code></pre>
<p>urls.py</p>
<pre><code>app_name = 'microblog'
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^login/', views.author_login, name='author_login'),
url(r'^logout/', views.author_logout, name='author_logout'),
]
</code></pre>
<p>author_login.html</p>
<pre><code>{% extends "base.html" %}
{% block content %}
{% if form.non_field_errors %}
<ul>
{% for error in form.non_field_errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
{% endif %}
<form action="" method="post">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Login" />
</form>
{% endblock %}
</code></pre>
<p>base.html</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Microblog</title>
</head>
<body>
{% if user.is_authenticated %}
<p>Welcome, {{ request.user.username }}</p>
<a href="{% url 'microblog:author_logout' %}">Logout</a>
{% else %}
<a href="{% url 'microblog:author_login' %}">Login</a>
{% endif %}
<nav>
<ul>
<li><a href="{% url 'microblog:index' %}">Microblog</a></li>
</ul>
</nav>
{% block content %}{% endblock %}
</body>
</html>
</code></pre>
<p>As a workaround I tried to send the validation error through the form.add_error method in views.py, removing the clean, login and form_invalid methods from forms.py.</p>
<p>Here is my new views.py :</p>
<pre><code>def author_login(request):
if request.method == 'POST':
form = AuthorLogin(request.POST)
if form.is_valid():
username = form.cleaned_data['username']
password = form.cleaned_data['password']
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
return redirect('microblog:index')
else:
form.add_error(None, 'Invalid username or password.') # This line seems to get executed twice
else:
form = AuthorLogin()
return render(request, 'microblog/author_login.html', {'form': form})
</code></pre>
<p>It turns out it still displayed the error message twice. Now it seems form.add_error was called twice; was it?</p>
<p>Tried this :</p>
<pre><code>def author_login(request):
    if request.method == 'POST':
count = 0
...
if form.is_valid()
....
if user is not None:
....
else:
count = count + 1
print(count)
</code></pre>
<p>It prints 1 and only printed once, so author_login with the POST method executed just once, and the same goes for the else block that prints count. But form.add_error gets executed twice?</p>
<p>It is just getting weird. Right now I am using form.add_error in views.py. Then in the template I can fetch the error message through the messages context, and it does show the message once:</p>
<pre><code>{% if messages %}
<ul class="messages">
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
| 0 |
2016-09-22T17:01:04Z
| 39,645,206 |
<p>The <code>clean</code> method needs to call <code>super()</code>:</p>
<pre><code>def clean(self):
cleaned_data = super(AuthorLogin, self).clean() #insert this line
username = cleaned_data.get('username')
...
</code></pre>
| 0 |
2016-09-22T17:21:12Z
|
[
"python",
"django"
] |
django validation error message get displayed twice
| 39,644,875 |
<p>I just want to have an error message when a user fails to log in due to an invalid username or password. Is there any preferable way to do it other than overriding the clean method? I found that Django has a login_failed signal, but I am unsure whether it is best to use that.</p>
<p>Here is the print screen
<a href="http://i.stack.imgur.com/CiGzH.png" rel="nofollow">validation error message get displayed twice</a></p>
<p>Here is my updated code from github :</p>
<p><a href="https://github.com/afdallismen/Django-error-message-displayed-twice" rel="nofollow">https://github.com/afdallismen/Django-error-message-displayed-twice</a></p>
<p>Here is my python version and output from pip list, and pip freeze</p>
<p>python -V</p>
<pre><code>Python 3.5.2
</code></pre>
<p>pip list</p>
<pre><code>Django (1.10.1)
pip (8.1.2)
setuptools (27.1.2)
wheel (0.29.0)
</code></pre>
<p>pip freeze</p>
<pre><code>Django==1.10.1
</code></pre>
<p>forms.py</p>
<pre><code>class AuthorLogin(forms.Form):
username = forms.CharField(label='Your name', max_length=100)
password = forms.CharField(
label='Your password',
max_length=100,
widget=forms.PasswordInput)
def clean(self):
username = self.cleaned_data.get('username')
password = self.cleaned_data.get('password')
user = authenticate(username=username, password=password)
if not user or not user.is_active:
raise forms.ValidationError('Invalid username or password', code='invalid')
return self.cleaned_data
def login(self, request):
username = self.cleaned_data.get('username')
password = self.cleaned_data.get('password')
user = authenticate(username=username, password=password)
return user
def form_invalid(self, form):
return self.render_to_response(self.get_context_data(form=form))
</code></pre>
<p>view.py</p>
<pre><code>def author_login(request):
form = AuthorLogin(request.POST or None)
if request.POST and form.is_valid():
user = form.login(request)
if user is not None:
login(request, user)
return redirect('microblog:index')
return render(request, 'microblog/author_login.html', {'form': form})
</code></pre>
<p>urls.py</p>
<pre><code>app_name = 'microblog'
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^login/', views.author_login, name='author_login'),
url(r'^logout/', views.author_logout, name='author_logout'),
]
</code></pre>
<p>author_login.html</p>
<pre><code>{% extends "base.html" %}
{% block content %}
{% if form.non_field_errors %}
<ul>
{% for error in form.non_field_errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
{% endif %}
<form action="" method="post">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Login" />
</form>
{% endblock %}
</code></pre>
<p>base.html</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Microblog</title>
</head>
<body>
{% if user.is_authenticated %}
<p>Welcome, {{ request.user.username }}</p>
<a href="{% url 'microblog:author_logout' %}">Logout</a>
{% else %}
<a href="{% url 'microblog:author_login' %}">Login</a>
{% endif %}
<nav>
<ul>
<li><a href="{% url 'microblog:index' %}">Microblog</a></li>
</ul>
</nav>
{% block content %}{% endblock %}
</body>
</html>
</code></pre>
<p>As a workaround I tried to send the validation error through the form.add_error method in views.py, removing the clean, login and form_invalid methods from forms.py.</p>
<p>Here is my new views.py :</p>
<pre><code>def author_login(request):
if request.method == 'POST':
form = AuthorLogin(request.POST)
if form.is_valid():
username = form.cleaned_data['username']
password = form.cleaned_data['password']
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
return redirect('microblog:index')
else:
form.add_error(None, 'Invalid username or password.') # This line seems to get executed twice
else:
form = AuthorLogin()
return render(request, 'microblog/author_login.html', {'form': form})
</code></pre>
<p>It turns out it still displayed the error message twice. Now it seems form.add_error was called twice; was it?</p>
<p>Tried this :</p>
<pre><code>def author_login(request):
    if request.method == 'POST':
count = 0
...
if form.is_valid()
....
if user is not None:
....
else:
count = count + 1
print(count)
</code></pre>
<p>It prints 1 and only printed once, so author_login with the POST method executed just once, and the same goes for the else block that prints count. But form.add_error gets executed twice?</p>
<p>It is just getting weird. Right now I am using form.add_error in views.py. Then in the template I can fetch the error message through the messages context, and it does show the message once:</p>
<pre><code>{% if messages %}
<ul class="messages">
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
| 0 |
2016-09-22T17:01:04Z
| 39,651,617 |
<p>You can try this, as described in the documentation:</p>
<pre><code>def clean(self):
super(AuthorLogin, self).clean()
...
# not return anything
</code></pre>
| 0 |
2016-09-23T02:25:23Z
|
[
"python",
"django"
] |
Generic binary operation in a class definition?
| 39,644,885 |
<p>I am writing a tiny linear algebra module in Python 3, and there are a number of binary operators to define. Since each definition of a binary operator is essentially the same with only the operator itself changed, I would like to save some work by writing a generic binary operator definition only once.</p>
<p>For example:</p>
<pre><code>class Vector(tuple):
def __new__(self, x):
super().__new__(x)
# Binary operators
def __add__(self, xs):
try:
            return Vector(a + x for a, x in zip(self, xs))
except:
return Vector(a + x for a in self)
def __and__(self, xs):
try:
return Vector(a & x for a, x in zip(self, xs))
except:
return Vector(a & x for a in self)
... # mul, div, or, sub, and all other binary operations
</code></pre>
<p>The binary operators above all have the same form. Only the operator is changed. I wonder if I could instead write all operators at once, something like this:</p>
<pre><code>def __bop__(self, xs):
bop = get_bop_somehow()
try:
        return Vector(bop(a, x) for a, x in zip(self, xs))
except:
return Vector(bop(a, x) for a in self)
</code></pre>
<p>I've heard that Python can do magical things with the <code>__getattr__</code> method, which I tried to use to extract the name of the operator like so:</p>
<pre><code>def __getattr__(self, name):
print('Method name:', name.strip('_'))
</code></pre>
<p>But, unfortunately, this only works when called using the full method name, not when an operator is used. How can I write a one-size-fits-all binary operator definition?</p>
| 2 |
2016-09-22T17:01:28Z
| 39,645,057 |
<p>You <em>can</em> do magical things with <code>__getattr__</code>, but if you can avoid doing so then I would - it starts to get complicated! In this situation you'd likely need to overwrite <code>__getattribute__</code>, but please don't because you will bite yourself in the seat of your own pants if you start messing around with <code>__getattribute__</code>.</p>
<p>You can achieve this in a very simple way, by simply defining the first one, and then doing <code>__and__ = __add__</code> in the other functions.</p>
<pre><code>class MyClass(object):
def comparison_1(self, thing):
return self is not thing
comparison_2 = comparison_1
A = MyClass()
print A.comparison_1(None)
print A.comparison_2(None)
print A.comparison_1(A)
print A.comparison_2(A)
</code></pre>
<p>gives</p>
<pre><code>$ python tmp_x.py
True
True
False
False
</code></pre>
<p>However, I'm not a fan of this kind of hackery. I would just do</p>
<pre><code>class MyClass(object):
def comparison_1(self, thing):
"Compares this thing and another thing"
return self is not thing
def comparison_2(self, thing):
"compares this thing and another thing as well"
return self.comparison_1(thing)
</code></pre>
<p>Better to write the extra couple of lines for clarity.</p>
<h1>EDIT:</h1>
<p>So I tried it with <code>__getattribute__</code>, doesn't work :/. I admit I don't know why.</p>
<pre><code>class MyClass(object):
def add(self, other):
print self, other
return None
def __getattribute__(self, attr):
if attr == '__add__':
attr = 'add'
return object.__getattribute__(self, attr)
X = MyClass()
print X.__add__
X + X
</code></pre>
<p>Doesn't work :/</p>
<pre><code>andy@batman[18:15:12]:~$ p tmp_x.py
<bound method MyClass.add of <__main__.MyClass object at 0x7f52932ea450>>
Traceback (most recent call last):
File "tmp_x.py", line 15, in <module>
X + X
TypeError: unsupported operand type(s) for +: 'MyClass' and 'MyClass'
</code></pre>
| 0 |
2016-09-22T17:11:20Z
|
[
"python",
"class"
] |
Generic binary operation in a class definition?
| 39,644,885 |
<p>I am writing a tiny linear algebra module in Python 3, and there are a number of binary operators to define. Since each definition of a binary operator is essentially the same with only the operator itself changed, I would like to save some work by writing a generic binary operator definition only once.</p>
<p>For example:</p>
<pre><code>class Vector(tuple):
def __new__(self, x):
super().__new__(x)
# Binary operators
def __add__(self, xs):
try:
            return Vector(a + x for a, x in zip(self, xs))
except:
return Vector(a + x for a in self)
def __and__(self, xs):
try:
return Vector(a & x for a, x in zip(self, xs))
except:
return Vector(a & x for a in self)
... # mul, div, or, sub, and all other binary operations
</code></pre>
<p>The binary operators above all have the same form. Only the operator is changed. I wonder if I could instead write all operators at once, something like this:</p>
<pre><code>def __bop__(self, xs):
bop = get_bop_somehow()
try:
        return Vector(bop(a, x) for a, x in zip(self, xs))
except:
return Vector(bop(a, x) for a in self)
</code></pre>
<p>I've heard that Python can do magical things with the <code>__getattr__</code> method, which I tried to use to extract the name of the operator like so:</p>
<pre><code>def __getattr__(self, name):
print('Method name:', name.strip('_'))
</code></pre>
<p>But, unfortunately, this only works when called using the full method name, not when an operator is used. How can I write a one-size-fits-all binary operator definition?</p>
| 2 |
2016-09-22T17:01:28Z
| 39,645,129 |
<p>You can do this with the <code>operator</code> module, which gives you functional versions of the operators. For example, <code>operator.and_(a, b)</code> is the same as <code>a & b</code>.</p>
<p>So <code>return Vector(a + x for a in self)</code> becomes <code>return Vector(op(a, x) for a in self)</code> and you can parameterize <code>op</code>. You still need to define all of the magic methods, but they can be simple pass-throughs.</p>
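<p>A rough sketch of how that could look for the <code>Vector</code> class from the question (only a few operators shown; the helper name <code>_binop</code> is made up for this example):</p>
<pre><code>import operator

class Vector(tuple):
    def _binop(self, xs, op):
        try:
            return Vector(op(a, x) for a, x in zip(self, xs))
        except TypeError:
            # xs is not iterable, so treat it as a scalar
            return Vector(op(a, xs) for a in self)

    def __add__(self, xs):
        return self._binop(xs, operator.add)

    def __and__(self, xs):
        return self._binop(xs, operator.and_)

    def __mul__(self, xs):
        return self._binop(xs, operator.mul)
</code></pre>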
| 1 |
2016-09-22T17:16:35Z
|
[
"python",
"class"
] |
Generic binary operation in a class definition?
| 39,644,885 |
<p>I am writing a tiny linear algebra module in Python 3, and there are a number of binary operators to define. Since each definition of a binary operator is essentially the same with only the operator itself changed, I would like to save some work by writing a generic binary operator definition only once.</p>
<p>For example:</p>
<pre><code>class Vector(tuple):
def __new__(self, x):
super().__new__(x)
# Binary operators
def __add__(self, xs):
try:
            return Vector(a + x for a, x in zip(self, xs))
except:
return Vector(a + x for a in self)
def __and__(self, xs):
try:
return Vector(a & x for a, x in zip(self, xs))
except:
return Vector(a & x for a in self)
... # mul, div, or, sub, and all other binary operations
</code></pre>
<p>The binary operators above all have the same form. Only the operator is changed. I wonder if I could instead write all operators at once, something like this:</p>
<pre><code>def __bop__(self, xs):
bop = get_bop_somehow()
try:
        return Vector(bop(a, x) for a, x in zip(self, xs))
except:
return Vector(bop(a, x) for a in self)
</code></pre>
<p>I've heard that Python can do magical things with the <code>__getattr__</code> method, which I tried to use to extract the name of the operator like so:</p>
<pre><code>def __getattr__(self, name):
print('Method name:', name.strip('_'))
</code></pre>
<p>But, unfortunately, this only works when called using the full method name, not when an operator is used. How can I write a one-size-fits-all binary operator definition?</p>
| 2 |
2016-09-22T17:01:28Z
| 39,645,305 |
<p>You could use a class decorator to mutate your class and add them all in with the help of a factory function:</p>
<pre><code>import operator
def natural_binary_operators(cls):
for name, op in {
'__add__': operator.add,
'__sub__': operator.sub,
'__mul__': operator.mul,
'__truediv__': operator.truediv,
'__floordiv__': operator.floordiv,
'__and__': operator.and_,
'__or__': operator.or_,
'__xor__': operator.xor
}.items():
setattr(cls, name, cls._make_binop(op))
return cls
@natural_binary_operators
class Vector(tuple):
@classmethod
def _make_binop(cls, operator):
def binop(self, other):
try:
return cls([operator(a, x) for a, x in zip(self, other)])
except:
return cls([operator(a, other) for a in self])
return binop
</code></pre>
<p>There are a few other ways to do this, but the general idea is still the same.</p>
| 2 |
2016-09-22T17:27:15Z
|
[
"python",
"class"
] |
Generic binary operation in a class definition?
| 39,644,885 |
<p>I am writing a tiny linear algebra module in Python 3, and there are a number of binary operators to define. Since each definition of a binary operator is essentially the same with only the operator itself changed, I would like to save some work by writing a generic binary operator definition only once.</p>
<p>For example:</p>
<pre><code>class Vector(tuple):
def __new__(self, x):
super().__new__(x)
# Binary operators
def __add__(self, xs):
try:
            return Vector(a + x for a, x in zip(self, xs))
except:
return Vector(a + x for a in self)
def __and__(self, xs):
try:
return Vector(a & x for a, x in zip(self, xs))
except:
return Vector(a & x for a in self)
... # mul, div, or, sub, and all other binary operations
</code></pre>
<p>The binary operators above all have the same form. Only the operator is changed. I wonder if I could instead write all operators at once, something like this:</p>
<pre><code>def __bop__(self, xs):
bop = get_bop_somehow()
try:
        return Vector(bop(a, x) for a, x in zip(self, xs))
except:
return Vector(bop(a, x) for a in self)
</code></pre>
<p>I've heard that Python can do magical things with the <code>__getattr__</code> method, which I tried to use to extract the name of the operator like so:</p>
<pre><code>def __getattr__(self, name):
print('Method name:', name.strip('_'))
</code></pre>
<p>But, unfortunately, this only works when called using the full method name, not when an operator is used. How can I write a one-size-fits-all binary operator definition?</p>
| 2 |
2016-09-22T17:01:28Z
| 39,645,463 |
<h2>Update:</h2>
<p>This might be super-slow, but you can create an abstract class with all of the binary methods and inherit from it.</p>
<pre><code>import operator
def binary_methods(cls):
operator_list = (
'__add__', '__sub__', '__mul__', '__truediv__',
'__floordiv__', '__and__', '__or__', '__xor__'
)
for name in operator_list:
bop = getattr(operator, name)
method = cls.__create_binary_method__(bop)
setattr(cls, name, method)
return cls
@binary_methods
class AbstractBinary:
@classmethod
def __create_binary_method__(cls, bop):
def binary_method(self, xs):
try:
return self.__class__(bop(a, x) for a, x in zip(self, xs))
except:
return self.__class__(bop(a, x) for a in self)
return binary_method
class Vector(AbstractBinary, tuple):
def __new__(self, x):
return super(self, Vector).__new__(Vector, x)
</code></pre>
<hr>
<h2>Original:</h2>
<p>Okay, I think I've got a working solution (only tested in Python 2.X) that uses a class decorator to dynamically create the binary methods.</p>
<pre><code>import operator
def add_methods(cls):
operator_list = ('__add__', '__and__', '__mul__')
for name in operator_list:
func = getattr(operator, name)
# func needs to be a default argument to avoid the late-binding closure issue
method = lambda self, xs, func=func: cls.__bop__(self, func, xs)
setattr(cls, name, method)
return cls
@add_methods
class Vector(tuple):
def __new__(self, x):
return super(self, Vector).__new__(Vector, x)
def __bop__(self, bop, xs):
try:
return Vector(bop(a, x) for a, x in zip(self, xs))
except:
return Vector(bop(a, x) for a in self)
</code></pre>
<p>Here's some example usage:</p>
<pre><code>v1 = Vector((1,2,3))
v2 = Vector((3,4,5))
print v1 * v2
# (3, 8, 15)
</code></pre>
| 1 |
2016-09-22T17:36:47Z
|
[
"python",
"class"
] |
Errors using sys from Python Embedded in C++
| 39,644,907 |
<p>I am using Eclipse to run C++. In my code, I use a high-level embedding of Python to run a function. When I try to import and use sys, I get the error:</p>
<p><em>Fatal Python error: no mem for sys.argv</em></p>
<p>CODE:</p>
<pre><code>#include <python3.4m/Python.h>
#include <iostream>
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main(int argc, char **argv)
{
Py_Initialize();
PySys_SetArgv(argc, (wchar_t**)argv);
PyRun_SimpleString("import sys\n");
Py_Finalize();
return 0;
}
</code></pre>
<p>When I run the .exe from Terminal I get </p>
<p><em>ValueError: character U+384d2f2e is not in range [U+0000; U+10ffff]
Aborted</em></p>
<p>Any help is appreciated in resolving this problem.
Thank you.</p>
| 0 |
2016-09-22T17:02:46Z
| 39,645,493 |
<p>The error was that Python 3 expects argv to point to wide-character (wchar_t) strings, while the argv coming from main() points to plain char strings.</p>
<p>To solve this:</p>
<pre><code>wchar_t **wargv;
wargv = (wchar_t**)malloc(1*sizeof(wchar_t *));
*wargv = (wchar_t*)malloc(6*sizeof(wchar_t));
wcscpy(*wargv, L"argv1");   // copy a null-terminated wide string (wcscpy is declared in <cwchar>)
Py_Initialize();
PySys_SetArgv(1, (wchar_t**)wargv);
PyRun_SimpleString("import sys\n"
"print('test')\n");
Py_Finalize();
return 0;
</code></pre>
<p>Hope this helps someone else.</p>
| 0 |
2016-09-22T17:38:55Z
|
[
"python",
"c++",
"unicode",
"sys",
"python-embedding"
] |
Ability to read and eval() any arbitrary function / lambda from JSON config file
| 39,645,014 |
<p>I am looking for a way for a user to enter any arbitrary Python formula, store the formula in a text file or a JSON file, then have Python read the formula and apply the transformation to a data frame. My original requirement is to have a frontend web UI where the user can specify any transformation rules / scripts / formulas.</p>
<p>For example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'state_full': ['Alabama', 'Alaska', 'American Samoa'],
'state_abbrev': ['Ala.', 'Alaska', np.nan],
'state_postal': ['AL', 'AK', 'AS']}, index=[0, 1, 2])
df['state_abbrev'].str.upper() # This works, but I need to read the formula from a file
</code></pre>
<p>I want the ability to read any arbitrary lambda function from a JSON config file:</p>
<pre><code>lambda_transform_str = "str.upper()"
df['state_abbrev'].eval(lambda_transform_str)
</code></pre>
<p>Encountered Error:
AttributeError: 'Series' object has no attribute 'eval'</p>
| 0 |
2016-09-22T17:08:28Z
| 39,645,122 |
<p>The <code>eval</code> function in Python isn't a method attached to strings; it's a globally available function. You should be able to call it like:</p>
<pre><code>eval(..)
</code></pre>
<p>If you're trying to transform <code>df['state_abbrev']</code> according to the contents of <code>lambda_transform_str</code>, you need to approach this problem another way.</p>
<pre><code>eval('"{}".upper()'.format(df['state_abbrev']))
</code></pre>
<p>This has limitations obviously since you have to ensure that your textual functions are all compatible with the <code>str</code> type, but that's an implementation detail I'll leave up to you.</p>
<p>As @martineau pointed out in the comments, be wary of using <code>eval</code>. It can expose codebases to serious security vulnerabilities.</p>
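<p>Since the question mentions storing a lambda in the config, another option (again only for trusted config contents, and ignoring NaN handling) is to store a complete lambda expression, <code>eval</code> it into a callable once, and apply it to the column:</p>
<pre><code>lambda_transform_str = "lambda x: x.upper()"   # read from the JSON config
transform = eval(lambda_transform_str)
df['state_abbrev'] = df['state_abbrev'].apply(transform)
</code></pre>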
| 1 |
2016-09-22T17:16:03Z
|
[
"python",
"json",
"lambda"
] |
Value Error: x and y must have the same first dimension
| 39,645,020 |
<p>Let me quickly brief you first: I am working with a .txt file with 5400 data points. Each is a 16-second average over a 24-hour period (24 hrs * 3600 s/hr = 86400...86400/16 = 5400). In short, this is the average magnetic field strength in the z direction for an inbound particle field, courtesy of the Advanced Composition Explorer satellite. Data publicly available <a href="http://www.srl.caltech.edu/ACE/ASC/level2/index.html" rel="nofollow">here</a>. So when I try to plot, it gives the error </p>
<pre><code>Value Error: x and y must have the same first dimension
</code></pre>
<p>So I created a numpy linspace of 5400 points broken apart by 16 units. I did this because I thought that my dimensions didn't match my previously defined array. But now I am sure these two arrays are of the same dimension, and yet it still gives back that ValueError. The code is as follows:</p>
<p><strong>First try (without the linspace):</strong></p>
<pre><code>import numpy as np
import matplotlib as plt
Bz = np.loadtxt(r"C:\Users\Schmidt\Desktop\Project\Data\ACE\MAG\ACE_MAG_Data_20151202_GSM.txt", dtype = bytes).astype(float)
Start_ACE = dt.date(2015,12,2)
Finish_ACE = dt.date(2015,12,2)
dt_Mag = 16
time_Mag = np.arange(Start_ACE, Finish_ACE, dt_Mag)
plt.subplot(3,1,1)
plt.plot(time_Mag, Bz)
plt.title('Bz 2015 12 02')
</code></pre>
<p><strong>Second Try (with linspace):</strong></p>
<pre><code>import numpy as np
import matplotlib as plt
Bz = np.loadtxt(r"C:\Users\Schmidt\Desktop\Project\Data\ACE\MAG\ACE_MAG_Data_20151202_GSM.txt", dtype = bytes).astype(float)
Mag_time = np.linspace(0,5399,16, dtype = float)
plt.subplot(3,1,1)
plt.plot(Mag_time, Bz)
plt.title('Bz 2015 12 02')
</code></pre>
<p>Other than it being a dimensional problem I don't know what else could be holding back this plotting procedure back.</p>
<p><strong>Full traceback:</strong></p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-68-c5dc0bdf5117> in <module>()
1 plt.subplot(3,1,1)
----> 2 plt.plot(Mag_time, Bz)
3 plt.title('Bz 2015 12 02')
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\pyplot.py in plot(*args, **kwargs)
3152 ax.hold(hold)
3153 try:
-> 3154 ret = ax.plot(*args, **kwargs)
3155 finally:
3156 ax.hold(washold)
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
1809 warnings.warn(msg % (label_namer, func.__name__),
1810 RuntimeWarning, stacklevel=2)
-> 1811 return func(ax, *args, **kwargs)
1812 pre_doc = inner.__doc__
1813 if pre_doc is None:
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, *args, **kwargs)
1422 kwargs['color'] = c
1423
-> 1424 for line in self._get_lines(*args, **kwargs):
1425 self.add_line(line)
1426 lines.append(line)
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _grab_next_args(self, *args, **kwargs)
384 return
385 if len(remaining) <= 3:
--> 386 for seg in self._plot_args(remaining, kwargs):
387 yield seg
388 return
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
362 x, y = index_of(tup[-1])
363
--> 364 x, y = self._xy_from_xy(x, y)
365
366 if self.command == 'plot':
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _xy_from_xy(self, x, y)
221 y = _check_1d(y)
222 if x.shape[0] != y.shape[0]:
--> 223 raise ValueError("x and y must have same first dimension")
224 if x.ndim > 2 or y.ndim > 2:
225 raise ValueError("x and y can be no greater than 2-D")
ValueError: x and y must have same first dimension
</code></pre>
| 0 |
2016-09-22T17:08:56Z
| 39,647,130 |
<p>The problem was how the time array was created. <code>np.linspace(0, 5399, 16)</code> produces only 16 points (the third argument is the number of samples, not the step size), while <code>Bz</code> has 5400 entries. Instead of linspace, I should have used arange, which takes a step size:</p>
<pre><code>Mag_time = np.arange(0,86400, 16, dtype = float)
</code></pre>
| 0 |
2016-09-22T19:10:47Z
|
[
"python",
"numpy",
"matplotlib"
] |
Dynamic database selection based on URL in Django
| 39,645,043 |
<p>Let's say the first page of the app has two links. Is it possible to pick the database depending on which link is clicked? The databases both have the same models, but different data. For example, let's say the application contains students for different colleges <code>A</code> and <code>B</code>. If link for <code>A</code> is clicked, then database for <code>A</code> is used which contains the students for college <code>A</code>. The entire application after this point should use database for college <code>A</code>.</p>
<p>I understand there are ways to work around this problem by just designing the databases differently, i.e. having a college field, and just filtering out students with the particular college affiliation. But I am hoping to find a solution using Django to just use two different databases. </p>
| 0 |
2016-09-22T17:10:14Z
| 39,645,212 |
<p>So you need to store the chosen database in the <code>session</code> (or something similar), and then you can easily pick the database per request. From the <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#manually-selecting-a-database-for-a-queryset" rel="nofollow">docs</a>:</p>
<pre><code>>>> # This will run on the 'default' database.
>>> Author.objects.all()
>>> # So will this.
>>> Author.objects.using('default').all()
>>> # This will run on the 'other' database.
>>> Author.objects.using('other').all()
</code></pre>
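<p>A minimal sketch of the idea, assuming the two databases are configured as <code>'college_a'</code> and <code>'college_b'</code> in <code>DATABASES</code> (the view, URL and model names here are invented for this example):</p>
<pre><code>from django.shortcuts import redirect, render

def choose_college(request, college):
    # called from the two links, e.g. /choose/college_a/
    request.session['db'] = college
    return redirect('student_list')

def student_list(request):
    db = request.session.get('db', 'default')
    students = Student.objects.using(db).all()
    return render(request, 'students.html', {'students': students})
</code></pre>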
| 1 |
2016-09-22T17:21:28Z
|
[
"python",
"django",
"django-models"
] |
LabelEncoder().fit_transform vs. pd.get_dummies for categorical coding
| 39,645,125 |
<p>It was recently brought to my attention that if you have a dataframe <code>df</code> like this:</p>
<pre><code> A B C
0 0 Boat 45
1 1 NaN 12
2 2 Cat 6
3 3 Moose 21
4 4 Boat 43
</code></pre>
<p>You can encode the categorical data automatically with <code>pd.get_dummies</code>:</p>
<pre><code>df1 = pd.get_dummies(df)
</code></pre>
<p>Which yields this:</p>
<pre><code> A C B_Boat B_Cat B_Moose
0 0 45 1.0 0.0 0.0
1 1 12 0.0 0.0 0.0
2 2 6 0.0 1.0 0.0
3 3 21 0.0 0.0 1.0
4 4 43 1.0 0.0 0.0
</code></pre>
<p>I typically use <code>LabelEncoder().fit_transform</code> for this sort of task before putting it in <code>pd.get_dummies</code>, but if I can skip a few steps that'd be desirable. </p>
<p>Am I losing anything by simply using <code>pd.get_dummies</code> on my entire dataframe to encode it? </p>
| 3 |
2016-09-22T17:16:11Z
| 39,649,124 |
<p>Yes, you can skip the use of <code>LabelEncoder</code> if you only want to encode string features. On the other hand if you have a categorical column of integers (instead of strings) then <code>pd.get_dummies</code> will leave as it is (see your A or C column for example). In that case you should use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html" rel="nofollow"><code>OneHotEncoder</code></a>. Ideally <code>OneHotEncoder</code> would support both integer and strings but this is being <a href="https://github.com/scikit-learn/scikit-learn/pull/7327" rel="nofollow">worked on at the moment</a>.</p>
| 3 |
2016-09-22T21:25:38Z
|
[
"python",
"pandas",
"scikit-learn",
"sklearn-pandas"
] |
How to access a .txt file at a secured url?
| 39,645,127 |
<p>I want to read a file from a secured url.</p>
<p>For example: <a href="https://foo.net/test.txt" rel="nofollow">https://foo.net/test.txt</a></p>
<p>when I use: </p>
<pre><code>readtext = urllib.urlopen('https://foo.net/test.txt').read()
</code></pre>
<p>I get a request for username and password. After entering them I can read the file. Is there a way to hardcode the username and password ?</p>
| 0 |
2016-09-22T17:16:23Z
| 39,645,393 |
<p>I'm sure it's almost as trivial to do in urllib as it is in requests, but requests is just so darn pretty:</p>
<pre><code>import requests
from requests.auth import HTTPBasicAuth
r = requests.get('https://foo.net/test.txt', auth=HTTPBasicAuth('user', 'pass'))
</code></pre>
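<p>requests also accepts a plain <code>(user, password)</code> tuple as shorthand for basic auth, so the extra import is optional:</p>
<pre><code>r = requests.get('https://foo.net/test.txt', auth=('user', 'pass'))
readtext = r.text
</code></pre>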
| 2 |
2016-09-22T17:31:38Z
|
[
"python"
] |
The area of the intersection of two ovals (ellipses)?
| 39,645,153 |
<p>I need to calculate the area of the intersection of two ovals (ellipses) in a Python program.
I know that in <a href="https://pypi.python.org/pypi/Shapely" rel="nofollow">shapely</a> there is a function that returns True if two objects intersect, like this:</p>
<pre><code>from shapely.geometry import Polygon
p1=Polygon([(0,0),(1,1),(1,0)])
p2=Polygon([(0,1),(1,0),(1,1)])
print p1.intersects(p2)
</code></pre>
<p>Is there any library or function that can help me?
Thanks.</p>
| 2 |
2016-09-22T17:17:44Z
| 39,645,243 |
<p>Is this what you are looking for? (the polygon that results from the intersection)</p>
<pre><code>x = p1.intersection(p2)
x.area
</code></pre>
<p>Find more information in the documentation <a href="http://toblerity.org/shapely/manual.html" rel="nofollow">here</a></p>
| 3 |
2016-09-22T17:23:31Z
|
[
"python",
"geometry",
"polygon",
"shapely.geometry"
] |
Mapping sums of defaultdict(list) to one list
| 39,645,290 |
<p>I have a large collection of data formatted somewhat like the <code>d.items()</code> of a <code>defaultdict(list)</code>. See below:</p>
<pre><code>products = [(('blue'), ([2, 4, 2, 4, 2, 4, 2, 4, 2, 4], [2, 4, 2, 4, 2, 4, 2, 4, 2, 4], [2, 4, 2, 4, 2, 4, 2, 4, 2, 4])),
(('yellow'), ([1, 3, 1, 3, 1, 3, 1, 3, 1, 3], [1, 3, 1, 3, 1, 3, 1, 3, 1, 3], [1, 3, 1, 3, 1, 3, 1, 3, 1, 3])),
(('red'), ([1, 1, 1, 1, 1, 2, 5, 4, 6, 4], [2, 5, 3, 4, 8, 1, 1, 1, 1, 1], [8, 6, 3, 9, 2, 1, 1, 1, 1, 1]))]
</code></pre>
<p>And I want to map the sum of each data value in the nested lists to their corresponding counterparts in the same position or index, to produce the final sums as follows:</p>
<pre><code>['blue', 6, 12, 6, 12, 6, 12, 6, 12, '6.000000', 12]
['yellow', 3, 9, 3, 9, 3, 9, 3, 9, '3.000000', 9]
['red', 11, 12, 7, 14, 11, 4, 7, 6, '8.000000', 6]
</code></pre>
<p>With loops, it can be done easily as shown in this function:</p>
<pre><code>def summation(products):
sums = []
for item in products:
sums.append([(item[0]),
sum(int(x[0]) for x in item[1]),
sum(int(x[1]) for x in item[1]),
sum(int(x[2]) for x in item[1]),
sum(int(x[3]) for x in item[1]),
sum(int(x[4]) for x in item[1]),
sum(int(x[5]) for x in item[1]),
sum(int(x[6]) for x in item[1]),
sum(int(x[7]) for x in item[1]),
"{:.6f}".format(sum(float(x[8]) for x in item[1])),
sum(int(x[9]) for x in item[1])])
for s in sums:
print(s)
</code></pre>
<p>The problem arises when the size of products is millions, i.e. it is very time-consuming. So I thought of implementing mapping each value to its corresponding one in the nested list for the same key. This is what I tried:</p>
<pre><code>def mappingSum(products):
sums = []
for item in products:
sums.append([item[0], map((sum(x), sum(y), sum(z)) for x, y, z in item[1])])
for s in sums:
print(s)
</code></pre>
<p>However, I get the following error:</p>
<pre><code>TypeError: map() must have at least two arguments.
</code></pre>
<p>I don't know how to resolve it and I am not sure whether <code>map</code> is the right tool to do my task.</p>
| 0 |
2016-09-22T17:26:12Z
| 39,645,434 |
<p>From what I understand, you need to <em>zip</em> the sublists in the list and sum them up:</p>
<pre><code>>>> sums = [(key, [sum(value) for value in zip(*values)]) for key, values in products]
>>> for s in sums:
... print(s)
...
('blue', [6, 12, 6, 12, 6, 12, 6, 12, 6, 12])
('yellow', [3, 9, 3, 9, 3, 9, 3, 9, 3, 9])
('red', [11, 12, 7, 14, 11, 4, 7, 6, 8, 6])
</code></pre>
| 3 |
2016-09-22T17:34:45Z
|
[
"python",
"list",
"python-3.x",
"mapping",
"defaultdict"
] |
Mapping sums of defaultdict(list) to one list
| 39,645,290 |
<p>I have a large collection of data formatted somewhat like the <code>d.items()</code> of a <code>defaultdict(list)</code>. See below:</p>
<pre><code>products = [(('blue'), ([2, 4, 2, 4, 2, 4, 2, 4, 2, 4], [2, 4, 2, 4, 2, 4, 2, 4, 2, 4], [2, 4, 2, 4, 2, 4, 2, 4, 2, 4])),
(('yellow'), ([1, 3, 1, 3, 1, 3, 1, 3, 1, 3], [1, 3, 1, 3, 1, 3, 1, 3, 1, 3], [1, 3, 1, 3, 1, 3, 1, 3, 1, 3])),
(('red'), ([1, 1, 1, 1, 1, 2, 5, 4, 6, 4], [2, 5, 3, 4, 8, 1, 1, 1, 1, 1], [8, 6, 3, 9, 2, 1, 1, 1, 1, 1]))]
</code></pre>
<p>And I want to map the sum of each data value in the nested lists to their corresponding counterparts in the same position or index, to produce the final sums as follows:</p>
<pre><code>['blue', 6, 12, 6, 12, 6, 12, 6, 12, '6.000000', 12]
['yellow', 3, 9, 3, 9, 3, 9, 3, 9, '3.000000', 9]
['red', 11, 12, 7, 14, 11, 4, 7, 6, '8.000000', 6]
</code></pre>
<p>With loops, it can be done easily as shown in this function:</p>
<pre><code>def summation(products):
sums = []
for item in products:
sums.append([(item[0]),
sum(int(x[0]) for x in item[1]),
sum(int(x[1]) for x in item[1]),
sum(int(x[2]) for x in item[1]),
sum(int(x[3]) for x in item[1]),
sum(int(x[4]) for x in item[1]),
sum(int(x[5]) for x in item[1]),
sum(int(x[6]) for x in item[1]),
sum(int(x[7]) for x in item[1]),
"{:.6f}".format(sum(float(x[8]) for x in item[1])),
sum(int(x[9]) for x in item[1])])
for s in sums:
print(s)
</code></pre>
<p>The problem arises when the size of products is millions, i.e. it is very time-consuming. So I thought of implementing mapping each value to its corresponding one in the nested list for the same key. This is what I tried:</p>
<pre><code>def mappingSum(products):
sums = []
for item in products:
sums.append([item[0], map((sum(x), sum(y), sum(z)) for x, y, z in item[1])])
for s in sums:
print(s)
</code></pre>
<p>However, I get the following error:</p>
<pre><code>TypeError: map() must have at least two arguments.
</code></pre>
<p>I don't know how to resolve it and I am not sure whether <code>map</code> is the right tool to do my task.</p>
| 0 |
2016-09-22T17:26:12Z
| 39,645,939 |
<p>As an alternative to @alecxe's answer consider the following using <code>map</code> and a nice list literal-unpack:</p>
<pre><code>res = [(k, [*map(sum, zip(*v))]) for k, v in products]
</code></pre>
<p>This yields:</p>
<pre><code>[('blue', [6, 12, 6, 12, 6, 12, 6, 12, 6, 12]),
('yellow', [3, 9, 3, 9, 3, 9, 3, 9, 3, 9]),
('red', [11, 12, 7, 14, 11, 4, 7, 6, 8, 6])]
</code></pre>
<p>This is slightly faster but requires Python <code>>= 3.5</code> due to the literal unpack. If on earlier versions, you'd have to wrap it in a <code>list</code> call to unpack the <code>map</code> iterator:</p>
<pre><code>res = [(k, list(map(sum, zip(*v)))) for k, v in products]
</code></pre>
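<p>If the sublists are purely numeric and the input really has millions of entries, it may be worth letting NumPy do the column sums instead of a Python-level loop (a sketch that ignores the string formatting of the ninth column in the expected output):</p>
<pre><code>import numpy as np

res = [(k, np.asarray(v).sum(axis=0).tolist()) for k, v in products]
</code></pre>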
| 2 |
2016-09-22T18:03:44Z
|
[
"python",
"list",
"python-3.x",
"mapping",
"defaultdict"
] |
Display Pandas DataFrame in csv format
| 39,645,404 |
<p>I have a pandas dataframe <code>q2</code> which looks like this:</p>
<pre><code> StudentID Subjects
6 323 History
9 323 Physics
8 999 Chemistry
7 999 History
4 999 Physics
0 1234 Chemistry
5 2834 Physics
1 3455 Chemistry
2 3455 History
10 3455 Mathematics
3 56767 Mathematics
</code></pre>
<p>I want to find out which student has taken which courses and display it on screen.</p>
<pre><code>gb = q2.groupby(('StudentID'))
result = gb['Subjects'].unique()
c1=pd.DataFrame({'StudentID':result.index, 'Subjects':result.values})
</code></pre>
<p><code>c1</code> looks like this</p>
<pre><code> StudentID Subjects
0 323 [History, Physics]
1 999 [Chemistry, History, Physics]
2 1234 [Chemistry]
3 2834 [Physics]
4 3455 [Chemistry, History, Mathematics]
5 56767 [Mathematics]
</code></pre>
<p>However, the desired output is the following:</p>
<pre><code>323: History, Physics
999: Chemistry, History, Physics
1234: Chemistry
2834: Physics
3455: Chemistry, History, Mathematics
56767: Mathematics
</code></pre>
<p>what can I do?</p>
| 1 |
2016-09-22T17:32:39Z
| 39,645,454 |
<p>I think you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>apply</code></a> function <code>join</code>. Also for creating <code>DataFrame</code> you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>gb = q2.groupby(('StudentID'))
result = gb['Subjects'].unique()
c1 = result.reset_index()
c1.Subjects = c1.Subjects.apply(', '.join)
print (c1)
StudentID Subjects
0 323 History, Physics
1 999 Chemistry, History, Physics
2 1234 Chemistry
3 2834 Physics
4 3455 Chemistry, History, Mathematics
5 56767 Mathematics
</code></pre>
<p>Last, you can cast the column <code>StudentID</code> to <code>str</code> (if its <code>dtype</code> is <code>int</code>) and concatenate them together:</p>
<pre><code>c1['new'] = c1.StudentID.astype(str) + ':' + c1.Subjects
print (c1)
StudentID Subjects \
0 323 History, Physics
1 999 Chemistry, History, Physics
2 1234 Chemistry
3 2834 Physics
4 3455 Chemistry, History, Mathematics
5 56767 Mathematics
new
0 323:History, Physics
1 999:Chemistry, History, Physics
2 1234:Chemistry
3 2834:Physics
4 3455:Chemistry, History, Mathematics
5 56767:Mathematics
</code></pre>
<p>Also, if the original data can be overwritten, use:</p>
<pre><code>result = result.index.to_series().astype(str) + ':' + result.apply(', '.join)
print (result)
StudentID
323 323:History, Physics
999 999:Chemistry, History, Physics
1234 1234:Chemistry
2834 2834:Physics
3455 3455:Chemistry, History, Mathematics
56767 56767:Mathematics
dtype: object
</code></pre>
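<p>If all you need is to print the lines in the requested <code>id: subjects</code> form, you can also print straight from the grouped unique values:</p>
<pre><code>for student_id, subjects in gb['Subjects'].unique().iteritems():
    print('{}: {}'.format(student_id, ', '.join(subjects)))
</code></pre>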
| 2 |
2016-09-22T17:35:58Z
|
[
"python",
"list",
"pandas",
"dataframe",
"unique"
] |
How to upload multiple files in django rest framework
| 39,645,410 |
<p>In django rest framework, I am able to upload single file using <a href="https://github.com/danialfarid/ng-file-upload" rel="nofollow">danialfarid/ng-file-upload</a> </p>
<p>views.py:</p>
<pre><code>class PhotoViewSet(viewsets.ModelViewSet):
serializer_class = PhotoSerializer
parser_classes = (MultiPartParser, FormParser,)
queryset=Photo.objects.all()
def perform_create(self, serializer):
serializer.save(blogs=Blogs.objects.latest('created_at'),
image=self.request.data.get('image'))
</code></pre>
<p>serializers.py:</p>
<pre><code>class PhotoSerializer(serializers.ModelSerializer):
class Meta:
model = Photo
</code></pre>
<p>models.py:</p>
<pre><code>class Photo(models.Model):
blogs = models.ForeignKey(Blogs, related_name='blogs_img')
image = models.ImageField(upload_to=content_file_name)
</code></pre>
<p>When I try to upload multiple files, I get this in Chrome developer tools:</p>
<p>Request Payload</p>
<pre><code>------WebKitFormBoundaryjOsYUxPLKB1N69Zn
Content-Disposition: form-data; name="image[0]"; filename="datacable.jpg"
Content-Type: image/jpeg
------WebKitFormBoundaryjOsYUxPLKB1N69Zn
Content-Disposition: form-data; name="image[1]"; filename="datacable2.jpg"
Content-Type: image/jpeg
</code></pre>
<p>Response:</p>
<pre><code>{"image":["No file was submitted."]}
</code></pre>
<p>I don't know how to write a serializer for uploading multiple files. Can anybody please help?</p>
<p>Thanks in Advance</p>
| 0 |
2016-09-22T17:32:48Z
| 39,668,395 |
<p>I managed to solve this issue and I hope it will help the community.</p>
<p>serializers.py:</p>
<pre><code>class FileListSerializer ( serializers.Serializer ) :
image = serializers.ListField(
child=serializers.FileField( max_length=100000,
allow_empty_file=False,
use_url=False )
)
def create(self, validated_data):
blogs=Blogs.objects.latest('created_at')
image=validated_data.pop('image')
for img in image:
photo=Photo.objects.create(image=img,blogs=blogs,**validated_data)
return photo
class PhotoSerializer(serializers.ModelSerializer):
class Meta:
model = Photo
read_only_fields = ("blogs",)
</code></pre>
<p>views.py:</p>
<pre><code>class PhotoViewSet(viewsets.ModelViewSet):
serializer_class = FileListSerializer
parser_classes = (MultiPartParser, FormParser,)
queryset=Photo.objects.all()
</code></pre>
| 2 |
2016-09-23T19:39:46Z
|
[
"python",
"angularjs",
"django",
"django-rest-framework"
] |
Mocking out two redis hgets with different return values in the same python function
| 39,645,472 |
<p>I have some code like this:</p>
<pre><code>import redis
redis_db = redis.Redis(host=redis_host_ip, port=redis_port, password=redis_auth_password)
def mygroovyfunction():
var_a = redis_db.hget(user, 'something_a')
var_b = redis_db.hget(user, 'something_b')
if var_a == something_a:
return Response(json.dumps({}), status=200, mimetype='application/json')
if var_b == something_b:
return Response(json.dumps({}), status=400, mimetype='application/json')
</code></pre>
<p>And then in a tests.py file for unit testing this, I have some code like this:</p>
<pre><code>import unittest
from mock import MagicMock, Mock, patch
@patch('redis.StrictRedis.hget', return_value='some value')
class MyGroovyFunctionTests(unittest.TestCase):
def test_success(self, mock_redis_hget):
response = self.app.get('mygroovyfunction')
self.assertEqual(response.status_code, 200)
</code></pre>
<p>So there's some other flask stuff which I left out because it is not relevant for this question. </p>
<p>What I wanted to know was if it is possible to mock a return value for each individual redis hget. With my current code, the mocked return value replaces <code>var_b</code>, and so when the test case runs, it makes <code>var_a</code> also the same value, causing the code to go down the path ending in a return status_code of 400.</p>
<p>What is the proper way to do this kind of thing?</p>
| 1 |
2016-09-22T17:37:29Z
| 39,647,623 |
<p>Ok I found the answer here: <a href="http://stackoverflow.com/questions/24897145/python-mock-multiple-return-values">Python mock multiple return values</a></p>
<p>The accepted answer for that question is what I was looking for, which is to use <code>side_effect</code> and make it a list of values and so each patched redis hget will be given each value in the list. So for my example the solution is to do this:</p>
<pre><code>import unittest
from mock import MagicMock, Mock, patch
@patch('redis.StrictRedis.hget', side_effect=['something_a','something_b'])
class MyGroovyFunctionTests(unittest.TestCase):
def test_success(self, mock_redis_hget):
response = self.app.get('mygroovyfunction')
self.assertEqual(response.status_code, 200)
</code></pre>
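<p>As a quick illustration of how <code>side_effect</code> hands out its values in call order (a standalone sketch, separate from the test above):</p>
<pre><code>from mock import MagicMock

m = MagicMock(side_effect=['something_a', 'something_b'])
print(m())  # 'something_a'  -> what the first hget call returns
print(m())  # 'something_b'  -> what the second hget call returns
</code></pre>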
<p>Thanks <a href="http://stackoverflow.com/users/100297/martijn-pieters">http://stackoverflow.com/users/100297/martijn-pieters</a> for the answer I was looking for :)</p>
| 2 |
2016-09-22T19:42:49Z
|
[
"python",
"unit-testing",
"mocking",
"python-unittest",
"python-mock"
] |
How to transform nonlinear model to linear?
| 39,645,488 |
<p>I'm analyzing a dataset, and I know that the data should follow a power model:</p>
<pre><code>y = a*x**b
</code></pre>
<p>I transformed it to linear by taking logarithms:</p>
<pre><code>ln(y) = ln(a) + b* ln(x)
</code></pre>
<p>However, problems arose when adding a trend line to the plot:</p>
<pre><code>slope, intercept, r_value, p_value, std_err = scipy.stats.mstats.linregress(x_ln, y_ln)
yy = np.exp(intercept)*wetarea_x**slope
plt.scatter(wetarea_x, arcgis_wtrshd_x, color = 'blue')
plt.plot(wetarea_x, yy, color = 'green')
</code></pre>
<p><a href="http://i.stack.imgur.com/DkDKP.png" rel="nofollow"><img src="http://i.stack.imgur.com/DkDKP.png" alt="enter image description here"></a></p>
<p>This is what I get with this code.
How can I modify the code so that the trend line on the plot is correct?</p>
| 1 |
2016-09-22T17:38:30Z
| 39,646,128 |
<p>Your green strange plot is what you get when you do a line plot in <code>matplotlib</code>, with the <code>x</code> values unsorted. It's a line plot, but it connects by lines <em>(x, y)</em> pairs jumping right and left (in your specific case, it looks like back to near the x-origin). That gives these strange patterns. </p>
<p>You don't have this problem with the blue plot, because it's a scatter plot.</p>
<p>Try calling the plot after sorting both arrays according to the indices of the first using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html"><code>numpy.argsort</code></a>, say </p>
<pre><code>wetarea_x[np.argsort(wetarea_x)]
</code></pre>
<p>and</p>
<pre><code>yy[np.argsort(wetarea_x)]
</code></pre>
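<p>Putting it together, a minimal sketch of the plotting step (assuming <code>wetarea_x</code>, <code>arcgis_wtrshd_x</code> and <code>yy</code> are NumPy arrays; if they are pandas Series, convert with <code>.values</code> first):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

order = np.argsort(wetarea_x)  # positions that sort the x values
plt.scatter(wetarea_x, arcgis_wtrshd_x, color='blue')
plt.plot(wetarea_x[order], yy[order], color='green')
plt.show()
</code></pre>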
| 5 |
2016-09-22T18:14:53Z
|
[
"python",
"matplotlib",
"plot",
"scipy",
"linear-regression"
] |
Clean separation between application thread and Qt thread (Python - PyQt)
| 39,645,504 |
<p>I prefer to write my application without even thinking about a graphical user interface. Once the application code is working properly, I like to glue a GUI layer on top of it - with a clean interface between the two.</p>
<p>I first tried to make the GUI run in a different <strong>process</strong> from the application. But I soon regretted that experiment. It is far from trivial to set up a communication link between two processes. So I decided that for now, multiple <strong>threads</strong> are fine (although the Python Global Interpreter Lock makes them run on a single core).</p>
<p>The <em>MainThread</em> is completely in the hands of the Qt GUI. Apparently this is standard practice. So let us suppose that the overall structure of the software should look like this (note that <em>qtThread</em> is synonymous to <em>MainThread</em>):</p>
<p><a href="http://i.stack.imgur.com/z32Fa.png" rel="nofollow"><img src="http://i.stack.imgur.com/z32Fa.png" alt="enter image description here"></a></p>
<p>My application code is running in the <em>appThread</em> - cleanly separated from the GUI. But at some point, there has to be interaction.</p>
<p>I have read many articles about how to organize this, but many sources contradict each other. Even the official Qt approach is wrong according to many people (the official documentation encourages subclassing QThread). The most enlightening articles I could find are these:</p>
<p><a href="http://ilearnstuff.blogspot.be/2012/08/when-qthread-isnt-thread.html" rel="nofollow">http://ilearnstuff.blogspot.be/2012/08/when-qthread-isnt-thread.html</a>
<a href="http://ilearnstuff.blogspot.be/2012/09/qthread-best-practices-when-qthread.html" rel="nofollow">http://ilearnstuff.blogspot.be/2012/09/qthread-best-practices-when-qthread.html</a></p>
<p>Even after considering all that, I still remain in doubt about several things.</p>
<p><br></p>
<p><strong>Question 1. What is the most proper way to start the <em>appThread</em>?</strong><br></p>
<hr>
<p>What is the most proper way to start the <em>appThread</em>? Correct me if I am wrong, but I believe that there are two choices:</p>
<p><strong>Choice 1: Start a standard Python thread<br></strong>
Python provides the <code>threading</code> library that one can import to spawn new threads:</p>
<pre><code>import threading
if __name__ == '__main__':
# 1. Create the qt thread (is MainThread in fact)
qtApp = QApplication(sys.argv)
QApplication.setStyle(QStyleFactory.create('Fusion'))
# 2. Create the appThread
appThread = threading.Thread(name='appThread', target=appThreadFunc, args=(p1,p2,))
appThread.start()
# 3. Start the qt event loop
qtApp.exec_()
print('Exiting program')
</code></pre>
<p>This choice looks the cleanest to me. You can truly write your <em>appThread</em> code without even thinking about a GUI. After all, you're using the standard Python <code>threading</code> library. There is no Qt stuff in there.<br>
But I cannot find clear documentation about setting up a communication link between the <em>appThread</em> and the <em>MainThread</em>. More about that issue in the second question..</p>
<p><strong>Choice 2: Start a QThread thread<br></strong>
This choice looks not so clean, because you have to mess with Qt stuff to write your application code. Anyway, it looks like a viable option, because the communication link between both threads - <em>appThread</em> and <em>MainThread</em> - is probably better supported.<br>
There are myriad ways to start a QThread. The official Qt documentation encourages subclassing <code>QThread</code> and reimplementing the run() method, but I have read that this practice is in fact very bad. For more info, refer to the two links I posted at the beginning of my question.<br></p>
<p><br></p>
<p><strong>Question 2. What is the best communication link between both threads?<br></strong></p>
<hr>
<p>What is the best communication link between both threads? Obviously the answer to this question depends very much on the choice made in <strong>Question 1</strong>. I can imagine that linking a standard Python thread to the GUI differs very much from linking a QThread.<br>
I will leave it up to you to make suggestions, but a few mechanisms that pop up in my mind are:</p>
<ul>
<li>Queues: using standard Python queues or Qt Queues?</li>
<li>Signal/Slot mechanism: ...</li>
<li>Sockets: should work, but looks a bit cumbersome</li>
<li>Pipes: ...</li>
<li>Temporary files: cumbersome</li>
</ul>
<p><br></p>
<p><strong>Notes :<br></strong></p>
<hr>
<p>Please mention if your answer applies to Python 2.x or 3.x. Also keep in mind that confusion may arise quickly when speaking about threads, queues and the like. Please mention if you refer to a standard Python thread or a QThread, a standard Python queue or a QQueue, ...</p>
| 1 |
2016-09-22T17:39:39Z
| 39,694,075 |
<p>What I suggest is to do what most others do. Wait until there is code that needs to be run in a separate thread, and then <em>only</em> put that piece of code in a thread. There is no need for your code to be in a separate thread to have good code separation. The way I would do it is the following:</p>
<p>Have your <em>appThread</em> code (the code that has no knowledge of a GUI) in a base class that only has knowledge of non-GUI libraries. This makes it easy to support a command-line version of your code later as well. Put code that you need to execute asynchronously inside regular Python threads for this base class. Make sure the code you want to execute asynchronously is just a single function call to make it easier for my next point.</p>
<p>Then, have a child class in a separate file that inherits from both the base class you just wrote and the QMainWindow class. Any code that you need to run asynchronously can be called via the QThread class. If you made the code you want to run asynchronously available in one function call as I mentioned above, it's easy to make this step work for your QThread child class.</p>
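<p>A bare-bones sketch of that layout (assuming PyQt5; every class and method name here is purely illustrative):</p>
<pre><code>import threading
from PyQt5.QtCore import QThread
from PyQt5.QtWidgets import QMainWindow

class AppCore(object):
    """GUI-agnostic application logic; knows nothing about Qt."""
    def slow_task(self):
        pass  # the one slow function call you may want to run asynchronously

    def run_slow_task_async(self):
        threading.Thread(target=self.slow_task).start()  # plain Python thread

class TaskThread(QThread):
    """Runs a single callable on a Qt-managed thread."""
    def __init__(self, func, parent=None):
        super(TaskThread, self).__init__(parent)
        self._func = func

    def run(self):
        self._func()

class MainWindow(AppCore, QMainWindow):
    """GUI layer: inherits the app logic and the Qt main window."""
    def start_slow_task(self):
        self._worker = TaskThread(self.slow_task)  # keep a reference so it is not garbage collected
        self._worker.start()
</code></pre>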
<p><strong>Why do the above?</strong></p>
<p>It makes it <em>much</em> easier to manage state and communication. Why make yourself go insane with race conditions and thread communication when you don't have to? There's also really no performance reason to have separate threads in a GUI for app code vs GUI code since most of the time the user is not actually inputting much as far as the CPU is concerned. Only the parts that are slow should be put in threads, both to save sanity and to make code management easier. Plus, with Python, you don't gain anything from separate threads thanks to the GIL.</p>
| 1 |
2016-09-26T02:56:52Z
|
[
"python",
"multithreading",
"python-3.x"
] |
Python TypeError Traceback (most recent call last)
| 39,645,563 |
<p>I am trying to build a crawler, and I want to print all the links on that page.
I am using Python 3.5.</p>
<p>Here is my code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def crawler(link):
source_code = requests.get(link)
source_code_string = str(source_code)
source_code_soup = BeautifulSoup(source_code_string,'lxml')
for item in source_code_soup.findAll("a"):
title = item.string
print(title)
crawler("https://www.youtube.com/watch?v=pLHejmLB16o")
</code></pre>
<p>but I get an error like this:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-13-9aa10c5a03ef> in <module>()
----> 1 crawler('http://archive.is/DPG9M')
TypeError: 'module' object is not callable
</code></pre>
| 0 |
2016-09-22T17:42:35Z
| 39,646,332 |
<p>If your intention is just to print the titles of the links, you are making a small mistake; replace the line:</p>
<pre><code>source_code_string = str(source_code)
</code></pre>
<p>use </p>
<pre><code>source_code_string = source_code.text
</code></pre>
<p>Apart from that, the code looks fine and runs.
Let's call the file web_crawler_v1.py:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def crawler(link):
source_code = requests.get(link)
source_code_string = source_code.text
source_code_soup = BeautifulSoup(source_code_string,'lxml')
for item in source_code_soup.findAll("a"):
title = item.string
print(title)
crawler("https://www.youtube.com/watch?v=pLHejmLB16o")
</code></pre>
<p>As for that error: you should not be getting it if you run the file properly, like this:</p>
<pre><code>python3 web_crawler_v1.py
</code></pre>
| 2 |
2016-09-22T18:26:04Z
|
[
"python",
"web-crawler"
] |
How to let python function pass more variables than what's accepted in the definition?
| 39,645,570 |
<p>I have a very generic function call that looks like</p>
<pre><code>result = getattr(class_name, func_name)(result)
</code></pre>
<p>This function call updates <code>result</code>. This function call is very generic such that it can invoke many functions from different classes. Currently, all these functions only take one argument <code>result</code>. However, it doesn't scale to the cases that some functions need to pass more than just <code>result</code> but more arguments (say, <code>args</code>).</p>
<p>Is there a way in Python (2.7) that allows this generic function call to pass <code>args</code> but still invoke the functions that don't have the extra arguments? If not, what would be a better approach to solve this problem?</p>
<p>EDIT: I cannot change the existing function definitions, including those that take only the argument <code>result</code>. I can only change this line:</p>
<pre><code>result = getattr(class_name, func_name)(result)
</code></pre>
| 0 |
2016-09-22T17:43:01Z
| 39,645,671 |
<p>The general solution for this is that when we want to provide a common function name such as this, it is the responsibility of each class to implement its own local definition of that function. This is why you see, for example, a method <strong>__init__</strong> in many different classes. The system standardizes the name, placing the requirement on the classes.</p>
<p>The other common way is borrowed from <strong>C</strong>: you pass a list of arguments, the canonical *args solution. This allows one generic, flexible function to interpret the arglist as appropriate.</p>
| 0 |
2016-09-22T17:47:27Z
|
[
"python",
"python-2.7"
] |
How to let python function pass more variables than what's accepted in the definition?
| 39,645,570 |
<p>I have a very generic function call that looks like</p>
<pre><code>result = getattr(class_name, func_name)(result)
</code></pre>
<p>This function call updates <code>result</code>. This function call is very generic such that it can invoke many functions from different classes. Currently, all these functions only take one argument <code>result</code>. However, it doesn't scale to the cases that some functions need to pass more than just <code>result</code> but more arguments (say, <code>args</code>).</p>
<p>Is there a way in Python (2.7) that allows this generic function call to pass <code>args</code> but still invoke the functions that don't have the extra arguments? If not, what would be a better approach to solve this problem?</p>
<p>EDIT: I cannot change the existing function definitions, including those that take only the argument <code>result</code>. I can only change this line:</p>
<pre><code>result = getattr(class_name, func_name)(result)
</code></pre>
| 0 |
2016-09-22T17:43:01Z
| 39,645,775 |
<p>You can add a <code>*</code> within the function call. This will unpack <code>result</code> and pass it as multiple arguments. Let's say you have two functions:</p>
<pre><code>class YourClass:
    @staticmethod  # so the functions can be called on the class itself
    def my_first_def(one_arg):
        return (1, 2)

    @staticmethod
    def my_second_def(one_arg, second_arg):
        return (1, 2, 3)

if not isinstance(result, tuple):
    result = (result,)

result = getattr(YourClass, 'my_first_def')(*result)
result = getattr(YourClass, 'my_second_def')(*result)
</code></pre>
<p>Do note that <code>result</code> must be a <code>tuple</code></p>
| 0 |
2016-09-22T17:53:41Z
|
[
"python",
"python-2.7"
] |
Calling mpmath directly from C
| 39,645,580 |
<p>I want to access mpmath's special functions from a C code.
I know how to do it via an intermediate python script.
For instance, in order to evaluate the hypergeometric function, the C program:</p>
<pre><code>#include <Python.h>
void main (int argc, char *argv[])
{
int npars= 4;
double a1, a2, b1, x, res;
PyObject *pName, *pModule, *pFunc, *pArgs, *pValue;
PyObject *pa1, *pa2, *pb1, *px;
a1= atof(argv[1]);
a2= atof(argv[2]);
b1= atof(argv[3]);
x= atof(argv[4]);
setenv("PYTHONPATH", ".", 1); // Set PYTHONPATH TO bin directory
Py_Initialize();
pa1= PyFloat_FromDouble(a1);
pa2= PyFloat_FromDouble(a2);
pb1= PyFloat_FromDouble(b1);
px= PyFloat_FromDouble(x);
pName = PyString_FromString("GGauss_2F1");
pModule = PyImport_Import(pName);
pFunc = PyObject_GetAttrString(pModule, "Gauss_2F1");
pArgs = PyTuple_Pack(npars, pa1, pa2, pb1, px);
pValue = PyObject_CallObject(pFunc, pArgs);
res= PyFloat_AsDouble(pValue);
printf("2F1(x)= %.15f\n", res);
}
</code></pre>
<p>works all right by calling the GGauss_2F1.py script:</p>
<pre><code>from mpmath import *
def Gauss_2F1(a1, a2, b1, z):
hpg= hyp2f1(a1, a2, b1, z)
return hpg
</code></pre>
<p>Is there a way to call the mpmath function hyp2f1 directly from C, without having to resort to an intermediate python script?
I guess that the mpmath module can be imported by the command </p>
<pre><code>PyRun_SimpleString("from mpmath import *");
</code></pre>
<p>But how do I access the actual function?</p>
| -1 |
2016-09-22T17:43:19Z
| 39,648,752 |
<blockquote>
<p>What? No! Literally do the things you did to access
GGauss_2F1.Gauss_2F1, just with the names changed. Why are you trying
to PyRun_SimpleString("from mpmath import *")? – user2357112</p>
</blockquote>
<p>Ok. Following your suggestions:</p>
<pre><code>#include <Python.h>
void main (int argc, char *argv[])
{
int npars= 4;
double a1, a2, b1, x, res;
PyObject *pName, *pModule, *pFunc, *pArgs, *pValue;
PyObject *pa1, *pa2, *pb1, *px;
a1= atof(argv[1]);
a2= atof(argv[2]);
b1= atof(argv[3]);
x= atof(argv[4]);
setenv("PYTHONPATH", ".", 1); // Set PYTHONPATH TO bin directory
Py_Initialize();
pa1= PyFloat_FromDouble(a1);
pa2= PyFloat_FromDouble(a2);
pb1= PyFloat_FromDouble(b1);
px= PyFloat_FromDouble(x);
pName = PyString_FromString("mpmath");
pModule = PyImport_Import(pName);
pFunc = PyObject_GetAttrString(pModule, "hyp2f1");
pArgs = PyTuple_Pack(npars, pa1, pa2, pb1, px);
pValue = PyObject_CallObject(pFunc, pArgs);
res= PyFloat_AsDouble(pValue);
printf("2F1(x)= %.15f\n", res);
}
</code></pre>
<p>The code seems to work as expected and is generating the correct result.
Thank you for your "patience"...</p>
| 0 |
2016-09-22T20:57:58Z
|
[
"python",
"c",
"mpmath"
] |
Rounding up a value to next int
| 39,645,691 |
<p>I need to make a paint program that rounds up to the next gallon, but I am having a problem. When all is said and done, let's say height is 96, width is 240, and length is 200; that works out to 919 square feet. Subtract a few walls and windows and I have 832 square feet. Now I divide that by 200 and it gives me 4.3, but I can't have 4.3 gallons of paint; I need it rounded to 5. The code below, though, rounds it to 4. It's probably very simple and an easy fix, but can anyone help?</p>
<pre><code>primer_area = (2*(length * height) + 2*(width*height)) / 144
primer_needed = (primer_area + result - door_area - window_area) / 200
# The primer is then rounded up to the next gallon
primer = primer_needed
primer = math.ceil(primer)
</code></pre>
| -1 |
2016-09-22T17:48:23Z
| 39,645,979 |
<p>I think if you change this line it will work</p>
<pre><code> primer_needed = float(primer_area + result - door_area - window_area) / 200
</code></pre>
<p>or you can convert the value to a float explicitly, like</p>
<pre><code>x = float(x)
</code></pre>
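<p>The reason a <code>float()</code> conversion helps (assuming the code actually runs under Python 2, where <code>/</code> between two integers truncates) is that the division already yields 4 before <code>math.ceil</code> ever sees it; under Python 3, <code>/</code> returns a float on its own. A quick sketch:</p>
<pre><code>import math

area = 832
print(area / 200)                     # 4 on Python 2, 4.16 on Python 3
print(float(area) / 200)              # 4.16 on either version
print(math.ceil(float(area) / 200))   # 5 (returned as 5.0 on Python 2)
</code></pre>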
| -2 |
2016-09-22T18:05:55Z
|
[
"python",
"python-3.x"
] |
Python: How do I create effects that apply to variables in classes?
| 39,645,707 |
<p>I am working on a text-based RPG, and I am trying to create magic spells that affect stats such as base attack, turn order, etc. I am using various classes for the spells in the format:</p>
<pre><code>class BuffSpell(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
</code></pre>
<p>and I am calling the spells from a dictionary </p>
<pre><code>bardSpells = {
    1: BuffSpell(name="Flare", level=0, stat=baseAttack, value=-1)
}
</code></pre>
<p>How would I use Flare on an enemy such as a goblin? I tried making a method</p>
<pre><code>def useBuffSpell(target, spell):
target.spell.stat = target.spell.stat + spell.value
</code></pre>
<p>using parameters of <code>goblin</code> and <code>BuffSpell[1]</code>.</p>
<p>Additionally, what would be a good method of affecting next turn actions in a combat sequence via a spell?</p>
| 1 |
2016-09-22T17:49:19Z
| 39,646,091 |
<p>Use <code>getattr</code> and <code>setattr</code>, and make the attribute reference in the spell a <code>str</code>:</p>
<pre><code>bardSpells = {
    1: BuffSpell(name="Flare", level=0, stat="baseAttack", value=-1)
}
def useBuffSpell(target, spell):
setattr(target, spell.stat,
getattr(target, spell.stat, 0) + spell.value)
</code></pre>
<p><code>setattr</code> is a builtin that, given an object, the name of an attribute, and a value, sets that attribute of the object to the value.
<code>getattr</code> is a builtin that, given an object, the name of an attribute, and optionally a default (<code>0</code> in the above example), returns that attribute of the object, or the default if the object does not have an attribute by that name. If no default is given, nonexistent attribute access raises an <code>AttributeError</code>.</p>
<p>The above example, if you did <code>useBuffSpell(goblin, bardSpells[1])</code>, would reduce the <code>goblin</code>'s <code>baseAttack</code> by 1, or set it to <code>-1</code> if it did not have a <code>baseAttack</code>.</p>
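<p>A small self-contained demonstration using the definitions above (the <code>Goblin</code> class is just a hypothetical stand-in):</p>
<pre><code>class Goblin(object):
    baseAttack = 5

goblin = Goblin()
useBuffSpell(goblin, bardSpells[1])
print(goblin.baseAttack)  # 4, i.e. the original 5 plus the spell's value of -1
</code></pre>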
| 0 |
2016-09-22T18:12:35Z
|
[
"python",
"class",
"methods"
] |
Open a csv.gz file in Python and print first 100 rows
| 39,645,804 |
<p>I'm trying to get only the first 100 rows of a csv.gz file that has over 4 million rows in Python. I also want information on the # of columns and the headers of each. How can I do this? </p>
<p>I looked at <a href="http://stackoverflow.com/questions/10566558/python-read-lines-from-compressed-text-files">python: read lines from compressed text files</a> to figure out how to open the file but I'm struggling to figure out how to actually print the first 100 rows and get some metadata on the information in the columns. </p>
<p>I found this <a href="https://stackoverflow.com/questions/1767513/read-first-n-lines-of-a-file-in-python">Read first N lines of a file in python</a> but not sure how to marry this to opening the csv.gz file and reading it without saving an uncompressed csv file. </p>
<p>I have written this code:</p>
<pre><code>import gzip
import csv
import json
import pandas as pd
df = pd.read_csv('google-us-data.csv.gz', compression='gzip', header=0, sep=' ', quotechar='"', error_bad_lines=False)
for i in range (100):
print df.next()
</code></pre>
<p>I'm new to Python and I don't understand the results. I'm sure my code is wrong and I've been trying to debug it but I don't know which documentation to look at. </p>
<p>I get these results (and it keeps going down the console - this is an excerpt): </p>
<pre><code>Skipping line 63: expected 3 fields, saw 7
Skipping line 64: expected 3 fields, saw 7
Skipping line 65: expected 3 fields, saw 7
Skipping line 66: expected 3 fields, saw 7
Skipping line 67: expected 3 fields, saw 7
Skipping line 68: expected 3 fields, saw 7
Skipping line 69: expected 3 fields, saw 7
Skipping line 70: expected 3 fields, saw 7
Skipping line 71: expected 3 fields, saw 7
Skipping line 72: expected 3 fields, saw 7
</code></pre>
| 2 |
2016-09-22T17:55:39Z
| 39,645,923 |
<p>I think you could do something like this (from the gzip module <a href="https://docs.python.org/3/library/gzip.html#examples-of-usage" rel="nofollow">examples</a>)</p>
<pre><code>import gzip
with gzip.open('/home/joe/file.txt.gz', 'rb') as f:
header = f.readline()
# Read lines any way you want now.
</code></pre>
| 0 |
2016-09-22T18:02:50Z
|
[
"python",
"csv"
] |
Open a csv.gz file in Python and print first 100 rows
| 39,645,804 |
<p>I'm trying to get only the first 100 rows of a csv.gz file that has over 4 million rows in Python. I also want information on the # of columns and the headers of each. How can I do this? </p>
<p>I looked at <a href="http://stackoverflow.com/questions/10566558/python-read-lines-from-compressed-text-files">python: read lines from compressed text files</a> to figure out how to open the file but I'm struggling to figure out how to actually print the first 100 rows and get some metadata on the information in the columns. </p>
<p>I found this <a href="https://stackoverflow.com/questions/1767513/read-first-n-lines-of-a-file-in-python">Read first N lines of a file in python</a> but not sure how to marry this to opening the csv.gz file and reading it without saving an uncompressed csv file. </p>
<p>I have written this code:</p>
<pre><code>import gzip
import csv
import json
import pandas as pd
df = pd.read_csv('google-us-data.csv.gz', compression='gzip', header=0, sep=' ', quotechar='"', error_bad_lines=False)
for i in range (100):
print df.next()
</code></pre>
<p>I'm new to Python and I don't understand the results. I'm sure my code is wrong and I've been trying to debug it but I don't know which documentation to look at. </p>
<p>I get these results (and it keeps going down the console - this is an excerpt): </p>
<pre><code>Skipping line 63: expected 3 fields, saw 7
Skipping line 64: expected 3 fields, saw 7
Skipping line 65: expected 3 fields, saw 7
Skipping line 66: expected 3 fields, saw 7
Skipping line 67: expected 3 fields, saw 7
Skipping line 68: expected 3 fields, saw 7
Skipping line 69: expected 3 fields, saw 7
Skipping line 70: expected 3 fields, saw 7
Skipping line 71: expected 3 fields, saw 7
Skipping line 72: expected 3 fields, saw 7
</code></pre>
| 2 |
2016-09-22T17:55:39Z
| 39,645,994 |
<p>The first answer you linked suggests using <a href="https://docs.python.org/3/library/gzip.html#gzip.GzipFile" rel="nofollow"><code>gzip.GzipFile</code></a> - this gives you a file-like object that decompresses for you on the fly.</p>
<p>Now you just need some way to parse csv data out of a file-like object ... like <a href="https://docs.python.org/3/library/csv.html#csv.reader" rel="nofollow">csv.reader</a>.</p>
<p>The <code>csv.reader</code> object will give you a list of fieldnames, so you know the columns, their names, and how many there are.</p>
<p>Then you need to get the first 100 csv row objects, which will work exactly like in the second question you linked, and each of those 100 objects will be a list of fields.</p>
<p>So far this is all covered in your linked questions, apart from knowing about the existence of the csv module, which is listed in the <a href="https://docs.python.org/3/library/index.html" rel="nofollow">library index</a>.</p>
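<p>A minimal sketch tying those pieces together (written for Python 3; on Python 2, open the file in <code>'rb'</code> mode instead of <code>'rt'</code>):</p>
<pre><code>import csv
import gzip
from itertools import islice

with gzip.open('google-us-data.csv.gz', 'rt') as f:
    reader = csv.reader(f)
    header = next(reader)            # the column names
    print(len(header), header)       # how many columns and what they are
    for row in islice(reader, 100):  # only the first 100 data rows
        print(row)
</code></pre>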
| 1 |
2016-09-22T18:06:50Z
|
[
"python",
"csv"
] |
Open a csv.gz file in Python and print first 100 rows
| 39,645,804 |
<p>I'm trying to get only the first 100 rows of a csv.gz file that has over 4 million rows in Python. I also want information on the # of columns and the headers of each. How can I do this? </p>
<p>I looked at <a href="http://stackoverflow.com/questions/10566558/python-read-lines-from-compressed-text-files">python: read lines from compressed text files</a> to figure out how to open the file but I'm struggling to figure out how to actually print the first 100 rows and get some metadata on the information in the columns. </p>
<p>I found this <a href="https://stackoverflow.com/questions/1767513/read-first-n-lines-of-a-file-in-python">Read first N lines of a file in python</a> but not sure how to marry this to opening the csv.gz file and reading it without saving an uncompressed csv file. </p>
<p>I have written this code:</p>
<pre><code>import gzip
import csv
import json
import pandas as pd
df = pd.read_csv('google-us-data.csv.gz', compression='gzip', header=0, sep=' ', quotechar='"', error_bad_lines=False)
for i in range (100):
print df.next()
</code></pre>
<p>I'm new to Python and I don't understand the results. I'm sure my code is wrong and I've been trying to debug it but I don't know which documentation to look at. </p>
<p>I get these results (and it keeps going down the console - this is an excerpt): </p>
<pre><code>Skipping line 63: expected 3 fields, saw 7
Skipping line 64: expected 3 fields, saw 7
Skipping line 65: expected 3 fields, saw 7
Skipping line 66: expected 3 fields, saw 7
Skipping line 67: expected 3 fields, saw 7
Skipping line 68: expected 3 fields, saw 7
Skipping line 69: expected 3 fields, saw 7
Skipping line 70: expected 3 fields, saw 7
Skipping line 71: expected 3 fields, saw 7
Skipping line 72: expected 3 fields, saw 7
</code></pre>
| 2 |
2016-09-22T17:55:39Z
| 39,646,258 |
<p>Your code is OK;</p>
<p>pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">read_csv</a></p>
<blockquote>
<p><strong>warn_bad_lines</strong> : boolean, default True </p>
<pre><code>If error_bad_lines is False, and warn_bad_lines is True,
 a warning for each "bad line" will be output. (Only valid with C parser).
</code></pre>
</blockquote>
| 1 |
2016-09-22T18:21:59Z
|
[
"python",
"csv"
] |
Open a csv.gz file in Python and print first 100 rows
| 39,645,804 |
<p>I'm trying to get only the first 100 rows of a csv.gz file that has over 4 million rows in Python. I also want information on the # of columns and the headers of each. How can I do this? </p>
<p>I looked at <a href="http://stackoverflow.com/questions/10566558/python-read-lines-from-compressed-text-files">python: read lines from compressed text files</a> to figure out how to open the file but I'm struggling to figure out how to actually print the first 100 rows and get some metadata on the information in the columns. </p>
<p>I found this <a href="https://stackoverflow.com/questions/1767513/read-first-n-lines-of-a-file-in-python">Read first N lines of a file in python</a> but not sure how to marry this to opening the csv.gz file and reading it without saving an uncompressed csv file. </p>
<p>I have written this code:</p>
<pre><code>import gzip
import csv
import json
import pandas as pd
df = pd.read_csv('google-us-data.csv.gz', compression='gzip', header=0, sep=' ', quotechar='"', error_bad_lines=False)
for i in range (100):
print df.next()
</code></pre>
<p>I'm new to Python and I don't understand the results. I'm sure my code is wrong and I've been trying to debug it but I don't know which documentation to look at. </p>
<p>I get these results (and it keeps going down the console - this is an excerpt): </p>
<pre><code>Skipping line 63: expected 3 fields, saw 7
Skipping line 64: expected 3 fields, saw 7
Skipping line 65: expected 3 fields, saw 7
Skipping line 66: expected 3 fields, saw 7
Skipping line 67: expected 3 fields, saw 7
Skipping line 68: expected 3 fields, saw 7
Skipping line 69: expected 3 fields, saw 7
Skipping line 70: expected 3 fields, saw 7
Skipping line 71: expected 3 fields, saw 7
Skipping line 72: expected 3 fields, saw 7
</code></pre>
| 2 |
2016-09-22T17:55:39Z
| 39,646,318 |
<p>Pretty much what you've already done, except <code>read_csv</code> also has <code>nrows</code> where you can specify the number of rows you want from the data set.</p>
<p>Additionally, to prevent the errors you were getting, you can set <code>error_bad_lines</code> to <code>False</code>. You'll still get warnings (if that bothers you, set <code>warn_bad_lines</code> to <code>False</code> as well). These are there to indicate inconsistency in how your dataset is filled out.</p>
<pre><code>import pandas as pd
data = pd.read_csv('google-us-data.csv.gz', nrows=100, compression='gzip',
error_bad_lines=False)
print(data)
</code></pre>
<p>You can easily do something similar with the <code>csv</code> built-in library, but it'll require a <code>for</code> loop to iterate over the data, has shown in other examples.</p>
| 1 |
2016-09-22T18:25:14Z
|
[
"python",
"csv"
] |
boolean indexing in xarray
| 39,645,853 |
<p>I have some arrays with dims <code>'time', 'lat', 'lon'</code> and some with just <code>'lat', 'lon'</code>. I often have to do this in order to mask time-dependent data with a 2d (lat-lon) mask:</p>
<pre><code>x.data[:, mask.data] = np.nan
</code></pre>
<p>Of course, computations broadcast as expected. If <code>y</code> is 2d lat-lon data, its values are broadcast to all time coordinates in x:</p>
<pre><code>z = x + y
</code></pre>
<p>But indexing doesn't broadcast as I'd expect. I'd like to be able to do this, but it raises <em>ValueError: Buffer has wrong number of dimensions</em>:</p>
<pre><code>x[mask] = np.nan
</code></pre>
<p>Lastly, it seems that <code>xr.where</code> <em>does</em> broadcast the values of the mask across time coordinates as expected, but you can't set values this way.</p>
<pre><code>x_masked = x.where(mask)
</code></pre>
<p>So, is there something I'm missing here that facilitates setting values using a boolean mask that is missing dimensions (and needs to be broadcast)? Is the option I provided at the top really the way to do this (in which case, I might as well just be using standard numpy arrays...)</p>
| 0 |
2016-09-22T17:58:47Z
| 39,665,384 |
<p>Somewhat related question here: <a href="http://stackoverflow.com/questions/38884283/concise-way-to-filter-data-in-xarray">Concise way to filter data in xarray</a></p>
<p>Currently the best approach is a combination of <code>.where</code> and <code>.fillna</code>. </p>
<pre><code>valid = date_by_items.notnull()
positive = date_by_items > 0
positive = positive * 2
result = positive.fillna(0.).where(valid)
result
</code></pre>
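<p>Applied to the arrays in the question (where <code>mask</code> is True at the lat/lon points you want blanked out), a rough sketch would be:</p>
<pre><code>x_masked = x.where(~mask)            # NaN wherever mask is True, broadcast over time
x_filled = x.where(~mask).fillna(0)  # or replace the masked points with a value
</code></pre>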
<p>But changes are coming in xarray that will make this more concise - check out the GitHub repo if you're interested.</p>
| 2 |
2016-09-23T16:20:54Z
|
[
"python",
"numpy",
"python-xarray"
] |
Can I prevent the Cmd dialogue from popping up when using SymPy preview?
| 39,645,899 |
<p>I have written some code in Python 2.7 that will read a string from a .ini file and generate a .png image of the string in LaTeX format using <code>sympy.preview</code>. The idea is to use this script to generate the images in the background while I am typing them out. The problem is that even when I run the script in the background (using pythonw.exe), two empty command line windows will pop up for <code>latex.exe</code> and <code>dvipng.exe</code> and close immediately. This is annoying because it interrupts my typing. I want to be able to do it "on the fly" without being interrupted.</p>
<p>Is there a way to prevent these windows from opening?</p>
<p>Here is my minimum working example:</p>
<pre><code>from sympy import preview # for generating Latex images
import ConfigParser # for reading .ini files
# read the latex commands from the .ini file:
config = ConfigParser.RawConfigParser()
config.read('mathPredictor.ini')
one = config.get('Words', 'one')
prefix = '\\Huge $' # make the Latex equations bigger
suffix = '$'
eqone = prefix + one + suffix
# for generating Latex images:
preview(eqone, viewer='file', filename='one.png', euler=False)
</code></pre>
<p>for the .ini file</p>
<pre><code>[Words]
one=\alpha
</code></pre>
<p>I am running Windows 8 64-bit with Python 2.7.10. This question is cross-posted at <a href="http://tex.stackexchange.com/questions/330876/">tex.SE</a>.</p>
| 0 |
2016-09-22T18:01:06Z
| 39,649,148 |
<p>I solved my own problem. If I ran the Python script from a previously-existing cmd instance, the windows didn't pop up. My solution then was to start a hidden instance of cmd and send the path location to the hidden cmd window. This enabled the Python code to be executed without the annoying popups from <code>latex.exe</code> and <code>dvipng.exe</code>. </p>
<p>I accomplished this in Autohotkey language using the following function:</p>
<pre><code>Run, %comspec%, , max, pid2 ; start a cmd instance; its PID is stored in pid2
WinWait, ahk_pid %pid2% ; waits for instance to be active
ControlSend, ,python mypythonscript.py`r, ahk_pid %pid2% ; send text for script execute
</code></pre>
| 1 |
2016-09-22T21:27:54Z
|
[
"python",
"latex",
"command-prompt",
"sympy"
] |
Printing formatted floats in nested tuple of mixed type
| 39,645,950 |
<p>I have a list of tuples where the entries in the tuples are mixed type (int, float, tuple) and want to print each element of the list on one line. </p>
<p>Example list:</p>
<pre><code> [('520',
(0.26699505214910974, 9.530913611077067e-22, 1431,
(0.21819421133984918, 0.31446394340528838), 11981481)),
('1219',
(0.2775519783082116, 2.0226340976042765e-25, 1431,
(0.22902629625165472, 0.32470159534237308), 14905481))]
</code></pre>
<p>I would like to print each tuple as a single line with the floats formatted to print to the ten-thousandth place:</p>
<pre><code> [('520', (0.2669, 9.5309e-22, 1431, (0.2181, 0.3144), 11981481)),
('1219', (0.2775, 2.0226e-25, 1431, (0.2290, 0.3247), 14905481))]
</code></pre>
<p>I was using <code>pprint</code> to get everything on one line</p>
<pre><code>pprint(myList, depth=3, compact=True)
> ('1219', (0.2775519783082116, 2.0226340976042765e-25, 1431, (...), 14905481))]
</code></pre>
<p>but I wasn't sure how to properly format the floats in a pythonic manner. (There has to be a nicer way of doing it than looping through the list, looping through each tuple, checking if-float/if-int/if-tuple and converting all floats via <code>"%6.4f" % x</code>). </p>
| 1 |
2016-09-22T18:04:24Z
| 39,656,204 |
<p>This is not exactly what you need, but very close, and the code is pretty compact.</p>
<pre><code>def truncateFloat(data):
return tuple( ["{0:.4}".format(x) if isinstance(x,float) else (x if not isinstance(x,tuple) else truncateFloat(x)) for x in data])
pprint(truncateFloat(the_list))
</code></pre>
<p>For your example the result is</p>
<pre><code>(('520', ('0.267', '9.531e-22', 1431, ('0.2182', '0.3145'), 11981481)),
('1219', ('0.2776', '2.023e-25', 1431, ('0.229', '0.3247'), 14905481)))
</code></pre>
<p>You can play with options of <code>.format()</code> to get what you want.</p>
| 1 |
2016-09-23T08:29:49Z
|
[
"python",
"pretty-print",
"pprint"
] |
Zapier Python Code Error Segment.com (usercode.py, line 9)
| 39,645,963 |
<p>I'm trying to take Campaign Monitor Open events and pipe the data to Segment.com via POST API using Python code Action on Zapier.</p>
<p>I keep getting the following <strong>error</strong>:</p>
<blockquote>
<p>Bargle. We hit an error creating a run python. :-( Error:
Your code had an error! Traceback (most recent call last): SyntaxError: invalid syntax (usercode.py, line 9)</p>
</blockquote>
<p>Here is the existing setup screenshot (masking the auth code):
<a href="http://i.stack.imgur.com/ee3Z2.png" rel="nofollow">Zapier Zap Setup for Code</a></p>
<p>The Python code returning the error is:</p>
<pre><code>url = 'https://api.segment.io/v1/track/'
payload =
{
'userId': input_data['email'],
'event': 'Email Opened',
'properties': {
'listid': input_data['listid'],
'open_date': input_data['date'],
'cm_id': input_data['cm_id'],
'open_city': input_data['city'],
'open_region': input_data['region'],
'open_country': input_data['country'],
'open_lat': input_data['lat'],
'open_long': input_data['long'],
'open_country_code': input_data['country_code']
},
'context': {
'ip': input_data['ip']
}
}
headers = {
'content-type': 'application/json',
'Authorization': 'Basic BASE64ENCODEDWRITEKEY'
}
response = requests.post(url, data=json.dumps(payload), headers=headers)
response.raise_for_status()
return response.json()
</code></pre>
<p>Any advice on what the error may be referencing? Any advice overall on how to better achieve this?</p>
| 1 |
2016-09-22T18:05:04Z
| 39,646,072 |
<p>Doing this:</p>
<pre><code>payload =
{}
</code></pre>
<p>Is improper syntax. Try:</p>
<pre><code>payload = {}
</code></pre>
<p>I also recommend using a linter - maybe <a href="http://infoheap.com/python-lint-online/" rel="nofollow">http://infoheap.com/python-lint-online/</a> would be helpful to you!</p>
| 0 |
2016-09-22T18:11:16Z
|
[
"python",
"zapier",
"segment-io"
] |
Zapier Python Code Error Segment.com (usercode.py, line 9)
| 39,645,963 |
<p>I'm trying to take Campaign Monitor Open events and pipe the data to Segment.com via POST API using Python code Action on Zapier.</p>
<p>I keep getting the following <strong>error</strong>:</p>
<blockquote>
<p>Bargle. We hit an error creating a run python. :-( Error:
Your code had an error! Traceback (most recent call last): SyntaxError: invalid syntax (usercode.py, line 9)</p>
</blockquote>
<p>Here is the existing setup screenshot (masking the auth code):
<a href="http://i.stack.imgur.com/ee3Z2.png" rel="nofollow">Zapier Zap Setup for Code</a></p>
<p>The Python code returning the error is:</p>
<pre><code>url = 'https://api.segment.io/v1/track/'
payload =
{
'userId': input_data['email'],
'event': 'Email Opened',
'properties': {
'listid': input_data['listid'],
'open_date': input_data['date'],
'cm_id': input_data['cm_id'],
'open_city': input_data['city'],
'open_region': input_data['region'],
'open_country': input_data['country'],
'open_lat': input_data['lat'],
'open_long': input_data['long'],
'open_country_code': input_data['country_code']
},
'context': {
'ip': input_data['ip']
}
}
headers = {
'content-type': 'application/json',
'Authorization': 'Basic BASE64ENCODEDWRITEKEY'
}
response = requests.post(url, data=json.dumps(payload), headers=headers)
response.raise_for_status()
return response.json()
</code></pre>
<p>Any advice on what the error may be referencing? Any advice overall on how to better achieve this?</p>
| 1 |
2016-09-22T18:05:04Z
| 39,646,724 |
<p>Thanks to @Bryan Helmig. That syntax fix, in addition to <code>import json</code>, fixed the issue. For those interested, this works:</p>
<pre><code>import json
import requests
url = 'https://api.segment.io/v1/track/'
payload = {
'userId': input_data['email'],
'event': 'Email Opened',
'properties': {
'listid': input_data['listid'],
'open_date': input_data['date'],
'cm_id': input_data['cm_id'],
'open_city': input_data['city'],
'open_region': input_data['region'],
'open_country': input_data['country'],
'open_lat': input_data['lat'],
'open_long': input_data['long'],
'open_country_code': input_data['country_code']
},
'context': {
'ip': input_data['ip']
}
}
headers = {
'content-type': 'application/json',
'Authorization': 'Basic WRITEKEYHERE'
}
response = requests.post(url, data=json.dumps(payload), headers=headers)
response.raise_for_status()
</code></pre>
| 0 |
2016-09-22T18:47:07Z
|
[
"python",
"zapier",
"segment-io"
] |
Best way to combine datetime in pandas dataframe when times are close
| 39,645,972 |
<p>What is the best way to combine times together if they are very close (within 5 seconds).</p>
<pre><code> start end delta
0 2016-01-01 08:00:01 2016-01-01 08:07:53 472.0
1 2016-01-01 08:07:54 2016-01-01 08:09:23 89.0
2 2016-01-01 08:09:24 2016-01-01 08:32:51 1407.0
3 2016-01-01 08:38:56 2016-01-01 08:38:58 2.0
4 2016-01-01 08:39:00 2016-01-01 08:58:06 1146.0
5 2016-01-01 09:07:26 2016-01-01 09:07:27 1.0
6 2016-01-01 09:07:31 2016-01-01 09:07:33 2.0
7 2016-01-01 09:07:35 2016-01-01 09:11:28 233.0
</code></pre>
<p>becomes</p>
<pre><code> start end delta
0 2016-01-01 08:00:01 2016-01-01 08:07:53 472.0
1 2016-01-01 08:07:54 2016-01-01 08:32:51 1496.0
2 2016-01-01 08:38:56 2016-01-01 08:58:06 1148.0
3 2016-01-01 09:07:26 2016-01-01 09:11:28 236.0
</code></pre>
| 0 |
2016-09-22T18:05:38Z
| 39,647,024 |
<p>Try this:</p>
<pre><code>timediff = df.start.diff()/np.timedelta64(1, 's')
pd.DataFrame(
{'start': df[(timediff>5) | (timediff.isnull())].start.tolist(),
'end': df[(timediff.shift(-1)>5) | (timediff.shift(-1).isnull())].end.tolist()}
)
</code></pre>
<p>This will give you start and end columns. Then you can take the delta.</p>
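<p>If the combined frame is assigned to a name (say <code>combined</code>), the delta column can then be recomputed from it, for example:</p>
<pre><code>combined['delta'] = (combined.end - combined.start) / np.timedelta64(1, 's')
</code></pre>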
| 0 |
2016-09-22T19:04:36Z
|
[
"python",
"pandas",
"numpy"
] |
Need help writing code that will automatically write more code?
| 39,646,016 |
<p>I need help with writing code for a work project. I have written a script that uses pandas to read an excel file. I have a while-loop written to iterate through each row and append latitude/longitude data from the excel file onto a map (Folium, Open Street Map)</p>
<p>The issue I've run into has to do with the GPS data. I download a CSV file with vehicle coordinates. On some of the vehicles I'm tracking, the GPS loses signal for whatever reason and doesn't come back online for hundreds of miles. This causes issues when I'm using line plots to track the vehicle movement on the map. I end up getting long straight lines running across cities, since Folium tries to connect the last GPS coordinate before the vehicle went offline with the next GPS coordinate available once the vehicle is back online, which could be hundreds of miles away, as <a href="http://imgur.com/a/klX2g" rel="nofollow">shown here</a>. I think that every time the script finds a gap in GPS coords, I can have a new loop generated that will basically start a completely new line plot and append it to the existing map. This way I should still see the entire vehicle route on the map, but without the long lines trying to connect broken points together.</p>
<p>My idea is to have my script calculate the absolute value difference between each iteration of longitude data. If the difference between each point is greater than 0.01, I want my program to end the loop and to start a new loop. This new loop would then need to have new variables init. I will not know how many new loops would need to be created since there's no way to predict how many times the GPS will go offline/online in each vehicle.</p>
<p><a href="https://gist.github.com/tapanojum/81460dd89cb079296fee0c48a3d625a7" rel="nofollow">https://gist.github.com/tapanojum/81460dd89cb079296fee0c48a3d625a7</a></p>
<pre><code>import folium
import pandas as pd
# Pulls CSV file from this location and adds headers to the columns
df = pd.read_csv('Example.CSV',names=['Longitude', 'Latitude',])
lat = (df.Latitude / 10 ** 7) # Converting Lat/Lon into decimal degrees
lon = (df.Longitude / 10 ** 7)
zoom_start = 17 # Zoom level and starting location when map is opened
mapa = folium.Map(location=[lat[1], lon[1]], zoom_start=zoom_start)
i = 0
j = (lat[i] - lat[i - 1])
location = []
while i < len(lat):
if abs(j) < 0.01:
location.append((lat[i], lon[i]))
i += 1
else:
break
# This section is where additional loops would ideally be generated
# Line plot settings
c1 = folium.MultiPolyLine(locations=[location], color='blue', weight=1.5, opacity=0.5)
c1.add_to(mapa)
mapa.save(outfile="Example.html")
</code></pre>
<p>Here's pseudocode for how I want to accomplish this.</p>
<p>1) Python reads csv</p>
<p>2) Converts Long/Lat into decimal degrees</p>
<p>3) Init location1</p>
<p>4) Runs while loop to append coords</p>
<p>5) If abs(j) >= 0.01, break loop</p>
<p>6) Init location(2,3,...)</p>
<p>7) Generates new while i < len(lat): loop using location(2,3,...)</p>
<p>9) Repeats step 5-7 while i < len(lat) (Repeat as many times as there are
instances of abs(j) >= 0.01))</p>
<p>10) Creates (c1, c2, c3,...) = folium.MultiPolyLine(locations=[location], color='blue', weight=1.5, opacity=0.5) for each variable of location</p>
<p>11) Creates c1.add_to(mapa) for each c1,c2,c3... listed above</p>
<p>12) mapa.save</p>
<p>Any help would be tremendously appreciated!</p>
<p><strong>UPDATE:</strong>
Working Solution</p>
<pre><code>import folium
import pandas as pd
# Pulls CSV file from this location and adds headers to the columns
df = pd.read_csv('EXAMPLE.CSV', names=['Longitude', 'Latitude'])
lat = (df.Latitude / 10 ** 7) # Converting Lat/Lon into decimal degrees
lon = (df.Longitude / 10 ** 7)
zoom_start = 17 # Zoom level and starting location when map is opened
mapa = folium.Map(location=[lat[1], lon[1]], zoom_start=zoom_start)
i = 1
location = []
while i < (len(lat)-1):
location.append((lat[i], lon[i]))
i += 1
j = (lat[i] - lat[i - 1])
if abs(j) > 0.01:
c1 = folium.MultiPolyLine(locations=[location], color='blue', weight=1.5, opacity=0.5)
c1.add_to(mapa)
location = []
mapa.save(outfile="Example.html")
</code></pre>
| 0 |
2016-09-22T18:07:54Z
| 39,647,308 |
<p>Your while loop looks wonky. You only set j once, outside the loop. Also, I think you want a list of line segments. Did you want something like this;</p>
<pre><code>i = 0
segment = 0
locations = []
while i < len(lat):
    locations.append([])  # start a new segment
# add points to the current segment until all are
# consumed or a disconnect is detected
while i < len(lat):
locations[segment].append((lat[i], lon[i]))
        i += 1
        if i == len(lat):  # no next point left to compare against
            break
        j = (lat[i] - lat[i - 1])
if abs(j) > 0.01:
break
segment += 1
</code></pre>
<p>When this is done <code>locations</code> will be a list of segments, e.g.;</p>
<pre><code> [ segment0, segment1, ..... ]
</code></pre>
<p>each segment will be a list of points, e.g.;</p>
<pre><code> [ (lat,lon), (lan,lon), ..... ]
</code></pre>
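<p>With <code>locations</code> built this way, every segment can then be drawn with the same call the question already uses, for example:</p>
<pre><code>c1 = folium.MultiPolyLine(locations=locations, color='blue', weight=1.5, opacity=0.5)
c1.add_to(mapa)
mapa.save(outfile="Example.html")
</code></pre>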
| 0 |
2016-09-22T19:22:46Z
|
[
"python",
"automation",
"openstreetmap",
"folium"
] |
How to stop `colorbar` from reshaping `networkx` plot? (Python 3)
| 39,646,027 |
<p>I am trying to change the <code>colorbar</code> on my <code>networkx</code> plot. The bar gets extra wide and also smooshes my original <code>networkx</code> plot (left) when I add the <code>colorbar</code> on there (right). </p>
<p><strong>How can I make my <code>colorbar</code> thinner and not alter my original <code>networkx</code> graph?</strong> </p>
<p>My code below with <code>colorbar</code> courtesy of <a href="http://stackoverflow.com/users/5285918/lanery">http://stackoverflow.com/users/5285918/lanery</a> but used w/ a larger network. </p>
<pre><code># Set up Graph
DF_adj = pd.DataFrame(np.array(
[[1, 0, 1, 1],
[0, 1, 1, 0],
[1, 1, 1, 1],
[1, 0, 1, 1] ]), columns=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'], index=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])
G = nx.Graph(DF_adj.as_matrix())
G = nx.relabel_nodes(G, dict(zip(range(4), ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])))
# Color mapping
color_palette = sns.cubehelix_palette(3)
cmap = {k:color_palette[v-1] for k,v in zip(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],[2, 1, 3, 2])}
# Draw
fig, ax = plt.subplots()
nx.draw(G, node_color=[cmap[node] for node in G.nodes()], with_labels=True, ax=ax)
sm = plt.cm.ScalarMappable(cmap=ListedColormap(color_palette),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
fig.colorbar(sm, ticks=range(1,4))
</code></pre>
<p><a href="http://i.stack.imgur.com/g3vG8.png" rel="nofollow"><img src="http://i.stack.imgur.com/g3vG8.png" alt="enter image description here"></a></p>
| 0 |
2016-09-22T18:08:30Z
| 39,646,338 |
<p>Easiest way should be to plot the colorbar on its own axis and play around with the [left, bottom, width, height] parameters.</p>
<pre><code>cbaxes = fig.add_axes([0.9, 0.1, 0.015, 0.8])
sm = plt.cm.ScalarMappable(cmap=ListedColormap(color_palette),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
plt.colorbar(sm, cax=cbaxes, ticks=range(4))
</code></pre>
<p>Source: <a href="http://stackoverflow.com/questions/13310594/positioning-the-colorbar">positioning the colorbar</a></p>
| 2 |
2016-09-22T18:26:23Z
|
[
"python",
"matplotlib",
"colors",
"networkx",
"colorbar"
] |
Sublime3 text can't ignore PEP8 formatting for Python
| 39,646,060 |
<p>I installed Sublime for Python programming but I found that PEP8 error detection is pretty annoying and I couldn't get rid of it.</p>
<p>I tried this but it's not working:</p>
<p><a href="http://i.stack.imgur.com/119l5.png" rel="nofollow"><img src="http://i.stack.imgur.com/119l5.png" alt="enter image description here"></a></p>
<p>It kept showing this:</p>
<p><a href="http://i.stack.imgur.com/OyMkx.png" rel="nofollow"><img src="http://i.stack.imgur.com/OyMkx.png" alt="enter image description here"></a></p>
| -1 |
2016-09-22T18:10:29Z
| 39,646,370 |
<p>Try to add <code>"pep8": false,</code>. </p>
<p>If it does not work, add <code>"sublimelinter_disable":["python"],</code> to disable python's inspections completely. </p>
<p>And you would like to look <a href="https://github.com/SublimeLinter/SublimeLinter-pep8" rel="nofollow">https://github.com/SublimeLinter/SublimeLinter-pep8</a></p>
| 0 |
2016-09-22T18:28:11Z
|
[
"python",
"sublimetext",
"pep8"
] |
pandas histogram: plot histogram for each column as subplot of a big figure
| 39,646,070 |
<p>I am using the following code, trying to plot the histogram of every column of my pandas data frame df_in as a subplot of one big figure.</p>
<pre><code>%matplotlib notebook
from itertools import combinations
import matplotlib.pyplot as plt
fig, axes = plt.subplots(len(df_in.columns) // 3, 3, figsize=(12, 48))
for x in df_in.columns:
df_in.hist(column = x, bins = 100)
fig.tight_layout()
</code></pre>
<p>However, the histograms didn't show up in the subplots. Does anyone know what I missed? Thanks!</p>
| 0 |
2016-09-22T18:11:05Z
| 39,646,226 |
<p>You need to specify which axis you are plotting to. This should work:</p>
<pre><code>fig, axes = plt.subplots(len(df_in.columns)//3, 3, figsize=(12, 48))
for col, axis in zip(df_in.columns, axes.flatten()):  # flatten the 2-D grid of axes
df_in.hist(column = col, bins = 100, ax=axis)
</code></pre>
| 0 |
2016-09-22T18:20:15Z
|
[
"python",
"pandas",
"histogram"
] |
How to read a .py file after I install Anaconda?
| 39,646,077 |
<p>I have installed <code>Anaconda</code>, but I do not know how to open a <code>.py</code> file.</p>
<p>If possible, please explain plainly; I browsed several threads, but I understood none of them.</p>
<p>Thanks a lot for your help.</p>
<p>Best,</p>
| -2 |
2016-09-22T18:11:26Z
| 39,646,217 |
<p>You can use any text editor to open a .py file, e.g. TextMate, TextWrangler, TextEdit, PyCharm, AquaMacs, etc.</p>
| 0 |
2016-09-22T18:19:31Z
|
[
"python",
"anaconda"
] |
How to read a .py file after I install Anaconda?
| 39,646,077 |
<p>I have installed <code>Anaconda</code>, but I do not know how to open a <code>.py</code> file.</p>
<p>If possible, please explain plainly; I browsed several threads, but I understood none of them.</p>
<p>Thanks a lot for your help.</p>
<p>Best,</p>
| -2 |
2016-09-22T18:11:26Z
| 39,646,328 |
<p>In the menu structure of your operating system, you should see a folder for Anaconda. In that folder is an icon for Spyder. Click that icon.</p>
<p>After a while (Spyder loads slowly) you will see the Spyder integrated environment. You can choose File then Open from the menu, or just click the Open icon that looks like an open folder. In the resulting Open dialog box, navigate to the relevant folder and open the relevant .py file. The Open dialog box will see .py, .pyw, and .ipy files by default, but clicking the relevant list box will enable you to see and load many other kinds of files. Opening that file will load the contents into the editor section of Spyder. You can view or edit the file there, or use other parts of Spyder to run, debug, and do other things with the file.</p>
<p>As of now, there is no in-built way to load a .py file in Spyder directly from the operating system. You can set that up in Windows by double-clicking a .py file, then choosing the spyder.exe file, and telling Windows to always use that application to load the file. The Anaconda developers have said that a soon-to-come version of Anaconda will modify the operating system so that .py and other files will load in Spyder with a double-click. But what I said above works for Windows.</p>
<p>This answer was a bit condensed, since I do not know your level of understanding. Ask if you need more details.</p>
| 0 |
2016-09-22T18:25:38Z
|
[
"python",
"anaconda"
] |
Bundling C++ extension headers with a Python package source distribution
| 39,646,097 |
<p>I'm writing a Cython wrapper to a C++ library that I would like to distribute as a Python package. I've come up with a dummy version of my package that looks like this (full source <a href="https://github.com/standage/packagetest" rel="nofollow">here</a>).</p>
<pre><code>$ tree
.
├── bogus.pyx
├── inc
│   └── bogus.hpp
├── setup.py
└── src
    └── bogus.cpp
$
$ cat inc/bogus.hpp
#ifndef BOGUS
#define BOGUS
class bogus
{
protected:
int data;
public:
bogus();
int get_double(int value);
};
#endif
$
$ cat src/bogus.cpp
#include "bogus.hpp"
bogus::bogus() : data(0)
{
}
int bogus::get_double(int value)
{
data = value * 2;
return data;
}
$ cat bogus.pyx
# distutils: language = c++
# distutils: sources = src/bogus.cpp
# cython: c_string_type=str, c_string_encoding=ascii
cdef extern from 'bogus.hpp':
cdef cppclass bogus:
bogus() except +
int get_double(int value)
cdef class Bogus:
cdef bogus b
def get_double(self, int value):
return self.b.get_double(value)
</code></pre>
<p>With the following <code>setup.py</code> file, I can confirm that the library installs correctly with <code>python setup.py install</code> and that it works correctly.</p>
<pre><code>from setuptools import setup, Extension
import glob
headers = list(glob.glob('inc/*.hpp'))
bogus = Extension(
'bogus',
sources=['bogus.pyx', 'src/bogus.cpp'],
include_dirs=['inc/'],
language='c++',
extra_compile_args=['--std=c++11', '-Wno-unused-function'],
extra_link_args=['--std=c++11'],
)
setup(
name='bogus',
description='Troubleshooting Python packaging and distribution',
author='Daniel Standage',
ext_modules=[bogus],
install_requires=['cython'],
version='0.1.0'
)
</code></pre>
<p>However, when I build a source distribution using <code>python setup.py sdist build</code>, the C++ header files are not included and the C++ extension cannot be compiled.</p>
<p><strong>How can I make sure the C++ header files get bundled with the source distribution?!?!</strong></p>
<p><rant></p>
<p>Troubleshooting this has uncovered a tremendously convoluted and inconsistent mess of documentation, suggestions, and hacks, none of which have worked for me. Put a <code>graft</code> line in <code>MANIFEST.in</code>? Nope. The <code>package_data</code> or <code>data_files</code> options? Nope. Python packaging seems to have improved a lot in the last few years, but it is still nigh impenetrable for those of us that don't live and breathe Python packaging!</p>
<p></rant></p>
| 4 |
2016-09-22T18:13:08Z
| 39,735,501 |
<h2>Short answer</h2>
<p>Put <code>include inc/*.hpp</code> in the <code>MANIFEST.in</code> file.</p>
<h2>Long answer</h2>
<p>Based on various blog posts and SO threads, I had tried the suggestion of declaring the files in a <code>MANIFEST.in</code> file. Following <a href="https://docs.python.org/2/distutils/sourcedist.html" rel="nofollow">these instructions</a>, I added a <code>graft inc/</code> line to <code>MANIFEST.in</code> to include the entire directory. This did not work.</p>
<p>However, replacing this line with <code>include inc/*.hpp</code> did work. Arguably this should have been the first thing I tried, but being unfamiliar with the intricacies and warts of setuptools and distutils, I had no reason to expect that <code>graft</code> wouldn't work.</p>
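<p>A quick way to double-check that the header actually lands in the sdist (a small sketch; the tarball name is assumed from the <code>name</code>/<code>version</code> in <code>setup.py</code>):</p>
<pre><code>import tarfile

with tarfile.open('dist/bogus-0.1.0.tar.gz') as tar:
    headers = [name for name in tar.getnames() if name.endswith('.hpp')]
print(headers)   # expect something like ['bogus-0.1.0/inc/bogus.hpp']
</code></pre>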
| 1 |
2016-09-27T22:47:31Z
|
[
"python",
"packaging",
"python-extensions",
"python-packaging"
] |
Looking up Pandas dataFrame
| 39,646,134 |
<p>I have created a data frame by reading a text file. I am interested in knowing if a few values exist in a particular column and, if they do, I want to print the entire row. </p>
<p>This is my input file(analyte_map.txt):</p>
<pre><code>Analyte_id mass Intensity
A34579 101.2 786788
B12345 99.2 878787
B943470 103.89 986443
C12345 11.2 101
</code></pre>
<p>This is my code: </p>
<pre><code>import pandas as pd
map_file="analyte_map.txt"
array=['A34579','B943470','D583730']
analyte_df=pd.read_table(map_file,sep="\t")
for value in array:
if analyte_df.lookup([value],['Analyte_id']):
print '%s\t%s'%(analyte_df['mass'],analyte_df['Intensity'])
</code></pre>
| 1 |
2016-09-22T18:15:12Z
| 39,646,164 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html"><code>isin</code></a>:</p>
<pre><code>array=['A34579','B943470','D583730']
print (df[df.analyte_id.isin(array)])
analyte_id mass Intensity
0 A34579 101.20 786788
2 B943470 103.89 986443
</code></pre>
<p>Also, if you need only some columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html"><code>ix</code></a>:</p>
<pre><code>array=['A34579','B943470','D583730']
print (df.ix[df.analyte_id.isin(array), ['mass','Intensity']])
mass Intensity
0 101.20 786788
2 103.89 986443
</code></pre>
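<p>Tying this back to the original script, a sketch of the full flow (assuming the tab-separated <code>analyte_map.txt</code> from the question) could look like:</p>
<pre><code>import pandas as pd

analyte_df = pd.read_table('analyte_map.txt', sep='\t')
wanted = ['A34579', 'B943470', 'D583730']

hits = analyte_df[analyte_df.Analyte_id.isin(wanted)]
for _, row in hits.iterrows():
    print('%s\t%s' % (row['mass'], row['Intensity']))
</code></pre>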
| 5 |
2016-09-22T18:17:01Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Looking up Pandas dataFrame
| 39,646,134 |
<p>I have created a data frame by reading a text file. I am interested in knowing if a few values exist in a particular column and, if they do, I want to print the entire row. </p>
<p>This is my input file(analyte_map.txt):</p>
<pre><code>Analyte_id mass Intensity
A34579 101.2 786788
B12345 99.2 878787
B943470 103.89 986443
C12345 11.2 101
</code></pre>
<p>This is my code: </p>
<pre><code>import pandas as pd
map_file="analyte_map.txt"
array=['A34579','B943470','D583730']
analyte_df=pd.read_table(map_file,sep="\t")
for value in array:
if analyte_df.lookup([value],['Analyte_id']):
print '%s\t%s'%(analyte_df['mass'],analyte_df['Intensity'])
</code></pre>
| 1 |
2016-09-22T18:15:12Z
| 39,646,265 |
<p>using <code>.query()</code> method:</p>
<pre><code>In [9]: look_up=['A34579','B943470','D583730']
In [10]: df.query('Analyte_id in @look_up')
Out[10]:
Analyte_id mass Intensity
0 A34579 101.20 786788
2 B943470 103.89 986443
In [11]: df.query('Analyte_id in @look_up')[['mass','Intensity']]
Out[11]:
mass Intensity
0 101.20 786788
2 103.89 986443
</code></pre>
| 2 |
2016-09-22T18:22:08Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Python 2.7 BeautifulSoup, website addresses scraping
| 39,646,144 |
<p>Hope you are all well. I'm new to Python and using Python 2.7. </p>
<p>I'm trying to extract only the websites from this public website business directory: <a href="https://www.dmcc.ae/business-directory" rel="nofollow">https://www.dmcc.ae/business-directory</a><br>
The websites I'm looking for are the ones mentioned in every widget.
This directory does not have an API, unfortunately.<br>
I'm using BeautifulSoup, but with no success so far.<br>
Here is my code: </p>
<pre><code>import urllib
from bs4 import BeautifulSoup
website = raw_input("Type Website:>\n")
html = urllib.urlopen('https://'+ website).read()
soup = BeautifulSoup(html)
tags = soup('a')
for tag in tags:
print tag.get('href', None)
</code></pre>
<p>What I get is just the site's own links, like <a href="http://portal.dmcc.ae" rel="nofollow">http://portal.dmcc.ae</a>, along with other hrefs, rather than the websites in the widgets. I also tried replacing soup('a') with soup('class'), but no luck!
Can anybody help me please? </p>
| 1 |
2016-09-22T18:15:47Z
| 39,646,378 |
<p>As somebody has written in the comments:</p>
<p>The web page is built on the client side using JavaScript. What your Python script gets is only a (relatively) short HTML page that loads other scripts to fetch the actual data. Unfortunately, BeautifulSoup can't execute the page's scripts to let it load all of its resources before parsing it.</p>
<p>You should use another technology that can actually run the site and then parse it. I know about <a href="http://www.seleniumhq.org" rel="nofollow">Selenium</a>, but I have no hands-on experience with it.</p>
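<p>For what it's worth, a rough, untested sketch of that approach could look like the following. It assumes Selenium plus a matching browser driver are installed, and it uses a crude sleep to give the page's JavaScript time to run:</p>
<pre><code>import time
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox()   # or webdriver.Chrome(); needs the matching driver binary
driver.get('https://www.dmcc.ae/business-directory')
time.sleep(10)                 # wait so the scripts can populate the widgets
html = driver.page_source      # HTML *after* the JavaScript has run
driver.quit()

soup = BeautifulSoup(html)
for tag in soup('a'):
    print tag.get('href', None)
</code></pre>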
| -1 |
2016-09-22T18:28:27Z
|
[
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
Python 2.7 BeautifulSoup, website addresses scraping
| 39,646,144 |
<p>Hope you are all well. I'm new to Python and using Python 2.7. </p>
<p>I'm trying to extract only the websites from this public website business directory: <a href="https://www.dmcc.ae/business-directory" rel="nofollow">https://www.dmcc.ae/business-directory</a><br>
The websites I'm looking for are the ones mentioned in every widget.
This directory does not have an API, unfortunately.<br>
I'm using BeautifulSoup, but with no success so far.<br>
Here is my code: </p>
<pre><code>import urllib
from bs4 import BeautifulSoup
website = raw_input("Type Website:>\n")
html = urllib.urlopen('https://'+ website).read()
soup = BeautifulSoup(html)
tags = soup('a')
for tag in tags:
print tag.get('href', None)
</code></pre>
<p>What I get is just the site's own links, like <a href="http://portal.dmcc.ae" rel="nofollow">http://portal.dmcc.ae</a>, along with other hrefs, rather than the websites in the widgets. I also tried replacing soup('a') with soup('class'), but no luck!
Can anybody help me please? </p>
| 1 |
2016-09-22T18:15:47Z
| 39,646,593 |
<p>The data is dynamically generated using jQuery through an AJAX request; you can do a GET request to that URL to get the dynamically loaded data:</p>
<pre><code>from bs4 import BeautifulSoup
from requests import Session
from time import time
data = {
"page_num": "1", # set it to whatever page you like
"query_type": "activities",
"_": str(int(time()))}
js_url = "https://dmcc.secure.force.com/services/apexrest/DMCC_BusinessDirectory_API_1/get"
with Session() as s:
soup = BeautifulSoup(s.get("https://www.dmcc.ae/business-directory").content, "html.parser")
r = s.get(js_url, params=data)
data = r.json()
</code></pre>
<p>Which will give you:</p>
<pre><code>{u'success': True, u'requestURI': u'/DMCC_BusinessDirectory_API_1/get', u'params': [u'DMCC_BusinessDirectory_API_1', u'get', u' ', u' '], u'message': u'Getting all activities.', u'sObjects': [{u'Account__c': u'001b000000MV4LaAAL', u'Building__c': u'55.1450717', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XPVOEA4', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XPVOEA4', u'Function_Type_Class__c': u'Office', u'Name': u'PL-032972'}, u'Operating_Name__c': u'001b000000MV4LaAAL', u'Longitude__c': u'55.1450717', u'License_Address_for_Business_Directroy__c': u'Unit No: 3006-002<br>Mazaya Business Avenue BB1<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XPVOEA4', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a03b0000006h16cAAA', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV4LaAAL', u'type': u'Account'}, u'Id': u'001b000000MV4LaAAL', u'Name': u'1 ON 1 HR CONSULTING DMCC'}, u'License_Address__c': u'Unit No: 3006-002<br>Mazaya Business Avenue BB1<br>Plot No: JLTE-PH2-BB1<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'Id': u'a03b0000006h16cAAA', u'Latitude__c': u'25.06828081', u'Account__r': {u'Company_Official_Email_Address__c': u'resume@1on1hrconsulting.com', u'Monday_To__c': u'18:00', u'Tuesday_To__c': u'18:00', u'Thursday_To__c': u'18:00', u'Id': u'001b000000MV4LaAAL', u'Operating_Time_from_regular__c': u'08:00', u'Name': u'1 ON 1 HR CONSULTING DMCC', u'Tuesday_From__c': u'08:00', u'LinkedIn_URL__c': u'https://www.linkedin.com/company/1on1-hr-consulting', u'Saturday_From__c': u'Closed', u'Phone_BD__c': u'+97144470173', u'Facebook_Link__c': u'https://www.facebook.com/duabiinterviewandresumecoaching', u'Wednesday_To__c': u'18:00', u'Operating_Time_to_regular__c': u'18:00', u'Friday_To__c': u'Closed', u'Friday_From__c': u'Closed', u'Monday_From__c': u'08:00', u'Saturday_To__c': u'Closed', u'Company_Website_Address__c': u'www.1on1hrconsulting.com', u'Wednesday_From__c': u'08:00', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV4LaAAL', u'type': u'Account'}, u'Thursday_From__c': u'08:00', u'Publishing_agreement_for_BD__c': u'Publish all details in DMCC online/printed content'}}, {u'Account__c': u'001b000000MV2s2AAD', u'Building__c': u'Gold Tower (AU)', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XHiKEAW', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XHiKEAW', u'Function_Type_Class__c': u'Office', u'Name': u'PL-005466'}, u'Operating_Name__c': u'001b000000MV2s2AAD', u'Longitude__c': u'55.14324963', u'License_Address_for_Business_Directroy__c': u'Unit No: AU-23-J<br>Gold Tower (AU)<br>Jumeirah Lakes Towers<br>Dubai<br>United Arab Emirates', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XHiKEAW', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a03b0000006h1DaAAI', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV2s2AAD', u'type': u'Account'}, u'Id': u'001b000000MV2s2AAD', u'Name': u'1 PLATINUM CONCIERGE MANAGEMENT DMCC'}, u'License_Address__c': u'Unit No: AU-23-J<br>Gold Tower (AU)<br>Plot No: JLT-PH1-I3A<br>Jumeirah Lakes Towers<br>Dubai<br>United Arab Emirates', u'Id': u'a03b0000006h1DaAAI', u'Latitude__c': u'25.06937097', 
u'Account__r': {u'Saturday_To__c': u'Closed', u'Tuesday_From__c': u'09:00', u'Friday_From__c': u'Closed', u'Phone_BD__c': u'+971505687478', u'Monday_From__c': u'09:00', u'Monday_To__c': u'21:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content', u'Company_Official_Email_Address__c': u'melvin@1boxoffice.ae', u'Company_Website_Address__c': u'www.1platinumconcierge.com', u'Operating_Time_to_regular__c': u'21:00', u'Friday_To__c': u'Closed', u'Tuesday_To__c': u'21:00', u'Wednesday_To__c': u'21:00', u'Wednesday_From__c': u'09:00', u'Thursday_To__c': u'21:00', u'Saturday_From__c': u'Closed', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV2s2AAD', u'type': u'Account'}, u'Thursday_From__c': u'09:00', u'Id': u'001b000000MV2s2AAD', u'Operating_Time_from_regular__c': u'09:00', u'Name': u'1 PLATINUM CONCIERGE MANAGEMENT DMCC'}}, {u'Account__c': u'0011000000jki9iAAA', u'Building__c': u'Tiffany Towers', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XL4ZEAW', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XL4ZEAW', u'Function_Type_Class__c': u'Office', u'Name': u'PL-015938'}, u'Operating_Name__c': u'0011000000jki9iAAA', u'Longitude__c': u'55.14960835', u'License_Address_for_Business_Directroy__c': u'Unit No: 1906<br>Tiffany Towers<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XL4ZEAW', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000LBSCcAAP', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000jki9iAAA', u'type': u'Account'}, u'Id': u'0011000000jki9iAAA', u'Name': u'1000HEADS CONSULTING DMCC'}, u'License_Address__c': u'Unit No: 1906<br>Tiffany Towers<br>Plot No: JLT-PH2-W2A<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'Id': u'a031000000LBSCcAAP', u'Latitude__c': u'25.07726334', u'Account__r': {u'Company_Official_Email_Address__c': u'dubai@1000heads.com', u'Monday_To__c': u'18:00', u'Tuesday_To__c': u'18:00', u'Thursday_To__c': u'18:00', u'Id': u'0011000000jki9iAAA', u'Operating_Time_from_regular__c': u'09:00', u'Name': u'1000HEADS CONSULTING DMCC', u'Tuesday_From__c': u'09:00', u'Twitter_Name__c': u'@1000heads', u'Saturday_From__c': u'Closed', u'Phone_BD__c': u'+97143641221', u'Friday_To__c': u'Closed', u'Wednesday_To__c': u'18:00', u'Operating_Time_to_regular__c': u'18:00', u'Friday_From__c': u'Closed', u'Monday_From__c': u'09:00', u'Saturday_To__c': u'Closed', u'Company_Website_Address__c': u'www.1000heads.com', u'Wednesday_From__c': u'09:00', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000jki9iAAA', u'type': u'Account'}, u'Thursday_From__c': u'09:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content'}}, {u'Account__c': u'0011000000jkjRyAAI', u'Building__c': u'Platinum Tower', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XKGtEAO', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XKGtEAO', u'Function_Type_Class__c': u'Retail', u'Name': u'PL-012858'}, u'Operating_Name__c': u'0011000000jkjRyAAI', u'Longitude__c': u'55.14244634', u'License_Address_for_Business_Directroy__c': u'Unit No: G07<br>Platinum Tower<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XKGtEAO', 
u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000Li7cUAAR', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000jkjRyAAI', u'type': u'Account'}, u'Id': u'0011000000jkjRyAAI', u'Name': u'101 PARATHAS DMCC'}, u'License_Address__c': u'Unit No: G07<br>Platinum Tower<br>Plot No: JLT-PH1-I2<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'Id': u'a031000000Li7cUAAR', u'Latitude__c': u'25.06927734', u'Account__r': {u'Company_Official_Email_Address__c': u'pankajpathak@hotmail.com', u'Monday_To__c': u'23:00', u'Tuesday_To__c': u'23:00', u'Thursday_To__c': u'23:00', u'Id': u'0011000000jkjRyAAI', u'Operating_Time_from_regular__c': u'09:00', u'Name': u'101 PARATHAS DMCC', u'Tuesday_From__c': u'09:00', u'Twitter_Name__c': u'www.twitter.com/101parathas', u'Saturday_From__c': u'09:00', u'Phone_BD__c': u'+97144249950', u'Facebook_Link__c': u'www.facebook.com/101parathas', u'Wednesday_To__c': u'23:00', u'Operating_Time_to_regular__c': u'23:00', u'Friday_To__c': u'23:00', u'Friday_From__c': u'09:00', u'Monday_From__c': u'09:00', u'Saturday_To__c': u'23:00', u'Company_Website_Address__c': u'www.101parathas.com', u'Wednesday_From__c': u'09:00', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000jkjRyAAI', u'type': u'Account'}, u'Thursday_From__c': u'09:00', u'Publishing_agreement_for_BD__c': u'Publish all details in DMCC online/printed content'}}, {u'Account__c': u'0011000000kWyCsAAK', u'License_Status__c': u'Active', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XHXzEAO', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XHXzEAO', u'Function_Type_Class__c': u'Flexi Desk', u'Name': u'PL-004825'}, u'Account__r': {u'Saturday_To__c': u'Closed', u'Tuesday_From__c': u'09:00', u'Friday_From__c': u'Closed', u'Phone_BD__c': u'+971529859280', u'Monday_From__c': u'09:00', u'Monday_To__c': u'18:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content', u'Company_Official_Email_Address__c': u'csaplala@lkmbgroup.com', u'Company_Website_Address__c': u'www', u'Operating_Time_to_regular__c': u'18:00', u'Friday_To__c': u'Closed', u'Tuesday_To__c': u'18:00', u'Wednesday_To__c': u'18:00', u'Wednesday_From__c': u'09:00', u'Thursday_To__c': u'18:00', u'Saturday_From__c': u'Closed', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000kWyCsAAK', u'type': u'Account'}, u'Thursday_From__c': u'09:00', u'Id': u'0011000000kWyCsAAK', u'Operating_Time_from_regular__c': u'09:00', u'Name': u'1682 CONSULTING DMCC'}, u'Building__c': u'55.13646334', u'Property_Location__c': u'a1G10000000XHXzEAO', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000N2n8DAAR', u'type': u'License__c'}, u'License_Address__c': u'Unit No: 3O-01-1057<br>Jewellery &amp; Gemplex 3<br>Plot No: DMCC-PH2-J&amp;GPlexS<br>Jewellery &amp; Gemplex<br>Dubai<br>United Arab Emirates', u'Id': u'a031000000N2n8DAAR'}, {u'Account__c': u'0011000000riesbAAA', u'Building__c': u'55.13646334', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XHGNEA4', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XHGNEA4', u'Function_Type_Class__c': u'Flexi Desk', u'Name': u'PL-003733'}, u'Operating_Name__c': u'00110000012j7wXAAQ', u'Account__r': {u'Saturday_To__c': u'Closed', u'Tuesday_From__c': u'09:00', u'Friday_From__c': u'Closed', u'Phone_BD__c': 
u'+971502473000', u'Monday_From__c': u'09:00', u'Monday_To__c': u'13:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content', u'Company_Official_Email_Address__c': u'thaer@1765hospitlity.com', u'Company_Website_Address__c': u'www.1765hospitality.com', u'Operating_Time_to_regular__c': u'13:00', u'Friday_To__c': u'Closed', u'Tuesday_To__c': u'13:00', u'Wednesday_To__c': u'13:00', u'Wednesday_From__c': u'09:00', u'Thursday_To__c': u'13:00', u'Saturday_From__c': u'Closed', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000riesbAAA', u'type': u'Account'}, u'Thursday_From__c': u'09:00', u'Id': u'0011000000riesbAAA', u'Operating_Time_from_regular__c': u'09:00', u'Name': u'1765 HOSPITALITY DMCC'}, u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XHGNEA4', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000TaAN9AAN', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/00110000012j7wXAAQ', u'type': u'Account'}, u'Id': u'00110000012j7wXAAQ', u'Name': u'SPIKY HOUSE OF CHICKEN'}, u'License_Address__c': u'Unit No: 3O-01-398<br>Jewellery &amp; Gemplex 3<br>Plot No: DMCC-PH2-J&amp;GPlexS<br>Jewellery &amp; Gemplex<br>Dubai<br>United Arab Emirates', u'Id': u'a031000000TaAN9AAN'}, {u'Account__c': u'0011000000xj8H3AAI', u'Building__c': u'55.15331119', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000001YJ39EAG', u'type': u'Property_Location__c'}, u'Id': u'a1G10000001YJ39EAG', u'Function_Type_Class__c': u'Office', u'Name': u'PL-180144'}, u'Longitude__c': u'55.15331119', u'License_Address_for_Business_Directroy__c': u'Unit No: 2703-B<br>Jumeirah Bay Tower X3<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000001YJ39EAG', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000YZaeMAAT', u'type': u'License__c'}, u'License_Address__c': u'Unit No: 2703-B<br>Jumeirah Bay Tower X3<br>Plot No: JLT-PH2-X3A<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'Id': u'a031000000YZaeMAAT', u'Latitude__c': u'25.08018592', u'Account__r': {u'Company_Official_Email_Address__c': u'lobo.rosario@alwadiholding.com', u'Monday_To__c': u'20:00', u'Tuesday_To__c': u'20:00', u'Thursday_To__c': u'20:00', u'Operating_Time_from_regular__c': u'07:00', u'Name': u'1851 LAUNDRY AND DRY CLEANING SERVICES DMCC', u'Website': u'www1851laundries.com', u'Tuesday_From__c': u'07:00', u'Saturday_From__c': u'07:00', u'Phone_BD__c': u'+971508669551', u'Friday_To__c': u'20:00', u'Wednesday_To__c': u'20:00', u'Operating_Time_to_regular__c': u'20:00', u'Saturday_To__c': u'20:00', u'Friday_From__c': u'14:00', u'Monday_From__c': u'07:00', u'Publishing_agreement_for_BD__c': u'Publish all details in DMCC online/printed content', u'Company_Website_Address__c': u'www.1851laundries.com', u'Id': u'0011000000xj8H3AAI', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000xj8H3AAI', u'type': u'Account'}, u'Thursday_From__c': u'07:00', u'Wednesday_From__c': u'07:00'}}, {u'Account__c': u'0011000000rhvGWAAY', u'License_Status__c': u'Active', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XHwFEAW', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XHwFEAW', u'Function_Type_Class__c': u'Flexi Desk', u'Name': u'PL-006329'}, u'Account__r': {u'Saturday_To__c': u'20:00', 
u'Tuesday_From__c': u'08:00', u'Friday_From__c': u'08:00', u'Phone_BD__c': u'+971562160241', u'Monday_From__c': u'08:00', u'Monday_To__c': u'20:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content', u'Company_Official_Email_Address__c': u'l.olivari@gmail.com', u'Company_Website_Address__c': u'N.A.', u'Operating_Time_to_regular__c': u'20:00', u'Friday_To__c': u'20:00', u'Tuesday_To__c': u'20:00', u'Wednesday_To__c': u'20:00', u'Wednesday_From__c': u'08:00', u'Thursday_To__c': u'20:00', u'Saturday_From__c': u'08:00', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/0011000000rhvGWAAY', u'type': u'Account'}, u'Thursday_From__c': u'08:00', u'Id': u'0011000000rhvGWAAY', u'Operating_Time_from_regular__c': u'08:00', u'Name': u'19 FAMILY & BUSINESS DMCC'}, u'Building__c': u'55.13646334', u'Property_Location__c': u'a1G10000000XHwFEAW', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a031000000RhyojAAB', u'type': u'License__c'}, u'License_Address__c': u'Unit No: 3O-01-654<br>Jewellery &amp; Gemplex 3<br>Plot No: DMCC-PH2-J&amp;GPlexS<br>Jewellery &amp; Gemplex<br>Dubai<br>United Arab Emirates', u'Id': u'a031000000RhyojAAB'}, {u'Account__c': u'001b000000MV2kHAAT', u'Building__c': u'Tiffany Towers', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XJi2EAG', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XJi2EAG', u'Function_Type_Class__c': u'Office', u'Name': u'PL-010697'}, u'Operating_Name__c': u'001b000000MV2kHAAT', u'Longitude__c': u'55.14960835', u'License_Address_for_Business_Directroy__c': u'Unit No: 403<br>Tiffany Towers<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XJi2EAG', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a03b0000006h21sAAA', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV2kHAAT', u'type': u'Account'}, u'Id': u'001b000000MV2kHAAT', u'Name': u'21ST CENTURY GROUP HOLDINGS LIMITED (BRANCH)'}, u'License_Address__c': u'Unit No: 403<br>Tiffany Towers<br>Plot No: JLT-PH2-W2A<br>Jumeirah Lakes Towers<br>Dubai<br>UAE', u'Id': u'a03b0000006h21sAAA', u'Latitude__c': u'25.07726334', u'Account__r': {u'Saturday_To__c': u'Closed', u'Tuesday_From__c': u'08:00', u'Friday_From__c': u'Closed', u'Phone_BD__c': u'+97144269021', u'Monday_From__c': u'08:00', u'Monday_To__c': u'17:00', u'Publishing_agreement_for_BD__c': u'Publish all details in DMCC online/printed content', u'Company_Official_Email_Address__c': u'bhavani@areefinvestments.com', u'Company_Website_Address__c': u'www.areefinvestments.com', u'Operating_Time_to_regular__c': u'17:00', u'Friday_To__c': u'Closed', u'Tuesday_To__c': u'17:00', u'Wednesday_To__c': u'17:00', u'Wednesday_From__c': u'08:00', u'Thursday_To__c': u'17:00', u'Saturday_From__c': u'Closed', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV2kHAAT', u'type': u'Account'}, u'Thursday_From__c': u'08:00', u'Id': u'001b000000MV2kHAAT', u'Operating_Time_from_regular__c': u'08:00', u'Name': u'21ST CENTURY GROUP HOLDINGS LIMITED (BRANCH)'}}, {u'Account__c': u'001b000000MV3NPAA1', u'Building__c': u'55.13603472', u'Property_Location__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Property_Location__c/a1G10000000XHF4EAO', u'type': u'Property_Location__c'}, u'Id': u'a1G10000000XHF4EAO', u'Function_Type_Class__c': u'Flexi Desk', 
u'Name': u'PL-003652'}, u'Operating_Name__c': u'001b000000MV3NPAA1', u'Account__r': {u'Saturday_To__c': u'Closed', u'Tuesday_From__c': u'10:00', u'Friday_From__c': u'Closed', u'Phone_BD__c': u'+971556099922', u'Monday_From__c': u'10:00', u'Monday_To__c': u'16:00', u'Publishing_agreement_for_BD__c': u'Publish only name and address in DMCC online/printed content', u'Company_Official_Email_Address__c': u'sameh@237communications.com', u'Company_Website_Address__c': u'237communications.com', u'Operating_Time_to_regular__c': u'16:00', u'Friday_To__c': u'Closed', u'Tuesday_To__c': u'16:00', u'Wednesday_To__c': u'16:00', u'Wednesday_From__c': u'10:00', u'Thursday_To__c': u'16:00', u'Saturday_From__c': u'Closed', u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV3NPAA1', u'type': u'Account'}, u'Thursday_From__c': u'10:00', u'Id': u'001b000000MV3NPAA1', u'Operating_Time_from_regular__c': u'10:00', u'Name': u'237 COMMUNICATIONS DMCC'}, u'License_Status__c': u'Active', u'Property_Location__c': u'a1G10000000XHF4EAO', u'attributes': {u'url': u'/services/data/v37.0/sobjects/License__c/a03b0000006h16xAAA', u'type': u'License__c'}, u'Operating_Name__r': {u'attributes': {u'url': u'/services/data/v37.0/sobjects/Account/001b000000MV3NPAA1', u'type': u'Account'}, u'Id': u'001b000000MV3NPAA1', u'Name': u'237 COMMUNICATIONS DMCC'}, u'License_Address__c': u'Unit No: 2H-05-412<br>Jewellery &amp; Gemplex 2<br>Plot No: DMCC-PH2-J&amp;GPlexS<br>Jewellery &amp; Gemplex<br>DUBAI<br>United Arab Emirates', u'Id': u'a03b0000006h16xAAA'}], u'result_count': 0}
</code></pre>
<p>What you want is in <code>data[u'sObjects']</code>, to get the company websites:</p>
<pre><code>for d in data['sObjects']:
if 'Company_Website_Address__c' in d['Account__r']:
print(d['Account__r']['Company_Website_Address__c'])
</code></pre>
<p>Which gives you:</p>
<pre><code>www.1on1hrconsulting.com
www.1platinumconcierge.com
www.1000heads.com
www.101parathas.com
www
www.1765hospitality.com
www.1851laundries.com
N.A.
www.areefinvestments.com
237communications.com
</code></pre>
<p>You can see some companies don't have a website listed; you will have to decide what to do with those. If you just print <code>d[u'Account__r']</code> you can see all the info for each company. You should also be aware that it is an internal API, so make sure you are not violating their terms of service by scraping their site, although they should probably implement their <code>authToken</code> logic a bit more stringently to prevent calls to the API if they don't want to be scraped so easily. You can see the token in Chrome dev tools when you make a request, but it is not required.</p>
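<p>If you need more than the first page, the same request can be repeated with a different <code>page_num</code>. A rough sketch reusing <code>data</code>, <code>js_url</code> and the imports from above (untested beyond the first page):</p>
<pre><code>all_sites = []
with Session() as s:
    s.get("https://www.dmcc.ae/business-directory")   # warm up the session / cookies
    for page in range(1, 4):                          # first three pages as an example
        data["page_num"] = str(page)
        data["_"] = str(int(time()))
        resp = s.get(js_url, params=data).json()
        for d in resp.get("sObjects", []):
            site = d["Account__r"].get("Company_Website_Address__c")
            if site:
                all_sites.append(site)
print(all_sites)
</code></pre>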
| 2 |
2016-09-22T18:40:40Z
|
[
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
Why does logging not work when running a Flask app with werkzeug?
| 39,646,236 |
<p>So here is a copy paste example that reproduces the problem.</p>
<pre><code>import logging
from flask import Flask
from werkzeug.serving import run_simple
from werkzeug.wsgi import DispatcherMiddleware
def app_builder(app_name, log_file):
app = Flask(app_name)
app.debug = True
handler = logging.FileHandler(log_file)
handler.setLevel(logging.DEBUG)
app.logger.addHandler(handler)
return app
def _simple(env, resp):
resp(b'200 OK', [(b'Content-Type', b'text/plain')])
return [b'root']
if __name__ == "__main__":
app = app_builder(app_name='app', log_file='app.log')
@app.route('/')
def index():
return '<a href="/app/error">click for error</a>'
@app.route('/error')
def error():
1/0
return 'error page'
app2 = app_builder(app_name='app2', log_file='app2.log')
@app2.route('/')
def index():
return 'you are getting responses from app2'
app.debug = True
app2.debug = True
application = DispatcherMiddleware(_simple, {
'/app': app,
'/app2': app2
})
run_simple(hostname='localhost',
port=5000,
application=application,
use_reloader=True,
use_debugger=True)
</code></pre>
<p>To make an error show up navigate to <code>http://localhost:5000/app/error</code>, I want to know why the stack trace doesn't show up in the <code>app.log</code> file. I assume that the <code>DispatcherMiddleware</code> or <code>run_simple</code> are somehow catching the exception before it can be logged. If I run only the <code>app</code> instance using <code>app.run()</code> the error logging works fine.</p>
| 4 |
2016-09-22T18:20:56Z
| 39,701,542 |
<p>I found a <a href="https://gist.github.com/ibeex/3257877" rel="nofollow">gist</a> which talks about logging in flask. The comment by andyxning (commented on Apr 18, 2015) mentions this - <code>if app.debug is True then all log level above DEBUG will be logged to stderr(StreamHandler)</code>. </p>
<p>The comment also has a link to the source code of <a href="https://github.com/pallets/flask/blob/master/flask/logging.py" rel="nofollow">flask/logging.py</a>. The <code>create_logger</code> method creates an instance of <code>DebugHandler</code> which inherits from <code>StreamHandler</code> class.</p>
<p>If you print <code>app.logger.handlers</code> you can see that it has an object of <code>flask.logging.DebugHandler</code>. </p>
<pre><code>print app.logger.handlers
[<flask.logging.DebugHandler object at 0x110315090>]
</code></pre>
<p>This DebugHandler is probably used when <code>app.debug is set to true</code> and hence the stack trace gets printed on the console.</p>
<p>Hope this is what you are looking for.</p>
| 1 |
2016-09-26T11:23:56Z
|
[
"python",
"flask",
"wsgi",
"werkzeug"
] |
Why does logging not work when running a Flask app with werkzeug?
| 39,646,236 |
<p>So here is a copy paste example that reproduces the problem.</p>
<pre><code>import logging
from flask import Flask
from werkzeug.serving import run_simple
from werkzeug.wsgi import DispatcherMiddleware
def app_builder(app_name, log_file):
app = Flask(app_name)
app.debug = True
handler = logging.FileHandler(log_file)
handler.setLevel(logging.DEBUG)
app.logger.addHandler(handler)
return app
def _simple(env, resp):
resp(b'200 OK', [(b'Content-Type', b'text/plain')])
return [b'root']
if __name__ == "__main__":
app = app_builder(app_name='app', log_file='app.log')
@app.route('/')
def index():
return '<a href="/app/error">click for error</a>'
@app.route('/error')
def error():
1/0
return 'error page'
app2 = app_builder(app_name='app2', log_file='app2.log')
@app2.route('/')
def index():
return 'you are getting responses from app2'
app.debug = True
app2.debug = True
application = DispatcherMiddleware(_simple, {
'/app': app,
'/app2': app2
})
run_simple(hostname='localhost',
port=5000,
application=application,
use_reloader=True,
use_debugger=True)
</code></pre>
<p>To make an error show up navigate to <code>http://localhost:5000/app/error</code>, I want to know why the stack trace doesn't show up in the <code>app.log</code> file. I assume that the <code>DispatcherMiddleware</code> or <code>run_simple</code> are somehow catching the exception before it can be logged. If I run only the <code>app</code> instance using <code>app.run()</code> the error logging works fine.</p>
| 4 |
2016-09-22T18:20:56Z
| 39,701,662 |
<p>The normal exception handler is not called when <code>app.debug = True</code>. Looking
in the code of <code>app.py</code> in Flask:</p>
<pre><code>def log_exception(self, exc_info):
"""Logs an exception. This is called by :meth:`handle_exception`
if debugging is disabled and right before the handler is called.
^^^^^^^^^^^^^^^^^^^^^^^^
The default implementation logs the exception as error on the
:attr:`logger`.
</code></pre>
<p>Indeed, when setting <code>app.debug = True</code> the exceptions propagation is set
to True explicitly, which prevents <code>log_exception</code> from being called. Here is an excerpt from the documentation (emphasis is mine):</p>
<blockquote>
<p>PROPAGATE_EXCEPTIONS: explicitly enable or disable the propagation of exceptions. If not set or explicitly set to None this is <strong>implicitly true</strong> if either TESTING or <strong>DEBUG is true</strong>.</p>
</blockquote>
<p>So, I managed to get both werkzeug debugging and logging working happily
together with a little tweak and the following code:</p>
<pre><code>import logging
from flask import Flask
from werkzeug.serving import run_simple
from werkzeug.wsgi import DispatcherMiddleware
## NEW CODE HERE
import functools
from flask._compat import reraise
def my_log_exception(exc_info, original_log_exception=None):
original_log_exception(exc_info)
exc_type, exc, tb = exc_info
# re-raise for werkzeug
reraise(exc_type, exc, tb)
##
def app_builder(app_name, log_file):
app = Flask(app_name)
app.debug = True
app.config.update(PROPAGATE_EXCEPTIONS=False)
handler = logging.FileHandler(log_file)
handler.setLevel(logging.DEBUG)
app.logger.addHandler(handler)
## NEW CODE
app.log_exception = functools.partial(my_log_exception, original_log_exception=app.log_exception)
##
return app
# rest of your code is unchanged
</code></pre>
| 1 |
2016-09-26T11:30:05Z
|
[
"python",
"flask",
"wsgi",
"werkzeug"
] |
Adding column from one CSV file to another CSV file
| 39,646,296 |
<p>I have 2 CSV files that I need to merge together based on a key I created (the purpose was to mask IDs, then join the IDs on the key later). I can do this in SSIS, but I'm getting an error running the batch script from my Python script (something to do with SSIS not running packages outside SSIS; working with the software team to fix it), so in the meantime I would like to just have it working for a demo.</p>
<p>Is this possible in Python?</p>
<pre><code>File 1:
input_id multiple columns --->
1
2
3
File 2:
input_id ID
1 1234
2 1235
3 1236
output:
input_id multiple columns ---> ID
1 1234
2 1235
3 1236
</code></pre>
| -2 |
2016-09-22T18:24:14Z
| 39,649,263 |
<pre><code>import pandas as pd

a = pd.read_csv("import.csv")       # file 1: input_id plus the other columns
b = pd.read_csv("entity_ids.csv")   # file 2: input_id -> masked ID mapping
merge = a.merge(b, how='left', on='input_id')   # left join keeps every row of file 1
merge.to_csv("test2.csv", index=False)
</code></pre>
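<p>For example, with the sample rows from the question (the extra column name is made up):</p>
<pre><code>import pandas as pd

a = pd.DataFrame({'input_id': [1, 2, 3], 'some_col': ['x', 'y', 'z']})   # file 1
b = pd.DataFrame({'input_id': [1, 2, 3], 'ID': [1234, 1235, 1236]})      # file 2
print(a.merge(b, how='left', on='input_id'))
</code></pre>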
| -1 |
2016-09-22T21:36:10Z
|
[
"python",
"csv",
"ssis"
] |
Pandas DataFrame slicing based on logical conditions?
| 39,646,300 |
<p>I have this dataframe called data:</p>
<pre><code> Subjects Professor StudentID
8 Chemistry Jane 999
1 Chemistry Jane 3455
0 Chemistry Joseph 1234
2 History Jane 3455
6 History Smith 323
7 History Smith 999
3 Mathematics Doe 56767
10 Mathematics Einstein 3455
5 Physics Einstein 2834
9 Physics Smith 323
4 Physics Smith 999
</code></pre>
<p>I want to run this query "Professors with at least 2 classes with 2 or more of the same students". Desired Output</p>
<pre><code>Smith: Physics, History, 323, 999
</code></pre>
<p>I am familiar with SQL and could have done this easily, but I am still a beginner in Python. How do I achieve this output in Python? Another line of thought is to convert this dataframe into a SQL database and have a SQL interface through Python to run queries. Is there a way to accomplish that?</p>
| 1 |
2016-09-22T18:24:31Z
| 39,647,126 |
<pre><code># True for professors who have at least 2 subjects with 2+ distinct students each
students_and_subjects = df.groupby(
    ['Professor', 'Subjects']
).StudentID.nunique().ge(2) \
    .groupby(level='Professor').sum().ge(2)

# keep only the rows taught by those professors
df[df.Professor.map(students_and_subjects)]
</code></pre>
<p><a href="http://i.stack.imgur.com/NBlPy.png" rel="nofollow"><img src="http://i.stack.imgur.com/NBlPy.png" alt="enter image description here"></a></p>
| 2 |
2016-09-22T19:10:40Z
|
[
"python",
"mysql",
"pandas"
] |
Pandas DataFrame slicing based on logical conditions?
| 39,646,300 |
<p>I have this dataframe called data:</p>
<pre><code> Subjects Professor StudentID
8 Chemistry Jane 999
1 Chemistry Jane 3455
0 Chemistry Joseph 1234
2 History Jane 3455
6 History Smith 323
7 History Smith 999
3 Mathematics Doe 56767
10 Mathematics Einstein 3455
5 Physics Einstein 2834
9 Physics Smith 323
4 Physics Smith 999
</code></pre>
<p>I want to run this query "Professors with at least 2 classes with 2 or more of the same students". Desired Output</p>
<pre><code>Smith: Physics, History, 323, 999
</code></pre>
<p>I am familiar with SQL and could have done this easily, but I am still a beginner in Python. How do I achieve this output in Python? Another line of thought is to convert this dataframe into a SQL database and have a SQL interface through Python to run queries. Is there a way to accomplish that?</p>
| 1 |
2016-09-22T18:24:31Z
| 39,653,653 |
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#filtration" rel="nofollow"><code>filter</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a>:</p>
<pre><code>df1 = df.groupby('Professor').filter(lambda x: (len(x.Subjects) > 1) &
((x.StudentID.value_counts() > 1).sum() > 1))
print (df1)
Subjects Professor StudentID
6 History Smith 323
7 History Smith 999
9 Physics Smith 323
4 Physics Smith 999
</code></pre>
<p>and with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html" rel="nofollow"><code>duplicated</code></a>:</p>
<pre><code>df1 = df.groupby('Professor').filter(lambda x: (len(x.Subjects) > 1) &
(x.StudentID.duplicated().sum() > 1))
print (df1)
Subjects Professor StudentID
6 History Smith 323
7 History Smith 999
9 Physics Smith 323
4 Physics Smith 999
</code></pre>
<p>EDIT by comment:</p>
<p>You can return custom output from custom function and then remove <code>NaN</code> rows by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow"><code>Series.dropna</code></a>:</p>
<pre><code>df.StudentID = df.StudentID.astype(str)
def f(x):
if (len(x.Subjects) > 1) & (x.StudentID.duplicated().sum() > 1):
return ', '.join((x.Subjects.unique().tolist() + x.StudentID.unique().tolist()))
df1 = df.groupby('Professor').apply(f).dropna()
df1 = df1.index.to_series() + ': ' + df1
print (df1)
Professor
Smith Smith: History, Physics, 323, 999
dtype: object
</code></pre>
| 1 |
2016-09-23T06:05:50Z
|
[
"python",
"mysql",
"pandas"
] |
Convert string type array to array
| 39,646,400 |
<p>I have this:</p>
<pre><code>[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]
</code></pre>
<p>How can I turn this into a python array where I can do <code>for items in x:</code> ?</p>
| 0 |
2016-09-22T18:29:42Z
| 39,646,730 |
<pre><code>import re
data = """[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]"""
d = {int(m.group(1)): int(m.group(2)) for m in re.finditer(r"s\[(\d*)\] = (\d*)", data)}
seq = [d.get(x) for x in range(max(d) + 1)]  # + 1 so the highest index (19) is included
print(seq)
#result: [4, 2, 8, 9, 3, 7, 2, 4, 5, 7, 9, 0, 3, 9, 5, 7, 8, 8, 1, 2]
</code></pre>
| 4 |
2016-09-22T18:47:29Z
|
[
"python",
"python-2.7"
] |
Convert string type array to array
| 39,646,400 |
<p>I have this:</p>
<pre><code>[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]
</code></pre>
<p>How can I turn this into a python array where I can do <code>for items in x:</code> ?</p>
| 0 |
2016-09-22T18:29:42Z
| 39,646,814 |
<p>Assuming this is one giant string, if you want <code>print s[0]</code> to print <code>4</code>, then you need to split this up by the commas, then iterate through each item.</p>
<pre><code>inputArray = yourInput[1:-1].replace(' ', '').split(',')
endArray = [0]*20
for item in inputArray:
endArray[int(item[item.index('[')+1:item.index(']')])]= int(item[item.index('=')+1:])
print endArray
</code></pre>
| 1 |
2016-09-22T18:51:37Z
|
[
"python",
"python-2.7"
] |
Convert string type array to array
| 39,646,400 |
<p>I have this:</p>
<pre><code>[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]
</code></pre>
<p>How can I turn this into a python array where I can do <code>for items in x:</code> ?</p>
| 0 |
2016-09-22T18:29:42Z
| 39,646,873 |
<p>An easy way, although less efficient than Kevin's solution (without using regular expressions), would be the following (where <code>some_array</code> is your string):</p>
<pre><code>sub_list = some_array.split(',')
some_dict = {}
for item in sub_list:
sanitized_item = item.strip().rstrip().lstrip().replace('=', ':')
# split item in key val
k = sanitized_item.split(':')[0].strip()
v = sanitized_item.split(':')[1].strip()
if k.startswith('['):
k = k.replace('[', '')
if v.endswith(']'):
v = v.replace(']', '')
some_dict.update({k: int(v)})
print(some_dict)
print(some_dict['s[9]'])
</code></pre>
<p>Output sample:</p>
<pre><code>{'s[5]': 7, 's[16]': 8, 's[0]': 4, 's[9]': 7, 's[2]': 8, 's[3]': 9, 's[10]': 9, 's[15]': 7, 's[6]': 2, 's[7]': 4, 's[14]': 5, 's[19]': 2, 's[17]': 8, 's[4]': 3, 's[12]': 3, 's[11]': 0, 's[13]': 9, 's[18]': 1, 's[1]': 2, 's8]': 5}
7
</code></pre>
| 0 |
2016-09-22T18:55:09Z
|
[
"python",
"python-2.7"
] |
Convert string type array to array
| 39,646,400 |
<p>I have this:</p>
<pre><code>[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]
</code></pre>
<p>How can I turn this into a python array where I can do <code>for items in x:</code> ?</p>
| 0 |
2016-09-22T18:29:42Z
| 39,647,086 |
<p>If you want the array to have the same name that is in the input string, you could use exec. This is not very pythonic, but it works for simple stuff</p>
<pre><code>string = ("[s[8] = 5, s[4] = 3, s[19] = 2,"
"s[17] = 8, s[16] = 8, s[2] = 8,"
"s[9] = 7, s[1] = 2, s[3] = 9,"
"s[15] = 7, s[11] = 0, s[10] = 9,"
"s[12] = 3, s[18] = 1, s[0] = 4,"
"s[14] = 5, s[7] = 4, s[6] = 2,"
"s[5] = 7, s[13] = 9]")
items = [item.rstrip().lstrip() for item in string[1:-1].split(",")]
name = items[0].partition("[")[0]
# Create the array
exec("{} = [None] * {}".format(name, len(items)))
# Populate with the values of the string
for item in items:
    exec(item)
</code></pre>
<p>This will generate an array named "s" and if there is an index missing it will be initialized as None</p>
| 2 |
2016-09-22T19:08:29Z
|
[
"python",
"python-2.7"
] |
How to merge the elements in a list sequentially in python
| 39,646,401 |
<p>I have a list <code>[ 'a' , 'b' , 'c' , 'd']</code>. How do I get the list which joins two letters sequentially, i.e. the output should be <code>[ 'ab', 'bc' , 'cd']</code>, in Python easily instead of manually looping and joining?</p>
| 3 |
2016-09-22T18:29:43Z
| 39,646,555 |
<p>Use <code>zip</code> within a list comprehension:</p>
<pre><code>In [13]: ["".join(seq) for seq in zip(lst, lst[1:])]
Out[13]: ['ab', 'bc', 'cd']
</code></pre>
<p>Or since you just want to concatenate two characters you can also use the <code>add</code> operator, by using <a href="https://docs.python.org/3/library/itertools.html#itertools.starmap" rel="nofollow"><code>itertools.starmap</code></a> in order to apply the add function on character pairs:</p>
<pre><code>In [14]: from operator import add

In [15]: from itertools import starmap

In [16]: list(starmap(add, zip(lst, lst[1:])))
Out[16]: ['ab', 'bc', 'cd']
</code></pre>
| 4 |
2016-09-22T18:38:54Z
|
[
"python",
"python-2.7"
] |
How to merge the elements in a list sequentially in python
| 39,646,401 |
<p>I have a list <code>[ 'a' , 'b' , 'c' , 'd']</code>. How do I get the list which joins two letters sequentially, i.e. the output should be <code>[ 'ab', 'bc' , 'cd']</code>, in Python easily instead of manually looping and joining?</p>
| 3 |
2016-09-22T18:29:43Z
| 39,646,594 |
<p>Just one line of code is enough:</p>
<pre><code>a = ['a','b','c','d']
output = [a[i] + a[i+1] for i in xrange(len(a) - 1)]
print output
</code></pre>
| 0 |
2016-09-22T18:40:49Z
|
[
"python",
"python-2.7"
] |
How to make a function loop through multiple dictionaries in Python
| 39,646,403 |
<p>I am looking to create a function in Python that will move between multiple dictionaries that are within a class. There will be several dictionaries that the user will go through in order, and I want the function to be able to go through each one and select a random pair from the dictionary. The user will then submit an answer or some input, and then the function will move on to the next dictionary, again selecting a random pair.</p>
<p>Short of creating a separate function for each dictionary, I'm not sure how to do this. Any help would be appreciated.</p>
| 1 |
2016-09-22T18:29:44Z
| 39,646,510 |
<p>Well, if you've got a list of dictionaries in the class, something like:</p>
<pre><code>import random
for d in dict_list:
    random_pair = random.choice(list(d.items()))  # wrap in list() so it also works on Python 3
# Then do whatever you were going to do with that pair
# then it goes on to the next dictionary in the list
</code></pre>
<p>If you don't already have all the dictionaries in a list, just make the list before that loop.</p>
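<p>If it helps, here is a rough sketch of what that could look like inside a class. Every name here (<code>Quiz</code>, <code>run</code>, the sample data) is made up purely for illustration:</p>
<pre><code>import random

class Quiz(object):                       # hypothetical class, just to show the shape
    def __init__(self, dictionaries):
        self.dictionaries = dictionaries  # the dictionaries the user goes through in order

    def run(self):
        for d in self.dictionaries:
            prompt, expected = random.choice(list(d.items()))
            answer = raw_input('%s? ' % prompt)   # Python 2; use input() on Python 3
            if answer == expected:
                print 'Correct!'
            else:
                print 'Expected %s' % expected

quiz = Quiz([{'capital of France': 'Paris'}, {'2 + 2': '4'}])
quiz.run()
</code></pre>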
| 4 |
2016-09-22T18:36:19Z
|
[
"python",
"dictionary"
] |
Python Requests vs Curl Requests on Mac Terminal to Fetch Code Content via Github API
| 39,646,573 |
<p>I want to get the decoded content of a code file in a jQuery project on GitHub. If I do a curl request, the returned code content is decoded. </p>
<p>But using the same parameters with Python requests, the content stays base64-encoded. Why is that, and what can I do to get the decoded version? </p>
<p>Here's my curl command:</p>
<pre><code>curl https://api.github.com/repos/jquery/jquery/git/blobs/1d2872e34a809a9469ac5cb149a40fc7b8007633 -H "Accept: application/vnd.github-blob.raw"
</code></pre>
<p>The output is the following: </p>
<pre><code><?php
# Load and run the test suite as a proper XHTML page
header("Content-type: application/xhtml+xml");
readfile("index.html");
?>
</code></pre>
<p>Here's my python code: </p>
<pre><code>import requests
code = requests.get('https://api.github.com/repos/jquery/jquery/git/blobs/1d2872e34a809a9469ac5cb149a40fc7b8007633'\
,headers={'content-type':'application/vnd.github-blob.raw'})
code.json()
</code></pre>
<p>The output is this:</p>
<pre><code>{'content': 'PD9waHAKCSMgTG9hZCBhbmQgcnVuIHRoZSB0ZXN0IHN1aXRlIGFzIGEgcHJv\ncGVyIFhIVE1MIHBhZ2UKCWhlYWRlcigiQ29udGVudC10eXBlOiBhcHBsaWNh\ndGlvbi94aHRtbCt4bWwiKTsKCXJlYWRmaWxlKCJpbmRleC5odG1sIik7Cj8+\nCg==\n',
'encoding': 'base64',
'sha': '1d2872e34a809a9469ac5cb149a40fc7b8007633',
'size': 136,
'url': 'https://api.github.com/repos/jquery/jquery/git/blobs/1d2872e34a809a9469ac5cb149a40fc7b8007633'}
</code></pre>
| 0 |
2016-09-22T18:39:42Z
| 39,646,624 |
<pre><code>>>> import base64
>>> base64.b64decode('PD9waHAKCSMgTG9hZCBhbmQgcnVuIHRoZSB0ZXN0IHN1aXRlIGFzIGEgcHJv\ncGVyIFhIVE1MIHBhZ2UKCWhlYWRlcigiQ29udGVudC10eXBlOiBhcHBsaWNh\ndGlvbi94aHRtbCt4bWwiKTsKCXJlYWRmaWxlKCJpbmRleC5odG1sIik7Cj8+\nCg==')
'<?php\n\t# Load and run the test suite as a proper XHTML page\n\theader("Content-type: application/xhtml+xml");\n\treadfile("index.html");\n?>\n'
>>>
</code></pre>
<p>Alternatively, send the same header that you use with the curl command:</p>
<pre><code>>>> requests.get('https://api.github.com/repos/jquery/jquery/git/blobs/1d2872e34a809a9469ac5cb149a40fc7b8007633',headers={"Accept": "application/vnd.github-blob.raw"}).text
u'<?php\n\t# Load and run the test suite as a proper XHTML page\n\theader("Content-type: application/xhtml+xml");\n\treadfile("index.html");\n?>\n'
>>>
</code></pre>
<p>Notice the key is "Accept", not "content-type".</p>
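<p>Putting both approaches together in one small script (same blob URL as above):</p>
<pre><code>import base64
import requests

url = ('https://api.github.com/repos/jquery/jquery/git/blobs/'
       '1d2872e34a809a9469ac5cb149a40fc7b8007633')

# ask GitHub for the raw blob directly
raw = requests.get(url, headers={'Accept': 'application/vnd.github-blob.raw'}).text

# or take the default JSON response and base64-decode the payload yourself
blob = requests.get(url).json()
decoded = base64.b64decode(blob['content'])

print(raw)
print(decoded)
</code></pre>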
| 2 |
2016-09-22T18:42:41Z
|
[
"python",
"curl",
"github"
] |